Here is Ira's challenge: "Can a mission statement be brief, amorphic and semantically mutable: i.e. 'We do research across all disciplines on the impact of interactive technology. (Come play with us!).'"  Also, he wants us to say what we've got that makes us distinctive.

What's distinctive about us?  We can cross disciplines more easily at Miami: "we've got interdisciplinarity." I wanted to transform "interdisciplinarity" or "interactivity / responsiveness / implicatedness among disciplines" from a noun into a verb: "Our Research Mission as transdisciplinary faculty is to explore the implications of our respective fields for each other's fields insofar as their disciplinary processes occur in interactive media."  It's still ugly, and that means something.  How about "Our research mission is to implicate our fields in each other's work through interactive media."  Oiy, what does that mean?

Ira's second challenge: "What if your field is interactive media?"  Ira's proposed revision: "IMS Research is a very, very, very dynamic, sloppy, collaborative, confusing, alogical, passionate, playful, exploratory, terribly exciting and (hopefully) insanely irreverent space. Come play with us."

At Ian's birthday party on Saturday, Robin and I forgot our camera (thanks to Glenn P. for a save here). It caused me to reflect some on the obsessive need to document our lives and then "replay" them. It's almost as if, without the documentation, we can't be sure an event occurred or how we felt during it. I felt guilt and sadness for forgetting the camera; this would be time permanently lost. This idea of capturing time is obviously not new, but we are now capturing far more time (as a culture) than actually passes. Thus, it would take more than a lifetime to (re)view a lifetime. And this hyper-documentation is exponentially increasing, expanding the present and even creating history in the moment.
I don't know what we're going to do with all this time, since unfortunately there is both too little and too much at the same time. Perhaps we need time munchers–little bots that roam around and eat time, but only the time poorly spent. This way our replayed histories could be even more glorious.

This is from a letter I sent to the faculty of the Interactive Media Studies program at Miami, regarding the development of an IMS research mission statement.

A point that I would like to see stressed in the mission is the deeper, fundamental impact that computation brings to media studies. I think it is easy to lose the forest for the trees here, aided in large part by the software industry. The tendency, both in and out of academia, is to see computation (the glass-half-full view) as a facilitating and democratizing tool/force, which in itself is not a bad thing. However, I think this somewhat superficial perspective misses the much more significant potential of computation as a distinctive medium and even alternative "intelligence"*. The tool/force perspective relies on an industrial-age paradigm: technology enhances, frees, empowers, etc. Computation fits neatly within this continuum as another incremental step toward full automation. Again, this is a valid and useful signification. However, it also seems to me overly egocentric: the individual remains in control of the machine, and ideally it serves his/her wishes (ultimately completely).

In contrast, computation can be a much less agreeable and cooperative agent. As a tool, it is arguably highly inefficient. Consider the actual costs of system development, operations, training, deployment, and maintenance vis-à-vis work productivity. Of course, current human demand for "toys" makes these numbers work, but if we try to separate the fulfillment of actual human needs from wants, I wonder how productive computer technology really is (yeah, yeah, I know this is wimpy lefty thinking). Considering computation as a medium offers a significant break from the older productivity model. As a medium, computation offers universal mutability; it can model/process/analyze/generate visual, aural, tactile, kinetic, textual, etc. data–it can take the form of (perhaps) everything. Thus, when we segment into digital media, digital humanities, etc., we are expressing a bias based on older disciplinary boundaries rather than any limit inherent within the medium. This is something to consider seriously. And filtering further into digital video, 3D, multimedia, etc. seems even more problematic.

A current problem is how to get a literal grip on/in/around the medium. The software industry has stepped in to categorize/granularize the medium for us, and make a whole lot of moola in the process. They have been very effective in confusing the mechanism for the medium. Epistemology is not a high priority in the engineering process, so our software tools don't ask why, only how, and we keep buying up the stuff, even if most of us never use 9/10ths of the features in these bloated tools, yet we dutifully upgrade every cycle. I would argue that to stop this cycle and get a "grip" on the medium we need greater fluency in the actual computation medium–not simply facility in manipulating commercial software applications. And this is best achieved through developing programming literacy. I believe IMS should be at the forefront of this–not to train computer scientists, but rather to provide essential education. If we want our students to be able to parse, interpret, analyze, etc., shouldn't they have that proficiency in perhaps the single most dominant and controlling medium in their lives? And obviously I think IMS research should blaze a path in this area. Let me stress again that this is not about low-level computer-science-based research, but rather fluency in the computation medium and work/research that reflects this fluency and, hopefully, helps define our emerging field.

* I'll offer some additional half-baked thoughts on "alternative intelligence" in a future post.

There is a new book by Nancy Armstrong called How Novels Think.  It's brilliant, congruent with recent work by Andrew Elfenbein (in PMLA and elsewhere) which discusses print presentation, the look and feel of early 19th-c texts, as "interface."  Armstrong's premise is that, since novels do a certain amount of thinking for us, they are bundles of smart data.   Novelistic conventions, then, are basically a software package for making information smart.  The really brilliant piece of her argument (it may be obvious, but I still think it is brilliant) is her idea that software packages and data bundles in-form: they form the inside of us — our psyches, our selves — as a means and effect of giving us information.

Armstrong's argument really helps me understand something that John Maeda is worried about in thinking about the computer as the artist's material.  In Creative Code, he says that he is worried that software is becoming too complex for people to use as a tool (intuitively, without laboriously reading manuals), while programming is becoming easier at the expense of creativity.  I can really understand what he's saying here if I think about software as a set of conventions for a specific type of novel — historical romance or gothic fiction, e.g. — and about the programmers of this software as the artists who come up with new genres, new forms, usable by many other very creative people.  Here is Maeda expressing his worry:

Programming tools are increasingly oriented toward fill-in-the-blank approaches of the construction of code . . . . The experience of using the latest software, meanwhile, has made even expert users less likely to dispose of their manuals, as the operation of the tools is no longer self-evident.  Can we, therefore, envision a future where software tools are coded less creatively [i.e., a future of impoverished novelistic genres]? Furthermore, will it someday be the case that tools are so complex that they become an obstacle to free-flowing creativity [i.e., that you can't churn out gothic or sci fi]?

Maeda’s own software “Illustrandom” seems to me a beautiful example of something that takes complicated rather than fill-in-the-blank programming and renders software that is pretty intuitive and so will allow creativity to flow.

Also, is it possible to discuss some of Ira's work, Protobytes, as the kind of work that intervenes in Maeda's problematic?  Ira, you said that you used bits of code, without thinking about it, as a painter might use brush strokes, throwing up bits of it and then seeing what happened?

We celebrated the "birth" of CHAT at a party yesterday: it involved a cake, singing, sliding on water (not ice this time).  Officiating were Glenn, Bettina, Ira, and me.  Appropriate to the theme of "birth," I had invited only those CHAT members and affiliates with children young enough to be infatuated with a water slide.  Unfortunately, that meant that we were without poets and narrativists.  A video of the event marking CHAT's birth is coming.

All the skating and sliding suggests that there exists an as yet unarticulated "rule" for CHATting: it must somehow involve speed, glide, fundamental elements, and happiness.  Intellectual propulsion will feel effortless but will in fact involve a very skillful manipulation of surface friction.  Have I just described an MMORPG, and could a CHAT tool, or in fact our blog, work as an MMORPG?

We discussed widening the scope of collaborators, trying to get natural scientists involved: any ideas about fostering collaborations of any sort, whether among HA, or among H, A, and NS?

After another highly successful skating CHAT session (YES! you can/should/better join us) Laura and I had a meeting with particle physicist turned coder Dave W about the development of a poetry visualization tool. Laura presented her exciting vision for the tool, and Dave and I discussed how we could help Laura build it.

Dave will handle the back-end database component, and I'll try to tackle the front-end graphical stuff. The visualization will be a dynamically generated 3D plot of user-selected data fields. For example, a user may select a list of poems based on a certain time period, meter structure, theme, etc. The tool will plot the results as a series of relational nodes in 3D space, with the different axes and node types representing the relevant metrics. In addition, users will be able to specify style characteristics for nodes as well as save images of the visualizations. We'll soon be shaking the grant trees for funding (ideas/cash welcome) and developing a prototype.
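
To make that idea a little more concrete, here is a minimal sketch of the node-plot concept (the poem fields and values below are invented placeholders, not Laura's data or the eventual tool): each poem becomes a node positioned by three metrics inside a bounding cube, with lines standing in crudely for relational links.

// a hypothetical node-plot sketch (paste into Processing to run)
// poem "records": {year, line count, metrical regularity 0-100}; all values are made up
float[][] poems = {
  {1798, 14, 90}, {1807, 24, 70}, {1820, 42, 55}, {1850, 14, 85}, {1865, 30, 40}
};

void setup(){
  size(400, 400, P3D);
}

void draw(){
  background(30);
  translate(width/2, height/2, -200);
  rotateY(frameCount*0.01);
  stroke(120);
  noFill();
  box(300); // plot bounds
  for (int i=0; i<poems.length; i++){
    // map each field onto an axis
    float x = map(poems[i][0], 1790, 1870, -150, 150); // year
    float y = map(poems[i][1], 0, 50, 150, -150);      // line count
    float z = map(poems[i][2], 0, 100, -150, 150);     // meter
    pushMatrix();
    translate(x, y, z);
    noStroke();
    fill(200, 100, 100);
    box(8); // the poem node
    popMatrix();
    // connect consecutive nodes as a stand-in for relational links
    if (i > 0){
      float px = map(poems[i-1][0], 1790, 1870, -150, 150);
      float py = map(poems[i-1][1], 0, 50, 150, -150);
      float pz = map(poems[i-1][2], 0, 100, -150, 150);
      stroke(80);
      line(px, py, pz, x, y, z);
    }
  }
}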

Inspired by Laura's skating and generative prowess, I created a little code piece in the spirit of her vis tool (OK, it will also be an example in my book). As usual, paste the code below into Processing and run the dang thing. If the animation runs too slowly, try lowering the number of cubies (int cubies = 150;).

// Paste the code below into Processing

Cube stage; // external large cube
int cubies = 150;
Cube[]c = new Cube[cubies]; // internal little cubes
color[][]quadBG = new color[cubies][6];

// controls cubie's movement
float[]x = new float[cubies];
float[]y = new float[cubies];
float[]z = new float[cubies];
float[]xSpeed = new float[cubies];
float[]ySpeed = new float[cubies];
float[]zSpeed = new float[cubies];

// controls cubie's rotation
float[]xRot = new float[cubies];
float[]yRot = new float[cubies];
float[]zRot = new float[cubies];

// size of external cube
float bounds = 300;

void setup(){
  size(400, 400, P3D);
  frameRate(30);
  for (int i=0; i<cubies; i++){
    // each cube face has a random color component
    float colorShift = random(-75, 75);
    quadBG[i][0] = color(175+colorShift, 30, 30);
    quadBG[i][1] = color(30, 175+colorShift, 30);
    quadBG[i][2] = color(30, 30, 175+colorShift);
    quadBG[i][3] = color(175+colorShift, 175+colorShift, 30);
    quadBG[i][4] = color(175+colorShift, 30, 175+colorShift);
    quadBG[i][5] = color(175+colorShift, 87+colorShift, 30);

    // cubies are randomly sized
    float cubieSize = random(5, 10);
    c[i] =  new Cube(cubieSize, cubieSize, cubieSize);

    //initialize cubie's position, speed and rotation
    x[i] = 0;
    y[i] = 0;
    z[i] = 0;

    xSpeed[i] = random(-2, 2);
    ySpeed[i] = random(-2, 2);
    zSpeed[i] = random(-2, 2);

    xRot[i] = random(40, 100);
    yRot[i] = random(40, 100);
    zRot[i] = random(40, 100);
  }
  // instantiate external large cube
  stage =  new Cube(300, 300, 300);
}

void draw(){
  background(50);
  // center in display window
  translate(width/2, height/2, -130);
  // outer transparent cube
  noFill();
  // rotate everything, including external large cube
  rotateX(frameCount*PI/225);
  rotateY(frameCount*PI/250);
  rotateZ(frameCount*PI/275);
  stroke(255);
  // draw external large cube
  stage.create();
 
  //move/rotate cubies
  for (int i=0; i<cubies; i++){
    pushMatrix();
    translate(x[i], y[i], z[i]);
    rotateX(frameCount*PI/xRot[i]);
    rotateY(frameCount*PI/yRot[i]);
    rotateZ(frameCount*PI/zRot[i]);
    noStroke();
    c[i].create(quadBG[i]);
    x[i]+=xSpeed[i];
    y[i]+=ySpeed[i];
    z[i]+=zSpeed[i];
    popMatrix();

    // draw lines connecting cubies
    stroke(35);
    if (i< cubies-1){
      line(x[i], y[i], z[i], x[i+1], y[i+1], z[i+1]);
    }

    // check wall collisions
    if (x[i]>bounds/2 || x[i]<-bounds/2){
      xSpeed[i]*=-1;
    }
    if (y[i]>bounds/2 || y[i]<-bounds/2){
      ySpeed[i]*=-1;
    }
    if (z[i]>bounds/2 || z[i]<-bounds/2){
      zSpeed[i]*=-1;
    }
  }

}

/*
Extremely simple  class to
 hold each 3D vertex
 */
class Point3D{

  float x, y, z;

  // constructors
  Point3D(){
  }

  Point3D(float x, float y, float z){
    this.x = x;
    this.y = y;
    this.z = z;
  }
}

/* custom Cube class
slightly cooler than Processing's
box() function */
class Cube{
  Point3D[] vertices = new Point3D[24];
  float w, h, d;

  //constructor
  Cube(float w, float h, float d){
    this.w = w;
    this.h = h;
    this.d = d;

    // cube composed of 6 quads
    //front
    vertices[0] = new Point3D(-w/2,-h/2,d/2);
    vertices[1] = new Point3D(w/2,-h/2,d/2);
    vertices[2] = new Point3D(w/2,h/2,d/2);
    vertices[3] = new Point3D(-w/2,h/2,d/2);
    //left
    vertices[4] = new Point3D(-w/2,-h/2,d/2);
    vertices[5] = new Point3D(-w/2,-h/2,-d/2);
    vertices[6] = new Point3D(-w/2,h/2,-d/2);
    vertices[7] = new Point3D(-w/2,h/2,d/2);
    //right
    vertices[8] = new Point3D(w/2,-h/2,d/2);
    vertices[9] = new Point3D(w/2,-h/2,-d/2);
    vertices[10] = new Point3D(w/2,h/2,-d/2);
    vertices[11] = new Point3D(w/2,h/2,d/2);
    //back
    vertices[12] = new Point3D(-w/2,-h/2,-d/2);
    vertices[13] = new Point3D(w/2,-h/2,-d/2);
    vertices[14] = new Point3D(w/2,h/2,-d/2);
    vertices[15] = new Point3D(-w/2,h/2,-d/2);
    //top
    vertices[16] = new Point3D(-w/2,-h/2,d/2);
    vertices[17] = new Point3D(-w/2,-h/2,-d/2);
    vertices[18] = new Point3D(w/2,-h/2,-d/2);
    vertices[19] = new Point3D(w/2,-h/2,d/2);
    //bottom
    vertices[20] = new Point3D(-w/2,h/2,d/2);
    vertices[21] = new Point3D(-w/2,h/2,-d/2);
    vertices[22] = new Point3D(w/2,h/2,-d/2);
    vertices[23] = new Point3D(w/2,h/2,d/2);
  }
  void create(){
    // draw cube
    for (int i=0; i<6; i++){
      beginShape(QUADS);
      for (int j=0; j<4; j++){
        vertex(vertices[j+4*i].x, vertices[j+4*i].y, vertices[j+4*i].z);
      }
      endShape();
    }
  }
  void create(color[]quadBG){
    // draw cube
    for (int i=0; i<6; i++){
      fill(quadBG[i]);
      beginShape(QUADS);
      for (int j=0; j<4; j++){
        vertex(vertices[j+4*i].x, vertices[j+4*i].y, vertices[j+4*i].z);
      }
      endShape();
    }
  }
}

Engaged in our weekly frictionless dialogue, Laura helped me to see just how utterly confused I am, all while performing her greatly improving cross-overs. Fortunately, the elliptical frozen surface kept us from getting too lost. We tripped over art, text, code, beauty, kitsch, courage–freezing but not once hitting our asses on the ice.

The (pretty bad) poem below will execute in Processing. I tried (with my very limited capabilities) to illustrate an example of what I'm calling "supertext", where the source code is semantically coded and also executable. I'd be very happy to collaborate with someone more literate than myself on this. I could probably sling the code and conceptualize some visuals, if you could handle the wordsmithing.

// paste everything below into Processing and hit the run arrow, or press cmd + r (OS X) / ctrl + r (Windows)

/* hunger
Ira Greenberg
original "puff" code October 22, 2005
revised "hunger" May 23, 2006
*/

// head of the beast
float heaving;
float ascension;
float anxiety = .7;
float hope = .9;
int darkness = 0;
int heavens;
int dirt;

// body of the beast
int flesh = 2000;
float[]guts= new float[flesh];
float[]blood= new float[flesh];
float[]girth = new float[flesh];
float[]heft = new float[flesh];
float[]fate = new float[flesh];
float[]compulsivity = new float[flesh];
float[]tenderness = new float[flesh];
color[]phlegm = new color[flesh];

void setup(){
  size(400, 400);
  heavens = width;
  dirt = height;
  background(255);
  noStroke();
  // begin in the center
  heaving = heavens/2;
  ascension = dirt/2;

  // fill body
  for (int i=0; i< flesh; i++){
    girth[i] = random(-7, 7);
    heft[i] = random(-4, 4);
    compulsivity[i] = random(-9, 9);
    tenderness[i] = random(16, 40);
    phlegm[i] = color(255, 50+random(-70, 70), 30, 3);
  }
  frameRate(30);
}

void draw(){
  background(darkness);

  // purpose
  for (int i=0; i< flesh; i++){
    fill(phlegm[i]);
    if (i==0){
      guts[i] = heaving+sin(radians(fate[i]))*girth[i];
      blood[i] = ascension+cos(radians(fate[i]))*heft[i];
    }
    else{
      guts[i] = guts[i-1]+cos(radians(fate[i]))*girth[i];
      blood[i] = blood[i-1]+sin(radians(fate[i]))*heft[i];

      // wrenching
      if (guts[i] >= heavens-tenderness[i]/2 || guts[i] <= tenderness[i]/2){
        girth[i]*=-1;
        tenderness[i] = random(1, 40);
        compulsivity[i]= random(-13, 13);
      }
      if (blood[i] >= dirt-tenderness[i]/2 || blood[i] <= tenderness[i]/2){
        heft[i]*=-1;
        tenderness[i] = random(1, 40);
        compulsivity[i]= random(-9, 9);
      }
    }
    // creation
    ellipse(guts[i], blood[i], tenderness[i], tenderness[i]);
    // divinity
    fate[i]+=compulsivity[i];
  }

  // mind wandering
  heaving+=anxiety;
  ascension+=hope;

  // hope's edge
  if (heaving >= heavens-tenderness[0]/2 || heaving <= tenderness[0]/2){
    anxiety*=-1;
  }
  if (ascension >= dirt-tenderness[0]/2 || ascension <= tenderness[0]/2){
    hope*=-1;
  }
}

Wow!!! “Phatic”, “zeugma”, “syllepsis”. Laura obviously spent her time in Ithaca much more productively than I did (too much turpentine sniffing.) And she even does her homework!

“Linguistic power comes not just from the connotative dimension but also from its performativity…performativity is unlimited, dependent upon uptake and context, but those aren’t extraneous exactly – they can be coded in the linguistic production itself…”

I’m not sure I fully understand “performativity”. My point is not that natural language is limited in its potential to describe, express, etc. Obviously, rich, complex worlds have been built in words. But, in comparison to mathematical language, these worlds are fuzzy (in a good way). When we say or write anything, I don’t believe the signification can ever be fully known. However, the expression 2 + 2 = 4 can (perhaps) never be unknown. The former is dynamic and mutable, the latter static and immutable. I am not passing a value judgment on either of these systems. We can, of course, as Laura suggested, code in more context, but as we add specificity we simply approach the infinite (Zeno’s paradox).

“While it is true that in English you can say "I love my skates" and "I love my mother," it really only seems to be the case (or is only true of syntactic rules) that the verb "love" doesn't have a declared datatype for its object.”

This is precisely what I think I was trying to say. The concept of a declared (immutable) datatype is foreign to natural language, right? We can use other explicit structures to build a context of meaning, but ultimately any datatype abstraction needs to be subordinate to a dynamic emergence; language needs elbow room. This is what I meant by “semantic expansion”. From a coding perspective, datatypes (classes) in object-oriented programming are static constructs that enforce encapsulation and contractual communication. In a pure OOP system, everything would be an object (based on a datatype). “Love” would be forced to choose its type, although, through inheritance, the possibility also exists for “Love” to be of multiple types. Regardless, some discrete datatype(s)/object binding is required*.
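
A quick sketch of that constraint in Processing's Java-style syntax (Emotion, Attachment, and Love are invented names, not from any real library): "Love" must declare its datatype(s) up front, though implementing multiple interfaces lets it be more than one thing at once.

// invented types, for illustration only (paste into Processing to run)
interface Emotion { String intensity(); }
interface Attachment { String boundTo(); }

// "Love" must commit to its datatype(s) at compile time
class Love implements Emotion, Attachment {
  public String intensity(){ return "boundless"; }
  public String boundTo(){ return "skates, mother, or otherwise undeclared"; }
}

void setup(){
  Love love = new Love();
  Emotion e = love;    // Love treated as an Emotion
  Attachment a = love; // the same Love treated as an Attachment
  println(e.intensity() + " / " + a.boundTo());
}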

“Arthur Quinn says that ‘the simplest definition of a figure of speech is an intended deviation from ordinary usage,’ an intentional mistake, and that's what your ‘I love my skates, and I love my mother’ (I'm rewriting it to make a point) would be if they appeared in the same sentence. The sentence is a specific kind of mistake…”

A discussion on the notion of “mistake” would make another worthy post (if anyone’s sitting on the sidelines ready to jump in). I might argue (probably very foolishly) that there are ONLY mistakes in natural language and no mistakes in mathematical language. When I taught painting (prior to selling out), I described painting as a series of near misses. I guess I’m thinking of mistake as deviation from intention. Thus every human gesture is a small (or larger) mistake. Mathematically we could prove this, referring back to Zeno, but that would be damn boring. In math, until something is proven, it remains unproven; there is no figure-of-speech territory. Coding does offer some mistake territory, as I tried to illustrate with my fuzzy polygon program, based on random number generation.
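
For anyone who hasn’t run it, something in the spirit of the sketch below is what I mean (a reconstruction, not the original assignment code): the pentagon is the intention, and random() supplies the small deviations that keep every frame a near miss.

// a fuzzy polygon (paste into Processing to run); a reconstruction in the spirit of the exercise
int sides = 5;
float radius = 120;
float fuzz = 8; // maximum random deviation per vertex

void setup(){
  size(400, 400);
  frameRate(30);
}

void draw(){
  background(255);
  translate(width/2, height/2);
  noFill();
  stroke(0, 60);
  beginShape();
  for (int i=0; i<sides; i++){
    float theta = TWO_PI*i/sides;
    // each vertex is a small "mistake": a deviation from the intended position
    float x = cos(theta)*radius + random(-fuzz, fuzz);
    float y = sin(theta)*radius + random(-fuzz, fuzz);
    vertex(x, y);
  }
  endShape(CLOSE);
}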

“Barthes's S/Z is really a program that codes Balzac's short story "Sarrasine." That text demonstrates that the program for generating the story — really the program for generating any natural sentence in all its connotative and performative grandeur — would have to be so much longer than the sentence or story itself, and I'm not sure any of it would ever be generalizable to other sentences or stories, which is why such coding would be a worthless endeavor, as was my attempt to write an XSL transform to write Wordsworth's poem "A Slumber Did My Spirit Seal."

This last point I agree with. Using code as a mimetic or transformative tool is usually more work than it’s worth. However, using code as a primary generative medium offers unique and fresh possibilities, outside the domains of natural and mathematical languages. Because code has access to the rigid precision of mathematical language and the narrative fuzziness of natural language, it offers (I still think) possibilities for a new (whole-brain) integration, especially needed at our esteemed (disciplinarily biased) institutions of higher learning.

* Some languages such as Java rely on late binding, allowing objects to be bound to datatypes dynamically at runtime. This approach supports polymorphism, promoting a high level of object abstraction.
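
A tiny illustration of that late binding (Verb and LoveVerb are invented names): the declared type of the variable is fixed, but the method that actually runs is chosen by the object's runtime type.

// minimal sketch of late binding / dynamic dispatch (paste into Processing to run)
class Verb {
  String meaning(){ return "an action, object unspecified"; }
}

class LoveVerb extends Verb {
  String meaning(){ return "to love (whatever the object turns out to be)"; }
}

void setup(){
  Verb v = new LoveVerb(); // declared type: Verb; runtime type: LoveVerb
  println(v.meaning());    // the LoveVerb version runs, resolved at runtime
}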

. . . first, as assigned by John.  I built a game / interactive fiction in Inform 7, and you can see the results here.

Second, Ira's assignment: I generated the triangle, the polygon, and the fuzzy polygon, which IS beautiful.  But I can't somehow get from it to language, partly because I'm stuck on some of the things you are saying about language.  I want to lay out my thinking about the differences / similarities between natural language and code, based on your posting.

Linguistic power comes not just from the connotative dimension but also from its performativity.  I say "I love you" to different people to whom it means different things, but I also do different things when I say it: it can serve a phatic function, express an obsession, enact insecurity, compensate, even wound somebody — performativity is unlimited, dependent upon uptake and context, but those aren’t extraneous exactly – they can be coded in the linguistic production itself (they are, in the hands of Jane Austen, e.g., incredibly clear).

While it is true that in English you can say "I love my skates" and "I love my mother," it really only seems to be the case (or is only true of syntactic rules) that the verb "love" doesn't have a declared datatype for its object.  Arthur Quinn says that "the simplest definition of a figure of speech is 'an intended deviation from ordinary usage,'" an intentional mistake, and that's what your "I love my skates, and my mother" (I'm rewriting it to make a point) would be if they appeared in the same sentence.  The sentence is a specific kind of mistake — often labeled zeugma, but it's really "syllepsis," I think, and the most famous example of it is Alexander Pope's line about Belinda, who is in danger: Belinda may either "stain her honour, or her new brocade."  That mistake is funny because it violates rules of decorum (I'm not sure whether they are rules about connotation or rules about performance).  The performative effect, however, is to make us think about Belinda — she is clearly a ninny, someone for whom staining a dress and losing her chastity are acts of the same magnitude.  And your sentence "I love my skates, and my mother" similarly tells us something about you, which you of course recognize with the parentheses and the wink!

Rules can be written to express the performative effect: you could, I sincerely believe, make a Jane Austen game (a game about psychological realism).  If "skates" were entered into the game coded "thing [datatype] lovedObject [variableName]" while "Mom" were coded "person lovedObject," your program wouldn't ever substitute skates for Mom, or would do so only if you called a function "syllepsis."  Language is only baggier than code if you don't take into account all that it is doing at any given moment, all of which can be coded.

Barthes's S/Z is really a program that codes Balzac's short story "Sarrasine."  That text demonstrates that the program for generating the story — really the program for generating any natural sentence in all its connotative and performative grandeur — would have to be so much longer than the sentence or story itself, and I'm not sure any of it would ever be generalizable to other sentences or stories, which is why such coding would be a worthless endeavor, as was my attempt to write an XSL transform to write Wordsworth's poem "A Slumber Did My Spirit Seal."
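
Here is a rough guess at what that might look like in Processing's Java-style syntax (Person, Thing, and syllepsis() are my invented stand-ins for "person lovedObject" and "thing lovedObject," not code from Inform 7 or any actual game engine): the type checker refuses the substitution unless the deviation is named and called deliberately.

// invented stand-ins for typed lovedObjects (paste into Processing to run)
class Person {
  String name;
  Person(String name){ this.name = name; }
}

class Thing {
  String label;
  Thing(String label){ this.label = label; }
}

// ordinary usage: "love" declares the datatype of its object
void love(Person lovedObject){
  println("I love " + lovedObject.name + ".");
}

void love(Thing lovedObject){
  println("I love my " + lovedObject.label + ".");
}

// the intentional mistake has to be asked for by name
void syllepsis(Thing t, Person p){
  println("I love my " + t.label + ", and my " + p.name + ".");
}

void setup(){
  Person mom = new Person("Mom");
  Thing skates = new Thing("skates");
  love(mom);
  love(skates);
  // the compiler will never substitute skates for Mom; mixing them requires the named deviation
  syllepsis(skates, mom); // the figure of speech, called deliberately
}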