
Monthly Archives: June 2006

I was reading the section of Ira’s book, Foundation Processing, called “Motion.” In it, he says:

In many ways, computer monitors are animation machines, as the pixels being continuously refreshed on the screen, at your monitor’s refresh rate, are a form of animation (not a terribly exciting one). This fact that animation can happen in front of your eyes and be wholly undetectable is significant.

I hadn’t really understood, not REALLY, what was at stake in “flickering signifiers” (N. Katherine Hayles, “Virtual Bodies and Flickering Signifiers,” Electronic Culture, ed. Timothy Druckrey). But this passage helped me make sense of a phenomenon I encountered a while ago at the library. I’m not sure it’s still true, but when King Library first put up surveillance cameras, I remember looking at their monitors and seeing the computer screens just behind me. They looked like TV screens used to look late at night, after stations went off the air: lines cycling up over and over again, the quickly moving lines all that could be seen on a gray screen. Then I would turn around, look at the library’s computer screens with my own eyes, and see no lines, no cycling, a perfectly normal desktop picture. I’d turn back to the video camera, and the cycling would still be there. Things are moving all over the place on these screens, but our eyes distill them into still images.
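The rolling lines have a plain mechanical explanation: the camera samples the screen at its own frame rate, so each captured frame catches the monitor’s refresh scan at a different phase, while the eye integrates light continuously and sees a stable image. A minimal sketch of the arithmetic in Python, with hypothetical rates (a 72 Hz CRT filmed by a 30 fps camera; the library’s actual equipment is of course unknown):

```python
# A toy model of the beat between a monitor's refresh rate and a
# camera's frame rate. Both rates are assumptions for illustration.
REFRESH_HZ = 72.0   # hypothetical CRT refresh rate
CAMERA_FPS = 30.0   # hypothetical camera frame rate

def band_position(frame: int) -> float:
    """How far down the screen (0.0-1.0) the refresh scan has gotten
    at the instant the camera captures frame number `frame`."""
    return (frame * REFRESH_HZ / CAMERA_FPS) % 1.0

# 72 / 30 = 2.4 refreshes per camera frame, so the scan line lands 0.4 of
# a screen lower in each successive captured frame: the camera records a
# rolling band, while an eye integrating continuously sees a steady desktop.
positions = [round(band_position(n), 3) for n in range(5)]
print(positions)  # [0.0, 0.4, 0.8, 0.2, 0.6]
```

When the two rates divide evenly the band stands still and even the camera sees a calm screen; the movement the camera records is not on the screen at all but in the mismatch between two clocks.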

I at first posted about how this idea surfaces in sci fi (and poetry).  More interesting — Ira says “significant,” but it could also be sinister — is to think about the effects of distilling moving, flickering, animated images into still images.  Is anything subtracted from our view in the process of distillation?  Is the process completely “natural,” or informed by ideology, by habits of seeing? It strikes me that the eye’s propensity to render moving images still is very much like the capitalist’s desire to render relationships as commodities.  Not all reduction is bad — some of it is essential to sanity, if nothing else.  But does the eye’s desire to reduce movement to stillness affect our ideas of beauty or even promote the use of Ritalin for ADHD? (sorry for such a huge leap)  Counteracting the sinister is why it is so important to teach code; knowing a bit of it enables looking under the hood.

Here is Ira's challenge: "Can a mission statement be brief, amorphic and semantically mutable: i.e. 'We do research across all disciplines on the impact of interactive technology. (Come play with us!).'"  Also, he wants us to say what we've got that makes us distinctive.

What's distinctive about us?  We can cross disciplines more easily at Miami: "we've got interdisciplinarity." I wanted to transform "interdisciplinarity" or "interactivity / responsiveness / implicatedness among disciplines" from a noun into a verb: "Our Research Mission as transdisciplinary faculty is to explore the implications of our respective fields for each other's fields insofar as their disciplinary processes occur in interactive media."  It's still ugly, and that means something.  How about "Our research mission is to implicate our fields in each other's work through interactive media."  Oiy, what does that mean?

Ira's second challenge: "What if your field is interactive media?"  Ira's proposed revision: "IMS Research is a very, very, very dynamic, sloppy, collaborative, confusing, alogical, passionate, playful, exploratory, terribly exciting and (hopefully) insanely irreverent space. Come play with us."

At Ian's birthday party on Saturday, Robin and I forgot our camera. (Thanks to Glenn P. for a save here.) It caused me to reflect on the obsessive need to document our lives and then "replay" them. It's almost as if, without the documentation, we can't be sure an event occurred or how we felt during it. I felt guilt and sadness for forgetting the camera; this would be time permanently lost. The idea of capturing time is obviously not new, but we are now capturing far more time (as a culture) than actually passes. Thus, it would take more than a lifetime to (re)view a lifetime. And this hyper-documentation is increasing exponentially, expanding the present and even creating history in the moment.
I don't know what we're going to do with all this time, since unfortunately there is both too little and too much at the same time. Perhaps we need time munchers: little bots that roam around and eat time, but only the time poorly spent. This way our replayed histories could be even more glorious.

This is from a letter I sent to the faculty of the Interactive Media Studies program at Miami, regarding the development of an IMS research mission statement.

A point that I would like to see stressed in the mission is the deeper, fundamental impact that computation brings to media studies. I think it is easy to lose the forest for the trees here, aided in large part by the software industry. The tendency, both in and out of academia, is to see computation (the glass-half-full view) as a facilitating and democratizing tool/force, which in itself is not a bad thing. However, this somewhat superficial perspective misses, I think, the much more significant potential of computation as a distinctive medium and even an alternative "intelligence."* The tool/force perspective relies on an industrial-age paradigm: technology enhances, frees, empowers, etc. Computation fits neatly within this continuum as another incremental step toward full automation. Again, this is a valid and useful signification. However, it also seems to me overly egocentric: the individual remains in control of the machine, and ideally it serves his/her wishes (ultimately completely).

In contrast, computation can be a much less agreeable and cooperative agent. As a tool, it is arguably highly inefficient. Consider the actual costs of system development, operations, training, deployment, and maintenance vis-à-vis work productivity. Of course, current human demand for "toys" makes these numbers work, but if we try to separate the fulfillment of actual human needs from wants, I wonder how productive computer technology really is (yeah, yeah, I know this is wimpy lefty thinking). Considering computation as a medium offers a significant break from the older productivity model. As a medium, computation offers universal mutability; it can model/process/analyze/generate visual, aural, tactile, kinetic, textual, etc. data: it can take the form of (perhaps) everything. Thus, when we segment into digital media, digital humanities, etc., we are expressing a bias based on older disciplinary boundaries rather than any limit inherent in the medium. This is something to consider seriously. And filtering further into digital video, 3D, multimedia, etc. seems highly problematic.

A current problem is how to get a literal grip on/in/around the medium. The software industry has stepped in to categorize/granularize the medium for us, and to make a whole lot of moola in the process. It has been very effective in confusing the mechanism for the medium. Epistemology is not a high priority in the engineering process, so our software tools don't ask why, only how, and we keep buying up the stuff, even though most of us never use nine-tenths of the features in these bloated tools; yet we dutifully upgrade every cycle. I would argue that to stop this cycle and get a "grip" on the medium, we need greater fluency in the actual computation medium, not simply facility in manipulating commercial software applications. And this is best achieved through developing programming literacy. I believe IMS should be at the forefront of this: not to train computer scientists, but rather to provide essential education. If we want our students to be able to parse, interpret, analyze, etc., shouldn't they have that proficiency in perhaps the single most dominant and controlling medium in their lives? And obviously I think IMS research should blaze a path in this area. Let me stress again that this is not about low-level, computer-science-based research, but rather about fluency in the computation medium and work/research that reflects this fluency and, hopefully, helps define our emerging field.

* I'll offer some additional half-baked thoughts on "alternative intelligence" in a future post.

There is a new book by Nancy Armstrong called How Novels Think. It's brilliant, congruent with recent work by Andrew Elfenbein (in PMLA and elsewhere), which discusses print presentation, the look and feel of early 19th-century texts, as "interface." Armstrong's premise is that, since novels do a certain amount of thinking for us, they are bundles of smart data. Novelistic conventions, then, are basically a software package for making information smart. The really brilliant piece of her argument (it may be obvious, but I still think it is brilliant) is her idea that software packages and data bundles in-form: they form the inside of us — our psyches, our selves — as a means and effect of giving us information.

Armstrong's argument really helps me understand something that John Maeda is worried about in thinking about the computer as the artist's material.  In Creative Code, he says that he is worried that software is becoming too complex for people to use as a tool (intuitively, without laboriously reading manuals) while programming is becoming easier at the expense of creativity.  I can really understand what he's saying here if I think about software as a set of conventions for a specific type of novel — historical romance or gothic fiction, e.g. — and so the programmers of this software as the artists who come up with new genres, new forms, usable by many other very creative people.  Here is Maeda expressing his worry:

Programming tools are increasingly oriented toward fill-in-the-blank approaches to the construction of code . . . . The experience of using the latest software, meanwhile, has made even expert users less likely to dispose of their manuals, as the operation of the tools is no longer self-evident. Can we, therefore, envision a future where software tools are coded less creatively [i.e., a future of impoverished novelistic genres]? Furthermore, will it someday be the case that tools are so complex that they become an obstacle to free-flowing creativity [i.e., that you can't churn out gothic or sci fi]?

Maeda’s own software “Illustrandom” seems to me a beautiful example: something built with complicated rather than fill-in-the-blank programming, yet yielding software intuitive enough to let creativity flow.

Also, is it possible to discuss some of Ira's work, the Protobytes, as the kind of work that intervenes in Maeda's problematic? Ira, you said that you used bits of code, without thinking about them, as a painter might use brush strokes: throwing up bits of code, then seeing what happened?