
Category Archives: code

Laura and I finally had a moment to regroup post-Chicago. Both of us were excited by the warm reception(s) we received for our “strange” work at MLA. We decided to continue the journey, but of course to increase the strangeness. To that end, I ran some new Google searches today – my favorite being “natural language grammatical parsers”. This search brought me to Stanford’s Natural Language Processing Group (“SNLP”). If you decided to pass on the link, here’s the key info:

“…algorithms that allow computers to process and understand human languages…”

Does that sound awesome or what!?

What is sooo exciting about this phrase (to me) is the potential it suggests for deeper semantic visualization.

SNLP incorporates “…innovative probabilistic and machine learning approaches to NLP…”; this includes the ability to train the system, which is pretty spectacular!! I suspect in some circles this is old news, but to me the possibilities, in regard to my and Laura’s continued pursuits of strangeness, are mind-boggling.

Of course, that being said, the software will take some time to understand and integrate into the existing work. Here’s an online version of the parser.

Ira

• Please Note: The Stanford parser returns a very abbreviated shorthand of the parts of speech (or perhaps it has been too long since I sat in an English class). Here’s the decoder:
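For anyone who would rather poke at the parser from Java than through the web demo, a call to it looks roughly like the sketch below. This is adapted from the ParserDemo that ships with the parser download; the model path and class names come from a recent release, so treat it as an approximation rather than anything from our project code.

```java
import java.io.StringReader;
import java.util.List;

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.PTBTokenizer;
import edu.stanford.nlp.process.Tokenizer;
import edu.stanford.nlp.trees.Tree;

public class ParserSketch {
  public static void main(String[] args) {
    // Load the English PCFG grammar included in the parser download.
    LexicalizedParser lp =
        LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");

    String line = "The yellow leaves drift slowly across the cold field.";

    // Tokenize the line, then parse the tokens into a tree.
    Tokenizer<CoreLabel> tokenizer =
        PTBTokenizer.factory(new CoreLabelTokenFactory(), "")
                    .getTokenizer(new StringReader(line));
    List<CoreLabel> tokens = tokenizer.tokenize();
    Tree parse = lp.apply(tokens);

    // Prints the bracketed tree with the abbreviated tags (DT, JJ, NNS, ...).
    parse.pennPrint();
  }
}
```

The bracketed output, e.g. (NP (DT The) (JJ yellow) (NNS leaves)), is exactly the shorthand the decoder translates: DT is a determiner, JJ an adjective, NNS a plural noun, and so on.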

I’ve created two (VERY) simple semantic visualizations based on a search for terms defined as positive or negative. I was originally planning on dynamically generating word lists using WordNet or some other dictionary API. However, good old dictionary.com has a much wider and deeper word well (including returns from WordNet). I looked into programmatically parsing the returned dictionary.com URL (which I may eventually do), but for now I have generated the word lists manually (I know, I know, this is admitting some defeat). The visualizations plot a linear and then a radial gradient based on the lines containing the pos or neg terms. I keep track of the number of pos/neg terms, should a line contain multiple terms (some do). Each line (or concentric ring) overlaps its neighbors and is translucent, allowing some optical color mixing. Arbitrarily, red is pos and blue is neg. Gray is neutral.

Links:

Linear Visualization

Radial Visualization
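For the curious, the heart of the linear version boils down to something like the following Processing sketch. This is a simplified reconstruction, not the production code: the file name “poem.txt” and the tiny pos/neg word lists are stand-ins (the real lists, pulled from dictionary.com, are far longer).

```java
// Simplified reconstruction of the linear pos/neg visualization.
String[] posWords = { "love", "bright", "sweet", "joy" };   // stand-in list
String[] negWords = { "death", "dark", "cold", "grief" };   // stand-in list
String[] poemLines;

void setup() {
  size(600, 600);
  noStroke();
  poemLines = loadStrings("poem.txt");   // placeholder file name
}

void draw() {
  background(255);
  float bandHeight = height / float(poemLines.length);
  for (int i = 0; i < poemLines.length; i++) {
    int pos = countHits(poemLines[i], posWords);
    int neg = countHits(poemLines[i], negWords);
    // red = pos, blue = neg, gray = neutral; low alpha lets neighbors mix
    if (pos == 0 && neg == 0) {
      fill(150, 150, 150, 60);
    } else if (pos >= neg) {
      fill(255, 0, 0, 60 + pos * 40);
    } else {
      fill(0, 0, 255, 60 + neg * 40);
    }
    // each band overlaps its neighbors slightly for optical color mixing
    rect(0, i * bandHeight - bandHeight * 0.25, width, bandHeight * 1.5);
  }
  noLoop();
}

// counts how many terms from a list occur in one line of the poem
int countHits(String line, String[] terms) {
  int hits = 0;
  String lower = line.toLowerCase();
  for (int i = 0; i < terms.length; i++) {
    if (lower.indexOf(terms[i]) != -1) hits++;
  }
  return hits;
}
```

The radial version is the same idea with concentric rings (ellipse/arc calls) in place of the horizontal bands.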

The next visualization (yes, yes, it’s a day late) plots an array of protobits (pixels) based on all the characters in the poem (including spaces). The syntactic elements are the colored pixels, at their actual locations in the poem. The poem is read (so to speak) by an arthropod-esque bot that moves across the characters. The arthropod’s motion is affected by the respective syntactic elements it crosses. Any characters the arthropod’s head touches are displayed in the bottom right of the window. The syntactic elements are also displayed in the center and remain there until the next element is reached. The arthropod, built as a series of interconnected springs, is a metaphor for the stream of reading, affected by syntax as well as by its own inertia.

Link to syntactic visualization
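Since the springs are the interesting part mechanically, here is a stripped-down Processing sketch of just the spring-chain idea, with the mouse standing in for the next character to be read. The real arthropod code is considerably more involved; this is only meant to show the inertia described above.

```java
// A minimal spring-chain: the head chases a target and every segment
// springs toward the segment ahead of it, so the whole body lags and flexes.
int numSegs = 12;
float[] x = new float[numSegs];
float[] y = new float[numSegs];
float[] vx = new float[numSegs];
float[] vy = new float[numSegs];
float springing = 0.05;  // spring stiffness
float damping = 0.85;    // keeps the motion from exploding; leaves some inertia
float restLen = 18;      // rest distance between segments

void setup() {
  size(600, 300);
  for (int i = 0; i < numSegs; i++) {
    x[i] = width / 2 - i * restLen;
    y[i] = height / 2;
  }
}

void draw() {
  background(255);
  // the head chases the mouse (a stand-in for the next character's position)
  moveToward(0, mouseX, mouseY);
  // each body segment chases the segment ahead of it
  for (int i = 1; i < numSegs; i++) {
    moveToward(i, x[i - 1] - restLen, y[i - 1]);
  }
  stroke(0);
  fill(100, 150);
  for (int i = 0; i < numSegs; i++) {
    if (i > 0) line(x[i - 1], y[i - 1], x[i], y[i]);
    ellipse(x[i], y[i], 10, 10);
  }
}

// simple spring integration toward a target point
void moveToward(int i, float tx, float ty) {
  vx[i] += (tx - x[i]) * springing;
  vy[i] += (ty - y[i]) * springing;
  vx[i] *= damping;
  vy[i] *= damping;
  x[i] += vx[i];
  y[i] += vy[i];
}
```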

I created my first visualization today for the project. Keeping things simple, I plotted word usage counts as a particle graph. (I also settled on the term protoBits.)

Link to Visualization

My process: All the words in the poem were sorted alphabetically, and duplicate words were counted. I plotted the unique words along the x-axis and their usage counts along the y-axis. Each particle initially occupies a unique position, but to keep things more interesting I made them dynamic and added a random jitter to their x-position upon impact with the ground. Although there is acceleration along the y-axis, there is no friction, so the system never stabilizes. Moving the mouse over a particle reveals the word plotted. The higher particle columns represent the more common words. Particles turn orange once they’ve been rolled over.
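For anyone curious about the mechanics, the counting and the bouncing reduce to something like the sketch below. It is a simplified reconstruction: “poem.txt” is a placeholder, and the mouse-over reveal and orange roll-over state are left out to keep it short.

```java
// Simplified reconstruction of the word-count particle graph.
String[] uniqueWords;
int[] counts;
float[] px, py, pvy;   // particle positions and y-velocities
int numUnique = 0;

void setup() {
  size(800, 400);
  String raw = join(loadStrings("poem.txt"), " ");
  String[] words = sort(splitTokens(raw.toLowerCase(), " ,.;:!?\"'()-"));

  // tally duplicates; the words are sorted, so equal words are adjacent
  uniqueWords = new String[words.length];
  counts = new int[words.length];
  for (int i = 0; i < words.length; i++) {
    if (numUnique > 0 && words[i].equals(uniqueWords[numUnique - 1])) {
      counts[numUnique - 1]++;            // duplicate word: bump its count
    } else {
      uniqueWords[numUnique] = words[i];  // new unique word
      counts[numUnique] = 1;
      numUnique++;
    }
  }

  // one particle per unique word, spread along the x-axis, starting on the ground
  px = new float[numUnique];
  py = new float[numUnique];
  pvy = new float[numUnique];
  for (int i = 0; i < numUnique; i++) {
    px[i] = i * (width / float(numUnique));
    py[i] = height;
  }
}

void draw() {
  background(255);
  stroke(0);
  strokeWeight(3);
  for (int i = 0; i < numUnique; i++) {
    pvy[i] += 0.4;           // constant downward acceleration
    py[i] += pvy[i];
    if (py[i] >= height) {   // impact with the ground
      py[i] = height;
      pvy[i] = -sqrt(2 * 0.4 * counts[i] * 20);  // rebound height ~ usage count
      px[i] += random(-2, 2);                    // random x jitter on impact
    }
    point(px[i], py[i]);
  }
}
```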

My goal will be to try to create a unique visualization (of increasing complexity) each day leading up to the conference (so please stop by tomorrow ;-) ).

I’ve been able to get the WordNet API integrated into a simple Java app. One amusing side note is that I got stuck for a day trying to get the WordNet .jar file to run in my Java app. After spending a few hours of unsuccessful Googling, I picked up my own book, in which I had explained (to myself) how to solve the problem. So what I originally thought would be the more time-consuming and challenging parts of the project–parsing and semantic relationships–have been (at least initially) fairly straightforward. The larger challenge that looms before me is what the heck I’m going to do with all this data.

The problem is not actually what to do, but rather what to do in the next 2 weeks, prior to MLA. I wish I could just explore this material without the burden of a deadline. This was supposed to be how I was going to spend my sabbatical this fall–yeah, right!

My thoughts about the visualization process today are to begin with single-cell creatures and work my way up. I’ve been thinking about a name for these fundamental organisms: microbots, micro-protobytes, microbytes, protobits, protobots. My thought for these initial creatures is single pixels that bounce in one dimension: distance = word usage. I know this is fairly boring, but I feel like I need to begin simply and fundamentally. I will post a few Processing sketches of these initial tests next.

It’s time this blog was resuscitated.

Fortunately I have something to write about, as I am beginning a very interesting collaboration with Laura on the visualization of 18th-century romantic poetry, a subject I am severely ignorant about. Here is a recent note I sent to Laura:

Sent Dec 12, 2007

… Some initial thoughts I want to share:

1. I’ve been thinking and working on parsing:
Thus far I’ve been able to input the poem and generate some relatively simple statistical data about overall syntax and word usage (i.e., number of occurrences of terms). I could (and will) parse deeper and collect phoneme groups, prefixes, suffixes, etc. as well. In addition, I really want more semantic “meat”, so I’ve downloaded WordNet (a “lexical database for the English Language” developed at Princeton). WordNet should (I’m hoping) allow me to query all terms against a simplified semantic interface. For example, I would like to be able to identify any term that relates to birth or death or love or hate, etc. This seems the only logical way to approach mapping semantics. Of course, once I collect buckets of terms based on these more general concepts, finer semantic filtering could occur recursively (man, that sounds pretentious; put it on the poster “fer sure”!). For example, all the terms that semantically connect to birth could be further separated: the giving forth of an idea, the creation of a life-form, heritage, lineage, noun vs. verb, etc., etc.

If time permits (hah!) it would be good to find some other dictionary APIs: for example, aural data (relating to phonemes), etymology, etc.

Once all this mess of data is collected and statistics are generated, I’ll connect the data to a visualization tool. For now, I’m thinking about using my protobyte forms as sort of a conceptual armature (genus perhaps?). I would love to have the poem visualizations/protobytes motile in 3D (ultimately evolving) – poetry creating virtual life!!!
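A rough sketch of the kind of WordNet query described in the note follows. The library (MIT’s JWI interface), the class names, and the dictionary path here are assumptions for illustration only; I haven’t committed to a particular WordNet API yet.

```java
import java.io.File;
import java.util.List;

import edu.mit.jwi.Dictionary;
import edu.mit.jwi.IDictionary;
import edu.mit.jwi.item.IIndexWord;
import edu.mit.jwi.item.ISynset;
import edu.mit.jwi.item.ISynsetID;
import edu.mit.jwi.item.IWord;
import edu.mit.jwi.item.IWordID;
import edu.mit.jwi.item.POS;
import edu.mit.jwi.item.Pointer;

public class BirthBucket {
  public static void main(String[] args) throws Exception {
    // Path to a local WordNet "dict" folder -- adjust for your install.
    IDictionary dict = new Dictionary(new File("/usr/local/WordNet-3.0/dict"));
    dict.open();

    // Look up the noun "birth" and take its first sense.
    IIndexWord idxWord = dict.getIndexWord("birth", POS.NOUN);
    IWordID firstSense = idxWord.getWordIDs().get(0);
    IWord word = dict.getWord(firstSense);
    ISynset synset = word.getSynset();

    System.out.println("gloss: " + synset.getGloss());

    // Synonyms in the same synset: candidate terms for the "birth" bucket.
    for (IWord w : synset.getWords()) {
      System.out.println("synonym: " + w.getLemma());
    }

    // One level of more specific senses; the finer, recursive filtering
    // mentioned in the note would keep walking these pointers.
    List<ISynsetID> hyponyms = synset.getRelatedSynsets(Pointer.HYPONYM);
    for (ISynsetID sid : hyponyms) {
      for (IWord w : dict.getSynset(sid).getWords()) {
        System.out.println("hyponym: " + w.getLemma());
      }
    }
  }
}
```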

I just read the most excellent review of Manovich’s The Language of New Media; the review is by Bill Warner.  A lot of what Bill says about this book dovetails with some things I’ve been thinking about lately.

First, I have been incredibly amazed, in a dumb sort of way, by the spirit / body connection in computers: I just can’t see how turning on a switch makes a program execute, a program that will in turn operate little on / off switches on a microchip via 0s and 1s.  Ira said in response to this problem that it’s really no more amazing than what happens when you turn on a light switch.  Here is a relevant passage from Manovich that Warner quotes and explicates:

In retrospect, the shift from a material object to a signal accomplished by electronic technologies represents a fundamental conceptual step towards computer media.  In contrast to a permanent imprint in some material, a signal can be modified in real time by passing it through a filter or filters . . . . [A]n electronic filter can modify the signal all at once . . . .  [A]n electronic signal does not have a singular identity — a particular qualitatively different state from all other possible states. . . .  In contrast to a material object, the electronic signal is essentially mutable. . . . This mutability of electronic media is just one step away from the ‘variability’ of new media. . . . Put differently, in the progression from material object to electronic signal to computer media, the first shift is more radical than the second.  (132-133)

Here is the really interesting part, though, about the shift from electronic mutability to computer variability: the change happens through software.  I remember sitting in my office with a tech guy right after getting my second laptop, trying and trying to figure out why we couldn’t get any sound — and then I found a volume button on the side of the machine.  My new laptop has no such button.  Here is Warner explaining and quoting Manovich:

The increase in range of variation in the digital is accounted for by two factors: “modern digital computers separate hardware and software” (so for example, changing the volume will just be a software change) and second, “because an object is now represented by numbers, that is, it has become computer data that can be modified by software.  In short a media object becomes ‘soft’ — with all the implications contained in this metaphor” (133).  The mutability of TV (with hue, brightness, vertical hold, etc.) becomes the much wider range of variability for display of a page in a browser window.  (Warner 11).

Both Warner and Manovich argue that what’s significant about the new media is not in fact computers but computers-running-software, that it really doesn’t matter what media instantiates the software at all.  Here’s Manovich:

[T]he fundamental quality of new media that has no historical precedent [is] programmability.  Comparing new media to print, photography, or television will never tell us the whole story.  For although from one point of view new media is indeed another type of media, from another it is simply a particular type of computer data, something stored in files and databases, retrieved and sorted, run through algorithms and written to the output device.  That the data represent pixels and that this device happens to be an output screen is beside the point. . . .  New media may look like media, but this is only the surface.  (47-48; qtd. Warner 14)

That smacks of idealism and transcendence: code is spirit, and matter / media doesn’t matter at all.  But somehow they are both trying to avoid that trend, analyzed so well by N. Katherine Hayles in her posthuman book.  I’m not sure how, but here is a paragraph in Warner dealing with the problem:

Although the phrase “the computer running software” is redundant, it offers a way to emphasize the way a relatively immaterial thing — software — invades and dematerializes its supposedly hard home, what is conventionally called “hardware” but what we sometimes mistakenly identify as “the computer.”  This is a mistake, not just because hardware needs software the way, by analogy, we might say that the human body needs the communications media of neurons, enzymes and electric signals as a condition of life.  From the beginning of computing, even the hardest components of design — the arrangement of circuits and vacuum tubes, the code embedded on read-only memory, and microprocessors made of silicon — were designed to embed “logic blocks” (like “and,” “or,” “invert”) and algorithms first expressed as software.  In other words, there is a very real sense in which the computer is software all the way down.

A chapter of Manovich’s book that Warner spends a lot of time analyzing is called “Selection,” and it is about how software has made creating into a process of selecting and filtering — a fettering of creativity that may be akin to transforming a writer into a mere reader or, worse, a critic, an assembler of other people’s words into sentences and ideas (she wrote nervously, looking away from her own postings).  This reminds me of Maeda’s attack on the new software programs for programming, as well as Ira’s refusal to let his Flash students click on any of the menu options in the “Actions” portion of Flash (he makes them write out the ActionScript by hand).

I was reading the section of Ira’s book, Foundation Processing, called “Motion.” In it, he says:

In many ways, computer monitors are animation machines, as the pixels being continuously refreshed on the screen, at your monitor’s refresh rate, are a form of animation (not a terribly exciting one). This fact that animation can happen in front of your eyes and be wholly undetectable is significant.

I hadn’t really understood, not REALLY, what was at stake in “flickering signifiers” (N. Katherine Hayles, “Virtual Bodies and Flickering Signifiers,” Electronic Culture, ed. Timothy Druckrey). But this passage made sense for me of a phenomenon I encountered a while ago at the library. I’m not sure it’s still true, but, when King Library first put up surveillance cameras, I remember looking at them and seeing the computer screens just behind me. They looked like TV screens used to look late at night, after stations went off the air, lines cycling up over and over again; the quickly moving lines were all that could be seen on a gray screen. Then I would turn around, look at the library’s computer screens with my own eye, and see no lines, no cycling, a perfectly normal desktop picture. I’d turn back to the video camera, and it would still be there. Things are moving all over the place on these screens, but our eyes distill them into still images.

I at first posted about how this idea surfaces in sci-fi (and poetry).  More interesting (Ira says “significant,” but it could also be sinister) is to think about the effects of distilling moving, flickering, animated images into still images.  Is anything subtracted from our view in the process of distillation?  Is the process completely “natural,” or informed by ideology, by habits of seeing? It strikes me that the eye’s propensity to render moving images still is very much like the capitalist’s desire to render relationships as commodities.  Not all reduction is bad — some of it is essential to sanity, if nothing else.  But does the eye’s desire to reduce movement to stillness affect our ideas of beauty, or even promote the use of Ritalin for ADHD? (Sorry for such a huge leap.)  Counteracting the sinister is why it is so important to teach code; knowing a bit of it enables looking under the hood.
