Sound Poem 9: Prosthetic Imagination
there’s a great piece there
and variation exists as examples
the machine and the poetry is a wonderful summer
always be some delightful
little less theoretically interesting
poems folks leveraging computational creativity
this equipment generates a table
another the adjective position
for various times in performance
output has appeared in computational skeleton
a singer during the collection of electronic literature
April 4-6, 2010, supervised generation with type-based bigrams using stochastic beam search and phonemic evaluation. Source text: excerpts from Jim Carpenter’s “Prosthetic Imagination” blog. Generator: ePoGeeS
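ePoGeeS itself isn’t something I can show here, but the recipe in that caption can be sketched roughly. Note the hedges: this is my own toy, not the real generator — I’m using plain word bigrams where the caption says *type-based* (part-of-speech) bigrams, the “phonemic evaluation” here is just a crude alliteration count, and the miniature source text stands in for the actual blog excerpts.

```python
import random
from collections import defaultdict

# Toy stand-in for the real source text (blog excerpts in the actual piece).
SOURCE = """the machine composes a poem and the poem wants a human touch
the machine is doing some work and the work is a process
a poem is a process and the process is a machine"""

def bigram_table(text):
    """Map each word to the words observed to follow it."""
    table = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def phonemic_score(line):
    """Crude phonemic proxy: reward repeated initial sounds (alliteration)."""
    initials = [w[0] for w in line]
    return sum(initials.count(c) - 1 for c in set(initials))

def stochastic_beam(table, seed, length=6, beam=4, samples=8):
    """Keep `beam` partial lines; extend each with randomly sampled successors,
    then prune by score -- a stochastic beam search."""
    lines = [[seed]]
    for _ in range(length - 1):
        candidates = []
        for line in lines:
            nexts = table.get(line[-1])
            if not nexts:
                candidates.append(line)  # dead end: carry the line forward
                continue
            for _ in range(samples):
                candidates.append(line + [random.choice(nexts)])
        candidates.sort(key=phonemic_score, reverse=True)
        lines = candidates[:beam]
    return [" ".join(line) for line in lines]

random.seed(9)
for line in stochastic_beam(bigram_table(SOURCE), "the"):
    print(line)
```

The “supervised” part is everything the code doesn’t do: a human picks the source text, the seed, and which of the beam’s survivors are worth keeping.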
So there’s this guy, and he works as a software engineer for a while, and then he gets a teaching job. But apparently teaching isn’t enough for him, because in his spare time he does poetry generation. He puts his poetry generator online for a couple years, blogs about it, then he retires and the generator comes down (before I get a chance to try it out! darn.) The only records left of his years of work are the blog and a couple of mentions of a “project” in which the generator’s poems were published under other poets’ names without their permission. (update: I eventually came across a talk he gave back in ’04 – it includes a pdf with source code! I haven’t read through it yet.)
But anyways, I copy-pasted a bunch of the blog entries into a text file because there’s some good stuff there and, you know, eventually the guy’s gonna drop dead in traffic and blogspot will shuffle him offline. Then I figured I’d put his blog excerpts through my generator. (I shall immortalize him in verse! lol) But I also wanted to think through some of the things he talked about in his blog.
To begin with there’s this post from Thursday, November 16, 2006 titled “Cost of the first poem”, where he says:
“I’m approaching the point in etc3 where it will compose its first complete poem. There have been some fragments from unit tests, but nothing that exercises this new system as I intend for it to. I’ve been working since May, just on the infrastructure and formatting of the source texts. … The cost of finding out if this monster can compose a decent poem is high: Seven months of database stuff, document planners, poetic forms classes, TAG trees, TAG nodes, word classes, randomizers, enums, utilities, etc., etc., etc., with nary a poetic moment.”
Now, I’ve been a full-time programmer myself, so I know where the guy’s coming from. What you do is:
- work with the client to develop a contract, statement of work, and schedule
- plan the system components, data representations, messaging formats, etc.
- beta-test and deploy
And that’s cool, and if working that way is so enjoyable to the guy that he wants to do it in his spare time, that’s cool too. But if you’re doing poetry generation for fun, and if (like me) you’re not the kind of person who can work on something for seven months straight without a regular payoff, you could try the following approach:
1. read a bunch of poetry-generation papers. read a bunch of nlp and computational linguistics papers. think about what would be fun to implement.
2. implement it. make sure it isn’t too long (a day or two) before it generates something poetic, even if it’s simple and you have to edit it.
3. generate a bunch of poems. think about what would make them better. go back to step 1.
4. profit!!! lol there is no step 4. (and ‘sides, it’d be unreachable)
My point is: developing poetry generators can be a process of exploring the technical possibilities, guided by reflection and re-evaluation against the published work of others, and driven by the needs of the poetry being generated. In other words, it can be a creative practice.
There’s a possible objection to this approach suggested in a post from July 1, 2007:
“My goals in the etc project include making software whose first draft is the final draft. Since Charles Hartman, folks in this field have held that their generated poetry was a dropping off point, that the work wanted a human touch. But the problem with that approach is that it gives the human-centric critics, with their socially constructed and entrenched intelligism, what they see as a reason to dismiss us. The argument goes that if a piece requires editing, the machine isn’t really doing the work. I’ve been working to get there. And I’m reasonably satisfied with etc3’s results. …

“However, though etc3 puts out some decent stuff, it doesn’t always or consistently do so. The poems that I post here are ones that I’ve selected from the many that I encountered during testing. This too is an editorial function. But where previously the editorial process was engaged with revision and rewrites, the editorial process now is one of selection, rather like the judgments journal editors make when selecting (from the baskets of poems they receive each quarter) some few for publication, in essence a process of rejection, with what’s left over or “good” released into the wild.”
But it seems to me that the argument “if a piece requires editing, the machine isn’t really doing the work” is weak, for several reasons:
- Clearly, the machine is doing some work. You can attempt to quantify exactly how much of the work is done by the machine, or you can decide (as I did) that you probably could quantify it but aren’t really interested in doing so.
- Supervised poetry generation, where the human and a machine are both doing some non-zero amount of work, is promising and inherently interesting. Most of the computer poetry floating around these days is basically Flash animations. No disrespect to Flash animations! But computer science, especially AI, NLP, and CL, has more to offer the practice of computer poetry, so there’s plenty to explore.
- Unsupervised poetry generation, where the human is doing zero or near-zero work (like pushing a ‘start’ button or setting parameters), is a vastly unexplored problem that is arguably AI-complete.
- What are the evaluation metrics? Don’t say ‘Turing Test’ because template-based generators can already trick a fair number of people into thinking that they produce human-authored poetry, especially if their output is compared to mediocre human-generated poetry. If you don’t have metrics, how can you tell if you’re doing better or worse than before?
- What are the right goals? To model the human creative process? To generate poems that faithfully follow the various poetic forms? To generate poems that have a unique style? To generate poetry that gets consistently accepted in poetry journals?
- Let’s say you achieve all those goals. How can you legitimately claim that the program is generating poetry that is inherently meaningful? Or is it really the case that the poem’s meaning comes from the human who authored the generation program? (This is a key unanswered theoretical question in AI, btw.)
- Let’s say you answer all the questions above. Is that the only way to do things? Are there other ways that are more efficient, more interesting, or produce results that your original approach can’t? How can you prove or quantify that?
- And here’s the key question: can you really answer all these questions in your spare time, by yourself, with little encouragement and less guidance? My point is: probably not. The unsupervised poetry generation problem is not something that can be solved by a single project, so saying that a given project hasn’t solved it may be true, but not very interesting. Unsupervised poetry generation may or may not be something worth working towards, but that’s another argument.
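About that template-generator jab above: the point is easy to demonstrate, because a few slots filled from word lists already yield plausible-looking lines. A minimal sketch (the templates and word lists are my own toy invention, not any particular system’s):

```python
import random

# Toy word lists; a real template generator would use much larger ones,
# probably mined from a corpus.
NOUNS = ["machine", "poem", "summer", "skeleton", "singer"]
ADJS = ["delightful", "wonderful", "grizzled", "electronic"]
VERBS = ["composes", "generates", "dismisses", "immortalizes"]

TEMPLATES = [
    "the {adj} {noun} {verb} a {noun2}",
    "a {noun}, {adj} and {adj2}",
    "what the {noun} {verb} is not a {noun2}",
]

def generate_line(rng=random):
    """Fill one randomly chosen template from the word lists."""
    return rng.choice(TEMPLATES).format(
        adj=rng.choice(ADJS), adj2=rng.choice(ADJS),
        noun=rng.choice(NOUNS), noun2=rng.choice(NOUNS),
        verb=rng.choice(VERBS),
    )

random.seed(4)
for _ in range(3):
    print(generate_line())
```

Every line this produces is grammatical by construction, which is exactly why output like it can pass a casual eyeball test — and exactly why “people thought it was human-written” tells you so little as a metric.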
Given everything above, I will summarize my approach to computer-generated poetry as: supervised poetry generation, with an eye to exploring and possibly answering questions in unsupervised poetry generation.
At any rate, I will agree with the d00d on one thing:
“Getting this right takes time, practice, and lots of patience. Either that or a grizzled old coder driven just to the edge who doesn’t quite get it that to most folks he makes no sense at all.”
Computer-generated poetry? There’s no glory, no pay, and at the end of the day the flarf kiddies will take whatever you built and shit all over it. You gotta be crazy: world’s-biggest-ball-of-twine crazy, not lone-gunman crazy, but still CWAZY!!1! (where’s the blink tag when I need it? lol)