Memoria was fabricated from two separate ‘biases’ or saved n-gram states. Pages 3 through 10 (not counting the title page and preface) are generated from the KJV translation of the Bible. All poems thereafter are a mixture of Revelation, Daniel, Genesis, and The Best Women’s Erotica of 2001.
Hi all, and thanks for inviting me to join this blog.
I’ve been engaged in an ongoing computer poetry experiment I call ‘The Augmented Imagination Project’ or ‘AI Project’(!) for short. I see the project as a creative experiment open to numerous interpretations, and no more susceptible to a definitive summary than a poem. But there are some things I can say by way of introduction to the project:
In the AI project, algorithms randomly generate words, which are shown in a randomized fashion in a digital display. Alphabetical constraints can be put on the words to be generated, for example the constraint that only words containing the letter ‘d’ are displayed. I am currently working on adding further constraints, to limit, say, the number of syllables or the rhyme ending.
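To make the mechanics concrete, here is a minimal Python sketch of that kind of constrained random word generation. The word-list path and the letter constraint are assumptions for illustration; the project’s actual implementation is certainly more elaborate:

import random

# Load a word list; /usr/share/dict/words is an assumed path, and any
# one-word-per-line file would do.
with open("/usr/share/dict/words") as f:
    words = [w.strip() for w in f if w.strip().isalpha()]

# Alphabetical constraint: keep only words containing the letter 'd'.
pool = [w for w in words if "d" in w.lower()]

# Display a few words in randomized fashion.
for _ in range(5):
    print(random.choice(pool))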
My interest in the project centers on the possibility of exploring the impact of computers on the creative process itself. Unlike most other computerised poetry generators, the AI project is not designed to produce finished poems, but to offer an interface for human-computer interaction whose promise is an enhancement or ‘augmentation’ of the poetic imagination, specifically with respect to its word-generating function.
As I argue in an essay on my website, it is possible to locate this project within a subtradition of computer poetry I call ‘computer-assisted poetry’ (as distinct from ‘computer-generated poetry’) that would include, for example, some poetic experiments by Charles Hartman, and Ray Kurzweil’s Cybernetic Poet software.
I think of myself as having a pluralistic poetics encompassing both avant-garde and more traditional perspectives. Consequently, I have personally used the AI project to assist in writing poetry in a variety of styles, including styles that may not appear in the least computer-generated. (An example can be found here.) This neutrality between a variety of approaches to poetry and poetics is another thing that excites me about the project.
To see the program in action, and my essay ‘Seven ways of looking at the Augmented Imagination Project’ visit my website www.jonahwilberg.net.
I’ve also created a resource site for digital poetics. It’s still in its early stages, and I’d be grateful for any suggestions of links to add to it.
Thanks again. I welcome comments on the project, and will keep you posted on updates.
I have finally gotten around to finishing and updating IM3’s new scripting language (JSB).
Here is an example of JSB (with output below). JJ, NN, and NNS are part-of-speech slots (Penn-style tags for adjective, singular noun, and plural noun) that get filled with random words from the dictionary:
var s = "";
s += JJ + " " + NNS + newline;
s += JJ + " " + NNS + newline;
s += "the " + NN + " of " + NNS;
print( s );
out( s );
the pollio of housings
- killer article on interactive poetry generators: Reading the Drones at eliterature 2014. It has a strong focus on JanusNode and other generators, and says: “Some of the most interesting contemporary poetry is written in collaboration with [Interactive Poetry Generators] and is yet to receive the critical attention it deserves – see, for example, the Gnoetry Daily group blog and Eric Elshtain’s Beard of Bees press.” word! (thanks for pointing it out, Nick!)
- a Turing test for poetry: botpoet.com! of particular amusement is the fact that in their leaderboard, 4 of the top 5 poems in the “Most computer-like computer poems” (i.e. computer-generated poems that were correctly identified as computer-generated) were produced by jGnoetry. Of the “Most human-like computer poems”, 2 of the top 5 are by JanusNode. But see, that’s why I don’t like that whole “try to fool people into thinking a computer-generated poem is actually human-generated” angle. The whole point of jGnoetry is that you spend time during interactive generation to make it interesting… that way it’s yours, but it isn’t as computer-generated. With JanusNode, all the work is done beforehand to make it human-like, but it isn’t really yours… and since it was so carefully human-authored, is it really solely computer-generated? Still, the website is a neat idea, check it out!
- Yet again, I missed the Computational Linguistics for Literature workshop. There seemed to be fewer talks this year, though, and nothing really on poetry or generation…
- the code poetry slam isn’t completely about poetry generation, but it’s still kind of cool…
- an interesting story about text generation with Markov chains from the writer of the infamous Scientific American “computer recreations” column that introduced the idea to north america
- no more ads on Gnoetry Daily! when did they start adding ads anyway? screw you, wordpress.com!
- the always-interesting poetry with mathematics blog had some interesting posts over the past year, including oulipo-esque symmetric squares and growing lines, as well as poetry with mathematics readings!
- in the ever-growing tradition of twitter bots… @pentametron!
- just so you know… Gnoetry Daily Volume 2 is in the works!!1!!
Continuing my overview of National Novel Generation Month, in which novel-length texts were computationally generated…
jiko’s project “Gen Austen” (hah!) produced several novels derived from Jane Austen (or in one case, Austen-related fanfic).
One uses trigrams with some part-of-speech tagging, one uses a numerological approach, two use dialogue-replacement algorithms (replacing the dialogue of one novel with the dialogue of another), and one is passed through an anagram generator to produce, basically, a list of anagrams. jiko also provides a list of resources he’s worked on (or just finds useful?).
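For the curious, the trigram technique itself is easy to sketch. This is the generic method in Python, not jiko’s actual code, and the source filename is invented:

import random
from collections import defaultdict

# Build a trigram model: map each word pair to the words observed to follow it.
tokens = open("austen.txt").read().split()   # hypothetical source file
model = defaultdict(list)
for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
    model[(a, b)].append(c)

# Generate by repeatedly sampling a follower of the last two words.
a, b = random.choice(list(model.keys()))
out = [a, b]
for _ in range(50):
    followers = model.get((a, b))
    if not followers:
        break
    c = random.choice(followers)
    out.append(c)
    a, b = b, c
print(" ".join(out))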
Nick’s novel World Clock consists of a series of template-generated paragraphs which describe a time in a location, a person in a place, and an action performed by that person.
For an added bit of class, the script outputs TeX, for easy pdf-ing.
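Template generation of this kind is mechanically simple. A toy Python version, with invented slot values that bear no relation to Nick’s actual templates or data:

import random

# Invented slot values, purely for illustration.
times = ["It is now exactly 05:00", "It is now exactly 14:23"]
places = ["in Samarkand", "in Juneau"]
people = ["An old man named Tao", "A young woman called Liesl"]
actions = ["wakes from an unsettling dream.", "closes a book and sighs."]

def paragraph():
    # One paragraph: a time in a place, a person, an action.
    return "{} {}. {} {}".format(random.choice(times), random.choice(places),
                                 random.choice(people), random.choice(actions))

print(paragraph())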
Next up is catseye, who developed a story engine that also powered dariusk’s entry.
catseye’s novel Dial S for Swallows discusses the interactions of two characters, an object, and a location. It reminds me a bit of the time I copy-pasted the output of a mud (a text “multi-user dungeon”, kids) repeatedly until it was novel-length. Skimming the source code, it looks like it is indeed a high-level simulator running several agents through a world representation. Another thing worth looking through. catseye also has several very interesting thoughts on the process worth further study.
Looks like elib wrote a twitter scraper that collated all lines it found that began with “The way that…” Not bad. Works best as conceptual writing.
Looks like MichaelPaulukonis did several NLP-based transformations on texts, including named entity recognition swaps between texts. It looks like something more is going on, but it’s not clear what; need to look through the final text and the source code a bit more.
ianrenton is apparently using a spam-generation technique called Bayesian poisoning, which adds common non-spam words to spam in an attempt to have it classified as legitimate mail, thereby rendering the classifier unreliable. It’s a great idea, since it’s likely to add to a text the kind of words you’d expect to see there (i.e. not spam-like). ianrenton produced a text using fanfic from fanfiction.net as a basis. I haven’t looked too closely through the source code, but it seems to work by collating sentences from different stories before applying any Bayesian-poisoning techniques.
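Here is a crude Python sketch of how I imagine that pipeline: collate sentences, then splice in common words at random. The word list and sentences are placeholders, and this is a guess at the approach rather than ianrenton’s actual code:

import random

# Stand-in "common word" list; a real poisoner would draw these from ham corpora.
COMMON = ["morning", "window", "garden", "quietly", "together"]

def poison(sentence, rate=0.2):
    # Splice common, non-spammy words into the sentence at random positions.
    words = sentence.split()
    for i in range(len(words), 0, -1):
        if random.random() < rate:
            words.insert(i, random.choice(COMMON))
    return " ".join(words)

# Placeholder sentences standing in for collated fanfic.
collated = ["She opened the door.", "The rain had finally stopped."]
print(" ".join(poison(s) for s in collated))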
lilinx is pulling all sentences with the word hit/hurt from a French translation of Homer’s Iliad, and reversing their order. My French isn’t very good, but it’s a great idea. Script, notes, and output.
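The recipe translates into a few lines of Python; the filename is hypothetical and the keyword stems are my guesses at the French equivalents of hit/hurt:

import re

text = open("iliade_fr.txt", encoding="utf-8").read()   # hypothetical filename

# Naive sentence split, then keep sentences containing the keyword stems
# ("frapp" and "bless" are guesses at lilinx's actual word list).
sentences = re.split(r"(?<=[.!?])\s+", text)
hits = [s for s in sentences if re.search(r"frapp|bless", s.lower())]

# Reverse their order.
for s in reversed(hits):
    print(s)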
Then there’s ikarth’s “Gutenberg shuffle” approach. Described as: “Take a collection of texts from Gutenberg, generate a list of all of the names you find. Swap the names around. Shuffle the sentences. Shuffle and redeal for every paragraph.” Here’s the code, which allegedly uses OpenNLP, and the resulting novel.
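The description maps almost one-to-one onto code. A rough Python sketch that fakes the name list instead of extracting names with real NLP (the part ikarth delegates to OpenNLP):

import random
import re

def gutenberg_shuffle(text, names):
    # Swap the names around: substitute each name with a permuted one.
    permuted = names[:]
    random.shuffle(permuted)
    mapping = dict(zip(names, permuted))
    text = re.sub("|".join(map(re.escape, names)), lambda m: mapping[m.group()], text)
    # Shuffle and redeal the sentences of every paragraph.
    paragraphs = []
    for para in text.split("\n\n"):
        sentences = re.split(r"(?<=[.!?])\s+", para)
        random.shuffle(sentences)
        paragraphs.append(" ".join(sentences))
    return "\n\n".join(paragraphs)

# Hand-picked stand-in names and corpus; the real pipeline finds names automatically.
print(gutenberg_shuffle(open("corpus.txt").read(), ["Elizabeth", "Darcy", "Emma"]))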
Strand’s “simple solution” is
puts "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo. " * 6250
ah ha ha ha ha…. more conceptualism. or is it flarf? or just good ol’ fashioned lulz? (note that 6,250 copies of an eight-word sentence come to exactly 50,000 words, the NaNoGenMo minimum.) I like the sentence, but… I dunno…
jonkagstrom’s Abscission is deep. Apparently, the approach is to part-of-speech-tag Kafka’s Metamorphosis, then modify each paragraph according to a series of transformations directed by WordNet. Pretty awesome on a number of levels. Here’s the output and the code.
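I haven’t traced the whole transformation series, but the core move (tag parts of speech, then let WordNet steer substitutions) can be sketched with NLTK. This is a generic illustration, not jonkagstrom’s code:

import random
import nltk
from nltk.corpus import wordnet as wn
# Requires the punkt, averaged_perceptron_tagger, and wordnet NLTK data packages.

def wordnet_warp(sentence):
    # Replace each noun with a random lemma drawn from one of its WordNet synsets.
    out = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        if tag.startswith("NN"):
            synsets = wn.synsets(word, pos=wn.NOUN)
            if synsets:
                word = random.choice(random.choice(synsets).lemma_names()).replace("_", " ")
        out.append(word)
    return " ".join(out)

print(wordnet_warp("Gregor Samsa woke from troubled dreams."))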
I am currently working on a new scripting language for im3 which has a more BASIC-like syntax but uses JSON objects as its base variable type. Variable declarations, math operators, string operators, and basic math and string functions have been added, as well as FOR and WHILE loops, array plucking and assignment, and if/else procedures, and I think I finally have the parser working properly. SMUP was clunky and difficult to work with, so I decided to create a language that I felt was more enjoyable to play with and more powerful, functionality-wise. And importantly, I’m having a great deal of fun actually programming it.
I have also compiled my dictionary into a JSON object. It contains around 50k words, with CMU’s Pronouncing Dictionary appended, along with part-of-speech information, keyword tagging, and stress information. It can be downloaded here:
im3 now makes it easier to add words en masse and any dictionary can now be exported as a JSON object, so I’m super excited about the project.
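For a sense of the shape of the data, here is what a single entry might look like, written as a Python literal. This is purely illustrative; I have not pinned down the actual im3 schema, and every field name and value below is an assumption:

# Purely illustrative; the real im3 dictionary schema may differ.
entry = {
    "word": "housings",
    "pos": ["NNS"],                   # part-of-speech tags
    "keywords": ["architecture"],     # keyword tagging (invented here)
    "phones": "HH AW1 Z IH0 NG Z",    # CMU Pronouncing Dictionary phonemes
    "stress": "10",                   # stress pattern read off the vowel digits
}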
Also, today my NeuroSky EEG reader was delivered, and the developer’s kit is free and written in Objective-C, which I can work with. So there are plans in the works to start developing for the headset, which I think will be super fun.