
Infinite Monkeys v1.61

November 24, 2011

miniature ends state explanations Series sick
starts side exchange
send data
Source eventually Your revolves scribes
nowhere earns scattered
rid disk kingdom; tyranny You undoubtedly
you until likely
Same| mysterious struggle
means swathe
launderers star rectangle emits Source eRoGK7; albeit
selecting guiding grasping getting go otter-view
otherwise effect thought throwing Gnoetry; lying ghosts
albeit temple emits sent there earns superstition
son News search help playing gertbot|
span news
NOT tools spectacle everybody Yeailliam men
system markov Voyage epy Yeats Sidenote:
GRASS staples statement tat
l leaf-mould divided drifting grafting guided
you’re extreme

<*><*><*><*><*><*>

This is an unsupervised generation done with Infinite Monkeys. Believe it or not, this was done completely with n-grams gathered from our newest release, the Gnoetry Daily chapbook. Normally, n-grams are biased in a linear fashion, i.e., the word that comes after a given word becomes one of its links. In this case, the bias was constructed from some basic rules instead. Can you figure out the rule that wrote this poem?
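To make the linear bias concrete, here is a minimal sketch in Python (purely illustrative; `build_linear_bias` is a made-up name, not Infinite Monkeys' actual code): each word in the source text links to every word observed immediately after it.

    from collections import defaultdict

    def build_linear_bias(words):
        # Standard n-gram bias: each word links to every word
        # that immediately follows it somewhere in the corpus.
        links = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            links[prev].append(nxt)
        return links

    corpus = "the cat sat on the mat and the cat slept".split()
    links = build_linear_bias(corpus)
    print(links["the"])  # ['cat', 'mat', 'cat'] -- repeats preserve frequency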

The next word in the line’s sequence must begin with the same letter that ended the last word. For example: spam manners slap patterns.
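One plausible way to build a bias table from that rule alone (again an illustrative Python sketch with hypothetical names, not the tool's code): index the vocabulary by first letter, then link each word to the group matching its last letter.

    from collections import defaultdict

    def build_letter_chain_bias(vocab):
        # Rule-based bias: a word links to every word that begins
        # with the letter the previous word ends with.
        by_first_letter = defaultdict(list)
        for w in vocab:
            by_first_letter[w[0].lower()].append(w)
        return {w: by_first_letter[w[-1].lower()] for w in vocab}

    links = build_letter_chain_bias(["spam", "manners", "slap", "patterns"])
    print(links["spam"])  # ['manners'] -- 'spam' ends in 'm', 'manners' begins with 'm'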

This is one of two new ways to bias the n-gram data, the other being simple (naive) alliteration (which doesn't take phonetic components into account).
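Naive alliteration would be the same sketch with the lookup key changed from last letter to first letter, so that each word links to words sharing its initial letter; spelling only, hence "naive".

    from collections import defaultdict

    def build_alliteration_bias(vocab):
        # Naive alliteration: a word links to every word that starts
        # with the same letter it does (by spelling, not by sound).
        by_first_letter = defaultdict(list)
        for w in vocab:
            by_first_letter[w[0].lower()].append(w)
        return {w: by_first_letter[w[0].lower()] for w in vocab}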

Check it out:

https://code.google.com/p/infinitemonkeys/downloads/list

*Please also note:

Firstly, with the bias in place you will find that the “rule” applies basically without fail. However, in the case that a word has no links, another random word will be drawn.

Also, if you clear the bias and generate a line out of the data, then since there are no links, every word will be drawn at random.
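Both notes describe the same fallback in the word-drawing step; a sketch of how that might look (hypothetical names, not the actual implementation):

    import random

    def draw_next_word(prev, links, vocab):
        # Follow the bias links when they exist; when a word has no
        # links (or the bias has been cleared, so no word does),
        # fall back to a uniform random draw from the whole vocabulary.
        candidates = links.get(prev, [])
        return random.choice(candidates) if candidates else random.choice(list(vocab))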

2 Comments
  1. November 27, 2011 4:28 am

    The next word in the line’s sequence must begin with the same letter that ended the last word. For example: spam manners slap patterns.

    Oulipo-like constraints on the algorithmic details of n-gram generation, very nice.

    Reminds me of something I learned in grad school: some of the most interesting research questions are the ones you only come across once you start implementing something.

    This is one of two new ways to bias the n-gram data, the other being simple (naive) alliteration (which doesn’t take phonetic components into account).

    Simple alliteration actually isn’t that bad… you miss a few things like “k” sounding like hard “ch”, but it’s only once you get to assonance and rhyme that you really need some kind of phonetic analysis.

  2. November 27, 2011 8:14 am

    Thanks, edde. I was pretty pleased with myself over this one. Also, in the next release (which may be soon) you’ll be able to call n-gram routines from the scripts. I should really think about revamping the scripting language.
