
Sound Poem 8

April 4, 2010

tired skin sliced up for their crimes
shout the modern age weary sometimes

you you you
find a fine the youth
you about you
say to lose every move

the young hearts fail young men the problem broke day
to lies lies lies lies within clay
so lose the betray
lights shined like today
snort decay

April 1-3, 2010, supervised generation with type-based bigrams and unigram rhymes using stochastic beam search and phonemic evaluation. Source texts: Joy Division, Minor Threat, Nine Inch Nails, and Suicidal Tendencies lyrics. Generator: ePoGeeS

Recently I’ve been looking through the bigram models to pick tokens, rather than sticking with randomly suggested bigrams (though “you you you” and “lies lies lies lies” showed up during random generation!).

One thing I’ve noticed is that the token most likely to be picked next is often pretty boring. For example,
after “up” the most frequently seen next token is “my”
after “was” the most frequently seen next token is “something”
to start a line, the most frequently seen token is “I”
and so on, while loaded words like “integrity,” “corrupted,” “overdosed,” etc. all have lower frequency values. Of course you don’t want too many loaded words in a row, and the probabilities do make loaded words show up often enough, but it’s still perfectly possible to get bland lines like:
“I was up in a lot of a little in a way”
which then need to be weeded out by a human (although it does have some nice approximant phonemes!). In other words, there is no automatic, explicit judgment of the “blandness” versus the “emotional loadedness” of the words in a line.
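
To make that concrete, here is a minimal sketch of the kind of bigram table involved; the toy corpus and the sampling helper are invented for illustration and are not the ePoGeeS implementation.

```python
from collections import Counter, defaultdict
import random

# Toy stand-in corpus; the real model draws on Joy Division, Minor Threat,
# Nine Inch Nails, and Suicidal Tendencies lyrics.
corpus = [
    "i was up in my head",
    "i was something to lose",
    "picked up my tired skin",
    "up my sleeve a little lie",
    "integrity corrupted overdosed on decay",
]

# Bigram counts: for each token, how often each next token follows it.
bigrams = defaultdict(Counter)
for line in corpus:
    tokens = ["<s>"] + line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

# Always taking the most frequent continuation tends toward the bland words...
print(bigrams["up"].most_common(1))    # [('my', 2)]
print(bigrams["<s>"].most_common(1))   # [('i', 2)]

# ...whereas sampling in proportion to frequency still lets the rarer,
# more loaded words ("integrity", "corrupted", ...) through now and then.
def sample_next(prev):
    tokens, counts = zip(*bigrams[prev].items())  # assumes prev has been seen
    return random.choices(tokens, weights=counts, k=1)[0]
```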

Another thing this has highlighted for me is that standard n-gram models drawing from multiple sources don’t distinguish between those sources. For example, in the first line of this poem, the bigram “tired skin” comes from a Nine Inch Nails lyric, but “skin sliced” comes from a Suicidal Tendencies lyric. For those familiar with both lyrics, the resulting phrase “tired skin sliced…” is at once recognizable and novel, and therefore kinda cool. But when a typical n-gram model looks at the word “skin” and decides what to choose next, the fact that “sliced” comes from a different source than “tired” is never part of the equation.
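
One way to make this visible (a speculative sketch, not a feature of any of the tools named here, and with invented stand-in lines rather than the actual lyrics) is to tag every bigram with the source text it was observed in, so a generator or a human editor can tell when the junction between two adjacent bigrams crosses sources:

```python
from collections import defaultdict

# Invented stand-in lines; the point is only that every line carries a source tag.
sources = {
    "nine_inch_nails": ["my tired skin is all i own"],
    "suicidal_tendencies": ["skin sliced thin for their crimes"],
}

# Map each bigram to the set of sources it was observed in.
bigram_sources = defaultdict(set)
for name, lines in sources.items():
    for line in lines:
        tokens = line.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            bigram_sources[(prev, nxt)].add(name)

def junction_crosses_sources(w1, w2, w3):
    """True when the bigrams (w1, w2) and (w2, w3) share no source."""
    a = bigram_sources.get((w1, w2), set())
    b = bigram_sources.get((w2, w3), set())
    return bool(a) and bool(b) and not (a & b)

# "tired skin" appears only in one source and "skin sliced" only in the other,
# so the junction at "skin" mixes sources.
print(junction_crosses_sources("tired", "skin", "sliced"))  # True
```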

I think the issue has to do with the relation between author, source texts, and tools in supervised poetry generation. Typical n-grams are blind to everything except the probability of the next token; when generating poetry, anything more sophisticated is left to the judgment of the author, although there’s a level of authorship implicit in the sorts of judgments the tools facilitate. (For example, Gnoetry facilitates the author’s judgments about how much probability weight each source text gets during n-gram generation, and ePoGeeS facilitates the author’s judgments about the types of phonemes that will be selected for during generation.) If “what is the ideal relation between author, source texts, and tools?” really is the question, then a possible first answer is: whatever the poet/programmer chooses, with the correctness of that choice determined by the quality of the poetry it produces.
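
As a rough illustration of those two kinds of judgment, the sketch below invents a per-source weight (the sort of knob Gnoetry exposes) and a crude letter-based stand-in for phonemic preference (the sort of knob ePoGeeS exposes); the names, weights, and heuristic are hypothetical and don’t reflect either tool’s actual interface.

```python
import random

# Hypothetical per-source weights: the author's judgment about how much
# each source text should contribute to generation.
source_weights = {
    "joy_division": 2.0,
    "minor_threat": 1.0,
    "nine_inch_nails": 1.5,
    "suicidal_tendencies": 0.5,
}

def pick_source():
    names, weights = zip(*source_weights.items())
    return random.choices(names, weights=weights, k=1)[0]

# A crude letter-based stand-in for phonemic evaluation: prefer candidates
# spelled with more approximant-like letters (l, r, w, y).
APPROXIMANT_LETTERS = set("lrwy")

def approximant_score(word):
    return sum(ch in APPROXIMANT_LETTERS for ch in word)

candidates = ["something", "decay", "integrity"]
print(pick_source())                           # weighted choice of source text
print(max(candidates, key=approximant_score))  # "integrity" under this heuristic
```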
