
half a hundred / news April 2012

April 3, 2012
I heard 
your brothere he wrote 

low key

         his 
eyes loose 

     He didn't 
  let 
proof

        Written in these crowd 

   a 
   half 
a half a 
      half 

          a hundred 

stalking 
him solo

 

4/3/12 – character 5-grams, 30% chance of inserting a newline after a word, 50% chance of inserting 6 ± 5 initial spaces; generated with charNG. Source: Nas’s lyrics to “One Love”.
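
For the curious, here’s a minimal sketch, in Python rather than charNG itself, of the kind of pipeline described above: build a character 5-gram model, random-walk it, then re-lineate with the stated newline and indentation probabilities. All function and file names here are mine, not charNG’s.

    import random
    from collections import defaultdict

    def build_model(text, n=5):
        """Map each (n-1)-character context to the characters that follow it."""
        model = defaultdict(list)
        for i in range(len(text) - n + 1):
            gram = text[i:i + n]
            model[gram[:-1]].append(gram[-1])
        return model

    def generate(model, length=300, n=5):
        """Random-walk the model one character at a time."""
        out = random.choice(list(model.keys()))
        for _ in range(length):
            followers = model.get(out[-(n - 1):])
            if not followers:  # dead end: jump to a fresh random context
                out += ' ' + random.choice(list(model.keys()))
                continue
            out += random.choice(followers)
        return out

    def lineate(text, p_newline=0.3, p_indent=0.5):
        """Post-process: maybe break after each word, maybe indent each line 6 ± 5 spaces."""
        lines, line = [], []
        for word in text.split():
            line.append(word)
            if random.random() < p_newline:
                lines.append(' '.join(line))
                line = []
        if line:
            lines.append(' '.join(line))
        out = []
        for l in lines:
            pad = ' ' * (6 + random.randint(-5, 5)) if random.random() < p_indent else ''
            out.append(pad + l)
        return '\n'.join(out)

    # usage (hypothetical source file):
    # print(lineate(generate(build_model(open('one_love.txt').read()))))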

Hey folks! Besides the poem above, here are some fun news items:

  • killer article by Johanna Drucker about the shortcomings of conceptualism, cleverly appropriated and re-contextualized by a conceptualist!
  • JanusNode 3.08 is out! Now with lipograms – er, “Bökifications”! (man, those Canadians stick TIGHT)
  • Funkhouser’s Newark Review looking for “writing (all forms), video, sound, animation, multimedia, digital artwork, photojournalism, gaming, and software”
  • I agree with Adam Parrish (er, “@aparrish”): we oughta call our stuff Black Hat NLP. (Well, maybe Gray Hat NLP?) Coincidentally, I’ve been thinking of doing some topic modeling for generation.
  • Someone is running a request for poems made out of source code – so the poem has to compile/be interpreted. Presumably putting the poem in comments or as the argument of a “print” statement doesn’t count! You could come up with something cool based on interesting variable names, but I think there’s room for something innovative using Prolog – i.e., a poem that declares a poetic situation and also (when run) resolves it. Not really my thing, though. (A toy sketch of the variable-names idea follows this list.)
  • it’s never too soon to start thinking of Gnoetry Daily Volume 2, which I propose we start working on sometime September for dissemination around early December.
  • check out othermichael’s work-in-progress TextMunger generator! I’ve been trying to convince him to post here, but til then you can check out his stuff @XraysMonaLisa.
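
About that source-code-poem call: one way to satisfy the has-to-run rule without hiding the poem in comments or a print string is to let the identifiers and control flow carry the poem, so the program “resolves” its situation when executed (the Prolog version would declare the situation as facts and let the resolution engine perform it). A toy Python sketch, invented here purely for illustration:

    # a toy "code poem": the identifiers carry the poem, and it runs
    night = ["stars", "wind", "one last train"]
    morning = []

    while night:                     # the night gives up what it holds
        morning.append(night.pop())  # and morning receives it, reversed

    for memory in morning:
        print(memory)
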
12 Comments
  1. April 4, 2012 10:35 am

    The repetition-loop with randomized spacing is great.

    I can almost hear the crack-heads calling each to each.

  2. April 4, 2012 2:57 pm

    I can almost hear the crack-heads calling each to each.

    I do not think that they will call to me. (’cause they know I’m broke and can’t do nothing for them!)

    I’m thinking of removing the crackhead line; when generating from rap lyrics I try to remove things that people will too readily associate with that genre, to focus on the rest of the language.

    • April 18, 2012 11:02 am

      I _think_ I liked it in there.

      It’s from the source, it’s not typically “literary”, yes — it’s “too readily associate[d] with that genre” — but what’s wrong with that?
      By removing it, aren’t you bowdlerizing the resultant texts for your reader’s sensibilities?
      You’re already assaulting [to hyperbolize] them with machine-generated poems.
      Surely they can handle some human-generated genre-references.

      Editing is certainly acceptable, and editing to focus on “the rest of the language” is certainly acceptable. Probably all forms of editing are acceptable. But I think you’re selling the source short.

      If anything, “crackheads” are more accessible than “the computer that ate Shakespeare.”

      • April 20, 2012 3:15 am

        It’s from the source, it’s not typically “literary”, yes — it’s “too readily associate[d] with that genre” — but what’s wrong with that?

        I admire rappers like Nas, Rakim, and Biggie more than I admire any contemporary poets I’ve read, and I’m pretty sure those rappers will be studied in the coming decades more than any of the folks talked about on, say, “Harriet”. But I think the profanity and some of the topics the rappers address distract from the beauty of their word use. Using “non-profanity/non-thug-genre” as a constraint generates verse that highlights universal themes such as alienation, and encourages an entry into the primary material on its own terms.

        Editing is certainly acceptable, and editing to focus on “the rest of the language” is certainly acceptable. Probably all forms of editing are acceptable.

        Yeah, I’m all for editing. In the charNG poems I delimit selections with double returns. (so all “stanzas” are generated chunks that have been selected and ordered.)

        But in general I favor editing with notice but without apology. Never let an algorithm get in the way of a good poem, I always say.

  3. April 4, 2012 10:44 pm

    “A lipogram (from Greek leipogrammatos, “missing letter”) is a kind of constrained writing or word game consisting of writing paragraphs or longer works in which a particular letter or group of letters is _avoided_” (Wikipedia, emphasis added; I had to look it up). Bökification is the opposite: in ‘Eunoia’ Bök uses only words that _contain_ a given letter.

    Anyway, JanusNode trumps them both, since its new Bökification function can simultaneously use and avoid one or more specified letters (or sets of letters).

    Speaking of which, are you Gnoetry guys thinking much about what computers can do _better_ than humans? Lipogrammification [;)] and Bökification (and alliteration, assonanciation, and consonanciation, for that matter) are a pain for people but trivial for machines. I think automatic generation should focus on the low-hanging fruit: we should focus on getting machines to do what they can now do _better_ than humans (until they can do everything better than humans). What do you think?
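
    To make the “trivial for machines” point concrete, here is a minimal sketch of both constraints as word filters; this is my own toy code, not JanusNode’s implementation:

        def lipogram_ok(word, avoid):
            """Lipogram: the word must avoid every letter in `avoid`."""
            return not (set(word.lower()) & set(avoid))

        def bokify_ok(word, require):
            """Bökification (as defined above): the word must contain every letter in `require`."""
            return set(require) <= set(word.lower())

        words = ["union", "austral", "entry", "terms", "beauty"]
        print([w for w in words if lipogram_ok(w, "e")])  # no 'e': ['union', 'austral']
        print([w for w in words if bokify_ok(w, "e")])    # has 'e': ['entry', 'terms', 'beauty']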

    Thanks for announcing 3.08.

    Janus

    • April 18, 2012 2:45 pm

      >thinking much about what computers can do _better_ than humans?

      Count real fast. Which usually doesn’t interest me.
      Directly ingest electricity. Fascinating, but not a good long-term strategy.
      Do exactly what they’re told. Except we continue to speak different languages, unaware of our differences, stubbornly insisting the other is mistaken.

      If I never attempted to code what computers can NOT do _better_ than humans, then computers could never do it better.

      but

      Computers can’t love words the way I do.
      They can’t dream.
      They can’t wish to fling words like buckets of paint, to swirl them on their digital tongues, to fill their bellies* in their sweet, inky nectar.

      but

      attempt to mimic the creation of poetry?
      become a writer-manqué?
      misinterpret my feeble attempts at poetic instructions and come up with some never-before-seen text in a new formant**?

      oh, yeah! they’re gonna do that. we’ll stay up all night sharing electrons until our buffers are exhausted and our bandwidth runs out at the heels of our boots. we are SO gonna do that thing.

      computers are also much better at subject-verb agreement, and not switching metaphors mid-stream.

      * like the holy cow, I have 7 stomachs, one for each soul.
      ** this word may not resonate for you, but it keeps ringing in my ears. Phase IV.

      • April 18, 2012 2:53 pm

        And, again.

        Computers can do some things SOOO much better than humans — like create long palindromes.

        But that’s great, digital plastic perfection.

        I’m so much more interested in their failures, in the plastic melting and causing the punch-card operators to choke on the fumes, in sparks flying up from the blinking-light-panels when the leviathan is faced with “What is Love?”, in attempting to do something right and missing the mark spectacularly.

        In that lies beauty.

        Like a human drummer — the beauty is not in hitting each and every beat precisely, but in the leading and lagging beats.

        Like Tinguely’s machines — the art is not in the perfection of the mechanism, but in the imperfection, the surprising variances, the “failures.”

  4. April 4, 2012 10:55 pm

    P.S. Speaking of poems made of source code, and of Christian Bök, do you know about his Xenotext experiment? http://www.wired.com/magazine/2010/03/st_dnapoetry/

    Janus

  5. April 5, 2012 7:47 am

    “A lipogram (from Greek leipogrammatos, “missing letter”) is a kind of constrained writing or word game consisting of writing paragraphs or longer works in which a particular letter or group of letters is _avoided_” (Wikipedia, emphasis added; I had to look it up). Bökification is the opposite: in ‘Eunoia’ Bök uses only words that _contain_ a given letter.

    oohh… I wasn’t aware of that. But good ol’ Wikipedia calls “Eunoia” “an anthology of univocalics”, where univocalics are defined as “a type of lipogrammatic [sic] constrained writing that uses only a single vowel,” and they cite an example from 1890.

    Anyway, JanusNode trumps them both, since its new Bökification function can simultaneously use and avoid one or more specified letters (or sets of letters).

    Yah, sounds like JanusNode contains a full range of lipogram-like constraints. I ain’t hating on Bök anyways; I dig his twitter stream. (I mean to say: “They dig @christianbok.”) I guess I just have a preference for the long historical tradition over the relatively insignificant contemporary cliques and personalities. Or maybe… I’m just envious of Canadians!!

    Speaking of which, are you Gnoetry guys thinking much about what computers can do _better_ than humans? Lipogrammification [;)] and Bökification (and alliteration, assonanciation, and consonanciation, for that matter) are a pain for people, but trivial for machines. I think automatic generation should focus on the low-hanging fruit = we should focus on getting machines to do what they can now do _better_ than humans (until they can do everything better than humans). What do you think?

    Well, I don’t really think of myself as a Gnoetry guy (more like one of those critters that creeps up on another species’ nest and lays its own eggs there), but here’s my opinion… When it comes to comp-poetry I’ve been guided by the maxim “only do what you feel like doing.”

    I’ve thought about what you describe, and think it’d be pretty straightforward to write programs to automatically generate text that is constrained in various ways (lipograms, univocalics, palindromes, etc.) and maybe is n-grammed or grammar-based as well. But I figure that’s the sort of thing that any half-decent programmer can implement. You and I are highly trained people with a set of skills that relatively few others have; in principle, we should “do what only we can do.” In my case that’d be writing poetry generators that apply approaches from natural language processing like topic modeling, semantic role labeling, grammar induction, etc. (Or better yet, collaborative language learning, user modeling, and relational agency.)

    However, that doesn’t seem to be what I feel like doing, maybe because it seems too much like work! So instead I’m exploring the various generators I’ve developed so far… Sooner or later I’ll run out of ideas with them, and maybe then I’ll do the fancier ones.
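
    (For what it’s worth, the topic-modeling route is only a few lines with an off-the-shelf library. A minimal sketch using gensim’s LDA, with a toy corpus invented for illustration; for generation you’d sample words from a topic’s distribution:)

        from gensim import corpora, models

        # toy corpus: each "document" might be a stanza or a song
        docs = [["street", "dreams", "corner", "night"],
                ["love", "letter", "prison", "word"],
                ["night", "corner", "word", "dreams"]]

        dictionary = corpora.Dictionary(docs)
        bow = [dictionary.doc2bow(d) for d in docs]
        lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=10)

        print(lda.show_topic(0, topn=4))  # the 4 most probable words in topic 0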

    As another doctor once said, “These things are fun and fun is good!”

  6. April 5, 2012 1:11 pm

    As far as things that computers can do better than I can (as a poet), there is only one: randomize. I can assonate and alliterate far better than any algorithm. However, there are naive functions for this in IM that work with n-gram data: basically you can ask for words which begin with, end with, or contain a given substring, and IM will generate only those. The issue is that this hasn’t been rigged up to the parser yet; it (among other features) will be available in the next update. At any rate, I never waste time implementing something that I can do better myself, largely because I’m a so-so programmer and feel like the semantics end of the generation is my job.
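
    A minimal sketch of that kind of constrained selection (the names are mine, not IM’s actual functions): filter an n-gram model’s follower list by begins/ends/contains before sampling.

        import random

        def pick_word(followers, starts=None, ends=None, contains=None):
            """Sample the next word from the followers that satisfy the constraints."""
            pool = [w for w in followers
                    if (starts is None or w.startswith(starts))
                    and (ends is None or w.endswith(ends))
                    and (contains is None or contains in w)]
            return random.choice(pool) if pool else None

        # followers of some context in a toy n-gram model
        followers = ["solo", "stalking", "slow", "hollow"]
        print(pick_word(followers, contains="lo"))  # assonance by substring: solo/slow/hollow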

    However, given my background in language-oriented research, my opinion is that the best possible results via semantic algorithms can be derived from what JanusNode calls “hypernyms” and “hyponyms.” If these can be augmented with a learning AI, I think they’ll model human reasoning fairly well. The n-gram data structure can be used to model the ‘hierarchy’ from generalities to specifics, linking words in a categorical way. The more self-organizing this is, the better. It could have an interface that scans through words, and when it finds one it doesn’t know, it asks the user to help categorize it.

    Thus it would take advantage of the n-gram structure, without the naive linearity governing the associative process.
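
    For reference, WordNet already encodes exactly that kind of general-to-specific hierarchy. A small sketch with NLTK (assuming the wordnet corpus is downloaded), including the ask-the-user fallback described above:

        from nltk.corpus import wordnet as wn

        def hypernym_chain(word):
            """Walk from a word up its hypernym hierarchy, specific to general."""
            synsets = wn.synsets(word)
            if not synsets:
                # self-organizing fallback: ask the user to categorize unknown words
                return input("What kind of thing is '%s'? " % word)
            chain, node = [], synsets[0]
            while node.hypernyms():
                node = node.hypernyms()[0]
                chain.append(node.name())
            return chain

        print(hypernym_chain("poem"))  # ends at something like 'entity.n.01'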

    Thoughts?

  7. April 7, 2012 1:02 am

    I never waste time implementing something that I can do better myself, … feel like the semantics end of the generation is my job.

    I guess I agree with that, and it goes along with what Janus was saying about “focus on getting machines to do what they can now do _better_ than humans.”

    If these can be augmented with a learning AI I think they’ll model human reasoning fairly well.

    Not my specialty, but I think that’s a fairly tough challenge. I don’t think it’s easy to identify hypernym/hyponym relationships from text.
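
    (The classic starting point here is Hearst patterns: mining constructions like “X such as Y” for candidate hypernym/hyponym pairs. A crude regex sketch, noisy but illustrative:)

        import re

        # Hearst-style pattern: "<hypernym> such as <hyponym>"
        PATTERN = re.compile(r"(\w+)\s+such as\s+(\w+)")

        text = "He admires rappers such as Nas and poets such as Eliot."
        for hyper, hypo in PATTERN.findall(text):
            print("'%s' is a kind of '%s'" % (hypo, hyper))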

  8. April 7, 2012 8:59 am

    I think the AI would probably have to be trained to handle it.
