2 Lutz Fragments
EVERY DAY IS QUIET.
NOT EVERY STRANGER IS SILENT.
A LOOK IS FREE THEREFORE
A CASTLE IS QUIET.
NO TOWER IS LARGE.
A CHURCH IS NEAR.
A confession is hell.
Every rifle is a dead ancestor.
A joke is an algorithmic chaos.
Every youngster is Carl Jung.
A bathhouse is the real purpose of prayer.
Not every anarchy is your simple-minded model of reality.
A pelvis is a person living a life of quiet desperation.
Every feud is natural selection.
No brick is a Zen moment.
So according to Funkhouser (who knows his stuff way better than I do!), the first computer poetry generator was Theo Lutz's "Stochastische Texte" (presumably because Strachey's Love Letter algorithm isn't really poetry – which raises the question of whether Stochastische Texte really is poetry, but… never mind). Anyways, I checked out a description, watched an emulator chugging away in German, and figured I oughta implement it in JanusNode.
So according to Google Translate, Theo Lutz was a German professor who coded computer-generated poems way back in 1959, on a Z22 – presumably in assembly on punch cards, 'cause back then they didn't exactly have interpreted scripting languages doing automatic memory management, let alone interactive I/O. So, you know, when I implement his algorithm in one line of JanusNode code I feel like I'm missing the point. I mean, think of it this way. It's the '50s: you're in West Germany, global thermonuclear war is a real possibility, these weird computing machines come along, and you… decide to use them to generate poetry. That seems like a pretty good idea? Why did he do it, was it really a good idea, and what would the comparable sort of idea be today?
Reading the paper, I'm struck by how producing pseudorandom numbers and applying them to a set of possible choices is described as a task worth mentioning. I mean, this wasn't exactly Perl; you couldn't just use rand() and an array to do it. But the real contribution here wasn't in pseudorandom number generation or data structures.
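For the record, the entire "task worth mentioning" is a single library call today. A minimal Python sketch (the subject list is lifted from the fragments at the top of this post, not from Lutz's actual word field):

```python
import random

# A stand-in for one column of Lutz's word field: subjects
# taken from the generated fragments above.
subjects = ["DAY", "STRANGER", "LOOK", "CASTLE", "TOWER", "CHURCH"]

# What Lutz had to build by hand on the Z22 – a pseudorandom
# index into a stored word list – is now one call.
print(random.choice(subjects))
```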
This guy Lutz hung out with one Max Bense, who was apparently an interesting fellow full of awesome ideas:
[Bense] phrased a rational aesthetics, which defines the components of language – words, syllables, phonemes – as a statistical language repertoire, and which opposes literature that is based upon meaning. Conversely, Bense studied the concept of style, which he applied to mathematics – following Gottfried Wilhelm Leibniz’ Mathesis Universalis –, designing a universal markup language.
I mean that’s obvious now… now that you have terabytes of data, much of it language-related, and the need to either number-crunch it or manually annotate it. But back when you had “14 words of 38-bit RAM implemented as core memory” with “8192 word (38 KiB) magnetic drum storage”?!?
Anyway, back to Lutz:
It seems to be very significant that it is possible to change the underlying word quantity into a “word field” using an assigned probability matrix, and to require the machine to print only those sentences where a probability exists between the subject and the predicate which exceeds a certain value. In this way it is possible to produce a text which is “meaningful” in relation to the underlying matrix.
Such a rectangular matrix contains e.g. the so-called transition probability of subject m to predicate n at point (m, n) i.e. this is a correlation number between these two constituent parts of a sentence. If one extends the program via a super program so that this is capable of increasing the transition possibilities between subject and predicate in those sentences found to be “meaningful”, and of reducing other probabilities in accordance with the mathematical connection, then the machine has “learned” in a certain way: It prefers certain subject/object combinations during the course of time.
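That "super program" reads like a crude reinforcement scheme. Here's a hypothetical Python sketch of what Lutz describes – the threshold, learning rate, and update rule are my own guesses at how one might fill in the details, not anything from the paper:

```python
import random

subjects = ["CASTLE", "STRANGER", "CHURCH"]
predicates = ["QUIET", "SILENT", "NEAR"]

# P[m][n]: transition probability from subject m to predicate n.
# Start uniform; these values are purely illustrative.
P = [[1 / len(predicates)] * len(predicates) for _ in subjects]

THRESHOLD = 0.3   # print only pairs whose probability exceeds this
LEARN_RATE = 0.1  # how strongly a "meaningful" pair is reinforced

def reinforce(m, n):
    """Raise P[m][n], then renormalize the row so the other
    predicates for this subject lose probability mass – the
    'reducing other probabilities' part of Lutz's description."""
    P[m][n] += LEARN_RATE
    row_sum = sum(P[m])
    P[m] = [p / row_sum for p in P[m]]

# Repeatedly judge one pair "meaningful" and reinforce it.
for _ in range(10):
    reinforce(0, 0)  # CASTLE / QUIET

# The machine has "learned": only pairs above threshold survive.
m = 0
for n, pred in enumerate(predicates):
    if P[m][n] > THRESHOLD:
        print(f"A {subjects[m]} IS {pred}.")
```

After ten reinforcements the CASTLE/QUIET entry dominates its row, so only that sentence prints – the machine "prefers certain subject/object combinations during the course of time," exactly as the quote says.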
Great idea, pal. I mean… I just need to think about that a while longer.
Back to the generation algorithm. As you can probably tell, it’s a template:
do the following:
    print a Logical Operator
    print a Subject
    print "is"
    print a Predicate
    print a Logical Constant
while the Logical Constant is not "."
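The template above fits in a few lines of modern code. A Python sketch (not the one-line JanusNode version; the word lists are pulled from the fragments at the top of this post):

```python
import random

# Word lists reconstructed from the generated fragments above;
# Lutz's real lists came from Kafka's "Das Schloss".
logical_operators = ["EVERY", "NOT EVERY", "A", "NO"]
subjects = ["DAY", "STRANGER", "LOOK", "CASTLE", "TOWER", "CHURCH"]
predicates = ["QUIET", "SILENT", "FREE", "LARGE", "NEAR"]
logical_constants = [".", " AND", " THEREFORE"]

def lutz_sentence():
    """Chain clauses, each ending in a logical constant,
    until the chosen constant is a period."""
    parts = []
    while True:
        clause = (f"{random.choice(logical_operators)} "
                  f"{random.choice(subjects)} IS "
                  f"{random.choice(predicates)}")
        constant = random.choice(logical_constants)
        parts.append(clause + constant)
        if constant == ".":
            return " ".join(parts)

print(lutz_sentence())
```

On average this emits three clauses per sentence, since each constant has a one-in-three chance of being the period.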
Apparently Lutz used words from a Kafka story. The first fragment generated above uses this approach, with Kafka words taken from the paper. The second fragment above uses words from other sources in JanusNode. (Some of those utterances are DEEP, btw.)
There are a couple things I don’t like about my implementation. JanusNode automatically figures out when to say “an” vs “a”, but it doesn’t capitalize, so you get things like:
An EYE IS NEW.
It’s rare, though. It’s all good.
Also I got lazy with the Logical Constants; I got rid of them for the Lutz Modern variations, and didn’t concatenate lines in the Lutz Kafka version. It’s all good.
Anyways, I’ve added my Lutz implementation to my set of JanusNode files for your downloading convenience. If you just want the Lutz files, they’re:
TextDNA/Lutz Kafka
TextDNA/Lutz Modern
BrainFood/e_lutz_logicalConstants
BrainFood/e_lutz_logicalOperators
BrainFood/e_lutz_predicates
BrainFood/e_lutz_subjects
BrainFood/e_lutzModern_logicalOperators
But I still don’t know what to make of it all. Lutz, why did you do it? What is the equivalent thing I should do? Lutz, o Lutz, why didn’t you have a blog?
It is to be hoped that the distrust of some more traditionally minded philologists towards the achievements of modern technology will soon make way for widespread and fruitful co-operation.
– Theo Lutz, 1959
Oh Lutz you make me want to cry…
p.s. looks like there’s a new Mac version of JanusNode out! Janus needs an email list or something…