LITERARY THEORY FOR ROBOTS: How Computers Learned to Write, by Dennis Yi Tenen
In “Literary Theory for Robots,” Dennis Yi Tenen’s playful new book about artificial intelligence and how computers learned to write, one of his strongest examples comes in the form of a tiny mistake.
Tenen connects modern chatbots, pulp-fiction plot generators, old-fashioned dictionaries and medieval prophecy wheels. Both the utopians (robots will save us!) and the fatalists (robots will destroy us!) are doing it wrong, he argues. There will always be an irreducibly human aspect to language and learning: a critical core of meaning that emerges not just from syntax but from experience. Without it, you just hear the chatter of parrots, who, “according to Descartes in his ‘Mediations,’ just repeated without understanding,” Tenen writes.
But Descartes did not write the “Mediations.” Tenen must have meant “Meditations”; the missing “t” will escape any spell checker, because both are legitimate words. (The book’s index lists the title correctly.) This tiny typo does nothing to undermine Tenen’s argument; if anything, it strengthens the case he wants to make. Machines are getting stronger and smarter, but we still decide what makes sense. A human wrote this book. And, despite the robots in the title, it’s meant for other humans to read.
Tenen, now a professor of English and comparative literature at Columbia, was a software engineer at Microsoft. He uses his disparate skill sets in a book that’s surprising, funny, and resolutely fearless, even as it sneakily asks big questions about art, intelligence, technology, and the future of work. I suspect the small size of the book – it’s under 160 pages – is part of the point. Humans are not tireless machines, relentlessly consuming vast volumes on vast matters. Tenen has figured out how to present a web of complex ideas on a human scale.
To that end, he tells stories, beginning with the 14th-century Arab scholar Ibn Khaldun, who recounted the use of the prophecy wheel, and ending with a chapter on the 20th-century Russian mathematician Andrey Markov, whose probability analysis of letter sequences in Pushkin’s “Eugene Onegin” was a fundamental building block of generative artificial intelligence. (Regular players of the Wordle game sense such possibilities all the time.) Tenen writes knowledgeably about the technological obstacles that hindered previous models of computer learning, before “the raw power required to edit most anything published in the English language” was so readily available. He urges us to be alert. He also urges us not to panic.
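Markov’s idea is simple enough to sketch in a few lines of Python. This toy illustration (mine, not the book’s) counts how often each letter follows another in a training text, then generates new text by repeatedly sampling the next letter from those counts — the same letter-sequence statistics Markov tallied by hand in “Eugene Onegin”:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count how often each letter follows each other letter."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, rng):
    """Walk the chain: draw each next letter with probability
    proportional to how often it followed the current one."""
    out = [start]
    for _ in range(length - 1):
        options = counts.get(out[-1])
        if not options:  # dead end: no letter ever followed this one
            break
        letters, weights = zip(*options.items())
        out.append(rng.choices(letters, weights=weights)[0])
    return "".join(out)

model = train_bigrams("the quick brown fox jumps over the lazy dog")
print(generate(model, "t", 20, random.Random(42)))
```

The output is grammatical-looking gibberish — which is precisely the gap between statistical pattern and meaning that Tenen’s book dwells on.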
“Intelligence evolves along a spectrum, ranging from ‘partial assistance’ to ‘full automation,'” Tenen writes, offering the example of the automatic transmission in a car. For drivers accustomed to shifting gears by hand, an automatic in the 1960s must have seemed remarkable. It worked by automating key decisions: downshifting on hills, sending less power to the wheels in bad weather, removing the chance to stall or grind your gears. It was “artificially intelligent,” even if no one used those words for it. American drivers now take its magic for granted. It has been demystified.
As for the current debates about artificial intelligence, this book tries to demystify them too. Instead of talking about AI as having a mind of its own, Tenen talks about the collective work that went into building it. “We take a cognitive-linguistic shortcut by condensing and attributing agency to the technology itself,” he writes. “It’s easier to say, ‘The phone completes my messages,’ instead of ‘The engineering team behind the autocomplete tool, writing software based on dozens of research papers, completes my messages.'”
So our common metaphors for artificial intelligence are misleading. Tenen says we should be “suspicious of all metaphors that attribute familiar human cognitive aspects to artificial intelligence. The machine thinks, speaks, explains, understands, writes, feels, etc., only by analogy.” This is why so much of his book revolves around linguistic issues. Language allows us to communicate and understand each other. But it also allows for deception and misunderstanding. Tenen wants us to “unravel the metaphor” of artificial intelligence, a proposition that might seem like an English professor’s hobby at first glance but turns out to be perfectly apt. A metaphor that is too general can make us complacent. Our sense of possibility is shaped by the metaphors we choose.
Text generators, whether in the form of 21st-century chatbots or 14th-century “letter magic,” have always faced the problem of “external validation,” Tenen writes. “Procedurally generated text may make grammatical sense, but it may not always make conceptual sense.” Take Noam Chomsky’s famous example: “Colorless green ideas sleep furiously.” Anyone who has lived in the physical world knows that this syntactically flawless sentence is nonsense. That is why Tenen keeps returning to the importance of “lived experience”: it is what equips us to make that judgment.
Tenen does not deny that AI threatens much of what we call “knowledge work.” Nor does he deny that automating something devalues it as well. But he also puts it differently: “Automation lowers barriers to entry, increasing the supply of goods for everyone.” Learning is cheaper now, and so having a large vocabulary or repertoire of memorized facts is no longer the competitive advantage it once was. “Today’s writers and scholars can challenge themselves with more creative tasks,” he suggests. “Works that are tedious have been outsourced to machines.”
I take his point, even if that prospect still strikes me as bleak, with an ever-smaller slice of the population doing challenging, creative work while a once-thriving ecosystem crumbles. But Tenen also argues that we, as social beings, have power as long as we allow ourselves to accept the responsibility that comes with it. “Individual AIs are a real danger given their ability to pool power in pursuit of a goal,” he admits. But the real danger comes “from our inability to hold technology makers accountable for their actions.” What if someone wanted to attach a jet engine to a car and see how it fared on the streets of a busy city? Tenen says the answer is obvious: “Don’t do that.”
Why “Don’t do that” seems easy in one field and not in another is a question that requires more thought, more precision, more control—all qualities that are pushed aside when we bow to artificial intelligence, treating technology as a singular god instead of a multitude of machines built by many people. Tenen leads by example, bringing his human intelligence to artificial intelligence. Reflecting on our collective habits of thought, he offers a meditation of his own.
LITERARY THEORY FOR ROBOTS: How Computers Learned to Write | By Dennis Yi Tenen | Norton | 158 pp. | $22