In Thomas Pynchon’s novel Gravity’s Rainbow, the ghost of Walther Rathenau appears to a group of Nazis conducting a séance. Rathenau, a German Jewish industrialist and statesman who rose to prominence during WWI, dodges their questions. (Ghosts, it seems, sometimes have agendas of their own.) Instead, Rathenau delivers a lecture on the chemical properties of coal tar, a miraculous substance from which manmade solvents, plastics, fertilizers, and fuels are all derived.
As you might expect from a speaker who’s dead, Rathenau soon heads into bizarre, even mystical territory. He invokes “a thousand different molecules,” all of them manufactured from coal tar. “This is the sign of revealing,” he says. “Of unfolding.” Then he concludes, abruptly, with a haunting warning: “You must ask two questions. First, what is the real nature of synthesis? And then: what is the real nature of control?”
These questions have never been more pressing than right now, at the dawn of the AI revolution. Artificial intelligence has become indispensable to “knowledge work” around the globe, at blinding speed. Professional translators defer to it. Modern novelists crib from it, even award-winning ones like Sheila Heti. Heti published “According to Alice” in The New Yorker after goosing a chatbot on subjects like childhood, progeniture, and the rules people ought to live by. (Alice’s first rule, “[people] can only go through doors,” gives you only a faint idea of how cryptic their dialogue rapidly became.)
Email gets handed off to AI routinely now; even personal letters are lousy with it. I have a friend who still won’t speak to her father after he wrote her, using AI, to apologize for his failings as a man. I’m a high school English teacher. When I’m not stuck in some training seminar about “transforming education” using AI, I’m combing through my students’ essays, looking for traces of it. Chatbots run interference for call centers, ghost-write clickbait articles, and churn out newsletter spam. They create the restaurant descriptions on Google Maps. As A. O. Scott put the matter, writing for The New York Times, “the most talked-about actual bots among us…are writers.” He brings this into sharper focus shortly afterwards: “The main proof of concept for ChatGPT and other similar programs,” Scott observes, “has been a flood of words.”
With that, Scott puts his finger on the problem. The real issue at stake is not whether an AI can talk and write at the level of a human expert. It can’t. The issue is whether a big helping of human expertise is really necessary when a flood of instant, mediocre prose will get the same job done. AI is good enough, its sales force promises, while emptying buckets of flavorless prose into every corner of our lives. “Don’t make perfect the enemy of done,” they remind us. Half the time, we’ll jump in ourselves, hastening to agree before they’re even finished making the pitch.
Lee Child, writing for The Economist, and ostensibly reviewing Fourteen Days—a modern update of Boccaccio’s Decameron, written by a bunch of authors during peak Covid—enthused that the novel-by-committee feels “like sitting by a campfire.” The “conceit” of Fourteen Days, which is about trying to pass the time during quarantine, “is helpful, given the number of collaborators. The book’s plot is simple.” Child follows that dubious upvote with another one, tossed in ChatGPT’s direction like a party favor: “AI services…have started to become co-authors, too. A more collective approach to authorship is on the rise.”
That’s how warmly professional critics already regard computer-generated novels—which are rare, Olympian feats compared to something like ad copy, which is supposed to be anemic. Forbes, on a November day in 2023, dutifully allocated a whole column to marketing CEO Jodi Amendola, to let her assert that “writing—good writing—is hard work.” What ChatGPT can do, by comparison, “is ho-hum.” “I’m not ready to give up on human writers,” Amendola cries, like she’s the last humanist on Earth. The magazine publishing her, however, hedged its bets just two weeks later. Forbes published “Humans Prefer AI-Generated Content,” by Roger Dooley. Citing a single study by researchers at MIT and UC Berkeley, Dooley claims that “AI can produce a result at least as good as the average professional human writer.” This is because, as Dooley comments with a shrug, “most marketing writing is fairly mundane.”
He’s right. Not only that, but when I read the MIT study firsthand, I vastly preferred the computer’s writing. “Transform your life by choosing healthier alternatives to junk food!” ChatGPT wrote cheerfully. Its human counterpart, meanwhile, only proffered this: “Real food tastes better.” Now there’s a tagline as sullen, and depressing, as it is untrue.
ChatGPT wanted people to get “prepared for the unexpected” by purchasing new emergency travel kits, ones “designed for two people” and perfect for some long, dangerous honeymoon. Yet human writers, aware that most Americans aren’t emergency-ready, still couldn’t do better than this highly regrettable yawn: “Gain peace of mind knowing you are taking your emergency preparedness to the next level.”
But the dynamics here are more complex, and more dangerous, than anyone who simply pits a computer-generated sentence against a human one would imagine. First, the study’s parameters were bogus. This wasn’t a real-world challenge. It was an exercise in pure abstraction: forcing humans to create copy for products, and campaigns, that don’t exist, then judging their work against a computer’s merciless adequacy. To a human, hypotheticals are dispiriting. There’s no glory. No audience. No laurels to win. Everyone who wrote for this farce knew it was just a drill.
To an AI, on the other hand, everything is hypothetical. To a large language model, there’s no such thing as a “real world” where, sooner or later, untruths sputter out. There’s only the gray automatism of mining some plausible novelty from the silt of things already said. Look at the character this study invents: the “average professional human writer,” a phantom who only exists in the pitiless imagination of employers. Look, they argue. A computer can do the same thing for less.
Then consider the average response of the study’s front-end participants. For the sum of $1.50 per sample, these poor saps rated each phony advertisement and laughable PSA. But, lo! How they adored their work! The average satisfaction rating for an ad hawking a fictional emergency kit was 5.29 out of 7. That’s just slightly north of “somewhat interested,” but well short of actually “interested.” The difference between the AI’s ratings and the ones human writers earned was approximately 0.3 on a seven-point scale—a difference of less than five percent. One could say this means that computers are, without a doubt, adequate to perform menial writing jobs. On the other hand, perhaps most people, when asked to evaluate something that doesn’t immediately drive them nuts, will politely respond that they’re “somewhat interested” in it. Even if it’s a plastic mounted trout that sings Christmas carols.
The authors of the study found that people didn’t care, either way, when they learned an AI had written the ads and campaigns they found so inoffensive. Of course not! Who can stay mad at a computer writing hypothetical ads? Even if the products had been real, and the ads debuted at a local mall, nobody would’ve complained. We don’t expect to hear anything really groundbreaking from an advertisement for an emergency preparedness kit. We put up with crap like that unthinkingly, because we see, by some estimates, 10,000 advertisements every single day. It’s like oxygen, or iocane powder. The writing seems odorless, colorless. It dissolves instantly in liquid. That is the true nature of synthesis: a world where nobody expects “the industry’s” latest onslaught of chipper malarkey to sound any different than their last one. We’d all probably be, on some unconscious level, even more miserable than usual if it did.
In fact, we’re a step beyond complacency. We’re “somewhat interested.” We’re willing to award such efforts a positive rating. When a bunch of weirdos, like the guys who became the German band Kraftwerk, or the French band Daft Punk, start imitating the dance moves and facial expressions of robots, it seems either kitschy or high-concept. But that’s a serious misunderstanding. Kraftwerk and Daft Punk aren’t imitating robots from the future. They’re just a stripped-down mirror held up to the ugliness of our own present day. We’re the ones trying to keep things at the absolute zero of simplified plots, average professional writing, and “mundane” marketing schemes. The computers aren’t there to replace us. They’re around to remind us, in case we forget, that we could always be doing a slightly better job approximating the nothingness of a cog. And that, in the end, is the real nature of control.