ChatGPT bored me after a few days. I’d found the program enthralling initially, but as a toy. I’d given it various odd requests, including one to write a song about Splice Today in the style of Bob Dylan. I’d asked it a number of times how many fingers von Stauffenberg had, getting various answers, most of them incorrect; one answer included the knowledge that the anti-Hitler conspirator had lost fingers in combat, yet still presumed he later had all his fingers.
Virginia Postrel engaged the program to write poetry about Bill Gates, getting dubious results. As I noted, much alarm about AI focuses on apocalyptic scenarios, such as that it will destroy humanity by converting all available matter into paper clips (a long-standing example of how a program set up to optimize some objective could go wrong). More mundane, but more imminent, is the prospect that the technology will undermine intellectual and artistic standards by churning out garbage material that superficially resembles intelligent or creative work.
Set against such dangers and failings, the technology is bringing formidable benefits, such as those listed in a recent Scientific American piece. Its advances also enable countermeasures to some of its own risks; for example, AI can be used to detect whether a piece of writing was illicitly AI-generated. In any case, we’re at an early stage of figuring out the alignment problem: how, and whether it’s possible, to keep AI aligned with human interests and objectives. A recent piece at Quanta notes “there’s something of an AI culture war, with one side more worried about these current risks than what they see as unrealistic techno-futurism, and the other side considering current problems less urgent than the potential catastrophic risks posed by superintelligent AI.”
It’s odd, anyway, how rapidly human perceptions of risk can shift. Elon Musk’s a long-standing worrier about the dangers of AI, having expressed the concern that “ChatGPT is scary good. We are not far from dangerously strong AI.” He’s also been a proponent of settling Mars, in part on the grounds that Earth’s environment is in peril. Yet he recently tweeted that “The woke mind virus is either defeated or nothing else matters,” which strikes me as a strange priority amid the threats he’s mentioned (and many other problems he hasn’t).
Musk’s attention-getting antics on Twitter, including his self-reinvention as a right-wing culture warrior, his transformation of the platform, his nonsense about Fauci, and his vendettas against critics who’re supposedly putting him at risk of “assassination,” have inspired me to move elsewhere. I opened an account at Mastodon, but find its multiple servers a confusing complication, and its graphics boring. I’m happier, for now, posting at Post.News, a service that’s still in beta but offers nicer graphics and an easier time making connections with other users.
A downside of ChatGPT and other chatbots is that they could flood social media with what looks like a public movement on behalf of some cause or opinion but is in fact just numerous variations of a message that some individual or group wants amplified. In the recent past, such operations required something like the Internet Research Agency, Russia’s troll factory, but increasing automation will enable them on a bigger scale and at greater speed. Wherever you’re posting or reading, give some thought to whether it’s a human you’re interacting with.
Whether generated by humans or AI, science and technology writing includes vast quantities of misinformation and hype. “Physicists Teleport Bullshit Through Wormhole!” is the apt headline of a cogent post by science writer John Horgan dissecting recent claims that a quantum-computing experiment created a portal in space and time, as opposed to a simulation of one.
—Kenneth Silber is author of In DeWitt’s Footsteps: Seeing History on the Erie Canal and currently posts at Post.News.