As a fact-checker, I can only do so much. An opinion piece came across my desk the other day saying that “Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it.” The resulting superintelligent AI, author Tamlyn Hunt wrote, “will be able to run circles around programmers and any other human by manipulating humans to do its will.” But then Hunt argued that such an AI will be dangerous regardless of whether it’s conscious, since “a nuclear bomb can kill millions without any consciousness whatsoever.”
I flagged this nuclear-bomb analogy as insufficient. If the concern is that the AI will manipulate humans and make them do the AI’s “will,” I wanted some argument as to why consciousness isn’t necessary for those things. A nuke is dangerous because people might detonate it, deliberately or accidentally, but the bomb has no capacity to outwit people into making it explode. Still, as I acknowledged, Hunt’s point could be defended as opinion and speculation. My pushback prompted a rewrite about how an AI could cause nukes to explode. I would’ve liked to hear more about why consciousness isn’t the issue, but my role as fact-checker is to guard against factual errors and omissions, not to ensure an argument is one that I find compelling.
I’m of two minds about the “existential threat” that AI has increasingly been described as posing since ChatGPT was unveiled late last year. I’m impressed by what such chatbots can do, recognize such advances as exceeding expectations I held not long ago, and stipulate that various troubling uses of AI are occurring or in prospect. Still, before embracing a concern about AIs wresting control of the planet from humans, I’d like some more-extensive scenario-building as to why the AIs would do this; whether they’d be motivated by power and resource-control, and if so why the absence of biological drives such as reproduction is no hindrance; and whether consciousness really is unnecessary for such an AI uprising, and if not, how such artificial sentience might be instilled or arise.
As one aid to thinking about such matters, imagine that you’re offered a promotion, a raise, and numerous dinner dates with glamorous romantic partners. The only catch is that you’ll never experience any of this. To receive the benefits, you’re required to be in a dreamless coma for the rest of your life. You’ll probably consider the gains to be unreal, the offer entirely unacceptable. This is a bizarre scenario, but arguably less bizarre than one in which you jump at the chance for a raise, etc., even while already unconscious. Our capacity to experience things seems to be integral to what we want.
No one at present seems to have any idea how to program consciousness into a computer, nor how it might arise as an “emergent” property. This is probably good, as humanity doesn’t seem anywhere near ready to grapple with the moral implications of creating a conscious AI. For one thing, such an entity would presumably have the capacity to suffer, perhaps in extreme or unfixable or undetectable ways. Turning the machine off or replacing it with an upgraded version would raise ethical issues that aren’t applicable to dealing with a non-conscious entity. And if such an AI existed, it’d be much easier to imagine it plotting its own course in the world, regardless of whatever safeguards humans had sought to build into its programming.
At present, human understanding of consciousness remains sketchy. There’s no consensus among scientists or philosophers as to how far consciousness extends in the animal world (or beyond, as some have attributed it to plants or even rocks). There are no ready answers as to what enables human consciousness to exist, when and how it arose, how to define it, or how to identify when it’s present. Given all that, the prospect of creating an artificial analogue of it, either deliberately or accidentally, seems remote. Whether it’s even possible is an open question.
I recently suggested a “Pazuzu strategy” of developing AIs to counter negative effects of other AIs; the demon portrayed as a malevolent possessor in The Exorcist served in mythology as a protector against other demonic forces. The use of a demon analogy came naturally after I’d learned that negative-sum incentives, including in AI development, had come to be labeled “Moloch,” after a malevolent entity demanding child sacrifice. That powerful technological forces have been unleashed in the 21st century is undeniable, but dark scenarios from sci-fi and fantasy still need to be assessed with a critical eye.
—Kenneth Silber is author of In DeWitt’s Footsteps: Seeing History on the Erie Canal and posts at Post.News.