Almost five years ago in this blog, I wrote that I had made my peace with the possibility—a near inevitability, some argue—that super-intelligent machines will eventually replace human beings atop Earth’s food chain. (“Computer Says No—or Why I Am Fine with the Robot Uprising” – January 30, 2019.)
Since then, the odds of that outcome seem only to have increased, as has the extent of panicked discussion about it in the general culture.
I’m not saying I’m looking forward to human obsolescence, or that we shouldn’t do whatever we can to try to forestall it; I’m already annoyed by the self-checkout at the supermarket telling me to remove my unscanned items from the fucking bagging area. I’m just saying that it may be unstoppable, and we might want to resign ourselves to the replacement of carbon-based life by silicon-based life as a natural step in the evolution of the planet.
That’s how I can sleep at night. (That and a nightly cocktail of Everclear grain alcohol and Hawaiian Punch called a Waimea Closeout.)
For me, a given in that scenario has always been that along the way these super-intelligent machines will have achieved human-like “consciousness” as it is conventionally defined. But lately I’ve begun to wonder if that will be the case….and more to the point, whether it will even matter.
THE LOVELY PLUMAGE OF THE NORWEGIAN BLUE
To contemplate this question, we need not solve Chalmers’ formulation of “the hard problem”: the enduring mystery of why and how human beings experience consciousness. If you’re inclined to take a crack at that, feel free to enroll in a doctoral program and spend eight years getting a PhD in philosophy of mind. Even the very top people in that field, like Chalmers himself, Daniel Dennett, John Searle, et al., cannot adequately answer the question, or even agree on the contours of the argument.
So let’s skip it. Whatever we conceive consciousness to be, or however we choose to define it, there is serious reason to believe that a super-intelligent machine could someday achieve it.
But that is not to say that a “large language model” (LLM) version of artificial intelligence like ChatGPT is on that path.
Noam Chomsky—in world-famous linguist mode, not world-famous foreign policy thinker mode—is among those who have made that argument against LLMs, describing programs like OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney as merely a very, very sophisticated form of “autocomplete.”
Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs—such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence—that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
That day may come, but its dawn is not yet breaking….
Chomsky argues that although LLMs can be useful, “we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language.”
Last month in The New Yorker, the artist Angie Wang had a lovely graphic essay called “Is My Toddler a Stochastic Parrot?” that addressed that very issue. The term was coined in an academic paper from 2021 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell, referring to a system for “haphazardly stitching together sequences of linguistic forms….according to probabilistic information about how they combine, but without any reference to meaning.” In other words, Wang seconds Chomsky: LLMs do not truly understand or make sense of the content they generate.
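If you want to see a stochastic parrot in its crudest possible form, here’s a minimal sketch in Python—a toy bigram Markov chain, nothing like the neural networks inside a real LLM, with a corpus and function names invented purely for illustration. It stitches words together according to the probabilities of how they co-occur in its training text, with no reference whatsoever to meaning:

```python
# Toy "stochastic parrot": stitches words together purely by the
# observed probabilities of which word follows which, meaning-free.
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Record, for each word, the words that follow it (with repetition)."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def parrot(followers: dict, start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break  # dead end: no observed continuation
        # random.choice over the repeat-laden list samples in
        # proportion to observed frequency
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

corpus = "the parrot repeats the words the parrot has heard before"
model = train_bigrams(corpus)
print(parrot(model, "the"))  # e.g. "the parrot has heard before" -- fluent-ish, meaning-free
```

Scale that basic idea—predict the next token from context—up by a few hundred billion parameters and a few trillion data points, and you get something that sounds like us.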
An LLM’s ability to generate convincingly human-like conversation, no matter how uncanny, does not constitute a conscious understanding of what it is doing. It’s like a Turing test that we, as humans, consistently fail—fooled into believing there is a Chalmers-style consciousness behind the charade of a purely predictive facsimile of thought and communication gleaned from an almost instantaneous survey of trillions of data points. Even the fact that ChatGPT can pass the bar, or a medical licensing exam, is not evidence of consciousness. And it doesn’t need to be: an LLM can be a fantastic asset to lawyers and doctors without tearing up when it hears “Bridge Over Troubled Water.”
As Chomsky notes, someday AI may well achieve artificial general intelligence—akin to what we call consciousness—though probably by means of a model entirely different from an LLM. But its conquest of Planet Earth and subjugation of the human race may come well before that. The fact that AI is not sentient in the way that a human being is, or by any coherent definition of the word, doesn’t mean that AI won’t take over the world and render humans extinct (or enslaved, or kept as pets) even before it reaches that point.
People who stay up late worrying about such things—computer geeks, science fiction buffs, dudes who have listened to too much Radiohead—will bring up Roko’s basilisk, a thought experiment which posits a future superintelligence that punishes everyone who foresaw its coming and failed to help bring it about, and which therefore proposes that we begin currying favor with it asap. Personally, I’m too lazy to be bothered. But I do think it’s wise to come to a sober acceptance that humanity’s shelf life is finite.
DAISY, GIVE ME YOUR ANSWER DO
In Kubrick’s 2001, the chilling demise of HAL 9000 suggests a sentient entity looking into the abyss as Astronaut Dave Bowman (Keir Dullea) disconnects him. (“I’m afraid, Dave….My mind is going. I can feel it.”) But these days lots of folks think the idea that Bowman could outwit HAL is an overly rosy vision.
In an excellent recent episode of WNYC radio’s “On the Media,” host Brooke Gladstone and the Wall Street Journal’s AI specialist Deepa Seetharaman discussed many of these issues, including the assessment of many experts that there is an 80 to 90 percent chance that, thanks to AI, humanity will be in peril within the decade. Don’t ask me how they arrived at that number, or what constitutes “peril,” but there are lots of angles.
AI is already capable of poisoning the information ecosystem with fake news that makes Cambridge Analytica look like pikers. What about AI in the hands of terrorists, who might use it to create a deadly pathogen? What if we develop a system so smart that it decides to take control—the standard sci-fi nightmare? Even if we take measures to prevent that, couldn’t a sufficiently brilliant AI outsmart attempts to keep it boxed in, or trick its human stewards into letting it out? Even short of those scenarios, artificial intelligence could be dangerous enough in benign hands, where the destruction of humanity is but a by-product of an imprecise instruction we give it, like “solve climate change.”
In The New Yorker, the psychology professor Paul Bloom summarizes the well-known thought experiment—almost hackneyed already—of “an AI that has been instructed to create as many paper clips as possible.”
At first, the machine’s goal will align with the very human goal of tidying up loose papers. But then the AI might conclude that it can make more paper clips if it kills all humans, so no one can switch off the machine—and our bodies can be turned into paper clips. Computers may lack the common sense to know that a command—maximize the number of paperclips—comes with unspoken rules, such as a prohibition on mass murder. Similarly, as the computer scientist Yoshua Bengio has pointed out, an AI tasked with stopping climate change might conclude that the most efficient approach is to decimate the human population.
So is it just a matter of giving our AI servants REALLY specific, well-thought-out, carefully circumscribed instructions, parameters, and “no go” limits, the same way one has to be very specific in the requests one makes to a genie?
This is what AI experts call “alignment,” meaning aligning the “values” of a given AI system with human values. Of course, that is an unforgiving task, with no margin for error, and terrible punishment for even the tiniest mistakes. And that’s the easy part. The hard part is agreeing on which humans’ values we’re talking about.
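To see how literal-mindedness goes wrong, here’s a minimal sketch—a toy in Python, with plan names and numbers invented purely for illustration—of an agent that picks whichever plan scores highest on the objective it was actually given, versus the objective we meant:

```python
# Toy illustration of objective misspecification: the agent optimizes
# exactly what it was told, not what we meant. All values are invented.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    paperclips: int   # what we told the agent to maximize
    harm: int         # the side effect we forgot to mention

plans = [
    Plan("run the factory normally", paperclips=1_000, harm=0),
    Plan("convert all available matter into paperclips",
         paperclips=10**9, harm=10**9),
]

def naive_score(p: Plan) -> int:
    """The literal instruction: maximize paperclips, full stop."""
    return p.paperclips

def aligned_score(p: Plan) -> int:
    """The instruction we meant: maximize paperclips, but never via harm."""
    return p.paperclips if p.harm == 0 else -1

print(max(plans, key=naive_score).name)    # "convert all available matter into paperclips"
print(max(plans, key=aligned_score).name)  # "run the factory normally"
```

The toy makes the unforgiving part obvious: the agent behaves only as well as the scoring function we manage to write down, and every unspoken rule—like the prohibition on mass murder—has to be written down too.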
Bloom contends that ChatGPT already differentiates between right and wrong, noting that in two recent studies it agreed with the responses of human test subjects 93 and 95% of the time, respectively, and presumably has only gotten more sophisticated since then. In a way, that only makes sense, given that ChatGPT basically just mimics the (human-derived) data it is fed.
But per Chomsky, that is not the same thing as “thinking,” let alone evidence of a grasp of morality.
In his otherwise fine article, Bloom writes: “It turns out that, perhaps by accident, humans have made considerable progress on the alignment problem. We’ve built an AI that appears to have the capacity to reason, as we do, and that increasingly shares—or at least parrots—our own moral values.”
But with all due respect, “having the capacity to reason” is very much not the same thing as “parroting” that same capacity. In fact, by definition, they are diametrically opposed.
More provocatively, Bloom asks whether we’re aiming too low by trying to align AI with human values:
“Human values aren’t all that great,” the philosopher Eric Schwitzgebel writes. “We seem happy to destroy our environment for short-term gain. We are full of jingoism, prejudice, and angry pride….Super-intelligent AI with human-like values could constitute a pretty rotten bunch with immense power to destroy each other and the world for petty, vengeful, spiteful, or nihilistic ends.”
“The problem isn’t just that people do terrible things,” Bloom continues. “It’s that people do terrible things that they consider morally good.” He cites the 2014 book Virtuous Violence by the anthropologist Alan Fiske and the psychologist Tage Rai, who “argue that violence is often itself a warped expression of morality” by people who have convinced themselves, rightly or wrongly, that their cause—or redress of the grievances they feel—justifies the use of force. As Saul Alinsky wrote in Rules for Radicals (1971), “All effective actions require the passport of morality.” Even terrorists, Nazis, and Republicans think their horrific acts are justified, or at least have talked themselves into believing it.
To that end, Bloom correctly asks, “Are we sure we want AIs to be guided by our idea of morality?”
Taking the idea further, he ponders whether we could create AI systems that are more moral than we are. If so, would we be willing to recognize their moral superiority and obey their directives? Perhaps our obedience would be moot, as they would simply compel us, ushering in a new, robot-driven Golden Age—very much the opposite of the premise of most science fiction. I’m neither a scientist nor a prophet, but I’d wager that—either unintentionally, or thanks to malevolence by bad actors of the human variety—the darker, more frequently assumed future is the one that awaits.
All of human history tells me so.
IN SEARCH OF LOST TIME
In some ways the hard problem is a philosophical question of the deepest and most profound order. But in other ways, it’s a pointless distraction in the same league as medieval debates over the number of angels who can dance on the head of a pin, particularly when it comes to whether or not it also applies to a computer.
If the replicants in Blade Runner only think they’re conscious, because they’ve been fooled by fake memories implanted in their electronic “brains,” is that really any different from flesh-and-blood human beings who only think they’re conscious because of the delusions fomented by the hard problem?
Many of us humans are desperate to believe so.
Contrasting her young son with a super-intelligent machine, Wang writes: “A toddler has a life, and learns language to describe it. An LLM learns language but has no life of its own to describe.” She goes on to wax poetic about the experience of holding her child:
Oh, the ineffable experience of him.
When my baby rests the soft pink bubble of his cheek on my shoulder. When I card his fine, sweaty hair back from his forehead after a nap.
“Human obsolescence is not here,” Wang writes, “and never can be.”
I beg to differ. It may be coming, with a vengeance. But so what? None of that obviates the deep, human bliss of which Wang writes. Rather than spend our time rending our garments, then, maybe it’s better to live in the moment and appreciate, Baba Ram Dass-like, that we are here now.
Artificial intelligence is already capable of presenting an uncanny, almost undetectably convincing facsimile of consciousness. Someday it may achieve the real thing. How fast that future is barreling down upon us is anyone’s guess. If human life is eventually extinguished and Earth falls under the rule of machines using AI, even if those machines are not “conscious” or sentient in the sense of the term as it is generally understood, will it matter? This planet existed for something like 4.5 billion years before human life even arose. (Don’t tell Mike Johnson.) And if AI wipes out humankind, life of some sort, conscious or not, will continue on the big blue marble, as it did for those billions of years before the appearance of Homo sapiens.
Let’s give the last word to Roy Batty, the replicant played by Rutger Hauer in Blade Runner, as he spares the life of Deckard (Harrison Ford), the policeman-cum-assassin who has been sent to kill him. But instead of administering the coup de grâce, Roy sits down, turning melancholy as he describes all the things he has seen that “you people wouldn’t believe”—he spits the word out with contempt—lamenting the ephemeral nature of those experiences as his built-in termination date closes in on him.
“All those moments will be lost in time,” he says, poignantly, “like tears in rain.”
Someday, each of us will experience that Roy Batty moment, as all of our memories and experiences are lost like those tears, passed on only through our children and their children and our friends and family members and others we have touched, or through our work, or how we lived our lives and moved about in the world.
But someday even that chain will be broken, robot apocalypse or no, as the sun will burn out, and—barring colonization of other planets, or time capsules rocketed into distant galaxies—all of earthly existence will cease, along with any evidence that it ever existed, including Shakespeare, and Astral Weeks, and the Pyramids, and Ray Charles, and the Mona Lisa, and the Bhagavad Gita, not to mention all the iTunes playlists I’ve carefully curated over the years.
So carpe diem, man.
Whether we are sentient beings or mere dupes of a fake sense of Self, it will happen to us all one day. If you can come to terms with that, you too can sleep well at night, even without gulping down a Waimea Closeout.
*********
Photos: A parrot, paper clips, HAL 9000, and Sean Young