
The hubris of AI hype

Artificial intelligence is a powerful technology, but it’s still just a human tool that portends neither deliverance nor apocalypse.


It would be an exaggeration to suggest that the scientists and engineers behind the seminal 1956 Dartmouth conference on artificial intelligence believed that they could quickly create a virtual mind with human-like characteristics, but not as much of an exaggeration as you’d think. The drafters of the paper that called for convening the conference believed that a “two-month, 10-man study” could result in a “significant advance” in domains like problem-solving, abstract reasoning, language use, and self-improvement. Diving deeper into that document and researching the conference, you’ll be struck by the spirit of optimism and the confidence that these were ultimately mundane problems, if not easy ones. They really believed that AI was achievable in an era in which many computing tasks were still carried out by machines that used paper punch cards to store data.

Needless to say, solving these problems ultimately required far more computing power than was available in that era, to say nothing of time, money, and manpower. In the seven decades since that conference, the history of artificial intelligence has largely been one of false hopes and progress that seemed tantalizingly close but always skittered just out of the grasp of the computer scientists who pursued it. The old joke, which updated itself as the years dragged on, was that AI had been 10 years away for 20 years, then 30, then 40 . . .

Ah, but now. I hardly need to introduce anyone to the notion that we’re witnessing an AI renaissance. The past year or so has seen the unveiling of a great number of powerful systems that have captured the public’s imagination — OpenAI’s GPT-3 and GPT-4, and their ChatGPT offshoot; automated image generators like DALL-E and Midjourney; advanced web search engines like Microsoft’s new Bing; and sundry other systems whose ultimate uses are still unclear, such as Google’s Bard. These remarkable feats of engineering have delighted millions of users and prompted a tidal wave of commentary that rivals the election of Donald Trump in sheer mass of opinion. I wouldn’t know where to begin to summarize this reaction, other than to say that everyone seems sure that something epochal has happened. Suffice it to say that Google CEO Sundar Pichai’s pronouncement that advances in artificial intelligence will prove more profound than the discovery of fire or electricity was not at all exceptional in the current atmosphere.

In the background of this hype, there have been quiet murmurs that perhaps the moment is not so world-altering as most seem to think. A few have suggested that, maybe, the foundations of heaven remain unshaken. AI skepticism is about as old as the pursuit of AI itself. Critics have insisted for years that the approaches most computer scientists were taking in the pursuit of AI were fundamentally flawed. These critics have tended to focus on the gap between what we know of beings that think, notably humans, and how their thinking occurs, on the one hand, and the processes that underlie modern AI-like systems on the other.

One major issue is that most or all of the major AI models developed today are based on the same essential approach, machine learning and “neural networks,” an approach that bears little resemblance to our own minds, which were built by evolution. From what we know, these are machine-learning systems that harvest impossibly large amounts of information and iteratively self-develop internal models that can generate responses statistically likely to satisfy a given prompt. I say “from what we know” because the actual algorithms and processes that make these systems work are tightly guarded industry secrets. (OpenAI, it turns out, is not especially open.) But the best information suggests that they’re developed by mining unfathomably vast data sets, assessing that data through sets of parameters that are also bigger than I can imagine, and then algorithmically developing responses. They are not repositories of information; they are self-iterating response-generators that learn, in their own way, from repositories of information.
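
To make that abstract description a little more concrete, here is a deliberately tiny sketch in Python of the principle of statistical continuation: count which words tend to follow which in a corpus, then extend a prompt by sampling whatever is likely given what came before. This is a toy illustration only, under the assumption that a simple word-frequency table can stand in for the idea; the commercial systems use neural networks with billions of parameters, and their actual internals, as noted above, are not public.

```python
import random
from collections import Counter, defaultdict

# A miniature "training corpus." Real systems ingest a large slice of the internet.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (a simple bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Extend a prompt word by word, sampling each next word in proportion
    to how often it followed the current word in the corpus."""
    word, output = start, [start]
    for _ in range(length):
        counts = following.get(word)
        if not counts:
            break
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat chased the"
```

The point is not that GPT-4 works like this bigram toy; it emphatically does not. The resemblance is only at the level of principle: likely continuations extracted from prior text, rather than answers reasoned out from first principles.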

Crucially, the major competitors are (again, as far as we know) unsupervised, or more precisely self-supervised, models — they don’t require a human being to label or encode the data they take in, which makes them far more flexible and potentially more powerful than older systems. But what is returned, fundamentally, is not the product of the kind of deliberate, stepwise reasoning a human might use but the residue of trial and error, self-correction, and predictive response.
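
For readers who want to see what “self-supervised” means in practice, here is a minimal, hypothetical sketch: the training examples are manufactured from raw text itself, with each position’s “label” simply being the token that actually comes next, so no human annotator is needed. Again, this illustrates the general training setup, not any particular company’s pipeline.

```python
def make_training_pairs(tokens: list[str], context: int = 3):
    """Turn raw text into (context, next-token) examples with no human labeling:
    the 'answer' for each example is just whatever token actually came next."""
    return [(tokens[i - context:i], tokens[i]) for i in range(context, len(tokens))]

tokens = "machine learning systems learn statistical patterns from raw text".split()
for context_window, target in make_training_pairs(tokens):
    print(context_window, "->", target)
# ['machine', 'learning', 'systems'] -> learn
# ['learning', 'systems', 'learn'] -> statistical
# ...and so on: the supervision signal comes from the text itself, not from a person.
```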

This has consequences.

If you use Google’s MusicLM to generate music from the prompt “upbeat techno,” you will indeed get music that sounds like upbeat techno. But what the system returns does not just sound like techno in the human understanding of the genre but sounds like all of techno — through some unfathomably complex process, it’s producing something like the aggregate or average of all extant techno music, or at least of an immense sample of it. This naturally satisfies most definitions of techno music. The trouble, among other things, is that no human being could ever listen to as much music as was likely fed into the major music-generating AI systems, which calls into question how similar this process is to human songwriting. Nor is it clear whether anything genuinely new could ever be produced this way. After all, true creativity begins precisely where influence ends.

The very fact that these models derive their outputs from huge data sets suggests that those outputs will always be derivative, middle-of-the-road, an average of averages. Personally, I find that conversation with ChatGPT is a remarkably polished and effective simulation of talking to the most boring person I’ve ever met. How could it be otherwise? When your models are basing their facsimiles of human creative production on more data than any human individual has ever processed in the history of the world, you’re ensuring that what’s returned feels generic. If I asked an aspiring filmmaker who their biggest influences were, and they answered “every filmmaker who has ever lived,” I wouldn’t assume they were a budding auteur. I would assume that their work was lifeless and drab and unworthy of my time.

Part of the lurking issue here is the possibility that these systems, as capable as they are, might prove immensely powerful up to a certain point, and then suddenly hit a hard stop, a limit on what this kind of technology can do. The AI giant Peter Norvig, who used to serve as a research director for Google, suggested in a popular AI textbook that progress in this field can often be asymptotic — a given project might proceed busily in the right direction but ultimately prove unable to close the gap to true success. These systems have been made more useful and impressive by throwing more data and more parameters at them. Whether generational leaps can be made without an accompanying leap in cognitive science remains to be seen.


God-like claims

Central to objections to the claim that these systems constitute human-like artificial intelligence is the fact that human minds operate on far smaller amounts of information. The human mind is not “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as Noam Chomsky, Ian Roberts, and Jeffrey Watumull argued earlier this year. The mind is rule-bound, and those rules are present before we are old enough to have assembled a great amount of data. Indeed, this observation, “the poverty of the stimulus” — that the information a young child has been exposed to cannot explain that child’s cognitive capabilities — is one of the foundational tenets of modern linguistics. A 2-year-old can walk down a street with far greater understanding of the immediate environment than a self-driving Tesla, without billions of dollars spent, teams of engineers, or reams of training data.

In Nicaragua in the 1980s, a few hundred deaf children in government schools developed Nicaraguan Sign Language. Against the will of the adults who supervised them, they created a new language, despite the fact that they were all linguistically deprived, most came from poor backgrounds, and some had developmental and cognitive disabilities. A human grammar is an impossibly complex system, so complex that one could argue we’ve never fully mapped any. And yet these children spontaneously generated a functioning human grammar. That is the power of the human brain, and it’s that power that AI advocates routinely dismiss — that they have to dismiss, are bent on dismissing. To acknowledge that power would make them seem less godlike, which appears to me to be the point of all of this.

The broader question is whether anything but an organic brain can think like an organic brain does. Our continuing ignorance regarding even basic questions of cognition hampers this debate. Sometimes this ignorance is leveraged against strong AI claims, but sometimes in favor; we can’t really be sure that machine-learning systems don’t think the same way as human minds because we don’t know how human minds think. But it’s worth noting why cognitive science has struggled for so many centuries to comprehend how thinking works: because thinking arose from almost 4 billion years of evolution. The iterative processes of natural selection have had 80 percent of the history of this planet to develop a system that can comprehend everything found in the world, including itself. There are 100 trillion synaptic connections in a human brain. Is it really that hard to believe that we might not have duplicated its capabilities in 70 years of trying, in an entirely different material form?

John McCarthy, seen here in the AI lab at Stanford in 1974, is credited with coining the term "artificial intelligence" in 1956. Proposing an AI workshop at Dartmouth that summer, McCarthy and colleagues said they wanted to "find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." (Photo: Chuck Painter)

“The attendees at the 1956 Dartmouth conference shared a common defining belief, namely that the act of thinking is not something unique either to humans or indeed even biological beings,” Jørgen Veisdal of the Norwegian University of Science and Technology has written. “Rather, they believed that computation is a formally deducible phenomenon which can be understood in a scientific way and that the best nonhuman instrument for doing so is the digital computer.” Thus the most essential and axiomatic belief in artificial intelligence, and potentially the most wrongheaded, was baked into the field from its very inception.

I will happily say: These new tools are remarkable achievements. When matched to the right task, they have the potential to be immensely useful, transformative. As many have said, there is the possibility that they could render many jobs obsolete and perhaps lead to the creation of new ones. They’re also fun to play with. That they’re powerful technologies is not in question. What is worth questioning is why all of that praise is not sufficient, why the response to this new moment in AI has proven to be so overheated. These tools are triumphs of engineering; they are ordinary human tools, but potentially very effective ones. Why do so many find that unsatisfying? Why do they demand more?

There’s no escaping reality

These tools are also, of course, triumphs of commerce. As I suggested above, Pichai’s grasping for the most oversaturated comparison he could find, to demonstrate the gravity of the present moment, was not unusual at all; the internet is now wallpapered with similar rhetoric. I understand why Pichai would engage in it, given that he and his company have direct financial incentive to exaggerate what new machine-learning tools can do. I also understand why Eliezer Yudkowsky, the stamping, fuming philosopher king of the “rationalist” movement, would say that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” Yudkowsky has leveraged impossibly overheated rhetoric about the consequences of artificial intelligence into internet celebrity, a career as a professional Cassandra for an extinction-level event that he can keep imagining further and further into the future. AI, for him, is an identity, and thus his AI politics are identity politics in exactly the conventional sense. There are others like him. And with them at least I can say that money is on the line.

But I’m not sure why so many people who aren’t similarly invested are such strident defenders of AI maximalism. In the last couple of years, a new kind of internet-enabled megafan has invaded online spaces, rivaling your Taylor Swift stans or Elon Musk fanboys in fanatical devotion. The AI fanboys are both triumphalist and resentful, certain that these systems are everything they’ve been made out to be and more, and eager to shout down those who suggest otherwise. It would be a mistake, though, to think that they’re celebrating ChatGPT or Midjourney or similar tools as such. They are, instead, celebrating the possibility of deliverance from their lives.

I write a newsletter about politics and culture, where a version of this essay first appeared. My readers and commenters are habituated to controversy and are able to keep level heads in debates about abortion, the war in Ukraine, LGBTQ rights, and other hot-button issues. Yet I’ve been consistently surprised by how many of them become viscerally unhappy when I question the meaning of recent developments in machine learning, large language models, and artificial intelligence. When I express skepticism about the consequences of this technology, a voluble slice of my readership does not just disagree. They’re wounded. There’s always a little wave of subscription cancellations. This is the condition that captivates me, not the technology as such.

Talk of AI has developed in two superficially opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators. Here I can trot out the old cliché that love and hate are not opposites but kissing cousins, that the true opposite of each is indifference. So too with AI debates: the war is not between those predicting deliverance and those predicting doom, but between both of those and the rest of us who would like to see developments in predictive text and image generation as interesting and powerful but ultimately ordinary technologies. Not ordinary as in unimportant or incapable of prompting serious economic change. But ordinary as in remaining within the category of human tool, like the smartphone, like the washing machine, like the broom. Not a technology that transcends other technology and declares definitively that now is over.

That, I am convinced, lies at the heart of the AI debate — the tacit but intense desire to escape now. What both those predicting utopia and those predicting apocalypse are absolutely certain of is that the arrival of these systems, what they take to be the dawn of the AI era, means now is over. They are, above and beyond all things, millenarians. In common with all millenarians they yearn for a future in which some vast force sweeps away the ordinary and frees them from the dreadful accumulation of minutes that constitutes human life. The particular valence of whether AI will bring paradise or extermination is ultimately irrelevant; each is a species of escapism, a grasping around for a parachute. Thus the most interesting questions in the study of AI in the 21st century are not matters of technology or cognitive science or economics, but of eschatology.

Fredrik deBoer is the author of “The Cult of Smart” and the forthcoming book “How Elites Ate the Social Justice Movement.” This essay was adapted with permission from “AI, Ozymandias” on his Substack newsletter.