Is Artificial Intelligence a Myth?
The answer depends on the bar we require the technology to clear
Nuclear Fusion for Neurons
In a recent TED talk that already has millions of views, Mustafa Suleyman wants you to know he’s feeling vindicated. The CEO of Microsoft AI opens by throwing a little shade at his former doubters:
[Fifteen years ago] talk of AI was, I guess, kind of embarrassing. People generally thought we were weird. . . It wasn't long, though, before AI started beating humans at a whole range of tasks that people previously thought were way out of reach.
Understanding images, translating languages, transcribing speech, playing Go and chess, and even diagnosing diseases. People started waking up to the fact that AI was going to have an enormous impact.
How enormous? Suleyman claims that:
AI is to the mind what nuclear fusion is to energy: limitless, abundant, world-changing.
Hype or Hurricane?
I grant that overhype is the norm in tech circles, but I also watched ChatGPT disrupt my own job as a curriculum writer for an education company. World-changing? Time will tell. Job-transforming? It already is. And if you don’t want to take my word for it, just ask a musician for their thoughts on Udio.
Or turn to English professor Martin Puchner in this piece for his broader take:
AI is coming for white-collar jobs in advertising, PR, communications, all kinds of content creation. . . .
Meanwhile, some claim that recent progress in AI is so rapid that the benchmarks used to track it can’t keep up:
“A decade ago, benchmarks would serve the community for 5–10 years” whereas now they often become irrelevant in just a few years, says Nestor Maslej, a social scientist at Stanford and editor-in-chief of the AI Index. “The pace of gain has been startlingly rapid.”
The Steadfast Skeptic
With so many indications that AI is now provably a big deal, I was surprised when I came across a book called The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson, who’s also on Substack. My first reaction was: sure, computers don’t really think, but isn’t calling artificial intelligence a “myth” a bit much? Then I noticed that the book came out in 2021. Could the author have suffered from poor timing? After all, by 2023 or so, wasn’t there widespread agreement that Suleyman and the other AI true believers were right?
Not exactly. Larson hasn’t changed his tune much, as evidenced by a recent post titled “Five Reasons We’re Heading for an AI Winter”.
The Real Versus The Rumored
So which is it? Are we descending into an AI winter? Or, in the words of Suleyman, as people “[wake] up to the fact that AI [is] going to have an enormous impact,” is summer just around the corner?
It’s actually possible that both are true, because a variety of overlapping concepts fit under the umbrella term “AI.” Though they sound similar, artificial intelligence (AI) and artificial general intelligence (AGI) don’t mean the same thing. Consider:
AI - Artificial Intelligence
refers broadly to computer systems capable of performing complex tasks that historically only a human could do
usually geared to a specific task like playing chess, producing human-like language, or driving a car
currently exists and some forms are widely used
Versus. . .
AGI - Artificial General Intelligence
refers to a hypothetical kind of advanced artificial intelligence
would have the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence
does not currently exist and may not ever
Here’s a simple, if cynical, way to think about the difference: Imagine you’re the head of a company that’s looking to use AI to replace employees and cut costs. If you want to lay a writer off, you’ll need a particular kind of AI program. But if you want to cut a bookkeeper, you’ll need a different one. If true AGI existed, it could replace one just as effectively as the other.
A Snowstorm in July
But once again, no one’s created AGI yet. A related example shows why it’s so difficult to achieve: Let’s imagine that one day an LLM (large language model) like ChatGPT can replace a high school English teacher. That’s still a maybe, but if you’ve experimented with that kind of AI tool, you can imagine it. That’s “regular” AI that does a specific task: it can communicate like a human in a particular context.
But even if an LLM could replace a teacher in a classroom, that doesn’t mean it could meet all of that same teacher’s communication needs in every area of life. That would require AGI, artificial general intelligence. To do that, the app would also have to understand finances, maintain personal relationships, plan for the future, and much more. AGI evokes the human-like robots with broad capabilities that often show up in sci-fi. Think C-3PO in Star Wars.
So, it’s possible to have an AI summer in the sense that new mind-boggling apps are appearing every week. And you can simultaneously be in an AI winter, because none of that assures AGI is any closer. It has less to do with the progress of technology and more to do with where you set the bar. In other words, what standard does AI have to meet to rise above mythic status? It’s possible to just keep raising the bar. Every advancement in AI enables us to imagine further horizons not yet achieved. But AGI offers something of a clear benchmark: a computer system as broadly capable and perceptive as a human. And we’re definitely not there.
Trendy Tech’s Tradeoffs
Larson takes this point further. While his critiques of AI and the hype surrounding it are wide-ranging, I think I can fairly summarize one of his major concerns: there’s little evidence that AI (as it currently exists) will lead to AGI.
For example, in his book, Larson points out that training an AI to get better at a new type of task can make it worse at tasks it had already learned. In other words, an AI first trained to play chess and then trained to write poetry will tend to get worse at chess. In the machine learning field, this phenomenon is known by the memorable term “catastrophic forgetting.” In a human, it would be as if learning to play baseball caused you to forget how to ride a bike.
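To make that concrete, here’s a minimal sketch of catastrophic forgetting (my own toy example with made-up tasks, not one from Larson’s book; it assumes PyTorch is installed). A small network is trained on one synthetic classification task, then trained only on a second task, and its accuracy on the first task falls back toward chance:

```python
# A toy demonstration of catastrophic forgetting (hypothetical tasks, illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(weights, n=2000):
    """Label 2-D points by which side of a task-specific line they fall on."""
    x = torch.randn(n, 2)
    y = (x @ weights > 0).long()
    return x, y

task_a = make_task(torch.tensor([1.0, 1.0]))   # Task A: is x0 + x1 > 0?
task_b = make_task(torch.tensor([1.0, -1.0]))  # Task B: is x0 - x1 > 0?

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def train(x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

train(*task_a)
print("Task A accuracy after training on A:", accuracy(*task_a))  # close to 1.0

train(*task_b)  # keep training the same network, but only on Task B
print("Task A accuracy after training on B:", accuracy(*task_a))  # falls sharply
```

In a typical run, accuracy on Task A starts near 100 percent and drops to roughly coin-flip levels once the network has been retrained on Task B alone; nothing told it to preserve what it had already learned.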
Silicon Valley Sleight of Hand?
Larson’s critique caused me to notice something I missed the first time I listened to Mustafa Suleyman’s TED Talk, the one I quoted at the beginning of this post. Because while the lecture is called “What Is an AI Anyway?”, Suleyman doesn’t bother to clearly distinguish between AI generally (which exists) and AGI (which doesn’t).
This quote from the talk comes right before the one I used up top. Italics mine, to highlight the subtle shift in word choice:
[Fifteen years ago] working on AI was seen as way too out there. In 2010, just the very mention of the phrase “AGI,” artificial general intelligence, would get you some seriously strange looks and even a cold shoulder. “You’re actually building AGI?” people would say. “Isn’t that something out of science fiction?” People thought it was 50 years away or 100 years away, if it was even possible at all. Talk of AI was, I guess, kind of embarrassing.
At first listen, it seems possible he’s just varying his vocabulary. It sounds awkward to use the exact same word over and over again. I’ve struggled myself with how to keep the language lively in a paragraph that refers to AI, AI, and still more AI.
But I don’t think that’s all that’s going on here. Because anyone with Suleyman’s vast experience would know that the promise of AGI lies well beyond today’s AI. And he would also know that the average TED Talk viewer would have no idea that’s the case. So instead of clarifying, he seems to have chosen to blur the lines.
There’s no doubt Suleyman has seen AI leapfrog some of his detractors’ naysaying. It’s true that recent AI advances have surprised experts, and anyone who’s played around with an LLM or text-to-image generator can grasp why. But Larson’s arguments helped me to see how it’s disingenuous to present recent AI breakthroughs as evidence that AGI is right around the corner.
AI itself isn’t sure AGI will be possible. Here’s what ChatGPT said when I asked its expectation (emphasis mine):
Predictions vary widely, with some believing [AGI] could happen within a few decades, while others think it might take much longer, or even that it may never be achieved.
Something In The Way
And Larson is hardly the only human airing doubts. Gary Marcus, a cognitive scientist and founder of a robotics company, has recently written about AI’s difficulties handling outliers. This means the technology needs to see a specific example of something to “recognize” it and can’t generalize the way a human brain does. As a dangerously practical matter, Marcus notes that a self-driving car may not know what to do when it comes upon an overturned double trailer. Why not? Because the AI piloting the car hasn’t been trained on that exact image. We take for granted that a human driver who has never seen the underside of a semi truck will stop because there’s something huge in the road.
Marcus says Big Tech cheerleaders are looking past such problems (emphasis mine):
[A]lmost everything that people like [OpenAI CEO] Altman and [Tesla CEO] Musk and [futurist] Kurzweil are currently saying about AGI being nigh seems like sheer fantasy, on par with imagining that really tall ladders will soon make it to the moon.
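Marcus’s outlier problem is easy to illustrate with a toy sketch (my own example, not his; it assumes NumPy and scikit-learn). A standard classifier trained only on familiar data has no way to say “I’ve never seen anything like this”; handed an input far outside its training distribution, it simply extrapolates and reports near-certainty anyway:

```python
# A toy illustration of confident predictions on out-of-distribution inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters the model has "seen".
cluster_a = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(500, 2))
cluster_b = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(500, 2))
X = np.vstack([cluster_a, cluster_b])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X, y)

# An "overturned trailer": an input far outside anything in the training data.
outlier = np.array([[40.0, 25.0]])
print(clf.predict_proba(outlier))  # near-certain probability for one class anyway
```

The classifier isn’t wrong in any way it can detect: it was never built to flag the unfamiliar, only to sort inputs into the categories it already knows.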
Cross-Examining the Crafters of AI
The one who states his case first seems right, until the other comes and examines him. (Proverbs 18:17)
I first listened to Suleyman’s TED Talk months ago, right after it came out. Although I realized a lot of the talk was speculative, I didn’t catch the AI-for-AGI sleight-of-hand right at the beginning of the video. It’s a good reminder to pay attention to voices like Larson, Marcus, and others who have the expertise and willingness to question Silicon Valley narratives.
The broader debate over AI’s hype versus its helpfulness doesn’t really have a conclusion, because every genuinely practical breakthrough unleashes a new wave of speculation. AI applications do some tasks better than others. In my previous job as a writer for an edtech company, the results we got from AI ranged from startling effectiveness to laughable uselessness, depending on the task attempted. But even with mixed outcomes, the mere arrival of AI can send unpredictable shockwaves through a workplace.
AGI—a computer system as a full-spectrum replacement for human intelligence—could prove unachievable. As Christians, we hold on to certain truths regardless. Either way, humans will remain God’s unique image bearers. Either way, the Lord will hold us accountable for how we use the fleeting time, resources, and technology we’ve been given. And as we navigate this moment, we should remember that AI that already exists will deliver massive changes to work, communication, and probably even society as we know it.
For Christians, the questioning of AI still seems to come down to this: living the Christian life is more than a practical problem, or even a theological one. It’s not just doing what you should; it’s also asking for help from a supernatural God when you can’t. AI looks like a solution to practical problems, but not one that necessarily helps you live the Christian life.
Another interesting angle to examine is the concept of “model collapse,” in which successive generations of AI models train on too much of their own synthesized (and biased) data, so that their output gradually becomes homogenized in how it looks, feels, and sounds.
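Here’s a toy simulation of that dynamic (my own sketch, not drawn from any specific study; it only needs NumPy). Each “generation” fits a simple model to data sampled from the previous generation’s model, and the spread of its output, a stand-in for diversity, tends to shrink:

```python
# A toy illustration of model collapse: each generation trains on the previous
# generation's synthetic output, and the output grows more homogeneous.
import numpy as np

rng = np.random.default_rng(0)

mean, std = 0.0, 1.0   # generation 0: the "real" data distribution
n_samples = 50         # each generation trains on a small synthetic sample

for generation in range(1, 21):
    synthetic = rng.normal(mean, std, n_samples)   # data produced by the previous model
    mean, std = synthetic.mean(), synthetic.std()  # "train" the next model on it
    print(f"generation {generation:2d}: mean = {mean:+.3f}, std = {std:.3f}")

# Because each fit is made from a finite, slightly biased sample, the standard
# deviation tends to drift downward over generations: the outputs homogenize.
```

It’s a cartoon, of course, but it captures the worry: models fed mostly on their own output gradually lose the variety that made the original data valuable.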