Elon Musk thinks we’re close to solving AI. That doesn’t make it true.

Musk speaks at the opening of a new Tesla plant in 2022. | Christian Marquardt/Getty Images

What he gets right — and very wrong — about AI, from driverless cars to ChatGPT.

Elon Musk is at or near the top of pretty much every AI influencer list I have ever seen, despite the fact that he doesn't have a degree in AI and seems to have published only one academic journal article in the field, which received little notice.

There's not necessarily anything wrong with that; Yann LeCun was trained in physics (the same field as one of Musk's two undergraduate degrees) but is justifiably known for his pioneering work in machine learning. I'm known for my AI work, too, but I trained in cognitive science. The most important paper I ever wrote for AI was in a psychology journal. It's perfectly fine for people to influence fields they weren't trained in, and Musk's work on driverless cars has undoubtedly influenced the development of AI.

But an awful lot of what he says about AI has been wrong. Most notoriously, none of his forecasts about timelines for self-driving cars have been correct. In October 2016, he predicted that a Tesla would drive itself from California to New York by 2017. (It didn't.) Tesla has deployed a technology called "Autopilot," but everybody in the industry knows that name is a fib, more marketing than reality. Teslas are nowhere close to being able to drive themselves; seven years after Tesla started rolling the software out, it remains so buggy that a human driver must pay attention at all times.

Musk also seems to consistently misunderstand the relationship between natural (human) intelligence and artificial intelligence. He's repeatedly argued that Teslas don't need Lidar — a sensing system that virtually every other autonomous vehicle company relies on — on the basis of a misleading comparison between human vision and cameras in driverless cars. While it's true that humans don't need Lidar to drive, current AI seems nowhere close to being able to understand and deal with a full array of road conditions without it. Driverless cars need Lidar as a crutch precisely because they don't have human-like intelligence.

Teslas can't even consistently avoid crashing into stopped emergency vehicles, a problem the company has failed to solve for more than five years. For reasons still not publicly disclosed, the cars' perceptual and decision-making systems haven't yet managed to drive reliably enough without human intervention. Musk's claim is like saying that humans don't need to walk because cars don't have feet. If my grandmother had wheels, she'd be a car.

ChatGPT isn’t the profound AI advance that it seems

Despite a spotty track record, Musk continues to make pronouncements about AI, and when he does, people take them seriously. His latest, first reported by CNBC and picked up widely thereafter, took place a few weeks ago at the World Government Summit in Dubai. Some of what Musk said is, in my professional judgment, spot-on — and some of it is way off.

What was most wrong was his implication that we are close to solving AI — to reaching so-called "artificial general intelligence" (AGI) with the flexibility of human intelligence — as when he claimed that ChatGPT "has illustrated to people just how advanced AI has become."

That’s just silly. To some people, especially those who haven’t been following the AI field, the degree to which ChatGPT can mimic human prose seems deeply surprising. But it’s also deeply flawed. A truly superintelligent AI would be able to tell true from false, to reason about people and objects and science, and to be as versatile and quick in learning new things as humans are — none of which the current generation of chatbots is capable of. All ChatGPT can do is predict text that might be plausible in different contexts based on the enormous body of written work it’s been trained on, but it has no regard for whether what it spits out is true.

That makes ChatGPT incredibly fun to play with, and, if handled responsibly, it can sometimes even be useful, but none of that makes it genuinely smart. The system has tremendous trouble telling the truth, hallucinates routinely, and sometimes struggles with basic math. It doesn't understand what a number is. In one example, sent to me by the AI researcher Melanie Mitchell, ChatGPT couldn't work out the relation between a pound of feathers and two pounds of bricks, foiled by the ridiculous guardrail system that prevents it from using hateful language but also keeps it from directly answering many questions, a limitation Musk himself has complained about elsewhere.


Examples of ChatGPT fails like this are legion across the internet. Together with NYU computer scientist Ernest Davis and others, I have assembled a whole collection of them; feel free to contribute your own. OpenAI often fixes them, but new errors continue to appear. Here’s one of my current favorites:

These cases illustrate that, despite superficial appearances to the contrary, ChatGPT can’t reason, has no idea what it’s talking about, and absolutely cannot be trusted. It has no real moral compass and has to rely on crude guardrails that try to prevent it from going evil but can be broken without much difficulty. Sometimes it gets things right because the text you type into it is close enough to something it’s been trained on, but that’s incidental. Being right sometimes is not a sound basis for artificial intelligence.

Musk is reportedly looking to build a ChatGPT rival — “TruthGPT,” as he put it recently — but this also misses something important: Truth just isn’t part of GPT-style architectures. It’s fine to want to build new AI that addresses the fundamental problems with current language models, but that would require a very different design, and it’s not clear that Musk appreciates how radical the changes will need to be.

Where the stakes are high, companies are already figuring out that truth and GPT aren't the closest of friends. JPMorgan just restricted its employees from using ChatGPT for business, and Citigroup and Goldman Sachs quickly followed suit. As Yann LeCun put it, echoing what I've been saying for years, ChatGPT is an offramp on the road to artificial general intelligence because its underlying technology has nothing to do with the requirements of genuine intelligence.

Last May, Musk said he’d be “surprised if we don’t have AGI by” 2029. I registered my doubts then, offered to bet him $100,000 (that’s real money for me, if not so much for him), and wrote up a set of conditions. Many people in the field shared my sentiment that on predictions like these, Musk is all talk and no action. By the next day, without planning to, I’d raised another $400,000 for the bet from fellow AI experts. Musk never got back to us. If he really believed what he’s saying, he should have.

We should still be very worried

If Musk is wrong about when driverless cars are coming, naive about what it takes to build human-like robots, and grossly off on the timeline for general intelligence, he is right about something: Houston, we do have a problem.

At the Dubai event last month, Musk told the crowd, "One of the biggest risks to the future of civilization is AI." I still think nuclear war and climate change might be bigger, but these last few weeks, especially with the shambolic introductions of new AI search engines by Microsoft and Google, have led me to think that we are going to see more and more primitive and unreliable artificial intelligence products rushed to market.

That may not be precisely the kind of AI Musk had in mind, but it does pose clear and present dangers. New concerns are appearing seemingly every day, ranging from unforeseen consequences in education to the possibility of massive, automated misinformation campaigns. Extremist organizations, like the alt-right social network Gab, have already begun announcing intentions to build their own AI.

So don’t go to Musk for specific timelines about AGI or driverless cars. But he still makes a crucial point: We have new technology on our hands, and we don’t really know how this is all going to play out. When he said this week that “we need some kind of, like, regulatory authority or something overseeing AI development,” he may not have been at his most eloquent, but he was absolutely right.

We aren’t, in truth, all that close to AGI. Instead, we are unleashing a seductive yet haphazard and truth-disregarding AI that maybe nobody anticipated. But the takeaway is still the same. We should be worried, no matter how smart (or not) it is.

Gary Marcus (@garymarcus) is a scientist, bestselling author, and entrepreneur. He founded the startup Geometric Intelligence, which was acquired by Uber in 2016. His new podcast, Humans versus Machines, will launch this spring.
