The (problematic) case for building AI as fast as we can.
OpenAI, the creator of ChatGPT and of the technology underlying Bing’s Sydney chatbot, has always had a ludicrously ambitious mission: to bring the world AI systems that are generally smarter than humans.
In a post last week titled “Planning for AGI and Beyond,” OpenAI CEO Sam Altman explained what the company means by that: OpenAI is going to build and release successively more powerful systems, which he argues is the best approach to ensuring that the terrifying transition to a world with superintelligence goes well.
“Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history,” Altman wrote. “Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.”
I disagree with many details of OpenAI’s plan for reaching AGI, but I think this statement does one incredibly valuable thing: It makes clear what OpenAI is working to do, so that it can be subject to the kind of accountability and external scrutiny that its leadership acknowledges is needed.
Taking OpenAI at its word
Until recently, I think relatively few people took seriously OpenAI’s declaration that it was working on a world-changing technology. After all, tech companies often declare they’re going to “change the world,” and just as often it’s all marketing and no substance. If you thought OpenAI was just engaging in substance-free hype, there was little reason to dig into the details of its plans, talk about accountability, or engage with the question of how to make powerful AI go well.
“One thing that struck me, reading some of the critical reactions to OpenAI LP, was that most of your critics don’t believe you that you’re gonna build AGI,” I said to Greg Brockman, the co-founder and president of OpenAI, in 2019. “So most of the criticism was: ‘That’s a fairy tale.’ And that’s certainly one important angle of critique. But it seems like there’s maybe a dearth of critics who are like, ‘All right. I believe you that there’s a significant chance — a chance worth thinking about — that you’re going to build AGI … and I want to hold you accountable.’”
“I think I go even one step further,” Brockman said at the time, “and say that it’s not just people who don’t believe that we’re gonna do it but people who don’t even believe that we believe we’re going to do it.”
It has since become much clearer that OpenAI means it when it says it aims to develop very powerful AI systems, systems where “The bad case — and I think this is important to say — is like lights out for all of us,” as Altman recently said.
ChatGPT has more than 100 million users and has sparked a race among tech companies to surpass it with their own language models. OpenAI’s valuation has soared into the tens of billions of dollars. The company is at this point pushing forward public knowledge of AI, enthusiasm about AI, competition around AI, and development of powerful systems.
“Tech companies have not fully prepared for the consequences of this dizzying pace of deployment of next-generation AI technology,” NYU professor Gary Marcus pointed out recently, arguing that the “global absence of a comprehensive policy framework to ensure AI alignment — that is, safeguards to ensure an AI’s function doesn’t harm humans — begs for a new approach.” An obvious new approach — and the one he favors — is slowing down deployment of powerful systems.
But OpenAI argues that we should instead do the exact opposite.
The theory for developing powerful AI as fast as possible
If you believe that AI is very risky, as many experts do and as Altman himself says he does, it might seem strange to rush ever more powerful models out the door. OpenAI has been (justifiably, I think) fiercely criticized for its fast pace of releases in light of serious concerns about existing models, and “Planning for AGI and Beyond” is best read, in part, as a response to those criticisms.
That response has basically two elements. One is that Altman argues we will learn by doing when it comes to AI: releasing models will help us notice their problems, explore their eccentricities, and figure out what’s up with them. “We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize ‘one shot to get it right’ scenarios,” he writes.
The other half of the argument (mostly made in a footnote) is that it will be good for humanity’s efforts to solve the alignment problem if we develop very powerful, human-level AI systems while doing so is still very expensive, instead of holding off until it becomes cheap. (Computer chips get cheaper over time, so anything we can do in 2030 we should expect we’ll be able to do more cheaply in 2040.)
I’ve seen this read as a post hoc rationalization for what OpenAI wanted to do anyway, which is invent lots of really cool stuff and not be stuck doing less rewarding safety work on existing models. It’s hard to rule out an element of that, but I think there’s a genuine argument here, too.
There are a lot of advantages to extremely powerful and dangerous systems being developed while it’s expensive to do so: fewer actors will be involved in the race, it will be easier to regulate and audit advanced AI systems without affecting huge swaths of the economy, and some AI catastrophe scenarios become less likely because it may be harder for an AI system’s capabilities to increase quite as fast when the additional compute it would need is scarce and costly.
With AI, we are all stakeholders
But the real problem, as Altman more or less acknowledges, is that this is a ludicrously high-stakes decision for OpenAI to be making largely on its own. Rushing ahead with developing powerful systems may literally kill every person on the face of the planet. But it’s not pure downside: some of the advantages OpenAI discusses are real.
Should we do it? I lean toward no, but I am sure of one thing: it is absurd that the decision gets made by each AI company deciding for itself whether to go full steam ahead.
OpenAI says all the right things about not wanting to proceed unilaterally, arguing for external audits and oversight and suggesting it may reconsider its fast-moving approach as more and more powerful systems are deployed. But in practice, worrying about powerful AI systems is still a pretty niche concern, and there aren’t good ways for the other stakeholders (which is to say, every person alive) to offer OpenAI useful oversight.
I don’t want to single out OpenAI here. By describing what it’s doing and what it thinks is at stake, the company is attracting a lot of attention that it could have avoided by simply never admitting that AGI is its ultimate aim.
Many other AI labs are worse. Yann LeCun at Meta, who has also said he aims to build machines that can “reason and plan” and “learn as efficiently as humans and animals,” has written in Scientific American that all the discussion of the extraordinary risks is silly: “Why would a sentient AI want to take over the world? It wouldn’t.”
But while I commend OpenAI for saying that the risks are real, and for explaining why it believes its approach is the best way to manage them, I remain skeptical that the benefits of developing incredibly powerful AI systems as fast as possible outweigh the risks of doing so, and I hope that OpenAI means it when it says it’s open to external checks on the work it does.
A version of this story was initially published in the Future Perfect newsletter.