The Intelligence Landing is Near
Nick Bostrom, one of the world’s foremost philosophers on the consequences of AGI and the author of Superintelligence and Deep Utopia, was recently asked to say, in one sentence, what comes to mind when he hears the word “future.” His response?
“Hold on for dear life. Buckle down.”
Leaders of AI companies, who have direct access to cutting-edge systems, assert that AGI could be developed within just a few years. Their public statements overwhelmingly support this view¹.
They’re joined by numerous independent experts calling for safety measures: Geoffrey Hinton and Yoshua Bengio, two of the world’s most influential AI researchers; Ben Buchanan, the Biden administration’s top AI expert; and a host of other prominent economists, mathematicians, national security officials, and former frontier-lab employees, including Leopold Aschenbrenner and Daniel Kokotajlo.
To be fair, some experts doubt that AGI is imminent. However, even if we set aside those working at AI companies or those with vested interests, enough credible independent voices predict short AGI timelines that their warnings deserve serious consideration.
Never before have we seen so many experts urgently calling for safety measures despite enormous economic incentives to rush forward.
When industry insiders advocate for regulation, it usually signals regulatory capture². Here, the evidence strongly suggests otherwise: these insiders are calling for rules that would constrain their own businesses.
Humans have an uncanny tendency to be skeptical of new technological developments.
Yet sometimes, warning signs become impossible to ignore. I believe we have reached such a moment.
Summary
Executives at frontier AI companies (OpenAI, Anthropic, Google DeepMind, SSI) have grown increasingly confident about AGI's arrival within just a few years
Independent experts including Turing Award winners (Hinton, Bengio), government officials, economists, and mathematicians have joined in warning about short AGI timelines
Despite enormous economic incentives to rush forward, an unprecedented number of experts are advocating for safety measures and regulation
While humans have historically been skeptical of technological breakthroughs, the current warning signs from those with insider knowledge deserve serious consideration
¹ In recent months, the leaders of frontier AI companies have grown increasingly confident about rapid progress:
OpenAI’s Sam Altman: Shifted from saying in November “the rate of progress continues” to declaring in January “we are now confident we know how to build AGI”
Anthropic’s Dario Amodei: Stated in January “I’m more confident than I’ve ever been that we’re close to powerful capabilities… in the next 2-3 years”
Google DeepMind’s Demis Hassabis: Changed from “as soon as 10 years” in autumn to “probably three to five years away” by January.
SSI’s Ilya Sutskever: “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”
² When a company engages in regulatory capture, it manipulates the very rules meant to keep it in check, usually by influencing the regulators directly or indirectly.
How it happens:
The company lobbies to get favourable laws passed.
It places former executives inside the regulatory agency (or vice versa—regulators are promised future jobs at the company).
It floods the agency with technical information only it can interpret, making the regulator reliant on it.
It funds think tanks or research that shapes policy in its favour.
Example: A big pharmaceutical company might push for its former employees to work at the FDA. It then uses its insider influence to relax drug testing rules, speed up approvals, or block generic competitors. The result? Higher profits for the company, potentially at the expense of public safety.
The company “captures” the system that was meant to control it.
Bill Gurley’s 2,851 Miles talk at the All-In Summit gives a great overview of how regulatory capture has played out over the course of American history.