(2025-04-02) Toner: The Collapse of 'Long' AI Timelines
Helen Toner: The collapse of 'long' AI timelines. Nothing embodies the acceleration of AI progress more viscerally than the prospect of human-level AGI arriving not in some distant sci-fi future, but within the span of a few years—or, at most, a couple of decades.
This is why I’m thrilled to publish this guest essay by Helen Toner, whose insights on the governance of advanced AI are deeply informed, unusually clear-headed and urgently needed.
She is widely recognized as an AI policy expert and researcher with over a decade of experience in the field. She served on OpenAI’s board of directors from 2021 until the leadership crisis in November 2023.
Whether you think AGI is five years away or fifty, Helen argues that we no longer have the luxury of treating these questions as fringe.
Arguments about timelines typically refer to “timelines to AGI,” but throughout this post I’ll mostly refer to “advanced AI” or “human-level AI” rather than “AGI.” In my view, “AGI” as a term of art tends to confuse more than it clarifies, since different experts use it in such different ways.[1] So the fact that “human-level AI” sounds vaguer than “AGI” is a feature, not a bug—it naturally invites reactions of “human-level at what?” and “how are we measuring that?” and “is this even a meaningful bar?” and so on, which I think are totally appropriate questions as long as they’re not used to deny the overall trend towards smarter and more capable systems.
Back in the dark days before ChatGPT, proponents of “short timelines” argued there was a real chance that extremely advanced AI systems would be developed within our lifetimes—perhaps as soon as within 10 or 20 years. If so, the argument continued, then we should obviously start preparing.
Those with “long timelines” would counter that, in fact, there was no evidence that AI was going to get very advanced any time soon.
Whoever you think was right, for the purposes of this post I want to point out that this debate made sense.
Today, in this era of scaling laws, reasoning models, and agents, the debates look different.
What counts as “short” timelines is now blazingly fast—somewhere between one and five years until human-level systems.
For those in the “short timelines” camp, “AGI by 2027” had already become a popular talking point before one particular online manifesto made a splash with that forecast last year.
It’s obvious that we as a society are not ready to handle human-level AI and all its implications that soon. Fortunately, most people think we have more time.
But… how much more time?
Here are some recent quotes from AI experts who are known to have more conservative views:
- “probably over the next decade or two”
- “10 or 20 years from now”
- “I think actual transformative effects (e.g. most cognitive tasks being done by AI) is decades away (80% likely that it is more than 20 years away).”
These “long” timelines sure look a lot like what we used to call “short”!
To be clear, this doesn’t mean:
- We’ll definitely have human-level AI in 20 years.
- We definitely won’t have human-level AI in the next 5 years.
- Human-level AI will definitely be built with techniques that are popular today.
But it does mean:
- Dismissing discussion of AGI, human-level AI, transformative AI, superintelligence, etc. as “science fiction” should be seen as a sign of total unseriousness.
- If you want to argue that human-level AI is extremely unlikely in the next 20 years, you certainly can, but you should treat that as a minority position where the burden of proof is on you.
- We need to leap into action on many of the same things that could help if it does turn out that we only have a few years. These will likely take years and years to bear fruit in any case, so if we have a decade or two, we need to make use of that time.
[1] Personally, my favorite description of AGI is from Joe Carlsmith: “You know, the big AI thing; real AI; the special sauce; the thing everyone else is talking about.” I routinely hear people sliding back and forth between extremely different definitions, including “AI that can do anything a human can do,” “AI that can perform a majority of economically valuable tasks,” “AI that can match humans at most cognitive tasks,” “AI that can beat humans at all cognitive tasks,” etc. I hope to dig into the potentially vast gulfs between these definitions in a future post.