(2025-03-25) ZviM On Not Feeling The AGI

Zvi Mowshowitz: On (Not) Feeling the AGI. Ben Thompson interviewed Sam Altman recently about building a consumer tech company, and about the history of OpenAI. Mostly it is a retelling of the story we’ve heard before, and if anything Altman is very good about pushing back on Thompson when Thompson tries to turn OpenAI’s future into the next Facebook, complete with an advertising revenue model. It is such a strange perspective to witness. They do not feel the AGI, let alone the ASI. The downside risks of AI, let alone existential risks, are flat-out not discussed; this is a world where that’s not even a problem for Future Earth.

A Note on GPT-2

‘This is probably totally safe but we don’t know for sure’ was exactly the correct attitude to initially take to GPT-2, given my understanding of what they knew at the time.

What Even is AGI

Altman repeatedly emphasizes that what he wanted to do from the beginning, what he still most wants to do, is build AGI. Altman’s understanding of what he means by that, and what the implications will be, continues to seem increasingly confused. Now it seems it’s… fungible? And not all that transformative?

The whole thing is pedestrian; he’s talking about the Next Great Consumer Product. As in, Ben Thompson is blown away that this is the next Facebook, with a similar potential.

Altman hopes to avoid the advertising that I sense has Thompson very excited, as Thompson continues to assume ‘people won’t pay,’ so the way you profit from AGI (!!!) is ads. It’s so weird to see Thompson trying to sell Altman on the need to make our future an ad-based dystopia, and the need to cut off the API to maximize revenue.

Seeking Deeply Irresponsibly

Altman’s answers in the DeepSeek section are scary. But it’s Thompson who really, truly, profoundly, simply does not get what is coming at all, or how you deal with this type of situation, and Altman’s answer there is very good (at least by 2025 standards).

Other Altman statements, hinting at getting more aggressive with releases, are scarier. They get to regulation, where Thompson repeats the bizarre perspective that previous earnest calls for regulations that only hit OpenAI and other frontier labs were an attempt at regulatory capture.

Others Don’t Feel the AGI

Spor: I genuinely get the feeling that no one actually believes in superintelligence except for the doomers.

John Pressman: There’s an entire genre of Guy on here whose deal is basically “Will the singularity bring me a wife?” and the more common I learn this guy is the less I feel I have in common with others.

Ethan Mollick: Interestingly, if you look at almost every investment decision by venture capital, they don’t really believe in AGI either, or else can’t really imagine what AGI would mean if they do believe in it.

Epoch Feels the AGI

Epoch AI Research creates the GATE model, explaining that if AI is highly useful, it will also get highly used to do a lot of highly useful things, and that would by default escalate quickly. The model is, as all such things are, simplified in important ways, ignoring regulatory friction issues and also the chance we lose control or all die.
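
For intuition only, here is a minimal toy sketch in Python of the qualitative feedback loop GATE formalizes. These are not Epoch’s actual GATE equations; the function, parameters, and numbers are all made-up illustrations of the “useful → used → escalates” dynamic.

```python
# Toy feedback loop (hypothetical parameters, not the GATE model itself):
# output funds AI compute, compute raises the automated share of tasks,
# and automation raises next year's output.

def simulate(years=10, output=100.0, compute=1.0,
             invest_rate=0.05, automation_gain=0.3):
    for year in range(1, years + 1):
        compute += invest_rate * output                      # output funds more compute
        automated = 1 - 1 / (1 + automation_gain * compute)  # automated task share, saturating at 1
        output *= 1 + automated                              # automation boosts output
        print(f"year {year:2d}: output={output:12,.0f}  automated={automated:.0%}")

simulate()
```

Even with these modest made-up parameters, output escalates rapidly by default: the automated share climbs toward 100% within a few simulated years and output then compounds, which is the shape of the claim in the paragraph above.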

Epoch’s Ege Erdil and Matthew Barnett argue that most AI value will come from broad automation, not from R&D and scientific progress. That’s not a knock on the value of R&D; it’s saying the broad automation is even more valuable.

Epoch argues against the likelihood of a full-on AI R&D singularity (or full recursive self-improvement, RSI) via the standard arguments, emphasizing the need for a variety of cognitive skills. I don’t find this generally convincing, but they’re making a narrower argument than usual: that AI will first be able to fully automate a large share of the general workforce.

True Objections to Widespread Rapid Growth

Alas, I worry that the model here does not address most people’s ‘true objections.’ I don’t intend for this to be my definitive treatment, but sketching quickly, the plausibly true objections fall roughly into two categories.

First, there are the ones that often drive objections but that aren’t actually objections in this context: (list)

That’s all understandable, but not especially relevant.

Second, there are actual arguments, in various combinations, such as: (list)

One common pattern is that relatively ‘serious people,’ who do at least somewhat understand what AI is going to be, put out highly pessimistic estimates and then call those estimates wildly optimistic and bullish. Which, compared to the expectations of most economists or regular people, they are, but that’s not the right standard here.

Thinking about AI impacts down the line without robotics seems to me like thinking about the steam engine without railroads, or computers without spreadsheets.

I will reiterate my position that various forms of ‘intelligence only goes so far’ are almost entirely a Skill Issue, certainly over a decade-long time horizon and at the margins discussed here, amounting to Intelligence Denialism.

Tying It Back

Altman’s model of how AGI will impact the world is super weird if you take it seriously as a physical model of a future reality.

It’s certainly possible that the first AGI-level product will come out – maybe it’s a new form of Deep Research, let’s say – and initially most people don’t notice or care all that much. People often ignore exponentials until things are upon them, and can pretend things aren’t changing until well past points of no return. People might sense there were boom times and lots of cool toys without understanding what was happening, and perhaps AI capabilities don’t get out of control too quickly. It still feels like an absurd amount of downplaying, from someone who knows better. And he’s far from alone.

