On June 10, 2025, Sam Altman published a blog post entitled “The Gentle Singularity”, in which he wrote that “[w]e are past the event horizon; the takeoff has started”.
Interesting stuff. I have to say that based on my work and side-project experience (although I don't work at a lab) I'm a convert to the O-ring theory: the speed and outcome of a process are typically limited by its weakest link. It's basically just standard operations thinking, now that I think about it.
There are reasons you can't win with unlimited human interns or ICs now, and some of those apply to AI as well. When they fail to make a correct decision, more interns or AI agents typically don't resolve that bottleneck. You need a senior person or group to figure it out. If you have a well-defined reward function, you can brute-force it, RL-style, but that's relatively uncommon in real life. You may have it for certain tasks, but you rarely have it for full jobs.
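To make the reward-function point concrete, here's a toy sketch (the `reward_fn` is made up purely for illustration): when the objective is explicit and cheap to evaluate, sampling lots of candidates and keeping the best is enough; most real jobs don't come with such a function.

```python
# Toy illustration: with a well-defined, cheap-to-evaluate reward, brute force
# (sample a lot and keep the best) is a viable strategy.
import random

def reward_fn(x: float) -> float:
    # Stand-in for an explicit, automatically checkable objective.
    return -(x - 3.7) ** 2

best_x, best_r = None, float("-inf")
for _ in range(100_000):  # RL-flavoured "just try a huge number of candidates"
    x = random.uniform(-10, 10)
    r = reward_fn(x)
    if r > best_r:
        best_x, best_r = x, r

print(f"best x ~ {best_x:.3f}, reward ~ {best_r:.5f}")  # lands near 3.7
```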
So then let's say we can fully automate 50% of an AI researcher's job. That gets you a 2X speed-up at best, because you still have the remaining 50% of the job, and in practice you probably lose some of that speed-up because you end up taking on additional tasks you weren't doing before (a Jevons-paradox effect: the freed-up capacity gets absorbed by new work). You need meaningful new capabilities to automate beyond the 50%; you can't get there just by applying what you already have. You're now on some kind of exponential improvement curve, but you don't know over what time frame. And if the inputs required for further automation are also growing exponentially, perhaps at a faster rate than your improvements, then the next automation steps will actually take longer despite your exponential productivity curve.
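The "2X at best" bit is just Amdahl's law applied to a job instead of a program; a quick back-of-the-envelope (the function and parameter names are mine, not anything from the post):

```python
# Amdahl's-law-style bound: automate a fraction p of the job and make that
# part s times faster; the rest still runs at human speed.
def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

for s in (2, 10, 1_000_000):
    print(f"50% automated, {s}x faster on that half -> {overall_speedup(0.5, s):.2f}x overall")
# 1.33x, 1.82x, ~2.00x: the non-automated half dominates no matter how fast
# the automated half gets.
```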
So I think we're on the curve, but progress may not get subjectively faster.
This resonates a lot. One way I’d extend the O-ring framing is that, in AI research, the “weakest link” isn’t purely technical—and it isn’t static.
As automation improves, bottlenecks tend to migrate outward into social, organizational, and deployment layers (e.g. where models can actually be applied, governed, or monetized).
That’s why even large gains in partial automation often don’t feel like acceleration: you’re not just speeding up a fixed pipeline, you’re constantly re-exposing new O-rings. In that sense, subjective progress can stay flat even while objective capability compounds.
Very interesting indeed. So this is maybe a race between Anthropic, OpenAI, xAI, and Google in the West, plus maybe a couple of Chinese companies?
True, these are the main participants—the ones with enough resources.
But predicting the winner is hard. Automating research accelerates exploration, yet major breakthroughs might still require a kind of “intuition” or unexplainable insight.
Unlimited compute could in theory explore any reward function, but the funds, time, and social infrastructure needed are unpredictable.
So even with strong capabilities, financial and organizational constraints may shape outcomes.
Loved the clear writing.
Elon clearly believes AI is unbounded.
I agree with the article’s conclusion—it follows naturally from the logic of recursive improvement.
That said, once we factor in unavoidable social and economic constraints, I’d add two observations for discussion:
1. Relative model advantage does not automatically translate into real-world leverage.
Suppose one AI lab achieves automated AI research and gains a decisive speed advantage over others. What can it actually do with that capability?
Today, the biggest bottleneck is not model intelligence, but how models are embedded into society. Current GPT-style interfaces severely underutilize model capability; the chat interface simply cannot carry it. This is also why OpenAI’s consumer business struggles to monetize at Meta-scale.
Increasing model intelligence alone does not solve this. To fully deploy such capability, the lab would have to move into hard tech, manufacturing, energy, or similar domains. Unlike chat or media, these are not “gentle” applications. They would almost certainly trigger intense regulatory scrutiny—and if the advantage is truly uncatchable, some form of de facto takeover or strict control becomes inevitable.
2. During a gradual transition to fully automated research, survival depends on escaping the chat interface.
Even if AI-to-AI research progresses rapidly, there may be a long intermediate phase where intelligence grows fast but cannot yet deliver decisive advantages in manufacturing or hard tech. During this phase, financial viability depends almost entirely on whether labs can monetize in consumer domains beyond the chat box.
In programming and enterprise services, higher model capability is unlikely to justify higher prices. It mostly enables doing more work for the same—or lower—revenue, i.e. AI-driven deflation.
The only fundamentally non-infinite resource is human attention. In that sense, it’s not accidental that Meta has been one of the biggest winners in market cap growth since the ChatGPT moment.
Curious to hear counterarguments.
If Ilya is right about us re-entering the age of research, compute constraints may be bypassed by new architectures or improvements in algorithmic efficiency. So there is the possibility of a hard takeoff right now, as opposed to 2027 or thereabouts.