The Race to RSI
Spring 2026 Update
In January, Dario Amodei told a stunned audience at Davos that the coding agents developed by Anthropic will be used “to create the new generation of models, and speed it up, create a loop that would increase the speed of [AI] model development”. Anthropic views Claude Code as the path towards automation of AI research and, eventually, recursive self-improvement (RSI). Similarly, OpenAI is using Codex to accelerate its own development of AI models and expects that a future version of Codex will eventually become the automated AI research intern:
Where AI researchers have great hope to help themselves... is that if you could just say ‘hey, Codex, this is the idea, and it’s fairly clear what I’m saying, please just implement it so it runs fast on this 8-machine setup or 100-machine setup’. I think that’s what OpenAI [means by] an AI intern by the end of [2026].
—Lukasz Kaiser, OpenAI
OpenAI and Anthropic are racing to automate AI research and reach RSI. But is it a two-horse race, or might any other labs join them? Read on to find out.
OpenAI
OpenAI’s goal announced in October 2025 is to develop an automated AI research intern (i.e., the system as described by Lukasz Kaiser, above), running on “hundreds of thousands of GPUs”, by September 2026. Jakub Pachocki recently said that, based on the improving coding capabilities of Codex, he thinks the intern is “on track” to be developed by September - now just 5 months away. Pachocki also described the differences between the “intern” and the fully automated AI researcher (which OpenAI expects to develop by March 2028):
The way I would distinguish a research intern from a full automated researcher is the span of time that we would have it work mostly autonomously or the specificity of the task that has to be given. I don't expect we'll have systems where you tell them: “Go improve your model capability, go solve alignment” - and they will do it. Not this year. I think we might get there at some point. But for more specific technical ideas - like this particular idea how to improve the models, how to run this evaluation differently - I think we have the pieces that we mostly just need to put together.
In another interview, Pachocki said that the “intern” is a system to which “you can delegate tasks that would take a person a few days”.
It is not clear whether OpenAI is deliberately being conservative with its September 2026 timeline for developing the “intern” and/or its March 2028 timeline for developing the fully automated AI researcher. Interestingly, Sam Altman recently said that “it’s going to be a faster takeoff than [he] originally thought”.
Anthropic
Anthropic’s publicly stated timeline for reaching fully automated AI research is significantly more aggressive than OpenAI’s. Dario Amodei expects 2026 to “have a radical acceleration that surprises everyone… I think we are on the precipice of something incredible”. According to Anthropic’s Frontier Safety Roadmap, released in February 2026, it is “plausible, as soon as early 2027, that [Anthropic’s] AI systems could fully automate, or otherwise dramatically accelerate, the work of large, top-tier teams of human researchers in domains [including development of] AI itself”. Echoing this timeline, Anthropic co-founder and chief science officer Jared Kaplan told Time magazine in March 2026 that fully automated AI research could be “as little as a year away”.
Also in line with these predictions, Jack Clark continues to believe that “a country of geniuses in a datacenter” (i.e., AGI)1 will be achievable in late 2026, and “running many copies” in 2027.2
Google
Sitting across from Dario Amodei at Davos in January 2026, Demis Hassabis was diplomatically skeptical about coding models leading to RSI:
The full closing of the loop, I think is an unknown... I think it's possible to do, you may need AGI itself to be able to do that in some domains where there's more messiness around them [and] it's not so easy to verify your answer very quickly. There are NP-hard domains, and I also include for AGI physical AI, robotics. And then you've got hardware in the loop that may limit how fast the self-improvement systems can work - but I think in coding and mathematics, I can definitely see that working.
“If self-improvement doesn’t deliver the goods on its own”, Hassabis said, “then we’ll need other things to work” - i.e., world models, robotics and continual learning.
Under Demis Hassabis’ leadership, Google has indeed focused on reaching AGI via the path of developing continual learning, world models and “physical AI” (i.e., robotics). This is a vastly different path from the one currently being pursued by OpenAI and Anthropic. Hassabis estimates that building AI along this path will result in Google achieving AGI in 5 to 10 years.
But is there a change coming at Google? On April 20, 2026, The Information reported that Sergey Brin has formed a “strike team” to improve Google’s coding models. “The end goal”, the article reads, “is AI takeoff or AI that can improve itself… Brin has told staffers that improving Google AI’s coding abilities is a step toward that eventual goal.”
Is Google joining the race? If so, will it commit enough compute and other resources to actually catch up with Anthropic and OpenAI? We will find out over the next few months.
xAI
There is no publicly available evidence to date that xAI is focused on achieving automated AI research or RSI. In January 2026, reports emerged that xAI’s team was using Anthropic’s models through Cursor instead of using Grok. And when co-founder Jimmy Ba left xAI earlier this year, he tweeted that he was leaving to “recalibrate his gradient on the big picture” because “[r]ecursive self improvement loops likely go live in the next 12mo.”
In response, Elon Musk has chosen to focus his energies on beating Anthropic on coding capabilities.3 On April 21, 2026, SpaceX announced that it will be “working closely together” with Cursor “to create the world’s best coding and knowledge work AI”. As part of the deal, SpaceX will either pay Cursor $10B for this collaboration in 2026 or, at its option, will purchase Cursor for $60B.
Does xAI’s senior leadership realize that coding models and the resulting enterprise revenue are merely an (extremely useful) milestone on the path to RSI, or is xAI’s goal limited to mimicking Anthropic’s success in delivering agentic models to enterprises? Time will tell.
Meta
The Meta Superintelligence Labs team, assembled at great cost in mid-2025, has been working for many months on developing new AI models and related tools. Thus far, these efforts have culminated only in the release of Muse Spark, a new reasoning model.
There is no publicly available evidence currently that Meta is focusing any attention on coding models, automation of AI research or RSI.
Microsoft
An underrated player in the quest for AGI, Microsoft holds a major trump card: its licensing deal with OpenAI gives Microsoft rights to OpenAI’s “research IP” (including models intended for internal deployment or research only - which should include automated AI research models) until the earlier of 2030 or verification of OpenAI’s declaration of AGI by an independent expert panel. After OpenAI declares AGI, we can expect Microsoft to use this research IP to undertake its own quest for superintelligence.
Possibly in preparation for this move (at least in part), Microsoft has been aggressively expanding its data center capacity.
Others?
DeepSeek has been suspiciously quiet recently, with no major model releases since December 2025. Given DeepSeek’s technical prowess and taste, it would not surprise the author if it turned out that DeepSeek has built its own coding model internally and is using it to accelerate its own coding and AI research capabilities. These efforts, if they exist, may be significantly constrained by compute availability and other factors.
For now, the smaller U.S. labs and “neolabs” (e.g., SSI, Thinking Machines, Core Automation - just to name a few) generally appear to be headed in different directions from the one chosen by OpenAI and Anthropic.4
Finally, time will tell whether any other Chinese labs are willing and able to join the race to RSI.
1. As defined in Dario Amodei’s “Machines of Loving Grace”.
2. Jack kindly provided a detailed explanation of why he thinks this timeline is plausible in a thread on X, which is well worth reading in its entirety.
3. The goal, as stated by Musk in February 2026, was to “get pretty close [to Anthropic] by April, and roughly similar by May, so probably better by June”. As of the date of this article (April 22, 2026), this stated goal remains elusive.
4. This is understandable because, among other things, automation of AI research will likely be a very compute-intensive path. For example, as noted above, OpenAI’s automated AI research intern will likely be running on hundreds of thousands of GPUs.

