Discussion about this post

Greg G · 16h · Edited

Interesting stuff. I have to say that, based on my work and side-project experience (although I don't work at a lab), I'm a convert to the O-ring theory: the speed and outcome of a process are typically limited by its weakest link. It's basically standard operations thinking, now that I think about it.

There are reasons you can't win with unlimited human interns or ICs now, and some of those apply to AI as well. When they fail to make a correct decision, more interns or AI agents typically don't resolve that bottleneck; you need a senior person or group to figure it out. If you have a well-defined reward function, you can brute-force it with RL, but that's relatively uncommon in real life. You may have one for certain tasks, but you rarely have one for full jobs.
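To make the reward-function point concrete, here's a toy hill-climbing sketch in Python. The target string and character-match reward are hypothetical stand-ins, not anything a lab actually optimizes; the point is just that when the reward fully specifies "better", cheap repeated attempts can substitute for judgment:

```python
import random

# Toy stand-in for a well-defined reward function: fraction of characters
# that match a fixed target. Both TARGET and the scoring rule are made up.
TARGET = "ship it"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def reward(candidate: str) -> float:
    """Fully specified score: fraction of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET)) / len(TARGET)

# Start from a random guess and brute-force improve it, one mutation at a time.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
while reward(current) < 1.0:
    i = random.randrange(len(TARGET))                       # mutate one position
    candidate = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    if reward(candidate) >= reward(current):                # keep if no worse
        current = candidate
print(current)  # converges because the reward defines "better" unambiguously
```

Without a reward like this, the loop has no acceptance test, and that's exactly where the senior person comes back in.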

So then let's say we can fully automate 50% of an AI researcher's job. That gets you a 2X speed-up at best, because you still have the remaining 50% of the job, and you probably lose some of that speed-up because you now end up doing additional tasks you weren't doing before (Jevons paradox). You need meaningful new capabilities to automate beyond the 50%; you can't get there just by applying what you already have. You're now on some kind of exponential improvement curve, but you don't know over what time frame. And if the inputs required for further automation are also growing exponentially, perhaps at a faster rate than your improvements, then the next automation steps will actually take longer despite your exponential productivity curve.
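The 50%-to-2X cap is just Amdahl's-law arithmetic, and the second half of the argument is a race between two exponentials. A minimal sketch; the fractions and growth rates below are illustrative assumptions, not measurements:

```python
def speedup(automated_fraction: float, automation_speed: float = float("inf")) -> float:
    """Amdahl's law: overall speedup when a fraction of the job is accelerated
    and the rest still runs at human speed."""
    return 1.0 / ((1.0 - automated_fraction) + automated_fraction / automation_speed)

print(speedup(0.5))   # 2.0: automating half the job caps you at 2X
print(speedup(0.9))   # ~10.0: you need 90% automation just to reach 10X

# The two-exponentials race: if the effort each further automation step
# requires grows faster than your productivity, steps take longer in
# wall-clock time even though productivity is rising exponentially.
productivity, effort_needed = 1.0, 1.0
for step in range(1, 6):
    productivity *= 1.5       # assumed exponential improvement rate
    effort_needed *= 2.0      # inputs required grow even faster (assumed)
    print(f"step {step}: wall-clock time = {effort_needed / productivity:.2f}")
# time per step grows ~1.33X per step: the curve is exponential,
# but progress feels slower, not faster
```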

So I think we're on the curve, but progress may not get subjectively faster.

Owen Lewis

Very interesting indeed. So is this a race between Anthropic, OpenAI, xAI, and Google in the West, plus maybe a couple of Chinese companies?

