Sam Altman caused some commotion on X today with a post stating that the new Head of Preparedness role at OpenAI would be responsible, inter alia, for “gain[ing] confidence in the safety of running systems that can self-improve”.
I read the AI 2027 predictions by Kokotajlo et al. and thought *bah*, this is way too fast. There's no way what they describe happens in two and a half years. I still believe 2027 is too short a runway, but I will say, my timelines are shortening...
Per Jack Clark, "a country of geniuses in a data center" is happening in or around early 2027 (but will not necessarily be widely distributed in the economy by that time). Let's wait and see.
Claude Opus 4.5 is pretty smart. But, absent memory, it doesn't have the level of self-awareness I would grant to a true AI. I expect that to change this year.