Artificial general intelligence (AGI)—a theoretical form of AI capable of reasoning at or beyond human levels—remains an area of intense debate and speculation. Even so, some experts at the forefront of AI research believe a version of AGI could emerge within the next few years.
Miles Brundage, former head of policy research and AGI readiness at OpenAI, shared on the Hard Fork podcast that we are on the verge of developing “systems that can essentially perform any task a person can do remotely on a computer,” including controlling a mouse and keyboard or even simulating a human in a video chat. “Governments should be thinking about what that means in terms of sectors to tax and education to invest in,” Brundage advised.
The race to AGI has become an almost obsessive focus among industry watchers, and some of the most influential voices in the field suggest it may be only a few years away. John Schulman, an OpenAI cofounder who recently left the company, has echoed that view, estimating that AGI could arrive within a few years. Similarly, Dario Amodei, CEO of Anthropic, a competitor to OpenAI, projects that an initial form of AGI may emerge by 2026.
Brundage, who recently departed OpenAI after six years, has an insider’s perspective on the company’s AGI development timeline. He clarified that safety concerns did not drive his decision to leave, stating, “I’m pretty confident that there’s no other lab that is totally on top of things.”
Explaining his exit in a post on X, Brundage noted his intention to make a broader impact on AI policy and regulation from outside the private sector. Expanding on this on Hard Fork, he gave two main reasons for his decision: “First, I wasn’t able to work on everything I wanted to, including broader industry-wide issues like regulation. Second, I wanted to be more independent and unbiased—so my views wouldn’t just be seen as corporate spin.”