Covering Scientific & Technical AI | Wednesday, January 8, 2025

OpenAI Sets Ambitious Course Toward Superintelligence 

Sam Altman, CEO of OpenAI, starts 2025 with a bold declaration for the future of artificial intelligence. According to Altman, OpenAI is now confident in its understanding of how to create AGI (artificial general intelligence) and is shifting focus toward the next frontier: superintelligence.

OpenAI's latest model, o3, which was unveiled in December and is undergoing safety evaluations, successfully passed the ARC-AGI challenge, a leading AGI benchmark. Under standard compute conditions, the model achieved an impressive score of 75.7%, with a more powerful version reaching 87.5%. While this milestone in the ARC-AGI test is notable, it does not yet confirm that OpenAI has unlocked the path to AGI.

“Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don't think o3 is AGI yet,” shared François Chollet, the creator of ARC. “o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.”

OpenAI appears to be fully committed to the concept of superintelligence and is investing heavily in this pursuit. In July 2023, the company formed a team co-led by Ilya Sutskever and Jan Leike with a focus on superintelligence and AGI, and dedicated a significant 20% of its compute resources to support the team's work.

Interestingly, Ilya Sutskever, co-founder of OpenAI, left the company in May 2024 and was replaced as chief scientist by Jakub Pachocki. Since Sutskever's departure, there has been no public announcement of his return to OpenAI, and he is reported to have played a key role in the failed attempt to oust Altman as CEO. That makes his reappearance as co-leader of the Superalignment team something of a surprise.

OpenAI has decided to label its approach to addressing the challenges of aligning superintelligent AI with human values as Superalignment. This term reflects the growing concern within the AI community about how to ensure that AI systems, once they surpass human intelligence, do not act in ways that are harmful or misaligned with human goals.

Ilya Sutskever, co-founder of OpenAI
Credit: Stanford HAI

So what’s the big deal with AGI and superintelligence and why has it created so much hype? The buzz around AGI and superintelligence is driven by their potential to revolutionize everything from science to everyday life. 

While there is no consensus as to what AGI is exactly, it represents computers or machines that can perform any human task autonomously. Superintelligence would go even further, surpassing human capabilities in every domain.

“Superintelligence will be the most impactful technology humanity has ever invented and could help us solve many of the world’s most important problems,” shared OpenAI in an article introducing superalignment. “But the vast power of superintelligence could also be very dangerous and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade.”

OpenAI has faced criticism for some of its strategic moves, including the disbanding of teams focused on AI safety. That move led to the departure of several safety-focused researchers from the company.

Altman’s claims about AGI and superintelligence have been met with skepticism. His shifting timelines for when OpenAI will reach superintelligence have fueled doubts. Fundamental issues with AI, such as hallucinations, astonishingly high energy demands, and a tendency to make mistakes, also raise concerns about the technology's readiness for the big promises Altman is making.

Despite the challenges, Altman remains confident in OpenAI's ability to achieve its ambitious goals. While his vision is optimistic, it raises the crucial question: can OpenAI deliver on such high-stakes promises, or will the hype outpace reality?

Sam Altman, CEO of OpenAI (left), and Microsoft CEO Satya Nadella

“We love our current products, but we are here for the glorious future,” Altman wrote in his personal blog. “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.”

Altman has suggested that AI systems are on a path to becoming smarter than humans, and that with superintelligence, they will only get smarter.

OpenAI’s ambitious claim to be on the brink of achieving AGI and even venturing into superintelligence is undoubtedly audacious. However, considering the company’s past accomplishments, it’s hard to dismiss entirely. If OpenAI does succeed in realizing AGI, the pace of change will be nothing short of staggering. It could potentially reshape every aspect of our lives in ways we can only begin to imagine.

AIwire