OpenAI’s cofounder and former chief scientist, Ilya Sutskever, stepped back into the public eye with a bang at this year’s Conference on Neural Information Processing Systems (NeurIPS) in Vancouver. After spending much of the year out of the spotlight following his departure from OpenAI to launch Safe Superintelligence Inc., Sutskever delivered a keynote that could very well redefine the future trajectory of artificial intelligence.
“Pre-training as we know it will unquestionably end,” Sutskever declared, signaling a seismic shift in how AI models are developed. Pre-training, the initial phase in which AI models like those behind ChatGPT learn from vast swathes of unstructured data, may be reaching its limits. According to Sutskever, the internet, the primary source of this data, is a finite resource, much like fossil fuels.

“We’ve achieved peak data and there’ll be no more,” he stated, emphasizing that AI developers must now innovate with the data at hand because “there’s only one internet.” This observation points to a looming challenge: how to continue advancing AI when the traditional pipeline of training data is drying up.
Sutskever’s vision for the future of AI involves models that are not just larger but fundamentally different in operation. He spoke of AI becoming “agentic,” meaning these systems would operate autonomously, making decisions, executing tasks, and interacting with other software independently. This shift towards agentic AI isn’t just about efficiency; it’s about capability. These models, he believes, will possess genuine reasoning abilities, moving beyond mere pattern recognition to something that resembles human-like problem-solving.
“The more a system reasons, the more unpredictable it becomes,” Sutskever noted, drawing a parallel with chess-playing AI that can outmaneuver human grandmasters in ways that are not predictable by human standards. He envisions AI that can “understand things from limited data” and “not get confused,” suggesting a leap towards AI that can learn and adapt with far less data than what’s currently needed.
In an intriguing twist, Sutskever compared the scaling of AI to evolutionary biology, particularly how brain size scaled in hominids. He posited that AI could evolve similarly, finding new ways to scale intelligence beyond the brute-force data accumulation of pre-training.
The discussion took an even more speculative turn when an audience member posed a question about the rights and freedoms of AI. Sutskever’s response was reflective, acknowledging the complexity and philosophical depth of such questions. “I feel like in some sense those are the kind of questions that people should be reflecting on more,” he said, admitting his hesitance to provide definitive answers due to the need for a comprehensive, possibly governmental framework.
Cryptocurrency was humorously brought into the conversation as a potential mechanism for AI rights, though Sutskever was cautious not to weigh in deeply on this. Instead, he encouraged speculation, acknowledging the unpredictability of AI’s future societal integration. “Maybe that will be fine… I think things are so incredibly unpredictable. I hesitate to comment but I encourage the speculation,” he mused, leaving the audience with much to ponder.
Sutskever’s address at NeurIPS suggests we’re at a pivotal moment in AI development. As the well of pre-training data runs dry, the community must look towards new methodologies, possibly leading to more autonomous, reasoning-capable AI systems. This shift could not only change how AI is built but also how it interacts with and integrates into our society, raising profound questions about control, ethics, and even the rights of intelligent machines.