OpenAI’s CEO, Sam Altman, has confirmed that the company is not currently training GPT-5, the presumed successor to its AI language model GPT-4, which was released in March. Altman’s comments came during a discussion at MIT on the threats posed by AI systems, where he was asked about an open letter calling for labs to pause the development of AI systems “more powerful than GPT-4” due to safety concerns. Altman dismissed the letter as lacking technical nuance but stressed that OpenAI was weighing the safety implications of its work.
While Altman’s statement may seem reassuring, it actually highlights a significant challenge in the debate about AI safety: the difficulty of measuring and tracking progress. Many in the industry have fallen for the fallacy of version numbers, assuming that higher numbers necessarily mean greater capability. This misconception is perpetuated by marketing conventions and reveals little about actual progress.
Instead, we should focus on capabilities: demonstrations of what AI systems can and can’t do, and predictions of how that may change over time. Even if OpenAI is not working on GPT-5, it is still expanding GPT-4’s capabilities and optimizing it, while others in the industry are building similarly ambitious tools. It’s clear that society has its hands full with the AI systems already available, and GPT-4 itself is still not fully understood.
Fears about AI safety should not be dismissed, but we need to move beyond simplistic assumptions based on version numbers toward a more nuanced understanding of progress. That will require ongoing evaluation and discussion of AI systems’ capabilities and risks, along with efforts to develop safety measures that keep pace with advances in the field.