The world of artificial intelligence felt like a scene from a Hollywood sci-fi film this week as OpenAI’s CEO, Sam Altman, unveiled the company’s latest virtual assistant, GPT-4o. With a single word – “Her” – posted on X (formerly Twitter), Altman drew a parallel to the 2013 movie where a man falls in love with an advanced AI system voiced by Scarlett Johansson.
For some experts, GPT-4o’s release is an unsettling reminder of concerns over AI’s rapid progress, exemplified by a key OpenAI safety researcher’s recent departure following disagreements over the company’s direction. Others see it as a confirmation of continued innovation in a field promising immense benefits for all.
As ministers, experts, and tech executives converge in Seoul next week for the global AI summit, both perspectives will be heard, underscored by a pre-meeting safety report highlighting AI’s potential upsides and numerous risks.
Last year’s inaugural AI Safety Summit at Bletchley Park, UK, announced an international testing framework for AI models amid calls from some quarters for a six-month pause in the development of powerful systems. The resulting Bletchley Declaration, signed by the UK, US, EU, China, and others, hailed AI’s “enormous global opportunities” while warning of its potential for “catastrophic” harm. It also secured commitments from major tech firms like OpenAI, Google, and Meta to cooperate with governments on testing models before release.
Although the UK and US have since established national AI safety institutes, the industry’s march has continued unabated. The major tech players have all recently announced new AI products:
- OpenAI released GPT-4o for free online
- Google previewed its new AI assistant Project Astra and updates to Gemini
- Meta released new versions of its Llama model as open-source
- Anthropic, formed by former OpenAI staff, updated its leading Claude model
Dan Ives, an analyst at Wedbush Securities, estimates this year’s generative AI spending boom will reach $100 billion, part of a $1 trillion expenditure over the next decade.
Further landmark developments loom large. OpenAI is working on GPT-5 and a search engine; Google is preparing Astra’s release and the rollout of AI-generated search answers outside the US; Microsoft is reportedly developing its own model and has hired Mustafa Suleyman to oversee an AI division; and Apple is rumored to be in talks with OpenAI to integrate ChatGPT into iPhones.
Billions in AI investment are pouring into tech firms of all sizes. Hardware startups such as Humane and Rabbit are racing to build AI-powered smartphone replacements, while others are experimenting with training AI on every aspect of a person’s life. The US startup Rewind markets a product that records all on-screen computer activity to train highly personalized AIs, with lapel mics and cameras planned to capture offline activity.
“We’re going to keep seeing these flashy releases…until something sticks from a user perspective,” says Niamh Burns, senior analyst at Enders Analysis, as companies backed by multibillion-dollar investments vie for consumer adoption.
The six months since Bletchley have seen significant changes, according to Forrester analyst Rowan Curran. The emergence of “multi-modal” models such as GPT-4o and Gemini, which handle multiple formats – text, image, audio – is “opening up possibilities.”
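For readers curious what “multi-modal” looks like in practice, here is a minimal sketch of a single request mixing text and an image, using OpenAI’s published Python client as one example. The image URL is a placeholder, and the call assumes an `OPENAI_API_KEY` is set in the environment.

```python
# Minimal multi-modal request: one user message combining text and an image.
# Assumes the openai package is installed and OPENAI_API_KEY is configured;
# the image URL below is a placeholder, not a real asset.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this photo."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/street-scene.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)
```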
Other breakthroughs include video generators such as Sora, which convinced filmmaker Tyler Perry to halt an $800 million studio expansion, and retrieval-augmented generation (RAG), a technique that lets a generalist model consult a reference corpus so it can answer questions in specialist domains.
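In outline, RAG is simple: fetch the passages most relevant to a question, then place them in the prompt so the model answers from that material rather than from memory alone. The sketch below is a toy illustration, not any vendor’s implementation: the keyword-overlap retriever and the stubbed `call_model` function are stand-ins for the embedding search and hosted LLM a production system would use.

```python
# Toy retrieval-augmented generation: rank documents by keyword overlap,
# put the best matches into the prompt, then call a (stubbed) model.
# Real systems use embedding search and an LLM API in place of the stubs.

DOCUMENTS = [
    "GPT-4o accepts text, image and audio input in a single model.",
    "The Bletchley Declaration was signed at the 2023 AI Safety Summit.",
    "Retrieval-augmented generation grounds answers in source documents.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retriever; a stand-in for vector search."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in DOCUMENTS]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def call_model(prompt: str) -> str:
    """Stub so the sketch runs offline; swap in any LLM call here."""
    return f"[model answer grounded in {len(prompt)} characters of prompt]"

def answer(query: str) -> str:
    # Assemble a prompt pairing the question with retrieved context.
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_model(prompt)

print(answer("What was signed at Bletchley?"))
```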
Some already see a market that will be dominated by a handful of wealthy companies that can afford the vast energy and data-crunching costs of building and operating AI models. Would-be competitors are also being brought under their wing, to the concern of competition authorities in the UK, the US and the EU. Microsoft, for instance, is a backer of OpenAI and France’s Mistral, while Amazon has invested heavily in Anthropic.
“The market for GenAI is febrile,” says Andrew Rogoyski, a director at the Institute for People-Centred AI at the University of Surrey. “It is so costly to develop large language models that only the very largest companies, or companies with extraordinarily generous investors, can play.”
Meanwhile, some experts feel that, amid the rush, safety is not getting the priority it should. “Governments and safety institutes say they plan to regulate, and the companies say they are concerned too,” says Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the UN’s advisory body on AI. “But progress is slow because companies have to react to market forces.”
Google and OpenAI point to statements about safety alongside this week’s announcements, with Google referring to making its models “more accurate, reliable and safer” and OpenAI detailing how GPT-4o has safety “built-in by design”. However, on Friday a key OpenAI safety researcher, Jan Leike, who had resigned earlier in the week, warned that “safety culture and processes have taken a backseat to shiny products” at the company. In response, Altman wrote on X that OpenAI was “committed” to doing more on safety.
The UK government will not confirm which models are being tested by its newly established AI Safety Institute, but the Department for Science, Innovation and Technology said it was continuing to “work closely with companies to deliver on the agreements reached in the Bletchley declaration.”
The biggest changes are yet to come. “The last 12 months of AI progress were the slowest they’ll be for the foreseeable future,” the economist Samuel Hammond wrote in early May. Until now, “frontier” AI systems – the most powerful on the market – have largely been confined to handling text. Microsoft and Google have incorporated their offerings into their office products and given them the ability to carry out simple administrative tasks on request. But the next step of development is “agentic” AI: systems that can act on the world around them, from browsing the web to writing and executing code.
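Stripped to its skeleton, an agentic system is a loop: the model proposes an action, software executes it, and the result is fed back until the model signals it is finished. The sketch below is deliberately a toy, assuming a scripted `fake_model` stand-in and two hypothetical tools, to show the control flow rather than any real product.

```python
# Minimal agent loop: the model proposes a tool call, the loop executes it
# and feeds the observation back, until the model declares the task done.
# fake_model is a scripted stand-in; real systems call an LLM API here.

TOOLS = {
    "search_flights": lambda args: f"Cheapest flight to {args}: $420",
    "book_hotel":     lambda args: f"Hotel booked in {args}",
}

def fake_model(history: list[str]) -> dict:
    """Hypothetical planner that decides one step at a time."""
    steps = [
        {"tool": "search_flights", "args": "Seoul"},
        {"tool": "book_hotel", "args": "Seoul"},
        {"done": True, "summary": "Trip planned and booked."},
    ]
    return steps[len(history)]

def run_agent(goal: str) -> str:
    history: list[str] = []
    while True:
        action = fake_model(history)
        if action.get("done"):
            return action["summary"]
        result = TOOLS[action["tool"]](action["args"])
        history.append(result)  # feed the observation back to the model

print(run_agent("Plan a holiday in Seoul"))
```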
Smaller AI labs have experimented with such approaches, with mixed success, putting commercial pressure on the larger companies to give their own models the same powers. By the end of the year, expect the top AI systems not only to offer to plan a holiday for you, but to book the flights, hotels and restaurants, arrange your visa, and prepare and lead a walking tour of your destination.
But an AI that can do anything the internet offers is also an AI with a much greater capability for harm than anything before. The meeting in Seoul might be the last chance to discuss what that means for the world before it arrives. The world will be watching to see whether the accelerating industry can balance innovation against safety before artificial intelligence outpaces our ability to control it.
