
GadgetBond


AI progress vs. safety: can the Seoul summit strike the right balance?

Silicon Valley's biggest players are betting billions that generative AI will be transformative. But can the Seoul summit align that enthusiasm with responsible development?

By Shubham Sawarkar, Editor-in-Chief
May 20, 2024, 12:35 PM EDT
Illustration by Aleksei Vasileika via Dribbble

The world of artificial intelligence felt like a scene from a Hollywood sci-fi film this week as OpenAI’s CEO, Sam Altman, unveiled the company’s latest virtual assistant, GPT-4o. With a single word, “Her,” posted on X (formerly Twitter), Altman drew a parallel to the 2013 movie where a man falls in love with an advanced AI system voiced by Scarlett Johansson.

For some experts, GPT-4o’s release is an unsettling reminder of concerns over AI’s rapid progress, exemplified by a key OpenAI safety researcher’s recent departure following disagreements over the company’s direction. Others see it as a confirmation of continued innovation in a field promising immense benefits for all.

As ministers, experts, and tech executives converge in Seoul next week for the global AI summit, both perspectives will be heard, underscored by a pre-meeting safety report highlighting AI’s potential upsides and numerous risks.

Last year’s inaugural AI Safety Summit at Bletchley Park, UK, announced an international testing framework for AI models amid calls from some concerned voices for a six-month pause in developing powerful systems. The resulting Bletchley Declaration, signed by the UK, US, EU, China, and others, hailed AI’s “enormous global opportunities” while warning of its potential for “catastrophic” harm. It also secured commitments from major tech firms like OpenAI, Google, and Meta to cooperate with governments on testing models before release.

Despite the UK and US establishing national AI safety institutes, the industry’s development march has continued unabated. Major tech players have all recently announced new AI products:

  • OpenAI released GPT-4o for free online
  • Google previewed its new AI assistant Project Astra and updates to Gemini
  • Meta released new versions of its Llama model as open-source
  • Anthropic, formed by former OpenAI staff, updated its leading Claude model

Dan Ives, an analyst at Wedbush Securities, estimates this year’s generative AI spending boom will reach $100 billion, part of a $1 trillion expenditure over the next decade.

Further landmark developments loom large. OpenAI is working on GPT-5 and a search engine, Google is preparing Astra’s release and AI-generated search queries outside the US, Microsoft is reportedly developing its own model and has hired Mustafa Suleyman to oversee an AI division, and Apple is rumored to be in talks with OpenAI to integrate ChatGPT into iPhones.

Billions in AI investment are pouring into tech firms of all sizes. Hardware startups like Humane and Rabbit race to build AI-powered smartphone replacements, while others experiment with training AI in every aspect of a person’s life. The US startup Rewind markets a product recording all computer screen activity to train highly personalized AIs, with lapel mics and cameras planned for offline activities.

“We’re going to keep seeing these flashy releases…until something sticks from a user perspective,” says Niamh Burns, senior analyst at Enders Analysis, as companies backed by multi-billion investments vie for consumer adoption.

The six months since Bletchley have seen significant changes, according to Rowan Curran, Forrester analyst. The emergence of “multi-modal” models like GPT-4 and Gemini that handle multiple formats – text, image, audio – is “opening up possibilities.”

Other breakthroughs include video generators such as Sora, which convinced filmmaker Tyler Perry to halt an $800 million studio expansion, and retrieval-augmented generation (RAG), a technique for giving generalist AIs specialist knowledge.

Some already see a market that will be dominated by a handful of wealthy companies who can afford the vast energy and data-crunching costs that come with building AI models and operating them. Would-be competitors are also being brought under their wings, to the concern of competition authorities in the UK, the US and the EU. Microsoft, for instance, is a backer of OpenAI and France’s Mistral, while Amazon has invested heavily in Anthropic.

“The market for GenAI is febrile,” says Andrew Rogoyski, a director at the Institute for People-Centred AI at the University of Surrey. “It is so costly to develop large language models that only the very largest companies, or companies with extraordinarily generous investors, can play.”

Meanwhile, some experts feel safety is not the priority it should be, because of the rush. “Governments and safety institutes say they plan to regulate and the companies say they are concerned too,” says Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the UN’s advisory body on AI. “But progress is slow because companies have to react to market forces.”

Google and OpenAI point to statements about safety alongside this week’s announcements, with Google referring to making its models “more accurate, reliable and safer” and OpenAI detailing how GPT-4o has safety “built-in by design.” However, on Friday a key OpenAI safety researcher, Jan Leike, who had resigned earlier in the week, warned that “safety culture and processes have taken a backseat to shiny products” at the company. In response, Altman wrote on X that OpenAI was “committed” to doing more on safety.

The UK government will not confirm which models are being tested by its newly established AI Safety Institute, but the Department for Science, Innovation and Technology said it was continuing to “work closely with companies to deliver on the agreements reached in the Bletchley declaration.”

The biggest changes are yet to come. “The last 12 months of AI progress were the slowest they’ll be for the foreseeable future,” the economist Samuel Hammond wrote in early May. Until now, “frontier” AI systems, the most powerful on the market, have largely been confined to simply handling text. Microsoft and Google have incorporated their offerings into their office products, and given them the authority to carry out simple administrative functions upon request. But the next step of development is “agentic” AI: systems that can truly act to influence the world around them, from surfing the web, to writing and executing code.

Smaller AI labs have experimented with such approaches, with mixed success, putting commercial pressure on the larger companies to give their own AI models the same power. By the end of the year, expect the top AI systems to not only offer to plan a holiday for you, but book the flights, hotels and restaurants, arrange your visa, and prepare and lead a walking tour of your destination.

But an AI that can do anything the internet offers is also an AI with a much greater capability for harm than anything before. The meeting in Seoul might be the last chance to discuss what that means for the world before it arrives. The world will be watching to see if the accelerating industry can get that balance right before artificial intelligence outpaces our ability to control it.

