GadgetBond


Scientists unite to prevent AI from creating bioweapons

Top minds in biology and AI unite to safeguard emerging protein design technologies. The landmark agreement prioritizes responsible innovation to maximize societal benefits while mitigating bioweapons risks.

By Shubham Sawarkar, Editor-in-Chief
Mar 10, 2024, 8:40 AM EDT
Image: responsiblebiodesign

In the bustling halls of Congress last year, a stark warning echoed from Dario Amodei, the chief executive of the prominent artificial intelligence (AI) start-up Anthropic. He cautioned that emerging AI technologies could soon enable even unskilled and malicious actors to orchestrate large-scale biological attacks, unleashing viruses or toxic substances capable of widespread disease and death.

Amodei’s words sent shockwaves through the Senate chambers, igniting alarm across party lines. Meanwhile, AI researchers in industry and academia engaged in heated debates, grappling with the gravity of the threat he had outlined.

Now, more than 90 biologists and other scientists specializing in AI-driven protein design have signed an agreement intended to ensure their groundbreaking research remains a force for good, without exposing the world to catastrophic harm.

Among the signatories are luminaries such as Nobel laureate Frances Arnold, representing laboratories in the United States and beyond. These pioneers argue that the latest AI technologies hold far more promise than peril, paving the way for new vaccines, life-saving medicines, and scientific breakthroughs yet unimagined.

“As scientists engaged in this work, we believe the benefits of current AI technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads, a rallying cry for responsible innovation.

The accord does not seek to suppress the development or distribution of AI technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material – the critical link that could transform theoretical designs into tangible bioweapons.

“Protein design is just the first step in making synthetic proteins,” explained David Baker, the director of the Institute for Protein Design at the University of Washington, who played a pivotal role in shepherding the agreement. “You then have to actually synthesize DNA and move the design from the computer into the real world – and that is the appropriate place to regulate.”

This initiative is part of a broader effort to weigh the risks and rewards of AI, as experts sound alarms about the technology’s potential to spread disinformation, displace jobs at an unprecedented rate, and – in the most dire scenarios – imperil the very existence of humanity itself. Tech companies, academic labs, regulators, and lawmakers find themselves at the forefront of a complex challenge: understanding these risks and devising strategies to address them.

Amodei’s congressional testimony struck a chord, as he contended that large language models (LLMs) – the cutting-edge technology powering online chatbots – could soon aid attackers in developing new bioweapons. However, he acknowledged that such a capability does not exist today. In fact, Anthropic’s own detailed study revealed that, for someone attempting to acquire or design biological weapons, LLMs offered only marginally more utility than a standard internet search engine.

While Amodei and others worry that the convergence of improving LLMs and other technologies could give rise to a serious threat within two to three years, OpenAI – the creators of the renowned ChatGPT chatbot – conducted a similar study that found LLMs pose no significantly greater danger than search engines. Aleksander Mądry, a computer science professor at MIT and OpenAI’s head of preparedness, stated that while researchers will undoubtedly continue refining these systems, he has yet to encounter evidence suggesting they could create novel bioweapons.

Current LLMs are trained on vast troves of digital text scraped from the internet, enabling them to regurgitate or recombine existing information, including data on biological attacks. However, in the quest to accelerate the development of new medicines, vaccines, and other beneficial biological materials, researchers are beginning to construct similar AI systems capable of generating original protein designs.

Biologists acknowledge that such technology could aid attackers in designing biological weapons, but they emphasize that actually constructing these weapons would necessitate multi-million-dollar laboratories equipped with DNA manufacturing equipment.

“There is some risk that does not require millions of dollars in infrastructure, but those risks have been around for a while and are not related to AI,” said Andrew White, a co-founder of the nonprofit Future House and one of the biologists who signed the agreement.

The biologists’ call to action includes developing security measures to prevent DNA manufacturing equipment from being misused with harmful materials – though the specifics of these measures remain unclear. They also advocate for safety and security reviews of new AI models before their release.

Notably, the agreement does not argue for bottling up these technologies or restricting their dissemination. As Rama Ranganathan, a professor of biochemistry and molecular biology at the University of Chicago, and a signatory of the agreement, stated, “These technologies should not be held only by a small number of people or organizations. The community of scientists should be able to freely explore them and contribute to them.”

