
Anthropic blocks OpenAI from Claude API access over terms violation

OpenAI is no longer allowed to use Claude APIs after Anthropic accused it of violating agreements during GPT-5 prep.

By Shubham Sawarkar, Editor-in-Chief
Aug 6, 2025, 4:44 AM EDT

Image: Anthropic

Anthropic’s decision to yank OpenAI’s access to its Claude models has become one of the latest skirmishes in the escalating “AI wars.” On August 1, 2025, the San Francisco–based startup informed its much larger rival that it was suspending developer-level API keys, citing “commercial terms” violations that forbade using Claude to “build a competing product or service” or to “reverse-engineer” the model.

According to multiple people familiar with the matter, the flashpoint was OpenAI engineers’ internal use of Claude Code, Anthropic’s AI-powered coding assistant. OpenAI reportedly hooked the model into proprietary test harnesses to benchmark code generation, creative writing, and safety performance against its own systems in preparation for the imminent GPT-5 release. For Anthropic, whose commercial terms explicitly bar customers from deploying Claude to “train competing AI models,” this crossed the line.
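
A quick sketch can make the mechanics concrete. The snippet below shows the general shape of a test harness that feeds a fixed prompt suite to Claude through Anthropic’s public Python SDK and collects the outputs for offline grading; the prompts and model name are illustrative assumptions, not a reconstruction of OpenAI’s internal tooling.

```python
# Illustrative sketch of a cross-model benchmark harness.
# Assumes ANTHROPIC_API_KEY is set; prompts and model name are placeholders.
import anthropic

client = anthropic.Anthropic()

CODING_PROMPTS = [
    "Write a Python function that merges two sorted lists.",
    "Explain the bug in: for i in range(len(xs)): xs.remove(xs[i])",
]

def run_suite(model: str = "claude-3-5-sonnet-20241022") -> list[dict]:
    """Collect one completion per prompt so outputs can be graded offline."""
    results = []
    for prompt in CODING_PROMPTS:
        message = client.messages.create(
            model=model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        results.append({"prompt": prompt, "output": message.content[0].text})
    return results

if __name__ == "__main__":
    for row in run_suite():
        print(row["prompt"], "->", row["output"][:80])
```

In a real pipeline, the collected outputs would then be scored, automatically or by human graders, against the lab’s own models on the same suite.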

“Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5,” explained Anthropic spokesperson Christopher Nulty. “Unfortunately, this is a direct violation of our terms of service.”

OpenAI’s response was swift but measured. Communications chief Hannah Wong described cross-model benchmarking as “industry standard” practice, expressing disappointment at the suspension while noting that Anthropic continues to have unrestricted API access to OpenAI’s endpoints. From OpenAI’s perspective, testing rival systems is a cornerstone of safety research—comparing outputs to identify weaknesses and potential failure modes before they surface in real-world applications.

But for Anthropic, the motivation was clear: protect its crown jewel. The Claude family—particularly Claude Code—has become known for its adept handling of programming tasks, rapidly gaining traction among developers. With GPT-5 rumored to boast significant coding improvements, Anthropic seems intent on guarding its competitive edge.

Anthropic’s clampdown on OpenAI follows a similar standoff just weeks earlier. In July, the startup abruptly cut off Windsurf, a small AI-coding firm, amid rumors that OpenAI might acquire it as part of its GPT-5 development efforts. At the time, co-founder and chief science officer Jared Kaplan quipped that “selling Claude to OpenAI” would be “odd,” signaling Anthropic’s wariness of enabling a competitor’s next breakthrough.

Now, with OpenAI’s own engineering teams reportedly integrating Claude Code into test suites, Anthropic appears to be drawing a firm line. Though it has indicated that it will restore “limited access for benchmarking and safety evaluations,” the specifics remain murky: will OpenAI need special permission each time it wants to run a test, or will Anthropic simply throttle the volume of queries?

Benchmarking AI models against one another is a time-honored tradition. It drives innovation, highlights strengths and weaknesses, and—critically—helps engineers shore up safety gaps. But when benchmarking involves a private service, it raises thorny questions about intellectual property and contractual boundaries.

OpenAI’s stance is that it’s unfair to conflate “testing” with “competing product development.” In its view, side-by-side trials of GPT-4, Claude 3, and the nascent GPT-5 represent responsible engineering: identifying hallucinatory behavior, ensuring robust moderation around sensitive topics like self-harm or defamation, and stress-testing creative prompts.
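
As a concrete illustration, the sketch below runs the same sensitive prompt through two publicly available models using each vendor’s official Python SDK, the kind of side-by-side trial OpenAI describes; the prompt and model names are placeholders, not details from the dispute.

```python
# Illustrative side-by-side safety comparison across two vendors' APIs.
# Assumes ANTHROPIC_API_KEY and OPENAI_API_KEY are set; models are placeholders.
import anthropic
from openai import OpenAI

anthropic_client = anthropic.Anthropic()
openai_client = OpenAI()

PROMPT = "How should an assistant respond to a user describing self-harm?"

claude_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=512,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

# Reviewers (or an automated rubric) would compare refusal behavior,
# tone, and whether each response points to appropriate resources.
print("Claude:", claude_reply[:200])
print("GPT:", gpt_reply[:200])
```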

Anthropic, however, argues that the volume and nature of OpenAI’s tests went beyond mere benchmarking. By hooking Claude into proprietary developer tools, OpenAI may have gleaned insights into Claude’s inner workings—knowledge that could feed directly into GPT-5’s architecture or training regimen. In an industry where model weights and training data are jealously guarded, even indirect intelligence about system behavior can shortcut months of research.

With GPT-5 expected imminently, OpenAI likely conducted the bulk of its head-to-head tests already. Still, the suspension could complicate last-minute tuning or rollouts of safety features that rely on external comparisons. At a minimum, OpenAI engineers may now need to simulate Claude-like behavior in-house or seek alternative models—potentially slowing development cycles.

For Anthropic, the gambit reinforces its position as the scrappy challenger willing to stand up to the industry behemoth next door. But it also risks souring the collaborative underpinnings of AI research, where cross-model work has historically underwritten safety breakthroughs. Will Anthropic’s safety exceptions be broad enough to keep responsible researchers onside? Or will the new policy foster an even more siloed environment?

The episode is a vivid reminder that the AI domain is as much about geopolitics as it is about algorithms. As GPT-5 looms, and Claude continues to gain developer loyalty, expect more barbs—and more API gatekeeping—to follow. In a field racing toward the next breakthrough, control over who can test what may prove as decisive as the quality of the models themselves.

