
GadgetBond


Anthropic’s SpaceX compute deal supercharges Claude usage limits

Claude Pro, Max, and Team users are getting room to breathe as Anthropic doubles Claude Code’s five‑hour limits and removes peak‑hour cutbacks.

By Shubham Sawarkar, Editor-in-Chief
May 7, 2026, 4:22 AM EDT
Image: Anthropic

Anthropic is cranking up how much you can actually do with Claude, and the reason is simple: it has just locked in a flood of new compute power, headlined by a big deal to tap all of SpaceX’s Colossus 1 data center. That extra horsepower is already being routed into higher limits for Claude Code and the Claude API, especially for power users on paid plans.

Anthropic says it has signed an agreement with SpaceX to use the entire compute capacity of Colossus 1, a massive data center in Memphis that now sits under the SpaceX/xAI umbrella. In practical terms, that translates to more than 300 megawatts of new capacity and access to over 220,000 NVIDIA GPUs coming online within about a month, a scale that would have been hard to imagine for a single AI company just a few years ago. The company is also explicitly interested in going further with SpaceX, exploring multi-gigawatt “orbital” AI data centers in space, though that part is more long‑term vision than immediate product upgrade.

The headline change users will actually feel first is on the usage side. Anthropic is doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat‑based Enterprise customers, meaning the same developers who were hitting the ceiling on bigger refactors, code reviews, or batch jobs should now have significantly more breathing room before running into throttling. On top of that, the company is removing the special “peak hours” reduction that used to kick in for Claude Code on Pro and Max accounts, a constraint that often annoyed people who work on the same weekday schedule as everyone else.

Updated API rate limits for Claude Opus models across four usage tiers (Image: Anthropic):

  Tier   Input tokens/min (before → after)   Output tokens/min (before → after)
  1      30,000 → 500,000                    8,000 → 80,000
  2      450,000 → 2,000,000                 90,000 → 200,000
  3      800,000 → 5,000,000                 160,000 → 400,000
  4      2,000,000 → 10,000,000              400,000 → 800,000

On the API front, Anthropic says rate limits for Claude Opus are being raised “considerably,” with updated caps published in its developer documentation. While the company doesn’t spell out every number in the announcement itself, the gist is clear: the API is being re-tuned so that high-volume customers can fire off more tokens and more requests per minute without hitting hard walls as quickly. For teams building serious products on top of Claude, this is the change that turns experiments into something closer to always-on infrastructure.

The SpaceX deal is just one piece of a much larger infrastructure land grab Anthropic has been pursuing across the industry. In April, the company announced an expanded agreement with Amazon that secures up to 5 gigawatts of new capacity over time, including nearly 1 gigawatt of Trainium2 and Trainium3 compute coming online by the end of 2026. That Amazon deal gives Anthropic meaningful extra compute within the next three months and commits both sides to a long-term roadmap where Anthropic spends well over $100 billion on AWS technologies over the next decade.

A separate agreement with Google and Broadcom, announced in early April, gives Anthropic access to multiple gigawatts of next-generation TPU capacity starting in 2027. That partnership builds on prior “tens of billions of dollars” in commitments for Google TPUs and deepens a relationship where Google Cloud is effectively one of Claude’s primary training and deployment backbones. Alongside those, Anthropic also highlights a strategic partnership with Microsoft and NVIDIA that includes about $30 billion worth of Azure capacity, plus a separate $50 billion investment in American AI infrastructure with Fluidstack.

Put together, these arrangements show how Anthropic is spreading its bets across different chip vendors and cloud providers rather than tying its fate to a single stack. It runs Claude on AWS Trainium, Google TPUs, and NVIDIA GPUs, choosing different hardware depending on whether it’s training new frontier models or serving live traffic. The SpaceX Colossus 1 cluster, by contrast, is very much a GPU-heavy environment, reportedly packing dense deployments of NVIDIA H100, H200, and the next-generation GB200 accelerators aimed squarely at large-scale AI workloads.

For day-to-day users, higher limits are the most obvious perk, but the strategic angle is just as important. Generative AI demand is spiky and ruthless: when a model becomes popular, usage tends to ramp faster than a single provider’s data centers can keep up. By locking in multi-gigawatt deals with Amazon and Google, layering on billions of dollars of Azure capacity, and now grabbing 300 megawatts of dedicated GPU power from SpaceX, Anthropic is essentially trying to stay ahead of the curve so Claude doesn’t buckle under its own growth.

There is also a geopolitical and regulatory dimension to where all this compute actually lives. Anthropic notes that many of its enterprise customers, especially in regulated sectors like financial services, healthcare, and government, now expect in‑region infrastructure that satisfies local compliance and data-residency rules. Some of the new capacity from the Amazon collaboration is earmarked for inference in Asia and Europe, allowing Claude to serve those markets with lower latency and fewer regulatory headaches. The company stresses that it wants to place infrastructure in democratic countries with stable legal frameworks and secure supply chains for chips, networking gear, and facilities.

The power footprint of these data centers is non-trivial, and Anthropic has already had to address the knock-on effects on local communities. In a recent pledge, the company committed to covering any consumer electricity price increases in the US that are directly caused by its data centers, and it says it is exploring ways to extend that promise internationally as it grows abroad. It also talks about working with local leaders to reinvest in the places hosting its facilities, a message clearly aimed at policymakers who are increasingly wary of energy-hungry AI projects.

The SpaceX partnership hints at a more experimental direction: orbital AI compute. Anthropic says that as part of the Colossus 1 agreement, it has expressed interest in collaborating with SpaceX on multiple gigawatts of AI computing capacity in orbit, an idea Musk has floated in the past as a way to sidestep some terrestrial constraints on land, power, and cooling. It’s far from clear when such a system would be practical, but for Anthropic, putting a stake in that conversation now signals that it doesn’t intend to be left out if “space data centers” shift from concept to reality.

From a user’s point of view, though, the bottom line is more immediate: Claude should feel less constrained. If you’re a developer using Claude Code on a Pro, Max, or Team plan, you can now push it harder during work sessions without constantly hitting five-hour caps or seeing performance throttled during busy times. If you’re building on the Claude API, especially with Opus-class models, the higher rate limits offer more room to scale your app before you need to have a conversation with Anthropic’s sales team. And as more of the Amazon and Google capacity comes online over 2026 and 2027, this wave of limit increases is likely to be the first of several, not a one-off gesture.
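Even with higher caps, bursty clients can still hit 429 responses, and the usual client-side answer is retrying with jittered exponential backoff. A hedged sketch of that pattern, where `RateLimitError` and `flaky_call` are stand-ins for a real SDK's rate-limit exception and a real API call, not actual Anthropic names:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429 error a real API client would raise."""

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Jittered exponential delay: base * 2^attempt, capped, with 50-100% jitter."""
    return min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.0)

def with_backoff(call, max_retries: int = 5, base: float = 1.0):
    """Run `call`, sleeping and retrying whenever it raises RateLimitError."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            time.sleep(backoff_delay(attempt, base=base))
    return call()  # final attempt: any error now propagates to the caller

# Simulated endpoint that answers 429 twice before succeeding.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError
    return "ok"

result = with_backoff(flaky_call, base=0.01)
```

The jitter keeps many clients from retrying in lockstep after a shared throttling event; the cap keeps the delay bounded once the exponent gets large.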

