Anthropic is cranking up how much you can actually do with Claude, and the reason is simple: it has just locked in a flood of new compute power, headlined by a big deal to tap all of SpaceX’s Colossus 1 data center. That extra horsepower is already being routed into higher limits for Claude Code and the Claude API, especially for power users on paid plans.
Anthropic says it has signed an agreement with SpaceX to use the entire compute capacity of Colossus 1, a massive data center in Memphis that now sits under the SpaceX/xAI umbrella. In practical terms, that translates to more than 300 megawatts of new capacity and access to over 220,000 NVIDIA GPUs coming online within about a month, a scale that would have been hard to imagine for a single AI company just a few years ago. The company is also explicitly interested in going further with SpaceX, exploring multi-gigawatt “orbital” AI data centers in space, though that part is more long‑term vision than immediate product upgrade.
The headline change users will actually feel first is on the usage side. Anthropic is doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat‑based Enterprise customers, meaning the same developers who were hitting the ceiling on bigger refactors, code reviews, or batch jobs should now have significantly more breathing room before running into throttling. On top of that, the company is removing the special “peak hours” reduction that used to kick in for Claude Code on Pro and Max accounts, a constraint that often annoyed people who work on the same weekday schedule as everyone else.

On the API front, Anthropic says rate limits for Claude Opus are being raised “considerably,” with updated caps published in its developer documentation. While the company doesn’t spell out every number in the announcement itself, the gist is clear: the API is being re-tuned so that high-volume customers can fire off more tokens and more requests per minute without hitting hard walls as quickly. For teams building serious products on top of Claude, this is the change that turns experiments into something closer to always-on infrastructure.
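For builders, higher caps don't eliminate throttling entirely; bursts can still return HTTP 429 responses. The announcement doesn't prescribe a client pattern, but the standard way to absorb per-minute rate limits is exponential backoff with jitter. Below is a minimal, library-agnostic sketch; `RateLimitError` and `flaky_request` are illustrative stand-ins, not names from Anthropic's SDK:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429 error an API client raises when throttled."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on rate limits with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise  # out of retries, surface the error to the caller
            # Backoff doubles each attempt (1s, 2s, 4s, ...) plus up to 1s of
            # random jitter so many clients don't retry in lockstep.
            sleep(base_delay * (2 ** attempt) + random.random())

# Example: a fake endpoint that gets throttled twice before succeeding.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = call_with_backoff(flaky_request, sleep=lambda s: None)
```

The same wrapper works around any SDK call; in production you'd also honor a `Retry-After` header if the API supplies one, rather than relying on the fixed schedule alone.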
The SpaceX deal is just one piece of a much larger infrastructure land grab Anthropic has been pursuing across the industry. In April, the company announced an expanded agreement with Amazon that secures up to 5 gigawatts of new capacity over time, including nearly 1 gigawatt of Trainium2 and Trainium3 compute coming online by the end of 2026. That Amazon deal gives Anthropic meaningful extra compute within the next three months and commits both sides to a long-term roadmap where Anthropic spends well over $100 billion on AWS technologies over the next decade.
A separate agreement with Google and Broadcom, announced in early April, gives Anthropic access to multiple gigawatts of next-generation TPU capacity starting in 2027. That partnership builds on prior “tens of billions of dollars” in commitments for Google TPUs and deepens a relationship where Google Cloud is effectively one of Claude’s primary training and deployment backbones. Alongside those, Anthropic also highlights a strategic partnership with Microsoft and NVIDIA that includes about $30 billion worth of Azure capacity, plus a separate fifty‑billion‑dollar investment in American AI infrastructure with Fluidstack.
Put together, these arrangements show how Anthropic is spreading its bets across different chip vendors and cloud providers rather than tying its fate to a single stack. It runs Claude on AWS Trainium, Google TPUs, and NVIDIA GPUs, choosing different hardware depending on whether it’s training new frontier models or serving live traffic. The SpaceX Colossus 1 cluster, by contrast, is very much a GPU-heavy environment, reportedly packing dense deployments of NVIDIA H100, H200, and the next-generation GB200 accelerators aimed squarely at large-scale AI workloads.
For day-to-day users, higher limits are the most obvious perk, but the strategic angle is just as important. Generative AI demand is spiky and ruthless: when a model becomes popular, usage tends to ramp faster than any single provider can build data centers. By locking in multi-gigawatt deals with Amazon and Google, layering on billions of dollars of Azure capacity, and now grabbing 300 megawatts of dedicated GPU power from SpaceX, Anthropic is essentially trying to stay ahead of the curve so Claude doesn't buckle under its own growth.
There is also a geopolitical and regulatory dimension to where all this compute actually lives. Anthropic notes that many of its enterprise customers, especially in regulated sectors like financial services, healthcare, and government, now expect in‑region infrastructure that satisfies local compliance and data-residency rules. Some of the new capacity from the Amazon collaboration is earmarked for inference in Asia and Europe, allowing Claude to serve those markets with lower latency and fewer regulatory headaches. The company stresses that it wants to place infrastructure in democratic countries with stable legal frameworks and secure supply chains for chips, networking gear, and facilities.
The power footprint of these data centers is non-trivial, and Anthropic has already had to address the knock-on effects on local communities. In a recent pledge, the company committed to covering any consumer electricity price increases in the US that are directly caused by its data centers, and it says it is exploring ways to extend that promise internationally as it grows abroad. It also talks about working with local leaders to reinvest in the places hosting its facilities, a message clearly aimed at policymakers who are increasingly wary of energy-hungry AI projects.
The SpaceX partnership hints at a more experimental direction: orbital AI compute. Anthropic says that as part of the Colossus 1 agreement, it has expressed interest in collaborating with SpaceX on multiple gigawatts of AI computing capacity in orbit, an idea Musk has floated in the past as a way to sidestep some terrestrial constraints on land, power, and cooling. It’s far from clear when such a system would be practical, but for Anthropic, putting a stake in that conversation now signals that it doesn’t intend to be left out if “space data centers” shift from concept to reality.
From a user’s point of view, though, the bottom line is more immediate: Claude should feel less constrained. If you’re a developer using Claude Code on a Pro, Max, or Team plan, you can now push it harder during work sessions without constantly hitting five-hour caps or seeing performance throttled during busy times. If you’re building on the Claude API, especially with Opus-class models, the higher rate limits offer more room to scale your app before you need to have a conversation with Anthropic’s sales team. And as more of the Amazon and Google capacity comes online over 2026 and 2027, this wave of limit increases is likely to be the first of several, not a one-off gesture.
