Anthropic is radically ramping up the raw computing power behind Claude, signing a new multi‑gigawatt deal with Google and Broadcom that will start coming online from 2027 and stretch well into the next decade. It is the clearest sign yet that Claude has become a hyperscale AI platform in its own right, not just another model competing for attention.
In a new announcement, Anthropic says it has agreed to secure “multiple gigawatts” of next‑generation Google TPU capacity, delivered through a deepened partnership that also pulls chipmaker Broadcom more firmly into the picture. Broadcom will help provide around 3.5 gigawatts of TPU‑based compute starting in 2027, forming the backbone of Anthropic’s latest infrastructure expansion and giving the company long‑term visibility into its AI hardware supply. That number is enormous in data‑center terms: previous Google Cloud deals were already set to bring “well over a gigawatt” of capacity online in 2026, so the new agreement essentially layers another several gigawatts on top of an already aggressive build‑out.
The motivation is simple: demand for Claude has exploded. Anthropic now pegs its annual revenue run rate at more than $30 billion, up from about $9 billion at the end of 2025, with over 1,000 business customers each spending at least $1 million a year on Claude‑powered products and services. That means the company has doubled its big‑ticket customer count in a matter of weeks, after previously disclosing more than 500 seven‑figure customers alongside its $30 billion Series G funding at a $380 billion valuation. For Anthropic’s finance chief Krishna Rao, the new deal is positioned as a “disciplined” response to that demand curve: the company is committing to the largest compute expansion in its history so it can keep pace with enterprise usage while still pushing Claude to the frontier of model capabilities.
What makes this move stand out is that it does not replace Anthropic’s existing cloud strategy so much as amplify it. Amazon remains the company’s primary cloud provider and main training partner, with Project Rainier—an AWS supercomputer built on custom Trainium chips—continuing to underpin a massive portion of Claude’s training workloads. At the same time, Anthropic is leaning heavily into a multi‑platform approach: Claude is trained and served across AWS Trainium, Google TPUs, and NVIDIA GPUs, allowing it to match each workload to the most cost‑efficient and performant hardware. In practical terms, that diversification means better resilience for customers; if one ecosystem faces supply constraints or pricing swings, Anthropic can shift more jobs to the others without pausing innovation.
Google and Broadcom, meanwhile, are using this partnership to showcase the maturity of Google’s custom AI chips. Google designs the TPU architecture—now in its seventh generation with the Ironwood‑class TPU v7—while Broadcom turns those designs into mass‑manufacturable silicon and, increasingly, sells those chips directly as part of broader capacity deals. Broadcom has already spoken about a massive TPU backlog and tens of billions of dollars in AI chip orders, and the Anthropic and Google arrangements are central to that story, effectively locking in years of high‑margin demand for data‑center‑scale accelerators. For Google Cloud, anchoring a fast‑growing AI player like Anthropic on TPUs reinforces its pitch that homegrown accelerators can compete with, and sometimes beat, more widely hyped GPU clusters on price‑performance and efficiency.
There is also a geopolitical and industrial angle. Anthropic notes that the vast majority of this new compute will be located in the United States, building on its earlier promise to invest $50 billion in American AI infrastructure and data centers. At a time when governments are worried about both the security of AI supply chains and the concentration of compute in a few regions, multi‑gigawatt commitments tied to U.S. sites send a clear signal about where the next generation of AI capacity will live. It also puts Anthropic in the small club of companies—alongside players like OpenAI, Google DeepMind, and a few hyperscalers—that are now talking about compute in terms that sound less like traditional IT and more like national‑scale utilities.
For customers, the near‑term impact will be felt less in glossy press releases and more in what Claude can actually do. Anthropic frames the expanded capacity as fuel for frontier‑scale versions of Claude across all surfaces: API access for developers, Claude for Work in productivity suites, and coding assistants like Claude Code that need to handle heavier workloads with lower latency. Because Claude is available on all three of the world’s largest clouds—AWS via Bedrock, Google Cloud via Vertex AI, and Microsoft Azure via the Foundry program—this extra compute should eventually translate into more consistent performance and availability regardless of which platform an enterprise standardizes on. For enterprises betting their own products and workflows on Claude, the message is that Anthropic is willing to sign multi‑billion‑dollar, multi‑gigawatt checks so they don’t have to worry about hitting infrastructure ceilings as adoption grows.
Put simply, this is Anthropic locking in its place in the AI big leagues. The company is scaling up from “fast‑growing startup” to “infrastructure‑anchored platform” with a footprint measured in gigawatts and contracts that run to the end of the decade. By tying itself more deeply to Google’s TPU roadmap and Broadcom’s chip manufacturing muscle—while still keeping AWS and NVIDIA firmly in the mix—Anthropic is betting that diversified, massive‑scale compute is the only way to keep Claude on the cutting edge in a world where every major AI player is racing to build the biggest, smartest models it can.