Elon Musk has spent years warning that artificial intelligence could go off the rails – and in February, he even blasted Anthropic as “evil” and “misanthropic” on X, saying the lab “hated Western civilization.” Just three months later, he is turning around and taking up to an estimated $4 billion a year from that same company in exchange for something he suddenly has a lot of: spare supercomputer capacity.
The unlikely alliance revolves around Colossus 1, a massive AI data center SpaceX built in Memphis, Tennessee, originally to train Grok, Musk’s AI assistant. The site packs roughly 220,000 NVIDIA GPUs and more than 300 megawatts of capacity, making it one of the largest AI supercomputers on the planet. But Grok has not been able to keep the machine busy: analysts estimate the chatbot generates under $1 billion in annualized revenue, leaving a lot of very expensive silicon sitting idle. Anthropic, by contrast, is drowning in demand for its Claude models and has been scrambling to secure as much compute as it can from big cloud providers and specialty infrastructure firms.
For Anthropic, the SpaceX deal is about capacity and diversification. The company has already lined up huge long-term infrastructure commitments with Google, Amazon’s AWS, and others, plus billions earmarked for its own data centers, all to make sure Claude can keep scaling. By locking in all of Colossus 1’s capacity “within the month,” Anthropic gets a fresh pool of GPUs on top of its existing multi-cloud strategy, and another bargaining chip when it negotiates pricing and terms with the hyperscalers. In a market where everyone from OpenAI to Meta is bumping up against supply constraints on advanced NVIDIA chips, getting guaranteed access to a built-out supercomputer is a huge operational win.
For Musk and SpaceX, the logic is almost the mirror image. Instead of paying AWS, Microsoft Azure, or Google Cloud a 30 percent-plus margin to host Grok and other AI workloads, SpaceX gets to be the one charging that premium. Analysts told Fortune that the Anthropic agreement alone could generate $3 billion to $4 billion in yearly revenue for SpaceX, with more than $2.5 billion falling straight to cash profit because the big capital costs are already sunk. That kind of money is not just nice to have; it feeds directly into how Wall Street might value SpaceX as it heads toward an IPO that could target between $1.75 trillion and $2 trillion. If investors buy the story that SpaceX is effectively becoming a “fourth hyperscaler,” competing for AI infrastructure dollars alongside Amazon, Microsoft, and Alphabet, the company can argue for a tech-platform-style earnings multiple instead of being priced like a traditional aerospace and defense contractor.
The timing of the deal is not an accident. SpaceX confidentially filed an S-1 on April 1, and is expected to start its public roadshow soon, which means this Anthropic partnership lands right as the company is fine-tuning its pitch to public investors. On the same day the compute agreement was announced, Musk also said he would dissolve xAI as a standalone entity and fold it into SpaceX under the SpaceXAI brand, effectively consolidating his AI ambitions under one corporate roof. That move simplifies the story: instead of a messy web of entities, SpaceX can present itself as a unified platform that launches rockets, runs satellites, and now rents out frontier-grade AI infrastructure to leading labs, including for Grok itself. It is the kind of narrative bankers love: massive total addressable market, multiple business lines, but all under a single, scalable infrastructure umbrella.
Still, turning SpaceX into a genuine hyperscaler is not as easy as flipping on the lights in Memphis. Infrastructure veterans point out that governments and Fortune 500 customers care as much about geography, redundancy, and compliance as they do about raw GPU counts. Amazon, Microsoft, and Google have spent years building global data center footprints with region-by-region failover plans and deep compliance stacks; one giant facility in Tennessee, no matter how powerful, is not a drop-in replacement for that. Andrew Moore, former head of Google Cloud AI and now a defense AI CEO, told Fortune that while Musk is capable of “something amazing,” matching AWS on reliability and reach is going to be a long, grinding slog. The upshot is that the Anthropic arrangement is less about instantly dethroning the cloud giants and more about showing that SpaceX can play in their league as a specialist landlord for hungry AI tenants.
The power dynamics of the contract are what make this story feel very Musk-coded. In replies on X, he claimed that SpaceX “reserves the right to reclaim the compute” if Anthropic’s systems “engage in actions that harm humanity,” essentially a morality clause that doubles as a kill switch on Claude’s access to Colossus. That language did not appear in the official press releases, and outsiders do not know whether it actually made it into the legal paperwork, but even as a signaling move, it underscores how much leverage comes from owning the data center in an AI arms race. Whoever controls the GPUs decides who gets to train at scale and on what terms; if Musk can really yank that capacity back, he holds a very real leash on one of the three top frontier labs, even as he is suing another one, OpenAI, in federal court.
That combination – public moral concern, private contractual power – fits a pattern in Musk’s AI history. A decade ago, he was one of the loudest voices warning that AI could be an existential threat, and he backed OpenAI as a nonprofit meant to keep the technology safe and open. Over time, he split with OpenAI’s leadership, accused the company of abandoning its mission, and launched his own rival, xAI, complete with Grok as a snarky, “uncensored” chatbot competitor to ChatGPT, Claude, and Gemini. Now he is both litigating against OpenAI and cashing in on Anthropic’s runaway growth by becoming its most important off-cloud compute supplier. When Moore says “everyone is trying to get through the next six months,” he is basically describing a world where lofty manifestos about AI safety collide with the very immediate, very expensive need to secure as many GPUs as possible before your rivals do.
Anthropic is not exactly putting all its eggs in the Musk basket, though, and that nuance matters. Between its huge cloud deals, its own planned data centers, and partnerships with firms like CoreWeave and Broadcom, the company has been deliberate about avoiding single-source dependency for critical infrastructure. Industry observers expect Anthropic to keep squeezing more performance out of its models and infrastructure so that it can withstand a future in which any single provider – including SpaceX – changes its pricing, policy, or politics. Analysts quoted by Fortune put the odds that this specific deal is still intact in two years at around 80 percent, with the remaining 20 percent being, essentially, a bet on Musk’s own volatility. Given his history of abrupt U-turns and public spats, nobody on either side can assume this “wedding of convenience” is guaranteed to last.
That fragility is exactly what makes the whole arrangement so fascinating for the broader AI ecosystem. On a practical level, it is a reminder that compute – not algorithms – is the real choke point right now: labs that can secure long-term access to tens or hundreds of thousands of top-tier GPUs will dictate the pace of model progress and product rollouts. On a political level, it shows how quickly yesterday’s villain can become today’s landlord when billions of dollars and IPO valuations are involved. And on a governance level, it hints at a future in which the “safety switch” for powerful AI systems might not live in government regulation or independent oversight boards, but in the contractual fine print of whoever owns the data center lease.