OpenAI has quietly agreed to buy roughly $300 billion worth of computing power from Oracle over about five years, according to reporting by The Wall Street Journal — one of the largest cloud contracts in history if confirmed. The purchases are scheduled to begin in 2027 and are tied into the broader “Stargate” infrastructure push that OpenAI, Oracle and other partners have been publicizing this year.
This is the kind of headline that makes people blink: $300 billion is a number so big it reshuffles the mental deck on what “buying compute” looks like for an AI company. But the size is exactly why it deserves a careful read: the figure says as much about OpenAI’s ambitions and capital needs as it does about Oracle’s transformation into a serious contender for hyperscale AI workloads.
The basic facts
According to The Wall Street Journal, OpenAI will purchase about $300 billion of compute from Oracle across roughly five years, with the contract beginning in 2027. Oracle’s recent quarterly remarks also hinted at several multi-billion-dollar contracts that helped swell its backlog, a trend company executives flagged in an earnings update this month.
OpenAI has been publicly rolling out the Stargate initiative — a jaw-dropping infrastructure plan that, in aggregate, has been described as involving up to $500 billion in investment and tens of gigawatts of data-center power. As part of Stargate, OpenAI and Oracle previously announced a plan to develop an additional 4.5 gigawatts of data-center capacity in the U.S., a scale that would require enormous facility, power and networking work.
Bloomberg and other outlets have also reported that OpenAI expects roughly $12.7 billion in revenue this year, a number that helps put the Oracle purchase in perspective. Spread across five years, $300 billion works out to about $60 billion per year of purchased compute, or roughly 4.7 times the company’s projected revenue for this year.
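If you want to sanity-check that back-of-envelope math yourself, here is a minimal sketch in Python. The five-year term, the flat annual spend and the $12.7 billion revenue figure are taken from the reporting above; none of them are confirmed contract terms.

```python
# Back-of-envelope math on the reported Oracle deal.
# All inputs are reported or assumed figures, not confirmed terms.
total_commitment_usd = 300e9    # reported total compute purchase
contract_years = 5              # reported contract length (roughly)
projected_revenue_usd = 12.7e9  # OpenAI's reported revenue estimate for this year

annual_spend = total_commitment_usd / contract_years
revenue_multiple = annual_spend / projected_revenue_usd

print(f"Implied annual compute spend: ${annual_spend / 1e9:.0f}B")
print(f"Multiple of this year's projected revenue: {revenue_multiple:.1f}x")
# -> roughly $60B per year, about 4.7x projected revenue
```

In practice the spend almost certainly would not be flat year over year, so treat the 4.7x figure as an average, not a forecast for any single year.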
Why Oracle (and why now)?
Oracle has been aggressively pitching its Oracle Cloud Infrastructure (OCI) as a built-for-AI alternative to the usual suspects (AWS, Azure, Google Cloud). The company said this quarter that it signed multiple multi-billion-dollar contracts and added hundreds of billions of dollars in future contracted revenue, figures that helped send its stock sharply higher and lift co-founder Larry Ellison up the billionaire rankings. For Oracle, landing a multiyear compute buyer at this scale is validation that its push into faster-growing AI cloud services is paying off.
For OpenAI, the advantages are pragmatic: capacity guarantees, integration with a single large supplier for infrastructure needs, and, depending on the commercial terms, predictable pricing and co-investment in data-center buildouts. It’s also a hedge: OpenAI’s cloud strategy has grown more diversified in recent months, with partnerships and rounds of capital from a variety of sources. The Stargate project itself has been framed as a national-scale effort to build sovereign AI infrastructure in the U.S., which also helps explain why major hardware, software and financial players are lining up.
The chip angle (Broadcom) and the compute stack
The deal isn’t just about racks and electricity. The Wall Street Journal and other reporting have also tied OpenAI to a roughly $10 billion contract with Broadcom to design bespoke AI chips for the company’s internal use — part of a wider industry move toward custom silicon as demand for more efficient model inference and training grows. Custom chips plus a committed cloud partner is a logical one-two punch: verticalize the stack and lock in capacity.
What this means for the cloud market
A sustained, very large buyer like OpenAI changes the calculus for cloud infrastructure companies. It can underwrite data-center construction, tilt negotiations on pricing and capacity, and even influence where networks and power plants get built. Oracle has argued that its recent quarter reflects surging demand for OCI; the reported OpenAI agreement would help explain a huge part of that momentum. But it also raises competition questions — for Microsoft, which has long been OpenAI’s marquee partner and cloud host, the news signals that OpenAI is diversifying and maybe rebalancing commercial relationships.
Risks, limits and skepticism
A few important caveats. First: this is reporting, not confirmation. These stories come via major outlets citing people familiar with the negotiations; Oracle and OpenAI have been selective in their public comments. Second: scale and execution are different things. Building, powering and operating tens of gigawatts of compute is a multiyear effort that touches local permitting, grid upgrades, supply chains and geopolitics. Past mega-announcements, including Stargate itself, have already faced scrutiny over how quickly they can materialize. Third: for OpenAI, the economics are brutal. Even if the compute is contracted, generating the cash flow and building the capital structure needed to pay for and operate at this scale will be a real test of the company’s long-term plan.
Why you should care
If these numbers hold up, the deal is a signal of two converging trends: (1) AI companies are becoming infrastructure buyers at a scale rivaling national budgets, and (2) the cloud business is being reshaped, with more nimble cloud players, or at least non-AWS/Microsoft vendors, now able to capture huge, strategic customers. For enterprises and policymakers, that means more investment in data-center regions, more debate about energy and supply chains, and fresh incentives to think about where AI gets built and who controls the stack.
In short, the reported $300 billion number is headline-grabbing, and even if parts of it change in the fine print, it’s a real indicator of how the AI era is remaking the business of compute. For OpenAI, it’s both a bet and a need; for Oracle, it’s a transformational customer win; for the rest of us, it’s a reminder that the next decade’s infrastructure story will look very different from the last.
