AI • Business • OpenAI • Tech

OpenAI announces 10-gigawatt AI chip partnership with Broadcom

OpenAI’s new Broadcom partnership will see the creation of 10 gigawatts of custom AI accelerators to support its growing AI infrastructure needs by 2029.

By Shubham Sawarkar, Editor-in-Chief
Oct 14, 2025, 6:09 AM EDT
Image: OpenAI and Broadcom logos side by side on a blue-to-orange gradient. Credit: OpenAI

OpenAI and Broadcom announced a heavy-duty collaboration on October 13, 2025: OpenAI will design custom AI accelerators and systems, and Broadcom will help build and deploy them, to the tune of 10 gigawatts of capacity over the next few years. That’s not a typo. Ten gigawatts is an enormous amount of compute infrastructure (roughly the output of ten large power plants), and the companies say the rollout will begin in the second half of 2026, with full deployment targeted by the end of 2029.

Why OpenAI is doing this — and why now

OpenAI’s public argument is straightforward: by designing hardware that reflects the way its most advanced models actually work, the company can squeeze more performance and efficiency out of every watt and every rack. In their words, building custom silicon lets OpenAI fold “what it’s learned from developing frontier models and products” into the chips themselves — effectively moving some model-level thinking into hardware so software and silicon are designed as a single system. Broadcom will supply the networking and development muscle needed to get racks into data centers at scale.

Put another way: OpenAI is trying to reduce a strategic bottleneck. For several years, NVIDIA’s GPUs have been the de facto standard for training and serving large language models and other generative AI systems. Buying or leasing huge quantities of NVIDIA hardware is expensive and competitive, and it exposes firms to supply-chain and pricing risk. By designing its own accelerators, and by lining up partners to build and deploy them, OpenAI is trying to hedge that dependency while tailoring hardware to its specific workloads. Industry watchers note the move follows recent multi-gigawatt agreements OpenAI has struck with other vendors, creating a multi-vendor compute portfolio rather than a single-supplier bet.

So how big is 10 gigawatts, really?

It’s massive. News coverage translated the number into familiar terms: tens of thousands of racks, the equivalent energy draw of millions of high-power GPUs, and enough electrical capacity to power millions of homes. The scale also implies a multibillion-dollar rollout and long lead times for data-center construction, power hookups, and cooling. Broadcom and OpenAI didn’t disclose the financials; analysts and reporting suggest the tab for building and powering capacity at this scale runs into tens of billions of dollars over the multi-year program.
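
For a rough sense of where those figures come from, here is a back-of-envelope sketch. The per-unit numbers are assumptions for illustration (roughly 1.5 kW per deployed accelerator including cooling, networking, and power-delivery overhead, and roughly 1.2 kW of average draw per US household), not figures disclosed by either company.

# Back-of-envelope scale check for the announced 10 GW.
# The per-unit wattages below are assumptions for illustration,
# not numbers disclosed by OpenAI or Broadcom.
TOTAL_CAPACITY_W = 10e9        # 10 gigawatts, as announced
WATTS_PER_ACCELERATOR = 1_500  # assumed: accelerator plus its share of cooling,
                               # networking, and power-delivery overhead
WATTS_PER_US_HOME = 1_200      # assumed: average continuous draw of a US household

accelerators = TOTAL_CAPACITY_W / WATTS_PER_ACCELERATOR
homes = TOTAL_CAPACITY_W / WATTS_PER_US_HOME

print(f"~{accelerators / 1e6:.1f} million accelerator-class devices")  # ~6.7 million
print(f"~{homes / 1e6:.1f} million average US homes")                  # ~8.3 million

Under those assumptions the announced capacity lands in the millions of devices and millions of homes, which is consistent with the coverage cited above.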

What Broadcom brings to the table

Broadcom is not a household name like NVIDIA in the world of model training, but it is an established force in networking silicon and custom infrastructure chips for large enterprise customers. The announced systems will include Broadcom’s networking and Ethernet solutions alongside OpenAI-designed accelerators, positioning Broadcom to provide the connectivity and rack systems that let many accelerators act as a single, huge compute fabric. Broadcom has already profited from similar tie-ups (the market reaction to the announcement pushed its share price higher), illustrating how hardware vendors are benefiting from the scramble for AI infrastructure.

A new kind of co-design

One of the more interesting technical notes buried in follow-up coverage: OpenAI has used its own models to help optimize chip designs. OpenAI president Greg Brockman said the company’s tooling found chip-level optimizations far faster than manual iteration would have — reductions in area and improvements in efficiency that, while not impossible for humans to find, were discovered and validated much more quickly with AI-driven design workflows. That matters because co-design — thinking about models and hardware together — is where you get much better energy efficiency per operation.

Not a knockout punch to NVIDIA (yet)

A lot of headlines ask the same question: does this mean NVIDIA’s dominance is over? Short answer: no. NVIDIA’s ecosystem — software stacks, developer familiarity, massive installed base, and continued product cadence — still gives it a big lead. But there’s nuance: big cloud and AI customers building bespoke compute fleets are steadily chipping away at NVIDIA’s absolute stranglehold by diversifying suppliers and tailoring at scale. So far, bespoke silicon efforts have not dethroned NVIDIA, but they have created profitable niches for companies like Broadcom and AMD and changed the bargaining dynamics around supply and pricing. OpenAI’s Broadcom deal is another step in that direction — significant, but incremental in the broader market structure.

This partnership arrives after a string of big compute arrangements for OpenAI: a multi-gigawatt agreement with AMD (a reported 6 GW), and a separate 10-gigawatt agreement with NVIDIA that included the potential for a very large NVIDIA investment into OpenAI. Those earlier deals, plus other vendor relationships, mean OpenAI is intentionally building a mosaic of suppliers rather than betting everything on one vendor. The company also reshaped its old exclusivity arrangements with Microsoft earlier in the year, which freed OpenAI to pursue multiple hardware partners. The result: OpenAI is now a buyer with serious leverage, and one with the ability to push co-designed systems into the market.
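
Tallying the figures reported so far, and treating them at face value since none of the deals has fully disclosed terms, gives a sense of the combined commitment:

# Publicly reported compute agreements mentioned above, in gigawatts.
# Figures are taken at face value from news reporting, not confirmed totals.
announced_gw = {
    "AMD": 6,        # multi-gigawatt agreement, 6 GW reported
    "NVIDIA": 10,    # separate 10-gigawatt agreement
    "Broadcom": 10,  # this announcement
}
print(f"Total announced capacity: {sum(announced_gw.values())} GW")  # 26 GW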

Risks, frictions, and the hard parts ahead

There are practical headaches no press release romanticizes: sourcing power and sites, permitting, grid upgrades, heat removal, supply-chain logistics for exotic packaging and interconnects, and the long-tail cost of supporting and maintaining bespoke fleets. There’s also the commercial risk: custom accelerators must deliver real efficiency or capability benefits to justify the investment and the operational complexity. If the performance gap versus off-the-shelf alternatives isn’t large enough, the economics get uncomfortable fast. And the timeline — rolling racks starting in 2026, completion by 2029 — leaves a long window during which market conditions, chip technology, or regulations could shift.

What this means for users and the market

For end users of ChatGPT-class apps, the effects are indirect but important: more capacity means better availability, lower latency at scale, and the headroom to support new, more computationally expensive features (multimodal experiences, real-time agents, personalized models). For the market, it’s another sign that the next several years will be defined by arms races in infrastructure — not just model research — as companies race to control the stack from silicon to software.

OpenAI’s tie-up with Broadcom is ambitious, expensive, and strategically sensible: design the chips you need, enlist a manufacturing and systems partner, and build the scale to run tomorrow’s AI services. It doesn’t end NVIDIA’s run, but it does accelerate a trend where the biggest AI customers build bespoke hardware ecosystems to match their software. Over the next four years, watch how the promised 10 gigawatts trickles into data centers, what those accelerators actually look like, and whether OpenAI’s co-design gamble pays off in performance that’s meaningfully better — or cheaper — than buying ready-made GPUs off the shelf.

