GadgetBond

AI · OpenAI · Tech

OpenAI may launch its first custom AI chip with Broadcom next year

The Financial Times reports that OpenAI has partnered with Broadcom on a $10 billion chip deal, signaling a push toward proprietary silicon to power its growing AI models.

By Shubham Sawarkar, Editor-in-Chief
Sep 5, 2025, 8:00 AM EDT
Photo by Pau Barrena / Getty Images

OpenAI, the company behind ChatGPT, looks set to join a small but growing club of tech firms that build their own chips. According to reporting by the Financial Times, OpenAI is on track to start mass-producing a custom AI accelerator next year — a chip the company designed with Broadcom and plans to use inside its own data centers rather than sell to other customers.

That might sound like a narrow engineering play, but it’s really a strategic maneuver that touches on three huge pressures in the AI era: rising compute costs, spotty supply of high-end accelerators, and the desire to squeeze more performance and efficiency out of model deployment. The move mirrors what Google and Amazon have already done with their in-house silicon — and it explains why Broadcom’s recent announcement of roughly $10 billion in orders from an unnamed customer set markets buzzing with speculation that the buyer was OpenAI.

Why make a chip at all?

Modern large language models and multimodal systems eat compute the way stadiums eat electricity. Training and running them at scale means either paying top dollar for third-party GPUs or building your own hardware stack. Custom chips are attractive because they let companies optimize the instruction set, memory hierarchy, and power profile for the precise math their models do — which can translate into big savings in cost per token, lower power bills, and denser racks in a data center. Google’s Tensor Processing Units and AWS’s Trainium/Inferentia chips are textbook examples: they were developed to lower costs and increase control over infrastructure while delivering the specific throughput those firms needed.

For OpenAI, which has progressively pushed more demanding products into the wild (and reportedly continues to scale larger models internally), the math can be straightforward: save a few pennies per inference, multiply that by billions of inferences and thousands of servers, and you’re talking meaningful margin and capacity gains. But the strategic upside goes beyond cost. Designing hardware gives a company leverage — over suppliers, over performance roadmaps, and over the painful risk of a single vendor becoming a choke point.
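To make that "pennies times billions" logic concrete, here is a quick back-of-envelope sketch in Python. Every figure in it is a hypothetical assumption chosen for illustration — neither OpenAI's per-inference costs nor its inference volumes are public.

```python
# Illustrative back-of-envelope math: tiny per-inference savings
# compound at fleet scale. ALL figures below are assumptions.

savings_per_inference = 0.0001       # $0.0001 saved per inference (assumed)
inferences_per_day = 2_000_000_000   # 2 billion inferences/day (assumed)

daily_savings = savings_per_inference * inferences_per_day
annual_savings = daily_savings * 365

print(f"Daily savings:  ${daily_savings:,.0f}")   # ~$200,000/day
print(f"Annual savings: ${annual_savings:,.0f}")  # ~$73 million/year
```

Even under these modest assumed numbers, a hundredth of a cent per inference turns into tens of millions of dollars a year — before counting the capacity freed up in the data center.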

Broadcom’s mysterious $10 billion order — the smoking gun

The story gained momentum after Broadcom told investors it had secured a new customer that had committed roughly $10 billion in AI infrastructure orders. Broadcom did not name the customer; reporters and analysts quickly connected the dots and multiple outlets reported that people familiar with the matter identified OpenAI as the buyer. Broadcom’s comments — and the market reaction — sharpened the narrative that a major AI lab was betting on custom accelerators to scale.

That $10 billion figure matters because it signals a deployment at hyperscaler scale: racks, systems, and potentially millions of chips, not just a handful of prototypes. Industry analysts and the trade press have floated similar-sized totals when the implied customer is a company running large inference fleets. But it’s also worth pausing: the FT and other reports rely on unnamed sources, and both Broadcom and OpenAI declined to comment publicly when first asked — so readers should treat the early details as credible reporting that still carries uncertainty.
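A rough sense of why $10 billion implies fleet scale: dividing the reported order value by a range of assumed per-accelerator costs (hypothetical figures — neither company has disclosed unit pricing) lands in the hundreds of thousands to millions of units.

```python
# Hypothetical sizing of a $10B order under assumed per-unit costs.
# The order value is the reported figure; the unit costs are guesses.

order_value = 10_000_000_000  # $10B reported order size

for unit_cost in (5_000, 10_000, 25_000):  # assumed $ per accelerator
    units = order_value // unit_cost
    print(f"At ${unit_cost:,} per chip: ~{units:,} chips")
# At $5,000 per chip:  ~2,000,000 chips
# At $10,000 per chip: ~1,000,000 chips
# At $25,000 per chip: ~400,000 chips
```

However the real per-unit economics shake out, the order of magnitude is consistent with the "millions of chips" framing rather than a pilot run.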

What this would mean for Nvidia — and for the chip market

Nvidia has dominated AI training and inference for several years, with its GPUs forming the backbone of most cloud and enterprise AI stacks. But when hyperscalers design their own accelerators, they chip away at that dominance — not overnight, but slowly and meaningfully. The pattern is familiar: once a cloud or AI firm proves an internal design can meet cost and performance targets, it will increasingly run its own workloads on those chips, reducing third-party demand and reshaping supplier bargaining power.

That’s already visible in the markets. Broadcom’s earnings call and the $10 billion revelation sent its shares higher and pressured other chipmakers’ stocks, because investors started pricing in a scenario where a major customer is shifting a lot of workload onto custom hardware. Whether Broadcom–OpenAI accelerators ultimately blunt Nvidia’s lead will depend on performance, software maturity, and how broadly the chips are adopted (OpenAI reportedly plans to keep them for internal use, which narrows the competitive impact compared with a chip sold publicly).

The engineering and supply chain puzzle

Designing a chip is the start; getting it into millions of servers requires supply-chain muscle. Broadcom brings experience building custom ASICs and integrating whole rack solutions, while foundries like TSMC do the actual manufacturing work. Hyperscalers that have taken this route historically pair internal architecture teams with external fabs and system integrators to turn designs into shipping racks — it’s a multi-year, capital-intensive effort. That’s why the timeline in the FT report — shipping mass-produced chips in 2026 — is notable: it implies the project has already moved well past sketching into tape-out and system validation.

But there are risks: microarchitecture choices that look great in lab benchmarks can underperform on the messy, real-world workload models actually use. There’s also the software stack: compilers, kernel drivers, and model runtimes must play nicely on the new silicon. Google and AWS had to invest heavily in software to make their chips plug into their ML ecosystems; OpenAI would face similar work to ensure models run correctly and efficiently on a new accelerator.

What to watch next

If you’re tracking this story, a few indicators will be worth watching in the coming months:

  • Official statements and regulatory filings. Broadcom’s investor communications and any follow-up remarks from OpenAI could confirm details (or not). Early reporting is strong but not identical across outlets.
  • Performance hints. Early benchmarks (if any leak) and published case studies will show whether the chips are optimized for training, inference, or both.
  • Supply-chain moves. Partnerships with foundries or system integrators, or signs of large rack purchases, would indicate the program is scaling.

The bigger picture

The chip race isn’t just a technology story — it’s an economic and strategic one. For model builders, owning hardware reduces exposure to pricing swings and supplier outages. For chipmakers, it’s a new revenue stream or a defensive moat. The net effect so far has been to diversify the landscape: Google, Amazon, and now (possibly) OpenAI are betting that bespoke silicon is a meaningful lever. That competition is likely to accelerate innovation — and make the underlying battle for the future of AI as much about engineering economics as about model architectures.

OpenAI’s steps into silicon, if confirmed in the months ahead, would be yet another sign that the next phase of the AI boom is as much about the machinery that runs models as it is about the models themselves.



Topic: ChatGPT


Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.