GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIOpenAITech

OpenAI may launch its first custom AI chip with Broadcom next year

Financial Times reports that OpenAI has partnered with Broadcom on a $10 billion chip deal, signaling a push toward proprietary silicon to power its growing AI models.

By Shubham Sawarkar, Editor-in-Chief
Sep 5, 2025, 8:00 AM EDT
Photo by Pau Barrena / Getty Images

OpenAI, the company behind ChatGPT, looks set to join a small but growing club of tech firms that build their own chips. According to reporting by the Financial Times, OpenAI is on track to start mass-producing a custom AI accelerator next year — a chip the company designed with Broadcom and plans to use inside its own data centers rather than sell to other customers.

That might sound like a narrow engineering play, but it’s really a strategic maneuver that touches on three huge pressures in the AI era: rising compute costs, spotty supply of high-end accelerators, and the desire to squeeze more performance and efficiency out of model deployment. The move mirrors what Google and Amazon have already done with their in-house silicon — and it explains why Broadcom’s recent announcement of roughly $10 billion in orders from an unnamed customer set markets buzzing with speculation that the buyer was OpenAI.

Why make a chip at all?

Modern large language models and multimodal systems eat compute the way stadiums eat electricity. Training and running them at scale means either paying top dollar for third-party GPUs or building your own hardware stack. Custom chips are attractive because they let companies optimize the instruction set, memory hierarchy, and power profile for the precise math their models do — which can translate into big savings in cost per token, lower power bills, and denser racks in a data center. Google’s Tensor Processing Units and AWS’s Trainium/Inferentia chips are textbook examples: they were developed to lower costs and increase control over infrastructure while delivering the specific throughput those firms needed.

For OpenAI, which has progressively pushed more demanding products into the wild (and reportedly continues to scale larger models internally), the math can be straightforward: save a few pennies per inference, multiply that by billions of inferences and thousands of servers, and you’re talking meaningful margin and capacity gains. But the strategic upside goes beyond cost. Designing hardware gives a company leverage — over suppliers, over performance roadmaps, and over the painful risk of a single vendor becoming a choke point.
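To make that scaling intuition concrete, here is a back-of-envelope sketch. All of the figures below — the per-inference saving and the daily inference volume — are illustrative assumptions for the sake of the arithmetic, not numbers reported for OpenAI:

```python
# Back-of-envelope: how a tiny per-inference saving compounds at fleet scale.
# Both inputs are illustrative assumptions, not figures from any report.

def annual_savings(saving_per_inference_usd: float,
                   inferences_per_day: float) -> float:
    """Yearly savings from a fixed cost reduction applied to every inference."""
    return saving_per_inference_usd * inferences_per_day * 365

# Suppose a custom accelerator shaves $0.00002 (two thousandths of a cent)
# off each inference, and the fleet serves 2 billion inferences per day.
savings = annual_savings(2e-5, 2e9)
print(f"${savings:,.0f} per year")  # $14,600,000 per year
```

Even at these deliberately modest assumptions, "a few pennies per inference, times billions of inferences" lands in the tens of millions of dollars annually — before counting the leverage over suppliers and roadmaps the article describes.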

Broadcom’s mysterious $10 billion order — the smoking gun

The story gained momentum after Broadcom told investors it had secured a new customer that had committed roughly $10 billion in AI infrastructure orders. Broadcom did not name the customer; reporters and analysts quickly connected the dots and multiple outlets reported that people familiar with the matter identified OpenAI as the buyer. Broadcom’s comments — and the market reaction — sharpened the narrative that a major AI lab was betting on custom accelerators to scale.

That $10 billion figure matters because it signals a deployment at hyperscaler scale: racks, systems, and potentially millions of chips, not just a handful of prototypes. Industry analysts and trade press have floated similar-sized totals when the implicit customer is a company running large inference fleets. But it’s also worth pausing: the FT and other reports rely on unnamed sources, and both Broadcom and OpenAI declined to comment publicly when first asked — so readers should treat early details as credible reporting that still carries uncertainty.

What this would mean for Nvidia — and for the chip market

Nvidia has dominated AI training and inference for several years, with its GPUs forming the backbone of most cloud and enterprise AI stacks. But when hyperscalers design their own accelerators, they chip away at that dominance — not overnight, but slowly and meaningfully. The pattern is familiar: once a cloud or AI firm proves an internal design can meet cost and performance targets, it will increasingly run its own workloads on those chips, reducing third-party demand and reshaping supplier bargaining power.

That’s already visible in the markets. Broadcom’s earnings call and the $10 billion revelation sent Broadcom shares higher and pressured other chip makers’ stocks, because investors started pricing in a scenario where a major customer is shifting a lot of workload onto custom hardware. Whether Broadcom–OpenAI accelerators ultimately blunt Nvidia’s lead will depend on performance, software maturity, and how broadly the chips are adopted (OpenAI reportedly plans to keep them for internal use, which narrows the competitive impact compared with a chip sold publicly).

The engineering and supply chain puzzle

Designing a chip is the start; getting it into millions of servers requires supply-chain muscle. Broadcom brings experience building custom ASICs and integrating whole rack solutions, while foundries like TSMC do the actual manufacturing work. Hyperscalers that have taken this route historically pair internal architecture teams with external fabs and system integrators to turn designs into shipping racks — it’s a multi-year, capital-intensive effort. That’s why the timeline in the FT report — shipping mass-produced chips in 2026 — is notable: it implies the project has already moved well past sketching into tape-out and system validation.

But there are risks: microarchitecture choices that look great in lab benchmarks can underperform on the messy, real-world workload models actually use. There’s also the software stack: compilers, kernel drivers, and model runtimes must play nicely on the new silicon. Google and AWS had to invest heavily in software to make their chips plug into their ML ecosystems; OpenAI would face similar work to ensure models run correctly and efficiently on a new accelerator.

What to watch next

If you’re tracking this story, a few indicators will be worth watching in the coming months:

  • Official statements and regulatory filings. Broadcom’s investor communications and any follow-up remarks from OpenAI could confirm details (or not). Early reporting is strong but not identical across outlets.
  • Performance hints. Early benchmarks (if any leak) and published case studies will show whether the chips are optimized for training, inference, or both.
  • Supply-chain moves. Partnerships with foundries or system integrators, or signs of large rack purchases, would indicate the program is scaling.

The bigger picture

The chip race isn’t just a technology story — it’s an economic and strategic one. For model builders, owning hardware reduces exposure to pricing swings and supplier outages. For chipmakers, it’s a new revenue stream or a defensive moat. The net effect so far has been to diversify the landscape: Google, Amazon, and now (possibly) OpenAI are betting that bespoke silicon is a meaningful lever. That competition is likely to accelerate innovation — and make the underlying battle for the future of AI as much about engineering economics as about model architectures.

OpenAI’s steps into silicon, if confirmed in the months ahead, would be yet another sign that the next phase of the AI boom is as much about the machinery that runs models as it is about the models themselves.

