OpenAI may launch its first custom AI chip with Broadcom next year

Financial Times reports that OpenAI has partnered with Broadcom on a $10 billion chip deal, signaling a push toward proprietary silicon to power its growing AI models.

By Shubham Sawarkar, Editor-in-Chief
Sep 5, 2025, 8:00 AM EDT
Photo by Pau Barrena / Getty Images

OpenAI, the company behind ChatGPT, looks set to join a small but growing club of tech firms that build their own chips. According to reporting by the Financial Times, OpenAI is on track to start mass-producing a custom AI accelerator next year — a chip the company designed with Broadcom and plans to use inside its own datacentres rather than sell to other customers.

That might sound like a narrow engineering play, but it’s really a strategic maneuver that touches on three huge pressures in the AI era: rising compute costs, spotty supply of high-end accelerators, and the desire to squeeze more performance and efficiency out of model deployment. The move mirrors what Google and Amazon have already done with their in-house silicon — and it explains why Broadcom’s recent announcement of roughly $10 billion in orders from an unnamed customer set markets buzzing with speculation that the buyer was OpenAI.

Why make a chip at all?

Modern large language models and multimodal systems eat compute the way stadiums eat electricity. Training and running them at scale means either paying top dollar for third-party GPUs or building your own hardware stack. Custom chips are attractive because they let companies optimize the instruction set, memory hierarchy, and power profile for the precise math their models do — which can translate into big savings in cost per token, lower power bills, and denser racks in a data centre. Google’s Tensor Processing Units and AWS’s Trainium/Inferentia chips are textbook examples: they were developed to lower costs and increase control over infrastructure while delivering the specific throughput those firms needed.

For OpenAI, which has progressively pushed more demanding products into the wild (and reportedly continues to scale larger models internally), the math can be straightforward: save a few pennies per inference, multiply that by billions of inferences and thousands of servers, and you’re talking meaningful margin and capacity gains. But the strategic upside goes beyond cost. Designing hardware gives a company leverage — over suppliers, over performance roadmaps, and over the painful risk of a single vendor becoming a choke point.
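The back-of-envelope arithmetic above can be sketched in a few lines of Python. Every number here is a hypothetical illustration for scale, not a reported figure for OpenAI's actual costs or volumes:

```python
# Back-of-envelope estimate of annual savings from cheaper inference.
# All inputs are hypothetical illustrations, not reported figures.

def annual_savings(inferences_per_day: float,
                   cost_per_inference_usd: float,
                   efficiency_gain: float) -> float:
    """Yearly savings if custom silicon cuts per-inference cost
    by the fraction `efficiency_gain` (e.g. 0.30 = 30% cheaper)."""
    daily_cost = inferences_per_day * cost_per_inference_usd
    return daily_cost * 365 * efficiency_gain

# Example: 2 billion inferences/day at $0.002 each, 30% cheaper on custom chips.
savings = annual_savings(2e9, 0.002, 0.30)
print(f"${savings:,.0f} per year")  # → $438,000,000 per year
```

Even with modest assumptions, fractions of a cent per inference compound into hundreds of millions of dollars annually at fleet scale, which is why the fixed cost of a chip program can pencil out.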

Broadcom’s mysterious $10 billion order — the smoking gun

The story gained momentum after Broadcom told investors it had secured a new customer that had committed roughly $10 billion in AI infrastructure orders. Broadcom did not name the customer; reporters and analysts quickly connected the dots and multiple outlets reported that people familiar with the matter identified OpenAI as the buyer. Broadcom’s comments — and the market reaction — sharpened the narrative that a major AI lab was betting on custom accelerators to scale.

That $10 billion figure matters because it signals a deployment at hyperscaler scale: racks, systems and potentially millions of chips, not just a handful of prototypes. Industry analysts and trade press have floated similar-sized totals when the implicit customer is a company running large inference fleets. But it’s also worth pausing: the FT and other reports rely on unnamed sources and both Broadcom and OpenAI declined to comment publicly when first asked — so readers should treat early details as credible reporting that still contains uncertainty.

What this would mean for Nvidia — and for the chip market

Nvidia has dominated AI training and inference for several years, with its GPUs forming the backbone of most cloud and enterprise AI stacks. But when hyperscalers design their own accelerators, they chip away at that dominance — not overnight, but slowly and meaningfully. The pattern is familiar: once a cloud or AI firm proves an internal design can meet cost and performance targets, it will increasingly run its own workloads on those chips, reducing third-party demand and reshaping supplier bargaining power.

That’s already visible in the markets. Broadcom’s earnings call and the $10 billion revelation sent Broadcom shares higher and pressured other chip makers’ stocks, because investors started pricing in a scenario where a major customer is shifting a lot of workload onto custom hardware. Whether Broadcom–OpenAI accelerators ultimately blunt Nvidia’s lead will depend on performance, software maturity, and how broadly the chips are adopted (OpenAI reportedly plans to keep them for internal use, which narrows the competitive impact compared with a chip sold publicly).

The engineering and supply chain puzzle

Designing a chip is the start; getting it into millions of servers requires supply-chain muscle. Broadcom brings experience building custom ASICs and integrating whole rack solutions, while foundries like TSMC do the actual manufacturing work. Hyperscalers that have taken this route historically pair internal architecture teams with external fabs and system integrators to turn designs into shipping racks — it’s a multi-year, capital-intensive effort. That’s why the timeline in the FT report — shipping mass-produced chips in 2026 — is notable: it implies the project has already moved well past sketching into tape-out and system validation.

But there are risks: microarchitecture choices that look great in lab benchmarks can underperform on the messy, real-world workload models actually use. There’s also the software stack: compilers, kernel drivers, and model runtimes must play nicely on the new silicon. Google and AWS had to invest heavily in software to make their chips plug into their ML ecosystems; OpenAI would face similar work to ensure models run correctly and efficiently on a new accelerator.

What to watch next

If you’re tracking this story, a few indicators will be worth watching in the coming months:

  • Official statements and regulatory filings. Broadcom’s investor communications and any follow-up remarks from OpenAI could confirm details (or not). Early reporting is strong but not identical across outlets.
  • Performance hints. Early benchmarks (if any leak) and published case studies will show whether the chips are optimized for training, inference, or both.
  • Supply-chain moves. Partnerships with foundries or system integrators, or signs of large rack purchases, would indicate the program is scaling.

The bigger picture

The chip race isn’t just a technology story — it’s an economic and strategic one. For model builders, owning hardware reduces exposure to pricing swings and supplier outages. For chipmakers, it’s a new revenue stream or a defensive moat. The net effect so far has been to diversify the landscape: Google, Amazon, and now (possibly) OpenAI are betting that bespoke silicon is a meaningful lever. That competition is likely to accelerate innovation — and make the underlying battle for the future of AI as much about engineering economics as about model architectures.

OpenAI’s steps into silicon, if confirmed in the months ahead, would be yet another sign that the next phase of the AI boom is as much about the machinery that runs models as it is about the models themselves.

