NVIDIA is on fire. The chipmaker just reported a whopping $14 billion profit in a single quarter, and a big chunk of that success comes from its artificial intelligence (AI) chips. To capitalize on this booming market, NVIDIA is making a major change: it will now design new AI chips annually, instead of the previous two-year cycle.
This announcement came from NVIDIA CEO Jensen Huang himself during the company’s Q1 2025 earnings call. “We’re on a one-year rhythm,” Huang declared, signaling a significant shift in NVIDIA’s development strategy.
Previously, NVIDIA released new chip architectures every two years. For instance, they launched Ampere in 2020, followed by Hopper in 2022, and most recently, Blackwell in 2024. (These architectures aren’t just for AI – they power NVIDIA’s popular gaming and creative GPUs as well.) Analyst Ming-Chi Kuo had already predicted the next architecture, “Rubin,” to arrive in 2025, potentially bringing the R100 AI GPU to market next year. Huang’s comments strongly suggest Kuo might be right on the money.
The faster development cycle isn’t limited to AI chips. According to Huang, NVIDIA plans to accelerate the production of all its chips to match this new yearly cadence. “New CPUs, new GPUs, new networking NICs, new switches… a mountain of chips are coming,” he stated.
This announcement raises a question: how will the new Blackwell GPUs ramp up while Hopper is still selling well? Huang addressed this during the call, assuring customers that NVIDIA’s latest AI chips maintain electrical and mechanical backward compatibility. This means they’ll run the same software as their predecessors, allowing for a smooth transition within existing data centers. “Customers will easily transition from H100 to H200 to B100,” Huang explained.
Huang also shed light on the tremendous demand for NVIDIA’s AI products. “We expect demand to outstrip supply for some time,” he admitted, referring to the upcoming H200 and Blackwell generations. He attributes this to the significant cost savings and revenue generation these chips enable for customers eager to get their AI infrastructure up and running.
Huang employed a bit of “fear of missing out” (FOMO) marketing during the call: “The next company who reaches the next major plateau gets to announce a groundbreaking AI, and the second one after that gets to announce something that’s 0.3 percent better. Do you want to be the company delivering groundbreaking AI, or the company, you know, delivering 0.3 percent better?”
NVIDIA’s Chief Financial Officer (CFO), Colette Kress, chimed in with some interesting insights as well. According to her, the automotive industry will become NVIDIA’s “largest enterprise vertical within data center this year.” This news comes on the heels of Tesla’s purchase of 35,000 H100 GPUs to train its “Full Self-Driving” system. “Consumer internet companies” like Meta are also expected to remain a strong growth area. Notably, some customers have already purchased or plan to purchase over 100,000 H100 GPUs, with Meta aiming to have a staggering 350,000 of them operational by the end of the year.