GadgetBond

AI • OpenAI • Tech

OpenAI launches GPT-5.4 mini and nano for faster, cheaper AI

OpenAI is shrinking GPT-5.4 down into mini and nano models that stay fast, cheap, and surprisingly close to flagship performance.

By Shubham Sawarkar, Editor-in-Chief
Mar 18, 2026, 11:50 AM EDT
We may get a commission from retail offers. Learn more
Bright abstract background in soft orange, pink, and yellow gradients with two rounded white buttons centered, labeled “5.4 mini” and “5.4 nano” in gray text, representing OpenAI’s small GPT-5.4 models.
Image: OpenAI

OpenAI is rolling out two new small AI models, GPT-5.4 mini and GPT-5.4 nano, and the easiest way to think about them is this: they’re meant to feel fast and cheap like “lite” models, but perform uncomfortably close to the big flagship GPT-5.4 in a lot of real work.

The company describes them as its most capable small models yet, tuned for high-volume workloads where latency and cost matter as much as sheer intelligence. GPT-5.4 mini is the star of the pair: it builds on the previous GPT-5 mini, pushing coding, reasoning, multimodal understanding, and tool use to a new level while running more than twice as fast. On the publicly reported SWE‑Bench Pro coding benchmark, GPT-5.4 mini hits 54.4% versus 57.7% for the full GPT-5.4, edging remarkably close to the flagship while leaving the older GPT-5 mini’s 45.7% score far behind. GPT-5.4 nano sits one step down the ladder: it is the smallest, cheapest version of 5.4, explicitly positioned for classification, data extraction, ranking, routing, and simple coding sub‑tasks where speed and predictable behavior at massive scale beat raw capability.

Both models exist for a specific kind of modern AI workload that has exploded over the last year: agents, copilots, background workers, and little “helpers” you never see but feel every time an app responds instantly instead of making you wait. OpenAI is blunt about this in its launch materials, calling out use cases like coding assistants that must feel responsive, sub‑agents that run in parallel on narrow tasks, and “computer-using” systems that need to parse screenshots and dense interfaces in real time. In that context, the best model is no longer the giant thinking slowly in the background; it’s the smaller model that can call tools reliably, navigate a codebase, or understand a UI screenshot at a pace that keeps a user in flow.

On the numbers, GPT-5.4 mini looks like the default “do-everything” small model in OpenAI’s lineup now. It offers a 400,000‑token context window, supports text and image inputs, tool use, function calling, web search, file search, computer use and skills, and is available across the API, Codex, and ChatGPT. In the API, OpenAI prices GPT-5.4 mini at roughly the mid‑range of its catalog: the company’s pricing page lists it at about three‑quarters of a dollar per million input tokens and a few dollars per million output tokens, in line with the announcement’s “one‑third the cost” framing versus full GPT-5.4 in Codex. The model also uses only 30% of the GPT-5.4 quota inside Codex, letting developers offload simpler coding tasks to mini without burning through their premium budget. On the user side, GPT-5.4 mini is already wired into ChatGPT: Free and Go users can access it via the “Thinking” option in the + menu, and for others, it acts as a rate‑limit fallback when GPT-5.4 Thinking is saturated.
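For developers already on the OpenAI SDK, adopting mini is mostly a model-string change. The sketch below assembles a chat request the way the official Python SDK expects it, but stops short of sending it; the `gpt-5.4-mini` identifier is an assumption inferred from the article's naming, not a confirmed API string.

```python
# Hypothetical request shape for GPT-5.4 mini via the OpenAI Python SDK.
# The model identifier "gpt-5.4-mini" is assumed from the article's
# naming and may differ from the actual API string.

def build_mini_request(prompt: str, max_output_tokens: int = 512) -> dict:
    """Assemble keyword arguments for client.chat.completions.create()."""
    return {
        "model": "gpt-5.4-mini",  # assumed identifier
        "messages": [
            {"role": "system", "content": "You are a fast coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_output_tokens,
    }

request = build_mini_request("Summarize this stack trace: ...")
print(request["model"])
```

With the real SDK, this dict would be unpacked into `client.chat.completions.create(**request)`; pointing the same shape at a nano identifier is how the cheaper tier would slot into high-volume sub-agent work.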

Nano is even more aggressive on cost. OpenAI says GPT-5.4 nano is only available through the API, starting at about $0.20 per million input tokens and $1.25 per million output tokens, making it one of the cheapest ways to tap into the GPT-5.4 family at scale. The trade‑off is capability: while it’s still a clear upgrade over GPT-5 nano and even surpasses last‑generation GPT-5 mini on some coding metrics, it’s not designed to be your primary reasoning engine or your main code copilot. It shines when you want to run thousands of parallel calls to classify documents, extract structured fields from messy text, rank search results, route requests to different backends, or let a bigger orchestrator model delegate small, well‑scoped jobs.
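To make nano's pricing concrete, here is a back-of-the-envelope cost estimate using the article's quoted figures ($0.20 per million input tokens, $1.25 per million output tokens). The workload sizes are illustrative assumptions, not OpenAI's numbers.

```python
# Rough cost estimate for a bulk classification job on GPT-5.4 nano,
# using the per-token prices quoted in the article. Document counts and
# token sizes below are illustrative assumptions.

NANO_INPUT_PER_M = 0.20   # USD per million input tokens (article figure)
NANO_OUTPUT_PER_M = 1.25  # USD per million output tokens (article figure)

def job_cost(docs: int, in_tokens_per_doc: int, out_tokens_per_doc: int) -> float:
    """Total USD cost for running `docs` documents through nano."""
    input_cost = docs * in_tokens_per_doc / 1_000_000 * NANO_INPUT_PER_M
    output_cost = docs * out_tokens_per_doc / 1_000_000 * NANO_OUTPUT_PER_M
    return input_cost + output_cost

# e.g. 100k documents, ~500 input tokens each, ~20-token labels out:
print(f"${job_cost(100_000, 500, 20):.2f}")  # → $12.50
```

At that rate, even million-document pipelines stay in the tens of dollars, which is the scale argument the article is making.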

Benchmarks tell most of the story. Across the GPT-5.4 family, GPT-5.4 mini and nano both break the 50% mark on SWE‑Bench Pro, the tough, real‑world coding benchmark that asks models to fix actual GitHub bugs rather than answer toy questions. GPT-5.4 mini lands at 54.4% and nano at 52.4%, compared with 57.7% for full GPT-5.4 and 45.7% for GPT-5 mini. On Terminal‑Bench 2.0, which stresses terminal interactions and system operations, the gap between generations is even more striking: GPT-5.4 mini scores 60.0% versus just 38.2% for GPT-5 mini, while nano reaches 46.3%. This is what has a lot of developers excited: smaller models are no longer clearly “second tier” for code—they’re closing the gap, especially when cost and latency are factored in.

Tool use and “agentic” behavior are another big focus. On MCP Atlas and Toolathlon, two benchmarks for tool‑calling and real‑world API orchestration, GPT-5.4 mini again lands much closer to GPT-5.4 than to the older mini, and nano stays competitive given its size and price. The τ2‑bench telecom benchmark, which tests industry‑specific tool use, shows GPT-5.4 mini at over 93% accuracy, approaching GPT-5.4’s near‑perfect score and leaving GPT-5 mini far behind. In practical terms, this means that the pattern of “big planner, small executors” for AI systems is becoming more viable: a flagship model decides what to do, and mini or nano agents actually call APIs, run commands, and clean up data in the background at scale.
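The "big planner, small executors" pattern described above can be sketched as plain control flow: one call to a flagship model produces a plan, and each step is delegated to a cheaper small-model call running in parallel. Everything below is stubbed — `plan` and `execute` stand in for real API calls and are illustrative assumptions, not OpenAI code.

```python
# Sketch of the "big planner, small executors" pattern: a flagship model
# decomposes work into narrow subtasks, and small models execute them in
# parallel. Model calls are stubbed here; in practice each would be an
# API request (planner -> GPT-5.4, executors -> mini or nano).
from concurrent.futures import ThreadPoolExecutor

def plan(goal: str) -> list[str]:
    """Stub for a flagship-model call that splits a goal into subtasks."""
    return [f"{goal}: step {i}" for i in range(1, 4)]

def execute(subtask: str) -> str:
    """Stub for a small-model call that handles one narrow subtask."""
    return f"done({subtask})"

def run(goal: str) -> list[str]:
    subtasks = plan(goal)               # one expensive planner call
    with ThreadPoolExecutor() as pool:  # many cheap executor calls
        return list(pool.map(execute, subtasks))

print(run("migrate config files"))
```

The economics follow directly: the single planner call pays flagship rates, while the fan-out — where most of the token volume lives — runs at mini or nano prices.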

Where mini really separates from nano is multimodal and computer‑use performance. On OSWorld‑Verified, a benchmark that asks models to control computers via screenshots and complex UIs, GPT-5.4 mini hits 72.1%, right on the heels of GPT-5.4’s 75.0%. Nano, by contrast, drops to 39.0%, even slightly below the previous GPT-5 mini’s 42.0%, which underlines that it simply isn’t built to drive full computer‑use agents. On broader multimodal benchmarks like MMMU Pro and OmniDocBench, mini again lands much closer to the flagship than its size suggests, while nano trades away visual reasoning power to hit its latency and cost targets.

Long‑context performance is more nuanced. All three 5.4 models keep very large context windows, but GPT-5.4 still leads on the toughest long‑needle tests, especially above 128K tokens. GPT-5.4 mini remains usable in the 64K–256K range, but scores drop compared to the flagship, and nano trails further. For most developers, though, the headline is that mini keeps a 400K context window while still being “small model fast,” which is a big deal if you’re stuffing logs, large documents, or multi‑file codebases into one request.

From the ecosystem’s perspective, this launch is as much about reshaping the pricing and capability curve as it is about new features. Analysts have been quick to point out that GPT-5.4 mini and nano essentially attack the mid‑ and low‑end segments that used to be dominated by “cheap but clearly weaker” models. Now, for many workloads, the choice isn’t “big smart model vs small dumb model,” but “big model vs small model that’s good enough and 3–5x cheaper.” For startups building AI products, that can be the difference between a fun demo and a unit‑economics‑positive business.

At the same time, these releases reinforce a broader trend: the real frontier isn’t just more raw intelligence; it’s orchestrating multiple models, tools, and interfaces in ways that feel seamless to users. GPT-5.4 mini and nano are clearly designed as building blocks for that future—a fast backbone for agents, copilots, and invisible background AI. If GPT-5.4 is the brain that plans, mini and nano are the hands that actually do the work, quietly, millions of times a day.



Topics: ChatGPT, ChatGPT Atlas, OpenAI Codex

Copyright © 2026 GadgetBond. All Rights Reserved.