OpenAI is phasing out a batch of older Codex models on April 14, gently nudging developers toward a smaller, cleaner lineup built around GPT-5.4 and the newer 5.3‑based coding models. The move is less about killing off features than about collapsing a messy model list into a few workhorses that can handle most coding and “AI pair programmer” tasks on their own.
In a post on X, the OpenAI Developers account announced that when you sign in to Codex with a ChatGPT account after April 14, you’ll no longer see six older options: gpt-5.2-codex, gpt-5.1-codex-mini, gpt-5.1-codex-max, gpt-5.1-codex, gpt-5.1, and gpt-5. For anyone who has been relying on these, the key detail is that this change only affects Codex when you log in with ChatGPT; if you really need a retired model, you can still bring your own OpenAI API key and keep using it outside this streamlined Codex picker.
Once the switch flips, the default Codex model lineup under ChatGPT sign‑in will center on gpt-5.4, gpt-5.4-mini, gpt-5.3-codex, gpt-5.3-codex-spark (reserved for Pro users), and gpt-5.2. OpenAI’s own guidance is pretty straightforward: start with gpt-5.4 for most coding tasks, drop to gpt-5.4-mini when you care more about speed and cost than raw depth, and turn to the 5.3‑based Codex models when you want something tuned specifically for code, long‑running software tasks, or ultra‑fast interactive coding sessions.
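That guidance boils down to a simple decision rule. As a rough sketch, it might look like the following helper; the function name and task categories are illustrative only (they paraphrase the article and are not part of any OpenAI SDK), and the Pro-only restriction on Spark follows the lineup described above.

```python
# Illustrative sketch of OpenAI's stated guidance for the new Codex lineup.
# The function name and task labels are hypothetical, not an official API.

def pick_codex_model(task: str, prefer_speed: bool = False,
                     pro_user: bool = False) -> str:
    """Suggest a model name for a given kind of coding task."""
    if task == "interactive-coding" and pro_user:
        # Ultra-fast, near-real-time coding sessions (Pro-only per the lineup)
        return "gpt-5.3-codex-spark"
    if task in ("refactor", "long-running-software-task", "interactive-coding"):
        # Code-specialized, agentic work
        return "gpt-5.3-codex"
    if prefer_speed:
        # Speed and cost matter more than raw depth
        return "gpt-5.4-mini"
    # Default for most coding tasks
    return "gpt-5.4"
```

In practice the point is that these four buckets cover the whole picker, which is exactly the simplification OpenAI is aiming for.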
This deprecation fits into a bigger trend at OpenAI: the company has been slowly pruning older or overlapping models for a few years, including classic Codex APIs in 2023 and several GPT-4 variants more recently, with official documentation that lists timed shutdowns and recommended replacements. The goal is to move everyone toward a smaller set of “frontier” models that do more, cost less per unit of work, and are easier to manage at scale, instead of making developers pick from a long, confusing list of nearly identical options.
On the Codex side, the new generation is clearly built around GPT-5.3-Codex and GPT-5.4. GPT-5.3-Codex is positioned as OpenAI’s “agentic” coding model, designed to handle longer‑horizon jobs like refactoring big codebases, wiring up tools, and running multi‑step plans with minimal hand‑holding. GPT-5.3-Codex-Spark then takes that core and shrinks it into a small, speed‑first model that can answer and edit almost in real time, hitting very high token‑per‑second rates on benchmarks while still solving serious software tasks. GPT-5.4, meanwhile, blends these coding chops into a more general‑purpose reasoning model that is meant to be the new default brain for Codex and other professional workflows, so you do not have to switch models every time you move from debugging to writing docs or planning a new feature.
For everyday Codex users, the most visible change will probably be that the model picker gets simpler. Instead of scrolling past a dozen similar‑sounding 5.1 and 5.2 Codex variants, you will mostly be choosing between “main” (gpt-5.4), “cheaper/faster” (gpt-5.4-mini), and “coding‑specialist” models (5.3-codex and Spark). That design is intentional: OpenAI engineers and community members have pointed out that the new models are meant to fold the strengths of the old ones—speed of mini, depth of max, and the coding intuition of 5.2-codex—into fewer SKUs so the interface can move closer to a single “just use the best one” button.
Still, not everyone is thrilled. In the replies to OpenAI’s announcement, some developers are openly sentimental about models like gpt-5.2 and gpt-5.1-codex-mini, which they say are still excellent for tasks like aggressive prompt compaction or as cheap helper models behind the scenes. Others are worried that 5.3-codex might eventually follow the same path unless there is a clear, equally specialized 5.4/5.5‑era Codex replacement that matches its behavior on tightly scoped coding work. That tension—between stability for existing workflows and the push to consolidate around newer, more capable systems—is now a recurring theme in OpenAI’s deprecation timeline.
Practically, the April 14 change forces anyone with a Codex‑heavy workflow to do a quick audit. If your scripts or internal tools explicitly target one of the outgoing models via Codex’s ChatGPT‑linked interface, you will want to migrate those flows to gpt-5.4, 5.4-mini, or one of the 5.3-Codex variants to avoid surprises. Teams that really depend on a specific retired model can insulate themselves by switching to API‑key‑based access, at least in the short term, but long term, the direction of travel is clear: OpenAI expects serious users to keep moving forward as new frontier models pick up the torch from each old generation.
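A quick audit like that can be as simple as a lookup table over the six retiring names. The mapping below is an assumption pieced together from OpenAI's guidance as reported here, not an official compatibility table, so treat each suggested replacement as a starting point to validate against your own workloads.

```python
# Hypothetical migration table: the six retiring models mapped to plausible
# replacements based on the article's guidance. NOT an official OpenAI table.

DEPRECATED_REPLACEMENTS = {
    "gpt-5.2-codex": "gpt-5.3-codex",      # coding specialist -> coding specialist
    "gpt-5.1-codex-mini": "gpt-5.4-mini",  # cheap helper -> cheap/fast generalist
    "gpt-5.1-codex-max": "gpt-5.3-codex",  # long-horizon coding work
    "gpt-5.1-codex": "gpt-5.3-codex",
    "gpt-5.1": "gpt-5.4",                  # general-purpose default
    "gpt-5": "gpt-5.4",
}

def migrate_model(model: str) -> str:
    """Return the suggested replacement, or the model unchanged if it survives."""
    return DEPRECATED_REPLACEMENTS.get(model, model)
```

Running something like this over your configs and scripts flags exactly which flows will break after April 14 and what to point them at instead.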
The broader takeaway is that Codex is quietly shifting from “a catalog of models” to “a single, evolving coding assistant” that just happens to have a few different modes under the hood. The retirement of older Codex models is one more step toward that vision, where the defaults keep getting smarter, faster, and more capable, and the average developer no longer needs to be a model‑selection expert just to ship software.