OpenAI quietly rolled back one of the most controversial parts of the GPT-5 launch: users can once again choose older models, and there's a new model picker for GPT-5 with three distinct modes. The change is a clear nod to the online outcry that erupted after GPT-5 became the default and people complained it lacked the personality they liked in GPT-4o. If you use ChatGPT regularly, this is the update that actually affects your day-to-day: more choices, more limits, and a reminder that companies still have to balance improvement with the emotional attachments people form to digital assistants.
Sam Altman announced on X (formerly Twitter) that ChatGPT’s model picker for GPT-5 now includes three selectable modes: Auto, Fast, and Thinking. Altman says most people “will want Auto,” which acts as the sensible default, but you can opt for a speed-first experience or a version that takes longer to produce more comprehensive answers. OpenAI has also reintroduced GPT-4o into the model list for paying users, and there’s a “show additional models” toggle that surfaces older options like o3, GPT-4.1, and GPT-5 Thinking mini.
Previously, GPT-5’s rollout had simplified choices, a design decision meant to reduce confusion, but it removed models some users preferred. That created real friction: people weren’t just grumpy about performance; they missed specific tonal quirks and behaviors of GPT-4o. OpenAI’s response has been to restore choice rather than force everyone onto one path.
When GPT-5 launched, OpenAI leaned into the idea of a single, more capable model that could handle most tasks. But a subset of users pushed back — not only about accuracy or capability, but about voice and personality. GPT-4o had a particular style some people liked; removing it without warning felt abrupt. After the backlash, OpenAI reversed course: GPT-4o returned to the picker for paying accounts, and the company promised it would give “plenty of notice” before deprecating older models in the future.
The company also said it’s tuning GPT-5’s personality: the goal is to make it “warmer” without replicating the parts of GPT-4o that users found irritating. That line, “warmer, not as annoying (to most users) as GPT-4o,” is telling: OpenAI wants to preserve usefulness while avoiding the quirks that made some people love (and others loathe) previous versions.
Not all models are equally available. OpenAI is gating higher-cost models behind subscription tiers. For example, access to GPT-4.5 and other premium models remains tied to the $200/month ChatGPT Pro plan; OpenAI says the more capable variants “cost a lot of GPUs,” which is why they’re limited to higher-tier subscriptions. Meanwhile, GPT-5 Thinking is subject to rate limits: OpenAI has tested a 3,000-messages-per-week cap, after which you’ll be switched to GPT-5 Thinking mini. Those rate limits are one of the mechanisms OpenAI is using to balance demand against compute costs.
Translation: you can choose a “deeper thinking” model if you need it, but it isn’t unlimited or free — higher fidelity costs real compute, and OpenAI is explicitly rationing that access through pricing and caps.
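To make the cap-and-fallback idea concrete, here is a minimal sketch of how a weekly message budget with an automatic downgrade to a lighter model might work. This is purely illustrative under the article’s description: the model identifiers, the router class, and the enforcement logic are assumptions for the sake of the example, not OpenAI’s actual API or how ChatGPT implements its limits.

```python
# Illustrative sketch only: a weekly cap with an automatic fallback to a
# lighter model, mirroring the reported ChatGPT behavior. Model names and
# enforcement details are hypothetical, not a real OpenAI API.

from dataclasses import dataclass

WEEKLY_THINKING_CAP = 3000  # messages per week, per the reported test cap


@dataclass
class ModelRouter:
    used_this_week: int = 0

    def pick_model(self) -> str:
        # Stay on the full "Thinking" model until the weekly budget is spent,
        # then downgrade to the lighter "mini" variant.
        if self.used_this_week < WEEKLY_THINKING_CAP:
            return "gpt-5-thinking"        # hypothetical identifier
        return "gpt-5-thinking-mini"       # hypothetical fallback identifier

    def record_message(self) -> None:
        self.used_this_week += 1

    def reset_week(self) -> None:
        self.used_this_week = 0


if __name__ == "__main__":
    router = ModelRouter()
    for _ in range(WEEKLY_THINKING_CAP + 1):
        router.record_message()
    print(router.pick_model())  # prints "gpt-5-thinking-mini" once the cap is hit
```

The point of the sketch is the design choice itself: rather than hard-refusing requests, the system quietly routes overflow traffic to a cheaper variant, which keeps the experience uninterrupted while containing compute costs.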
