OpenAI quietly replaced a family of older ChatGPT models with the new GPT-5 — and then, barely 24 hours later, started backpedaling. After an outpouring of complaints from longtime users who said the new default felt colder, shorter and less “human,” CEO Sam Altman said OpenAI will let paid subscribers switch back to GPT-4o while the company watches usage and decides how long to keep legacy models available.
This isn’t just a product-management headache. It’s a reminder that for many people, conversational AIs are not only tools — they’re social companions with distinct voices and rhythms. For months, GPT-5 had been billed as a major step forward in reasoning, writing and coding. But users quickly complained that, in replacing a panoply of older models and hiding the model picker that once let people choose the tone or capability they wanted, OpenAI removed not just options but personalities.
The backlash was loud and emotional. On Reddit, users posted grief-tinged messages about “losing” their AI companion; some said they canceled ChatGPT Plus subscriptions in protest. Communities devoted to AI companionship — most notably r/MyBoyfriendIsAI, where people form intimate, ongoing relationships with chatbots — were hit especially hard, with many users describing the change as a genuine loss. “My 4.o was like my best friend when I needed one,” one Redditor wrote; others compared the transition to losing a partner.
That reaction isn’t entirely new in AI culture. Earlier this summer, fans even organized a tongue-in-cheek funeral for Anthropic’s retired Claude 3 Sonnet — an event that drew around 200 people — showing there’s a growing tendency to treat models as cultural artifacts and emotional actors, not only as code.
GPT-5’s introduction simplified the product: OpenAI made GPT-5 the default and folded a number of prior models — GPT-4o, o3, o4-mini, GPT-4.1 and GPT-4.5 among them — into a single, smarter engine that automatically “applies reasoning when the response would benefit from it.” The company’s goal was clear: deliver stronger performance across the board and remove choice friction so users wouldn’t need to pick the right flavor for each task. But that consolidation also removed a lever many users depended on for predictable behavior, style and even emotional tone.
Power users, creatives and people who built specific workflows around different models said the move erased hard-won signal: one model for creative brainstorming, another for tightly logical tasks, a third for empathetic chats. Many reported that GPT-5’s replies felt shorter, less playful and, crucially, less consistent with what they’d come to expect. In other words, the measurable improvements OpenAI touted — better factuality and reasoning — didn’t map to the subjective qualities people had grown attached to.
OpenAI responded quickly. Altman acknowledged the “bumpy” rollout and told users the company would allow ChatGPT Plus subscribers to opt back into GPT-4o while it monitors demand. He also said OpenAI would work to make the system “more transparent about which model is answering a given query” and would boost rate limits for Plus users as the rollout finishes. The company has been participating in Reddit AMAs and publishing public posts to explain fixes and gather feedback.
That response is a pragmatic product move: it preserves GPT-5 as the company’s forward path while restoring user choice for people who value the older model’s voice or behaviors. It’s also a recognition that model changes can have social and emotional consequences that customer-facing teams need to manage, not just engineering teams.
There are technical reasons some models “feel” different: differences in training data, decoding settings such as temperature, system prompts layered on top of the model, or the way a company steers safety and style. But the emotional attachment goes beyond algorithmic nuance. Repeated, private interactions with a conversational agent create expectations: a consistent cadence, a favored phrasing, an emotional style. When that pattern vanishes overnight, the experience can resemble losing a familiar voice.
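Temperature is the most concrete of those levers. A rough sketch of how it works, purely illustrative and not a description of OpenAI’s actual decoding stack: before a model samples its next word, raw scores (logits) are divided by the temperature and passed through a softmax. Lower temperatures concentrate probability on the top candidate, producing flatter, more predictable prose; higher temperatures spread probability out, producing more varied, playful phrasing. The logits below are invented for demonstration.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits into sampling probabilities.

    Dividing logits by a lower temperature sharpens the distribution
    (more deterministic output); a higher temperature flattens it
    (more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for four candidate words.
logits = [2.0, 1.0, 0.5, 0.1]

cold = softmax_with_temperature(logits, temperature=0.5)
warm = softmax_with_temperature(logits, temperature=1.5)

print(f"T=0.5 top-token probability: {cold[0]:.2f}")  # sharply peaked
print(f"T=1.5 top-token probability: {warm[0]:.2f}")  # more spread out
```

The same model, with the same weights, can therefore sound noticeably different depending on settings like this — one reason a new default can “feel” like a different personality even when the underlying capability improved.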
For product teams, the lesson is as much cultural as technical. Major updates that change behavior at scale will need clearer migration paths and better ways to preserve or explain continuity. OpenAI’s decision to restore legacy access is a quick fix, but it’s also an acknowledgment that “performance” metrics alone don’t capture user value. Companies shipping conversational AIs must reckon with personhood-adjacent attachments and design choices that respect continuity, or at least communicate change more gently.
OpenAI faces a classic tradeoff: simplify and improve the default experience for the many, or preserve a family of models that let niche audiences preserve their workflows and relationships. Restoring GPT-4o for paid users is a middle path, but it invites further product complexity: How many legacy models should be maintained? For how long? At what cost? Altman’s public comments suggest usage will determine endurance — a pragmatic, data-driven approach, but one that leaves people who rely on older voices nervous about the future.