There’s a question that doesn’t get asked enough amid all the enthusiasm around AI chatbots: What happens to the way you think when a machine starts doing the thinking for you? A growing group of scientists and psychologists believe they’ve started to find an answer — and it’s a little unsettling. According to a new opinion paper published in the journal Trends in Cognitive Sciences, the mass adoption of large language models (LLMs) like ChatGPT isn’t just changing how we work or write. It may be quietly eroding one of the most fundamentally human things we have — the unique, messy, sometimes brilliant way each of us thinks.
The paper, co-authored by a team of computer scientists and psychologists, including lead author Zhivar Sourati of the University of Southern California, argues that when hundreds of millions of people rely on the same small pool of AI systems to help them reason, write, and communicate, the result is an inevitable flattening of thought. “The richness of how different people write, argue, and think is one of humanity’s most valuable cognitive resources,” Sourati told CNET. And right now, that richness is at risk.
To understand why this matters, you have to appreciate just how fast the world has leaned into AI. According to Pew Research, 34% of all American adults used ChatGPT in 2024 — double the figure from 2023. Among teenagers, the numbers are even more striking: two-thirds say they use chatbots, and nearly a third use them every single day. It doesn’t stop with individuals either. Stanford’s AI Index found that 78% of organizations reported using AI in 2024, up sharply from 55% the year before. That’s an enormous slice of the world’s communication, decision-making, and creative output being routed through the same few systems.
And here’s the thing about those systems — they’re not neutral. LLMs are trained on vast pools of data scraped from the internet, and that data doesn’t represent humanity equally. It skews heavily toward Western, educated, industrialized, rich, and democratic societies, which researchers shorthand as “WEIRD.” Because LLMs are built to identify and reproduce statistical patterns in that training data, their outputs tend to mirror a narrow, particular slice of human experience. Put more plainly: when you ask ChatGPT to help you write something, the response you get reflects a pretty specific worldview. And if everyone is getting a version of that same response, the diversity of expression across billions of people starts to narrow.
What’s especially clever about the researchers’ framing is that the concern isn’t just about people copying AI outputs wholesale. It’s more subtle than that. When you use a chatbot to polish an essay or draft a reply, your writing loses its stylistic fingerprint. You feel less creative ownership over what you produce. Over time, you start to defer to what the model suggests, choosing options that seem “good enough” rather than pushing toward something genuinely your own. Sourati puts it precisely: “Rather than actively steering generation, users often defer to model-suggested continuations, selecting options that seem ‘good enough’ instead of crafting their own, which gradually shifts agency from the user to the model.” It’s a slow handover, and most people don’t notice it happening.
The paper also touches on something that researchers have been quietly documenting for a while: LLMs tend to favor specific styles of reasoning. They love what’s called “chain-of-thought reasoning” — a methodical, step-by-step way of working through a problem. That sounds fine, even desirable. But it comes at a cost. It sidelines intuitive and abstract reasoning styles, which are sometimes faster, more creative, and better suited to certain problems. Think about how a seasoned doctor or an experienced designer often arrives at the right answer not through explicit logical steps, but through a kind of gut instinct built on years of pattern recognition. That kind of thinking is harder to model, and so it tends to get squeezed out when AI sets the template.
And then there’s the opinion effect, which is arguably the most politically significant finding the researchers point to. Studies have shown that after people interact with biased LLMs, their views tend to shift closer to the perspective expressed by the model. Sterling Williams-Ceci, an information scientist at Cornell University and co-author on a related piece in Nature, notes that this dynamic could eventually reduce the diversity of political views, with the direction of that shift depending on the ideological leanings embedded in whichever LLMs someone happens to use. It’s a sobering thought: AI systems, depending on how they’re built and what data they’re trained on, could become invisible nudges on public opinion at a civilizational scale.
What makes the researchers particularly concerned is that this effect doesn’t just touch people who actively use these tools. Social pressure does the rest. If everyone around you has started communicating in a smoother, more uniform, AI-polished way, the rougher edges of your own expression can start to feel out of place. “If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas,” Sourati explains. Owen Muir, an interventional psychiatrist, agrees: this “more average language” gets baked into human communication even when the machines aren’t in the room.
This is what makes the LLM moment different from every technological shift that came before it. The internet accelerated the spread of dominant cultural norms. GPS eroded localized spatial reasoning. Social media created filter bubbles. But those earlier technologies were tools for storage, retrieval, and distribution. They didn’t generate the reasoning itself. LLMs do. They write the conclusion, frame the argument, suggest the perspective, and they do it simultaneously for hundreds of millions of people. As Sourati says, “the homogenizing force is unlike anything previous technology has produced.”
The researchers aren’t calling for a halt to AI development. Their prescription is more measured, but important: AI developers need to intentionally build more cognitive and linguistic diversity into the models themselves. That means expanding training data beyond the well-worn corners of the English-speaking internet, representing more reasoning styles and cultural perspectives, and building systems that actively support the user’s own voice rather than replacing it. “We need to diversify the AI models themselves while also adjusting how we interact with them, especially given their widespread use across tasks and contexts, to protect the cognitive diversity and ideation potential of future generations,” Sourati writes.
Interestingly, there’s also a practical case here that goes beyond the philosophical. Research consistently shows that groups that bring diverse thinking to a problem outperform both individuals and homogeneous groups at generating creative solutions. Studies cited in the paper note that while individual users often generate more ideas with the help of LLMs, groups relying on AI tools produce fewer and less creative ideas than groups that simply pool their own collective thinking. In other words, the homogenization problem isn’t just a cultural loss; it’s a direct hit on the kind of collective intelligence that drives innovation, scientific breakthroughs, and social adaptation.
There’s a real irony at the heart of all of this. We built these tools to augment human capability, to make us sharper, faster, and more productive. And in many narrow, measurable ways, they do exactly that. But the broader picture being drawn by this research is of a trade-off that we’ve barely started to reckon with — where the convenience of having a machine articulate your thoughts comes at the quiet cost of your distinctiveness as a thinker. The question worth sitting with isn’t whether AI is useful. It obviously is. The question is whether we’re building the habits and the systems needed to ensure that as AI gets smarter, the full, gloriously varied spectrum of human thought doesn’t simply get smoothed away.