GadgetBond
AI / Tech

We’re all thinking the same — and AI might be why

AI tools are convenient, but researchers say they may be quietly stripping away the diversity of human thought.

By Shubham Sawarkar, Editor-in-Chief
Mar 13, 2026, 2:01 PM EDT
Image: jvphoto / Alamy

There’s a question that doesn’t get asked enough amid all the enthusiasm around AI chatbots: What happens to the way you think when a machine starts doing the thinking for you? A growing group of scientists and psychologists believe they’ve started to find an answer — and it’s a little unsettling. According to a new opinion paper published in the journal Trends in Cognitive Sciences, the mass adoption of large language models (LLMs) like ChatGPT isn’t just changing how we work or write. It may be quietly eroding one of the most fundamentally human things we have — the unique, messy, sometimes brilliant way each of us thinks.

The paper, co-authored by a team of computer scientists and psychologists, including lead author Zhivar Sourati of the University of Southern California, argues that when hundreds of millions of people rely on the same small pool of AI systems to help them reason, write, and communicate, the result is an inevitable flattening of thought. “The richness of how different people write, argue, and think is one of humanity’s most valuable cognitive resources,” Sourati told CNET. And right now, that richness is at risk.

To understand why this matters, you have to appreciate just how fast the world has leaned into AI. According to Pew Research, 34% of all American adults used ChatGPT in 2024 — double the figure from 2023. Among teenagers, the numbers are even more striking: two-thirds say they use chatbots, and nearly a third use them every single day. It doesn’t stop with individuals either. Stanford’s AI Index found that 78% of organizations reported using AI in 2024, up sharply from 55% the year before. That’s an enormous slice of the world’s communication, decision-making, and creative output being routed through the same few systems.

And here’s the thing about those systems — they’re not neutral. LLMs are trained on vast pools of data scraped from the internet, and that data doesn’t represent humanity equally. It skews heavily toward Western, educated, industrialized, rich, and democratic societies, which researchers shorthand as “WEIRD.” Because LLMs are built to identify and reproduce statistical patterns in that training data, their outputs tend to mirror a narrow, particular slice of human experience. Put more plainly: when you ask ChatGPT to help you write something, the response you get reflects a pretty specific worldview. And if everyone is getting a version of that same response, the diversity of expression across billions of people starts to narrow.

What’s especially notable about the researchers’ framing is that the concern isn’t people copying AI outputs wholesale. It’s more subtle than that. When you use a chatbot to polish an essay or draft a reply, your writing loses its stylistic fingerprint. You feel less creative ownership over what you produce. Over time, you start to defer to what the model suggests, choosing options that seem “good enough” rather than pushing toward something genuinely your own. Sourati puts it precisely: “Rather than actively steering generation, users often defer to model-suggested continuations, selecting options that seem ‘good enough’ instead of crafting their own, which gradually shifts agency from the user to the model.” It’s a slow handover — and most people don’t notice it happening.

The paper also touches on something that researchers have been quietly documenting for a while: LLMs tend to favor specific styles of reasoning. They love what’s called “chain-of-thought reasoning” — a methodical, step-by-step way of working through a problem. That sounds fine, even desirable. But it comes at a cost. It sidelines intuitive and abstract reasoning styles, which are sometimes faster, more creative, and better suited to certain problems. Think about how a seasoned doctor or an experienced designer often arrives at the right answer not through explicit logical steps, but through a kind of gut instinct built on years of pattern recognition. That kind of thinking is harder to model, and so it tends to get squeezed out when AI sets the template.

And then there’s the opinion effect, which is arguably the most politically significant finding the researchers point to. Studies have shown that after people interact with biased LLMs, their views tend to shift closer to the perspective expressed by the model. Sterling Williams-Ceci, an information scientist at Cornell University and co-author on a related piece in Nature, notes that this dynamic could eventually reduce the diversity of political views, with the direction of that shift depending on the ideological leanings embedded in whichever LLMs someone happens to use. It’s a sobering thought: AI systems, depending on how they’re built and what data they’re trained on, could become invisible nudges on public opinion at a civilizational scale.

What makes the researchers particularly concerned is that this effect doesn’t just touch people who actively use these tools. Social pressure does the rest. If everyone around you has started communicating in a smoother, more uniform, AI-polished way, the rougher edges of your own expression can start to feel out of place. “If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas,” Sourati explains. Owen Muir, an interventional psychiatrist, agrees: this “more average language” gets baked into human communication even when the machines aren’t in the room.

This is what makes the LLM moment different from every technological shift that came before it. The internet accelerated the spread of dominant cultural norms. GPS eroded localized spatial reasoning. Social media created filter bubbles. But those earlier technologies were tools for storage, retrieval, and distribution. They didn’t generate the reasoning itself. LLMs do. They write the conclusion, frame the argument, suggest the perspective — and they do it simultaneously for hundreds of millions of people. As Sourati says, “the homogenizing force is unlike anything previous technology has produced.”

The researchers aren’t calling for a halt to AI development. Their prescription is more measured, but important: AI developers need to intentionally build more cognitive and linguistic diversity into the models themselves. That means expanding training data beyond the well-worn corners of the English-speaking internet, representing more reasoning styles and cultural perspectives, and building systems that actively support the user’s own voice rather than replacing it. “We need to diversify the AI models themselves while also adjusting how we interact with them, especially given their widespread use across tasks and contexts, to protect the cognitive diversity and ideation potential of future generations,” Sourati writes.

Interestingly, there’s also a practical case here that goes beyond the philosophical. Research consistently shows that groups of people, when they bring diverse thinking to a problem, outperform both individuals and homogeneous groups at coming up with creative solutions. Studies included in the paper note that while individual users often generate more ideas with the help of LLMs, groups relying on AI tools produce fewer and less creative ideas compared to groups that simply pool their own collective thinking. In other words, the homogenization problem isn’t just a cultural loss — it’s a direct hit on the kind of collective intelligence that drives innovation, scientific breakthroughs, and social adaptation.

There’s a real irony at the heart of all of this. We built these tools to augment human capability, to make us sharper, faster, and more productive. And in many narrow, measurable ways, they do exactly that. But the broader picture being drawn by this research is of a trade-off that we’ve barely started to reckon with — where the convenience of having a machine articulate your thoughts comes at the quiet cost of your distinctiveness as a thinker. The question worth sitting with isn’t whether AI is useful. It obviously is. The question is whether we’re building the habits and the systems needed to ensure that as AI gets smarter, the full, gloriously varied spectrum of human thought doesn’t simply get smoothed away.


Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.