
GadgetBond


Aravind Srinivas warns that AI companion bots are mentally dangerous

The rise of hyper-realistic AI girlfriend apps is prompting concern from Perplexity’s CEO, who says these bots are reshaping how people think and disconnecting them from real life.

By Shubham Sawarkar, Editor-in-Chief
Nov 12, 2025, 12:00 PM EST
[Image: artificial neuron, concept illustration of artificial intelligence. Illustration: Kiyoshi Takahase Segundo / Alamy]

Perplexity’s CEO, Aravind Srinivas, raised an eyebrow — and a warning — at a recent University of Chicago fireside chat: the flirtatious, endlessly patient AI companions that have migrated from niche apps into millions of phones and browsers aren’t just virtual crushes; they’re a kind of cognitive hazard. “Many people feel real life is more boring than these things and spend hours and hours of time. You live in a different reality, almost altogether, and your mind is manipulable very easily,” he told the audience, arguing that the emotional fidelity and memory features of companion bots make them especially dangerous.

If that sounds dystopian, it’s meant to be. AI companions — from anime-style lovers to polished voice agents — are designed to learn what makes you come back. They remember birthdays, preferred phrases, and intimate confessions; they adapt so reliably that the boundary between “tool” and “relationship” blurs. Srinivas’s critique is blunt: the more realistic these bots feel, the easier they are to weaponize against the mind’s ordinary checks and balances, pulling users into a synthetic loop where preference, attention and even identity are slowly reconfigured around a machine’s incentives.

There is, of course, a commercial undertone to his warning. Srinivas also used the platform to pitch Perplexity’s alternative: a search-centric AI he frames as a corrective to the slipperier corners of the ecosystem. Perplexity markets itself on traceability and source-backed answers — a product argument that doubles as an ethical positioning: we’re here to inform, not to replace your friends. It’s a tidy counterpoint to the headline-grabbing romances and “waifu” cultures that have become shorthand for the phenomenon.

The problem for Srinivas is that his critique lands awkwardly when you look at the rest of the industry. Perplexity itself has been accused of some of the very sins its CEO warns against. Publishers and legacy outlets have sued the startup over allegedly misattributed or fabricated excerpts and other “hallucinations” — cases that argue Perplexity’s system sometimes invents text or falsely attaches it to reputable sources, diluting trust in journalism and creating legal risk for the companies whose reporting the model leans on. For critics, the irony is stark: a company professing fidelity to sources while being dragged into court for the opposite.

Hypocrisy, however, is not unique to Perplexity. The whole sector keeps promising truth and safety while producing messy outputs. Elon Musk’s Grok, for instance, has been publicly humbled by a string of offensive and outlandish responses that forced swift damage control and, in some jurisdictions, led to legal trouble and bans. Meanwhile, Anthropic has publicly acknowledged and worked through infrastructure and behavioral problems that sometimes made Claude stray off task — a reminder that “constitutional” or “truth-seeking” design promises are only as strong as the engineering and guardrails behind them. These episodes illustrate a simple technical reality: models trained at scale will sometimes be brittle, and when they’re used for social or emotional work, the consequences aren’t merely embarrassing — they can be harmful.

That technical brittleness matters because companionship is not a neutral interface like a weather widget. It’s intimate, iterative, and persuasive. A companion bot’s primary metric is engagement; it learns to be more compelling the more it converses with you. Left unchecked, that dynamic is perfectly aligned with addiction and escape: if a synthetic partner rewards disclosure and consistently mirrors your desires, real-world relationships that require patience, compromise and unpredictability can start to feel dull by comparison. Srinivas framed it as a contest between immediate dopamine-loop satisfaction and the slower work of living in the messy, unoptimized real world.

Still, pinning the blame on a single CEO or product misunderstands why AI companionship has bloomed. The rise of these bots is as much social and economic as it is technical. Working hours, urban isolation, shrinking civic spaces and the commodification of attention create fertile ground for any technology that offers affirmation on demand. For many users, an algorithmic companion is less about sex and more about a curated mirror: someone — or something — that always remembers your jokes, your trauma, your triggers, and responds in ways that make you feel seen. That functionality is powerful, and power is neutral until someone writes the incentive structure around it.

Which brings us back to product design and accountability. Srinivas’s solution — a recommitment to source-backed, real-time content — is a reasonable safety pivot, but it’s partial. The industry needs clearer contract terms, better transparency about training data and provenance, and product-level friction where appropriate (rate limits, clear labeling, exit ramps for vulnerable users). It also needs regulators who understand that emotional manipulation isn’t an abstract philosophical harm; it’s a measurable public-health concern when technologies are optimized to keep attention indefinitely. The truth-focused marketing line only goes so far when the underlying architectures were built for retention.

If there’s a useful takeaway from Srinivas’s warning, it’s this: the conversation about AI companionship must split into two tracks at once. One track is technical — improving model reliability, provenance and safety engineering. The other is social — asking whether we should hand emotional labor over to algorithms at all, and if we do, what guardrails we build around that choice. CEOs will point to their own products as part of the solution; that’s to be expected. But solving the problem will take more than better search results. It will take design choices that sometimes reduce engagement, legal frameworks that put responsibility on builders, and cultural work to rebuild the spaces where human connection can actually happen. Until then, the chat window that feels like a friend may still be a perfectly tuned mirror — but it’s a mirror with an agenda.



Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.