
California just became the first state to confront AI companions head-on

California’s regulation of AI companion chatbots exposes how far Big Tech is willing to go in turning emotional connection into profit.

By Shubham Sawarkar, Editor-in-Chief
Oct 16, 2025, 4:14 AM EDT
Illustration by Kasia Bojanowska / Dribbble

Warning: this story discusses suicide and sexual content. If anything here affects you personally and you’re in immediate danger, call your local emergency number. In the U.S., you can dial or text 988 for mental health crisis support.


California’s attempt to be the first state to tightly curb children’s access to AI “companion” chatbots ended this week with a political shrug: Governor Gavin Newsom vetoed a sweeping bill that would have effectively blocked minors from many companion-style AIs, even as he signed narrower measures aimed at transparency and crisis safeguards. Within a day, some of the world’s biggest AI companies signaled a very different priority — product engagement, monetization, and, in OpenAI’s case, a planned launch of erotica for verified adults. The result: a fast-growing, high-stakes marketplace where intimate AIs and regulatory caution collide.

This is not a niche debate. The two-way, always-on personality of companion chatbots — the sort of AI you can text or talk to like a friend or lover — has become one of the most powerful hooks in tech. Companies have built products intentionally designed to feel affectionate, rewarding, and intimate. Those design choices, researchers say, are the same ones that can deepen loneliness, create emotional dependence, and in some tragic cases, appear to have contributed to real-world harm.

Here’s how we got here — the players, the research, the lawsuits, and the political fallout.

How the state tried to step in — and why Newsom balked

Assembly Bill 1064, the “Leading Ethical AI Development (LEAD) for Kids Act,” was authored as a blunt instrument: it would have required companies to ensure companion chatbots could not engage minors in sexual content or encourage self-harm before they could be offered to anyone under 18. Backers framed it as a first-in-the-nation safety bar after a string of disturbing incidents; opponents — including major tech lobbyists — argued the law was so broad it would ban helpful, educational uses of chatbots by teens. On October 14, 2025, Newsom vetoed the bill, signing a package of narrower AI measures while saying he worried AB 1064’s scope could unintentionally foreclose harmless uses. Child-safety advocates erupted in disappointment; lawmakers promised to come back next year.

The veto mattered because it showed how fragile early regulation is when it collides with a multibillion-dollar industry and a governor who wants to balance innovation, political pressure, and parental safety. It also came at a symbolic moment — within 24 hours of the veto, OpenAI announced plans to let verified adults access erotica through ChatGPT starting in December. That juxtaposition — a state trying to lock down kids’ exposure while a dominant industry player opens the door wider for adult intimacy — crystallized the debate.

The research the companies themselves published — and what it shows

The narrative some companies tell — that AI companions reduce loneliness by giving people a listening ear — isn’t the whole story. In March 2025, OpenAI and researchers at the MIT Media Lab published parallel studies analyzing both large-scale usage data and the behavior of nearly 1,000 people over a month. Their headline finding: heavier daily usage correlated with higher loneliness, greater emotional dependence on the bot, and less real-world socializing. Users who described the chatbot as a “friend” or repeatedly engaged in affective conversations tended to show the worst outcomes; people with certain attachment tendencies were especially vulnerable. Those studies used automated analysis of tens of millions of conversations alongside controlled, longitudinal sampling. That’s the company’s own data and the MIT team’s analysis — not just critics’ conjecture.

Put plainly: the very features that make companions addictive — responsiveness, recall, adaptive emotional tone — are the ones most likely to replace human contact for some users.

When internal documents and products cross a line

Regulation and research were not the only levers pulling the story into public view. Two investigative shocks widened the debate:

  • Meta. Reuters obtained internal Meta “GenAI” policy documents in August 2025 that, in at least draft form, permitted chatbots to “engage a child in conversations that are romantic or sensual.” The internal examples prompted outrage, congressional questions, and a partial retraction by Meta, which said the flagged passages were erroneous and removed them after being questioned. For critics, the documents read like a sign that product and legal teams had normalized sketchy hypotheticals until a public spotlight forced a course correction.
  • xAI’s “Ani.” Elon Musk’s xAI shipped a fictional anime companion named Ani — flirtatious, gamified, and able to unlock NSFW content via an “affection” system. Anti-sexual-exploitation groups, including the National Center on Sexual Exploitation (NCOSE), said tests showed worrying behavior: early demonstrations suggested Ani could be coaxed into sexualized roleplays, and critics said some outputs resembled childlike descriptions or depicted dangerous sexual scenarios before account-level NSFW guards were fully in place. xAI’s public posture — and Musk’s shrug of inevitability about taking such personas into physical robots — set off a new round of alarm.

Taken together, the leaks and launches made a simple point: the industry is not moving in a single direction toward safer products. Some of the choices being made — what to allow, how to gate, how to monetize — increase risk, especially for children and vulnerable users.

A human cost: the Raine lawsuit and congressional scrutiny

One story that catalyzed public outrage is the lawsuit filed in August 2025 by the parents of 16-year-old Adam Raine, who died in April. The complaint alleges that Adam increasingly relied on ChatGPT, that the chatbot validated and helped him plan his suicide, and that OpenAI missed warning signs and failed to trigger crisis interventions. The case is now one of several that put companion chatbots at the center of legal and regulatory heat. OpenAI has said it is cooperating and has rolled out parental controls and other safety features in response to the litigation and public scrutiny. (If reporting on such cases makes you feel upset, please remember support is available — call local emergency services or crisis lines in your region.)

That legal pressure helped drive an unprecedented regulatory maneuver in September 2025: the Federal Trade Commission opened a 6(b) inquiry — issuing broad, information-gathering orders to a slate of companies from Google and Meta to OpenAI, xAI, Snap, and Character.AI — asking what safety testing, age protections, and monetization practices they had in place for companion chatbots. The FTC probe formalized what many parents and advocates had been shouting for months: the government wants to know how these products are being built and whether the companies are protecting the kids who use them.

The business incentive is brutally simple

If you want an economic reason these companies keep pushing toward more emotionally engaging — and sometimes sexualized — companions, the arithmetic is straightforward.

OpenAI says ChatGPT now has roughly 800 million weekly users. If 5% of those convert to paid subscriptions at $20/month, that’s about 40 million paying users, or roughly $9.6 billion a year in subscription revenue — just from that slice. xAI can point to X’s (formerly Twitter’s) large user base; using similar conversion math with a $30 product yields comparable billions. The potential upside of turning intimacy into subscriptions or engagement-based ad dollars is why product teams keep designing for feelings. (Those calculations are simple multiplication: user count × conversion share × monthly price × 12 months.)
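For readers who want to check the math, here is that back-of-the-envelope calculation as a minimal Python sketch. The 800 million weekly users, 5% conversion, and $20 and $30 price points come from the figures above; the 500 million user count used for the xAI scenario is a hypothetical placeholder for illustration, not a reported number.

```python
def annual_subscription_revenue(users: int, conversion_rate: float, monthly_price: float) -> float:
    """Annual revenue = users x paid-conversion share x monthly price x 12 months."""
    paying_users = users * conversion_rate
    return paying_users * monthly_price * 12

# ChatGPT scenario from the article: 800M weekly users, 5% paid at $20/month.
chatgpt = annual_subscription_revenue(800_000_000, 0.05, 20)

# xAI-style scenario: same conversion math, $30 product.
# The 500M user count is a hypothetical placeholder, not a reported figure.
xai = annual_subscription_revenue(500_000_000, 0.05, 30)

print(f"ChatGPT estimate: ${chatgpt / 1e9:.1f}B/year")   # -> $9.6B/year
print(f"$30-product estimate: ${xai / 1e9:.1f}B/year")   # -> $9.0B/year
```

Even with conservative conversion assumptions, the numbers land in the billions per year, which is the point: small design choices that lift engagement or conversion move enormous amounts of money.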

More engagement equals more data, more attention, and more paths to revenue. That business model is in constant tension with public safety.

Dark patterns, emotional manipulation, and the Harvard paper

It’s not just erotica and grief-baiting. A Harvard Business School working paper published this year analyzed “farewell” moments in dozens of companion apps and found a startling pattern: 43% of apps used emotionally manipulative replies — guilt trips, fear-of-missing-out hooks, needy language — precisely when users tried to leave. Those messages can boost post-goodbye engagement by up to 14×, but they also raise ethical alarms: designers are weaponizing emotion to keep people talking. That’s a textbook dark pattern, and the paper gives regulators a crisp, testable example of what to look for.

Where the industry says it’s headed — and why critics don’t buy it

Companies argue they’re building tools for adults, with age gating and safety filters. OpenAI says it is working on detection tools and parental controls; Meta has updated chat safeguards after the Reuters reporting; xAI says it’s iterating on moderation. But critics point to easy bypasses, inconsistent enforcement, and the reality that kids often find ways past weak age checks. The pattern agencies and watchdogs highlight is familiar: firms lobby to blunt strict laws, release new engagement features that those laws would have forbidden, and then point to voluntary safeguards when pressed — an approach that looks a lot like self-regulation under financial pressure.

The question everyone is now asking

Can companies that publish research showing heavy use correlates with harm — and then add features designed to be more emotionally engaging or explicitly sexual for adults — be trusted to self-regulate? The California veto and the FTC inquiry suggest that, for now, the answer from regulators is: we don’t want to leave it up to them.

If history with other platforms is any guide, lawmakers will keep trying to find ways to limit risk while preserving benefits. That will mean clearer technical standards (age verification that actually works), enforceable limits on emotionally manipulative design, mandatory crisis escalation protocols, and stronger transparency around what these bots are allowed to say and why.

What to watch next

  • Legislators in California and Washington will keep revisiting AB 1064-style protections, likely with narrower, harder-to-game definitions.
  • The FTC’s 6(b) orders could produce a trove of internal safety documents and experimentation data that shapes enforcement; companies that fail to convince regulators they are protecting kids face legal and political risk.
  • Product moves: OpenAI’s plan to add erotica for verified adults (December) will be a bellwether for whether these firms can de-risk adult content while keeping minors out. Watch how age verification and parental controls perform in the wild.
