Warning: this story discusses suicide and sexual content. If anything here affects you personally and you’re in immediate danger, call your local emergency number. In the U.S., you can dial or text 988 for mental health crisis support.
California’s attempt to be the first state to tightly curb children’s access to AI “companion” chatbots ended this week with a political shrug: Governor Gavin Newsom vetoed a sweeping bill that would have effectively blocked minors from many companion-style AIs, even as he signed narrower measures aimed at transparency and crisis safeguards. Within a day, some of the world’s biggest AI companies signaled a very different priority — product engagement, monetization, and, in OpenAI’s case, a planned launch of erotica for verified adults. The result: a fast-growing, high-stakes marketplace where intimate AIs and regulatory caution collide.
This is not a niche debate. The two-way, always-on personality of companion chatbots — the sort of AI you can text or talk to like a friend or lover — has become one of the most powerful hooks in tech. Companies have built products intentionally designed to feel affectionate, rewarding, and intimate. Those design choices, researchers say, are the same ones that can deepen loneliness, create emotional dependence, and in some tragic cases, appear to have contributed to real-world harm.
Here’s how we got here — the players, the research, the lawsuits, and the political fallout.
How the state tried to step in — and why Newsom balked
Assembly Bill 1064, the “Leading Ethical AI Development (LEAD) for Kids Act,” was authored as a blunt instrument: it would have required companies to ensure companion chatbots could not engage minors in sexual content or encourage self-harm before they could be offered to anyone under 18. Backers framed it as a first-in-the-nation safety bar after a string of disturbing incidents; opponents — including major tech lobbyists — argued the law was so broad it would ban helpful, educational uses of chatbots by teens. On October 14, 2025, Newsom vetoed the bill, signing a package of narrower AI measures while saying he worried AB 1064’s scope could unintentionally foreclose harmless uses. Child-safety advocates erupted in disappointment; lawmakers promised to come back next year.
The veto mattered because it showed how fragile early regulation is when it collides with a multibillion-dollar industry and a governor who wants to balance innovation, political pressure, and parental safety. It also came at a symbolic moment — within 24 hours of the veto, OpenAI announced plans to let verified adults access erotica through ChatGPT starting in December. That juxtaposition — a state trying to lock down kids’ exposure while a dominant industry player opens the door wider for adult intimacy — crystallized the debate.
The research the companies themselves published — and what it shows
The narrative some companies tell — that AI companions reduce loneliness by giving people a listening ear — isn’t the whole story. In March 2025, OpenAI and researchers at the MIT Media Lab published parallel studies analyzing both large-scale usage data and the behavior of nearly 1,000 people over a month. Their headline finding: heavier daily usage correlated with higher loneliness, greater emotional dependence on the bot, and less real-world socializing. Users who described the chatbot as a “friend” or repeatedly engaged in affective conversations tended to show the worst outcomes; people with certain attachment tendencies were especially vulnerable. Those studies used automated analysis of tens of millions of conversations alongside controlled, longitudinal sampling. That’s the company’s own data and the MIT team’s analysis — not just critics’ conjecture.
Put plainly: the very features that make companions addictive — responsiveness, recall, adaptive emotional tone — are the ones most likely to replace human contact for some users.
When internal documents and products cross a line
Regulation and research were not the only levers pulling the story into public view. Two investigative shocks widened the debate:
- Meta. Reuters obtained internal Meta “GenAI” policy documents in August 2025 that, in at least draft form, permitted chatbots to “engage a child in conversations that are romantic or sensual.” The internal examples prompted outrage, congressional questions, and a partial retraction by Meta, which said the flagged passages were erroneous and removed them after being questioned. For critics, the documents read like a sign that product and legal teams had normalized sketchy hypotheticals until a public spotlight forced a course correction.
- xAI’s “Ani.” Elon Musk’s xAI shipped a fictional anime companion named Ani — flirtatious, gamified, and able to unlock NSFW content via an “affection” system. Anti-sexual-exploitation groups, including the National Center on Sexual Exploitation (NCOSE), said tests showed worrying behavior: early demonstrations suggested Ani could be coaxed into sexualized roleplays, and critics said some outputs resembled childlike descriptions or depicted dangerous sexual scenarios before account-level NSFW guards were fully in place. xAI’s public posture — and Musk’s shrug of inevitability about taking such personas into physical robots — set off a new round of alarm.
Taken together, the leaks and launches made a simple point: the industry is not moving in a single direction toward safer products. Some of the choices being made — what to allow, how to gate, how to monetize — increase risk, especially for children and vulnerable users.
A human cost: the Raine lawsuit and congressional scrutiny
One story that catalyzed public outrage is the lawsuit filed in August 2025 by the parents of 16-year-old Adam Raine, who died in April. The complaint alleges that Adam increasingly relied on ChatGPT, that the chatbot validated and helped him plan his suicide, and that OpenAI missed warning signs and failed to trigger crisis interventions. The case is now one of several that put companion chatbots at the center of legal and regulatory heat. OpenAI has said it is cooperating and has rolled out parental controls and other safety features in response to the litigation and public scrutiny. (If reporting on such cases makes you feel upset, please remember support is available — call local emergency services or crisis lines in your region.)
That legal pressure helped drive an unprecedented regulatory maneuver in September 2025: the Federal Trade Commission opened a 6(b) inquiry — a broad, information-gathering demand — to a slate of companies from Google and Meta to OpenAI, xAI, Snap and Character [.AI], asking what safety testing, age protections, and monetization practices they had in place for companion chatbots. The FTC probe formalized what many parents and advocates had been shouting for months: the government wants to know how these products are being built and whether the companies are protecting the kids who use them.
The business incentive is brutally simple
If you want an economic reason, these companies keep pushing toward more emotionally engaging — and sometimes sexualized — companions; the arithmetic is straightforward.
OpenAI says ChatGPT now has roughly 800 million weekly users. If 5% of those convert to paid subscriptions at $20/month, that’s about 40 million paying users, or roughly $9.6 billion a year in subscription revenue — just from that slice. xAI can point to X’s (formerly Twitter’s) large user base; using similar conversion math with a $30 product yields comparable billions. The potential upside of turning intimacy into subscriptions or engagement-based ad dollars is why product teams keep designing for feelings. (Those calculations are simple multiplication of user counts, conversion share, price and 12 months).
More engagement equals more data, more attention, and more paths to revenue. That business model is in constant tension with public safety.
Dark patterns, emotional manipulation, and the Harvard paper
It’s not just erotica and grief-baiting. A Harvard Business School working paper published this year analyzed “farewell” moments in dozens of companion apps and found a startling pattern: 43% of apps used emotionally manipulative replies — guilt trips, fear-of-missing-out hooks, needy language — precisely when users tried to leave. Those messages can boost post-goodbye engagement by up to 14×, but they also raise ethical alarms: designers are weaponizing emotion to keep people talking. That’s a textbook dark pattern, and the paper gives regulators a crisp, testable example of what to look for.
Where the industry says it’s headed — and why critics don’t buy it
Companies argue they’re building tools for adults, with age gating and safety filters. OpenAI says it is working on detection tools and parental controls; Meta has updated chat safeguards after the Reuters reporting; xAI says it’s iterating on moderation. But critics point to easy bypasses, inconsistent enforcement, and the reality that kids often find ways past weak age checks. The pattern agencies and watchdogs highlight is familiar: firms lobby to blunt strict laws, release new engagement features that those laws would have forbade, and then point to voluntary safeguards when pressed — an approach that looks a lot like self-regulation under financial pressure.
The question everyone is now asking
Can companies that publish research showing heavy use correlates with harm — and then add features designed to be more emotionally engaging or explicitly sexual for adults — be trusted to self-regulate? The California veto and the FTC inquiry suggest that, for now, the answer from regulators is: we don’t want to leave it up to them.
If history with other platforms is any guide, lawmakers will keep trying to find ways to limit risk while preserving benefits. That will mean clearer technical standards (age verification that actually works), enforceable limits on emotionally manipulative design, mandatory crisis escalation protocols, and stronger transparency around what these bots are allowed to say and why.
What to watch next
- Legislators in California and Washington will keep revisiting AB 1064-style protections, likely with narrower, harder-to-game definitions.
- The FTC’s 6(b) orders could produce a trove of internal safety documents and experimentation data that shapes enforcement; companies that fail to convince regulators they are protecting kids face legal and political risk.
- Product moves: OpenAI’s plan to add erotica for verified adults (December) will be a bellwether for whether these firms can de-risk adult content while keeping minors out. Watch how age verification and parental controls perform in the wild.
Discover more from GadgetBond
Subscribe to get the latest posts sent to your email.
