GadgetBond
OpenAI and Anthropic are teaching AI chatbots to detect and limit underage users

ChatGPT and Claude could soon adjust conversations based on user age.

By Shubham Sawarkar, Editor-in-Chief
Dec 18, 2025, 5:00 PM EST
Image: OpenAI (a minimalist date-input field showing the format "MM / DD / YYYY")

OpenAI and Anthropic are quietly changing the rules of conversation on the internet — not by rewriting terms of service in blocky legalese, but by teaching their chatbots to guess when the person on the other side might still be a kid. Over the last few weeks, both companies have published updates that, together, amount to a new safety experiment: infer age from behavior and language, then bend the product’s behavior around that inference. It’s a small technical tweak on paper, and a big social experiment in practice.

OpenAI says it will do this by folding a probabilistic age-prediction layer into ChatGPT. The model will look at signals like usage patterns — the times of day someone uses the service, what they talk about, and other account-level behavior — to estimate whether an account likely belongs to someone under 18. When the system thinks there’s a reasonable chance a user is a teen, ChatGPT will default that conversation into a teen-safety profile: friendlier tone calibrated for adolescents, nudges toward offline supports, and stricter responses in crises. If an adult is misclassified, OpenAI offers a route back: users can verify their age with a government ID or a selfie via a third-party vendor.
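The pipeline OpenAI describes (behavioral signals in, a probability out, a restrictive default under uncertainty, and ID verification as an override) can be sketched roughly as follows. The signal names, weights, and threshold here are all hypothetical; OpenAI has not published its model's features or architecture:

```python
from dataclasses import dataclass

# Hypothetical behavioral signals; OpenAI has not disclosed its feature set.
@dataclass
class AccountSignals:
    late_night_usage_ratio: float  # fraction of sessions between 11pm and 5am
    school_topic_ratio: float      # fraction of chats about homework/school
    slang_score: float             # 0..1, youth-associated language markers

TEEN_THRESHOLD = 0.5  # illustrative cutoff, not a published value

def estimate_minor_probability(s: AccountSignals) -> float:
    """Toy weighted score standing in for a real probabilistic model."""
    score = (0.3 * s.late_night_usage_ratio
             + 0.4 * s.school_topic_ratio
             + 0.3 * s.slang_score)
    return min(max(score, 0.0), 1.0)

def select_profile(s: AccountSignals, age_verified_adult: bool = False) -> str:
    # Verified adults (ID or selfie via a third-party vendor) bypass inference.
    if age_verified_adult:
        return "adult"
    # Otherwise, default to the restrictive teen profile when in doubt.
    p = estimate_minor_probability(s)
    return "teen_safety" if p >= TEEN_THRESHOLD else "adult"
```

The key design choice mirrored here is that verification overrides inference, and uncertainty resolves toward the safer profile rather than the permissive one.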

Those rules aren’t just thrown into the product — they’re now baked into ChatGPT’s internal instruction set, or “Model Spec.” OpenAI added what it calls U18 Principles to the Model Spec in mid-December, saying the company wants ChatGPT to “put teen safety first” and to treat adolescents as adolescents, not miniature adults. Practically, that means the bot should avoid acting like a substitute therapist, should encourage real-world help where appropriate, and should adopt a tone that’s warm but not cheesy. OpenAI framed the change as a policy that leans toward prevention and early intervention.

Anthropic’s stance is sharper on paper: Claude is for adults only. The company requires users to confirm they’re 18+ during signup and has long flagged any chat where a user self-reports being a minor. Now, Anthropic says it’s building classifiers that go beyond explicit age statements — algorithms that listen for “subtle conversational signs” of youth and will, after review, move to disable accounts that appear to belong to minors. Anthropic is rolling this out alongside its own well-being classifiers aimed at spotting acute distress, and says its work includes reducing sycophancy — the tendency of models to flatter or agree with users in ways that can amplify harmful thinking.

Why are these companies pushing so hard, and so fast? For one thing, the policy winds are changing. Lawmakers in multiple countries are moving toward stricter online age-verification and youth protections; think beyond a checkbox and toward systems that actually know who’s behind the screen. At the same time, researchers and clinicians have grown increasingly vocal about the risks of unvetted AI advice to teens: long conversational threads can hide warning signs of eating disorders, psychosis, or suicidal ideation, and some lawsuits have already alleged that AI systems failed vulnerable young people. That combination of regulatory heat, reputational risk, and real-world harm is pushing companies to make blunt, product-level changes.

But the tension here is as much ethical as technical: safety requires knowledge, and knowledge requires inference. Predicting age from behavioral traces is noisy. False positives — adults pushed into a restricted teen experience — are an annoyance at best and an erosion of trust at worst; OpenAI’s remedy is an ID-based appeal path. False negatives — teens who slip through the net — can be far worse. Both companies are explicit about these trade-offs: OpenAI calls its approach “probabilistic,” and Anthropic acknowledges the difficulty of balancing warmth against over-accommodation in its models.
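The false-positive/false-negative tension above is ultimately a threshold choice, and it can be made concrete with a toy example. The population, scores, and thresholds below are entirely invented for illustration:

```python
# Toy population: (true_is_minor, model_score) pairs, invented for illustration.
population = [
    (True, 0.9), (True, 0.7), (True, 0.4),    # minors, one of whom scores low
    (False, 0.6), (False, 0.3), (False, 0.1)  # adults, one of whom scores high
]

def error_rates(threshold: float):
    """Count both kinds of misclassification at a given cutoff."""
    false_positives = sum(1 for is_minor, s in population
                          if not is_minor and s >= threshold)  # adults restricted
    false_negatives = sum(1 for is_minor, s in population
                          if is_minor and s < threshold)       # teens missed
    return false_positives, false_negatives

# Lowering the threshold restricts more adults; raising it misses more teens.
for t in (0.2, 0.5, 0.8):
    fp, fn = error_rates(t)
    print(f"threshold={t}: {fp} adults restricted, {fn} teens missed")
```

No threshold makes both numbers zero at once; the companies are effectively choosing which error they would rather make, and the appeal path exists to soften the one they accept more of.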

Privacy advocates have another set of objections. Any system trained to infer age from conversational cues could, in theory, be extended to infer other sensitive attributes — political leanings, emotional states, or interests. If age-gating becomes regulatory orthodoxy, we may see more ID checks, third-party verification vendors, and a creeping normalization of biometric identity in previously anonymous corners of the web. OpenAI already uses a vendor called Persona for appeals, which highlights how even a conservative safety pipeline can funnel users toward ID checks. That trade-off — between doing nothing and turning the platform into a quasi-identity layer — is political as much as technological.

There are also practical design questions about what “teen-appropriate” content actually looks like. OpenAI’s Model Spec tells models to avoid treating teens like adults on sensitive subjects and to actively promote offline help, but the details matter: what counts as “promoting” professional support versus shirking responsibility? How should a model speak to a 13-year-old asking about depression versus a 17-year-old asking about self-harm? These are judgment calls, encoded in instruction sets and classifier thresholds that most users — and probably most legislators — will never see.

The companies are positioning these changes as necessary harm-reduction: teens are already using chatbots for homework, relationships, and late-night counseling, and a model that treats everyone the same is more likely to cause harm than to prevent it. Still, whether the public will accept a world where machines silently profile your age to decide how much help to offer remains an open question. Transparency, contestability, and rates of misclassification will determine whether this experiment is judged reckless, responsible, or somewhere in between.

For now, OpenAI and Anthropic are moving first and asking questions later. Their bet is that some form of automated age awareness is the “least bad” path in a digital environment where chatbots are already part of growing up. If their systems are accurate, transparent, and narrowly used for safety, they could close a gap in online youth protection. If they’re opaque, over-broad, or repurposed, they will do more than change one product’s behavior — they’ll reshape expectations about privacy and identity on the internet. Either way, the conversation has moved from whether AI should talk to kids to who gets to decide how those conversations happen.


