
GadgetBond

AI, Meta, Tech

Meta’s smart glasses may soon recognize faces with “Name Tag”

Meta is testing a “Name Tag” feature that would let its smart glasses identify people in real time, and privacy advocates are already on edge.

By Shubham Sawarkar, Editor-in-Chief
Feb 13, 2026, 12:00 PM EST
We may get a commission from retail offers.
Ray-Ban Meta smart glasses
Image: Meta

Meta is getting ready to test one of the most controversial ideas in consumer tech: smart glasses that don’t just see the world, but recognize the people in it. Internally, the project has an almost cute codename – “Name Tag” – but the implications are anything but light.

At a basic level, Name Tag is meant to do exactly what the name suggests: you look at someone through Meta’s Ray-Ban or Oakley-branded smart glasses, and the system tries to tell you who they are and surface extra details through Meta’s AI assistant. The company has reportedly been exploring a few versions of this: one that only recognizes people you’re already connected to on Facebook or Instagram, another that can pull in details from public Instagram profiles, and a hard line—at least on paper—against a fully universal “look up any stranger on the street” mode. It’s the difference between “Who is this friend from college whose name I always forget?” and “Who is that random person sitting three tables away, and what can you tell me about them?” That line is exactly what has regulators and privacy advocates nervous.

The timing here is not accidental. Meta is in a crowded race to make smart glasses the “next smartphone,” facing pressure from startups and heavyweights like OpenAI that are also working on camera-first AI wearables. The Ray-Ban Meta glasses already let you snap photos, record video, livestream and ask an onboard assistant to identify landmarks, translate signs or describe what it sees—just not people. Name Tag is pitched internally as the feature that could turn these glasses from a neat camera with an AI voice into something that feels indispensable, especially for people who struggle with face recognition in everyday life, or for visually impaired users trying to navigate public spaces.

That’s actually where Meta first planned to roll this out. According to internal documents and multiple reports, the original idea was to quietly test Name Tag at a conference for blind and visually impaired attendees before a broader launch. The logic was simple: if you can show a clear assistive benefit in a context where consent is tightly controlled and expectations are set up front, it might blunt some of the initial backlash. That test never happened, but the thinking behind it tells you a lot about how carefully Meta knows it has to move, at least publicly.

Behind the scenes, the company has been weighing what its own documents describe as real “safety and privacy risks.” One internal memo from Meta’s Reality Labs division went even further, reportedly arguing that the current wave of political chaos in the US actually creates a strategic window to ship something this sensitive. The language is blunt: the company expects the civil society groups that would normally organize around a feature like this to have their attention and resources pulled toward the election and other crises, reducing the immediate pressure on Meta. It’s a cold, almost clinical way to talk about public oversight, and it has already become its own mini scandal.

Part of why the reaction is so sharp is Meta’s track record. The company shut down Facebook’s original Face Recognition system back in 2021 after years of criticism, regulatory heat and a multi‑billion‑dollar slate of privacy settlements over how it collected and used biometric data. Then, in classic Meta fashion, it quietly reintroduced facial recognition in 2024 and 2025, but in a very specific context: fighting scam ads featuring fake versions of celebrities, and streamlining account recovery using selfie‑based verification. Public figures in regions like the UK, EU and South Korea can now opt into a system that scans ads for AI‑generated or stolen likenesses, while ordinary users can choose to unlock their accounts by taking a short video selfie instead of uploading documents. Meta describes those tools as privacy‑preserving—encrypted selfies, no long‑term storage of facial templates, and opt‑in by design—but critics see them as a foot back in the biometric door.

Smart glasses add a whole new layer to that debate, because they move facial recognition from your photo library into the physical world. Even before Name Tag, privacy experts were already uneasy about Ray-Ban Meta glasses, in part because their cameras are small, and the white LED meant to indicate recording can be easy to miss. Every time you snap a photo or ask the AI assistant to “describe what I’m looking at,” that visual feed goes back to Meta’s cloud, where it can be processed and, in many cases, used to train the company’s AI models under its current privacy policy. Meta has published “best practices” suggesting you tell people before recording, turn the glasses off in places like bathrooms and medical offices, and avoid filming sensitive situations—but those are guidelines, not hard technical limits.

Now add the ability to attach names and profiles to faces in that stream, even in a limited, opt‑in way. For many users, the immediate mental image isn’t “accessibility tool,” it’s “portable surveillance device.” Privacy advocates point to scenarios like abusive partners tracking people in public, stalkers using glasses to identify targets, or just everyday people being cataloged by someone else’s wearable without any way to opt out in practice. Even if Meta says it won’t let you identify random strangers by default, the concern is that once the underlying capability exists, the pressure to expand it—or the risk of it being misused, hacked or cloned by other companies—goes up dramatically.

Meta’s answer, at least for now, is to emphasize guardrails. Internally, the company has floated a version of Name Tag that only works on people who have explicitly agreed to be recognized, likely through settings in Facebook or Instagram. Think of something like, “Allow friends with Meta glasses to see my name when they look at me,” or “Allow people I follow to see my basic profile.” That would mirror what Meta has already done with its anti‑fraud facial recognition tools, where public figures must opt in before the system starts scanning ads for impersonations. There’s also talk of keeping the recognition results lightweight—names and a few context tiles, rather than dumping your full profile into someone’s field of view.

But even with those restrictions, a lot of big questions remain unanswered. How do you verify that someone actually gave consent to be recognized, especially in photos and videos where a lot of people appear at once? What happens when law enforcement, advertisers or third‑party developers start asking for access to those recognition pipelines? And will Meta promise, in its binding policy, that it won’t repurpose smart‑glasses facial data for ad targeting, even in “aggregated” or “anonymized” form? None of that is clear yet, and that ambiguity is feeding the outrage as much as the core feature itself.

It’s also worth zooming out. Facial recognition is already everywhere: in airports, on city CCTV networks, inside some retail loss‑prevention systems, and on personal devices like phones and laptops. The difference with something like Name Tag is who’s in control. Instead of a border agency or a store using cameras inside a defined space, you suddenly have millions of individuals walking around with recognition engines on their faces, powered by one of the largest data‑hungry platforms on Earth. The line between “my personal assistant helping me remember names” and “a roaming extension of Meta’s surveillance and data‑collection infrastructure” can get very blurry very quickly.

For Meta, though, the bet is obvious. If it can make smart glasses genuinely useful, they could become a new hardware category that the company doesn’t just build apps for, but actually owns—a chance to escape the gravitational pull of Apple and Google’s mobile ecosystems. Name Tag is part of that push, alongside deeper integration of Meta AI, better cameras and audio, and tighter partnerships with fashion brands like Ray‑Ban to make the tech something people actually want to wear. In that sense, the controversy around facial recognition is almost a by‑product of a bigger strategic gamble: where does Meta draw the line on how invasive it’s willing to be to make these glasses feel “magical”?

If the past few years are any indication, we’re headed toward a messy compromise rather than a clean win for either side. Regulators in Europe and elsewhere have already shown that they’re willing to push back hard on biometric overreach, but they’ve also signed off on tightly scoped uses like Meta’s anti‑scam tools. Civil society groups will almost certainly challenge Name Tag if and when it rolls out, especially given the leaked memo about taking advantage of political distractions. And ordinary users will be left to make their own calculations: does the convenience of having an AI whisper names into your ear outweigh the discomfort of living in a world where any pair of sunglasses might be quietly scanning your face?

For now, Name Tag is still just a codename in internal docs and anonymous briefings, not a toggle you can flip on your Ray‑Ban Meta glasses. But the pieces are clearly moving into place: the hardware on people’s faces, the AI assistant listening for commands, the cloud infrastructure for processing images at scale, and a company that has never been shy about pushing into gray areas of privacy if it thinks there’s a big enough prize on the other side. When facial recognition finally hits smart glasses in a mainstream way—whether from Meta or a rival—it won’t just be a new feature; it’ll be a test of how much persistent, real‑world surveillance society is willing to normalize in the name of convenience.



Copyright © 2026 GadgetBond. All Rights Reserved.