Meta is getting ready to test one of the most controversial ideas in consumer tech: smart glasses that don’t just see the world, but recognize the people in it. Internally, the project has an almost cute codename – “Name Tag” – but the implications are anything but light.
At a basic level, Name Tag is meant to do exactly what the name suggests: you look at someone through Meta’s Ray-Ban or Oakley-branded smart glasses, and the system tries to tell you who they are and surface extra details through Meta’s AI assistant. The company has reportedly been exploring a few versions of this: one where it only recognizes people you’re already connected to on Facebook or Instagram, another where it can pull in details from public Instagram profiles, and a hard line—at least on paper—against a fully universal “look up any stranger on the street” mode. It’s the difference between “Who is this friend from college whose name I always forget?” and “Who is that random person sitting three tables away, and what can you tell me about them?”—and that line is exactly what has regulators and privacy advocates nervous.
The timing here is not accidental. Meta is in a crowded race to make smart glasses the “next smartphone,” facing pressure from startups and heavyweights like OpenAI that are also working on camera-first AI wearables. The Ray-Ban Meta glasses already let you snap photos, record video, livestream and ask an onboard assistant to identify landmarks, translate signs or describe what it sees—just not people. Name Tag is pitched internally as the feature that could turn these glasses from a neat camera with an AI voice into something that feels indispensable, especially for people who have trouble recognizing faces in everyday life, or for visually impaired users trying to navigate public spaces.
That’s actually where Meta first planned to roll this out. According to internal documents and multiple reports, the original idea was to quietly test Name Tag at a conference for blind and visually impaired attendees before a broader launch. The logic was simple: if you can show a clear assistive benefit in a context where consent is tightly controlled and expectations are set up front, it might blunt some of the initial backlash. That test never happened, but the thinking behind it tells you a lot about how carefully Meta knows it has to move, at least publicly.
Behind the scenes, the company has been weighing what its own documents describe as real “safety and privacy risks.” One internal memo from Meta’s Reality Labs division went even further, reportedly arguing that the current wave of political chaos in the US actually creates a strategic window to ship something this sensitive. The language is blunt: the company expects the civil society groups that would normally organize around a feature like this to have their attention and resources pulled toward the election and other crises, reducing the immediate pressure on Meta. It’s a cold, almost clinical way to talk about public oversight, and it has already become its own mini scandal.
Part of why the reaction is so sharp is Meta’s track record. The company shut down Facebook’s original Face Recognition system back in 2021 after years of criticism, regulatory heat and a multi‑billion‑dollar slate of privacy settlements over how it collected and used biometric data. Then, in classic Meta fashion, it quietly reintroduced facial recognition in 2024 and 2025, but only in narrow contexts: fighting scam ads featuring fake versions of celebrities, and streamlining account recovery using selfie‑based verification. Public figures in regions like the UK, EU and South Korea can now opt into a system that scans ads for AI‑generated or stolen likenesses, while ordinary users can choose to unlock their accounts by taking a short video selfie instead of uploading documents. Meta describes those tools as privacy‑preserving—encrypted selfies, no long‑term storage of facial templates, and opt‑in by design—but critics see them as a foot back in the biometric door.
Smart glasses add a whole new layer to that debate, because they move facial recognition from your photo library into the physical world. Even before Name Tag, privacy experts were already uneasy about Ray-Ban Meta glasses, in part because their cameras are small, and the white LED meant to indicate recording can be easy to miss. Every time you snap a photo or ask the AI assistant to “describe what I’m looking at,” that visual feed goes back to Meta’s cloud, where it can be processed and, in many cases, used to train the company’s AI models under its current privacy policy. Meta has published “best practices” suggesting you tell people before recording, turn the glasses off in places like bathrooms and medical offices, and avoid filming sensitive situations—but those are guidelines, not hard technical limits.
Now add the ability to attach names and profiles to faces in that stream, even in a limited, opt‑in way. For many users, the immediate mental image isn’t “accessibility tool,” it’s “portable surveillance device.” Privacy advocates point to scenarios like abusive partners tracking people in public, stalkers using glasses to identify targets, or just everyday people being cataloged by someone else’s wearable without any way to opt out in practice. Even if Meta says it won’t let you identify random strangers by default, the concern is that once the underlying capability exists, the pressure to expand it—or the risk of it being misused, hacked or cloned by other companies—goes up dramatically.
Meta’s answer, at least for now, is to emphasize guardrails. Internally, the company has floated a version of Name Tag that only works on people who have explicitly agreed to be recognized, likely through settings in Facebook or Instagram. Think of something like, “Allow friends with Meta glasses to see my name when they look at me,” or “Allow people I follow to see my basic profile.” That would mirror what Meta has already done with its anti‑fraud facial recognition tools, where public figures must opt in before the system starts scanning ads for impersonations. There’s also talk of keeping the recognition results lightweight—names and a few context tiles, rather than dumping your full profile into someone’s field of view.
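To make that opt‑in model concrete, here is a rough, purely illustrative sketch of how tiered consent could gate recognition. None of this is Meta’s actual code: the ConsentTier names, the resolve_name_tag function and the lightweight RecognitionResult payload are all hypothetical, pieced together from the reporting above, with “not identifiable” as the default so nobody shows up unless they have explicitly opted in.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Hypothetical consent tiers mirroring the options reportedly under
# discussion: connections-only, public-profile, and no universal lookup.
class ConsentTier(Enum):
    NONE = auto()              # default: never identifiable
    CONNECTIONS_ONLY = auto()  # only existing friends/followers see a name
    PUBLIC_PROFILE = auto()    # basic public profile details allowed

@dataclass
class RecognitionResult:
    # Deliberately lightweight: a name and one line of context,
    # not a full profile dumped into the viewer's field of view.
    name: str
    context: Optional[str] = None

def resolve_name_tag(viewer: str, match: str, consent: dict,
                     connections: set, names: dict) -> Optional[RecognitionResult]:
    """Identify `match` to `viewer` only if `match` opted in AND the
    viewer satisfies the tier `match` chose. Opt-out is the default."""
    tier = consent.get(match, ConsentTier.NONE)
    if tier is ConsentTier.NONE:
        return None  # no consent, no identification: the hard line
    if tier is ConsentTier.CONNECTIONS_ONLY and (viewer, match) not in connections:
        return None  # opted in, but only for people they're connected to
    return RecognitionResult(
        name=names[match],
        context="Connected on Instagram" if (viewer, match) in connections else None,
    )

# Tiny demo with made-up users.
consent = {"bob": ConsentTier.CONNECTIONS_ONLY, "carol": ConsentTier.PUBLIC_PROFILE}
connections = {("alice", "bob")}
names = {"bob": "Bob R.", "carol": "Carol T.", "dave": "Dave M."}

print(resolve_name_tag("alice", "bob", consent, connections, names))   # recognized
print(resolve_name_tag("alice", "dave", consent, connections, names))  # None: no opt-in
print(resolve_name_tag("eve", "bob", consent, connections, names))     # None: not connected
```

The design choice that matters, if Meta follows through on its stated guardrails, is that the deny path is the default: every identification would have to pass both the subject’s chosen consent tier and the viewer’s relationship to them.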
But even with those restrictions, a lot of big questions remain unanswered. How do you verify that someone actually gave consent to be recognized, especially in photos and videos where a lot of people appear at once? What happens when law enforcement, advertisers or third‑party developers start asking for access to those recognition pipelines? And will Meta promise, in its binding policy, that it won’t repurpose smart‑glasses facial data for ad targeting, even in “aggregated” or “anonymized” form? None of that is clear yet, and that ambiguity is feeding the outrage as much as the core feature itself.
It’s also worth zooming out. Facial recognition is already everywhere: in airports, on city CCTV networks, inside some retail loss‑prevention systems, and on personal devices like phones and laptops. The difference with something like Name Tag is who’s in control. Instead of a border agency or a store using cameras inside a defined space, you suddenly have millions of individuals walking around with recognition engines on their faces, powered by one of the most data‑hungry platforms on Earth. The line between “my personal assistant helping me remember names” and “a roaming extension of Meta’s surveillance and data‑collection infrastructure” can get very blurry very quickly.
For Meta, though, the bet is obvious. If it can make smart glasses genuinely useful, they could become a new hardware category that the company doesn’t just build apps for, but actually owns—a chance to escape the gravitational pull of Apple and Google’s mobile ecosystems. Name Tag is part of that push, alongside deeper integration of Meta AI, better cameras and audio, and tighter partnerships with eyewear brands like Ray‑Ban to make the tech something people actually want to wear. In that sense, the controversy around facial recognition is almost a by‑product of a bigger strategic gamble: where does Meta draw the line on how invasive it’s willing to be to make these glasses feel “magical”?
If the past few years are any indication, we’re headed toward a messy compromise rather than a clean win for either side. Regulators in Europe and elsewhere have already shown that they’re willing to push back hard on biometric overreach, but they’ve also signed off on tightly scoped uses like Meta’s anti‑scam tools. Civil society groups will almost certainly challenge Name Tag if and when it rolls out, especially given the leaked memo about taking advantage of political distractions. And ordinary users will be left to make their own calculations: does the convenience of having an AI whisper names into your ear outweigh the discomfort of living in a world where any pair of sunglasses might be quietly scanning your face?
For now, Name Tag is still just a codename in internal docs and anonymous briefings, not a toggle you can flip on your Ray‑Ban Meta glasses. But the pieces are clearly moving into place: the hardware on people’s faces, the AI assistant listening for commands, the cloud infrastructure for processing images at scale, and a company that has never been shy about pushing into gray areas of privacy if it thinks there’s a big enough prize on the other side. When facial recognition finally hits smart glasses in a mainstream way—whether from Meta or a rival—it won’t just be a new feature; it’ll be a test of how much persistent, real‑world surveillance society is willing to normalize in the name of convenience.