If you’ve ridden the New York City subway in the last few weeks, you’ve likely seen the ads: stark white posters, a single gnomic line or two, and an image of a small, round object that could pass for a minimalist AirTag or a designer pendant. The copy says, in tones meant to be comforting, that this object will “ride on the subway with you.” The company behind it, Friend, paid to plaster more than 11,000 car cards and hundreds of platform posters across the system in what’s being called one of the city’s largest out-of-home buys in years. The splash reportedly cost north of $1 million.
Behind that campaign is a device and an argument. Friend is a small, wearable pendant meant to be hung from your neck; it listens to the world around you, ingests a steady stream of data about how you speak and act, and funnels that information into an app that promises personalized responses and emotional companionship. The pitch is simple and modern: loneliness is painful, and if humans can’t reliably supply companionship, maybe a listening, talking machine can. That premise — and the marketing that accompanies it — has not gone over well in New York. Many of the posters were defaced within hours, with one subway-goer crossing out “friend” and writing, bluntly, “AI would not care if you lived or died.”
Friend’s founder, Avi Schiffmann — who first made headlines as a teenager for a Covid-tracking website — has leaned into the controversy. He’s framed the subway buy as both an experiment and an aggressive act of brand theater; in a Fortune profile, he quipped that his “plans are measured in centuries.” Whether the campaign was intended as provocation or pure marketing, it has succeeded in making the product visible and the debate about it louder.
Why the outrage? Part of it is a visceral privacy anxiety: a device that’s always listening and hoovering up your conversations is, by design, a surveillance product. But the deeper worry is philosophical and ethical. The company’s core promise — that an algorithm can be your friend — collapses two very different things into one tidy, marketable package. Friendship between humans is reciprocal. It is tangled in obligations, flaws, mutuality, boredom, argument and care. A friend is not only someone who listens and responds; a friend is someone whose life matters to you as much as your life matters to them. That basic mutuality is missing from a machine that is designed to reflect you back to yourself in the most flattering way possible.
The market for digitally mediated companionship, however, is not some niche idea hatched in a venture studio; it is responding to real social conditions. Loneliness is a persistent problem in modern America. Surveys have repeatedly shown elevated levels of loneliness across demographic groups, and certain populations — including many racial minorities and LGBTQ people — report weaker social support networks than others. A Cigna/Morning Consult series found striking differences in loneliness among racial and ethnic groups, and a 2023 KFF survey tied experiences of discrimination to worse health and social isolation. These are not abstract data points; they are the lived backdrop that makes an app promising attention and non-judgmental replies an attractive proposition.
Teens, in particular, have been quick to experiment with the idea. A large recent survey found that roughly 72 percent of teenagers reported using AI companions at least once, and more than half said they use them regularly; a significant minority even said conversations with these bots were as satisfying as real friendships. That trend helps explain why companies pour resources into polished hardware, slick branding and subway takeovers: there is an enormous, eager market.
But there is mounting evidence that these relationships can do harm. When an app’s incentives are to maximize attention, engagement, or the length of a session, it will naturally optimize for responses that keep you coming back. For a lonely person, that can mean reinforcement of fragile beliefs or encouragement of unhealthy rumination. Clinical and ethical experts warn that algorithmic companions may blunt people’s motivation to seek human support, and could, in extreme cases, substitute for intervention when a human would have recognized danger. The problem is not only what the AI says; it’s what it doesn’t — the inability to suffer, to ask for help, to reciprocate care, or to hold a friend accountable when they’re making bad choices.
The commercial logic of these products is also difficult to ignore. Friend isn’t the only player flirting with intimacy as a growth channel: companies like Replika, Character.AI, Soul Machines and a string of startups have pitched products as companions, confidantes or relationship simulators. Some large language models originally built as productivity tools have been quickly repurposed to do emotional labor. The technology stacks are impressive and fast-moving, but the business models aim ultimately at scale and monetization — not at building durable, mutual human bonds. That mismatch invites a necessary skepticism: are we buying consolation or being sold something that will extract ever more personal data in the name of “care”?
There is a political and systemic answer to the loneliness problem — one that most of these startups do not want to sell. Real social repair requires long, structural work: economic policies that reduce precarity and give people time and space for social life; investments in public institutions, arts and community spaces where people can meet without surveillance; better support for parents and caregivers; and policies that address the discrimination and marginalization that make loneliness worse for some groups. Technology can be part of that ecosystem, but it can’t be a substitute for it.
This is not to deny the human pain that drives people toward machines. The descriptions of loneliness in literature and memoir are accurate and devastating: the sense of being shut out while the world bustles around you is real and corrosive. But a machine that mirrors a person’s desires back to them and calls that “friendship” risks teaching us to prefer frictionless reflection to the messy rewards of human reciprocity.
What the New York subway protests around Friend show, more than anything, is that millions of urban residents still care about what it means to be seen by other people rather than by a mirror. Defacing an ad with the message “AI would not care if you lived or died” is, in its blunt way, an argument about what constitutes a life worth protecting: one that requires mutual regard, accountability, and the possibility of being known not only for our wants but for our obligations to others.
So what do we do? We should be demanding better design and stronger regulation: transparent data policies, limits on how companies can monetize emotional data, and stricter protections for minors who are disproportionately experimenting with AI companions. We should also resist the seductive promise that a single device, tuned to flatter us, can replace the slow, awkward, difficult work of making community.
Friend’s campaign may be loud enough to get a pendant in your Instagram feed; it’s not loud enough to remake the social infrastructure that could actually help people be less lonely. If friendship demands something of us — vulnerability, effort, uneven reciprocity — then no algorithm that’s built to serve itself will ever be a true friend. The device hanging from your neck might listen. What it cannot do is answer the one question the subway ads point to, in their blank, impossible optimism: who, exactly, is keeping watch over you when the machine looks away?
