For years now, Europe has worried about what social media is doing to its kids. The headlines are familiar: rising anxiety, self‑harm content, bullying that follows children home via their phones, and feeds engineered to never let them stop scrolling. So it’s no surprise that a growing list of European governments is reaching for the bluntest tool available — trying to keep teenagers off social media altogether.
France, Denmark and Greece are among the countries pushing age‑based bans or strict access limits, often framed as “no social media under 15 or 16.” They’re backed by public opinion too: a recent POLITICO European Pulse survey across six major EU countries found that roughly three in four respondents supported bans on social media for minors, reflecting a deep unease about how platforms affect children’s mental health and safety. At first glance, it sounds like a simple fix: if apps are harming kids, just keep kids away from the apps.
Estonia, however, is breaking ranks — and forcing Europe to confront a tougher question: what if bans don’t actually work?
Speaking at POLITICO’s European Pulse Forum in Barcelona, Estonian Education Minister Kristina Kallas argued that blanket bans on social media for minors are more political theatre than a practical solution. In her view, telling platforms to block teenagers might look decisive on paper, but it’s the digital equivalent of putting a “No under‑16s” sign on a door that kids already know how to pick. “Banning kids from social media won’t actually solve the problems,” she warned, adding that young people will “find very quickly the ways to go around and to still use social media”.
Her basic point is one any parent of a tech‑savvy teen will recognise: children are often better at the internet than the adults trying to regulate it. Whether it’s VPNs, borrowed logins, fake birthdates or secondary accounts, they are adept at dodging rules that treat age as just a box to tick. Estonia’s Justice and Digital Affairs Minister Liisa Pakosta has made the same case at home, arguing that a blanket age‑verification regime — where everyone must prove their identity to access social media — would be intrusive, easily bypassed and ultimately misdirected. If platforms aren’t even complying with the rules Europe already has, she asks, why would another layer of bans suddenly make them behave?
This isn’t just a philosophical objection. There’s a concrete counter‑example the Estonians point to: Australia. Canberra became the first country to push through a nationwide ban on social media accounts for under‑16s, requiring platforms like TikTok, Instagram, Snapchat and YouTube to keep younger teenagers off their services. In theory, that’s exactly the kind of tough stance many European politicians are now calling for. In practice, Australia’s online safety watchdog has publicly flagged “major gaps” in how the ban is being enforced, from kids being able to re‑attempt age checks over and over to weak tools for parents to report underage accounts. Months after the law took effect, regulators were still worried that big platforms weren’t “doing enough to adhere” to the rules.
For Estonia, that’s the cautionary tale: bans sound bold, but without airtight enforcement and cooperation from tech giants, they risk becoming porous fences that mostly inconvenience honest users while the determined ones slip through. That’s why Kallas insists the real responsibility shouldn’t be pushed onto 13‑year‑olds trying to “self‑regulate” their feeds, but onto governments and corporations that design and run the platforms in the first place. In her words, “Europe pretends to be weak when it comes to big American and international corporations,” but that’s just a “pretense” — the EU has the regulatory firepower, it just needs to use it.
That firepower already exists on paper. The EU’s Digital Services Act (DSA), which came fully into force in 2024, gives Brussels the authority to force large platforms to take “appropriate and proportionate measures” to ensure a high level of privacy, safety and security for minors. The European Commission has opened child‑safety probes into several tech giants under the DSA and has since issued detailed guidelines for how platforms should protect minors — from setting kids’ accounts to private by default and disabling addictive features such as streaks, to limiting the use of manipulative design tricks that keep young users hooked. Companies that fail to comply can face fines of up to 6 percent of global turnover, a level of penalty that even the largest Silicon Valley players cannot shrug off.
This is the path Estonia wants Europe to commit to: less moral panic about teenagers online, more hard‑nosed enforcement of Big Tech’s legal obligations. Rather than pretending that a simple age threshold can solve everything, Tallinn argues for three pillars. First, make platforms obey existing EU law on data protection and child safety, and investigate them rigorously when they fall short. Second, redesign platforms to be safer by default for young people — fewer dark patterns, less infinite scroll and autoplay, stronger privacy by design. And third, invest heavily in digital literacy and media education so that kids, parents and teachers actually know how to navigate these spaces safely instead of being kept out of them altogether.
That last point may be the least flashy but the most important. Estonia and a small group of like‑minded policymakers argue that children are already part of the “information society,” whether adults like it or not, and that the goal should be equipping them to handle that reality, not trying to shut the door. For teens, social media isn’t just mindless scrolling; it’s where they get news, organise around shared interests, find support communities and maintain friendships. Studies have highlighted genuine harms, from increased anxiety to sleep problems, but they also show benefits such as social connection and peer support, particularly for young people who feel isolated offline. Treating social media as pure poison, Estonia suggests, ignores this more complicated picture.
The political winds in Europe, though, are blowing in a different direction. After years of scandals around teen mental health and online harms, there’s enormous pressure on governments to “do something,” fast. A strong majority of Europeans now back restrictions or bans on minors’ social media use, creating a powerful incentive for leaders to promise clear‑cut age limits that can be easily explained on a campaign poster. An October 2025 “Jutland Declaration,” a political push to restrict children’s access to social media, won broad support across the bloc, with only a couple of countries — including Estonia — refusing to sign on. For many capitals, the appeal of being seen to protect children outweighs the messy details of how a ban would actually work day‑to‑day.
The deeper divide here isn’t just about age limits; it’s about where you put the burden. One model says: tell kids “you’re not allowed,” build age gates, and hope that’s enough. The other says: assume kids will be there anyway and force platforms to make those spaces less dangerous. Australia’s experience shows how fragile the first approach can be if enforcement is weak and companies are slow to adapt. The EU’s own enforcement actions under the DSA hint at the second path — painstaking, legalistic, slower to communicate in a press conference, but potentially more transformative in how platforms function.
Kallas’s challenge to Europe is essentially to stop acting powerless. If Brussels can threaten multi‑billion‑euro fines over illegal content or competition abuses, it can do the same when algorithms push self‑harm content toward 14‑year‑olds or when age‑assurance systems are clearly not fit for purpose. Estonia’s refusal to back bans is, in that sense, less about being “soft” on platforms and more about demanding that the EU uses the heavy regulatory tools it has already created, instead of outsourcing responsibility to the teenagers those tools were supposed to protect.
For parents and educators, this debate can feel abstract, wrapped in acronyms and summit declarations. On the ground, the reality is simpler: young people are online, often on services built to maximise time‑on‑screen and engagement; the harms are real, and so are the benefits. Europe now faces a choice between quick‑hit bans that may prove leaky in practice and a slower, harder project of reshaping how Big Tech operates. Estonia has nailed its colours to the mast: don’t criminalise kids for being on social media — regulate the companies that built it. Whether the rest of Europe follows that path will decide not just what apps teenagers can use in a few years’ time, but how much power the continent is willing to assert over the platforms that shape its public life.