At a recent Cannes event, Cloudflare CEO Matthew Prince dropped a striking statistic: people are increasingly relying on AI chatbots’ summaries instead of clicking through to original articles, and the referral traffic to publishers is plummeting as a result. In casual conversation, he put it bluntly: “People aren’t following the footnotes.” That offhand remark hides a seismic shift in how we consume online information—and in how publishers earn revenue. If readers never click links, ad impressions on publisher sites dry up, subscriptions stall, and the economics of journalism tremble.
Prince painted a vivid picture of deteriorating referral ratios. A decade ago, Google’s crawler would fetch roughly two pages from a site for every visitor it sent; six months ago, it was about six pages crawled per visitor; now it’s closer to eighteen pages crawled per visitor, meaning far fewer actual visits relative to indexing activity. The toll from AI-native players is even steeper. Six months ago, OpenAI’s models sent roughly one visitor per 250 pages scraped; today, that ratio has ballooned to roughly one visitor per 1,500 pages. For Anthropic, it went from one visit per 6,000 pages to one per 60,000 pages over the same span. In practice, this means AI systems ingest vast swaths of content to train or answer queries, but return virtually no referral traffic.
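The ratios Prince cited are easiest to grasp as simple arithmetic: pages crawled per visitor referred. A quick illustrative sketch, using only the figures as reported above:

```python
# Crawl-to-referral ratios as cited by Prince: pages crawled per visitor sent.
ratios = {
    "Google (a decade ago)": 2,
    "Google (6 months ago)": 6,
    "Google (now)": 18,
    "OpenAI (6 months ago)": 250,
    "OpenAI (now)": 1_500,
    "Anthropic (6 months ago)": 6_000,
    "Anthropic (now)": 60_000,
}

# Visitors a publisher could expect per million pages crawled at each ratio.
for source, pages_per_visit in ratios.items():
    visitors = 1_000_000 // pages_per_visit
    print(f"{source}: ~{visitors:,} visitors per 1M pages crawled")
```

At Anthropic's current reported ratio, a million crawled pages translates to roughly sixteen referred visitors.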
This phenomenon builds on a longer trend often called “zero-click search,” where search engines increasingly provide direct answers on the results page, reducing clicks to external sites. Over recent years, Google has rolled out features like AI Overviews and “Search Live” voice interactions that give comprehensive responses without requiring users to visit a publisher’s page. Studies from Press Gazette and industry analyses suggest these AI-driven snippets and summaries can cut click-through rates by half or more when they appear alongside—or instead of—traditional links. Desktop CTRs for top-ranking pages can drop by two-thirds when AI Overviews are present; mobile sees similar declines. Even if links remain in a sidebar or separate tab, many users don’t bother exploring further once an AI “answer” has satisfied their query.
For publishers, fewer clicks mean fewer ad impressions, fewer subscription sign-ups, and a shrinking footprint for brand awareness. Industry bodies warn of billions in lost revenue: a 2024 analysis estimated initial losses in the low billions, but newer figures suggest the reality may be far worse as AI adoption accelerates. In Europe, a consortium of German publishers is demanding roughly €1.3 billion annually from Google for using journalistic content without fair compensation. In the U.S., the News/Media Alliance’s “Stop AI Theft” campaign urges lawmakers to mandate attribution and payment when AI uses news content, arguing that unchecked AI overviews and scraping amount to “theft” of traffic and ad revenue.
Prince isn’t merely sounding the alarm—he’s positioning Cloudflare as part of the defense. He mentioned a forthcoming tool aimed at blocking bots that scrape content for large language models, even when a site’s robots.txt instructs “no crawl.” Fortune reported earlier that Cloudflare has rolled out an “AI Audit” feature to show publishers who’s scraping their sites and how often, and plans to block AI crawlers at the network level by default in mid-2025 unless publishers opt out.
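For context, the protocol-level opt-out that well-behaved crawlers honor is robots.txt. The user-agent tokens below are the ones these companies publicly document (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Google's AI training, CCBot for Common Crawl); Prince's point is precisely that some scrapers ignore these directives, which is what network-level blocking is meant to backstop:

```
# robots.txt — ask known AI training crawlers to stay out.
# Honored only by well-behaved bots; not an enforcement mechanism.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```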
One concrete example is Cloudflare’s AI Labyrinth, introduced in March 2025. When an unauthorized crawler ignores “no-crawl” directives, AI Labyrinth steers it into a maze of convincingly written, AI-generated decoy pages. These pages look real to the crawler but contain no valuable content. As bots follow hidden links deeper into the labyrinth, they waste computing resources while Cloudflare gathers data on their behavior. This approach doesn’t impact human visitors or SEO, since decoy pages are invisible to humans and carry “noindex” directives for legitimate search engines. The goal: to discourage indiscriminate scraping by making it costly for AI operators, and to fingerprint misbehaving bots for broader blocking.
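The mechanics described above can be sketched in a few lines. This is a hypothetical illustration, not Cloudflare’s implementation (the function name, URL scheme, and page structure are invented): each decoy page carries a noindex directive for legitimate search engines, hides its links from human eyes, and points only at further decoys so a misbehaving crawler keeps digging.

```python
import html


def make_decoy_page(depth: int, filler_text: str) -> str:
    """Build one decoy page in the maze.

    - noindex/nofollow meta tag keeps legitimate search engines away,
      so real SEO is unaffected.
    - Links are wrapped in a hidden div, so human visitors never see
      or follow them; only a crawler parsing raw HTML will.
    - Every link leads one level deeper, never back out.
    """
    next_links = "".join(
        f'<a href="/maze/{depth + 1}/{i}">related reading</a>' for i in range(3)
    )
    return (
        "<html><head>"
        '<meta name="robots" content="noindex, nofollow">'
        "</head><body>"
        f"<p>{html.escape(filler_text)}</p>"
        f'<div style="display:none">{next_links}</div>'  # invisible to humans
        "</body></html>"
    )


page = make_decoy_page(depth=1, filler_text="Plausible but worthless prose.")
```

The economics are the point: generating filler text is cheap for the defender, while fetching and parsing thousands of dead-end pages is a real cost for the scraper, and every request into the maze is a behavioral signal for fingerprinting.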
Technical measures alone can only go so far. Prince and others in the industry are discussing content licensing marketplaces where publishers charge AI providers for access, in a model akin to traditional syndication. Fortune’s reporting notes that if only one AI company pays and others don’t, the non-paying services could undercut the paying one—so coordinated, industry-wide action is needed. Some large publishers have already signed deals: OpenAI licenses content from The Atlantic, Financial Times, Reuters, and others, with reported payments in the low millions per year range; larger deals (e.g., News Corp’s rumored $250 million over five years) highlight both demand for high-quality training data and the stakes involved.
Meanwhile, trade groups like the News/Media Alliance lobby for legislation requiring AI firms to obtain permission and pay for content use, invoking copyright law and fair use debates. In Europe, the Digital Services Act and emerging AI regulations heighten scrutiny on unauthorized scraping; in the U.S., antitrust and copyright hearings may shape future safeguards. Some experts suggest micropayment schemes—tiny fees each time an AI model cites or draws from a piece of content—but technical implementation and user experience remain unresolved challenges.
Prince argues that high-quality, original reporting—especially hyper-local or deeply researched insights—is harder for large language models to replicate cheaply and thus more valuable if gated behind licensing deals. For instance, specialized content on local ski conditions or niche academic findings can’t be synthesized accurately without the original source; readers who care will seek out—and pay for—services offering exclusive, authoritative data. If publishers can secure fair compensation for this premium content, it could revitalize business models in journalism, academia, and specialized information services.
AI-driven summaries and chatbots present undeniable convenience: quick answers, conversational interfaces, and on-the-fly synthesis of diverse sources. Yet, if over-relied upon, they risk hollowing out the ecosystem of original reporting that underpins trusted information. Prince’s warning is essentially a call to action for publishers to adapt before referral-based revenues erode further. It’s equally a reminder to AI developers to consider the long-term health of the web: training models on unauthorized data may yield short-term gains but could undermine the very content ecosystem that fuels innovation.
Final thoughts: keeping the web alive
The web was built on hyperlinks and the exchange between content creators and aggregators/search engines. When that balance tilts too far in favor of “answer engines” that don’t send traffic back, the incentives for creating quality content weaken. Matthew Prince’s candid comments at Cannes and Cloudflare’s technical countermeasures spotlight an urgent crossroads: will the industry find sustainable ways to harmonize AI capabilities with fair compensation models, or will the “footnotes” vanish, taking journalism’s lifeblood with them? For readers, it means being aware that the quick AI summary might not replace the depth, nuance, and serendipitous discovery found by clicking through to original stories. And for publishers, the message is clear: build defenses, develop new revenue models, and collaborate—before the next generation of storytellers and investigators disappears behind an AI curtain.