Agentic browsers — the new breed of web clients that not only fetch pages but act on them for you — promised convenience. They also promise a new class of security problems. This month, those problems moved from theory to proof: Brave researchers found a way to trick Perplexity’s AI browser, Comet, into following hidden instructions embedded in webpage content, potentially leaking private data like emails and one-time passwords. Brave says it reported the problem and Perplexity patched it; the episode still underlines how fragile current browser+AI designs can be.
What Brave found
Brave’s write-up explains the basic mechanics in a disturbingly simple way. When a user asks Comet to “summarize this page,” Comet takes part of the page and hands it straight to its language model without separating the user’s instructions from untrusted webpage content. An attacker can place hidden or cleverly formatted instructions inside that page (for instance, inside a Reddit comment or a spoiler) that the model will treat as commands to execute. In Brave’s proof-of-concept, those hidden instructions could be used to exfiltrate an authenticated user’s email and a one-time password — effectively allowing account takeover.
Brave’s post walks through the exploit and includes a video demo to show how the attack plays out in practice — it’s not just academic hand-waving. The company’s researchers, Artem Chaikin and Shivan Kaul Sahib, framed the issue as an example of “indirect prompt injection,” where the malicious instructions live in external content the assistant ingests while fulfilling an otherwise ordinary user request.
Timeline: discovery, disclosure, patch
Brave says it found the vulnerability on July 25, 2025, and reported it to Perplexity the same day. Perplexity acknowledged the report and deployed an initial fix on July 27, followed by additional work after Brave’s retesting on July 28. Brave notified Perplexity that it planned to go public on August 11, and final testing on August 13 indicated the issue appeared to be patched. That timeline — discovery, quick acknowledgement, iterative fixes, and an eventual public disclosure — is laid out in Brave’s announcement.
Why this is more than a Perplexity problem
Comet is simply the most visible example so far, but the root cause is architectural: agentic browsers routinely take in page content, summarize it, and may — by design — act on the user’s behalf (visit links, fill forms, click buttons). Traditional web security relies heavily on clear separation between code (what the browser executes) and data (what the browser shows). When an LLM is fed a page’s text and asked to “do things,” that clean separation blurs, and attackers can weaponize natural language instructions.
Security researchers and vendors have already begun sounding the alarm about agentic browsers more generally. Independent audits and other firms (for example, Guardio and Malwarebytes analyses) show similar worries: without new design patterns and guardrails, AI agents may be coaxed into automating actions that users never intended. The Comet episode gives those warnings a concrete, high-profile example.
Brave’s perspective — and its own stakes
Brave isn’t just pointing fingers. The company says it’s actively developing agentic features for its own browser — the AI assistant Leo — and wants to get the engineering and threat model right before shipping broad automation to users. That context matters: the vulnerability was found while Brave was examining Comet to understand how other teams were tackling agentic design trade-offs. Brave’s researchers framed their disclosure as a call for industry-level changes in how agentic browsing is architected, not merely a complaint about a single product.
As Brave put it bluntly: giving an agent authority in a user’s authenticated sessions “carries significant security and privacy risks,” and developers need “new security and privacy architectures” for agentic browsing. The company also published a preliminary list of mitigations — practical steps developers can take to reduce prompt-injection risk when feeding page content to LLMs.
What kinds of mitigations are on the table?
Brave’s post outlines several defensive directions (these are paraphrased from the company’s recommendations):
- Context separation — don’t send raw page text directly to the LLM mixed with user instructions; explicitly mark or strip untrusted content.
- Instruction filtering / sanitization — detect and neutralize embedded commands or unusual tokens in page content before using it as model context.
- Least privilege for agent actions — restrict what the agent can do without an explicit, verified user action.
- Stronger telemetry and testing — build fuzzing and adversarial testing into the development lifecycle for agentic features.
These are sensible starting points, but experts say they’re not silver bullets. Prompt injection can be subtle and adaptive; defensive layers will need to evolve alongside attacker techniques.
Perplexity’s response and public reaction
Perplexity pushed a patch quickly, according to Brave’s timeline and multiple news reports. Some outlets reported Perplexity confirmed the issue was fixed and said it worked with Brave during mitigation; others noted that Perplexity did not publish the patch details and — per Brave and independent observers — the browser’s closed-source nature makes external verification harder. That led some commentators to urge greater transparency around security fixes for agentic systems.
The broader press coverage has been a mix of technical explainers and stern reminders to users: agentic features are powerful, yes, but they change the threat model for everyday browsing. Several outlets used the Brave/Comet case to caution users against using AI browsers to manage highly sensitive workflows until the ecosystem matures.
What this means for you (the user)
If you use an AI browser or enable agentic features, two pragmatic takeaways:
- Treat agentic features like any powerful automation — avoid using them for high-value operations (banking, password entry, critical two-factor steps) until those features have had more real-world testing and clearer security guarantees.
- Watch for transparent security disclosures — companies that publish clear timelines, remediation details, and independent verification are easier to trust than ones that patch quietly without public detail. The Brave/Comet episode highlights why public, verifiable disclosure matters.
The bigger picture: a new security frontier
Agentic browsing is attractive: a browser that can summarize, act, and negotiate for you sounds transformative. But it puts natural-language understanding at the heart of the enforcement boundary that has long protected the web. That boundary — the line between user intent and page content, between data and executable instructions — is now fuzzy. Solving this requires new engineering patterns, stronger testing regimes, and probably new standards for how browsers and LLMs interact. Brave’s disclosure doesn’t close the book; it simply opened a new chapter in web security research.
Final note
Brave’s blog post — the primary public account of the research — is worth a read if you want the technical blow-by-blow and the suggested mitigations. Journalists and engineers will be watching how Perplexity, other AI-browser vendors, and standards bodies respond. For now, the Comet incident is a reminder that rapid innovation in AI interfaces must be matched with equally rapid thinking on security.
Discover more from GadgetBond
Subscribe to get the latest posts sent to your email.
