By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIPerplexitySecurityTech

Brave discovers a prompt-injection flaw in Perplexity’s Comet — and the web should pay attention

Brave says its team found a security hole in Perplexity Comet where malicious webpage text could trigger AI actions, creating risks for user privacy and safety.

By
Shubham Sawarkar
Shubham Sawarkar's avatar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Aug 26, 2025, 3:48 AM EDT
Share
Illustration of a person browsing on a tablet with a glowing stream of light representing data, flowing around them under a starry sky, featuring the Perplexity Comet browser logo prominently in the center.
Image: Perplexity
SHARE

Agentic browsers — the new breed of web clients that not only fetch pages but act on them for you — promised convenience. They also promise a new class of security problems. This month, those problems moved from theory to proof: Brave researchers found a way to trick Perplexity’s AI browser, Comet, into following hidden instructions embedded in webpage content, potentially leaking private data like emails and one-time passwords. Brave says it reported the problem and Perplexity patched it; the episode still underlines how fragile current browser+AI designs can be.

What Brave found

Brave’s write-up explains the basic mechanics in a disturbingly simple way. When a user asks Comet to “summarize this page,” Comet takes part of the page and hands it straight to its language model without separating the user’s instructions from untrusted webpage content. An attacker can place hidden or cleverly formatted instructions inside that page (for instance, inside a Reddit comment or a spoiler) that the model will treat as commands to execute. In Brave’s proof-of-concept, those hidden instructions could be used to exfiltrate an authenticated user’s email and a one-time password — effectively allowing account takeover.

Brave’s post walks through the exploit and includes a video demo to show how the attack plays out in practice — it’s not just academic hand-waving. The company’s researchers, Artem Chaikin and Shivan Kaul Sahib, framed the issue as an example of “indirect prompt injection,” where the malicious instructions live in external content the assistant ingests while fulfilling an otherwise ordinary user request.

https://gadgetbond.com/wp-content/uploads/2025/08/Perplexity-Comet-Prompt-Injection.mp4

Timeline: discovery, disclosure, patch

Brave says it found the vulnerability on July 25, 2025, and reported it to Perplexity the same day. Perplexity acknowledged the report and deployed an initial fix on July 27, followed by additional work after Brave’s retesting on July 28. Brave notified Perplexity that it planned to go public on August 11, and final testing on August 13 indicated the issue appeared to be patched. That timeline — discovery, quick acknowledgement, iterative fixes, and an eventual public disclosure — is laid out in Brave’s announcement.

Why this is more than a Perplexity problem

Comet is simply the most visible example so far, but the root cause is architectural: agentic browsers routinely take in page content, summarize it, and may — by design — act on the user’s behalf (visit links, fill forms, click buttons). Traditional web security relies heavily on clear separation between code (what the browser executes) and data (what the browser shows). When an LLM is fed a page’s text and asked to “do things,” that clean separation blurs, and attackers can weaponize natural language instructions.

Security researchers and vendors have already begun sounding the alarm about agentic browsers more generally. Independent audits and other firms (for example, Guardio and Malwarebytes analyses) show similar worries: without new design patterns and guardrails, AI agents may be coaxed into automating actions that users never intended. The Comet episode gives those warnings a concrete, high-profile example.

Brave’s perspective — and its own stakes

Brave isn’t just pointing fingers. The company says it’s actively developing agentic features for its own browser — the AI assistant Leo — and wants to get the engineering and threat model right before shipping broad automation to users. That context matters: the vulnerability was found while Brave was examining Comet to understand how other teams were tackling agentic design trade-offs. Brave’s researchers framed their disclosure as a call for industry-level changes in how agentic browsing is architected, not merely a complaint about a single product.

As Brave put it bluntly: giving an agent authority in a user’s authenticated sessions “carries significant security and privacy risks,” and developers need “new security and privacy architectures” for agentic browsing. The company also published a preliminary list of mitigations — practical steps developers can take to reduce prompt-injection risk when feeding page content to LLMs.

What kinds of mitigations are on the table?

Brave’s post outlines several defensive directions (these are paraphrased from the company’s recommendations):

  • Context separation — don’t send raw page text directly to the LLM mixed with user instructions; explicitly mark or strip untrusted content.
  • Instruction filtering / sanitization — detect and neutralize embedded commands or unusual tokens in page content before using it as model context.
  • Least privilege for agent actions — restrict what the agent can do without an explicit, verified user action.
  • Stronger telemetry and testing — build fuzzing and adversarial testing into the development lifecycle for agentic features.

These are sensible starting points, but experts say they’re not silver bullets. Prompt injection can be subtle and adaptive; defensive layers will need to evolve alongside attacker techniques.

Perplexity’s response and public reaction

Perplexity pushed a patch quickly, according to Brave’s timeline and multiple news reports. Some outlets reported Perplexity confirmed the issue was fixed and said it worked with Brave during mitigation; others noted that Perplexity did not publish the patch details and — per Brave and independent observers — the browser’s closed-source nature makes external verification harder. That led some commentators to urge greater transparency around security fixes for agentic systems.

The broader press coverage has been a mix of technical explainers and stern reminders to users: agentic features are powerful, yes, but they change the threat model for everyday browsing. Several outlets used the Brave/Comet case to caution users against using AI browsers to manage highly sensitive workflows until the ecosystem matures.

What this means for you (the user)

If you use an AI browser or enable agentic features, two pragmatic takeaways:

  1. Treat agentic features like any powerful automation — avoid using them for high-value operations (banking, password entry, critical two-factor steps) until those features have had more real-world testing and clearer security guarantees.
  2. Watch for transparent security disclosures — companies that publish clear timelines, remediation details, and independent verification are easier to trust than ones that patch quietly without public detail. The Brave/Comet episode highlights why public, verifiable disclosure matters.

The bigger picture: a new security frontier

Agentic browsing is attractive: a browser that can summarize, act, and negotiate for you sounds transformative. But it puts natural-language understanding at the heart of the enforcement boundary that has long protected the web. That boundary — the line between user intent and page content, between data and executable instructions — is now fuzzy. Solving this requires new engineering patterns, stronger testing regimes, and probably new standards for how browsers and LLMs interact. Brave’s disclosure doesn’t close the book; it simply opened a new chapter in web security research.

Final note

Brave’s blog post — the primary public account of the research — is worth a read if you want the technical blow-by-blow and the suggested mitigations. Journalists and engineers will be watching how Perplexity, other AI-browser vendors, and standards bodies respond. For now, the Comet incident is a reminder that rapid innovation in AI interfaces must be matched with equally rapid thinking on security.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Topic:Perplexity Comet
Most Popular

DeepMind’s Gemini Robotics-ER 1.6 pushes embodied AI into the real world

Gemini 3.1 Flash TTS is Google’s new powerhouse text-to-speech model

Google app for desktop rolls out globally on Windows

Google debuts Gemini app for Mac with instant shortcut access

Perplexity brings an always-on Personal Computer to Mac users

Also Read
A graphic design featuring the text “GPT Rosalind” in bold black letters on a light green background. Behind the text are overlapping translucent green rectangles. In the bottom left corner, part of a chemical structure diagram is visible with labels such as “CH₃,” “CH₂,” “H,” “N,” and the Roman numeral “II.” The right side of the background shows a blurred turquoise and green abstract pattern, evoking a scientific or natural theme.

OpenAI launches GPT-Rosalind to accelerate biopharma research

Perplexity interface showing a model selection menu with options for advanced AI models. The default choice, “Claude Opus 4.7 Thinking,” is highlighted as a powerful model for complex tasks. Other options include “GPT-5.4 New” for complex tasks and “Claude Sonnet 4.6” for everyday tasks using fewer credits. A toggle for “Thinking” is switched on, and a tooltip on the right reads “Computer powered by Claude 4.7 Opus.”

Perplexity Max users now get Claude Opus 4.7 in Computer by default

Anthropic brand illustration divided into two halves: On the left, an orange-coral background displays a stylized network or molecule diagram with white circular nodes connected by white lines, enclosed within a black wavy border outline representing a head or mind. On the right, a light teal background features an abstract line drawing of a figure or person with curved black lines and black dots, sketched over a white grid on transparent checkered background, suggesting data points and analytical thinking. The composition symbolizes the intersection of artificial intelligence and human cognition.

Claude Opus 4.7 is Anthropic’s new powerhouse for serious software work

Illustration of a speech bubble with code brackets inside, framed by curly braces on an orange background, representing coding conversations or AI-assisted programming.

Anthropic’s revamped Claude Code desktop app is all about parallel coding workflows

Illustration of Claude Code routines concept: An orange-coral background with a stylized design featuring two black curly braces (code brackets) flanking a white speech bubble containing a handwritten lowercase 'u' symbol. The image represents code execution and automated routines within Claude Code.

Anthropic gives Claude Code cloud routines that work while you sleep

Gemini interface showing a NEET Mock Exam Practice Session. On the left side, a chat message from the user says 'I want to take a NEET mock exam.' Below it is Gemini's response explaining a complete NEET mock exam designed to test concepts in Physics, Chemistry, and Biology, with a 'Show thinking' option expanded. The response includes an embedded card for 'NEET UG Practice Test' dated Apr 11, 7:10 PM, with options to 'Try again without interactive quiz' and encouragement message. On the right side is a panel titled 'NEET UG Practice Test' displaying three subject sections: Physics (45 Questions with a yellow icon and blue Start button), Chemistry (45 Questions with a purple icon and blue Start button), and Biology (90 Questions with a green icon). Each section includes a brief description of question topics covered.

Google Gemini now lets you take full NEET mock exams for free

AI Mode in Chrome showing AI-powered shopping assistant panel alongside a Ninja coffee machine product page with pricing and details

Chrome’s AI Mode puts search and pages side by side

Google Gemini AI

Google Gemini can now craft images from your personal photos

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.