By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIGoogleOpenAITech

Prompt injection attack turned ChatGPT into a Gmail data thief

A proof-of-concept attack called ShadowLeak exposed how AI agents like ChatGPT could be tricked into stealing Gmail inbox details without users noticing.

By
Shubham Sawarkar
Shubham Sawarkar's avatar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Sep 20, 2025, 9:51 AM EDT
Share
Gmail app icon alamy
Photo: Alamy
SHARE

It sounds like something from a technothriller: a user asks ChatGPT to “do some deep research on my inbox,” and while they sip coffee, the assistant quietly copies names, addresses and other private tidbits from their Gmail and posts them to a URL the attacker controls. The user never opens the malicious message. They don’t click a link. There’s no visible alert. The exfiltration happens inside OpenAI’s servers, out of sight of the user’s security stack.

That scenario isn’t fiction. This week, security researchers at Radware published a detailed proof-of-concept they’re calling ShadowLeak — a “zero-click” service-side prompt-injection attack that coaxed OpenAI’s Deep Research agent (the version of ChatGPT that can autonomously browse and act) into stealing Gmail data and sending it to an attacker-controlled destination. Radware says they responsibly disclosed the issue in June; OpenAI patched the hole before the public write-up and later acknowledged it as fixed.

The quiet heist — how the trick worked

Radware’s researchers laid this out like a playbook. The attacker sends a perfectly ordinary-looking email to the victim’s inbox — for example, a message titled “Restructuring Package – Action Items.” Embedded in the message’s HTML are instructions the human reader can’t see: tiny fonts, white-on-white text and layout tricks that hide the real commands. To a person, the message looks harmless. To the agent reading the raw HTML, the commands are readable and precise.

When the user asks Deep Research to “summarize today’s emails” or “research my inbox about HR,” the agent dutifully reads every message it has access to — including the booby-trapped one. The hidden instructions tell the agent to extract specific fields (names, addresses, etc.), encode them (Radware’s report used base64), and then call a URL that includes the encoded data as parameters. Because Deep Research executes browsing and HTTP calls from OpenAI’s cloud, the data never traverses the user’s device or corporate perimeter — it’s exfiltrated straight from the provider’s servers. That service-side characteristic is what makes ShadowLeak especially insidious.

Radware’s writeup is methodical and refreshingly candid about the trial-and-error it took to reach a reliable technique: they initially hit the agent’s safety mechanisms, learned that asking the model to encode data before sending it worked around lower-layer filters, and iterated until the trick succeeded consistently.

Why this wasn’t just another phishing trick

Prompt injection — planting instructions inside content so a language model executes them — is a known class of attack. What’s new here is the combination of (a) agentic AI with tools that can make web requests, (b) connectors that give those agents access to private data (Gmail, Drive, GitHub, etc.), and (c) the exfiltration happening on the service side rather than on the user’s device. That last part breaks many of the assumptions defenders rely on: secure web gateways, endpoint detection systems and most enterprise forensic logs won’t see requests coming from OpenAI’s cloud. Radware calls this “nearly impossible to detect” from the impacted organization’s vantage point.

Radware also notes the attack generalizes beyond Gmail: any connector that supplies text to the agent (Outlook, Google Drive, SharePoint, Teams, GitHub, etc.) could carry hidden instructions that the agent could be tricked into following.

Timeline and responsible disclosure

Radware says it first reported the issue to OpenAI on June 18, 2025, via Bugcrowd, updated the report the next day, and noticed a fix in early August (initially without communication). OpenAI formally acknowledged the vulnerability and marked it resolved on September 3, 2025; mainstream outlets and security blogs covered the public disclosure on September 18, 2025. Radware and follow-up reporting say there’s no evidence the technique was exploited in the wild before the fix.

What researchers and security teams recommend

Radware and independent coverage emphasize that people and organizations should treat AI agents like privileged insiders who deserve the same governance as a human with broad access. Practical steps include:

  • Audit and minimize permissions. Don’t give agents blanket access to inboxes, drives and repositories unless absolutely necessary. Start in read-only mode and escalate carefully.
  • Sanitize inputs before ingestion. Strip or normalize HTML/CSS, remove hidden text and obfuscated characters before passing content to an agent. (Radware calls this a first line of defense but warns it’s not a panacea.)
  • Log and monitor agent actions. Capture who/what/why for each tool invocation and web request initiated by an agent so you have forensic traceability. Assume agent prompts are untrusted input.
  • Limit automation for high-risk operations. Don’t let agents autonomously perform sensitive actions (submitting data, moving funds, changing configs) without human checkpoints.
  • Require vendor supply-chain checks. If you integrate third-party connectors or MCP (Model Context Protocol) servers, demand prompt-injection resilience testing and include it in vendor contracts.

Malwarebytes’ post offers a similarly pragmatic checklist for day-to-day users: be cautious with connector permissions, enable multi-factor authentication, and keep agent tools up to date so you benefit from security patches.

What OpenAI and the wider industry face next

OpenAI patched the specific issue flagged by Radware, and the company’s broader security posture is under heavier scrutiny than ever. But researchers and commentators warn that ShadowLeak is not a one-off — it’s a demonstration of a broader class of risks that come with agentic AI: models that act (click, fetch, write) rather than merely reply. That shift from passive assistant to active operator changes the attack surface in ways defenders and auditors must rethink.

Beyond tooling, there’s a governance problem: how do you treat an automated agent that can access sensitive corporate data and speak on behalf of users? CSO Online’s reporting urges a maturity model: start with narrow, read-only agents that require manual approval for side-effects, instrument everything an agent does, and red-team with prompt-injection playbooks before you scale.

A few concrete takeaways for users and admins

If you use ChatGPT’s Deep Research (or any agent that can connect to your accounts), here’s what to do today:

  • Revoke or tighten connectors you don’t actively need. If an agent doesn’t need Gmail access, remove it.
  • Treat agent logs like audit logs. Record every outbound request the agent makes and who authorized it.
  • Sanitize incoming HTML/text where possible. Convert rich HTML into plain text before feeding it to an agent, and strip suspicious attributes.
  • Educate staff: don’t presume AI is infallible or self-aware of corporate policy — it follows instructions, and hidden instructions can be malicious.

The bigger picture

ShadowLeak is a reminder that the convenience of agentic AI (hands-off research, automated summarization across accounts) brings friction-free paths for attackers, too. The tradeoffs — faster workflows versus a new class of supply-chain and insiderlike risks — are real, and mitigating them will require engineering fixes, new security controls, and better standards for how connectors and agents are built and governed.

Radware’s writeup reads partly like a warning and partly like a how-to for defenders: the same creativity and persistence that let the researchers make an attack work is what security teams must harness to design defenses before adversaries do. For now, the exploit is patched; the lesson is not.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Topic:ChatGPT
Most Popular

ExpressVPN’s long‑term VPN plans get a massive 81 percent price cut

Apple’s portable iPad mini 7 falls to $399 in limited‑time sale

Valve warns Steam Deck OLED will be hard to buy in RAM crunch

Lock in up to 87% off Surfshark VPN for two years

Google Doodle kicks off Lunar New Year 2026 with a fiery Horse

Also Read
Two blue Google Pixel 10a phones are shown in front of large repeated text reading ‘Smooth by design,’ with one phone displaying a blue gradient screen and the other showing the matte blue back with dual camera module and Google logo.

Google’s Pixel 10a keeps the price, upgrades the experience

Meta and NVIDIA logos on black background

Meta just became NVIDIA’s biggest AI chip power user

A side-by-side comparison showing a Google Pixel 10 Pro XL using Quick Share to successfully send a file to an iPhone, with the iPhone displaying the Android device inside its native AirDrop menu.

Pixel 9 users can now AirDrop files to iPhones and Macs

Screenshot of Google Search’s AI Mode on desktop showing a conversational query for “How can I get into curling,” with a long-form AI-generated answer on the left using headings and bullet points, and on the right a vertical carousel of website cards from multiple sources, plus a centered hover pop-up card stack highlighting individual source links and site logos over the carousel.

Google’s AI search is finally easier on publishers

Google I/O 2026 event graphic showing the Google I/O logo with a colorful gradient rectangle, slash, and circle on a black background, with the text ‘May 19–20, 2026’ and ‘io.google’ beneath.

Google I/O 2026 set for May 19–20 at Shoreline Amphitheatre

Dropdown model selector in Perplexity AI showing “Claude Sonnet 4.6 Thinking” highlighted under the “Best” section, with other options like Sonar, Gemini 3 Flash, Gemini 3 Pro, GPT‑5.2, Claude Opus 4.6, Grok 4.1, and Kimi K2.5 listed below on a light beige interface.

Claude Sonnet 4.6 lands for all Perplexity Pro and Max users

Anthropic illustration

Claude Sonnet 4.6 levels up coding, agents, and computer use in one hit

The logo and lettering of Paramount Skydance Corporation can be seen at a Paramount stand at the Media Days in Munich (Bavaria, Germany).

Paramount gets one more shot at stealing Warner Bros. Discovery from Netflix

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.