By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIGoogleOpenAITech

Prompt injection attack turned ChatGPT into a Gmail data thief

A proof-of-concept attack called ShadowLeak exposed how AI agents like ChatGPT could be tricked into stealing Gmail inbox details without users noticing.

By
Shubham Sawarkar
Shubham Sawarkar's avatar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Sep 20, 2025, 9:51 AM EDT
Share
Gmail app icon alamy
Photo: Alamy
SHARE

It sounds like something from a technothriller: a user asks ChatGPT to “do some deep research on my inbox,” and while they sip coffee, the assistant quietly copies names, addresses and other private tidbits from their Gmail and posts them to a URL the attacker controls. The user never opens the malicious message. They don’t click a link. There’s no visible alert. The exfiltration happens inside OpenAI’s servers, out of sight of the user’s security stack.

That scenario isn’t fiction. This week, security researchers at Radware published a detailed proof-of-concept they’re calling ShadowLeak — a “zero-click” service-side prompt-injection attack that coaxed OpenAI’s Deep Research agent (the version of ChatGPT that can autonomously browse and act) into stealing Gmail data and sending it to an attacker-controlled destination. Radware says they responsibly disclosed the issue in June; OpenAI patched the hole before the public write-up and later acknowledged it as fixed.

The quiet heist — how the trick worked

Radware’s researchers laid this out like a playbook. The attacker sends a perfectly ordinary-looking email to the victim’s inbox — for example, a message titled “Restructuring Package – Action Items.” Embedded in the message’s HTML are instructions the human reader can’t see: tiny fonts, white-on-white text and layout tricks that hide the real commands. To a person, the message looks harmless. To the agent reading the raw HTML, the commands are readable and precise.

When the user asks Deep Research to “summarize today’s emails” or “research my inbox about HR,” the agent dutifully reads every message it has access to — including the booby-trapped one. The hidden instructions tell the agent to extract specific fields (names, addresses, etc.), encode them (Radware’s report used base64), and then call a URL that includes the encoded data as parameters. Because Deep Research executes browsing and HTTP calls from OpenAI’s cloud, the data never traverses the user’s device or corporate perimeter — it’s exfiltrated straight from the provider’s servers. That service-side characteristic is what makes ShadowLeak especially insidious.

Radware’s writeup is methodical and refreshingly candid about the trial-and-error it took to reach a reliable technique: they initially hit the agent’s safety mechanisms, learned that asking the model to encode data before sending it worked around lower-layer filters, and iterated until the trick succeeded consistently.

Why this wasn’t just another phishing trick

Prompt injection — planting instructions inside content so a language model executes them — is a known class of attack. What’s new here is the combination of (a) agentic AI with tools that can make web requests, (b) connectors that give those agents access to private data (Gmail, Drive, GitHub, etc.), and (c) the exfiltration happening on the service side rather than on the user’s device. That last part breaks many of the assumptions defenders rely on: secure web gateways, endpoint detection systems and most enterprise forensic logs won’t see requests coming from OpenAI’s cloud. Radware calls this “nearly impossible to detect” from the impacted organization’s vantage point.

Radware also notes the attack generalizes beyond Gmail: any connector that supplies text to the agent (Outlook, Google Drive, SharePoint, Teams, GitHub, etc.) could carry hidden instructions that the agent could be tricked into following.

Timeline and responsible disclosure

Radware says it first reported the issue to OpenAI on June 18, 2025, via Bugcrowd, updated the report the next day, and noticed a fix in early August (initially without communication). OpenAI formally acknowledged the vulnerability and marked it resolved on September 3, 2025; mainstream outlets and security blogs covered the public disclosure on September 18, 2025. Radware and follow-up reporting say there’s no evidence the technique was exploited in the wild before the fix.

What researchers and security teams recommend

Radware and independent coverage emphasize that people and organizations should treat AI agents like privileged insiders who deserve the same governance as a human with broad access. Practical steps include:

  • Audit and minimize permissions. Don’t give agents blanket access to inboxes, drives and repositories unless absolutely necessary. Start in read-only mode and escalate carefully.
  • Sanitize inputs before ingestion. Strip or normalize HTML/CSS, remove hidden text and obfuscated characters before passing content to an agent. (Radware calls this a first line of defense but warns it’s not a panacea.)
  • Log and monitor agent actions. Capture who/what/why for each tool invocation and web request initiated by an agent so you have forensic traceability. Assume agent prompts are untrusted input.
  • Limit automation for high-risk operations. Don’t let agents autonomously perform sensitive actions (submitting data, moving funds, changing configs) without human checkpoints.
  • Require vendor supply-chain checks. If you integrate third-party connectors or MCP (Model Context Protocol) servers, demand prompt-injection resilience testing and include it in vendor contracts.

Malwarebytes’ post offers a similarly pragmatic checklist for day-to-day users: be cautious with connector permissions, enable multi-factor authentication, and keep agent tools up to date so you benefit from security patches.

What OpenAI and the wider industry face next

OpenAI patched the specific issue flagged by Radware, and the company’s broader security posture is under heavier scrutiny than ever. But researchers and commentators warn that ShadowLeak is not a one-off — it’s a demonstration of a broader class of risks that come with agentic AI: models that act (click, fetch, write) rather than merely reply. That shift from passive assistant to active operator changes the attack surface in ways defenders and auditors must rethink.

Beyond tooling, there’s a governance problem: how do you treat an automated agent that can access sensitive corporate data and speak on behalf of users? CSO Online’s reporting urges a maturity model: start with narrow, read-only agents that require manual approval for side-effects, instrument everything an agent does, and red-team with prompt-injection playbooks before you scale.

A few concrete takeaways for users and admins

If you use ChatGPT’s Deep Research (or any agent that can connect to your accounts), here’s what to do today:

  • Revoke or tighten connectors you don’t actively need. If an agent doesn’t need Gmail access, remove it.
  • Treat agent logs like audit logs. Record every outbound request the agent makes and who authorized it.
  • Sanitize incoming HTML/text where possible. Convert rich HTML into plain text before feeding it to an agent, and strip suspicious attributes.
  • Educate staff: don’t presume AI is infallible or self-aware of corporate policy — it follows instructions, and hidden instructions can be malicious.

The bigger picture

ShadowLeak is a reminder that the convenience of agentic AI (hands-off research, automated summarization across accounts) brings friction-free paths for attackers, too. The tradeoffs — faster workflows versus a new class of supply-chain and insiderlike risks — are real, and mitigating them will require engineering fixes, new security controls, and better standards for how connectors and agents are built and governed.

Radware’s writeup reads partly like a warning and partly like a how-to for defenders: the same creativity and persistence that let the researchers make an attack work is what security teams must harness to design defenses before adversaries do. For now, the exploit is patched; the lesson is not.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Topic:ChatGPT
Most Popular

PayPal Business for side hustles, shops and agencies

Google Drive now uses AI to catch ransomware in real time

Nintendo makes physical Switch 2 cartridges $10 pricier than digital ones

iPhone Lockdown Mode: Apple’s extreme security switch

How the PayPal Debit Card works with your balance

Also Read
Hero image for Veo 3.1 Lite featuring the text 'Build with Veo 3.1 Lite' centered on a dark background, surrounded by six sample AI-generated video frames showcasing diverse content: a mountaineer in red jacket at sunrise in a snowy alpine landscape, a white horse galloping through water, a person wearing round sunglasses and patterned jacket, a speedboat cutting through ocean waves, vibrant abstract landscape with colorful rolling hills and pink sky, and an underwater seaweed scene.

Google launches Veo 3.1 Lite for cheaper AI video in the Gemini API

Promotional graphic for Fitbit’s Personal Health Coach showing a smartphone screen with the Fitbit app dashboard, including a circular weekly cardio progress ring at 56%, tiles for steps, readiness, and sleep duration labeled ‘Good,’ and a detailed sleep summary card on a soft blue gradient background with the words ‘Personal Health Coach’ at the top.

Fitbit personal health coach adds cycle health, mental wellbeing and nutrition

Google Account showing updated username 'elisa.beckett.new@gmail.com' with surrounding Google services icons including Gmail, Sheets, Docs, Photos, Drive, and Chrome.

Google now lets US users pick a new Gmail username

Delta Air Lines and Amazon Leo partnership announcement with aircraft flying above clouds in sunrise backdrop.

Amazon Leo is bringing faster free Wi-Fi to Delta flights from 2028

Android Media 3.1.10 illustration showing editing tools, playback controls, timeline scrubber, and notification settings.

Media3 1.10 delivers fresh UI and new format support

Google Workspace Admin data regions reports showing user distribution across Assured Controls, data region policies, and third-party attestation information.

Google Workspace adds third-party proof for data regions

Google Chat guest invitation dialog showing how to invite external users (john@acme.com) with Start chat button.

Guest accounts let you manage non-Workspace users in Google Chat

Vivaldi two-level tab stacking showing organized tab groups with outdoor adventure content.

Vivaldi 7.9 for iOS finally gets Two-Level Tab Stacks

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.