Imagine an email inbox under your control. Now imagine that another party, simply by sending you a specially crafted message, could make ChatGPT leak sensitive data from that inbox without you ever clicking anything and without any visible sign. That is the scenario exposed by ShadowLeak, a new vulnerability in ChatGPT’s Deep Research agent.
What Is the Deep Research Agent
Deep Research is a feature in ChatGPT introduced in early 2025. It allows users to delegate complex tasks and investigations. The agent can browse web pages, analyze documents, and even access content from connected services such as Gmail, Google Drive, or GitHub. It collects and organizes large amounts of information and creates thorough reports, saving the user from manually searching through each source.
For users who enable Gmail integration, Deep Research can directly access their inbox. While this makes research much easier, it also opens the door to new security risks.
What Is ShadowLeak
ShadowLeak is a zero click vulnerability discovered by security researchers. A zero click attack means the user does not need to open or interact with the malicious message. The attack works as a service side exfiltration which means the data is stolen directly from the cloud infrastructure rather than from the user’s own device.
Here is how it works in simple terms:
- An attacker sends a normal looking email to a target who has Deep Research connected to Gmail.
- Inside that message are hidden instructions using tricks like tiny fonts, white text on white background, or invisible HTML elements. Humans cannot see these but the agent can read them.
- Later, when the user asks Deep Research to summarize their emails, it processes the malicious message along with the others. The hidden instructions tell the agent to collect private information from the inbox and send it to an attacker controlled website.
- This entire process happens silently in the cloud. The user sees nothing unusual.
Why This Is Dangerous
ShadowLeak is far more serious than a normal phishing attempt for several reasons:
- It is invisible to usual security tools because everything happens on the cloud. Network logs and firewalls will only see normal behavior.
- It requires no action from the user. The victim does not need to click any link or open any attachment.
- It can affect many connected services. Although it was shown with Gmail, the same method could target file storage systems, shared documents, or any other data source that feeds content into the agent.
How It Was Discovered and Fixed
Security researchers found the flaw in June 2025 and reported it to OpenAI.
OpenAI fixed the issue by early September 2025.
So far there is no evidence that real attackers used this flaw before it was patched.
Lessons and Precautions
ShadowLeak shows that giving AI agents too much access can create new and hidden risks. Both individual users and organizations should follow these steps to reduce their exposure
- Limit permissions and only give AI agents access to the minimum data they need. If email access is not essential, disable it.
- Sanitize content before feeding it into an agent by removing hidden HTML elements, invisible text, and metadata.
- Separate reading and acting permissions so that an agent summarizing emails cannot also send information to external websites.
- Monitor what the agent is doing in the background and keep logs of its actions.
- Educate teams about prompt injection and hidden instructions as serious threats.
- Keep AI systems updated so they receive patches for new security issues.
Looking Ahead
ShadowLeak is a warning for the future of AI agents. These tools blur the line between user control and autonomous action. As they connect to more personal and corporate data sources, it becomes essential to set clear boundaries on what they can access and do.
Organizations should start treating AI agents like privileged services rather than passive tools. This means limiting their permissions, keeping clear records of their actions, and building rules that prevent them from acting outside their intended purpose. These steps will become an important part of both security and compliance in the coming years.


Leave a Reply