AI Vulnerability: The MS Copilot “EcoLeak” Vulnerability
Part One of a series of recently found AI Vulnerabilities in Major AI Provider Models.
Identified in very early 2025, this vulnerability is thought to be the first zero-click attack chain on an AI agent. In this case the AI agent is Microsoft Copilot, a family of AI-powered digital assistants that enhance productivity and creativity across various Microsoft products and services
The vulnerability was named “EchoLeak.” It is a security vulnerability that allows data exfiltration from Microsoft 365 Copilot without requiring any user action, interaction, or awareness. The new discovery exposes a critical-level flaw in Copilot, enabling attackers to exfiltrate sensitive organizational data without user interaction. This vulnerability is tracked as CVE-2025-32711 with a CVSS score of 9.3. Fortunately, the flaw has already been patched by Microsoft and shows no signs of active exploitation.
Security researchers categorize “EchoLeak” as an AI command injection issue caused by what is being called a “Large Language Model (LLM) Scope Violation”. This is a security vulnerability where a Large Language Model (LLM) is tricked into exceeding its intended function, often by executing a malicious command embedded in seemingly harmless user-provided data. This can lead to sensitive data exposure or unauthorized actions.
In this attack chain, an attacker sends a specially crafted email to an employee. When the employee later engages Copilot with a routine business question, the system’s retrieval-augmented generation (RAG) engine unwittingly blends the attacker’s input with internal data. The result: Copilot leaks the sensitive content back to the attacker via Microsoft Teams or SharePoint links.
What makes EchoLeak particularly dangerous is that it requires no user clicks or explicit interaction. It exploits Copilot’s default behavior of combining Outlook and SharePoint data without enforcing trust boundaries, turning what should be helpful workflow automation into a potential attack vector.
Security researchers have noted that vulnerabilities like EchoLeak could be used for stealthy data exfiltration or extortion and may affect both single-turn and multi-turn AI interactions. The vulnerability highlights the broader risks in generative AI design, where highly capable language models, if not properly isolated, can be manipulated into leaking their own privileged context.
Key Takeaways and Security Tips
Copilot's primary risk is that it inherits the permissions of the user it operates under. If a user has broad access to sensitive data (e.g., financial reports, HR information), Copilot will also access that data. This broad access can lead to unintended data leakage, as Copilot might include confidential information in summaries, reports, or even external communications.
Copilot introduces a new attack surface. Its integration with Microsoft 365 data creates a new and broad attack surface that needs monitoring. Of course, compromised accounts are a great concern when AI is in use. If a user's account is compromised, attackers could leverage Copilot to efficiently extract confidential information.
AI technologies are here, but much like the internet was in the 1990’s and early 2000s, it is still in its infancy. Remember, computers and their software can only do what they are told by their designers. Security concerns have clearly not been planned for let alone realized in practice due to the haste of the AI development race.