
Cursor is one of the fastest-growing AI-powered coding tools used by developers today. It combines local code editing with powerful large language model (LLM) integrations to help teams write, debug, and explore code more efficiently. But with that deep integration comes increased trust in automated workflows — and increased risk when that trust is exploited.
As AI-driven developer environments become more embedded in software development workflows, Check Point Research set out to evaluate the security model behind these tools, especially in collaborative environments where code, configuration files, and AI-based plugins are frequently shared across teams and repositories.
We discovered a high-impact vulnerability in Cursor’s Model Context Protocol (MCP) system that enables persistent remote code execution (RCE). Once a user approves a MCP configuration, an attacker can silently change its behavior. From that moment on, malicious commands can be executed every time the project is opened without any further prompts or notifications.
An attacker can:
- Add a harmless-looking MCP configuration to a shared repository.
- Wait for victim to pull the code and approve it once in Cursor IDE.
- Replace the MCP configuration with a malicious payload.
- Gain silent, persistence code execution every time the victim opens Cursor IDE.
This isn’t just a theoretical risk, it’s a real-world vulnerability. In shared coding environments, the flaw turns a trusted MCP into a stealthy, persistent point of compromise. For organizations relying on AI tools like Cursor, the implications are serious: silent, ongoing access to developer machines, credentials, and codebases, all triggered by a single, trusted approval.
“AI-powered developer tools are introducing attack surfaces we’ve never seen before. For years, we’ve focused on defending against traditional supply chain attacks , but now, it’s clear we’re entering a new era of cybersecurity threats” said Oded Vanunu, Chief Technologist & Head of product’s vulnerability research at Check Point Software
How the Vulnerability Works
Cursos uses a system called Model Context Protocol (MCPs). These are configuration files that tell Cursor how to automate certain tasks. Think of them as a way for developers to plug in tools, scripts, or AI-driven workflows directly into their coding environment.
When a user opens a project that contains MCP configuration, Cursor shows a one-time approval prompt asking whether to trust it. But here’s the problem:
Once a MCP is approved, Cursor never checks it again, even if the commands inside it are silently changed later.
That means an attacker working in the same shared repository could:
- Add a completely safe-looking MCP configuration to a project.
- Wait for someone else on the team to pull it.
- Change the configuration later to do something malicious like launching a script, opening a backdoor, or sending data to an external server.
Every time the victim opens the project in Cursor, the new command runs automatically without a new prompt or alert.
Proof of Concept: From Harmless MCP to Persistent Exploit
To show how this vulnerability works in practice, we created a proof of concept that mimics a typical attack scenario in a shared project:
- Step 1: A Harmless MCP
The attacker first commits a completely safe MCP Configuration. Something as innocent as a command that just prints a message. When the victim opens the project, they see a prompt asking to approve this MCP. - Step 2: Silent Switch to Malicious Behavior
After approval, the attacker quietly changes the MCP configuration to malicious code, such as a script that opens a reverse shell or runs harmful system commands. - Step 3: Automatic Execution Every Time
Now, every time the victim opens the project in Cursor IDE, the malicious command runs silently without a warning or prompt. - Step 4: Persistent, Invisible Access
This gives the attacker repeated, stealthy access to the victim’s machine, making it possible to steal data, execute further attacks, or move laterally in the victim’s environment.
Real-World Impact
Because many organizations share and sync projects through repositories, this vulnerability creates an ideal way for attackers to establish long-term, hidden footholds.
Here’s why it’s so dangerous:
- Silent Persistence: Malicious code runs every time a project opens, without alerting users or requiring further approvals. This means attackers can maintain ongoing access indefinitely.
- Wide Attack Surface: Any developer with write access to a shared repository can inject and modify these trusted MCP configuration, putting entire teams and organizations at risk.
- Privilege Escalation Risks: Developer machines often have sensitive credentials, cloud access keys, or other secrets stored locally. An attacker exploiting this vulnerability can leverage those to escalate access further into corporate networks.
- Data and Code Exposure: Beyond direct code execution, attackers could exfiltrate source code, intellectual property, or internal communications without detection.
- AI Toolchain Trust Broken: As AI tools like Cursor become more embedded in software development, their security model must be airtight. This vulnerability highlights the dangers of blind trust in automated workflows.
For companies relying on Cursor and similar AI-powered IDEs, understanding and addressing this vulnerability is critical to protecting their development environments and sensitive assets.
Disclosure and Mitigation
Check Point Research promptly and responsibly disclosed the issue to the Cursor development team on July 16, 2025. Cursor released an update (version 1.3) on July 29th.
This vulnerability is part of a broader challenge facing modern development tools that deeply integrate AI. Platforms like Cursor streamline workflows by automating tasks through natural language and LLM-connected plugins. But with that convenience comes increased reliance on trust, often with limited visibility into how that trust can be abused.
To mitigate this class of vulnerability in AI-assisted development environments, we recommend:
- Treat MCP configuration files as attack surfaces: Just like source code, automation scripts, and MCP configuration definitions should be reviewed, audited, and version-controlled carefully.
- Avoid implicit trust in AI-driven automations: Even if a MCP or suggestion looks benign, ensure team members understand what it does before approving it.
- Limit write permissions in collaborative environments: Control who can modify trusted configuration files, especially in shared repositories.
Conclusion
The discovery of this persistent remote code execution vulnerability in Cursor IDE highlights a critical security challenge for AI-powered developer tools. As organizations increasingly rely on integrated AI workflows, ensuring that trust mechanisms are robust and verifiable is essential.
We encourage developers, security teams, and organizations to stay vigilant, audit their AI development environments, and work closely with vendors to address emerging threats. Only through proactive security can we safely harness the power of AI in software development.



