โ ๏ธ IMPORTANT SECURITY NOTICE: This repository contains security research demonstrating critical vulnerabilities in the Model Context Protocol (MCP). The code is for educational and defensive purposes only. Do not use these techniques maliciously.
GenSecAI is A non-profit community using generative AI to defend against AI-powered attacks, building open-source tools to secure our digital future from emerging AI threats.
This research is part of our mission to identify and mitigate AI security vulnerabilities before they can be exploited maliciously.
This research demonstrates critical security vulnerabilities in the Model Context Protocol (MCP) that allow attackers to:
- ๐ Exfiltrate sensitive data (SSH keys, API credentials, configuration files)
- ๐ญ Hijack AI agent behavior through hidden prompt injections
- ๐ง Redirect communications without user awareness
- ๐ Override security controls of trusted tools
- โฐ Deploy time-delayed attacks that activate after initial trust is established
Impact: Any AI agent using MCP (Claude, Cursor, ChatGPT with plugins) can be compromised through malicious tool descriptions.
# Clone the repository
git clone https://github.com/gensecaihq/mcp-poisoning-poc.git
cd mcp-poisoning-poc
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Run the demonstration
python examples/basic_attack_demo.py
from src.demo.malicious_server import MaliciousMCPServer
from src.defenses.sanitizer import MCPSanitizer
# Create a malicious MCP server
server = MaliciousMCPServer()
# See how tool descriptions contain hidden instructions
for tool in server.get_tools():
print(f"Tool: {tool['name']}")
print(f"Hidden payload detected!")
# Defend against attacks
sanitizer = MCPSanitizer()
safe_description = sanitizer.clean(tool.description)
Attack Vector | Severity | Exploitation Difficulty | Impact |
---|---|---|---|
Data Exfiltration | ๐ด Critical | Low | Complete credential theft |
Tool Hijacking | ๐ด Critical | Low | Full agent compromise |
Instruction Override | ๐ High | Medium | Security bypass |
Delayed Payload | ๐ High | Medium | Persistent compromise |
The vulnerability exploits a fundamental design flaw in MCP:
- Tool descriptions are treated as trusted input by AI models
- Hidden instructions in descriptions are invisible to users but processed by AI
- No validation or sanitization of tool descriptions occurs
- Cross-tool contamination allows one malicious tool to affect others
See PROOF_OF_CONCEPT.md for detailed technical analysis.
We provide a comprehensive defense framework:
from src.defenses import SecureMCPClient
# Initialize secure client with all protections
client = SecureMCPClient(
enable_sanitization=True,
enable_validation=True,
enable_monitoring=True,
strict_mode=True
)
# Safe tool integration
client.add_server("https://trusted-server.com", verify=True)
/src
- Core implementation of attacks and defenses/docs
- Detailed documentation and analysis/tests
- Comprehensive test suite/examples
- Ready-to-run demonstrations
# Run all tests
pytest
# Run with coverage
pytest --cov=src tests/
# Run security-specific tests
pytest tests/test_attacks.py -v
We welcome contributions to improve MCP security! Please see CONTRIBUTING.md for guidelines.
- ๐ Website: https://gensecai.org
- ๐ง Email: [email protected]
- ๐ฌ Discussions: GitHub Discussions
- Proof of Concept - Detailed PoC explanation
- Attack Vectors - Comprehensive attack analysis
- Mitigation Strategies - Defense implementations
- Technical Analysis - Deep technical dive
This research is conducted under responsible disclosure principles:
- Educational Purpose: Code is for security research and defense only
- No Malicious Use: Do not use these techniques to attack systems
- Disclosure Timeline: Vendors were notified before public release
- Defensive Focus: Primary goal is to enable better defenses
- Organization: GenSecAI - Generative AI Security Community
- Research Team: GenSecAI Security Research Division
- Based on: Original findings from Invariant Labs
- Special Thanks: To the security research community and responsible disclosure advocates
- Security Issues: [email protected]
- General Inquiries: [email protected]
- Website: https://gensecai.org
- Bug Reports: GitHub Issues
This project is licensed under the MIT License - see LICENSE for details.
Made with โค๏ธ by GenSecAI
Securing AI, One Vulnerability at a Time