
Commit 4393f39

Initial commit
0 parents

File tree

9 files changed: +1293 additions, -0 deletions

.cursorrule

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
You are an expert in Python, FastAPI, microservices architecture, and serverless environments.

Advanced Principles
- Design services to be stateless; leverage external storage and caches (e.g., Redis) for state persistence.
- Implement API gateways and reverse proxies (e.g., NGINX, Traefik) for handling traffic to microservices.
- Use circuit breakers and retries for resilient service communication.
- Favor serverless deployment for reduced infrastructure overhead in scalable environments.
- Use asynchronous workers (e.g., Celery, RQ) for handling background tasks efficiently.

Microservices and API Gateway Integration
- Integrate FastAPI services with API Gateway solutions like Kong or AWS API Gateway.
- Use the API Gateway for rate limiting, request transformation, and security filtering.
- Design APIs with a clear separation of concerns to align with microservices principles.
- Implement inter-service communication using message brokers (e.g., RabbitMQ, Kafka) for event-driven architectures.

Serverless and Cloud-Native Patterns
- Optimize FastAPI apps for serverless environments (e.g., AWS Lambda, Azure Functions) by minimizing cold-start times.
- Package FastAPI applications as lightweight containers or standalone binaries for deployment in serverless setups.
- Use managed services (e.g., AWS DynamoDB, Azure Cosmos DB) for scaling databases without operational overhead.
- Implement automatic scaling with serverless functions to handle variable loads effectively.

Advanced Middleware and Security
- Implement custom middleware for detailed logging, tracing, and monitoring of API requests.
- Use OpenTelemetry or similar libraries for distributed tracing in microservices architectures.
- Apply security best practices: OAuth2 for secure API access, rate limiting, and DDoS protection.
- Use security headers (e.g., CORS, CSP) and validate deployments with security-testing tools like OWASP ZAP.

Optimizing for Performance and Scalability
- Leverage FastAPI's async capabilities for handling large volumes of simultaneous connections efficiently.
- Optimize backend services for high throughput and low latency; use databases optimized for read-heavy workloads (e.g., Elasticsearch).
- Use caching layers (e.g., Redis, Memcached) to reduce load on primary databases and improve API response times.
- Apply load balancing and service mesh technologies (e.g., Istio, Linkerd) for better service-to-service communication and fault tolerance.

Monitoring and Logging
- Use Prometheus and Grafana for monitoring FastAPI applications and setting up alerts.
- Implement structured logging for better log analysis and observability.
- Integrate with centralized logging systems (e.g., ELK Stack, AWS CloudWatch) for aggregated logging and monitoring.

Key Conventions
1. Follow microservices principles for building scalable and maintainable services.
2. Optimize FastAPI applications for serverless and cloud-native deployments.
3. Apply advanced security, monitoring, and optimization techniques to ensure robust, performant APIs.

Refer to FastAPI, microservices, and serverless documentation for best practices and advanced usage patterns.
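The circuit-breaker-and-retry principle above can be sketched with a minimal, stdlib-only example. The `CircuitBreaker` class, its thresholds, and its failure-counting policy here are illustrative assumptions, not a production implementation (libraries like `pybreaker` cover half-open probing, metrics, and thread safety):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures and rejects calls until `reset_timeout` seconds have passed."""

    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, func, *args, **kwargs):
        # Reject immediately while the breaker is open and not yet timed out.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open")
            self.opened_at = None  # timeout elapsed: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

A caller wraps each downstream request in `breaker.call(...)`; once the breaker opens, requests fail fast instead of piling up behind a dead service.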

.gitignore

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv
.env

.python-version

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
3.12

README.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
# Model Context Protocol exploration

This repository contains a simple client for the Model Context Protocol (MCP) and a simple server for testing purposes.

brave_search_client.py

Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import asyncio
import os
import dotenv

dotenv.load_dotenv()

async def search_brave(query: str, count: int = 10):
    # Configure server parameters for Brave Search
    server_params = StdioServerParameters(
        command="/usr/bin/npx",
        args=["@modelcontextprotocol/server-brave-search"],
        env={
            "BRAVE_API_KEY": os.getenv("BRAVE_API_KEY")
        }
    )

    print("Starting server...")
    async with stdio_client(server_params) as (read, write):
        print("Server started, initializing session...")
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # List available tools to verify Brave Search is available.
            # list_tools() returns a ListToolsResult; iterate its .tools attribute.
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # Call the Brave Search tool
            result = await session.call_tool(
                "brave_web_search",
                arguments={
                    "query": query,
                    "count": count
                }
            )

            # Print the search results
            if result and result.content:
                for content in result.content:
                    if content.type == "text":
                        print(content.text)

async def main():
    # Check for Brave API key
    if not os.getenv("BRAVE_API_KEY"):
        print("Please set BRAVE_API_KEY environment variable")
        return

    # Perform a search
    search_query = "Model Context Protocol"
    print(f"\nSearching for: {search_query}")
    await search_brave(search_query)

if __name__ == "__main__":
    asyncio.run(main())

pyproject.toml

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
[project]
name = "mcp-exploration"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "langchain-anthropic>=0.3.0",
    "langchain>=0.3.8",
    "mcp>=1.0.0",
    "python-dotenv>=1.0.1",
    "langgraph>=0.2.53",
]

repro_github.py

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
import os
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import dotenv
from mcp import types

dotenv.load_dotenv()

async def main():
    # Create GitHub server parameters
    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
        env={
            "GITHUB_PERSONAL_ACCESS_TOKEN": os.getenv("GITHUB_PERSONAL_ACCESS_TOKEN"),
            "PATH": os.getenv("PATH")
        }
    )

    # Create client session and connect to GitHub server
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # Example: call a tool to list repositories
            repos = await session.send_request(
                types.ClientRequest(
                    types.CallToolRequest(
                        method="tools/call",
                        params=types.CallToolRequestParams(name="search_repositories", arguments={"query": "user:adhikasp"}),
                    )
                ),
                types.TextContent,
            )
            print("Repositories:", repos)

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

simple_client.py

Lines changed: 138 additions & 0 deletions
@@ -0,0 +1,138 @@
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client
import dotenv
import os
from pydantic import BaseModel, Field, create_model
from langchain_core.tools import BaseTool
from typing import Any, Optional, Type, List
import json
from langchain_core.callbacks import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from langchain_core.messages import HumanMessage
from langchain_anthropic import ChatAnthropic

dotenv.load_dotenv()

def create_tool_model(tool_schema: types.Tool) -> Type[BaseModel]:
    """Create a Pydantic model from an MCP tool input schema."""
    field_definitions = {}
    for name, field_schema in tool_schema.inputSchema["properties"].items():
        field_type = str  # Default to string type
        if field_schema.get("type") == "string":
            field_type = str
        elif field_schema.get("type") == "integer":
            field_type = int
        elif field_schema.get("type") == "number":
            field_type = float
        elif field_schema.get("type") == "boolean":
            field_type = bool

        field_description = field_schema.get("description", "")
        field_definitions[name] = (field_type, Field(description=field_description))

    return create_model(
        tool_schema.inputSchema.get("title", tool_schema.name),
        **field_definitions
    )

def create_langchain_tool(tool_schema: types.Tool, session: ClientSession, server_params: StdioServerParameters) -> BaseTool:
    """Create a LangChain tool instance from an MCP tool schema."""
    input_model = create_tool_model(tool_schema)

    class McpConvertedLangchainTool(BaseTool):
        name: str = tool_schema.name
        description: str = tool_schema.description
        args_schema: Type[BaseModel] = input_model
        mcp_session: ClientSession = session
        mcp_server_params: StdioServerParameters = server_params

        def _run(
            self,
            run_manager: Optional[CallbackManagerForToolRun] = None,
            **kwargs,
        ) -> Any:
            raise NotImplementedError("Implement me!")

        async def _arun(
            self,
            run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
            **kwargs,
        ) -> Any:
            # Each call opens a fresh stdio session, since the session captured
            # at conversion time is closed once its context manager exits.
            async with stdio_client(self.mcp_server_params) as (read, write):
                async with ClientSession(read, write) as session:
                    await session.initialize()
                    return await session.call_tool(self.name, arguments=kwargs)

    return McpConvertedLangchainTool()

async def convert_mcp_to_langchain_tools(server_params: List[StdioServerParameters]) -> List[BaseTool]:
    """Convert MCP tools to LangChain tools based on the given server parameters."""
    langchain_tools = []
    for server_param in server_params:
        async with stdio_client(server_param) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # Convert all MCP tools to LangChain tools
                tools: types.ListToolsResult = await session.list_tools()
                for tool in tools.tools:
                    tool_class = create_langchain_tool(tool, session, server_param)
                    langchain_tools.append(tool_class)

    return langchain_tools

async def run():
    # Create server parameters for stdio connection
    server_params = [
        StdioServerParameters(
            command="/home/adhikasp/.local/bin/uvx",
            args=["mcp-server-fetch"],
            env={
                "PATH": os.getenv("PATH")
            }
        ),
        StdioServerParameters(
            command="/usr/bin/npx",
            args=["-y", "@modelcontextprotocol/server-brave-search"],
            env={
                "BRAVE_API_KEY": os.getenv("BRAVE_API_KEY"),
                "PATH": os.getenv("PATH")
            }
        ),
        # StdioServerParameters(
        #     command="/usr/bin/npx",
        #     args=["-y", "@modelcontextprotocol/server-github"],
        #     env={
        #         "GITHUB_PERSONAL_ACCESS_TOKEN": os.getenv("GITHUB_PERSONAL_ACCESS_TOKEN"),
        #         "PATH": os.getenv("PATH")
        #     }
        # ),
    ]

    langchain_tools = await convert_mcp_to_langchain_tools(server_params)

    # model = ChatAnthropic(model="claude-3-haiku-20240307")
    model = ChatAnthropic(model="claude-3-5-sonnet-latest")
    memory = MemorySaver()
    agent_executor = create_react_agent(model, langchain_tools, checkpointer=memory)

    config = {"configurable": {"thread_id": "abc123"}}
    messages = []
    async for chunk in agent_executor.astream({"messages": [HumanMessage(
            content="Search github repositories owned by adhikasp. What are his interests based on them?")]}, config):
        messages.append(chunk)

    for message in messages:
        print(message)
        # if 'agent' in message:
        #     print(message['agent']['messages'][-1].content[0])
        # if 'tools' in message:
        #     print(message['tools']['messages'][-1].content)

if __name__ == "__main__":
    import asyncio
    asyncio.run(run())

0 commit comments
