A Go library for building intelligent agents that can interact with multiple Model Context Protocol (MCP) servers using Large Language Models (LLMs).
mcp-agents-go provides a framework for creating AI agents that can:
- Connect to multiple MCP servers to access various tools and resources
- Use different LLM providers (currently supports Azure OpenAI)
- Execute tool calls based on natural language prompts
- Manage multiple agents with different capabilities and tool access
This project uses:
- tmc/langchaingo for LLM integration
- mark3labs/mcp-go for MCP client and server functionality
- Multi-provider LLM support: Currently supports Azure OpenAI with an extensible architecture
- Flexible MCP server connections: Supports both stdio and SSE transport types
- Agent-based architecture: Create multiple agents with different tool access permissions
- Configuration-driven setup: YAML-based configuration for easy deployment
- Tool access control: Fine-grained control over which tools each agent can use
- Streaming responses: Real-time response streaming with GenerateContentAsStreaming
- Enhanced conversation flow: Support for complex multi-turn conversations with tool interactions
go get github.com/carlossantin/mcp-agents-go
Create a config.yaml file with your providers, servers, and agents:
providers:
  - name: my-azure-provider
    type: AZURE
    token: <YOUR_AZURE_TOKEN>
    baseUrl: <YOUR_AZURE_BASE_URL>
    model: gpt-4o-mini
    version: 2025-01-01-preview

servers:
  - name: my-mcp-server
    type: sse
    url: http://localhost:8080/mcp/events
    # For stdio servers:
    # type: stdio
    # command:
    #   - /path/to/your/mcp-server

agents:
  - name: my-agent
    servers:
      - name: my-mcp-server
        allowed_tools:
          - tool001
          - tool002
    provider: my-azure-provider
package main

import (
    "context"
    "fmt"

    "github.com/carlossantin/mcp-agents-go/config"
    "github.com/tmc/langchaingo/llms"
)

func main() {
    ctx := context.Background()

    // Setup from configuration file
    err := config.SetupFromFile(ctx, "config.yaml")
    if err != nil {
        panic(err)
    }

    // Get an agent and generate content
    agent, ok := config.SysConfig.Agents["my-agent"]
    if !ok {
        panic("Agent not found")
    }

    // Create message content
    msgs := []llms.MessageContent{
        {Role: llms.ChatMessageTypeHuman, Parts: []llms.ContentPart{llms.TextContent{Text: "What tools are available?"}}},
    }

    response, _ := agent.GenerateContent(ctx, msgs, false)
    fmt.Println(response)
}
For real-time streaming responses:
package main

import (
    "context"
    "fmt"

    "github.com/carlossantin/mcp-agents-go/config"
    "github.com/tmc/langchaingo/llms"
)

func main() {
    ctx := context.Background()

    // Setup from configuration file
    err := config.SetupFromFile(ctx, "config.yaml")
    if err != nil {
        panic(err)
    }

    // Get an agent
    agent, ok := config.SysConfig.Agents["my-agent"]
    if !ok {
        panic("Agent not found")
    }

    // Create message content
    msgs := []llms.MessageContent{
        {Role: llms.ChatMessageTypeHuman, Parts: []llms.ContentPart{llms.TextContent{Text: "Give me the current dollar to real exchange rate in BRL."}}},
    }

    // Stream responses
    textResp, msgsResp := agent.GenerateContentAsStreaming(ctx, msgs, true)

    // Process both channels concurrently
    go func() {
        for resp := range msgsResp {
            msgs = append(msgs, resp)
        }
    }()

    for resp := range textResp {
        fmt.Print(resp)
    }
}
Instead of using a configuration file, you can set up the system programmatically:
package main

import (
    "context"

    "github.com/carlossantin/mcp-agents-go/agent"
    "github.com/carlossantin/mcp-agents-go/config"
)

func main() {
    ctx := context.Background()

    providers := []config.LLMProvider{
        {
            Name:    "my-provider",
            Type:    "AZURE",
            Token:   "your-token",
            BaseURL: "your-base-url",
            Model:   "gpt-4o-mini",
            Version: "2025-01-01-preview",
        },
    }

    servers := []config.MCPServer{
        {
            Name: "my-server",
            Type: "sse",
            URL:  "http://localhost:8080/mcp/events",
        },
    }

    agents := []config.MCPAgent{
        {
            Name: "my-agent",
            MCPAgentServers: []agent.MCPAgentServer{
                {
                    Name:         "my-server",
                    AllowedTools: []string{"tool1", "tool2"},
                },
            },
            Provider: "my-provider",
        },
    }

    err := config.Setup(ctx, providers, servers, agents)
    if err != nil {
        panic(err)
    }
}
providers:
  - name: string     # Unique identifier for the provider
    type: string     # Currently supports "AZURE"
    token: string    # API token/key
    baseUrl: string  # Base URL for the API
    model: string    # Model name (e.g., "gpt-4o-mini")
    version: string  # API version (for Azure)

servers:
  - name: string     # Unique identifier for the server
    type: string     # "stdio" or "sse"
    # For stdio servers:
    command: []string  # Command to start the server
    # For SSE servers:
    url: string        # Server URL
    headers: []string  # Optional HTTP headers

agents:
  - name: string       # Unique identifier for the agent
    servers:           # List of MCP servers this agent can use
      - name: string         # Server name (must match a server definition)
        allowed_tools:       # Optional: restrict which tools can be used
          - string
    provider: string   # Provider name (must match a provider definition)
The library consists of several main components:
- Config: Manages system configuration and initialization
- Server: Handles MCP server connections (stdio and SSE)
- Agent: Implements the agent logic with LLM integration
- Examples: Demonstrates usage patterns
- The agent receives a natural language prompt as MessageContent
- The LLM analyzes the prompt and determines if tools are needed
- If tools are required, the agent executes them via MCP servers
- Tool responses are fed back to the LLM for final response generation
- For streaming mode, responses are delivered in real time as they are generated
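As an illustration of this flow, passing addNotFinalResponses as true makes the intermediate steps visible. A minimal sketch, assuming the quick-start config.yaml, a running MCP server, and the agent and ctx variables from the examples above:

// Observe the prompt -> tool call -> final answer cycle
msgs := []llms.MessageContent{
    {Role: llms.ChatMessageTypeHuman, Parts: []llms.ContentPart{llms.TextContent{Text: "What tools are available?"}}},
}

// With addNotFinalResponses=true, tool execution details are included
// along with the final answer (see the [tool_usage] markers described later)
response, fullContext := agent.GenerateContent(ctx, msgs, true)
fmt.Println(response)

// fullContext holds the human prompt, any tool interactions, and the
// final AI response, in order
fmt.Printf("conversation now has %d messages\n", len(fullContext))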
GenerateContent(ctx context.Context, msgs []llms.MessageContent, addNotFinalResponses bool) (string, []llms.MessageContent)
Generates content synchronously from a sequence of messages.
Parameters:
- ctx: Context for the request
- msgs: Array of message content representing the conversation
- addNotFinalResponses: Whether to include intermediate tool execution details in the response

Returns:
- string: The generated response text
- []llms.MessageContent: The complete conversation context including the new response
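A minimal call, assuming an agent retrieved from config.SysConfig and a msgs slice as in the quick start:

response, updatedContext := agent.GenerateContent(ctx, msgs, false)
fmt.Println(response)   // final answer text
msgs = updatedContext   // keep the full history for the next turn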
GenerateContentAsStreaming(ctx context.Context, msgs []llms.MessageContent, addNotFinalResponses bool) (chan string, chan llms.MessageContent)
Generates content with real-time streaming responses.
Parameters:
- ctx: Context for the request
- msgs: Array of message content representing the conversation
- addNotFinalResponses: Whether to include intermediate tool execution details in the stream

Returns:
- chan string: Channel for streaming response chunks
- chan llms.MessageContent: Channel for complete message contexts
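Both channels should be consumed: depending on the implementation, leaving one channel undrained may block generation. A minimal consumption pattern, assuming the quick-start agent:

textResp, msgsResp := agent.GenerateContentAsStreaming(ctx, msgs, false)

// Drain context updates concurrently so text chunks keep flowing
go func() {
    for range msgsResp {
    }
}()

for chunk := range textResp {
    fmt.Print(chunk)
}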
Messages use the llms.MessageContent structure:
type MessageContent struct {
    Role  ChatMessageType // Human, AI, Tool, etc.
    Parts []ContentPart   // Text, images, tool calls, etc.
}
Example usage:
msgs := []llms.MessageContent{
    {
        Role: llms.ChatMessageTypeHuman,
        Parts: []llms.ContentPart{
            llms.TextContent{Text: "Your question here"},
        },
    },
}
You can use environment variables in your configuration file:
servers:
  - name: my-server
    type: sse
    url: ${MY_SERVER_URL|http://localhost:8080/mcp/events}
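Variables can come from the shell or be set in-process before the configuration is loaded. A minimal sketch, assuming substitution happens when SetupFromFile parses the file (the fallback after the | is used when the variable is unset):

// Point MY_SERVER_URL at a non-default server before loading config.yaml
os.Setenv("MY_SERVER_URL", "http://my-host:9090/mcp/events")
if err := config.SetupFromFile(ctx, "config.yaml"); err != nil {
    panic(err)
}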
When addNotFinalResponses is set to true, the agent provides detailed information about tool execution:
- [tool_usage] tool_name: Indicates which tool is being executed
- [tool_response] tool_name: response: Shows the tool's response (truncated if longer than 1000 characters)
This is particularly useful for debugging and understanding the agent's decision-making process.
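For example, calling GenerateContent with addNotFinalResponses set to true surfaces these markers in the returned text; the tool name and payload below are hypothetical:

response, _ := agent.GenerateContent(ctx, msgs, true)
// The response may now contain lines such as:
//   [tool_usage] exchange_rate_lookup
//   [tool_response] exchange_rate_lookup: {"usd_brl": 5.43}
fmt.Println(response)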
Both GenerateContent and GenerateContentAsStreaming return complete conversation contexts, allowing you to:
- Maintain conversation history across multiple interactions
- Implement conversation persistence
- Build complex multi-turn dialogues
Example:
// Initial conversation
msgs := []llms.MessageContent{
    {Role: llms.ChatMessageTypeHuman, Parts: []llms.ContentPart{llms.TextContent{Text: "Hello!"}}},
}
response, conversationContext := agent.GenerateContent(ctx, msgs, false)
fmt.Println(response)

// Continue the conversation with the returned context
conversationContext = append(conversationContext, llms.MessageContent{
    Role:  llms.ChatMessageTypeHuman,
    Parts: []llms.ContentPart{llms.TextContent{Text: "What was my previous question?"}},
})
response2, updatedContext := agent.GenerateContent(ctx, conversationContext, false)
fmt.Println(response2)
conversationContext = updatedContext // full history for further turns
The streaming feature allows for real-time response delivery:
textResp, msgsResp := agent.GenerateContentAsStreaming(ctx, msgs, true)

// Process both channels concurrently
go func() {
    for msg := range msgsResp {
        // Handle conversation context updates
        msgs = append(msgs, msg)
    }
}()

for chunk := range textResp {
    fmt.Print(chunk) // Print each chunk as it arrives
}
The library includes comprehensive error handling:
- Server connection failures are reported during initialization
- Tool execution errors are passed to the LLM for appropriate handling
- Configuration validation ensures all required fields are present
- Streaming operations include error propagation through channels
- Always check for agent existence before using:

  agent, ok := config.SysConfig.Agents["my-agent"]
  if !ok {
      return fmt.Errorf("agent not found")
  }

- Handle streaming channels properly:

  textResp, msgsResp := agent.GenerateContentAsStreaming(ctx, msgs, true)

  // Handle both channels concurrently
  go func() {
      for msg := range msgsResp {
          _ = msg // process conversation context updates here
      }
  }()
  for chunk := range textResp {
      fmt.Print(chunk)
  }

- Use context for cancellation:

  ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
  defer cancel()
  response, _ := agent.GenerateContent(ctx, msgs, false)

- Manage conversation context for multi-turn dialogues:

  var conversationHistory []llms.MessageContent

  // Add the user message
  conversationHistory = append(conversationHistory, llms.MessageContent{
      Role:  llms.ChatMessageTypeHuman,
      Parts: []llms.ContentPart{llms.TextContent{Text: userInput}},
  })

  // Get the response and update the context
  response, updatedContext := agent.GenerateContent(ctx, conversationHistory, false)
  conversationHistory = updatedContext
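Putting these practices together, a small interactive chat loop might look like the following sketch (the agent name and config file follow the quick start; error handling is kept deliberately minimal):

package main

import (
    "bufio"
    "context"
    "fmt"
    "os"
    "time"

    "github.com/carlossantin/mcp-agents-go/config"
    "github.com/tmc/langchaingo/llms"
)

func main() {
    ctx := context.Background()
    if err := config.SetupFromFile(ctx, "config.yaml"); err != nil {
        panic(err)
    }

    agent, ok := config.SysConfig.Agents["my-agent"]
    if !ok {
        panic("agent not found")
    }

    var history []llms.MessageContent
    scanner := bufio.NewScanner(os.Stdin)
    fmt.Print("> ")
    for scanner.Scan() {
        // Add the user message to the running history
        history = append(history, llms.MessageContent{
            Role:  llms.ChatMessageTypeHuman,
            Parts: []llms.ContentPart{llms.TextContent{Text: scanner.Text()}},
        })

        // Bound each turn so a stuck tool call cannot hang the loop
        turnCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
        response, updated := agent.GenerateContent(turnCtx, history, false)
        cancel()

        fmt.Println(response)
        history = updated // carry the full context into the next turn
        fmt.Print("> ")
    }
}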
This project uses several key dependencies:
- tmc/langchaingo: LLM integration and conversation management
- mark3labs/mcp-go: MCP client and server functionality
- bytedance/sonic: High-performance JSON serialization
- gookit/config: Configuration management with environment variable support
- life4/genesis: Utility functions for slices and collections
Contributions are welcome! Please feel free to submit a Pull Request.
When contributing, please ensure:
- Your code follows Go best practices
- Include tests for new functionality
- Update documentation for API changes
- Handle errors appropriately
- Consider backwards compatibility
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
For questions and support, please create an issue on GitHub.
- Agent not found: Ensure your config.yaml file is properly formatted and the agent name matches
- Tool execution failures: Check that your MCP server is running and accessible
- Streaming issues: Ensure you're properly handling both channels returned by GenerateContentAsStreaming
- Configuration errors: Verify all required fields are present and environment variables are set