Shutting Down MCP Server #23
I think there is a bug somewhere in the request POST JSON message, according to the logs. I tested mcphost locally against a local llama.cpp server with an empty mcpServers setting and sent a simple prompt, "hi". The log in mcphost was:
While the log in llama.cpp was:
I think the fix is to add `omitempty` to the `reasoning_content` tag. Edited: `pkg/llm/openai/types.go`
```go
type MessageParam struct {
	Role             string        `json:"role"`
	Content          *string       `json:"content"`
	ReasoningContent *string       `json:"reasoning_content,omitempty"` // Add omitempty here (inside the quotes)
	FunctionCall     *FunctionCall `json:"function_call,omitempty"`
	ToolCalls        []ToolCall    `json:"tool_calls,omitempty"`
	Name             string        `json:"name,omitempty"`
	ToolCallID       string        `json:"tool_call_id,omitempty"`
}
```
Pitching in just in case this is still an issue. For me, it turned out that the model I was using was not found. I got to this by logging the error returned here. I hope this helps.
Add `omitempty` to `reasoning_content` in the OpenAI message API. See issue [link](mark3labs#23)
I ran into the same issue, and at least in my case, @Waasi was correct. For me, not only was the model not found, but I also could not connect to the Ollama server. I had to correctly set my OLLAMA_HOST environment variable. After that, I got my first MCP server up and running. So, I believe better logging when this error occurs would be really helpful.