Conversation

MarkWard0110
Contributor

The bug this fixes:
Sometimes the LLM makes a mistake in a tool's name. The client would output "Tool x not found" and submit a new chat request. The problem is that this chat request contains the assistant's tool call but is missing the corresponding tool response. This results in an HTTP status code 400, invalid tool usage.

The patch adds a tool response when the tool is not found. This lets the client inform the LLM, via a tool response, that it made a bad tool call, and avoids the HTTP 400 error.
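A minimal sketch of the idea, not the actual patch: the function name, message shape, and `available_tools` mapping below are illustrative assumptions. The point is that an unknown tool name now produces a tool-role message instead of nothing, so every assistant tool call in the next request is paired with a tool response.

```python
# Hypothetical sketch: keep the conversation valid even when the model
# requests a tool that does not exist.

def handle_tool_calls(messages, tool_calls, available_tools):
    """available_tools maps tool names to callables (illustrative)."""
    for call in tool_calls:
        name = call["function"]["name"]
        args = call["function"]["arguments"]
        func = available_tools.get(name)
        if func is None:
            # Before the fix, the client only printed "Tool x not found"
            # and appended no tool response; the next chat request then
            # contained an unanswered tool call and failed with HTTP 400.
            content = f"Tool {name} not found"
        else:
            content = str(func(**args))
        # Always append a tool response, even for a bad tool name.
        messages.append({"role": "tool", "tool_name": name, "content": content})
    return messages
```

With this shape, the model sees its own mistake in the transcript and can retry with a correct tool name on the next turn.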

Test with a prompt that invokes many tool calls, such as: "I need the weather conditions for each state capital in the United States."

@ParthSareen
Member


Thanks!

@ParthSareen ParthSareen changed the title resolve invalid tool usage status code 400 if llm makes a mistake examples: resolve invalid tool usage status code 400 if llm makes a mistake gpt-oss Sep 2, 2025
@ParthSareen ParthSareen merged commit 9f41447 into ollama:main Sep 2, 2025
2 checks passed