Tool calling with LiteLLM and thinking models fails #765
Can you please provide a full working script? Happy to take a look!
@rm-openai see below
- Introduced a new configuration file for permissions in `.claude/settings.local.json`.
- Enhanced `LitellmModel` to properly handle assistant messages with tool calls when reasoning is enabled, addressing issue openai#765.
- Added a new comprehensive test suite for LiteLLM thinking models to ensure the fix works across various models and scenarios.
- Tests include reproducing the original error, verifying successful tool calls, and checking the fix's applicability to different thinking models.
Enhances the fix for issue openai#765 to work universally with all LiteLLM thinking models that support function calling. Verified working:
- Anthropic Claude Sonnet 4 (partial fix: progress from "found text" to "found tool_use")
- OpenAI o4-mini (complete success: full tool calling with reasoning)

The fix now automatically applies when ModelSettings(reasoning=...) is used with any LiteLLM model, making it future-proof for new thinking models that support both reasoning and function calling.
… maintainability
- Cleaned up whitespace and formatting in the test suite for LiteLLM thinking models.
- Ensured consistent use of commas and spacing in function calls and assertions.
- Verified that the fix for issue openai#765 applies universally across all supported models.
- Enhanced documentation within the tests to clarify the purpose and expected outcomes.
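For context, the settings that trigger the fixed code path look roughly like this (a minimal sketch; the effort level is just an example):

```python
from agents import ModelSettings
from openai.types.shared import Reasoning

# The fix applies whenever reasoning settings are present on a
# LitellmModel-backed agent; "medium" here is an arbitrary choice.
settings = ModelSettings(reasoning=Reasoning(effort="medium"))
```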
Root Cause Analysis and Current Status

TLDR: Thinking with tool calling for Anthropic is broken in LiteLLM. I've investigated this issue thoroughly and determined the root cause is in LiteLLM, not the openai-agents-python SDK.

What's Actually Happening
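The failure can be sketched directly against LiteLLM, with no SDK involved, assuming the problem is LiteLLM's translation of assistant tool-call messages back to Anthropic's format when thinking is enabled (the model id, tool schema, and thinking budget below are illustrative):

```python
import litellm

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

# Turn 1: the model thinks, then emits a tool call.
first = litellm.completion(
    model="anthropic/claude-sonnet-4-20250514",  # placeholder model id
    messages=messages,
    tools=TOOLS,
    thinking={"type": "enabled", "budget_tokens": 1024},
    max_tokens=2048,
)
assistant = first.choices[0].message
messages.append(assistant.model_dump())
messages.append({
    "role": "tool",
    "tool_call_id": assistant.tool_calls[0].id,
    "content": "Sunny, 22C",
})

# Turn 2: sending the assistant tool-call message back is where the
# translation to Anthropic's expected format reportedly breaks.
second = litellm.completion(
    model="anthropic/claude-sonnet-4-20250514",
    messages=messages,
    tools=TOOLS,
    thinking={"type": "enabled", "budget_tokens": 1024},
    max_tokens=2048,
)
print(second.choices[0].message)
```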
Current Workarounds

Until LiteLLM fixes this upstream:
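The simplest option is to run the same agent without reasoning settings so thinking mode never engages. A sketch (the model id is a placeholder):

```python
from agents import Agent, ModelSettings
from agents.extensions.models.litellm_model import LitellmModel

# Workaround: leave reasoning unset so thinking mode is never enabled,
# which keeps tool calling on the Anthropic path working.
agent = Agent(
    name="Assistant",
    instructions="Use the tools to answer.",
    model=LitellmModel(model="anthropic/claude-sonnet-4-20250514"),
    model_settings=ModelSettings(),  # note: no reasoning=Reasoning(...)
)
```

Alternatively, use a non-thinking model, or a thinking model whose provider path in LiteLLM handles tool calls correctly (e.g. o4-mini, per the commit notes above).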
Related Issues
Why No Fix in This SDK

I initially created a PR with a workaround, but decided against it because:
Thanks! Another workaround could also be to use Anthropic through the OpenAI Responses API compatibility, no? Haven't tried it, but it should work.
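Untested, but that approach might look something like the sketch below: point the SDK's Responses model at a hypothetical OpenAI-compatible gateway (such as a LiteLLM proxy) that fronts Anthropic. The base URL, API key, and model name are all placeholders:

```python
from openai import AsyncOpenAI

from agents import Agent, OpenAIResponsesModel, Runner

# Hypothetical gateway exposing an OpenAI-compatible Responses
# endpoint in front of Anthropic (e.g. a LiteLLM proxy).
client = AsyncOpenAI(base_url="http://localhost:4000/v1", api_key="proxy-key")

agent = Agent(
    name="Assistant",
    instructions="Answer concisely.",
    model=OpenAIResponsesModel(model="claude-sonnet-4", openai_client=client),
)

result = Runner.run_sync(agent, "Hello!")
print(result.final_output)
```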
Describe the bug
When running the Agents SDK with tool calling and a thinking model through LiteLLM (e.g. Sonnet 4), I get this error:
Debug information
Repro steps
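A minimal sketch of a script that exercises this path, assuming a LiteLLM-routed Anthropic thinking model and a single tool (the model id, tool, and effort level are placeholders):

```python
import asyncio

from agents import Agent, ModelSettings, Runner, function_tool
from agents.extensions.models.litellm_model import LitellmModel
from openai.types.shared import Reasoning


@function_tool
def get_weather(city: str) -> str:
    """Return a canned weather report so the model has a tool to call."""
    return f"The weather in {city} is sunny."


async def main() -> None:
    agent = Agent(
        name="Assistant",
        instructions="Use the get_weather tool to answer weather questions.",
        # Placeholder model id; any LiteLLM-routed thinking model applies.
        model=LitellmModel(model="anthropic/claude-sonnet-4-20250514"),
        model_settings=ModelSettings(reasoning=Reasoning(effort="medium")),
        tools=[get_weather],
    )
    # The failure surfaces when the tool result is sent back to the model.
    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```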
Expected behavior
Everything works :)