
ValidationError from InputTokensDetails when using LitellmModel with `None` cached_tokens #760


Open
DanielHashmi opened this issue May 26, 2025 · 2 comments
Labels
bug Something isn't working

Comments


DanielHashmi commented May 26, 2025

Please read this first

  • Have you read the docs? (Agents SDK docs)
  • Yes
  • Have you searched for related issues? Others may have faced similar issues.
  • Yes

Describe the bug

When running the sample agent code using LitellmModel, the following validation error occurs during runtime:

pydantic_core._pydantic_core.ValidationError: 1 validation error for InputTokensDetails
cached_tokens
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/int_type
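The error can be reproduced outside the SDK with a minimal pydantic v2 model — a hypothetical stand-in for the SDK's InputTokensDetails, with the field name taken from the traceback:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical stand-in: a model whose cached_tokens field is a plain int,
# mirroring the strict int field reported in the traceback.
class InputTokensDetails(BaseModel):
    cached_tokens: int

try:
    # litellm can report cached_tokens as None, which a strict int field rejects.
    InputTokensDetails(cached_tokens=None)
except ValidationError as e:
    print(e.errors()[0]["type"])  # int_type
```

Passing `None` to a plain `int` field triggers exactly the `int_type` error shown above.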

Debug information

  • Agents SDK version: (e.g. v0.0.16)
  • Python version (e.g. Python 3.12.1)

Repro steps

  1. Use the following minimal code:
from __future__ import annotations
import asyncio
from agents import Agent, Runner, function_tool
from agents.extensions.models.litellm_model import LitellmModel

@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."

async def main(model: str, api_key: str):
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model=model, api_key=api_key),
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)

if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, required=False)
    parser.add_argument("--api-key", type=str, required=False)
    args = parser.parse_args()

    model = args.model or input("Enter a model name for Litellm: ")
    api_key = args.api_key or input("Enter an API key for Litellm: ")

    asyncio.run(main(model, api_key))
  2. Run the script and provide valid inputs for model and API key.
  3. Observe the traceback error.

Screenshots

[screenshot of the traceback omitted]

Expected behavior

[debug] getting weather for Tokyo
The weather in Tokyo is sunny.

@DanielHashmi DanielHashmi added the bug Something isn't working label May 26, 2025
@DavidN22
Copy link

In the litellm_model.py file there's a line of code like:

cached_tokens=getattr(response_usage.prompt_tokens_details, "cached_tokens", 0)

you can try changing it to

cached_tokens = int(getattr(response_usage.prompt_tokens_details, "cached_tokens", 0) or 0)

The structure requires an int, but None is being returned, and InputTokensDetails takes an int, not None. This is probably a bug, or maybe an older version of litellm; I ran into this as well.
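The reason the suggested change works: getattr's default is only used when the attribute is *missing*, not when it exists with the value `None`, so `or 0` (plus the int cast) is needed to normalize an explicit `None`. A minimal sketch with a hypothetical stand-in object:

```python
# Hypothetical stand-in for litellm's prompt_tokens_details object.
class PromptTokensDetails:
    cached_tokens = None  # attribute exists, but its value is None

details = PromptTokensDetails()

# getattr's default only applies when the attribute is MISSING:
original = getattr(details, "cached_tokens", 0)           # -> None, not 0
patched = int(getattr(details, "cached_tokens", 0) or 0)  # -> 0

print(original, patched)  # None 0
```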


DanielHashmi commented May 27, 2025

In the source code of the OpenAI Agents SDK, this line in litellm_model.py is already as you described; go check: https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/models/litellm_model.py

Look for this:
input_tokens_details=InputTokensDetails(
    cached_tokens=getattr(response_usage.prompt_tokens_details, "cached_tokens", 0) or 0
),

The only difference is that it's not type-casting the value to an int.

They have actually fixed it, but the latest changes aren't being installed — probably because the PyPI release hasn't been updated yet.
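Until a new release is published, one possible workaround (assuming the fix is on the main branch, as linked above) is installing the SDK straight from GitHub instead of PyPI:

```shell
# Install the SDK from the repository's main branch, overwriting any existing install.
pip install --upgrade "git+https://github.com/openai/openai-agents-python.git"
```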
