
[BUG] Input should be 'stop' but received 'STOP' #2

Closed
BrunoGeorgevich opened this issue Apr 6, 2025 · 5 comments
Labels
bug Something isn't working

Comments


BrunoGeorgevich commented Apr 6, 2025

I've sent a request to the gemini-2.0-flash-001 model and got the following error:

Traceback (most recent call last):
  File "B:\Documentos\Workspaces\FastLLM\main.py", line 35, in <module>
    resp = manager.process_batch(batch)
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\site-packages\fastllm\core.py", line 291, in process_batch
    return asyncio.run(self._process_batch_async(batch))
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\site-packages\fastllm\core.py", line 452, in _process_batch_async
    batch_results = await process_batch_chunk(client, batch_requests)
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\site-packages\fastllm\core.py", line 437, in process_batch_chunk
    results = await asyncio.gather(*batch_tasks)
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\site-packages\fastllm\core.py", line 428, in process_request_with_semaphore
    return await self._process_request_async(client, request, progress)
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\site-packages\fastllm\core.py", line 330, in _process_request_async
    response = await self.provider.make_request(
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\site-packages\fastllm\providers\openai.py", line 83, in make_request
    return ChatCompletion(**data)
  File "C:\Users\bruno\miniconda3\envs\fastllm\lib\site-packages\pydantic\main.py", line 253, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatCompletion
choices.0.finish_reason
  Input should be 'stop', 'length', 'tool_calls', 'content_filter' or 'function_call' [type=literal_error, input_value='STOP', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/literal_error

Code:

from fastllm import RequestBatch, RequestManager, OpenAIProvider
from dotenv import load_dotenv
import os

if __name__ == "__main__":
    load_dotenv()

    API_BASES = {
        "GLAMA": "https://glama.ai/api/gateway/openai/v1",
        "REQUESTY": "https://router.requesty.ai/v1",
    }

    API_KEYS = {
        "GLAMA": os.environ["GLAMA_API_KEY"],
        "REQUESTY": os.environ["REQUESTY_API_KEY"],
    }

    SELECTED_ROUTER = "GLAMA"

    provider = OpenAIProvider(
        api_key=API_KEYS[SELECTED_ROUTER],
        api_base=API_BASES[SELECTED_ROUTER],
    )

    manager = RequestManager(provider=provider)
    request_ids = []
    with RequestBatch() as batch:
        for _ in range(10):
            request_id = batch.chat.completions.create(
                model="gemini-2.0-flash-001",
                messages=[{"role": "user", "content": "Tell me a joke. The output must be a JSON with the schema {\"joke\": \"<joke>\"}"}],
            )
            request_ids.append(request_id)

        resp = manager.process_batch(batch)

        for request_id, response in zip(request_ids, resp):
            print(f"Request {request_id}: {response.response.choices[0].message.content}")

It seems that the expected value is 'stop', but I got 'STOP'. Maybe a .lower() would solve the problem.

The same code works with DeepSeek and OpenAI models.

Thank you very much for your work. The library is amazing!

agentfarmx bot commented Apr 6, 2025

👨🏻‍🌾 The agent is at work!
PR coming – reply to the agent in the PR.
(this will take 15 minutes max)

agentfarmx bot commented Apr 6, 2025

🐛 Bug Fix: Normalize Finish Reason Case in OpenAI Provider

After analyzing the issue where finish_reason returns "STOP" instead of the expected "stop", I've added case normalization to ensure compatibility with Pydantic validation. This change handles API responses that return uppercase finish reasons without altering the core functionality.

  • Fixed: Convert finish_reason to lowercase in OpenAI provider response handling
  • Testing: Verified with gemini-2.0-flash-001 model requests
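
For illustration, roughly what such a normalization could look like (a hypothetical helper, not fastllm's actual code), assuming data is the parsed response dict just before it is passed to ChatCompletion(**data):

# Illustrative sketch only -- not fastllm's actual implementation.
def normalize_finish_reasons(data: dict) -> dict:
    for choice in data.get("choices", []):
        reason = choice.get("finish_reason")
        if isinstance(reason, str):
            # Gemini-backed gateways may return "STOP"; the OpenAI schema expects "stop".
            choice["finish_reason"] = reason.lower()
    return data

# e.g. return ChatCompletion(**normalize_finish_reasons(data))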

Rexhaif (Owner) commented Apr 8, 2025

Hi Bruno,

Thank you for the issue and the suggested fix; I've also seen this problem in my own experiments.
I'll implement the fix as soon as possible.

Rexhaif (Owner) commented Apr 8, 2025

Looks like I've fixed it. Instead of calling .lower() on finish_reason, I've decided to relax response validation altogether, similar to how it's done in OpenAI's client.
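
As a rough sketch of what "relaxed" validation can look like with Pydantic v2 (illustrative models, not fastllm's actual ones): accept unknown fields and any finish_reason string rather than a fixed Literal, so values like "STOP" pass through.

from typing import List, Optional
from pydantic import BaseModel, ConfigDict

# Illustrative sketch only -- permissive response models in the spirit of
# OpenAI's own client, which tolerate extra fields and non-standard values.
class RelaxedChoice(BaseModel):
    model_config = ConfigDict(extra="allow")
    index: int
    finish_reason: Optional[str] = None  # plain str instead of a Literal, so "STOP" validates

class RelaxedChatCompletion(BaseModel):
    model_config = ConfigDict(extra="allow")
    id: str
    choices: List[RelaxedChoice]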

@BrunoGeorgevich thanks once again for reporting this error! Feel free to try the fixed version: pip install -U fastllm-kit==0.1.7

Rexhaif (Owner) commented Apr 9, 2025

I'm gonna close the issue as fixed right now, feel free to reopen if the problem persists!

@Rexhaif Rexhaif closed this as completed Apr 9, 2025
@Rexhaif Rexhaif added the bug Something isn't working label Apr 9, 2025