Error In check_for_completion when using Groq in litellm #224

@khushalcodiste

Description


When using the Groq LLM, the conversation itself works fine, but the following error is thrown in the logs:

2025-07-09 14:44:49.645 ERROR {contextual_conversational_agent} [check_for_completion] check_for_completion exception: Error code: 401 - {'error': {'message': 'Incorrect API key provided: gsk_YPb8********************************************. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

The error originates in this code: the completion check is hardwired to `OpenAiLLM`, so even when the main `llm` is Groq, the request is sent to the OpenAI API with the Groq key (`gsk_…`), which OpenAI rejects with a 401.

class StreamingContextualAgent(BaseAgent):
    def __init__(self, llm, **kwargs):
        super().__init__()
        self.llm = llm
        # Bug: always instantiates OpenAiLLM for check_for_completion.
        # The CHECK_FOR_COMPLETION_LLM env var only overrides the model
        # name, not the provider or API key, so a Groq model name is
        # still sent to the OpenAI endpoint.
        self.conversation_completion_llm = OpenAiLLM(
            model=os.getenv("CHECK_FOR_COMPLETION_LLM", llm.model), **kwargs
        )
        self.history = [{"content": ""}]
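One possible direction for a fix, sketched below under the assumption that models are named with litellm-style `provider/model` strings (e.g. `groq/llama-3.1-8b-instant`): default the completion-check model to the main conversation model, and infer the provider from the model string rather than hardcoding OpenAI. The function names here (`completion_check_model`, `provider_of`) are hypothetical, not part of the repository's API.

```python
import os

def completion_check_model(main_model: str) -> str:
    """Pick the model used for check_for_completion.

    Falls back to the *same* model as the main conversation, so a Groq
    session does not silently route the completion check to OpenAI.
    """
    return os.getenv("CHECK_FOR_COMPLETION_LLM", main_model)

def provider_of(model: str) -> str:
    """Infer the provider from a litellm-style "provider/model" string.

    Plain model names (no slash) are treated as OpenAI, mirroring
    litellm's default routing.
    """
    return model.split("/", 1)[0] if "/" in model else "openai"
```

With something like this, the agent could select the LLM wrapper that matches `provider_of(...)` instead of always constructing `OpenAiLLM`, so the Groq key is sent to the Groq endpoint.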
