allow to pass api key also for ollama #429
base: development
Conversation
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
…tensions" This reverts commit 3234d5f.
…ur via model flags.
@JuliaS92 please check if this works for you
This pull request refines API key handling and makes LLM integrations more flexible to configure. Key updates: API key input becomes optional for models that do not require one, API key management is generalized beyond OpenAI, and base URLs can be configured dynamically for specific models.
Improvements in API key handling:

- `alphastats/gui/utils/llm_helper.py`: Updated the `llm_config` function to make API key input optional for models that do not require it. The prompt now dynamically reflects whether the API key is mandatory.
- `alphastats/gui/utils/llm_helper.py`: Generalized API key management in the `set_api_key` function by removing references to "OpenAI" and aligning the logic to support multiple providers. Updated error and info messages accordingly.

Enhancements in LLM integration:

- `alphastats/llm/llm_integration.py`: Modified the initialization of the `OpenAI` client to use the provided `api_key` dynamically instead of hardcoding it for specific models. This change improves flexibility for different LLM configurations.