It is so difficult to set up the AI Provider #1368
Yes please. I have been trying to integrate DeepSeek with Ollama for the past 4 days, and it still says "Failed to create task, unable to deploy." Even though I am not a tech-savvy person, I am still trying, yet failing miserably.
Hey @ravenizzed, @realcarlos, could you guys share your thoughts with us? What kind of option would work better for you—setting up an independent page on the UI, a more dedicated document, or something else? Also, could you share which part of the configuration setup is the most challenging for you? Your feedback will really help us improve the setup flow. Thanks a lot!
Hi @paopa, thank you for your prompt reply. Usually we choose "custom" when initializing the setup, and there is no visual UI for us to test whether the LLM and Embedder config works. I think an AI settings page in the "Settings" section would help: just replace config.yaml with a visual web page and add a "Test connection" button. If you want to make it even better, you could refer to the model configuration module of Dify and add all the existing AI inference services. WrenAI looks good; I hope I can experience it locally soon. Thank you very much.
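For the "Test connection" idea, a minimal sketch of what such a check could do against an Ollama-hosted custom model, assuming Ollama's documented REST API on its default port (the endpoint, model name, and helper names here are illustrative, not WrenAI internals):

```python
"""Sketch of a "test connection" check for an Ollama backend.

Assumptions: Ollama's REST API is reachable at its default address,
and /api/tags answers with the installed model list when the server
is up. Model and function names are hypothetical examples.
"""
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default address


def build_generate_payload(model: str, prompt: str) -> dict:
    # Request body shape for Ollama's /api/generate endpoint,
    # with streaming disabled so we get a single JSON response.
    return {"model": model, "prompt": prompt, "stream": False}


def test_connection(base: str = OLLAMA_BASE) -> bool:
    # A reachable Ollama server answers GET /api/tags with HTTP 200;
    # any connection error is reported as "not reachable".
    try:
        with urllib.request.urlopen(f"{base}/api/tags", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


if __name__ == "__main__":
    print(json.dumps(build_generate_payload("phi4", "ping")))
    print("Ollama reachable:", test_connection())
```

A "Test connection" button in a settings page would only need to surface the boolean from something like `test_connection()` instead of failing later with "Failed to create task".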
Well, @paopa, let me share my recent experience with WrenAI.
The point is, the GUI we have is fair enough for a starter, but the user needs to be kept updated while connecting, because if the connection works, then it's just the data we have to worry about. I hope this minor step would change so much, because you already have this with the OpenAI setup. And thank you for the quick response.
@paopa @qdrddr In .env there are some key strings, e.g. LLM_OPENAI_API_KEY=, and in config.yaml there is type: embedder. The docs say to define OPENAI_API_KEY=<api_key> in ~/.wrenai/.env if you are using an OpenAI embedding model, and to refer to the LiteLLM documentation for more details: https://docs.litellm.ai/docs/providers. I wonder which key is the right one.
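Until a maintainer confirms which variable is actually read, one workaround is to set both names to the same key in `~/.wrenai/.env`. This is an assumption (the two names may be read by different components or one may be legacy), not documented behavior:

```env
# ~/.wrenai/.env — assumption: set both names to the same key
# until it is confirmed which one WrenAI actually reads.
OPENAI_API_KEY=<api_key>
LLM_OPENAI_API_KEY=<api_key>
```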
Hi @ravenizzed and @realcarlos, thanks so much for your feedback! We'll discuss in the team how to make the config step even easier. If you have any more thoughts on the config setup, please keep commenting on this issue. It'll really help us make it easier!
@Nikita23526 I can share my env, config, and Docker file if you want.
@ravenizzed yes please |
@Nikita23526, why don't you share your files (env, config, and Docker) so the devs can help you out as well? I will resume with Gemini in a couple of days, trying to set up the DB via PostgreSQL to Redshift. So I can guide you as best I can with the limited knowledge I have.
@ravenizzed I have used Ollama Mistral, but I was unable to understand whether I need to put in a Mistral API key.
2025-03-19 11:27:32 wren-ai-service-1 | File "/src/globals.py", line 49, in create_service_container
@Nikita23526, well, the first thing I see is the embeddings. Try using phi4 as stated in the config logs and check if that works. Also try using the nomic embedder.
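To illustrate switching the embedder to nomic on an Ollama backend, here is a rough sketch of a config.yaml embedder section. This is a shape sketch only: the exact field names and provider identifiers vary between WrenAI releases, so check the example config shipped with your version before copying it:

```yaml
# Sketch only — field names may differ in your WrenAI release.
type: embedder
provider: litellm_embedder
models:
  # Assumes `ollama pull nomic-embed-text` was run first and the
  # container can reach the host's Ollama server.
  - model: openai/nomic-embed-text
    api_base: http://host.docker.internal:11434/v1
    timeout: 120
```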
I think we need a settings page to easily test whether the LLM and Embedder work. Right now it is so weird: it always shows "Failed to create ask".