Enhancement: Select Vision Model from Client or Config file for Custom Endpoint #1634
Comments
Thanks for your report. I will have to test to be sure, but this is likely because gpt-4-vision is being prioritized regardless of your selecting Gemini, since the custom endpoint uses OpenAI specs, on top of possibly some other incompatibility. I'm using this issue to track the core problem: users would benefit from being able to select the vision model outright.
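For illustration, here is a minimal sketch of how this could surface in `librechat.yaml`. The custom-endpoint fields match the existing config format, but the `visionModel` key is only an assumed name for the proposed feature, not an existing option:

```yaml
# Sketch only: `visionModel` is a hypothetical key illustrating the
# proposed enhancement; it does not exist in LibreChat today.
version: 1.0.0
endpoints:
  custom:
    - name: "OpenRouter"
      apiKey: "${OPENROUTER_KEY}"
      baseURL: "https://openrouter.ai/api/v1"
      models:
        default: ["google/gemini-pro-vision"]
        fetch: true
      # Proposed: pin which model handles image attachments,
      # instead of always falling back to gpt-4-vision.
      visionModel: "google/gemini-pro-vision"
```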
For now, I also recommend using the Google endpoint, as vision is fully supported for Gemini there. If you are region-locked, you could use a VPN to access it. https://docs.librechat.ai/install/configuration/ai_setup.html#generative-language-api-gemini
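As a rough sketch, enabling the Google endpoint comes down to setting the Gemini API key in `.env`, assuming `GOOGLE_KEY` is the variable described in the linked docs:

```sh
# .env — minimal sketch; replace the placeholder with your actual
# Generative Language API (Gemini) key from Google AI Studio.
GOOGLE_KEY=your_gemini_api_key
```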
As a workaround, I'm using the Google endpoint with a VPN, and it's working there.
What happened?
Hello everyone,
I have connected the gemini-pro-vision model via openrouter.ai, but I always get the following error message within LibreChat. I've tested it with different images and file types (PNG, JPG, ...).
Am I doing something wrong, or do I need to set an option?
Thanks for your help!
Steps to Reproduce
What browsers are you seeing the problem on?
Firefox
Relevant log output
Screenshots