
Enhancement: Select Vision Model from Client or Config file for Custom Endpoint #1634

Open
dannykorpan opened this issue Jan 25, 2024 · 3 comments
dannykorpan commented Jan 25, 2024

What happened?

Hello everyone,

I have connected the gemini-pro-vision model via openrouter.ai, but I always get the following error message in LibreChat. I've tested it with different images and file types (PNG, JPG, ...).

Something went wrong. Here's the specific error message we encountered: Error: { "error": { "code": 400, "message": "Provided image is not valid.", "status": "INVALID_ARGUMENT" } } 

Am I doing something wrong, do I need to set an option?

Thanks for your help!

Steps to Reproduce

  1. Create openrouter.ai account
  2. Get API key
  3. Edit librechat.yaml
  4. Restart the Docker container
  5. Test gemini-pro-vision
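For step 3, a minimal sketch of a custom endpoint entry for OpenRouter in `librechat.yaml` might look like the following. The field names follow LibreChat's custom-endpoint schema as I understand it; the model list and environment variable name are illustrative, so verify them against the LibreChat configuration docs:

```yaml
# Hypothetical librechat.yaml fragment for an OpenRouter custom endpoint.
# Verify field names and the config version against the LibreChat docs.
endpoints:
  custom:
    - name: "OpenRouter"
      apiKey: "${OPENROUTER_KEY}"          # set OPENROUTER_KEY in .env
      baseURL: "https://openrouter.ai/api/v1"
      models:
        default: ["google/gemini-pro-vision"]
        fetch: true                        # also fetch the live model list
      titleConvo: true
```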

What browsers are you seeing the problem on?

Firefox

Relevant log output

2024-01-25 14:17:52 error: [MeiliMongooseModel.findOneAndUpdate] Convo not found in MeiliSearch and will index b716275d-6e10-4a5b-a4ff-8a5a7a7b20d0 Document `b716275d-6e10-4a5b-a4ff-8a5a7a7b20d0` not found.
2024-01-25 14:17:54 warn: [OpenAIClient.chatCompletion][stream] API error
2024-01-25 14:17:54 warn: [OpenAIClient.chatCompletion][finalChatCompletion] API error
2024-01-25 14:17:54 error: [OpenAIClient.chatCompletion] Unhandled error type Error: {
  "error": {
    "code": 400,
    "message": "Provided image is not valid.",
    "status": "INVALID_ARGUMENT"
  }
}

2024-01-25 14:17:54 error: [handleAbortError] AI response error; aborting request: Error: {
  "error": {
    "code": 400,
    "message": "Provided image is not valid.",
    "status": "INVALID_ARGUMENT"
  }
}
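To check whether the image encoding itself is at fault, one can build the OpenAI-compatible multimodal payload by hand and POST it to OpenRouter directly with any HTTP client. The sketch below only constructs the payload; the data-URL encoding shown is the format OpenAI-compatible vision endpoints generally expect, and whether OpenRouter forwards it unchanged to Gemini is an assumption:

```python
# Sketch: build (not send) an OpenAI-style vision request payload,
# to isolate the "Provided image is not valid" error from LibreChat itself.
import base64
import json


def build_vision_payload(image_bytes: bytes, prompt: str,
                         mime: str = "image/png") -> dict:
    """Encode image bytes as a base64 data URL inside a chat message."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "google/gemini-pro-vision",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:{mime};base64,{b64}"},
                    },
                ],
            }
        ],
    }


if __name__ == "__main__":
    payload = build_vision_payload(b"\x89PNG...", "Describe this image.")
    print(json.dumps(payload, indent=2))
```

If this payload succeeds against `https://openrouter.ai/api/v1/chat/completions` but fails via LibreChat, the problem lies in how LibreChat formats the image for the custom endpoint.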

Screenshots

(Screenshot: the error message as shown in LibreChat)

Code of Conduct

  • I agree to follow this project's Code of Conduct
@dannykorpan dannykorpan added the bug Something isn't working label Jan 25, 2024
@danny-avila danny-avila removed the bug Something isn't working label Jan 25, 2024
@danny-avila danny-avila changed the title [Bug]: Error while using openrouter/gemini-pro-vision Enhancement: Select Vision Model from Client or Config file for Custom Endpoint Jan 25, 2024

danny-avila commented Jan 25, 2024

Thanks for your report. I will have to test to be sure, but this is likely because gpt-4-vision is prioritized regardless of your selecting Gemini, since the custom endpoint uses OpenAI specs, on top of possibly some other incompatibility.

I'm using this issue to track the underlying problem: users would benefit from being able to select the vision model outright.

danny-avila (Owner) commented

For now, I also recommend using the Google endpoint, as vision is fully supported for Gemini there. If you are region-locked, you could use a VPN to access it. https://docs.librechat.ai/install/configuration/ai_setup.html#generative-language-api-gemini
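The Google endpoint mentioned above is configured via an API key in LibreChat's `.env` file. A sketch, assuming the variable name used in the linked setup docs:

```
# .env fragment — hypothetical; verify the variable name against the linked docs
GOOGLE_KEY=your-gemini-api-key
```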


dannykorpan commented Jan 25, 2024

As a workaround I'm using the Google endpoint with a VPN; that works.
