
How to Use a Self-Hosted Model with Custom Headers #1535

Open
mio4kon opened this issue Feb 14, 2025 · 3 comments

mio4kon commented Feb 14, 2025

Feature request

Our company provides an API similar to OpenAI's ({host}/v1/chat/completions), but it requires custom headers in the request.

Does pr-agent support this?

(screenshots showing the required custom headers)
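To make the request shape concrete: the internal endpoint expects an ordinary OpenAI-style chat-completions call plus extra headers. A minimal sketch, assuming a hypothetical internal host and hypothetical header names (`X-Api-Key`, `X-Request-Source` are placeholders, not the real ones):

```python
# Build (but do not send) an OpenAI-style chat-completions request
# carrying custom headers. Host and header names are hypothetical.
import json
import urllib.request

HOST = "https://llm.internal.example.com"  # hypothetical internal host
CUSTOM_HEADERS = {
    "X-Api-Key": "internal-token",       # hypothetical auth header
    "X-Request-Source": "pr-agent",      # hypothetical routing header
}

body = json.dumps({
    "model": "internal-chat",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode()

req = urllib.request.Request(
    f"{HOST}/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json", **CUSTOM_HEADERS},
    method="POST",
)
# The request is ready to send with urllib.request.urlopen(req);
# we only construct it here to show the required shape.
```

The question is whether pr-agent can attach such headers through its existing configuration.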

Motivation

Use a Self-Hosted Model

mio4kon added the feature 💡 label Feb 14, 2025
mrT23 (Collaborator) commented Feb 14, 2025

We support all the different models and APIs via litellm.

Are you able to utilize these custom headers via litellm? If so, share a link and we can check whether it can be supported.
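For reference, litellm's `completion()` does accept an `extra_headers` dict that is forwarded with the outgoing HTTP request. A minimal sketch, assuming a hypothetical internal OpenAI-compatible endpoint (host, model name, token, and header names below are placeholders):

```python
# Hedged sketch: pass custom headers to litellm via `extra_headers`.
# All endpoint details and header names here are hypothetical.
kwargs = {
    "model": "openai/internal-chat",  # "openai/" prefix routes to an OpenAI-compatible API
    "api_base": "https://llm.internal.example.com/v1",  # hypothetical internal host
    "api_key": "internal-token",
    "extra_headers": {                # the custom headers in question
        "X-Api-Key": "internal-token",
        "X-Request-Source": "pr-agent",
    },
    "messages": [{"role": "user", "content": "Hello"}],
}

try:
    import litellm  # optional here, so the sketch runs without it installed
except ImportError:
    litellm = None

# With litellm installed, the actual call would be:
# response = litellm.completion(**kwargs)
```

This covers the headers; whether the non-standard response format can also be handled is a separate question.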

mio4kon (Author) commented Feb 18, 2025

@mrT23
Perhaps my explanation wasn't very clear.

Our AI provider is internal to the company, and they provide us with an API. This API is quite different from the standard OpenAI API: it requires extra headers, and the return format is also different.

I noticed that the project uses litellm at its core to call large models.
I would like to know whether litellm supports this custom calling convention, or whether we need to deploy our own proxy service to handle the protocol translation.
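If the response format really is non-standard, the proxy route usually comes down to a small adapter that maps the internal response onto the OpenAI chat-completion shape. A hypothetical sketch, assuming the internal service returns `{"answer": ..., "tokens_used": ...}` (field names invented for illustration):

```python
# Hypothetical adapter for a protocol-translation proxy: map an internal
# response format onto the OpenAI chat-completion shape. The internal
# field names ("answer", "tokens_used") are made up for illustration.
import time
import uuid

def to_openai_format(internal: dict, model: str = "internal-chat") -> dict:
    """Translate a hypothetical internal response into OpenAI's shape."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": internal["answer"]},
            "finish_reason": "stop",
        }],
        "usage": {"total_tokens": internal.get("tokens_used", 0)},
    }
```

A thin proxy built around such an adapter (e.g. a small web app exposing `/v1/chat/completions`) would let litellm, and therefore pr-agent, treat the internal service as a plain OpenAI endpoint.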

mrT23 (Collaborator) commented Feb 25, 2025

Please review #1564.
