Request New Models/Provider/Endpoints - Jan 2026 #18642
Replies: 5 comments 4 replies
-
Codex CLI now uses this endpoint
-
#19547 submitted
-
Provider/Model name: zenmux.ai
-
Provider/Model name: Intel OpenVINO (specifically supporting OpenVINO Model Server, ovms)
Your use case: Our team is migrating to Small Language Models (SLMs) hosted on Intel Xeon-based EC2 instances (M7i/z1d families). We use the OpenVINO framework to compress these models for high-performance CPU inference, achieving low latency without the need for expensive GPUs. Currently, we use the LiteLLM Proxy for all our LLM routing, cost tracking, and unified API access. However, because LiteLLM lacks a native OpenVINO adapter, we cannot easily route traffic to our self-hosted OpenVINO endpoints through the same proxy.
Why this is valuable:
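Since OpenVINO Model Server exposes an OpenAI-compatible chat completions API, one possible stopgap until a native adapter exists is routing it through LiteLLM's generic OpenAI-compatible passthrough. A minimal sketch of a proxy config entry, assuming a hypothetical OVMS instance at `ovms.internal:8000` and a placeholder model name (neither appears in the original request; the `/v3` base path should be verified against your OVMS deployment):

```yaml
# Hypothetical LiteLLM proxy config.yaml entry; host, port, and model
# names are placeholders, not part of the original request.
model_list:
  - model_name: phi-3-mini-openvino          # alias that clients request via the proxy
    litellm_params:
      model: openai/phi-3-mini               # generic OpenAI-compatible route
      api_base: http://ovms.internal:8000/v3 # OVMS serves its OpenAI-style API under /v3
      api_key: "none"                        # self-hosted OVMS needs no key by default
```

A native adapter would still be preferable for cost tracking and model metadata, which is the gap this request describes.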
-
Request support for xAI models: grok-4.20-multi-agent-beta-0309, grok-4.20-beta-0309-reasoning, grok-4.20-beta-0309-non-reasoning
Provider/Model name: xAI / grok-4.20-multi-agent-beta-0309, xAI / grok-4.20-beta-0309-reasoning, xAI / grok-4.20-beta-0309-non-reasoning
-
Request New Models, Providers & Endpoints - Jan 2026
This is the central place for the LiteLLM community to request and vote on new integrations.
How to Request
Before posting, search existing requests to avoid duplicates.
To request a new integration:
- Comment below with the provider/model name and your use case
- Upvote existing requests you want to see added
Prioritization
Requests are prioritized based on:
Already Supported
Check our providers documentation first: LiteLLM currently supports 100+ providers and models across OpenAI, Anthropic, Azure, AWS Bedrock, Vertex AI, and more.
For urgent production needs, contact us directly at support@berri.ai