Add OpenRouter as an LLM service provider #6484
base: main
Conversation
- Removed the (very long) model list from OpenRouter.
- Added support for icon detection for OpenRouter models.
@hyc1230 is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

This change introduces comprehensive support for the OpenRouter provider throughout the application. It adds OpenRouter to environment configuration, client and server APIs, UI settings, localization files, and documentation. New logic is implemented for API routing, authentication, model listing, and chat interactions specific to OpenRouter, with corresponding updates to environment variables and user interface components.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI_Settings
    participant AccessStore
    participant ClientApi
    participant OpenRouterApi
    participant ServerAPI
    participant OpenRouterBackend
    User->>UI_Settings: Selects OpenRouter provider, enters API key and endpoint
    UI_Settings->>AccessStore: Stores OpenRouter credentials
    User->>ClientApi: Initiates chat request (OpenRouter)
    ClientApi->>OpenRouterApi: Calls chat() with message
    OpenRouterApi->>ServerAPI: Sends proxied request with credentials
    ServerAPI->>OpenRouterBackend: Forwards request, handles auth and model restrictions
    OpenRouterBackend-->>ServerAPI: Returns model response (streaming/non-streaming)
    ServerAPI-->>OpenRouterApi: Returns processed response
    OpenRouterApi-->>ClientApi: Delivers chat result
    ClientApi-->>User: Displays model response
```
Actionable comments posted: 2
🧹 Nitpick comments (6)
app/api/openrouter.ts (2)
44-47: Stale comment & path handling clarity

The comment still references "alibaba", which can confuse future maintainers. Suggest updating to reflect OpenRouter or removing entirely.
69-79: Forward original headers for better compatibility

Only `Content-Type` and `Authorization` are relayed. Many OpenRouter models expect `X-Title`, `HTTP-Referer`, or other custom headers. Forwarding all client-supplied headers (except ones you intentionally strip) improves compatibility:

```diff
- headers: {
-   "Content-Type": "application/json",
-   Authorization: req.headers.get("Authorization") ?? "",
- },
+ headers: Object.fromEntries(req.headers),
```

app/constant.ts (1)
852-856: Provider sort index inconsistent with client implementation

`DEFAULT_MODELS` adds OpenRouter with `sorted: 15`, while `app/client/platforms/openrouter.ts` returns models with `sorted: 14`. This discrepancy can break deterministic ordering in UI lists. Pick one value (preferably `15` to stay monotonic after SiliconFlow's `14`) and use it consistently.

```diff
- sorted: 14,
+ sorted: 15,
```

app/client/platforms/openrouter.ts (3)
1-3: Out-of-date header comment

The opening comment still mentions "azure and openai". Consider updating to avoid confusion.

```diff
-// azure and openai, using same models. so using same LLMApi.
+// Client implementation for OpenRouter LLM provider.
```
71-73: Log messages reference the wrong provider

Both logs label the payload as "openai", which can mislead when debugging.

```diff
- console.log("[Request] openai payload: ", requestPayload);
+ console.log("[Request] openrouter payload: ", requestPayload);
```

Also applies to: 120-121
126-134: Required OpenRouter headers not sent

OpenRouter's docs require a `Referer` (or `HTTP-Referer`) header identifying the calling site and often an `X-Title`. Consider enriching `getHeaders()` for the OpenRouter branch or injecting here:

```ts
headers: {
  ...getHeaders(),
  Referer: location.origin,
  "X-Title": "NextChat",
},
```
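Both header nitpicks come down to the same proxy question: which client headers should be relayed. As a hedged sketch (the helper name and the hop-by-hop blocklist are my own, not taken from the PR), a forward-everything-except approach could look like:

```typescript
// Illustrative helper, not from the PR: forward all client-supplied
// headers except hop-by-hop / connection-specific ones that a proxy
// must not relay. The exact blocklist here is an assumption.
const HOP_BY_HOP = new Set([
  "connection",
  "keep-alive",
  "transfer-encoding",
  "upgrade",
  "host",
  "content-length",
]);

function forwardableHeaders(incoming: Headers): Record<string, string> {
  const out: Record<string, string> = {};
  incoming.forEach((value, key) => {
    if (!HOP_BY_HOP.has(key.toLowerCase())) {
      out[key] = value; // Headers iteration yields lowercase keys
    }
  });
  return out;
}
```

With such a helper, the route's fetch options could use `headers: forwardableHeaders(req.headers)` instead of hand-picking `Content-Type` and `Authorization`, letting `X-Title` and `HTTP-Referer` pass through.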
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)

- .env.template (1 hunks)
- README.md (2 hunks)
- README_CN.md (2 hunks)
- app/api/[provider]/[...path]/route.ts (2 hunks)
- app/api/auth.ts (1 hunks)
- app/api/openrouter.ts (1 hunks)
- app/client/api.ts (7 hunks)
- app/client/platforms/openrouter.ts (1 hunks)
- app/components/emoji.tsx (2 hunks)
- app/components/settings.tsx (3 hunks)
- app/config/server.ts (3 hunks)
- app/constant.ts (7 hunks)
- app/locales/cn.ts (1 hunks)
- app/locales/da.ts (1 hunks)
- app/locales/en.ts (1 hunks)
- app/store/access.ts (5 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
app/store/access.ts (1)
- app/constant.ts (1): `OPENROUTER_BASE_URL` (39-39)

app/client/api.ts (1)
- app/client/platforms/openrouter.ts (1): `OpenRouterApi` (43-287)

app/components/settings.tsx (2)
- app/components/ui-lib.tsx (2): `ListItem` (54-88), `PasswordInput` (272-295)
- app/constant.ts (1): `OpenRouter` (274-278)

app/api/openrouter.ts (4)
- app/config/server.ts (1): `getServerSideConfig` (132-278)
- app/api/auth.ts (1): `auth` (27-132)
- app/constant.ts (1): `OPENROUTER_BASE_URL` (39-39)
- app/utils/model.ts (1): `isModelNotavailableInServer` (230-258)
🪛 LanguageTool
README.md
[uncategorized] ~315-~315: You might be missing the article "the" here.
Context: ...ow `qwen/qwen3-32b:free(OpenRouter)` in model list. For Azure: use `modelName@Azure=...
(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🔇 Additional comments (28)
.env.template (1)
84-89: Correctly implemented OpenRouter environment variables

The new environment variables for OpenRouter integration are properly documented and follow the existing pattern in the file. Both `OPENROUTER_API_KEY` and `OPENROUTER_URL` are marked as optional with clear comments.

README.md (2)
314-316: Well-documented OpenRouter model configuration

The documentation clearly explains how to manually configure models from OpenRouter using the `CUSTOM_MODELS` environment variable. The provided example with `+qwen/qwen3-32b:free@OpenRouter` is helpful for users to understand the syntax.

367-374: Properly documented environment variables

The new environment variables for OpenRouter are well-documented following the same format as other providers. This ensures users have the necessary information to configure the OpenRouter integration.
app/api/auth.ts (1)
107-109: Correctly implemented OpenRouter authentication

The implementation for OpenRouter authentication follows the established pattern in the file. The switch case properly assigns the OpenRouter API key from server configuration when no user API key is provided.
app/api/[provider]/[...path]/route.ts (2)
18-18: Properly imported OpenRouter handler

The OpenRouter handler import follows the same pattern as other provider handlers.

54-55: Correctly implemented OpenRouter routing

The OpenRouter API path case is properly integrated into the routing switch statement, following the established pattern for other providers.
app/locales/cn.ts (1)
510-520: Clean implementation of OpenRouter localization strings

The addition of OpenRouter configuration strings follows the same structure used for other providers, with clear and consistent translations for API key and endpoint settings.
app/locales/en.ts (1)
494-504: Good addition of English translations for OpenRouter

The implementation correctly adds English localization strings for OpenRouter configuration options, maintaining consistency with other provider settings.
app/components/emoji.tsx (4)
60-61: Support for OpenAI model prefixes expanded

The addition of "o3" model prefix recognition and the "openai/" namespace pattern improves icon handling for various OpenAI models, including those that might be accessed through OpenRouter.

64-77: Enhanced model namespace detection for Google and Anthropic models

Restructured conditions to recognize both direct model names and namespace-prefixed patterns (like "google/gemini" and "anthropic/claude"), which improves icon assignment for models accessed through aggregators like OpenRouter.

81-85: Improved Mistral AI model detection

Added support for the "mistralai/" namespace prefix, ensuring proper icon assignment for Mistral models that may be accessed through OpenRouter.

95-98: Enhanced Grok model detection

Extended the condition to recognize the "x-ai/grok" namespace pattern, maintaining consistent icon handling across different provider naming conventions.
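The four checks above all amount to prefix matching on both bare and namespace-prefixed model ids. A minimal sketch of the idea (the function name and return keys are hypothetical, not the actual `emoji.tsx` code):

```typescript
// Hypothetical sketch of namespace-aware icon detection: bare model
// names and aggregator-prefixed ids (as used by OpenRouter) map to the
// same icon key. Names and keys here are illustrative only.
function iconForModel(model: string): string {
  const m = model.toLowerCase();
  if (m.startsWith("gpt") || m.startsWith("o3") || m.startsWith("openai/")) {
    return "openai";
  }
  if (m.startsWith("gemini") || m.startsWith("google/gemini")) {
    return "gemini";
  }
  if (m.startsWith("claude") || m.startsWith("anthropic/claude")) {
    return "claude";
  }
  if (m.startsWith("mistral") || m.startsWith("mistralai/")) {
    return "mistral";
  }
  if (m.startsWith("grok") || m.startsWith("x-ai/grok")) {
    return "grok";
  }
  return "default";
}
```

The design point is that each provider gets one branch covering both naming conventions, so new aggregator prefixes only touch a single condition.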
README_CN.md (2)
235-237: Clear documentation for OpenRouter model configuration

The instructions clearly explain that OpenRouter models need to be manually configured using the suggested syntax and provide a helpful example that demonstrates how to add an OpenRouter model to the system.

281-288: Well-documented environment variables for OpenRouter

The addition of `OPENROUTER_API_KEY` and `OPENROUTER_URL` environment variables follows the same documentation pattern as other providers, making it clear for users how to configure OpenRouter integration.

app/components/settings.tsx (3)
78-78: Clean import addition for OpenRouter

The import of `OpenRouter` from constants is properly added alongside other service provider imports.

1363-1402: Well-structured OpenRouter configuration component

The implementation follows the same pattern as other provider configurations, with proper endpoint URL and API key inputs. The component correctly binds to the access store state and includes appropriate localization for labels and placeholders.

1866-1866: Proper integration of the OpenRouter component

The component is correctly included in the conditional rendering logic alongside other provider-specific components.
app/locales/da.ts (1)
470-480: Complete and consistent Danish translations for OpenRouter

The localization entries for OpenRouter follow the same structure and naming conventions as other provider translations, providing all necessary text for the UI components.
app/store/access.ts (5)
20-20: Proper import for OpenRouter base URL

The OpenRouter base URL constant import is correctly added to the list of imports.

63-63: Correct implementation of default OpenRouter URL

The default URL is properly set based on whether the application is running in app mode or web mode, consistent with other provider implementations.

138-140: Well-structured state properties for OpenRouter

The OpenRouter URL and API key properties are properly added to the default state, following the same pattern as other service providers.

229-231: Clean validation method for OpenRouter

The `isValidOpenRouter()` method correctly validates the presence of the OpenRouter API key, using the same pattern as other provider validation methods.

252-252: Proper integration with authorization logic

The OpenRouter validation is correctly added to the `isAuthorized()` method, ensuring OpenRouter credentials are properly checked during authorization.

app/config/server.ts (3)
91-94: Well-defined environment variables for OpenRouter

The OpenRouter URL and API key environment variables are properly declared in the ProcessEnv interface, following the same pattern as other providers.

170-170: Correct flag for detecting OpenRouter configuration

The isOpenRouter flag properly detects if OpenRouter is configured based on the presence of an API key, consistent with the approach used for other providers.

254-257: Complete server-side configuration for OpenRouter

The OpenRouter configuration is properly added to the server-side config object, with correct handling of the API key through the getApiKey helper function for key rotation/selection.
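The key rotation/selection mentioned above can be sketched as picking one key at random from a comma-separated env value. This is an illustrative sketch only; NextChat's actual `getApiKey` helper may differ in detail:

```typescript
// Illustrative key-rotation sketch (assumption, not the real helper):
// an env var may hold several API keys separated by commas; pick one
// at random per request so load spreads across keys.
function pickApiKey(raw: string | undefined): string | undefined {
  if (!raw) return undefined;
  const keys = raw
    .split(",")
    .map((k) => k.trim())
    .filter(Boolean);
  if (keys.length === 0) return undefined;
  return keys[Math.floor(Math.random() * keys.length)];
}
```

For example, `OPENROUTER_API_KEY="key-a,key-b"` would resolve to either key on any given request.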
app/client/api.ts (2)
177-180: OpenRouter provider wiring looks good

The new `case ModelProvider.OpenRouter` correctly instantiates `OpenRouterApi`, and the preceding `break` prevents fall-through.

272-274: `isOpenRouter` flag is calculated but never used

`isOpenRouter` is returned from `getConfig()` and destructured later, yet nothing in the remainder of `getHeaders` (or the surrounding file) references it. Leaving unused variables tends to trigger linter warnings and adds cognitive load. If no follow-up code will consume this flag, consider deleting it:

```diff
- const isOpenRouter =
-   modelConfig.providerName === ServiceProvider.OpenRouter;
  ...
- isOpenRouter,
```
Also applies to: 315-316, 345-345
```ts
headers: {
  "Content-Type": "application/json",
  Authorization: req.headers.get("Authorization") ?? "",
},
method: req.method,
body: req.body,
redirect: "manual",
// @ts-ignore
duplex: "half",
signal: controller.signal,
};
```
Body stream consumed twice – may forward an empty body

- `fetchOptions.body` is first initialised with `req.body` (a ReadableStream).
- Later `await req.text()` is invoked to inspect/clone the payload, which consumes that stream.
- If `serverConfig.customModels` is falsy, the clonedBody replacement never occurs and the already-consumed stream is forwarded, resulting in an empty request body at OpenRouter.
```diff
- const fetchOptions: RequestInit = {
-   ...
-   body: req.body,
+ const rawBody = await req.text(); // consume once
+ const fetchOptions: RequestInit = {
+   ...
+   body: rawBody.length ? rawBody : undefined,
  ...
- if (serverConfig.customModels && req.body) {
+ if (serverConfig.customModels && rawBody.length) {
  ...
- const clonedBody = await req.text();
- fetchOptions.body = clonedBody;
+ const jsonBody = JSON.parse(rawBody) as { model?: string };
```
This guarantees the body is read a single time and forwarded intact.
Also applies to: 82-110
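The read-once pattern can be sketched as a small helper. This is a hedged illustration (function and parameter names are mine, not the actual route handler): the body is consumed exactly once as text, optionally inspected as JSON, and the same string is forwarded.

```typescript
// Illustrative read-once body handling for a proxy route. Consuming
// req.text() once avoids the double-read bug: the resulting string can
// be both inspected (for model restrictions) and forwarded intact.
async function prepareForwardBody(
  req: Request,
  customModels: string | undefined,
): Promise<{ body: string | undefined; model?: string }> {
  const rawBody = await req.text(); // the stream is consumed exactly once
  if (!rawBody.length) return { body: undefined };
  if (!customModels) return { body: rawBody };
  try {
    const parsed = JSON.parse(rawBody) as { model?: string };
    return { body: rawBody, model: parsed.model };
  } catch {
    return { body: rawBody };
  }
}
```

The forwarded `body` is always the same string that was read, so the upstream request can never see a half-consumed stream.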
Actionable comments posted: 0
🧹 Nitpick comments (5)
app/client/platforms/openrouter.ts (5)
182-183: Consider improving type safety instead of using @ts-ignore

There are TypeScript type safety issues being bypassed with `@ts-ignore`. Consider improving the type definitions to properly handle the tool call arguments array indexing without needing to ignore TypeScript errors.

```diff
- // @ts-ignore
- runTools[index]["function"]["arguments"] += args;
+ if (runTools[index] && runTools[index].function) {
+   runTools[index].function.arguments += args;
+ }
```
224-226: Improve type safety in tool message handling

Multiple `@ts-ignore` comments indicate potential type safety issues when manipulating the request payload messages. Consider adding appropriate type guards or interfaces to handle this more safely.

```diff
- // @ts-ignore
- requestPayload?.messages?.splice(
-   // @ts-ignore
-   requestPayload?.messages?.length,
+ if (requestPayload.messages) {
+   requestPayload.messages.splice(
+     requestPayload.messages.length,
```
274-274: Translate Chinese comment to English

For consistency with the rest of the codebase, which is in English, consider translating the Chinese comment.

```diff
- let seq = 1000; //同 Constant.ts 中的排序保持一致
+ let seq = 1000; // Keep consistent with the sorting in Constant.ts
```
116-117: Improve comment clarity about max_tokens exclusion

The comment about excluding `max_tokens` doesn't clearly explain the reasoning. Consider providing a more informative explanation or linking to documentation that explains this decision.

```diff
- // max_tokens: Math.max(modelConfig.max_tokens, 1024),
- // Please do not ask me why not send max_tokens, no reason, this param is just shit, I dont want to explain anymore.
+ // Deliberately excluding max_tokens parameter as it can cause unexpected truncation issues with OpenRouter
+ // See: [add reference to relevant documentation or discussion]
```
120-120: Update log message to reference OpenRouter

The log message references OpenAI, but this is the OpenRouter implementation.

```diff
- console.log("[Request] openai payload: ", requestPayload);
+ console.log("[Request] openrouter payload: ", requestPayload);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

- app/client/platforms/openrouter.ts (1 hunks)
🔇 Additional comments (2)
app/client/platforms/openrouter.ts (2)
43-287: LGTM! Well-structured OpenRouter API implementation

The OpenRouter API implementation follows the established patterns for LLM providers in the codebase. It properly handles custom URLs, authentication, streaming, non-streaming requests, and model listing. The implementation of the thinking/reasoning feature and tool calls also looks correct.

83-246: ✅ Verification successful

Verify OpenRouter API compatibility with the implementation

The implementation assumes the OpenRouter API has the same structure as OpenAI for handling streaming responses, tool calls, and reasoning. Please verify this is correct for all supported models through OpenRouter.
🌐 Web query:

> Does OpenRouter API support the same streaming format and tool calls structure as OpenAI API?

💡 Result:

OpenRouter does support the same streaming format and tool calls structure as the OpenAI API, making it a highly compatible alternative for developers working with multiple LLM providers.

Streaming Implementation

OpenRouter implements streaming through Server-Sent Events (SSE), just like OpenAI's implementation. To enable streaming, you simply set the `stream` parameter to `true` in your request[6][9]. This allows the model to stream the response in chunks rather than returning the entire response at once, which is particularly useful for building chat interfaces where the UI should update as the model generates the response[6].

The streaming functionality works with any model available through OpenRouter, making it a versatile feature[6]. Implementation examples include:
Python Example:

```python
import requests
import json

question = "How would you build the tallest building ever?"
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "openai/chatgpt-4o-latest",
    "messages": [{"role": "user", "content": question}],
    "stream": True,
}

with requests.post(url, headers=headers, json=payload, stream=True) as r:
    for chunk in r.iter_content(chunk_size=1024, decode_unicode=True):
        # Process streaming chunks
        pass
```
Tool Calls & Structured Outputs

OpenRouter fully supports OpenAI's tool calling capabilities, allowing for structured outputs across various LLM providers[10]. When working with tool calls:

- You must ensure the model you're using supports Tool Calling or Structured Outputs in the OpenRouter models listing[10].
- You can use the same interface as you would with OpenAI's API.

For example, using the Instructor library with OpenRouter demonstrates this compatibility:

```python
import os

import instructor
from openai import OpenAI
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY"),
)
client = instructor.from_openai(client, mode=instructor.Mode.TOOLS)

resp = client.chat.completions.create(
    model="google/gemini-2.0-flash-lite-001",
    messages=[{"role": "user", "content": "Ivan is 28 years old"}],
    response_model=User,
    extra_body={"provider": {"require_parameters": True}},
)
```

API Compatibility
OpenRouter deliberately implements a standardized API compatible with OpenAI's Chat Completions endpoint, normalizing the request and response schema across different model architectures[2]. This provides developers with:
- Consistent access to diverse LLMs while maintaining control over model selection
- The ability to use OpenRouter as a drop-in replacement for OpenAI[9]
- Support for any SDK that works with OpenAI's API[9]
The base URL for all API interactions is 'https://openrouter.ai/api/v1', and the request schema supports the same parameters as OpenAI's API, including temperature control, max tokens, and stop sequences[7].
OpenRouter's compatibility extends to handling advanced features like JSON mode and the ability to work with both text and image inputs, with PDF support coming soon[9][10].
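To make the drop-in compatibility concrete in the codebase's own language, here is a hedged TypeScript sketch of the same request the Python example builds. It only assembles the URL, headers, and JSON body (no network call); the key and model values are placeholders.

```typescript
// Builds an OpenAI-compatible chat completion request targeting
// OpenRouter's documented base URL. Purely illustrative: it returns
// plain data rather than performing a fetch.
function buildChatRequest(
  apiKey: string,
  model: string,
  userMessage: string,
  stream = false,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: userMessage }],
        stream,
      }),
    },
  };
}
```

The returned pair could be passed straight to `fetch(url, init)`; because the schema matches OpenAI's, the same builder works for OpenAI-hosted models by swapping the base URL.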
Citations:
- 1: https://openrouter.ai/compare/openai/chatgpt-4o-latest
- 2: https://openrouter.ai/docs/api-reference/overview
- 3: https://openrouter.ai/openai/gpt-4.1
- 4: https://openrouter.ai/openai/gpt-4-32k/api
- 5: https://community.openai.com/t/openrouter-and-legality-around-accessing-openai/1104224
- 6: https://openrouter.ai/docs/api-reference/streaming
- 7: https://relevanceai.com/llm-models/set-up-and-use-openrouter-auto-llm-for-ai-applications
- 8: https://discuss.ai.google.dev/t/google-vs-openrouter-api-differences/66026
- 9: https://openrouter.ai/docs/faq
- 10: https://python.useinstructor.com/integrations/openrouter/
OpenRouter streaming and tool calls compatibility verified
OpenRouter’s API is fully compatible with OpenAI’s SSE-based streaming and tool-calling formats across all supported models, so no changes are needed here.
💻 Change Type

🔀 Description of Change

- OpenRouter models are not listed by default; configure them via the `CUSTOM_MODELS` environment variable (except `openrouter/auto`, because OpenRouter's model list is too long).

📝 Additional Information

Differences between this PR and the existing OpenRouter PR #6324:
Summary by CodeRabbit

- New Features
- Documentation
- Localization
- Bug Fixes