Custom channel deepseek-reasoner request error #6590
-
📦 Deployment environment: Docker
📌 Software version: v1.52.6
💻 System environment: Windows
🌐 Browser: Chrome
🐛 Problem description: requests to the deepseek-reasoner model on a custom channel (a self-hosted onehub relay) fail
📷 Reproduction steps: use the custom channel's deepseek-reasoner model for a multi-turn conversation
🚦 Expected result: multi-turn conversations work normally
📝 Additional information: lobehub error message (actually the error message returned by onehub)

It shows up as the request payload not meeting DeepSeek's message role requirements.

Console debug info (the lobehub client's request payload): the user message that was actually sent is missing from the request input.

Screenshot:
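For illustration only (this is not the actual captured request; the messages below are made up), the shape of the problem is a payload whose first non-system message has role `assistant` because the user's opening message is missing, which deepseek-reasoner rejects:

```typescript
// Hypothetical illustration of the problematic payload shape described above.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

const badPayload: { model: string; messages: ChatMessage[] } = {
  model: 'deepseek-reasoner',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    // the user's opening greeting that was actually sent is missing here
    { role: 'assistant', content: 'Hello! How can I help you?' },
    { role: 'user', content: "Let's continue our conversation." },
  ],
};
// DeepSeek returns an invalid_request_error because the first non-system
// message must come from the user and roles must alternate.
```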
Replies: 12 comments
-
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
-
To resolve the "invalid_request_error" with the `deepseek-reasoner` model, the code should automatically handle this by inserting a user message with empty content if the first message is from the assistant. Here's a relevant code snippet that demonstrates this logic:

```typescript
it('should insert a user message if the first message is from assistant', async () => {
  const payloadMessages = [{ content: 'Hello', role: 'assistant' as LLMRoleType }];
  const expectedMessages = [{ content: '', role: 'user' }, ...payloadMessages];

  const payload: ChatStreamPayload = {
    messages: payloadMessages,
    model: 'deepseek-reasoner',
    temperature: 0,
  };

  await instance.chat(payload);

  expect(instance['client'].chat.completions.create).toHaveBeenCalled();
  const actualArgs = (instance['client'].chat.completions.create as Mock).mock.calls[0];
  const actualMessages = actualArgs[0].messages;
  expect(actualMessages).toEqual(expectedMessages);
});
```

Ensure your implementation follows this logic to handle the message sequence correctly before sending it to the chat completion API [1].

To continue talking to Dosu, mention @dosu.
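The test above exercises a transform on the provider side; a minimal sketch of what such a transform could look like is shown below. This is illustrative only, with names and types that are assumptions rather than lobe-chat's actual code:

```typescript
type LLMRole = 'system' | 'user' | 'assistant';

interface Message {
  role: LLMRole;
  content: string;
}

// Sketch: ensure the first non-system message comes from the user by
// inserting an empty user turn ahead of a leading assistant message.
function ensureLeadingUserMessage(messages: Message[]): Message[] {
  const firstNonSystem = messages.findIndex((m) => m.role !== 'system');
  if (firstNonSystem === -1) return messages; // only system messages, nothing to do
  if (messages[firstNonSystem].role !== 'assistant') return messages;
  return [
    ...messages.slice(0, firstNonSystem),
    { content: '', role: 'user' },
    ...messages.slice(firstNonSystem),
  ];
}
```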
-
Is this related to #5601, or a similar cause?
-
Not only that: the DeepSeek API also requires user and assistant messages to alternate; the same role cannot appear twice in a row.
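One common way to satisfy that alternation rule is to merge consecutive same-role messages before the request goes out. The snippet below is only a sketch of that idea, not lobe-chat's actual implementation:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Sketch: merge consecutive user/user or assistant/assistant messages so the
// sequence sent to the API alternates between the two roles.
function mergeConsecutiveRoles(messages: Message[]): Message[] {
  return messages.reduce<Message[]>((acc, msg) => {
    const prev = acc[acc.length - 1];
    if (prev && prev.role !== 'system' && prev.role === msg.role) {
      prev.content = `${prev.content}\n\n${msg.content}`; // safe: prev is our own copy
    } else {
      acc.push({ ...msg });
    }
    return acc;
  }, []);
}
```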
-
Put onehub's baseURL and apikey into the DeepSeek provider and use it from there; that is the only place where the outgoing messages are processed.
-
That's what I'm doing now! But in the custom channel, my conversation content is complete, as in the earlier screenshot, yet the user's first greeting message is missing. This may be a related issue or a new one, because as far as I can see, the R1 message-processing logic under the DeepSeek provider inserts an empty message when the first message is not from the user, and that code path isn't referenced or reused for custom providers at all, so it's strange that the first message goes missing.

Also, I'm wondering whether the special message rules for deepseek-r1 could be abstracted out on their own, instead of being applied only under the DeepSeek provider. In the short term the official DeepSeek API is quite unstable, and most users are using third-party R1 deployments and various full-size versions. The rough idea is to not restrict this to a provider but to key the handling on the model.

Users and contributors who have run into similar problems are welcome to discuss.
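A rough sketch of that idea, purely illustrative and not existing lobe-chat code: key the message-normalization rules on the model name instead of the provider, so any channel serving an R1-style model gets the same treatment. All names below are hypothetical:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

type MessageTransform = (messages: Message[]) => Message[];

// Hypothetical registry: model-name pattern -> message transform, applied no
// matter which provider/channel the request is routed through.
const modelMessageRules: Array<{ pattern: RegExp; transform: MessageTransform }> = [
  {
    // deepseek-r1 / deepseek-reasoner style models: the first non-system
    // message must come from the user, and roles must alternate.
    pattern: /deepseek-(r1|reasoner)/i,
    transform: (messages) => {
      const out: Message[] = [];
      for (const msg of messages) {
        const prev = out[out.length - 1];
        if (prev && prev.role !== 'system' && prev.role === msg.role) {
          prev.content = `${prev.content}\n\n${msg.content}`; // merge consecutive same-role turns
        } else {
          out.push({ ...msg });
        }
      }
      const first = out.findIndex((m) => m.role !== 'system');
      if (first !== -1 && out[first].role === 'assistant') {
        out.splice(first, 0, { content: '', role: 'user' }); // ensure a leading user turn
      }
      return out;
    },
  },
];

function applyModelRules(model: string, messages: Message[]): Message[] {
  const rule = modelMessageRules.find((r) => r.pattern.test(model));
  return rule ? rule.transform(messages) : messages;
}
```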
-
The user/assistant message pairing constraint actually exists for many providers besides DeepSeek. However, enabling this behavior by default for every provider would lead to some unexpected situations (for example, https://github.com/lobehub/lobe-chat/discussions/categories/rfc-%E7%89%B9%E6%80%A7%E5%BC%80%E5%8F%91#discussioncomment-12333376), so this configuration is unlikely to be added for all providers.

If you use an OpenAI-compatible channel, from LobeChat's point of view it is simply an endpoint that is fully compliant with the OpenAI format specification, so it is not appropriate to do message pairing there; that is something the channel itself needs to solve.

If you use the DeepSeek provider instead, LobeChat can recognize the DeepSeek format specification at that level, and we will satisfy the pairing requirement at the application layer.

Taking a longer-term view, the model APIs of OpenAI, Claude, and Google have already evolved and diverged considerably, and newer players such as DeepSeek and OpenRouter are starting to have their own proprietary request parameter designs. So we may have to accept that the request SDKs will need to be differentiated more explicitly in the future.
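For illustration, a minimal sketch of that split, assuming hypothetical adapter classes (not lobe-chat's real runtime): the DeepSeek adapter normalizes the message sequence in its chat path, while the generic OpenAI-compatible adapter forwards messages unchanged:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface ChatPayload {
  model: string;
  messages: Message[];
}

// Stand-in for the pairing/alternation normalization discussed in this thread.
function normalizeForDeepSeek(messages: Message[]): Message[] {
  // ...merge consecutive same-role messages, insert a leading empty user turn, etc.
  return messages;
}

// Generic OpenAI-compatible channel: treated as a spec-compliant endpoint,
// so messages are forwarded as-is; any pairing rule is the channel's job.
class OpenAICompatibleAdapter {
  constructor(protected baseURL: string, protected apiKey: string) {}

  chat(payload: ChatPayload): Promise<Response> {
    return fetch(`${this.baseURL}/chat/completions`, {
      body: JSON.stringify(payload),
      headers: { Authorization: `Bearer ${this.apiKey}`, 'Content-Type': 'application/json' },
      method: 'POST',
    });
  }
}

// DeepSeek provider: the app knows the DeepSeek format rules at this level,
// so it can normalize the message sequence before sending.
class DeepSeekAdapter extends OpenAICompatibleAdapter {
  chat(payload: ChatPayload): Promise<Response> {
    return super.chat({ ...payload, messages: normalizeForDeepSeek(payload.messages) });
  }
}
```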
-
This issue is closed. If you have any questions, you can comment and reply.