[Local deployment]: ollama serve + Chuanhu deployment fails #1084
Comments
In the model list, clicking the model I added, qwen:14b-chat-v1.5-fp16, does not select it either.
Set "openai_api_key" to openai, set the api base to your local API address, and pick a gpt-series model in the model selector; do not add new models.
Attaching my json: "openai_api_key": "ollama", and the lines below it are commented out.
I have tried many times and always get the error shown in the screenshot. What should I do?
For ollama, you should select ollama in the model dropdown.
I don't quite understand what that means. How exactly should I do it? Please explain in more detail, thanks. @GaiZhenbiao I just double-checked: with LM STUDIO running an API server, I can change the API key and base, and inference works regardless of the model. But with OLLAMA, the same configuration changes do not allow inference.
"default_model": "ollama", // default model — the result is as shown. @GaiZhenbiao
Is there already existing feedback and an answer for this?
Is this a proxy-configuration question?
Error description
Inference does not work.
Steps to reproduce
3. The Chuanhu config file is set to:
"openai_api_key": "ollama",
"extra_models": ["qwen:14b-chat-v1.5-fp16"],
"openai_api_base": "http://localhost:11434/v1",
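Pulling the thread together, a minimal config.json sketch for pointing Chuanhu at Ollama's OpenAI-compatible endpoint might look like the following. Note this is an assumption assembled from the keys quoted in this thread, not a confirmed fix: in particular, setting "default_model" to the actual Ollama model tag (rather than the literal string "ollama") follows the maintainer's hint to select the ollama model, and the key value "ollama" is arbitrary since Ollama does not validate API keys.

```json
{
  "openai_api_key": "ollama",
  "openai_api_base": "http://localhost:11434/v1",
  "extra_models": ["qwen:14b-chat-v1.5-fp16"],
  "default_model": "qwen:14b-chat-v1.5-fp16"
}
```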
Error logs
No response
Runtime environment
Additional notes
No response