
[Local deployment]: ollama serve + Chuanhu deployment fails #1084

Open
2 tasks done
taozhiyuai opened this issue Mar 26, 2024 · 8 comments · May be fixed by #1089
Labels: localhost deployment, question (Further information is requested)

Comments

taozhiyuai commented Mar 26, 2024

Has this already been reported and answered?

  • I confirm there is no existing issue or discussion, and I have read the FAQ

Is this a question about proxy configuration?

  • I confirm this is not a question about proxy configuration.

Error description

Inference does not work.
[Screenshots 2024-03-26 09 43 16 and 2024-03-26 09 43 23]

Steps to reproduce

  1. The model is qwen:14b-chat-v1.5-fp16
  2. Run ollama serve
[Screenshot 2024-03-26 09 40 39]

3. The Chuanhu config file is set to:
"openai_api_key": "ollama",
"extra_models": ["qwen:14b-chat-v1.5-fp16"],
"openai_api_base": "http://localhost:11434/v1",

Error logs

No response

Runtime environment

- OS: macOS
- Browser: safari
- Gradio version: 
- Python version:

Additional notes

No response

taozhiyuai added the localhost deployment and question (Further information is requested) labels on Mar 26, 2024
taozhiyuai (Author):

Clicking the qwen:14b-chat-v1.5-fp16 entry I added in the model list does not select it either.

Keldos-Li (Collaborator):

"openai_api_key"设置为openai,api base设置为你的本地api地址,模型选择时选择gpt系列的模型,不要添加新的模型

taozhiyuai (Author):

"openai_api_key"设置为openai,api base设置为你的本地api地址,模型选择时选择gpt系列的模型,不要添加新的模型

[Screenshot 2024-03-26 10 39 51] With LM Studio, doing what you said works. Switching to the ollama server produces the error shown in the screenshot.


taozhiyuai commented Mar 26, 2024

Attaching my JSON (config.json):

"openai_api_key": "ollama",
"default_model": "GPT3.5 Turbo", // 默认模型
"openai_api_base": "http://localhost:11434/v1",

The lines below are commented out:
// "available_models": ["GPT3.5 Turbo", "GPT4 Turbo", "GPT4 Vision"], // list of available models; overrides the default list
// "extra_models": ["model name 3", "model name 4", ...], // extra models, appended after the available model list

taozhiyuai (Author):

"openai_api_key"设置为openai,api base设置为你的本地api地址,模型选择时选择gpt系列的模型,不要添加新的模型

I have tried many times and always get the error shown in the screenshot. What should I do?

GaiZhenbiao (Owner):

For ollama, you should select ollama in the model menu.

taozhiyuai (Author):

> For ollama, you should select ollama in the model menu.

I don't quite understand what you mean. How exactly should I do that? Please explain in more detail, thanks. @GaiZhenbiao

I just double-checked: with the LM Studio API server, I can change the API key and base and inference works regardless of the model. With ollama, the same configuration changes do not allow inference.
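
Another possible diagnostic, an assumption rather than advice from the thread: posting one request directly and printing the raw response body shows the exact server-side error, e.g. whether ollama rejects the model name being sent (a local server such as LM Studio may ignore that field, which would explain the difference). A minimal sketch:

import requests

# Send one chat request by hand and print the raw body so the exact server error is visible.
r = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",  # replace with whatever model name the client actually sends
        "messages": [{"role": "user", "content": "hi"}],
    },
    timeout=30,
)
print(r.status_code, r.text)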


taozhiyuai commented Mar 26, 2024

"default_model": "ollama", // 默认模型

This is the result @GaiZhenbiao
[Screenshots 2024-03-27 06 00 48 and 2024-03-27 05 50 52]

tusik linked a pull request Apr 1, 2024 that will close this issue
Keldos-Li linked a pull request Apr 1, 2024 that will close this issue