Server deployment problem, requesting help #2423
Comments
In theory, it should work if you set model_dir to the path of the whisper model you downloaded. |
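For reference, a sketch of how a local model path is typically passed when launching the FunASR runtime server. The script name and flags follow the FunASR runtime deployment docs, but should be verified against the version you installed; all paths and model IDs here are placeholders, not the thread author's actual setup.

```shell
# Hypothetical paths -- adjust to where you unpacked FunASR and the model.
cd /workspace/FunASR/runtime
nohup bash run_server.sh \
  --model-dir /workspace/models/whisper-large-v3 \
  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
  --punc-dir damo/punc_ct-transformer_cn-en-common-vocab471067-large-onnx \
  > log.txt 2>&1 &
```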
It's probably not that simple. I've tried both the CPU and GPU tutorials with whisper-large-v3 many times; whether the model was pre-downloaded by me or downloaded automatically, the service would not start. On the CPU version, sensevoice-small and paraformer-large both work; on the GPU version, the paraformer-large from the example works, but repeated attempts with sensevoice-small and whisper-large-v3 both fail. All models were downloaded from modelscope, so the processing scripts provided by the project probably don't support whisper. |
After switching models, it looks like the model is missing some parameter, is that right? I seem to have hit the same problem. |
Because the GPU version automatically exports the model to TorchScript, and whisper-large-v3 does not support that. |
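As context for the comment above, a minimal sketch (using plain PyTorch, not FunASR's actual export code) of what "export to TorchScript" means. The `TinyEncoder` module is hypothetical; the point is that a static-graph model exports cleanly, whereas a model like whisper-large-v3, whose autoregressive decoding relies on dynamic Python-side control flow, typically cannot be exported this way without rewriting it.

```python
import torch

class TinyEncoder(torch.nn.Module):
    """Hypothetical module with a static, trace-friendly graph."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No data-dependent Python branching, so scripting succeeds.
        return torch.relu(x) * 2

# Exporting a static-graph model to TorchScript works:
scripted = torch.jit.script(TinyEncoder())
print(scripted(torch.ones(3)))  # tensor([2., 2., 2.])
```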
I could already guess that was the cause. What I mainly want to know is whether there is any other way to deploy whisper-large-v3 on the GPU version. Surely the Chinese offline GPU deployment tutorial in the project documentation isn't limited to paraformer only?
|
Modify the C++ code logic: remove the TorchScript export step, rewrite the model-loading logic, and then recompile. |
For example, with this Chinese streaming speech recognition service document: if I want to test the ASR performance of whisper-large-v3, how should I set the parameters? I have already downloaded the model files from the modelscope community and uploaded them to the server.
How should I set these server-side parameters?