limit pytorch version to cudnn8 for pip install
#958
Conversation
I personally don't know why this isn't done already, but an alternative would be:
1. Add the Nvidia cuDNN package to the installation process.
2. In the library's entry point, add the following, which appends to the library search paths (rather than replacing them).
3. Take it a step further and also add cuBLAS, the CUDA runtime, or whatever else is needed.
That way users wouldn't have to worry about installing CUDA/cuDNN globally at all. Again, not sure why this isn't done.
According to https://www.github.com/pytorch/pytorch/issues/100974, therefore before
@BBC-Esq faster-whisper is not compatible with cuDNN 9 because ctranslate2 is not. It cannot be made compatible without a custom build of ctranslate2, since ctranslate2 is the core of all the CUDA functionality.
You hit this issue whenever you install faster-whisper in a completely new environment, so I hope this workaround is merged for now, until CTranslate2 actually supports a cuDNN 9 build and OpenNMT/CTranslate2#1780 is fixed.
Since OpenNMT/CTranslate2#1803 is merged and (I got another bug with
See this link for a workaround and an alternative method of running faster-whisper in general, with the newest
Given the current latest pytorch version, this PR may eventually be closed. But because all pytorch>=2.4 builds on conda have started to be compiled with cuDNN 9, this PR can keep new users who just run
`pip install faster-whisper`
from getting into the issue right after installing.
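The pin discussed above might look like the following in a requirements file — the exact version bound is illustrative; the idea is simply to stay on a torch wheel that still bundles cuDNN 8:

```
# requirements.txt sketch (bound is illustrative):
# torch >= 2.4 wheels ship cuDNN 9, which ctranslate2 cannot load yet,
# so keep torch on a cuDNN 8 build.
torch<2.4
faster-whisper
```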