UserWarning: Attempted to get default timeout for nccl backend, but NCCL support is not compiled #200
Hi, could you provide more information on what platform/OS you are trying to run it on? Also, please try reinstalling PyTorch and running it again. You can do that from here: https://pytorch.org/get-started/locally/
I got the same error. My OS is Windows 11. Here is what I got:
It seems like Windows doesn't support the NCCL backend. Does that mean I can only run
I tried again with Ubuntu 22.04 installed under WSL. The
Could you please provide the complete error message, your hardware specs, and the code you tried to run? NCCL isn't supported on Windows. If you are running on Windows, please check here and use
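Since NCCL is only compiled into PyTorch on Linux, a common workaround is to fall back to the Gloo backend on Windows. Below is a minimal sketch of that selection logic; `pick_backend` is a hypothetical helper (not from this thread), and the `torch.distributed` call is shown only in a comment so the snippet stays self-contained.

```python
import sys


def pick_backend(nccl_available: bool, platform: str) -> str:
    """Choose a torch.distributed backend.

    NCCL requires Linux and a PyTorch build compiled with NCCL support;
    Gloo works on Windows, macOS, and Linux (CPU and, partially, GPU).
    """
    if platform.startswith("win") or not nccl_available:
        return "gloo"
    return "nccl"


# Hypothetical usage with torch.distributed (assumes torch is installed):
#
#   import torch.distributed as dist
#   backend = pick_backend(dist.is_nccl_available(), sys.platform)
#   dist.init_process_group(backend=backend, rank=0, world_size=1)

print(pick_backend(False, sys.platform))
```

On a Windows machine, or any build where NCCL support is not compiled in, this selects `"gloo"`, which avoids the `Attempted to get default timeout for nccl backend` warning.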
Above is the complete error message from running the example
I think the root cause is that the hardware doesn't meet the minimum requirements to run the Llama-7B model.
Yes, it might be that. You will need a minimum of ~16 GB of VRAM to run the 8B model in fp16 precision.
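The ~16 GB figure follows from a back-of-envelope calculation: fp16 stores each parameter in 2 bytes, so 8 billion parameters need about 16 GB just for the weights. A minimal sketch (`estimated_weight_gb` is a hypothetical helper; it ignores activation and KV-cache overhead, which add more on top):

```python
def estimated_weight_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough VRAM needed for model weights alone.

    bytes_per_param: 2 for fp16/bf16, 4 for fp32, 1 for int8.
    Ignores activations, KV cache, and framework overhead.
    """
    return params_billion * bytes_per_param


print(estimated_weight_gb(8))       # 8B model in fp16 -> 16 GB
print(estimated_weight_gb(7))       # 7B model in fp16 -> 14 GB
print(estimated_weight_gb(8, 4))    # 8B model in fp32 -> 32 GB
```

This is why quantizing to int8 or int4 is the usual route for GPUs with less VRAM.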
Closing this issue. Feel free to re-open if the issue persists.
```
W0509 01:09:39.797000 8201419456 torch/distributed/elastic/multiprocessing/redirects.py:27] NOTE: Redirects are currently not supported in Windows or MacOs.
UserWarning: Attempted to get default timeout for nccl backend, but NCCL support is not compiled
warnings.warn("Attempted to get default timeout for nccl backend, but NCCL support is not compiled")
Traceback (most recent call last):
```