Can't download the GGUF, progress stopped at 1% #2255
Labels
bug
Something isn't working
Comments
Has your problem been resolved yet? Have you tried several times? Are you sure your disk isn't full, for instance? And is your network unstable? Also, could you update? A download should not block like this, so the above are the only issues I can think of.
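As a quick sanity check for the disk-space point above, here is a minimal sketch using only Python's standard library (`shutil.disk_usage`); the ~4.08 GB figure is taken from the download log in the bug report, and the 10% headroom factor is an assumption, not part of localGPT:

```python
import shutil

def has_space_for(path, required_bytes, headroom=1.10):
    """Return True if the filesystem containing `path` has room for
    `required_bytes` plus a small headroom factor for temp files."""
    free = shutil.disk_usage(path).free
    return free >= int(required_bytes * headroom)

# llama-2-7b-chat.Q4_K_M.gguf is ~4.08 GB according to the download log
MODEL_SIZE = int(4.08 * 1024**3)
print(has_space_for(".", MODEL_SIZE))
```

Running this in the download target directory before retrying rules out a full disk as the cause of the stall.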
Describe the bug
E:\LocalGpt10\localGPT>python run_localgpt.py & force_download=true, resume_download=false
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:244 - Running on: cpu
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:245 - Display Source Documents set to: False
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:246 - Use history set to: False
2024-04-28 07:11:56,561 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
C:\Users\USER\anaconda3\Lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
max_seq_length 512
2024-04-28 07:12:14,584 - INFO - run_localgpt.py:132 - Loaded embeddings from hkunlp/instructor-large
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:60 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:61 - This action can take a few minutes!
2024-04-28 07:12:16,210 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
llama-2-7b-chat.Q4_K_M.gguf: 1%|▍ | 31.5M/4.08G [00:30<1:16:29, 882kB/s]
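One possible workaround, sketched here under the assumption that localGPT resolves models through the standard Hugging Face cache: pre-download the exact file named in the log with `huggingface_hub`. Its `hf_hub_download` resumes interrupted transfers from the partial file in the cache, so a run that stalls at 1% can simply be re-executed rather than started over:

```python
# Exact repo and file names taken from the download log above.
REPO_ID = "TheBloke/Llama-2-7b-Chat-GGUF"
FILENAME = "llama-2-7b-chat.Q4_K_M.gguf"

if __name__ == "__main__":
    # Import deferred so the constants are inspectable even where
    # huggingface_hub is not installed.
    from huggingface_hub import hf_hub_download

    # Re-running this after an interruption resumes from the partial
    # download already in the HF cache instead of restarting.
    path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
    print(path)  # cached location that the GGUF loader can reuse
```

Once the file is fully cached, `run_localgpt.py` should pick it up instead of re-downloading.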
Reproduction
No response
Logs
System info