Describe the bug
E:\LocalGpt10\localGPT>python run_localgpt.py & force_download=true, resume_download=false
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:244 - Running on: cpu
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:245 - Display Source Documents set to: False
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:246 - Use history set to: False
2024-04-28 07:11:56,561 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
C:\Users\USER\anaconda3\Lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
max_seq_length 512
2024-04-28 07:12:14,584 - INFO - run_localgpt.py:132 - Loaded embeddings from hkunlp/instructor-large
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:60 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:61 - This action can take a few minutes!
2024-04-28 07:12:16,210 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
llama-2-7b-chat.Q4_K_M.gguf: 1%|▍ | 31.5M/4.08G [00:30<1:16:29, 882kB/s]
Reproduction
No response
Logs
E:\LocalGpt10\localGPT>python run_localgpt.py & force_download=true, resume_download=false
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:244 - Running on: cpu
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:245 - Display Source Documents set to: False
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:246 - Use history set to: False
2024-04-28 07:11:56,561 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
C:\Users\USER\anaconda3\Lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
max_seq_length 512
2024-04-28 07:12:14,584 - INFO - run_localgpt.py:132 - Loaded embeddings from hkunlp/instructor-large
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:60 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:61 - This action can take a few minutes!
2024-04-28 07:12:16,210 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
llama-2-7b-chat.Q4_K_M.gguf: 1%|▍ | 31.5M/4.08G [00:30<1:16:29, 882kB/s]
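For context, the progress line above is not necessarily a hang: at 882 kB/s, a 4.08 GB file simply takes a long time. A quick sanity check of tqdm's ETA (`eta_seconds` is a hypothetical helper, not part of localGPT):

```python
def eta_seconds(total_bytes: float, done_bytes: float, rate_bps: float) -> float:
    """Remaining download time in seconds at the current transfer rate."""
    return (total_bytes - done_bytes) / rate_bps

# Figures from the tqdm line: 31.5 MB of 4.08 GB done at 882 kB/s.
remaining = eta_seconds(4.08e9, 31.5e6, 882e3)
minutes = remaining / 60  # roughly 76.5 minutes, in line with tqdm's 1:16:29
```

So unless the counter stops advancing entirely, the download may just be slow rather than stuck.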
System info

Has your problem been resolved yet? Have you tried several times? Are you sure your disk is not full, for instance? And is your network unstable? Also, could you update huggingface_hub? You have 0.16.4 installed, but the latest is 0.22.x.
The download should not stall like this, so the above are the only issues I can think of.
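To rule out the full-disk possibility mentioned above, a minimal sketch of checking free space on the cache drive (`enough_space` is an illustrative helper; the Q4_K_M file needs about 4.1 GB):

```python
import shutil

def enough_space(path: str, needed_bytes: int = 4_100_000_000) -> bool:
    """True if the drive holding `path` has room for the model file."""
    free = shutil.disk_usage(path).free
    return free >= needed_bytes

# e.g. enough_space("E:\\") on the machine in the logs; "." works anywhere.
```

If space is fine, `pip install -U huggingface_hub` would bring 0.16.4 up to the 0.22.x the comment mentions before retrying the download.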