Can't download the GGUF, its progress stopped at 1% #2255

Closed
tycoh01 opened this issue Apr 27, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@tycoh01

tycoh01 commented Apr 27, 2024

Describe the bug

E:\LocalGpt10\localGPT>python run_localgpt.py & force_download=true, resume_download=false
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:244 - Running on: cpu
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:245 - Display Source Documents set to: False
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:246 - Use history set to: False
2024-04-28 07:11:56,561 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
C:\Users\USER\anaconda3\Lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
max_seq_length 512
2024-04-28 07:12:14,584 - INFO - run_localgpt.py:132 - Loaded embeddings from hkunlp/instructor-large
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:60 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:61 - This action can take a few minutes!
2024-04-28 07:12:16,210 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
llama-2-7b-chat.Q4_K_M.gguf: 1%|▍ | 31.5M/4.08G [00:30<1:16:29, 882kB/s]

Reproduction

No response

Logs

E:\LocalGpt10\localGPT>python run_localgpt.py & force_download=true, resume_download=false
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:244 - Running on: cpu
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:245 - Display Source Documents set to: False
2024-04-28 07:11:54,039 - INFO - run_localgpt.py:246 - Use history set to: False
2024-04-28 07:11:56,561 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
C:\Users\USER\anaconda3\Lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
max_seq_length  512
2024-04-28 07:12:14,584 - INFO - run_localgpt.py:132 - Loaded embeddings from hkunlp/instructor-large
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:60 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2024-04-28 07:12:16,210 - INFO - run_localgpt.py:61 - This action can take a few minutes!
2024-04-28 07:12:16,210 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
llama-2-7b-chat.Q4_K_M.gguf:   1%|▍         | 31.5M/4.08G [00:30<1:16:29, 882kB/s]

System info

- huggingface_hub version: 0.16.4
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.9.6
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: C:\Users\USER\.cache\huggingface\token
- Has saved token ?: False
- Configured git credential helpers: manager
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.0.1
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.4.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.23.3
- pydantic: 1.10.12
- aiohttp: 3.8.5
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: C:\Users\USER\.cache\huggingface\hub
- HUGGINGFACE_ASSETS_CACHE: C:\Users\USER\.cache\huggingface\assets
- HF_TOKEN_PATH: C:\Users\USER\.cache\huggingface\token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
tycoh01 added the bug label on Apr 27, 2024
@Wauplin
Contributor

Wauplin commented Apr 29, 2024

Has your problem been resolved yet, or not? Have you tried several times? Are you sure your disk is not full, for instance? And is your network unstable? Also, could you update huggingface_hub? You have 0.16.4 installed, but the latest is 0.22.x.

Downloads should not block like this, so the above are the only issues I can think of.
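
To check whether the stall is reproducible outside of localGPT, the same file can be fetched directly with huggingface_hub. Below is a minimal sketch, assuming a working huggingface_hub install and using the repo and filename shown in the log above:

```python
# Minimal sketch: download the GGUF directly with huggingface_hub to see
# whether the transfer still stalls around 31.5M outside of localGPT.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7b-Chat-GGUF",  # repo from the log above
    filename="llama-2-7b-chat.Q4_K_M.gguf",   # file from the log above
    resume_download=True,                     # continue a partial download if one exists
)
print("Downloaded to:", path)
```

If this direct download also stalls, the cause is more likely the network, disk, or an outdated client than localGPT itself; upgrading first with `pip install -U huggingface_hub`, as suggested above, is a reasonable first step.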

@Wauplin
Contributor

Wauplin commented Jun 10, 2024

Closing, as there has been no news for quite some time.

Wauplin closed this as not planned on Jun 10, 2024