
Getting bug "The system cannot find the path specified." #11

Open
Glatinis opened this issue Apr 12, 2023 · 6 comments

Comments

@Glatinis commented Apr 12, 2023

I've set up everything. When I run the "run.bat" file, it prints out "The system cannot find the path specified." on two lines. When I try to run the "main.py" file directly and hit submit, it gives the following error:

INFO:chatbot.model:Loading the chatbot_models/pygmalion-2.7b model
Traceback (most recent call last):
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\serialization.py", line 777, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\serialization.py", line 282, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 419, in load_state_dict
    if f.read(7) == "version":
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1864: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 57, in <module>
    chat_model, tokenizer = build_model_and_tokenizer_for(model_name)
  File "C:\Users\rayan\Desktop\RenAI-Chat\chatbot\model.py", line 29, in build_model_and_tokenizer_for
    model = transformers.AutoModelForCausalLM.from_pretrained(
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\auto\auto_factory.py", line 464, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 2301, in from_pretrained
    state_dict = load_state_dict(resolved_archive_file)
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 431, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'chatbot_models/pygmalion-2.7b\pytorch_model.bin' at 'chatbot_models/pygmalion-2.7b\pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
@Rubiksman78 (Owner)

How did you download the model? It seems the model folder was not downloaded correctly, or some data was corrupted during the process.

@Glatinis (Author)

I cloned it with git clone. I'll try another model and let you know if it works.
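A plain `git clone` without Git LFS installed is a common cause of the "failed finding central directory" error above: only small pointer files get fetched instead of the real weights. A minimal sketch to classify the checkpoint file (the path is the one from the traceback; `check_checkpoint` is a hypothetical helper, not part of this project):

```python
import zipfile
from pathlib import Path

def check_checkpoint(path):
    """Classify a PyTorch .bin checkpoint: valid zip, Git LFS pointer, or corrupt."""
    p = Path(path)
    if not p.exists():
        return "not found - check the model folder path"
    # Git LFS pointer files are tiny text files that start with this header.
    if p.read_bytes()[:40].startswith(b"version https://git-lfs"):
        return "Git LFS pointer - run `git lfs install && git lfs pull` and re-check"
    # torch.save() writes a zip archive, which is what torch.load() expects to read.
    if zipfile.is_zipfile(p):
        return "valid zip archive - checkpoint looks intact"
    return "corrupt or truncated - re-download the model"

print(check_checkpoint("chatbot_models/pygmalion-2.7b/pytorch_model.bin"))
```

If it reports an LFS pointer, running `git lfs pull` inside the model folder should retrieve the actual weights.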

@Glatinis (Author)

I changed the model and it worked! But now I keep getting this error:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
  warn(msg)
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
  warn(msg)
C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
  warn(msg)
CUDA SETUP: Loading binary C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
CUDA SETUP: Loading binary C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA library was not detected.
CUDA SETUP: Solution 1): Your paths are probably not up-to-date. You can update them via: sudo ldconfig.
CUDA SETUP: Solution 2): If you do not have sudo rights, you can do the following:
CUDA SETUP: Solution 2a): Find the cuda library via: find / -name libcuda.so 2>/dev/null
CUDA SETUP: Solution 2b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_2a
CUDA SETUP: Solution 2c): For a permanent solution add the export from 2b into your .bashrc file, located at ~/.bashrc
Traceback (most recent call last):
  File "main.py", line 57, in <module>
    chat_model, tokenizer = build_model_and_tokenizer_for(model_name)
  File "C:\Users\rayan\Desktop\RenAI-Chat\chatbot\model.py", line 29, in build_model_and_tokenizer_for
    model = transformers.AutoModelForCausalLM.from_pretrained(
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\auto\auto_factory.py", line 464, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 2372, in from_pretrained
    from .utils.bitsandbytes import get_keys_to_not_convert, replace_8bit_linear
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\utils\bitsandbytes.py", line 10, in <module>
    import bitsandbytes as bnb
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\__init__.py", line 7, in <module>
    from .autograd._functions import (
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\autograd\__init__.py", line 1, in <module>
    from ._functions import undo_layout, get_inverse_transform_indices
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\autograd\_functions.py", line 9, in <module>
    import bitsandbytes.functional as F
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\functional.py", line 17, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "C:\Users\rayan\AppData\Local\Programs\Python\Python38\lib\site-packages\bitsandbytes\cextension.py", line 22, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment!
        If you cannot find any issues and suspect a bug, please open an issue with detals about your environment:
        https://github.com/TimDettmers/bitsandbytes/issues
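The repeated "argument of type 'WindowsPath' is not iterable" line in the log has a plausible explanation (an assumption, not confirmed in this thread): bitsandbytes releases of that era targeted Linux and in places used the `in` operator directly on `pathlib` path objects, which raises exactly this TypeError. A minimal reproduction, using `PureWindowsPath` so it runs on any OS:

```python
from pathlib import PureWindowsPath

# pathlib path objects support neither membership tests nor iteration,
# so `substring in path` raises the TypeError seen in the log above.
p = PureWindowsPath(r"C:\cuda\lib64")  # example path, not from the log

try:
    "cuda" in p
except TypeError as e:
    print(e)  # argument of type 'PureWindowsPath' is not iterable

# Converting to str first avoids the error, which is the usual fix:
print("cuda" in str(p))  # True
```

Upgrading bitsandbytes (or using a Windows-compatible build) is the practical remedy; the library had no official Windows support at the time.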

@Rubiksman78 (Owner)

Did you set up CUDA, cuDNN and the rest of the GPU stack? What is your GPU?

@Glatinis (Author)

Yes, I installed cudatoolkit, and torch is recognizing my GPU. My GPU is an RTX 2070. Is there an exact guide I should follow, just in case I missed a step in the setup process?
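When torch sees the GPU but bitsandbytes does not, the discrepancy is often in the shell environment rather than the driver. A quick torch-free survey of the locations CUDA tooling is typically found (the variable and tool names are common conventions, not anything this project specifically requires):

```python
import os
import shutil

def survey_cuda_env():
    """Report where common CUDA tools and environment variables point, if anywhere."""
    report = {}
    # Command-line tools that should be on PATH after a CUDA toolkit install.
    for tool in ("nvidia-smi", "nvcc"):
        report[tool] = shutil.which(tool)  # None if not found on PATH
    # Environment variables that CUDA-detection code commonly inspects.
    for var in ("CUDA_PATH", "CUDA_HOME", "LD_LIBRARY_PATH"):
        report[var] = os.environ.get(var)  # None if unset
    return report

for key, value in survey_cuda_env().items():
    print(f"{key}: {value if value else '<not found>'}")
```

If `nvcc` and `CUDA_PATH` are both missing while `nvidia-smi` works, the driver is fine but the toolkit is not visible to other libraries.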

@Rubiksman78 (Owner)

This tutorial is the one linked in the wiki.
