Replies: 2 comments
-
I had the same issue with a 4070 and just resolved it. I installed CUDA Toolkit 11.8 system-wide on Windows 11, while the NVIDIA driver stayed at the version required by the OS (12.2). Here is the procedure:
https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_local
Then, inside the conda environment:

conda install cudatoolkit=11.8

Run nvidia-smi to confirm the driver sees the GPU.

Install PyTorch. Ensure that you install the CUDA build of PyTorch: if you installed PyTorch without specifying a CUDA version, you probably have the CPU-only build. You can recognize it in the environment listing, e.g.:

packages in environment at d:\LLM\LocalGPT\localgpt:
Name      Version   Build         Channel
pytorch   2.0.1     py3.10_cpu_0  pytorch

The py3.10_cpu_0 build tag means CPU-only. To troubleshoot the availability of torch.cuda, print the number of CUDA devices available; if it returns 0, no CUDA devices are detected by PyTorch.
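The device-count check mentioned above can be done with a short snippet like this (guarded so it also reports a missing install rather than crashing):

```python
import importlib.util

# Report whether PyTorch is installed, and if so, whether it was
# built with CUDA support and how many devices it can see.
if importlib.util.find_spec("torch") is None:
    print("PyTorch is not installed in this environment")
else:
    import torch
    print("PyTorch version:", torch.__version__)  # CPU-only builds have no '+cuXXX' suffix
    print("CUDA available:", torch.cuda.is_available())
    print("CUDA device count:", torch.cuda.device_count())  # 0 means no CUDA devices detected
```

On a CPU-only build (like the py3.10_cpu_0 package above), this prints False and 0.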
Optional: pip freeze --local, then run your test script (python xxxTEST.py).

You can put multiple folders within the SOURCE_DOCUMENTS folder and the code will recursively read your files. From the activated localgpt Anaconda environment, run the ingestion script. This will create a new folder called DB and use it for the newly created vector store. You can ingest as many documents as you want, and all will be accumulated in the local embeddings database. If you want to start from an empty database, delete the DB folder and re-ingest your documents.

Note: the first time you run this, it needs internet access to download the embedding model (default: Instructor Embeddings). In subsequent runs no data leaves your local environment, and you can ingest data without an internet connection.
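As a sketch of that ingestion step (the script name `ingest.py` and the `--device_type` flag are assumptions based on common localGPT setups; check your checkout for the actual entry point):

```shell
# Activate the environment, then ingest everything under SOURCE_DOCUMENTS.
# Guarded so the commands no-op where conda or the repo are absent.
command -v conda >/dev/null 2>&1 && conda activate localgpt || true
[ -f ingest.py ] && python ingest.py --device_type cuda || true
# Creates (or updates) the DB folder holding the vector store.
```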
python run_localGPT.py --show_sources --use_history
Once the answer is generated, you can ask another question or type exit to finish the script.
-
Also, git clone didn't pull all the files (?!), so I downloaded the zip and manually added the missing ones; requirements.txt was one of them.
-
Hi,
I have an MX250 with 2 GB VRAM, and I want to try using the GPU to see how it behaves.
I am using the configuration below:
MODEL_ID = "TheBloke/Llama-2-7b-Chat-GGUF"
MODEL_BASENAME = "llama-2-7b-chat.Q4_K_M.gguf"
However, when I run "python run_localGPT.py --device_type cuda",
I get the error below, after many troubleshooting attempts and reinstalls (e.g. "conda install -c pytorch torchvision cudatoolkit=10.1 pytorch").
Can you advise what the issue is here?
File "C:\ProgramData\anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\ProgramData\anaconda3\Lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
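That traceback means the installed torch package is a CPU-only build, so reinstalling the CUDA toolkit alone won't fix it. One common fix, sketched below under the assumption that your driver supports CUDA 11.8, is to replace the wheel with a CUDA build from PyTorch's official index:

```shell
# Swap the CPU-only wheel for a CUDA 11.8 build (assumes compatible driver).
pip uninstall -y torch
pip install torch --index-url https://download.pytorch.org/whl/cu118
# Verify the new build is CUDA-enabled:
python -c "import torch; print(torch.cuda.is_available())"
```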