pip install failing #27

Open
shayanshafquat opened this issue Jul 20, 2023 · 2 comments
shayanshafquat commented Jul 20, 2023

There are multiple issues with installing the pinned versions of the packages:

  1. numpy~=1.18.4: throws error: subprocess-exited-with-error
  2. torch/torchtext versions are missing, or is it because of a different Python version? It throws this: ERROR: Ignored the following versions that require a different python version: 0.7 Requires-Python >=3.6, <3.7; 0.8 Requires-Python >=3.6, <3.7 ERROR: Could not find a version that satisfies the requirement torchtext~=0.7.0 (from versions: 0.1.1, 0.2.0, 0.2.1, 0.2.3, 0.3.1, 0.4.0, 0.5.0, 0.6.0, 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.11.0, 0.11.1, 0.11.2, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.14.1, 0.15.1, 0.15.2) ERROR: No matching distribution found for torchtext~=0.7.0

Is updating the functions to match recent versions of these libraries/packages the only way to go forward?
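For reference, a quick way to confirm which interpreter pip is resolving against, and which torchtext versions it can actually see (the pip index subcommand needs pip >= 21.2 and is marked experimental, so treat it as optional):

# Check which Python the failing pip belongs to
python --version
pip --version

# Optional (pip >= 21.2, experimental): list the torchtext versions
# visible to this interpreter
pip index versions torchtext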
@haliluyadd

I have the same problem as you. Have you solved it?

@nolliv22

Hi, I got the same issue and I managed to fix it.

Here are the steps:

  1. Use Python 3.8, as there are still many pre-built packages available on PyPI for it. I recommend using pyenv to install it alongside your system Python so you won't break your system Python version. Using pyenv:
# Install Python 3.8
pyenv install 3.8

# Enable Python 3.8
pyenv shell 3.8

# Verify that the version is correct
pip --version 

# Should output something like:
pip 23.0.1 from /home/michel/.pyenv/versions/3.8.19/lib/python3.8/site-packages/pip (python 3.8)
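
The commands in the next steps assume a virtual environment named venv inside the repository; if you haven't created one yet, doing so while the pyenv 3.8 shell is active should work (the venv/ path is simply what the commands below use):

# Create a Python 3.8 virtual environment in the repo root
python -m venv venv

# Sanity check: the venv's pip should report Python 3.8
venv/bin/pip --version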
  2. Then you can install the dependencies (there are some extra steps to avoid errors):
# Install dependencies
# Force install old version of sklearn and downgrade protobuf to avoid errors
SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True venv/bin/pip install -r requirements.txt
venv/bin/pip install protobuf==3.19.0 
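
If you want to confirm nothing else broke after the protobuf downgrade, pip can check the installed set for incompatible pins (it may flag the deliberate downgrade itself, which you can ignore here):

# Report any broken/incompatible dependencies in the venv
venv/bin/pip check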
  3. Finally, replace the dataset source URLs, as the originals are either unavailable or too slow, with this one:
# Replace every occurrence of the old source URL with the new one
grep -rl 'http://yann.lecun.com/exdb/mnist/' . | xargs sed -i 's|http://yann.lecun.com/exdb/mnist/|https://github.com/golbin/TensorFlow-MNIST/raw/master/mnist/data/|g'
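
To verify the replacement before training, a quick grep for the old URL should come back empty (just a sanity check, not one of the original steps):

# Should print nothing once every old URL has been replaced
grep -rn 'http://yann.lecun.com/exdb/mnist/' .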

After that, you should be able to run it normally:

venv/bin/python training.py --name mnist --params configs/mnist_params.yaml --commit none
/home/michel/Downloads/backdoors101/venv/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at  /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
  return torch._C._cuda_getDeviceCount() > 0
Downloading https://github.com/golbin/TensorFlow-MNIST/raw/master/mnist/data/train-images-idx3-ubyte.gz to .data/MNIST/raw/train-images-idx3-ubyte.gz
9920512it [00:02, 3679627.46it/s]                                                                  
Extracting .data/MNIST/raw/train-images-idx3-ubyte.gz to .data/MNIST/raw
Downloading https://github.com/golbin/TensorFlow-MNIST/raw/master/mnist/data/train-labels-idx1-ubyte.gz to .data/MNIST/raw/train-labels-idx1-ubyte.gz
32768it [00:00, 62339.13it/s]                                                                      
Extracting .data/MNIST/raw/train-labels-idx1-ubyte.gz to .data/MNIST/raw
Downloading https://github.com/golbin/TensorFlow-MNIST/raw/master/mnist/data/t10k-images-idx3-ubyte.gz to .data/MNIST/raw/t10k-images-idx3-ubyte.gz
1654784it [00:01, 1595056.98it/s]                                                                  
Extracting .data/MNIST/raw/t10k-images-idx3-ubyte.gz to .data/MNIST/raw
Downloading https://github.com/golbin/TensorFlow-MNIST/raw/master/mnist/data/t10k-labels-idx1-ubyte.gz to .data/MNIST/raw/t10k-labels-idx1-ubyte.gz
8192it [00:00, 13965.30it/s]                                                                       
Extracting .data/MNIST/raw/t10k-labels-idx1-ubyte.gz to .data/MNIST/raw
Processing...
/home/michel/Downloads/backdoors101/venv/lib/python3.8/site-packages/torchvision/datasets/mnist.py:480: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
  return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
Done!
2024-04-24 21:34:04 - WARNING  - | name | value | 
 |-----|-----|
| task | MNIST |
| synthesizer | Pattern |
| batch_size | 64 |
| test_batch_size | 100 |
| lr | 0.01 |
| momentum | 0.9 |
| decay | 0.0005 |
| epochs | 350 |
| save_on_epochs | [] |
| optimizer | SGD |
| log_interval | 100 |
| scheduler | False |
| poisoning_proportion | 1.0 |
| backdoor_label | 8 |
| backdoor | True |
| backdoor_dynamic_position | False |
| loss_balance | MGDA |
| mgda_normalize | loss |
| save_model | False |
| log | False |
| tb | False |
| transform_train | True |
| loss_tasks | ['backdoor', 'normal'] |
| current_time | Apr.24_21.33.58 |
| commit | none |
| name | mnist |
100it [00:03, 27.61it/s]
2024-04-24 21:34:07 - WARNING  - Backdoor False. Epoch:     0. Accuracy: Top-1: 9.15 | Loss: value: 2.31
0it [00:00, ?it/s]2024-04-24 21:34:07 - INFO     - Epoch:   1. Batch:     0/938.  Losses: ['backdoor: 2.33', 'normal: 2.29', 'total: 2.29']. Scales: ['backdoor: 0.01', 'normal: 0.99']
99it [00:10, 10.18it/s]2024-04-24 21:34:18 - INFO     - Epoch:   1. Batch:   100/938.  Losses: ['backdoor: 1.02', 'normal: 1.04', 'total: 1.03']. Scales: ['backdoor: 0.16', 'normal: 0.84']
200it [00:20, 10.09it/s]2024-04-24 21:34:28 - INFO     - Epoch:   1. Batch:   200/938.  Losses: ['backdoor: 0.10', 'normal: 0.22', 'total: 0.21']. Scales: ['backdoor: 0.09', 'normal: 0.91']
300it [00:31,  9.09it/s]2024-04-24 21:34:39 - INFO     - Epoch:   1. Batch:   300/938.  Losses: ['backdoor: 0.04', 'normal: 0.13', 'total: 0.13']. Scales: ['backdoor: 0.08', 'normal: 0.92']
400it [00:41,  8.77it/s]2024-04-24 21:34:49 - INFO     - Epoch:   1. Batch:   400/938.  Losses: ['backdoor: 0.03', 'normal: 0.11', 'total: 0.11']. Scales: ['backdoor: 0.07', 'normal: 0.93']
500it [00:51,  9.02it/s]2024-04-24 21:34:59 - INFO     - Epoch:   1. Batch:   500/938.  Losses: ['backdoor: 0.01', 'normal: 0.09', 'total: 0.09']. Scales: ['backdoor: 0.06', 'normal: 0.94']
600it [01:01,  9.06it/s]2024-04-24 21:35:09 - INFO     - Epoch:   1. Batch:   600/938.  Losses: ['backdoor: 0.02', 'normal: 0.10', 'total: 0.09']. Scales: ['backdoor: 0.07', 'normal: 0.93']
699it [01:12,  9.40it/s]2024-04-24 21:35:20 - INFO     - Epoch:   1. Batch:   700/938.  Losses: ['backdoor: 0.01', 'normal: 0.09', 'total: 0.09']. Scales: ['backdoor: 0.07', 'normal: 0.93']
799it [01:22, 10.43it/s]2024-04-24 21:35:30 - INFO     - Epoch:   1. Batch:   800/938.  Losses: ['backdoor: 0.01', 'normal: 0.06', 'total: 0.05']. Scales: ['backdoor: 0.09', 'normal: 0.91']
899it [01:33,  9.95it/s]2024-04-24 21:35:40 - INFO     - Epoch:   1. Batch:   900/938.  Losses: ['backdoor: 0.01', 'normal: 0.08', 'total: 0.07']. Scales: ['backdoor: 0.06', 'normal: 0.94']
...
