Create the environment from the `environment.yml` file:

```shell
conda env create -f environment.yml
```
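For reference, a conda `environment.yml` generally has the following shape. The names and versions below are placeholders, not the project's actual dependencies; use the file shipped with the repo:

```yaml
name: lth-pruning        # placeholder; the real name is defined in the repo's file
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.8           # illustrative versions only
  - pytorch
  - torchvision
  - pip
```

After creation, activate the environment with `conda activate <env-name>`, using the name defined in the repo's `environment.yml`.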
- Download the CUB-200 dataset from the link.
- Preprocess the noisy concepts in the dataset using the following commands:

```shell
cd scripts_data
python download_cub.py
```
The train-test-val splits of all the datasets are given in the corresponding json files in the `scripts_data` directory.
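As a sanity check, the split files can be read with the standard `json` module. The exact schema is repo-specific, so the structure below (`train`/`val`/`test` lists of image ids, and the filename) is only an assumption for illustration:

```python
import json
from pathlib import Path

# Hypothetical structure of a train-test-val split file; the real keys in
# scripts_data/*.json may differ.
splits = {
    "train": ["img_0001", "img_0002"],
    "val": ["img_0003"],
    "test": ["img_0004"],
}

path = Path("train_val_test_split.json")  # illustrative filename
path.write_text(json.dumps(splits))

loaded = json.loads(path.read_text())
# The splits should be disjoint: no image may appear in both train and test.
assert not set(loaded["train"]) & set(loaded["test"])
print({k: len(v) for k, v in loaded.items()})
```

A quick check like this catches a mis-set `json_root` before a long training run starts.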
- For CUB-200, check the `config/BB_cub.yml` file.
- For HAM10000, check the `config/BB_derma.yml` file.
- Before starting the training process, edit the `data_root`, `json_root`, and `logs` parameters in the config file `config/BB_cub.yaml` to set the paths of the images, the json files for the train-test-val splits, and the output directory, respectively.
- Optionally, before following the steps, refer to the `./iPython/Cub-Dataset-understanding.ipynb` notebook to understand the CUB-200 dataset.
- Preprocess the noisy concepts as described earlier.
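Concretely, the config edits amount to pointing three paths at your machine. The values below are placeholders, not the repo's defaults:

```yaml
# config/BB_cub.yaml (excerpt; all paths are placeholders)
data_root: /path/to/CUB_200_2011/images   # root folder of the dataset images
json_root: /path/to/scripts_data          # train-test-val split json files
logs: /path/to/output                     # where checkpoints and outputs are saved
```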
- Follow the steps below for the CUB-200 dataset:

```shell
python main_lth_pruning.py --dataset "cub"
python main_lth_save_activations.py --dataset "cub"
python main_lth_generate_cavs.py --dataset "cub"
python main_lth_pcbm.py --dataset "cub"
python main_lth_get_concepts.py --dataset "cub"
```
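For intuition on the CAV-generation step: a concept activation vector (CAV) is the weight vector of a linear classifier trained to separate a concept's activations from negative examples. Below is a minimal, self-contained sketch using a hand-rolled logistic regression on toy data; the repo's `main_lth_generate_cavs.py` will differ in details (real activations, a library classifier, per-layer CAVs):

```python
import numpy as np

def train_cav(pos, neg, lr=0.1, steps=500):
    """Logistic-regression CAV: the unit-norm weight vector separating
    concept activations (pos) from negatives (neg), via gradient descent."""
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        grad = p - y                            # logistic-loss gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w / np.linalg.norm(w)                # unit-norm CAV

rng = np.random.default_rng(0)
# Toy activations: the "concept" shifts the first dimension by +3.
pos = rng.normal(0, 1, (100, 8)); pos[:, 0] += 3
neg = rng.normal(0, 1, (100, 8))
cav = train_cav(pos, neg)
print(cav[0])  # dominant component lies along the concept direction
```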
Edit the `labels_for_tcav` parameter in the file `config/BB_cub.yaml` to select the class label for which to generate the Grad-CAM saliency maps. By default, we generate the saliency map for the 2nd image of the desired class in the test set.

```shell
python main_heatmap_save.py --config "config/BB_cub.yaml"
```
Refer to the `./iPython/Analysis-CUB_Test-GradCAM.ipynb` notebook to analyze the generated saliency maps.
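Conceptually, Grad-CAM weights each feature map of the last convolutional layer by the global-average-pooled gradient of the class score, then keeps only the positive evidence. A minimal numpy sketch of that combination step (the repo's implementation operates on the trained blackbox and will differ):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: arrays of shape (C, H, W) from the last
    conv layer. Returns an (H, W) saliency map normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))         # alpha_c: GAP of gradients
    cam = np.tensordot(weights, feature_maps, 1)  # sum_c alpha_c * A_c
    cam = np.maximum(cam, 0)                      # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                          # normalize to [0, 1]
    return cam

# Toy example: one channel supports the class, the other opposes it.
A = np.stack([np.ones((4, 4)), np.ones((4, 4))])
G = np.stack([np.full((4, 4), 1.0), np.full((4, 4), -0.5)])
cam = grad_cam(A, G)
print(cam.shape, cam.max())
```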
All the bash scripts to follow the steps above are included in the `./bash_script` directory.