This repository provides the source code for the paper titled FusionINN: Decomposable Image Fusion for Brain Tumor Monitoring by Nishant Kumar, Ziyan Tao, Jaikirat Singh, Yang Li, Peiwen Sun, Binghui Zhao and Stefan Gumhold.
The key contributions of the paper are as follows:
- Introduces a first-of-its-kind image fusion framework that harnesses an invertible normalizing flow for bidirectional training.
- The framework not only generates a fused image but can also decompose it into constituent source images, thus enhancing the interpretability for clinical practitioners.
- Conducts evaluation studies that show state-of-the-art results on standard fusion metrics, alongside the additional capability to decompose the fused images.
- Illustrates the framework's clinical viability by effectively decomposing and fusing new images from source modalities not encountered during training.
- The paper was accepted at the IJCAI Workshop 2024.
- Check out our AAAI 2024 work QuantOD on Outlier-aware Image Classification.
- Check out our CVPR 2023 highlight work FFS on Outlier-aware Object Detection.
- How to Cite
- Installation
- Training FusionINN framework
- Training other Fusion Models
- Inference procedure
- Using Pre-trained FusionINN Model
- Using Pre-trained DDFM Model
- Visualization of FusionINN results
- Evaluating Quantitative Performance
- License
## How to Cite

If you find this code or the paper useful in your research, please consider citing our paper as follows:
```bibtex
@misc{kumar2024fusioninn,
  title={FusionINN: Invertible Image Fusion for Brain Tumor Monitoring},
  author={Nishant Kumar and Ziyan Tao and Jaikirat Singh and Yang Li and Peiwen Sun and Binghui Zhao and Stefan Gumhold},
  year={2024},
  eprint={2403.15769},
  archivePrefix={arXiv},
  primaryClass={eess.IV}
}
```
## Installation

Install the required packages with:

```shell
pip install -r requirements.txt
```
We provide a processed version of the BraTS 2018 data used to train the FusionINN framework and the other evaluated fusion models. The processed data contains only those images from the BraTS 2018 dataset whose clinical annotations show the presence of the necrotic core, non-enhancing tumor, and peritumoral edema. It consists of roughly 10000 image pairs, which we shuffled and partitioned into training and test sets. The processed data can be downloaded from link for the test set and link for the training set.
Note: If you use this data in your own research, please cite the original manuscripts as stated on the BraTS 2018 page.
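The shuffle-and-partition step described above can be sketched in plain Python. This is illustrative only: the split ratio, the seed, and the use of pair identifiers are our assumptions, not taken from the repository's preprocessing code.

```python
import random

def split_pairs(pair_ids, test_fraction=0.1, seed=0):
    """Shuffle image-pair identifiers and partition them into train/test sets."""
    ids = list(pair_ids)
    random.Random(seed).shuffle(ids)        # deterministic shuffle
    n_test = int(len(ids) * test_fraction)  # size of the held-out test set
    return ids[n_test:], ids[:n_test]       # (train, test)

# Roughly 10000 T1ce/Flair pairs, as in the processed BraTS 2018 data.
train_ids, test_ids = split_pairs(range(10000))
```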
## Training FusionINN framework

Step 1: Download the FusionINN source code and make sure you are inside the project folder by running:

```shell
cd /path/to/FusionINN-main/FusionINN
```

Step 2: Before starting the training, update the folder path in `inn_train.py` to point to the location of the dataset, and confirm that you have allocated at least one GPU for the training. Then execute:

```shell
python inn_train.py
```

The trained model will be saved as `inn.pt` at `/path/to/FusionINN-main/FusionINN/inn.pt`.
## Training other Fusion Models

Step 1: Make sure you are inside the correct project folder. For example, the path for the DeepFuse model is:

```shell
cd /path/to/FusionINN-main/FusionModels/DeepFuse
```

Step 2: As with the FusionINN model, update the dataset folder path in `deepfuse_train.py` before starting the training. Then run:

```shell
python deepfuse_train.py
```

The trained model will be saved as `deepfuse.pt` at `/path/to/FusionINN-main/FusionModels/DeepFuse/deepfuse.pt`. Please follow the same procedure for the other fusion models.
## Inference procedure

Ensure that the folder paths in `inn_test.py` are correct. The inference procedure is the same whether you use the pre-trained models or models trained with the procedure above. For example, to test the FusionINN model, run:

```shell
cd /path/to/FusionINN-main/FusionINN
python inn_test.py
```

The files `val_fused_tensor.pt` and `val_recon_tensor.pt` will be saved at `/path/to/FusionINN-main/FusionINN/`.

Note: Please adapt the inference procedure to the model you want to test. The other models will only produce the `val_fused_tensor.pt` file, since they are not invertible.
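Because only the invertible FusionINN model writes the reconstruction tensor, a small check like the following can tell the two cases apart after inference. This is a sketch of our own; the helper name is hypothetical, while the two output filenames come from the instructions above.

```python
from pathlib import Path

def inference_outputs(out_dir):
    """Report which inference tensors a model produced in out_dir.

    The invertible FusionINN model should yield both files; the other
    fusion models yield only the fused tensor.
    """
    out = Path(out_dir)
    return {
        "fused": (out / "val_fused_tensor.pt").exists(),
        "recon": (out / "val_recon_tensor.pt").exists(),
    }
```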
## Using Pre-trained FusionINN Model

If you prefer to use the pre-trained FusionINN model instead of training a new instance, follow the steps outlined below:

Step 1: Download the pre-trained FusionINN model from here.

Step 2: Place the downloaded `inn.pt` file in the same folder where the trained model is saved during the training procedure, i.e. `/path/to/FusionINN-main/FusionINN/`.

Step 3: Run the inference procedure.
## Using Pre-trained DDFM Model

Please note that the DDFM approach requires a pre-trained diffusion model, as this method does not support training from scratch. To test the DDFM approach, follow the steps below:

Step 1: Download the pre-trained model named `256x256_diffusion_uncond.pt` from here and place it in the `/path/to/FusionINN-main/FusionModels/DDFM/models/` folder.

Step 2: Run the following commands:

```shell
cd /path/to/FusionINN-main/FusionModels/DDFM/
python sample_brats.py
```

These steps will save the fused images produced by the DDFM model in `/path/to/FusionINN-main/FusionModels/DDFM/output/recon/`.
## Visualization of FusionINN results

To visualize the fusion and decomposition performance of the FusionINN model, follow the steps below:

Step 1: Create five new folders in the model folder `/path/to/FusionINN-main/FusionINN`: two for the input images (named `T1ce` and `Flair`), one for the fused images (named `Fused`), and two for the decomposed images (named `Recon_T1ce` and `Recon_Flair`).
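Step 1 can be scripted rather than done by hand. This sketch assumes only the five folder names listed above; the helper name is ours.

```python
import os

VIS_FOLDERS = ["T1ce", "Flair", "Fused", "Recon_T1ce", "Recon_Flair"]

def make_vis_folders(model_dir):
    """Create the input, fused, and decomposed image folders for visualization."""
    for name in VIS_FOLDERS:
        os.makedirs(os.path.join(model_dir, name), exist_ok=True)

# Example (adjust the path to your checkout):
# make_vis_folders("/path/to/FusionINN-main/FusionINN")
```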
Step 2: Subsequently, run the following commands:

```shell
cd /path/to/FusionINN-main/FusionINN
python inn_vis.py
```

Note: Please adapt the visualization procedure to the model you want to test. The other models do not require folders for decomposed images.
## Evaluating Quantitative Performance

Step 1: To compute the SSIM metric scores, run the following commands:

```shell
cd /path/to/FusionINN-main/
python ssim_test.py
```

Step 2: Make sure you have MATLAB installed in your workspace. Enter the MATLAB environment and add all folders and subfolders of the FusionINN project to the MATLAB search path. Then, running the `evaluate.m` file available at `/path/to/FusionINN-main/` will save a MATLAB file `Q.mat` in the same folder. The `Q.mat` file contains the scores of four additional fusion metrics for an evaluated fusion model.

Step 3: Finally, run the following commands to obtain the average values of all five fusion metrics for all evaluated models on the test image set:

```shell
cd /path/to/FusionINN-main/
python mean_test.py
```
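The averaging in Step 3 amounts to a mean over per-image scores for each metric. A minimal sketch, assuming a simple dict-of-lists layout (the metric names, values, and function name here are illustrative; the actual `mean_test.py` may organize its data differently):

```python
from statistics import mean

def average_metrics(scores):
    """Average per-image fusion-metric scores into one value per metric."""
    return {metric: mean(values) for metric, values in scores.items()}

# Hypothetical per-image scores for one evaluated model.
example = {"SSIM": [0.91, 0.89, 0.93], "Qabf": [0.55, 0.57, 0.53]}
averages = average_metrics(example)
```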
We thank this repository for providing the MATLAB code for the fusion metrics other than the SSIM function.

## License

This software is licensed under the MIT license.