【ICME2025】 Official PyTorch Code for "Learning Dual-Domain Multi-Scale Representations for Single Image Deraining"


Learning Dual-Domain Multi-Scale Representations for Single Image Deraining

Paper | Supplement | Project

🔥🔥🔥 News

  • 2025-02-10: This repo is released.

Abstract: Existing image deraining methods typically rely on single-input, single-output, and single-scale architectures, which overlook the joint multi-scale information between external and internal features. Furthermore, single-domain representations are often too restrictive, limiting their ability to handle the complexities of real-world rain scenarios. To address these challenges, we propose a novel Dual-Domain Multi-Scale Representation Network (DMSR). The key idea is to exploit joint multi-scale representations from both external and internal domains in parallel while leveraging the strengths of both spatial and frequency domains to capture more comprehensive properties. Specifically, our method consists of two main components: the Multi-Scale Progressive Spatial Refinement Module (MPSRM) and the Frequency Domain Scale Mixer (FDSM). The MPSRM enables the interaction and coupling of multi-scale expert information within the internal domain using a hierarchical modulation and fusion strategy. The FDSM extracts multi-scale local information in the spatial domain, while also modeling global dependencies in the frequency domain. Extensive experiments show that our model achieves state-of-the-art performance across six benchmark datasets.


⚙️ Dependencies

  • Python 3.8
  • PyTorch 1.9.0
  • NVIDIA GPU + CUDA
```shell
# Clone the GitHub repo and go to the default directory 'DMSR'.
git clone https://github.com/zs1314/DMSR.git
cd DMSR
conda create -n DMSR python=3.8
conda activate DMSR
# Install PyTorch 1.9.0 first (pick the command matching your CUDA version from pytorch.org),
# then the remaining dependencies:
pip install matplotlib scikit-image opencv-python yacs joblib natsort h5py tqdm
```

Install warmup scheduler

```shell
cd pytorch-gradual-warmup-lr; python setup.py install; cd ..
```
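The warmup scheduler ramps the learning rate up linearly for a few epochs before handing off to the main decay schedule (MPRNet-style training typically follows warmup with cosine annealing). A minimal pure-Python sketch of that schedule shape, with hypothetical values for `base_lr`, `warmup_epochs`, and `total_epochs` (check `training.yml` for the actual settings):

```python
import math

def learning_rate(epoch, base_lr=2e-4, warmup_epochs=3, total_epochs=100, eta_min=1e-6):
    """Linear warmup to base_lr, then cosine decay toward eta_min."""
    if epoch < warmup_epochs:
        # Ramp linearly so the last warmup epoch reaches base_lr exactly.
        return base_lr * (epoch + 1) / warmup_epochs
    # Cosine-anneal over the remaining epochs.
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))

schedule = [learning_rate(e) for e in range(100)]
```

The installed `GradualWarmupScheduler` wraps a PyTorch optimizer and an `after_scheduler` to produce this kind of curve; the sketch above only illustrates the resulting learning-rate trajectory.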

⚒️ TODO

  • Release code
  • Release pretrained models

🔗 Contents

  1. Datasets
  2. Training
  3. Testing
  4. Results
  5. Citation
  6. Contact
  7. Acknowledgements

🖨️ Datasets

Used training and testing sets can be downloaded as follows:

| Training Set | Testing Set | Visual Results |
| :--- | :--- | :--- |
| Rain13K [complete training dataset: Google Drive / Baidu Disk] | Test100 + Rain100H + Rain100L + Test2800 + Test1200 [complete testing dataset: Google Drive / Baidu Disk] | Google Drive / Baidu Disk |

Download training and testing datasets and put them into the corresponding folders of Datasets/. See Datasets for the detail of the directory structure.

🔧 Training

  • Download the training set (Rain13K, already processed) and testing sets (Test100 + Rain100H + Rain100L + Test2800 + Test1200, already processed), and place them in Datasets/.

  • Run the following script. The training configuration is in training.yml.

```shell
python train.py
```
  • Training results are saved in checkpoints/.

🔨 Testing

  • Download the testing datasets (Test100 + Rain100H + Rain100L + Test2800 + Test1200) and place them in Datasets/.

  • Run the following script. The testing configuration is in test.py.

```shell
python test.py
```
  • The output is in results/.

  • To reproduce the PSNR/SSIM scores reported in the paper, run the following script in MATLAB:

```matlab
evaluate_PSNR_SSIM.m
```
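The official scores come from the MATLAB script above, which deraining papers typically compute on the luminance (Y) channel, so a plain RGB computation will not match the paper exactly. For quick sanity checks on restored outputs, a minimal NumPy PSNR sketch (illustrative only, not a replacement for the MATLAB evaluation):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two images in [0, max_val]."""
    reference = np.asarray(reference, dtype=np.float64)
    restored = np.asarray(restored, dtype=np.float64)
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a restored image that is off by 5 gray levels everywhere scores about 34.15 dB against its ground truth.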

🔎 Results

We achieve state-of-the-art performance. Detailed results can be found in the paper.

Quantitative Comparison
  • results in Table 1 of the main paper

  • results in Table 2 of the main paper

  • results in Table 2 of the supplementary material

  • results in Table 1 of the supplementary material

  • results in Table 3 of the supplementary material

Visual Comparison
  • results in Figure 2 of the main paper

  • results in Figure 3 of the main paper

  • results in Figure 4 of the main paper

  • results in Figure 1 of the supplementary material

  • results in Figure 2 of the supplementary material

📎 Citation

We kindly request that you cite our work if you use the code or reference our findings in your research:

@inproceedings{zou2025dmsr,
  title={Learning Dual-Domain Multi-Scale Representations for Single Image Deraining},
  author={Zou, Shun and Zou, Yi and Zhang, Mingya and Luo, Shipeng and Gao, Guangwei and Qi, Guojun},
  booktitle={ICME},
  year={2025}
}

📂 Contact

Should you have any questions, please contact shunzou.njau@gmail.com.

💡 Acknowledgements

This code is built on MPRNet and ChaIR.
