- 2025-02-10: This repo is released.
Abstract: Existing image deraining methods typically rely on single-input, single-output, and single-scale architectures, which overlook the joint multi-scale information between external and internal features. Furthermore, single-domain representations are often too restrictive, limiting their ability to handle the complexities of real-world rain scenarios. To address these challenges, we propose a novel Dual-Domain Multi-Scale Representation Network (DMSR). The key idea is to exploit joint multi-scale representations from both external and internal domains in parallel while leveraging the strengths of both spatial and frequency domains to capture more comprehensive properties. Specifically, our method consists of two main components: the Multi-Scale Progressive Spatial Refinement Module (MPSRM) and the Frequency Domain Scale Mixer (FDSM). The MPSRM enables the interaction and coupling of multi-scale expert information within the internal domain using a hierarchical modulation and fusion strategy. The FDSM extracts multi-scale local information in the spatial domain, while also modeling global dependencies in the frequency domain. Extensive experiments show that our model achieves state-of-the-art performance across six benchmark datasets.
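The dual-domain idea above pairs local modeling in the spatial domain with global dependency modeling in the frequency domain. The toy NumPy sketch below illustrates only that contrast, not the paper's MPSRM/FDSM implementation; the function names `spatial_local_filter` and `freq_global_mix` are hypothetical:

```python
import numpy as np

def spatial_local_filter(img, k=3):
    """Local spatial filtering: each output pixel depends only on a k x k window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def freq_global_mix(img, weight):
    """Global mixing: an element-wise weight applied in the Fourier domain
    affects every spatial location at once, since each frequency bin is global."""
    spectrum = np.fft.fft2(img)
    return np.real(np.fft.ifft2(spectrum * weight))

img = np.random.rand(8, 8)
# An all-ones frequency weight is the identity: the input is reconstructed.
assert np.allclose(freq_global_mix(img, np.ones_like(img)), img)
```

A learned network would replace the fixed averaging kernel and the hand-set frequency weight with trainable parameters; this sketch only shows why the two domains offer complementary (local vs. global) receptive fields.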
- Python 3.8
- PyTorch 1.9.0
- NVIDIA GPU + CUDA
```shell
# Clone the GitHub repo and go to the default directory 'DMSR'.
git clone https://github.com/zs1314/DMSR.git
cd DMSR

conda create -n DMSR python=3.8
conda activate DMSR
pip install matplotlib scikit-image opencv-python yacs joblib natsort h5py tqdm

# Install the warmup scheduler.
cd pytorch-gradual-warmup-lr; python setup.py install; cd ..
```
- Release code
- Release pretrained models
The training and testing sets can be downloaded as follows:
| Training Set | Testing Set | Visual Results |
|---|---|---|
| Rain13K [complete training dataset: Google Drive / Baidu Disk] | Test100 + Rain100H + Rain100L + Test2800 + Test1200 [complete testing dataset: Google Drive / Baidu Disk] | Google Drive / Baidu Disk |
Download the training and testing datasets and put them into the corresponding folders of `Datasets/`. See `Datasets` for details of the directory structure.
- Download the training set (Rain13K, already processed) and testing sets (Test100 + Rain100H + Rain100L + Test2800 + Test1200, already processed), and place them in `Datasets/`.
- Run the following script. The training configuration is in `training.yml`:

  ```shell
  python train.py
  ```

- The training experiment is saved in `checkpoints/`.
- Download the testing sets (Test100 + Rain100H + Rain100L + Test2800 + Test1200) and place them in `Datasets/`.
- Run the following script. The testing configuration is in `test.py`:

  ```shell
  python test.py
  ```

- The output is saved in `results/`.
- To reproduce the PSNR/SSIM scores reported in the paper, run `evaluate_PSNR_SSIM.m` in MATLAB.
We achieve state-of-the-art performance. Detailed results can be found in the paper.
Quantitative Comparison (click to expand)
- results in Table 1 of the main paper
- results in Table 2 of the main paper
- results in Table 2 of the supplementary material
- results in Table 1 of the supplementary material
- results in Table 3 of the supplementary material
Visual Comparison (click to expand)
- results in Figure 2 of the main paper
- results in Figure 3 of the main paper
- results in Figure 4 of the main paper
- results in Figure 1 of the supplementary material
- results in Figure 2 of the supplementary material
Should you have any questions, please contact shunzou.njau@gmail.com.
We kindly request that you cite our work if you utilize the code or reference our findings in your research:
@inproceedings{zou2025dmsr,
title={Learning Dual-Domain Multi-Scale Representations for Single Image Deraining},
author={Zou, Shun and Zou, Yi and Zhang, Mingya and Luo, Shipeng and Gao, Guangwei and Qi, Guojun},
booktitle={ICME},
year={2025}
}