This repository is an unofficial implementation of the paper *Deep Unfolding Network for Image Super-Resolution*.
| File/Folder | Description |
|---|---|
| `img/` | images for the README |
| `options/` | configuration files |
| `kernels/` | pre-trained kernels |
| `model_zoo/` | pre-trained models |
| `models/` | model files |
| `testsets/` | test images |
| `utils/` | utility functions |
| `main_test_bicubic.py`, `main_test_realapplication.py`, `main_test_table1.py` | test code for the images and tables in the paper |
| `USRNet.yaml` | conda environment file |
| `train.py` | training code |
| `test.py` | test code |
- Clone the repository.
- Create a conda environment using the provided `USRNet.yaml` file.
- Prepare the training and testing data:
  - For training, use the DIV2K + Flickr2K datasets.
  - For testing, use 100 images from ImageNet (you can download them from here).
- Prepare the pre-trained models:
  - You can download pre-trained models from the official pre-trained models or my pre-trained models.
  - Put the pre-trained models in the `model_zoo/` folder.
- Change the paths in `options/train_usrnet.json` (see the sketch after this list):
  - `"datasets"/"train"/"dataroot_H"`: your training data folder
  - `"datasets"/"test"/"dataroot_H"`: your test data folder
  - `"path"/"root"/"pretrained_netG"`: your pre-trained model path (`xxx/Implement-USRNet/model_zoo/xxx.pth`)
I use wandb to log the training process. You can change `wandbconfig` to `True` in `test.py` and `train.py` if you want to use it.
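
For reference, the switch might be wired roughly as follows; this is only a sketch of the assumed pattern, and the project name and logged values are placeholders rather than the actual contents of `train.py`.

```python
# Sketch of the assumed wandb switch (placeholder project name and metrics;
# the real train.py may wire this differently).
import wandb

wandbconfig = True  # set to False to disable logging

if wandbconfig:
    wandb.init(project="Implement-USRNet")  # hypothetical project name

# ... inside the training loop ...
if wandbconfig:
    wandb.log({"G_loss": 0.01, "step": 1})  # placeholder values
```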
When running `main_test_bicubic.py`, `main_test_realapplication.py`, or `main_test_table1.py`, you need to change the `model_name` variable in the code to the name of the pre-trained model in the `model_zoo/` folder.
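
For example, the edit is a one-line change along these lines (a sketch only; apart from `model_name` and the `model_zoo/` folder, everything here is assumed):

```python
# Sketch of the variable to edit in the test scripts; only model_name and the
# model_zoo/ folder come from the description above, the rest is assumed.
import os

model_name = 'usrnet'  # name of the pre-trained .pth file in model_zoo/
model_path = os.path.join('model_zoo', model_name + '.pth')
```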
```bibtex
@inproceedings{zhang2020deep, % USRNet
  title={Deep unfolding network for image super-resolution},
  author={Zhang, Kai and Van Gool, Luc and Timofte, Radu},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  pages={3217--3226},
  year={2020}
}
```