
DSen2-CR: A network for removing clouds from Sentinel-2 images. This repo contains the model code, written in Python/Keras, as well as links to pre-trained checkpoints and the SEN12MS-CR dataset.


Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion


Example results from the final setup of DSen2-CR using the CARL loss. Left: the input cloudy image; middle: the predicted image; right: the cloud-free target image.


This repository contains the code and models for the paper

Meraner, A., Ebel, P., Zhu, X. X., & Schmitt, M. (2020). Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion. ISPRS Journal of Photogrammetry and Remote Sensing, 166, 333-346.

The open-access paper is available at the Elsevier ISPRS page.

The paper won the ISPRS 2020 Best Paper Award, and then went on to win the U.V. Helava Award as the best paper of the 2020-2021 period.

If you use this code, the models, or the dataset for your research, please cite us accordingly:

@article{Meraner2020,
title = "Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
volume = "166",
pages = "333-346",
year = "2020",
issn = "0924-2716",
doi = "https://doi.org/10.1016/j.isprsjprs.2020.05.013",
url = "http://www.sciencedirect.com/science/article/pii/S0924271620301398",
author = "Andrea Meraner and Patrick Ebel and Xiao Xiang Zhu and Michael Schmitt",
keywords = "Cloud removal, Optical imagery, SAR-optical, Data fusion, Deep learning, Residual network",
}

Code


NOTE

The code in this repository was written in my early Python years and might not be the most elegant in places. I apologize for any issues or bugs.

Should you notice something in the code, please feel free to open a GitHub issue (or, even better, a pull request :)), or let me know at andrea.meraner [at] eumetsat.int!


Installation

The network is written in Keras with TensorFlow as the backend. It is strongly advised to use GPU support to run the models.

A conda environment with the required dependencies can be created with

conda create -n dsen2cr_env
conda activate dsen2cr_env
conda install -c conda-forge python=3.7 tensorflow-gpu=1.15.0 keras=2.2.4 numpy=1.17 scipy rasterio pydot graphviz h5py=2.10.0

Alternatively, a Dockerfile is provided in Docker/Dockerfile which can be used to create a Docker image including CUDA.

Note: This code was mainly written at the end of 2018/start of 2019 with the Python package versions available at that time. Using it with updated packages might require some modifications to the code. If you try this code with updated libraries, please let me know your findings (andrea.meraner [at] eumetsat.int).

To clone the repo:

git clone git@github.com:ameraner/dsen2-cr.git
cd dsen2-cr

Usage

Basic Commands

A new model can be trained from scratch by simply launching

cd Code/
python dsen2cr_main.py

The setup and hyperparameters can be tuned directly in the first lines of the main code.

To resume training from a previously saved checkpoint, run

python dsen2cr_main.py --resume path/to/checkpoint.h5

To predict images and evaluate the metrics of a trained network, run

python dsen2cr_main.py --predict path/to/checkpoint.h5

Dataset Paths

The main code looks up the paths to the training/validation/test data in the CSV file Data/datasetfilelist.csv; an example is provided in the repository. The first column of each entry is an integer flag: 1 marks a training sample, 2 a validation sample, and 3 a test sample. The second, third, and fourth columns give the names of the subfolders containing the Sentinel-1, cloud-free Sentinel-2, and cloudy Sentinel-2 images, respectively. The fifth column states the filename of the image, which must be identical in all three folders. The three subfolders must be located in the path defined by the variable input_data_folder in the main script.
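As a sketch, the split file described above can be parsed with the Python standard library alone. The column layout follows the description; the helper name and the folder/file names are illustrative, not part of the repository:

```python
import csv
from pathlib import Path

# Integer flags in the first CSV column, as described above.
SPLITS = {"1": "train", "2": "val", "3": "test"}

def read_dataset_filelist(csv_path, input_data_folder):
    """Group the sample file paths by train/val/test split.

    Each CSV row: split_flag, s1_dir, s2_cloudfree_dir, s2_cloudy_dir, filename.
    Returns a dict mapping split name -> list of (s1, s2_cloudfree, s2_cloudy) paths.
    """
    root = Path(input_data_folder)
    samples = {"train": [], "val": [], "test": []}
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue  # skip blank lines
            flag, s1_dir, s2_free_dir, s2_cloudy_dir, filename = row[:5]
            samples[SPLITS[flag.strip()]].append((
                root / s1_dir / filename,        # Sentinel-1 patch
                root / s2_free_dir / filename,   # cloud-free Sentinel-2 target
                root / s2_cloudy_dir / filename  # cloudy Sentinel-2 input
            ))
    return samples
```

This is only an illustration of the file format; the actual loading logic lives in the main script.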

If you wish to download the full list of patches, including the train/val/test split indication, you can find the full CSV files here. Please see the Dataset section below for more information.

Trained Model Checkpoints

The full DSen2-CR model trained by optimizing the CARL loss can be downloaded from Google Drive here.

The full model trained on a plain L1 loss can be downloaded here. The network trained on CARL but without SAR input can be found here. The network trained without SAR, and on plain L1 loss, can be found here.

Dataset

The dataset used in this work is called SEN12MS-CR. A slightly reprocessed version of it is publicly available for download here. If you use this dataset for your research, please cite our related IEEE TGRS paper

Ebel, P., Meraner, A., Schmitt, M., & Zhu, X. X. (2020). Multisensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery. IEEE Transactions on Geoscience and Remote Sensing.

describing the dataset release. The paper can be accessed for free on the IEEE Xplore page. See also a related website here.

@article{Ebel2020,
  author={P. {Ebel} and A. {Meraner} and M. {Schmitt} and X. X. {Zhu}},
  journal={IEEE Transactions on Geoscience and Remote Sensing}, 
  title={Multisensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery}, 
  year={2020},
  volume={},
  number={},
  pages={1-13},
  doi={10.1109/TGRS.2020.3024744}}

NOTE

The published SEN12MS-CR dataset described above is a reprocessed version of the one used in this work.

The main difference is that the version used in this work was in the WGS84 coordinate system, whereas the released one was reprocessed with a UTM CRS transform (to make the patches co-registered with available semantic segmentations and scene-wise labels; see the paper). The differences will most probably not affect network performance, and the pre-trained models can still be used.

The CSV files to be used as model inputs, linked in the "Dataset Paths" section above, have been adapted to match the file-naming convention of the published reprocessed dataset. Note that due to the reprocessing, the train/val/test splits are not identical to those used for the paper; the differences, however, are minor.

@CodyKurpanek created a Jupyter Notebook that downloads and processes the public SEN12MS-CR dataset to fit the structure expected by this code. The notebook can be found under Data/unpack_dataset.ipynb.

PyTorch Model

If you're interested in a PyTorch implementation of the DSen2-CR model, the corresponding class is available in Code/dsen2cr_pytorch_model.py (thanks to Patrick Ebel).


Credits

Although now heavily modified and expanded, this code was originally based on code by Charis Lanaras available in the DSen2 repo. As the name suggests, the network used in this work is also heavily based on the original DSen2 network (see the related paper). I am grateful to the authors for making the original source code available.
