
Overview • Features • Install • Getting Started • Documentation • Contribution • References
SLURP: Smart Land Use Reconstruction Pipeline
SLURP is your companion to compute a simple land-use/land-cover mask from Very High Resolution (VHR) optical images. It provides several weakly supervised or unsupervised learning algorithms that produce one-versus-all masks (water, vegetation, shadow, urban). A final algorithm then stacks them all together and regularizes them into a single multiclass mask.
SLURP uses global reference data, such as Global Surface Water (Pekel) for water detection or the World Settlement Footprint (WSF) for building detection.
Data preparation can be achieved with Orfeo ToolBox (OTB) or other tools, in order to bring all necessary data into the same projection. You can either build your mask step by step, or use a batch script to launch and build the final mask automatically.
You need to clone the repository and pip install SLURP.
git clone [email protected]:pluto/slurp.git
To install SLURP, you need OTB, EOScale and some libraries already installed on VRE OT.
Otherwise, if you are connected to TREX, or working on your personal computer (Linux), you may set up the environment as described below.
On TREX, connect to a computing node to create and compile the virtual environment (needed to compile rasterio at install time):
sinter -A cnes_level2 -N 1 -n 8 --time=02:00:00 --mem=64G --x11 --pty bash
Load OTB and create a virtual env with some Python libraries, then compile and install EOScale and SLURP:
module load otb/9.0.0-python3.8
# Create a virtual env based on Python 3.8.13
python -m venv slurp_env
. slurp_env/bin/activate
# upgrade pip and install several libraries
pip install pip --upgrade
cd <EOScale source folder>
pip install .
cd <SLURP source folder>
pip install .
# for validation tests
pip install pytest
Your environment is now ready: you can compute SLURP masks with slurp_watermask, slurp_urbanmask, etc.
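As a quick check of the installation, you can display the help of one of the tools, for instance:
slurp_watermask -h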
Once your environment has been set up, you can run SLURP.
A tutorial (with and without OTB) is available: Tutorial.md.
On TREX, you can directly use SLURP by sourcing the following environment.
source /work/CAMPUS/users/tanguyy/PLUTO/slurp_demo/init_slurp.sh
This will load OTB 9.0 and all Python dependencies.
You can also use a shell script with SLURM to launch the different mask algorithms on your images.
sbatch --export="PHR_IM=/work/scratch/tanguyy/public/RemyMartin/PHR_image_uint16.tif,OUTPUT_DIR=/work/scratch/tanguyy/public/RemyMartin/,CLUSTERS_VEG=4,CLUSTERS_LOW_VEG=2" /softs/projets/pluto/demo_slurp/compute_all_masks.pbs
Two scripts (to compute all the masks and the scores) are available in the conf/ directory.
Each mask needs some auxiliary files. They must be in the same projection and resolution, and share the bounding box of the VHR input image to enable mask computation. You can generate this data yourself or use the prepare script provided with SLURP.
The prepare script enables:
- Computation of stack validity (with or without a cloud mask)
- Computation of NDVI and NDWI
- Extraction of the largest Pekel file
- Extraction of the largest HAND file
- Extraction of the WSF file
- Computation of texture file with a convolution
It requires an OTB installation.
To run the script:
- Configure the JSON file. A template is available at conf/main_config.json with default values.
- Update the input, aux_layers, resources and prepare blocks inside the JSON file.
- Run the command:
slurp_prepare <JSON file>
You can override the JSON with CLI arguments. For example: slurp_prepare <JSON file> -file_vhr <VHR input image> -file_ndvi <path to store NDVI>
Type slurp_prepare -h for the complete list of options:
- overwriting of output files (-w),
- band identification (-red <1/3>, etc.),
- files to extract and reproject (-pekel, -hand, -wsf, etc.),
- output paths (-extracted_pekel, etc.),
- etc.
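As an illustration, a minimal preparation run could start from the shipped template and override a few options on the command line (the paths and band index below are placeholders):
# copy the template, then edit the input, aux_layers, resources and prepare blocks
cp conf/main_config.json my_config.json
# run the preparation, overriding the VHR image and red band index, and forcing overwrite
slurp_prepare my_config.json -file_vhr /path/to/vhr_image.tif -red 1 -w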
The water model is learned from Pekel (Global Surface Water) reference data and is based on NDVI/NDWI2 indices. The predicted mask is then cleaned with Pekel, optionally with HAND (Height Above Nearest Drainage) maps, and post-processed to remove artefacts.
To compute the mask:
- Configure the JSON file: a template is available at conf/main_config.json with default values.
- Update the input, aux_layers and masks blocks inside the JSON file. To go further, you can modify the resources, post_process and water blocks.
- Run the command:
slurp_watermask <JSON file>
You can override the JSON with CLI arguments. For example: slurp_watermask <JSON file> -file_vhr <VHR input image> -watermask <your watermask.tif>
Type slurp_watermask -h for the complete list of options:
- samples method (-samples_method, -nb_samples_water, etc.),
- add other raster features (-layers layer1 [layer 2 ..]),
- post-process mask (-remove_small_holes, -binary_closing, etc.),
- saving of intermediate files (-save),
- etc.
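For instance, a hypothetical run that adds post-processing and keeps intermediate files (the argument values below are illustrative, not recommended defaults):
# placeholder paths and values; adjust to your data
slurp_watermask my_config.json -file_vhr /path/to/vhr_image.tif -remove_small_holes 100 -binary_closing 3 -save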
The vegetation mask is computed with an unsupervised clustering algorithm. First, some primitives are computed from the VHR image (NDVI, NDWI2, textures). Then a segmentation (SLIC) is performed, and the segments are dispatched into several clusters depending on their features. A final labelling step assigns a class to each segment (e.g. high NDVI and low texture denote low vegetation).
To compute the mask:
- Configure the JSON file: a template is available at conf/main_config.json with default values.
- Update the input, aux_layers and masks blocks inside the JSON file. To go further, you can modify the resources and vegetation blocks.
- Run the command:
slurp_vegetationmask <JSON file>
You can override the JSON with CLI arguments. For example: slurp_vegetationmask <JSON file> -file_vhr <VHR input image> -vegetationmask <your vegetation mask.tif>
Type slurp_vegetationmask -h for the complete list of options:
- segmentation mode and parameters for the SLIC algorithm,
- number of workers (parallel processing for primitives and segmentation tasks)
- number of clusters assigned to vegetation (3 by default - 33%),
- etc.
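For instance, a hypothetical invocation (the paths are placeholders; the number of clusters is set in the vegetation block of the JSON file):
slurp_vegetationmask my_config.json -file_vhr /path/to/vhr_image.tif -vegetationmask vegetationmask.tif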
An urban model (building) is learned from the WSF reference map. The algorithm can take water and vegetation masks into account in order to improve sample selection (non-building pixels will be chosen outside WSF and outside the water/vegetation masks). The output is a "building probability" layer ([0..100]) that can be used by the stack algorithm.
To compute the mask:
- Configure the JSON file: a template is available at conf/main_config.json with default values.
- Update the input, aux_layers and masks blocks inside the JSON file. To go further, you can modify the resources and urban blocks.
- Run the command:
slurp_urbanmask <JSON file>
You can override the JSON with CLI arguments. For example: slurp_urbanmask <JSON file> -file_vhr <VHR input image> -urbanmask <your urban mask.tif>
Type slurp_urbanmask -h for the complete list of options:
- sample parameters,
- add other raster features (-layers layer1 [layer 2 ..]),
- elimination of pixels identified as water or vegetation (-watermask, -vegetationmask),
- etc.
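For instance, a hypothetical invocation that excludes pixels already identified as water or vegetation from the non-building samples (the paths are placeholders):
slurp_urbanmask my_config.json -file_vhr /path/to/vhr_image.tif -watermask watermask.tif -vegetationmask vegetationmask.tif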
The shadow mask detects dark areas (assumed to be shadows) based on two thresholds (RGB, NIR). A post-processing step removes small shadows, holes, etc. The resulting mask has three classes (no shadow, small shadow, big shadow). The big shadows can be used by the stack algorithm in the regularization step.
To compute the mask:
- Configure the JSON file: a template is available at conf/main_config.json with default values.
- Update the input, aux_layers and masks blocks inside the JSON file. To go further, you can modify the resources, post_process and shadow blocks.
- Run the command:
slurp_shadowmask <JSON file>
You can override the JSON with CLI arguments. For example: slurp_shadowmask <JSON file> -file_vhr <VHR input image> -shadowmask <your shadow mask.tif>
Type slurp_shadowmask -h for the complete list of options:
- relative thresholds (-th_rgb, -th_nir, etc.),
- post-process mask (-remove_small_objects, -binary_opening, etc.),
- etc.
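For instance, a hypothetical invocation with explicit thresholds and post-processing (the threshold and size values below are illustrative only, not recommended defaults):
slurp_shadowmask my_config.json -file_vhr /path/to/vhr_image.tif -th_rgb 0.2 -th_nir 0.2 -remove_small_objects 100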
The stack algorithm takes all previous masks into account to produce a 6-class mask (water, low vegetation, high vegetation, building, bare soil, other) and an auxiliary height layer (low/high/unknown). The algorithm can regularize the urban mask with a watershed algorithm based on the building probability and the context of the surrounding areas. It first computes a gradient on the image and fills a marker layer with known classes. A watershed step then adjusts the contours along the gradient image, thus regularizing building shapes.
To compute the mask:
- Configure the JSON file: a template is available at conf/main_config.json with default values.
- Update the input, aux_layers and masks blocks inside the JSON file. To go further, you can modify the resources, post_process and stack blocks.
- Run the command:
slurp_stackmasks <JSON file>
You can override the JSON with CLI arguments. For example: slurp_stackmasks <JSON file> -file_vhr <VHR input image> -remove_small_objects 500 -binary_closing 3
Type slurp_stackmasks -h for the complete list of options:
- watershed parameters,
- post-process parameters (-remove_small_objects, -binary_opening, etc.),
- class value of each element of the final mask,
- etc.
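Putting it all together, a complete run on one image could chain the tools as follows, sharing a single JSON configuration (the configuration file name is a placeholder; the order matters, since the urban mask can reuse the water and vegetation masks, and the stack step consumes them all):
# one shared configuration for all steps
slurp_prepare my_config.json
slurp_watermask my_config.json
slurp_vegetationmask my_config.json
slurp_shadowmask my_config.json
slurp_urbanmask my_config.json
slurp_stackmasks my_config.json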
The predicted mask is compared to a given raster ground truth, and metrics such as recall and precision are computed. The resulting mask shows the overlay of the prediction and the ground truth. An optional mode, useful for the urban mask, extracts the polygons of each raster and compares them, giving the number of expected buildings identified and the IoU score. The analysis can be restricted to a window of the input files.
slurp_scores -im <predicted mask> -gt <raster ground truth - OSM, ..> -out <your overlay mask>
Type slurp_scores -h for the complete list of options:
- selection of a window (-startx, -starty, -sizex, -sizey),
- detection of the buildings (-polygonize) and IoU score (-polygonize.union), with some parameters (-polygonize.area, -polygonize.unit, etc.),
- saving of intermediate files (-save)
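For instance, a hypothetical scoring run restricted to a 1024 x 1024 window, with building polygonization enabled (the file names are placeholders):
slurp_scores -im predicted_mask.tif -gt ground_truth.tif -out overlay.tif -startx 0 -starty 0 -sizex 1024 -sizey 1024 -polygonize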
The project comes with a suite of unit and functional tests. All the tests are available in the tests/ directory.
To run them, launch the command pytest at the root of the SLURP project. To run the tests for a specific mask, execute pytest tests/<file_name>.
By default, the tests generate the masks and then validate them by comparing them with a reference. You can choose to only compute the masks with pytest -m computation, or to only validate them with pytest -m validation. To validate data preparation, you can use pytest -m prepare, or pytest -m all for the complete test; these last two modes require an OTB installation.
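The test commands at a glance:
pytest                      # full suite: compute the masks, then validate them
pytest tests/<file_name>    # tests for a specific mask
pytest -m computation       # only compute the masks
pytest -m validation        # only validate previously computed masks
pytest -m prepare           # validate data preparation (requires OTB)
pytest -m all               # complete test, including preparation (requires OTB)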
You can change the default configuration for the tests by modifying the JSON file "tests/config_tests".
Go to the docs/ directory.
See the Contribution manual.
This package was created with the PLUTO-cookiecutter project template.
Inspired by the main cookiecutter template and the CARS cookiecutter template.