Use Python version 3.8.12
A PyTorch implementation of the paper DocEnTr: An End-to-End Document Image Enhancement Transformer. The model is built on top of the vit-pytorch vision transformers library and can be used to enhance (binarize) degraded document images, as shown in the samples below.
Degraded images (left) and our binarizations (right).
Clone the repository:

git clone https://github.com/dali92002/DocEnTR
cd DocEnTr

Then install the dependencies:

pip install -r requirements.txt
We gathered the DIBCO, H-DIBCO and PALM datasets and organized them into one folder; you can download it from this link. After downloading, extract the folder named DIBCOSETS and place it in your desired data path, i.e. /YOUR_DATA_PATH/DIBCOSETS/
Be aware that these datasets have their own licenses; they are not covered by the license of this repository.
Specify the data path, split size, and the validation and testing sets to prepare your data. In this example, we set the split size to (256 X 256), the validation set to 2016 and the testing set to 2018 while running the process_dibco.py file:
python process_dibco.py --data_path /YOUR_DATA_PATH/ --split_size 256 --testing_dataset 2018 --validation_dataset 2016
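The split step above tiles each document image into fixed-size patches. A minimal NumPy sketch of the idea (the actual process_dibco.py may pad and name files differently; `split_into_patches` is a hypothetical helper, not part of the repository):

```python
import numpy as np

def split_into_patches(img: np.ndarray, split_size: int = 256) -> list:
    """Tile an H x W (x C) image into split_size x split_size patches,
    zero-padding the right/bottom edges so every patch is full-size."""
    h, w = img.shape[:2]
    pad_h = (-h) % split_size  # rows needed to reach a multiple of split_size
    pad_w = (-w) % split_size
    pad_width = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (img.ndim - 2)
    img = np.pad(img, pad_width, mode="constant")
    patches = []
    for y in range(0, img.shape[0], split_size):
        for x in range(0, img.shape[1], split_size):
            patches.append(img[y:y + split_size, x:x + split_size])
    return patches

# e.g. a 300 x 500 grayscale page pads to 512 x 512 -> 2 x 2 = 4 patches
patches = split_into_patches(np.zeros((300, 500)), 256)
```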
For training, specify the desired settings (batch_size, patch_size, model_size, split_size and training epochs) when running train.py. For example, to train a base model with a patch_size of (16 X 16) and a batch_size of 32, use the following command:
python train.py --data_path /YOUR_DATA_PATH/ --batch_size 32 --vit_model_size base --vit_patch_size 16 --epochs 151 --split_size 256 --validation_dataset 2016
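Note that split_size and vit_patch_size together fix the transformer's sequence length: each split_size x split_size input is tokenized into (split_size / vit_patch_size)^2 tokens. A quick sanity check (`num_tokens` is a hypothetical helper for illustration):

```python
def num_tokens(split_size: int, vit_patch_size: int) -> int:
    """Number of ViT tokens per input patch."""
    assert split_size % vit_patch_size == 0, "patch size must divide split size"
    return (split_size // vit_patch_size) ** 2

print(num_tokens(256, 16))  # base model with 16 x 16 patches -> 256 tokens
print(num_tokens(256, 8))   # 8 x 8 patches -> 1024 tokens (4x longer sequence)
```

This is why the 8x8 variants are slower to train than the 16x16 ones at the same split size.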
During training, you will get visualization results from the validation dataset at each epoch, in a folder named vis+"YOUR_EXPERIMENT_SETTINGS" (created automatically); in the case above it will be named visbase_256_16. The best weights will be saved in a folder named "weights".
To test the trained model on a specific DIBCO dataset, the dataset should match the one specified during data processing (if not, run process_dibco.py again). Download the model weights (see the Model Zoo section), or use your own trained model weights, then run the following command. Here we test on H-DIBCO 2018, using the Base model with an 8X8 patch_size and a batch_size of 16. The binarized images will be written to ./vis+"YOUR_CONFIGS_HERE"/epoch_testing/
python test.py --data_path /YOUR_DATA_PATH/ --model_weights_path /THE_MODEL_WEIGHTS_PATH/ --batch_size 16 --vit_model_size base --vit_patch_size 8 --split_size 256 --testing_dataset 2018
In this demo, we show how to use our pretrained models to binarize a single degraded image. This is detailed with comments in the file named demo.ipynb; for simplicity we made it a Jupyter notebook, where you can modify all the code parts and visualize your progressive results.
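Conceptually, the single-image flow boils down to: pad and tile the degraded image into 256 x 256 patches, enhance each patch with the model, then stitch the outputs back and crop. A model-agnostic NumPy sketch of that loop (the `fake_model` threshold below is only a stand-in; the real notebook runs the trained DocEnTr model on each patch):

```python
import numpy as np

def binarize_image(img, enhance, split_size=256):
    """Patch-wise inference: pad, tile, enhance each tile, reassemble, crop."""
    h, w = img.shape
    pad_h, pad_w = (-h) % split_size, (-w) % split_size
    # pad with white so the border patches look like blank paper
    padded = np.pad(img, ((0, pad_h), (0, pad_w)),
                    mode="constant", constant_values=255)
    out = np.zeros_like(padded)
    for y in range(0, padded.shape[0], split_size):
        for x in range(0, padded.shape[1], split_size):
            out[y:y + split_size, x:x + split_size] = enhance(
                padded[y:y + split_size, x:x + split_size])
    return out[:h, :w]  # crop the padding away

# stand-in for the trained model: a plain global threshold
fake_model = lambda patch: np.where(patch > 127, 255, 0)
result = binarize_image(np.full((300, 500), 200), fake_model)
```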
In this section we release the pre-trained weights for all the best DocEnTr model variants trained on DIBCO benchmarks.
| Testing data | Model | Patch size | URL | PSNR |
|---|---|---|---|---|
| DIBCO 2011 | DocEnTr-Base | 8x8 | Unavailable | 20.81 |
| DIBCO 2011 | DocEnTr-Large | 16x16 | Unavailable | 20.62 |
| H-DIBCO 2012 | DocEnTr-Base | 8x8 | model | 22.29 |
| H-DIBCO 2012 | DocEnTr-Large | 16x16 | model | 22.04 |
| DIBCO 2017 | DocEnTr-Base | 8x8 | model | 19.11 |
| DIBCO 2017 | DocEnTr-Large | 16x16 | model | 18.85 |
| H-DIBCO 2018 | DocEnTr-Base | 8x8 | model | 19.46 |
| H-DIBCO 2018 | DocEnTr-Large | 16x16 | model | 19.47 |
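The PSNR column is the standard peak signal-to-noise ratio (in dB) between the binarized output and the ground truth. A minimal NumPy version, assuming 8-bit images (MAX = 255):

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# one wrong pixel out of 16: mse = 255^2 / 16 -> 10 * log10(16) ~= 12.04 dB
gt = np.zeros((4, 4))
pred = gt.copy()
pred[0, 0] = 255
```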
If you find this useful for your research, please cite it as follows:
@inproceedings{souibgui2022docentr,
title={DocEnTr: An end-to-end document image enhancement transformer},
author={Souibgui, Mohamed Ali and Biswas, Sanket and Jemni, Sana Khamekhem and Kessentini, Yousri and Forn{\'e}s, Alicia and Llad{\'o}s, Josep and Pal, Umapada},
booktitle={2022 26th International Conference on Pattern Recognition (ICPR)},
year={2022}
}
Thank you for your interest in our work, and apologies for any bugs.