This repo provides functionality for morphing attack detection benchmarking from the paper "MorDeephy: Face Morphing Detection Via Fused Classification". The project can be used to run the benchmarks on your side and then submit the results for comparison between different developers. The project only unifies the data, protocols, performance estimation, and results comparison; extracting the predictions is up to the developer. We only provide an example that generates random predictions.
- Prepare and align data for the protocol (for the demo we use MTCNN to align images). We recommend keeping separate directories for different types of alignment.
python align_protocol_insf.py -n <sd_protocol name> -f dataset.txt
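For reference, a minimal alignment sketch is shown below. It assumes the mtcnn and opencv-python packages and a dataset.txt with one relative image path per line (both are assumptions); align_protocol_insf.py remains the authoritative implementation and may use landmark-based alignment instead of a simple box crop.

```python
# Minimal face-alignment sketch (assumed packages: mtcnn, opencv-python).
# It crops the first detected face box from each listed image; the real
# align_protocol_insf.py may align differently (e.g. using landmarks).
import os
import cv2
from mtcnn import MTCNN

detector = MTCNN()

def align_image(src_path, dst_path, size=224):
    img = cv2.imread(src_path)
    if img is None:
        return False
    faces = detector.detect_faces(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if not faces:
        return False
    x, y, w, h = faces[0]["box"]
    x, y = max(x, 0), max(y, 0)
    crop = cv2.resize(img[y:y + h, x:x + w], (size, size))
    os.makedirs(os.path.dirname(dst_path), exist_ok=True)
    return cv2.imwrite(dst_path, crop)

# Example: align every image listed (one relative path per line) in dataset.txt
# from an assumed "data" directory into an assumed "data_aligned" directory.
with open("dataset.txt") as f:
    for line in f:
        if not line.strip():
            continue
        rel = line.split()[0]
        align_image(os.path.join("data", rel), os.path.join("data_aligned", rel))
```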
- Extracting predictions. You can store model files in the same directory together with the predictions and benchmark result data. It is up to you to adapt your algorithms and extract the respective predictions for the protocols; this is not generalizable due to the variety of development environments. You may follow the demo script (it generates randomized predictions) for extracting predictions:
python sd_demo_extracting_predictions.py -m <your modelname> -n <protocol name> -d <path to the aligned images>
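As a rough illustration of what the demo does, the sketch below writes a random score for every aligned image. The output location and the one-line-per-image "path score" format are assumptions; sd_demo_extracting_predictions.py defines the actual format.

```python
# Illustrative sketch of the randomized-prediction demo (output path and
# file format are assumptions; follow sd_demo_extracting_predictions.py
# for the authoritative behaviour).
import os
import random

model_name = "my_model"          # would come from -m
protocol_name = "test_protocol"  # would come from -n
aligned_dir = "data_aligned"     # would come from -d

out_dir = os.path.join("models", model_name, protocol_name)  # assumed layout
os.makedirs(out_dir, exist_ok=True)

with open(os.path.join(out_dir, "predictions.txt"), "w") as f:
    for root, _, files in os.walk(aligned_dir):
        for name in sorted(files):
            if name.lower().endswith((".jpg", ".png")):
                score = random.random()  # replace with your detector's morph score
                f.write(f"{os.path.join(root, name)} {score:.6f}\n")

# gt_labels.txt is generated per protocol by the repository scripts and is
# needed alongside predictions.txt for benchmarking.
```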
- Computing the result performance curves:
python sd_benchmark_model.py -m <your modelname> -n <protocol name>
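For intuition, here is a hedged sketch of how performance curves can be derived from predictions and ground-truth labels using scikit-learn. The "path value" file format and the binary label convention are assumptions; sd_benchmark_model.py is the reference implementation.

```python
# Hedged sketch: ROC-style performance curve from predictions.txt / gt_labels.txt
# (one "path value" pair per line and labels in {0, 1} with 1 = morph are assumed).
import numpy as np
from sklearn.metrics import roc_curve, auc

def load_last_column(path):
    return np.array([float(line.split()[-1]) for line in open(path) if line.strip()])

scores = load_last_column("predictions.txt")
labels = load_last_column("gt_labels.txt").astype(int)

# fpr roughly corresponds to APCER and (1 - tpr) to BPCER under the assumed
# label convention; the repository scripts define the exact metrics and plots.
fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC:", auc(fpr, tpr))
```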
- Prepare and align data for the protocol (for the demo we use MTCNN to align images). We recommend keeping separate directories for different types of alignment.
python align_protocol_insf.py -n <dd_protocol name> -f dataset_full.txt
- Extracting predictions. You can store model files in the same directory together with the predictions and benchmark result data. It is up to you to adapt your algorithms and extract the respective predictions for the protocols; this is not generalizable due to the variety of development environments. You may follow the demo script (it generates randomized predictions) for extracting predictions:
python dd_demo_extracting_predictions.py -m <your modelname> -n <protocol name> -d <path to the aligned images>
- Computing the result performance curves (this can be done with the same script as in the no-reference case):
python sd_benchmark_model.py -m <your modelname> -n <protocol name>
We propose making a submission as a pull request (PR) to this repository. How to make a PR to a public repo is explained here: manual official, manual simple
The process can be summarized to the following pipeline:
1. Log in to your GitHub account and fork this repository.
2. Submit your results (following the requirements) to your fork and then make a PR to the original repo.
Here we propose two tested options: Regular and Simple.
2.1. Simple. Uploading files directly on the GitHub website.
2.1.1. For each submitted protocol you need to create a directory in the "submissions" directory of your fork. Use the "Add File -> Create new file" option to create a README.md file, indicating its name with a relative path to the corresponding results directory you want to add.
2.1.2. Navigate to the created folder and use Add File -> Upload files to upload the files for your submission.
2.1.3. Create a README.md in ./submissions/supplementary/<your_submission_name>/. Follow the submission requirements for README.md, but feel free to organize this README.md file to better represent your submission.
2.1.4. Make a PR to the original repo using the Contribute button that appears.
2.2. Regular. Using git utilities
2.2.1. Clone your fork to your local machine.
cd <projects_path>
git clone https://github.com/<your_github_username>/MorDeephy.git
cd MorDeephy
2.2.2. Optionally switch to a branch named after your submission (better practice). However, it is fine, and indeed simpler, to stay on master.
git checkout -b <your_submission_branch>
2.2.3. Execute the script for preparing the submission (prepare_submission_files.py) or manually copy your submission files to the corresponding folders (a hedged sketch of the manual option follows this step).
python prepare_submission_files.py -m <your modelname> -s <submission name>
Or, if a specific protocol is needed:
python prepare_submission_files.py -m <your modelname> -s <submission name> -p <protocol name>
If the required README.md is created in ./models/<your_submission_name>/, it will also be copied by the above command. Otherwise, create the corresponding README.md in ./submissions/supplementary/<your_submission_name>/. Follow the submission requirements for README.md, but feel free to organize this README.md file to better represent your submission.
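If you prefer the manual route, a hedged sketch of copying the files by hand is shown below. The source and destination layouts are assumptions inferred from the submission requirements; prepare_submission_files.py remains the authoritative tool.

```python
# Hedged sketch of manual copying (directory layout is assumed;
# prepare_submission_files.py is the authoritative tool).
import os
import shutil

model_name = "my_model"
submission_name = "my_team_submission_000"
protocol_name = "test_protocol"

src = os.path.join("models", model_name, protocol_name)             # assumed source layout
dst = os.path.join("submissions", protocol_name, submission_name)   # assumed destination layout

os.makedirs(dst, exist_ok=True)
for fname in ("predictions.txt", "gt_labels.txt", "README.md"):
    path = os.path.join(src, fname)
    if os.path.exists(path):
        shutil.copy(path, dst)
```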
2.2.4. Add and commit your changes. Please don't add changes that are not related to your submission; if you want to make such changes, do it in a separate PR.
git add <your files for submission or just "." to add all new>
git commit -m "Submission <submission name>"
2.2.5. Push changes to the current branch of your forked repository.
git push --set-upstream origin <your current branch>
GitHub does not allow regular credentials for authenticating this step. We propose generating a classic personal access token; further authentication for push can then be done with it instead of a password.
2.2.6. Configure a Remote for the Fork
git remote add upstream https://github.com/iurii-m/MorDeephy.git
git remote -v
Output should be:
origin https://github.com/<your_github_username>/MorDeephy.git (fetch)
origin https://github.com/<your_github_username>/MorDeephy.git (push)
upstream https://github.com/iurii-m/MorDeephy.git (fetch)
upstream https://github.com/iurii-m/MorDeephy.git (push)
2.2.7. Sync the fork
git fetch upstream
2.2.8. Open your forked repo on GitHub and create a Pull Request using the alert button that appears.
If you are not confident with git, we suggest keeping separate projects for development and for making submissions, and copying the files manually.
Your submission name must end with _NNN, where NNN is a numerical identifier that starts at 000 for the initial submission and is incremented for further submissions.
You can submit your results for one or several protocols. The submission files for a particular protocol must include the files predictions.txt and gt_labels.txt (which are generated for each particular protocol) and, optionally, a README.md. Do not put any files other than those three into the directory with the protocol data. See ./submissions/test_protocol as a template.
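As a quick self-check before submitting, the hedged sketch below verifies the three-file rule for a protocol results directory; the submissions/<protocol_name>/<your_submission_name>/ layout used in the example call is an assumption inferred from the template.

```python
# Hedged sanity check: a protocol results directory should contain only
# predictions.txt, gt_labels.txt and (optionally) README.md.
import os

ALLOWED = {"predictions.txt", "gt_labels.txt", "README.md"}
REQUIRED = {"predictions.txt", "gt_labels.txt"}

def check_submission_dir(path):
    present = set(os.listdir(path))
    missing = REQUIRED - present
    extra = present - ALLOWED
    if missing:
        print(f"{path}: missing {sorted(missing)}")
    if extra:
        print(f"{path}: unexpected files {sorted(extra)}")
    return not missing and not extra

# Example call (directory layout assumed):
candidate = "submissions/test_protocol/my_team_submission_000"
if os.path.isdir(candidate):
    check_submission_dir(candidate)
```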
You must also add a README.md in ./submissions/supplementary/<your_submission_name>/. This README.md must include a section with an explicit reference to the approach (an article, an arXiv report, or, if none exists, a text description of the approach) and the website of the institution/team/researcher; optionally, it may include an Additional References section with references related to the submission. You can decorate your README.md, but store all additional files (e.g. images) in ./submissions/supplementary/<your_submission_name>/. See ./submissions/test_protocol as an example.
If you have public data related to face morphing, consider extending the functionality of this repo. If you propose using a custom protocol, you can generate it and make a PR to the Protocols_generation repo. Please keep those PRs separate from PRs submitting performance results.
To compare several results and plot their curves together for the defined protocols, run the script:
python plot_results.py
You can also specify a particular protocol and exclude some submissions from the plotted curves:
python plot_results.py -n <protocol_name> -e <list_of_submissions_to_be_excluded>
The resulting ROC and DET curves will appear in submissions/<protocol_name>.
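For reference, a hedged sketch of overlaying the curves of several submissions with matplotlib and scikit-learn is shown below; plot_results.py is the authoritative script, and the submissions/<protocol_name>/<submission>/ layout and file format used here are assumptions.

```python
# Hedged sketch: overlay ROC curves of several submissions for one protocol
# (the submissions/<protocol_name>/<submission>/ layout is assumed).
import glob
import os
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

def load_last_column(path):
    return np.array([float(line.split()[-1]) for line in open(path) if line.strip()])

protocol = "test_protocol"
excluded = {"some_old_submission_000"}  # hypothetical exclusion list

for sub_dir in sorted(glob.glob(os.path.join("submissions", protocol, "*"))):
    name = os.path.basename(sub_dir)
    if name in excluded or not os.path.isdir(sub_dir):
        continue
    scores = load_last_column(os.path.join(sub_dir, "predictions.txt"))
    labels = load_last_column(os.path.join(sub_dir, "gt_labels.txt")).astype(int)
    fpr, tpr, _ = roc_curve(labels, scores)
    plt.plot(fpr, tpr, label=name)

plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.savefig(os.path.join("submissions", protocol, "roc_comparison.png"))
```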
The benchmarking is based on public data. It is strongly recommended not to use this data for training your algorithms if you plan to submit your results here. See the preparation details here (data_processing):
- FRLL-Morphs
- Dustone_Morphs (email request is required)
Some data (the FRGC-Morphs and FERET-Morphs datasets) is currently unavailable due to its withdrawal.
If you use our work in your research, please cite the paper in your publications:
@inproceedings{MorDeephy,
author={Iurii Medvedev and Farhad Shadmand and Nuno Gonçalves},
title={MorDeephy: Face Morphing Detection via Fused Classification},
booktitle={Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM,},
year={2023},
pages={193-204},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011606100003411},
isbn={978-989-758-626-2},
}
The authors would like to thank the Portuguese Mint and Official Printing Office (INCM) and the Institute of Systems and Robotics - University of Coimbra for the support of the project Facing. This work has been supported by Fundação para a Ciência e a Tecnologia (FCT) under the project UIDB/00048/2020. The computational part of this work was performed with the support of NVIDIA Applied Research Accelerator Program with hardware and software provided by NVIDIA.