This is part of the Mock-Buddy project and is used to detect face interactivity. A CNN architecture is used to build the model that detects facial landmarks. The model is built with TensorFlow, applying a direct regression approach.
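For illustration, a direct-regression landmark model might look like the minimal sketch below: the network outputs 2 coordinates per landmark instead of heatmaps. The input size, layer widths, and optimizer here are assumptions for the example; the actual architecture lives in `model_train.ipynb`.

```python
# Minimal sketch of a direct-regression landmark CNN (not the project's exact model).
# Input size (128x128 RGB) and layer sizes are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_landmark_regressor(input_shape=(128, 128, 3), n_landmarks=68):
    """CNN that directly regresses (x, y) coordinates for each landmark."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        # Direct regression head: 2 coordinates per landmark, no heatmaps.
        layers.Dense(n_landmarks * 2),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_landmark_regressor()
model.summary()
```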
- Python 3.7 or newer
300W consists of several datasets.
You need bounding boxes to crop faces from the above datasets.
After downloading the datasets, update the extraction paths in the notebooks to point to your dataset locations.
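For the cropping step, a minimal sketch is shown below. The `(x_min, y_min, x_max, y_max)` bounding-box format, the `crop_face` helper name, and the margin value are assumptions for illustration, not the project's exact preprocessing; adjust them to the bounding-box files you downloaded.

```python
# Hypothetical helper -- assumes a (x_min, y_min, x_max, y_max) bounding box
# and landmarks as an (N, 2) array of pixel coordinates.
import numpy as np
import cv2

def crop_face(image, bbox, landmarks, out_size=128, margin=0.1):
    """Crop the face region (with a small margin) and shift landmarks accordingly."""
    x_min, y_min, x_max, y_max = bbox
    w, h = x_max - x_min, y_max - y_min
    # Expand the box slightly so landmarks near the jawline are not cut off.
    x_min = max(int(x_min - margin * w), 0)
    y_min = max(int(y_min - margin * h), 0)
    x_max = min(int(x_max + margin * w), image.shape[1])
    y_max = min(int(y_max + margin * h), image.shape[0])

    crop = image[y_min:y_max, x_min:x_max]
    scale_x = out_size / crop.shape[1]
    scale_y = out_size / crop.shape[0]
    crop = cv2.resize(crop, (out_size, out_size))

    # Landmarks move into the crop's coordinate frame and scale with the resize.
    shifted = (np.asarray(landmarks, dtype=np.float32) - [x_min, y_min]) * [scale_x, scale_y]
    return crop, shifted
```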
- Export train and test CSV files from `300W.ipynb`.
- Run `model_train.ipynb` to start training.
The test sets are iBUG, HELEN-test, and LFPW-test, evaluated with the metrics described in the 300 Faces in-the-Wild challenge (link).
Trained model metrics are in the `metrics` folder.
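For reference, the 300-W protocol reports the point-to-point landmark error normalized by the inter-ocular distance. A minimal sketch of that metric, assuming the 68-point iBUG markup (outer eye corners at indices 36 and 45), is shown below.

```python
# Sketch of the inter-ocular-normalized mean error used in the 300-W protocol.
import numpy as np

def normalized_mean_error(pred, gt):
    """pred, gt: (68, 2) arrays of landmark coordinates for one face."""
    inter_ocular = np.linalg.norm(gt[36] - gt[45])   # outer eye corners
    per_point_error = np.linalg.norm(pred - gt, axis=1)
    return per_point_error.mean() / inter_ocular
```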
👤 Karthick T. Sharma
- Github: @Karthick47v2
- LinkedIn: @Karthick47
- Add heatmap regression approach
@article{sagonas2016300,
title={300 faces in-the-wild challenge: Database and results},
author={Sagonas, Christos and Antonakos, Epameinondas and Tzimiropoulos, Georgios and Zafeiriou, Stefanos and Pantic, Maja},
journal={Image and Vision Computing},
volume={47},
pages={3--18},
year={2016},
publisher={Elsevier}
}
@inproceedings{sagonas2013300,
title={300 faces in-the-wild challenge: The first facial landmark localization challenge},
author={Sagonas, Christos and Tzimiropoulos, Georgios and Zafeiriou, Stefanos and Pantic, Maja},
booktitle={Proceedings of the IEEE International Conference on Computer Vision Workshops},
pages={397--403},
year={2013},
organization={IEEE}
}
Contributions, issues and feature requests are welcome!
Feel free to check the issues page.
Give a ⭐️ if this project helped you!