# Dataset Preparation

The raw datasets should be put into `$HOME/datasets/landmark-datasets`. The layout should be organized as in the per-dataset structure listings below.

## [300-W](https://ibug.doc.ic.ac.uk/resources/300-W/)

### Download
- 300-W consists of several different datasets.
- Create a directory to save the images and annotations: `mkdir ~/datasets/landmark-datasets/300W`
- To download ibug: https://ibug.doc.ic.ac.uk/download/annotations/ibug.zip
- To download afw: https://ibug.doc.ic.ac.uk/download/annotations/afw.zip
- To download helen: https://ibug.doc.ic.ac.uk/download/annotations/helen.zip
- To download lfpw: https://ibug.doc.ic.ac.uk/download/annotations/lfpw.zip
- To download the bounding box annotations: https://ibug.doc.ic.ac.uk/media/uploads/competitions/bounding_boxes.zip
- After downloading (a command-line sketch follows this list), the folder `~/datasets/landmark-datasets/300W` should contain five zip files: ibug.zip, afw.zip, helen.zip, lfpw.zip, and bounding_boxes.zip.
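
If you download from the command line, a minimal sketch using `wget` (note that the iBUG site may require a browser login for some downloads, in which case fetch the files manually):
```
cd ~/datasets/landmark-datasets/300W
wget https://ibug.doc.ic.ac.uk/download/annotations/ibug.zip
wget https://ibug.doc.ic.ac.uk/download/annotations/afw.zip
wget https://ibug.doc.ic.ac.uk/download/annotations/helen.zip
wget https://ibug.doc.ic.ac.uk/download/annotations/lfpw.zip
wget https://ibug.doc.ic.ac.uk/media/uploads/competitions/bounding_boxes.zip
```
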
Then extract the archives:
```
# extract each archive into its own folder
unzip ibug.zip -d ibug
# the ibug archive contains one file name with a stray space; rename it
mv ibug/image_092\ _01.jpg ibug/image_092_01.jpg
mv ibug/image_092\ _01.pts ibug/image_092_01.pts

unzip afw.zip -d afw
unzip helen.zip -d helen
unzip lfpw.zip -d lfpw
# "Bounding Boxes" extracts with a space in its name; rename it
unzip bounding_boxes.zip ; mv Bounding\ Boxes Bounding_Boxes
```
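
Each extracted image comes with a `.pts` file holding its landmark coordinates. A minimal parser sketch, assuming the standard iBUG pts layout (a `version:`/`n_points:` header, then one `x y` pair per line between braces); the file name below is just an example:
```
def read_pts(path):
    """Parse an iBUG-style .pts file into a list of (x, y) tuples."""
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    start = lines.index('{') + 1   # skip the version/n_points header
    end = lines.index('}')
    return [tuple(map(float, line.split())) for line in lines[start:end]]

points = read_pts('ibug/image_092_01.pts')
print(len(points))  # 68 landmarks for 300-W images
```
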
The 300W directory is `$HOME/datasets/landmark-datasets/300W` and its structure is:
```
-- afw
-- Bounding_Boxes
-- helen
-- ibug
-- lfpw
```
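
To verify the layout before generating the list files, a small check sketch (paths as above):
```
from pathlib import Path

root = Path.home() / 'datasets' / 'landmark-datasets' / '300W'
for name in ('afw', 'Bounding_Boxes', 'helen', 'ibug', 'lfpw'):
    assert (root / name).is_dir(), 'missing directory: {}'.format(root / name)
print('300W layout looks good')
```
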
Then use the following script to generate the 300-W list files.
```
python generate_300W.py
```
All list files will be saved into `./lists/300W/`. The `*.DET` files use the face detector results for the face bounding boxes, while the `*.GTB` files use the ground-truth face bounding boxes.

#### Cannot find the `*.mat` files for 300-W?
The download link is on the official [300-W website](https://ibug.doc.ic.ac.uk/resources/300-W):
```
https://ibug.doc.ic.ac.uk/media/uploads/competitions/bounding_boxes.zip
```
Unzip this file and put all the extracted mat files into `$HOME/datasets/landmark-datasets/300W/Bounding_Boxes`.
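
To sanity-check one of the extracted files, you can load it with `scipy.io.loadmat`. The per-image fields `imgName`, `bb_detector`, and `bb_ground_truth` below follow the 300-W challenge release; verify the keys against your own copy, as the exact struct unwrapping may differ:
```
import scipy.io

mat = scipy.io.loadmat('Bounding_Boxes/bounding_boxes_afw.mat')
boxes = mat['bounding_boxes'][0]    # one MATLAB struct per image
first = boxes[0][0, 0]              # unwrap the 1x1 struct array
print(first['imgName'][0])          # image file name
print(first['bb_detector'][0])      # detector bounding box
print(first['bb_ground_truth'][0])  # ground-truth bounding box
```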

## [AFLW](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/)

Download the aflw.tar.gz file into `$HOME/datasets/landmark-datasets` and extract it with `tar xzvf aflw.tar.gz`.
```
mkdir $HOME/datasets/landmark-datasets/AFLW
cp -r aflw/data/flickr $HOME/datasets/landmark-datasets/AFLW/images
```

The structure of AFLW is:
```
--images
  --0
  --2
  --3
```

Download the [AFLWinfo_release.mat](http://mmlab.ie.cuhk.edu.hk/projects/compositional/AFLWinfo_release.mat) from [this website](http://mmlab.ie.cuhk.edu.hk/projects/compositional.html) into `./cache_data`. This is the revised annotation of the full AFLW dataset.
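
Before generating the lists, you can inspect the annotation file's contents; a minimal sketch that only prints the MATLAB variables and their shapes (no field names assumed; adjust the path if you run it from inside `cache_data`):
```
import scipy.io

mat = scipy.io.loadmat('./cache_data/AFLWinfo_release.mat')
for key, value in mat.items():
    if not key.startswith('__'):  # skip MATLAB header entries
        print(key, getattr(value, 'shape', type(value)))
```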

Generate the AFLW dataset list file into `./lists/AFLW`.
```
python aflw_from_mat.py
```

## [300VW](https://ibug.doc.ic.ac.uk/resources/300-VW/)
Download `300VW_Dataset_2015_12_14.zip` into `$HOME/datasets/landmark-datasets` and unzip it into `$HOME/datasets/landmark-datasets/300VW_Dataset_2015_12_14`.
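
A sketch of the unzip step (`-d` creates the target directory; drop it if the archive already contains a top-level folder of the same name):
```
cd $HOME/datasets/landmark-datasets
unzip 300VW_Dataset_2015_12_14.zip -d 300VW_Dataset_2015_12_14
```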

Use the following commands to extract the raw videos into image frames.
```
python extrct_300VW.py
sh ./cache/Extract300VW.sh
```
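
For reference, a hand-written extraction sketch for a single sequence; the sequence name `001` and the output directory/pattern are illustrative assumptions, while the `vid.avi` file name follows the released dataset:
```
V=$HOME/datasets/landmark-datasets/300VW_Dataset_2015_12_14/001
mkdir -p $V/extraction
ffmpeg -i $V/vid.avi $V/extraction/image%06d.png
```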

Generate the 300-VW dataset list file.
```
python generate_300VW.py
```

## A short demo video sequence

The raw video is `./cache_data/cache/demo-sbr.mp4`.
- Use `ffmpeg -i ./cache/demo-sbr.mp4 ./cache/demo-sbrs/image%04d.png` to extract the frames into `./cache/demo-sbrs/` (see the sketch after this list).
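
Since ffmpeg does not create the output directory for an image sequence, create it first; a minimal sketch (run from `./cache_data`, matching the relative paths above):
```
mkdir -p ./cache/demo-sbrs
ffmpeg -i ./cache/demo-sbr.mp4 ./cache/demo-sbrs/image%04d.png
```
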
Then use `python demo_list.py` to generate the list file for the demo video.

# Citation
If you use the 300-W dataset, please cite the following papers.
```
@article{sagonas2016300,
  title={300 faces in-the-wild challenge: Database and results},
  author={Sagonas, Christos and Antonakos, Epameinondas and Tzimiropoulos, Georgios and Zafeiriou, Stefanos and Pantic, Maja},
  journal={Image and Vision Computing},
  volume={47},
  pages={3--18},
  year={2016},
  publisher={Elsevier}
}
@inproceedings{sagonas2013300,
  title={300 faces in-the-wild challenge: The first facial landmark localization challenge},
  author={Sagonas, Christos and Tzimiropoulos, Georgios and Zafeiriou, Stefanos and Pantic, Maja},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision Workshops},
  pages={397--403},
  year={2013},
  organization={IEEE}
}
```
If you use the 300-VW dataset, please cite the following papers.
```
@inproceedings{chrysos2015offline,
  title={Offline deformable face tracking in arbitrary videos},
  author={Chrysos, Grigoris G and Antonakos, Epameinondas and Zafeiriou, Stefanos and Snape, Patrick},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision Workshops},
  pages={1--9},
  year={2015}
}
@inproceedings{shen2015first,
  title={The first facial landmark tracking in-the-wild challenge: Benchmark and results},
  author={Shen, Jie and Zafeiriou, Stefanos and Chrysos, Grigoris G and Kossaifi, Jean and Tzimiropoulos, Georgios and Pantic, Maja},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision Workshops},
  pages={50--58},
  year={2015}
}
@inproceedings{tzimiropoulos2015project,
  title={Project-out cascaded regression with an application to face alignment},
  author={Tzimiropoulos, Georgios},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={3659--3667},
  year={2015}
}
```
If you use the AFLW dataset, please cite the following paper.
```
@inproceedings{koestinger2011annotated,
  title={Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization},
  author={Koestinger, Martin and Wohlhart, Paul and Roth, Peter M and Bischof, Horst},
  booktitle={IEEE International Conference on Computer Vision Workshops},
  pages={2144--2151},
  year={2011},
  organization={IEEE}
}
```