
Commit d4457c5 committed

finish train

0 parents  commit d4457c5

508 files changed, +2509 -0 lines changed


.gitattributes

+501
Large diffs are not rendered by default.

.gitignore

+110
@@ -0,0 +1,110 @@
# Created by .ignore support plugin (hsz.mobi)
### Python template
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
.idea/
scores/
# C extensions
*.so
checkpoints/
notebooks/.ipynb_checkpoints
debug.log
# Distribution / packaging
.Python
build/
examples/checkpoints/
examples/events/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
*.pyc
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache

nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
events/
checkpoints/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

Pipfile

+39
@@ -0,0 +1,39 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
absl-py = "==0.5.0"
astor = "==0.7.1"
cycler = "==0.10.0"
face-recognition = "*"
gast = "==0.2.0"
grpcio = "==1.15.0"
"h5py" = "==2.8.0"
keras-applications = "==1.0.6"
keras-preprocessing = "==1.0.5"
kiwisolver = "==1.0.1"
markdown = "==3.0.1"
matplotlib = "==3.0.0"
model-zoo = "*"
numpy = ">=1.15.2"
opencv-python = "==3.4.3"
pandas = "*"
pillow = "*"
protobuf = "==3.6.1"
pyparsing = "==2.2.2"
python-dateutil = "==2.7.3"
scikit-learn = "==0.20.0"
scipy = ">=1.1.0"
six = "==1.11.0"
sklearn = "*"
tensorboard = ">=1.11.0"
tensorflow = ">=1.11.0"
termcolor = "==1.1.0"
werkzeug = "==0.14.1"

[dev-packages]

[requires]
python_version = "3.6"

README.md

+109
@@ -0,0 +1,109 @@
# Emotion Recognition

Emotion recognition implemented with [ModelZoo](https://github.com/ModelZoo/ModelZoo).

## Usage

First, clone this repository and pull the training data:

```
git clone https://github.com/ModelZoo/EmotionRecognition.git
cd EmotionRecognition
git lfs pull
```

Next, install the dependencies with pip:

```
pip3 install -r requirements.txt
```

Finally, run training:

```
python3 train.py
```

If you want to continue training an existing model, set the `checkpoint_restore` flag in `train.py`:

```python
tf.flags.DEFINE_bool('checkpoint_restore', True, help='Model restore')
```

You can also choose which checkpoint to resume from with the `checkpoint_name` flag:

```python
tf.flags.DEFINE_string('checkpoint_name', 'model-178.ckpt', help='Model name')
```
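Putting the two flags together, here is a minimal sketch of how resuming could be wired up in a TensorFlow 1.x style script. This is an illustration only, not the repo's actual `train.py` (which delegates the training loop to ModelZoo); the `main` body is a placeholder.

```python
import tensorflow as tf

# Resume-related flags, as defined above (TensorFlow 1.x flag API).
tf.flags.DEFINE_bool('checkpoint_restore', True, help='Model restore')
tf.flags.DEFINE_string('checkpoint_name', 'model-178.ckpt', help='Model name')

FLAGS = tf.flags.FLAGS


def main(_):
    if FLAGS.checkpoint_restore:
        # Training would reload weights from checkpoints/<checkpoint_name>
        # before continuing; in this repo the actual restore is handled by
        # the framework, so we only report it here.
        print('Resuming from checkpoint:', FLAGS.checkpoint_name)
    # ... build the model and run training here ...


if __name__ == '__main__':
    tf.app.run(main)  # parses the flags and calls main
```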

## TensorBoard

After training, you can inspect the loss curves in TensorBoard:

```
cd events
tensorboard --logdir=.
```

![](https://ws3.sinaimg.cn/large/006tNbRwgy1fw37u664tzj319d0mumym.jpg)

The best accuracy is 65.64%, reached at step 178.
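If you would rather pull the scalars out programmatically than browse the UI, TensorBoard's event files can be read with its `EventAccumulator`. A small sketch, assuming the event files live under `events/` and without assuming any particular scalar tag names:

```python
from tensorboard.backend.event_processing import event_accumulator

# Point the accumulator at the run directory that holds the event files.
ea = event_accumulator.EventAccumulator('events')
ea.Reload()

# List the available scalar tags, then dump step/value pairs for the first one.
tags = ea.Tags().get('scalars', [])
print('Scalar tags:', tags)
if tags:
    for event in ea.Scalars(tags[0]):
        print(event.step, event.value)
```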
## Predict

Next, we can use the trained model to recognize emotions.

Here are the test pictures we picked from the web:

![](https://ws4.sinaimg.cn/large/006tNbRwgy1fw3f6am6jpj310405cwf8.jpg)

Put them into a folder named `tests` and define the model checkpoint and test folder in `infer.py`:

```python
tf.flags.DEFINE_string('checkpoint_name', 'model.ckpt-178', help='Model name')
tf.flags.DEFINE_string('test_dir', 'tests/', help='Dir of test data')
```
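For orientation, here is a rough, hypothetical sketch of what an inference pass along these lines does. It is not the repo's actual `infer.py` (which restores the model through ModelZoo); the 48x48 grayscale preprocessing and the `load_image`/`predict` helpers are assumptions for illustration, with `predict` standing in for the restored model's forward pass.

```python
import os

import numpy as np
from PIL import Image

EMOTIONS = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']
TEST_DIR = 'tests/'  # mirrors the test_dir flag above


def load_image(path, size=(48, 48)):
    # Assumed preprocessing: grayscale, resize, scale to [0, 1].
    img = Image.open(path).convert('L').resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0


def predict(batch):
    # Placeholder for the restored model's forward pass; the real script
    # would compute these probabilities from the checkpoint named above.
    logits = np.random.randn(len(batch), len(EMOTIONS))
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)


paths = sorted(os.path.join(TEST_DIR, f) for f in os.listdir(TEST_DIR))
batch = np.stack([load_image(p) for p in paths])
for path, probs in zip(paths, predict(batch)):
    print('Image Path:', os.path.basename(path))
    print('Predict Result:', EMOTIONS[int(np.argmax(probs))])
    print('Emotion Distribution:',
          {e: round(float(p), 3) for e, p in zip(EMOTIONS, probs)})
    print('=' * 20)
```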
Then run inference with:

```
python3 infer.py
```

We get the recognition result and the probability of each emotion:

```
Image Path: test1.png
Predict Result: Happy
Emotion Distribution: {'Angry': 0.0, 'Disgust': 0.0, 'Fear': 0.0, 'Happy': 1.0, 'Sad': 0.0, 'Surprise': 0.0, 'Neutral': 0.0}
====================
Image Path: test2.png
Predict Result: Happy
Emotion Distribution: {'Angry': 0.0, 'Disgust': 0.0, 'Fear': 0.0, 'Happy': 0.998, 'Sad': 0.0, 'Surprise': 0.0, 'Neutral': 0.002}
====================
Image Path: test3.png
Predict Result: Surprise
Emotion Distribution: {'Angry': 0.0, 'Disgust': 0.0, 'Fear': 0.0, 'Happy': 0.0, 'Sad': 0.0, 'Surprise': 1.0, 'Neutral': 0.0}
====================
Image Path: test4.png
Predict Result: Angry
Emotion Distribution: {'Angry': 1.0, 'Disgust': 0.0, 'Fear': 0.0, 'Happy': 0.0, 'Sad': 0.0, 'Surprise': 0.0, 'Neutral': 0.0}
====================
Image Path: test5.png
Predict Result: Fear
Emotion Distribution: {'Angry': 0.04, 'Disgust': 0.002, 'Fear': 0.544, 'Happy': 0.03, 'Sad': 0.036, 'Surprise': 0.31, 'Neutral': 0.039}
====================
Image Path: test6.png
Predict Result: Sad
Emotion Distribution: {'Angry': 0.005, 'Disgust': 0.0, 'Fear': 0.027, 'Happy': 0.002, 'Sad': 0.956, 'Surprise': 0.0, 'Neutral': 0.009}
```

Looks good!
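For reference, a distribution like the ones above maps straight back to the predicted label. A minimal sketch, with the class order taken from the output above; the `summarize` helper is our own name, not something defined in the repo.

```python
# Hypothetical helper: turn a softmax-style probability vector into the
# label/distribution pair shown in the output above.
EMOTIONS = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']


def summarize(probs):
    # probs: iterable of 7 floats summing to ~1.0
    distribution = {label: round(float(p), 3) for label, p in zip(EMOTIONS, probs)}
    predicted = max(distribution, key=distribution.get)
    return predicted, distribution


label, dist = summarize([0.04, 0.002, 0.544, 0.03, 0.036, 0.31, 0.039])
print('Predict Result:', label)          # Fear
print('Emotion Distribution:', dist)
```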
## Pretrained Model

Looking for a pretrained model?

Just go to the `checkpoints` folder; it contains the best-performing model, saved at step 178.

images/1-159.jpg +3
images/2-112.jpg +3
images/2-114.jpg +3
images/2-115.jpg +3
images/2-154.jpg +3
images/2-267.jpg +3
images/2-410.jpg +3
images/2-412.jpg +3
images/2-429.jpg +3
images/2-440.jpg +3
images/3-140.jpg +3
images/3-15.jpg +3
images/3-153.jpg +3
images/3-155.jpg +3
images/3-156.jpg +3
images/3-157.jpg +3
images/3-160.jpg +3
images/3-161.jpg +3
images/3-175.jpg +3
images/3-184.jpg +3
images/3-187.jpg +3
images/3-190.jpg +3
images/3-192.jpg +3
images/3-204.jpg +3
images/3-206.jpg +3
images/3-208.jpg +3
images/3-210.jpg +3
images/3-219.jpg +3
images/3-221.jpg +3
images/3-224.jpg +3
images/3-231.jpg +3
images/3-232.jpg +3
images/3-239.jpg +3
images/3-241.jpg +3
