Contrastive-Adversarial-Learning-for-Person-independent-FER (CAL-FER)

This repository provides the official PyTorch implementation of the following paper:

Contrastive Adversarial Learning for Person Independent Facial Emotion Recognition (AAAI 2021)

  • Real-time demo with pre-trained weights

Requirements

To install all dependencies, run:

pip install -r requirements.txt

News

[21.06.04] Added weights of the encoder and FC layer.

[21.02.06] Uploaded full training and evaluation files.

[21.01.02] Released the official PyTorch version of CAL-FER.


Datasets

  1. Download the three public benchmarks used for training and evaluation (the datasets cannot be redistributed here due to copyright; for details, visit each dataset's website).

  2. Preprocess each dataset by following the official PyTorch custom dataset tutorial.
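The repository does not specify the exact CSV layout, but a custom-dataset pipeline in the tutorial's style typically starts by parsing the annotation CSV into (image path, label) pairs. Below is a minimal, hedged sketch assuming a hypothetical `name,valence,arousal` column layout; adjust the column names to match the actual annotation files.

```python
import csv
import os

def load_annotations(csv_path, data_dir):
    """Parse a hypothetical `name,valence,arousal` CSV into (path, label) pairs.

    This is an illustrative preprocessing step, not the repository's own code:
    the real CSV schema may differ per dataset.
    """
    samples = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            path = os.path.join(data_dir, row["name"])
            # Arousal-valence labels are continuous, so parse them as floats.
            label = (float(row["valence"]), float(row["arousal"]))
            samples.append((path, label))
    return samples
```

The resulting list can then back a `torch.utils.data.Dataset` whose `__getitem__` loads the image at `samples[i][0]` and returns it with the label tuple.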

Training and evaluation

1. Go to /src.

2. Train CAL-FER.

python main.py --gpus 0 --train 1 --freq 5 --csv_path <csv_path> --data_path <data_path> --save_path <save_path> --load_path <load_path>

3. Evaluate CAL-FER.

python main.py --gpus 0 --train 0 --csv_path <csv_path> --data_path <data_path> --load_path <load_path>
  • Arguments
  • gpus: GPU index (for a single GPU, set to 0)
  • train: 1 for the training phase, 0 for the evaluation phase
  • freq: parameter-saving frequency
  • csv_path: path to the CSV file listing image names and labels
  • data_path: path to the facial dataset
  • save_path: path where trained weights are saved
  • load_path: path to pre-trained weights
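As an illustration of how the options above fit together, here is a hedged `argparse` sketch; the actual parsing in `main.py` may differ in defaults and types.

```python
import argparse

def build_parser():
    """Illustrative parser mirroring the CAL-FER command-line options above.

    Defaults (e.g. freq=5) are taken from the example commands, not from
    the repository's source.
    """
    p = argparse.ArgumentParser(description="CAL-FER training/evaluation")
    p.add_argument("--gpus", type=int, default=0,
                   help="GPU index (0 for a single GPU)")
    p.add_argument("--train", type=int, choices=[0, 1], default=1,
                   help="1: training phase, 0: evaluation phase")
    p.add_argument("--freq", type=int, default=5,
                   help="parameter-saving frequency")
    p.add_argument("--csv_path", type=str, help="path to the name/label CSV")
    p.add_argument("--data_path", type=str, help="path to the facial dataset")
    p.add_argument("--save_path", type=str, help="where to save weights")
    p.add_argument("--load_path", type=str, help="pre-trained weights to load")
    return p
```

For example, `build_parser().parse_args(["--train", "0", "--csv_path", "labels.csv"])` reproduces the evaluation invocation shown above.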

Real-time demo

  1. Go to /Real_demo.

  2. Run main.py.

  • Facial detection and arousal-valence (AV) domain FER functions are included.
  • Before running the demo, you must first train the model and save Encoder.t7 and FC_layer.t7.

BibTeX

Please cite our paper if you find our work useful for your research:

@inproceedings{kim2021contrastive,
title={Contrastive Adversarial Learning for Person Independent Facial Emotion Recognition},
author={Kim, Dae Ha and Song, Byung Cheol},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={35},
number={7},
pages={5948--5956},
year={2021}
}