Co-Speech Gesture Generator

This is an implementation of "Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots" (Paper, Project Page).

The original paper used the TED dataset, but in this repository the code was modified to use the Talking With Hands 16.2M dataset for the GENEA Challenge 2022. The model was also changed to estimate rotation matrices for upper-body joints instead of Cartesian joint coordinates.
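
As an illustration of what the rotation-matrix output implies for post-processing (a sketch only, not necessarily how inference.py handles it): a regressed 3x3 matrix is generally not perfectly orthogonal, so it can be projected onto the nearest valid rotation before being converted to the Euler angles stored in a BVH file.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def to_valid_rotation(m):
        # Project a regressed 3x3 matrix onto the nearest rotation matrix via SVD.
        u, _, vt = np.linalg.svd(m)
        rot = u @ vt
        if np.linalg.det(rot) < 0:   # enforce a proper rotation (det = +1)
            u[:, -1] *= -1
            rot = u @ vt
        return rot

    predicted = np.random.randn(3, 3)   # placeholder for one joint's network output
    euler = R.from_matrix(to_valid_rotation(predicted)).as_euler('ZXY', degrees=True)
    print(euler)   # Euler angles, e.g., for writing one BVH rotation channel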

Environment

The code was developed with Python 3.8 on Ubuntu 18.04, using PyTorch 1.5.0.

Prepare

  1. Install dependencies

    pip install -r requirements.txt
    
  2. Download the FastText vectors from here and put crawl-300d-2M-subword.bin into the resource folder (resource/crawl-300d-2M-subword.bin).
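
A quick, optional check that the vectors are in place is to load them with the fasttext Python package (the training code loads this file itself):

    import fasttext

    # Loads the subword-aware English vectors; this needs several GB of RAM.
    ft = fasttext.load_model('resource/crawl-300d-2M-subword.bin')
    print(ft.get_dimension())                 # expected: 300
    print(ft.get_word_vector('gesture')[:5])  # any word works thanks to subwords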

Train

  1. Make LMDB

    cd scripts
    python twh_dataset_to_lmdb.py [PATH_TO_DATASET]
    
  2. Update paths and parameters in config/seq2seq.yml and run train.py

    python train.py --config=../config/seq2seq.yml
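
Before launching a long run, it can help to check that the data paths in the config point at valid LMDB databases. This is only a sketch; the key names train_data_path and val_data_path are assumptions and should be matched to the actual keys in seq2seq.yml.

    import yaml
    import lmdb

    with open('../config/seq2seq.yml') as f:
        cfg = yaml.safe_load(f)

    for key in ('train_data_path', 'val_data_path'):   # assumed key names
        path = cfg.get(key)
        if not path:
            continue
        env = lmdb.open(path, readonly=True, lock=False)
        with env.begin() as txn:
            print(f'{key}: {path} -> {txn.stat()["entries"]} entries')
        env.close()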
    

Inference

  1. Train a model, or use the pretrained model (output/train_seq2seq/baseline_icra19_checkpoint_100.bin). When using the pretrained model, put the vocab_cache.pkl file into the LMDB train path.

  2. Run inference to generate a BVH motion file from speech text (a TSV file).

    python inference.py [PATH_TO_MODEL_CHECKPOINT] [PATH_TO_TSV_FILE]
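
The TSV file is the word-level speech transcript from the GENEA Challenge 2022 data; each row is assumed here to contain a start time, an end time, and a word, separated by tabs. An optional preview:

    import csv

    # Peek at the transcript that drives gesture generation (column layout assumed).
    with open('val_2022_v1_006.tsv', newline='') as f:
        rows = [row for row in csv.reader(f, delimiter='\t') if len(row) >= 3]

    words = [row[2] for row in rows]
    print(len(words), 'words, first few:', ' '.join(words[:15]))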
    

Sample result

Result video for val_2022_v1_006.tsv, rendered using the challenge visualization server:

val_2022_v1_006_generated.mp4

Remarks

  • I found that this model was not successful when all joints were considered, so I trained it only on upper-body joints, excluding fingers, and used fixed values for the remaining joints (via JointSelector in PyMo). You can easily try a different set of joints (e.g., full body including fingers) by specifying joint names in the target_joints variable in twh_dataset_to_lmdb.py. If you change target_joints, please also update data_mean and data_std in the config file; the mean and std values are printed to the console during the Make LMDB step above.
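
If you do change target_joints, the sketch below only illustrates how such per-feature statistics are typically computed (compute_mean_std and the random placeholder clips are illustrative, not code from this repository; use the values printed by twh_dataset_to_lmdb.py for the real config).

    import numpy as np

    def compute_mean_std(clips):
        # clips: list of (n_frames, n_features) arrays of rotation features.
        stacked = np.vstack(clips)
        mean = stacked.mean(axis=0)
        std = stacked.std(axis=0) + 1e-8   # avoid divide-by-zero for constant joints
        return mean, std

    clips = [np.random.randn(100, 36) for _ in range(3)]   # placeholder data
    data_mean, data_std = compute_mean_std(clips)
    print('data_mean:', np.round(data_mean, 5).tolist()[:6], '...')
    print('data_std :', np.round(data_std, 5).tolist()[:6], '...')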

License

Please see LICENSE.md

Citation

@INPROCEEDINGS{
  yoonICRA19,
  title={Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots},
  author={Yoon, Youngwoo and Ko, Woo-Ri and Jang, Minsu and Lee, Jaeyeon and Kim, Jaehong and Lee, Geehyuk},
  booktitle={Proc. of The International Conference on Robotics and Automation (ICRA)},
  year={2019}
}
