This fork adapts Gesticulator, the semantically-aware speech-driven gesture generation model, for integration with conversational agents in Unity.
Code for CVPR 2024 paper: ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis
Parcel project for visualizing gestures using three.js
Scripts for numerical evaluations for the GENEA Gesture Generation Challenge
This is the official implementation of the paper "Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents".
This is an official PyTorch implementation of "Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation" (IROS 2022).
This repository contains the gesture generation model from the paper "Moving Fast and Slow" (https://www.tandfonline.com/doi/full/10.1080/10447318.2021.1883883) trained on the English dataset
Deep Non-Adversarial Gesture Generation
Awesome Gesture Generation
[CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation
PATS Dataset. Aligned Pose-Audio-Transcripts and Style for co-speech gesture research
This repository contains data pre-processing and visualization scripts used in GENEA Challenge 2022 and 2023. Check the repository's README.md file for instructions on how to use scripts yourself.
Official Repository for the paper Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach published in ECCV 2020 (https://arxiv.org/abs/2007.12553)
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
[CVPR 2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation
The official implementation for ICMI 2020 Best Paper Award "Gesticulator: A framework for semantically-aware speech-driven gesture generation"
DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 (ICMI 2023, Reproducibility Award)
This is the official implementation for IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation".