[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
Official code for the CVPR 2022 paper "Depth-Aware Generative Adversarial Network for Talking Head Video Generation".
Official code for "GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis" (ICLR 2023).
(CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation.
The authors' implementation of the "Neural Head Reenactment with Latent Pose Descriptors" (CVPR 2020) paper.
Freeform Body Motion Generation from Speech
PyTorch implementation for NED (CVPR 2022). It can be used to manipulate the facial emotions of actors in videos based on emotion labels or reference styles.
Code for the IJCAI 2021 paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion".
Crystal TTVS engine is a real-time audio-visual multilingual speech synthesizer with a 3D expressive avatar.
The PyTorch implementation of our WACV 2023 paper "Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis".
AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Canvas-based talking-head model driven by viseme data.
A Ukrainian alternative to Talking Ben.
Talking Avatar: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
AI Avatar/Anchor: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Animated Characters: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.