This project, Lip-To-Speech-Synthesis, focuses on synthesizing speech from lip movements: it takes silent video footage of a speaker and generates the corresponding speech audio. This technology has potential applications in fields such as speech therapy, computer graphics, and human-computer interaction.
The GIFs above showcase the output of the Lip-To-Speech-Synthesis system, demonstrating the conversion of lip movements into synthesized speech.
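To make the end-to-end flow concrete, below is a minimal, hypothetical Python sketch of such a pipeline: frames are read from a silent video, the mouth region is cropped, and a placeholder model maps the crop sequence to a waveform. The file name, crop heuristic, and `synthesize_speech` stub are illustrative assumptions, not this repository's actual code.

```python
# A minimal, hypothetical sketch of a lip-to-speech inference pipeline.
# File names, crop coordinates, and the model stub are illustrative
# placeholders, not this repository's implementation.
import cv2
import numpy as np


def extract_mouth_crops(video_path, crop_size=(96, 96)):
    """Read a silent video and return grayscale mouth-region crops per frame."""
    cap = cv2.VideoCapture(video_path)
    crops = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        # Placeholder crop: lower-central part of the frame where the mouth
        # usually sits; a real system would use a facial landmark detector.
        mouth = gray[h // 2:, w // 4: 3 * w // 4]
        crops.append(cv2.resize(mouth, crop_size))
    cap.release()
    return np.stack(crops) if crops else np.empty((0, *crop_size))


def synthesize_speech(mouth_crops, sample_rate=16000):
    """Placeholder for a learned lip-to-speech model.

    A trained model would map the crop sequence to a mel-spectrogram and
    then to a waveform via a vocoder; here we return silence of the right
    duration, assuming roughly 25 video frames per second.
    """
    duration_s = len(mouth_crops) / 25.0
    return np.zeros(int(duration_s * sample_rate), dtype=np.float32)


if __name__ == "__main__":
    crops = extract_mouth_crops("speaker.mp4")  # hypothetical input file
    waveform = synthesize_speech(crops)
    print(f"{len(crops)} frames -> {waveform.shape[0]} audio samples")
```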
This project is developed by Xavier Dias. For any inquiries or further information, you can reach me through the following channels:
Feel free to connect on these platforms for feedback or collaboration opportunities.
If you like this project, please give it a star ⭐
If you have any feedback, please reach out!
Contributions are always welcome!