This project aims to develop an advanced system for recognizing and interpreting silent speech by analyzing lip movements. The project's innovation lies in combining a 3D CNN with a GRU, enabling the model to capture both the spatial and temporal dynamics of lip movements for silent speech recognition. The system has a range of potential applications, including improving accessibility for individuals with hearing impairments, enhancing communication in noisy environments, and integrating with virtual assistants, telecommunications, and security systems.
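The 3D-CNN-plus-GRU design described above can be sketched as follows. This is a minimal illustrative model, not the project's actual implementation: the layer sizes, the 28-class output (e.g. characters for CTC-style decoding), and the 75-frame, 64×96 mouth-crop input shape are all assumptions for the example.

```python
import torch
import torch.nn as nn

class LipReadingNet(nn.Module):
    """Hypothetical sketch: a 3D CNN front-end extracts spatiotemporal
    lip features, a GRU models their temporal dynamics, and a linear
    head emits per-frame class scores."""

    def __init__(self, num_classes=28, hidden_size=128):
        super().__init__()
        # 3D convolutions over (time, height, width) of the mouth crop;
        # pooling downsamples space but preserves the time axis.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=hidden_size,
                          num_layers=2, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, 1, frames, H, W) grayscale mouth-region clips
        feats = self.frontend(x)        # (B, 64, T, H', W')
        feats = feats.mean(dim=(3, 4))  # spatial average pool -> (B, 64, T)
        feats = feats.transpose(1, 2)   # (B, T, 64) for the GRU
        out, _ = self.gru(feats)        # (B, T, 2 * hidden_size)
        return self.head(out)           # (B, T, num_classes)

model = LipReadingNet()
clip = torch.randn(2, 1, 75, 64, 96)  # 2 clips, 75 frames, 64x96 crops
scores = model(clip)
print(scores.shape)  # torch.Size([2, 75, 28])
```

The 3D convolutions look at short windows of consecutive frames, so local motion cues (lip opening, closure) are encoded before the bidirectional GRU integrates them across the whole utterance.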
