Inspiration

People who are mute often struggle to communicate in day-to-day life, since most people do not understand American Sign Language. We were inspired by the desire to empower mute individuals by bridging communication gaps and fostering global connections.

What it does

VoiceLens uses advanced lip-reading technology to transcribe speech and translate it in real-time, allowing users to choose their preferred output language for seamless interaction.

How we built it

We built the frontend in ReactJS and used the Symphonic Labs API to read the user's lips and silently transcribe their speech. We then used Groq's Llama 3 to translate the transcript between various languages, and Google's Text-to-Speech API to voice the translated sentences.
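As a rough illustration of the translation step, here is a minimal sketch of how a request to Groq's OpenAI-compatible chat-completions API might be assembled. The function name, prompt wording, and model choice are our own assumptions for this sketch, not VoiceLens's actual code.

```typescript
// Hypothetical sketch: assemble the request body that asks Llama 3
// (hosted on Groq) to translate a lip-read transcript into the
// user's chosen output language.

interface ChatMessage {
  role: "system" | "user";
  content: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
}

function buildTranslationRequest(
  transcript: string,
  targetLanguage: string
): ChatRequest {
  return {
    model: "llama3-8b-8192", // one of Groq's hosted Llama 3 models (assumed choice)
    messages: [
      {
        role: "system",
        content: `Translate the user's text into ${targetLanguage}. Reply with the translation only.`,
      },
      // The transcript produced by the lip-reading step is passed through as-is.
      { role: "user", content: transcript },
    ],
  };
}
```

In the app, a body like this would be POSTed to Groq's chat-completions endpoint, and the reply handed to the Text-to-Speech call to be voiced aloud.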

Challenges

We faced challenges in accurately capturing and interpreting lip movements, as well as ensuring fast and reliable translations across diverse languages.

Accomplishments that we're proud of

We're proud of achieving high accuracy in lip-reading and successfully providing real-time translations that facilitate meaningful communication.

What we learned

We learned the importance of collaboration between technology and accessibility, and how innovative solutions can make a real difference in people's lives.

What's next for VoiceLens

We plan to enhance our language offerings, improve speed and accuracy, and explore partnerships to expand the app's reach and impact globally.

Built With

- ReactJS
- Symphonic Labs API
- Groq (Llama 3)
- Google Text-to-Speech API
