Inspiration

While wandering over to the food section where goodies were provided, one of our team members picked up a wafer packet and noticed sign language symbols printed on the back. That small detail puzzled us and sparked our ideation. BOOM! That is how we decided to build an application around sign language. The main motive of this project stems from the desire to empower individuals who are deaf and mute to communicate more effectively with others. Sign language is a beautiful and expressive form of communication, but it often requires both parties to be familiar with it. By creating technology that can understand and translate sign language, we aim to enhance inclusivity and create a more accessible environment for everyone.

What it does

As the name suggests, VAANI means voice; fittingly, this application gives deaf and mute users their inner voice in a technical way. "Sign Recognition for Deaf and Mute Communication" uses AI to instantly translate American Sign Language (ASL) gestures into text or speech. By capturing and processing ASL signs in real time, the system enables seamless communication between ASL users and non-signers. With a user-friendly interface and customizable features, it fosters inclusivity and accessibility for the deaf and mute community, paving the way for meaningful interactions.

How we built it

We built "Sign Recognition for Deaf and Mute Communication" by leveraging computer vision and machine learning techniques. We curated a diverse dataset of ASL gestures, trained an SSD MobileNet detection model, and developed a real-time video-processing pipeline. The system captures ASL signs, translates them into text or speech, and offers a user-friendly interface with customizable options. This project not only empowers communication but also embraces inclusivity, offering a glimpse into the transformative potential of technology in fostering understanding and connection.
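The translation step can be sketched roughly as follows. This is a minimal illustration rather than our exact code: it assumes the SSD MobileNet model has already produced per-frame class indices and confidence scores, and the `LABEL_MAP` below is a hypothetical stand-in for our real label map. The helper filters detections by a confidence threshold and keeps the best-scoring sign for the frame.

```python
# Illustrative sketch: turn raw detector outputs (class ids + scores) into a
# recognized sign. LABEL_MAP and the threshold are assumptions, not our real values.

LABEL_MAP = {1: "hello", 2: "thanks", 3: "yes", 4: "no", 5: "iloveyou"}

def best_sign(class_ids, scores, threshold=0.6):
    """Return the highest-confidence sign above the threshold, or None."""
    best = None
    for cid, score in zip(class_ids, scores):
        if score >= threshold and cid in LABEL_MAP:
            if best is None or score > best[1]:
                best = (LABEL_MAP[cid], score)
    return best[0] if best else None
```

In the full pipeline, the returned sign is appended to the running transcript and optionally passed to speech synthesis.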

Challenges we ran into

During development, we encountered challenges in curating a diverse ASL gesture dataset, ensuring real-time video-processing efficiency, and optimizing model accuracy. Implementing a customizable vocabulary posed its own complexities, and balancing recognition accuracy against speed was another hurdle. Additionally, integrating speech synthesis for spoken-language translation required careful implementation. Despite these challenges, our team's dedication and problem-solving led to a robust "Sign Recognition for Deaf and Mute Communication" system that enhances communication accessibility and fosters empathy.
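Balancing accuracy against speed is largely about stabilizing noisy per-frame predictions. One common approach, shown here as an illustrative sketch rather than our exact implementation, is a majority vote over a sliding window of recent frame predictions: a sign is only emitted once it wins a strict majority of the window.

```python
from collections import Counter, deque

def make_smoother(window=5):
    """Return a closure that smooths per-frame sign predictions by majority vote."""
    history = deque(maxlen=window)

    def smooth(prediction):
        history.append(prediction)
        sign, count = Counter(history).most_common(1)[0]
        # Emit a sign only once it dominates more than half the window.
        return sign if count > window // 2 else None

    return smooth
```

A larger window gives more stable output at the cost of latency, which is exactly the accuracy-versus-speed trade-off we wrestled with.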

Accomplishments that we're proud of

Our accomplishment is a real-time "Sign Recognition for Deaf and Mute Communication" system that enables seamless interaction between ASL users and non-signers. With accurate recognition, customizable features, and a user-friendly design, we have made a meaningful stride towards inclusive communication and a more empathetic society.

What we learned

Throughout the project, we gained valuable insights into computer vision, machine learning model fine-tuning, real-time video-processing optimization, and speech synthesis integration. Additionally, we deepened our understanding of the communication barriers faced by the deaf and mute community, emphasizing the importance of empathy and inclusive technology solutions.

What's next for Vaani: Conversational App for the Deaf and Mute

Next, Vaani will expand its impact by collaborating with accessibility organizations, integrating with communication tools, and supporting additional sign languages. We aim to enhance gesture-based applications, further empowering those with communication challenges. Our journey continues towards a more inclusive and connected world through innovative technology solutions.
