Inspiration

A recent incident that went viral on social media involving the use of the sign-language "help" sign in public.

What it does

- Individuals with auditory impairments rely on hand signs to communicate, but it can be challenging for those unfamiliar with sign language to understand them. There is therefore a pressing need for systems that can recognize and interpret these signs and convey the information to people without auditory impairments.
- Our sign language recognition system addresses this communication barrier. It uses machine learning and computer vision to analyze and interpret the gestures made by individuals with auditory impairments. By accurately recognizing these signs and translating them into written text, the system enables smoother communication with people who are not proficient in sign language.

How we built it

- Collected the dataset from Kaggle.
- Built an image-classification model using image-processing and machine-learning tools such as TensorFlow and OpenCV.
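As a rough illustration of the model-building step, the sketch below defines a small CNN classifier in TensorFlow/Keras. The input size (64×64 grayscale) and the 26 output classes are assumptions based on common Kaggle sign-language alphabet datasets, not details from our actual code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_classifier(input_shape=(64, 64, 1), num_classes=26):
    """A small CNN for classifying static hand-sign images."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # low-level edge/shape features
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # higher-level hand-shape features
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                       # mitigate overfitting on a small dataset
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sign_classifier()
```

Training would then be a call to `model.fit` on the preprocessed Kaggle images and their integer labels.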

Challenges we ran into

- Limited dataset
- Accuracy issues

Accomplishments that we're proud of

- That we were able to come together and build a technical solution to a social problem.

What we learned

Technically, we learned how to implement an image-processing project on a real dataset. Beyond that, we learned about the different signs and symbols that exist, and about the challenges faced by people with auditory impairments.

What's next for Sign Language Detection

The next step is an app for real-time sign-language-to-text or sign-language-to-voice conversion.

Built With

tensorflow, opencv
