Inspiration
Approximately 12 million people aged 40 and over in the United States have a vision impairment, including 1 million who are blind. As a result, they cannot see the facial expressions and body language that help convey our emotions.
What it does
Using machine learning and computer vision, this pre-trained model detects American Sign Language (ASL) signs and outputs the translation to the user in real time.
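The real-time output step can be sketched as picking the most probable sign from the classifier's per-class scores and suppressing low-confidence frames. This is a minimal illustration, not the project's actual code: the `SIGNS` vocabulary and the confidence threshold are hypothetical.

```python
import numpy as np

# Hypothetical label set and threshold -- illustrative only, not the
# project's actual sign vocabulary or tuning.
SIGNS = ["hello", "thanks", "yes", "no"]

def translate(probabilities, threshold=0.6):
    """Map a model's per-class probabilities to a sign label.

    Returns None when no class clears the threshold, so a real-time
    display can simply skip uncertain frames instead of flickering
    between wrong guesses.
    """
    idx = int(np.argmax(probabilities))
    if probabilities[idx] < threshold:
        return None
    return SIGNS[idx]

print(translate(np.array([0.05, 0.85, 0.05, 0.05])))  # thanks
print(translate(np.array([0.30, 0.30, 0.20, 0.20])))  # None
```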
How we built it
We trained and built the model using TensorFlow. To detect keypoints of the palms, hands, and face we used MediaPipe; we then use those keypoints to predict human emotions.
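Before the keypoints can be fed to a classifier, they have to be turned into a fixed-length feature vector per frame. The sketch below shows one common way to do that, assuming MediaPipe Holistic's standard landmark counts (33 pose, 468 face, 21 per hand); the function name and zero-filling strategy are our illustration, not the project's exact code.

```python
import numpy as np

# MediaPipe Holistic landmark counts (each landmark has x, y, z).
POSE_LANDMARKS = 33
FACE_LANDMARKS = 468
HAND_LANDMARKS = 21  # per hand

def flatten_keypoints(pose, face, left_hand, right_hand):
    """Concatenate all (x, y, z) landmarks into one 1-D feature vector.

    A missing part (e.g. a hand out of frame) is replaced with zeros so
    the vector length stays constant across frames -- a classifier needs
    a fixed input size.
    """
    parts = [
        (pose, POSE_LANDMARKS),
        (face, FACE_LANDMARKS),
        (left_hand, HAND_LANDMARKS),
        (right_hand, HAND_LANDMARKS),
    ]
    chunks = []
    for landmarks, count in parts:
        if landmarks is None:
            chunks.append(np.zeros(count * 3))
        else:
            chunks.append(np.asarray(landmarks, dtype=np.float64).reshape(-1))
    return np.concatenate(chunks)

# Example: one frame with the left hand out of view.
pose = np.random.rand(POSE_LANDMARKS, 3)
face = np.random.rand(FACE_LANDMARKS, 3)
right = np.random.rand(HAND_LANDMARKS, 3)
vec = flatten_keypoints(pose, face, None, right)
print(vec.shape)  # (1629,) = (33 + 468 + 21 + 21) * 3
```

Stacking these per-frame vectors over a short window gives the sequence input a TensorFlow model can be trained on.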
Challenges we ran into
The biggest challenge was finding a proper ASL dataset covering enough different words to train the ASL model.
Accomplishments that we're proud of
We were able to build a model that predicts human emotion and body language with good accuracy.
What we learned
We learned how to detect human emotion, body language, and American Sign Language using computer vision.
What's next for Untitled
We want to rebuild the ASL model with a bigger dataset and improve its accuracy.
