Inspiration
Shape & Sign is designed to teach American Sign Language (ASL) letters, phrases, and shape matching, serving as a resource for early learning and communication development. By integrating sign language with shape recognition, it helps children, particularly those with disabilities, strengthen both cognitive and motor skills. Matching hand gestures to corresponding shapes promotes fine motor coordination, spatial awareness, and pattern recognition, making learning more interactive and engaging.
What it does
Shape & Sign recognizes American Sign Language in real time and pairs that recognition with a shape-matching game.
How we built it
We created our own training data from hand features (landmarks) extracted with MediaPipe. We used TensorFlow to train a feed-forward neural network on static gestures such as the alphabet, and a long short-term memory (LSTM) model on dynamic gestures.
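The write-up doesn't detail how the landmarks were turned into training features, but a common approach is to flatten MediaPipe's 21 hand landmarks into a single vector and normalize it relative to the wrist so the classifier doesn't depend on where the hand sits in the frame. A minimal sketch of that idea (the wrist-relative normalization and scaling are assumptions, not the team's confirmed preprocessing):

```python
# Sketch: turning 21 MediaPipe hand landmarks into a feature vector.
# MediaPipe Hands reports each landmark as (x, y, z) coordinates; making
# them relative to the wrist (landmark 0) and rescaling gives features
# that are invariant to hand position and size in the frame.

def landmarks_to_features(landmarks):
    """Flatten 21 (x, y, z) landmarks into a wrist-relative 63-dim vector."""
    wrist_x, wrist_y, wrist_z = landmarks[0]
    rel = [(x - wrist_x, y - wrist_y, z - wrist_z) for x, y, z in landmarks]
    # Scale so the largest absolute coordinate becomes 1 (size invariance);
    # `or 1.0` avoids division by zero for a degenerate all-wrist input.
    scale = max(abs(v) for point in rel for v in point) or 1.0
    return [v / scale for point in rel for v in point]

# Example with 21 dummy landmarks spread along the x axis
dummy = [(0.5 + 0.01 * i, 0.5, 0.0) for i in range(21)]
features = landmarks_to_features(dummy)
print(len(features))  # 63 values, ready to feed the classifier
```

A vector like this would be one training row for the static-gesture network, or one frame of the sequence fed to the LSTM.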
Challenges we ran into
The most challenging part of this project was creating our own data from scratch: we had to repeatedly replace or concatenate new data to reach high accuracy. Detecting hand gestures from a real-time camera feed added a whole new level of difficulty. We also ran into front-end issues, since the mini-games for the alphabet and phrases had to stay simple and suitable for showcasing the sign language dataset.
Accomplishments that we're proud of
We successfully created high-quality training data that yielded 99% accuracy. In terms of real-time detection, we achieved 95% accuracy for the alphabet and 75% for common phrases.
What we learned
We learned about the architecture of the long short-term memory (LSTM) model and how to use it to predict time-series data, in this case a continuous stream of frames. Building the UI/UX for this project also gave us a deeper understanding of how to create user-friendly, visually appealing, high-performance applications while sharpening our technical and problem-solving skills.
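To feed continuous frames to an LSTM, the per-frame feature vectors typically get stacked into fixed-length windows. A minimal sketch of that step (the window length of 30 frames and the 63-feature frame size are illustrative assumptions, not the project's confirmed settings):

```python
# Sketch: building fixed-length frame sequences for an LSTM.
# Assumed shapes: each frame yields a 63-dim landmark feature vector,
# and the LSTM consumes windows of 30 consecutive frames.

def make_sequences(frames, window=30, step=1):
    """Slide a window over per-frame feature vectors -> list of sequences."""
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, step)]

# 45 dummy frames of 63 features each
frames = [[0.0] * 63 for _ in range(45)]
sequences = make_sequences(frames)
print(len(sequences), len(sequences[0]), len(sequences[0][0]))  # 16 30 63
```

Each resulting sequence has shape (window, features), which matches what a Keras LSTM layer expects as its `input_shape`, e.g. `(30, 63)` under these assumptions.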
What's next for Shape & Sign
We plan to incorporate facial recognition for emotion detection, which could improve the model's accuracy. We could also use a convolutional neural network to capture spatial features directly from images instead of relying on landmarks.