Inspiration
One of our close friends is deaf and struggles with accessibility in everyday settings like restaurants. We wanted to create an app that could narrow the gap between deaf and hearing people.
What it does
This project is an AI-powered application that interprets American Sign Language (ASL) letters from uploaded videos. The ultimate goal is to produce an accurate, sentence-level text transcription of what was signed. By combining AI model training with a full-stack implementation, the application seeks to bridge communication gaps for ASL users, making their messages accessible to non-signing audiences.
How we built it
We used React.js, TypeScript, and Vite for the frontend, which handles interaction between the user and the app; Node.js for the backend; and Python with pandas, NumPy, and TensorFlow for the AI model, as sketched below.
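To give a feel for how the pandas/NumPy side feeds the TensorFlow model, here is a minimal sketch of a data-loading step. The CSV layout (a label column plus flattened 28x28 pixel columns, in the style of the Sign Language MNIST dataset) and the file name are assumptions for illustration, not necessarily our exact pipeline.

```python
# Hypothetical loader: assumes a label column plus flattened
# 28x28 grayscale pixel columns per row (Sign Language MNIST style).
import numpy as np
import pandas as pd

def load_letter_frames(csv_path: str):
    """Load labeled ASL letter frames into arrays TensorFlow can consume."""
    df = pd.read_csv(csv_path)
    labels = df["label"].to_numpy()                  # integer class per row
    pixels = df.drop(columns=["label"]).to_numpy()   # flattened pixel values
    # Reshape to (samples, height, width, channels) and scale to [0, 1].
    images = pixels.reshape(-1, 28, 28, 1).astype("float32") / 255.0
    return images, labels

images, labels = load_letter_frames("sign_mnist_train.csv")  # hypothetical path
```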
Challenges we ran into
Training the AI model was a challenge, since it had trouble differentiating between some similar signs. Switching to a convolutional neural network gave us a much better fit.
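The sketch below shows the kind of convolutional model we moved to, assuming 28x28 grayscale inputs and 26 letter classes; the specific layer counts and sizes here are illustrative, not the exact architecture we tuned during the hackathon.

```python
# Minimal Keras CNN sketch for ASL letter classification.
# Input shape and layer sizes are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    # Convolutions learn local hand-shape features, which is what
    # helped separate visually similar letters.
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.3),  # regularization against overfitting
    tf.keras.layers.Dense(26, activation="softmax"),  # one class per letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(images, labels, epochs=10, validation_split=0.1)
```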
Accomplishments that we're proud of
Despite none of our team members having prior experience training AI models, we managed to build a functional model that can interpret our signs.
What we learned
We learned a great deal about training and testing AI models. Talking to the mentors helped us understand how to fit our data better and which model architectures suit our use case.
What's next for Talk to the Hand
We plan to extend this project to save past recordings and support live camera capture, so users can revisit earlier signs. For this, we may experiment with Firebase for user authentication and account creation.
Built With
- javascript
- numpy
- pandas
- pinata
- python
- react
- tensorflow
- typescript