Inspiration
We wanted to build technology that turns sign language into text, to help people who are deaf or newly deaf communicate with loved ones who do not yet know sign language. For this project we used American Sign Language (ASL).
What it does
Signify is an application that uses AI models to translate ASL signs into text and also runs facial emotion detection. By combining the detected emotion with the translated words, it can convey both what the signer is saying and how they are feeling!
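As a rough illustration of how the two signals could be combined into one message (the names and shapes below are hypothetical, not Signify's actual API), here is a small TypeScript sketch:

```typescript
// Hypothetical shapes; the field names are illustrative, not Signify's actual API.
interface SignifyResult {
  words: string[]; // words decoded from the ASL recognizer
  emotion: string; // label from the facial-emotion model, e.g. "happy"
}

// Combine the decoded words with the detected emotion into one readable message.
function formatMessage(result: SignifyResult): string {
  return `${result.words.join(" ")} (signed while looking ${result.emotion})`;
}

console.log(formatMessage({ words: ["nice", "to", "meet", "you"], emotion: "happy" }));
// -> "nice to meet you (signed while looking happy)"
```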
How we built it
- Next.js (TypeScript) (our framework and application base)
- Tailwind CSS (front-end styling)
- shadcn/ui (prebuilt UI components for buttons, modals, forms, etc.)
- Framer Motion (smooth animations and transitions)
- OpenCV.js (real-time image processing, e.g. hand detection, in the browser)
- TensorFlow.js (runs machine learning models in-browser, e.g. for emotion or sign recognition; see the sketch after this list)
- Flask, Python (lightweight backend for serving APIs and logging data)
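A minimal sketch of the in-browser recognition step, assuming a TensorFlow.js LayersModel served from a hypothetical `/models/asl_letters/model.json` with a 64×64 RGB input (both are assumptions; the actual model files and input shape are not described here):

```typescript
import * as tf from "@tensorflow/tfjs";

// Assumed model location and input size; the real Signify model may differ.
const MODEL_URL = "/models/asl_letters/model.json";

// Classify the current webcam frame and return the index of the most likely sign.
async function classifyFrame(video: HTMLVideoElement, model: tf.LayersModel): Promise<number> {
  const logits = tf.tidy(() => {
    const frame = tf.browser.fromPixels(video);              // HxWx3 tensor from the <video> element
    const resized = tf.image.resizeBilinear(frame, [64, 64]); // assumed model input size
    const input = resized.toFloat().div(255).expandDims(0);   // normalize to [0, 1], add batch dim
    return model.predict(input) as tf.Tensor;
  });
  const prediction = logits.argMax(-1);
  const classId = (await prediction.data())[0];
  prediction.dispose();
  logits.dispose();
  return classId;
}

async function main() {
  const model = await tf.loadLayersModel(MODEL_URL);
  const video = document.querySelector("video") as HTMLVideoElement;
  console.log("Predicted sign class:", await classifyFrame(video, model));
}

main();
```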
Challenges we ran into
After we got our backend working and our front end developed, we could not integrate the two into a seamless product.
Accomplishments that we're proud of
- AI (a largely functional model that can read ASL)
- Front-end design (clean, and built efficiently with few errors)
What we learned
It's important to keep the coding languages used across project parts organized and compatible so integration goes smoothly; this is where we struggled.
What's next for Signify
- Fixing integration issues (converting between languages and keeping them consistent with each other)
- Expanding the data set (the AI's detection has some inconsistency issues, and the data set we used lacked enough sign examples)
- Adding more variety to the detection (reading motion for letters that require it, and supporting more sign language models)
Built With
- flask
- framer-motion
- opencv
- python
- shadcn
- tailwind
- tensorflow
- typescript