Inspiration
Every month, about 1,000 babies are diagnosed with permanent hearing loss, and many more gradually lose their hearing after birth. One of the hardest parts of permanent deafness is losing the ability to communicate with loved ones, and for people trying to learn sign language, the existing options are both inconvenient and expensive. That's why we created Beanstalk, a quick, easy, and intuitive AI-powered ASL learning platform.
What it does
Beanstalk was built on the principle of making the learning experience easy for the user, helping them learn basic ASL as quickly as possible. Our learning track is built from lesson and mastery modules, along with computer-vision modules that analyze signing from the camera feed on the user's device. Users can race their friends up the "beanstalk" learning track and compare progress on a leaderboard. The dashboard also provides an organized visual summary of the time the user has spent on the site, with dropdowns and customization options. All in all, the experience is tailored to making ASL easy to learn, bringing users closer to communicating with the ones they love.
How we built it
We built Beanstalk with a modern full-stack approach. On the frontend, we used React 18 with Vite for fast development and React Router DOM for smooth navigation. The interface is styled with Tailwind CSS and shadcn/ui for a clean, consistent design. For ASL recognition, we combined MediaPipe for hand tracking with ASL datasets to train a Random Forest model. On the backend, we used Node.js for APIs and Firebase Firestore for real-time data and authentication. We also added Chart.js for dashboards, Lottie animations for feedback, and the Canvas API for webcam processing.
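As a rough illustration of the recognition pipeline (not our exact training code), the sketch below shows how MediaPipe hand landmarks can be flattened into feature vectors and used to train a scikit-learn Random Forest. The dataset folder layout and file names are assumptions for the example.

```python
# Hedged sketch: extract MediaPipe hand landmarks and train a Random Forest.
# The dataset layout (one folder of images per sign) is an assumption.
import os
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmarks_from_image(path):
    """Return a 63-value feature vector (21 landmarks x 3 coords) or None."""
    image = cv2.imread(path)
    if image is None:
        return None
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return [coord for point in lm for coord in (point.x, point.y, point.z)]

X, y = [], []
for label in os.listdir("asl_dataset"):  # e.g. asl_dataset/A, asl_dataset/B, ...
    for name in os.listdir(os.path.join("asl_dataset", label)):
        features = landmarks_from_image(os.path.join("asl_dataset", label, name))
        if features is not None:
            X.append(features)
            y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2
)
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```

Working from landmark coordinates rather than raw pixels keeps the feature space small, which is what makes a lightweight model like a Random Forest practical here.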
Challenges we ran into
Throughout the development of Beanstalk, we inevitably ran into plenty of bugs and glitches, but we worked through them to build what we have now. The biggest barrier, though, was data: surprisingly, very few datasets exist for hand gestures and ASL. We found some covering simple words and letters, but training data for complex, dynamic gestures simply does not exist yet. We believe, however, that with a team ready to compile training data, the website could be production-ready very quickly, since we kept scalability a priority throughout.
Accomplishments that we're proud of
We are most proud of our lessons system: randomly generated question pools and orderings, shuffled answer choices, and, most importantly, the ASL character-detection model that checks whether the user performs the right sign. We're also very happy with the website's UI/UX and our overall design decisions.
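The real randomization logic lives in our React frontend, but conceptually it works like this small Python sketch (the function and field names are hypothetical):

```python
# Illustrative only: randomizing a lesson's question pool and answer order.
import random

def build_quiz(question_pool, num_questions=5):
    """Pick a random subset of questions and shuffle each one's answer choices."""
    selected = random.sample(question_pool, k=min(num_questions, len(question_pool)))
    return [
        {**q, "choices": random.sample(q["choices"], k=len(q["choices"]))}
        for q in selected
    ]

pool = [
    {"prompt": "Sign the letter A", "choices": ["A", "S", "T", "M"]},
    {"prompt": "Sign the letter B", "choices": ["B", "F", "W", "U"]},
    {"prompt": "Sign the letter C", "choices": ["C", "O", "E", "X"]},
]
print(build_quiz(pool, num_questions=2))
```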
What we learned
We learned a lot about ML: processing images with MediaPipe, integrating a model into a web app with Flask, and using AI tools to efficiently build an effective, visually appealing website. We also got better at keeping every group member on the same page, using GitHub effectively, and managing our time.
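For context on the Flask integration we experimented with, a minimal serving endpoint might look like the sketch below. The route name, payload format, and model file name are assumptions; the landmark extraction mirrors the training sketch above.

```python
# Minimal sketch: serving the sign classifier behind a Flask endpoint.
# Route, payload format, and model file name are assumptions for illustration.
import cv2
import joblib
import mediapipe as mp
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
model = joblib.load("asl_random_forest.joblib")  # hypothetical saved model

@app.route("/predict", methods=["POST"])
def predict():
    """Accept an uploaded frame, extract hand landmarks, return the predicted sign."""
    frame_bytes = np.frombuffer(request.files["frame"].read(), dtype=np.uint8)
    image = cv2.imdecode(frame_bytes, cv2.IMREAD_COLOR)
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return jsonify({"sign": None, "reason": "no hand detected"})
    lm = results.multi_hand_landmarks[0].landmark
    features = [[c for p in lm for c in (p.x, p.y, p.z)]]
    return jsonify({"sign": model.predict(features)[0]})

if __name__ == "__main__":
    app.run(debug=True)
```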
What's next for BeanStalk: AI-Powered Sign Language Learning Platform
Learning ASL typically requires expensive courses and extensive in-person training. We see BeanStalk becoming a far more affordable, accessible, and convenient alternative. That can only happen, though, once more research goes into effective models for more ASL words and phrases, and eventually models that put everything together, including body movement and facial expressions. Our tracks and modules can easily be expanded as the technology improves.
