Inspiration
As we continue to grow with technology, it’s always exciting to see the many opportunities we have to create something impactful that positively contributes to our communities. We have always been inspired by technology that brings people together, especially when it comes to learning! While there are many amazing language-learning apps that engage users, we wanted to take engagement and immersion a step further, so we chose to explore computer vision for our project, with the goal of encouraging users to learn American Sign Language! Our goal is not just to showcase emerging technology but to promote more inclusive communication and awareness of the Deaf community through engaging, real-time learning. At its core, our project is about connection: by helping users build simple ASL skills through a point system and interactive lessons, we hope to empower more inclusive communication. And inspired by the Harry Potter theme of this amazing hackathon, we designed the experience to feel a little more magical and immersive!
What it does
Our application starts on the home page, where users enter their name and choose a Hogwarts House. Once they’re ready, they move on to the Learn page, where their ASL learning journey begins! Users enter a sentence or word of their choice and are shown video tutorials on how to perform each sign or movement. After watching a tutorial, users recreate the movement in front of the camera. The system then determines whether the sign was accurate and, if so, lets the user move on to the next word in their sentence. Once they complete the sentence, they earn points for their house, and the competition between the Hogwarts Houses begins!
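The house-points mechanic described above can be sketched as a tiny scoreboard. The function names and point values here are illustrative, not our production code, which lives behind the API:

```python
# Minimal sketch of the house-points mechanic. Names and point values are
# illustrative assumptions, not our exact implementation.

HOUSES = ("Gryffindor", "Hufflepuff", "Ravenclaw", "Slytherin")

def new_scoreboard():
    """Start every house at zero points."""
    return {house: 0 for house in HOUSES}

def award_points(scoreboard, house, signs_completed, points_per_sign=10):
    """Add points to a house after a user completes a sentence of signs."""
    if house not in scoreboard:
        raise ValueError(f"Unknown house: {house}")
    scoreboard[house] += signs_completed * points_per_sign
    return scoreboard[house]

def leading_house(scoreboard):
    """The house currently winning the competition."""
    return max(scoreboard, key=scoreboard.get)

board = new_scoreboard()
award_points(board, "Ravenclaw", signs_completed=3)   # e.g. a 3-sign sentence
award_points(board, "Gryffindor", signs_completed=1)
print(leading_house(board), board["Ravenclaw"])  # Ravenclaw 30
```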
How we built it
Using the CVZone and MediaPipe libraries, we implemented computer vision to detect hand movements! We recorded our own data to train a model to recognize ASL signs and movements, then iterated until it performed accurately across a variety of signs. Using Python and JavaScript, we put together our little application with some exciting, magical UI/UX touches on the frontend. We also leaned on FastAPI for the backend and React for the frontend, both of which helped us a lot in creating this platform!
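To give a flavor of the preprocessing behind a landmark-based sign classifier like ours: MediaPipe/CVZone reports hand landmarks as pixel coordinates, which are typically normalized so the model doesn't care where the hand sits in the frame or how large it is. This helper is a sketch under that assumption, not our exact code:

```python
# Sketch: turning 21 MediaPipe/CVZone hand landmarks into a feature vector
# for a sign classifier. The helper name and details are illustrative.

def landmarks_to_features(landmarks):
    """Normalize 21 (x, y) hand landmarks so the resulting features are
    independent of the hand's position in the frame and its size."""
    # Use the wrist (landmark 0) as the origin.
    wrist_x, wrist_y = landmarks[0]
    shifted = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    # Scale by the largest absolute coordinate so values land in [-1, 1].
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [coord / scale for point in shifted for coord in point]

# Example with fake landmarks (a real frame would come from the webcam).
fake_landmarks = [(100 + i * 5, 200 + i * 3) for i in range(21)]
features = landmarks_to_features(fake_landmarks)
print(len(features))  # 42 values: 21 landmarks × (x, y)
```

A vector like this, one per frame, is what gets fed to the classifier during both training and live recognition.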
Challenges we ran into
It was our first time working with computer vision, so it took a while to understand how to set up the libraries and work with CVZone, but it was a great learning experience! We initially struggled to recognize movements (as opposed to static signs), but eventually learned to track them as sequences of frames saved as JPGs. Connecting our backend and frontend was also a little difficult, but seeing them work together was the most rewarding outcome once we finished!
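The frame-sequence idea above can be sketched as a fixed-length buffer: instead of classifying one frame at a time, keep the last N frames of landmark features and only classify once the window is full. The class name and window size below are hypothetical, not our exact implementation:

```python
from collections import deque

# Sketch of the "sequence of frames" approach for recognizing movements:
# a fixed-length window of per-frame features. Names and the window size
# are illustrative assumptions.

WINDOW_SIZE = 30  # roughly one second of video at 30 FPS

class MovementBuffer:
    def __init__(self, window_size=WINDOW_SIZE):
        # deque with maxlen drops the oldest frame automatically.
        self.frames = deque(maxlen=window_size)

    def add_frame(self, features):
        """Append one frame's landmark features (e.g. 42 floats)."""
        self.frames.append(features)

    def ready(self):
        """True once a full window is available for the classifier."""
        return len(self.frames) == self.frames.maxlen

    def window(self):
        """The stacked sequence: window_size × feature_length."""
        return list(self.frames)

buf = MovementBuffer(window_size=3)
for i in range(5):                # older frames fall off the front
    buf.add_frame([float(i)])
print(buf.ready(), buf.window())  # True [[2.0], [3.0], [4.0]]
```

The key design choice is `deque(maxlen=...)`: the buffer slides forward on every new frame, so the classifier always sees the most recent second of motion.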
Accomplishments that we’re proud of
We had a bit of a late start, but we were proud to put together this exciting project by the end of the hackathon! Our interest in computer vision inspired us to pursue this idea, and we’re especially proud to have built something that encourages interactive ASL learning while promoting more inclusive communication with the Deaf community.
What we learned
We learned a lot about using computer vision libraries and about the data-collection process for training our model to recognize ASL signs and movements! It was great to learn some of these modern technologies and combine them with our React and web-development experience to build a responsive, end-to-end system.
What’s next for Hogwarts for ASL
We hope to add more to Hogwarts for ASL by enhancing its magical elements! Over time, we also plan to expand our dataset with additional ASL signs and movements so the platform can support a broader and more comprehensive learning experience for anyone interested in learning :)