Inspiration
Communication is a struggle for everyone, and we wanted to help people communicate with each other in any way we could. While discussing interpersonal and vocational challenges, we found that most people cannot understand ASL (American Sign Language), a language that uses hand gestures to articulate words, phrases, and sentences. That is why we created GenASL.
What it does
An online transcriber that captures pictures of hand gestures and identifies what they mean in English using machine learning.
How we built it
We built the app in Python 3, using several APIs to train the machine to recognize the specific hand gestures used in ASL. We used OpenCV to capture video input and MediaPipe to track hand movements and locate specific parts of the hand as they move. We then built our own regression model, calibrated against datasets we created ourselves, which classifies the live input by finding the known hand gesture it most closely resembles. We also created a website with popular front-end technologies such as React.js, Bootstrap, and Node.js.
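The capture-and-track pipeline described above can be sketched roughly as follows. This is a minimal illustration, not our exact code: it assumes MediaPipe's Hands solution and OpenCV's default webcam device, and the `landmarks_to_features` helper (which flattens MediaPipe's 21 hand landmarks into a feature vector for the classifier) is a hypothetical name for illustration.

```python
def landmarks_to_features(landmarks):
    """Flatten a list of (x, y, z) hand landmarks into one feature vector.

    MediaPipe reports 21 landmarks per hand, so a full hand yields
    21 * 3 = 63 values.
    """
    return [coord for point in landmarks for coord in point]


def main():
    # Heavy dependencies are imported here so the helper above stays
    # dependency-free.
    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    capture = cv2.VideoCapture(0)  # default webcam
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # MediaPipe expects RGB frames; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            hand = results.multi_hand_landmarks[0]
            features = landmarks_to_features(
                [(lm.x, lm.y, lm.z) for lm in hand.landmark]
            )
            # `features` would then be passed to the gesture classifier.
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    capture.release()


if __name__ == "__main__":
    main()
```

Flattening the landmarks into a fixed-length vector is what lets a simple regression-style classifier consume the live video input.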
Challenges we ran into
When we first started the project, we hardcoded the dataset used to fit a linear regression model and classify the live hand gestures from the video input. However, we soon realized this was holding back our progress, and that it would be more efficient to build our own regression model on data we collected ourselves.
Calibrating the regression model took a long time because the datasets we initially created were insufficient.
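To illustrate the "classify by similarity" idea from our model in miniature, here is a nearest-centroid sketch: it averages the feature vectors of each gesture in the training set and labels live input by the closest mean. This is a simplified stand-in under assumed interfaces, not our actual calibrated model; the class and method names are hypothetical.

```python
import numpy as np


class GestureClassifier:
    """Toy nearest-centroid classifier for flattened hand-landmark vectors."""

    def __init__(self):
        self.centroids = {}

    def fit(self, features, labels):
        """Store the mean feature vector (centroid) of each gesture label."""
        features = np.asarray(features, dtype=float)
        for label in set(labels):
            rows = features[[i for i, l in enumerate(labels) if l == label]]
            self.centroids[label] = rows.mean(axis=0)

    def predict(self, feature_vector):
        """Return the label whose centroid is closest to the input vector."""
        vec = np.asarray(feature_vector, dtype=float)
        return min(
            self.centroids,
            key=lambda label: np.linalg.norm(vec - self.centroids[label]),
        )
```

With a model like this, insufficient or noisy training samples pull the centroids away from where real gestures land, which is exactly why the calibration step was so time-consuming for us.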
After creating the machine learning algorithm and finishing most, if not all, of the website, we ran into technical issues linking the Python 3 machine learning code to the JavaScript/HTML/CSS architecture of the website.
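One common way to bridge a Python model and a JavaScript front end is to expose the model behind a small HTTP endpoint that the website can call with `fetch`. The sketch below uses only the Python standard library and is an assumption about how such a bridge could look, not our actual integration; `classify` is a hypothetical stand-in for the trained model.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def classify(features):
    """Hypothetical stand-in for the trained gesture model."""
    return "A"


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body: {"features": [...]} sent by the front end.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"letter": classify(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet


def serve(port=8000):
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()
```

The website's JavaScript would then POST the landmark features as JSON and render the returned letter, keeping the Python and JavaScript sides fully decoupled.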
Accomplishments that we're proud of
Overall, we are proud of what we accomplished as a group in building GenASL. In particular, the supervised machine learning algorithm, built almost from scratch, was a pleasant surprise in terms of both accuracy and precision. We are also proud of how the website turned out, with modern features made possible by React.js and Bootstrap.
What we learned
We learned that orchestrating a well-executed project and building impressive features takes a lot of planning. We also learned how to build a regression algorithm from scratch to analyze and classify data.
What's next for GenASL
Features we would like to add to GenASL include mobile app support, integration with a REST API, and further improved accuracy in the machine learning model.


