What it does
ASL Aid supports aspiring ASL learners by guiding them through the language, testing their retention, and offering features that assist with real-time communication. The app includes a homepage with signs grouped into general categories of essential ASL basics, covering both words and phrases. It also includes a scanner that uses a custom-trained machine learning model to detect signs of the ASL alphabet in real time and tell the user which letter was recognized. Another feature acts as a real-time ASL translator: learners can speak a word or phrase they want to translate and receive detailed images of the relevant signs. Additionally, several quizzes are available to test users' retention of the signs they are learning.
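The translator's lookup step can be pictured as a simple mapping from recognized words to stored sign illustrations. The sketch below is illustrative only; the dictionary entries, asset names, and the `translate` function are hypothetical and not taken from the app's actual code.

```swift
// Hypothetical sketch of the translator's lookup step: once speech
// recognition returns a transcribed phrase, each word is mapped to the
// name of a stored sign illustration. Entries here are placeholders.
let signImageNames: [String: String] = [
    "hello": "sign_hello",
    "thank": "sign_thank_you",
    "you": "sign_you"
]

/// Splits a transcribed phrase into words and returns the matching
/// sign-image asset names, skipping words with no stored illustration.
func translate(_ phrase: String) -> [String] {
    phrase
        .lowercased()
        .split(separator: " ")
        .compactMap { signImageNames[String($0)] }
}
```

For example, `translate("Hello you")` would return `["sign_hello", "sign_you"]`, and the app could then display those images in order.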
How I built it
ASL Aid was created in Swift using the Xcode IDE, with Apple's Core ML and Vision frameworks for image recognition and a custom image classification model trained using Microsoft Azure. It also uses Apple's Speech framework for live voice recognition in the real-time ASL translator.
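A per-frame classifier running on a live camera feed can flicker between letters from one frame to the next. One common way to stabilize this (not necessarily how ASL Aid does it) is to report a letter only once it dominates a sliding window of recent predictions. A minimal sketch, with hypothetical type and parameter names:

```swift
/// Hypothetical smoothing step for real-time letter detection: the
/// classifier fires once per camera frame, and the raw top-1 label can
/// flicker, so a letter is only reported once it appears often enough
/// in a sliding window of recent predictions. Names are illustrative.
struct PredictionSmoother {
    private var recent: [String] = []
    private let windowSize: Int
    private let threshold: Int

    init(windowSize: Int = 10, threshold: Int = 7) {
        self.windowSize = windowSize
        self.threshold = threshold
    }

    /// Records one frame's predicted letter and returns a stable letter
    /// once it occurs at least `threshold` times in the window, else nil.
    mutating func record(_ letter: String) -> String? {
        recent.append(letter)
        if recent.count > windowSize { recent.removeFirst() }
        let counts = Dictionary(recent.map { ($0, 1) }, uniquingKeysWith: +)
        return counts.first(where: { $0.value >= threshold })?.key
    }
}
```

Feeding each frame's label into `record` yields `nil` until one letter is consistently detected, which keeps the on-screen result from jumping around.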
Challenges I ran into
Throughout the development of this application I faced a significant number of challenges, having to learn how to implement features such as voice recognition and to work with new tools such as Azure, which I had never used before. Yet through testing and troubleshooting, I was able to successfully apply these technologies in the app. In addition, acquiring a large enough dataset for an accurate image classification model was difficult, and even after gathering and processing more than 80,000 images, there is still room for improvement.
Accomplishments that I'm proud of
I am proud of having successfully implemented several new technologies in this app that I had previously viewed as too difficult or intimidating. I am also proud that I was able to have a completed, functional product by the end of my first hackathon.
What I learned
I learned several new things throughout this hackathon and the development of this app, from how to use voice recognition within an app to how to collect a dataset and build a functional image classifier using a convolutional neural network. More generally, I learned that for any challenging problem I faced while building the app, there were always useful resources online, from articles to virtual communities, that offered a great way to find relevant answers to my questions. Even when the answer didn't come easily, thorough research and the sheer amount of resources available on the internet convinced me that there is almost no limit to what someone can learn. I also found that working on something I was passionate about made it much easier to push through the challenges I faced during development.
What's next for ASL Aid
In the future, I would like to add more basic word and phrase modules and quizzes to the app so users can reach a greater understanding of the language. I also want to keep growing the ASL dataset to potentially include signs beyond just the alphabet, as well as refining the convolutional neural network to get more accurate results from the image classifier.
