Inspiration
We were inspired by how few people know ASL and by the lack of up-to-date resources available for the average person to learn from.
What it does
This app teaches users ASL and tests them on their knowledge. A 3D model of each letter of the alphabet appears so the user can visualize how to make the sign, and we then use AI to decide whether the user made the sign correctly. Users can learn in a free-practice mode as well as a quiz mode, where they can test their speed and accuracy.
How we built it
The app uses echoAR to display the 3D hand models, and a machine learning image classifier (trained with CreateML and run through Apple's Vision framework) to judge whether the user formed the sign correctly.
Challenges we ran into
It was hard to find 3D models for the ASL letters of the alphabet, and none of us were familiar with 3D modeling tools. One member had to spend a few hours making all of the models and getting comfortable with the software.
echoAR went down around 10pm PST on day 1 of the project, so we were stuck until it came back online. There was nothing we could do to prevent or work around the outage, which made it a huge challenge.
What we learned
Rigging and posing 3D models
Every sign in our app comes from a single hand model, which we rigged and then posed to form each letter.
Training models with CoreML
CoreML and the CreateML tool make training ML models quite easy. For the version in this project, we simply provided our training/testing data and let it train for ~15 hours.
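The training step described above can be sketched with CreateML's image classifier API. This is a minimal illustration, not our exact script: the directory paths are placeholders, and it assumes the dataset is organized as one folder per letter label, each containing photos of that handshape.

```swift
import CreateML
import Foundation

// Hypothetical dataset locations — one subfolder per label ("A", "B", ...),
// each holding images of that handshape.
let trainingDir = URL(fileURLWithPath: "Data/Training")
let testingDir = URL(fileURLWithPath: "Data/Testing")

// CreateML handles feature extraction and training internally.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Check accuracy on held-out data before shipping the model.
let evaluation = classifier.evaluation(
    on: .labeledDirectories(at: testingDir))
print("Classification error: \(evaluation.classificationError)")

// Export for use in the app via CoreML.
try classifier.write(to: URL(fileURLWithPath: "ASLClassifier.mlmodel"))
```

In practice the same workflow can be done entirely in the CreateML app's drag-and-drop UI, which is what makes this approach so approachable for a hackathon.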
Matching images with Vision
Using the model we trained with CoreML, we learned how to use Apple's Vision API to classify images against our labels.
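The classification step can be sketched as a Vision request wrapped around the exported model. This is a hedged outline rather than our production code: `ASLClassifier` stands in for the Swift class Xcode generates from the `.mlmodel` file, and the function name is our own.

```swift
import Vision
import CoreML

// Classify a single frame and report the best-matching letter label.
// `ASLClassifier` is the (assumed) Xcode-generated wrapper for the
// model exported from CreateML.
func classifySign(in image: CGImage,
                  completion: @escaping (String?) -> Void) throws {
    let coreMLModel = try ASLClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Results arrive sorted by confidence; the top observation
        // is our best guess at the letter being signed.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```

In the quiz flow, the returned label is simply compared against the letter the user was asked to sign.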
What's next for ASL Made Easy
Implement more ASL than just the alphabet.
It would be great to add support for whole words on top of the alphabet. Realistically, most ASL users rely on word signs rather than fingerspelling the alphabet to get by.
Add more learning modes.
We had other mode ideas, like a speed-run where you see how fast you can match the whole alphabet without pausing. Due to time constraints, we had to focus on the standard quiz mode for our MVP.