Knowledge of sign language is an important and useful skill. American Sign Language is one of the most widely used languages in the United States, and we have long been interested in learning it.
What it does
The Kinect app recognizes the signs the user performs and tells them whether each sign is recognized. The webapp component lets users explore more signs to learn and video call others to practice signing.
How we built it
We built the Kinect app by recording video data for various signs. The Kinect has a classifier that we trained to recognize certain signs. The webapp was built with Node.js, the video call feature was done through the Cisco Jabber API, and the whole thing was hosted on Heroku.
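To give a feel for the recognition step: our actual training used the Kinect's own gesture tooling, but the idea of "record labeled examples, then match new input to the closest learned sign" can be sketched with a simple nearest-centroid classifier. Everything below (the feature vectors, the sign names) is hypothetical and purely illustrative, not our production code.

```javascript
// Illustrative sketch only: a nearest-centroid classifier over
// flattened joint-position features, standing in for the Kinect
// SDK's gesture training tools.

// Average each sign's recorded examples into a single centroid.
function trainCentroids(examplesBySign) {
  const centroids = {};
  for (const [sign, examples] of Object.entries(examplesBySign)) {
    const dim = examples[0].length;
    const centroid = new Array(dim).fill(0);
    for (const ex of examples) {
      for (let i = 0; i < dim; i++) centroid[i] += ex[i] / examples.length;
    }
    centroids[sign] = centroid;
  }
  return centroids;
}

// Classify a new recording by Euclidean distance to each centroid.
function classify(centroids, features) {
  let best = null;
  let bestDist = Infinity;
  for (const [sign, centroid] of Object.entries(centroids)) {
    const dist = Math.sqrt(
      centroid.reduce((sum, c, i) => sum + (c - features[i]) ** 2, 0)
    );
    if (dist < bestDist) {
      bestDist = dist;
      best = sign;
    }
  }
  return best;
}

// Toy 2-D "joint" features for two hypothetical signs.
const centroids = trainCentroids({
  hello: [[1, 1], [1.2, 0.9]],
  thanks: [[-1, -1], [-0.8, -1.1]],
});
console.log(classify(centroids, [0.9, 1.0])); // "hello"
```

A real pipeline would use many more dimensions (every tracked joint across many frames) and a stronger classifier, which is exactly why recording enough data took so long.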
Challenges we ran into
We ran into problems with the Kinect not recognizing signs perfectly, since recording enough training data is a time-consuming process. We also hit problems with the Cisco API, especially because video calls couldn't be placed over the guest network, but with help from some great mentors we found a workaround!
Accomplishments that we're proud of
We are proud of making an app that could help people communicate better with each other, as well as working through the various challenges that we came across.
What we learned
We learned a lot about training data and gained experience working with Node.js.
What's next for ASL Kinect
It would be awesome to gather enough data for the Kinect to recognize most common signs; perhaps we could ask the open source community for help collecting that much data. We would also be interested in building out the community component of our project that we started with the video call feature.