Inspiration

We were fascinated by the Leap Motion's technology and wanted to find a real-world application for it that could positively help people with disabilities.

What it does

Our system is designed to be a portable sign language translator. Our wearable device has a Leap Motion controller embedded in it, along with a Raspberry Pi. Each new sign language input is run through a machine learning model for gesture classification. The word or letter returned by the model is then output as spoken words through our Text-to-Speech engine.
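
A minimal sketch of that flow is below, with placeholder helpers standing in for the real Leap Motion, Azure ML, and Text-to-Speech pieces (the function names here are illustrative, not the ones in our code):

```python
# End-to-end flow: read a gesture, classify it, speak the result.
# The three helpers are placeholders for the real components described below.

def read_gesture_features():
    """Would return a feature vector built from a Leap Motion frame."""
    return [0.0] * 30

def classify_gesture(features):
    """Would send the features to our Azure ML classification web service."""
    return "hello"

def speak(word):
    """Would hand the word to the Text-to-Speech engine."""
    print("Speaking: " + word)

if __name__ == "__main__":
    speak(classify_gesture(read_gesture_features()))
```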

How we built it

There were four main subtasks in our build; a rough code sketch for each one follows the list.

1) Hardware: We set out to build a wearable around the Raspberry Pi and the Leap Motion controller. We created a wristband to house the Leap Motion, and soldered on a push button and an RGB LED as hardware inputs.

2) Text-to-Speech: We used Google's TTS API from Python to get comprehensible, smooth speech output.

3) Leap Motion: We wrote a Python project to collect the relevant hand-tracking data from the Leap Motion controller and store it as needed.

4) Azure Machine Learning: We trained a classification model on data we generated by writing Leap Motion readings to a .csv file. We then published the model as our own web API service, so that incoming Leap Motion data could be sent to it to classify a gesture.
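
For the hardware inputs, a rough sketch of the button and RGB LED on the Raspberry Pi, assuming the RPi.GPIO library; the pin numbers are illustrative, not the ones we actually soldered:

```python
# Button + RGB LED on the Pi: light the LED green while the button is held.
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17                       # illustrative BCM pin numbers
RED_PIN, GREEN_PIN, BLUE_PIN = 22, 23, 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup([RED_PIN, GREEN_PIN, BLUE_PIN], GPIO.OUT, initial=GPIO.LOW)

def set_color(red, green, blue):
    """Drive the RGB LED, e.g. green while capturing a gesture."""
    GPIO.output(RED_PIN, red)
    GPIO.output(GREEN_PIN, green)
    GPIO.output(BLUE_PIN, blue)

try:
    while True:
        pressed = GPIO.input(BUTTON_PIN) == GPIO.LOW  # button pulls the pin to ground
        set_color(False, pressed, False)              # green while the button is held
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```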
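
For the Text-to-Speech step, a minimal sketch assuming the gTTS package as the Python wrapper for Google's TTS; the playback command is also an assumption, any MP3 player on the device would do:

```python
# Turn a classified word into audible speech.
import os
from gtts import gTTS

def speak(text, filename="word.mp3"):
    tts = gTTS(text=text, lang="en")   # request synthesized speech from Google
    tts.save(filename)                 # write the MP3 to disk
    os.system("mpg123 " + filename)    # play it back (assumes mpg123 is installed)

speak("hello")
```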
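
For the Leap Motion data collection, a sketch assuming the official Leap Motion v2 Python SDK (its Leap module on the path); the features recorded per frame (palm and fingertip positions), the label column, and the file name are illustrative:

```python
# Record labelled Leap Motion frames to a .csv file for model training.
import csv
import time
import Leap

def collect(label, samples=100, outfile="gestures.csv"):
    controller = Leap.Controller()
    with open(outfile, "a") as f:
        writer = csv.writer(f)
        taken = 0
        while taken < samples:
            frame = controller.frame()
            if not frame.hands.is_empty:
                hand = frame.hands[0]
                row = [label,
                       hand.palm_position.x, hand.palm_position.y, hand.palm_position.z]
                for finger in hand.fingers:
                    row += [finger.tip_position.x,
                            finger.tip_position.y,
                            finger.tip_position.z]
                writer.writerow(row)
                taken += 1
            time.sleep(0.05)

collect("A")  # record 100 frames of the letter "A"
```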
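
For the classification call, a sketch of posting features to a published Azure ML Studio web service; the endpoint URL, API key, and column names are placeholders, and the response parsing depends on the output schema of the published model:

```python
# Send a feature vector to the Azure ML web service and return the scored label.
import json
import requests  # assumes the requests package is available

API_URL = "https://<region>.services.azureml.net/workspaces/<workspace>/services/<service>/execute?api-version=2.0"
API_KEY = "<api-key>"

def classify(features):
    # Request body in the format Azure ML Studio generates for its web services;
    # the column names must match the training data schema.
    body = {
        "Inputs": {
            "input1": {
                "ColumnNames": ["f%d" % i for i in range(len(features))],
                "Values": [[str(v) for v in features]],
            }
        },
        "GlobalParameters": {},
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + API_KEY,
    }
    response = requests.post(API_URL, headers=headers, data=json.dumps(body))
    response.raise_for_status()
    result = response.json()
    # The predicted letter/word is one of the returned output columns.
    return result["Results"]["output1"]["value"]["Values"][0][-1]
```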

Challenges we ran into

We ran into two main challenges:

1) Hardware compatibility: We assumed that because we could write Python code for the Leap Motion on Windows, we could easily port that code over to a Linux-based system such as a Raspberry Pi. As we prepared to port it, we found that there are no supported Leap Motion hardware drivers for ARM devices. To have something ready for the demonstration, we used an Arduino for the portable hardware inputs, but the Leap Motion itself had to stay plugged into a laptop.

2) Machine learning training: Many of the gestures in the American Sign Language alphabet are very similar, so our classification model ended up returning a lot of false matches. We believe that with more training data and a more reliable data source for gestures, we could produce a more accurate classification model.

Accomplishments that we are proud of

Although our machine learning model was not very accurate, we are still proud that we were able to produce speech output from gesture input. We also worked really well as a team, splitting up the tasks and sharing the design and problem solving while keeping a great team atmosphere.

What we learned

We learned more about machine learning and became more comfortable coding in Python.

What's next for Gesture.io

Up next, we would look for portable hardware that is compatible with the Leap Motion controller and continue training our classification model.
