Inspiration

We wanted to do something with a Myo, since it was hardware we had never worked with before. Because it gives data about the wearer's arm, such as position, orientation, and muscle engagement, we thought a good use case would be interpreting sign language.

What it does

It recognizes the sign language gestures a user performs and translates them into speech in real time.

How we built it

We built it using sensor data from Myo armbands to detect custom gestures, then used text-to-speech technology to output the result as spoken audio. We built two applications on top of this: a C++ application that builds on an existing hackathon project implementing custom gesture training and recognition (link), and a web app whose gesture recognition is built in-house.
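
As a rough sketch of that flow (the phrase map and classifyGesture stub below are placeholders rather than our actual code, and speech output uses the browser's built-in Web Speech API), the path from a recognized gesture to spoken audio looks like this:

```typescript
// Map of recognized gesture labels to spoken phrases (illustrative only).
const PHRASES: Record<string, string> = {
  hello: "Hello",
  thank_you: "Thank you",
};

// Placeholder for the recognizer; in the real app this is the Myo-based
// gesture classifier.
async function classifyGesture(snapshots: number[][]): Promise<string> {
  return "hello";
}

// Speak a phrase using the browser's built-in Web Speech API.
function speak(text: string): void {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Tie it together: classify the recorded snapshots, then speak the result.
async function onGestureRecorded(snapshots: number[][]): Promise<void> {
  const phrase = PHRASES[await classifyGesture(snapshots)];
  if (phrase) speak(phrase);
}
```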

For the web app, we use the Google Predictions API to match a gesture to a phrase. We record snapshots of the arm's orientation and EMG data every 0.25 seconds for up to two seconds, then send that array of snapshots to the API, which returns the closest gesture based on our custom training model.
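
A minimal sketch of that recording loop, assuming a placeholder readSnapshot helper in place of the real Myo data feed and a hypothetical /api/predict backend endpoint that forwards to the prediction model:

```typescript
// One snapshot of the arm's state: orientation plus the Myo's EMG channels.
interface Snapshot {
  orientation: { w: number; x: number; y: number; z: number };
  emg: number[]; // eight EMG channel readings
}

const SNAPSHOT_INTERVAL_MS = 250; // every 0.25 seconds
const MAX_SNAPSHOTS = 8;          // 8 x 0.25 s = two seconds total

// Placeholder for reading the latest values from the Myo.
function readSnapshot(): Snapshot {
  return { orientation: { w: 1, x: 0, y: 0, z: 0 }, emg: new Array(8).fill(0) };
}

// Record a gesture as a fixed-rate series of snapshots, then ask the backend
// endpoint (hypothetical; it forwards to the prediction model) for the
// closest matching gesture.
function recordGesture(): Promise<string> {
  return new Promise((resolve, reject) => {
    const snapshots: Snapshot[] = [];
    const timer = setInterval(async () => {
      snapshots.push(readSnapshot());
      if (snapshots.length >= MAX_SNAPSHOTS) {
        clearInterval(timer);
        try {
          const res = await fetch("/api/predict", { // hypothetical endpoint
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ snapshots }),
          });
          resolve((await res.json()).gesture);
        } catch (err) {
          reject(err);
        }
      }
    }, SNAPSHOT_INTERVAL_MS);
  });
}
```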

Challenges we ran into

Our main challenge was standardizing a user's action so that we could match recordings of variable length, as well as slowed-down or sped-up versions of the same gesture. We tried using Fourier transforms to do this, but it proved too challenging in the time we had, so we resorted to recording each action for a fixed two seconds and timing when the action starts and stops.
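
For context, one simple way to standardize recordings (a sketch only, not something we implemented) is to linearly resample each channel to a fixed number of samples so that fast and slow performances of the same gesture line up:

```typescript
// Linearly resample a single channel (e.g. one EMG channel over time)
// to a fixed length, so gestures performed at different speeds become
// comparable sample-for-sample.
function resample(channel: number[], targetLength: number): number[] {
  const out: number[] = [];
  const n = channel.length;
  if (n === 0) return out;
  for (let i = 0; i < targetLength; i++) {
    // Position in the original signal corresponding to output index i.
    const pos = (i * (n - 1)) / (targetLength - 1 || 1);
    const lo = Math.floor(pos);
    const hi = Math.min(lo + 1, n - 1);
    const frac = pos - lo;
    out.push(channel[lo] * (1 - frac) + channel[hi] * frac);
  }
  return out;
}
```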

Accomplishments that we're proud of

We managed to recognize gestures with greater than 50% accuracy using our custom-trained model.

What we learned

Signal processing is very hard.

What's next for Audiosyne

We want to use Fourier transforms to standardize a user's actions before sending the data up to the Predictions API, so that pattern matching improves for gestures performed at different speeds.
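
The rough idea, sketched below under our own assumptions rather than as a finished design, is that the magnitude spectrum of each channel is largely insensitive to where the gesture falls within the recording window, so a handful of low-frequency magnitudes could serve as a more standardized feature vector:

```typescript
// Compute the magnitudes of the first `k` DFT coefficients of a signal.
// Magnitudes discard phase, so the features change little when the gesture
// is shifted within the recording window.
function dftMagnitudes(signal: number[], k: number): number[] {
  const n = signal.length;
  const mags: number[] = [];
  for (let freq = 0; freq < k; freq++) {
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * freq * t) / n;
      re += signal[t] * Math.cos(angle);
      im += signal[t] * Math.sin(angle);
    }
    mags.push(Math.sqrt(re * re + im * im));
  }
  return mags;
}

// Example: build a feature vector from one channel of a recorded gesture
// before sending it to the prediction model (the data here is illustrative).
const features = dftMagnitudes([0.1, 0.4, 0.9, 0.7, 0.3, 0.2, 0.1, 0.0], 4);
```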
