Inspiration

At my work, we have regular customers who come in to order but are deaf and communicate only through sign language. We have only two employees who may or may not be there, so we often have to just hand them a piece of paper to write their order down, which sets them apart as different. I wanted to create something that gives them the opportunity to have regular interactions with others and makes their lives a little easier.

What it does

It reads the position of a person's hand and arm as they sign and converts the gesture to an audio output. This way, a person who signs can hold a conversation with others without worrying about people not understanding them.

How we built it

It is a glove equipped with five flex sensors (one per finger) to read the hand gesture. The glove is paired with a Myo armband that reads the movement of the person's arm; combined, the two produce a complete reading of the full gesture. An Arduino Uno microcontroller processes the feedback from the flex sensors, classifying each finger as fully open, partially open, or fully closed, and combines the five states to determine the overall status of the hand.

That data is then sent to a website, built with HTML, CSS, and JavaScript, that provides a little more info on the project but, more importantly, outputs what the gesture means in both audio and visual form, along with real-time tracking of the sensors. This is accomplished by transferring the data into the site through Firebase and speaking the meaning of the gesture with an API called ResponsiveVoice.JS.
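To make the classification step concrete, here is a minimal sketch of what the Arduino side does. The pin assignments, ADC thresholds, and serial transport are illustrative assumptions, not the calibrated values from the actual glove:

```cpp
// Minimal sketch of the per-finger classification on the Uno.
const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // one flex sensor per finger

const int OPEN_BELOW   = 300;  // reading below this -> fully open (assumed)
const int CLOSED_ABOVE = 700;  // reading above this -> fully closed (assumed)

// Classify one finger as Open, Partially open, or Closed.
char classify(int reading) {
  if (reading < OPEN_BELOW)   return 'O';
  if (reading > CLOSED_ABOVE) return 'C';
  return 'P';
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Combine the five finger states into a code like "OPPCC" describing
  // the whole hand, then send it upstream for gesture matching.
  char hand[6];
  for (int i = 0; i < 5; i++) {
    hand[i] = classify(analogRead(FLEX_PINS[i]));
  }
  hand[5] = '\0';
  Serial.println(hand);
  delay(100);  // roughly ten readings per second
}
```

Each loop produces a five-character hand code that, together with the Myo's arm reading, identifies the full gesture.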

Challenges we ran into

I thought I would just need the Myo, and maybe an Arduino and a speaker, but it turns out the Myo has low specificity when distinguishing individual muscle, arm, and hand movements. We also ran into problems with the flex sensors giving different output values once they were mounted on the glove, so they had to be re-calibrated.
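For reference, the re-calibration boiled down to capturing each sensor's actual range once it was on the glove and normalizing readings against that range. Here is a minimal sketch of the idea; the five-second window and 0-100 scale are arbitrary illustration choices, not the values from our code:

```cpp
// Record each sensor's min/max while the wearer opens and closes their
// hand, then map raw readings onto a common 0-100 scale.
const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};
int lo[5], hi[5];

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 5; i++) { lo[i] = 1023; hi[i] = 0; }
  unsigned long start = millis();
  while (millis() - start < 5000) {  // 5-second calibration window
    for (int i = 0; i < 5; i++) {
      int r = analogRead(FLEX_PINS[i]);
      if (r < lo[i]) lo[i] = r;
      if (r > hi[i]) hi[i] = r;
    }
  }
}

void loop() {
  for (int i = 0; i < 5; i++) {
    int r = analogRead(FLEX_PINS[i]);
    // map() rescales [lo, hi] -> [0, 100]; constrain() clips outliers.
    int pct = constrain(map(r, lo[i], hi[i], 0, 100), 0, 100);
    Serial.print(pct);
    Serial.print(i < 4 ? '\t' : '\n');
  }
  delay(100);
}
```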

Accomplishments that we're proud of

This project, and the things we learned from it.

What we learned

- Working with flex sensors and their output (they act as variable resistors; see the sketch after this list)
- Transferring real-time data through Firebase and displaying it on a web page
- More about the Myo and its capabilities
- More about circuitry and microcontrollers
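
On the flex-sensor point above: each sensor changes resistance as it bends, so it is read through a voltage divider against a fixed resistor. A minimal sketch of the math, assuming a 10 kΩ divider resistor and the Uno's 5 V supply:

```cpp
// A flex sensor is a variable resistor, read through a voltage divider
// with a fixed resistor to ground:
//   Vout = VCC * R_FIXED / (R_FIXED + R_flex)
// Rearranging turns a raw ADC reading back into the sensor's resistance.
const float VCC     = 5.0;      // Uno supply voltage
const float R_FIXED = 10000.0;  // assumed 10k divider resistor

float flexResistance(int adc) {
  float vout = adc * VCC / 1023.0;       // 10-bit ADC reading -> volts
  return R_FIXED * (VCC - vout) / vout;  // divider equation, solved for R_flex
}
```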

What's next for kssargen.github.io

Hopefully, follow up on my concept and broaden this into a device that can teach sign language, as well as fully develop the gesture recognition library. Also, further explore a new API discovered during this project: ResponsiveVoice.JS.

Built With

Flex sensors, Arduino Uno, Myo armband, HTML, CSS, JavaScript, Firebase, ResponsiveVoice.JS