At my work, we have regular customers who come in to order; however, they are deaf and communicate only via sign language. We have only two employees who may or may not be there, so oftentimes we have to just hand them a piece of paper to write the order down, which sets them apart as different. I wanted to create something that gives them the opportunity to hold ordinary interactions with others and makes their lives a little easier.
What it does
It reads the movement of a person's hand and arm as they sign and converts it to an audio output. This way, a person who is signing can hold a conversation with others without worrying about people not understanding them.
How we built it
Challenges we ran into
I thought I would just need the Myo and maybe an Arduino and a speaker. It turned out the Myo has low specificity in distinguishing muscle/arm/hand movements. We also ran into problems with the flex sensors giving different output values once they were mounted on the glove, so they had to be re-calibrated.
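The re-calibration essentially amounts to re-measuring each sensor's "straight" and "fully bent" readings on the glove and linearly re-mapping that span to a normalized range. A minimal sketch of that idea in Python (the calibration numbers below are hypothetical, not our actual measured values):

```python
def recalibrate(raw, raw_min, raw_max, out_min=0.0, out_max=1.0):
    """Linearly map a raw flex-sensor reading into a normalized range.

    raw_min / raw_max are the readings observed with the finger fully
    straight and fully bent *after* mounting the sensor on the glove.
    """
    raw = max(min(raw, raw_max), raw_min)  # clamp to the calibrated span
    return out_min + (raw - raw_min) * (out_max - out_min) / (raw_max - raw_min)

# Hypothetical example: on-glove readings now span 310..680 instead of 0..1023
print(recalibrate(495, raw_min=310, raw_max=680))  # midpoint of the span -> 0.5
```

This is the same re-mapping Arduino's `map()` performs, done in floating point so per-finger bend values stay comparable after the sensors shift.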
Accomplishments that we're proud of
Completing this project, and the things we learned from it.
What we learned
-Working with flex sensors and reading their output (they act as variable resistors in a voltage divider)
-Transferring real-time data through Firebase and displaying on a web page
-More about the Myo and its capabilities
-More on circuitry and microcontrollers
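Because a flex sensor is a variable resistor, wiring it in a voltage divider lets a microcontroller's ADC see its bend as a voltage. The sensor's resistance can then be recovered from the ADC reading. A sketch of that math, assuming a hypothetical setup: 5 V supply, a 10 kΩ fixed resistor on the ground side, and a 10-bit ADC:

```python
def flex_resistance(adc_reading, r_fixed=10_000, vcc=5.0, adc_max=1023):
    """Recover the flex sensor's resistance from an ADC reading.

    Assumed divider wiring: VCC -> flex sensor -> ADC pin -> R_fixed -> GND,
    so V_out = VCC * R_fixed / (R_flex + R_fixed), solved here for R_flex.
    """
    v_out = adc_reading * vcc / adc_max
    if v_out <= 0:
        raise ValueError("ADC reading must be positive")
    return r_fixed * (vcc - v_out) / v_out

# Bending the sensor raises its resistance, which lowers V_out at the ADC pin,
# so a smaller reading maps to a larger computed resistance.
print(flex_resistance(256))
```

The specific resistor value and wiring order are assumptions for illustration; the formula just inverts the standard voltage-divider equation.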
What's next for kssargen.github.io
Hopefully, follow up on the concept and broaden this into a device that can teach sign language, as well as fully develop the gesture recognition library. We also want to explore a new API we discovered during this project: ResponsiveVoice.JS