_Created by Andrew Wang, Chetan Velivela, Jeff Alvarex, Dominic Reid_


Inspiration

People who work in the line of fire must often communicate with gestures, since speaking could give away their location or be drowned out by loud background noise. Gesturing also keeps the hands free, so the team can engage quickly if an unexpected situation arises (e.g. an enemy soldier suddenly appearing). A gesture-to-sound program makes this communication more efficient: listeners no longer need line of sight to the gestures (the gesturer can be far away or behind physical obstructions), and they no longer need to watch the gesturer's hands, leaving them more aware of their surroundings.

What it does

It uses the Myo armband to recognize gestures, and a Python program then uses a text-to-speech library to convert each gesture into a spoken command defined in a text file. Pre-made text files for various occupation classes can be downloaded from our website, so the sound played for each gesture can be changed. Custom classes can also be created on our website, letting the user decide exactly what sound is played for each recognizable gesture (other than unlocking and locking the Myo, which we have fixed to a downward fist and a downward open hand).
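The gesture-to-sound mapping above can be sketched as a loader for such a class file. This is a minimal illustration, assuming a simple `gesture=phrase` line format; the actual file layout used by the project may differ.

```python
# Hypothetical sketch: parse a "class" text file that maps Myo gesture
# names to the phrases that should be spoken for them. The file format
# (one "gesture=phrase" pair per line) is an assumption for illustration.

def load_gesture_class(path):
    """Return a dict mapping gesture names to spoken phrases."""
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            gesture, _, phrase = line.partition("=")
            mapping[gesture.strip()] = phrase.strip()
    return mapping


if __name__ == "__main__":
    import os
    import tempfile

    # Write a tiny sample class file and load it back.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
        tmp.write("fist=hold position\nwave_out=move forward\n")
    print(load_gesture_class(tmp.name))
    os.remove(tmp.name)
```

Swapping in a different class file then changes the phrase spoken for each gesture without touching the program itself.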

How we built it

Using the Myo Python library we defined the parameters for each gesture, then used the text-to-speech library gTTS to generate an MP3 of the sound for each command. The website hosts pre-made .txt files, each representing a usage class, and custom .txt files can be made that assign a command to each supported gesture. The Python program reads the chosen .txt file whenever it needs to generate a sound.
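The dispatch logic this describes can be sketched as a small state machine: reserved gestures lock and unlock the device, and any other recognized gesture triggers the mapped sound. The gesture names (`down_fist`, `down_open_hand`) and the `play` callback are illustrative stand-ins, not the real Myo API; in the actual program `play` would play back a gTTS-generated MP3.

```python
# Minimal sketch of the gesture -> sound dispatch, assuming illustrative
# gesture names. The reserved lock/unlock gestures are handled before any
# phrase lookup, and mapped gestures are ignored while locked.

class GestureDispatcher:
    LOCK_GESTURE = "down_fist"         # assumed name for the locking gesture
    UNLOCK_GESTURE = "down_open_hand"  # assumed name for the unlocking gesture

    def __init__(self, mapping, play):
        self.mapping = mapping  # gesture name -> phrase (from the .txt file)
        self.play = play        # callback that speaks/plays the phrase
        self.locked = True      # start locked until the unlock gesture is seen

    def on_gesture(self, name):
        """Handle one recognized gesture; return what was done, if anything."""
        if name == self.UNLOCK_GESTURE:
            self.locked = False
            return "unlocked"
        if name == self.LOCK_GESTURE:
            self.locked = True
            return "locked"
        if self.locked or name not in self.mapping:
            return None  # ignore gestures while locked or unmapped
        phrase = self.mapping[name]
        self.play(phrase)
        return phrase
```

Keeping the lock/unlock gestures outside the user-editable mapping ensures a custom class file can never accidentally override them.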

Challenges we ran into

Defining the parameters for each gesture on the Myo was quite difficult, as was working through the documentation for using Python with the Myo.

Accomplishments that we are proud of

Defining the parameters for each gesture correctly, and enabling users to create custom sounds for each gesture.

What we learned

How to use the Myo's data in Python, and how to trigger a sound event for each occurrence of a gesture.

What's next for GestureLead

Recognizing more gestures, and building an iOS app that connects to the Myo, plays a sound for each gesture, and lets users assign custom sounds to each gesture.
