Inspiration

In the U.S. alone, over 28 million people are deaf or hard of hearing, and over 500,000 use American Sign Language as their primary form of communication, making ASL one of the most widely used languages in the United States today. Yet despite how many people rely on ASL, significant communication barriers remain, such as the inconvenience of depending on interpreters. We wanted a more efficient way to translate between ASL and English, and in trying to “bridge this gap,” we created BridgeSpeak.

What it does

BridgeSpeak is a smart glove that uses a combination of flex sensors, an accelerometer, and a Bluetooth module to convert American Sign Language into text. Once a hand motion has been detected and classified, BridgeSpeak displays the converted text on the UART interface of the Adafruit iOS app.

How I built it

BridgeSpeak uses five flex sensors, an accelerometer, a Bluetooth module, and an Arduino Nano to determine the user’s finger and hand movements. The flex sensors are connected to voltage dividers on a breadboard so their analog voltages can be measured. These analog values are then mapped to angles, which we use to differentiate the various ASL motions. The accelerometer measures the approximate orientation of the hand along the x, y, and z axes, since some letters in ASL can only be distinguished by a simple rotation of the hand. Finally, the data is sent through the serial port, converted into text, and displayed on the UART interface of the Adafruit iOS app via the Adafruit Bluefruit LE UART Friend Bluetooth module.
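As a rough illustration of the sensing pipeline, the sketch below reads the five flex-sensor voltage dividers, maps the raw readings to approximate bend angles, reads a three-axis accelerometer, and streams everything over the serial port. The pin assignments, calibration constants, and the use of an analog accelerometer are assumptions for illustration only; the actual wiring and sensor models may differ.

```cpp
// Illustrative Arduino sketch (assumed pin mapping and calibration values).
// Reads five flex-sensor voltage dividers and a three-axis analog
// accelerometer, converts the readings to rough angles, and streams
// them over the serial port.

const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // one divider per finger (assumed)
const int ACCEL_PINS[3] = {A5, A6, A7};         // x, y, z outputs (assumed)

// Raw ADC readings observed with a finger straight vs. fully bent.
// Placeholder calibration values, not measured ones.
const int FLEX_STRAIGHT = 760;
const int FLEX_BENT = 520;

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Convert each divider reading into an approximate 0-90 degree bend angle.
  int angles[5];
  for (int i = 0; i < 5; i++) {
    int raw = analogRead(FLEX_PINS[i]);
    angles[i] = map(raw, FLEX_STRAIGHT, FLEX_BENT, 0, 90);
    angles[i] = constrain(angles[i], 0, 90);
  }

  // Raw accelerometer readings give the hand's rough orientation,
  // which distinguishes letters that differ only by a rotation.
  int ax = analogRead(ACCEL_PINS[0]);
  int ay = analogRead(ACCEL_PINS[1]);
  int az = analogRead(ACCEL_PINS[2]);

  // Stream the measurements as a comma-separated line.
  for (int i = 0; i < 5; i++) {
    Serial.print(angles[i]);
    Serial.print(',');
  }
  Serial.print(ax); Serial.print(',');
  Serial.print(ay); Serial.print(',');
  Serial.println(az);

  delay(100);
}
```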

Challenges I ran into

The biggest challenge we ran into was hardcoding the mapping from sensor values to letters. This involved setting lower and upper limits for each finger's angle to approximate the hand position that corresponds to each ASL letter. However, this method was subject to large margins of error. A flex sensor does not spring back to a perfect 0 degrees once it has been bent, which offset the angle readings a little more every time we bent it. We also had to account for the fact that the user never forms exactly the same motion twice, so the angles varied from sign to sign. Together, these factors sometimes made the angle measurements for each finger vary drastically, which made certain letters very difficult to translate even when the motion looked accurate. To address this, we widened the error bounds for each finger, which introduced the opposite problem: the glove would occasionally report a letter even when we had not made the corresponding sign. Finding the ideal error bounds consequently became a time-consuming and, at times, frustrating process.
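The matching logic described above boils down to checking each finger's measured angle against a hardcoded window. Below is a minimal sketch of that idea; the template angles and the per-finger tolerances are placeholder values for illustration, not the bounds we actually tuned.

```cpp
// Illustrative letter matching against hardcoded angle templates.
// Template angles and tolerances are placeholders, not tuned values.

struct LetterTemplate {
  char letter;
  int angles[5];     // expected bend angle per finger, in degrees
  int tolerance[5];  // allowed +/- error per finger
};

// Example templates for two letters (angles are made up for illustration).
const LetterTemplate TEMPLATES[] = {
  {'A', {80, 85, 85, 85, 10}, {15, 15, 15, 15, 20}},
  {'B', { 5,  5,  5,  5, 70}, {15, 15, 15, 15, 20}},
};
const int NUM_TEMPLATES = sizeof(TEMPLATES) / sizeof(TEMPLATES[0]);

// Returns the matched letter, or '?' if no template fits within its bounds.
char matchLetter(const int measured[5]) {
  for (int t = 0; t < NUM_TEMPLATES; t++) {
    bool fits = true;
    for (int f = 0; f < 5; f++) {
      int err = measured[f] - TEMPLATES[t].angles[f];
      if (err < -TEMPLATES[t].tolerance[f] || err > TEMPLATES[t].tolerance[f]) {
        fits = false;
        break;
      }
    }
    if (fits) return TEMPLATES[t].letter;
  }
  return '?';
}
```

Widening the tolerances makes matching more forgiving of sensor drift but increases false positives, which is exactly the trade-off we kept running into.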

Another significant challenge was implementing Bluetooth. We initially hoped to use an HC-05 Bluetooth module, but realized it only works with Android devices. As a result, we switched to the Adafruit Bluefruit LE UART Friend, which does not yet have extensive documentation on its usage. A majority of our time was therefore spent understanding the software, learning how to configure the module, and figuring out how to write from the serial port to the Bluetooth module. It was a tedious, time-consuming process, but we eventually came away with a solid working understanding of the Bluetooth link.
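In UART data mode, the Bluefruit LE UART Friend forwards whatever bytes it receives on its RX line, so sending text to the phone reduces to writing to a serial port. The sketch below is a minimal illustration using SoftwareSerial; the pin choices and the 9600 baud rate are assumptions about our wiring and the module's default configuration.

```cpp
// Minimal illustration of sending translated text to the Adafruit
// Bluefruit LE UART Friend over a software serial port.
// Pin choices and baud rate are assumptions about the wiring and the
// module's default settings.

#include <SoftwareSerial.h>

const int BLE_RX_PIN = 10;  // Arduino pin wired to the module's TXO (assumed)
const int BLE_TX_PIN = 11;  // Arduino pin wired to the module's RXI (assumed)

SoftwareSerial bleSerial(BLE_RX_PIN, BLE_TX_PIN);

void setup() {
  Serial.begin(9600);     // USB serial monitor for debugging
  bleSerial.begin(9600);  // assumed default baud rate of the module
}

void sendLetter(char letter) {
  // Echo locally and forward to the phone; the Adafruit app's UART
  // screen displays whatever arrives over this link.
  Serial.println(letter);
  bleSerial.println(letter);
}

void loop() {
  // In the real firmware, sendLetter() would be called with the letter
  // recognized from the flex-sensor and accelerometer readings.
  sendLetter('A');
  delay(1000);
}
```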

Accomplishments that I'm proud of

We are proud of the fact that we were able to successfully translate hand motions from ASL into text, which allows people unfamiliar with sign language to understand it. Communication is so essential to the forging of relationships and overall social infrastructures, so the ability to contribute to this basic necessity is extremely exciting for us.

What I learned

We learned a lot about American Sign Language through this project, specifically how to say certain letters in the alphabet. In terms of technical knowledge, we gained a deeper understanding of how voltage dividers, flex sensors, and accelerometers work. We also learned more about serial communication, expanding beyond just the serial monitor and onto UART interfaces. Finally, we were able to pick up basic electrical engineering skills, such as soldering, hardware debugging, and circuit organization.

What's next for BridgeSpeak

We hope to eventually implement the entire alphabet, as well as add full words and phrases. In addition, rather than manually setting the angle and accelerometer ranges for each letter, we would like to incorporate a machine learning component to improve the accuracy and consistency of BridgeSpeak. Finally, it would be great to add “text-to-speech” to the product, simulating real-life conversations.

Link to one-minute video: https://www.youtube.com/watch?v=6GDroCfEseQ
