Inspiration

Over 70 million people worldwide use some form of sign language to communicate, yet communication remains difficult because the vast majority of the hearing population does not know sign language.

Within this communication gap, there are two sides:

  • Difficulty communicating sign language
  • Difficulty understanding vocal speech

Although both are part of the same challenge, we chose to focus on the former: translating sign language into vocal speech. (Most deaf people rely on lip reading and are fairly accurate with it; moreover, the two problems differ in magnitude.)

Our idea with Sign2All is to bridge this gap and increase accessibility by creating a physical wearable device that can translate sign language to vocal speech.

What it does

The physical modality of this is a glove (and, in the future, a set of gloves) worn by the individual. As they make hand gestures to communicate in sign language, sensors on the glove capture the details: an accelerometer and gyroscope determine motion and position in space, and flex sensors measure the bend of each finger.
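
As a rough sketch of one sampling step (the channel layout, scaling, and function names here are our illustration, not the exact firmware), the five flex readings and six inertial axes can be packed into a single feature vector:

```python
# Hypothetical sketch: pack one time-step of glove readings into an
# 11-dimensional feature vector (5 flex channels + 3 accel + 3 gyro axes).
# A 10-bit ADC (0-1023 counts) is assumed for the flex sensors.

ADC_MAX = 1023  # full-scale count of a 10-bit ADC

def make_feature_vector(flex_raw, accel, gyro):
    """flex_raw: five 0-1023 ADC counts; accel/gyro: (x, y, z) tuples."""
    flex = [r / ADC_MAX for r in flex_raw]   # normalize bends to 0-1
    return flex + list(accel) + list(gyro)   # 5 + 3 + 3 = 11 features

sample = make_feature_vector([512, 300, 800, 100, 1023],
                             (0.0, 0.1, 9.8), (0.5, -0.2, 0.0))
```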

This information is collected and processed through gesture recognition and ML classification to map each hand gesture to a corresponding word.
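
To give a flavor of the classification step, here is a toy nearest-centroid classifier over bend-fraction vectors. The gesture labels and training values are made up for illustration; the real pipeline would use a trained model over the full sensor feature set.

```python
# Toy nearest-centroid classifier: label a sample with the gesture whose
# class centroid is closest in Euclidean distance. Data here is invented.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the label whose centroid is closest to the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Example training data: per-finger bend fractions for two gestures.
training = {
    "hello": [[0.1, 0.1, 0.1, 0.1, 0.1], [0.2, 0.1, 0.0, 0.1, 0.2]],
    "yes":   [[0.9, 0.9, 0.9, 0.9, 0.9], [0.8, 1.0, 0.9, 0.9, 0.8]],
}
centroids = {label: centroid(vs) for label, vs in training.items()}
```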

If the user wants to translate the English output into another language, they can do so through a companion mobile application connected over Bluetooth.

How we built it

Challenges we ran into

  1. We initially used an Arduino (rather than a Raspberry Pi) to test inputs and calibrate sensor output values, but it couldn’t run the more robust clustering machine-learning algorithms we wanted, so we switched to a Raspberry Pi plus an ADC, hardware we hadn’t previously worked with.
  2. Each of the five flex sensors was inaccurate at first and had to be calibrated individually.
  3. Maintaining the integrity of our epidermis when soldering (many burns were faced here).
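
The per-sensor calibration in point 2 can be sketched as a two-point linear map: record each flex sensor's raw reading when the finger is straight and when fully bent, then normalize against those endpoints (the raw values below are invented for illustration).

```python
# Sketch of per-sensor two-point calibration: each finger gets its own
# linear map from raw ADC counts to a 0-1 bend fraction, since the five
# flex sensors drift and scale differently.

def make_calibrator(straight_raw, bent_raw):
    """Return a function mapping a raw reading to a clamped 0-1 bend."""
    span = bent_raw - straight_raw
    def calibrate(raw):
        return max(0.0, min(1.0, (raw - straight_raw) / span))
    return calibrate

# One calibrator per finger, built from example straight/bent readings.
per_finger = [make_calibrator(s, b)
              for s, b in [(310, 690), (295, 720), (300, 700),
                           (320, 680), (305, 710)]]
```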

Accomplishments that we're proud of

Even completing this project is massive for us. It was filled with troubleshooting and figuring things out in the moment: switching up wiring and the types of modules we were using so everything would integrate nicely, dealing with uncertainty and failure when our original plans didn’t pan out the way we hoped, and learning how to integrate all these subsections into one holistic project together :)

What we learned

We learned so much! The biggest things were definitely: interfacing with the Raspberry Pi, troubleshooting hardware and making sure every piece was connected the way we intended, and ensuring our signals were sensitive enough for our use case (improving hardware accuracy where it interfaces with the software).

What's next for Sign2All

  • ASL (and other sign languages) doesn’t map perfectly to English → can we collect other kinds of data (perhaps adhesive facial sensors, eye tracking, etc.) or improve the software (grammar completion) to improve the output?
  • Increase accuracy (e.g. through more sensitive sensors)
  • Expand offerings (additional gestures, sign languages, and translated languages)
