Inspiration

What inspired us was the desire to build an innovative solution that could have a big impact on people's lives. Most accessibility devices for the visually impaired are text-to-speech based, which is not ideal for people who may be both visually and hearing impaired (such as the elderly). Putting yourself in someone else's shoes is important, and we feel that if we can give the visually impaired a helping hand, it would be an honor.

What it does

The proof of concept we built is separated into two components. The first is an image-processing pipeline that uses OpenCV and Tesseract as an OCR, taking an image as input and producing text as output. That text then feeds the second part: a working 2-by-3 braille cell that converts any text into a braille pattern and actuates the specific servo motors representing each letter, with a half-second delay between letters. The outputs were then adapted to drive servo motors, which provide tactile feedback.
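The text-to-braille step can be sketched in Python. This is a minimal illustration using the standard six-dot alphabetic braille patterns; the `actuate` callback and the `play` helper are stand-ins for the real servo-driving code, with the half-second delay between letters as described above.

```python
import time

# Standard six-dot braille patterns, dots numbered 1-6
# (1-2-3 down the left column, 4-5-6 down the right).
BRAILLE = {
    'a': {1}, 'b': {1, 2}, 'c': {1, 4}, 'd': {1, 4, 5}, 'e': {1, 5},
    'f': {1, 2, 4}, 'g': {1, 2, 4, 5}, 'h': {1, 2, 5}, 'i': {2, 4},
    'j': {2, 4, 5}, 'k': {1, 3}, 'l': {1, 2, 3}, 'm': {1, 3, 4},
    'n': {1, 3, 4, 5}, 'o': {1, 3, 5}, 'p': {1, 2, 3, 4},
    'q': {1, 2, 3, 4, 5}, 'r': {1, 2, 3, 5}, 's': {2, 3, 4},
    't': {2, 3, 4, 5}, 'u': {1, 3, 6}, 'v': {1, 2, 3, 6},
    'w': {2, 4, 5, 6}, 'x': {1, 3, 4, 6}, 'y': {1, 3, 4, 5, 6},
    'z': {1, 3, 5, 6},
}

def text_to_braille(text):
    """Map each letter to the set of braille dots to raise;
    unknown characters map to an empty (blank) cell."""
    return [BRAILLE.get(ch, set()) for ch in text.lower()]

def play(cells, actuate=print, delay=0.5):
    """Drive each cell in turn, pausing half a second between letters."""
    for dots in cells:
        actuate(dots)
        time.sleep(delay)
```

In the prototype, `actuate` would raise the servos listed in `dots` and lower the rest.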

How we built it

We built this project using an Arduino Uno, six LEDs, six servo motors, and a Python script that does the image processing with OpenCV and Tesseract.

Challenges we ran into

Besides syntax errors, the main challenge on the LED side was converting the text to braille. Once that was overcome, and after some troubleshooting of minor errors such as type comparisons, this part of the project was complete. On the image-processing side, getting the algorithm to properly extract the text was the main challenge.

Accomplishments that we're proud of

We are proud of having completed a proof of concept, which we isolated into two components. Consolidating these two parts is only a matter of straightforward further work; the two working components are the fundamental core of the project, and we consider it the start of something revolutionary.

What we learned

We learned to iterate quickly and apply lateral thinking. Instead of being stuck in a small paradigm of thought, we learned to be more creative and find alternative solutions that we might not have initially considered.

What's next for Helping Hand

  • Arrange everything in one Android app, so the product is capable of mobile use.
  • Develop a neural network that throws out false text recognitions (these usually look like a few meaningless characters).
  • Provide an API that connects our glove to other apps, so that the user can, for example, read messages.
  • Consolidate the completed project components by implementing Bluetooth communication between a laptop that processes the images with OpenCV & Tesseract and the Arduino Uno that actuates the servos.
  • Design the actual glove product: implement wire management, build an armband holder for the Uno with a battery pack, and position the servos.
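The laptop-to-Arduino link could look roughly like this sketch. It assumes the `pyserial` package and a made-up one-byte-per-letter protocol (bits 0-5 of each byte mark braille dots 1-6); the real wire format over Bluetooth is still to be designed.

```python
def encode_cell(dots):
    """Pack a set of braille dot numbers (1-6) into one byte,
    with bit (i - 1) set for dot i."""
    byte = 0
    for d in dots:
        byte |= 1 << (d - 1)
    return byte

def send_text(port_name, cells):
    """Stream one byte per braille cell to the Arduino over a serial port."""
    import serial  # pyserial; imported lazily so encode_cell is testable without it
    with serial.Serial(port_name, 9600, timeout=1) as link:
        for dots in cells:
            link.write(bytes([encode_cell(dots)]))
```

On the Arduino side, each received byte would be unpacked bit by bit to decide which of the six servos to actuate.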