HackUMBC Fall 2019

  • 1st Place at the Hackathon
  • Winner of the Miner & Kasch - Best Data Science Hack
  • Winner of the Most Unique Hack category


Inspiration

We wanted to create a tool to help visually impaired people navigate the world.

What it does

Vision helps people with visual impairments navigate the world. In response to simple voice commands, it identifies the objects in front of you and reads text aloud.
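The command handling can be sketched roughly as follows. This is a hypothetical illustration, not the team's actual code: the function name and the trigger words are assumptions, since the post does not list the exact commands.

```python
# Hypothetical sketch: a spoken phrase (already transcribed to text)
# is mapped to one of the assistant's two features. Trigger words are
# illustrative only.

def handle_command(transcript: str) -> str:
    """Map a transcribed voice command to an action name."""
    text = transcript.lower()
    if "read" in text:
        return "read_text"         # run OCR on the current frame
    if "what" in text or "identify" in text:
        return "identify_objects"  # run object detection
    return "unknown"

print(handle_command("What is in front of me?"))  # → identify_objects
```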

How we built it

The glasses combine voice recognition, image processing, object recognition, and text recognition to provide a set of simple assistant features. We used a hot glue gun to stick a tiny camera onto the lens, and attached the Raspberry Pi board and a speaker to the side of the frame. The camera and speaker are wired to the Raspberry Pi, which we connected to a monitor for visual output.
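A minimal sketch of the software side of that pipeline, under stated assumptions: in the real build, the detector would wrap a TensorFlow object-detection model and the OCR step would wrap Tesseract (e.g. via pytesseract). Here both are passed in as plain callables, and `describe_frame` is a hypothetical name, so the control flow is visible without the hardware.

```python
# Hypothetical sketch of the frame-to-speech flow. `detect` stands in
# for a TensorFlow object detector and `ocr` for Tesseract; both are
# injected as callables so this runs without a camera or models.

def describe_frame(frame, mode, detect, ocr):
    """Return the sentence the speaker should say for this frame."""
    if mode == "identify_objects":
        labels = detect(frame)           # e.g. ["person", "chair"]
        if not labels:
            return "I don't see anything"
        return "I see " + ", ".join(labels)
    if mode == "read_text":
        text = ocr(frame)                # raw OCR output
        return text.strip() or "I don't see any text"
    return "Sorry, I didn't understand that"
```

On the Pi, `frame` would come from an OpenCV `VideoCapture` read, and the returned sentence would be handed to a text-to-speech engine driving the speaker.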

Challenges we ran into

We hit many optimization issues with the speed and physical spacing of the hardware. The project depended on many libraries, so one small dependency issue could cause the whole project to fail. We also had to come up with a functional design for the glasses, camera, and speaker, and adapt to minimal resources.

Accomplishments that we're proud of

After hours of hard work, we were overjoyed when we heard the speaker accurately describe what the glasses detected. We produced a working proof of concept in a short amount of time, and we managed to work around the resources we were missing.

What we learned

We gained experience handling unexpected problems, much as we would in a real-world production environment.

Built with

  1. TensorFlow
  2. Tesseract
  3. Raspberry Pi
  4. Python
  5. OpenCV