As a team, we wanted to impact the world in our own way. Unfortunately, it is difficult to change the world overnight, so we settled on the idea of assisting people living with visual impairment and blindness.

What it does

We designed a user interface to help people with blindness navigate their environment. The project harnesses TensorFlow and computer vision to process camera footage in real time, identifying objects in a user's path. The UI then provides haptic feedback through a buzzer, notifying the user when an object is in their path and which direction to take next.

How we built it

Using a Raspberry Pi 4, we were able to run a compute-intensive program and return real-time outputs. The project required quite a bit of coding to handle a large set of edge cases, and we had to develop a UI that combined all of the sensor inputs to calculate the safest route for the user. All of the sensors (other than the PiCamera) had to be wired through a breadboard to interface with the Raspberry Pi. We also had to run calibration tests on the ultrasonic sensor to get accurate proximity measurements, since a single camera provides no depth perception.
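The ultrasonic math behind that calibration can be sketched as follows. An HC-SR04-style sensor reports an echo pulse whose duration is the round trip time of sound, so halving it and multiplying by the speed of sound gives the distance; the `calibration` factor is a hypothetical scale found by measuring objects at known distances. On the Pi itself, the pulse duration would come from timing the GPIO echo pin; here we show only the conversion.

```python
SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s at room temperature

def pulse_to_distance_cm(pulse_duration_s, calibration=1.0):
    """Convert an echo pulse duration (seconds) to a distance in cm.

    The pulse covers the round trip (out and back), so we halve it.
    `calibration` is an illustrative correction factor from testing.
    """
    round_trip_cm = pulse_duration_s * SPEED_OF_SOUND_CM_S
    return (round_trip_cm / 2) * calibration
```

A 2 ms echo pulse, for instance, corresponds to roughly 34.3 cm.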

Challenges we ran into

1.) We spent hours on this, but one of the hardest parts was interfacing all the sensor outputs with one another. All of the sensor inputs had to work in sync to provide accurate readings. Unfortunately, there is no simple way to do this, so we had to learn how to run multiple files concurrently with multithreading.
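A minimal sketch of that multithreading approach, with stand-in sensor functions instead of the real camera and ultrasonic code: each sensor runs in its own thread and pushes readings into a shared queue, so the main loop can fuse them without blocking on any single device.

```python
import queue
import threading
import time

readings = queue.Queue()

def sensor_worker(name, read_fn, interval_s, stop_event):
    """Poll one sensor and publish its readings until stopped."""
    while not stop_event.is_set():
        readings.put((name, time.time(), read_fn()))
        time.sleep(interval_s)

def run_demo():
    stop = threading.Event()
    threads = [
        threading.Thread(target=sensor_worker,
                         args=("ultrasonic", lambda: 42.0, 0.01, stop)),
        threading.Thread(target=sensor_worker,
                         args=("camera", lambda: "frame", 0.01, stop)),
    ]
    for t in threads:
        t.start()
    time.sleep(0.05)  # let both sensors produce a few readings
    stop.set()
    for t in threads:
        t.join()
    # Report which sensors contributed readings.
    return {name for name, _, _ in readings.queue}
```

In our real pipeline, the main loop would drain the queue and combine the latest camera detections with the latest proximity reading before issuing a haptic cue.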

2.) While the Raspberry Pi 4 is a powerful machine for its size, it still lacked the performance we needed. To accurately trace the environment and return timely commands to the user, we need around 20 fps from the processed footage in real time. While we were unable to attain that, we found solutions that will help our project develop in the future. The Google Coral is a Tensor Processing Unit that plugs into the Raspberry Pi for exactly this purpose, advertising up to a 1000% performance boost for TensorFlow workloads. We would have used one this weekend, but we would have had to wait for delivery.
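As a small illustrative sketch of how a frame-rate target like the 20 fps above can be checked, one can time N iterations of the inference step (stubbed out here) and divide:

```python
import time

def measure_fps(step_fn, n_frames=100):
    """Run `step_fn` n_frames times and return frames per second.

    `step_fn` stands in for one capture + inference iteration.
    """
    start = time.perf_counter()
    for _ in range(n_frames):
        step_fn()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed
```

Comparing the measured value against the 20 fps target tells you how far the pipeline is from real-time performance.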

Accomplishments that we're proud of

We worked as a team of two and still delivered a proof of concept with full software and hardware integration. While we got no sleep, we are proud of achieving a working product and demo!

What we learned

The biggest thing we learned during this project was working with hardware. While it wasn't the most difficult part of the project, it was the most unfamiliar territory for us. As computer science majors, we rarely get exposed to hardware, and it was amazing that we figured out how to interface with it properly.

What's next for Vision_Assist

We plan on ordering a Google Coral for a performance boost and, with more time, optimizing our software. Our product is also currently bulky, and we hope to shrink it down to make it more user-friendly.

Built With

  • computervision
  • hapticfeedbuzzer
  • picamera
  • python
  • raspberry-pi
  • tensorflow
  • ultrasonic-sensor
  • vncviewer