Inspiration

An increasing number of visually impaired people are becoming immobile due to their physical conditions. We aim to help them move more freely with modern technology.

What it does

It uses an Intel RealSense depth camera to reconstruct a 3D view of the surroundings and the YOLOv3 object detection algorithm to give the user open-space walking suggestions and the distance to surrounding objects (or potential obstacles) that they could run into.
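As a rough illustration of how the depth camera supplies per-pixel distance, here is a minimal sketch using the pyrealsense2 library, assuming a stream configuration (640x480 at 30 FPS) that we did not specify in the writeup:

```python
# Hypothetical sketch: streaming aligned depth and color frames from a RealSense camera.
import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth pixels to the color image so detections and distances share coordinates.
align = rs.align(rs.stream.color)

try:
    frames = pipeline.wait_for_frames()
    frames = align.process(frames)
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    color_image = np.asanyarray(color_frame.get_data())
    # Distance (in meters) to the pixel at the center of the image.
    h, w, _ = color_image.shape
    center_distance = depth_frame.get_distance(w // 2, h // 2)
    print(f"Distance straight ahead: {center_distance:.2f} m")
finally:
    pipeline.stop()
```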

Detailed slides and results are posted in the GitHub repo below.

How we built it

We use the OpenCV library with the YOLOv3 object detection algorithm to get both object labels and distance information in 3D space.
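A minimal sketch of that combination, assuming standard YOLOv3 config/weight files ("yolov3.cfg", "yolov3.weights") and a depth frame aligned to the color image as in the snippet above; the function name and thresholds are illustrative, not our exact implementation:

```python
# Hypothetical sketch: YOLOv3 detection via OpenCV's DNN module, with the distance to
# each detection read from the aligned RealSense depth frame at the box center.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
output_layers = net.getUnconnectedOutLayersNames()

def detect_with_distance(color_image, depth_frame, conf_threshold=0.5):
    h, w = color_image.shape[:2]
    blob = cv2.dnn.blobFromImage(color_image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(output_layers)

    results = []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                # YOLO reports box centers and sizes relative to the image size.
                cx, cy = int(detection[0] * w), int(detection[1] * h)
                # Distance in meters at the center of the detected box.
                distance = depth_frame.get_distance(cx, cy)
                results.append((class_id, confidence, distance))
    return results
```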

Challenges we ran into

How to detect objects accurately and run neural-network inference in a reasonable time. How to convey concise information within a reasonable amount of time while still keeping a good information flow.
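To check whether inference fits the latency budget for real-time guidance, a simple timing sketch like the one below can be used; the placeholder frame and file names are assumptions, not our actual benchmark:

```python
# Hypothetical sketch: timing a single YOLOv3 forward pass.
import time
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder frame
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

start = time.perf_counter()
net.forward(net.getUnconnectedOutLayersNames())
elapsed = time.perf_counter() - start
print(f"Inference took {elapsed * 1000:.1f} ms")
```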

Accomplishments that we're proud of

With the technology we built, several team members were able to walk blindfolded through the hallway at the hackathon. The results were promising.

What we learned

How to use the OpenCV library for DNN-based object detection, and how to extract distance information from a depth image.

What's next for Sight beyond Vision: Helping the Visually Impaired See

Make the model more accurate and make the information flow more efficient.

Built With
