My goal at Cal Hacks 4.0 was to use machine learning to help the blind, and I have created the technology that does just that!

What it does

I am using the Google Cloud Vision API to identify objects around a user, combined with AR to map distances to those objects.

A blind user can use this to detect walls and objects and know approximately how far away they are!
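To make the object-identification step concrete, here is a minimal Python sketch of pulling labels and bounding-box centers out of a Cloud Vision object localization response. The response shape follows the Vision REST API; the sample data, the `min_score` cutoff, and the helper name are my own illustrative choices, not code from the actual app.

```python
# Sample data shaped like a Cloud Vision objectLocalization response
# (field names follow the REST API; credentials and image bytes omitted).
sample_response = {
    "localizedObjectAnnotations": [
        {"name": "Chair", "score": 0.92,
         "boundingPoly": {"normalizedVertices": [
             {"x": 0.25, "y": 0.25}, {"x": 0.75, "y": 0.25},
             {"x": 0.75, "y": 0.75}, {"x": 0.25, "y": 0.75}]}},
        {"name": "Door", "score": 0.40,
         "boundingPoly": {"normalizedVertices": [
             {"x": 0.0, "y": 0.0}, {"x": 0.5, "y": 0.0},
             {"x": 0.5, "y": 1.0}, {"x": 0.0, "y": 1.0}]}},
    ]
}

def detected_objects(response, min_score=0.6):
    """Return (label, center_x, center_y) for confident detections.
    The center, in normalized image coordinates, is what an AR hit
    test can use to find the object's position in 3D space."""
    results = []
    for obj in response.get("localizedObjectAnnotations", []):
        if obj["score"] < min_score:
            continue  # drop low-confidence detections
        verts = obj["boundingPoly"]["normalizedVertices"]
        cx = sum(v.get("x", 0.0) for v in verts) / len(verts)
        cy = sum(v.get("y", 0.0) for v in verts) / len(verts)
        results.append((obj["name"], cx, cy))
    return results

objs = detected_objects(sample_response)
print(objs)
```

Filtering by score matters here: announcing every marginal guess out loud would overwhelm the user, so only confident detections get labeled in AR.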

How I built it

My first step was to create an AR video feed. From there, I needed to figure out how to measure and map objects in a 3D plane. I then implemented Google's Vision API for object recognition in that 3D plane, finishing up with 3D AR labels and text-to-speech.
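Once the AR hit test returns a world-space point for a detected object, the measuring step reduces to simple vector math. A sketch in Python (the coordinates would really come from the AR session's camera transform and hit-test result; the values below are made up):

```python
import math

def distance_to_object(camera_pos, object_pos):
    """Straight-line distance in meters between the AR camera and a
    world-space point returned by an AR hit test. Both positions are
    (x, y, z) tuples in the AR session's world coordinate system."""
    return math.dist(camera_pos, object_pos)

# e.g. camera at the origin, a wall hit 2 m ahead and 1 m to the side
d = distance_to_object((0.0, 0.0, 0.0), (1.0, 0.0, 2.0))
print(f"{d:.2f} m")  # sqrt(1^2 + 2^2) = sqrt(5) ≈ 2.24 m
```

That distance string is exactly what the text-to-speech step reads out alongside the Vision API's label.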

Challenges I ran into

My biggest challenges were the following:

  1. Using machine learning to detect real-world objects in a 3D plane
  2. Measuring distances to real-world objects
  3. Implementing the Google Cloud Vision API
  4. Drawing 3D text and lines to detected objects

Accomplishments that I'm proud of

Current vision APIs that assist the blind only describe what is around them on a 2D plane. I have added a 3D understanding! My goal is to help millions around the world with this technology!

What I learned

I learned how to use the Google Cloud ML API and was surprised at how accurate it is for object recognition. I now want to use other Google AI APIs in future builds!

What's next for Vision AI 3D

I want to update distances in real time as the user moves toward or away from detected objects in the 3D plane. This will help blind people navigate a room.
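One way this could work is to recompute each object's distance every frame but only re-announce it when it has changed meaningfully, so the speech output doesn't become a constant stream. A hedged sketch in Python; the function, the 0.5 m threshold, and the message format are all hypothetical design choices, not the app's actual behavior:

```python
import math

def updated_announcement(label, last_spoken, camera_pos, object_pos,
                         threshold=0.5):
    """Return (message, spoken_distance). The message is None when the
    distance hasn't changed by more than `threshold` meters since the
    last announcement, so text-to-speech stays quiet between updates."""
    d = math.dist(camera_pos, object_pos)
    if last_spoken is None or abs(d - last_spoken) > threshold:
        return f"{label}, {d:.1f} meters", d
    return None, last_spoken

# First frame: nothing spoken yet, so the wall gets announced.
msg, last = updated_announcement("Wall", None, (0, 0, 0), (0, 0, 3.0))

# A few frames later the user has moved only 0.2 m closer:
# below the threshold, so msg2 is None and nothing is spoken.
msg2, last2 = updated_announcement("Wall", last, (0, 0, 0.2), (0, 0, 3.0))
```

The threshold trades responsiveness for calm output; a navigation aid probably wants it tuned with real users.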
