Inspiration

We were inspired by the lack of visualization software that seamlessly fits into the lives of visually impaired individuals. A couple of specific use cases we came up with: looking for a specific item on a shelf at a supermarket, finding the door handle of a car, and picking up an item (a pen, a passport, etc.) from a desk.

What it does

BEVO uses computer vision and deep learning to identify a desired object and guide the user's hand to it with audio feedback.

How I built it

The pipeline runs as follows (rough sketches of each step appear after the list):

  1. Use speech-to-text to let the user name the desired object.
  2. Use TensorFlow to locate that object in the webcam feed.
  3. Use hand tracking to follow the current palm location.
  4. Compute the distance between the two points and set the feedback sound's volume as a function of that distance.
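
Step 1, sketched below, just grabs the object name by voice. The `SpeechRecognition` package and Google's free web recognizer are assumptions for illustration, not necessarily what the hack used:

```python
# Hypothetical sketch of step 1: capture the target object name by voice.
# Assumes the SpeechRecognition package (pip install SpeechRecognition).
import speech_recognition as sr

def listen_for_target() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
        audio = recognizer.listen(source)
    # recognize_google uses Google's free web API; raises UnknownValueError on failure
    return recognizer.recognize_google(audio).lower()
```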
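
For step 2, any pretrained detector can stand in for the model we used; this sketch pulls an SSD MobileNet from TF Hub and returns the normalized center of the best-scoring box for the requested class:

```python
# Hypothetical sketch of step 2: locate the named object in a webcam frame.
# The TF Hub SSD MobileNet model is a stand-in for whatever detector we ran.
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def find_object(frame_bgr, target_class_id, min_score=0.5):
    """Return the (x, y) center of the best match in normalized coords, or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = detector(tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8))
    boxes = result["detection_boxes"][0].numpy()    # [N, 4]: ymin, xmin, ymax, xmax
    scores = result["detection_scores"][0].numpy()  # sorted by confidence
    classes = result["detection_classes"][0].numpy().astype(int)
    for box, score, cls in zip(boxes, scores, classes):
        if cls == target_class_id and score >= min_score:
            ymin, xmin, ymax, xmax = box
            return ((xmin + xmax) / 2, (ymin + ymax) / 2)
    return None
```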
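
For step 3, the sketch below uses MediaPipe Hands as one real-time, open-source hand tracker; the actual project we found may differ:

```python
# Hypothetical sketch of step 3: track the palm in each frame.
# MediaPipe Hands is one open-source option that runs in real time.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)

def find_palm(frame_bgr):
    """Return the wrist landmark as normalized (x, y), or None if no hand is seen."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = hands.process(rgb)
    if not results.multi_hand_landmarks:
        return None
    wrist = results.multi_hand_landmarks[0].landmark[
        mp.solutions.hands.HandLandmark.WRIST
    ]
    return (wrist.x, wrist.y)
```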
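
Step 4 ties it together: the closer the palm is to the object, the louder the tone. The linear distance-to-volume mapping and the `simpleaudio` tone generator here are illustrative choices, not the exact ones from the hack:

```python
# Hypothetical sketch of step 4: map palm-to-object distance to tone volume.
# Assumes normalized (x, y) coords from steps 2-3 and the simpleaudio package.
import math
import numpy as np
import simpleaudio as sa

def guidance_volume(palm, obj):
    """Louder as the hand closes in: distance 0 -> volume 1.0, far -> near silent."""
    dist = math.hypot(palm[0] - obj[0], palm[1] - obj[1])  # max ~sqrt(2) in unit frame
    return max(0.0, 1.0 - dist / math.sqrt(2))

def play_tone(volume, freq=440.0, duration=0.1, rate=44100):
    t = np.linspace(0, duration, int(rate * duration), endpoint=False)
    wave = (volume * 0.5 * np.sin(2 * np.pi * freq * t) * 32767).astype(np.int16)
    sa.play_buffer(wave, 1, 2, rate).wait_done()
```

A main loop would read a webcam frame, call `find_object` and `find_palm`, and play a short tone each iteration.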

Challenges I ran into

Finding a feasible open-source hand-tracking project that would work in real time.

Accomplishments that I'm proud of

Putting the entire pipeline together.

What I learned

How to use TensorFlow and integrate other software with it.

What's next for BEVO - BlindEnvironmentVisualizationOperations

We'd like to port this to a mobile environment. We'd also like to train the model on more object classes to make it more robust.
