Inspiration

We were first motivated by the idea of using a camera to detect objects. JP Morgan's Challenge prompted us to build an application that helps the visually impaired, and the excellent existing APIs from Google Cloud and Microsoft Azure made the idea practical.

What it does

Our ThirdEye gives the visually impaired the ability to complete everyday tasks without help from others or extra assistive tools.

How we built it

There are two components:

1. Intel RealSense Depth Camera D435i

The camera tracks the user's distance from objects in the surroundings and streams real-time depth data for processing.

2. Website App utilizing Firebase Real-Time DB and Azure Computer Vision

The web app pulls object-recognition results from Azure and depth readings from Firebase, then alerts the user about their surroundings.
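The camera side of this pipeline can be sketched as follows. This is a minimal illustration, not the project's actual code: the Firebase URL, the database layout, and the distance thresholds are all assumptions.

```python
import json
import urllib.request

try:
    import pyrealsense2 as rs  # Intel RealSense SDK bindings (needs the D435i attached)
    HAVE_CAMERA = True
except ImportError:
    HAVE_CAMERA = False

# Placeholder database URL -- a real deployment would use the project's own.
FIREBASE_URL = "https://example-thirdeye.firebaseio.com/readings.json"

def classify_distance(distance_m, danger=0.5, caution=1.5):
    """Map a depth reading (meters) to an alert level; thresholds are illustrative."""
    if distance_m < danger:
        return "danger"
    if distance_m < caution:
        return "caution"
    return "clear"

def read_center_distance(pipeline):
    """Depth in meters at the center pixel of the latest RealSense frame."""
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    if not depth:
        return None
    return depth.get_distance(depth.get_width() // 2, depth.get_height() // 2)

def push_reading(distance_m, url=FIREBASE_URL):
    """POST one sample to the Firebase Realtime Database REST API."""
    body = json.dumps({"distance_m": distance_m,
                       "level": classify_distance(distance_m)}).encode()
    req = urllib.request.Request(
        url, data=body, method="POST",
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # Firebase replies with the generated key

if __name__ == "__main__" and HAVE_CAMERA:
    pipeline = rs.pipeline()
    pipeline.start()
    try:
        distance = read_center_distance(pipeline)
        if distance:
            push_reading(distance)
    finally:
        pipeline.stop()
```

The web app then only needs to subscribe to the same path in Firebase and announce the `level` field to the user.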

Challenges we ran into

  1. Making sure the camera captured the correct object and distance
  2. Processing the data from Firebase and alerting the user via text-to-speech and vibrations
  3. Reducing lag from the camera to the database to the app
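Challenges 2 and 3 interact: if every Firebase update triggers a text-to-speech announcement, alerts queue up and lag compounds. One way to tame this is to rate-limit repeated alerts for the same object. A hypothetical sketch (the class name and cooldown value are our assumptions, not the project's):

```python
import time

class AlertThrottle:
    """Suppress repeat alerts for the same object label within a cooldown window."""

    def __init__(self, cooldown_s=2.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock          # injectable for testing
        self._last_alert = {}       # label -> timestamp of last announcement

    def should_alert(self, label):
        """Return True if enough time has passed to announce this label again."""
        now = self.clock()
        last = self._last_alert.get(label)
        if last is not None and now - last < self.cooldown_s:
            return False
        self._last_alert[label] = now
        return True
```

With this in front of the text-to-speech call, a "chair" detected thirty times a second is only announced once every couple of seconds.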

Accomplishments that we're proud of

  1. All alerts and notifications arrive as close to real time as possible.

  2. A better understanding of ML

  3. Object detection is accurate

What we learned

  1. Machine Learning

  2. How to use the Firebase API

  3. How to use Azure Computer Vision

What's next for ThirdEye

  1. Increasing accuracy and speed

  2. Performing more complex tasks

  3. Integrating the camera functionality into smartphone cameras

  4. Developing the app for iOS and Android
