Inspiration

This project was inspired by conversations with visually impaired people who use walking sticks, guide dogs and echolocation to get a feel for the world around them, but who felt they never truly knew what was around them.

What it does

The user wears a camera with a depth sensor (currently a Kinect). The video feed is processed to detect obstacles in the user's path and how far away they are. This information is then fed back to the user through text-to-speech.
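
A rough sketch of that feedback loop is below. The capture and detection helpers passed into the loop are hypothetical placeholders (the actual capture code is Kinect-specific), and pyttsx3 is just one example of a text-to-speech library; this is illustrative rather than the project's actual code.

```python
# Minimal sketch of the detect-and-announce loop, assuming helper functions
# get_depth_frame(), nearest_obstacle() and identify() are provided elsewhere.
import time
import pyttsx3

engine = pyttsx3.init()

def announce(label: str, distance_m: float) -> None:
    # Speak the obstacle description back to the user.
    engine.say(f"{label}, {distance_m:.1f} metres ahead")
    engine.runAndWait()

def run(get_depth_frame, nearest_obstacle, identify, interval_s: float = 1.0):
    while True:
        depth = get_depth_frame()        # depth image from the wearable camera
        hit = nearest_obstacle(depth)    # (region, distance in metres) or None
        if hit is not None:
            region, distance_m = hit
            announce(identify(region), distance_m)
        time.sleep(interval_s)           # pace the spoken feedback
```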

How we built it

We used the Kinect API to retrieve the colour video feed and depth information from the Kinect. We ran blob detection on the depth data to locate the closest objects, then sent the corresponding region of the full-colour image to a cloud service to identify each object. The result was spoken back to the user using text-to-speech.
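
The blob-detection step could be sketched as below. This is a reconstruction under stated assumptions, not the team's code: it assumes the Kinect depth frame has already been read into a NumPy array of per-pixel distances in millimetres, and the distance window and minimum blob area are made-up tuning values.

```python
# Sketch: find the closest sizeable blob in a depth frame (distances in mm).
import cv2
import numpy as np

NEAR_MM, FAR_MM = 500, 2000   # hypothetical "obstacle" distance window
MIN_AREA_PX = 2000            # ignore small speckle blobs

def nearest_obstacle(depth_mm: np.ndarray):
    """Return ((x, y, w, h), distance_m) for the closest large blob, or None."""
    # Binary mask of pixels inside the distance window of interest.
    mask = ((depth_mm >= NEAR_MM) & (depth_mm <= FAR_MM)).astype(np.uint8) * 255
    # Connected-component (blob) analysis on the mask.
    count, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    best = None
    for i in range(1, count):            # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < MIN_AREA_PX:
            continue
        dist_m = float(np.median(depth_mm[labels == i])) / 1000.0
        if best is None or dist_m < best[1]:
            best = ((int(x), int(y), int(w), int(h)), dist_m)
    return best
```

The bounding box returned here is what would be used to crop the matching region from the colour image before sending it off for recognition.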

Challenges we ran into

Retrieving the required colour and depth data through the APIs was harder than expected.

Accomplishments that we're proud of

Getting a working proof-of-concept!

What we learned

How to process visual data (e.g. blob detection, object recognition).
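
For the object-recognition half, the cropped colour region has to be handed to a cloud service. The write-up doesn't say which service was used, so the endpoint and response format below are placeholders; only the JPEG encoding and HTTP upload are concrete.

```python
# Sketch: send the cropped colour image to a (placeholder) recognition endpoint.
import cv2
import requests

RECOGNITION_URL = "https://example.com/recognise"   # hypothetical endpoint

def identify(colour_crop) -> str:
    ok, jpeg = cv2.imencode(".jpg", colour_crop)     # encode the crop as JPEG
    if not ok:
        return "unknown object"
    resp = requests.post(RECOGNITION_URL, files={"image": jpeg.tobytes()})
    resp.raise_for_status()
    # Assume the service replies with JSON like {"label": "chair"}.
    return resp.json().get("label", "unknown object")
```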

What's next for Sight

Built With
