Our app automatically generates descriptions and captions for the photos in a social media feed, using a Stanford deep learning image-captioning model to analyze each photo.

This makes web navigation welcoming to visually impaired users. In our vision, a user only has to drag the mouse around a web page, and the name and description of each object under the cursor is read aloud through the computer's speech synthesizer.
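The hover-and-read-aloud behavior could be sketched with the browser's built-in Web Speech API. The `data-caption` attribute and function names below are assumptions for illustration, not our app's actual markup:

```javascript
// Pure helper: pick the text to read aloud for a hovered element.
// Prefers a generated caption, then image alt text, then visible text.
function textForElement(el) {
  return el.dataset.caption || el.alt || el.textContent.trim() || null;
}

// Speak whenever the pointer moves over a captioned element.
function enableSpokenHover(root) {
  root.addEventListener('mouseover', (event) => {
    const text = textForElement(event.target);
    if (!text) return;
    window.speechSynthesis.cancel(); // interrupt any previous utterance
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  });
}
```

Cancelling the previous utterance keeps the audio in sync with the cursor, so a fast-moving user is not stuck listening to stale descriptions.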

This app is the first step toward a broader navigation system for the visually impaired. For now, it includes a photo feed for which the computer generates largely accurate captions, plus a reference dashboard page for easy navigation to other common sites.
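Attaching generated captions to the feed could look like the sketch below. The `captioner` callback stands in for a call to the captioning model, and its name and shape are assumptions:

```javascript
// Given a list of photo URLs and an async captioning function,
// return the feed entries with their generated captions attached.
async function captionFeed(photoUrls, captioner) {
  // Caption all photos concurrently rather than one at a time.
  const captions = await Promise.all(photoUrls.map(captioner));
  return photoUrls.map((url, i) => ({ url, caption: captions[i] }));
}
```

Running the captioner on every photo up front means each image already has its spoken description ready before the user hovers over it.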

We used the Myo armband so users can switch between the pages of our web app with intuitive gestures, each gesture opening a different page.
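A minimal sketch of gesture-based page switching, assuming the myo.js client library; the pose-to-page mapping, app identifier, and function names are illustrative:

```javascript
// Hypothetical mapping from recognized Myo poses to app pages.
const PAGES = { wave_out: '/feed', wave_in: '/dashboard' };

// Pure helper: look up which page a pose should open, if any.
function pageForPose(pose) {
  return PAGES[pose] || null;
}

// Wire the armband's pose events to page navigation.
function enableGestureNavigation(Myo) {
  Myo.connect('com.example.captionapp'); // placeholder app identifier
  Object.keys(PAGES).forEach((pose) => {
    Myo.on(pose, () => {
      const page = pageForPose(pose);
      if (page) window.location.assign(page);
    });
  });
}
```

Keeping the pose lookup as a pure function separates the gesture vocabulary from the navigation side effects, which makes it easy to remap gestures later.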
