Inspiration

Our main goal was to help people who are visually impaired appreciate art as well, without having to rely on anyone else's bias.

What it does

Tells the user what objects it sees: the app captions a photo and reads the description aloud.

How we built it

We created a Flask API that uses a deep learning model implemented with Keras. The React Native front end takes pictures and sends them to the API for processing; the API responds with a caption in text form, which the React Native app then converts to speech. Our pitch website was built with HTML and CSS, and the icons and other graphics were made in Adobe XD.
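A minimal sketch of what the captioning endpoint might look like is shown below. The route name, request fields, and the generate_caption helper are illustrative assumptions, not the project's actual code; the helper stands in for the Keras model wrapper.

```python
# Illustrative sketch of the Flask captioning endpoint.
# Route, field names, and generate_caption are hypothetical.
import io

from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)


def generate_caption(image: Image.Image) -> str:
    """Hypothetical wrapper that runs the Keras captioning model
    on a PIL image and returns a text caption."""
    raise NotImplementedError("plug the Keras model in here")


@app.route("/caption", methods=["POST"])
def caption():
    # The React Native app uploads the captured photo as multipart form data.
    file = request.files["image"]
    image = Image.open(io.BytesIO(file.read())).convert("RGB")
    # Respond with the caption as plain text inside a JSON body.
    return jsonify({"caption": generate_caption(image)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The React Native client would POST the photo to this route and hand the returned caption string to a text-to-speech module for playback.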

Challenges we ran into

Finding an appropriate dataset to train the model with.

Accomplishments that we're proud of

Producing an MVP at this event!

What we learned

We deepened our knowledge of implementing text-to-speech and deep learning.

What's next for OpenEyes

  • Voice commands
  • Live video processing for navigation

Built With

  • Flask
  • Keras
  • React Native
  • HTML / CSS
  • Adobe XD
