We want to assist visually impaired people by connecting them with good Samaritans on platforms such as Slack who are willing to set aside part of their day to help.

What it does

At its core, SecondSight is an image-to-speech Android application: a user takes a photo of a scene or object and hears a spoken description of the image.

How we built it

We built the main application in Android Studio using the Google Cloud Vision API. StdLib and Firebase connect it to the Slack bot.
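As a rough sketch of the Vision side, the label-detection request for a photo can be expressed as a JSON body for Cloud Vision's REST `images:annotate` endpoint (the helper function name and `maxResults` value here are our illustration, not the app's actual code):

```python
import base64
import json

# Cloud Vision REST endpoint for image annotation
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_annotate_request(image_bytes, max_labels=5):
    """Build the JSON body for a label-detection request on one photo.

    The image is base64-encoded inline, per the Vision REST API format.
    """
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_labels}],
        }]
    }

# Example with placeholder bytes; a real call would POST this body
# (with an API key or OAuth token) to VISION_ENDPOINT.
body = build_annotate_request(b"fake image bytes")
print(json.dumps(body, indent=2))
```

On Android the same request shape is produced by the Cloud Vision client library rather than built by hand; the JSON above just makes the wire format concrete.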

What we learned

Android Studio, StdLib, Firebase, and computer vision.

What's next for SecondSight

Add more functionality, such as filtering the best descriptions from the responses in the Slack channel.

Built With

Android Studio, Firebase, Google Cloud Vision, StdLib.