Inspiration

Assistive technology for the visually impaired has come a long way since the Braille typewriter. In today's world of intelligent voice assistants, smart homes, and gesture-detecting wearables, it is comforting to know that technology keeps finding ways to make life easier. Still, there is a long way to go. The beautiful world around us, with all its captivating visual stimuli, is out of some people's reach. How do we use the technology available today to make their lives not just easier but also more meaningful? Maybe the answer lies in artificial intelligence. Maybe we can unlock the power of deep learning to help those without vision see and understand the breathtakingly complicated world in front of them, delivered as a narrative.

What it does

Project Deep See aims to use scene recognition to help the visually impaired see and understand the world around them.

How we will build it

We plan to make Amazon Alexa a lot smarter by adding machine vision capability via Google's deep learning APIs. When the user asks, "Alexa, what do you see?", the skill will trigger an image capture on a Raspberry Pi camera and send a recognition request to Google's Vision API, which returns a list of labels that we can string together into a meaningful sentence.
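
Since the build is still in progress, here is a minimal sketch of the planned capture-and-describe pipeline, assuming the official picamera and google-cloud-vision Python packages. The function names, file path, and confidence threshold below are our own placeholders for illustration, not a final design.

```python
# Sketch of the planned pipeline: capture a frame on the Pi, label it with
# Google Cloud Vision, and fold the labels into a spoken sentence.
# Assumes the picamera and google-cloud-vision packages are installed and
# Google Cloud credentials are configured on the device.
from picamera import PiCamera
from google.cloud import vision


def capture_image(path="/tmp/scene.jpg"):
    """Grab a single frame from the Raspberry Pi camera module."""
    camera = PiCamera()
    try:
        camera.capture(path)
    finally:
        camera.close()
    return path


def describe_scene(image_path, min_score=0.7):
    """Request labels from the Vision API and string them into a sentence."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Keep only labels the API is reasonably confident about (threshold is
    # a placeholder we would tune during testing).
    labels = [label.description.lower()
              for label in response.label_annotations
              if label.score >= min_score]
    if not labels:
        return "I'm not sure what I'm looking at."
    if len(labels) == 1:
        return f"I can see {labels[0]}."
    return f"I can see {', '.join(labels[:-1])} and {labels[-1]}."


if __name__ == "__main__":
    print(describe_scene(capture_image()))
```

In the finished system, the Alexa skill would invoke this routine over the network and read the returned sentence back to the user.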

Challenges we ran into

TBD

Accomplishments that we're proud of

TBD

What we learned

TBD

What's next for Project Deep See

Built With

Amazon Alexa, Google Cloud Vision API, Raspberry Pi
