An app to help visually impaired people perceive their surroundings: image recognition describes the scene, and speech output reads the description aloud.
How we built it
It takes video input, samples the video into still images, and calls the Clarifai API to recognize each image and generate tags describing the surroundings. The tags are then sent to Microsoft Cognitive Services, which converts the text to speech for the visually impaired user.
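The pipeline can be sketched roughly like this. Note that the function names are ours, and the actual Clarifai and Microsoft Cognitive Services calls are stubbed out as placeholders; only the frame-sampling logic is shown concretely:

```python
def sample_frames(frame_count, fps, every_seconds=1.0):
    """Pick the indices of frames to send for recognition,
    taking one frame per `every_seconds` of video."""
    step = max(1, int(fps * every_seconds))
    return list(range(0, frame_count, step))

def tags_for_frame(frame):
    """Placeholder for the Clarifai image-recognition call,
    which returns concept tags for one frame."""
    return ["person", "street"]  # illustrative output only

def speak(text):
    """Placeholder for the Microsoft Cognitive Services
    text-to-speech call."""
    print(text)

def describe(frames, fps):
    """Full pipeline: sample frames, tag each sampled frame,
    and speak the resulting description."""
    for i in sample_frames(len(frames), fps):
        tags = tags_for_frame(frames[i])
        speak("I can see: " + ", ".join(tags))

# Example: a 60-frame clip at 30 fps is sampled at frame indices 0 and 30.
```

Sampling rather than tagging every frame keeps the number of API calls (and the amount of spoken output) manageable for a live video feed.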
Challenges we ran into
Installing OpenCV was not straightforward and took us time. This was our first time using OpenCV, which was challenging, and it was also our first time using Microsoft Cognitive Services for text-to-speech. The saddest part is that the MLH laptop on which most of the OpenCV programming was done restarted in the last hour and wiped all the code (MLH laptops do that!!). :(
Accomplishments that we're proud of
Learnt OpenCV and successfully combined two APIs. This was also our first machine learning project, so we learnt a lot.
What we learned
We learnt OpenCV and, with the help of Clarifai, key concepts of machine learning.