We used Clarifai's pretrained neural network to generate semantically meaningful descriptions of the images and videos on a webpage. We then combined this with Google's text-to-speech API in a Chrome extension that walks the DOM of the page and reads out a description of exactly what is on it; users gain meaningful insight without having to glance at the webpage at all. Although the extension is not limited to this use case, we found it most valuable for the visually impaired: those who would not normally get much meaning from images and video are now able to really see.
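The flow above can be sketched as: collect the image sources from the page, ask an image-recognition service for a description of each, then hand the text to speech synthesis. This is a minimal, hypothetical sketch, not the project's actual code; `describeImage` stands in for a call to Clarifai's predict endpoint, `speak` stands in for the text-to-speech call, and a regex stands in for the extension's real DOM walk so the sketch runs outside a browser.

```javascript
// Collect <img> src attributes from a page. In the real content
// script this would be document.querySelectorAll('img'); a regex
// over the HTML string stands in here so the sketch is self-contained.
function collectImageSources(html) {
  const sources = [];
  const re = /<img[^>]*\bsrc="([^"]+)"/g;
  let match;
  while ((match = re.exec(html)) !== null) {
    sources.push(match[1]);
  }
  return sources;
}

// Placeholder for an image-recognition call (Clarifai in the project);
// a real implementation would POST the URL and return the top tags.
async function describeImage(url) {
  return `an image at ${url}`;
}

// Walk the collected images and speak each description. The speak
// callback stands in for the text-to-speech step.
async function describePage(html, speak) {
  for (const src of collectImageSources(html)) {
    speak(await describeImage(src));
  }
}

const page = '<p>Hello</p><img src="cat.jpg"><img src="dog.png">';
describePage(page, (text) => console.log(text));
```

In the shipped extension, the same loop would run as a content script over the live DOM, with keyboard navigation selecting which element to describe next.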
