What it does

Envision takes an image and returns related tags by analyzing the different elements of the picture and identifying what they are. It then reads the tags aloud, a feature aimed at people who are visually impaired.

How we built it

We used Apple's Core ML library to analyze images and a text-to-speech library to read aloud what is contained in them.
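A minimal sketch of that pipeline, assuming Vision is used to drive the Core ML model and AVFoundation handles speech; MobileNetV2 here is only a stand-in for whichever classification model the app actually bundles:

```swift
import UIKit
import Vision
import AVFoundation

// Kept at file scope so speech isn't cut off when the function returns.
let synthesizer = AVSpeechSynthesizer()

/// Classifies an image with a Core ML model and reads the top tags aloud.
/// MobileNetV2 is a placeholder; any bundled image-classification model works.
func speakTags(for image: UIImage) throws {
    guard let cgImage = image.cgImage else { return }

    let coreMLModel = try MobileNetV2(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNClassificationObservation] else { return }
        // Take the three most confident labels as the image's tags.
        let tags = observations.prefix(3).map { $0.identifier }
        synthesizer.speak(AVSpeechUtterance(string: tags.joined(separator: ", ")))
    }

    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```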

Built With

Core ML, text-to-speech
