We wanted to play with Microsoft's Cognitive Services. It sounded really cool, so we threw ourselves at the API. One of our friends works with the Texas School for the Blind and Visually Impaired, and we thought the Face and Emotion APIs could add descriptions to images that lack them, for the screen readers they use.
What it does
It identifies the images on a webpage and sends them off to the API for processing. It then takes the returned information and generates alternative text describing the number of people in the image and their genders, ages, and whether they have glasses or facial hair.
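Roughly, the alt-text step could look like the sketch below. The buildAltText helper and the response shape are assumptions based on the Face API's detect endpoint (with age, gender, glasses, and facialHair attributes requested), not the extension's actual code:

```javascript
// Turn an array of detected faces (Face API "detect" response shape,
// assumed here) into a human-readable alt-text string.
function buildAltText(faces) {
  if (faces.length === 0) return "Image with no detected faces";
  const people = faces.map((face) => {
    const a = face.faceAttributes;
    const parts = [`${a.gender}, about ${Math.round(a.age)}`];
    // The API reports glasses as a string like "NoGlasses" or "ReadingGlasses".
    if (a.glasses && a.glasses !== "NoGlasses") parts.push("wearing glasses");
    // Facial hair comes back as 0..1 confidence scores per feature.
    const fh = a.facialHair || {};
    if (fh.beard > 0.5 || fh.moustache > 0.5) parts.push("with facial hair");
    return parts.join(", ");
  });
  const count = faces.length === 1 ? "1 person" : `${faces.length} people`;
  return `Image with ${count}: ${people.join("; ")}`;
}
```

The string would then be written into the image's alt attribute so screen readers can pick it up.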
How we built it
Challenges we ran into
Accomplishments that we're proud of
What we learned
See above, lol.
What's next for Face Finder
Find a way around the API rate limit. Probably make the whole thing look a bit better and run a bit faster. Clean up the code so it's not such a mess. Deal with synchronization.
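One way we could deal with the rate limit and the synchronization at the same time is to push every API call through a queue that spaces requests out. This is just a sketch of the idea (the helper name and interval are assumptions, not something we've built yet):

```javascript
// Serialize async tasks and keep at least minIntervalMs between starts,
// so we stay under a per-minute API rate limit.
function createRateLimitedQueue(minIntervalMs) {
  let chain = Promise.resolve(); // tasks run one after another
  let lastStart = 0;
  return function enqueue(task) {
    chain = chain.then(async () => {
      const wait = lastStart + minIntervalMs - Date.now();
      if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
      lastStart = Date.now();
      return task();
    });
    return chain; // resolves with this task's result, in order
  };
}
```

Each image found on the page would call enqueue(() => sendToFaceApi(img)), and results would come back in a predictable order instead of racing each other.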
P.S. We recently published to the Chrome Web Store and it can take up to an hour to show up... :S