We wanted to play with Microsoft's Cognitive Services. It sounded really cool, so we threw ourselves at the API. One of our friends works with the Texas School for the Blind and Visually Impaired, and we thought the Face and Emotion APIs could add descriptions to images that lack them, for the screen readers their students use.

What it does

It identifies the images on a webpage and sends them off to the API for processing. Then it takes the information returned and builds alternative text describing the number of people and their genders, ages, and whether they have glasses or facial hair.
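As a rough sketch of that second step: the Face API's detect response includes a `faceAttributes` object per face (gender, age, glasses, facial-hair scores), and something like the function below could turn that into alt text. The exact wording and thresholds here are our own choices, not part of the API.

```javascript
// Sketch: turn a Face API detect response (array of faces) into alt text.
// The faceAttributes shape follows the API docs; phrasing is ours.
function describeFaces(faces) {
  if (!faces || faces.length === 0) return "No faces detected.";
  const parts = faces.map((face, i) => {
    const a = face.faceAttributes;
    let desc = `Person ${i + 1}: ${a.gender}, about ${Math.round(a.age)} years old`;
    if (a.glasses && a.glasses !== "NoGlasses") {
      desc += `, wearing ${a.glasses}`;
    }
    // The API reports facial hair as 0..1 scores; 0.5 is an arbitrary cutoff.
    const hair = a.facialHair || {};
    if (hair.moustache > 0.5 || hair.beard > 0.5 || hair.sideburns > 0.5) {
      desc += ", with facial hair";
    }
    return desc;
  });
  return `${faces.length} face(s) detected. ${parts.join(". ")}.`;
}

// In the extension, this string would be assigned to each image's alt attribute:
// img.alt = describeFaces(apiResponse);
```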

How we built it

We used JavaScript, HTML, and so much help from the Microsoft API docs.

Challenges we ran into

We had never worked with API calls in JavaScript before, so that was a super fun ~7 hours of experimenting and searching the internet. Also, the API's trial tier only allows 20 requests per minute, and a single webpage can easily have more images than that.
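For anyone curious what "an API call in JavaScript" ended up looking like, here is a minimal sketch of the detect request. The endpoint region and key are placeholders (they come from your own Azure Cognitive Services resource); the query parameter and subscription-key header follow the Face API docs.

```javascript
// Placeholders -- substitute your own region and subscription key.
const ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect";
const API_KEY = "your-subscription-key";

// Build the request so the attributes we need for alt text come back.
function buildDetectRequest(imageUrl) {
  return {
    url: ENDPOINT + "?returnFaceAttributes=age,gender,glasses,facialHair",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": API_KEY,
      },
      body: JSON.stringify({ url: imageUrl }),
    },
  };
}

// Usage (actual network call, so not run here):
// const req = buildDetectRequest(img.src);
// fetch(req.url, req.options).then((r) => r.json()).then((faces) => { /* ... */ });
```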

Accomplishments that we're proud of

We learned how to make extensions, use JavaScript, and make API calls in JavaScript. There weren't even any breakdowns! We both kept it together. I (Ryan) was super unreasonably excited when I got even a single API call working in JS.

What we learned

See the accomplishments above, lol.

What's next for Face Finder

Find a way around the rate limit. Probably make the whole thing look a bit better and take a bit less time. Clean up the code so it's not a mess. Deal with synchronization between the API calls and updating the page.
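One hedged idea for staying under the 20-requests-per-minute trial limit: a small promise queue that spaces calls out by a fixed interval, which would also tame the synchronization problem by forcing requests to run one at a time. The 3000 ms default (60 s / 20 requests) is our assumption about a safe spacing; this is a sketch, not what the extension currently does.

```javascript
// Sketch: serialize tasks and wait intervalMs between them, so at most
// ~20 requests go out per minute with the 3000 ms default spacing.
function makeThrottledQueue(intervalMs = 3000) {
  let chain = Promise.resolve();
  return function enqueue(task) {
    const result = chain.then(() => task());
    // Whether the task succeeds or fails, wait intervalMs before the next one.
    chain = result
      .catch(() => {})
      .then(() => new Promise((resolve) => setTimeout(resolve, intervalMs)));
    return result;
  };
}

// Usage idea: wrap each image's API call.
// const enqueue = makeThrottledQueue();
// enqueue(() => fetch(req.url, req.options).then((r) => r.json()));
```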

P.S. We recently published to the Chrome Web Store, and it can take up to an hour for the extension to show up... :S

Built With

JavaScript, HTML
