Inspiration

Some of us (read: Carol) are really bad at reading facial expressions. Without this key form of body language, it's hard to make sense of what others are thinking, and thus hard to judge an appropriate response.

What it does

Gamifies the facial recognition and expression experience, with support from Microsoft Azure, particularly the Face API.

How we built it

Followed a tutorial on a similar subject (built around the Azure Computer Vision API), built the app on Glitch (thanks, Slack workshop!), and accessed the webcam through the browser's native media APIs.
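
For reference, the two browser-side pieces can be sketched roughly as below. The function names are ours, and the endpoint/key values are placeholders for your own Azure resource; the request shape follows the Face API's `detect` REST call.

```javascript
// Build the URL + fetch options for the Azure Face API "detect" call.
// `endpoint` and `subscriptionKey` are placeholders from your own
// Azure Face resource; returnFaceAttributes=emotion asks for emotion scores.
function buildDetectRequest(endpoint, subscriptionKey, imageBlob) {
  const url = endpoint + "/face/v1.0/detect?returnFaceAttributes=emotion";
  return {
    url,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/octet-stream",
        "Ocp-Apim-Subscription-Key": subscriptionKey,
      },
      body: imageBlob, // raw image bytes, e.g. a canvas snapshot
    },
  };
}

// Capture one frame from a playing <video> element as a JPEG blob.
// (Browser-only: relies on a getUserMedia-backed <video> and a canvas.)
async function captureFrame(video) {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d").drawImage(video, 0, 0);
  return new Promise((resolve) => canvas.toBlob(resolve, "image/jpeg"));
}
```

In use, `captureFrame` feeds `buildDetectRequest`, whose result is passed to `fetch(req.url, req.options)`.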

Challenges we ran into

Beautifying everything for the final demo, fixing errors in the original Azure tutorial, and testing difficulties with two team members working on the same Glitch project.

Accomplishments that we're proud of

Learned how to set up personal instances of the Azure API in less than an afternoon, and wrangled the native browser media APIs and their new autoplay policies (which complicate most existing tutorials).
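
Those autoplay policies mean `video.play()` now returns a Promise that can reject until the user has interacted with the page. A minimal defensive pattern (a sketch; the helper name is ours):

```javascript
// Modern browsers may reject video.play() with NotAllowedError until the
// user has interacted with the page. Wrap the call so the rejection is
// handled instead of failing silently in the console.
async function safePlay(video) {
  try {
    await video.play();
    return true; // playback started
  } catch (err) {
    if (err.name === "NotAllowedError") {
      return false; // caller should wait for a user gesture and retry
    }
    throw err; // some other failure (no source, decode error, ...)
  }
}
```

If `safePlay(video)` resolves to `false`, a one-time click handler that calls it again is usually enough to satisfy the policy.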

What we learned

Getting everything demo-ready in 8 hours with severe food distraction is very hard.

What's next for Facial Expression Exercises

Real-time emotion tracking with faster JavaScript computer-vision libraries for client-side computation.
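
Whichever library does the per-frame detection, its per-emotion confidence scores still have to be reduced to a single label each frame. A small sketch of that reduction, with a simple majority-vote smoother so one noisy frame doesn't flip the label (both helpers are ours):

```javascript
// The Face API (and most client-side alternatives) return a map of
// emotion -> confidence score per frame. Pick the highest-scoring one.
function dominantEmotion(scores) {
  let best = null;
  let bestScore = -Infinity;
  for (const [emotion, score] of Object.entries(scores)) {
    if (score > bestScore) {
      best = emotion;
      bestScore = score;
    }
  }
  return best;
}

// Keep the last n labels and report the most frequent, so the displayed
// emotion changes only when several frames agree.
function makeSmoother(n = 5) {
  const recent = [];
  return (label) => {
    recent.push(label);
    if (recent.length > n) recent.shift();
    const counts = {};
    for (const l of recent) counts[l] = (counts[l] || 0) + 1;
    return Object.keys(counts).reduce((a, b) =>
      counts[a] >= counts[b] ? a : b
    );
  };
}
```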
