What it does
It summarizes the general mood of a room, saving time for anyone trying to improve the experience of people in public spaces. The program loops through a list of image filenames, sorts images into folders based on the emotions of the faces it detects, reports how many positive and negative faces it found, and annotates the images.
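The per-face categorization described above can be sketched as follows. The likelihood strings mirror the names of Cloud Vision's `Likelihood` enum, but which emotions count as positive or negative, and where the threshold sits, are our own illustrative assumptions:

```python
# Minimal sketch of the mood-summary logic. The likelihood names match
# Google Cloud Vision's Likelihood enum; the emotion-to-mood mapping and
# the POSSIBLE-or-above threshold are illustrative assumptions.
POSITIVE = {"joy"}
NEGATIVE = {"sorrow", "anger"}
LIKELY = {"POSSIBLE", "LIKELY", "VERY_LIKELY"}

def classify_face(likelihoods):
    """Map one face's emotion likelihoods to 'positive', 'negative', or 'neutral'."""
    for emotion, level in likelihoods.items():
        if level in LIKELY:
            if emotion in POSITIVE:
                return "positive"
            if emotion in NEGATIVE:
                return "negative"
    return "neutral"

def summarize(faces):
    """Count faces per mood category to produce the room-level report."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for face in faces:
        counts[classify_face(face)] += 1
    return counts

# One clearly happy face, one sad face, one with no strong signal.
faces = [
    {"joy": "VERY_LIKELY", "sorrow": "UNLIKELY", "anger": "UNLIKELY"},
    {"joy": "UNLIKELY", "sorrow": "LIKELY", "anger": "UNLIKELY"},
    {"joy": "UNLIKELY", "sorrow": "UNLIKELY", "anger": "UNLIKELY"},
]
print(summarize(faces))  # → {'positive': 1, 'negative': 1, 'neutral': 1}
```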
How we built it
Starting from an example function Google provides to show how to use their API, we wrote a program that loops through a list of files and reports the general mood it recognizes in them.
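The overall loop can be sketched like this. Here `detect_mood` is a hypothetical stand-in for the Cloud Vision face-detection call, and the per-mood folder layout is an assumption:

```python
import os
import shutil

def sort_by_mood(filenames, detect_mood, out_dir="sorted"):
    """Loop over image files, ask the detector for each image's dominant
    mood, and copy the file into a per-mood folder.

    `detect_mood` is a stand-in for the Cloud Vision face-detection call;
    it takes a filename and returns 'positive', 'negative', or 'neutral'.
    Returns a report of how many files landed in each category.
    """
    report = {}
    for name in filenames:
        mood = detect_mood(name)
        folder = os.path.join(out_dir, mood)
        os.makedirs(folder, exist_ok=True)  # create the category folder on demand
        shutil.copy(name, folder)
        report[mood] = report.get(mood, 0) + 1
    return report
```

Injecting the detector as a callable keeps the sorting logic testable without API credentials.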
Challenges we ran into
Not all of the images we tested the software on showed genuine emotions, so we had some trouble getting it to classify them correctly.
Accomplishments that we're proud of
We used a machine learning API to analyze images.
What we learned
How to use Google's ML API, and a bit about image manipulation with Pillow.
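As a small example of the Pillow side, annotating an image with a box and a mood label looks roughly like this. The coordinates here are made up for the demo; in the real program they would come from the face bounding polygons the API returns:

```python
from PIL import Image, ImageDraw

def annotate(image, box, label):
    """Draw a red rectangle and a mood label on the image.

    `box` is (left, top, right, bottom) in pixels, e.g. derived from a
    detected face's bounding polygon (an assumption for this sketch).
    """
    draw = ImageDraw.Draw(image)
    draw.rectangle(box, outline=(255, 0, 0), width=2)
    draw.text((box[0], box[1] - 12), label, fill=(255, 0, 0))
    return image

# Demo on a blank canvas; a real image would be opened with Image.open(filename).
im = Image.new("RGB", (120, 120), "white")
annotate(im, (20, 30, 80, 90), "positive")
```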