Inspiration

Our team wanted to create a unique facial recognition tool that would revolutionize the way we approach group activities. Traditional meetings and lectures are linear and one-sided. We wanted to encourage students, shareholders, and fans to become more engaged in their role as audience members. We envision a world where people can have a greater voice without ever having to open their mouths.

What it does

Uanimate is an application that analyzes the facial expressions of audience members and visualizes the feedback in a dynamic graph.
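For the graph itself, a bar chart that redraws once per second is enough to feel live. Below is a minimal sketch using matplotlib; the five emotion labels and the get_latest_scores() helper are hypothetical stand-ins for the real classification output, not our actual code.

```python
# Minimal sketch of a dynamic feedback graph (assumed labels and data source).
import random
import matplotlib.pyplot as plt

EMOTIONS = ["Happy", "Sad", "Angry", "Fear", "Surprise"]  # assumed categories

def get_latest_scores():
    # Hypothetical placeholder: real scores would come from the analysis loop.
    return [random.random() for _ in EMOTIONS]

plt.ion()                                    # interactive mode for live updates
fig, ax = plt.subplots()
bars = ax.bar(EMOTIONS, get_latest_scores())
ax.set_ylim(0, 1)
ax.set_ylabel("Audience score")

for _ in range(60):                          # refresh once per second for a minute
    for bar, score in zip(bars, get_latest_scores()):
        bar.set_height(score)
    fig.canvas.draw_idle()
    plt.pause(1.0)
```

Redrawing at one-second intervals keeps the graph responsive without overwhelming the display with every captured frame.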

How we built it

It starts with an imaging device that streams live video, from which still images are captured. Because human facial features are complex, we worked to improve the quality of our input data. Using Indico's API, we were able to establish regions of interest across many individuals and classify their emotional responses into five basic categories.
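As a rough illustration, the capture-and-classify loop might look like the sketch below. It assumes OpenCV for the video stream and the legacy indicoio Python client's fer (facial emotion recognition) endpoint; the API key, camera index, and sampling rate are placeholders rather than the values we actually used.

```python
# Minimal sketch of the capture-and-classify loop (legacy indicoio client assumed).
import time
import cv2          # pip install opencv-python
import indicoio     # legacy Indico client, assumed here

indicoio.config.api_key = "YOUR_API_KEY"     # placeholder

cap = cv2.VideoCapture(0)                    # stream live video from the default camera
for _ in range(30):                          # sample ~30 frames, one per second
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite("frame.png", frame)              # capture a still image from the stream
    emotions = indicoio.fer("frame.png")         # e.g. {"Happy": 0.71, ...}
    print(max(emotions, key=emotions.get), emotions)
    time.sleep(1.0)
cap.release()
```

Sampling a frame roughly once per second is plenty for an audience-level mood signal while keeping API calls manageable.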

Challenges we ran into

We encountered many problems throughout the development process, one of which was the uncharted territory of machine learning. Several errors in the documentation of Indico's many APIs made our journey both challenging and rewarding; we laughed when we found bugs caused by misspellings, and cried when our code stopped working.

Accomplishments that we're proud of

From the first hour to the last, our commitment and dedication to our cause was unwavering. Coding proficiently while suffering from severe sleep deprivation is no easy task.

What we learned

As the team became more fluent in Python, HTML, and CSS, we realized that even though time is fleeting, a goal is always achievable if you never give up.

What's next for Uanimate

We are working on expanding beyond the five basic emotions. We are also looking into tracking additional signals, such as body language and gaze direction, for a more in-depth analysis of human behavior.

Built With

Python, HTML, CSS, Indico API