This project eliminates the complications associated with emotions in everyday life, bringing you to a neutral feeling. Happy? You'll receive a cruel insult. Sad? You'll receive a great joke.

Inspiration

Our original inspiration was a simple, fun project that used facial recognition to determine whether you were happy; if you were, it would state a sad fact to make you sad. We wanted to expand on that idea and deliver a humorous experience by nudging the user toward a neutral emotional state.

What it does

The program recognizes facial emotions: it insults you if you're happy and tells you a joke if you're sad.

The program uses Azure's pre-trained facial recognition model to detect the user's emotion. The predicted emotion is then sent to a database, which is queried for a random appropriate response; for instance, if the user is sad, the response is a joke to cheer them up. The program then converts the response text to speech and plays it back to the user.
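To make that flow concrete, here is a minimal sketch of the response logic, assuming the emotion label has already been detected. fetch_random_response is a hypothetical stand-in for our database query (one possible implementation is sketched in "What we learned" below), and the Speech key and region are placeholders.

```python
# Minimal sketch of the response pipeline (not our exact code).
# fetch_random_response is a hypothetical stand-in for the database query.
import azure.cognitiveservices.speech as speechsdk

def respond_to(emotion: str) -> None:
    """Pick a response that counteracts the detected emotion, then speak it."""
    if emotion == "happiness":
        text = fetch_random_response("insult")  # bring the user back down
    elif emotion == "sadness":
        text = fetch_random_response("joke")    # cheer the user up
    else:
        return  # already neutral: nothing to do

    # Azure Speech synthesizes the text and plays it on the default speaker.
    config = speechsdk.SpeechConfig(subscription="<speech-key>", region="<region>")
    speechsdk.SpeechSynthesizer(speech_config=config).speak_text_async(text).get()
```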

How we built it

We built the program's features on a variety of Microsoft Azure cloud tools. We wrote the software mostly in Python, since it was the language most of us already knew. We used the Microsoft Azure Vision API to detect faces, paired with the Microsoft Azure Emotion API to classify the user's expression.
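As a rough illustration, the emotion detection boils down to a single REST call to Azure's Face detect endpoint. The endpoint URL and key below are placeholders, and error handling is kept minimal for brevity.

```python
# Rough sketch of the emotion-detection call; endpoint and key are placeholders.
import requests

FACE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
FACE_KEY = "<face-api-key>"

def detect_dominant_emotion(image_bytes: bytes):
    """Return the highest-scoring emotion for the first detected face, or None."""
    resp = requests.post(
        f"{FACE_ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "emotion"},
        headers={
            "Ocp-Apim-Subscription-Key": FACE_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    faces = resp.json()
    if not faces:
        return None  # no face in frame
    scores = faces[0]["faceAttributes"]["emotion"]  # e.g. {"happiness": 0.92, ...}
    return max(scores, key=scores.get)
```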

Challenges we ran into

Collecting and formatting the data required to train the custom Trump text-to-speech voice was a big pain in the @&#.

The code didn't run properly on all of our computers; some features turned out to depend on the operating system.

We also ran into problems setting up the code that communicates with the Azure APIs. These problems were hard to debug since we were new to Azure, so we had to think outside the box to resolve them.

Accomplishments that we're proud of

We managed to combine several APIs and bring cutting-edge technology into everyday life. Our software stores no data locally; everything lives in the cloud, which makes it more scalable and safer.

Using machine learning, we managed to generate different human voices and get the facial emotion recognition working.

What we learned

We learned how to generate ideas and refine them into Minimum Viable Products (MVPs). We also learned how to use the Azure APIs to integrate a machine-learning model into Python without needing the expertise to train one ourselves, and we learned how to set up the Cosmos DB database (a sketch of the query follows below).
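For reference, here is one way the Cosmos DB lookup could look with the azure-cosmos Python SDK. The account URL, database, container, and field names are assumptions for illustration, not our exact schema.

```python
# Sketch of the Cosmos DB lookup behind fetch_random_response;
# account, database, container, and field names are assumptions.
import random
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/",
                      credential="<cosmos-key>")
container = client.get_database_client("homemostasis") \
                  .get_container_client("responses")

def fetch_random_response(kind: str) -> str:
    """Fetch all responses of the given kind ('joke' or 'insult') and pick one."""
    items = list(container.query_items(
        query="SELECT c.text FROM c WHERE c.kind = @kind",
        parameters=[{"name": "@kind", "value": kind}],
        enable_cross_partition_query=True,
    ))
    return random.choice(items)["text"]
```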

What's next for HomEMOstasis

We plan to offer a choice of different human voices. We could also make the project interactive, so that people can chat with the AI and it can decide what to talk about based on the user's facial expression.

Built With

Python, Microsoft Azure (Vision API, Emotion API, Cosmos DB, Text to Speech)
