1. To help improve your public speaking skills by giving you feedback based on your facial and vocal emotions.
  2. To help improve the banking experience by analyzing the facial emotion of a customer entering the bank and then alerting the bank teller or a customer service representative to act accordingly.
  3. To support people with mental health conditions or disabilities by making them aware of their mental state based on their facial and vocal emotions.

What it does

Analyzes emotion from your facial gestures and your vocal tone; the facial and vocal analyses run independently of each other. Once the analysis is done, Alexa speaks the results back to the user. The emotions detected are Anger, Disgust, Contempt, Fear, Happiness, and Sadness.

How we built it

We used OpenCV and the FisherFace algorithm in Python to detect faces and analyze facial gestures and emotions, and we trained the facial-emotion model on the Cohn-Kanade Facial Expression Database. For vocal emotion, we used the Vokaturi library, and we integrated the whole system in Python using PyCharm.
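
To make the facial pipeline concrete, here is a minimal sketch of how OpenCV's FisherFace recognizer can be wired up. The model filename, input size, and emotion label order are illustrative assumptions, not the project's actual values.

```python
import cv2

# Assumed label order for the six emotions; the actual ordering depends
# on how the Cohn-Kanade training data was indexed.
EMOTIONS = ["anger", "disgust", "contempt", "fear", "happiness", "sadness"]

# Haar cascade bundled with OpenCV, used here for face detection.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# The FisherFace recognizer ships with opencv-contrib-python; the model
# file "fisherface.xml" is a placeholder for a previously trained model.
recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.read("fisherface.xml")

def predict_emotion(frame):
    """Detect the first face in a BGR frame and classify its emotion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        # FisherFace requires inputs to match the training image size.
        face = cv2.resize(gray[y:y + h, x:x + w], (350, 350))
        label, confidence = recognizer.predict(face)
        return EMOTIONS[label], confidence
    return None, None
```

On the vocal side, a sketch along the lines of the OpenVokaturi Python sample code; the shared-library path and WAV filename are placeholders for wherever the SDK is installed.

```python
import scipy.io.wavfile
import Vokaturi  # Python wrapper that ships with the OpenVokaturi SDK

# The platform-specific shared library path is a placeholder.
Vokaturi.load("path/to/OpenVokaturi.so")

sample_rate, samples = scipy.io.wavfile.read("speech.wav")  # placeholder file
buffer_length = len(samples)

# Copy the 16-bit PCM samples into Vokaturi's C buffer as floats in [-1, 1].
c_buffer = Vokaturi.SampleArrayC(buffer_length)
if samples.ndim == 1:  # mono
    c_buffer[:] = samples[:] / 32768.0
else:                  # stereo: average the two channels
    c_buffer[:] = 0.5 * (samples[:, 0] + samples[:, 1]) / 32768.0

voice = Vokaturi.Voice(sample_rate, buffer_length)
voice.fill(buffer_length, c_buffer)

quality = Vokaturi.Quality()
emotions = Vokaturi.EmotionProbabilities()
voice.extract(quality, emotions)
if quality.valid:
    print("anger %.3f  fear %.3f  happiness %.3f  sadness %.3f"
          % (emotions.anger, emotions.fear, emotions.happiness, emotions.sadness))
voice.destroy()
```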

Challenges we ran into

  1. Integrating all the different algorithms into one working Python web app was the big challenge.
  2. Working with the Alexa Developer API was a challenge as well (see the sketch after this list).
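
As an illustration of what the Alexa side might look like, here is a hedged sketch using the ASK SDK for Python; the intent name `AnalyzeEmotionIntent` and the `run_analysis()` hook into the two pipelines are hypothetical, not the project's actual code.

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

def run_analysis():
    # Hypothetical hook into the facial and vocal pipelines sketched above;
    # stubbed here so the example is self-contained.
    return "happiness", "anger"

class AnalyzeEmotionHandler(AbstractRequestHandler):
    """Handles the (hypothetical) AnalyzeEmotionIntent."""

    def can_handle(self, handler_input):
        return is_intent_name("AnalyzeEmotionIntent")(handler_input)

    def handle(self, handler_input):
        facial, vocal = run_analysis()
        speech = ("Your face shows %s and your voice sounds like %s."
                  % (facial, vocal))
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(AnalyzeEmotionHandler())
handler = sb.lambda_handler()  # entry point when deployed as an AWS Lambda
```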

Accomplishments that we're proud of

  1. Although the front-end UI looks simple, the analysis behind it is substantial and applicable to a wide range of use cases.
  2. Successfully integrating the different algorithms, machine-learning models, Alexa, etc. into a single Python-based app.
  3. Teamwork.

What's next for FaVEA

  1. Improve the front-end UI, then add a record button that captures video live and instantly analyzes emotion from your facial gestures and vocal tone.
  2. Make Alexa more interactive based on the user's emotions.
  3. Add body-gesture-based emotion analysis to better portray a speaker's (or a user's) emotional state.