Inspiration

Our project focuses on alexithymia, also known as emotional blindness. It is associated with difficulties in attachment and interpersonal relations, which can prevent people with alexithymia from performing well in the workforce. The condition is very common: about 1 in 10 people in the general population have alexithymia.

What it does

Our goal is to help people with alexithymia perform better in the workforce and to foster collaborative inclusiveness. To that end, we developed a real-time emotion-processing app that provides social-interaction suggestions, targeting individuals who struggle to recognize others' emotions or to express their own at work. Alex and Mia identifies the emotions of the people a user is interacting with and suggests how the user can respond, through both language and actions.

How we built it

We built a proof-of-concept web application that serves as the user interface, showing how an individual who struggles to understand emotions can respond in a social situation. We capture video input from any device with a camera and run each frame through an OpenCV CascadeClassifier, which draws a bounding box around each face. Each bounding box is passed to a pre-trained CNN via DeepFace, which uses facial structure to recognize one of seven emotions: anger, fear, disgust, happiness, neutrality, surprise, or sadness. We then built a prompt, optimized with QuotientAI, that turns the detected emotion into a suggestion the user can act on, helping them understand the general mood of the conversation.
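The detection-then-classification step can be sketched roughly as follows, assuming OpenCV's bundled Haar cascade and DeepFace's pre-trained emotion model; the helper names (`dominant_emotion`, `analyze_frame`) are ours, not part of either library:

```python
# Sketch of the face-detection -> emotion-classification step.
# Requires the `opencv-python` and `deepface` packages for analyze_frame;
# dominant_emotion is pure Python.

def dominant_emotion(scores):
    """Return the emotion label with the highest confidence score."""
    return max(scores, key=scores.get)

def analyze_frame(frame):
    """Detect faces in a BGR frame and classify each face's emotion."""
    import cv2                      # heavy dependencies imported lazily
    from deepface import DeepFace

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    labels = []
    for (x, y, w, h) in cascade.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5):
        face = frame[y:y + h, x:x + w]
        # DeepFace returns per-emotion confidence scores for the crop.
        analysis = DeepFace.analyze(face, actions=["emotion"],
                                    enforce_detection=False)
        labels.append(dominant_emotion(analysis[0]["emotion"]))
    return labels
```

In the full app, `analyze_frame` would be called on frames pulled from a `cv2.VideoCapture` stream, one label per detected face.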

Challenges we ran into

  • We didn’t have time to train our own model, but in the future we would like to classify a wider range of emotions
  • Emotion recognition is difficult: state-of-the-art (SOTA) models struggle to generalize, and future work is needed to push beyond the current 73.3% SOTA accuracy
  • We deliberately add lag to the response suggestions to avoid rate-limiting our OpenAI API key: we average the confidence scores of the seven emotions over a 10-second window and take the emotion with the highest average score as the overall emotion, updating every 5 seconds
  • Understanding how to pass video input through different streaming protocols
  • Learning how to optimize prompts with QuotientAI
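The sliding-window averaging described in the challenges above can be sketched in plain Python; the class and method names here are ours, a minimal illustration rather than our exact implementation:

```python
from collections import defaultdict, deque
import time

class EmotionSmoother:
    """Average per-emotion confidence scores over a sliding time window
    and report the emotion with the highest average score."""

    def __init__(self, window_seconds=10.0):
        self.window = window_seconds
        self.samples = deque()          # (timestamp, {emotion: score})

    def add(self, scores, now=None):
        """Record one frame's emotion scores, evicting stale samples."""
        now = time.monotonic() if now is None else now
        self.samples.append((now, scores))
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def overall_emotion(self):
        """Emotion with the maximum average confidence in the window."""
        if not self.samples:
            return None
        totals = defaultdict(float)
        for _, scores in self.samples:
            for emotion, score in scores.items():
                totals[emotion] += score
        n = len(self.samples)
        return max(totals, key=lambda e: totals[e] / n)
```

Feeding each frame's scores through `add` and querying `overall_emotion` on a timer (every 5 seconds in our setup) keeps the suggestion rate, and therefore the OpenAI request rate, bounded regardless of the camera's frame rate.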

Accomplishments that we're proud of

  • Built a tool that can actually help people with alexithymia perform better in the workforce
  • Figured out how to connect any device with a camera to our pipeline
  • Generalized to settings where the user is interacting with multiple people

What we learned

  • How to optimize prompts with QuotientAI
  • How an emotion recognition model behaves in a real-time setting
  • More about the challenges people with alexithymia face, which helps us develop a better product

What's next for Alex and Mia

  • Improve the context to give more accurate and relevant suggestions
  • Integrate our product into wearable devices such as Apple Vision Pro and Meta Smart Glasses
  • Train models to understand more emotions
  • Reduce the lag we imposed on identifying emotions

Built With
