We express emotions in our everyday lives when we communicate with our loved ones, our neighbors, our friends, our local Loblaw store customer service, our doctors or therapists. These emotions are conveyed through cues such as gesture, text, and facial expressions. The goal of Emotional.AI is to give businesses (e.g., customer service) and doctors/therapists a tool to identify emotions and enhance their services.
What it does
Uses natural language processing (on audio transcribed via AssemblyAI) and computer vision to determine a person's emotions.
How we built it
Natural language processing
- First, we gathered emotion-labelled data from public sources (Kaggle and research studies).
- We preprocessed, cleaned, transformed, created features, and performed light EDA on the dataset.
- Used a TF-IDF vectorizer to handle numbers, punctuation marks, non-letter symbols, etc.
- Scaled the data using a Robust Scaler and trained 7 models (MNB, Linear Regression, KNN, SVM, Decision Tree, Random Forest, XGBoost).
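A minimal sketch of the text pipeline above, assuming scikit-learn and a tiny illustrative dataset in place of our real Kaggle data (the labels and example sentences are made up). The Robust Scaler step is omitted here because Multinomial Naive Bayes requires non-negative features; it applies to the other candidate models.

```python
# TF-IDF features plus one of the 7 candidate models (MultinomialNB).
# The four-sentence dataset is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "i am so happy today",
    "this makes me really angry",
    "i feel sad and alone",
    "what a joyful wonderful day",
]
labels = ["joy", "anger", "sadness", "joy"]

# TF-IDF's token pattern strips punctuation and other non-letter symbols
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(texts)

clf = MultinomialNB()
clf.fit(X, labels)

pred = clf.predict(vectorizer.transform(["so happy, such a joyful day"]))
print(pred[0])
```

In practice we trained all 7 models this way and compared them on a held-out split.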
Computer vision
We used MediaPipe to generate landmark points on the face, then used those points to build our training dataset. We ran OpenCV and MediaPipe in a Jupyter Notebook. Running our data through MediaPipe gave us a skeleton map of the face with 468 points. These points can be mapped in three dimensions, since each contains X, Y, and Z coordinates. We saved these features (468 points × 3) into a spreadsheet, then split the spreadsheet into training and testing data. Using the training set, we built 6 machine learning models and chose the best one.
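The feature-extraction step above can be sketched as follows. `landmarks_to_row` is our illustrative helper name, not a MediaPipe API; the `process_frame` call pattern follows the `mediapipe.solutions.face_mesh` interface we used.

```python
# Flatten MediaPipe Face Mesh landmarks (468 points x 3 coords)
# into one 1404-feature row for the training spreadsheet.
from typing import List, Tuple

def landmarks_to_row(landmarks: List[Tuple[float, float, float]]) -> List[float]:
    assert len(landmarks) == 468, "Face Mesh yields 468 landmarks"
    row = []
    for x, y, z in landmarks:
        row.extend([x, y, z])
    return row  # 468 * 3 = 1404 features

def process_frame(rgb_frame):
    # Assumes mediapipe is installed; rgb_frame is an RGB image array
    # (e.g., an OpenCV frame converted from BGR to RGB).
    import mediapipe as mp
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        result = mesh.process(rgb_frame)
        if not result.multi_face_landmarks:
            return None  # no face detected in this frame
        pts = [(lm.x, lm.y, lm.z)
               for lm in result.multi_face_landmarks[0].landmark]
        return landmarks_to_row(pts)
```

Each returned row becomes one labelled example in the spreadsheet we later split into train and test sets.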
We converted video/audio recordings (whether a therapy session or customer service audio from 1000s of Loblaws customers 😉) to text using the AssemblyAI API.
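The transcription flow looks roughly like this: upload the file, create a transcript job, then poll until it completes. This is a sketch against AssemblyAI's v2 REST API using `requests`; `API_KEY` is a placeholder for a real key.

```python
# Sketch of the AssemblyAI upload -> transcribe -> poll flow.
import time
import requests

API_KEY = "your-assemblyai-key"  # placeholder
BASE = "https://api.assemblyai.com/v2"
HEADERS = {"authorization": API_KEY}

def transcribe(path: str) -> str:
    # 1. Upload the raw audio/video bytes
    with open(path, "rb") as f:
        upload = requests.post(f"{BASE}/upload", headers=HEADERS, data=f).json()
    # 2. Start a transcription job on the uploaded file
    job = requests.post(
        f"{BASE}/transcript",
        headers=HEADERS,
        json={"audio_url": upload["upload_url"]},
    ).json()
    # 3. Poll until the job finishes
    while True:
        res = requests.get(f"{BASE}/transcript/{job['id']}", headers=HEADERS).json()
        if res["status"] in ("completed", "error"):
            return res.get("text") or ""
        time.sleep(3)
```

The returned text is what we feed into the NLP emotion models described above.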
Amazon Web Services
We used S3 to host the video files uploaded by the user. These video files were then sent to the AssemblyAI API.
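A sketch of that hand-off with `boto3`, assuming AWS credentials are configured; the bucket name is a placeholder. Since AssemblyAI needs a URL it can fetch, one simple approach is a time-limited presigned URL:

```python
# Store an uploaded video in S3, then produce a presigned URL
# that AssemblyAI can download the file from.
def store_and_share(local_path: str, key: str) -> str:
    import boto3  # assumes boto3 is installed and AWS credentials exist
    s3 = boto3.client("s3")
    bucket = "emotional-ai-uploads"  # placeholder bucket name
    s3.upload_file(local_path, bucket, key)
    # Presigned GET URL valid for 1 hour
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=3600,
    )
```

The returned URL is what gets passed as `audio_url` when creating the transcription job.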
For Computing (ML)
Challenges we ran into
- Collaborating virtually is challenging
- Deep learning training takes a lot of computing power and time
- Connecting our front-end with back-end (and ML)
- Time management
- Working with a React front end and a Flask server
- Configuring Amazon S3 buckets and users to make the app work with the S3 services
Accomplishments that we're proud of
Apart from completing this hack, we persevered through each challenge as a team and succeeded in what we set out to do.
What we learned
- Working as a team
- Configuration management
- Working with Flask
What's next for Emotional.AI
- We hope to build a more refined application with a cleaner UI.
- We want to train our models further with more data and have more classifications.
- We want to make a platform for therapists to connect with their clients and use our tech.
- Make our solution work in real-time.