Inspiration

Our inspiration came from a shared interest in systems that can recognize human emotion, and the hope that such systems could help people in their daily lives.

What it does

A selfie from a smartphone's front-facing camera, or an uploaded image, is fed into a neural network, which outputs the emotion it recognizes in the face. That output can then be used by a recommendation system to suggest products.
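As a rough sketch of that inference step (the `predict_emotion` helper and the class ordering are our assumptions, using FER2013's seven emotion labels, not verbatim project code):

```python
import torch
from PIL import Image
from torchvision import transforms

# FER2013's seven emotion classes, in label order
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # VGG19's expected input size
    transforms.ToTensor(),
])

def predict_emotion(model, image_path, device="cpu"):
    """Hypothetical helper: return the predicted emotion for one image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0).to(device)  # add batch dimension
    model.eval()
    with torch.no_grad():
        logits = model(batch)
    return EMOTIONS[logits.argmax(dim=1).item()]
```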

How we built it

Starting from our initial idea of an emotion recognition system, we researched datasets created specifically for this problem. PyTorch does not natively ship such a dataset alongside its pre-trained models, so we settled on a past Kaggle challenge, Challenges in Representation Learning: Facial Expression Recognition, whose dataset contains 35,887 images split into training, validation, and test sets. Using the Kaggle API in a Google Colab notebook, we downloaded the dataset and organized the images into their respective folders inside the notebook.
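As a sketch of that step, assuming the standard FER2013 CSV layout (an emotion label, a space-separated pixel string, and a Usage column naming the split); the directory names are our own choices, not the project's exact layout:

```python
import os
from pathlib import Path

import numpy as np
import pandas as pd
from PIL import Image

# download via the Kaggle CLI (credentials expected in ~/.kaggle/kaggle.json)
os.system(
    "kaggle competitions download "
    "-c challenges-in-representation-learning-facial-expression-recognition-challenge"
)
# (extract fer2013.csv from the downloaded archive before the next step)

# fer2013.csv stores each 48x48 grayscale face as a space-separated pixel string
df = pd.read_csv("fer2013.csv")
splits = {"Training": "train", "PublicTest": "valid", "PrivateTest": "test"}

for i, row in df.iterrows():
    pixels = np.array(row["pixels"].split(), dtype=np.uint8).reshape(48, 48)
    out_dir = Path("data") / splits[row["Usage"]] / str(row["emotion"])
    out_dir.mkdir(parents=True, exist_ok=True)
    Image.fromarray(pixels).save(out_dir / f"{i}.png")
```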

We decided to use VGG19 as our pre-trained neural network. Our reasoning for this architecture was to alleviate the vanishing gradient problem and, most importantly, to strengthen feature propagation so the model could take full advantage of the facial features in the dataset's images. Training, testing, and validation ran in a GPU-enabled Google Colab notebook to speed up processing relative to a CPU; specifically, the hardware was a single NVIDIA Tesla T4.
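A minimal sketch of that model setup, assuming torchvision's pre-trained VGG19 and FER2013's seven emotion classes:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(pretrained=True)      # ImageNet weights
model.classifier[6] = nn.Linear(4096, 7)   # new head for the 7 emotion classes

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)                   # e.g. the Colab GPU
```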

After training, we ran the model on random images from the dataset and displayed its best predictions alongside the images.
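One minimal way to display such sample predictions, assuming the folder layout and `predict_emotion` helper sketched earlier:

```python
import random
from pathlib import Path

import matplotlib.pyplot as plt
from PIL import Image

# pick a handful of test images at random and label them with predictions
samples = random.sample(list(Path("data/test").rglob("*.png")), 6)
fig, axes = plt.subplots(1, 6, figsize=(15, 3))
for ax, path in zip(axes, samples):
    ax.imshow(Image.open(path), cmap="gray")
    ax.set_title(predict_emotion(model, path, device))
    ax.axis("off")
plt.show()
```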

Challenges we ran into

Our biggest challenge while building the model was handling the imbalanced classes in the image set. At first we thought the model was simply overfitting, but we eventually realized our data augmentation was not set up correctly, which produced a constant, unchanging validation accuracy.
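For illustration, a corrected setup applies augmentation to the training set only, and one common way to counter class imbalance is weighted sampling. The sketch below shows both; the specific augmentations are examples, not our exact configuration:

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

# augmentation belongs on the training set only
train_tfms = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # VGG19 expects 3 channels
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])
valid_tfms = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_ds = datasets.ImageFolder("data/train", transform=train_tfms)
valid_ds = datasets.ImageFolder("data/valid", transform=valid_tfms)

# oversample minority emotions so each batch sees a more balanced mix
counts = torch.bincount(torch.tensor(train_ds.targets))
weights = 1.0 / counts[train_ds.targets].float()
sampler = WeightedRandomSampler(weights, num_samples=len(weights))
train_loader = DataLoader(train_ds, batch_size=64, sampler=sampler)
```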

Accomplishments that we are proud of

  • Continued working after a few teammates dropped out
  • Wore multiple hats
  • Communicated well every day: even though the team was spread across the US, India, and Egypt, we stayed in close contact
  • Kept a great team attitude
  • Stayed willing to explore each teammate's ideas
  • Were not afraid to research as much as we could to implement our best work

What we learned

We learned how to work on a small development team that was 100% remote. This brought certain obvious challenges, but we were able to move past them and build something. We also learned a lot about ML/DL models and their best implementations in PyTorch, and we got better at data augmentation and visualization.

What's next for Emotion Recognition for a Recommender System

In the future, these prediction results will feed into a Facebook app that, given a predicted emotion, outputs a recommended action. Your input could come from posted selfies or other photos in which a human face can be recognized. Recommended actions could include places to visit, movies to see, food to eat, and music to listen to.
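As a toy illustration of that future mapping (the actions below are placeholders, not product decisions):

```python
# hypothetical emotion -> recommendation mapping for the planned app
RECOMMENDATIONS = {
    "happy":    "share a playlist of upbeat music",
    "sad":      "suggest a feel-good movie",
    "angry":    "recommend a calming place to visit",
    "fear":     "suggest comfort food nearby",
    "surprise": "recommend trying something new",
    "neutral":  "suggest popular events nearby",
    "disgust":  "recommend a change of scenery",
}

def recommend(emotion: str) -> str:
    """Return a recommended action for a predicted emotion."""
    return RECOMMENDATIONS.get(emotion, "suggest popular events nearby")
```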

Built With

  • convolutional-neural-network
  • google-colab
  • kaggle-api
  • numpy
  • python
  • pytorch
  • tensorboard
  • tesla-t4-gpu
  • vgg19

Updates


We compared our output to EmoPy. We were initially concerned about our validation numbers, but when measured against EmoPy's output, our model achieved 20% higher validation accuracy. Thanks to Yashika for this comparison!
