Autistic people have double the rate of depression and are five times more likely to attempt suicide than the general population, according to a 2015 Kaiser Permanente study (https://iancommunity.org/aic/link-between-autism-and-suicide-risk).

Imagine Steven, a 12-year-old child who, because of his autism, finds it difficult both to recognize the emotions of the people around him and to communicate effectively how he is feeling. Think about how isolated that makes him feel.

These difficulties extend to Facebook, the primary platform for digital social interaction. How can Steven possibly engage with the content he sees, when he can’t recognize the emotions of the people in the photos in his feed?

We have created a bot, which we have named ‘Visage’, that extends this functionality to Facebook, opening up the network for people like Steven by identifying the emotions of the people in the pictures he sees. Visage can recognize the six basic emotions plus neutrality.

Visage makes use of the FER2013 dataset from Kaggle (https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data). The training set consists of 28,709 examples of 48×48-pixel grayscale face images; the test set consists of 3,589 examples.
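
As a rough illustration, the Kaggle CSV stores each face as a row of space-separated pixel values. A loader along these lines produces the arrays described above; the file name and column names follow the standard fer2013.csv distribution, and our own preprocessing code may differ in detail:

```python
import numpy as np
import pandas as pd

# FER2013 label indices, in the order used by the Kaggle competition.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def load_fer2013(path="fer2013.csv", usage="Training"):
    """Return (faces, labels) for one split of the Kaggle CSV."""
    df = pd.read_csv(path)
    df = df[df["Usage"] == usage]  # "Training", "PublicTest" or "PrivateTest"
    # Each row stores a 48x48 grayscale face as 2,304 space-separated pixel values.
    faces = np.stack([
        np.array(row.split(), dtype="uint8").reshape(48, 48, 1)
        for row in df["pixels"]
    ])
    labels = df["emotion"].to_numpy()  # integers 0-6, indexing EMOTIONS
    return faces.astype("float32") / 255.0, labels

x_train, y_train = load_fer2013(usage="Training")   # 28,709 examples
x_test, y_test = load_fer2013(usage="PublicTest")   # 3,589 examples
```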

Presentation slides: https://docs.google.com/presentation/d/152szZ68KfrmoyODtv1XK4kVgspMky7INgCt8UYlEJNI/edit#slide=id.g290a79d1fe_0_330

We have implemented ‘VGG Face’, a custom VGG-style neural network for expression recognition. The reference implementation reports an accuracy of 71.64% (https://github.com/XiaoYee/emotion_classification); within our time limitations we were able to achieve 60% accuracy on the test set.
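
The exact architecture lives in the linked repository; purely as an illustration, a VGG-style classifier for 48×48 grayscale faces follows the familiar pattern of stacked 3×3 convolutions. The layer sizes in this Keras sketch are assumptions, not our exact configuration:

```python
from tensorflow.keras import layers, models

def build_vgg_style(num_classes=7, input_shape=(48, 48, 1)):
    """Stacks of small 3x3 convolutions followed by pooling, in the VGG spirit."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (64, 128, 256):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # integer labels 0-6
                  metrics=["accuracy"])
    return model
```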

Messenger acts as the point of interaction between the user and Visage, allowing people who have difficulty distinguishing emotions to use machine learning to help them in this aspect of their lives. We use Messenger to receive images from the user through established Facebook methods (both file uploads and photos taken directly on a mobile device). We then take the URL generated when Facebook stores the image on its servers and pass it to the neural network via a request to a Flask server.
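
A minimal sketch of what that Flask hand-off can look like; the route name, JSON field, and model file name below are illustrative placeholders rather than our exact implementation:

```python
from flask import Flask, jsonify, request
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("visage.h5")  # hypothetical file name for the trained network

@app.route("/classify", methods=["POST"])
def classify():
    # The Messenger webhook handler forwards the URL Facebook assigned to the image.
    image_url = request.json["image_url"]
    labels = predict_emotions(image_url, model)  # sketched in the next section
    return jsonify({"emotions": labels})

if __name__ == "__main__":
    app.run(port=5000)
```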

The neural network takes the URL of a web-hosted image as input and outputs the most likely emotion for each face in the image. These emotions are used to generate a string that is piped back to the Messenger app and posted as Visage's response.
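
Sketched below is one way that inference step can be wired up. Note that it substitutes an OpenCV Haar cascade for the face-locating stage (our pipeline layers a second neural network for that step), so treat it as an approximation rather than our exact code:

```python
import cv2
import numpy as np
import requests

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# A Haar cascade stands in here for the face-detection network.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def predict_emotions(image_url, model):
    """Download the image, locate each face, and classify its expression."""
    raw = np.frombuffer(requests.get(image_url, timeout=10).content, np.uint8)
    image = cv2.imdecode(raw, cv2.IMREAD_COLOR)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    labels = []
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3,
                                                  minNeighbors=5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        face = face.reshape(1, 48, 48, 1).astype("float32") / 255.0
        probs = model.predict(face)[0]          # softmax over the 7 classes
        labels.append(EMOTIONS[int(np.argmax(probs))])
    return labels

# Visage's reply string can then be built with, e.g., ", ".join(labels).
```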

We had originally wanted to annotate the images themselves with each person's emotion, but passing images between the Messenger API and the Python backend proved challenging, so we ultimately decided to work entirely with strings.

We were delighted to have developed a fully functioning product with accuracy approaching the state of the art. We are particularly proud of the bot's ability to recognize the emotions of multiple people in the same picture, which required layering two neural networks: one to locate faces, one to classify each face's expression.

What's next:

  1. Augment training set with additional images sourced directly from Facebook - we expect this will improve accuracy when Facebook images are used as inputs.
  2. ‘Balance’ the training set - for example, there were relatively very few examples of faces expressing disgust (one mitigation is sketched after this list).
  3. Extend functionality to let users play a game with photos selected at random - this will help users develop their own ability to recognize emotions.
  4. Improve classification based on context - people in the same image are likely to be expressing the same emotion, a prior that could be used to refine each face's prediction.
  5. Allow users to 'flag' images they believe have been incorrectly classified to further improve the accuracy of the network.
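
For item 2, a lightweight first step that needs no new data is to reweight the training loss by inverse class frequency. A sketch using scikit-learn follows; the helper name `make_class_weights` is ours, not part of any library:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

def make_class_weights(y_train):
    """Map each class index to a weight inversely proportional to its frequency."""
    classes = np.unique(y_train)
    weights = compute_class_weight(class_weight="balanced",
                                   classes=classes, y=y_train)
    return dict(zip(classes, weights))

# Passed to Keras at training time, e.g.:
# model.fit(x_train, y_train, class_weight=make_class_weights(y_train))
```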
