People with low social EQ struggle to read human body language and cannot tell what others are feeling from their facial expressions. We hope this project can fill that gap and help these people have more meaningful conversations.

What it does

Our project is a model that reads an image of a face and accurately outputs the emotion of the person in the image.

How we built it

We made use of image processing in Keras and TensorFlow, and through a convolutional neural network (VGG-16) we trained the model to recognize faces in images and classify their emotions.
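The architecture above can be sketched roughly as follows. This is a minimal illustration, not our exact training code: the number of emotion classes, the dense-head sizes, and the use of randomly initialized weights (`weights=None`, to keep the sketch self-contained) are assumptions; in practice one would typically start from pretrained ImageNet weights and fine-tune.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumption: seven emotion classes, a common choice for this task
# (e.g. angry, disgust, fear, happy, sad, surprise, neutral).
NUM_EMOTIONS = 7

def build_emotion_model(num_classes=NUM_EMOTIONS):
    # VGG-16 backbone without its ImageNet classifier head.
    # weights=None keeps this sketch runnable offline; a real run
    # would more likely use weights="imagenet" and fine-tune.
    base = tf.keras.applications.VGG16(include_top=False, weights=None,
                                       input_shape=(224, 224, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With a model like this, training reduces to calling `model.fit` on batches of face crops and one-hot emotion labels.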

Challenges we ran into

With only 24 hours, we could only train the model on a limited amount of data, and it was difficult to find a decent-sized dataset of Asian faces showing the expressions we wanted.

Accomplishments that we're proud of

We built the program and trained the model in the nick of time, and it produced promising results.

What we learned

We learned that adding layers sharply increases training time, so when working under a time constraint it is important to balance the model's effectiveness against the time taken to train it.
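One common way to manage this trade-off with a deep backbone like VGG-16 is to freeze most of its layers so only a small fraction of the weights are updated each step. This is a sketch of the general technique, not something the write-up above describes doing; the choice to unfreeze only the last convolutional block is an assumption for illustration.

```python
import tensorflow as tf

# VGG-16 backbone; weights=None keeps the sketch runnable offline.
base = tf.keras.applications.VGG16(include_top=False, weights=None,
                                   input_shape=(224, 224, 3))

# Freeze everything except the final convolutional block ("block5"),
# so gradients are computed and applied for far fewer parameters,
# cutting per-epoch training time.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

trainable = sum(int(tf.size(w)) for w in base.trainable_weights)
total = base.count_params()
print(f"training {trainable} of {total} parameters")
```

Under a hackathon deadline, this kind of partial fine-tuning is a simple lever for trading a little accuracy for much faster iteration.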

What's next for Facial Expression Recognition

We could link this project with Skype and improve it further to run in real time on live video instead of on screen captures, making it more suitable for real-life applications.

Built With

keras, tensorflow