Inspiration

One of the problems blind people face is that they cannot detect the face or emotion of the person they are interacting with.

What it does

This is a simple machine learning model built with neural networks that predicts a person's emotion when given an image as input.
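
As a rough sketch of how such a model would be used at inference time (the model file name, input size, and label order below are illustrative placeholders, assuming a Keras model trained on 48x48 grayscale faces, not the actual project files):

```python
# Minimal inference sketch: load a trained model, preprocess a face
# image, and print the predicted emotion. All names are placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Illustrative label order; the real mapping depends on the dataset
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

model = load_model("emotion_model.h5")        # hypothetical saved model
face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
face = cv2.resize(face, (48, 48)) / 255.0     # normalize to [0, 1]
face = face.reshape(1, 48, 48, 1)             # batch of one grayscale image

probs = model.predict(face)[0]
print(EMOTIONS[int(np.argmax(probs))])        # highest-probability emotion
```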

How we built it

I did face detection using the dlib and OpenCV libraries, as shown in the sketch below. I then downloaded the training and testing datasets from Kaggle and built the emotion-classification model as a neural network.
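
A minimal sketch of the face-detection step, assuming dlib's frontal face detector with OpenCV for image I/O (the file name is a placeholder):

```python
# Detect faces with dlib and crop them for the emotion model.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

image = cv2.imread("input.jpg")               # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The detector returns a rectangle for each face found in the array
faces = detector(gray)
for rect in faces:
    x, y, w, h = rect.left(), rect.top(), rect.width(), rect.height()
    face_crop = gray[y:y + h, x:x + w]        # crop to feed the classifier
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```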

Challenges we ran into

Initially I tried installing dlib on my local system, but it didn't work out, so I moved from the local system to Google Colab. The model's validation accuracy also started out at only 60 percent. Adding ReLU activations after the Conv2D layers and removing the dropout layer raised the accuracy to 98 percent on the validation data (see the sketch below).
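
A hedged sketch of the kind of architecture described above, with ReLU after each Conv2D layer and no dropout. The 48x48 grayscale input shape and the 7 emotion classes assume the common FER-2013 layout from Kaggle; layer counts and filter sizes are illustrative, not the project's exact configuration:

```python
# CNN sketch: Conv2D blocks followed by explicit ReLU activations,
# with the dropout layer removed, as the writeup describes.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), input_shape=(48, 48, 1)),
    layers.Activation("relu"),                # ReLU added after Conv2D
    layers.MaxPooling2D((2, 2)),

    layers.Conv2D(64, (3, 3)),
    layers.Activation("relu"),
    layers.MaxPooling2D((2, 2)),

    # No Dropout layer here, per the change that improved accuracy
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),    # one output per emotion class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```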

Accomplishments that we're proud of

Achieved 98% accuracy on the validation data.

What we learned

Learned how to create an ML model and how to work on increasing its accuracy.

What's next for Facial Emotion Recognition

Integrating the model into an app that can serve as an audio assistant for blind people.

Built With
