Inspiration

Interpreting people's facial expressions is an important part of our day-to-day social lives, and most people take the ability to do so for granted. However, many visually impaired people struggle to read the emotions of the people they talk to, which can be a major obstacle to socializing. The same goes for people with mental illnesses that prevent them from accurately interpreting expressions.

What it does

Emotion Reader takes video as input (for example, footage from a camera mounted on a pair of sunglasses, or a YouTube video). Frames from the video are sent to a Node.js server, where each image is greyscaled and faces are detected; the server returns an appropriately sized crop of the person's face. That cropped image is then analyzed for emotions by a convolutional neural network. The detected emotions can be read from a website using a screen reader, or requested through an Amazon Echo command.
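Roughly, the frame pipeline looks like the sketch below. This is a minimal illustration rather than our exact server code: it assumes an Express app, and `detectAndCropFace()` / `classifyEmotion()` are hypothetical helper names standing in for the OpenCV.js and classifier steps described above.

```javascript
// Minimal sketch of the frame pipeline (illustrative, not the actual server code).
const express = require('express');

const app = express();
app.use(express.raw({ type: 'image/*', limit: '5mb' }));

app.post('/frame', async (req, res) => {
  try {
    // 1. Greyscale the incoming frame and crop out the face (hypothetical helper
    //    built on OpenCV.js face detection).
    const face = await detectAndCropFace(req.body); // e.g. a 48x48 greyscale buffer
    if (!face) {
      return res.status(404).json({ error: 'no face detected' });
    }

    // 2. Run the cropped face through the emotion classifier
    //    (the ConvNetJS model, or the Azure fallback mentioned below).
    const emotion = await classifyEmotion(face);

    // 3. Return the label so the website or Echo skill can read it aloud.
    res.json({ emotion });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, () => console.log('Emotion Reader server listening on :3000'));
```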

*Note: we ran out of time before we could finish the neural network and Amazon Echo pieces, so we fell back on the Microsoft Azure API for emotion detection. We'll have the neural network up later; keep an eye out for updates!
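For reference, the Azure fallback boils down to one call to the Face API's detect endpoint with the emotion attribute requested. The sketch below is illustrative: the endpoint region, key variable, and function name are placeholders rather than our actual configuration, and error handling is trimmed.

```javascript
// Rough sketch of the Azure fallback: send a face image to the Face API's
// detect endpoint and ask for emotion scores. Endpoint and key are placeholders.
const fetch = require('node-fetch'); // or the global fetch on newer Node versions

const AZURE_ENDPOINT = 'https://<your-region>.api.cognitive.microsoft.com';
const AZURE_KEY = process.env.AZURE_FACE_KEY;

async function classifyEmotionWithAzure(imageBuffer) {
  const url = `${AZURE_ENDPOINT}/face/v1.0/detect?returnFaceAttributes=emotion`;
  const resp = await fetch(url, {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': AZURE_KEY,
      'Content-Type': 'application/octet-stream',
    },
    body: imageBuffer,
  });
  const faces = await resp.json();
  if (!Array.isArray(faces) || faces.length === 0) return null;

  // Pick the emotion with the highest confidence score for the first face.
  const scores = faces[0].faceAttributes.emotion; // e.g. { happiness: 0.9, ... }
  return Object.entries(scores).sort((a, b) => b[1] - a[1])[0][0];
}
```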

How we built it

We used Node.js to set up the server and OpenCV.js to detect faces and preprocess the images. We used ConvNetJS to set up the convolutional neural network, and attempted to train the model on a dataset of 327 greyscale images of different facial expressions.
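For the curious, a ConvNetJS setup along these lines is sketched below. The layer sizes, the 48x48 input resolution, and the seven-emotion label set are illustrative assumptions, not our exact configuration.

```javascript
// Sketch of a ConvNetJS emotion classifier (illustrative configuration).
const convnetjs = require('convnetjs');

const layerDefs = [
  { type: 'input', out_sx: 48, out_sy: 48, out_depth: 1 },            // 48x48 greyscale face
  { type: 'conv', sx: 5, filters: 8, stride: 1, pad: 2, activation: 'relu' },
  { type: 'pool', sx: 2, stride: 2 },
  { type: 'conv', sx: 5, filters: 16, stride: 1, pad: 2, activation: 'relu' },
  { type: 'pool', sx: 3, stride: 3 },
  { type: 'softmax', num_classes: 7 },                                // e.g. angry ... neutral
];

const net = new convnetjs.Net();
net.makeLayers(layerDefs);

const trainer = new convnetjs.SGDTrainer(net, {
  method: 'adadelta',
  batch_size: 8,
  l2_decay: 0.001,
});

// dataset: array of { pixels: number[48*48] scaled to [0,1], label: 0..6 }
function trainEpoch(dataset) {
  for (const example of dataset) {
    const vol = new convnetjs.Vol(48, 48, 1, 0.0);
    for (let i = 0; i < example.pixels.length; i++) {
      vol.w[i] = example.pixels[i];          // copy normalized greyscale values in
    }
    trainer.train(vol, example.label);
  }
}

// Prediction: run a face crop forward and read off the most likely class.
function predictEmotion(faceVol) {
  net.forward(faceVol);
  return net.getPrediction();                // index into the emotion label list
}
```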

Challenges we ran into

Both of us were unfamiliar with, or rusty on, a lot of the technology, so pulling this off was really tough.

Accomplishments that we're proud of

Learning a ton.

What we learned

Never use node ever again.

What's next for Emotion Reader

You can expect a complete, working website soon!

Built With

node.js, opencv, convnetjs, microsoft-azure
