Inspiration

Talking to people is hard enough without also having to decode their emotions. Old-school mood rings were amazing because you could tell how someone was feeling just by looking at their ring; those were the good old days.

What it does

I tried to solve this problem by using computer vision to build a tool that lets people learn from their previous conversations. After recording a segment of a conversation, a picture is taken of the other person. A POST request is made to the Face++ API, which returns an emotional breakdown of that face. If the face triggers a high confidence score for a specific emotion, the audio clip, emotion ratings, and picture are saved.
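The "high confidence" check can be sketched as a small helper that scans the emotion scores Face++ returns. The field names follow the emotion attribute in Face++'s Detect response; the 70.0 threshold is an assumption, not the project's actual cutoff.

```python
def dominant_emotion(emotions, threshold=70.0):
    """Return (emotion, confidence) if one emotion scores above the
    threshold, else None. `emotions` maps emotion name -> score (0-100),
    as in faces[0]["attributes"]["emotion"] of a Face++ Detect response."""
    name = max(emotions, key=emotions.get)   # highest-scoring emotion
    confidence = emotions[name]
    if confidence >= threshold:
        return name, confidence
    return None

# Example scores shaped like Face++'s emotion attribute (made-up values):
sample = {
    "anger": 1.2, "disgust": 0.4, "fear": 0.3, "happiness": 92.5,
    "neutral": 4.1, "sadness": 0.8, "surprise": 0.7,
}
print(dominant_emotion(sample))  # ('happiness', 92.5)
```

Only when this returns a result does the tool save the audio, ratings, and picture for that segment.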

How I built it

It was built in Python. OpenCV (cv2) captures frames from the webcam, and PyAudio records a 10-second clip from the microphone. Face++ returns an HTTP response with a JSON body that I parse to pull out and keep track of all the emotion values.
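The Face++ call itself is an ordinary form-encoded POST. A minimal sketch using only the standard library, assuming the v3 Detect endpoint from Face++'s docs (the key, secret, and image URL are placeholders, and the request is built but not sent here):

```python
import urllib.parse
import urllib.request

def build_detect_request(image_url, api_key, api_secret):
    # Form-encode the parameters Face++ Detect expects; asking for the
    # "emotion" attribute gets back the per-emotion confidence scores.
    data = urllib.parse.urlencode({
        "api_key": api_key,
        "api_secret": api_secret,
        "image_url": image_url,
        "return_attributes": "emotion",
    }).encode()
    return urllib.request.Request(
        "https://api-us.faceplusplus.com/facepp/v3/detect",
        data=data, method="POST")

req = build_detect_request("https://example.com/face.jpg", "KEY", "SECRET")
# In the real tool: json.load(urllib.request.urlopen(req)) and parse
# the "faces" list out of the response body.
```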

Challenges I ran into

Opening a connection to the camera takes much longer than I expected, so I ended up restructuring the code to open the camera once and keep it connected through the whole process.
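The restructuring boils down to an "open once, reuse" pattern around OpenCV's VideoCapture. A sketch of that pattern, with a hypothetical FakeCamera standing in for cv2.VideoCapture(0) so it runs without a webcam:

```python
class FakeCamera:
    """Stands in for cv2.VideoCapture(0); opening is the slow step."""
    def __init__(self):
        self.opened = True       # real camera: slow hardware handshake
        self.reads = 0
    def read(self):
        self.reads += 1
        return True, b"frame"    # cv2's read() returns (ok, frame)
    def release(self):
        self.opened = False

camera = FakeCamera()            # open ONCE, at startup
snapshots = []
for _ in range(3):               # one conversation segment per iteration
    ok, frame = camera.read()    # reuse the already-open connection
    if ok:
        snapshots.append(frame)
camera.release()                 # close only when the tool exits
```

Paying the connection cost once at startup keeps each per-segment snapshot fast.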

Accomplishments that I'm proud of

I actually got my code to work! I didn't think the REST API call would succeed, so seeing a real response come back was amazing.

What I learned

I had to learn a lot about how Python handles HTTP requests, which is harder than it looks. I also learned that using big companies' NLP APIs is not easy, which I think may discourage a lot of people from trying them.

What's next for Modern Day Mood Ring

The goal is to make this run on a Raspberry Pi so it can be taken into the field. The next major step is to run sentiment analysis on the audio, which would make the recorded conversations much more useful as a learning tool.

https://docs.google.com/presentation/d/1S4PFHA9ysOU_HXriTIY_d6kHox0bGhiNvaE5i2R3M3s/edit?usp=sharing
