Inspiration
We were inspired by the difficulty of finding the right emoji in time during a conversation, as well as by our interest in machine learning and image recognition.
What it does
It uses Microsoft's Azure Face API to detect the emotion on the user's face and finds a matching emoji. The emoji is copied to the user's clipboard so they can easily paste it into whatever app they're using.
How we built it
We used Tkinter to build the GUI, OpenCV to capture the webcam video feed, and the Azure Face API to detect the emotion on the face.
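The emoji-selection step can be sketched like this: the Azure Face API's emotion attribute comes back as per-emotion confidence scores, and we pick the emoji for the top one. The emoji table and function names here are illustrative, not our exact code.

```python
# Hypothetical sketch of mapping Azure-style emotion scores to an emoji.
EMOTION_TO_EMOJI = {
    "anger": "😠",
    "contempt": "😒",
    "disgust": "🤢",
    "fear": "😨",
    "happiness": "😀",
    "neutral": "😐",
    "sadness": "😢",
    "surprise": "😲",
}

def pick_emoji(emotion_scores):
    """Return the emoji for the highest-confidence emotion."""
    top = max(emotion_scores, key=emotion_scores.get)
    return EMOTION_TO_EMOJI.get(top, "😐")

# Example scores for a smiling face:
scores = {"anger": 0.0, "happiness": 0.96, "neutral": 0.03, "surprise": 0.01}
print(pick_emoji(scores))  # → 😀
```

In the app, the chosen emoji is then placed on the clipboard (Tkinter's `clipboard_append` can do this) so it's ready to paste.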
Challenges we ran into
At first we planned to train a neural network of our own, but we ran into several difficulties and couldn't finish training it in time. When we discovered Microsoft's Azure Face API, we found a way to solve those problems easily and make our frustrations disappear.
Accomplishments that we're proud of
We learned how to use the Azure Face API and OpenCV's face-detection features, and picked up a massive amount about machine learning along the way.
What we learned
We learned a ton about machine learning and PyTorch during this hackathon. Although we weren't able to fully train a working neural network, we did get one to 40% accuracy during the hack, which we feel is pretty good for a team that had never built a large neural network before (and one teammate who had only used TensorFlow and Keras). In the end we realized the Azure Face API was much more convenient and switched over to it.
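For context, here is a minimal sketch (not our actual model) of the kind of small PyTorch CNN one might train for this task, assuming 48x48 grayscale face crops and seven emotion classes; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    """Tiny illustrative CNN for emotion classification."""
    def __init__(self, num_emotions=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Linear(32 * 12 * 12, num_emotions)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logits, one per emotion

model = EmotionNet()
logits = model(torch.randn(1, 1, 48, 48))  # one fake grayscale face
print(logits.shape)  # torch.Size([1, 7])
```

Training would pair this with `nn.CrossEntropyLoss` and a labeled face-emotion dataset, which is where we ran out of time.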
What's next for Easymoji
First, we would like to support more emojis and have the app recognize hand-gesture emojis in addition to faces. Second, we would like to build our own PyTorch model, which we would fully control, for recognizing emotions and hand signs.