We wanted to make something and had very little time to do it. This seemed fun and a bit of a troll.
How it works
Sentiment selfie uses a facial-tracking library called clmtrackr.js to detect emotions from a person's face, then overlays preloaded images and emojis from the emojify.js library based on the tracked emotion. The person can then take a picture with their sentiments rather than having to express them through a description.
Sentiment selfie currently tracks three emotions: happy, sad, and angry. We wrote the code that scatters emojis and images (with random size and placement) around the user's face according to whichever emotion was most strongly detected, as well as the photobooth-like features (taking a picture, saving it).
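As a rough sketch of the "most strongly detected" step: clmtrackr's example emotion classifier reports one score per emotion, roughly in the shape assumed below (the function name, the reading format, and the 0.4 threshold are all illustrative, not the project's actual code).

```javascript
// Hypothetical shape of one reading: an array of { emotion, value }
// pairs with scores between 0 and 1. Only happy, sad and angry are
// kept, matching the three emotions the project tracks.
function strongestEmotion(readings, threshold = 0.4) {
  const tracked = readings.filter(r =>
    ['happy', 'sad', 'angry'].includes(r.emotion));
  // Pick the highest-scoring tracked emotion.
  const best = tracked.reduce(
    (a, b) => (b.value > a.value ? b : a),
    { emotion: null, value: -Infinity });
  // Ignore weak signals so no stickers are drawn for a neutral face.
  return best.value >= threshold ? best.emotion : null;
}
```

Untracked emotions (e.g. "surprised") are filtered out before the comparison, so a high score there never triggers stickers.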
Challenges we ran into
- Stupid CSS errors.
- Working with facial tracking to place images away from the user's face.
- Tuning the timing of the stickers and how often emotions are sampled.
Accomplishments that we're proud of
- Commenting our code better
- Working together while in different time zones.
What we learned
- More about DOM manipulation.
- Using the HTML5 Canvas Element.
What's next for sentiment-selfie
More features to tell you how you're feeling. We currently have emojis and images; it would be great to extend that to music, news articles, and other media that can be labelled as sad, happy, etc. We would also love to build more facial models and identify more emotions.