Easy, real-time feedback. We like to think there is no easier way to gather feedback from people attending any kind of event, from small meetings to large social gatherings.
What it does
Based on face recognition, Moodz interprets and displays emotions both on graphs (per-person and aggregated views are available) and as overlays on snapshots taken from the live video feed.
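The aggregated graph boils the per-face scores from one snapshot down to a single data point. A minimal sketch of that step, assuming a typical emotions-API score shape (the function name and emotion keys are illustrative, not our actual code):

```javascript
// Average the emotion scores of all faces detected in one snapshot,
// producing a single aggregated data point for the group graph.
function aggregateEmotions(faces) {
  const sums = {};
  for (const face of faces) {
    for (const [emotion, score] of Object.entries(face)) {
      sums[emotion] = (sums[emotion] || 0) + score;
    }
  }
  const avg = {};
  for (const [emotion, total] of Object.entries(sums)) {
    avg[emotion] = total / faces.length;
  }
  return avg;
}
```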
How we built it
- Webcam used for video feed
- A canvas renders the stream and extracts snapshots at regular intervals
- Each snapshot is sent to our server for processing
- The Emotions API detects faces and quantifies their emotions
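The client-side half of this pipeline can be sketched roughly as below. This is browser code; the function names, the `/api/snapshots` endpoint, and the JPEG quality setting are illustrative assumptions, not our exact implementation:

```javascript
// Draw the current <video> frame into a canvas and encode it as a JPEG blob.
function captureSnapshot(video, canvas) {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg', 0.8));
}

// Every `intervalMs`, capture a snapshot and POST it to the server,
// which forwards it to the emotions API. Nothing is stored client-side.
function startCaptureLoop(video, canvas, intervalMs) {
  return setInterval(async () => {
    const blob = await captureSnapshot(video, canvas);
    const body = new FormData();
    body.append('snapshot', blob, 'frame.jpg');
    await fetch('/api/snapshots', { method: 'POST', body });
  }, intervalMs);
}
```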
Challenges we ran into
- Well… taking snapshots from a video stream was not that easy :^).
- Initially, we used a library that took pictures only when it detected a face, but it frequently froze the client browser, so we dropped it.
- The whole team agreed that storing images is a big NO, so we adapted the architecture accordingly: emotion scores are the only thing we store.
- Post-processing the “neutral” sentiment was another headache. In general, it overshadows the other sentiments, so we tweaked its weight before charting.
- Displaying emotion details as overlays on the snapshots also took some work.
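The “neutral” tweak mentioned above amounts to down-weighting the neutral score and renormalizing so the remaining emotions become visible on the graphs. A minimal sketch under those assumptions (the function name, score shape, and 0.25 damping factor are illustrative):

```javascript
// Dampen the "neutral" score so other emotions become visible on the
// graphs, then renormalize so the scores still sum to 1.
function adjustNeutral(scores, damping = 0.25) {
  const adjusted = { ...scores, neutral: (scores.neutral || 0) * damping };
  const total = Object.values(adjusted).reduce((sum, v) => sum + v, 0);
  const result = {};
  for (const [emotion, score] of Object.entries(adjusted)) {
    result[emotion] = score / total;
  }
  return result;
}
```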
Accomplishments that we're proud of
We did it! We managed to hack all the components together and deliver a fully functional product.
What we learned
- Working with Canvas (Alex knows better)
- Keeping a steady temper even when the results are not as expected
What's next for Moodz
- manual/automated mode for snapshot capture
- pre-processing image correction
- gender distinction
- more relevant reports and statistics