Inspiration

Jiamu often sends me Instagram shorts. Sometimes it’s a bit annoying 😡, but most of the time it genuinely makes me laugh and feel happy 🤣. That got us thinking: instead of guessing whether a video, poster, or ad is interesting, could we actually observe people’s reactions in a more systematic way?

We wanted to explore whether facial signals could give hints about attention, surprise, or interest while someone is watching short-form content. That curiosity became the starting point of Sentiment Flow.


What we built

Sentiment Flow is a real-time facial signal monitoring system designed for reaction analysis. It is especially suited for short videos, ads, or posters designed to trigger reactions such as surprise or fear at unexpected moments.

While a person watches a video or poster, the system tracks facial signals such as gaze direction, blink rate, mouth movement, and head motion. These signals are displayed as real-time line charts on a dashboard. After the session, an AI component can generate a short summary of the observed trends, and the results can be automatically uploaded to SurveyMonkey for survey analysis.
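To make one of these signals concrete, here is a minimal sketch of how a blink signal can be derived from facial landmarks. This is an illustrative example, not our exact pipeline: it assumes six (x, y) eye landmarks (as produced by landmark detectors like MediaPipe) and computes the widely used eye aspect ratio (EAR), which drops sharply whenever the eye closes.

```python
import math

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    Landmark order: [left corner, two upper-lid points, right corner,
    two lower-lid points]. Thresholding EAR over time yields a blink
    signal; counting threshold crossings gives a blink rate.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Vertical distances between upper- and lower-lid landmark pairs.
    v1 = dist(eye[1], eye[5])
    v2 = dist(eye[2], eye[4])
    # Horizontal distance between the two eye corners.
    h = dist(eye[0], eye[3])
    return (v1 + v2) / (2.0 * h)

# Toy landmark sets: an open eye has a clearly higher EAR than a closed one.
open_eye   = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
closed_eye = [(0, 0), (2, 0.4), (4, 0.4), (6, 0), (4, -0.4), (2, -0.4)]
```

Signals like gaze direction and head motion follow the same pattern: a small geometric formula over a handful of landmark positions, sampled every frame.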


How we built it

The system is split into two independent parts.

The engine handles webcam input and real-time facial analysis. It continuously processes frames and updates the latest signal values. For live video, the engine exposes the camera feed as an MJPEG stream, which allows frames to be transmitted smoothly without blocking other computation.
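An MJPEG stream is just an HTTP response of type `multipart/x-mixed-replace` in which each part is one JPEG frame; the browser replaces the displayed image as each new part arrives. Here is a sketch of how a single frame is wrapped (the boundary name `frame` is an arbitrary choice, not necessarily what our engine uses):

```python
BOUNDARY = b"frame"  # arbitrary; must match the boundary in the Content-Type header

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one encoded JPEG frame as a multipart/x-mixed-replace part.

    A server streams these parts back-to-back on a response whose header
    is 'Content-Type: multipart/x-mixed-replace; boundary=frame'.
    """
    return (
        b"--" + BOUNDARY + b"\r\n"
        b"Content-Type: image/jpeg\r\n"
        b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
        + jpeg_bytes + b"\r\n"
    )

# Placeholder bytes standing in for a real JPEG-encoded frame.
part = mjpeg_part(b"\xff\xd8fakejpeg\xff\xd9")
```

Because each part is independent, the server can emit frames as fast or as slowly as processing allows without renegotiating the connection.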

The dashboard is built with Streamlit and does not interact with the camera directly. Instead, it reads the latest data produced by the engine, visualizes the signal trends using live charts, and embeds the MJPEG video stream. This separation keeps the interface responsive and makes the overall system easier to extend.
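The engine/dashboard handoff can be sketched as a fixed-length rolling window per signal: the engine appends at frame rate, and the dashboard periodically snapshots the window to redraw its chart. The class below is a simplified illustration of that pattern (names are ours, not the actual code):

```python
from collections import deque

class SignalBuffer:
    """Fixed-length rolling window of recent values for one signal.

    The engine pushes a value per processed frame; the dashboard takes a
    snapshot on each refresh, so rendering never blocks frame analysis.
    """
    def __init__(self, maxlen: int = 300):
        self.values = deque(maxlen=maxlen)  # old values fall off automatically

    def push(self, value: float) -> None:
        self.values.append(value)

    def snapshot(self) -> list:
        # Copy so the chart renders a stable view while new frames arrive.
        return list(self.values)

blink_rate = SignalBuffer(maxlen=3)
for v in [0.1, 0.2, 0.3, 0.4]:
    blink_rate.push(v)
```

Keeping the window bounded also caps memory use during long viewing sessions.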

An AI module was added on top of the existing pipeline to generate short summaries based on recent signal changes. This component is optional and does not affect the core real-time system.
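The summary step boils down to turning recent signal statistics into a compact prompt for a text model. The sketch below is a hypothetical reconstruction of that structure (the real prompt was hand-tuned, and the function and field names here are illustrative):

```python
def build_summary_prompt(stats: dict) -> str:
    """Format recent signal statistics into a prompt for a text model.

    `stats` maps a signal name to (start_value, end_value) over the
    recent window; the model is asked to describe the trends in plain
    language. (Hypothetical structure, not our exact prompt.)
    """
    lines = []
    for name, (start, end) in sorted(stats.items()):
        direction = "rose" if end > start else "fell" if end < start else "held steady"
        lines.append(f"- {name}: {direction} from {start:.2f} to {end:.2f}")
    return (
        "Summarize the viewer's reaction in two sentences, "
        "based on these facial-signal trends:\n" + "\n".join(lines)
    )

prompt = build_summary_prompt({"blink_rate": (0.20, 0.35), "head_motion": (0.50, 0.50)})
```

Grounding the prompt in numeric trends, rather than raw frames, is what keeps this module cheap enough to stay optional.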


Challenges

One of the main challenges was keeping the real-time monitoring stable. Early versions often failed to maintain a live connection, especially when individual components were restarted.

The initial line charts were also choppy, which made trends difficult to interpret. This required adjusting how frequently data was updated and rendered. We also encountered issues with URL-based streaming, where small configuration mistakes could break the video feed.
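One common way to tame choppy charts, shown here as an illustration rather than our exact fix, is to apply an exponential moving average to each signal before rendering, so frame-to-frame jitter is damped without lowering the update rate:

```python
def ema(values, alpha=0.3):
    """Exponentially smooth a noisy signal before charting it.

    Lower alpha = smoother (but laggier) line. Applied per signal before
    each redraw, this suppresses frame-to-frame jitter in the chart.
    """
    smoothed = []
    prev = None
    for v in values:
        prev = v if prev is None else alpha * v + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]   # jittery raw signal
smooth = ema(noisy, alpha=0.3)       # damped version for the chart
```

The same idea applies to throttling: redrawing a few times per second instead of on every frame makes trends far easier to read.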

Finally, the AI summaries were not accurate at first. The prompts needed refinement to better align with the actual facial signals being measured.


Accomplishments

We successfully built a complete end-to-end system that works in real time, from facial signal capture to live visualization and AI-generated summaries. One accomplishment we are especially proud of is integrating the system with SurveyMonkey, allowing survey results to be updated automatically instead of being handled manually.

Most importantly, this was our first hackathon project, and we managed to turn an initial idea into a working and stable demo within a limited time frame 🎉.


What we learned

Through this project, we learned how to work with MediaPipe for facial analysis, manage real-time data streams, and build dashboards that remain stable under continuous updates. We also gained experience debugging live systems and refining AI prompts to produce more meaningful outputs.


What’s next for Sentiment Flow

In the future, we want to improve the accuracy of the facial signals and make the system work under a wider range of conditions. We are also interested in supporting multiple viewers, improving long-term trend analysis, and exploring more robust ways to stream data beyond local connections.

With more time, we would like to turn Sentiment Flow into a more general tool for reaction analysis that can be used in real-world studies and user research.
