Inspiration

Reflekt started from a simple question: what happens if a system does not label emotions, but instead listens to small human signals and responds through art? The goal was to explore emotion as movement and change, not as a fixed category.

What it does

Reflekt listens to facial expression and voice in real time and turns them into a living visual system.
It does not detect or diagnose emotion. It blends signals into a temporary emotional state that controls generative visuals.
At the end of a session, a single visual residue remains.
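As an illustration of how a blended emotional state might drive generative visuals, here is a minimal sketch. The function name and the specific mappings (valence to hue, arousal to particle speed) are illustrative assumptions, not the project's actual code.

```python
def state_to_visual_params(valence, arousal):
    """Map a valence/arousal pair (each roughly in -1..1 / 0..1)
    to illustrative parameters for a particle system."""
    hue = (valence + 1) / 2 * 360   # negative valence -> cool hues, positive -> warm
    speed = 0.5 + arousal * 2.0     # higher arousal -> faster particle motion
    return {"hue": hue, "speed": speed}

# Neutral valence, moderate arousal:
params = state_to_visual_params(0.0, 0.5)
print(params)  # {'hue': 180.0, 'speed': 1.5}
```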

How we built it

- The backend is written in Python and processes camera and voice input.
- Facial signals and voice sentiment are combined into valence and arousal values.
- A WebSocket bridge sends only the derived emotional state to the browser.
- The frontend renders a reactive particle system and saves session visuals.
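The blending and bridge steps above can be sketched as follows. The function names, the fixed weighting, and the message shape are hypothetical; they only show the idea of fusing two estimates and serialising the derived state (never raw frames or audio) for the browser.

```python
import json

def blend_signals(face_valence, face_arousal, voice_valence, voice_arousal,
                  face_weight=0.6):
    """Fuse facial and voice estimates into one valence/arousal pair.
    face_weight is an assumed fixed mix; a real system would tune this."""
    w = face_weight
    valence = w * face_valence + (1 - w) * voice_valence
    arousal = w * face_arousal + (1 - w) * voice_arousal
    return {"valence": round(valence, 3), "arousal": round(arousal, 3)}

def to_ws_message(state):
    """Serialise only the derived state for the WebSocket bridge --
    no raw biometric data leaves the backend."""
    return json.dumps({"type": "emotional_state", **state})

state = blend_signals(0.4, 0.7, -0.2, 0.5)
print(to_ws_message(state))
```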

Challenges we ran into

- Balancing multiple weak signals without overtrusting any one input was difficult.
- Keeping the system real time while avoiding storage of raw biometric data required careful design.
- Tuning visual behavior to feel expressive without being chaotic took iteration.
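One common way to keep any single noisy reading from dominating is exponential smoothing, so the state drifts toward new input rather than jumping to it. A minimal sketch; the class name and alpha value are illustrative, not the project's implementation.

```python
class SmoothedState:
    """Exponential moving average over incoming readings.
    A lower alpha means heavier smoothing and slower reaction."""

    def __init__(self, alpha=0.1, initial=0.0):
        self.alpha = alpha
        self.value = initial

    def update(self, reading):
        # New state is a blend of the fresh reading and the history,
        # so one spiky frame cannot swing the whole visual.
        self.value = self.alpha * reading + (1 - self.alpha) * self.value
        return self.value

s = SmoothedState(alpha=0.5)
print(s.update(1.0))  # moves halfway toward the reading: 0.5
print(s.update(1.0))  # continues to converge: 0.75
```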

Accomplishments that we're proud of

- We built a system that reacts emotionally without claiming emotional truth.
- The visuals change smoothly over time.
- The system leaves behind art, not data.

What we learned

- Simple signals can create rich experiences when combined carefully.
- Constraints can lead to more thoughtful design.

What's next for Reflekt

- Improved voice dynamics and rhythm sensitivity.
- Use in live installations and interactive exhibits.
- Modifying gallery.html so it works properly in a public deployment.
