Inspiration
I built Resonance because I was tired of wellness apps that demand I name my emotion before they do anything.
“Are you anxious? Sad? Overwhelmed?” — honestly, half the time I don’t know, and labeling it feels like work.
I wanted something that could sense how I feel without forcing me to articulate it first.
The idea came from thinking about how people actually connect. Sometimes you don’t want advice, you don’t want to journal, and you don’t even want to talk to someone you know — you just want to sit with a stranger who’s in the same emotional weather as you.
No profiles. No history. No social graph.
Just one moment of resonance.
What it does
Resonance identifies a user’s emotional state through a short sequence of questions and intuitive image selections, then matches them with another user who feels emotionally similar.
Instead of asking users to explain how they feel, the system builds a vector representation of their emotional state and uses it to find a match.
Once matched, two users enter a simple, anonymous, real-time chat.
There are no profiles, no memory, and no system-generated conversation — just two people sharing the same emotional space in that moment.
How we built it
We grounded the system in Mehrabian and Russell’s PAD emotional model (1974), which represents emotion across three dimensions: Pleasure (P), Arousal (A), and Dominance (D).
The interaction flow is intentionally lightweight:
- 3 short questions to establish an initial (P, A, D) vector
- 3 rounds of image-pair selections to fine-tune (P, A)
Each image is pre-labeled with (p_delta, a_delta) values and organized into a 3×3 bucket grid based on the user’s initial emotional state.
The final vector is computed as:
final_P = initial_P + avg(p_deltas) × 0.5
final_A = initial_A + avg(a_deltas) × 0.5
Dominance (D) remains fixed throughout.
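The update above can be sketched in a few lines of Python (illustrative names only — `finalize_vector` and the sample deltas are not the project’s actual code):

```python
def finalize_vector(initial, deltas, damping=0.5):
    """initial: (P, A, D); deltas: list of (p_delta, a_delta) from image picks."""
    p0, a0, d0 = initial
    avg_p = sum(p for p, _ in deltas) / len(deltas)
    avg_a = sum(a for _, a in deltas) / len(deltas)
    # Dominance stays fixed; only Pleasure and Arousal are fine-tuned,
    # and the deltas are damped by 0.5 so image picks nudge rather than override.
    return (p0 + avg_p * damping, a0 + avg_a * damping, d0)

# Three image-pair rounds pulling the user toward lower pleasure, higher arousal:
vec = finalize_vector((0.2, 0.6, 0.4), [(-0.4, 0.2), (-0.2, 0.4), (-0.6, 0.0)])
# → roughly (0.0, 0.7, 0.4): P and A shift, D is untouched
```

The 0.5 damping factor keeps the three initial questions as the anchor, with the image rounds acting as a refinement on top.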
For matching, we use a queue-based system:
- Users enter a waiting pool after their vector is computed
- The system finds the closest available user using Euclidean distance on (P, A, D)
- Matching is constrained so users share the same Dominance level
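A minimal sketch of that matching step (the real logic lives in an n8n workflow; the `match` function and pool shape here are assumptions for illustration):

```python
import math

def match(new_user, pool):
    """new_user: (P, A, D); pool: list of (user_id, (P, A, D)) waiting users.
    Returns the id of the closest user at the same Dominance level, or None."""
    # Hard constraint: only consider users sharing the same D value.
    candidates = [(uid, vec) for uid, vec in pool if vec[2] == new_user[2]]
    if not candidates:
        return None  # no compatible partner yet; stay in the waiting pool
    # Pick the candidate with the smallest Euclidean distance on (P, A, D).
    return min(candidates, key=lambda c: math.dist(c[1], new_user))[0]

pool = [("u1", (0.9, 0.1, 0.5)), ("u2", (0.1, 0.8, 0.5)), ("u3", (0.2, 0.7, 0.2))]
match((0.2, 0.7, 0.5), pool)  # → "u2" — "u3" is nearer in (P, A) but has a different D
```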
On the infrastructure side:
- n8n handles workflows (storage, matching, routing)
- Google Sheets is used as a lightweight backend for storing vectors and polling state
- Lovable is used to rapidly build the frontend interaction flow
The system is intentionally minimal — designed to work without heavy infrastructure.
Challenges we ran into
The hardest part was calibrating the image selection system.
Early versions failed in two opposite ways:
- Too abstract → users clicked randomly
- Too literal → it felt like a personality quiz
Finding the middle ground — where choices feel instinctive but still carry meaningful signal — took multiple iterations.
Matching was another challenge.
We initially ran into cases where users with very different arousal levels were matched simply because their pleasure scores were close. Fixing this required tightening how distance is calculated and filtered.
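One way to express that tightening (a sketch of the idea, not the exact production logic — the cap and weights below are assumed values): weight arousal more heavily and reject pairs whose arousal gap exceeds a hard threshold, so close pleasure scores alone can no longer force a match.

```python
A_CAP = 0.3           # maximum allowed arousal gap (assumed threshold)
WEIGHTS = (1.0, 2.0)  # (pleasure, arousal) — arousal mismatches count double

def weighted_distance(u, v):
    """u, v: (P, A) pairs. Returns inf for pairs filtered out by the arousal cap."""
    if abs(u[1] - v[1]) > A_CAP:
        return float("inf")  # never matched, regardless of pleasure similarity
    return ((WEIGHTS[0] * (u[0] - v[0]) ** 2 +
             WEIGHTS[1] * (u[1] - v[1]) ** 2) ** 0.5)

# Identical pleasure no longer overrides a large arousal mismatch:
weighted_distance((0.8, 0.1), (0.8, 0.9))  # → inf (arousal gap 0.8 > cap)
```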
We also had to make deliberate trade-offs.
We removed features like AI-generated conversation starters and more complex visual systems to keep the core experience simple, focused, and reliable.
Accomplishments that we're proud of
We built a fully working emotional matching system end-to-end with minimal infrastructure:
- A non-verbal emotion capture flow that actually produces structured data
- A functioning matching system based on emotional vectors
- A real-time anonymous chat between matched users
- A complete pipeline from interaction → vector → matching → connection
We’re especially proud that the experience feels intuitive rather than analytical — users don’t feel like they’re filling out a test, but the system still extracts meaningful signal.
What we learned
We learned that emotion doesn’t need to be explicitly labeled to be captured.
By combining small, intuitive interactions with a structured emotional model, we can infer how someone feels without asking them to explain it — and that makes the experience much more approachable.
We also learned that constraints help.
Using simple tools like Google Sheets and n8n forced us to think in terms of flows and interactions instead of over-engineering.
Most importantly, we realized that connection doesn’t require identity, history, or context — sometimes it’s enough for two people to exist in the same emotional state at the same time.
What's next for Resonance
Next, we want to improve both precision and depth.
- Expand and refine the image dataset to better capture subtle emotional differences
- Improve matching quality with clustering and adaptive weighting
- Explore lightweight moderation and safety mechanisms
- Experiment with longer or structured interactions beyond one-off sessions
Longer term, we’re interested in building a richer “emotional layer” on top of the system — not to track users, but to better understand how emotional states evolve over time while still preserving anonymity.
Built With
- google-sheets
- lovable
- n8n
- pad-model
- vector-matching
- webhooks