BEBOP: AI-Powered Neurofeedback & Emotional Regulation
Inspiration
The growing need for accessible, personalized mental health support inspired BEBOP. Traditional wellness apps are static; they lack real-time adaptability and continuous context. We envisioned a "smart companion" that doesn't just converse, but deeply understands underlying emotional states and biological stress markers, guiding the user through evidence-based regulation techniques exactly when they need them most.
What it does
BEBOP is a memory-augmented neurofeedback application that combines real-time voice interaction with deep emotional intelligence. As the user speaks, BEBOP identifies their emotional state (Anxiety, Stress, Sadness, Neutral) and maps it to cognitive markers associated with amygdala, insula, and prefrontal cortex activation patterns.
Using a persistent session memory, it tracks the user’s emotional arc and dynamically triggers active regulation techniques such as Box Breathing, Body Scans, or Cognitive Reframing to help stabilize their mental state in real time.
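To make the trigger logic concrete, here is a minimal sketch of how a detected state and its trend could select a technique. The thresholds and the state-to-technique pairings are illustrative assumptions, not BEBOP's tuned clinical logic.

```typescript
// Hypothetical mapping from emotional state + trend to a regulation
// technique. Pairings and the dE/dt threshold are illustrative only.
type Emotion = "Anxiety" | "Stress" | "Sadness" | "Neutral";
type Technique = "Box Breathing" | "Body Scan" | "Cognitive Reframing";

function selectTechnique(emotion: Emotion, dEdt: number): Technique | null {
  if (emotion === "Neutral") return null;                        // stable: keep conversing
  if (emotion === "Anxiety" && dEdt > 0) return "Box Breathing"; // escalating anxiety
  if (emotion === "Stress") return "Body Scan";                  // somatic grounding
  if (emotion === "Sadness") return "Cognitive Reframing";       // challenge negative appraisals
  return null;                                                   // below threshold: no intervention yet
}
```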
How we built it
To achieve low-latency voice interaction without sacrificing analytical depth, we designed a sophisticated, event-driven architecture featuring:
- **Asynchronous Dual-Model Strategy (AWS Bedrock):** We decoupled conversational generation (Nova Sonic) from background emotional classification (Nova Lite), so analysis never blocks the voice stream and the conversation stays responsive and empathetic (sketched after this list).
- **MARA (Memory-Augmented Regulation Agent):** A custom backend subsystem that calculates the user's emotional trajectory $\frac{\Delta E}{\Delta t}$ and programmatically injects psychological context into the LLM system prompt on the fly.
- **Immersive Tech Stack:** Built with Next.js and React Three Fiber for a spatial 3D frontend, and Node.js/Socket.io for high-performance, full-duplex WebSocket communication.
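A minimal sketch of how the first two pieces fit together, assuming the AWS SDK's `BedrockRuntimeClient`/`ConverseCommand` and the `amazon.nova-lite-v1:0` model ID; the JSON contract, helper names, and two-sample trajectory window are illustrative, not our literal source.

```typescript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

type Emotion = "Anxiety" | "Stress" | "Sadness" | "Neutral";

interface EmotionSample {
  emotion: Emotion;
  intensity: number; // 0..1 score from the classifier
  t: number;         // epoch ms
}

const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Background classification on Nova Lite. The voice loop never awaits this.
async function classifyEmotion(transcript: string): Promise<EmotionSample> {
  const res = await client.send(new ConverseCommand({
    modelId: "amazon.nova-lite-v1:0",
    system: [{ text: "Classify the speaker's emotion as Anxiety, Stress, Sadness, or Neutral with an intensity from 0 to 1. Reply as JSON." }],
    messages: [{ role: "user", content: [{ text: transcript }] }],
  }));
  const parsed = JSON.parse(res.output?.message?.content?.[0]?.text ?? "{}");
  return { emotion: parsed.emotion ?? "Neutral", intensity: parsed.intensity ?? 0, t: Date.now() };
}

// MARA: emotional trajectory dE/dt over the last two samples, per second.
function trajectory(history: EmotionSample[]): number {
  if (history.length < 2) return 0;
  const [prev, curr] = history.slice(-2);
  return (curr.intensity - prev.intensity) / ((curr.t - prev.t) / 1000);
}

// Called per finalized transcript chunk; fire-and-forget so Nova Sonic's
// conversational stream is never blocked by classification.
function onTranscript(transcript: string, history: EmotionSample[], updateSystemPrompt: (ctx: string) => void): void {
  classifyEmotion(transcript)
    .then((sample) => {
      history.push(sample);
      const dEdt = trajectory(history);
      updateSystemPrompt(`Current emotion: ${sample.emotion} (intensity ${sample.intensity.toFixed(2)}, dE/dt ${dEdt.toFixed(3)}/s).`);
    })
    .catch(() => { /* a failed classification must never stall the voice loop */ });
}
```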
Challenges we ran into
Our biggest hurdle was orchestrating bidirectional audio streaming alongside real-time system prompt injections without interrupting the active voice connection. We encountered race conditions between audio buffer initialization and context updates. We solved this by engineering an automated server-side setup sequence that intercepts the connection, synchronizes the MARA insights, and opens the audio channel in precise sequential order.
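A condensed sketch of that setup sequence over Socket.io; the three stubbed helpers stand in for our real Convex/Bedrock plumbing, and their names are hypothetical.

```typescript
import { Server, Socket } from "socket.io";

// Hypothetical stand-ins for the real subsystems.
async function loadSessionInsights(sessionId: string): Promise<string> {
  return `session ${sessionId}: rising anxiety over recent turns`; // e.g. fetched from session memory
}
async function injectSystemPrompt(context: string): Promise<void> {
  /* push updated MARA context into the Nova Sonic session */
}
function forwardToNovaSonic(chunk: ArrayBuffer): void {
  /* write into the bidirectional Bedrock audio stream */
}

const io = new Server(3001);

io.on("connection", async (socket: Socket) => {
  // 1. Intercept: the client must not stream audio until we signal "ready".
  socket.emit("status", "initializing");

  // 2. Synchronize MARA insights into the system prompt FIRST, so a
  //    context update can never race the first audio buffer.
  const context = await loadSessionInsights(String(socket.handshake.auth.sessionId ?? "anon"));
  await injectSystemPrompt(context);

  // 3. Only now wire up and open the audio channel.
  socket.on("audio-chunk", (chunk: ArrayBuffer) => forwardToNovaSonic(chunk));
  socket.emit("status", "ready");
});
```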
Accomplishments that we're proud of
- **Low-Latency Synthesis:** Successfully implementing the dual-model strategy on AWS Bedrock to provide real-time emotional analysis without lagging the voice conversation.
- **Algorithmic Empathy:** Developing the MARA subsystem to translate abstract emotional states into deterministic, math-backed triggers for clinical regulation.
- **Seamless Integration:** Building a stable, bidirectional bridge between raw audio streams and complex AI classification models.
What we learned
We gained profound insights into orchestrating multiple foundation models simultaneously. We mastered real-time audio buffer management in Node.js and, crucially, learned how to bridge the gap between human-computer interaction (HCI) and computational neuroscience to create a tool that feels truly "aware."
What's next for BEBOP
We plan to introduce multi-modal biometric integrations, such as wearable heart rate monitors or consumer EEG headsets, to validate our software's emotional assessments. We also aim to expand the clinical regulation library within the MARA subsystem to include more diverse therapeutic interventions.
Built With
- amazon-nova-lite
- amazon-nova-sonic
- aws-bedrock
- clerk
- convex
- express.js
- next.js
- node.js
- react
- react-three-fiber
- socket.io
- tailwind-css
- typescript