Inspiration
Grief is universal, but support isn’t.
Breakups, loss, and emotional crises often leave people isolated—blocked by stigma, cost, or lack of awareness.
We were driven by a simple question:
What if emotional support didn’t disappear when someone does?
At the same time, we recognized a critical boundary: AI should never replace real human relationships.
So we designed a system that acts as a temporary emotional bridge, not a permanent substitute.
What it does
this too shall pass is an AI-driven therapeutic support platform that helps users process grief, emotional distress, and crises safely.
We translate the five stages of grief into adaptive AI behavior:
- Venting (Denial & Anger): The AI listens, validates, and encourages emotional expression.
- Cloning (Bargaining & Depression): The system reconstructs supportive elements of the lost person (speech style, personality traits, shared memories) to provide familiarity while maintaining therapeutic boundaries.
- Transitioning (Acceptance): The AI gradually reduces mimicry and guides users toward independence, new habits, and real-world connections (a sketch of this mapping follows the list).
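To make the mapping concrete, here is a minimal sketch of how grief stages could be routed to the three AI phases; the names `GriefStage`, `AIPhase`, and `select_phase` are illustrative, not our production identifiers:

```python
# Minimal sketch (hypothetical names): mapping the five grief stages to the
# three adaptive AI phases and selecting a behavior for the current stage.
from enum import Enum, auto


class GriefStage(Enum):
    DENIAL = auto()
    ANGER = auto()
    BARGAINING = auto()
    DEPRESSION = auto()
    ACCEPTANCE = auto()


class AIPhase(Enum):
    VENTING = auto()        # listen, validate, encourage expression
    CLONING = auto()        # bounded reconstruction of familiar traits
    TRANSITIONING = auto()  # reduce mimicry, point toward real-world support


STAGE_TO_PHASE = {
    GriefStage.DENIAL: AIPhase.VENTING,
    GriefStage.ANGER: AIPhase.VENTING,
    GriefStage.BARGAINING: AIPhase.CLONING,
    GriefStage.DEPRESSION: AIPhase.CLONING,
    GriefStage.ACCEPTANCE: AIPhase.TRANSITIONING,
}


def select_phase(stage: GriefStage) -> AIPhase:
    """Pick the AI behavior phase for the user's current grief stage."""
    return STAGE_TO_PHASE[stage]
```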
Safety & Ethical Design
We explicitly define what AI should not do:
- Replace human relationships
- Create emotional dependency
- Provide clinical diagnosis
Instead, we built:
- Stepped-care model: AI → Volunteers → Professionals (sketched after this list)
- Crisis detection system: Real-time identification of self-harm risk
- No-self-harm protocol: Encourages seeking immediate help
- Memory modulation: Gradually tones down emotional reliance on the AI
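To show how the stepped-care routing and the no-self-harm protocol fit together, here is a simplified, hypothetical sketch; the keyword list, threshold, and function names are placeholders, and the real crisis analyzer is considerably more nuanced:

```python
# Minimal sketch (assumed thresholds and keywords): stepped-care routing from
# AI support to volunteers to professionals, with a self-harm risk check.
from enum import Enum, auto


class SupportTier(Enum):
    AI = auto()
    VOLUNTEER = auto()
    PROFESSIONAL = auto()


# Illustrative signals only; the production analyzer is not keyword-based.
SELF_HARM_SIGNALS = ("hurt myself", "end it all", "no reason to live")


def detect_self_harm_risk(message: str) -> bool:
    """Very rough stand-in for the real crisis detection system."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)


def route_support(message: str, distress_level: float) -> SupportTier:
    """Escalate along the stepped-care model as risk increases."""
    if detect_self_harm_risk(message):
        return SupportTier.PROFESSIONAL  # trigger SOS / immediate-help protocol
    if distress_level > 0.7:             # assumed escalation threshold
        return SupportTier.VOLUNTEER
    return SupportTier.AI
```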
Evidence-Based Approach
Our system is grounded in psychological theory and measurement.
We model emotional progression as a dynamic function over time:
$$ \text{Recovery}(t) = f(\text{Emotional Expression},\ \text{Memory Exposure},\ \text{Support Level}) $$
We track distress using the Impact of Event Scale (IES) and adapt intervention intensity accordingly:
$$ \text{If } \text{IES}_{\text{score}} > \theta \rightarrow \text{Escalate to Human Support} $$
This ensures that the system is not static but responsive to the user's state.
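A small worked example of this rule; the cut-off value shown is an assumption for illustration, not the calibrated threshold we use:

```python
# Minimal sketch: adapting intervention intensity to the user's
# Impact of Event Scale (IES) score. The threshold is an assumed value.
IES_ESCALATION_THRESHOLD = 26


def adapt_intervention(ies_score: int) -> str:
    """Return the intervention level implied by the current IES score."""
    if ies_score > IES_ESCALATION_THRESHOLD:
        return "escalate_to_human_support"
    if ies_score > IES_ESCALATION_THRESHOLD // 2:
        return "increase_ai_check_ins"
    return "standard_ai_support"
```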
How we built it
- AI Engine: Google Gemini (context-aware therapeutic responses)
- Memory Extraction: Structures user inputs into memories, habits, routines, and traits (see the sketch after this list)
- Backend: Firebase (Authentication, Firestore database, role-based access)
- Crisis Analyzer: Detects high-risk signals and triggers escalation
- Role System:
  - Users (seek support)
  - Volunteers (assist in SOS situations)
  - Super Admin (demo monitoring and safety oversight)
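As one illustration of how memory extraction feeds the AI engine, here is a simplified sketch; the field names, prompt wording, and model name are assumptions rather than our exact implementation:

```python
# Minimal sketch (assumed schema and prompt wording): structuring user inputs
# into a memory profile and turning it into context for the AI engine.
from dataclasses import dataclass, field


@dataclass
class MemoryProfile:
    """Supportive elements extracted from what the user shares."""
    memories: list[str] = field(default_factory=list)
    habits: list[str] = field(default_factory=list)
    routines: list[str] = field(default_factory=list)
    traits: list[str] = field(default_factory=list)


def build_prompt(profile: MemoryProfile, user_message: str, phase: str) -> str:
    """Assemble the context passed to the language model for one turn."""
    return (
        f"You are a supportive, non-clinical companion in the '{phase}' phase.\n"
        f"Known traits: {', '.join(profile.traits) or 'none yet'}.\n"
        f"Shared memories: {', '.join(profile.memories) or 'none yet'}.\n"
        "Never encourage dependency; gently point toward real-world support.\n"
        f"User says: {user_message}"
    )


# The assembled prompt is then sent to Gemini, e.g. via the google-generativeai
# SDK (the model name below is an assumption):
#   import google.generativeai as genai
#   genai.configure(api_key=GEMINI_API_KEY)
#   reply = genai.GenerativeModel("gemini-1.5-flash").generate_content(prompt).text
```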
Challenges we faced
- Balancing emotional realism vs ethical safety
- Preventing over-dependence on AI-generated personas
- Designing robust crisis detection without false positives
- Handling deeply personal data responsibly
What we learned
- In mental health, restraint is more important than intelligence
- The goal is not to simulate a person, but to support emotional progression
- Ethical design is not a limitation; it is the core innovation