EmpathSystem
IMPORTANT NOTE: If the model is not working properly, it is due to the Hugging Face free credit limit of only $0.10. I do not have the money to buy credits. Thank you for understanding :)
Inspiration
EmPath is inspired by my personal experiences with friends who struggle with depression. Many of them didn’t necessarily need solutions—they just needed someone who would listen and understand. EmPath was created to be that presence: a conversational AI companion designed to respond with empathy, care, and emotional awareness.
What It Does
EmPath is a web-based conversational AI powered by adaptive machine learning models. It is capable of detecting emotional cues in text and responding in a supportive, empathetic manner. Rather than acting as a diagnostic or clinical tool, EmPath is designed to be a safe emotional companion, offering comfort, encouragement, and gentle reframing during difficult moments.
How It Works
1. Dual Classification (Parallel Processing)
When a user sends a message, two AI models analyze it simultaneously:
- EMPATHY Model: Detects emotional state (anxiety, depression, crisis)
- EMPHASIST Model: Identifies cognitive distortions (catastrophizing, overgeneralization, etc.)
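The parallel step above can be sketched as follows. The classifier functions are passed in as stand-ins (the real system calls the two hosted models); their shapes here are illustrative assumptions:

```javascript
// Run the EMPATHY and EMPHASIST classifiers concurrently on one message.
// classifyEmotion and classifyDistortions are injected so the sketch stays
// self-contained; in production they would call the hosted model endpoints.
async function analyzeMessage(text, classifyEmotion, classifyDistortions) {
  // Promise.all runs both analyses in parallel rather than sequentially
  const [emotion, distortions] = await Promise.all([
    classifyEmotion(text),
    classifyDistortions(text),
  ]);
  return { emotion, distortions };
}
```

Running the two classifiers concurrently keeps per-message latency close to that of the slower model rather than the sum of both.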
2. Cognitive Load Calculation
Converts multiple distortions into a single severity score (0-1 scale):
// Step 1: Weight each distortion
rawLoad = Σ(confidence × weight)
// Weights: catastrophizing(0.85), self-blame(0.75),
// overgeneralization(0.65), black_and_white(0.55), mind_reading(0.45)
// Step 2: Average and apply logarithmic scaling
avgLoad = rawLoad / distortionCount
countFactor = log₁₀(distortionCount + 1) / log₁₀(6)
cognitiveLoad = min(1.0, avgLoad × (1 + countFactor × 0.5))
Example:
- 2 distortions: overgeneralization (0.92 conf) + catastrophizing (0.88 conf)
- rawLoad = (0.92 × 0.65) + (0.88 × 0.85) = 1.346
- cognitiveLoad = 0.88 → 88% cognitive load
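The calculation above can be sketched in JavaScript, mirroring the pseudocode (the 0.5 default weight for an unlisted distortion is an assumption, not from the project):

```javascript
// Distortion weights as listed in the write-up.
const WEIGHTS = {
  catastrophizing: 0.85,
  self_blame: 0.75,
  overgeneralization: 0.65,
  black_and_white: 0.55,
  mind_reading: 0.45,
};

// distortions: array of { label, confidence }
function cognitiveLoad(distortions) {
  if (distortions.length === 0) return 0;
  // Step 1: weight each distortion by its classifier confidence
  // (0.5 is an assumed fallback weight for labels not in the table)
  const rawLoad = distortions.reduce(
    (sum, d) => sum + d.confidence * (WEIGHTS[d.label] ?? 0.5),
    0
  );
  // Step 2: average, then boost by the logarithmic count factor
  const avgLoad = rawLoad / distortions.length;
  const countFactor = Math.log10(distortions.length + 1) / Math.log10(6);
  return Math.min(1.0, avgLoad * (1 + countFactor * 0.5));
}
```

For the two-distortion example above this returns ≈0.879, i.e. the 88% cognitive load.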
3. Combined Severity Score (0-100)
Merges emotion and cognitive metrics:
// Step 1: Weighted combination (70% emotion, 30% cognitive)
combined = (emotionSeverity × 0.70) + (cognitiveLoad × 0.30)
// Step 2: Apply trend modifier
if (distortionTrend === 'increasing') combined *= 1.15
if (distortionTrend === 'decreasing') combined *= 0.92
// Step 3: Non-linear scaling for high severity (>70%)
if (combined > 0.7) combined = 0.7 + (combined - 0.7) × 1.2
// Step 4: Convert to 0-100 scale
combinedSeverity = min(100, round(combined × 100))
Example:
- Emotion: 65%, Cognitive: 88%, Trend: increasing
- Combined = (0.65 × 0.70) + (0.88 × 0.30) = 0.719
- With trend: 0.719 × 1.15 = 0.827
- With scaling: 0.7 + (0.127 × 1.2) = 0.852
- Final: 85/100
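The four steps can be sketched as one function (mirroring the pseudocode; not the project's exact source):

```javascript
// emotionSeverity and cognitiveLoad are both on a 0-1 scale.
function combinedSeverity(emotionSeverity, cognitiveLoad, distortionTrend) {
  // Step 1: weighted blend, 70% emotion / 30% cognitive
  let combined = emotionSeverity * 0.70 + cognitiveLoad * 0.30;
  // Step 2: trend modifier
  if (distortionTrend === 'increasing') combined *= 1.15;
  if (distortionTrend === 'decreasing') combined *= 0.92;
  // Step 3: amplify the high-severity region (> 0.7)
  if (combined > 0.7) combined = 0.7 + (combined - 0.7) * 1.2;
  // Step 4: convert to a 0-100 score
  return Math.min(100, Math.round(combined * 100));
}
```

For the worked example (emotion 0.65, cognitive 0.88, increasing trend) this returns 85, matching the final score above.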
4. Intervention Level
Determines response strategy based on severity:
| Level | Threshold | Strategy |
|---|---|---|
| Crisis | Crisis mode detected | Immediate safety resources |
| Intervene | ≥70 or (≥55 + increasing trend) | Active Socratic questioning |
| Guide | 40-69 | Gentle reframing + validation |
| Observe | 0-39 | Empathy-first, build rapport |
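The table above maps directly to a small dispatch function (a sketch; `crisisDetected` is assumed to come from the EMPATHY model's crisis label):

```javascript
// severity: 0-100 combined score; distortionTrend: 'increasing' | 'decreasing' | 'stable'
function interventionLevel(severity, distortionTrend, crisisDetected) {
  if (crisisDetected) return 'crisis';          // immediate safety resources
  if (severity >= 70 || (severity >= 55 && distortionTrend === 'increasing'))
    return 'intervene';                         // active Socratic questioning
  if (severity >= 40) return 'guide';           // gentle reframing + validation
  return 'observe';                             // empathy-first, build rapport
}
```

Note that the crisis check runs first, so a crisis signal overrides any numeric threshold.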
5. Dynamic System Prompt
Generates context-aware instructions for the LLM based on:
- Severity level
- Detected distortions (with specific guidance for each type)
- Distortion trend (increasing/decreasing/stable)
- Intervention level
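A hypothetical sketch of how these inputs could be assembled into a system prompt; the guidance strings are illustrative placeholders, not the project's actual wording:

```javascript
// Per-distortion guidance snippets (illustrative, not the real prompt text).
const DISTORTION_GUIDANCE = {
  catastrophizing: 'Gently question worst-case assumptions.',
  overgeneralization: "Point to counter-examples from the user's own words.",
  self_blame: 'Separate responsibility from self-worth.',
};

function buildSystemPrompt({ level, distortions, trend }) {
  const lines = [
    'You are EmPath, an empathetic, non-diagnostic companion.',
    `Intervention level: ${level}. Distortion trend: ${trend}.`,
  ];
  // Append specific guidance for each detected distortion type
  for (const d of distortions) {
    const tip = DISTORTION_GUIDANCE[d];
    if (tip) lines.push(`Detected ${d}: ${tip}`);
  }
  return lines.join('\n');
}
```

Rebuilding the prompt on every turn is what makes the system "dynamic": the same LLM receives different instructions as severity, distortions, and trend shift.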
6. Response Generation
- Primary LLM: Llama 3.3 70B Instruct
- Fallback: Llama 3.2 3B Instruct
- Parameters: max_tokens=300, temperature=0.8, top_p=0.92
- Uses last 10 messages for context
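A sketch of the request payload for this step: only the last 10 turns are kept, with the generation parameters listed above. The field names follow the common chat-completions shape and are assumptions, not the project's exact API call:

```javascript
// Build a chat-completion payload; pass useFallback = true to target the
// smaller fallback model when the primary is unavailable.
function buildPayload(systemPrompt, history, useFallback = false) {
  return {
    model: useFallback
      ? 'meta-llama/Llama-3.2-3B-Instruct'   // fallback
      : 'meta-llama/Llama-3.3-70B-Instruct', // primary
    messages: [
      { role: 'system', content: systemPrompt },
      ...history.slice(-10), // keep only the last 10 messages for context
    ],
    max_tokens: 300,
    temperature: 0.8,
    top_p: 0.92,
  };
}
```

Truncating to the last 10 messages bounds token usage per request, which matters under the free-credit limits mentioned at the top of this page.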
Flow Diagram
User Message
↓
[Emotion + Distortion Analysis]
↓
[Calculate Cognitive Load]
↓
[Calculate Combined Severity]
↓
[Determine Intervention Level]
↓
[Generate Dynamic System Prompt]
↓
[LLM Response Generation]
↓
Bot Response + Metrics
How I Built It / Features
EmPath is built on the meta-llama/Llama-3.3-70B-Instruct large language model (with Llama-3.2-3B-Instruct as a fallback), enhanced with two custom-trained models:
Empathy Model — a text classifier trained to recognize emotional tone and guide the LLM toward emotionally appropriate responses, from simple validation to supportive encouragement.
Emphasist Model — a fine-tuned DistilBERT model designed to detect cognitive distortions (e.g., catastrophizing, overgeneralization, self-blame) based on Cognitive Behavioral Therapy (CBT) principles. This enables EmPath to gently reframe negative thought patterns without being prescriptive or diagnostic.
Adaptive AI – Combining these three models creates an AI that adapts to the flow of the conversation and accommodates you accordingly. Guided by the Empathy and Emphasist models, the LLM adjusts its approach as the conversation develops: starting by observing, moving to guiding, and intervening if things get out of hand.
Empathy Model – Training Details
| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 | 1.0512 | 0.6480 |
| 2 | 0.2452 | 0.1822 |
| 3 | 0.0486 | 0.0720 |
| 4 | 0.0254 | 0.0489 |
| 5 | 0.0146 | 0.0317 |
| 6 | 0.0084 | 0.0317 |
- 📊 Final validation loss: 0.0317
- Check the HF model and space in the links below
Emphasist Model – Training Performance
| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 | 0.1200 | 0.0857 |
| 2 | 0.0322 | 0.0258 |
| 3 | 0.0165 | 0.0129 |
| 4 | 0.0335 | 0.0084 |
| 5 | 0.0079 | 0.0067 |
| 6 | 0.0066 | 0.0056 |
| 7 | 0.0311 | 0.0048 |
| 8 | 0.0523 | 0.0045 |
| 9 | 0.0051 | 0.0044 |
| 10 | 0.0278 | 0.0043 |
Final Validation Loss: 0.0043
- Check the HF model and space in the links below
The models were trained using Python, NumPy, Hugging Face Transformers, and Google Colab/Kaggle Notebook. Labeled text data enabled accurate emotional classification and cognitive distortion detection, which together guide empathetic response generation.
Frontend
- Next.js (React)
- JavaScript (JSX)
- Tailwind CSS
- Framer Motion
- Lucide Icons
Backend / AI
- Python
- Hugging Face Transformers
- Meta Llama (3.3 70B / 3.2 3B Instruct)
- DistilBERT
- NumPy
Database
- Supabase
- JSON
Model Training & Deployment
- Google Colab
- Kaggle Notebooks
- Hugging Face Hub
- Hugging Face Spaces
Hosting
- Vercel
Challenges We Ran Into
- Time constraints: I joined the hackathon just one week before the deadline.
- Free credits: I had only a few credits for all the AI components, so testing required precise use of tokens/credits.
- Computational limits: Training multiple models was resource-intensive and time-consuming.
- Hugging Face Space issues: The initial Space was mistakenly deployed as a chatbot instead of a dedicated model inference Space, causing delays and requiring a full rebuild.
Accomplishments I'm Proud Of
- Successfully trained and deployed the Empathy and Emphasist models under tight time constraints.
- Integrated the two custom models to make the LLM adaptive.
- Built a non-diagnostic AI companion focused on emotional support and understanding.
What I Learned
- Gained hands-on experience with machine learning pipelines and NLP model fine-tuning.
- Learned how emotional classification and cognitive distortion detection can meaningfully guide LLM behavior.
- Improved my understanding of responsible AI design for mental health–adjacent applications.
What’s Next for EmPath
- Enhance the reasoning and contextual awareness of EmPath for more nuanced responses.
- Expand and diversify the training dataset to cover broader emotional contexts.
- Explore voice-based interaction for a more natural and immersive support experience.