🧠 MindGuard - AI-Powered Mental Health Support

💡 Inspiration

The inspiration for MindGuard came from a simple but urgent realization: mental health doesn't follow a 9-to-5 schedule. While human therapy is invaluable, it's often inaccessible due to cost, scheduling, or immediate crisis needs. We saw the rise of AI chatbots but noticed a dangerous gap: safety and accountability. Most AI therapy apps are "black boxes" that could provide harmful advice without detection. We wanted to build a platform that wasn't just "smart," but safe by design. Our goal was to create "Therapy that never breaks—because the AI monitors itself."

🧠 What it does

MindGuard is a comprehensive AI-powered mental health platform that provides 24/7 therapeutic support through a sophisticated multi-agent system.

  • 5-Agent AI Orchestration: Specialized agents (Therapist, Crisis, Sentiment, Voice, and Coordinator) work in parallel to provide empathetic, safe, and context-aware responses.
  • Real-Time Crisis Detection: If a user expresses self-harm or emergency intent, the system detects it in less than 1 second, creates a P1 Incident, and immediately displays crisis resources like the 988 hotline.
  • Live Sentiment Analysis: The platform tracks the user's emotional trajectory in real time, visualizing mood shifts and alerting supervisors if a session takes a dark turn.
  • Natural Voice Interaction: Integrated with ElevenLabs to provide calming, natural-sounding voice responses for a more human-like connection.
  • Supervisor Dashboard: A dedicated space for human therapists to monitor active sessions, manage incidents, and step in when the AI detects a high-risk situation.

🛠️ How we built it

We built MindGuard using a modern, high-performance tech stack focused on real-time safety:

  • Frontend: Next.js 16 (React 19) with TypeScript for a fast, type-safe user experience.
  • AI Core: Google Gemini 1.5 Pro and Flash orchestrated via a custom multi-agent coordinator.
  • Voice: ElevenLabs API for high-fidelity voice synthesis.
  • Observability: Datadog integration for real-time safety monitoring, APM traces, and incident management.
  • Backend Infrastructure: A dual-deployment setup with the Next.js frontend on Vercel and a specialized Python/FastAPI emotion detection service on Railway.
  • Real-Time State: Socket.IO for WebSocket communication, ensuring that sentiment and crisis alerts reach the dashboard instantly.
  • Database: MongoDB Atlas for persisting session history and incident logs.
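
To make the live-sentiment piece of this stack concrete, here is a rolling-window tracker that flags a "dark turn" when the recent average sentiment drops below a threshold. The class name, window size, and threshold are illustrative assumptions, not the production implementation.

```python
from collections import deque

class SentimentTracker:
    """Keeps a rolling window of per-message sentiment scores (-1..1)
    and flags a 'dark turn' when the recent average falls below a threshold."""

    def __init__(self, window: int = 5, alert_threshold: float = -0.5):
        self.scores = deque(maxlen=window)  # old scores age out automatically
        self.alert_threshold = alert_threshold

    def add(self, score: float) -> bool:
        """Record a score; return True if a supervisor alert should fire."""
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        return avg < self.alert_threshold
```

An alert computed this way is a small boolean payload, which is what makes pushing it over a WebSocket to the dashboard effectively instantaneous.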

🚧 Challenges we ran into

  • Multi-Agent Latency: Orchestrating 5 different AI agents in parallel while keeping response times under 3 seconds was a major challenge. We solved this by implementing an asynchronous coordination layer that processes safety and sentiment checks simultaneously with response generation.
  • Docker Image Sizing: Our backend AI service initially ballooned to over 8GB due to heavy AI frameworks. We had to aggressively optimize our Dockerfile, switching to CPU-only versions of libraries and using multi-stage builds to fit within Railway's 4GB limit.
  • Real-Time Synchronization: Synchronizing the state of a "live" session between the user's chat, the AI agents, and the therapist's dashboard required a robust WebSocket implementation to prevent race conditions and data loss.
  • Safety Guardrails: Defining the "line" for crisis detection to minimize false positives while ensuring 100% recall for genuine emergencies required multiple iterations of our keyword filters and NLP models.
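
The asynchronous coordination layer from the first bullet can be sketched with `asyncio.gather`, which lets safety and sentiment checks run concurrently with response generation instead of sequentially. The agent stubs and merge logic below are hypothetical stand-ins for the Gemini-backed agents.

```python
import asyncio

# Stub agents -- in MindGuard these would call Gemini; here they sleep
# briefly to simulate model latency and return placeholder results.
async def therapist_agent(msg: str) -> str:
    await asyncio.sleep(0.05)
    return f"Empathetic reply to: {msg}"

async def crisis_agent(msg: str) -> bool:
    await asyncio.sleep(0.01)
    return "end it all" in msg.lower()

async def sentiment_agent(msg: str) -> float:
    await asyncio.sleep(0.01)
    return -0.7 if "hopeless" in msg.lower() else 0.1

async def coordinate(msg: str) -> dict:
    """Run response generation and safety checks concurrently, then merge."""
    reply, is_crisis, sentiment = await asyncio.gather(
        therapist_agent(msg), crisis_agent(msg), sentiment_agent(msg)
    )
    if is_crisis:
        reply = "Please reach out to the 988 lifeline. " + reply
    return {"reply": reply, "crisis": is_crisis, "sentiment": sentiment}
```

Because the slowest call dominates total latency under `gather`, adding fast safety checks alongside the therapist response costs almost nothing, which is how the overall budget stays under 3 seconds.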

🏅 Accomplishments that we're proud of

  • <1s Crisis Detection: We achieved sub-second latency for detecting crisis language, ensuring immediate intervention when it matters most.
  • Production-Ready Deployment: Successfully deploying a complex multi-agent system with real-time monitoring and voice synthesis to a public URL.
  • 100% Test Coverage: Our automated test suites verify everything from session creation to multi-turn sentiment tracking, ensuring the system remains stable.
  • Harmonious UI/UX: Creating a "Stone 950" enterprise-grade aesthetic that feels premium, calming, and focused on the user's well-being.

📚 What we learned

  • AI Safety as a Systems Problem: We learned that AI safety isn't just about the prompt—it's about the infrastructure around it. Observability (logging, metrics, traces) is what makes an AI system trustworthy.
  • Orchestration > Single Model: A single large model can't do everything perfectly. Breaking tasks down into specialized agents (one for empathy, one for safety, one for sentiment) yields much better results than one "god model."
  • The Value of Multi-Cloud: Using Vercel for the frontend and Railway for the heavy-lifting AI service allowed us to leverage the best of both worlds for scale and performance.

🚀 What's next for MindGuard

  • Mobile Native Apps: Developing iOS and Android versions to provide "tap-to-talk" therapy on the go.
  • Multilingual Support: Expanding our agents to support 30+ languages to reach underserved communities globally.
  • EHR Integration: Connecting with Electronic Health Records so human therapists can see AI session summaries as part of their patients' clinical history.
  • Predictive Analytics: Using long-term sentiment trends to predict potential mental health "dips" before they happen, allowing for proactive outreach.
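
One way the predictive-analytics idea could work is fitting a least-squares trend line over recent daily sentiment averages and flagging proactive outreach when the slope turns sharply negative. The function and threshold below are a hypothetical sketch, not a committed design.

```python
def predict_dip(daily_scores: list[float], slope_threshold: float = -0.1) -> bool:
    """Fit a least-squares line to daily sentiment averages (-1..1) and
    return True if the trend slopes downward past the threshold."""
    n = len(daily_scores)
    if n < 2:
        return False  # not enough history to estimate a trend
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(daily_scores) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_scores))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope < slope_threshold
```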
