MindGuard: Your Emotional Safety & Cognitive Clarity Assistant

Inspiration

The inspiration for MindGuard came from recognizing how often we dismiss subtle emotional manipulation and invisible mental fatigue until they become overwhelming. Many people find themselves asking questions like "Why do I feel so mentally tired lately?" or "Is this gaslighting, or am I just sensitive?" but lack the tools to objectively analyze these patterns. We realized that while general AI assistants exist, there wasn't a specialized system focused on emotional safety and cognitive clarity that could track patterns over time and provide empowering, boundary-setting guidance. The goal was to create a "personal emotional safety mirror" that helps users detect red flags before they snowball into serious mental health issues.

What it does

MindGuard is a multi-agent emotional intelligence system that protects users from emotional manipulation, relationship coercion, and invisible cognitive overload. It analyzes journal entries, chat logs, and personal reflections to detect manipulative language patterns, track mental fatigue indicators, and identify recurring emotional red flags. The system provides real-time insights about emotional state, flags concerning behavioral patterns, and offers practical boundary-setting responses. Unlike general AI assistants, MindGuard specializes in emotional safety, tracks repeated behaviors over time, and gives gentle, empowering feedback specifically designed to help users reclaim their mental clarity and establish healthy boundaries.
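The manipulative-phrase detection can be sketched with a small rule-based classifier of the kind listed under "Built With". The categories and regex patterns below are illustrative placeholders, not MindGuard's actual rule set:

```python
import re

# Hypothetical phrase patterns; the real rule set is larger and tuned
# to reduce false positives.
MANIPULATION_PATTERNS = {
    "gaslighting": [r"you're (just )?imagining (it|things)", r"that never happened"],
    "guilt-tripping": [r"after (all|everything) i('ve)? done for you"],
    "minimizing": [r"you're (being )?too sensitive", r"it was just a joke"],
}

def flag_manipulative_phrases(text: str) -> list[tuple[str, str]]:
    """Return (category, matched_phrase) pairs found in the text."""
    flags = []
    lowered = text.lower()
    for category, patterns in MANIPULATION_PATTERNS.items():
        for pattern in patterns:
            match = re.search(pattern, lowered)
            if match:
                flags.append((category, match.group(0)))
    return flags

entry = "You're being too sensitive. That never happened."
print(flag_manipulative_phrases(entry))  # flags both a gaslighting and a minimizing cue
```

Pattern matching alone only surfaces candidate red flags; the agents described below layer context and repetition tracking on top of it.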

How we built it

We built MindGuard using Google's Agent Development Kit (ADK) with Gemini 1.5 Flash as the LLM backend, creating a modular multi-agent architecture in Python 3.10+. The system consists of five specialized agents: a Cognitive Load Agent that detects mental fatigue from task descriptions, a Message Analyzer Agent that flags emotionally manipulative phrases, a Pattern Detector Agent that tracks repeating behaviors over time, an Insight Agent that summarizes overall emotional state, and a Boundary Coach Agent that offers healthy response templates. Each agent operates independently but shares its insights so the system can deliver a comprehensive emotional analysis. MindGuard runs from the command line (with ADK's dev UI available during development), processes data in memory for privacy, and can be extended with NLP tools such as spaCy for deeper analysis.
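Since the actual ADK wiring is project-specific, here is a minimal plain-Python sketch of the coordination idea: independent agents analyze the same entry, and an Insight-style aggregator merges their findings. The agent names mirror the architecture above, but the logic is simplified rule matching, not the Gemini-backed agents:

```python
from dataclasses import dataclass

@dataclass
class Insight:
    agent: str
    finding: str

class CognitiveLoadAgent:
    """Flags fatigue cues in an entry (illustrative word list)."""
    FATIGUE_WORDS = ["exhausted", "drained", "overwhelmed", "burned out"]

    def analyze(self, text: str) -> list[Insight]:
        lowered = text.lower()
        return [Insight("cognitive_load", f"fatigue cue: {w}")
                for w in self.FATIGUE_WORDS if w in lowered]

class MessageAnalyzerAgent:
    """Flags manipulative phrases (illustrative phrase list)."""
    RED_FLAGS = ["you're overreacting", "that never happened"]

    def analyze(self, text: str) -> list[Insight]:
        lowered = text.lower()
        return [Insight("message_analyzer", f"manipulative phrase: {p}")
                for p in self.RED_FLAGS if p in lowered]

class InsightAgent:
    """Aggregates the other agents' findings into one summary."""
    def summarize(self, insights: list[Insight]) -> str:
        if not insights:
            return "No red flags detected in this entry."
        return f"{len(insights)} signal(s) found: " + "; ".join(i.finding for i in insights)

def run_pipeline(entry: str) -> str:
    agents = [CognitiveLoadAgent(), MessageAnalyzerAgent()]
    insights = [i for a in agents for i in a.analyze(entry)]
    return InsightAgent().summarize(insights)

print(run_pipeline("I'm exhausted. He keeps saying that never happened."))
```

In the real system each of these roles is an ADK agent backed by Gemini 1.5 Flash rather than a hard-coded word list, but the fan-out/aggregate shape is the same.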

Challenges we ran into

One of the biggest challenges was calibrating the sensitivity of manipulation detection to avoid false positives while still catching subtle emotional abuse patterns. We had to carefully balance being helpful without being overly diagnostic or replacing professional mental health support. Designing the multi-agent coordination was also complex: each agent had to contribute meaningfully without producing conflicting or overwhelming feedback. Privacy concerns required us to architect the system to run locally without cloud storage, which ruled out some advanced features we had initially planned. We also faced challenges in creating response templates that felt authentic and empowering rather than scripted or condescending, which required extensive testing of tone and language.
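The sensitivity-calibration trade-off can be illustrated with a weighted-cue sketch: each matched cue carries a weight, and an entry is only flagged when the combined score clears a tunable threshold, so weak cues in isolation don't trigger false positives. The cues, weights, and threshold here are illustrative, not the shipped values:

```python
# Hypothetical cue weights: strong cues can flag alone,
# weak cues only flag in combination.
CUE_WEIGHTS = {
    "that never happened": 0.9,   # strong gaslighting cue
    "you're too sensitive": 0.6,  # ambiguous without context
    "calm down": 0.2,             # weak on its own
}
FLAG_THRESHOLD = 0.8

def manipulation_score(text: str) -> float:
    lowered = text.lower()
    return sum(w for cue, w in CUE_WEIGHTS.items() if cue in lowered)

def should_flag(text: str) -> bool:
    return manipulation_score(text) >= FLAG_THRESHOLD

print(should_flag("Just calm down."))                  # False: weak cue alone
print(should_flag("Calm down, that never happened."))  # True: cues combine to 1.1
```

Raising `FLAG_THRESHOLD` trades recall for precision; much of our calibration work amounted to tuning this kind of balance against real-sounding test entries.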

Accomplishments that we're proud of

We successfully created a working multi-agent system that can accurately detect emotional manipulation patterns and cognitive overload indicators in real time. The modular architecture lets users customize which agents are active, respecting different comfort levels and needs. We achieved our privacy-first goal by keeping all analysis local and in-memory. The Boundary Coach Agent generates contextual, empowering response suggestions that testers found genuinely helpful. Most importantly, we created a system that doesn't just identify problems but provides actionable guidance for improving emotional well-being, filling a genuine gap in available mental health tools.

What we learned

We learned that emotional intelligence AI requires a fundamentally different approach than general-purpose assistants, with a specialized focus on empathy, privacy, and empowerment rather than just information delivery. The importance of pattern recognition over time became clear: single incidents might be ambiguous, but recurring patterns reveal concerning dynamics. We discovered that users needed both validation ("your feelings are valid") and practical tools ("here's how to respond"), not just analysis. The project taught us about the delicate balance between being helpful and being appropriately cautious about mental health recommendations, and how crucial user agency is in tools designed for emotional well-being.
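The single-incident-versus-recurring-pattern lesson can be made concrete with a sketch of the Pattern Detector idea: one flagged phrase in one entry stays ambiguous, but the same category recurring across entries is surfaced as a pattern. The cue lists and the recurrence threshold are illustrative:

```python
from collections import Counter

# Hypothetical category cues; the real detector uses the other
# agents' richer output rather than substring matches.
CATEGORY_CUES = {
    "gaslighting": ["that never happened", "you're imagining things"],
    "minimizing": ["it was just a joke", "you're too sensitive"],
}
RECURRENCE_THRESHOLD = 3  # flag a category seen in 3+ entries

def categories_in(entry: str) -> set[str]:
    lowered = entry.lower()
    return {cat for cat, cues in CATEGORY_CUES.items()
            if any(cue in lowered for cue in cues)}

def recurring_patterns(entries: list[str]) -> list[str]:
    counts = Counter(cat for e in entries for cat in categories_in(e))
    return [cat for cat, n in counts.items() if n >= RECURRENCE_THRESHOLD]

journal = [
    "He told me that never happened.",
    "Apparently it was just a joke.",
    "Again: you're imagining things.",
    "He insisted that never happened.",
]
print(recurring_patterns(journal))  # → ['gaslighting']
```

Here "minimizing" appears once and stays below the threshold, while "gaslighting" recurs across three entries and is surfaced, which mirrors how repetition, not any single message, drives MindGuard's stronger warnings.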

What's next for MindGuard

Our roadmap includes developing a real-time Chrome extension for chat applications, enabling immediate emotional safety feedback during online conversations. We plan to implement persistent memory storage with visual dashboards to show emotional patterns over time, helping users track their progress in setting boundaries. A fine-tuned BERT model specifically trained on relationship abuse detection would improve accuracy. Voice journaling capabilities with streaming cognitive state prediction would make the tool more accessible and immediate. We're also exploring integration with mental health professionals, allowing users to share anonymized pattern reports with therapists, and expanding the agent system to include specialized modules for workplace emotional dynamics and family relationship patterns.

Built With

  • adk-web
  • agent-development-kit
  • cli
  • dev-ui
  • firestore
  • gemini-1.5-flash
  • google-adk
  • google-ai-studio
  • in-memory-storage
  • llm
  • local-storage
  • multi-agent-architecture
  • nlp-tools
  • python-3.10+
  • rule-based-classifiers
  • spacy
  • sqlite
  • zero-shot-labeling