ClearMind AI: Mental Health Check-In Bot Project Background
Inspiration
ClearMind was born from a deeply personal place. As university students, we've witnessed firsthand how the pressures of academic life, social expectations, and now post-pandemic isolation have affected our peers' mental wellbeing. Many students struggle silently, with limited access to mental health resources due to long waitlists, financial constraints, or stigma around seeking help.
We recognized the need for an accessible "first touchpoint" - a non-judgmental space where people could express their feelings and receive immediate support, even if just as a bridge to professional care.
Our goal was not to replace therapists, but to create a compassionate digital companion that could listen, respond empathetically, and guide users toward healthy coping strategies or professional resources when needed.
What it does
ClearMind is an AI-powered conversational agent that provides supportive, empathetic responses to users discussing their mental health challenges. It serves as a safe space for emotional expression without fear of judgment.
Key functions include:
- Engaging in natural conversation with users about their feelings
- Maintaining memory of previous exchanges to provide contextual support
- Responding with evidence-based coping strategies and validation
- Recognizing potentially serious situations and gently suggesting professional resources
- Providing a consistent presence when human support isn't immediately available
Our project fits perfectly within the Health & Wellness track, addressing the critical need for accessible mental health support tools. It also incorporates elements of the Education track, as it helps users develop emotional awareness and healthy coping mechanisms.
How we built it
Our development process began with extensive research into conversational AI for mental health and best practices for supportive communication. We consulted with psychology students to ensure our approach was ethical and evidence-based.
We divided the work into three main components:
- Backend Development: Creating the conversational AI using LangChain and Google's Gemini model
- Frontend Interface: Building a clean, accessible Streamlit interface
- Prompt Engineering: Crafting a thoughtful system prompt to guide the AI toward appropriate, empathetic responses
We prioritized functionality in stages, first ensuring the core conversation system worked, then adding memory capabilities to maintain context, and finally refining the bot's responses through prompt engineering.
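The staged flow above can be sketched in plain Python. This is a stdlib-only illustration of the structure, not our production code: the real app routes the call through LangChain to Google's Gemini model, so `call_model` here is a stub, and the system prompt text is an abridged stand-in for our actual prompt.

```python
# Minimal sketch of the conversation flow: system prompt + prior turns +
# new message are assembled each turn, and both sides of the exchange are
# appended to the history so later turns have context.

SYSTEM_PROMPT = (
    "You are ClearMind, a compassionate listener. Respond with empathy, "
    "suggest evidence-based coping strategies, and recommend professional "
    "resources when a conversation indicates serious distress. You are not "
    "a therapist and must say so when asked for diagnosis or treatment."
)

def call_model(messages):
    """Stub for the LLM call (Gemini via LangChain in the real app)."""
    last_user = messages[-1]["content"]
    return f"I hear you. Thank you for sharing that with me."

def chat_turn(history, user_message):
    """Assemble the full prompt and record the exchange."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    reply = call_model(messages)
    # Persist both sides of the turn so future turns keep context.
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "I'm stressed about exams")
chat_turn(history, "I can't sleep either")
```

Keeping the system prompt separate from the growing history is what let us iterate on prompt engineering in the final stage without touching the conversation plumbing.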
Challenges we ran into
One significant challenge was implementing persistent memory in the chat system. Initially, our bot would "forget" previous interactions, making conversations feel disjointed. We explored multiple approaches with LangChain's memory systems before finding a solution that worked with Streamlit's session management.
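The pattern we converged on looks roughly like the following stdlib-only sketch. Here a plain dict stands in for Streamlit's `st.session_state` (which survives reruns) and a small class stands in for LangChain's chat history object; the key idea is that the session-history factory must return the *same* object for a given session id on every rerun.

```python
# Session-scoped memory pattern. Our original bug was the factory creating
# a fresh, empty history on every Streamlit rerun, so the bot "forgot"
# earlier turns; storing histories in session state fixed it.

class MessageHistory:
    """Stand-in for a LangChain chat message history."""
    def __init__(self):
        self.messages = []

    def add(self, role, content):
        self.messages.append((role, content))

session_state = {}  # plays the role of st.session_state in the real app

def get_session_history(session_id):
    """Factory handed to the memory wrapper: create a history once per
    session, then always hand back that same object."""
    key = f"history_{session_id}"
    if key not in session_state:
        session_state[key] = MessageHistory()
    return session_state[key]

# Two separate lookups of the same session see one continuous history.
get_session_history("user-1").add("user", "I'm feeling overwhelmed")
get_session_history("user-1").add("assistant", "That sounds really hard.")
```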
Balancing empathy with responsibility was another challenge. We needed the bot to be supportive without overstepping boundaries into pseudo-therapy or missing signs of serious distress. This required careful prompt engineering and multiple rounds of testing with various emotional scenarios.
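As an illustration of the kind of behavior our testing scenarios checked for, a deterministic screen like the one below could serve as a backstop. This is not the project's actual mechanism (the bot relies on the system prompt and the model's judgment); the keyword list and wording here are hypothetical examples.

```python
# Illustrative safety backstop: flag messages containing crisis indicators
# and prepend a suggestion to seek professional resources.

CRISIS_TERMS = {"hurt myself", "end it all", "suicide", "self-harm"}

RESOURCE_MESSAGE = (
    "It sounds like you're going through something very serious. "
    "Please consider reaching out to a crisis line or a counselor; "
    "you deserve support from a trained professional."
)

def needs_escalation(message):
    """Return True if the message contains a crisis indicator."""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)

def respond(user_message, model_reply):
    """Prepend the resource suggestion when escalation is warranted,
    otherwise pass the model's empathetic reply through unchanged."""
    if needs_escalation(user_message):
        return RESOURCE_MESSAGE + "\n\n" + model_reply
    return model_reply
```

A deterministic check like this complements, rather than replaces, prompt-level safeguards: it cannot understand context, but it also cannot be talked out of escalating.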
Technical integration between LangChain, Gemini, and Streamlit presented unexpected compatibility issues that required creative problem-solving and research into documentation.
Accomplishments that we're proud of
We're particularly proud of creating a conversational agent that feels genuinely empathetic rather than clinical or mechanical. Users testing our prototype consistently reported feeling "heard" and supported by the responses.
The successful implementation of conversation memory was a technical achievement we celebrated, as it significantly improved the quality of interactions.
We're also proud of the ethical framework we established for the bot's responses, ensuring it knows its limitations and can appropriately guide users toward professional help when needed.
What we learned
Through this project, we gained valuable insights into:
- The complexities of prompt engineering for emotionally sensitive AI applications
- Technical implementation of LangChain's conversation and memory systems
- The importance of ethical considerations in mental health technology
- Building user-friendly interfaces for vulnerable populations
- The potential and limitations of AI in mental health support
We also learned the importance of interdisciplinary collaboration, combining technical expertise with psychological insights to create a more effective and responsible tool.
What's next for ClearMind
Our immediate plans include expanding the bot's capabilities to offer guided mindfulness exercises and mood tracking functionality. We envision creating a comprehensive mental wellness companion that not only responds to distress but proactively supports emotional wellbeing.
Longer-term goals include:
- Developing a mobile application for greater accessibility
- Adding optional anonymous community features where users can share experiences
- Implementing more sophisticated emotional intelligence through additional training
- Creating partnerships with university counseling centers as a complementary resource
- Establishing robust safety protocols for crisis detection and response
The scalability of our solution is promising - with minimal infrastructure costs, we could potentially reach thousands of students who might otherwise have no mental health support.
Built With
- LangChain: Framework for developing context-aware AI applications
- Google Gemini: Large language model powering conversations
- Streamlit: Web application framework for the user interface
- Python: Core programming language
- RunnableWithMessageHistory: LangChain memory wrapper for context retention