Inspiration

The growing mental health crisis and the stigma associated with seeking help inspired me to create MindWave. I realized that many individuals avoid traditional mental health assessments because they feel too clinical or invasive. By leveraging Generative AI, I wanted to create a more engaging and comfortable way for people to share their mental health status, making it easier to detect potential issues early.

What it does

MindWave offers users an interactive and conversational experience through GenAI. The AI engages users in meaningful conversations, gently collecting information that could indicate their mental health status. The data collected is processed using traditional machine learning models, which predict the user's mental state based on patterns and insights extracted from the interaction.

Current Features:

  • Personality Test using the OCEAN model, providing insights into openness, conscientiousness, extraversion, agreeableness, and neuroticism.
  • Mental Health Check powered by traditional ML models trained with standard mental health datasets.
  • "Let's Talk" session, where users can discuss anything on their mind, including reflections on previous sessions.

How I built it

This MVP was built with Streamlit, MongoDB (including Atlas Vector Search), LangChain, Scikit-Learn, and OpenAI.

Mental Health Check

Data was sourced from Kaggle and used to train multiple ML models; the Random Forest model achieved the highest accuracy (90%) and was saved for prediction. For a more conversational data-collection process, OpenAI and LangChain were used in place of a traditional form. The collected data is then formatted, and a report is generated based on the model's prediction.
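The training-and-save step described above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: the synthetic features stand in for the Kaggle dataset, and the `mental_health_rf.joblib` path is an assumed artifact name.

```python
# Sketch of model selection and persistence, using synthetic data in place of
# the Kaggle mental-health dataset. Feature/label construction is illustrative.
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))             # stand-in for survey features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for mental-health labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {acc:.2f}")

# Persist the best model so the Streamlit app can load it at prediction time.
joblib.dump(model, "mental_health_rf.joblib")
```

In the app, the saved model is loaded once and reused to score the data collected during conversation.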

Personality Test

For the personality test, I adopted the widely accepted OCEAN model. Using LangChain and OpenAI, I extract relevant user information to evaluate their traits on each dimension of the OCEAN model. Once collected, a personalized report is generated for the user.

"Let's Talk" Session

The "Let's Talk" session is powered by a Retrieval-Augmented Generation (RAG) implementation with MongoDB Atlas Vector Search. Here, previous session reports are stored as embeddings, giving the model context to refer back to user history. This allows users to freely discuss their feelings and ask about previous sessions, making the interaction more relevant and supportive.
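Conceptually, the retrieval step works like this: embed the user's question, compare it against the stored report embeddings, and hand the closest reports to the model as context. The sketch below illustrates that idea in pure Python with a toy deterministic embedding; the real app uses OpenAI embeddings and MongoDB Atlas Vector Search rather than this stand-in.

```python
# Conceptual sketch of RAG retrieval: rank stored report embeddings by cosine
# similarity to the query embedding. embed() is a toy stand-in for a real
# embedding model; Atlas Vector Search performs this lookup in production.
import math

def embed(text: str, dim: int = 16) -> list:
    """Toy deterministic embedding: hash character bigrams into a unit vector."""
    vec = [0.0] * dim
    for a, b in zip(text.lower(), text.lower()[1:]):
        vec[(ord(a) * 31 + ord(b)) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v))

reports = [
    "Personality report: high openness, moderate extraversion",
    "Mental health check: mild stress indicators, good sleep",
]
store = [(r, embed(r)) for r in reports]  # stand-in for the vector collection

def retrieve(query: str, k: int = 1):
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

context = retrieve("how was my last personality session?")
print(context)
```

The retrieved reports are then prepended to the conversation so the model can answer questions about previous sessions.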

User History

The user history feature enables users to review past reports and conversations, leveraging MongoDB document retrieval to store and present data from previous sessions.
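The lookup behind this feature can be sketched as a simple filter-and-sort query. The field names (`user_id`, `created_at`) and collection layout below are assumptions for illustration, and the query is built as plain data so it can be shown without a live MongoDB instance.

```python
# Sketch of the history lookup, assuming one document per session report with
# "user_id" and "created_at" fields (assumed names, not the app's schema).
def build_history_query(user_id: str, limit: int = 10):
    """Return (filter, sort, limit) for fetching a user's most recent reports."""
    query_filter = {"user_id": user_id}
    sort_spec = [("created_at", -1)]  # newest first, as with pymongo.DESCENDING
    return query_filter, sort_spec, limit

# With a live connection this would be used roughly as:
#   f, s, n = build_history_query("user-123")
#   reports = client.mindwave.reports.find(f).sort(s).limit(n)

f, s, n = build_history_query("user-123", limit=5)
print(f, s, n)
```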

Additionally, all user data, including authentication details and message history, is securely stored and managed in MongoDB.

Challenges Encountered

One primary challenge involved ensuring that the AI could gather relevant data without appearing overly invasive or clinical. Balancing sensitivity and accuracy in the machine learning models required substantial fine-tuning to prevent both overgeneralization and underestimation of a user’s mental health indicators.

Another significant challenge was managing user-specific embeddings to maintain unique, personalized interactions. This was fairly easy with ChromaDB, which allows unique embeddings to be saved locally; MongoDB Atlas Vector Search, however, operates on an entire collection, making it difficult to isolate individual embeddings. An initial approach was to create a separate collection per user, which would have allowed distinct embedding retrieval, but MongoDB's M0 cluster limitations (only four indexes per collection) ruled this out.

The final solution involved dynamically generating embeddings on each user interaction. For each session, the user's historical reports were used to create a vector store and retriever specific to that session. Once a response was provided, the user's documents were removed from the collection and the vector store was reset, ensuring that embeddings remained unique to each user and were efficiently retrievable. This approach allowed for personalized interactions within the capabilities of MongoDB Atlas Vector Search.
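The per-session lifecycle described above can be sketched as a build-query-teardown pattern. The `SessionVectorStore` class and its word-overlap "similarity" are illustrative stand-ins for the shared Atlas collection and true vector search; only the lifecycle (create per interaction, always clean up) mirrors the approach.

```python
# Sketch of the per-session workaround: build a throwaway vector store from
# the current user's reports, answer, then tear it down so the shared
# collection never mixes users. Storage and scoring here are stand-ins.
class SessionVectorStore:
    def __init__(self, user_reports: list):
        # In the real app: insert this user's report embeddings into the
        # shared Atlas collection and build a retriever over them.
        self.docs = list(user_reports)

    def retrieve(self, query: str, k: int = 2) -> list:
        # Stand-in for vector similarity search: rank by shared words.
        qwords = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(qwords & set(d.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def teardown(self):
        # In the real app: delete this user's documents from the collection
        # and reset the vector store so embeddings stay user-unique.
        self.docs.clear()

def answer_with_history(user_reports, question):
    store = SessionVectorStore(user_reports)   # created per interaction
    try:
        context = store.retrieve(question)
        return f"(answer grounded in {len(context)} past reports)"
    finally:
        store.teardown()                       # always cleaned up afterwards

print(answer_with_history(["report one about stress", "report two"], "stress"))
```

The `try/finally` mirrors the guarantee that the user's documents are removed even if response generation fails.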

Accomplishments that I'm proud of

I'm proud of creating a solution that has the potential to democratize access to mental health assessments. MindWave engages users in a way that feels natural, reducing the discomfort often associated with mental health checks. Successfully integrating GenAI with traditional machine learning in this context was a significant achievement.

Additionally, this was my first time using Atlas Vector Search, which proved to be a valuable experience: ChromaDB had previously been my go-to for embeddings, though MongoDB is my primary choice for databases.

What I learned

I learned a lot about the intersection of AI and mental health. Developing this project broadened my understanding of how AI can be used responsibly to support mental well-being, and I gained insights into building conversational AI that balances engagement with sensitivity.

Additionally, I got to learn about Atlas Vector Search and its utility for storing embeddings and retrieving them in a conversational AI setting.

What's next for MindWave

Moving forward, I plan to refine the AI's conversational abilities, making it even more adept at detecting subtle mental health signals. Expanding the ML models to cover a broader range of conditions is also on the roadmap. Additionally, I am exploring partnerships with mental health professionals to validate predictions and further improve the platform's accuracy.
