Inspiration

As a team, we were inspired by how people naturally talk about stress and emotional overload—often anonymously, casually, and without the intention of seeking a diagnosis. We noticed that most existing mental health tools either feel too clinical or too unstructured. This gap motivated us to build something that simply listens, helps users reframe their thoughts, and offers gentle guidance without labels, scores, or judgment.


What it does

SahayakAI is an anonymous, privacy-first conversational mental wellness companion. It listens to users’ day-to-day concerns, helps them reflect and reframe their thoughts, and provides personalized, non-clinical coping suggestions such as journaling prompts, grounding techniques, or focus tools. The system avoids diagnosis and medical advice, prioritizing emotional clarity and user agency.


How we built it

We built SahayakAI using a structured conversational architecture that adapts to the user rather than following a fixed script. A large language model powers reasoning and response generation, constrained by carefully designed system instructions to ensure safety, neutrality, and ethical boundaries. The frontend MVP is intentionally minimal, focusing on psychological safety, anonymity, and ease of use. Personalization emerges from conversational context instead of stored profiles.
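The constraint pattern described above can be sketched in a few lines. This is an illustrative assumption, not SahayakAI's actual code: the system prompt text, the `build_messages` helper, and the keyword-based `violates_boundary` check are hypothetical stand-ins for the project's real safety instructions and guardrails.

```python
# Hypothetical sketch of a system-prompt-constrained conversation turn.
# All prompt text and function names here are illustrative, not the
# project's actual implementation.

SYSTEM_PROMPT = (
    "You are a supportive, non-clinical wellness companion. "
    "Listen, reflect the user's feelings back, and suggest gentle "
    "coping techniques. Never diagnose, prescribe, or give medical advice."
)

# Terms that would pull a reply into clinical territory. A production
# system would use a stronger classifier; a keyword pass shows the idea.
CLINICAL_TERMS = {"diagnosis", "prescribe", "medication", "disorder"}

def build_messages(history, user_turn):
    """Prepend the constraining system prompt to every request.
    Personalization comes only from in-session history, never a
    stored profile, matching the privacy-first design."""
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + history
            + [{"role": "user", "content": user_turn}])

def violates_boundary(reply):
    """Flag model output that drifts into clinical language so it
    can be regenerated or softened before reaching the user."""
    lowered = reply.lower()
    return any(term in lowered for term in CLINICAL_TERMS)
```

A caller would pass `build_messages(...)` to the LLM API and run `violates_boundary` on each draft reply, regenerating any turn that trips the check.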


Challenges we ran into

One of the biggest challenges was ensuring the system remained empathetic without crossing into clinical or therapeutic territory. Balancing open-ended conversation with enough structure to be genuinely helpful required multiple iterations. Designing for privacy—while still delivering meaningful personalization—also pushed us to rethink conventional product patterns.


Accomplishments that we're proud of

We successfully built a conversational system that feels supportive without being intrusive or diagnostic. We are particularly proud of creating a privacy-first design that limits data collection while still delivering contextual, relevant responses. Establishing clear ethical boundaries within an AI-driven emotional support system was a major achievement for our team.


What we learned

We learned that effective emotional support is more about listening and reframing than providing solutions. Small design decisions—tone, pacing, and constraint—have a significant impact on user trust. We also learned that building responsible AI systems requires saying no to features that could compromise safety or agency.


What's next for SahayakAI

Next, we plan to refine conversational depth, improve long-term contextual memory with explicit user control, and expand coping tools while maintaining our non-clinical stance. We also aim to conduct broader user testing to better understand how people interact with supportive AI over time and continue improving SahayakAI as a safe, ethical mental wellness companion.
