Inspiration
This project was inspired by the urgent need for accessible and safe mental health support. Many people hesitate to seek professional help due to stigma, cost, or lack of access. Advances in neural networks and on-device AI offered an opportunity to build a supportive system that combines empathy, personalization, and safety, while protecting user privacy. Our goal is to extend access to mental health support in communities where it is scarce.
What we Learned
Working on this project taught us how to blend:
- Machine Learning (ML) for mood and risk classification.
- Conversational design for empathetic interactions.
- Privacy-first engineering, including on-device inference and encryption.
- The importance of clinical input in AI systems that deal with sensitive topics.
We also deepened our understanding of explainable AI and how to use techniques like attention heatmaps and SHAP/LIME for transparency.
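The core idea behind perturbation-based explanations like LIME can be shown in a few lines: remove one token at a time and see how the score moves. This is a simplified sketch with a toy scoring function, not our production explainer; the word list and function names are illustrative.

```python
# Toy perturbation-based attribution: ablate each token and measure
# how much a (mock) risk score drops. Inspired by LIME-style methods.

NEGATIVE_WORDS = {"hopeless", "alone", "worthless"}  # illustrative only

def risk_score(tokens):
    """Stand-in classifier: fraction of tokens flagged as negative."""
    if not tokens:
        return 0.0
    return sum(t in NEGATIVE_WORDS for t in tokens) / len(tokens)

def token_importance(text):
    """Attribute importance to each token by removing it and re-scoring."""
    tokens = text.lower().split()
    base = risk_score(tokens)
    return {
        tok: base - risk_score(tokens[:i] + tokens[i + 1:])
        for i, tok in enumerate(tokens)
    }

scores = token_importance("I feel hopeless and alone")
# Tokens whose removal lowers the score most are the main drivers.
```

A real model would replace `risk_score`, but the attribution loop stays the same.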
How we Built It
Architecture Design
We started with a modular architecture:
- Flutter mobile frontend for journaling and chat.
- On-device TFLite model for fast mood detection.
- Cloud backend with FastAPI + PyTorch for heavier inference.
- Safety pipeline combining rules + classifiers.
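To make the split between on-device and cloud inference concrete, here is a minimal routing sketch. The token budget, confidence threshold, and stub functions are assumptions for illustration; they stand in for the real TFLite model and the FastAPI + PyTorch backend.

```python
# Hypothetical routing between fast on-device inference and the cloud
# backend. Thresholds and return values are illustrative stand-ins.

ON_DEVICE_MAX_TOKENS = 64  # assumed input budget for the small model

def run_on_device(text):
    # Stand-in for the TFLite mood classifier.
    return {"mood": "neutral", "confidence": 0.82}

def run_cloud(text):
    # Stand-in for the FastAPI + PyTorch service.
    return {"mood": "anxious", "confidence": 0.91}

def classify(text):
    """Prefer local inference; escalate long or low-confidence inputs."""
    if len(text.split()) <= ON_DEVICE_MAX_TOKENS:
        result = run_on_device(text)
        if result["confidence"] >= 0.7:
            return result
    return run_cloud(text)
```

Keeping short, confident predictions on-device reduces latency and keeps raw text off the network in the common case.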
Model Training
- Fine-tuned a transformer (DistilBERT) on emotion and risk datasets.
- Created an ensemble for crisis detection, mixing rule-based triggers with neural outputs.
- Designed a personalization layer using sentence embeddings and k-NN similarity.
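The personalization layer above boils down to nearest-neighbour retrieval over sentence embeddings. This sketch uses tiny hand-written vectors in place of real encoder outputs; the entry texts and dimensions are illustrative.

```python
import math

# Toy personalization: retrieve a user's most similar past journal
# entries via cosine similarity. Real vectors would come from a
# sentence-encoder model; these 3-d vectors are placeholders.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def k_nearest(query, entries, k=2):
    """Return the k stored entries closest to the query embedding."""
    ranked = sorted(entries, key=lambda e: cosine(query, e["vec"]), reverse=True)
    return [e["text"] for e in ranked[:k]]

entries = [
    {"text": "slept badly, felt anxious", "vec": [0.9, 0.1, 0.0]},
    {"text": "great walk in the park",    "vec": [0.0, 0.9, 0.2]},
    {"text": "worried about exams",       "vec": [0.8, 0.2, 0.1]},
]
neighbors = k_nearest([1.0, 0.0, 0.0], entries, k=2)
```

At small per-user scales, brute-force k-NN like this is simpler and fast enough; an approximate index only becomes worthwhile with much larger histories.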
Safety & Escalation
- Implemented keyword and risk-threshold checks.
- Built deterministic fallback templates for high-risk cases (e.g., “Please reach out to [hotline] if you’re in immediate danger”).
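The two bullets above combine into a single decision: escalate if a hard keyword rule fires or the classifier's risk score clears a threshold, and answer high-risk cases only with a fixed template. The keyword list and threshold below are illustrative, not our production values.

```python
# Sketch of the rules-plus-threshold escalation described above.
# Keywords, threshold, and the hotline placeholder are illustrative.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end it all"}
RISK_THRESHOLD = 0.8

FALLBACK_TEMPLATE = (
    "Please reach out to [hotline] if you're in immediate danger."
)

def needs_escalation(text, model_risk):
    """Escalate if a hard rule fires OR the classifier is confident."""
    lowered = text.lower()
    rule_hit = any(kw in lowered for kw in CRISIS_KEYWORDS)
    return rule_hit or model_risk >= RISK_THRESHOLD

def respond(text, model_risk):
    if needs_escalation(text, model_risk):
        return FALLBACK_TEMPLATE  # deterministic, never model-generated
    return None  # continue with the normal conversational flow
```

The OR between rules and the classifier is deliberate: rules catch explicit phrasing the model might score low, and the model catches subtle phrasing no keyword list anticipates.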
Deployment
- Packaged backend services in Docker and deployed on Kubernetes.
- Integrated monitoring with Grafana + Prometheus for both system and ML metrics.
Challenges Faced
- Data sensitivity: Mental health data is extremely private, so strict encryption and opt-in training policies were mandatory.
- Bias & safety: Neural nets sometimes missed subtle risk expressions, requiring ensemble models and heavy rule-based guardrails.
- Clinical alignment: Ensuring interventions were evidence-based meant continuous consultation with clinicians.
- Performance trade-offs: Running ML models on-device needed careful optimization (quantization, pruning) without losing accuracy.
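To illustrate the quantization trade-off in the last bullet, here is symmetric int8 quantization in plain Python. Real deployments would use TFLite's converter; this just shows why int8 shrinks the model 4x versus float32 while bounding the per-weight error.

```python
# Illustration of symmetric int8 post-training quantization.
# Weight values are made up; production code would use TFLite tooling.

def quantize(weights):
    """Map float weights to int8 using a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.51, -0.32, 0.08, -0.91]
q, scale = quantize(weights)
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
# Each int8 weight uses 1 byte instead of 4; the rounding error per
# weight is at most half a scale step.
```

Pruning composes with this: zeroed weights compress further and quantize exactly.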
This journey was a balance of AI innovation, human-centered design, and clinical responsibility. The system is not a replacement for therapy but aims to offer immediate support and safe triage.