Inspiration
Mental health support remains stigmatized and inaccessible for many, especially among youth and culturally diverse communities. Traditional screening methods struggle to accurately identify and prioritize genuine mental health concerns in chaotic communication spaces. MannKiBaat was created to leverage AI-driven conversation intelligence to provide privacy-first, culturally aware preliminary mental health screening that helps bridge this gap and empowers early intervention.
What it does
MannKiBaat combines rule-based and machine learning classification to filter genuine mental health conversations from casual chats with high precision. It integrates sentiment analysis inspired by validated clinical scales to provide a preliminary assessment of depression severity. The system supports multiple languages as well as code-mixed Hinglish, and processes data session-only with no storage for strong privacy. Its intuitive user interface enables scalable screening across digital platforms.
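The two-stage idea above can be sketched in a few lines. Everything here is illustrative: the cue list, function names, and threshold are assumptions, and the second stage is a toy stand-in for the actual trained classifier.

```python
# Minimal sketch of a hybrid two-stage filter. The cue list and threshold
# are hypothetical; stage 2 stands in for a real ML model's probability.

MENTAL_HEALTH_CUES = {"hopeless", "anxious", "depressed", "worthless"}

def rule_stage(text: str) -> bool:
    """Stage 1: cheap linguistic rules flag candidate messages."""
    lowered = text.lower()
    return any(cue in lowered for cue in MENTAL_HEALTH_CUES)

def ml_stage(text: str) -> float:
    """Stage 2: placeholder for a fine-tuned classifier's probability.
    A real deployment would run model inference here instead."""
    hits = sum(cue in text.lower() for cue in MENTAL_HEALTH_CUES)
    return min(1.0, hits / 2)

def is_genuine_concern(text: str, threshold: float = 0.5) -> bool:
    """Only messages passing the rule stage reach the costlier ML stage."""
    return rule_stage(text) and ml_stage(text) >= threshold

print(is_genuine_concern("I feel hopeless and worthless lately"))  # True
print(is_genuine_concern("lol that meme is great"))                # False
```

Gating the expensive model behind cheap rules is what keeps casual chatter from ever reaching the classifier.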
How we built it
The backend uses Python and leverages DistilBERT, fine-tuned on PHQ-8-labeled datasets, for NLP tasks. A hybrid two-stage classifier ensures precise conversation filtering by combining linguistic rules with ML predictions. The frontend dashboard, developed with Streamlit, is mobile-responsive and designed for clinical-grade interaction. Docker deployment scripts and cloud options enable flexible, scalable hosting.
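For deployment, a containerized Streamlit app can be described by a Dockerfile along these lines (file names such as `app.py` and `requirements.txt` are assumptions about the layout, not the project's actual files):

```dockerfile
# Sketch of a container image for a Streamlit dashboard (assumed file names).
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

Binding to `0.0.0.0` makes the dashboard reachable from outside the container, which is what lets the same image run locally or on a cloud host.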
Challenges
Achieving 100% accuracy in filtering out false positives in noisy chat environments required careful tuning of rule sets and ML features. Ensuring sensitivity to cultural and language context necessitated custom dataset creation and model adaptation. Privacy concerns demanded a session-only data processing model that avoids retaining any user data without compromising usability.
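The session-only model can be illustrated as follows: analysis happens entirely in memory, and only aggregate results leave the function, so raw messages are never written anywhere. The function name and the stand-in filter are hypothetical.

```python
# Sketch of session-only processing (hypothetical names): the raw messages
# exist only in memory for the lifetime of this call, and only aggregate
# counts are returned -- nothing is logged or persisted.

def screen_session(messages: list[str]) -> dict:
    """Analyze one chat session and return only aggregate results."""
    flagged = [m for m in messages if "hopeless" in m.lower()]  # stand-in filter
    result = {"messages_seen": len(messages), "flagged": len(flagged)}
    # No file writes, no database, no logging of message text: once this
    # returns, the raw texts go out of scope and are garbage-collected.
    return result

summary = screen_session(["hey", "I feel hopeless today"])
print(summary)  # {'messages_seen': 2, 'flagged': 1}
```

The design choice is that privacy comes from architecture rather than policy: there is simply no code path that stores user text.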
Achievements
The tool demonstrated robust filtering of mental health conversations with 100% precision in tests, outperforming many prior approaches. It delivers inference times under 2 seconds and clinically relevant PHQ-8 severity scoring. User surveys found the UI approachable and the pre-screening trustworthy, which is valuable in non-clinical environments.
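The PHQ-8 severity scoring mentioned above follows the scale's standard cut-points (total score 0-24). A sketch of mapping a score to a severity band, with the function and table names being our own:

```python
# Standard PHQ-8 severity bands: 0-4 none/minimal, 5-9 mild, 10-14 moderate,
# 15-19 moderately severe, 20-24 severe. Names here are illustrative.

PHQ8_BANDS = [
    (4, "none/minimal"),
    (9, "mild"),
    (14, "moderate"),
    (19, "moderately severe"),
    (24, "severe"),
]

def phq8_severity(score: int) -> str:
    """Translate a PHQ-8 total score into a preliminary severity label."""
    if not 0 <= score <= 24:
        raise ValueError("PHQ-8 total score must be between 0 and 24")
    for upper, label in PHQ8_BANDS:
        if score <= upper:
            return label

print(phq8_severity(7))   # mild
print(phq8_severity(16))  # moderately severe
```

In a pipeline like this one, the model's estimated score would feed this mapping to produce the label shown to the user.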
What we learned
A hybrid approach balancing rules and machine learning yields the most reliable conversational intelligence. Privacy-first design is essential for mental health applications to gain user trust. Cultural nuances significantly influence classification accuracy, so dataset diversity is critical. Scalable deployment demands modular architectures and containerization.
Current and Future Work
The roadmap includes integrating interactive chatbots to provide empathetic automated support, expanding language and dialect coverage, and collaborating with clinicians for validation and certification. We also plan to enhance user feedback loops and develop companion digital mental health literacy resources. The ultimate goal is a scalable, inclusive, AI-powered mental health ecosystem.