Inspiration

Health and well-being touch every dimension of a person's life — emotional, physical, social, and psychological — yet accessible, intelligent support in this space remains scarce. Millions struggle with anxiety, depression, stress, and trauma, yet have no easy way to get a first read on what they're experiencing or where to turn. We wanted AI to serve as a first-responder layer for health and well-being: not replacing doctors or therapists, but helping people understand themselves better and giving clinicians a richer picture before a session even begins. Rather than prompt-engineering a general LLM, we built a collective of purpose-built, fine-tuned models — each targeting a specific dimension of health and well-being.


What it does

Coalitus Collective is a suite of purpose-built machine learning models spanning a broad range of health and well-being topics — from emotional awareness and mental health support to stress assessment and cognitive pattern recognition. Four custom models run simultaneously on every user message:

Emotion Classifier — Fine-tuned DistilBERT detecting six emotions (sadness, anger, love, surprise, fear, joy) to give an immediate read on a user's emotional state and overall well-being.

Mental Health Topic Classifier (empath) — Routes text into one of eleven health and well-being categories (anxiety, depression, grief, trauma, sleep issues, relationships, and more). Includes built-in crisis detection — when suicidal language is found, crisis resources are shown immediately.

Cognitive Distortion Classifier (emphasist) — Multi-label classifier detecting five CBT-defined thinking patterns: overgeneralization, catastrophizing, black-and-white thinking, self-blame, and mind reading. Gives counselors an instant, clinically informed note on the user's thought patterns.

Stress Level Triage — A custom MLP assessing stress across 20 psychosocial dimensions covering mental health, physical health, environment, and academic/social factors. Classifies stress as Low, Medium, or High with a concrete follow-up recommendation — built for school counselors and wellness coordinators.
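To make the triage model concrete, here is a minimal forward-pass sketch of an MLP mapping the 20 psychosocial inputs to the three stress classes. The hidden size, weights, and architecture here are illustrative stand-ins, not the trained model:

```python
import numpy as np

LABELS = ["Low", "Medium", "High"]  # triage classes from the write-up

def mlp_triage(features, w1, b1, w2, b2):
    """Forward pass of a small MLP: 20 psychosocial inputs -> 3 stress classes."""
    assert features.shape == (20,), "expects the 20 psychosocial dimensions"
    h = np.maximum(0.0, features @ w1 + b1)   # ReLU hidden layer
    logits = h @ w2 + b2                      # class scores
    return LABELS[int(np.argmax(logits))]     # highest-scoring class wins

# Illustration with random, untrained weights (the real model is trained)
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(20, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 3)), np.zeros(3)
print(mlp_triage(rng.normal(size=20), w1, b1, w2, b2))
```

The trained model additionally attaches a follow-up recommendation to each class; that mapping is a simple lookup on the predicted label.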

All four are unified in a Next.js frontend: a Chat interface powered by Llama 3.3 70B (Groq) with live model analysis on every message, a Triage Tool for clinicians, and an Analysis panel visualising all model outputs at once.


How we built it

Each model was fine-tuned or trained independently using PyTorch and HuggingFace Transformers, then deployed as its own HuggingFace Space running FastAPI. We chose FastAPI over Gradio to get clean REST endpoints with full JSON I/O and CORS support, making the models callable from any frontend.

The Next.js backend calls all three text models in parallel via Promise.all with a shared timeout budget, so one user message triggers simultaneous inference and results arrive in a single API response. The Llama 3.3 70B model then receives those analysis results as context in its system prompt, making the conversational AI genuinely aware of what the classifiers found.
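The production route does this fan-out with Promise.all in TypeScript; the same pattern — parallel calls under a shared timeout budget with graceful fallbacks — is sketched here in Python's asyncio for illustration. Model names, delays, and the (deliberately tiny) budget are made up:

```python
import asyncio

TIMEOUT_S = 0.1  # shared per-request budget; kept tiny for the demo

async def call_model(name, delay):
    """Stand-in for an HTTP call to one HuggingFace Space."""
    await asyncio.sleep(delay)
    return {"model": name, "ok": True}

async def analyse(text):
    # Fan out to all text models at once; a slow or cold Space
    # degrades to a fallback instead of stalling the whole response.
    async def guarded(name, delay):
        try:
            return await asyncio.wait_for(call_model(name, delay), TIMEOUT_S)
        except asyncio.TimeoutError:
            return {"model": name, "ok": False}  # graceful fallback
    return await asyncio.gather(
        guarded("emotion", 0.01),
        guarded("empath", 0.01),
        guarded("emphasist", 10.0),  # simulates a cold start blowing the budget
    )

results = asyncio.run(analyse("I feel overwhelmed"))
```

Because `gather` preserves order, the frontend can merge all results into one API response and pass them straight into the LLM's system prompt.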


Challenges we ran into

HuggingFace Space Limits — Free-tier cold starts and resource constraints made real-time parallel inference unreliable. We built timeout budgets and graceful fallback states throughout the frontend, and had to solve static file path issues by embedding all frontend HTML inline inside the Python app.

Model Training Time — Fine-tuning multiple transformers in a hackathon timeframe meant hard tradeoffs on dataset size and training duration. The distortion classifier was hardest — CBT-labelled data is scarce and its conservative sigmoid scores required careful threshold tuning to avoid false positives.
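The threshold tuning amounts to giving each distortion its own cutoff instead of a blanket 0.5, trading recall for fewer false positives per label. The scores and cutoffs below are made-up illustrative values, not the tuned ones:

```python
SCORES = {  # sigmoid outputs from the multi-label head (illustrative)
    "overgeneralization": 0.41,
    "catastrophizing": 0.72,
    "black_and_white": 0.18,
    "self_blame": 0.35,
    "mind_reading": 0.55,
}

# Per-label cutoffs tuned on validation data; raising a cutoff
# suppresses false positives on that distortion at some recall cost.
THRESHOLDS = {
    "overgeneralization": 0.30,
    "catastrophizing": 0.50,
    "black_and_white": 0.40,
    "self_blame": 0.45,
    "mind_reading": 0.60,
}

def detected(scores, thresholds):
    """Return the distortions whose score clears their tuned cutoff."""
    return [label for label, s in scores.items() if s >= thresholds[label]]

print(detected(SCORES, THRESHOLDS))
```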


Accomplishments that we're proud of

Creating multiple AI/ML models that can accommodate a broad range of health and well-being topics — and making them work as a unified system rather than four disconnected tools. Every model was purpose-built and fine-tuned for its specific task. We're especially proud of the real-time analysis pipeline where a single message triggers all models simultaneously and the results surface across the UI within seconds, giving users and clinicians a multi-dimensional view of well-being that no single model could provide alone.


What we learned

Building AI for health and well-being requires far more care than general NLP — outputs that look fine in isolation can be misleading or harmful in a clinical context, so every result is framed as a signal to inform, never a diagnosis. We also learned that running multiple models in real time is as much a distributed systems problem as an ML one, and that clean REST APIs make cross-service coordination dramatically simpler than framework-specific protocols.


What's next for Coalitus Collective

  • Stress model in chat — a conversational intake flow that collects the 20 psychosocial inputs naturally through dialogue, then runs the stress classifier
  • Clinician dashboard — session transcripts annotated with model outputs, pattern tracking across sessions, and exportable wellness reports
  • More models — physical health symptom checker, sleep quality assessment, and a resource recommendation model mapping topics to therapeutic exercises
  • Multilingual support — starting with Filipino and Spanish to reach more communities
  • Responsible AI — bias audits, explainability overlays, and a clinician feedback loop for continuous retraining
