Discharge Risk AI: Turning Lived Discharge Friction into Explainable Operations Intelligence
Discharge Risk AI is an explainable operations intelligence system that predicts hospital discharge delays before a discharge order is placed—so teams can prevent failure instead of reacting to it.
This project was inspired by firsthand experience inside hospital–SNF discharge workflows, where medically ready patients routinely remain admitted due to operational—not clinical—barriers.
Project context: Discharge Bridge
What Inspired Me
This project did not start as a data science exercise—it started as a daily operational failure.
Working directly with hospital case managers, SNF admissions teams, and post-acute operators, I repeatedly saw the same pattern:
- Patients were clinically stable
- Beds were available somewhere
- Yet discharges stalled for days
The causes were always operational: payor rules, missing documentation, SNF capacity mismatches, distance constraints, timing windows, and late discovery of authorization barriers.
Hospitals were losing thousands of dollars per patient per day, not because care was unsafe—but because the system could not see discharge risk early enough.
That gap—predictable failure without early warning—is what Discharge Risk AI was built to solve.
What I Learned
1. Discharge delays are operational, not clinical
Most delay drivers live outside diagnosis codes and vitals. Modeling this as a clinical AI problem leads to black boxes that teams cannot trust or act on.
2. Black-box AI fails in regulated operations
Care teams do not need another probability score. They need to know why a discharge is at risk and what can be done earlier to reduce that risk.
3. Prevention beats acceleration
Most tools try to move faster once a discharge is already failing. The real leverage is identifying failure points days earlier, when intervention is still low-friction.
How I Built the Project
Discharge Risk AI is intentionally designed as advisory, explainable, and non-clinical.
Core Components
Operational Feature Modeling
The model consumes purely operational signals (a minimal encoding sketch follows this list):
- Payor type & authorization complexity
- SNF availability and acceptance patterns
- Care complexity & service alignment
- Distance and placement geography
- Timing factors (day of week, staffing cycles)
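As a rough illustration of what this feature layer could look like in code, here is a minimal encoding sketch; the field names, orderings, and the payor code table are assumptions for illustration, not the project's actual schema.

```python
# A minimal sketch of how these operational signals could be encoded for the
# model. Field names, encodings, and the payor code table are illustrative
# assumptions, not the project's actual schema.
from dataclasses import dataclass

PAYOR_CODES = {"medicare": 0, "medicare_advantage": 1, "medicaid": 2, "commercial": 3}

@dataclass
class DischargeFeatures:
    payor_type: str                 # payor & authorization complexity
    prior_auth_required: bool
    snf_beds_within_radius: int     # SNF availability / acceptance proxy
    therapy_intensity: int          # care complexity, ordinal 0-3
    distance_to_nearest_snf_km: float
    planned_discharge_weekday: int  # timing factor: 0 = Monday ... 6 = Sunday

def to_vector(f: DischargeFeatures) -> list[float]:
    """Flatten one record into the numeric vector the scoring model consumes."""
    return [
        float(PAYOR_CODES.get(f.payor_type, -1)),
        float(f.prior_auth_required),
        float(f.snf_beds_within_radius),
        float(f.therapy_intensity),
        f.distance_to_nearest_snf_km,
        float(f.planned_discharge_weekday),
    ]
```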
Deterministic Risk Scoring
An XGBoost model generates a normalized discharge delay risk score from 0 to 100 (a minimal training sketch follows the formula below).
Conceptually, the discharge risk score \(R\) is a weighted sum of operational constraints:
$$ R = \sum_{i=1}^{n} w_i \cdot f_i $$
Where:
- \(f_i\) = a concrete operational risk factor
- \(w_i\) = the learned importance of that factor
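A minimal sketch of the scoring step, assuming a binary "discharge delayed" label and synthetic stand-in data; the real training set, feature pipeline, and hyperparameters are not part of this example.

```python
# A minimal sketch of the scoring step, assuming a binary "discharge delayed"
# label and synthetic stand-in data; real training data is not shown here.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 6))                   # stand-in operational feature vectors
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # stand-in delay labels

model = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
model.fit(X, y)

def risk_score(features: np.ndarray) -> int:
    """Map the model's delay probability onto the normalized 0-100 scale."""
    p_delay = model.predict_proba(features.reshape(1, -1))[0, 1]
    return round(100 * float(p_delay))

print(risk_score(X[0]))  # prints an integer score between 0 and 100
```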
Explainable Reasoning Layer (Gemini 3)
Gemini 3 does not make predictions. It interprets structured model outputs and produces:
- Human-readable explanations
- Ranked drivers of discharge risk
- Actionable, non-clinical operational recommendations
Gemini is explicitly constrained: No diagnoses. No medical advice. No autonomous decisions.
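A minimal sketch of the boundary between the scorer and the reasoning layer; the guardrail wording is illustrative, and `call_gemini` is a hypothetical wrapper, but the key idea from the project holds: only structured, non-clinical model outputs cross this boundary.

```python
# A minimal sketch of the scorer-to-reasoning-layer contract. The guardrail
# wording is illustrative and `call_gemini` is a hypothetical wrapper; only
# structured, non-clinical model outputs are passed to the LLM.
import json

GUARDRAILS = (
    "You explain operational discharge-delay risk. "
    "Do not diagnose, give medical advice, or make decisions. "
    "Reference only the drivers provided in the input JSON."
)

def build_explanation_request(score: int, drivers: list) -> str:
    payload = {
        "risk_score": score,        # 0-100, produced by the XGBoost model
        "ranked_drivers": drivers,  # feature names with their contributions
        "wanted": ["explanation", "ranked_drivers", "operational_recommendations"],
    }
    return GUARDRAILS + "\n\n" + json.dumps(payload, indent=2)

request = build_explanation_request(
    82,
    [{"feature": "prior_auth_required", "contribution": 0.31},
     {"feature": "snf_beds_within_radius", "contribution": 0.24}],
)
# response = call_gemini(request)  # hypothetical wrapper around the Gemini API
```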
Example Output (Conceptual)
```python
risk_score = 82
top_drivers = [
    "Medicare Advantage prior authorization likely required",
    "Limited SNF bed availability within preferred radius",
    "High therapy intensity mismatched with local capacity",
]
recommendation = "Initiate SNF outreach and prior auth prep 48–72 hours earlier"
```
Challenges I Faced
1. Preventing clinical scope creep
It was tempting to include diagnoses and clinical predictions—but doing so would reduce trust, increase regulatory burden, and obscure explainability. The system was deliberately constrained to operations intelligence only.
2. Making explanations trustworthy
Translating model features into natural language without hallucination required tight contracts between ML outputs and Gemini’s reasoning layer.
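A minimal sketch of what such a contract can look like in practice, assuming the reasoning layer returns JSON (the response shape is an illustrative assumption): any explanation that cites a driver the model never emitted is rejected outright.

```python
# A minimal sketch of one such contract: reject any generated explanation that
# cites a driver the model did not actually emit. The JSON shape is an
# illustrative assumption, not the project's real response schema.
import json

def validate_explanation(raw_response: str, allowed_drivers: set) -> dict:
    """Parse the reasoning layer's JSON and enforce the driver whitelist."""
    parsed = json.loads(raw_response)
    cited = {d["feature"] for d in parsed.get("ranked_drivers", [])}
    unknown = cited - allowed_drivers
    if unknown:
        raise ValueError(f"Explanation cites drivers outside model output: {unknown}")
    return parsed

response = '{"ranked_drivers": [{"feature": "prior_auth_required"}], "explanation": "..."}'
print(validate_explanation(response, {"prior_auth_required", "snf_beds_within_radius"}))
```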
3. Designing for auditability
Hospitals must explain outcomes retrospectively. Every score and explanation needed to be reproducible, reviewable, and clearly labeled as advisory.
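As a sketch of the auditability idea, assuming illustrative field names: each score can be stored alongside a hash of its canonicalized inputs, the exact model version, and an explicit advisory flag, so any past output can be reproduced and reviewed.

```python
# A minimal sketch of an audit record: each score is stored with a hash of its
# canonicalized inputs, the exact model version, and an explicit advisory
# label. Field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(features: dict, score: int, model_version: str) -> dict:
    canonical = json.dumps(features, sort_keys=True)  # stable input serialization
    return {
        "input_hash": hashlib.sha256(canonical.encode()).hexdigest(),
        "model_version": model_version,  # pin the model that produced the score
        "risk_score": score,
        "advisory_only": True,           # label carried into every downstream view
        "scored_at": datetime.now(timezone.utc).isoformat(),
    }

print(audit_record({"payor_type": "medicare_advantage"}, 82, "xgb-2024-01"))
```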
Why This Matters
Discharge Risk AI shifts discharge planning from reactive firefighting to proactive prevention.
By identifying why a discharge is likely to fail days earlier, hospitals can:
- Reduce avoidable inpatient days
- Improve patient flow without adding staff
- Intervene earlier for high-risk populations
- Create more equitable discharge outcomes
This project demonstrates that powerful healthcare AI does not need to be opaque to be effective.
Explainability here is not a compliance feature—it is the core product.
Built With
- docker
- fastapi
- firebase-hosting
- github-actions
- google-ai-studio
- google-cloud-run
- google-deepmind-gemini-3
- javascript
- python
- react
- typescript
- xgboost

