Discharge Risk AI: Turning Lived Discharge Friction into Explainable Operations Intelligence

Discharge Risk AI is an explainable operations intelligence system that predicts hospital discharge delays before a discharge order is placed—so teams can prevent failure instead of reacting to it.

This project was inspired by firsthand experience inside hospital–SNF discharge workflows, where medically ready patients routinely remain admitted due to operational—not clinical—barriers.

Project context: Discharge Bridge


What Inspired Me

This project did not start as a data science exercise—it started as a daily operational failure.

Working directly with hospital case managers, SNF admissions teams, and post-acute operators, I repeatedly saw the same pattern:

  • Patients were clinically stable
  • Beds were available somewhere
  • Yet discharges stalled for days

The causes were always operational: payor rules, missing documentation, SNF capacity mismatches, distance constraints, timing windows, and late discovery of authorization barriers.

Hospitals were losing thousands of dollars per patient per day, not because care was unsafe—but because the system could not see discharge risk early enough.

That gap—predictable failure without early warning—is what Discharge Risk AI was built to solve.


What I Learned

1. Discharge delays are operational, not clinical

Most delay drivers live outside diagnosis codes and vitals. Modeling this as a clinical AI problem leads to black boxes that teams cannot trust or act on.

2. Black-box AI fails in regulated operations

Care teams do not need another probability score. They need to know why a discharge is at risk and what can be done earlier to reduce that risk.

3. Prevention beats acceleration

Most tools try to move faster once a discharge is already failing. The real leverage is identifying failure points days earlier, when intervention is still low-friction.


How I Built the Project

Discharge Risk AI is intentionally designed as advisory, explainable, and non-clinical.

Core Components

  1. Operational Feature Modeling

    • Payor type & authorization complexity
    • SNF availability and acceptance patterns
    • Care complexity & service alignment
    • Distance and placement geography
    • Timing factors (day of week, staffing cycles)
  2. Deterministic Risk Scoring

    An XGBoost model generates a normalized discharge delay risk score from 0–100.

The discharge risk score \(R\) is computed as a weighted sum of operational constraints.

$$ R = \sum_{i=1}^{n} w_i \cdot f_i $$

Where:

  • \(f_i\) = a concrete operational risk factor
  • \(w_i\) = learned importance of that factor
  3. Explainable Reasoning Layer (Gemini 3)

    Gemini 3 does not make predictions. It interprets structured model outputs and produces:
    • Human-readable explanations
    • Ranked drivers of discharge risk
    • Actionable, non-clinical operational recommendations

Gemini is explicitly constrained: No diagnoses. No medical advice. No autonomous decisions.
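The weighted-sum scoring above can be sketched in a few lines. This is illustrative only: the deployed system learns its weights via XGBoost, and the factor names and weight values here are assumptions, not the trained model's.

```python
# Illustrative sketch of R = sum(w_i * f_i); weights are hypothetical,
# not the learned importances from the actual XGBoost model.
OPERATIONAL_WEIGHTS = {
    "prior_auth_required": 0.35,
    "snf_bed_scarcity": 0.30,
    "distance_constraint": 0.20,
    "weekend_discharge": 0.15,
}

def risk_score(factors: dict[str, float]) -> float:
    """Combine 0-1 factor activations f_i into a 0-100 risk score."""
    raw = sum(OPERATIONAL_WEIGHTS[name] * value
              for name, value in factors.items())
    return round(100 * raw, 1)

score = risk_score({
    "prior_auth_required": 1.0,   # auth barrier present
    "snf_bed_scarcity": 0.8,      # few beds in preferred radius
    "distance_constraint": 0.5,
    "weekend_discharge": 0.0,
})
# Prior-auth and bed-scarcity activations dominate the resulting score
```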


Example Output (Conceptual)


risk_score = 82

top_drivers = [
  "Medicare Advantage prior authorization likely required",
  "Limited SNF bed availability within preferred radius",
  "High therapy intensity mismatched with local capacity"
]

recommendation = "Initiate SNF outreach and prior auth prep 48–72 hours earlier"


Challenges I Faced
1. Preventing clinical scope creep
It was tempting to include diagnoses and clinical predictions—but doing so would reduce trust, increase regulatory burden, and obscure explainability. The system was deliberately constrained to operations intelligence only.

2. Making explanations trustworthy
Translating model features into natural language without hallucination required tight contracts between ML outputs and Gemini’s reasoning layer.
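One way to enforce such a contract is to validate model outputs against a fixed schema before anything reaches the prompt. This is a minimal sketch with assumed field and factor names, not the production contract:

```python
from dataclasses import dataclass

# Hypothetical contract: Gemini may only see these validated fields,
# never raw model internals or free-form clinical text.
ALLOWED_FACTORS = {
    "prior_auth_required",
    "snf_bed_scarcity",
    "distance_constraint",
    "weekend_discharge",
}

@dataclass(frozen=True)
class RiskOutput:
    score: int                # 0-100 normalized risk score
    factors: tuple[str, ...]  # ranked operational drivers

    def __post_init__(self):
        # Reject anything outside the agreed vocabulary before prompting
        if not 0 <= self.score <= 100:
            raise ValueError("score must be 0-100")
        unknown = set(self.factors) - ALLOWED_FACTORS
        if unknown:
            raise ValueError(f"unrecognized factors: {unknown}")
```

Because the reasoning layer can only ever be handed a `RiskOutput`, it has no unvetted features to hallucinate from.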

3. Designing for auditability
Hospitals must explain outcomes retrospectively. Every score and explanation needed to be reproducible, reviewable, and clearly labeled as advisory.
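One lightweight pattern for this (sketched here with assumed field names) is to persist every assessment as an advisory-labeled record whose content hash lets reviewers verify it later:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(score: int, factors: list[str], explanation: str) -> dict:
    """Build a reproducible, advisory-labeled record of one assessment."""
    body = {
        "score": score,
        "factors": factors,
        "explanation": explanation,
        "advisory_only": True,  # never a clinical determination
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash only the deterministic content so identical inputs always
    # produce the same digest, regardless of when the record was written
    payload = json.dumps(
        {k: body[k] for k in ("score", "factors", "explanation")},
        sort_keys=True,
    )
    body["content_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return body
```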

Why This Matters
Discharge Risk AI shifts discharge planning from reactive firefighting to proactive prevention.

By identifying why a discharge is likely to fail—days earlier—hospitals can:

  • Reduce avoidable inpatient days
  • Improve patient flow without adding staff
  • Intervene earlier for high-risk populations
  • Create more equitable discharge outcomes

This project demonstrates that powerful healthcare AI does not need to be opaque to be effective.

Explainability here is not a compliance feature—it is the core product.

Built With

  • Python (FastAPI, XGBoost)
  • Gemini 3
  • Google Cloud Run
  • React
  • Firebase Hosting

Updates

**Feb 2, 2026 — Project Started**

We kicked off Discharge Risk AI to address a persistent hospital operations problem: discharge delays that occur after patients are medically ready to leave. The goal is to predict discharge risk earlier and explain why a discharge is likely to fail before it becomes a bottleneck.


**Feb 3, 2026 — Discharge Risk Scoring Implemented**

Implemented the first version of the discharge delay risk scoring logic using structured, non-clinical operational inputs such as insurance type, placement constraints, and timing factors.

# Simplified example of risk scoring inputs
import pandas as pd

features = pd.DataFrame([{
    "payor_type": payor_type,              # e.g. "medicare_advantage"
    "snf_availability": snf_beds_available,
    "distance_to_placement": miles,
    "authorization_required": prior_auth,
    "day_of_week": discharge_day,
}])

# Probability of delay, scaled to a 0-100 risk score
risk_score = 100 * xgb_model.predict_proba(features)[0, 1]

Early testing confirmed these signals can meaningfully predict discharge risk without using PHI.


**Feb 4, 2026 — Backend API Deployed on Google Cloud**

Built a FastAPI backend and deployed it on Google Cloud Run, establishing a production-grade API layer and clean separation between inference and presentation.

@app.post("/assess-risk")
def assess_risk(payload: DischargeScenario):
    score, factors = calculate_risk(payload)
    explanation = explain_with_gemini(score, factors)
    return {
        "risk_score": score,
        "factors": factors,
        "explanation": explanation,
    }
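The `DischargeScenario` payload is not shown in the snippet. In the deployed API it would be a Pydantic model, but its shape, with field names assumed from the scoring inputs, can be sketched with a plain dataclass:

```python
from dataclasses import dataclass

@dataclass
class DischargeScenario:
    """Non-clinical operational inputs only; no PHI, no diagnoses."""
    payor_type: str               # e.g. "medicare_advantage"
    snf_availability: int         # beds available in preferred radius
    distance_to_placement: float  # miles to nearest candidate SNF
    authorization_required: bool
    day_of_week: str              # e.g. "friday"
```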

**Feb 5, 2026 — Gemini 3 Explainable Reasoning Added**

Integrated Google DeepMind Gemini 3 as the explainable reasoning layer. Gemini interprets structured model outputs to produce clear explanations and operational recommendations.

prompt = f"""
Given a discharge risk score of {score}
and these contributing factors: {top_factors},
explain why this discharge is at risk and suggest
early operational actions. Do not provide medical advice.
"""

Gemini is used for reasoning and explanation, not prediction.
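In practice it helps to assemble that prompt through a single function so the non-clinical guardrail can never be omitted. A sketch, with wording mirroring the prompt above:

```python
GUARDRAIL = "Do not provide medical advice."

def build_explanation_prompt(score: int, top_factors: list[str]) -> str:
    """Assemble the Gemini prompt; the guardrail is always appended."""
    factors = "; ".join(top_factors)
    return (
        f"Given a discharge risk score of {score} "
        f"and these contributing factors: {factors}, "
        "explain why this discharge is at risk and suggest "
        f"early operational actions. {GUARDRAIL}"
    )
```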


**Feb 6, 2026 — Public Web App Launched**

Launched the first public version of the web app using React and Firebase Hosting. The demo is intentionally login-free so judges can test it immediately.

<button onClick={handleAssessRisk}>
  Get Risk Assessment
</button>

(Attach dashboard screenshot here)


**Feb 7, 2026 — Explainability & Dashboard UX Improvements**

Refined the dashboard to emphasize the discharge risk score, ranked contributing factors, and Gemini-generated explanations. Improved layout and hierarchy to resemble an enterprise healthcare operations interface.

<RiskGauge score={riskScore} />
<FactorList factors={topFactors.slice(0, 3)} />
<GeminiExplanation text={explanation} />

(Attach explainability / risk visualization screenshot here)


**Feb 8, 2026 — Visual Identity & Architecture Added**

Introduced a minimal logo mark and subtle dashboard hero visuals to reinforce product credibility. Finalized a one-page architecture diagram showing Firebase Hosting, Cloud Run, XGBoost risk scoring, and Gemini 3 explainable reasoning.

Browser → Firebase Hosting → Cloud Run API
        → XGBoost Risk Model
        → Gemini 3 Reasoning Layer
        → Explainable Results UI

(Attach logo + architecture image here)


**Feb 9, 2026 — Demo Video & Final Submission Ready**

Recorded a short demo video showing live risk assessment, contributing factors, and Gemini explanations. Completed final QA, verified public access, and prepared Discharge Risk AI for submission to the Gemini 3 Hackathon.


Current Status

  • Fully deployed on Google Cloud
  • Public, login-free demo
  • Explainable AI powered by Gemini 3
  • Hackathon submission complete
