Solace: The AI-Powered ER + Patient Intake Companion

AI-assisted emergency triage. You're not waiting alone.

Every year, 130 million patients visit U.S. emergency departments. Most will tell their story four separate times: to a clerk, a triage nurse, a bedside nurse, and a doctor, while scared and in pain. Between those repetitions is a black hole: 30 minutes to 3 hours of clinical silence where nothing happens, no one checks in, and the patient has no idea whether their condition is serious or stable.

Solace eliminates that entire bottleneck. A patient scans a QR code on the waiting-room kiosk, speaks their symptoms in any language, and optionally snaps a photo of their injury. Within ~7 seconds, they hear a warm voice explain their triage level and receive a personalized comfort protocol. On the clinician's side, a full AI-generated pre-brief (ESI prediction, SHAP explanation, scribe draft, EHR match) appears on the dashboard before the patient is even roomed.

The doctor walks in already knowing.


Inspiration

Our team had previously competed in the Triageist hackathon (hosted by the Laitinen-Fredriksson Foundation on Kaggle), where we built a stacked ensemble to predict Emergency Severity Index (ESI) acuity levels from patient-reported data. That model achieved a near-perfect out-of-fold Quadratic Weighted Kappa of 0.9999, showing that ML can reliably approximate clinical triage judgment.

But a triage number alone doesn't save time. The real bottleneck is the manual intake workflow: patients repeating themselves, nurses re-asking the same questions, doctors walking in blind. The model ended the moment the score was assigned. Solace picks up exactly where that left off, turning a prediction into an end-to-end clinical decision-support system.

We were also motivated by the equity gap in ER intake. On any given night, 30% of ER patients may speak Spanish, Vietnamese, Haitian Creole, or another language. Traditional clipboard-based intake fails them. Solace is multilingual from the first interaction: voice in, voice out, in the patient's own language.


What It Does

Solace serves two users simultaneously with a single intake event:

For the Patient

  1. Scan the QR code in the waiting room → opens instantly in any mobile browser (zero download)
  2. Speak symptoms for ~30 seconds → OpenAI Whisper transcribes in any language
  3. Snap an optional photo of the injury or insurance card → Claude Vision analyzes
  4. Hear a calm, empathetic voice explain their ESI triage level and deliver a personalized comfort protocol (breathing techniques, positioning tips, ice/heat guidance), all via ElevenLabs TTS in their own language
  5. Tap "My pain got worse" at any time → immediate nurse escalation flag

For the Clinician

  1. Open the dashboard on desktop or tablet → real-time polling patient grid sorted by acuity
  2. Scan each patient's AI-generated pre-brief card: provisional ESI, symptom summary, SHAP feature attributions, comfort protocol given, wait time
  3. Click into a patient → full transcript, photo analysis, AI scribe draft, EHR-matched history (allergies, meds, conditions, prior visits)
  4. Refine → when bedside vitals land, a LightGBM 5-fold ensemble recalculates ESI with real numeric signal + conformal prediction intervals
  5. Walk in already informed → skip redundant intake, go straight to exam and plan

The Result

  • 20+ minutes of "storytelling time" per patient → collapsed to 30 seconds
  • 5–10 doctor-minutes saved per patient per shift
  • Patient feels heard, informed, and cared for while waiting, not abandoned

How We Built It

Two-Stage Inference Architecture

Solace uses a two-stage triage pipeline that mirrors real clinical workflow:

Stage 1 → Narrative Triage (on intake): Claude Sonnet 4.5 processes the Whisper transcript + optional photo and generates a provisional ESI, a structured clinician pre-brief, an AI scribe draft, and a patient comfort protocol. This gets the queue moving within seconds.

Stage 2 → Quantitative Refinement (at bedside): When vitals are taken (heart rate, blood pressure, respiratory rate, SpO2, temperature, pain scale), a LightGBM 5-fold ensemble trained on the Kaggle Triageist dataset refines the ESI with real numeric signal. The model outputs a probability distribution across ESI levels:

$$\hat{y} = \text{softmax}(W \cdot h_{\text{fused}} + b), \quad \hat{y} \in \mathbb{R}^5$$

where \(h_{\text{fused}}\) is a joint embedding of the text features (via keyword extraction + cyclic encoding) and vitals features (normalized, with derived features like shock index \(SI = \frac{HR}{SBP}\)).
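As a sketch of the derived-vitals features feeding Stage 2 (field names and thresholds are illustrative, not Solace's actual schema), the shock-index computation might look like:

```python
# Illustrative sketch: derive the extra numeric features Stage 2 consumes.
# Field names here are assumptions, not Solace's real schema.

def derive_vitals_features(v: dict) -> dict:
    hr, sbp = v["heart_rate"], v["systolic_bp"]
    return {
        **v,
        "shock_index": hr / sbp,                      # SI = HR / SBP; > ~0.9 often flags instability
        "pulse_pressure": sbp - v["diastolic_bp"],    # another common derived signal
    }

feats = derive_vitals_features(
    {"heart_rate": 110, "systolic_bp": 100, "diastolic_bp": 60}
)
# feats["shock_index"] -> 1.1, feats["pulse_pressure"] -> 40
```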

The ensemble provides SHAP feature attributions via pred_contrib=True (real Shapley values, not heuristic weights) and split-conformal prediction sets with 90% coverage, calibrated on noise-perturbed vitals, giving clinicians both an explanation and a confidence interval.
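The split-conformal step can be sketched in a few lines. This is a minimal illustration of the technique on mocked class probabilities, not Solace's actual calibration code; it assumes an already-trained ensemble supplies the probability vectors:

```python
import numpy as np

# Sketch of split-conformal prediction sets over ESI levels 1-5.
# The calibration data below is mocked; all names are illustrative.

def conformal_quantile(cal_probs, cal_labels, alpha=0.10):
    """Calibrate the nonconformity threshold for (1 - alpha) coverage."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]  # 1 - p(true class)
    # Finite-sample-corrected quantile level, clipped to 1.0.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(scores, q_level, method="higher"))

def prediction_set(probs, qhat):
    """Every ESI level whose nonconformity 1 - p_k clears the threshold."""
    return [k + 1 for k, p in enumerate(probs) if 1.0 - p <= qhat]

# Mock calibration split: 9 patients whose true-class probability decays.
true_p = np.linspace(0.9, 0.5, 9)
cal_probs = np.tile(((1 - true_p) / 4)[:, None], (1, 5))
cal_probs[:, 2] = true_p                       # true class is ESI 3 for all
cal_labels = np.full(9, 2)

qhat = conformal_quantile(cal_probs, cal_labels)            # 0.5 on this split
pset = prediction_set([0.05, 0.1, 0.7, 0.1, 0.05], qhat)    # [3]
```

On a confident prediction the set collapses to a single ESI level; on an ambiguous one it widens, which is exactly the signal a clinician needs.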

Full Pipeline

The end-to-end flow completes in ~7 seconds:

Patient speaks → Whisper transcription (1.5s)
                          ↓
              Claude Sonnet 4.5 (parallel):
                 • Narrative triage + ESI    (2s)
                 • Clinician pre-brief       (2s)
                 • Comfort protocol          (2s)
                 • Photo vision analysis     (1.5s)
                 • AI scribe draft           (2s)
                          ↓
              ElevenLabs TTS (parallel): (1.5s)
                 • ESI explanation audio
                 • Comfort protocol audio
                          ↓
              Result screen + dashboard update

Claude and ElevenLabs calls are parallelized to hit the <10s latency target.
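A minimal sketch of that fan-out, with stub coroutines standing in for the real Whisper/Claude/ElevenLabs clients (all names here are illustrative, not Solace's actual code):

```python
import asyncio

# Stub coroutines simulating the real provider clients.
async def claude_call(task: str) -> str:
    await asyncio.sleep(0.01)          # stands in for network latency
    return f"{task}: done"

async def tts_call(label: str) -> bytes:
    await asyncio.sleep(0.01)
    return f"audio:{label}".encode()

async def run_intake(transcript: str) -> dict:
    # The five Claude tasks are independent, so they run concurrently:
    # wall time is max(task latency), not the sum.
    triage, prebrief, comfort, vision, scribe = await asyncio.gather(
        claude_call("triage"), claude_call("prebrief"),
        claude_call("comfort"), claude_call("vision"), claude_call("scribe"),
    )
    # Both TTS renders are likewise independent.
    esi_audio, comfort_audio = await asyncio.gather(
        tts_call("esi"), tts_call("comfort"),
    )
    return {"triage": triage, "scribe": scribe, "esi_audio": esi_audio}

result = asyncio.run(run_intake("patient transcript"))
```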

Adaptive Intake Form

The intake form uses clinical skip-logic:

  • Pregnancy questions only appear for female patients aged 12–55
  • Diabetes type follow-up triggers on diabetes mention
  • NYHA class appears for heart failure patients
  • Severity classification per allergy
  • All questions are conditionally rendered rather than merely hidden, reducing cognitive load for patients already in distress
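The skip-logic can be sketched as a pure function from patient answers to visible questions; the rules mirror the list above, but field names and the base question set are otherwise illustrative:

```python
# Illustrative skip-logic sketch; field names are assumptions, not
# Solace's actual intake schema.

def visible_questions(patient: dict) -> list[str]:
    q = ["chief_complaint", "pain_scale", "allergies"]
    # Pregnancy questions only for female patients aged 12-55.
    if patient.get("sex") == "female" and 12 <= patient.get("age", 0) <= 55:
        q.append("pregnancy_status")
    # Diabetes type follow-up triggers on a diabetes mention.
    if "diabetes" in patient.get("conditions", []):
        q.append("diabetes_type")
    # NYHA class only for heart failure patients.
    if "heart failure" in patient.get("conditions", []):
        q.append("nyha_class")
    return q

fields = visible_questions(
    {"sex": "female", "age": 30, "conditions": ["diabetes"]}
)
# includes pregnancy_status and diabetes_type, omits nyha_class
```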

EHR Integration (FHIR-Shape)

On intake, Solace auto-matches the patient by name against seeded EHR records and merges:

  • Allergies (with severity)
  • Active medications
  • Chronic conditions
  • Family history
  • Prior ED visits

This appears in the clinician view before rooming. The doctor sees the patient's full history alongside the AI pre-brief.
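A minimal sketch of the name-match-and-merge step, assuming seeded records in a plain list (field names are illustrative; production matching against a real FHIR source would be stricter than name equality):

```python
# Illustrative EHR match sketch; names and fields are assumptions.

def normalize(name: str) -> str:
    # Collapse case and internal whitespace before comparing.
    return " ".join(name.lower().split())

def match_and_merge(intake: dict, ehr_records: list[dict]) -> dict:
    key = normalize(intake["name"])
    record = next(
        (r for r in ehr_records if normalize(r["name"]) == key), None
    )
    if record is None:
        return {**intake, "ehr_matched": False}
    return {
        **intake,
        "ehr_matched": True,
        "allergies": record.get("allergies", []),
        "medications": record.get("medications", []),
        "conditions": record.get("conditions", []),
        "prior_visits": record.get("prior_visits", []),
    }

merged = match_and_merge(
    {"name": "  Jane  DOE "},
    [{"name": "jane doe",
      "allergies": [{"substance": "penicillin", "severity": "high"}]}],
)
```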


Technical Architecture

Patient phone ─► QR /demo/patient ─► Vite+React SPA
                                  │
                                  │  WAFv2 → CloudFront → API GW (HTTP)
                                  ▼
                            FastAPI on Lambda
                         (container, arm64, Python 3.12)
                ┌───────────┼────────────┬─────────────┐
                ▼           ▼            ▼             ▼
          OpenAI Whisper  Claude 4.5  ElevenLabs    LightGBM
          (transcribe)   (pre-brief,  (empathetic   5-fold ensemble
                          scribe,     TTS)          + SHAP
                          comfort,                  + conformal
                          vision,                   prediction
                          insurance)
                │           │            │             │
                └─────┬─────┴────────────┴─────────────┘
                      ▼
                DynamoDB (11 tables, CMK-encrypted)
                S3 media (24h lifecycle, CMK)
                CloudTrail audit + SNS alerts

Stack

  • Frontend: Vite + React 18 + TypeScript + Tailwind CSS + Framer Motion
  • Backend: FastAPI + Mangum (Lambda ASGI adapter), Python 3.12
  • Runtime: AWS Lambda container image (ECR, arm64) behind API Gateway HTTP API
  • CDN / WAF: CloudFront + WAFv2 (IP reputation, known bad inputs, OWASP common, rate-based limits)
  • Storage: DynamoDB (11 tables, all CMK-encrypted, PAY_PER_REQUEST, TTLs)
  • Media: S3 (CMK, 24h lifecycle, TLS-only, public access blocked)
  • Secrets: AWS Secrets Manager (CMK-encrypted: JWT key, API keys, demo PINs)
  • Observability: CloudWatch (13 alarms) + CloudTrail + SNS alerts + EventBridge
  • AI Providers: OpenAI Whisper + Anthropic Claude Sonnet 4.5 + ElevenLabs multilingual v2
  • ML: LightGBM 5-fold ensemble, SHAP (pred_contrib), split-conformal 90% coverage
  • Deploy: Amplify Hosting (frontend) + ECR container deploy (backend)

Security & HIPAA Compliance

Healthcare demands the highest security bar. Solace was built HIPAA-aware by construction, not retrofitted:

  • §164.508 Consent: Logged on every intake (version + granted-at timestamp), persisted on the patient record
  • §164.514 Minimum Necessary: AI prompts are scrubbed to minimum-necessary payloads — Claude never sees data it doesn't need
  • §164.312 Technical Safeguards: TLS 1.2+ enforced at every layer (CloudFront, API GW, S3 bucket policy); HSTS headers via Amplify
  • Encryption: A single customer-managed KMS key encrypts all Solace data: every DynamoDB table, every S3 bucket, every Secrets Manager secret.
  • Auth: JWT HS256 (key in Secrets Manager), bcrypt-hashed PINs for clinicians, rotation script included.
  • Abuse Prevention: IP+UA-bound intake nonces (4h TTL, atomically consumed per submit), identity-keyed rate limits, content-safety guard on text uploads, multi-layer abuse-event audit with auto-blocklist (30m cooldown).
  • Audit: CloudTrail (management + S3 data events), AI attribution logs (provider, model, input/output tokens, duration) persisted on every patient record for Bedrock migration path.
  • Monitoring: 13 CloudWatch alarms on Lambda error rate, throttle count, 5xx rate, throughput, duration p99, cold starts, DDB throttles, WAF blocks.
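The intake-nonce semantics can be sketched in pure Python; the production version would use a DynamoDB conditional update for the atomic consume, and all names and the TTL constant here are illustrative:

```python
# Pure-Python sketch of IP+UA-bound, single-use intake nonces.
# Names are illustrative; production uses a DynamoDB conditional write.

NONCE_TTL = 4 * 3600  # 4-hour lifetime, per the design above

class NonceStore:
    def __init__(self) -> None:
        self._nonces: dict[str, dict] = {}

    def issue(self, nonce: str, ip: str, ua: str, now: float) -> None:
        self._nonces[nonce] = {"ip": ip, "ua": ua, "expires": now + NONCE_TTL}

    def consume(self, nonce: str, ip: str, ua: str, now: float) -> bool:
        # pop() is the "atomic" step: each nonce is consumed at most once,
        # and only counts if it is unexpired and bound to the same IP/UA.
        entry = self._nonces.pop(nonce, None)
        if entry is None or now > entry["expires"]:
            return False
        return entry["ip"] == ip and entry["ua"] == ua

store = NonceStore()
store.issue("n1", "1.2.3.4", "Mobile Safari", now=0.0)
first = store.consume("n1", "1.2.3.4", "Mobile Safari", now=10.0)   # True
second = store.consume("n1", "1.2.3.4", "Mobile Safari", now=11.0)  # False
```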

Challenges We Faced

Hallucination risk in healthcare. Getting Claude to provide supportive care guidance without crossing into diagnostic territory required extensive prompt engineering. Every prompt was scoped so the model would never give a definitive diagnosis, only supportive, evidence-backed comfort guidance with clear escalation triggers, backed by a content-safety guard and multi-layer output validation.

Multimodal fusion. Combining image and text signals into a single ESI prediction required careful architecture decisions. For non-visual complaints (chest pain, headache), we needed to prevent image features from dominating over symptom text. The two-stage approach solved this: Claude handles narrative understanding, LightGBM handles quantitative vitals, and each model does what it's best at.

Latency under 10 seconds. The full pipeline (Whisper → Claude × 5 calls → ElevenLabs × 2 calls) had to complete fast enough that a patient in pain wouldn't give up. We parallelized every independent call and hit ~7 seconds end-to-end.

Trust and UX in crisis. Designing a UI that a frightened, possibly non-English-speaking patient would actually use and trust was harder than any of the technical problems. The mic button is 96px, the dominant element. Every interaction is one tap. The voice responds in the patient's own language. We optimized for shaking hands and blurred vision, not for power users.

HIPAA from day one. Most hackathon projects bolt on security as an afterthought. We wrote setup_security.py before we wrote a single route handler. Every table has CMK encryption, every secret is in Secrets Manager, every AI call is logged with attribution metadata for the Bedrock migration path. This isn't security theater; it's the compliance posture production healthcare requires. The codebase follows HIPAA technical-safeguard requirements throughout, putting us in a strong position for a formal compliance review.


What We Learned

  • Clinical triage is structured: the Emergency Severity Index maps patients onto levels \(1\) through \(5\), where level \(1\) is an immediate life threat and level \(5\) is non-urgent. Predicting this from unstructured speech + images is a hard NLP + vision fusion problem, but the structure gives us a clear optimization target.
  • Voice changes everything in a clinical setting. A scared patient doesn't want to read a wall of text. ElevenLabs let us deliver calm, empathetic audio instructions that felt human, reducing perceived anxiety in our user tests. The voice is the product.
  • AI safety guardrails matter more in healthcare than anywhere else. Every Claude prompt had to be carefully scoped so the model would never give a definitive diagnosis. We learned that the best guardrail isn't a filter — it's a prompt that never asks for a diagnosis in the first place.
  • The hardest part wasn't the model, it was the UX. Getting a frightened patient to speak naturally into a kiosk, trust an AI, and follow instructions required thoughtful flow design above all else. We spent more time on button sizes and voice tone than on model architecture.
  • Two-stage inference is the right pattern for healthcare AI. Fast narrative triage gets the queue moving; quantitative refinement with SHAP + conformal prediction gives clinicians the evidence they need to trust the system. Neither stage alone is sufficient.

What's Next for Solace

  • AWS Bedrock migration → env-var flip from direct Anthropic API to Bedrock (adapter already built), enabling HIPAA BAA coverage on the AI layer
  • Apple Watch vitals ingestion → continuous SpO2, heart rate, and HRV monitoring during the wait, feeding the LightGBM refinement stage in real time
  • Real EHR integration → FHIR R4 API connection to Epic/Cerner for live patient matching (currently seeded)
  • Family SMS notifications → "Your family member has been triaged at ESI 3. Estimated wait: ~45 minutes."
  • Multi-hospital routing → if Hospital A has a 3-hour wait and Hospital B is 15 minutes away with a 30-minute wait, Solace suggests the transfer before the patient sits down
  • FDA 510(k) pathway → the LightGBM ensemble + conformal prediction framework is architecturally ready for Class II clinical decision support validation

Built With

python typescript react tailwindcss framer-motion fastapi aws-lambda dynamodb s3 cloudfront wafv2 openai-whisper anthropic-claude elevenlabs lightgbm shap conformal-prediction docker


Try It


Disclaimer: Solace is a decision-support proof of concept built at Hook'em Hacks 2026. Every escalation requires physical nurse confirmation. Our triage model is trained on a research dataset; it is not a certified medical device. No real patient data is used or stored, and all records in the demo are synthetic.

CLINICIAN PASSWORD: 123456
