Inspiration
The spark for PatientPulse came from a deeply human frustration: patients are discharged from hospitals and then largely disappear from view.
Post-discharge care is in a silent crisis. A patient leaves the hospital on a Friday, and their care team may not see them again for two weeks. In those fourteen days, their glucose trends spike, medication adherence slips, and activity drops significantly—yet no one knows until they’re back in the ER. A doctor might never see that a diabetic patient’s glucose has been rising at 2 AM, or that they quietly stopped taking their metformin three days ago, until something goes wrong.
The data exists. Wearables generate it. EHR systems store it. Patients report it. But no system connects these streams continuously, reasons across them, and surfaces what matters before a clinician has to ask. We were studying Medical Informatics and saw how standards like HL7 FHIR R4 were designed to solve exactly this problem, yet most healthcare software still treats EHRs, wearables, and patient communication as disconnected silos.
We set out to collapse those silos into a single, continuously updated clinical profile—one that proactively surfaces what a busy clinician needs to know before they even ask.
What it does
PatientPulse is a dual-interface healthtech platform that builds a continuously updated clinical profile for each patient, assembled in real time from three synchronized data streams:
- Hospital EHR data via HL7 FHIR R4 (diagnoses, medications, labs, vitals, allergies)
- Wearable sensor data (72-hour heart rate, continuous glucose monitoring, daily step count)
- AI-powered daily patient check-ins via a companion chat interface
The platform serves two users simultaneously from a shared data layer:
Dr. Priya (Clinician Dashboard) — sees a real-time clinical picture with live wearable vitals, proactive AI alerts, streaming diagnostic analysis with cited FHIR observations, and medication scenario simulations (e.g., "what happens if we add a GLP-1 agonist?").
Maria (Patient Companion) — sees a mobile-optimized interface with her personalized Recovery Score, medication reminders with streak tracking, a warm AI companion for daily symptom check-ins, and a one-tap care team escalation button.
Key insight: Every action Maria takes — confirming a medication, describing a symptom — writes a FHIR Observation back to the server. When Dr. Priya opens the dashboard, she's looking at three weeks of Maria's recovery before Maria says a word.
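The write-back loop above can be sketched in a few lines. This is an illustrative Python fragment, not PatientPulse's actual code: the text-only `code` element and the `valueBoolean` choice are assumptions, though the overall resource shape follows FHIR R4's Observation.

```python
from datetime import datetime, timezone

def adherence_observation(patient_id: str, taken: bool) -> dict:
    """Build a minimal FHIR R4 Observation recording a medication check-in.

    Illustrative sketch: the text-only code and valueBoolean are
    assumptions, not PatientPulse's actual schema.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": "Medication adherence check-in"},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
        "valueBoolean": taken,
    }

# A client would POST this to the HAPI FHIR server, e.g.:
# requests.post(f"{FHIR_BASE}/Observation", json=adherence_observation("123", True))
```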
The Recovery Score is computed as:
$$\text{Score} = (\text{Adherence} \times 0.4) + (\text{Symptom Trend} \times 0.4) + (\text{Engagement} \times 0.2)$$
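As a sanity check, the weighted sum is a one-liner. This sketch assumes all three inputs are pre-normalized to a 0–100 scale; the clamping is an illustrative safeguard, not necessarily the production behavior.

```python
def recovery_score(adherence: float, symptom_trend: float, engagement: float) -> float:
    """Recovery Score per the formula above; inputs assumed on a 0-100 scale."""
    # Clamp each component into [0, 100] before weighting (illustrative safeguard).
    a, s, e = (max(0.0, min(100.0, v)) for v in (adherence, symptom_trend, engagement))
    return 0.4 * a + 0.4 * s + 0.2 * e

# e.g. perfect adherence, improving symptoms, moderate engagement:
# 0.4*100 + 0.4*80 + 0.2*50 = 82.0
```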
How we built it
PatientPulse is a microservices stack fully containerized with Docker Compose, with five services working in concert:
| Layer | Technology | Role |
|---|---|---|
| FHIR Store | HAPI FHIR R4 (Java) | Source of truth — all clinical data |
| Database | PostgreSQL 15 | Persistence for HAPI FHIR |
| Cache | Redis 7 | FHIR context (5 min TTL) + conversation history (24h TTL) |
| Backend | Python 3.11 + FastAPI | AI orchestration, FHIR client, API gateway |
| Frontend | React 18 + TypeScript + Vite | Clinician dashboard + patient companion |
The AI layer uses 6 specialized agents, governed by a strict golden rule: LLMs are never used for numeric computation, drug interactions, anomaly detection, or safety-critical escalation decisions.
- OrchestratorAgent — routes clinician queries to specialist agents
- Clinical Insights Agent (LLM) — Claude Sonnet performs grounded clinical trend analysis, citing every data point with a FHIR Observation ID
- AlertAgent (Rule-Based) — 4 deterministic threshold rules generate proactive clinical flags (e.g., nocturnal HR > 100 bpm → HIGH alert)
- Treatment Impact Simulator (LLM + Deterministic) — drug interaction check via RxNorm API + deterministic pharmacological projection, with Claude writing the narrative only
- Vitals Monitoring Agent (LLM + State Machine) — extracts structured symptom data from patient messages; escalation is always a deterministic state machine, never an LLM decision
- WearableAgent (Rule-Based) — threshold anomaly detection on streaming wearable data
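A deterministic threshold rule like the ones powering AlertAgent and WearableAgent can be this simple. The nocturnal-HR example comes from the list above; the 00:00–06:00 window and the return shape are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    severity: str
    message: str

def nocturnal_hr_rule(hour: int, heart_rate: float) -> Optional[Alert]:
    """One deterministic rule: nocturnal HR > 100 bpm -> HIGH alert.

    The 00:00-06:00 "nocturnal" window is an assumption; the threshold
    comes from the rule described above. No LLM is involved.
    """
    if 0 <= hour < 6 and heart_rate > 100:
        return Alert("HIGH", f"Nocturnal heart rate {heart_rate:.0f} bpm exceeds 100 bpm")
    return None
```

Because the rule is pure and deterministic, it is trivially unit-testable — exactly the property the golden rule is buying.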
The frontend renders AI responses as Server-Sent Events (SSE), streaming tokens in real time. FHIR data is assembled via a PatientContextAssembler that fetches 8 resource types in parallel and caches the result in Redis for 5 minutes.
PHI Protection: A PHIRedactionValidator strips patient name, DOB, and address before every Claude API call. The AI always receives age and sex — never identifiable information.
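The redaction idea reduces to "keep only age and sex." A minimal sketch, assuming the input is a FHIR Patient resource (`name`, `birthDate`, `address`, `gender` are standard FHIR fields); the age computation and output shape are illustrative, not the actual PHIRedactionValidator.

```python
from datetime import date

def redact_phi(patient: dict) -> dict:
    """Strip identifiable fields; return only age and sex for the LLM call.

    Sketch of the PHIRedactionValidator idea -- field names follow the
    FHIR Patient resource; the exact output shape is an assumption.
    """
    birth = date.fromisoformat(patient["birthDate"])
    today = date.today()
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
    return {"age": age, "sex": patient.get("gender", "unknown")}
```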
Challenges we ran into
FHIR complexity was the first wall we hit. The HAPI FHIR server runs database migrations on cold start — a 60-second initialization window that silently broke our Docker health checks and caused cascading startup failures we initially misdiagnosed as network issues.
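One way to survive that cold-start window is to poll the standard FHIR `/metadata` (CapabilityStatement) endpoint before declaring the service ready. This is a generic readiness-probe sketch, not the team's actual fix; the 120-second budget is an assumption sized to HAPI's roughly 60-second migration window.

```python
import time
import urllib.error
import urllib.request

def wait_for_fhir(base_url: str, timeout_s: float = 120.0, interval_s: float = 5.0) -> bool:
    """Poll {base_url}/metadata until the FHIR server answers or time runs out.

    /metadata is FHIR's standard capability endpoint, so a 200 response
    means migrations have finished and the server is actually serving.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/metadata", timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server still starting; retry after the interval
        time.sleep(interval_s)
    return False
```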
Deterministic vs. LLM boundary design was philosophically challenging. We had long debates about where to draw the line. In a healthcare context, having an LLM decide whether to escalate a patient to emergency care felt deeply wrong — but figuring out the right hybrid (LLM extracts structured data, state machine decides action) took real iteration.
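The hybrid boundary can be made concrete: the LLM emits a structured dict, and only a deterministic function maps it to an action. The red-flag list and severity thresholds below are illustrative assumptions, not PatientPulse's actual clinical logic.

```python
from enum import Enum

class Action(Enum):
    LOG = "log"
    NOTIFY_CARE_TEAM = "notify_care_team"
    ESCALATE_EMERGENCY = "escalate_emergency"

# Assumed red flags and thresholds -- illustrative, not clinical guidance.
RED_FLAGS = {"chest pain", "shortness of breath"}

def decide(extracted: dict) -> Action:
    """Deterministic escalation: the LLM produced `extracted`, nothing more.

    `extracted` is assumed to look like {"symptoms": [...], "severity": 0-10}.
    The Action is always chosen here, never by the model.
    """
    symptoms = {s.lower() for s in extracted.get("symptoms", [])}
    severity = extracted.get("severity", 0)
    if symptoms & RED_FLAGS or severity >= 8:
        return Action.ESCALATE_EMERGENCY
    if severity >= 5:
        return Action.NOTIFY_CARE_TEAM
    return Action.LOG
```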
FHIR citation in streaming responses was a technical challenge unique to our project. We needed the DiagnosticAgent to cite specific FHIR Observation IDs inline (Obs:#ba1c-001) within a streaming SSE response, then have the frontend parse those citations after the __DONE__: marker arrived without blocking the token stream.
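The post-stream parse is essentially a split on the marker plus a regex over the narrative. This sketch assumes the `Obs:#<id>` citation syntax shown above and a `__DONE__:` marker followed by metadata; the exact payload after the marker is an assumption.

```python
import re

# Matches inline citations like Obs:#ba1c-001 (syntax from the example above).
CITATION_RE = re.compile(r"Obs:#([A-Za-z0-9-]+)")

def split_stream(buffer: str) -> tuple[str, list[str]]:
    """Split an assembled SSE buffer into narrative text and cited Observation IDs.

    Runs only after streaming completes, so token rendering is never blocked.
    """
    text, _sep, _meta = buffer.partition("__DONE__:")
    return text, CITATION_RE.findall(text)
```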
The PATIENT_ID bridge between the HAPI FHIR server and the Vite frontend environment was surprisingly fiddly — HAPI assigns numeric IDs dynamically, and keeping PATIENT_ID in the root .env in sync with VITE_PATIENT_ID in frontend/.env required a seed script that writes back to .env automatically.
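The write-back half of that seed script is just a `.env` upsert. A minimal sketch; the real script would also create the patient via a FHIR POST, which is omitted here, and the two sync calls in the comment are illustrative.

```python
from pathlib import Path

def write_env_var(env_path: Path, key: str, value: str) -> None:
    """Upsert KEY=value in a .env file, preserving all other lines."""
    lines = env_path.read_text().splitlines() if env_path.exists() else []
    # Drop any existing assignment of this key, then append the new one.
    lines = [line for line in lines if not line.startswith(f"{key}=")]
    lines.append(f"{key}={value}")
    env_path.write_text("\n".join(lines) + "\n")

# After seeding, the script would sync both files, e.g.:
# write_env_var(Path(".env"), "PATIENT_ID", new_id)
# write_env_var(Path("frontend/.env"), "VITE_PATIENT_ID", new_id)
```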
Accomplishments that we're proud of
- Safety-first AI architecture: the LLM is never in the escalation decision path. All safety-critical decisions run through deterministic logic first.
- Built a fully standards-compliant FHIR R4 data layer using real clinical coding systems: SNOMED CT for diagnoses, RxNorm for medications, LOINC for observations. These aren't hand-rolled mocks — the records follow the real schema, populated with Synthea-generated patients.
- Real-time streaming with grounded citations: every AI diagnostic response cites the exact FHIR Observation IDs that support each claim, making the AI's reasoning auditable.
- Two complete, production-quality UIs — a clinician dashboard with wearable analytics and a mobile-optimized patient companion — all from one shared FHIR data layer.
- Built the entire stack in hackathon time while maintaining a thoughtful HIPAA notes section documenting exactly what would need to change for production.
What we learned
- FHIR is powerful but unforgiving — the resource model forces you to think carefully about data ownership and update semantics. Every patient action that writes a FHIR Observation made us think about audit trails and provenance.
- Hybrid AI/deterministic systems are the right model for healthcare — we came in thinking "AI will do everything" and left believing that Claude's greatest value in clinical settings is natural language understanding and synthesis, not numeric judgment.
- Streaming UX changes how people perceive AI — watching a diagnostic analysis appear token-by-token, with citations populating in real time, felt qualitatively different from waiting for a response. It made the AI feel like a colleague thinking out loud.
- Redis TTL design is architecture — the choice of 5-minute context cache vs. 24-hour conversation history isn't a minor config detail; it's a product decision about data freshness vs. API cost.
What's next for PatientPulse
- SMART on FHIR OAuth2 to replace the demo Bearer tokens with a real authentication layer compatible with hospital identity providers
- Parallel agent fan-out — the OrchestratorAgent is designed for parallel specialist routing; production would run DiagnosticAgent, WearableAgent analysis, and AlertAgent simultaneously
- Full HIPAA AuditEvent trail — the FHIR AuditEvent schema is already designed; wiring it up is the next step
- Real wearable integrations via Apple HealthKit and Google Health Connect APIs
- Population-level dashboards — extending the unified patient profile from one patient to a panel view for attending physicians managing post-discharge cohorts
- Federated learning — training anomaly detection models on wearable data without centralizing PHI
Built With
- anthropic
- claude
- docker
- fastapi
- fhir
- hapi
- hl7
- java
- javascript
- loinc
- postgresql
- python
- react
- recharts
- redis
- rxnorm
- snomed
- typescript
- vite
- zustand
