Inspiration
When a loved one is rushed into emergency surgery, families are often left in silence.
Clinicians focus on stabilizing the patient, so updates are brief, delayed, or filled with medical jargon. That gap creates anxiety for families and constant interruptions for care teams.
We wanted to make critical care more transparent — giving families real-time clarity without adding work for providers.
What it does
AtriaAI is a real-time, voice-enabled clinical copilot that keeps families informed during emergencies and guides them through recovery after discharge.
In our demo, a college student undergoes an emergency appendectomy. Because she previously received a kidney transplant and takes immunosuppressants, her infection risk is higher.
Her mother — located remotely — accesses a secure dashboard that shows:
- Live surgery updates in plain language
- Contextualized vital sign changes
- Confidence + stability scoring
- Personalized recovery plans
- Monitoring and cost tradeoffs
After surgery, the system evaluates post-care pathways — standard recovery, extended monitoring, or ICU observation — modeling complication risk, healing timelines, and out-of-pocket costs.
When the family selects ICU monitoring, the care plan updates instantly.
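The pathway comparison above can be pictured as a small data model: each option carries a modeled complication risk, healing timeline, and cost. This is only a sketch with illustrative numbers and hypothetical field names, not the actual model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pathway:
    """One post-care option with its modeled tradeoffs (numbers are illustrative)."""
    name: str
    complication_risk: float  # probability of a complication
    recovery_days: int        # expected healing timeline
    out_of_pocket: int        # estimated family cost in USD

PATHWAYS = [
    Pathway("standard recovery", 0.12, 14, 800),
    Pathway("extended monitoring", 0.07, 16, 2100),
    Pathway("ICU observation", 0.03, 18, 5400),
]

def rank_by_safety(pathways):
    """Order options from lowest to highest complication risk."""
    return sorted(pathways, key=lambda p: p.complication_risk)
```

When the family picks an option, the selected `Pathway` becomes the new care plan, which is what makes the instant update possible.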
How we built it
We built a multi-agent reasoning system that breaks medical decision support into structured components:
- Hypothesis generation → Recovery + monitoring pathways
- Evidence retrieval → Guidelines + literature grounding
- Safety validation → Risk adjustment for comorbidities
- Response composition → Clear family-facing updates
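The four stages above run as a chain of small, single-purpose agents that pass shared state forward. Our actual orchestration uses LangGraph; this standard-library sketch with stubbed agent functions (all names hypothetical) just shows the structure:

```python
from dataclasses import dataclass, field

@dataclass
class CareContext:
    """Shared state handed from agent to agent (hypothetical fields)."""
    patient: dict
    pathways: list = field(default_factory=list)   # hypothesis generation
    evidence: list = field(default_factory=list)   # evidence retrieval
    risks: dict = field(default_factory=dict)      # safety validation
    family_update: str = ""                        # response composition

def generate_hypotheses(ctx: CareContext) -> CareContext:
    # In the real system this step calls an LLM; here we stub the output.
    ctx.pathways = ["standard recovery", "extended monitoring", "ICU observation"]
    return ctx

def retrieve_evidence(ctx: CareContext) -> CareContext:
    # Stub for guideline/literature lookup (hybrid search in production).
    ctx.evidence = [f"guideline for {p}" for p in ctx.pathways]
    return ctx

def validate_safety(ctx: CareContext) -> CareContext:
    # Adjust baseline risk for comorbidities, e.g. immunosuppression.
    bump = 0.15 if "immunosuppressants" in ctx.patient.get("meds", []) else 0.0
    ctx.risks = {p: round(0.05 + bump, 2) for p in ctx.pathways}
    return ctx

def compose_response(ctx: CareContext) -> CareContext:
    safest = min(ctx.risks, key=ctx.risks.get)
    ctx.family_update = f"Recommended pathway: {safest}."
    return ctx

PIPELINE = [generate_hypotheses, retrieve_evidence, validate_safety, compose_response]

def run(patient: dict) -> CareContext:
    ctx = CareContext(patient=patient)
    for agent in PIPELINE:
        ctx = agent(ctx)
    return ctx
```

Keeping each agent's output in typed state (Pydantic models in our backend) is what lets us validate every stage instead of trusting one end-to-end LLM answer.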
Simulated EHR data drives real-time updates to risk scores, recovery timelines, and monitoring recommendations.
The frontend visualizes reasoning through update cards, pathway comparisons, and live care timelines — with voice interaction layered on top.
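A concrete example of how a simulated vital-sign reading can drive a family-facing update card: the sketch below uses a hypothetical heart-rate range and confidence values, not our clinical thresholds:

```python
from dataclasses import dataclass

@dataclass
class UpdateCard:
    """A family-facing card: plain language plus a confidence score."""
    headline: str
    detail: str
    confidence: float

# Hypothetical reference range; real thresholds would come from clinical guidance.
HEART_RATE_RANGE = (60, 100)

def heart_rate_card(bpm: int) -> UpdateCard:
    lo, hi = HEART_RATE_RANGE
    if lo <= bpm <= hi:
        return UpdateCard("Heart rate is steady",
                          f"Current reading: {bpm} bpm, within the expected range.",
                          0.9)
    return UpdateCard("Heart rate is being watched",
                      f"Current reading: {bpm} bpm; the care team is monitoring this.",
                      0.6)
```

In the app, cards like this are pushed to the dashboard over a WebSocket as the simulated EHR stream changes.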
Tech Stack
Frontend
- React + TypeScript + Vite
- Tailwind CSS + Radix UI + MUI
- Framer Motion animations
- Recharts data visualization
- Web Speech API + ElevenLabs TTS
Backend
- Python + FastAPI
- WebSockets (real-time updates)
- LangGraph (agent orchestration)
- Pydantic validation
AI & Data
- OpenAI (GPT-4o-mini / GPT-4 reasoning)
- Jina AI embeddings
- Elasticsearch hybrid search
- Synthetic FHIR EHR data (Synthea)
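Hybrid search blends lexical (BM25) and vector results. One standard way to combine the two ranked lists, which Elasticsearch supports natively, is reciprocal rank fusion (RRF); here is a standard-library sketch over two hypothetical result lists:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked result lists: each document scores sum(1 / (k + rank)).

    `rankings` is a list of lists of document ids, best first; k=60 is the
    conventional damping constant.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical BM25 and embedding results for one query:
bm25 = ["appendectomy-guideline", "transplant-infection", "general-recovery"]
vector = ["transplant-infection", "immunosuppression-risk", "appendectomy-guideline"]
fused = reciprocal_rank_fusion([bm25, vector])
```

Documents that appear near the top of both lists (here, the transplant-infection guideline) rise to the top of the fused ranking, which matters for a patient whose history changes which evidence is relevant.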
Infra
- Modal (scalable inference compute)
- Docker (Elasticsearch local deployment)
- Twilio (SMS alerts)
Challenges we ran into
- Structuring AI reasoning safely instead of relying on one LLM output
- Modeling realistic medical risk without overwhelming families
- Designing confidence scoring that reflects real physiological events
Accomplishments we’re proud of
- Built a true multi-agent clinical reasoning system
- Modeled safety vs monitoring vs cost tradeoffs in real time
- Created emotionally intelligent, transparent medical updates
What we learned
Healthcare AI must be structured, observable, and explainable — not just conversational.
Separating reasoning into agents dramatically improved safety and clarity, especially in high-stress care moments.
What’s next
Next, we plan to integrate live FHIR hospital data and expand recovery modeling to trauma, cardiac, and oncology care.
Our goal is to become the real-time alignment layer between families and care teams.
Built With
- docker
- elasticsearch-8.x
- elevenlabs
- fastapi
- fhir
- framer-motion
- gpt-4
- jina-ai
- material-ui
- modal
- openai-gpt-4o-mini
- pydantic
- python
- radix-ui
- railway
- react-18
- recharts
- synthea
- tailwind-css
- typescript
- vercel
- vite
- web-speech-api
- websockets