Inspiration
Emergency rooms (ERs) across the United States are overwhelmed, handling over 155 million visits every year. An estimated $32 billion is spent annually on non-emergent ER visits while truly urgent cases get buried in triage queues. Meanwhile, diagnostic bias causes real harm: women are 50% more likely than men to be misdiagnosed during a heart attack, Black patients' pain is systematically underestimated, and 14% of young adults with strokes are misdiagnosed in the ER. Discussing this issue with my father, an ER physician and former regimental and flight surgeon, reaffirmed to me that something must be done.
I built Asclepiad because I believe an AI ER doctor can serve as a first line of defense, routing patients to the right level of care before they sit in a waiting room for hours and catching the biases that humans miss.
What it does
Asclepiad is an AI ER doctor in your pocket. Patients talk to Dr. Galen, a voice-first AI emergency physician that conducts a natural clinical interview, then provides recommendations and triages them into one of three tiers:
- Tier 1 — Self-Care: A personalized care plan with OTC medication recommendations, warning signs to watch for, and follow-up guidance — all written at a 6th-grade reading level so any patient can understand it.
- Tier 2 — Telemedicine: A structured clinical handoff packet (chief complaint, HPI, differential diagnosis, bias flags, recommended workup) sent directly to a telehealth provider.
- Tier 3 — Emergency: Immediate action steps, nearest ER routing, a pre-arrival summary for the receiving physician, and a word-for-word 911 dispatcher script.
What makes Asclepiad different is what happens behind the scenes: a clinical reasoning engine actively checks every patient against documented diagnostic bias patterns and goes beyond them, using real-time medical research to flag disparities that aren't in the reference set. It retrieves real-time evidence from FDA drug databases and PubMed clinical studies, and measures vital signs through the phone camera alone, with no additional hardware required. Every recommendation is backed by citations, and the full reasoning trace is transparent to both the patient and any downstream provider.
How I built it
Asclepiad runs on three AI engines working in concert:
- OpenAI GPT-5.2 — Powers the patient-facing conversation. Handles natural, empathetic dialogue and voice interaction via Whisper (speech-to-text) and TTS (text-to-speech). Facial emotion recognition (face-api.js) and audio tone analysis provide supplementary patient context.
- Anthropic Claude Opus 4.6 — The clinical reasoning backbone. A Python FastAPI backend sends Claude the conversation transcript, patient demographics, vital signs, FDA data, and emotional context. Claude returns a structured JSON assessment: symptom analysis, differential diagnosis, bias flags, tier recommendation, confidence score, and a full reasoning trace. Claude can also request Perplexity queries for real-time medical knowledge when needed.
- Perplexity Sonar — Real-time medical knowledge retrieval for current guidelines, drug interactions, and FDA alerts that go beyond static training data.
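The structured JSON assessment that Claude returns is the contract between the reasoning engine and everything downstream, so it gets validated before any tier recommendation reaches the patient. A minimal sketch of that parsing step; the field names here are illustrative assumptions, not the exact production schema:

```python
import json
from dataclasses import dataclass

@dataclass
class Assessment:
    """Illustrative shape of the clinical reasoning engine's output."""
    tier: int                 # 1 = self-care, 2 = telemedicine, 3 = emergency
    confidence: float         # 0.0 - 1.0
    differential: list        # differential diagnosis entries
    bias_flags: list          # detected diagnostic bias patterns
    reasoning_trace: list     # step-by-step reasoning shown in the sidebar

def parse_assessment(raw: str) -> Assessment:
    """Parse and sanity-check the model's JSON before acting on it."""
    data = json.loads(raw)
    tier = int(data["tier"])
    if tier not in (1, 2, 3):
        raise ValueError(f"invalid tier: {tier}")
    return Assessment(
        tier=tier,
        confidence=float(data["confidence"]),
        differential=data.get("differential", []),
        bias_flags=data.get("bias_flags", []),
        reasoning_trace=data.get("reasoning_trace", []),
    )

sample = ('{"tier": 2, "confidence": 0.78, '
          '"differential": ["costochondritis"], '
          '"bias_flags": [], "reasoning_trace": ["step 1"]}')
assessment = parse_assessment(sample)
```

Validating at the boundary means a malformed model response fails loudly in the backend instead of silently producing a wrong tier.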
The frontend is Next.js 14 (App Router) with Tailwind CSS and shadcn/ui. Vital signs are captured via remote photoplethysmography (rPPG), a completely contactless method that estimates heart rate and respiratory rate from the patient's webcam. Using face-api.js for real-time face detection, the system isolates the forehead region (the top 30% of the face bounding box, which has minimal muscle movement) and tracks subtle frame-to-frame changes in the green channel intensity caused by blood flow pulsations beneath the skin. A Whittaker smoother detrends the raw signal, a low-pass filter removes high-frequency noise, and peak detection extracts inter-beat intervals to compute heart rate. Respiratory rate is estimated via RIIV (respiratory-induced intensity variation). Breathing modulates the PPG pulse amplitude envelope at 0.1–0.5 Hz, which is isolated with a bandpass filter. A contact PPG mode (finger on rear camera with flash) is also available for SpO2 estimation, which remote PPG cannot reliably provide.
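The heart-rate half of that pipeline can be sketched end-to-end on a synthetic green-channel trace. This is a simplified illustration: the real system uses a Whittaker smoother for detrending, which I stand in for here with scipy's linear detrend, and the frame rate and filter band are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, detrend, find_peaks

FS = 30.0  # assumed camera frame rate in Hz

def estimate_heart_rate(green: np.ndarray, fs: float = FS) -> float:
    """Estimate heart rate (bpm) from a mean green-channel intensity trace."""
    clean = detrend(green)  # remove slow drift (stand-in for Whittaker smoother)
    sos = butter(3, [0.7, 3.0], btype="bandpass", fs=fs, output="sos")
    pulse = sosfiltfilt(sos, clean)            # keep the 42-180 bpm band
    peaks, _ = find_peaks(pulse, distance=fs / 3.0)  # cap at 180 bpm
    ibi = np.diff(peaks) / fs                  # inter-beat intervals in seconds
    return 60.0 / ibi.mean()

# Synthetic 20 s trace: 1.2 Hz pulsation (72 bpm) + drift + noise
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / FS)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * t + 0.1 * rng.standard_normal(t.size)
hr = estimate_heart_rate(trace)
```

Respiratory rate via RIIV follows the same pattern with a 0.1–0.5 Hz bandpass applied to the pulse amplitude envelope instead of the raw trace.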
Clinical data comes from two free public APIs requiring no keys: FDA openFDA for drug adverse events, labels, interactions, and recalls, and PubMed NCBI E-utilities for clinical studies and diagnostic accuracy research.
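Both sources are plain HTTPS endpoints, so the integration reduces to query construction. A sketch using the documented public endpoints; the specific fields queried are illustrative:

```python
from urllib.parse import urlencode

OPENFDA_LABEL = "https://api.fda.gov/drug/label.json"
PUBMED_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def openfda_label_url(drug: str, limit: int = 1) -> str:
    """Build an openFDA drug-label query by generic name (no API key needed)."""
    params = urlencode({"search": f'openfda.generic_name:"{drug}"', "limit": limit})
    return f"{OPENFDA_LABEL}?{params}"

def pubmed_search_url(term: str, retmax: int = 5) -> str:
    """Build an NCBI E-utilities ESearch query against PubMed, JSON output."""
    params = urlencode({"db": "pubmed", "term": term,
                        "retmax": retmax, "retmode": "json"})
    return f"{PUBMED_ESEARCH}?{params}"
```

The backend fetches these URLs with an async HTTP client and attaches the results as citations in the reasoning trace.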
For bias detection, six evidence-based diagnostic bias patterns (each backed by peer-reviewed literature like Mehta et al. Circulation 2016, Hoffman et al. PNAS 2016, etc.) serve as reference examples in Claude's system prompt. The prompt explicitly instructs Claude that these are not an exhaustive list. Claude must also flag bias patterns it recognizes, such as underdiagnosis of depression in men, delayed autism diagnosis in women, sickle cell pain crises undertreated in Black patients, and others. A programmatic pre-check against the reference patterns also runs before Claude's analysis, ensuring the documented patterns are always surfaced even if the model misses them.
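The programmatic pre-check is deliberately simple: deterministic demographic-and-keyword matching against the reference patterns, so the documented disparities surface even if the model misses them. A sketch using three of the patterns from the write-up; the matching rules shown are simplified assumptions:

```python
REFERENCE_PATTERNS = [
    {
        "name": "Cardiac symptoms underrecognized in women",
        "citation": "Mehta et al., Circulation 2016",
        "match": lambda d, s: d.get("sex") == "female"
                 and any(k in s for k in ("chest pain", "chest pressure",
                                          "shortness of breath")),
    },
    {
        "name": "Pain underestimated in Black patients",
        "citation": "Hoffman et al., PNAS 2016",
        "match": lambda d, s: d.get("race") == "Black" and "pain" in s,
    },
    {
        "name": "Stroke misdiagnosed in young adults",
        "citation": "documented ER miss rate in young adults",
        "match": lambda d, s: d.get("age", 99) < 45
                 and any(k in s for k in ("numbness", "dizziness",
                                          "slurred speech")),
    },
]

def precheck_bias(demographics: dict, symptom_text: str) -> list:
    """Return documented bias patterns that apply, run before Claude's analysis."""
    text = symptom_text.lower()
    return [p["name"] for p in REFERENCE_PATTERNS if p["match"](demographics, text)]

flags = precheck_bias({"sex": "female", "age": 52}, "Crushing chest pain and nausea")
```

Claude then receives these pre-check hits alongside its open-ended instruction, so the fixed list is a floor, not a ceiling.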
During development, I frequently consulted my father to validate that responses were clinically sound and accurate.
Challenges I ran into
- Coordinating three AI engines without introducing unacceptable latency. The patient expects a natural conversational pace, but behind every response, Claude is analyzing the full transcript while FDA and PubMed queries run in parallel. I parallelized tool calls (e.g., drug label + adverse event queries for each medication run concurrently) and stream reasoning steps to keep the interaction feeling responsive.
- PPG signal quality was a hard technical challenge. Phone cameras vary wildly in frame rate, exposure, and color response. Getting a reliable heart rate from a noisy color-channel signal (green in remote mode, red in contact mode) required iterating on bandpass filter parameters, peak detection thresholds, and a signal quality scoring system (periodicity + amplitude metrics) that knows when to reject bad data rather than show a wrong number.
- Diagnostic bias detection required careful calibration. The system needs to flag real risks without incessant false alarms. I anchored six reference patterns to specific peer-reviewed studies with quantified miss-rate multipliers, then instructed Claude to go beyond them using its own medical knowledge, always citing relevant research.
- Getting the LiveAvatar API to sync correctly was a persistent headache. It required careful timing coordination to keep the live avatar realistic without burning through credits (e.g., falling back to a static image during idle moments).
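The parallelization behind the latency fix is plain asyncio fan-out: the per-medication drug-label and adverse-event lookups fire concurrently instead of back-to-back. A self-contained sketch with the network calls stubbed out by short sleeps:

```python
import asyncio
import time

async def fetch_drug_label(drug: str) -> dict:
    """Stub for an openFDA label query; the real version awaits an HTTP client."""
    await asyncio.sleep(0.2)
    return {"drug": drug, "source": "label"}

async def fetch_adverse_events(drug: str) -> dict:
    """Stub for an openFDA adverse-event query."""
    await asyncio.sleep(0.2)
    return {"drug": drug, "source": "adverse_events"}

async def lookup_medication(drug: str) -> list:
    # Both queries run concurrently: total wait ~0.2 s instead of ~0.4 s
    return await asyncio.gather(fetch_drug_label(drug),
                                fetch_adverse_events(drug))

start = time.perf_counter()
label, events = asyncio.run(lookup_medication("ibuprofen"))
elapsed = time.perf_counter() - start
```

The same `gather` pattern extends across medications, so total tool-call latency stays close to the slowest single query rather than the sum of all of them.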
Accomplishments that I'm proud of
- A fully functional AI ER doctor that produces structured, clinically useful outputs: not just chat responses but care plans, handoff packets, and ER summaries a real provider could act on, all written at a 6th-grade reading level for patient accessibility.
- Diagnostic bias detection that goes beyond a fixed checklist. Six peer-reviewed reference patterns anchor the system, but Claude is explicitly instructed to flag any bias pattern it recognizes, making the detection open-ended rather than limited to a hardcoded list. A programmatic pre-check ensures the documented patterns are never missed.
- Vital signs from a phone camera with no additional hardware. Heart rate, SpO2, respiratory rate, and HRV extracted in 20 seconds using only the device’s camera and flash.
- A transparent reasoning trace that shows patients and providers exactly how the AI reached its conclusion. Every analysis step, every evidence citation, and every bias check is timestamped and visible in a real-time sidebar.
- Integration of real clinical data sources (FDA openFDA and PubMed NCBI, both free and requiring no API keys) with live citations, making every recommendation evidence-based and verifiable.
What I learned
- Over-triage is safe; under-triage is dangerous. This principle shaped every architectural decision. When the system is uncertain, it escalates: sending someone to the ER who didn't need to go is inconvenient, but missing a pulmonary embolism is fatal.
- Building a multi-agent AI system taught me that orchestration is harder than any individual model call. The challenge is making GPT, Claude, and Perplexity work together with acceptable latency and consistent clinical reasoning.
- Bias detection can't be a checklist. A fixed list of patterns will always be incomplete. The real power comes from using documented patterns as examples that teach the LLM what to look for, then letting it generalize to disparities not in the reference set.
- Above all, I learned how to create something end-to-end, how to design a tech stack, and about so many different APIs!
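The over-triage principle translates directly into code: when confidence falls below a threshold, the tier gets bumped up, never down. A minimal sketch (the 0.7 threshold is an illustrative assumption, not the production value):

```python
def final_tier(recommended_tier: int, confidence: float,
               threshold: float = 0.7) -> int:
    """Escalate one tier when the model is not confident; never de-escalate."""
    if confidence < threshold and recommended_tier < 3:
        return recommended_tier + 1
    return recommended_tier
```

Asymmetric by design: a false escalation costs a patient an unnecessary trip, while a false de-escalation could cost a life.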
What's next for Asclepiad
- Greater robustness — Improving remote PPG accuracy across diverse lighting conditions and skin tones, hardening the multi-engine orchestration against API failures and edge cases, and running clinical validation studies with emergency medicine physicians to measure triage accuracy against standard protocols like ESI.
- Implementing the telemedicine handoff — Right now Tier 2 generates a structured handoff packet. The next step is a live telemedicine integration where the patient is connected directly to a provider who receives the full clinical summary, reasoning trace, and bias flags in real time — no re-explaining symptoms from scratch.
- Customization of avatar and models — Letting patients choose Dr. Galen's appearance, voice, and communication style. Some patients respond better to a calm, slow-speaking physician; others want direct and fast. I also want to let healthcare systems plug in their own preferred LLM backends and clinical protocols.
- Multi-language support — ER misdiagnosis disproportionately affects non-English speakers, and a voice-first AI ER doctor is uniquely positioned to close that gap. I want to support full conversation, TTS, and care plan generation in Spanish, Mandarin, and other high-need languages.
Built With
- anthropic-claude-opus-4.6
- face-api.js
- fastapi
- fda-openfda-api
- heygen
- javascript
- next.js
- openai-gpt-5.2
- openai-tts
- openai-whisper
- perplexity-sonar
- photoplethysmography-(ppg)
- pubmed-ncbi-e-utilities
- python
- railway
- react
- shadcn/ui
- tailwind-css
- typescript
- vercel