MediVoice AI: The Live Patient-Doctor Link

Inspiration

Medical emergencies are high-stakes, but the intake process is often a serial bottleneck. Nurses must listen to callers one by one, remember symptoms, and manually rank acuity. MediVoice AI parallelizes this process, allowing anyone to access immediate care via a standard phone call while automatically re-ranking the emergency queue based on real-time risk.

What it does

MediVoice converts live PSTN audio into structured clinical data. It listens to patients, extracts key medical drivers, and assigns a 0–100 risk score and P1–P3 priority in real time. This allows high-acuity cases to float to the top of a nurse's dashboard before the caller even hangs up.
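A minimal sketch of the score-to-priority banding. The cutoffs at 70 and 40 are illustrative assumptions; the write-up specifies only that a 0–100 score maps to P1–P3:

```python
def assign_priority(risk_score: int) -> str:
    """Map a 0-100 triage risk score to a P1-P3 priority band.

    The band cutoffs (70 and 40) are illustrative, not the project's
    actual thresholds.
    """
    if not 0 <= risk_score <= 100:
        raise ValueError("risk score must be between 0 and 100")
    if risk_score >= 70:
        return "P1"  # high acuity: floats to the top of the queue
    if risk_score >= 40:
        return "P2"  # moderate acuity
    return "P3"      # low acuity
```

Sorting the dashboard by `(priority, -risk_score)` then keeps the highest-risk callers at the top within each band.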

How we built it

  • Voice Gateway: Twilio Voice captures live speech via a standard phone line and streams data to our backend.
  • Clinical Brain: AWS Bedrock running Claude 3 Haiku in us-east-1. We chose Haiku for its sub-second inference latency, which is mandatory for responsive voice-turn analysis.
  • Live Infrastructure: We implemented Server-Sent Events (SSE) to create a reactive UI pipeline, pushing AI insights to the dashboard the millisecond they are generated without the overhead of REST polling.
  • Interoperability: Triage results are exported as HL7 FHIR-compliant RiskAssessment resources, bridging the gap between AI and legacy EHR systems like Epic or Cerner.
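The last two bullets fit together: each triage result is wrapped as a FHIR RiskAssessment and pushed to the dashboard as an SSE frame. A minimal sketch, where the resource shape follows FHIR R4 but the helper names and patient-id scheme are our own illustration:

```python
import json


def fhir_risk_assessment(patient_id: str, score: int, priority: str) -> dict:
    """Build a minimal HL7 FHIR R4 RiskAssessment resource.

    status/subject/prediction.probabilityDecimal follow the FHIR R4
    spec; mapping the 0-100 score onto probabilityDecimal and putting
    the P1-P3 band in outcome.text are illustrative choices.
    """
    return {
        "resourceType": "RiskAssessment",
        "status": "final",
        "subject": {"reference": f"Patient/{patient_id}"},
        "prediction": [{
            "outcome": {"text": f"Triage priority {priority}"},
            "probabilityDecimal": score / 100,  # 0-100 score -> 0.0-1.0
        }],
    }


def sse_frame(event: str, payload: dict) -> str:
    """Serialize one Server-Sent Events frame per the WHATWG EventSource
    wire format: 'event:' and 'data:' fields, terminated by a blank line."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"
```

On the frontend, a standard `EventSource` subscribes to the stream and re-renders the queue as each frame arrives, with no polling loop.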

Challenges faced

  • The Pivot: We originally prototyped on Google Gemini, but quickly hit free-tier rate limits and credit quotas. We migrated the entire triage engine to AWS Bedrock mid-development to ensure production-grade reliability and higher throughput.
  • Audio Logic: Handling medical crises in TwiML meant lifting the default 10-second gather cap. We configured a 60-second silence timeout and a 3,600-second speech window so a patient's report is never cut off mid-sentence.
  • Connectivity: Twilio webhooks need a publicly reachable URL, so we routed them through Cloudflare Tunnels to our local FastAPI environment during the rapid-iteration phase.
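As a rough illustration of the resulting TwiML, the attribute values mirror the timeouts above, while the action paths and the use of `<Record>` for the hour-long capture window are our assumptions, not the project's exact verbs:

```xml
<!-- Illustrative TwiML; endpoint paths are hypothetical -->
<Response>
  <Gather input="speech" action="/triage/turn" method="POST"
          timeout="60" speechTimeout="auto">
    <Say>Please describe your symptoms. Take as long as you need.</Say>
  </Gather>
  <!-- Long capture window: maxLength is in seconds -->
  <Record action="/triage/recording" timeout="60" maxLength="3600"/>
</Response>
```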

What we learned

We mastered asynchronous systems architecture and learned the necessity of high-throughput, low-latency models. Transitioning to a scalable AWS stack taught us that in healthcare AI, "intelligence" is secondary to availability and latency.


Built With

  • AI: AWS Bedrock (Claude 3 Haiku), Google OR-Tools
  • Backend: FastAPI, Python, Twilio API
  • Frontend: React (Vite), Tailwind CSS, Lucide-React
  • Infrastructure: AWS (us-east-1), SSE (Server-Sent Events), Cloudflare Tunnels
  • Standards: HL7 FHIR (RiskAssessment)
