Inspiration
Every doctor's appointment begins the same way — "What brings you in today?" The doctor spends the first 5-10 minutes collecting basic information they could have had before walking in. Patients fill out paper forms they don't understand. Non-English speakers struggle. Elderly patients get frustrated. Time is wasted on both sides.
I asked: what if the patient could have that conversation BEFORE the appointment?
What it does
MediVoice is a voice-first medical intake assistant that replaces paper forms with a natural AI conversation before doctor appointments.
- Patient logs in and clicks Start Voice Check-In
- Amazon Nova 2 Sonic conducts a warm, empathetic intake conversation — asking about symptoms, pain level, medications, and allergies one question at a time
- Patient clicks End Check-In
- Amazon Nova Lite analyzes the full transcript and generates a structured clinical brief instantly
- Doctor logs into a separate secure portal and sees all of today's patients, each with a complete clinical brief ready
How I built it
- Frontend: React + TailwindCSS — separate portals for patients and doctors
- Voice AI: Amazon Nova 2 Sonic via AWS Bedrock WebSocket streaming API
- Report AI: Amazon Nova Lite via AWS Bedrock InvokeModel API
- Backend: FastAPI (Python) with JWT authentication
- Storage: File-based persistence for demo purposes
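As a rough illustration of the Nova Lite report step, here is how the InvokeModel request might be assembled. The prompt text, inference settings, and helper names are my own, and the request/response schema is the Nova messages format as I understand it, so treat this as a sketch rather than the project's actual code:

```python
import json

def build_brief_request(transcript: str) -> str:
    """Assemble an InvokeModel request body asking Nova Lite to turn the
    raw intake transcript into a structured clinical brief."""
    body = {
        "messages": [{
            "role": "user",
            "content": [{"text": (
                "Summarize this patient intake conversation as a structured "
                "clinical brief (chief complaint, symptoms, pain level, "
                "medications, allergies):\n" + transcript
            )}],
        }],
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.2},
    }
    return json.dumps(body)

def generate_clinical_brief(transcript: str) -> str:
    """Send the request to Bedrock. Requires AWS credentials; model ID assumed."""
    import boto3  # imported here so the pure builder above has no AWS dependency
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(
        modelId="amazon.nova-lite-v1:0",
        body=build_brief_request(transcript),
    )
    out = json.loads(resp["body"].read())
    return out["output"]["message"]["content"][0]["text"]
```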
The most technically challenging part was implementing real-time bidirectional audio streaming with Amazon Nova 2 Sonic — capturing microphone input, converting Float32 PCM to Int16, encoding to base64, and streaming via WebSocket while simultaneously receiving and playing back the AI's audio response.
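The Float32-to-Int16-to-base64 step described above can be sketched in Python. In the app this happens in the browser inside an audio worklet, but the arithmetic is the same; the function name is mine:

```python
import base64
import struct

def float32_to_pcm16_b64(samples):
    """Clamp Float32 samples to [-1.0, 1.0], scale to Int16, and base64-encode
    the little-endian PCM bytes for an audioInput event payload."""
    ints = [max(-32768, min(32767, int(s * 32767))) for s in samples]
    pcm = struct.pack("<%dh" % len(ints), *ints)
    return base64.b64encode(pcm).decode("ascii")
```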
Challenges I faced
- Real-time audio streaming: Getting the exact WebSocket event format right for Nova 2 Sonic took significant debugging — the order and structure of sessionStart, promptStart, contentStart, and audioInput events had to be perfect
- Audio playback quality: Output audio is 24,000 Hz while microphone input is 16,000 Hz — mixing the two sample rates produced garbled audio until I configured separate AudioContext instances for capture and playback
- Dual model pipeline: Coordinating Nova Sonic for conversation and Nova Lite for report generation required careful session management
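The event ordering from the first bullet can be sketched as a generator. This is illustrative only — the event type names come from the streaming protocol, but the payload fields shown here are simplified placeholders, not the exact Bedrock schema:

```python
def intake_session_events(prompt_id, content_id, audio_b64_chunks):
    """Yield Nova Sonic streaming events in the order the service expects:
    session -> prompt -> content -> audio chunks -> matching end events."""
    yield {"event": {"sessionStart": {}}}
    yield {"event": {"promptStart": {"promptName": prompt_id}}}
    yield {"event": {"contentStart": {"promptName": prompt_id,
                                      "contentName": content_id,
                                      "type": "AUDIO"}}}
    for chunk in audio_b64_chunks:  # base64-encoded Int16 PCM from the mic
        yield {"event": {"audioInput": {"promptName": prompt_id,
                                        "contentName": content_id,
                                        "content": chunk}}}
    yield {"event": {"contentEnd": {"promptName": prompt_id,
                                    "contentName": content_id}}}
    yield {"event": {"promptEnd": {"promptName": prompt_id}}}
    yield {"event": {"sessionEnd": {}}}
```

Getting this sequence wrong (for example, sending audioInput before contentStart) was exactly the kind of bug that made debugging slow.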
What I learned
- Deep understanding of Amazon Nova 2 Sonic's WebSocket streaming protocol
- Real-time audio processing in the browser using AudioWorkletNode
- How to build a complete clinical intake workflow from scratch
- Why the distinction between patient-side and doctor-side healthcare AI solutions matters
What's next for MediVoice
- EHR Integration — Connect with Epic and Athena for seamless doctor workflow
- SMS Access — Send patients a link before their appointment, no login required
- 50+ Languages — Expand multilingual support using Nova Sonic's capabilities
- ICD-10 Suggestions — Auto-suggest diagnosis codes from intake data
- Wearable Integration — Pull in Apple Watch and Fitbit vitals automatically
- Ambient Mode — Continue listening during the appointment for complete notes
🏆 Why MediVoice is different
Unlike Nuance DAX, Suki AI, and Epic's ambient documentation — which all assist doctors DURING appointments — MediVoice works BEFORE the appointment on the patient side. Zero workflow change for doctors. Immediate value from day one.
Built With
- amazon-web-services
- bedrock
- fastapi
- jwt
- nova2sonic
- novalite
- python
- react
- tailwindcss
- websocket