### Inspiration
Wearables track everything but explain nothing. We wanted a voice-first coach that actually remembers you.
### What it does
V.I.T.A.L reads 20 Apple Watch metrics via Thryve, keeps a per-user markdown memory (Baselines, Events, Protocols, Challenges), and runs a daily voice brief, chat, and silent nudges that fire when a biometric deviates ≥2σ from your own baseline. Blood panel OCR, vocal onboarding, and Alan-covered specialist booking are included.
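The nudge trigger compares each new reading against the user's own recent history rather than a population norm. A minimal sketch of that ≥2σ check (function and variable names are ours, not the project's):

```python
# Minimal sketch (names hypothetical): flag a reading that deviates
# >= 2 standard deviations from the user's own rolling baseline.
from statistics import mean, stdev

def should_nudge(history: list[float], latest: float, threshold: float = 2.0) -> bool:
    """Return True when `latest` sits >= `threshold` sigmas from the personal baseline.

    `history` holds the user's own recent readings for one metric
    (e.g. resting HR), not a population average.
    """
    if len(history) < 2:
        return False  # not enough data to estimate a personal baseline
    baseline = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return False  # flat history, any deviation would be spurious
    return abs(latest - baseline) / sigma >= threshold

# Example: resting heart rate over the last two weeks, then a spike.
resting_hr = [52, 54, 53, 51, 55, 52, 53, 54, 52, 53, 51, 52, 54, 53]
print(should_nudge(resting_hr, 63))  # True: well above a tight personal baseline
```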
### How we built it
FastAPI + Mistral Small with 10 function-calling tools, Voxtral for STT/TTS, Nebius Llama Guard for safety, Thryve for wearable data, SSE to a web frontend. Memory is append-only markdown — no DB.
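For flavor, one of the function-calling tools could be wired up roughly like this, assuming an OpenAI/Mistral-style JSON-schema tool spec and a plain Python dispatcher; the tool name, arguments, and dispatch table are illustrative, not the project's actual definitions.

```python
# Illustrative only: a tool spec passed to the model, plus a local dispatcher
# that executes whatever tool call comes back. All names are assumptions.
import json

GET_METRIC_TOOL = {
    "type": "function",
    "function": {
        "name": "get_metric_history",
        "description": "Return the user's recent readings for one wearable metric.",
        "parameters": {
            "type": "object",
            "properties": {
                "metric": {"type": "string", "description": "e.g. 'resting_hr', 'hrv'"},
                "days": {"type": "integer", "description": "How many days of history"},
            },
            "required": ["metric"],
        },
    },
}

def get_metric_history(metric: str, days: int = 14) -> list[float]:
    # The real system would read from the Thryve-backed store; fixed data
    # here keeps the sketch self-contained and runnable.
    return [52.0, 54.0, 53.0, 51.0, 55.0][:days]

DISPATCH = {"get_metric_history": get_metric_history}

def run_tool_call(name: str, arguments_json: str):
    """Execute a tool call as returned by the model (name + JSON-encoded args)."""
    return DISPATCH[name](**json.loads(arguments_json))

print(run_tool_call("get_metric_history", '{"metric": "resting_hr", "days": 3}'))
```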
### Challenges we ran into
Grounding insights in personal baselines instead of population averages. Keeping the LLM from playing doctor. Making vocal onboarding feel like a conversation, not a form.
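One way to keep the model from playing doctor is to screen every outgoing reply with a safety classifier before it reaches TTS. A hedged sketch of that gate, assuming an OpenAI-compatible endpoint such as the one Nebius exposes; the base URL, model identifier, and environment variable below are placeholders, not verified project values.

```python
# Sketch of a pre-speech safety gate. Assumes an OpenAI-compatible API
# (as Nebius AI Studio provides); the base URL and model ID are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.studio.nebius.ai/v1/",  # placeholder endpoint
    api_key=os.environ["NEBIUS_API_KEY"],
)

def is_safe(reply: str) -> bool:
    """Ask a Llama Guard-style classifier whether the coach's reply is safe.

    Llama Guard models answer with 'safe' or 'unsafe' (plus a category code),
    so a prefix check on the verdict is enough for a simple gate.
    """
    result = client.chat.completions.create(
        model="meta-llama/Llama-Guard-3-8B",  # placeholder model identifier
        messages=[{"role": "user", "content": reply}],
    )
    verdict = result.choices[0].message.content.strip().lower()
    return verdict.startswith("safe")

reply = "Your resting heart rate is above your usual range; consider an easier day."
if is_safe(reply):
    print(reply)  # only safe replies are sent on to TTS
else:
    print("I'd rather not advise on that. A clinician can help with specifics.")
```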
### Accomplishments we're proud of
One memory spine shared across four surfaces (onboarding, brief, dashboard, chat). Silent memory-driven nudges that cite past events. A full voice loop built in under 12 hours.
### What we learned
Persistent markdown memory beats vector DBs for coaching. Tool-calling + a tight system prompt is enough — no fine-tuning needed.
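As an illustration of the markdown-over-vector-DB choice, an append-style memory file could be maintained like this; the file layout and section handling are our assumptions, not the project's code.

```python
# Sketch of the append-only markdown memory idea (paths and section names
# are assumptions, not the project's actual layout).
from datetime import date
from pathlib import Path

MEMORY_DIR = Path("memory")  # one markdown file per user, no database
SECTIONS = ["Baselines", "Events", "Protocols", "Challenges"]

def _blank_file() -> str:
    return "\n".join(f"## {s}\n" for s in SECTIONS)

def append_entry(user_id: str, section: str, text: str) -> None:
    """Add a dated bullet at the end of one section of the user's memory file.

    Existing lines are never rewritten, only new bullets are inserted, so the
    file behaves as an append-only log that both humans and the LLM can read.
    """
    path = MEMORY_DIR / f"{user_id}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():
        path.write_text(_blank_file(), encoding="utf-8")
    lines = path.read_text(encoding="utf-8").splitlines()
    entry = f"- {date.today().isoformat()}: {text}"
    # Insert just before the next "## " heading (or at EOF for the last section).
    start = lines.index(f"## {section}")
    end = next((i for i in range(start + 1, len(lines)) if lines[i].startswith("## ")), len(lines))
    lines.insert(end, entry)
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")

append_entry("user-123", "Events", "Reported poor sleep after a late workout; HRV dipped next morning.")
```

The payoff of this design is that the whole coaching history stays human-readable and diff-friendly, and the LLM can be handed the relevant section verbatim instead of retrieved chunks.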
### What's next for Vital
Native Swift/WatchOS app, real Alan booking integration, longitudinal trend coaching beyond 14-day baselines.
