Inspiration
During my university years, I once suffered a sudden health scare—sharp stomach pain that forced me to sprint across campus to a distant medical center. By the time I arrived, I’d missed an important group project. It was a simple issue that shouldn’t have derailed my day, but the barriers to fast, reliable first-aid guidance were real. I built CoreVitals AI so no one else has to go through that.
What It Does
- AI Symptom Analyzer: Daily check-ins interpret your descriptions (text, voice, or images) to flag early warnings across all 11 body systems.
- Multimodal Chatbot: Talk, type, or snap a photo—our GPT-4o engine responds with personalized guidance.
- Health Dashboard: Track integrative metrics for each system (e.g., heart rate trends, respiratory alerts, digestive logs), with push-notification reminders.
- Video Consultations: Tap into on-demand, AI-driven “doctor” videos via Tavus for deeper explanations.
- Voice Reports: Get spoken summaries of your health status through Eleven Labs’ ultra-natural TTS.
- Medical Imaging: Upload histopathology images to classify common lung/colon tissue types using our DenseNet121 model.
- Exportable Reports: Generate PDF summaries of your history, perfect for sharing with real clinicians.
How We Built It
- Frontend: Vite + React, mobile-first, responsive UI with Tailwind for rapid styling.
- AI Core: Azure OpenAI’s GPT-4o for multimodal understanding and natural dialog.
- Voice: Eleven Labs API for human-quality speech synthesis of diagnoses.
- Video: Tavus API for personalized doctor avatars delivering actionable advice.
- Image Classification: TensorFlow Keras model (histo_densenet121_model.keras) integrated via a FastAPI microservice.
- Data Storage: Supabase for real-time check-in logs, user profiles, and history export.
- Deployment: Continuous deployment to Netlify, with GitHub Actions for auto-builds on every push.
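On the back end, the FastAPI microservice's main job is turning the Keras model's softmax output into a JSON payload the front-end can render. A minimal sketch of that response shaping, where the five class names and the `shape_prediction` helper are our illustrative assumptions (based on a typical lung/colon histopathology setup), not the exact production code:

```python
# Hypothetical class order; in practice this must match the label order
# used when the DenseNet121 model was trained.
CLASS_NAMES = [
    "colon_adenocarcinoma",
    "colon_benign",
    "lung_adenocarcinoma",
    "lung_benign",
    "lung_squamous_cell_carcinoma",
]

def shape_prediction(probabilities: list[float]) -> dict:
    """Map a softmax vector to a JSON-serializable prediction payload."""
    if len(probabilities) != len(CLASS_NAMES):
        raise ValueError("expected one probability per class")
    top = max(range(len(probabilities)), key=probabilities.__getitem__)
    return {
        "label": CLASS_NAMES[top],
        "confidence": round(probabilities[top], 4),
        "all_scores": dict(zip(CLASS_NAMES, probabilities)),
    }

# Example: a confident benign-lung-tissue prediction.
result = shape_prediction([0.01, 0.02, 0.03, 0.90, 0.04])
print(result["label"])  # → lung_benign
```

Keeping this shaping logic pure (no framework objects in or out) makes it trivial to unit-test separately from the FastAPI route that wraps it.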
Challenges We Ran Into
- Multimodal Input Sync: Aligning text, voice, and image inputs into a single conversation flow required careful prompt engineering.
- Latency Tuning: Keeping responses under 2 seconds meant optimizing calls to Azure, Eleven Labs, and Tavus in parallel.
- Model Integration: Wrapping the Keras histopathology model in a lightweight API without slowing the front-end.
- UX Balance: Designing a medical-grade interface that still feels approachable for busy non-clinicians.
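The latency fix above boils down to fanning the provider calls out concurrently, so total response time tracks the slowest call rather than the sum of all three. A sketch of the idea with stub coroutines standing in for the real Azure OpenAI, Eleven Labs, and Tavus calls (delays and names are illustrative):

```python
import asyncio
import time

async def fake_provider(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for network I/O to the provider
    return f"{name}: ok"

async def fan_out() -> list[str]:
    # gather() awaits all three concurrently on one event loop,
    # so elapsed time is roughly max(delays), not their sum.
    return await asyncio.gather(
        fake_provider("azure-gpt4o", 0.30),
        fake_provider("elevenlabs-tts", 0.20),
        fake_provider("tavus-video", 0.25),
    )

start = time.perf_counter()
results = asyncio.run(fan_out())
elapsed = time.perf_counter() - start
print(results)
print(elapsed)  # ~0.3 s, well under the ~0.75 s a sequential run would take
```

The same pattern applies whether the calls go through SDKs or raw HTTP, as long as each call is awaitable.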
Accomplishments We’re Proud Of
- Seamless AI Pipeline: Real-time fusion of three AI services (text, voice, video) in one chat interface.
- First-Class Mobile Experience: Sub-30-second end-to-end interactions even on 3G connections.
- Diagnostic Imaging: Working prototype that classifies five tissue types with >90% accuracy on test data.
- Open-Source Toolkit: All core integrations documented and easily reusable for other health-tech developers.
What We Learned
- The power of prompt chaining to maintain context across multiple AI providers.
- How to orchestrate webhooks and serverless functions for low-latency AI calls.
- Best practices for securely storing API keys and mitigating unauthorized usage in client-side apps.
- Importance of clear medical disclaimers and user education when dealing with health data.
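The key-handling rule we converged on is simple: secrets live in server-side environment variables (or a serverless function's config) and never ship in the client bundle. A small sketch of the failing-loudly pattern, with an illustrative variable name:

```python
import os

def get_api_key(name: str) -> str:
    """Fetch a secret from the environment, failing loudly if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; configure it in the deploy environment, "
            "never hard-code it in front-end source."
        )
    return value

os.environ["ELEVENLABS_API_KEY"] = "sk-demo"  # simulates deploy-time config
print(get_api_key("ELEVENLABS_API_KEY"))      # → sk-demo
```

The browser then talks only to our own endpoints, which attach the real keys server-side before forwarding requests to the AI providers.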
What’s Next for CoreVitals AI
- Real-Time Wearable Integration (Bluetooth): Add optional syncing with popular fitness trackers to enrich our inferences.
- Local Model Inference: Ship a lightweight on-device TensorFlow Lite version of our symptom-analyzer for offline use.
- Tele-consultation Booking: Partner with clinics to seamlessly schedule follow-up appointments when alerts exceed safe thresholds.
- Expanded Imaging Library: Support dermatology and ophthalmology image classification with additional pretrained models.
- Community Portal: Enable users to share anonymized health trends, tips, and self-care recipes in a secure forum.
Built With
- bolt.new
- elevenlabs
- next.js
- tavus
