Kintsugi

A mobile health companion that combines vitals, mental wellness, recovery support, clinical tracking, and emergency response in one app.

Inspiration

Most health apps are fragmented. You use one app for vitals, another for appointments, another for journaling, and none of them really connect into one experience.

We wanted to build something that feels less like a collection of tools and more like a real health companion, something that supports the full cycle of care: monitoring, mental support, recovery, clinical organization, and emergencies.

The idea was inspired by kintsugi, the Japanese art of repairing broken pottery with gold. Instead of hiding damage, it highlights healing. That became the core theme of the project: health is not about perfection, it is about recovery and resilience.


What it does

Kintsugi is a mobile app with 5 core tabs:

❤️ Vitals Dashboard

  • Live heart rate, respiratory rate, and SpO₂ (auto-refresh every 5s; see the sketch after this list)
  • Anomaly detection
  • Animated pulse alerts
  • Trend visualization
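
For a feel of how the refresh works, here is a minimal polling-hook sketch, assuming a hypothetical fetchVitals() data source (mocked during the hackathon):

```tsx
import { useEffect, useState } from 'react';

interface Vitals { heartRate: number; respRate: number; spo2: number }

declare function fetchVitals(): Promise<Vitals>; // assumed data source

// Poll for fresh readings every `intervalMs` (5s by default).
export function useVitals(intervalMs = 5000): Vitals | null {
  const [vitals, setVitals] = useState<Vitals | null>(null);

  useEffect(() => {
    let cancelled = false;
    const tick = async () => {
      const latest = await fetchVitals();
      if (!cancelled) setVitals(latest); // ignore results after unmount
    };
    tick(); // immediate first read so the dashboard isn't empty
    const id = setInterval(tick, intervalMs);
    return () => {
      cancelled = true;
      clearInterval(id);
    };
  }, [intervalMs]);

  return vitals;
}
```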

🧠 Mental Health

  • AI chat companion powered by Groq LLaMA 3.3
  • Mood journal for quick emotional check-ins
  • Full voice call mode with speech-to-text, AI response, and neural text-to-speech

🌿 Recovery

  • Daily recovery checklist with progress tracking
  • Categories for:
    • exercise
    • nutrition
    • mental wellness
    • sleep

🏥 Clinical

  • Appointment scheduler
  • Push notification reminders
  • Symptom logger for tracking patterns over time

🆘 Emergency SOS

  • One-tap SOS flow
  • Direct 911 call support (see the sketch after this list)
  • Emergency contact management with call and SMS actions
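
The call action itself is small; a minimal sketch using React Native's Linking API, with an assumed confirmation prompt before dialing:

```tsx
import { Alert, Linking } from 'react-native';

// Hypothetical helper: confirm, then hand the call off to the OS dialer.
export function callEmergency(number = '911'): void {
  Alert.alert('Emergency call', `Call ${number} now?`, [
    { text: 'Cancel', style: 'cancel' },
    {
      text: 'Call',
      style: 'destructive',
      onPress: () => Linking.openURL(`tel:${number}`),
    },
  ]);
}
```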

How we built it

We built the app on a clean React Native architecture: Expo + TypeScript, a bottom-tab navigator, and shared global state via Context + AsyncStorage (sketched below).
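
A minimal sketch of that pattern, assuming a hypothetical HealthProvider that persists one slice of state (mood journal entries) and exposes it through a hook:

```tsx
import React, { createContext, useContext, useEffect, useState } from 'react';
import AsyncStorage from '@react-native-async-storage/async-storage';

interface HealthState {
  moodEntries: string[];
  addMoodEntry: (entry: string) => void;
}

const HealthContext = createContext<HealthState | undefined>(undefined);

export function HealthProvider({ children }: { children: React.ReactNode }) {
  const [moodEntries, setMoodEntries] = useState<string[]>([]);

  // Hydrate persisted state once on mount.
  useEffect(() => {
    AsyncStorage.getItem('moodEntries').then((raw) => {
      if (raw) setMoodEntries(JSON.parse(raw));
    });
  }, []);

  // Persist on every change so state survives app restarts.
  useEffect(() => {
    AsyncStorage.setItem('moodEntries', JSON.stringify(moodEntries));
  }, [moodEntries]);

  const addMoodEntry = (entry: string) =>
    setMoodEntries((prev) => [...prev, entry]);

  return (
    <HealthContext.Provider value={{ moodEntries, addMoodEntry }}>
      {children}
    </HealthContext.Provider>
  );
}

export function useHealth(): HealthState {
  const ctx = useContext(HealthContext);
  if (!ctx) throw new Error('useHealth must be used inside HealthProvider');
  return ctx;
}
```

Wrapping the tab navigator in this provider makes the same state available to every screen.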

Core stack

  • React Native 0.74.5
  • Expo SDK 51
  • TypeScript
  • React Navigation (Bottom Tabs)

AI + Voice pipeline

One of the strongest parts of the project is the real-time voice companion flow (condensed into code after these steps):

  1. 🎙️ User speaks into the app (expo-av)
  2. 📝 Audio is transcribed using Groq Whisper (large-v3-turbo)
  3. 🤖 Transcript is sent to Groq LLaMA 3.3 70B
  4. 🗣️ AI response is converted to speech with ElevenLabs
  5. 🔊 Audio is played back in-app (expo-av)
  6. 🔁 Listening restarts automatically for continuous conversation
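
A condensed sketch of this pipeline is below. The endpoint paths and model IDs follow Groq's OpenAI-compatible API and ElevenLabs' TTS API, but the keys, voice ID, and recordClip() helper are placeholders rather than our exact code:

```ts
import { Audio } from 'expo-av';
import * as FileSystem from 'expo-file-system';

const GROQ_KEY = process.env.EXPO_PUBLIC_GROQ_API_KEY!;      // assumed env config
const ELEVEN_KEY = process.env.EXPO_PUBLIC_ELEVENLABS_API_KEY!;

declare function recordClip(): Promise<string>; // assumed expo-av recording helper

// Step 2: upload the recorded clip to Groq Whisper, get the transcript back.
async function transcribe(uri: string): Promise<string> {
  const form = new FormData();
  form.append('file', { uri, name: 'clip.m4a', type: 'audio/m4a' } as any);
  form.append('model', 'whisper-large-v3-turbo');
  const res = await fetch('https://api.groq.com/openai/v1/audio/transcriptions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${GROQ_KEY}` },
    body: form,
  });
  return (await res.json()).text;
}

// Step 3: send the transcript to LLaMA 3.3 70B for the companion's reply.
async function reply(transcript: string): Promise<string> {
  const res = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${GROQ_KEY}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama-3.3-70b-versatile',
      messages: [{ role: 'user', content: transcript }],
    }),
  });
  return (await res.json()).choices[0].message.content;
}

// Steps 4-5: synthesize speech with ElevenLabs, cache the MP3, play it back.
async function speak(text: string, voiceId: string): Promise<void> {
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
    method: 'POST',
    headers: { 'xi-api-key': ELEVEN_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({ text, model_id: 'eleven_turbo_v2' }),
  });
  // expo-av plays from a URI, so write the audio bytes to the cache first.
  const base64 = await blobToBase64(await res.blob());
  const uri = FileSystem.cacheDirectory + 'reply.mp3';
  await FileSystem.writeAsStringAsync(uri, base64, {
    encoding: FileSystem.EncodingType.Base64,
  });
  const { sound } = await Audio.Sound.createAsync({ uri }, { shouldPlay: true });
  await new Promise<void>((resolve) =>
    sound.setOnPlaybackStatusUpdate((s) => {
      if (s.isLoaded && s.didJustFinish) resolve();
    })
  );
  await sound.unloadAsync();
}

function blobToBase64(blob: Blob): Promise<string> {
  return new Promise((resolve) => {
    const reader = new FileReader();
    // reader.result is a data URL; strip the "data:audio/mpeg;base64," prefix.
    reader.onloadend = () => resolve((reader.result as string).split(',')[1]);
    reader.readAsDataURL(blob);
  });
}

// Steps 1 and 6: one full turn, then loop for continuous conversation.
export async function conversationTurn(voiceId: string): Promise<void> {
  const clipUri = await recordClip();           // 1. capture the user's speech
  const transcript = await transcribe(clipUri); // 2. STT
  const answer = await reply(transcript);       // 3. LLM
  await speak(answer, voiceId);                 // 4-5. TTS + playback
  // 6. the caller re-invokes conversationTurn() to keep listening
}
```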

This makes the mental health feature feel much more like a live companion than a standard chatbot.

UI and UX stack

  • expo-linear-gradient for layered backgrounds
  • expo-blur for frosted-glass tab bar styling
  • react-native-reanimated for smooth motion
  • react-native-chart-kit for vitals trend charts
  • expo-haptics for tactile feedback
  • expo-notifications for appointment reminders (sketched below)
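
As an example of the reminder flow, here is a hedged sketch of a date-based local notification with expo-notifications; the helper name and the 30-minute lead time are illustrative choices:

```ts
import * as Notifications from 'expo-notifications';

// Hypothetical helper: schedule a local reminder 30 minutes before an
// appointment. Returns the notification ID, or null if permission is
// denied or the reminder time has already passed.
export async function scheduleAppointmentReminder(
  title: string,
  appointmentAt: Date,
): Promise<string | null> {
  const { status } = await Notifications.requestPermissionsAsync();
  if (status !== 'granted') return null;

  const fireAt = new Date(appointmentAt.getTime() - 30 * 60 * 1000);
  if (fireAt <= new Date()) return null;

  return Notifications.scheduleNotificationAsync({
    content: { title: 'Upcoming appointment', body: title },
    trigger: fireAt, // date-based trigger
  });
}
```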

Eye-distance tracking foundation

We also added the base structure for future vision-related features:

  • react-native-vision-camera
  • react-native-worklets-core
  • MLKit Face Detector for distance monitoring

This supports future work on screen-distance safety and guided vision screening features.


Challenges we ran into

1) Real-time voice UX is harder than chat UI

A full voice call experience requires many moving parts:

  • recording
  • upload and transcription
  • AI response latency
  • TTS generation
  • playback
  • auto-restart listening

Even small delays can make the experience feel broken, so we focused on making the flow feel smooth and continuous.
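
To illustrate why, the whole flow can be modeled as an explicit state machine so the UI always knows which stage it is in; the union below is an illustrative sketch, not our exact code:

```ts
// Illustrative only: every stage of the voice call as an explicit state,
// so the UI can always show the right indicator (waveform, spinner, etc.).
type CallState =
  | 'idle'          // waiting for the user to start the call
  | 'recording'     // expo-av is capturing audio
  | 'transcribing'  // clip uploaded, waiting on Whisper
  | 'thinking'      // waiting on the LLM reply
  | 'speaking'      // TTS playback in progress
  | 'error';        // a step failed; offer a retry

// A healthy turn cycles: recording → transcribing → thinking → speaking → recording.
```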

2) Health data integration vs shipping the experience

We wanted to validate the complete product flow first, so we used mock vitals data instead of blocking on native health integrations. This helped us test the dashboard, anomaly logic, and user experience quickly.
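
A sketch of what that mock layer can look like, with illustrative ranges (roughly one reading in ten is pushed out of range so the anomaly flag and pulse alerts fire):

```ts
export interface Vitals {
  heartRate: number; // bpm
  respRate: number;  // breaths per minute
  spo2: number;      // percent
}

const rand = (lo: number, hi: number) => Math.round(lo + Math.random() * (hi - lo));

export function mockVitals(): Vitals {
  const anomalous = Math.random() < 0.1; // ~10% of readings trip the alert UI
  return anomalous
    ? { heartRate: rand(105, 130), respRate: rand(22, 28), spo2: rand(88, 93) }
    : { heartRate: rand(62, 95), respRate: rand(12, 18), spo2: rand(96, 100) };
}

// Simple threshold check backing the dashboard's anomaly flag.
export function isAnomalous({ heartRate, respRate, spo2 }: Vitals): boolean {
  return (
    heartRate < 60 || heartRate > 100 ||
    respRate < 12 || respRate > 20 ||
    spo2 < 95
  );
}
```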

3) Native build tooling friction

We ran into Android native build issues on Windows (Ninja/CMake). We fixed them, but they slowed development and reminded us that mobile development includes infrastructure and tooling, not just UI code.


Accomplishments that we are proud of

  • Built a working multi-tab health app with a clear and scalable architecture
  • Shipped a voice AI companion flow end-to-end (STT → LLM → TTS → playback)
  • Designed a cohesive dark-mode UI with animated alerts and strong visual identity
  • Added practical health workflows (recovery tasks, appointments, symptom logging, emergency actions)
  • Created a production-structured codebase that can support future native integrations

Hackathon build metrics

  • 19 source files
  • 5 screens
  • 5 reusable components
  • 2 custom hooks
  • 2 services
  • ~2,800 lines of code

What we learned

  • Health UX is about trust, not just features. Timing, animation, and clarity affect whether users feel safe using the app.
  • Voice AI experiences need orchestration, not just API calls. The state transitions are a major part of the product.
  • Shipping a strong foundation fast is better than overbuilding too early. Mock data let us prove the concept quickly.
  • Healthcare features need a balance of technical capability, comfort, clarity, and safety.

What is next for Kintsugi

We see this as a strong foundation, not just a demo. Next steps include:

  • Replace mock vitals with real integrations (HealthKit, Google Fit, wearables)
  • Move API keys into secure environment configuration
  • Expand anomaly detection into trend-based warnings, not only threshold triggers
  • Add clinician-ready exportable reports
  • Improve iOS native build stability
  • Expand eye-distance tracking into guided vision safety workflows
  • Add personalization for recovery plans and mental health check-ins

Why this project stands out

Many health apps solve only one problem. Kintsugi combines:

  • monitoring
  • mental support
  • recovery
  • clinical organization
  • emergency response

It is not trying to replace doctors. It is designed to help users stay engaged with their health every day, especially between appointments, when support is often missing.

The core idea is simple:

Healing is a process, and the app should support the whole process.
