## Inspiration

Most modern wearables are incredible at collecting data. However, companion health apps often fail to contextualize this data. They give users raw numbers, complex charts, and generic insights, leaving them to manually connect the dots between a bad night's sleep, a low HRV (Heart Rate Variability), and their workout readiness. We wanted to build something different: a profoundly empathetic, intelligent wellness companion. We asked ourselves what it would be like if we could give everyone access to a world-class personal trainer and sleep coach who knows exactly how their body has been adapting over the past 30 days. That is how Zylience Fit and our AI persona, Zyla, were born.
## What it does

Zylience Fit is a smart wellness application that transforms raw wearable data from any brand into actionable, deeply personalized health coaching. Featuring a highly empathetic persona named Zyla, the app intelligently analyzes your past 30 days of metrics, including sleep, heart rate variability, training load, and vitals. Instead of just showing you charts and raw numbers, Zylience correlates your health data (like linking a poor night's sleep to declining recovery) to provide nuanced daily overviews, adaptive workout recommendations, and personalized sleep tips. It gives users an expert wellness companion right in their pocket.
## How we built it

We built Zylience Fit natively for Android to ensure a seamless, high-performance user experience.
- **Frontend:** Written in Kotlin using Jetpack Compose for a modern, fluid, and responsive UI.
- **Data Aggregation:** We used the Android Health Connect API as a universal hub, securely fetching data from whichever wearable the user prefers (steps, sleep sessions, vitals, active calories, HRV).
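Once Health Connect returns its typed records, we normalize them into daily buckets. A minimal pure-Kotlin sketch of that bucketing step (the `StepSample` type here is an illustrative stand-in, not the real Health Connect record class):

```kotlin
import java.time.LocalDate

// Hypothetical stand-in for a slice of a Health Connect StepsRecord; the real
// API returns typed records with Instant time ranges.
data class StepSample(val date: LocalDate, val count: Long)

// Bucket the raw samples into one total per calendar day, so records from any
// wearable brand land in the same daily matrix the AI later consumes.
fun aggregateDailySteps(samples: List<StepSample>): Map<LocalDate, Long> =
    samples.groupBy { it.date }
        .mapValues { (_, day) -> day.sumOf { it.count } }
```

The same group-and-reduce shape applies to sleep sessions and calories, which keeps the 30-day matrix construction uniform across metrics.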
- **SensorDataProcessor (Math & Feature Engineering):** Before data reaches the AI, we process it locally: we impute missing values with physiological defaults, classify the user's readiness, and derive metrics such as BMI and a data-completeness coefficient. BMI uses the standard index:
$$BMI = \frac{\text{weight (kg)}}{\text{height (m)}^2}$$
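In Kotlin this mapping is a couple of lines; the category thresholds below are the standard WHO cut-offs, shown here as an illustrative sketch rather than our exact production code:

```kotlin
// BMI = weight (kg) / height (m)^2
fun bmi(weightKg: Double, heightM: Double): Double = weightKg / (heightM * heightM)

// Map the raw index to the standard WHO categories.
fun bmiCategory(bmi: Double): String = when {
    bmi < 18.5 -> "Underweight"
    bmi < 25.0 -> "Normal"
    bmi < 30.0 -> "Overweight"
    else -> "Obese"
}
```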
We also calculate a Data Completeness Score to help the AI understand its own blind spots. With an indicator $D_i \in \{0, 1\}$ marking whether each of the nine expected metrics is available:

$$Completeness = \frac{\sum_{i=1}^{9} D_i}{9} \times 100\%$$
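Concretely, this is just the share of non-null metric slots scaled to a percentage. A minimal sketch:

```kotlin
// One nullable slot per expected metric; the score is the fraction of
// non-null slots, scaled to a percentage, so the AI knows its blind spots.
fun completenessScore(metrics: List<Double?>): Double {
    require(metrics.size == 9) { "expected exactly 9 metric slots" }
    return metrics.count { it != null } / 9.0 * 100.0
}
```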
- **AI Integration (Gemini 2.0):** We integrated the Firebase Vertex AI SDK, using `gemini-2.0-flash` for our robust multi-turn coaching chats and the much lighter `gemini-2.0-flash-lite-preview-02-05` to generate ultra-fast, contextual dashboard overviews every time the app is opened.
- **Prompt Engine Pipeline:** Our HealthPromptEngine converts the 30-day day-by-day matrices and derived vitals into a strict Markdown prompt, constraining the AI to act as an empathetic coach and preventing it from offering medical diagnoses.
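A minimal sketch of what such a prompt builder might look like (the function and parameter names here are illustrative, not the actual HealthPromptEngine API):

```kotlin
// Turn locally derived vitals into a strict Markdown prompt with an explicit
// persona and a hard safety boundary against medical diagnoses.
fun buildCoachPrompt(readiness: String, avgSleepHours: Double, avgHrvMs: Double): String =
    buildString {
        appendLine("## Role")
        appendLine("You are Zyla, an empathetic wellness coach. Never give medical diagnoses.")
        appendLine("## 30-Day Summary")
        appendLine("- Readiness: $readiness")
        appendLine("- Avg sleep: $avgSleepHours h")
        appendLine("- Avg HRV: $avgHrvMs ms")
    }
```

Keeping the averages and readiness classification in local Kotlin means the LLM only ever sees a small, pre-digested summary block.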
## Challenges we ran into

- **Sparse & missing data:** Real-world health data is incredibly messy. If a user does not wear their watch to bed, variables like resting heart rate or sleep duration return null. We had to build an imputation fallback using physiological baselines (e.g., $HRV_{default} = 35.0 \text{ ms}$) to keep the AI context builder from crashing or generating poor advice.
- **Context window vs. latency trade-offs:** Injecting 30 days of day-by-day records (steps, sleep stages, calories) into the LLM context on every request risked high latency. We solved this by using the Gemini Flash models and moving the heavy math (such as 30-day averages and TRIMP training-load scores) into local Kotlin code before passing the summarized string block to the LLM.
- **Asynchronous permissions:** Handling granular Health Connect permission scopes via coroutines was tricky. We leaned heavily on Kotlin suspend functions and supervisorScope wrapping so that if the app failed to read one specific metric (like blood glucose), the rest of the dashboard would still load the available metrics.

## Accomplishments that we're proud of

We are incredibly proud of successfully bridging a strict local data API with a powerful cloud AI while maintaining rapid performance. Giving Zyla a distinct, empathetic voice that dynamically adjusts tone based on the user's actual physical readiness is a major win. Building a robust data-processing pipeline that gracefully handles missing sensors without disrupting the user experience was another highly rewarding technical achievement.
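As a concrete illustration of the imputation fallback described in the challenges above, here is a minimal pure-Kotlin sketch; only the 35.0 ms HRV baseline comes from our notes, and the other defaults and key names are illustrative assumptions:

```kotlin
// Assumed physiological baselines; only the 35.0 ms HRV default is from the
// text above, the others are illustrative placeholders.
val PHYSIOLOGICAL_DEFAULTS = mapOf(
    "hrvMs" to 35.0,
    "restingHr" to 65.0,
    "sleepHours" to 7.0,
)

// Fill null readings with a baseline and report which keys were imputed, so
// the prompt builder can tell the model about its blind spots.
fun imputeVitals(raw: Map<String, Double?>): Pair<Map<String, Double>, Set<String>> {
    val imputed = mutableSetOf<String>()
    val filled = raw.mapValues { (key, value) ->
        value ?: PHYSIOLOGICAL_DEFAULTS.getValue(key).also { imputed += key }
    }
    return filled to imputed
}
```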
## What we learned

Building Zylience Fit was an incredible learning experience in bridging mobile health APIs with modern generative AI. We learned how to deeply integrate the Android Health Connect API to securely read highly sensitive, cross-platform wearable data. More importantly, we learned that feeding raw sensor data directly into an LLM can produce unreliable results, so we built a robust pre-processing pipeline that cleans outliers and derives engineered features before anything ever touches the AI. We also refined our prompt-engineering skills, using dynamic system-instruction overrides to give our AI a consistent persona grounded in mathematically sound summaries of the user's data.
## What's next for Zylience Fit

Moving forward, we plan to transition from our cloud-based Gemini implementation to a fully on-device LLM using MediaPipe and LiteRT, keeping all generative AI processing strictly on the user's phone for maximum privacy. We also want to expand the Zylience ecosystem by deeply integrating the Diet & Nutrition tab, mapping caloric intake alongside our cardiovascular load (TRIMP) scores to create a truly holistic wellness companion!