About the Project: Candor
Inspiration
Candor was inspired by the idea that recovery support should feel private, honest, and judgment-free. Many people at risk of relapse have no one available in the moment they feel most vulnerable. Even when support exists, it can be difficult to talk openly because of shame, fear, or pressure.
We wanted to build something that lives entirely on a person’s phone and helps them reflect before a difficult moment becomes a relapse. The goal of Candor is not to replace therapy, sponsors, or support systems. Instead, it acts as a quiet companion that helps users recognize patterns, identify triggers, and understand their risk earlier.
At its core, Candor is built around a simple idea:
$$ \text{Awareness} + \text{Reflection} + \text{Timely Support} \rightarrow \text{Better Recovery Decisions} $$
Candor is meant to give users more awareness of themselves without making them feel watched, judged, or punished.
What Candor Does
Candor is a judgment-free recovery companion that lives entirely on your phone. It learns from a user’s personal patterns, surfaces possible triggers, and helps them see relapse risk before it arrives.
The app focuses on three main goals:
- Learning patterns: Candor observes recurring behaviors, emotional states, routines, and check-ins to understand what may increase or reduce risk.
- Surfacing triggers: The app helps users recognize situations, moods, or habits that may be connected to relapse risk.
- Generating personal insights: Candor uses on-device AI to provide supportive, private, and reflective feedback without requiring sensitive data to leave the phone.
A simplified version of the relapse risk idea can be represented as:
$$ R = f(T, M, B, C) $$
Where:
$$ R = \text{relapse risk} $$
$$ T = \text{triggers}, \quad M = \text{mood}, \quad B = \text{behavior patterns}, \quad C = \text{context} $$
Candor uses these signals to help users better understand when they may need extra support.
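The formula above can be sketched in Kotlin as a simple weighted combination of the four signals. The weights and the 0–1 scaling below are illustrative assumptions for exposition, not Candor's actual scoring model:

```kotlin
// Hypothetical signals for R = f(T, M, B, C). Each value is normalized to 0.0–1.0.
data class Signals(
    val triggers: Double,  // T: trigger intensity
    val mood: Double,      // M: mood strain
    val behavior: Double,  // B: deviation from usual routines
    val context: Double    // C: situational risk
)

// A toy f: weighted average of the signals, clamped to [0, 1].
// The weights are placeholders chosen only to make the sketch concrete.
fun riskScore(s: Signals): Double {
    val raw = 0.35 * s.triggers + 0.25 * s.mood + 0.25 * s.behavior + 0.15 * s.context
    return raw.coerceIn(0.0, 1.0)
}
```

In practice the point of the formula is not the exact weights but the framing: several soft signals combine into one readable indication of when a user may need extra support.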
How We Built It
We built Candor as an on-device AI recovery companion using LiteRT-LM for local insight generation. The project was designed around privacy from the beginning. Since recovery-related data can be extremely personal, we wanted the experience to run on the phone as much as possible rather than depending on cloud-based processing.
The main pipeline of the app is:
$$ \text{User Input} \rightarrow \text{On-Device Processing} \rightarrow \text{AI Insight} \rightarrow \text{Supportive Reflection} $$
The user can enter personal reflections, check-ins, or context about how they are feeling. Candor then processes that information locally and generates a supportive insight that helps the user think through their current state.
Our original architecture was more complex. We planned to use an embedding-first system where EmbeddingGemma would run on the NPU, while the CPU handled similarity search and clustering. Then, Gemma or LiteRT-LM would generate the final insight with GPU/CPU fallback.
That intended architecture looked like this:
$$ \text{Text Input} \rightarrow \text{EmbeddingGemma on NPU} \rightarrow \text{Similarity + Clustering on CPU} \rightarrow \text{Gemma/LiteRT-LM Insight} $$
The benefit of this design was that Candor could compare new reflections to previous patterns and identify similar moments from the user’s history. In theory, this would allow the app to say something like:
“This looks similar to previous moments where stress and isolation were high.”
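The similarity step of that planned design can be sketched in pure Kotlin. Here plain `FloatArray` values stand in for EmbeddingGemma output tensors, and the ranking logic is an illustration of the idea, not the real pipeline:

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two embedding vectors of the same dimension.
fun cosineSimilarity(a: FloatArray, b: FloatArray): Double {
    require(a.size == b.size) { "Embeddings must have the same dimension" }
    var dot = 0.0; var normA = 0.0; var normB = 0.0
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Indices of the k past reflections most similar to the new one.
fun mostSimilar(query: FloatArray, history: List<FloatArray>, k: Int = 3): List<Int> =
    history.indices
        .sortedByDescending { cosineSimilarity(query, history[it]) }
        .take(k)
```

With this in place, the app could look up the user's most similar past moments and ground an insight in them, which is exactly what the quote above describes.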
However, because of integration challenges, we shifted toward a simpler and more reliable architecture:
$$ \text{Text Input} \rightarrow \text{LiteRT-LM} \rightarrow \text{Generated Recovery Insight} $$
This allowed us to focus on making the core experience work well: private, local, supportive insight generation.
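The simplified pipeline can be sketched as a thin prompt-and-generate layer. `InsightModel` below is a stand-in for the actual LiteRT-LM binding, and the prompt framing is illustrative rather than the exact wording Candor uses:

```kotlin
// Stand-in for the on-device LiteRT-LM model; the real binding would wrap
// the LiteRT-LM runtime behind this same single-method shape.
fun interface InsightModel {
    fun generate(prompt: String): String
}

// Frame the user's check-in with supportive, non-judgmental instructions.
// The system text here is a placeholder, not Candor's production prompt.
fun buildPrompt(reflection: String): String =
    "You are a supportive, non-judgmental recovery companion. " +
    "Reflect back patterns and gentle observations only.\n\n" +
    "User check-in: $reflection"

// Text Input -> LiteRT-LM -> Generated Recovery Insight, end to end.
fun insightFor(reflection: String, model: InsightModel): String =
    model.generate(buildPrompt(reflection))
```

Keeping the model behind a one-method interface also made it easy to swap backends during development without touching the rest of the app.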
Technical Challenges
One of the biggest challenges we faced was integrating EmbeddingGemma into the Android pipeline.
Our original plan required several technical pieces to work together correctly:
- A verified Kotlin-to-LiteRT execution path
- SentencePiece tokenization
- Correct tensor input and output formatting
- NPU delegate validation
- Large model asset handling on the Samsung device
- CPU-based similarity and clustering after embeddings were generated
- GPU/CPU fallback for final response generation
The concept itself made sense, but the difficulty came from validating all of these pieces quickly on the device. EmbeddingGemma required a reliable path from Kotlin into LiteRT, and we needed to confirm that the model was receiving the correct tokenized inputs and returning usable embedding tensors.
The most difficult part was that every stage depended on the previous one:
- If tokenization was wrong, the tensor input would be wrong.
- If the tensor output was unclear, the similarity and clustering step would not work.
- If the NPU delegate failed, the system needed a fallback.
This made debugging difficult under time constraints.
In simplified terms, our technical bottleneck was:
$$ \text{Model Integration Risk} > \text{Available Validation Time} $$
Because of that, we made an engineering tradeoff. Instead of continuing with the full embedding-first architecture, we shifted to a simpler LiteRT-LM path with NPU acceleration for insight generation and GPU fallback.
This decision helped us prioritize a working product over a more complicated architecture that could not be fully validated in time.
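The fallback behavior described above can be sketched as a simple ordered probe over backends. `Backend` and the `tryInit` probe are illustrative stand-ins, not the real LiteRT delegate API:

```kotlin
// Accelerators in preference order: NPU first, then GPU, then CPU.
enum class Backend { NPU, GPU, CPU }

// Try each backend in order and return the first one that initializes.
// `tryInit` stands in for real delegate setup, which can fail per-device.
fun selectBackend(tryInit: (Backend) -> Boolean): Backend {
    for (backend in listOf(Backend.NPU, Backend.GPU, Backend.CPU)) {
        if (tryInit(backend)) return backend
    }
    // CPU is assumed to always initialize; reaching here means even that failed.
    error("No inference backend available")
}
```

The value of this pattern is that the rest of the app never needs to know which accelerator won: it just receives a working backend.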
What We Learned
Building Candor taught us that good AI products are not only about model capability. They are also about system design, reliability, privacy, and user trust.
We learned that on-device AI has major advantages, especially for sensitive use cases like recovery. Keeping user data on the phone makes the product feel safer and more respectful. However, we also learned that on-device AI introduces real engineering challenges, especially around model size, delegates, tokenization, hardware acceleration, and Android integration.
We also learned the importance of simplifying when needed. Our first architecture was ambitious, but it had too many uncertain pieces for the time we had. By moving to a simpler LiteRT-LM approach, we were able to preserve the heart of the project: helping users reflect on relapse risk in a private and supportive way.
The biggest lesson was:
$$ \text{A simpler system that works} > \text{A complex system that cannot be validated} $$
Candor reminded us that the best technical solution is not always the most advanced one. Sometimes, the best solution is the one that delivers value reliably, especially when users may depend on it during vulnerable moments.
Challenges We Faced
The main challenge was balancing ambition with feasibility. We wanted Candor to recognize patterns over time, compare reflections, and generate deeper insights based on past user behavior. That required embeddings, clustering, and local model execution working together smoothly.
However, Android integration with EmbeddingGemma was more difficult than expected. We had to think through tokenization, tensor shapes, device performance, model loading, and hardware acceleration. These were not small issues, and they affected the entire architecture.
Another challenge was designing the tone of the app. Since Candor is meant for recovery support, the language had to feel supportive without sounding clinical, judgmental, or overly dramatic. The app needed to encourage reflection while respecting the user’s autonomy.
We wanted the experience to feel like:
$$ \text{Supportive} \neq \text{Controlling} $$
Candor should guide users, not shame them. It should help users notice risk, not make them feel like they have failed.
Final Reflection
Candor is a recovery companion built around privacy, reflection, and early awareness. It helps users understand their patterns and recognize relapse risk before it becomes overwhelming.
Although we faced technical difficulties with our original embedding-first architecture, those challenges helped us make better design decisions. By simplifying the pipeline and focusing on LiteRT-LM for on-device insight generation, we were able to create a more realistic and reliable version of the product.
Candor represents the kind of AI we believe is most valuable: AI that is personal, private, supportive, and designed to help people make better decisions in difficult moments.
$$ \boxed{\text{Candor helps users see the risk before the relapse.}} $$
The APK is available as a GitHub Release.
Built With
- gemma4-e2b
- html
- kotlin
- litert-lm