Inspiration
We’ve all stood in the grocery aisle staring at an ingredient list, confused by long chemical names and conflicting health advice. Is diet soda a weight-loss hack or a microbiome nightmare? Is oatmeal a superfood or a carb bomb?
We realized that "healthy" is subjective—what’s good for a bodybuilder cutting weight might be bad for someone focusing on whole foods. We wanted to solve the problem of information overload and context-less advice. Truth Bite was born from the desire to help users "see beyond the label" and find their version of the truth, instantly.
What it does
Truth Bite is an AI-powered food scanner that decodes nutrition labels based on your specific goals and values. Instead of a one-size-fits-all score, it uses a unique Lens System:
Context-Aware Scoring: Ingredients are categorized as Positive, Negative, Neutral, or Mixed depending on the lens you choose (e.g., "Real Food" vs. "Focus").
Lab Label Decoder: It translates obscure chemical names into plain English, explaining exactly why an ingredient is there.
Personal Fit Analysis: It automatically flags conflicts based on your profile (e.g., highlighting hidden Gelatin for Halal dieters or allergens for specific sensitivities), so you never have to scan the fine print yourself.
Credible Sourcing: Unlike generic AI searches, Truth Bite restricts its knowledge base to trusted sources relevant to each lens (e.g., WHO & NOVA for the Real Food lens; Harvard Health & WebMD for the Focus lens) to ensure reliability.
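The Lens System described above can be sketched as a small lookup: each lens carries its own source whitelist, and each ingredient gets a per-lens verdict. This is an illustrative sketch only; the lens names, example verdicts, and function names are ours, not the production logic.

```python
# Illustrative sketch of the Lens System. The lenses, source lists, and
# example verdicts mirror this write-up; the real engine is AI-driven.

LENS_SOURCES = {
    "real_food": ["WHO", "NOVA"],
    "focus": ["Harvard Health", "WebMD"],
}

# Example per-lens verdicts for individual ingredients.
INGREDIENT_VERDICTS = {
    "oatmeal": {"real_food": "Positive", "focus": "Mixed"},
    "aspartame": {"real_food": "Negative", "focus": "Neutral"},
}

def score_ingredient(ingredient: str, lens: str) -> str:
    """Return the Positive/Negative/Neutral/Mixed verdict for one lens."""
    return INGREDIENT_VERDICTS.get(ingredient, {}).get(lens, "Neutral")
```

The same ingredient can score differently under different lenses, which is exactly the context-aware behavior the app aims for.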
How we built it
We built the frontend using React to create a snappy, mobile-first experience with a custom sliding camera interface. The backend is powered by Python, orchestrating calls to our AI models.
The core innovation is our Lens Logic Engine. We engineered prompts that restrict the AI's search context to specific, credible medical and nutritional databases. We also implemented a dynamic user profile system that cross-references the AI's analysis with user-specific flags (like "Vegan" or "Peanut-Free") to generate personalized "Fits You" vs. "Conflicts" scores.
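The profile cross-reference step can be pictured as a rule table mapping user flags to ingredients that conflict with them. The flag names, rule sets, and function below are hypothetical placeholders for illustration, not the actual implementation.

```python
# Hypothetical sketch of the Personal Fit cross-reference. The flags and
# conflict rules here are illustrative examples, not the production set.

CONFLICT_RULES = {
    "halal": {"gelatin", "pork fat"},
    "vegan": {"gelatin", "whey", "casein"},
    "peanut-free": {"peanuts", "peanut oil"},
}

def personal_fit(ingredients: list[str], profile_flags: list[str]) -> dict:
    """Split an ingredient list into 'fits' and 'conflicts' for one user."""
    conflicts = set()
    for flag in profile_flags:
        banned = CONFLICT_RULES.get(flag, set())
        conflicts |= {i for i in ingredients if i.lower() in banned}
    fits = [i for i in ingredients if i not in conflicts]
    return {"fits": fits, "conflicts": sorted(conflicts)}
```

In the real app these verdicts come from the AI analysis rather than a static table, but the shape of the output ("Fits You" vs. "Conflicts") is the same.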
We engineered a Multimodal AI Pipeline to bridge the gap between complex data and instant understanding. The process follows a three-step flow:
Visual Extraction (Google Gemini): We utilize Gemini’s multimodal vision capabilities to process camera input in real time. It extracts raw text from curved bottles, crinkled wrappers, and nutrition tables, identifying ingredients and values with high precision.
Evaluation & Narrative Logic (Backboard IO): The extracted data is sent to Backboard IO, which acts as our central reasoning engine. Instead of just outputting a list, it evaluates the health impact of ingredients against the user’s specific profile (e.g., Halal, Keto). It then constructs a narrative speech script—converting data points into a conversational summary (e.g., "This is great for your bulk because of the high protein, but watch out for the added sugars.").
Audio Synthesis (ElevenLabs): To help busy users who hate reading fine print, we pipe that narrative script into ElevenLabs. This generates a concise, high-quality audio summary instantly. The result? Users can simply scan and listen to the truth about their food while they shop, completely hands-free.
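The three-step flow above can be sketched as a simple orchestration function. In production each stage calls an external API (Gemini, Backboard IO, ElevenLabs); here every stage is a stub so the flow is runnable without credentials, and all function names are our own illustrative choices.

```python
# Minimal stubbed sketch of the scan-to-audio pipeline. Each stage is a
# placeholder for the real API call named in its docstring.

def extract_label_text(image_bytes: bytes) -> list[str]:
    """Stage 1 (Gemini vision in production): label image -> ingredients."""
    return ["oats", "added sugar", "whey protein"]  # stubbed OCR result

def build_narrative(ingredients: list[str], profile: dict) -> str:
    """Stage 2 (Backboard IO in production): data -> speech script."""
    flagged = [i for i in ingredients if i in profile.get("watchlist", [])]
    if flagged:
        return (f"This works for your {profile['goal']}, "
                f"but watch out for {', '.join(flagged)}.")
    return f"This fits your {profile['goal']} goal."

def synthesize_audio(script: str) -> bytes:
    """Stage 3 (ElevenLabs in production): script -> audio bytes."""
    return script.encode("utf-8")  # stand-in for real TTS output

def scan_and_listen(image_bytes: bytes, profile: dict) -> bytes:
    """Compose the three stages: scan a label, get an audio summary."""
    ingredients = extract_label_text(image_bytes)
    script = build_narrative(ingredients, profile)
    return synthesize_audio(script)
```

Keeping the stages as separate functions mirrors how the backend orchestrates the calls, and makes each stage swappable and testable on its own.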
Challenges we ran into
One of the biggest challenges was handling the nuance of nutrition. An ingredient isn't always strictly "good" or "bad." For example, oatmeal is great for bulking but might be "mixed" for someone on a strict cut.
Building the "Mixed" category logic was difficult—we had to teach the system to understand context rather than just binary outputs. Additionally, ensuring the AI didn't hallucinate or use "bro-science" sources required strict prompt engineering to force it to cite credible sources like the NOVA classification system.
Accomplishments that we're proud of
The "Personal Fit" Feature: We are incredibly proud of how the app handles complex restrictions. Seeing the app correctly flag "Gelatin" as a conflict for a Halal user or identify a hidden allergen feels like magic.
Source Integrity: We successfully restricted the AI to credible sources, making our results consistent and trustworthy compared to a standard Google search.
UX/UI: We turned a massive amount of complex data (ingredient lists, chemical functions, health impacts) into a clean, easy-to-read "Green/Red/Orange" bar chart that anyone can understand in seconds.
What we learned
We learned that context is king. A product can be a 10/10 for one person and a 2/10 for another. We also learned that users don't just want to know what is in their food; they want to know why it's there and if it matters to them.
Technically, we learned how to combine multiple tools to make the product as efficient as possible: orchestrating several AI models and APIs (e.g., the Gemini API and Backboard.io) alongside other AI products such as ElevenLabs.
What's next for Truth Bite
Barcode Scanning: Adding instant barcode lookup for when you don't want to snap a photo.
More Lenses: Expanding our lens library to include Keto, Heart Health, and Diabetic-Friendly views.
Community Verification: Allowing users to upvote/downvote analysis accuracy to refine our models further.
Built With
- backboard.io
- elevenlabs
- fastapi
- geminiapi
- javascript
- python
- react