The Story Behind EatWise (a.k.a. EatSafe)

What Inspired Us

As parents of two young children, our home is frequently a hub for playdates. With a house full of energetic kids, we often need to be extremely careful about the various allergies and dietary preferences of our guests. Reading every fine-print ingredient label while managing a chaotic kitchen is stressful, and the stakes of getting it wrong are high. We realized that a multimodal AI assistant could instantly ease that anxiety, acting as a reliable second pair of eyes to help us make safe food decisions with total confidence.

How We Built It

We built EatWise around the Gemini Live API to take full advantage of real-time, multimodal interaction. The core of the application is feeding the model three kinds of input: visual data (camera feeds of food items or ingredient labels), audio data (natural voice queries from the user), and plain text. The agent has built-in guardrails so that it answers only food- and diet-related questions and declines out-of-scope queries; a sketch of such a session appears below.
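To make that concrete, here is a minimal sketch of how a guardrailed Live session could be opened with the google-genai Python SDK. The model name, environment variable, and system-prompt wording are our own illustrative assumptions rather than EatWise's actual configuration, and a real client would also stream camera frames and microphone audio over the same session.

```python
# Minimal sketch of a guardrailed Gemini Live session. Assumptions: the
# google-genai SDK, an illustrative model name, and a GEMINI_API_KEY
# environment variable; none of these are EatWise's real settings.
import asyncio
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# The system instruction is the guardrail: it scopes the agent to food
# and diet questions and tells it to decline everything else.
GUARDRAIL = (
    "You are a food-safety assistant for parents. Answer only questions "
    "about food, ingredients, allergens, and dietary restrictions. If a "
    "request is out of scope, politely decline."
)

config = types.LiveConnectConfig(
    response_modalities=["TEXT"],
    system_instruction=types.Content(parts=[types.Part(text=GUARDRAIL)]),
)

async def main() -> None:
    # Open a real-time session; a full client would also push camera
    # frames and voice audio with session.send_realtime_input().
    async with client.aio.live.connect(
        model="gemini-2.0-flash-live-001", config=config
    ) as session:
        await session.send_client_content(
            turns=types.Content(
                role="user",
                parts=[types.Part(text="Does this granola bar contain tree nuts?")],
            )
        )
        # Stream the model's reply as it arrives.
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```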

What We Learned

We learned just how transformative real-time multimodal AI can be for easing everyday parental anxieties. Moving beyond traditional text-based searches to dynamic, conversational voice-and-vision interactions drastically lowers the friction of keeping kids safe. We also gained deep practical experience with the real-time streaming and multimodal capabilities of the Gemini APIs.

Built With

Gemini Live API