🌱 PlantGuardian AI

Inspiration

Gardening and agriculture are profoundly sensitive to context. What works for a Monstera in a humid apartment won't work for a drought-stricken crop on an arid farm. Yet most plant care apps provide generic, one-size-fits-all advice based solely on a photo. We saw an opportunity to leverage the multimodal reasoning capabilities of Gemini 2.5 Flash to build a "context-aware" agent—one that doesn't just look at a leaf, but understands the environment surrounding it.

What it does

PlantGuardian AI is a next-generation plant health assistant. It functions as a "Live Agent" that:

  1. Vision Analysis: Identifies plant health issues through photos or videos using Gemini 2.5 Flash.
  2. Autonomous Context: Automatically detects the user’s location and fetches real-time weather and soil data without any manual input.
  3. Actionable Intelligence: Instead of a wall of text, it generates a structured 7-Day Recovery Plan and a visual infographic (see the schema sketch after this list).
  4. Interactive Support: Allows users to chat with the AI to refine care tips based on their specific local conditions.
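
To make point 3 concrete, here is a minimal sketch of the kind of schema we steer the model toward. The field names are illustrative, not our exact production schema:

```python
# Hypothetical schema sketch for the structured 7-Day Recovery Plan.
# Field names are illustrative, not the exact production schema.
from pydantic import BaseModel

class DayStep(BaseModel):
    day: int        # 1 through 7
    action: str     # e.g. "Move the plant out of direct afternoon sun"
    rationale: str  # why this step helps, tied to the diagnosis

class RecoveryPlan(BaseModel):
    diagnosis: str        # e.g. "heat stress"
    severity: str         # "mild" | "moderate" | "severe"
    steps: list[DayStep]  # exactly seven entries, one per day
```

Keeping the plan as typed data instead of free text is what lets the UI render each day as a card in the infographic.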

How we built it

The core "brain" of PlantGuardian is built on Google’s Agent Development Kit (ADK) combined with the Gemini 2.5 Flash model.

  • Orchestration: We used the ADK to create a custom PlantCareAgent that seamlessly handles multimodal inputs (images + weather data); a stripped-down sketch follows this list.
  • Frontend: A sleek, glassmorphic UI built with Streamlit for a premium user experience.
  • APIs: Integration with OpenWeatherMap for environmental context and OpenCV for initial image processing.
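
Here is roughly how the pieces connect. The tool body is stubbed and the instruction abbreviated, so treat this as illustrative rather than our exact code:

```python
# Stripped-down sketch of the agent wiring (illustrative; the real agent
# adds soil lookups, error handling, and a much longer instruction).
from google.adk.agents import Agent

def get_current_weather(lat: float, lon: float) -> dict:
    """Tool: return current conditions for the user's coordinates."""
    # The real app calls OpenWeatherMap here; stubbed for brevity.
    return {"temp_f": 105, "humidity": 18, "conditions": "clear"}

plant_care_agent = Agent(
    name="PlantCareAgent",
    model="gemini-2.5-flash",
    instruction=(
        "You are a plant doctor. Ground your diagnosis of the uploaded "
        "plant photo in the local conditions from the weather tool, then "
        "return a structured 7-day recovery plan."
    ),
    tools=[get_current_weather],
)
```

The ADK runs the tool-calling loop for us, so the agent decides on its own when to pull weather into a diagnosis rather than us hard-coding the sequence.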

Challenges we ran into

Our biggest hurdle was model stability and speed during complex vision tasks. We initially experimented with larger models, but they were often too slow for a "live doctor" feel. By standardizing on models/gemini-2.5-flash, we achieved the perfect balance of vision capabilities and near-instant response times. Additionally, forcing a multimodal model to return strictly structured JSON for infographics required significant prompt engineering.
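
Alongside prompt engineering, pinning the output format at the API level helped a lot. The same idea applies whether you call the model through the ADK or directly; the sketch below uses the google-genai SDK directly for brevity, reusing the RecoveryPlan schema sketched above (our real prompt is much longer):

```python
# Sketch: constraining Gemini 2.5 Flash to return JSON for the infographic.
# RecoveryPlan is the Pydantic sketch from the "What it does" section;
# the prompt is heavily abbreviated.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

image = types.Part.from_bytes(
    data=open("leaf.jpg", "rb").read(), mime_type="image/jpeg"
)
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[image, "Diagnose this plant and produce a 7-day plan."],
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=RecoveryPlan,  # schema-constrained decoding
    ),
)
plan = RecoveryPlan.model_validate_json(response.text)
```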

Accomplishments that we're proud of

We are incredibly proud of the Auto-Context Engine. The fact that a user can simply upload a picture and the AI autonomously deduces the weather and soil type to tailor its diagnosis feels like magic! Seeing the agent correctly identify that a plant was suffering from "heat stress" because it knew the local temperature was 105°F was a huge win.
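
Under the hood the idea is simple: resolve an approximate location from the user's IP, then pull live weather before the model ever sees the photo. A rough sketch; the ipapi.co endpoint is an assumption on our Built With entry "ip-geolocation", and OpenWeatherMap is the weather source named above:

```python
# Hypothetical auto-context sketch. ipapi.co is an illustrative choice of
# IP-geolocation provider; no error handling, for brevity.
import os
import requests

def auto_context() -> dict:
    # 1. Approximate the user's location from their public IP.
    geo = requests.get("https://ipapi.co/json/", timeout=10).json()
    lat, lon = geo["latitude"], geo["longitude"]

    # 2. Fetch live weather for those coordinates.
    weather = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={
            "lat": lat,
            "lon": lon,
            "units": "imperial",
            "appid": os.environ["OPENWEATHER_API_KEY"],
        },
        timeout=10,
    ).json()

    # 3. Hand the model only what it needs to ground the diagnosis.
    return {
        "city": geo.get("city"),
        "temp_f": weather["main"]["temp"],
        "humidity": weather["main"]["humidity"],
    }
```

With a context dict like this injected alongside the photo, the 105°F heat-stress diagnosis falls out naturally.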

What we learned

We learned a massive amount about Agentic Workflows—specifically how to use the ADK to "ground" LLM responses in real-world data. We also discovered how critical "Multimodal Prompting" is when you need an AI to act as both a scientist (analyzing the image) and a designer (structuring the infographic).

What's next for PlantGuardian AI

In the future, we want to integrate IoT soil moisture sensors directly into the agent’s context stream. This would allow the Gemini Agent to push notifications to farmers before the plant shows visible signs of distress, moving from reactive care to proactive prevention.


Live Demo: plantguardianai.streamlit.app
GitHub: github.com/saurabhhhcodes/plant-guardian-ai

Built With

  • gemini-2.5-flash
  • google-adk
  • ip-geolocation
  • opencv
  • streamlit