Inspiration
We all want to protect our planet, but when we step outside and see pollution, illegal dumping, or invasive species in our own neighborhoods, we often freeze. We suffer from action paralysis. We don't know what the exact problem is, how dangerous it is, or who to call to fix it. We realized that while there is plenty of global climate data, there is a massive lack of hyper-local, actionable guidance for everyday citizens. We built EcoGuard AI to bridge the gap between passive observation and meaningful environmental action.
What it does
EcoGuard AI is your personal, AI-powered environmental action toolkit.
- Snap & Analyze: Users can point their camera at an environmental issue (like a strange plant or a polluted stream). Using Gemini's multimodal AI, the app instantly identifies the problem (e.g., "Japanese Knotweed - Invasive Species").
- Action Plans: It doesn't just tell you what's wrong; it generates a safe, step-by-step action plan on how to handle or report it.
- Hyper-Local Grounding: Using Google Maps grounding, EcoGuard instantly connects you to the nearest relevant local authorities, eco-centers, or volunteer cleanup groups.
- The Eco-Pulse: Using Google Search grounding, the app provides a real-time feed of verified environmental news and issues specific to your region, keeping you informed and connected to the bigger picture.
How we built it
We built the frontend using React, Vite, and Tailwind CSS to ensure a fast, mobile-first, and highly responsive user experience.
The core "brain" of the application is powered by the @google/genai SDK. We heavily leveraged Gemini's latest capabilities:
- Multimodal Vision & Audio: To process live camera feeds, photos, and environmental audio.
- Google Maps Grounding: To anchor the AI's advice to real-world, local geographical data (finding nearby recycling centers or wildlife authorities).
- Google Search Grounding: To fetch real-time, verified environmental news and prevent AI hallucinations.
- Browser APIs: We utilized the Web Audio API, MediaRecorder, and Canvas APIs to handle live scanning and media capture directly in the browser.

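The pieces above come together in a single multimodal request to Gemini. Below is a minimal sketch of how such a request can be assembled, assuming the frame arrives as a `canvas.toDataURL()` string; the model id, prompt wording, and grounding tool field names are illustrative rather than our exact production values, and the resulting object would be handed to the SDK's `generateContent` call.

```typescript
// Sketch of the request we send to Gemini via the @google/genai SDK.
// Model id, prompt text, and tool names here are illustrative.

interface Part {
  text?: string;
  inlineData?: { mimeType: string; data: string };
}

// Strip the "data:image/jpeg;base64," prefix produced by canvas.toDataURL().
function dataUrlToBase64(dataUrl: string): string {
  const comma = dataUrl.indexOf(",");
  return comma === -1 ? dataUrl : dataUrl.slice(comma + 1);
}

// Build a generateContent request: one image part plus an instruction,
// with Search and Maps grounding enabled so answers stay local and verifiable.
function buildScanRequest(frameDataUrl: string, lat: number, lng: number) {
  const parts: Part[] = [
    { inlineData: { mimeType: "image/jpeg", data: dataUrlToBase64(frameDataUrl) } },
    { text: `Identify this environmental issue near ${lat},${lng} and give a safe, step-by-step action plan.` },
  ];
  return {
    model: "gemini-2.5-flash", // illustrative model id
    contents: [{ role: "user", parts }],
    config: {
      // Grounding tools as named in the Gemini API docs; treat as a sketch.
      tools: [{ googleSearch: {} }, { googleMaps: {} }],
    },
  };
}

// In the app, this object is passed to ai.models.generateContent(request).
```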
Challenges we ran into
- Real-Time Media Handling: Capturing live video frames and audio chunks in the browser and formatting them correctly (Base64/PCM) to stream to Gemini's Live API was technically complex. We had to carefully manage browser memory and asynchronous state.
- Preventing Generic Advice: Early on, the AI would give generic advice like "pick up the trash." We had to heavily engineer our prompts and integrate Google Maps/Search grounding to force the model to provide specific, hyper-local, and actionable data.
- Cross-Device Compatibility: Ensuring the camera and microphone permissions and streams worked flawlessly across different mobile and desktop browsers required extensive testing and fallback UI designs.
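The Base64/PCM formatting mentioned above boils down to converting the Float32 samples a browser `AudioContext` produces into 16-bit little-endian PCM before encoding. A simplified sketch of that conversion (the surrounding chunking and streaming logic is omitted):

```typescript
// Convert a Float32 audio buffer (as tapped from an AudioContext) into
// 16-bit PCM, then Base64 — the general shape expected when streaming
// audio chunks to Gemini's Live API. Simplified from our capture pipeline.

function floatTo16BitPcm(samples: Float32Array): Int16Array {
  const out = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp to [-1, 1]
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;        // scale to int16 range
  }
  return out;
}

function pcmToBase64(pcm: Int16Array): string {
  const bytes = new Uint8Array(pcm.buffer, pcm.byteOffset, pcm.byteLength);
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  // btoa is available in browsers and modern Node runtimes.
  return btoa(binary);
}
```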
Accomplishments that we're proud of
- Seamless AI Integration: We successfully combined Gemini's Vision, Live Audio, and Grounding tools into a single, cohesive user interface that feels like magic to use.
- Real-World Utility: We didn't just build a toy; we built a tool that can genuinely help someone report a toxic spill or identify a biodiversity threat in under 30 seconds.
- The UI/UX: We are incredibly proud of the sleek, modern, and empowering design. The app feels urgent yet hopeful, encouraging users to take action rather than feeling overwhelmed.
What we learned
- The Power of Grounding: We learned firsthand how Google Search and Maps grounding transforms an LLM from a "smart chatbot" into a reliable, real-world utility tool. It all but eliminated hallucinations about local resources.
- Advanced Browser APIs: We leveled up our skills in handling raw media streams (AudioContext, MediaStreamTrack, CanvasRenderingContext2D) in React.
- Prompt Engineering for Action: We learned how to structure system instructions to force the AI to prioritize human safety and actionable steps over dense, academic explanations.
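As an illustration of that last lesson, here is a trimmed sketch of how a system instruction can be structured to put safety and concrete action first; the wording and the `region` parameter are hypothetical, not our exact production prompt:

```typescript
// Sketch of a safety-first system instruction for an environmental assistant.
// Rule wording and ordering are illustrative.

function buildSystemInstruction(region: string): string {
  return [
    "You are EcoGuard AI, an environmental field assistant.",
    "Rules, in priority order:",
    "1. SAFETY FIRST: if the issue could be hazardous (chemicals, sharps, aggressive wildlife), tell the user not to touch it before anything else.",
    `2. IDENTIFY: name the specific issue (species, pollutant) as it occurs in ${region}.`,
    "3. ACT: give numbered, concrete steps: who to call, what to photograph, how to report.",
    "Never give generic advice like 'pick up the trash'; always ground recommendations in local resources.",
  ].join("\n");
}
```

Putting the safety rule at the top of an explicitly ordered list is what pushed the model to lead with warnings instead of dense explanations.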
What's next for EcoGuard AI
- Community & Gamification: We want to add features where users can earn badges for reporting issues and organize local community cleanups directly within the app.
- Offline Mode: Environmental issues often happen deep in nature where cell service is poor. We plan to implement local caching and offline queuing so users can snap photos in the woods and analyze/report them once they hit Wi-Fi.
- Open API for City Governments: We envision creating a dashboard for local municipalities to view aggregated, anonymized data of the environmental hazards reported by EcoGuard users in their city.
