WasteX AI
Inspiration
Living in rapidly growing urban areas, we noticed a recurring problem: overflowing waste bins. They aren't just an eyesore; they are a public health hazard and a source of pollution. We realized that the current waste management cycle is reactive—garbage trucks follow fixed schedules regardless of whether a bin is empty or overflowing.
We asked ourselves: What if we could give the city "eyes"? Inspired by the capabilities of multimodal AI, we set out to build WasteX AI, a platform that transforms simple images into actionable data for smarter, cleaner cities.
What it does
WasteX AI is a real-time smart city dashboard that automates waste monitoring.
- Snap & Analyze: Users upload a photo of a waste bin.
- Gemini Intelligence: The app uses Google's Gemini 2.5 Flash model to visually analyze the image and determine the "fill level" (Low, Medium, High, or Critical).
- Live Mapping: The data is geotagged and displayed on an interactive map, allowing municipal authorities to identify hotspots instantly.
- Optimized Routing: By prioritizing "Critical" bins first, we estimate cleanup times can be reduced by up to 50%.
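The classification step above can be sketched as a small helper that normalizes the model's free-text verdict into one of the four fill levels. The function and label handling here are our illustration, not the project's actual code:

```typescript
// Hypothetical helper: map Gemini's free-text answer to a fixed fill level.
type FillLevel = "Low" | "Medium" | "High" | "Critical";

function normalizeFillLevel(modelResponse: string): FillLevel | null {
  // The model is prompted to answer with one of the four labels, but
  // responses may include extra words or different casing, so we scan
  // from most to least urgent and return the first label found.
  const levels: FillLevel[] = ["Critical", "High", "Medium", "Low"];
  const text = modelResponse.toLowerCase();
  for (const level of levels) {
    if (text.includes(level.toLowerCase())) return level;
  }
  return null; // Unrecognized response: caller can retry or flag for review.
}
```

Normalizing on the server keeps the database clean even when the model occasionally answers in a full sentence instead of a single word.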
How we built it
We built WasteX AI as a full-stack application using the Next.js 16 App Router for a seamless frontend and backend experience.
- AI Engine: We integrated the `@google/generative-ai` SDK. The core analysis happens via a carefully crafted prompt sent to the `gemini-2.5-flash` model, which accepts the image as input and robustly classifies the waste level.
- Database: We used MongoDB with Mongoose to store report data, specifically leveraging geospatial queries to map the `latitude` and `longitude` of every report.
- Frontend & Visualization:
- Tailwind CSS & Geist UI principles for a clean, modern aesthetic.
- React Leaflet for the interactive map that renders live waste markers.
- Recharts for visualizing data trends in the dashboard.
- Authentication: Secure user management to ensure trustworthy reporting.
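The geospatial storage described above can be sketched with a helper that converts a report's latitude/longitude into the GeoJSON shape MongoDB's 2dsphere index expects, plus a `$near` filter for proximity queries. Field and function names are illustrative assumptions, not the project's actual schema:

```typescript
// Hypothetical GeoJSON helpers for storing and querying report locations.
// Note: MongoDB's 2dsphere index expects [longitude, latitude] order,
// which is the opposite of the common "lat, lng" convention.
interface GeoPoint {
  type: "Point";
  coordinates: [number, number];
}

function toGeoPoint(latitude: number, longitude: number): GeoPoint {
  if (latitude < -90 || latitude > 90 || longitude < -180 || longitude > 180) {
    throw new Error("coordinates out of range");
  }
  return { type: "Point", coordinates: [longitude, latitude] };
}

// Builds a filter object for "reports within maxMeters of a point",
// usable with e.g. Model.find(nearFilter(...)) on a 2dsphere-indexed field.
function nearFilter(latitude: number, longitude: number, maxMeters: number) {
  return {
    location: {
      $near: {
        $geometry: toGeoPoint(latitude, longitude),
        $maxDistance: maxMeters,
      },
    },
  };
}
```

Keeping coordinate-order handling in one helper avoids the classic lat/lng swap bug when the same points feed both the database and the Leaflet map.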
Challenges we faced
- Prompt Engineering for Vision: Clear distinction between a "full bin" and a "large bin" was tricky. We had to iterate on our system prompt to ensure Gemini focused specifically on the trash level inside the bin, not just the bin's physical size.
- Real-time Latency: Initially, image upload and analysis took too long. Switching to Gemini 2.5 Flash significantly reduced inference time, making the app feel snappy and responsive.
- Map Integration: Handling client-side map rendering (Leaflet) within Next.js Server Components required careful state management and dynamic imports.
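The prompt-engineering iteration described above converged on instructions that pin the model to the trash level rather than the bin's size. A sketch of such a prompt builder follows; the exact wording is our illustration, not the team's actual prompt:

```typescript
// Hypothetical system-prompt builder for the fill-level analysis.
// Constraining the output to one of four words makes the response
// easy to parse and keeps the model from describing the scene.
function buildFillLevelPrompt(): string {
  return [
    "You are analyzing a photo of a waste bin.",
    "Judge ONLY how full the bin is with trash, relative to its own capacity.",
    "Ignore the bin's physical size, color, and surroundings.",
    "Respond with exactly one word: Low, Medium, High, or Critical.",
    "If trash overflows beyond the rim, answer Critical.",
  ].join("\n");
}
```

Phrasing the fill level as "relative to its own capacity" was the kind of wording change that steers a vision model away from conflating a large bin with a full one.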
Accomplishments that we're proud of
- Seamless AI Integration: We successfully built a pipeline where a raw image from a phone is converted into a structured database record in under 2 seconds.
- UI/UX Design: Creating a dashboard that doesn't just look like a spreadsheet but feels like a modern mission control center.
- Impact Potential: Realizing that this tool could genuinely help sanitation workers prioritize their routes and save manual effort.
What we learned
- Multimodal AI is accessible: You don't need to train a custom TensorFlow model for weeks. Gemini's out-of-the-box vision capabilities are incredibly powerful for real-world object analysis.
- The power of Next.js 16: Using the latest features of Next.js helped us move fast, keeping our API and UI logic tightly coupled but clean.
Built With
- cloudinary
- gemini
- leaflet.js
- mongodb
- nextjs
- tailwindcss
- typescript