WasteSortOS

Inspiration

Waste sorting sounds easy until you’re actually holding something weird like a greasy pizza box, a plastic‑lined coffee cup, or a dead AA battery. Suddenly it’s not obvious which bin it belongs in. Rules vary by city, contamination matters, and signage is usually vague or outdated.

Most people want to dispose of things properly, but the friction is real. If figuring out the correct bin takes longer than a few seconds, people guess — and contamination happens.

WasteSortOS started from a simple idea: what if disposal guidance existed exactly at the moment you needed it?

Instead of guessing, users can just point their phone at an item or type what it is and instantly get a disposal recommendation with an explanation. The goal isn’t just classification: it’s making sustainable decisions effortless.


What it does

WasteSortOS is an AI-powered waste sorting assistant that helps users quickly determine how to dispose of everyday items.

Users interact with the system in two main ways:

Camera Scan

  • Point the phone camera at an item
  • The system analyzes the image
  • Returns the correct disposal bin with explanation

Text Search

  • Type the name of an item
  • Autocomplete suggests matches
  • Returns the recommended disposal category
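
Both paths hit the same backend service. As a rough sketch of what a client call can look like (the endpoint path and field name here are illustrative, not our exact API):

import requests

# Hypothetical image-classification request to the FastAPI backend.
with open("item.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/classify/image",  # illustrative endpoint
        files={"image": f},
    )
print(resp.json()["bin_category"])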

For every request, the system returns:

  • detected item label
  • recommended bin category
  • confidence score
  • inferred materials
  • explanation for the decision
  • contamination warning (if relevant)
  • fallback indicator when AI reasoning was required

Instead of just saying “recycling”, WasteSortOS explains why the item belongs there.
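
As an illustration of that contract, here is a hypothetical Pydantic sketch of the response shape (field names are our illustration, not the exact schema):

from enum import Enum
from typing import List, Optional

from pydantic import BaseModel

class BinCategory(str, Enum):
    RECYCLING = "recycling"
    COMPOST = "compost"
    LANDFILL = "landfill"
    HAZARDOUS = "hazardous"

class ClassificationResponse(BaseModel):
    item_label: str                      # detected item, e.g. "greasy pizza box"
    bin_category: BinCategory            # recommended bin
    confidence: float                    # 0.0-1.0
    materials: List[str]                 # inferred materials, e.g. ["cardboard", "grease"]
    explanation: str                     # why the item belongs in that bin
    contamination_warning: Optional[str] = None  # set when contamination changes the answer
    used_ai_fallback: bool = False       # True when no rule matched and the model reasoned directly

Returning used_ai_fallback explicitly lets the client flag answers that came from model reasoning rather than a vetted rule.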


How we built it

WasteSortOS uses a hybrid architecture combining multimodal AI with deterministic logic.

The project is structured as a monorepo with three sub‑projects:

wastesortos/
├── backend/   FastAPI AI classification service
├── mobile/    React Native / Expo mobile app
└── src/       Next.js web frontend (placeholder)

Backend

The backend is built with:

  • Python
  • FastAPI
  • Uvicorn
  • Pydantic
  • Pillow
  • Pytest

Image understanding and reasoning use:

  • Google Cloud Vertex AI
  • Gemini 2.5 Flash

Instead of letting AI fully decide the disposal result, we built a rules engine containing structured disposal logic for hundreds of items. This prevents common model mistakes and ensures results match real waste policies.
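
A minimal sketch of the idea (the entries and function names here are illustrative, not the actual engine):

from typing import Optional

# Hypothetical slice of the rules engine: normalized item name -> deterministic decision.
DISPOSAL_RULES: dict[str, dict[str, str]] = {
    "aluminum can":  {"bin": "recycling", "why": "Aluminum is widely recyclable."},
    "aa battery":    {"bin": "hazardous", "why": "Batteries belong at a hazardous-waste drop-off."},
    "plastic straw": {"bin": "landfill",  "why": "Too small and light for most sorting facilities."},
}

def lookup_rule(item_label: str) -> Optional[dict[str, str]]:
    """Return a deterministic disposal decision, or None to trigger the AI fallback."""
    return DISPOSAL_RULES.get(item_label.strip().lower())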

Classification pipeline

Image requests follow this pipeline:

image upload
↓
image resize (Gemini constraints)
↓
vision model → identify item + materials
↓
rules engine lookup
↓
AI fallback classification if no rule exists
↓
decision resolver
↓
explanation generation
↓
structured response to client

Text queries skip the vision step and go directly through the rules engine.
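
Putting the steps together, the flow can be sketched like this, reusing lookup_rule from the rules-engine sketch above (the other helpers are illustrative stand-ins, not our real code):

def resize_for_gemini(image_bytes: bytes) -> bytes:
    # Stand-in for the Pillow resize step that keeps images within Gemini's input limits.
    return image_bytes

def identify_item(image_bytes: bytes) -> tuple[str, list[str]]:
    # Stand-in for the Vertex AI / Gemini 2.5 Flash vision call.
    return "greasy pizza box", ["cardboard", "grease"]

def ai_fallback_classification(label: str, materials: list[str]) -> tuple[str, str]:
    # Stand-in for the Gemini reasoning call used when no rule exists.
    return "landfill", f"No rule matched '{label}'; reasoned from materials {materials}."

def classify_image(image_bytes: bytes) -> dict:
    label, materials = identify_item(resize_for_gemini(image_bytes))
    rule = lookup_rule(label)  # deterministic policy first
    if rule is not None:
        bin_category, explanation, used_fallback = rule["bin"], rule["why"], False
    else:
        bin_category, explanation = ai_fallback_classification(label, materials)
        used_fallback = True
    return {
        "item_label": label,
        "bin_category": bin_category,
        "materials": materials,
        "explanation": explanation,
        "used_ai_fallback": used_fallback,
    }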

Mobile App

The mobile app is built with:

  • React Native
  • Expo
  • TypeScript
  • react-native-vision-camera
  • react-native-worklets-core

Because the camera framework requires native modules, the app uses a custom Expo dev client instead of Expo Go.

The interface includes:

  • real-time camera scanning
  • animated result cards
  • bin category badges
  • text search with autocomplete
  • scan history modal
  • object detection overlays

The UI follows a minimal dark green aesthetic with the Chakra Petch font.


Challenges we ran into

Waste classification turned out to be way more complex than expected.

Many everyday items are multi-material objects. For example:

  • coffee cups contain plastic liners
  • pizza boxes change categories if greasy
  • plastic packaging often contains mixed polymers

A pure AI classification system struggled with these cases. Models could identify the object but didn’t always understand the disposal rule.

The solution was building a hybrid system (sketched below) where:

  • AI handles perception (what the item is)
  • a rules engine handles policy logic (where it goes)
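
With that split, a contamination-aware rule becomes a small conditional instead of a prompt. A hypothetical example:

def classify_pizza_box(materials: list[str]) -> tuple[str, str]:
    """Hypothetical contamination-aware rule: grease flips cardboard from recycling to compost."""
    if "grease" in materials or "food residue" in materials:
        return "compost", "Grease soaks into the fibers, so the cardboard can't be recycled."
    return "recycling", "Clean cardboard is recyclable."

print(classify_pizza_box(["cardboard", "grease"]))  # -> ('compost', ...)

Because this logic is deterministic, a greasy box gets the same answer every time, no matter how the model phrases its perception output.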

Mobile development also introduced challenges. The camera stack required native modules, which meant we had to build and manage a custom Expo development client instead of relying on Expo Go.

Finally, designing the UI flow was tricky. A raw AI output is not a product — we needed to transform model responses into something clear, fast, and trustworthy for users.


Accomplishments that we're proud of

We’re proud that WasteSortOS feels like a real product rather than just an AI demo.

Key achievements include:

  • building a working camera-based waste scanning app
  • designing a hybrid AI + rules architecture
  • implementing 1000+ structured disposal rules
  • creating a backend pipeline that returns explanations and confidence scores
  • implementing autocomplete search and scan history
  • achieving 67+ backend unit tests with full offline execution (see the sketch after this list)
  • building a mobile UI with animations, detection overlays, and result cards
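
"Full offline execution" means the tests never call Vertex AI. A hypothetical pytest sketch of that pattern (module path and names are illustrative):

from unittest.mock import patch

# Hypothetical test: patch the vision call so the suite never reaches Vertex AI.
@patch("app.pipeline.identify_item", return_value=("aa battery", ["metal", "chemicals"]))
def test_battery_routes_to_hazardous(mock_vision):
    from app.pipeline import classify_image

    result = classify_image(b"fake-image-bytes")
    assert result["bin_category"] == "hazardous"
    assert result["used_ai_fallback"] is False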

The system already works end‑to‑end and feels extensible enough to evolve beyond the hackathon.


What we learned

The biggest lesson from this project is that AI works best when paired with structure.

Vision models are great at identifying objects, but policy-based decisions still require deterministic logic.

Another key insight was the importance of explainability. Users trust the system much more when they understand the reasoning behind the recommendation.

Finally, we learned that small everyday problems — like deciding which bin to use — are actually great places to apply AI. The technology becomes valuable when it reduces friction in real-world moments.


What's next for WasteSortOS

There are several directions we want to explore next.

Short-term improvements:

  • expand the rules engine for additional cities and municipalities
  • improve multi-object detection and scanning
  • handle more complex multi-material packaging
  • improve confidence calibration and ambiguity detection

Platform improvements:

  • build out the web frontend
  • add authentication and rate limiting
  • deploy CI/CD pipelines
  • implement analytics for commonly misclassified items

Long term, the goal is to evolve WasteSortOS into a universal waste sorting layer that can be adapted for campuses, cities, and organizations worldwide.

Built With

  • apis
  • artificial-intelligence
  • cloud-services
  • computer-vision
  • databases
  • expo.io
  • fastapi
  • frameworks
  • gemini-2.5-flash
  • generative-ai
  • google-cloud-vertex
  • mobile-app
  • multimodal-ai
  • platforms
  • pnpm
  • python
  • react-native
  • sql
  • typescript
  • uvicorn
  • vertex-ai
  • visioncamera