Inspiration

Most people don’t think twice before throwing something in the trash, even when it could be recycled or composted, or when it is actually hazardous. While environmental awareness is emphasized in places like Canada, many people around the world are simply told “recycling is good” without ever being taught how to do it correctly.

Even within our own team, some members who didn’t grow up in Canada found it confusing to decide where items should go (trash, recycling, compost, etc.). That made us realize this is a real, everyday problem.

Hackathons are about building creative solutions that make people think differently, so we asked ourselves: What if a trash can could stop you before you made the wrong choice?

That’s how we came up with the idea of a smart, slightly sassy trash can that makes people think twice before throwing something away, because small habits like this make a difference, especially as environmental sustainability becomes more urgent.

Improper waste sorting is one of the easiest environmental problems to fix, but also one of the most commonly misunderstood in everyday life.

What it does

Our smart trash can uses a motion sensor to detect when an object is nearby. Once activated, it asks the user to place the item in front of its camera.

  • It captures an image of the item
  • Uses AI to classify what the item is
  • Determines the correct disposal method (trash, recycling, compost, or hazardous waste)

Then:

  • If it belongs in the trash: the lid opens and accepts the item
  • If it doesn’t belong: the trash can “calls you out” with a sassy response and explains where it should go instead
  • If the image is unclear: it makes a sarcastic remark and shuts down
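The branching above can be sketched as a small decision function. This is an illustrative sketch only, not our actual code: the category names match the streams described earlier, but the function and action labels are hypothetical.

```python
# Hypothetical sketch of the sort-and-respond decision flow.
# Category names mirror the document; action labels are made up.
CATEGORIES = {"trash", "recycling", "compost", "hazardous"}

def decide_action(label):
    """Map an AI classification result to the bin's behaviour."""
    if label is None or label not in CATEGORIES:
        return "sass_and_shutdown"    # unclear image: sarcastic remark, then stop
    if label == "trash":
        return "open_lid"             # correct bin: lid opens and accepts the item
    return f"call_out:{label}"        # wrong bin: sassy correction naming the right stream
```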

The user can also respond verbally, argue, ask questions, or push back, and the trash can replies before eventually ending the conversation.

Overall, it’s an interactive system that educates users in real time while adding personality and humor.

How we built it

We built the system using a mix of hardware and software:

  • Python to power the main system logic and coordination
  • Raspberry Pi to connect and control all hardware components
  • Stepper motor to open and close the lid
  • USB camera + microphone for image capture and voice input
  • Speaker to deliver spoken responses from the trash can
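For the lid, a typical 4-wire stepper (e.g. a 28BYJ-48 behind a ULN2003 driver) is driven by cycling coil states in a fixed sequence; reversing the sequence reverses the motor. The sketch below shows that pattern in isolation, with no GPIO calls, so the coil sequence and direction logic are assumptions rather than our exact wiring:

```python
# Standard half-step sequence for a 4-wire stepper motor; each tuple is
# one (coil_a, coil_b, coil_c, coil_d) energization state.
HALF_STEP = [
    (1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0), (0, 1, 1, 0),
    (0, 0, 1, 0), (0, 0, 1, 1), (0, 0, 0, 1), (1, 0, 0, 1),
]

def lid_steps(n_steps, open_lid=True):
    """Yield coil states for n_steps half-steps; reversed order closes the lid."""
    seq = HALF_STEP if open_lid else HALF_STEP[::-1]
    for i in range(n_steps):
        yield seq[i % len(seq)]
```

On the Pi itself, each yielded state would be written to four GPIO pins with a short delay between steps.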

For AI and interaction:

  • Gemma 4 for image classification and generating responses
  • ElevenLabs for text-to-speech, giving the trash can its voice

The entire system is triggered when the motion sensor detects an item within ~10-40 cm.
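With an ultrasonic sensor such as the HC-SR04, distance comes from the echo's round-trip time: distance = (time × speed of sound) / 2. A minimal sketch of that trigger check, assuming the ~10-40 cm window above (the function names are ours, not from the project):

```python
SPEED_OF_SOUND_CM_S = 34300  # speed of sound in air at roughly 20 °C

def echo_to_distance_cm(pulse_s):
    """Convert an ultrasonic echo round-trip time (seconds) to distance (cm)."""
    return pulse_s * SPEED_OF_SOUND_CM_S / 2

def in_trigger_zone(pulse_s, near=10.0, far=40.0):
    """True when the detected object falls inside the activation window."""
    return near <= echo_to_distance_cm(pulse_s) <= far
```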

Challenges we ran into

One of the biggest challenges was hardware integration. We initially developed everything using our laptops (camera, mic, etc.), but switching to external hardware like the Raspberry Pi introduced a lot of unexpected issues.

We also struggled heavily with latency in the original pipeline. The system originally processed everything in a linear flow:

  • Listening to user input
  • Sending it to AI for processing
  • Generating a response
  • Playing audio output

This created noticeable delays and made the interaction feel unnatural.

To fix this, we restructured the system using threading and a queue-based architecture, allowing audio capture, AI processing, and response generation to run more independently instead of blocking each other. This significantly improved responsiveness by preventing the system from waiting on each step sequentially.
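The restructuring above is the classic producer/consumer pattern: each stage runs in its own thread and hands work to the next through a queue, so capture never waits on AI or playback. A minimal, self-contained sketch (stage names match our pipeline, but the "work" functions are stand-ins for the real AI and TTS calls):

```python
import queue
import threading

def stage(inbox, outbox, work):
    """Consume items from inbox, apply work, forward results to outbox."""
    while True:
        item = inbox.get()
        if item is None:                 # sentinel: shut down and pass it downstream
            if outbox is not None:
                outbox.put(None)
            break
        result = work(item)
        if outbox is not None:
            outbox.put(result)

audio_q, ai_q, tts_q = queue.Queue(), queue.Queue(), queue.Queue()
played = []

threads = [
    threading.Thread(target=stage, args=(audio_q, ai_q, str.upper)),        # "AI" stage
    threading.Thread(target=stage, args=(ai_q, tts_q, lambda s: s + "!")),  # "TTS" stage
    threading.Thread(target=stage, args=(tts_q, None, played.append)),      # playback stage
]
for t in threads:
    t.start()

for utterance in ["hello", "wrong bin"]:
    audio_q.put(utterance)               # capture never blocks on downstream stages
audio_q.put(None)                        # sentinel flows through the whole pipeline
for t in threads:
    t.join()
# played == ["HELLO!", "WRONG BIN!"]
```

Because each queue decouples its producer from its consumer, a slow AI call only delays its own stage instead of stalling the entire loop.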

Accomplishments that we're proud of

We’re proud that we built something that is both functional and meaningful.

  • It actively educates users instead of just reminding them
  • It turns a boring, everyday action into something interactive and memorable
  • It encourages better environmental habits through real-time feedback

We’re also really proud of the intelligence behind the system. The AI reliably detects and classifies objects and produces context-aware responses. It can also retain information within the current session, which makes interactions feel more continuous and less robotic.

On top of that, the trash can’s personality turned out exactly how we imagined. It’s fun, a little sassy, and engaging enough to grab attention rather than be ignored. Combined with ElevenLabs text-to-speech, the whole system feels alive, as if the trash can has its own character and presence.

Most importantly, we took this from an idea to a working prototype with hardware + AI integration, which was a huge achievement for our team.

What we learned

We learned that working with hardware requires preparation and flexibility, as things rarely work the first time.

We also gained experience in:

  • Integrating hardware with AI systems
  • Managing real-time interactions
  • Optimizing performance under time constraints

For many of us, this was our first time doing full hardware-software integration, and it was challenging but rewarding.

What's next for EchoBin

Next steps include:

  • Expanding the system to support region-specific recycling rules
  • Enhancing classification across trash, recycling, and organic waste streams to handle more complex, real-world disposal scenarios
  • Adding a companion app for tracking habits and learning
  • Deploying in schools or public spaces to promote environmental education

Long-term, we want EchoBin to evolve beyond a “smart trash can” into a fully integrated waste-sorting system that intelligently guides users across all disposal types. Our goal is to build a scalable solution that helps people develop better waste habits and reduces contamination across recycling and organics streams on a global scale.
