binQ
Inspiration
Barcelona has a recycling contamination rate above 40%. Most people genuinely want to recycle correctly; they just never find out whether they do. There is no feedback loop between the citizen and the bin. Awareness campaigns get launched, money gets spent, and nobody can measure whether anything changed. We wanted to fix that at the most fundamental level: the moment the trash is thrown.
What it does
binQ is an AI module that lives inside the trash bins of Barcelona, powered by the Arduino UNO Q. Citizens open the binQ app, unlock the bin via proximity, and toss their trash. The camera inside the bin classifies the item in real time using a YOLO model running entirely on-device (no cloud, no internet required). The app instantly tells you whether you recycled correctly, awards points, updates your streak, and shows you where you rank on the city leaderboard. Get it wrong and it tells you exactly which bin to use and why. Behind the scenes, every toss is logged: who, what, which bin, where, and when. That data feeds a city dashboard that gives the Ajuntament granular, real-time visibility into recycling behaviour across every neighbourhood in Barcelona.
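To make the logging concrete, here is a minimal sketch of what a toss event could look like; the field names and values are our hypothetical illustration, not the actual schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TossEvent:
    user_id: str     # who tossed
    item_label: str  # what the model classified, e.g. "plastic_bottle"
    bin_id: str      # which bin
    bin_stream: str  # stream that bin accepts, e.g. "plastic"
    correct: bool    # did the item match the bin's stream
    lat: float       # where
    lon: float
    ts: str          # when (UTC, ISO 8601)

event = TossEvent(
    user_id="u_0042",
    item_label="plastic_bottle",
    bin_id="eixample-117",
    bin_stream="plastic",
    correct=True,
    lat=41.3874,
    lon=2.1686,
    ts=datetime.now(timezone.utc).isoformat(),
)
print(asdict(event))  # the kind of payload a city dashboard would aggregate
```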
How we built it
The hardware module runs on the Arduino UNO Q, which pairs a Linux SBC with an STM32 MCU, the two sides communicating via RouterBridge. The USB camera feeds frames to a YOLOv8 nano model exported to ONNX, running on the Linux side via the Ultralytics library and OpenCV. When a citizen unlocks the bin, a 60-second classification window opens, during which the model runs inference on every frame and returns the highest-confidence result. The citizen app is a PWA served directly from the board over local WiFi via the WebUI Brick, communicating in real time through socket.io. The model was fine-tuned on the Waste Detection datasets.
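A rough sketch of that classification window, assuming a hypothetical weights file `binq_yolov8n.onnx` and camera index 0 (in the real app, the result is then pushed to the citizen's PWA over socket.io):

```python
import time
import cv2
from ultralytics import YOLO

# Hypothetical file name for the fine-tuned YOLOv8n weights exported to ONNX.
model = YOLO("binq_yolov8n.onnx", task="detect")

cap = cv2.VideoCapture(0)        # USB camera inside the bin
deadline = time.time() + 60      # 60-second classification window
best_label, best_conf = None, 0.0

while time.time() < deadline:
    ok, frame = cap.read()
    if not ok:
        continue
    results = model(frame, imgsz=320, verbose=False)
    # Keep the single highest-confidence detection seen during the window.
    for box in results[0].boxes:
        conf = float(box.conf[0])
        if conf > best_conf:
            best_conf = conf
            best_label = results[0].names[int(box.cls[0])]

cap.release()
print(f"detected: {best_label} ({best_conf:.2f})")
```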
Challenges we ran into
Getting YOLO to run reliably on the UNO Q's Linux CPU at acceptable latency required careful image size tuning. We settled on 320×320 to hit under 300ms per frame. The camera angle was a major hurdle: every public dataset photographs waste from the side or top-down, but our camera faces upward from inside a dark bin. Integrating the classification pipeline with the App Lab framework and RouterBridge also required understanding the boundary between the Linux and MCU sides.
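For reference, per-frame latency at different image sizes can be compared with a small timing harness like the sketch below (weights and test-frame file names are hypothetical):

```python
import time
import cv2
from ultralytics import YOLO

model = YOLO("binq_yolov8n.onnx", task="detect")  # hypothetical weights file
frame = cv2.imread("sample_toss.jpg")             # hypothetical test frame

for size in (640, 480, 320):
    model(frame, imgsz=size, verbose=False)       # warm-up, excluded from timing
    n = 20
    t0 = time.perf_counter()
    for _ in range(n):
        model(frame, imgsz=size, verbose=False)
    ms = (time.perf_counter() - t0) / n * 1000
    print(f"imgsz={size}: {ms:.0f} ms/frame")
```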
Accomplishments that we're proud of
Running a full YOLO object detection pipeline entirely on-device with no cloud dependency, inside a trash bin, with results delivered to a phone in under 60 seconds. Building a product with two distinct user layers, the citizen app and the city dashboard, that feel coherent and serve a single purpose.
What we learned
That the hardest part of applied ML is not the model; it's the domain gap between training data and the real world. A model that scores well on a benchmark can fail completely when the camera is pointing the wrong direction in a dark enclosure. We also learned that the Arduino UNO Q's dual architecture (Linux + MCU) is genuinely powerful once you understand how RouterBridge connects the two sides, but the learning curve is steep and the documentation is sparse. And that a good demo is worth more than a perfect codebase.
What's next for binQ
Training the model on a larger custom dataset captured specifically from inside bins in different lighting conditions. Adding organic waste detection, which is currently the hardest class to classify reliably. Integrating with the Ajuntament's existing waste management systems so the event data flows directly into their operations dashboards. And exploring whether the gamification layer can be tied to real rewards (discounts, transport credits, or neighbourhood recognition) to drive long-term behaviour change at scale.