The recycling rate in Singapore is low, with plastics at a mere 7%. Even when people know which materials are recyclable, they often lack the habit of actually placing those items in the recycling bin. Let's use technology to train people to build this habit!
What it does
The smart dustbin visually inspects items and alerts the user if an object is recyclable. Each detection is also logged, so we can see how much trash has been spotted and (hopefully) diverted to the recycling stream.
How we built it
Hardware: A Raspberry Pi with a camera snaps a photo periodically and sends it to the backend; if the backend determines that the item is recyclable, a visual alert is displayed on the OLED screen.
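The capture loop can be sketched roughly as below. The backend URL, the `/classify` path, and the response fields are illustrative assumptions (the writeup doesn't document them), and the camera/OLED calls are left as comments since they depend on the specific hardware libraries.

```python
import json
import time
import urllib.request

# Hypothetical backend endpoint; the real URL is not part of the writeup.
BACKEND_URL = "http://example.com/classify"

def alert_text(result):
    """Turn the backend's JSON verdict into an OLED message (or None)."""
    if result.get("recyclable"):
        return "Recycle me! (%s)" % result.get("material", "unknown")
    return None

def capture_and_check():
    # On the Pi this would be e.g. picamera capturing JPEG bytes into a
    # buffer; here we only sketch the request itself.
    photo = b"...jpeg bytes..."
    req = urllib.request.Request(
        BACKEND_URL, data=photo, headers={"Content-Type": "image/jpeg"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Main loop (illustrative): snap a photo every few seconds and alert.
# while True:
#     message = alert_text(capture_and_check())
#     if message:
#         oled.display(message)  # whatever the OLED driver exposes
#     time.sleep(5)
```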
Backend: The Python/Flask server first sends the photo to Microsoft Cognitive Services' Computer Vision API, which returns a textual description of the image. If that result is inconclusive, the image is passed to a convolutional neural network (GoogLeNet), built on the Caffe library and pre-trained for material classification.
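The two-stage pipeline could be sketched as follows. The keyword table, confidence threshold, and the two helper callables are illustrative assumptions; the real Computer Vision call and Caffe inference are stubbed behind them.

```python
# Keywords one might scan for in the Computer Vision caption; the list
# the project actually used is not documented, so this is illustrative.
RECYCLABLE_KEYWORDS = {
    "bottle": "plastic",
    "can": "metal",
    "paper": "paper",
    "cardboard": "paper",
    "glass": "glass",
}

CONFIDENCE_THRESHOLD = 0.5  # assumed cut-off for a "conclusive" caption

def material_from_caption(caption, confidence):
    """Stage 1: accept the caption only if it is confident and mentions a
    known recyclable; otherwise return None to trigger the fallback."""
    if confidence < CONFIDENCE_THRESHOLD:
        return None
    for word in caption.lower().split():
        if word in RECYCLABLE_KEYWORDS:
            return RECYCLABLE_KEYWORDS[word]
    return None

def classify(image_bytes, describe, cnn_classify):
    """Happy path first, CNN fallback second.

    `describe` stands in for the Computer Vision API call (returning a
    caption and its confidence); `cnn_classify` stands in for GoogLeNet
    inference on the Caffe side.
    """
    caption, confidence = describe(image_bytes)
    material = material_from_caption(caption, confidence)
    if material is not None:
        return material
    return cnn_classify(image_bytes)
```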
Database: The backend logs each successful recognition to an Elasticsearch database, including the type of recyclable material detected and a timestamp.
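Logging a recognition might look like the sketch below. The index name and document fields are assumptions, and the client call is shown as a comment because it requires the elasticsearch-py library (and the exact keyword, `document` vs `body`, varies by client version).

```python
from datetime import datetime, timezone

def make_log_doc(material):
    """Build the document indexed for each successful recognition."""
    return {
        "material": material,  # e.g. "plastic"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Indexing the document (requires the elasticsearch-py client):
# from elasticsearch import Elasticsearch
# es = Elasticsearch("http://localhost:9200")
# es.index(index="itrash-recognitions", document=make_log_doc("plastic"))
```

Keeping the timestamp in ISO 8601 lets Elasticsearch map it as a date field, which is what a Kibana time-series dashboard needs.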
Frontend: Kibana serves a dynamic dashboard over the logged data.
Challenges we ran into
Inference in the neural network was too slow on our student-budget t2.micro EC2 instance, so we added the Microsoft Cognitive Services stage as a faster 'happy path' for items that the service can caption confidently.
Accomplishments that we're proud of
We managed to hook up the hardware (camera and OLED screen), bring up a dashboard, and integrate deep learning techniques for image recognition within this time-boxed hackathon, short nap included.
What we learned
A renewed inspiration for making fun hacks.
What's next for iTrash
An image dataset of common household recyclables and trash can be collected to train a network dedicated to recycling classification.