Inspiration

We inherently know the importance of recycling, but we get lazy and sometimes forget. People often throw their waste into the wrong bin (e.g. bottles into compost, or food waste into landfill), which defeats the entire purpose of recycling. We wanted to create a project that solves this issue.

What it does

Introducing sorta, a project that combines computer vision and hardware to make recycling more convenient and effective. Our system uses image recognition via a mounted camera to identify a piece of waste and determine which recycling bin it belongs in. The system then communicates with the bins so that only the identified bin opens for the user. As a result, people have no choice but to dispose of their waste correctly.

How we built it

For capturing images and opening the correct bin, our system uses a Raspberry Pi 3 with an Arducam, an ultrasonic range sensor, and three servo motors. The Arducam captures images, and the Pi uploads them to a storage account in Microsoft Azure. To avoid collecting superfluous data, the Arducam only takes pictures when the ultrasonic range sensor detects a user within a certain distance of the waste disposal system. Finally, three standard servo motors manage the opening and closing of the recycling bins.
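The distance check that gates the camera can be sketched as follows. This is a minimal sketch of the trigger logic only, assuming an HC-SR04-style ultrasonic sensor; the 50 cm threshold and function names are illustrative, not our exact implementation.

```python
# Trigger logic: convert the ultrasonic echo pulse width to a distance,
# and capture an image only when a user is close enough.

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air, cm/s


def echo_to_distance_cm(echo_duration_s: float) -> float:
    """Convert the echo pulse width (seconds) to a one-way distance in cm.

    The pulse covers the round trip to the object and back, so halve it.
    """
    return echo_duration_s * SPEED_OF_SOUND_CM_S / 2


def should_capture(echo_duration_s: float, threshold_cm: float = 50.0) -> bool:
    """Fire the camera only when someone stands within the threshold."""
    return echo_to_distance_cm(echo_duration_s) <= threshold_cm
```

In the full system, the echo duration would come from timing a GPIO pin on the Pi, and a `True` result would trigger an Arducam capture.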

On the software side, after the Raspberry Pi takes a picture of the piece of waste, the image is uploaded to Azure Blob Storage so that the captured pictures can easily be iterated through and converted into the format the APIs expect (a URL). The system then uses the Microsoft Cognitive Services Computer Vision API to detect the type of waste (e.g. plastic bottle, plastic bag, can). A category hierarchy built on top of these detections sorts each object into progressively more general categories, until it lands in one of the three categories of a typical recycling bin at Stanford: compost (fruits, vegetables), recycle (glass, plastic, metal, paper), or landfill. For extra accuracy, the system also runs the pictures through the Microsoft Custom Vision API, which we trained ourselves on a mixture of categorized waste pictures from ImageNet and photos of food-wrapper waste we took around the hackathon. Our original plan also involved training this model to detect brand names and company logos.
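The category hierarchy works roughly like this. The tag names and mappings below are illustrative, not the exact labels the Computer Vision API returns:

```python
# Two-level hierarchy: API tag -> material -> bin category.
# Anything we cannot place confidently defaults to landfill.

MATERIAL_BY_TAG = {
    "bottle": "plastic",
    "can": "metal",
    "newspaper": "paper",
    "jar": "glass",
    "banana": "organic",
    "apple": "organic",
}

BIN_BY_MATERIAL = {
    "plastic": "recycle",
    "metal": "recycle",
    "paper": "recycle",
    "glass": "recycle",
    "organic": "compost",
}


def classify(tags):
    """Map a list of detected tags to one of the three bin categories."""
    for tag in tags:
        material = MATERIAL_BY_TAG.get(tag.lower())
        if material:
            return BIN_BY_MATERIAL[material]
    return "landfill"
```

Defaulting unrecognized objects to landfill is the safe choice, since contaminating the recycle or compost streams is worse than over-filling landfill.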

The results from both APIs are then stored in Azure Cosmos DB using its MongoDB API. Each record captures the company brand logo (if detected), the type of waste, and the location (trivial to record, since the physical system is stationary and its location can be hardcoded). This data has potential for analytics applications, e.g. a heatmap showing which companies are producing which types of waste, and at what levels, in different locations.
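One disposal event could be assembled into a record like this. The field names are a hedged sketch of our own schema, not a Cosmos DB requirement, and the location string is a placeholder:

```python
# Build one MongoDB-style document per disposal event for Cosmos DB.
from datetime import datetime, timezone

BIN_LOCATION = "Stanford"  # the unit is stationary, so this is hardcoded


def make_record(bin_category, tags, logo=None):
    """Assemble a document combining both APIs' output with metadata."""
    return {
        "location": BIN_LOCATION,
        "bin": bin_category,          # compost / recycle / landfill
        "tags": list(tags),           # raw tags from the vision APIs
        "brand": logo,                # from logo detection; may be None
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Grouping these documents by `location` and `brand` is what would drive the heatmap idea above.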

Challenges we ran into

Waste sorting isn’t a purely visual problem. People sort waste according to what it is, and what it is depends on more than how it looks: a piece of metal goes in the recycling bin partly because it feels like metal in our hands. The problem therefore involves more factors than we can currently tackle, so we have to optimize under inevitably biased, sub-optimal conditions.

No one on our team had a strong hardware background, so implementing all the Raspberry Pi features was a challenge. Although we managed to implement camera capture with distance detection and image transfer to the Microsoft Azure storage account, we were not able to implement the opening and closing of the recycling bins through the servo motors.

Data selection was also challenging, as we didn’t have access to a database with good quality control. In the end, we had to manually select general, valid photos of glass, plastic, metal, paper, etc. so that the model we developed with the Microsoft Custom Vision API could be trained properly.

Along the way we ran into many small bugs that really tripped up the team’s progress. Many of them involved calling APIs, because the errors returned were generally unclear about which parameter was missing from the call. A large amount of time came down to trial and error, guessing the conventions of these API calls.

Accomplishments that we're proud of

For every member of our team, this event was the first hackathon we ever attended. We’re therefore very proud of submitting an idea and project that we felt could have a beneficial impact on society.

Although the learning curve was steep at times, we are proud that we stuck with it throughout the whole event. We definitely learned a lot and were exposed to a ton of amazing resources we can use for future projects. We also had a blast discovering the collaborative, innovative, and inspiring nature of hackathons, and we met a lot of people this weekend that we otherwise would never have had the chance to meet. To conclude, we are really looking forward to attending our next hackathon!

What we learned

Hardware, databases, Raspberry Pi, Microsoft Cognitive Services (including the Computer Vision API and the Custom Vision API)

What's next for sorta

We could integrate more sensors into our system to make it easier to distinguish certain materials, like metal.

We could also integrate the motors we were not able to finish this time, so that the user only needs to place the trash on a plate, and our system can categorize it and place it into the right bin.

If company brand-logo and location data is also collected at a larger scale, with sharper waste classification, we believe the collected data can be very useful. It could be used to detect consumer patterns in different locations, as well as to gather waste data for environmental purposes.
