Inspiration

As the council jotted down ideas, our search for a project to better our lives came to an end when garbage sorting came to mind. People often toss everyday items into the wrong bin, and those small blunders add up quickly in an environment where most of us give little thought to what we throw away.

Our answer to these cringe-worthy habits is Sort the 6ix, an application that identifies and sorts an object with your phone camera. With a bit of object detection magic and some API calls, the app takes an image, presumably of the debris you're looking to dispose of, categorizes it into the right bin, and provides instructions for safe disposal.

How we built it

The app was built with React Native, with the help of Expo. We used Google Cloud's Vision AI to detect and classify the object in frame by producing a list of labels. Those labels and their confidence scores are passed to our Flask backend, which matches them against the City of Toronto's open Waste Wizard dataset to determine where each object should be disposed of, along with additional instructions for cleaning items or dealing with hazardous waste.
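Roughly, the detection step looks like the sketch below. The photo file name, the backend URL, and the `/classify` route are placeholder assumptions for illustration; the label detection call itself is Google Cloud's Vision client library.

```python
# Sketch: run label detection on a photo with Google Cloud Vision, then
# forward the labels and scores to the Flask backend.
# "photo.jpg", the backend URL, and /classify are illustrative placeholders.
import requests
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Vision returns label annotations, each with a description and a confidence score.
response = client.label_detection(image=image)
labels = [
    {"label": annotation.description, "score": annotation.score}
    for annotation in response.label_annotations
]

# Hand the labels off to the Flask backend, which maps them to a Waste Wizard
# category and any extra handling instructions.
result = requests.post("http://localhost:5000/classify", json={"labels": labels})
print(result.json())
```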

Challenges we ran into

A big roadblock in our project was finding a good enough image detection model: the trash dataset (double entendre intended) we used contains a lot of very specific objects, and the object detection models we tried either didn't work or weren't expansive enough to cover it. A decent portion of our time was spent looking for a model that suited our requirements, and we eventually settled on Google Cloud's Vision AI as a compromise.

There were also dependency issues that caused some headaches for our group, and the dataset uses a lot of HTML formatting in its text fields, which we had trouble working with.
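A minimal way to clean those HTML-formatted fields might look like the sketch below; the "Body" field name is an assumption about the dataset's schema, and regex-based tag stripping is just one approach.

```python
# Sketch: strip HTML out of a dataset entry before matching against it.
# The "Body" field name is an assumed example; adjust to the dataset's schema.
import html
import re

def strip_html(text: str) -> str:
    """Unescape entities like &amp; and drop tags like <ul><li>...</li>."""
    no_tags = re.sub(r"<[^>]+>", " ", html.unescape(text or ""))
    return re.sub(r"\s+", " ", no_tags).strip()

entry = {"Body": "<ul><li>Place in the <strong>Blue Bin</strong> &amp; rinse first.</li></ul>"}
print(strip_html(entry["Body"]))  # -> "Place in the Blue Bin & rinse first."
```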

Accomplishments that we're proud of

We're proud that we got both the app and the object detection working. We successfully navigated Google Cloud's API for the first time and brought it to the comfort of your phone camera.

We also used a second model from Hugging Face, all-MiniLM-L6-v2, for semantic search to better contextualize the camera output. The model maps sentences and paragraphs to a 384-dimensional dense vector space, which lets us compare the Vision labels against the trash categories from the dataset and pick the most relevant one.
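A minimal sketch of that semantic search step, using the sentence-transformers library, is below. The category list here is illustrative; in the app the categories come from the Waste Wizard dataset.

```python
# Sketch: embed the Vision labels and the trash categories with
# all-MiniLM-L6-v2, then pick the category with the highest cosine similarity.
# The category list is illustrative; real categories come from the dataset.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

categories = ["blue bin (recycling)", "green bin (organics)", "garbage", "hazardous waste depot"]
category_embeddings = model.encode(categories, convert_to_tensor=True)

def best_category(label_text: str) -> str:
    """Embed the label text and return the closest category by cosine similarity."""
    query_embedding = model.encode(label_text, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, category_embeddings)[0]
    return categories[int(scores.argmax())]

print(best_category("plastic bottle, drinkware"))
```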

What we learned

During the 36 hours, we learned how to build and work with APIs, how to use object recognition models and properly apply them in our application, and how to combine that with semantic search to produce a result from a comprehensive .json dataset and pull the relevant information out of it.

And most importantly, we learned that React Native wasn't the play as a frontend framework.

What's next for Sort the 6ix

The time constraint didn't leave us room to build a physical version, so we plan to turn this into a physical product that actively scans for objects and gives quick visual feedback. It could then be mounted directly onto garbage grabbers in Toronto, helping people identify and sort items on the spot and maximize their positive environmental impact on a whim.

Built With

Expo, React Native, Flask, Google Cloud Vision AI, Hugging Face (all-MiniLM-L6-v2)
