The universal logo for Recycling, used for our app logo
What our app looks like from start-up
An item that is 100% recyclable in most places
The Inspiration: Our inspiration came from the need to help our environment in any way we can. At the keynote that opened the weekend, Chamath Palihapitiya said words that stuck with us - find a problem, and fix it. While we don’t think our project will be the outright solution to the garbage crisis, we hope it can mitigate unnecessary stress caused by a lack of information.
What it does: Our project lets the user take a picture of an object and tries to identify it. If the app thinks it recognizes the object, it lists a set of choices for the user to pick from; otherwise, the user is prompted to tell the app what the object is. Once the app knows what the object is, it recommends locations where the user can drop the object off.
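The decision flow above can be sketched as follows. This is a minimal illustration, not the app's actual code: the function name, the label format, and the confidence threshold are all assumptions, standing in for the Android client's handling of classifier results.

```python
# Hypothetical sketch of the identification flow: if any labels are
# confident enough, offer them as choices; otherwise ask the user.
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for "the app thinks it recognizes it"

def identify(labels):
    """labels: list of (label, confidence) pairs from the classifier.
    Returns either a list of choices or a prompt for manual input."""
    choices = [label for label, score in labels if score >= CONFIDENCE_THRESHOLD]
    if choices:
        return {"mode": "choose", "options": choices}
    return {"mode": "ask_user", "options": []}
```

For example, `identify([("juice box", 0.82), ("carton", 0.71), ("toy", 0.2)])` would offer "juice box" and "carton" as choices, while an all-low-confidence result falls through to prompting the user.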
How we built it: We built our project using Google Cloud Vision and its image classification API. We then integrated it into an Android application that lets us take pictures of items, identify whether they are recyclable, and, if required, display a map of where the user can dispose of them. Using Firebase, we created a database that stores objects and their properties (recyclable or not, and whether they can be recycled curbside or only at a specific location).
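The lookup the database performs can be sketched like this. The schema and entries below are illustrative stand-ins for the Firebase data, not the production schema:

```python
# Illustrative stand-in for the Firebase object database:
# each entry records recyclability and where disposal happens.
DISPOSAL_DB = {
    "juice box": {"recyclable": True,  "curbside": True,  "dropoff": None},
    "battery":   {"recyclable": True,  "curbside": False, "dropoff": "hazardous-waste depot"},
    "styrofoam": {"recyclable": False, "curbside": False, "dropoff": None},
}

def disposal_advice(item):
    """Map an identified object to a disposal recommendation."""
    props = DISPOSAL_DB.get(item)
    if props is None:
        return "unknown item, ask the user"
    if not props["recyclable"]:
        return "not recyclable"
    if props["curbside"]:
        return "recycle curbside"
    return f"drop off at: {props['dropoff']}"
```

In the real app the same three properties (recyclable or not, curbside or special location) drive whether a map of drop-off locations is shown.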
Challenges we ran into: While building our project, we ran into several bugs, such as the camera not detecting objects at all. For example, we took a picture of a juice box and it returned nothing; it took several attempts before the program started recognizing it. We also experienced slow image processing times, so we compressed the images before upload, which lowered runtimes slightly.
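One way to compress captures is to lower the JPEG quality as the image size grows. The heuristic below is a hypothetical sketch of that idea (the budget and quality numbers are assumptions, not the values we used); on Android, the chosen quality would be passed to `Bitmap.compress(Bitmap.CompressFormat.JPEG, quality, stream)`.

```python
# Hypothetical heuristic: scale JPEG quality down for oversized captures
# so uploads to the classifier stay small.
def jpeg_quality_for(size_bytes, budget_bytes=500_000):
    """Return a JPEG quality (30-90) so larger captures compress harder."""
    if size_bytes <= budget_bytes:
        return 90  # already within budget: keep quality high
    ratio = budget_bytes / size_bytes
    return max(30, int(90 * ratio))  # floor at 30 to stay recognizable
```

For instance, a capture within budget keeps quality 90, while one twice the budget drops to 45, trading some fidelity for faster upload and processing.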
Accomplishments that we’re proud of: We’re proud to have taken an idea and brought it to life at Hack the North. We were originally looking for one or two extra members for our group, but eventually concluded we would have to do it as a team of two. We managed to learn the APIs and implement them across different languages, learning as we went.
What we learned: Coming into this project, neither of us knew anything about machine learning. Over the 36 hours, we learned how difficult it is to train a model to recognize images, because you need many thousands of images to reach even a modest degree of accuracy.
What’s next: Our next step is to create our own dataset and machine learning model, at which point we can use back-propagation to train the model on user input. Currently, we use Google’s dataset, which does not allow us to re-train it on objects it misclassifies. Building our own model would give us higher accuracy and faster recognition times, since we could eliminate objects that have a low chance of being queried.
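To make the back-propagation idea concrete, here is a toy sketch of training a single logistic unit on user corrections, framed as (feature, is-recyclable) pairs. Everything here is illustrative: a real model would be a deep network over image pixels, not one weight over one feature.

```python
import math

def train_step(w, b, x, y, lr=0.5):
    """One back-propagation step for a single logistic unit:
    forward pass, cross-entropy gradient, parameter update."""
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # forward: sigmoid activation
    grad = p - y                              # dLoss/d(wx+b) for cross-entropy
    return w - lr * grad * x, b - lr * grad

def avg_loss(w, b, data):
    """Mean cross-entropy loss over the dataset."""
    eps = 1e-9
    total = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        total += -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return total / len(data)

# Toy "user corrections": (feature, is_recyclable) pairs, purely illustrative.
data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
w, b = 0.0, 0.0
before = avg_loss(w, b, data)
for _ in range(50):                 # repeated passes over the corrections
    for x, y in data:
        w, b = train_step(w, b, x, y)
after = avg_loss(w, b, data)        # loss drops as the unit fits the labels
```

The point of the sketch is the loop: each user correction yields a gradient that nudges the parameters, which is exactly what Google's fixed dataset does not let us do.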