Inspiration
We saw a documentary in class about how items that are supposed to be recycled end up in landfills. Eventually this waste hurts the environment, reaching oceans and contaminating water sources. We felt we could help solve this problem by combining accessible technology, such as phones, with emerging technology, such as machine learning, to help citizens identify whether an item is recyclable, garbage, or compost.
What it does
Pure uses the Core ML library in Swift to identify whether an item is recyclable, garbage, or compost. Classification runs live, so the user does not take pictures and upload them; instead, they simply hover the camera over the item and the app identifies it.
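A minimal sketch of how live classification like this can be wired up with Vision and Core ML is below. The model name `WasteClassifier` and the way frames are delivered are assumptions for illustration, not Pure's actual code; in a real app, `classify(pixelBuffer:)` would be called from an `AVCaptureVideoDataOutput` delegate.

```swift
import Vision
import CoreML

// Hypothetical live classifier: a compiled Core ML model named
// "WasteClassifier" is assumed to be bundled with the app.
final class LiveClassifier {
    private lazy var request: VNCoreMLRequest = {
        let model = try! VNCoreMLModel(for: WasteClassifier().model)
        return VNCoreMLRequest(model: model) { request, _ in
            // Take the top prediction, e.g. "recycling", "garbage", or "compost".
            guard let top = (request.results as? [VNClassificationObservation])?.first
            else { return }
            print("\(top.identifier) (confidence: \(top.confidence))")
        }
    }()

    // Call once per camera frame; Vision runs the model on the pixel buffer.
    func classify(pixelBuffer: CVPixelBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}
```

Running the request per frame, rather than on saved photos, is what makes the "hover over the item" experience possible.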
How we built it
We used Swift and the Core ML library, and we built our own machine learning models. We could not find existing models for identifying garbage and recycling, so we collected images from Google and trained a classifier ourselves. Xcode ships with Create ML, a companion to Core ML that lets developers train and test their own models. To train ours, we showed it pictures of things that can be recycled and things that belong in the garbage. We then integrated the resulting model into the app and fed it information from the camera in real time to identify items.
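Training a model like this with Create ML is only a few lines in a macOS playground. The sketch below assumes the Google images are sorted into folders named after their labels (e.g. `TrainingData/recycling`, `TrainingData/garbage`); the paths and model name are placeholders.

```swift
import CreateML
import Foundation

// Assumed layout: one subfolder per label under TrainingData/.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")

// Create ML trains an image classifier from the labeled directories.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Save the trained model; the .mlmodel file is then dragged into the
// Xcode project, where it compiles into a Swift class.
try classifier.write(to: URL(fileURLWithPath: "/path/to/WasteClassifier.mlmodel"))
```

The folder names become the class labels the app later reads back from each prediction.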
Challenges we ran into
It was hard for us to manipulate the app's UI, and this was also our first time using Swift; the language was completely new to us. We used online resources to help us code, but tried not to copy any code from the internet and made sure everything was original. Another challenge was that, to test the app on a real device, we needed to connect the phone to the laptop with a USB cable, but my laptop only had USB-C ports and we did not have an adapter. My group thought fast and turned to our fellow peers for help, and luckily we found someone who had a USB-C to USB 3 dongle. Finally, one of the biggest challenges was displaying the result as a label under the camera view. The labels were static, meaning they did not change, but the label needed to update according to what was being shown. We scoured the internet, searching coding forums and YouTube videos, until we finally found a tutorial.
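The fix for the static-label problem boils down to assigning the label's text from the classification callback, on the main thread. A sketch, assuming a hypothetical view controller with a `resultLabel` outlet (not Pure's actual names):

```swift
import UIKit
import Vision

class CameraViewController: UIViewController {
    @IBOutlet weak var resultLabel: UILabel!

    // Called from the VNCoreMLRequest completion handler for each frame.
    func handle(results: [VNClassificationObservation]) {
        guard let top = results.first else { return }
        // UIKit requires UI updates on the main thread; the classification
        // callback runs on a background queue, so we hop over explicitly.
        DispatchQueue.main.async {
            self.resultLabel.text = top.identifier
        }
    }
}
```

Because `handle(results:)` runs for every frame, the label tracks whatever the camera is currently pointed at instead of staying fixed.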
Accomplishments that we're proud of
We are really proud of the app itself, because it successfully identified items around the room in real time. We are also really proud that we were able to build a machine learning app in a day. Finally, we are proud that the app is truly ours: we used no third-party APIs, external code, or pre-trained ML models. We wrote our own code and trained our own models using Google Images.
What we learned
We learned a lot from this hackathon and from building the app itself. We learned how to make ML models in Xcode and how to build a single-page app in Swift. In general, we learned a great deal about machine learning and how to use it to classify things.
What's next for Pure
A next step for Pure could be bringing this model to IoT: for example, a smart trash can that identifies and separates garbage from recycling, a step beyond the app. Pure has great potential to help the environment.