Inspiration

We believe that reusing waste items should be a frictionless experience. Today, there is a pressing need for reuse practices that are simple and easy to adopt, for the following reasons:

  • hyper-consumerism in the 21st century
  • growing volumes of household waste
  • oceans polluted with plastics
  • climate change
  • reusing is time-consuming for most people
  • recycling is not always an option

Snap N' Reuse can also be used as an educational tool by teachers to impart practical, real-life environmental education. During COVID, co-curricular activities and labs are not being held like before because classes are remote, so teachers can use Snap N' Reuse to encourage kids to make DIY projects and utilize waste items at home. This also helps reduce their carbon footprint and encourages them to lead a greener, more sustainable life from a young age.

It also helps reduce waste on Earth, shrink carbon footprints, curb plastic pollution in the oceans, and grow the circular economy, while educating people of all age groups, especially youngsters, about the importance of reusing.

What it does

Our app takes images of waste items as input, passes them to an ML model for predictions, and then gives recommendations on how to reuse that item to make something useful. This helps in increasing reusability of waste household items which people normally throw away.

The UI of our webapp follows a simple 3-step process:

  1. The application takes an image of a waste item from the user.
  2. Our custom ML model runs inference on the image and predicts the waste category.
  3. Based on the predicted category, curated Instructables are fetched for reusing that item.
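Steps 2 and 3 boil down to picking the highest-scoring class from the model's output and looking up guides for it. A minimal plain-JavaScript sketch (the class order and guide URLs are illustrative assumptions; the real app reads guides from a JSON file in our repo):

```javascript
// Hypothetical sketch of steps 2-3. The label order is an assumption
// about how the model's output scores are indexed, and the guide URLs
// are placeholders, not our actual curated links.
const LABELS = [
  "cardboard", "paper", "glass jar",
  "plastic bottle", "plastic container", "glass bottle",
];

const GUIDES = {
  "plastic bottle": ["https://example.com/bottle-planter"],
  // ...one entry per label in the real JSON file
};

// Pick the highest-scoring class from the model's output scores,
// then look up the curated reuse guides for that label.
function recommend(scores) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  const label = LABELS[best];
  return { label, guides: GUIDES[label] ?? [] };
}
```

In the real app the `scores` array would come from running the TensorFlow.js model on the user's image.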

How we built it

TL;DR

We used TensorFlow.js for our custom ML model and Python for scraping images and article links. We made the webapp with HTML, CSS, and JavaScript, and deployed it via GitHub + Netlify.

Detailed Explanation

First, we gathered a total of ~500 images belonging to 6 categories of waste items (cardboard, paper, glass jar, plastic bottle, plastic container, glass bottle) from Google Images. Then we trained our ML model on these images, reached an accuracy of 90%, and converted the TensorFlow/Keras model (a .h5 file) to a TensorFlow.js model (which is in our GitHub repo).

We also gathered links to curated Instructables on how to reuse the above 6 waste items and stored them in a JSON file (also in our GitHub repo).
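The layout of such a guides file might look like this (keys match the six categories; the titles are placeholders and the URLs are deliberately elided, not our actual curated links):

```json
{
  "glass jar": [
    { "title": "Mason jar lantern", "url": "https://www.instructables.com/..." },
    { "title": "Spice storage jars", "url": "https://www.instructables.com/..." }
  ],
  "plastic bottle": [
    { "title": "Self-watering planter", "url": "https://www.instructables.com/..." }
  ]
}
```

Keeping the guides in a static JSON file means the webapp can stay fully client-side, with no backend needed to serve recommendations.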

Then we built the webapp with HTML, CSS, and JavaScript, in which the user can upload an image or capture one with a webcam or phone camera and get recommendations on how to reuse that item.
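Before the captured image reaches the model, its raw canvas pixels have to be converted into normalized floats. In the app this is handled by TensorFlow.js itself (e.g. `tf.browser.fromPixels` followed by a divide-by-255); the sketch below just illustrates what that preprocessing does, using a hypothetical helper name:

```javascript
// Hypothetical sketch: turn canvas ImageData-style RGBA bytes (0-255)
// into the normalized RGB float array a model expects. This mirrors
// what tf.browser.fromPixels(...).div(255) does inside TensorFlow.js.
function rgbaToNormalizedRgb(pixels) {
  // 4 bytes per pixel in, 3 floats per pixel out (alpha is dropped).
  const out = new Float32Array((pixels.length / 4) * 3);
  let j = 0;
  for (let i = 0; i < pixels.length; i += 4) {
    out[j++] = pixels[i] / 255;     // R
    out[j++] = pixels[i + 1] / 255; // G
    out[j++] = pixels[i + 2] / 255; // B
  }
  return out;
}
```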

Challenges we ran into

During development, we encountered problems while training our ML model, while integrating the Camera/Snap feature on the web app, and in a few other places while using JavaScript.

  1. We were training our own ML models on our own datasets. Initially, our model had really bad accuracy and was not ready for deployment. Also, though we had experience with ML, we hadn't used TensorFlow/Keras much (we mostly use PyTorch). Some issues were ML-related and some were simply due to deprecated code (TensorFlow 1.x vs TensorFlow 2.x errors).

Eventually, we figured out solutions to these challenges by going through the TensorFlow documentation, GitHub issues, and Stack Exchange. We got a model accuracy of 90% and learnt TensorFlow along the way!

  2. We also encountered issues while integrating the camera-snapshot feature in JavaScript, which took a lot of time and plenty of Stack Exchange searching to finally resolve.

Accomplishments that we're proud of

We're proud that we were able to make a full-stack ML web application. We aren't too strong in JavaScript, so we're quite satisfied that we got all the features working. We also learnt how to integrate TensorFlow.js into webapps to make lightweight, client-side ML webapps. All these skills will come in handy in future projects. Finally, a good project/product should focus on both the engineering and the design aspects (UI/UX). We put in a lot of effort to make sure the user experience and onboarding are smooth when people use our app "Snap N' Reuse". We also incorporated elements of neumorphic design to give the user a delightful experience.

What we learned

We learnt how to train ML models using TensorFlow/Keras and then convert those models to the TensorFlow.js format, which can be easily integrated into a web application.

We also learnt how to integrate a camera mode into a webapp and get ML predictions on images taken by the user via webcam, by passing the captured image to the ML model for inference in JavaScript.

What's next for Snap N' Reuse

There are a lot of future prospects and scope for additional features.

  • multi-object detection to make the user experience even better: a user will be able to detect multiple different items in one image and get recommendations to reuse all of them!
  • adding a social layer and gamification to increase participation.
  • building an open public database to spread awareness among the general public about the harms of generating so much waste.


Updates


This weekend (28th-29th November), we introduced some new features, fixed a lot of bugs, and refactored the code and project structure. We also worked on improving the UX (user experience) and making the instructions in different parts of the app short and simple.

Camera Mode Improvements:

In earlier versions, our Camera Mode was not working properly. Since we aren't experienced in JavaScript, we spent a lot of time troubleshooting this part of the project, though it was also a good challenge for us. This weekend we resumed work on it and finally got it working.

Landing Page: To make the purpose of our project "Snap N' Reuse" clear, we also made a landing page which explains the motivation, impact, and technical workings of the project. We dedicated a lot of time to designing the landing page, keeping best UI/UX and marketing/copywriting practices in mind.

Browse Guides: Apart from the landing page, we also added a "Browse Guides" page. We realized that sometimes a user may just want to read some Instructables on how to reuse a particular item without being forced to take or upload a picture. So, this page has a row of buttons for the 6 waste items our ML app can detect, and clicking on any item fetches a few of our curated Instructables for that item.

As we added these pages and our assets (icons, images, etc.) grew, we restructured our project directory like any other professional webapp development project.

Like any good hackers, we also ended up spending 4 hours on 2 silly bugs. But in the end it was a great experience, as we got to polish our JS, CSS, and HTML skills even more.
