According to Deloitte, only 9% of plastics are recycled in Canada. We believe this low rate is largely caused by misclassification or contamination of garbage at the household level. A more efficient classification system for garbage could reduce future landfill waste and pollution.

What it does

First, the mobile app takes a picture of the garbage and sends it to our back-end. The back-end runs the image through our PyTorch model (which reaches 95% accuracy) and returns the garbage category to the mobile app. If the item falls into a category that needs special disposal, the app suggests the closest place to dispose of it.
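The flow above can be sketched in plain Python. The category names, scores, and disposal sites below are illustrative stand-ins, not the project's actual data:

```python
# Sketch of the classify-and-route step the back-end performs.
# Categories and disposal sites here are hypothetical examples.
CATEGORIES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]

# Categories that need special disposal, mapped to (static) example sites
SPECIAL_DISPOSAL = {
    "glass": "Glass depot, 123 Example St.",
    "metal": "Scrap-metal recycler, 456 Sample Ave.",
}

def classify_and_route(scores):
    """Turn raw model scores into the response sent back to the app.

    `scores` holds one score per category (e.g. softmax output); the
    predicted category is the argmax, and a disposal suggestion is
    attached when that category needs special handling.
    """
    category = CATEGORIES[max(range(len(scores)), key=scores.__getitem__)]
    return {
        "category": category,
        "disposal_site": SPECIAL_DISPOSAL.get(category),  # None if ordinary
    }

# Example: scores peaking at index 1 ("glass")
result = classify_and_route([0.05, 0.70, 0.10, 0.05, 0.05, 0.05])
```

In the real app the `scores` come from the PyTorch model and the disposal sites would come from city data rather than a hard-coded table.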

How we built it

We divided our tasks in three:

  • Alix worked on the ML side of things, building the PyTorch model in a Jupyter notebook with a Kaggle garbage-category dataset.
  • André worked on the back-end side of things, ensuring reliable transmission of images between the mobile app and the back-end, and integrating the PyTorch model.
  • Ricky worked on the mobile app side of things, building a Flutter app to take pictures, send them to the back-end, and show the classification. The app also demonstrates fetching disposal locations for special garbage types. For now, disposal sites are static locations, but cities, governments, and other organizations could easily supply real data for this feature.

Challenges we ran into

Alix (ML):

This was the first time I did image recognition with PyTorch, and initially I was only getting accuracies of ~30%. I realised I was switching between training and eval mode at the wrong times, which gave me poor results. Once I fixed that, the model reached a global accuracy of 95%.
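A minimal illustration of that pitfall (not the project's actual model): layers like `Dropout` and `BatchNorm` behave differently per mode, so evaluating while still in train mode produces noisy, misleading accuracy.

```python
import torch
import torch.nn as nn

# Toy model with a Dropout layer to show the train/eval difference
model = nn.Sequential(
    nn.Linear(10, 10), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(10, 6)
)
x = torch.randn(1, 10)

model.train()            # training mode: dropout randomly zeroes units,
with torch.no_grad():    # so repeated forward passes usually differ
    noisy_a, noisy_b = model(x), model(x)

model.eval()             # eval mode: dropout disabled, output deterministic
with torch.no_grad():
    out_a, out_b = model(x), model(x)
```

The fix is simply calling `model.train()` inside the training loop and `model.eval()` before computing validation accuracy.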

André (Back-End):

This was my very first time using Flask, let alone Python! I had difficulty handling multiple incoming requests while a long process (image classification) ran in the background. To fix that, I used the Celery library to run tasks in the background.

Ricky (UI):

This was my first time building a full Flutter mobile interface. I went into it thinking it would be similar to JavaScript, but Flutter development requires a different workflow. Initially, to avoid wasting too much time, I used a lot of open-source templates, but that created bugs I didn't understand. I realised the best approach is to write things yourself, so that you can digest what you are doing.

Accomplishments that we're proud of

Alix (ML):

As I mentioned earlier, this was my first time doing computer vision with PyTorch, and I was feeling very discouraged until ~1 AM because I wasn't able to get above ~40% accuracy. I know 95% is not that impressive for a classification model in general, but I'm still very happy with these first-timer results.

André (Back-End):

This was my first time using Python, so I was very happy to have a fully functioning back-end by the end of the hackathon! It was also my first hackathon, and I'm happy to have worked on a project for social good that could have a real-life purpose.

Ricky (UI):

Although this was my first time using Flutter, I'm proud to have learned enough by 5 AM to apply it in future projects! The dynamic functionality of the UI is basic, but it lets us showcase what our project can do in the real world.

What we learned

We mentioned all of these points earlier, so here is an abbreviated list:

  • Alix: PyTorch computer vision, using modified pre-trained networks
  • André: Python, Flask, Celery, encoded image transmission
  • Ricky: Flutter, Dart

What's next for free(garbage);

Technical improvements:

  • better computer vision model
  • expanded dataset for ML
  • feature tracking with OpenCV to continuously detect multiple types of garbage

Feature improvements:

  • add a database to gather statistics on garbage collection, user activity, etc.
  • integrate the app into smart garbage systems (e.g. automatically sorting incoming items)
  • real-life disposal locations

Built With

  • Python, PyTorch, Flask, Celery
  • Flutter, Dart