In The Beginning

We have always been interested in ways to protect the environment; since it's our future, we have to do something about it. We have also always been interested in technology and how we can use it to help. Since our phones are such a big part of our lives, we thought: why not make an app concept? It would be easily accessible to everyone with a smartphone, and those are usually the people who most need to reduce their carbon footprints.

The Hardware

We used a Raspberry Pi running the latest version of Raspbian, along with open-source machine-learning software from Google: an AI model pre-trained to identify objects and name them simply by scanning a picture. The Hardware Lab gave us a USB Logitech camera to connect to the Pi so it could capture those pictures.

Coding the Camera

The Logitech camera does not work out of the box in native Raspbian (a Linux distro), so we had to use the Linux terminal and type the command fswebcam (filename). This command let us take photos for the Pi to process, after which we got back the calculated recognition percentages. A picture of a plastic water bottle, for example, is labeled "water bottle ... 0.999640" when analyzed by the software.
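The capture step above could be scripted from Python rather than typed by hand. The sketch below is a minimal illustration, assuming fswebcam is installed on the Pi; the function names are ours, not from the project's actual code.

```python
import subprocess

def build_capture_command(filename):
    # fswebcam grabs one frame from the default USB camera and writes it
    # to `filename`; --no-banner suppresses the timestamp overlay
    return ["fswebcam", "--no-banner", filename]

def capture_photo(filename="capture.jpg"):
    """Take a photo with fswebcam and return the saved filename."""
    subprocess.run(build_capture_command(filename), check=True)
    return filename
```

The returned filename can then be handed straight to the image-recognition step.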

Challenges

We planned to save each image, after it went through recognition, into a file that would then produce an end-of-day report on the user's single-use plastic consumption and that day's carbon footprint. We designed it simply, as a proof of concept that we will soon turn into a mobile app. Our biggest challenge came while feeding the image-recognition outputs into the user's "footprint score": the code got corrupted, and we had to download the image-recognition software again.
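The log-and-report idea described above can be sketched in a few lines. Everything here is a hypothetical illustration: the item labels, confidence threshold, and per-item footprint weights are placeholders we invented, not values from the project.

```python
# Illustrative placeholders, not real project data
SINGLE_USE_PLASTICS = {"water bottle", "plastic bag", "straw"}
FOOTPRINT_KG_CO2 = {"water bottle": 0.08, "plastic bag": 0.03, "straw": 0.01}

def log_detection(log, label, confidence, threshold=0.9):
    """Record a recognition result only when the model is confident enough."""
    if confidence >= threshold:
        log.append(label)

def daily_report(log):
    """Summarize the day's single-use plastics and a rough footprint score."""
    plastics = [item for item in log if item in SINGLE_USE_PLASTICS]
    footprint = sum(FOOTPRINT_KG_CO2.get(item, 0.0) for item in plastics)
    return {"items": len(plastics), "kg_co2": round(footprint, 2)}

log = []
log_detection(log, "water bottle", 0.999640)
log_detection(log, "coffee mug", 0.95)  # logged, but not a single-use plastic
log_detection(log, "straw", 0.50)       # dropped: below confidence threshold
print(daily_report(log))  # → {'items': 1, 'kg_co2': 0.08}
```

Filtering on confidence keeps low-certainty guesses like the 0.50 "straw" out of the user's score.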
