Inspiration

Last Saturday I was out riding my bike near a forest, and I looked around and saw just how much stuff was lying on the ground. I thought about the 2030 climate deadline and how many people are still ignorant of how their actions impact the environment. I wanted to make something that could show people just how long it takes to undo a simple act such as dropping plastic on the ground, and so I created the trash drone.

What it does

The drone is controlled by a mobile app that lets the user both launch and land it. While in the air, it uses its onboard camera to scan the surrounding environment and classify every piece of garbage it sees on the ground. Then, when the user lands it, it takes the list of all the items it detected, calculates how long everything would take to decompose, and sends that data back to the frontend app for the user to see.

How I built it

The app runs on a web server hosted on a NodeMCU, which connects to my laptop via a serial port. Once the user enters a command, the NodeMCU sends a command to my laptop, which then runs two scripts simultaneously. The first launches the drone and keeps it sweeping from right to left while constantly grabbing the PNG buffer from its camera and sending it to the second script. The second script is a PyTorch ML pipeline that uses a pre-trained model to identify each piece of trash and add it to an array.
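
As a rough sketch of what the second script does (the model file name, the class list ordering, and the frames folder the first script is assumed to write into are illustrative assumptions, not the project's actual code):

```python
# Sketch of the classification side (script 2). Assumptions: the first script
# drops PNG frames into a "frames/" folder, and the fine-tuned weights live
# in "trash_model.pt".
import io
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

CLASSES = ["garbage bag", "container", "cardboard"]  # the categories from the write-up

# Pre-trained ResNet-18 backbone with its final layer resized to the three classes
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("trash_model.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_frame(png_bytes: bytes) -> str:
    """Turn one PNG buffer from the drone's camera into a trash label."""
    image = Image.open(io.BytesIO(png_bytes)).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return CLASSES[int(logits.argmax(dim=1))]

detected = []  # the array read back once the drone lands
for frame_path in sorted(Path("frames").glob("*.png")):
    detected.append(classify_frame(frame_path.read_bytes()))
```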

Once the user clicks the "off" button, the NodeMCU sends another signal to terminate both scripts and take the array of detected objects as it is. It then loops through that array and adds up the decomposition times of each object. The total decomposition time is printed in a paragraph on the frontend app and becomes visible once the user reloads the HTML page.
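
Here is a minimal sketch of that aggregation step. The per-item decomposition figures are placeholder values for illustration, not the exact table the project uses:

```python
# Sketch of the decomposition-time calculation run once the "off" signal arrives.
DECOMPOSITION_YEARS = {
    "garbage bag": 20,
    "container": 450,
    "cardboard": 0.2,
}

def total_decomposition_years(detected_items):
    """Loop through the detected array and add up each item's decomposition time."""
    return sum(DECOMPOSITION_YEARS.get(item, 0) for item in detected_items)

# Example: the value written into the paragraph on the frontend page.
print(total_decomposition_years(["garbage bag", "container", "container"]))  # 920
```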

Challenges faced

Initially I wanted to use TensorFlow for my sorting algorithm, but custom-training my model made it really inconsistent at identifying certain items such as trash bags. To fix this I switched to PyTorch, despite having never used it before. I spent all night watching tutorials and poring over documentation, and eventually I created a simple model capable of identifying garbage bags, containers and cardboard.
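
A minimal sketch of how such a three-class model could be put together with a pre-trained backbone (the dataset folder layout, hyperparameters, and output file name are assumptions, not the project's actual training code):

```python
# Sketch: fine-tune a pre-trained ResNet-18 so its final layer outputs the
# three trash categories. Assumed folder layout:
#   train/garbage_bag, train/container, train/cardboard
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_data = datasets.ImageFolder("train", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False              # keep the pre-trained backbone frozen
model.fc = nn.Linear(model.fc.in_features, 3)  # garbage bags, containers, cardboard

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)

for epoch in range(5):                       # placeholder epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "trash_model.pt")
```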

Accomplishments that I'm proud of

Because of damage from a previous crash, my drone constantly drifts clockwise, which really messes up any lateral movement. This resulted in 27 separate crashes during the testing period (trust me, I counted). To fix this, I built a routine that constantly checks the yaw, compares it against the initial heading to catch any unintentional changes, and then uses the drone's clockwise() and counterClockwise() functions to correct course. This proved to be the most difficult part of the project, and frankly I'm just happy I made it through without completely destroying my drone.
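
A sketch of what that correction loop could look like, assuming the drone object exposes a yaw reading (get_yaw() here is a hypothetical accessor) alongside the clockwise()/counterClockwise() commands mentioned above. The polling rate and tolerance are assumptions:

```python
# Sketch of the yaw-correction routine. stop_event is a threading.Event that
# gets set when the landing command arrives.
import time

TOLERANCE_DEG = 5  # ignore small wobbles so we don't over-correct

def hold_heading(drone, stop_event):
    """Compare the current yaw to the initial heading and undo unintended drift."""
    initial_heading = drone.get_yaw()
    while not stop_event.is_set():
        # Signed heading error, wrapped into the range [-180, 180)
        drift = (drone.get_yaw() - initial_heading + 180) % 360 - 180
        if drift > TOLERANCE_DEG:
            drone.counterClockwise(int(drift))   # drifted clockwise, rotate back
        elif drift < -TOLERANCE_DEG:
            drone.clockwise(int(-drift))
        time.sleep(0.2)
```

Running something like this in a background thread next to the right-to-left sweep keeps the heading correction independent of the lateral movement commands.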

What I learned

After nearly destroying my drone 20 times, I began to think it wasn't worth it. I wanted to pack it up, but then something happened: I went outside and saw an old man picking up garbage on the street. I thought, if he's doing something to help, what excuse do I have? So I went back in, and in time I got a semi-functional version of my idea up and running. I learned that no matter the risk, the downside of not doing something is always greater. As bad as failure might taste, it's an absolute joy compared to the bitterness of regret.

What's next for trashDrone

I want to get a higher-performance drone so that I don't have to worry about unnecessary drift or course adjustments mid-flight. I'd also like to use a drone with a downward-facing camera, rather than having to keep its distance in order to see trash that's closer to the ground.
