Inspiration

Anyone who lives in NYC can relate: it is the city of lights, but also one of the dirtiest cities in America, if not the dirtiest. Our developers live in different boroughs of New York, but share the common problem of walking past overfilled garbage cans that can easily tip over and spill if one more coffee cup is placed on top. This problem leaves the streets extremely dirty and polluted, and it affects the lives of many New Yorkers.

What it does

Our platform, NYClean, first trains IBM Watson's Visual Recognition service with a custom classifier that recognizes trash bins. We then use the camera on an Android phone as a live image feed to detect whether a trash bin is full or empty, and alert 311 depending on the result of the classification. The same platform could run on street cameras so that many garbage cans can be tracked at once.

How we built it

We trained IBM Watson's Visual Recognition with positive and negative examples: the positive class contains photos of overflowing garbage cans and the negative class contains photos of empty ones. Each class was given many photos to improve accuracy. Our code makes the camera take a picture every second; each picture is sent to IBM Watson through the API, which returns a score for how likely the can is to be full. The code averages every 50 scores and compares that average to our threshold. Once an average exceeds the threshold, we know the garbage can is full and needs to be emptied, and the platform sends a text to 311 through the Twilio API to report it. A rough sketch of this loop is shown below.
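
The following is a minimal sketch of that capture/score/alert loop, assuming the ibm-watson and twilio Python SDKs. The credentials, classifier ID, phone numbers, the 0.75 threshold, the class name "full", and the capture_photo() stub are placeholders for illustration, not our exact setup.

```python
import time
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from twilio.rest import Client

CLASSIFIER_ID = "trashbin_123456789"   # placeholder custom classifier ID
THRESHOLD = 0.75                       # assumed cutoff for "full"
WINDOW = 50                            # average every 50 scores

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("YOUR_WATSON_API_KEY"),  # placeholder
)
twilio_client = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")  # placeholders


def capture_photo():
    """Stub: in the real system the phone camera saves a frame here every second."""
    return "latest_frame.jpg"


def full_score(image_path):
    """Send one photo to Watson and return the score of the assumed 'full' class."""
    with open(image_path, "rb") as image:
        result = visual_recognition.classify(
            images_file=image,
            classifier_ids=[CLASSIFIER_ID],
            threshold=0.0,  # return scores for every class
        ).get_result()
    classes = result["images"][0]["classifiers"][0]["classes"]
    return next((c["score"] for c in classes if c["class"] == "full"), 0.0)


scores = []
while True:
    scores.append(full_score(capture_photo()))
    if len(scores) == WINDOW:
        if sum(scores) / WINDOW > THRESHOLD:
            # The averaged score says the can is full: text an alert.
            # Phone numbers here are placeholders.
            twilio_client.messages.create(
                body="NYClean: the monitored garbage can is full.",
                from_="+15550000000",
                to="+15550000311",
            )
        scores = []
    time.sleep(1)  # one picture per second
```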

Challenges we ran into

This was our first time using IBM Watson's API, and we struggled to get the calls working in our code. Training Watson was also a challenge, because the quality of the training data strongly affects the results. We also wanted to build an Android app that lets users upload pictures of garbage cans and send them to Watson to check whether they are full; the challenge there was uploading the picture to IBM Watson's API.

Accomplishments that we're proud of

We are proud of automating the training procedure for IBM Watson's Visual Recognition, so that we don't have to manually upload 20 to 50 pictures to each class; a sketch of that automation follows. We are also proud of calling the IBM Watson API both to train the classifier and to analyze pictures, and of getting the Twilio API to work alongside IBM's Visual Recognition.
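
Here is a minimal sketch of what that automated training can look like, assuming a recent version of the ibm-watson Python SDK; the zip file names, classifier name, and API key are placeholders.

```python
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("YOUR_WATSON_API_KEY"),  # placeholder
)

# Zip the training photos once (e.g. full_bins.zip, empty_bins.zip) and let the
# script upload them, instead of adding 20-50 pictures to each class by hand.
with open("full_bins.zip", "rb") as full_zip, open("empty_bins.zip", "rb") as empty_zip:
    classifier = visual_recognition.create_classifier(
        name="trashbin",
        positive_examples={"full": full_zip},
        negative_examples=empty_zip,
    ).get_result()

print(classifier["classifier_id"], classifier["status"])  # e.g. 'training'
```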

What we learned

We learned how to train and use IBM's Visual Recognition to recognize when a garbage can needs to be cleared. We also learned how to use Twilio's API to work alongside IBM's Visual Recognition.

What's next for NYClean

We plan to integrate the application with security cameras near trash bins throughout the city so it can run at a much larger scale. We also aim to bring the same platform to an Android app, so that users can take pictures of filled garbage cans and have those pictures sent to our Trash Identifier; when the threshold we set is met, we know the trash can is actually full, and the Twilio API is triggered to alert 311. To give users an incentive to use the app, we have spoken to NYC public data representatives about giving out unlimited MetroCards to users who report a certain number of full trash cans. With this app, we hope NYC will become a cleaner and healthier city!
