Back of the Smart Recycler
Front of the Smart Recycler
Arduino Control of Motor/Platform
Motion Detection and Computer Vision Image Recognition Script
The United States produces 1.2 billion tons of trash each year, with only a 34% recycling rate. We wanted to help curb this issue by creating a product that not only encourages recycling but actively assists with it. We thus created Smart Recycler, a hardware tool that automates the process of separating recyclables (e.g. cans and bottles) from trash items (e.g. food and wrappers) and lets users manage their environmental footprint.
What it does
Smart Recycler automatically separates recyclables from trash items using computer vision.
How I built it
For preprocessing, we scraped web image URLs of different types of recyclables and non-recyclables using a Python script in order to build a training set to run through the Microsoft Computer Vision API. The API returns a set of descriptions, to which we applied a bag-of-words model to determine the keywords that best describe recyclables.
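The bag-of-words step can be sketched roughly as follows. The caption strings, stopword list, and keyword-selection rule here are illustrative assumptions, not the real API output or our exact pipeline:

```python
from collections import Counter

# Hypothetical caption strings, standing in for Computer Vision API descriptions
# of our scraped training images (not real API output).
recyclable_captions = [
    "a plastic bottle sitting on a table",
    "a close up of a green glass bottle",
    "an aluminum can on a counter",
]
trash_captions = [
    "a half eaten sandwich on a plate",
    "a crumpled candy wrapper on a table",
]

# Words too generic to distinguish the two classes (assumed list).
STOPWORDS = {"a", "an", "the", "on", "of", "up", "sitting", "close"}

def keyword_counts(captions):
    """Bag of words: count occurrences of each non-stopword across captions."""
    words = (w for caption in captions for w in caption.lower().split())
    return Counter(w for w in words if w not in STOPWORDS)

# Keep words that appear for recyclables but never for trash items,
# e.g. "bottle", "glass", "aluminum" (while shared words like "table" drop out).
recyclable_keywords = (
    set(keyword_counts(recyclable_captions)) - set(keyword_counts(trash_captions))
)
```

In the real pipeline the captions come from the API responses for the scraped images, and the surviving keywords become the match list used at runtime.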
When the script for Smart Recycler runs, it uses motion detection to detect when an object has been placed. It then queries the Microsoft API and matches the returned descriptions against our keyword list to determine the nature of the object.
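A minimal sketch of this runtime loop's two pieces, assuming simple frame differencing for motion detection and a hypothetical keyword list (the thresholds are illustrative tuning values, not our calibrated ones):

```python
import numpy as np

MOTION_THRESHOLD = 25          # per-pixel grayscale change that counts (assumed)
MOTION_PIXEL_FRACTION = 0.01   # fraction of changed pixels that flags motion (assumed)

# Hypothetical keywords extracted from the bag-of-words preprocessing step.
RECYCLABLE_KEYWORDS = {"bottle", "can", "glass", "aluminum", "plastic"}

def motion_detected(prev_frame, frame):
    """Flag motion when enough pixels change between consecutive grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > MOTION_THRESHOLD)
    return changed / diff.size > MOTION_PIXEL_FRACTION

def classify(api_description):
    """Return 'recycle' if the API's description contains any keyword, else 'trash'."""
    words = set(api_description.lower().split())
    return "recycle" if words & RECYCLABLE_KEYWORDS else "trash"
```

In practice the frames would come from the camera feed, and `api_description` from the Computer Vision API response for the captured image.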
For the hardware, we decided that the most reliable and feasible method of separating the two categories of items would be a tilting platform. Based on the placed item, the platform tilts either left or right, dumping the item into the bin for recyclables or the bin for trash. This was accomplished using a high-torque motor attached to a tilt mechanism, controlled by an Arduino receiving serial commands from our Python scripts. The wooden frame was built from spare wood and screws, and the recycling bins were borrowed from our rooms.
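The Python side of the serial link can be sketched as below. The single-byte commands `b'L'`/`b'R'` are an assumed protocol for illustration, and the function takes any writable byte stream; in practice that stream would be a pyserial connection such as `serial.Serial('/dev/ttyACM0', 9600)` (hypothetical port) matching the baud rate in the Arduino sketch:

```python
def command_for(category):
    """Map the classifier's verdict to the one-byte command the Arduino is
    assumed to expect: b'L' tilts toward recycling, b'R' toward trash."""
    return b"L" if category == "recycle" else b"R"

def tilt_platform(arduino, category):
    """Write the tilt command to a writable byte stream, e.g. a pyserial
    serial.Serial instance opened on the Arduino's port."""
    arduino.write(command_for(category))
```

Keeping the stream as a parameter also makes the logic easy to exercise without hardware, e.g. by passing an `io.BytesIO` buffer.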
Challenges I ran into
Our greatest obstacle was correctly identifying objects using the computer vision API. The Microsoft Computer Vision API is still under development, with its features yet to be fully realized, which limited the device's accuracy. We had to compensate for fairly frequent misidentifications by adjusting the camera's field of view and refining our keyword matching. Another challenge was calibrating our motion detection, since slight vibrations and background noise are to be expected in everyday life. It was also difficult to differentiate objects of different material but the same shape (e.g. a paper cup vs. a styrofoam cup).
Accomplishments that I'm proud of
We’re quite proud that we were successful in actually getting our Python scripts to communicate with the hardware. Based on our testing, the device successfully sorts distinguishable objects such as bottles, food wrappers, and electronics with over 90% accuracy.
What I learned
We learned how to navigate the Microsoft Computer Vision API to characterize objects. Using computer vision to differentiate objects is complicated for hard-to-identify non-recyclables such as paper towels and containers with food or liquids in them.
What's next for Smart Recycler
This hack has great potential for automating domestic and industrial recycling. Computer vision technologies are still under heavy development, and with the continued improvement of available APIs and recognition algorithms, the accuracy and accessibility of computer vision projects will improve. If we revisited this hack, we would focus on providing a user-friendly front end with environmental statistics, as well as streamlining the recognition pipeline to sort and update more efficiently.