Inspiration: The inspiration behind TrashBot stemmed from the growing global need for more efficient and sustainable waste management. With rising environmental concerns, sorting waste automatically and reducing human intervention has become a pressing challenge. We envisioned a system that combines AI, robotics, and automation to make waste sorting faster, more accurate, and more environmentally friendly.
What it does: TrashBot is a fully integrated waste sorting system that uses AI to identify and classify different types of waste—plastic, paper, metal, and more—through real-time image processing. It combines computer vision, deep learning, robotics, Arduino-based sensors, and solid modelling to create a functional prototype that can autonomously sort waste. The system captures images of the trash, processes them using the trained AI model, and guides the robot to pick and sort the items based on their type.
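The sorting decision described above can be sketched in a few lines: the model's predicted class is mapped to a target bin, with low-confidence or unknown items rejected. This is an illustrative sketch only; the label names, bin identifiers, and confidence threshold are assumptions, not the project's actual values.

```python
# Hypothetical routing step for TrashBot: map a predicted waste class
# to a sorting bin. All names and the 0.6 threshold are illustrative.

BIN_FOR_LABEL = {
    "plastic": "bin_1",
    "paper": "bin_2",
    "metal": "bin_3",
}

def route(label: str, confidence: float, threshold: float = 0.6) -> str:
    """Return the target bin, or a reject bin for unknown or uncertain items."""
    if confidence < threshold or label not in BIN_FOR_LABEL:
        return "bin_reject"
    return BIN_FOR_LABEL[label]
```

For example, `route("plastic", 0.92)` returns `"bin_1"`, while an unrecognised or low-confidence item falls through to the reject bin so the robot never mis-sorts on a weak prediction.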
How we built it: The first step in building TrashBot was gathering a labelled dataset of waste images, which we used to train a deep learning model in PyTorch. The model learnt to identify various types of waste, such as plastic, paper, and metal, from image features, and we focused on optimising it so it would generalise well and handle real-time image data.

We integrated OpenCV to handle real-time image capture and preprocessing. The camera feeds frames to OpenCV, which processes and formats them for classification by the AI model. OpenCV also helps detect waste in the environment and ensures smooth interaction between the camera and the model.

To drive the robotic car, we used an Arduino board to manage the sensors that detect waste items and to control the car's movements. The sensors capture each item's position and send that data to the Arduino, which steers the motors to move the car in the desired direction for sorting. Together, image classification and real-time sensor input allow TrashBot to locate and sort waste autonomously.

Finally, we designed the robot's physical components. Using solid modelling software, we created detailed designs for the axles, wheels, and other structural elements, then 3D printed and assembled the parts. This allowed us to prototype efficiently, ensuring that all components fit together and functioned properly in the waste sorting process.
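The preprocessing step OpenCV performs before classification could look like the sketch below: a raw BGR camera frame (the layout `cv2.VideoCapture` returns) is converted to the normalized, channel-first float array that PyTorch models expect. The mean/std values are the common ImageNet constants, which is an assumption here, not TrashBot's confirmed configuration; only NumPy is used so the transform itself is shown without requiring a camera.

```python
import numpy as np

# Assumed normalization constants (standard ImageNet values).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """uint8 BGR HxWx3 frame -> float32 CxHxW array, scaled and normalized."""
    rgb = frame_bgr[:, :, ::-1].astype(np.float32) / 255.0  # BGR -> RGB, [0, 1]
    rgb = (rgb - MEAN) / STD                                # per-channel normalize
    return np.transpose(rgb, (2, 0, 1))                     # HWC -> CHW
```

In the real pipeline, the result would be wrapped in a batch dimension and passed to the trained PyTorch model for classification.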
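The hand-off from the AI model to the Arduino could work roughly as follows: the host sends a short ASCII command over serial, and the Arduino parses it to steer the car toward the matching bin. The `GOTO` command format and the bin numbering are hypothetical, invented here for illustration; in the real system a `pyserial` connection would replace the generic write callable.

```python
# Hedged sketch of a host-to-Arduino command protocol. The "GOTO <n>"
# wire format is an assumption, not TrashBot's documented protocol.

def make_command(bin_id: int) -> bytes:
    """Encode a 'drive to bin' command, e.g. b'GOTO 2\n' for bin 2."""
    if not 0 <= bin_id <= 9:
        raise ValueError("bin_id out of range")
    return f"GOTO {bin_id}\n".encode("ascii")

def send(port_write, bin_id: int) -> None:
    """Send a command via any serial-like write callable (e.g. Serial.write)."""
    port_write(make_command(bin_id))
```

Keeping the protocol to short newline-terminated ASCII lines makes the Arduino side trivial to parse with `Serial.readStringUntil('\n')` and easy to debug from a serial monitor.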
Challenges we ran into: One of the challenges was integrating all the different components—AI, sensors, robotics, and solid modelling—into a cohesive working system. Synchronising the movement of the car with the AI model’s predictions, as well as making sure the sensors functioned properly, required careful debugging and testing.
What we learnt: We learnt that having a large and diverse dataset is key to training an accurate AI model, as the quality of the data directly affects the model's performance. Integrating AI with hardware and sensors in real-time was challenging, but it taught us how to balance software and hardware to create a functional system that adapts to real-world conditions. Using solid modelling and 3D printing helped us quickly build and test our design, highlighting the importance of rapid prototyping in development. Lastly, working under tight deadlines improved our ability to prioritise tasks, collaborate as a team, and manage time effectively.