Inspiration
Our goal was to address the "lazy vacuum" problem. Most robot vacuums seem to wander randomly until, at long last, they hit every spot on the floor. We wanted to design a robot vacuum that recognizes a vocal command and goes straight to the mess, instead of bumping into walls until it gets lucky.
What it does
SmartRover is an AI-powered vacuum cleaning device that uses computer vision to spot and target a specific piece of trash. Rather than moving around randomly, SmartRover stays docked until it receives a command. It then scans its surroundings, locates the "trash," and drives directly to that point to clean it up.
How we built it
We built a SmartRover prototype by mounting a high-definition camera on a modular body to form a vision-based cleaning device. The navigation layer is built on pathfinding software that computes the optimal route from the charging dock to the detected trash. To keep the device moving smoothly along that route, we used a PID algorithm to control the motor speed: it continuously corrects the motor output so the device slows appropriately as it approaches the trash instead of overshooting. By wiring the controller to an object detection model, the device can transition from a "searching" state to a "targeting" state and move toward the trash with high accuracy. The longer-term aim was to connect this pipeline to a voice query, creating a seamless link between a spoken command and the device's ability to clean.
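The PID speed control described above can be sketched as follows. This is a minimal illustration, not our firmware: the gains, setpoint, and class name are hypothetical, and a real deployment would tune the gains against the actual motor and sensor update rate.

```python
class PIDController:
    """Minimal PID controller for motor speed regulation.

    Gains (kp, ki, kd) are illustrative placeholders; real values
    must be tuned on the physical motor.
    """

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target speed (e.g., cm/s)
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        """Return a motor correction given the measured speed and timestep."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: proportional-only controller approaching a 10 cm/s setpoint.
pid = PIDController(kp=1.0, ki=0.0, kd=0.0, setpoint=10.0)
correction = pid.update(measurement=0.0, dt=0.1)   # large error -> large output
correction_near = pid.update(measurement=9.0, dt=0.1)  # near target -> small output
```

The key behavior for our use case is the last two lines: as the rover closes in on the trash and the measured error shrinks, the commanded correction shrinks with it, which is what prevents the device from barreling into the target at full speed.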
Challenges we ran into
The jump from conceptual AI model to physical prototype presented many hurdles, primarily due to hardware limitations. We didn't have the specialized chassis components or the high-torque motor needed to make the physical robot mobile for a live demonstration, so we had to focus on the modular design of the "brain." Our biggest problem was camera compatibility: the integrated circuits (ICs) and Arduino-based controllers used in the project were never designed for the high-speed video processing the hackathon demanded. We spent a good chunk of the event attempting to "flash" the firmware to get the camera working, and ultimately the hardware gave us more trouble than the problem itself. Those compatibility issues cost us hours debugging communication protocols rather than perfecting the pathfinding AI. Moreover, the lack of a GPU made real-time object detection on the local controller impossible, forcing us to rethink how the rover could "see" its environment and "think" in real time without introducing significant lag into the PID control loop.
Accomplishments that we're proud of
We are extremely proud of our ability to create a custom AI model that runs locally on a small integrated circuit. Most object detection models are far too large to fit within the limited flash memory and RAM of a standard IC, but we curated a specialized dataset and applied model compression techniques to shrink the model without compromising accuracy too severely. By training on a very specific dataset of "trash" items found in a home environment, we were able to strip the model down to only the necessary weights and fit the entire intelligence system within a tiny storage footprint. Seeing our code run on the chip and accurately classify our curated dataset within those strict memory confines was a huge win for us.
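One common compression technique for fitting a model into limited flash memory is post-training weight quantization: storing each float32 weight as an int8 value plus a per-tensor scale, cutting the weight footprint by roughly 4x. The sketch below is a generic illustration of that idea, not our actual pipeline; the function names are made up for this example.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: float32 -> int8 + scale.

    Each weight is stored as round(w / scale), where scale maps the
    largest-magnitude weight to 127. Storage drops from 4 bytes to
    1 byte per weight (plus one float scale per tensor).
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

# Toy weight tensor: quantize, then check the reconstruction error.
w = np.array([0.50, -1.27, 0.02, 0.83], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

The trade-off is exactly the one described above: a small, bounded loss of precision per weight in exchange for a model that actually fits in the chip's flash memory.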
What we learned
This project was a huge reality check on the hard limits imposed by hardware. Even when the software logic is correct, the physical constraints of an IC or an Arduino can create bottlenecks that cannot be engineered around. We also learned that a successful autonomous robot is not a "hardware project" alone; it needs a sophisticated ML modeling track as well. Finally, we came to appreciate the sheer scope of the problem: building a robot that can "see," "think," and "move" correctly, and do all of that efficiently, is no easy feat. We now know much better how to balance the high-level goals of AI with the low-level realities of memory and processing power.
What's next for SmartRover
The next step is to spend dedicated time in the makerspace progressing from a conceptual prototype to a fully realized mechanical system. That means building a custom chassis designed to house our sensor array and hold the camera at an optimal height for detecting debris. With the hardware in place, we can focus on perfecting our environment scanning to build more detailed spatial maps, extend our current vision targeting logic, move into long-term testing, and finally integrate the voice command functionality to make SmartRover a fully autonomous household assistant.