Automation can do much more than it does today: instead of being deployed purely for economic reasons, it could be used to help better the world, for example through recycling. Our robot is a small-scale representation of what real-world automation could look like. If robots deployed off the west coast picked up just 5% of the trash in the Pacific Ocean, that would still be 12.1 million pounds of trash every year (USA Today). Our project is a simple robot that drives up to a can and recycles it.
Now that we have established some background, let's talk about some of the technical challenges we encountered while attempting to solve this control problem. From an outside observer's point of view, the task seems relatively simple and not all that impressive, and it would be, if it were not for the limitations of this chassis (more on that later).
When we first grabbed this bot from the back room, we were super excited and immediately finalized its design objectives, settling on a simple two-step protocol for getting a vehicle from point A to point B.
The first and most important objective is localization, i.e., knowing your place in the operating environment. This step is absolutely vital for getting from point A to point B. You can't expect me to walk from here to the bathroom if I have absolutely no idea where in the building I am, right?
We wanted to solve this with a nonlinear state estimator, a seemingly complex but actually simple algorithm that outputs an (x, y) position given velocity readings from in-wheel odometers and orientation information from an onboard gyro.
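To make the idea concrete, here is a minimal dead-reckoning sketch of that kind of estimator (in Python for readability; the actual robot runs RobotC). The function name, units, and step structure are our own illustration, not the project's code: each update averages the two wheel velocities and projects that speed along the gyro heading.

```python
import math

def update_pose(x, y, left_vel, right_vel, heading_rad, dt):
    """One dead-reckoning step: advance the (x, y) estimate using the
    average wheel velocity and the gyro heading.
    Units: meters/second for velocity, radians for heading, seconds for dt."""
    v = (left_vel + right_vel) / 2.0          # chassis forward speed
    x += v * math.cos(heading_rad) * dt       # project onto world x-axis
    y += v * math.sin(heading_rad) * dt       # project onto world y-axis
    return x, y

# Example: driving straight along +x at 1 m/s for 1 s, in 100 small steps.
x, y = 0.0, 0.0
for _ in range(100):
    x, y = update_pose(x, y, 1.0, 1.0, 0.0, 0.01)
```

Because the estimate is just an integral of velocity, any error in the velocity readings accumulates over time, which is exactly why the quality of those readings matters so much later on.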
The second step is to actually choose movements that bring us closer to our goal. For this we wished to use the RAMSETE algorithm, a nonlinear controller that I found in a recently published Italian research paper. This controller takes in our location from the previous step along with a goal trajectory and generates wheel velocities accordingly.
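The core of RAMSETE is a short control law, sketched below under the standard formulation (pose error expressed in the robot's frame, tuning gains `b` and `zeta`); the gain values and function names here are illustrative placeholders, not the project's tuned numbers.

```python
import math

def ramsete(vd, wd, ex, ey, etheta, b=2.0, zeta=0.7):
    """RAMSETE control law.
    vd, wd        -- desired linear/angular velocity from the trajectory
    ex, ey, etheta -- pose error (along-track, cross-track, heading),
                      expressed in the robot's own frame
    Returns the commanded linear and angular velocity."""
    k = 2.0 * zeta * math.sqrt(wd ** 2 + b * vd ** 2)
    # sin(etheta)/etheta, guarded against division by zero near 0
    sinc = 1.0 if abs(etheta) < 1e-9 else math.sin(etheta) / etheta
    v = vd * math.cos(etheta) + k * ex
    w = wd + b * vd * sinc * ey + k * etheta
    return v, w

# Sanity check: with zero pose error the controller passes the
# reference velocities straight through.
v, w = ramsete(1.0, 0.0, 0.0, 0.0, 0.0)
```

The resulting (v, w) pair is then split into left/right wheel velocities using the track width, which is what actually gets sent to the drivetrain.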
The crippling issue with this approach is that our chassis is severely limited in its sensory abilities. By severely limited, I mean it has no sensors at all; it is an entirely blind machine. I know it sounds like I'm exaggerating, but this is essentially the equivalent of parking your car in a mall parking lot while blindfolded. Not fun. That destroys our localization hopes, unless… we can use some clever physics and tuning to relate velocity to voltage.
So we created a model of our system, which, as all you software people know, is very useful. If you're a physics nut you can follow our math, but the key takeaway is that this model is linear!
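A common linear model of this kind relates applied voltage to velocity and acceleration as V = kS·sgn(v) + kV·v + kA·a. The sketch below uses made-up gain values purely for illustration; the form of the equation is the standard permanent-magnet DC motor model, not numbers from our robot.

```python
def feedforward_voltage(v, a, kS=1.0, kV=2.0, kA=0.3):
    """Linear drivetrain model: voltage needed to sustain velocity v
    (m/s) while accelerating at a (m/s^2).
    kS -- static friction voltage (overcomes stiction)
    kV -- volts per unit of velocity
    kA -- volts per unit of acceleration
    All gain values here are illustrative placeholders."""
    sign = (v > 0) - (v < 0)   # sgn(v)
    return kS * sign + kV * v + kA * a

# Cruising at 2 m/s with no acceleration:
volts = feedforward_voltage(2.0, 0.0)
```

Because the model is linear in its gains, it can be fitted with ordinary least squares from a handful of measurements, which is exactly what drivetrain characterization does.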
This means we can do some sneaky stuff like flooring our robot and taking measurements, also known as characterizing the drivetrain. More details about that process are right here. In the end, this lets us estimate velocity, and therefore position, from voltage, which we know since we control it. This is only an approximation, which is why it's definitely not perfect, but it was a unique control problem to say the least.
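As a sketch of what that characterization looks like, assuming steady-state forward-driving samples so the model reduces to V = kS + kV·v, a simple least-squares fit recovers the gains, and inverting the fitted model gives velocity from voltage. The data and function names below are fabricated for illustration only.

```python
def fit_ks_kv(voltages, velocities):
    """Ordinary least-squares fit of V = kS + kV * v over steady-state
    samples (forward motion only, so sgn(v) = 1)."""
    n = len(voltages)
    mean_v = sum(velocities) / n
    mean_V = sum(voltages) / n
    num = sum((vi - mean_v) * (Vi - mean_V)
              for vi, Vi in zip(velocities, voltages))
    den = sum((vi - mean_v) ** 2 for vi in velocities)
    kV = num / den
    kS = mean_V - kV * mean_v
    return kS, kV

# Perfectly linear fake data generated with kS = 1.0, kV = 2.0:
vels = [0.5, 1.0, 1.5, 2.0]
volts = [1.0 + 2.0 * v for v in vels]
kS, kV = fit_ks_kv(volts, vels)

# Once fitted, invert the model: estimate velocity from applied voltage.
est_vel = (5.0 - kS) / kV
```

The estimated velocity then feeds the pose estimator in place of real odometer readings, which is why the whole scheme is only as good as the fit and why real-world disturbances (wheel slip, battery sag) make it imperfect.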
Built With
- robotc
- vex-edr