Inspiration

We are very interested in robotics: two of us have already taken Foundations of Robotics (ELE 456) at URI, and the third member will take the class next semester. In addition, we all participate heavily in clubs that involve human-operated control and robotics. This shared interest led us to build a robotic arm that can control itself.

What it does

Our program uses computer vision to locate colored boxes, and a robotic arm then moves each box to a mirrored location. The boxes sit on a white plane with a 10 mm thick black border around the perimeter. A camera mounted on a tripod, facing top-down over the field, streams video to our program, which highlights the detected boxes on screen and computes their mirrored positions on the other side of the field. Finally, the coordinates of both the physical box and its mirror location are sent to an Arduino, which drives the arm to pick up the block and drop it off at the new location.
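The mirroring step itself is simple geometry: reflect each box's position across the field's center line. A minimal sketch, assuming positions are measured in millimeters from the left border and the field width below is a made-up stand-in for the real dimensions:

```python
# Sketch of the mirroring step. FIELD_WIDTH_MM is an assumed value, not the
# real field dimension; x is measured from the left border, y from the top.
FIELD_WIDTH_MM = 400

def mirror_position(x, y, field_width=FIELD_WIDTH_MM):
    """Reflect a box position across the vertical center line of the field."""
    return field_width - x, y

# A box at (100, 50) mirrors to (300, 50) on a 400 mm wide field.
mirrored = mirror_position(100, 50)
```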

How we built it

- Electrical components: We built a breadboard circuit with six digital servo motors, then ran code in the Arduino IDE to calibrate the motors and to determine the specific turning angle each motor needs for the arm to reach a given coordinate.
- Computer vision: We used OpenCV to capture what the camera was seeing and convert it into usable data. OpenCV was first used to calibrate the camera, and then to spot the blocks on the paper.
- 3D model: We used an open-source 3D model of a 3-to-4-joint robotic arm from this link.

Challenges we ran into

Acquiring parts, servo troubles, the overall difficulty of the project, and getting OpenCV recognition working reliably.

Accomplishments that we're proud of

Each of us meaningfully improved our skills, from servo control to computer vision.

What we learned

Servo control: We learned calibration techniques and got an introduction to inverse kinematics.
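The core of inverse kinematics is solving for joint angles that place the arm's tip at a target point. A minimal two-link planar sketch (an assumption for illustration: the real arm has more joints, but the idea extends link by link), with made-up 100 mm link lengths:

```python
import math

L1, L2 = 100.0, 100.0  # assumed link lengths in mm

def ik_2link(x, y):
    """Return (shoulder, elbow) angles in radians placing the tip at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))  # clamp against rounding
    # Shoulder angle: direction to target minus the offset the elbow introduces.
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

def fk_2link(shoulder, elbow):
    """Forward kinematics, useful for checking an IK solution."""
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    return x, y
```

On the Arduino side, the resulting angles would be converted to degrees and written to the servos.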

Computer vision: When we started this project we had never done any kind of computer vision before. Our robotics background told us that we needed to calibrate the camera, but after that we were on our own. First we followed a camera-calibration tutorial by Nicolai Nielsen (link), where we learned how calibration connects the 3D world to the 2D image.
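Concretely, calibration recovers the camera's intrinsic matrix, which is what connects 3D points to pixel coordinates. A sketch of that projection, using an assumed intrinsic matrix rather than our camera's actual calibration output:

```python
import numpy as np

# Assumed intrinsic matrix K: focal lengths (800, 800) in pixels and a
# principal point at (320, 240). Calibration estimates these for a real camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_cam):
    """Project a 3D point (camera frame, Z pointing forward) to a pixel."""
    p = K @ np.asarray(point_cam, dtype=float)
    return p[0] / p[2], p[1] / p[2]

# A point 10 cm right, 5 cm up, 1 m in front of the camera.
u, v = project([0.1, -0.05, 1.0])
```

Inverting this mapping (plus a known height of the tabletop) is how pixel detections become field coordinates for the arm.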

What's next for ARM - Autonomous Vision-Guided Object Mirroring Application

The purpose of this project is to learn new skills that we can bring into other projects we are currently working on. One project we have in common is the Autonomous Racing Club (ARC) at URI, where we can directly transfer what we learned with OpenCV to real-time RC car recognition when racing against another car.
