Team Members

  • Ryan Tang (127)
  • Kee Wan Ting (125)
  • Chew Jing Wei (143)

Inspiration

We wanted to make something that could convert any kind of image into a sketch, after seeing a video of a YouTuber who borrowed an industrial robot arm to draw out Christmas cards in an automated fashion. The idea also happened to be sitting in one of our team members' own box of project ideas, so being able to strike it off was a nice bonus.

What it does

It takes in a .svg image in Processing and processes it into a path for the robot to doodle out. This opens up the possibility of printing a small digital image as a much larger physical one.

How we built it

Hardware

For the hardware, we assembled a two-wheel-drive robot with a caster wheel on an acrylic chassis. To drive the motors in both directions, we used an L298N motor driver.

Our original plan was to use a Raspberry Pi to parse the SVG file into paths for the robot and send those paths over Serial to an Arduino, which would control the motors. However, due to insufficient voltage and battery capacity, we decided to move all of the motor control onto the Raspberry Pi.
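Driving the L298N from the Pi then came down to toggling GPIO pins. The sketch below is a minimal illustration of that idea rather than our exact wiring: the BCM pin numbers and the PWM frequency are assumptions.

```python
import RPi.GPIO as GPIO
import time

# Hypothetical BCM pins for one L298N channel (IN1/IN2 set direction, ENA sets speed).
IN1, IN2, ENA = 17, 27, 22

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, ENA], GPIO.OUT)
pwm = GPIO.PWM(ENA, 1000)   # 1 kHz PWM on the enable pin for speed control
pwm.start(0)

def forward(speed=60, seconds=1.0):
    """Spin this motor forward at `speed` % duty cycle for `seconds`."""
    GPIO.output(IN1, GPIO.HIGH)
    GPIO.output(IN2, GPIO.LOW)
    pwm.ChangeDutyCycle(speed)
    time.sleep(seconds)
    pwm.ChangeDutyCycle(0)

forward()
GPIO.cleanup()
```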

After we managed to control the motors from the Raspberry Pi, we realized that calibrating them was very difficult without any feedback on the robot's position, so we tried to interface an MPU6050 gyroscope and accelerometer with the RPi over the I2C bus. Using a gyroscope would sidestep the voltage-regulation issues that caused discrepancies in the robot's fine movements. After interfacing it, however, we found that we could not get reliable orientation data out of the sensor.
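For reference, reading the raw gyro rate from the MPU6050 over I2C looks roughly like the sketch below (the register addresses and scale factor come from the MPU6050 datasheet; the bus number and address assume the default wiring). Integrating the rate into a heading is where the drift crept in.

```python
import smbus
import time

MPU_ADDR    = 0x68   # default I2C address of the MPU6050
PWR_MGMT_1  = 0x6B
GYRO_ZOUT_H = 0x47

bus = smbus.SMBus(1)                          # I2C bus 1 on recent Raspberry Pis
bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)  # wake the sensor out of sleep mode

def read_word(reg):
    """Read a signed 16-bit big-endian value from two consecutive registers."""
    value = (bus.read_byte_data(MPU_ADDR, reg) << 8) | bus.read_byte_data(MPU_ADDR, reg + 1)
    return value - 65536 if value > 32767 else value

# Integrate the Z-axis rate to estimate heading; small rate errors accumulate over time.
heading, last = 0.0, time.time()
for _ in range(1000):
    now = time.time()
    rate = read_word(GYRO_ZOUT_H) / 131.0     # 131 LSB per deg/s at the default +/-250 dps range
    heading += rate * (now - last)
    last = now
    time.sleep(0.01)
print(heading)
```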

In a last attempt to obtain a reliable feedback loop for the robot, we interfaced an RPi Sense HAT. To keep the external GPIO pins available for everything else, we wired the necessary Sense HAT pins individually to a breadboard and then to the RPi. With its onboard gyroscope data and much easier interfacing, the robot's turns became noticeably more precise.
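Reading the heading from the Sense HAT is pleasantly simple through the official sense_hat library; a minimal sketch of the kind of yaw feedback we relied on:

```python
from sense_hat import SenseHat

sense = SenseHat()

def current_yaw():
    """Return the robot's current yaw in degrees from the Sense HAT's IMU."""
    return sense.get_orientation_degrees()["yaw"]

print(current_yaw())
```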

The robot motors are currently powered with 4 x AA Batteries and the Raspberry Pi is powered by a 5V Power Bank.

Software

We forked a project from a blog post whose author had built a mechanical hand that can draw out SVG images.

From it, we inherited the methods for parsing the SVG file and for interpolating the Bezier curves found inside it to obtain pixel coordinates.
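For anyone unfamiliar with that step, interpolating a cubic Bezier segment just means evaluating the standard Bezier polynomial at evenly spaced parameter values. The sketch below is illustrative only, not the forked project's exact code:

```python
def cubic_bezier(p0, p1, p2, p3, steps=20):
    """Return steps + 1 (x, y) points sampled along a cubic Bezier segment."""
    points = []
    for i in range(steps + 1):
        t = i / steps
        u = 1 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        points.append((x, y))
    return points
```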

However, we had to convert the pixel coordinates generated from the SVG into vectors for drawing, and further process them into a relative angle and magnitude for the robot to move by on each stroke.
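In essence, each consecutive pair of points becomes a (turn, distance) command relative to the robot's current heading. A sketch of that conversion, assuming the robot starts facing along the +x axis:

```python
import math

def to_turns_and_distances(points):
    """Convert absolute (x, y) points into (turn_degrees, distance) pairs."""
    moves, heading = [], 0.0                         # assume the robot starts facing +x
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        target = math.degrees(math.atan2(y1 - y0, x1 - x0))
        turn = (target - heading + 180) % 360 - 180  # shortest signed turn
        distance = math.hypot(x1 - x0, y1 - y0)
        moves.append((turn, distance))
        heading = target
    return moves
```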

Finally, a separate program had to be written to read the generated angles and magnitudes and translate them into signals that activate and deactivate the bot's motors.
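The overall shape of that program is a turn-then-drive loop: rotate in place until the gyro heading matches the target, then drive forward for a time proportional to the distance. The sketch below assumes placeholder set_motors and current_yaw helpers and an invented SECONDS_PER_UNIT calibration constant:

```python
import time

SECONDS_PER_UNIT = 0.02   # assumed calibration: seconds of driving per distance unit

def execute(moves, set_motors, current_yaw):
    """Run (turn_degrees, distance) moves using gyro feedback for the turns."""
    for turn, distance in moves:
        target = (current_yaw() + turn) % 360
        direction = 1 if turn >= 0 else -1
        set_motors(direction, -direction)          # spin in place
        while abs((target - current_yaw() + 180) % 360 - 180) > 2:
            time.sleep(0.005)
        set_motors(1, 1)                           # drive straight
        time.sleep(distance * SECONDS_PER_UNIT)
        set_motors(0, 0)
```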

Challenges we ran into

  • There were numerous hardware hiccups
  • It was tedious to figure out the formulas for converting the vectors into their relative angles and magnitudes for drawing.
  • Filtering the vectors generated from the image so that they are neither too short nor at too small an angle from the previous vector (see the sketch after this list).
  • Being unable to determine the robot's current heading reliably using the MPU6050 accelerometer and gyroscope alone.
  • Finding out that we did not have enough electrical power to run both an Arduino and a Raspberry Pi on the bot.
  • Debugging the Sense HAT for operation on the RPi
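The vector filtering mentioned above amounts to dropping or merging segments below length and angle thresholds; the thresholds in this sketch are placeholders, not our tuned values.

```python
MIN_LENGTH = 2.0       # assumed: drop strokes shorter than this (image units)
MIN_ANGLE_DEG = 5.0    # assumed: merge strokes whose heading change is below this

def filter_vectors(moves):
    """Keep only (turn_degrees, distance) moves that are long and distinct enough."""
    kept = []
    for turn, distance in moves:
        if kept and (distance < MIN_LENGTH or abs(turn) < MIN_ANGLE_DEG):
            # fold the tiny move into the previous stroke instead of emitting it
            kept[-1] = (kept[-1][0], kept[-1][1] + distance)
            continue
        kept.append((turn, distance))
    return kept
```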

Accomplishments that we're proud of

  • The fact that we did not give up even though the robot was unable to turn precisely at specified angles initially.
  • Whole team being able to stay awake until 8am the next day
  • Being able to draw a decent bunny at the very end
  • Being able to debug every unexpected hardware issue we faced, such as broken wires, an inadequate power supply, and deciding whether to use a supplementary Arduino for controlling the bot.

What we learned

  • The importance of DC motors with rotary encoders
  • How to parse SVG files
  • Techniques to improve performance, such as image downsampling

What's next for DoodlerBot

With better accuracy and scaling, DoodlerBot could be used whenever someone needs to reproduce an extremely large image in physical space, for example environment landscaping or printing for large public displays.
