Inspiration

For this hackathon, we wanted to tackle a hardware hack for the first time. Our goal was to make an autonomous robot using materials from the MLH Hardware Lab and Google Cloud Platform API.

What it does

The Roller Toaster is remotely controlled over Bluetooth through a GUI written in Processing. The user pilots the robot using the feed from an IP camera accessed over the local WiFi network. We use Google's computer vision API, Google Cloud Vision, to annotate the live-streamed video; detected objects are labeled and displayed to the user via Flask.
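Concretely, the viewing path can be sketched as below. This is a minimal, hypothetical reconstruction in Python: the stream address is an assumption (DroidCam serves MJPEG at http://&lt;phone-ip&gt;:4747/video by default), and `label_frame` is a stub standing in for the Cloud Vision call sketched under "How we built it".

```python
# Minimal, hypothetical reconstruction of the viewing pipeline.
# Assumes OpenCV (cv2) and Flask are installed, and that the phone
# exposes an MJPEG stream on the local network.
import cv2
from flask import Flask

app = Flask(__name__)
STREAM_URL = "http://192.168.1.42:4747/video"  # hypothetical phone address

def grab_frame():
    """Pull a single frame from the phone's IP camera stream."""
    cap = cv2.VideoCapture(STREAM_URL)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read from the IP camera stream")
    return frame

def label_frame(frame):
    """Stub standing in for the Cloud Vision call shown later."""
    return ["placeholder-label"]

@app.route("/")
def show_labels():
    # Serve the latest labels to the user, as our Flask display did.
    return "<br>".join(label_frame(grab_frame()))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```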

How we built it

Hardware: We assembled the UCTRONICS Smart Robot Car Arduino kit. An ultrasonic sensor scans in front of the bot before it moves forward; if it detects an obstacle within 20 cm, the robot shakes its head and refuses to advance. An Android phone running the DroidCam app (any Android phone will work) is mounted behind the ultrasonic sensor, and the two swivel together on the same servo.
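The distance math behind that check is simple for an HC-SR04-style ultrasonic sensor: the echo pulse's round-trip time multiplied by the speed of sound, halved. Our version runs in the Arduino firmware; the Python sketch below illustrates the same logic with simulated echo times, the 20 cm threshold being the only value taken from our build.

```python
# Sketch of the obstacle check our Arduino firmware performs, redone in
# Python for illustration. Echo durations are simulated here; on the
# robot they come from the ultrasonic sensor's echo pin.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature
OBSTACLE_THRESHOLD_CM = 20

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert a round-trip echo time (microseconds) to distance in cm."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_advance(echo_us: float) -> bool:
    """The robot only moves forward when the path ahead is clear."""
    return echo_to_distance_cm(echo_us) > OBSTACLE_THRESHOLD_CM

if __name__ == "__main__":
    for echo_us in (500.0, 1166.0, 3000.0):  # simulated readings
        d = echo_to_distance_cm(echo_us)
        action = "advance" if should_advance(echo_us) else "shake head, stop"
        print(f"echo {echo_us:.0f} us -> {d:.1f} cm -> {action}")
```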

Software (Robot Controls): We began with remote control from an Android device over Bluetooth, then programmed a GUI in Processing to drive the robot from a PC via Bluetooth using mouse or keyboard input.
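The control path itself is straightforward: the PC's Bluetooth link to the robot appears as a serial port, and the GUI writes drive commands to it. Below is a hedged Python equivalent of that path; the port name and the one-byte command protocol are illustrative assumptions, since our actual implementation lives in the Processing sketch.

```python
# Hypothetical Python equivalent of the Processing GUI's control path.
# Assumes the robot's Bluetooth module is paired and exposed as a
# serial port; port name and command bytes are illustrative only.
import serial  # pyserial

PORT = "/dev/rfcomm0"  # e.g. "COM5" on Windows; hypothetical
COMMANDS = {"w": b"F", "s": b"B", "a": b"L", "d": b"R", " ": b"S"}

def send_command(link: serial.Serial, key: str) -> None:
    """Map a keypress to a one-byte drive command and send it."""
    if key in COMMANDS:
        link.write(COMMANDS[key])

if __name__ == "__main__":
    with serial.Serial(PORT, 9600, timeout=1) as link:
        for key in "wwd ":  # forward, forward, right, stop
            send_command(link, key)
```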

Software (Machine Learning & Computer Vision): We used the Google Cloud Platform Vision API to identify and label objects from the IP camera video stream.
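A minimal sketch of that call using the google-cloud-vision Python client is below. It assumes credentials are configured through the GOOGLE_APPLICATION_CREDENTIALS environment variable and that frames arrive as OpenCV images; the saved-frame filename in the usage example is hypothetical.

```python
# Label a single video frame with Google Cloud Vision. Assumes the
# google-cloud-vision client library is installed and credentials are
# set via the GOOGLE_APPLICATION_CREDENTIALS environment variable.
import cv2
from google.cloud import vision

def label_frame(frame) -> list[str]:
    """Encode an OpenCV frame as JPEG and return Vision API labels."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=jpeg.tobytes())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]

if __name__ == "__main__":
    frame = cv2.imread("frame.jpg")  # any saved frame from the stream
    print(label_frame(frame))
```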

Challenges we ran into

Most of our challenges were a product of hardware constraints.

With the hardware we were given, we initially did not have a camera, even though our goal was to create an autonomous robot. We considered relying solely on the ultrasonic sensor, but we craftily found a way to obtain a camera and stream its feed to our laptops.

The ultrasonic sensor does not detect every obstruction to the car’s movement. The smart car kit used a motor drive board that made it difficult to add hardware such as accelerometers or mechanical bumpers, as the drive board covered the necessary pins and soldering was not available.

Additionally, we encountered difficulty connecting the subsystems in our design as originally envisioned, such as having the remote controller autonomously direct the robot in response to computer vision data. This may have been due to hardware constraints, as we were limited to Bluetooth connectivity for the robot.

Initially, we planned to add autonomous functionality based on results from the Google Cloud Vision API, but we could not complete that integration within the hackathon.

Accomplishments that we're proud of

Our team is proud of discovering how to apply computer vision to a webcam video stream and receive meaningful output, as well as successfully mounting our phone camera without displacing the ultrasonic sensor. We also overcame some manufacturing defects in the kit that restricted the robot car’s range of movement by carefully modifying the plastic components.

What we learned

In our hack we attempted to create something novel by combining embedded systems and machine learning. We quickly realized we had no hardware capable of running machine learning on the robot itself, as dedicated inference hardware (TPUs rather than GPUs, etc.) is only just emerging in industry. To circumvent that, we designed a system with three points of communication: the robotic vehicle; a local machine hosting the computer vision API calls, data processing, and visualization; and a remote-control GUI to pilot the robot via Bluetooth.

What's next for Roller Toaster

We would like to implement autonomous movement of the smart car in response to the types of objects the computer vision API recognizes in the video stream, as well as collecting and depositing behaviors, for example gathering litter into a dust pan or delivering a payload to a specified area. Additionally, we would like to augment the car with more sensors to improve its maneuverability and to gather metrics on its performance.
