Inspiration
We wanted to create an autonomous vehicle that uses image-to-text (OCR) to identify which item in the warehouse to locate and retrieve, streamlining logistics in an industrial setting while reducing both labor hours and uncertainty in the task.
What it does
The robot uses a camera, a Raspberry Pi 4 Model B, an ultrasonic sensor, and the necessary chassis and motors to perform OCR and identify "matches" in a given storage database, which represents an example company's inventory. After finding a match, it selects the corresponding route and navigates to the item's location within the warehouse. As it approaches the item, the ultrasonic sensor measures the distance and signals the robot once it is close enough to engage the pincers. The third motor then activates the pincers to grab the item, triggering the return sequence, in which the robot returns "home" to deliver it.
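The match-then-route logic can be sketched in a few lines of Python. Everything here is illustrative, not our actual code: the inventory items, routes, and the `match_item` helper are made-up stand-ins for the example company's database.

```python
# Hypothetical sketch: match OCR output against an inventory database,
# then look up the stored route for the matched item. All names here
# (INVENTORY, ROUTES, match_item) are illustrative.

INVENTORY = {"WIDGET-A", "WIDGET-B", "GEARBOX-7"}

ROUTES = {
    "WIDGET-A": ["forward", "left", "forward"],
    "WIDGET-B": ["forward", "right"],
    "GEARBOX-7": ["forward", "forward", "left"],
}

def match_item(ocr_text, inventory):
    """Return the first inventory item found in the OCR text, or None."""
    cleaned = ocr_text.upper()
    for item in inventory:
        if item in cleaned:
            return item
    return None

item = match_item("shelf label: widget-b (bin 3)", INVENTORY)
route = ROUTES.get(item)  # the drive sequence the robot would follow
```

In practice the OCR text is noisy, so a real implementation would likely use fuzzy matching rather than exact substring search.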
How we built it
After assembling the provided chassis, we created several custom parts to achieve our goals. The first was a camera mount that holds the camera at the desired angle to "read" the text. Next we designed a pair of pincers with built-in gears, actuated by a single motor. Along with these key parts, we also made a front-end mount with additional mounting points to secure the third motor that controls the pincers, as well as a mount for the ultrasonic sensor. These parts were 3D-printed using polymer FDM for rapid prototyping and proof of concept. To drive the third motor, we soldered a MOSFET to a GPIO pin so the Pi could switch it on and off.
Challenges we ran into
Various electronic components broke or malfunctioned at key points in design and testing. Specifically, the micro-USB port kept disconnecting from the Raspberry Pi HAT, and most critically we encountered several motor failures; the right motor stopped working for reasons we could not determine. We resoldered the micro-USB port as well as the motor connections.
Accomplishments that we're proud of
As none of us had experience with Raspberry Pi or advanced image recognition, we are proud of implementing OCR and of designing and fabricating the custom parts that enable the physical side of our project. We are also proud of debugging and fixing the great majority of our problems over the course of this hackathon!
What we learned
We learned how to use OpenCV and Tesseract effectively on a Raspberry Pi and how to build a working proof of concept for our autonomous vehicle system.
What's next for OCR Warehouse Robot
Once we replace the faulty parts, we first need to perfect the mechanical system, ensuring repeatable and reliable results for our robot. Since this is a scale model of an operations-level robot, the next step is a full-scale robot that can perform the same operations using our software. We then aim to build the frontend of our product, so customers have an accessible interface for entering their warehouse inventory to populate the keyword database and can begin using our product!