Inspiration
Robotic grasping is still one of the biggest roadblocks to putting robots into messy, real-world manufacturing. Traditional grippers are rigid and task-specific, so even small changes in part geometry or material can break a carefully tuned setup. We wanted to explore what it would look like if a gripper could adapt more like a human: look at a part, decide how to hold it, and notice when something seems off. That idea, plus our team’s “Guardians of the Gearbox” theme, led to Guardian OmniGripper.
What it does
Guardian OmniGripper is an adaptive end effector for the UR10e that first understands a part and then chooses how to grasp it. It fuses camera vision with a distance sensor, IR sensing, and a linear Hall effect sensor to infer shape, material, surface condition, and an approximate center of gravity. From there, it automatically selects a grasping mode (articulated fingers, electromagnet, or suction cup) and a grasp location, and it uses the same perception pipeline to recognize known parts and flag visible defects. The goal is a versatile gripper that can handle a wide variety of automotive components without constant retooling.
How we built it
We split the build into perception, decision making, and hardware:
Perception and feedback
We wired a camera and an Arduino-based distance sensor together with IR and Hall effect inputs to characterize each object. For the prototype, we added a small LCD screen that visualizes what the system “thinks” about the part in real time, including detected class and chosen grasp mode.
Decision logic and control
An ESP32 microcontroller runs the core logic that maps sensor readings to grasp strategies. Based on the sensed geometry and material, it decides whether fingers, magnet, or suction is most appropriate and computes a candidate grasp region that respects center of gravity and clearance. The ESP32 then commands the selected mechanism and reports state back to the user interface.
Modular gripper hardware
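The mode-selection step can be sketched roughly like this. The struct fields, thresholds, and priority order here are illustrative assumptions, not the actual firmware logic, but they show the shape of the mapping from fused sensor readings to a grasp strategy:

```cpp
// Hypothetical sketch of the ESP32's grasp-mode decision:
// fused sensor readings in, one of three mechanisms out.
enum class GraspMode { Fingers, Magnet, Suction };

struct PartReading {
    bool   ferrous;        // linear Hall effect sensor detects a magnetic part
    bool   flatSmoothTop;  // camera + IR suggest a flat, sealable surface
    double widthMm;        // approximate width from camera + distance sensor
};

GraspMode selectGraspMode(const PartReading& r) {
    if (r.ferrous)      return GraspMode::Magnet;   // magnet wins on steel parts
    if (r.flatSmoothTop) return GraspMode::Suction; // suction needs a sealable face
    return GraspMode::Fingers;                      // articulated fingers as fallback
}
```

In the real system this decision would also weigh the computed grasp region and clearance; the point of keeping the core rule this simple is that each sensor only has to answer one coarse question.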
Mechanically, we designed a modular end effector with interchangeable finger, magnet, and suction modules. Fitec FS90MR servos drive the adaptive fingers, while the other modules share a common mount and electrical interface. We iterated through several 3D printed designs to balance strength, weight, and cable routing so the prototype could be quickly assembled and tweaked during the hackathon.
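Hobby servos in the FS90MR's class are commanded with a PWM pulse of roughly 1000-2000 microseconds, with 1500 µs as neutral. A minimal helper for mapping a normalized finger command to a pulse width might look like the following; the endpoint values are typical datasheet figures, not calibrated ones from our build:

```cpp
// Map a normalized command in [-1, 1] to a servo pulse width in microseconds.
// 1500 us is the conventional neutral point; +/-500 us spans the usual range.
int commandToPulseUs(double cmd) {
    if (cmd < -1.0) cmd = -1.0;              // clamp to the valid range
    if (cmd >  1.0) cmd =  1.0;
    return static_cast<int>(1500.0 + cmd * 500.0);
}
```

On the ESP32 this value would feed a PWM peripheral; per-servo trim offsets are usually needed on top of the nominal endpoints.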
In demos, the system identified more than 15 different parts from the hardware booth, showing how the same gripper and code path can adapt across shapes, sizes, and materials rather than being tuned to a single object.
Challenges we ran into
We ran into a lot of real hackathon turbulence:
Limited and changing parts
We could not reserve all the components we originally planned for, so the set of test parts shifted over time. That forced us to design the perception and decision logic to be as general as possible instead of overfitting to one or two “hero” objects.
Hardware and fabrication setbacks
Early 3D prints lacked the precision and stiffness we needed, which meant reprinting critical pieces under time pressure and redesigning some joints and mounts to be more forgiving. We also discovered missing components and had to rework the electronics layout on the fly to match what was actually available.
Sensor fusion under time pressure
Getting consistent readings across distance, IR, and Hall effect sensors while the robot and parts moved was harder than expected. We spent a lot of time debugging noisy signals, tuning thresholds, and simplifying what we asked each sensor to do so the overall system behaved predictably.
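The kind of filtering that tamed the noisy readings can be sketched as an exponential moving average combined with hysteresis, so a jittery signal near a threshold does not flip the decision back and forth. The alpha and threshold values here are illustrative, not our tuned constants:

```cpp
// Smooth a raw sensor stream and report a stable on/off decision.
// Hysteresis (separate on/off thresholds) prevents chattering near the edge.
struct HysteresisFilter {
    double alpha;      // EMA smoothing factor in (0, 1]
    double onThresh;   // rising threshold: smoothed value must exceed this
    double offThresh;  // falling threshold: must drop below this to release
    double ema = 0.0;
    bool   triggered = false;

    bool update(double raw) {
        ema = alpha * raw + (1.0 - alpha) * ema;
        if (!triggered && ema > onThresh)       triggered = true;
        else if (triggered && ema < offThresh)  triggered = false;
        return triggered;
    }
};
```

Per-sensor instances of something like this, each answering one coarse question, made the overall behavior far more predictable than thresholding raw readings directly.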
Despite those setbacks, seeing the ESP32 drive the FS90MR servos, update the LCD with live object understanding, and switch modes as we swapped parts in front of the gripper was a big “it finally works” moment.
What we learned
- Adaptability is as much about software as mechanics. The modular gripper is important, but the real differentiation comes from the perception and decision layers that choose how to use that hardware.
- Prototyping under constraints forces good abstractions. Not getting the parts we wanted and fighting with imperfect 3D prints pushed us to design cleaner electrical and mechanical interfaces so we could swap components without rewriting everything.
- A transparent prototype tells a better story. The LCD showing live detection logic turned out to be more valuable than we expected; it helped us debug quickly and made it easy for others to see that the gripper was not just “moving” but actually reasoning about the parts.