Inspiration
Soldering intricate electronics often requires precise, stable positioning of components. Traditional "helping hands" with static alligator clips are clumsy, hard to adjust, and constantly break our workflow. Every time we had to stop, put down our iron, and manually re-position a clamp, we lost focus. We were inspired to build a tool that felt less like a static vise and more like an intuitive, dynamic assistant. What if you could position your workpiece as easily and fluidly as moving your own hand?
What it does
The robotic helping-hands arm allows touchless control and integrates seamlessly into the user's workflow. Instead of interrupting work to manually adjust each arm of a suboptimal helping hand, the user can simply swipe toward the Gesture-Grip, which provides precise, stable, hands-free positioning for delicate components, wires, and PCBs, letting the user keep their tools in hand for a faster, more efficient, and less frustrating workflow. There are three control states built into the Gesture-Grip, and the current state is indicated by a single RGB LED housed on the base.
States: You switch states by bringing your hand into close proximity of the sensor and then pulling it back quickly, keeping your hand in the sensor's line of sight.
| State | LED Indicator | Control |
|---|---|---|
| Presets | Static White Light | Controlled by UP and DOWN gestures |
| Selecting Servo Joint | Blinking Light of Respective Servo Color | Controlled by LEFT and RIGHT gestures |
| Moving Servo Joint | Static Light of Respective Servo Color | Controlled by LEFT and RIGHT gestures |
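
To give a sense of how this maps onto code, here is a minimal sketch of the state loop, assuming a single sensor, the stock SparkFun driver, and hypothetical RGB LED pins (the preset logic, blinking, and per-servo colors are stubbed out):

```cpp
#include <Wire.h>
#include <SparkFun_APDS9960.h>

SparkFun_APDS9960 apds;

// Hypothetical pins for a common-cathode RGB LED (HIGH = on).
const int PIN_R = 25, PIN_G = 26, PIN_B = 27;

enum Mode { PRESETS, SELECT_JOINT, MOVE_JOINT };
Mode mode = PRESETS;

void setColor(bool r, bool g, bool b) {
  digitalWrite(PIN_R, r);
  digitalWrite(PIN_G, g);
  digitalWrite(PIN_B, b);
}

void setup() {
  pinMode(PIN_R, OUTPUT); pinMode(PIN_G, OUTPUT); pinMode(PIN_B, OUTPUT);
  Wire.begin();
  apds.init();
  apds.enableGestureSensor(false);  // no interrupt pin, we poll instead
}

void loop() {
  if (!apds.isGestureAvailable()) return;
  switch (apds.readGesture()) {
    case DIR_FAR:  // hand in close, pulled straight back: next state
      mode = static_cast<Mode>((mode + 1) % 3);
      break;
    case DIR_UP:
    case DIR_DOWN:
      if (mode == PRESETS) { /* step through saved arm poses */ }
      break;
    case DIR_LEFT:
    case DIR_RIGHT:
      if (mode == SELECT_JOINT) { /* highlight previous/next servo */ }
      else if (mode == MOVE_JOINT) { /* nudge the selected servo */ }
      break;
    default: break;
  }
  if (mode == PRESETS) setColor(HIGH, HIGH, HIGH);  // static white
  // blinking / per-servo colors for the other two states omitted for brevity
}
```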
How we built it
The electronic backbone of the Gesture-Grip consists of dual APDS-9960 gesture sensors, an ESP32, and five SG90 servo motors. To create the physical structure, we measured each of these components, 3D-modelled the arm and base pieces in Solidworks, and then 3D-printed the final parts. The software was built with the Arduino framework under PlatformIO and uses several libraries: 'ESP32Servo' to move the servos, 'SparkFun_APDS9960' to read gestures, and FreeRTOS to manage parallel tasks pinned to the ESP32's two cores.
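
The FreeRTOS split looks roughly like the sketch below: one task polls the sensors while the other drives the servos, each pinned to its own core. The pin number and the queue-of-angles handoff are illustrative assumptions, not our exact code:

```cpp
#include <ESP32Servo.h>

Servo baseServo;           // one of the five SG90s
QueueHandle_t angleQueue;  // gesture task -> servo task

// Core 0: poll the gesture sensors and translate swipes into target angles.
void gestureTask(void *arg) {
  int angle = 90;
  for (;;) {
    // ...read gestures here, adjust `angle` on LEFT/RIGHT swipes...
    xQueueSend(angleQueue, &angle, 0);
    vTaskDelay(pdMS_TO_TICKS(20));
  }
}

// Core 1: drive the servos, so slow sweeps never block gesture polling.
void servoTask(void *arg) {
  int target;
  for (;;) {
    if (xQueueReceive(angleQueue, &target, portMAX_DELAY) == pdTRUE) {
      baseServo.write(target);
    }
  }
}

void setup() {
  baseServo.attach(13);  // hypothetical GPIO for the base joint
  angleQueue = xQueueCreate(8, sizeof(int));
  xTaskCreatePinnedToCore(gestureTask, "gesture", 4096, nullptr, 1, nullptr, 0);
  xTaskCreatePinnedToCore(servoTask, "servo", 4096, nullptr, 1, nullptr, 1);
}

void loop() { vTaskDelay(portMAX_DELAY); }  // everything runs in the two tasks
```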
Challenges we ran into
As it was our first time working with microcontrollers, servos, and sensors, much of the project time went into researching how each component worked individually and then combining them all into a functional product. Implementing the dual gesture sensors was a challenge in itself: we had little to no knowledge of the I2C protocol, and both sensors share the same fixed I2C address, which forced us to deep-dive into the 'SparkFun_APDS9960' library and rewrite parts of it so it would work with the dual setup and third-party sensors. Running the gesture sensors in parallel with the servos also meant learning FreeRTOS to handle and manage each task effectively. On the mechanical side, it was our first time using Solidworks for designs this complex, and 3D-printing each limb of the Gesture-Grip came with undesired results that we had to fix with every iteration. On top of all that, we had several exams coming up at the same time, so managing the project to beat the deadline was difficult.
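
To illustrate the address conflict: both sensors answer at I2C address 0x39, so they cannot coexist on one bus. One common workaround on the ESP32, which has two hardware I2C controllers, is to give each sensor its own bus. A simplified sketch of that idea, with assumed pin choices and a hypothetical stand-in for whatever a modified driver would expose:

```cpp
#include <Wire.h>

// Both APDS-9960s respond at the same fixed address (0x39), so each
// sensor gets its own hardware I2C bus instead of sharing one.
TwoWire busB = TwoWire(1);  // the ESP32's second I2C controller

void setup() {
  Wire.begin(21, 22);   // sensor 1 on the default bus (pins assumed)
  busB.begin(18, 19);   // sensor 2 on the second bus (pins assumed)

  // The stock SparkFun driver hardcodes `Wire`; the rewrite boils down to
  // letting each driver instance target a chosen TwoWire, e.g.
  // (hypothetical modified API): sensorA.init(Wire); sensorB.init(busB);
}

void loop() {}
```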
Accomplishments that we're proud of
We were able to make the Gesture-Grip a fully functional product with minimal issues. :D
What we learned
We gained a deeper understanding of microcontrollers, sensor readings, and even the basic mechanics of servos. Working in Solidworks also helped us realize that what makes a product feel polished is its overall design.
What's next for Gesture-Grip
Originally, the Gesture-Grip was supposed to have its own custom gestures to select from, and remnants of that concept can be seen on the GitHub repo. We started a pipeline that collects raw FIFO data from the sensor, mirroring how the built-in gesture engine reads gestures, and deploys a model library trained in Edge Impulse onto the ESP32. However, due to unsatisfactory data collection and model accuracy, hours of retraining, and time constraints, we decided to fall back on the sensor's built-in gestures and fine-tune the existing pipeline.
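
For reference, the data-collection side comes down to polling the sensor's gesture FIFO directly over I2C. A rough sketch using register addresses from the APDS-9960 datasheet, assuming the gesture engine has already been enabled (e.g. by the SparkFun driver):

```cpp
#include <Wire.h>

const uint8_t APDS_ADDR   = 0x39;  // fixed APDS-9960 I2C address
const uint8_t REG_GFLVL   = 0xAE;  // number of 4-byte datasets in the FIFO
const uint8_t REG_GFIFO_U = 0xFC;  // FIFO data: U, D, L, R at 0xFC-0xFF

uint8_t readReg(uint8_t reg) {
  Wire.beginTransmission(APDS_ADDR);
  Wire.write(reg);
  Wire.endTransmission(false);
  Wire.requestFrom(APDS_ADDR, (uint8_t)1);
  return Wire.read();
}

// Burst-read one dataset (four photodiode samples: up, down, left, right).
void readDataset(uint8_t *buf) {
  Wire.beginTransmission(APDS_ADDR);
  Wire.write(REG_GFIFO_U);
  Wire.endTransmission(false);
  Wire.requestFrom(APDS_ADDR, (uint8_t)4);
  for (int i = 0; i < 4; i++) buf[i] = Wire.read();
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
}

void loop() {
  uint8_t level = readReg(REG_GFLVL);
  uint8_t s[4];
  for (uint8_t i = 0; i < level; i++) {
    readDataset(s);
    // CSV over serial, easy to capture on a host machine
    Serial.printf("%u,%u,%u,%u\n", s[0], s[1], s[2], s[3]);
  }
  delay(10);
}
```

Each CSV line can then be logged on the host and uploaded to Edge Impulse as training data.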
If we had more time, implementing custom gestures would be a priority to make Gesture-Grip more dynamic.


