Inspiration
Robotics holds immense potential to reshape industries and assist in daily life, yet the high cost and complexity of capable robotic arms limit their accessibility. We saw an opportunity to bridge this gap, inspired by the idea of creating a powerful, dexterous, yet remarkably low-cost robotic system.
We envisioned a platform that could be easily built, customized, and deployed for various tasks—from scientific research to everyday assistance—making advanced robotics attainable for hobbyists, researchers, and small businesses alike. Our project name, "The Hand of God", playfully nods to achieving sophisticated control and capability with accessible means.
What It Does
The Hand of God (THOG) is a low-cost, multipurpose robotic arm system featuring both tele-operated and autonomous capabilities, integrated with Anthropic's Claude. Its core features include:
Low-Cost Hardware
A fully functional 6-DOF robotic arm built primarily from 3D-printed parts and standard servos, costing around $200, roughly 180x cheaper than comparable industrial arms.
Teleoperation Mode
Allows users to control the arm directly, providing an intuitive way to perform tasks manually and to generate training data.
Autonomous Operation
Uses an imitation learning model (an ACT policy) trained on teleoperated demonstrations, enabling the arm to perform tasks autonomously from camera input; a minimal control-loop sketch follows this feature list.
Claude Integration
Leverages Claude's capabilities via an MCP (Model Context Protocol) server that sends commands to the arm and processes camera feed inputs for high-level understanding and task execution.
Multipurpose Design
Built to be adaptable for various applications, demonstrated through potential uses in lab automation, personal assistance (e.g., a robo-chef), and precision agriculture.
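To make the autonomous mode concrete, here is a minimal sketch of the perception-to-action loop. It is illustrative only: `camera`, `arm`, and the loaded `policy` are hypothetical driver/model objects, not our actual API.

```python
# Minimal sketch of THOG's autonomous loop (illustrative; `camera`, `arm`,
# and `policy` are hypothetical placeholders, not the project's real API).
import time

import torch


def run_autonomous(policy, camera, arm, hz=30):
    """Run a trained ACT policy in a closed loop on camera + joint input."""
    device = next(policy.parameters()).device
    while True:
        # 1. Observe: grab the latest camera frame and current joint angles.
        image = camera.read()                      # HxWx3 uint8 numpy array
        joints = arm.read_joint_positions()        # 6 floats, radians

        image_t = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0
        state_t = torch.tensor(joints, dtype=torch.float32)
        obs = {
            "observation.image": image_t.unsqueeze(0).to(device),
            "observation.state": state_t.unsqueeze(0).to(device),
        }

        # 2. Act: ACT predicts a short chunk of future joint targets;
        #    execute the first one and re-plan at the next tick.
        with torch.no_grad():
            action_chunk = policy(obs)             # shape (1, chunk_len, 6)
        arm.write_joint_positions(action_chunk[0, 0].cpu().numpy())

        time.sleep(1.0 / hz)
```

Executing only the first action of each predicted chunk and re-planning every tick is one common way to run chunked policies; the real system may execute longer chunks between predictions.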
How We Built It
Our tech stack and development process combined hardware prototyping, machine learning, and AI integration:
- Hardware: 3D printed the arm using common printers; standard servo motors provide actuation. The system includes top-down and wrist-mounted cameras for vision.
- Data Collection: Used a teleoperation setup where a “leader” arm controls a “follower” arm to collect demonstration data (joint states + images); a minimal recorder sketch follows this list.
- Machine Learning: Trained an imitation learning policy using ACT (Action Chunking with Transformers), which excels at learning long-horizon, complex tasks.
- Training Infrastructure: Utilized a cluster with 4× NVIDIA A100 GPUs for over 100k training steps, tracked with tools like Weights & Biases (WandB); a rough multi-GPU training sketch also appears after this list.
- AI Integration: Integrated with Anthropic’s Claude through MCP to bridge the AI model and the robot, enabling high-level command-based operation; see the MCP server sketch after this list.
- Assembly & Calibration: Careful tuning of servo calibration and physical assembly was essential for real-world operation.
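The leader/follower data collection loop can be sketched as follows. Here `leader`, `follower`, and `camera` are hypothetical driver objects, and the per-frame dictionary is an assumed format rather than our actual dataset schema.

```python
# Minimal sketch of the leader/follower demonstration recorder (illustrative;
# the driver objects and episode format are assumptions, not the real rig's code).
import time


def record_episode(leader, follower, camera, seconds=20, hz=30):
    """Mirror the leader arm on the follower and log (image, state, action) frames."""
    frames = []
    for _ in range(int(seconds * hz)):
        target = leader.read_joint_positions()     # human moves the leader arm
        follower.write_joint_positions(target)     # follower mirrors it in real time

        frames.append({
            "observation.image": camera.read(),             # HxWx3 uint8
            "observation.state": follower.read_joint_positions(),
            "action": target,                               # supervision for ACT
        })
        time.sleep(1.0 / hz)
    return frames
```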
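For the training infrastructure, this is a rough sketch of how an ACT-style policy can be trained across the 4 A100s with PyTorch DistributedDataParallel. `ActPolicy` and `DemoDataset` are placeholders for the real model and dataset classes, and the hyperparameters are illustrative, not the values we used.

```python
# Rough sketch of multi-GPU ACT training with PyTorch DistributedDataParallel.
# `ActPolicy` and `DemoDataset` are placeholders; batch size, learning rate,
# and logging cadence are illustrative.
# Launch with: torchrun --nproc_per_node=4 train_act.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler


def main():
    dist.init_process_group("nccl")
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = DDP(ActPolicy().cuda(rank), device_ids=[rank])
    dataset = DemoDataset("data/demos")                # teleop demonstrations
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=8, sampler=sampler, num_workers=4)
    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

    step, epoch = 0, 0
    while step < 100_000:                              # ~100k steps, as in our run
        sampler.set_epoch(epoch)
        epoch += 1
        for batch in loader:
            batch = {k: v.cuda(rank, non_blocking=True) for k, v in batch.items()}
            loss = model(batch)                        # policy returns its training loss
            optim.zero_grad()
            loss.backward()
            optim.step()
            step += 1
            if rank == 0 and step % 100 == 0:
                print(f"step {step} loss {loss.item():.4f}")  # or log to WandB

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```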
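Finally, the Claude bridge: a minimal sketch of an MCP server exposing the arm and camera as tools, using the `mcp` Python SDK's FastMCP helper. The `thog_arm` and `thog_camera` modules are hypothetical wrappers around our control and camera code, not real package names.

```python
# Hedged sketch of the Claude bridge: a small MCP server exposing the arm as tools.
# `thog_arm` and `thog_camera` are hypothetical wrappers, not published packages.
import base64

from mcp.server.fastmcp import FastMCP

import thog_arm      # hypothetical: joint-level control of the follower arm
import thog_camera   # hypothetical: top-down camera capture

mcp = FastMCP("thog")


@mcp.tool()
def move_joints(positions: list[float]) -> str:
    """Move the 6 servo joints to the given target angles (radians)."""
    thog_arm.write_joint_positions(positions)
    return "ok"


@mcp.tool()
def get_camera_frame() -> str:
    """Return the current top-down camera frame as a base64-encoded JPEG."""
    return base64.b64encode(thog_camera.read_jpeg()).decode()


if __name__ == "__main__":
    mcp.run()   # Claude connects to this server and calls the tools above
```

With a server like this registered in a Claude client, Claude can call `get_camera_frame` to see the scene and `move_joints` to act on it.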
Challenges We Ran Into
- Component Logistics: Servo motor stock issues and international shipping delays forced last-minute changes and emergency orders.
- Hardware Setup & Calibration: Required intensive hands-on work; at one point, we used a banana to prop up the camera.
- Data Collection: Creating high-quality, diverse demos for learning was slow but critical.
- Model Training: ACT training demanded substantial compute and careful tuning; effective GPU parallelization was vital.
- System Integration: Merging the hardware, ML model, control code, and Claude into one system was a major engineering challenge.
Accomplishments We're Proud Of
- Designed, printed, and built a functional 6-DOF robotic arm for just ~$200.
- Enabled both teleoperated and autonomous modes of operation.
- Successfully trained a real-world ACT policy during the hackathon.
- Leveraged a multi-GPU cluster for massive training speedup.
- Integrated the full system with Claude and explored task-level AI control.
- Navigated and overcame severe hardware sourcing constraints.
- Demonstrated a proof-of-concept for affordable, capable robotic manipulation.
What We Learned
- Practical skills in robotic design, 3D printing, assembly, and servo calibration.
- Effective strategies for collecting demonstration data for imitation learning.
- The intricacies of training and debugging ACT policies on real robot data.
- Methods for integrating LLMs like Claude into physical robotics workflows.
- Parallelizing ML workloads across multiple GPUs.
- The value of creativity, resilience, and rapid prototyping under pressure.
What's Next for The Hand of God (THOG)
We aim to further democratize robotics. Our next steps include:
Expanding Task Capabilities
Train the arm on more complex and diverse manipulation skills.
Improving Autonomy & Robustness
Enhance perception and control to handle varied environments more reliably.
Deepening Claude Integration
Allow Claude to handle natural-language task specification, error recovery, and multi-step planning.
Open-Sourcing
Release our hardware designs and software stack to the public to empower others to build on THOG.
Application Development
Create concrete solutions for lab automation, accessibility, and STEM education.
We believe THOG provides a blueprint for intelligent, accessible robotics—empowering anyone to build and innovate with physical AI systems.