Inspiration

This project was motivated by the architectural constraints of robotic systems deployed in high-latency environments such as lunar and Martian operations. Conventional approaches rely heavily on onboard autonomy to compensate for communication delays, which introduces significant computational overhead, system complexity, and increased power consumption. We aimed to explore an alternative control paradigm that minimizes onboard processing by leveraging human-in-the-loop control, while still maintaining precise spatial awareness through lightweight sensing and coordinate transformations.

What it does

We developed a low-resource robotic control system that replaces autonomous decision-making with mirrored teleoperation. The system consists of two robotic arms: a primary arm operated by a human and a secondary arm that replicates the motion in a remote environment. This approach reduces the need for onboard planning and perception by directly mapping human input to actuator output. To provide spatial awareness, the system integrates a LiDAR sensor and utilizes ROS transform (TF) frameworks to compute the relative pose of the end effector with respect to the sensor frame. Real-time distance calculations are derived from these transforms, enabling precise interaction with the environment without requiring full 3D reconstruction or complex perception pipelines.
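The distance feedback described above reduces to simple geometry: once TF yields the translation between the LiDAR frame and the end-effector frame, the range is the Euclidean norm of that vector. A minimal sketch (the function name and example values are illustrative, not taken from our codebase):

```python
import math

def end_effector_distance(translation):
    """Euclidean distance from the sensor origin to the end effector,
    given the translation component (x, y, z) of a TF lookup, in metres."""
    x, y, z = translation
    return math.sqrt(x * x + y * y + z * z)

# e.g. end effector 0.3 m forward and 0.4 m above the LiDAR origin
print(end_effector_distance((0.3, 0.0, 0.4)))  # → 0.5
```

Because this is a closed-form calculation on a single transform, it runs comfortably at sensor rate with negligible compute cost, which is the point of avoiding a full perception pipeline.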

How we built it

The system was implemented using a hybrid architecture combining ROS for real-time robotics control and Viam as the primary hardware orchestration and device abstraction layer. Viam served as the central interface for managing and communicating with heterogeneous hardware components, including the robotic arm and LiDAR sensor. This allowed us to standardize device communication, streamline configuration, and eliminate the need for low-level driver implementation.

On top of this infrastructure, ROS nodes handle real-time control logic, coordinate transforms, and sensor fusion. The robotic arm publishes joint states that are processed through forward kinematics to compute the end-effector pose. In parallel, LiDAR data is streamed into the system to provide spatial measurements in its native frame. We constructed a TF transform tree to maintain consistent spatial relationships between the LiDAR frame and the robot's kinematic chain, enabling real-time querying of the end-effector position relative to the LiDAR sensor. Python-based ROS nodes subscribe to TF data, extract translation vectors, and compute Euclidean distance for continuous feedback.

By leveraging Viam for hardware-level integration and ROS for real-time spatial computation, the system maintains a clear separation between device management and robotics logic, significantly reducing integration complexity while improving modularity and scalability.
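The core of the TF query is a chain of homogeneous transforms: the end-effector pose from forward kinematics and the LiDAR mount pose are both expressed in the base frame, and the relative transform is obtained by composing one with the inverse of the other. A sketch of that math with NumPy (the frame names and mount offsets here are hypothetical placeholders, not our actual calibration):

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix
    and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def relative_translation(T_base_lidar, T_base_ee):
    """Translation of the end effector expressed in the LiDAR frame,
    mirroring what a TF lookup between the two frames returns."""
    T_lidar_ee = np.linalg.inv(T_base_lidar) @ T_base_ee
    return T_lidar_ee[:3, 3]

# Hypothetical setup: LiDAR mounted 0.2 m above the base; forward
# kinematics places the end effector at (0.5, 0.0, 0.6) in the base frame.
T_base_lidar = make_transform(np.eye(3), [0.0, 0.0, 0.2])
T_base_ee = make_transform(np.eye(3), [0.5, 0.0, 0.6])

t = relative_translation(T_base_lidar, T_base_ee)
print(t)                  # translation in the LiDAR frame: [0.5, 0.0, 0.4]
print(np.linalg.norm(t))  # the distance fed back to the operator
```

In the real system, TF maintains this composition across the whole kinematic chain automatically; the sketch just shows the geometry a single lookup resolves.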

Challenges we ran into

A primary challenge was achieving robust integration between hardware components and the ROS software stack under real-time constraints. Synchronizing data streams from the robotic arm and LiDAR sensor required careful handling of message timing, buffering, and update rates. Inconsistent timestamps and communication delays initially led to unstable transform queries and inaccurate distance outputs.

Another significant challenge was constructing and debugging the TF transform tree. Accurate spatial reasoning depended on correct frame definitions, proper transform broadcasting, and consistent coordinate conventions; misalignment between frames or incorrect transform chains resulted in compounding positional errors. Additionally, validating the relationship between sensor data and the robot's kinematic model required iterative calibration and visualization. Tools such as TF debugging utilities and visualization environments were used extensively to diagnose discrepancies and ensure consistency across the system.

Accomplishments that we're proud of

We successfully implemented a real-time robotic control system that reduces reliance on autonomy by leveraging human-in-the-loop teleoperation. By integrating LiDAR-based sensing with ROS TF transforms, we demonstrated that accurate spatial awareness can be achieved through efficient geometric computation rather than complex perception models. We are particularly proud of establishing a stable and consistent transform pipeline, enabling reliable distance computation between the LiDAR sensor and the end effector in real time.

What we learned

We gained experience building a robotics system with a layered architecture, using Viam to abstract hardware complexity and ROS to handle real-time kinematics and spatial reasoning. This separation of concerns demonstrated how modern robotics platforms can accelerate integration, reduce boilerplate development, and improve system scalability in multi-device environments.

What's next for sudo Arms

Next, we plan to focus on improving our understanding of full-stack robotic system design by further exploring the integration between hardware abstraction layers like Viam and real-time control frameworks such as ROS. We want to deepen our knowledge of how data flows through multi-device robotics systems, particularly around latency handling, synchronization, and coordinate frame consistency. We also aim to experiment with more advanced sensor fusion techniques to better understand how multiple perception inputs can be combined with kinematic models for more robust spatial reasoning. On the control side, we want to study different teleoperation strategies and evaluate how human-in-the-loop systems behave under varying delay and noise conditions. Overall, our next steps are centered on learning how to design more scalable and reliable robotic architectures by iterating on what we built: refining our understanding of transforms, real-time systems, and hardware-software integration in practical, real-world scenarios.
