We were inspired by the idea of letting humans interact with autonomous robots at a very high level, especially in space-like environments where direct teleoperation isn't always possible. Our goal was to let an operator describe a mission in plain English and have the robot handle the rest. To do this, we built a mobile robot and an autonomy pipeline: natural-language input is passed to an LLM equipped with structured tools, which converts the request into a mission plan made up of concrete actions the robot can execute on its own. The long-term vision is to enable commands like searching an environment, mapping obstacles, and completing tasks with minimal operator involvement.
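To make the pipeline concrete, here is a minimal sketch of the kind of structured tool the LLM could call. The action names, parameters, and schema are illustrative assumptions, not our exact implementation:

```python
import json
from dataclasses import dataclass

# JSON Schema for one tool exposed to the LLM: emit a mission plan as an
# ordered list of primitive actions the robot already knows how to run.
# (Action names here are hypothetical examples.)
PLAN_MISSION_TOOL = {
    "name": "plan_mission",
    "description": "Convert the operator's request into an ordered mission plan.",
    "parameters": {
        "type": "object",
        "properties": {
            "actions": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "action": {
                            "type": "string",
                            "enum": ["drive_forward", "turn", "stop", "scan"],
                        },
                        "args": {"type": "object"},
                    },
                    "required": ["action"],
                },
            }
        },
        "required": ["actions"],
    },
}

@dataclass
class Action:
    action: str
    args: dict

def parse_plan(tool_call_json: str) -> list[Action]:
    """Validate the model's tool-call arguments into executable actions."""
    payload = json.loads(tool_call_json)
    return [Action(a["action"], a.get("args", {})) for a in payload["actions"]]
```

Constraining the action names with an enum in the schema is what keeps the model's output machine-readable: anything outside the vocabulary is rejected before it ever reaches the robot.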
Our biggest challenges were time and hardware limitations. We spent a large portion of the hackathon building a reliable mechanical platform, which limited how far we could push autonomy. We also lacked sensors like wheel encoders, an IMU, or LiDAR, which restricted the robot's ability to reason about its own state and environment. That said, we were very happy with how the LLM mission planner turned out: it's modular, flexible, and clearly capable of taking advantage of richer sensor data if it were available.
In just about 30 hours, we learned a huge amount across hardware, software, and autonomy. On the hardware side, we dealt with fragile wiring, limited microcontroller pins, poor documentation, and constant power issues (classic Murphy's Law). On the software side, we built a UI in pygame, set up HTTPS communication with an ESP32, and learned from scratch how to call LLMs through their APIs. Most importantly, we learned how to structure tools and actions so the model could reliably produce structured, machine-readable mission plans, which became one of the strongest parts of the project.
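For context, dispatching a parsed plan to the robot looked roughly like the sketch below. The endpoint URL and payload shape are hypothetical stand-ins for our ESP32 firmware's actual API:

```python
import requests

# Assumed ESP32 endpoint; the real path and address will differ.
ROBOT_URL = "https://192.168.4.1/command"

def execute_plan(actions: list[dict]) -> None:
    """Send primitive actions to the robot one at a time."""
    for step in actions:
        # Wait for each acknowledgement before continuing, since the robot
        # has no encoders or IMU to report progress on its own.
        resp = requests.post(
            ROBOT_URL,
            json=step,
            timeout=5,
            verify=False,  # self-signed cert on the ESP32 (assumption)
        )
        resp.raise_for_status()

# Example: a tiny plan the LLM planner might have produced.
execute_plan([
    {"action": "drive_forward", "distance_cm": 50},
    {"action": "turn", "degrees": 90},
    {"action": "stop"},
])
```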