We saw something cool and wanted to make our own version with personal touches.
Gene’s Inspiration and Challenges: I have always wanted to get involved with AI vision, autonomous systems, and path-finding, believing they would elevate my personal projects. While I may not have achieved all of my goals during this hackathon, I learned and did more in these 36 hours than I have in the past 6 months, so regardless I am proud of what I achieved in such a short amount of time. I originally wanted to learn how to use a Jetson Orin Nano, since it seemed like the frontier of small-scale AI projects, and it has since grown into an entire stack of AI vision, navigation, mapping, and robotics. I also encountered problems, primarily setting up and working with Docker: I spent a good chunk of the allotted time setting up and debugging Docker on the Jetson to house my system. That said, it taught me a great deal, considering this was my first time really diving into the deep end of VNC, remote connections, and Docker usage. Regardless, I thoroughly enjoyed this hackathon, and my only wish is that it had been longer so I could truly create a complete project.
Fannie’s Inspiration and Challenges: As a Mechanical Engineering student and a 3D-printing enthusiast, I love taking models and robots from a new idea to a physical object. When the opportunity to participate in Starkhacks came to me, I was excited to test my ability to work under pressure and within a time limit. I was ready to challenge my creativity from the moment I submitted my application and got a sense of what was to come from the early event details.
I primarily worked on the mechanical design of Rover-Drone. I got an initial idea of what hardware we wanted to use from Gene and worked off of that; then, as development progressed, we realized more components might need to be mounted onto the robot. For me, the biggest challenge was designing a small, compact Rover-Drone while still creating space to mount every board, sensor, and component we wanted to use. With the limited space, I had to brainstorm alternative ways to mount boards without expanding the size of the robot. It was especially challenging when the programmers decided to switch to a different board due to incompatibility or other issues.
But that is where the fun is for me: needing to redesign and improve upon a current design. Iterating, tinkering, and remaking. Racking my brain, sharpening my ability to design, and creating something not just good but better is why I love designing and 3D printing.
Despite all that, I am proud of the design we landed on during this 36-hour hackathon. Of course there are plenty of improvements I could still make to reduce its air resistance and weight, but given some of the issues we faced, I am proud of us!
Darsh’s Inspiration and Challenges: My work on this project was inspired by the M4 Robot from California Institute of Technology, which demonstrates seamless mobility across land and air. I wanted to recreate this concept while adding my own focus on AI-driven perception, control systems, and real-world deployability. I’ve always been interested in AI vision, autonomous systems, and path-finding, and this hackathon gave me the opportunity to bring all of these ideas together into a single system.
I developed the core Python-based control system for the rover–drone, designing a state-based architecture to manage behavior and mode transitions. I integrated perception outputs (YOLO) into decision-making and implemented low-level ESC control for brushless motors, including calibration and throttle management—all without a traditional flight controller. My focus was on creating a modular, scalable system that connects AI, perception, and hardware.
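The state-based architecture described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the mode names, transition table, and the rule mapping a YOLO detection label to a mode change are all assumptions made for the example.

```python
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()
    DRIVE = auto()           # ground (rover) mode
    FLY = auto()             # air (drone) mode
    EMERGENCY_STOP = auto()

# Allowed mode transitions (hypothetical table for illustration).
TRANSITIONS = {
    Mode.IDLE: {Mode.DRIVE, Mode.FLY},
    Mode.DRIVE: {Mode.IDLE, Mode.FLY, Mode.EMERGENCY_STOP},
    Mode.FLY: {Mode.DRIVE, Mode.EMERGENCY_STOP},
    Mode.EMERGENCY_STOP: {Mode.IDLE},
}

class RoverDrone:
    def __init__(self):
        self.mode = Mode.IDLE

    def request(self, target: Mode) -> bool:
        """Switch modes only along an allowed transition edge."""
        if target in TRANSITIONS[self.mode]:
            self.mode = target
            return True
        return False

    def on_detection(self, label: str) -> None:
        """Feed a perception output (e.g. a YOLO class label) into the
        mode logic. The 'obstacle -> take off' rule is hypothetical."""
        if label == "obstacle" and self.mode is Mode.DRIVE:
            self.request(Mode.FLY)
```

Keeping transitions in an explicit table like this makes invalid mode changes impossible by construction, which is one common way to keep perception-driven decisions predictable.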
One of my biggest challenges was working on motor control without a traditional flight controller, which meant I had to manually handle ESC calibration, throttle consistency, and safe signal control. This made achieving stable and predictable behavior more difficult, especially when dealing with multiple motors. Another major challenge was integrating different systems together, particularly connecting perception outputs with the control logic in a clean and reliable way. Ensuring that the system could respond correctly to real-time inputs required careful structuring of the logic and handling edge cases.
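Driving ESCs without a flight controller typically means generating the servo-style PWM signal yourself. Below is a small sketch of the throttle-to-pulse-width mapping and arming sequence involved; the exact pulse-width endpoints and arming duration vary by ESC, so the values here are common defaults, not the project's measured calibration.

```python
# Standard hobby ESCs read a ~50 Hz servo-style PWM signal where the
# pulse width encodes throttle: ~1000 us = motor off, ~2000 us = full.
MIN_US, MAX_US = 1000, 2000

def throttle_to_pulse(throttle: float) -> int:
    """Map a 0.0-1.0 throttle fraction to a pulse width in microseconds,
    clamping out-of-range requests so a bad input can't over-drive a motor."""
    throttle = max(0.0, min(1.0, throttle))
    return round(MIN_US + throttle * (MAX_US - MIN_US))

def arm_sequence(frames: int = 50) -> list[int]:
    """Typical ESC arming: hold minimum throttle (~1 s of frames at 50 Hz)
    so the ESC sees a valid, safe signal before it will spin up."""
    return [throttle_to_pulse(0.0)] * frames
```

In practice these pulse widths would be fed to a PWM output (e.g. a GPIO PWM library on the host board), and keeping the clamp in one place helps with the throttle-consistency and safe-signal issues mentioned above.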
At the same time, this experience pushed me to learn much faster than I expected. In just 36 hours, I gained hands-on experience with both high-level AI integration and low-level hardware control, a combination I had never worked with before. It also showed me that building real-world robotics systems is less about individual components and more about making everything work together reliably. While there is still a lot to improve, I am proud of how much I was able to contribute in such a short time, and this project reinforced my interest in working at the intersection of AI and robotics.