Inspiration

We were fascinated by the extra-planetary drive that Anthropic’s Claude planned for the Perseverance rover, specifically its trajectory-mapping system, and decided to try the same with Gemini and its multimodal capabilities, adding a few improvements of our own, including a sample-collection module and autonomous rover control. Adding market relevance to our shared love of science-fiction filmography, ASPER-I is our imagined future, comparable to the sentient AI concepts we saw in “Interstellar” or “Blade Runner 2049”. NASA’s upcoming Artemis II expedition, the first crewed lunar expedition since the 1970s, was also a fitting (and opportunely timed) motivator.

What it does

ASPER-I currently offers a variety of features, all managed autonomously and synchronized seamlessly. The breakdown is as follows:

  • Autonomous Drive Plan: ASPER-I is imagined living inside real-world mission-control consoles at NASA, listening for a high-level goal ('I want to explore to my north and see if I can pick up some basalt along the way'), for which it plans a drive based on strategically calculated waypoints and regolith analysis. Each drive is executed to its endpoint without commander intervention, unless specified at planning time.
  • Local Inspection: Features every Moon rover needs are seamlessly integrated into ASPER-I’s logic component, including lunar crater-rim analysis, geological assessments, and timely terrain reports.
  • Elemental Sample Collection: Lunar soil is rich in materials such as basalt, ilmenite, and olivine, among many smaller trace constituents. We included these three as potential samples ASPER-I can identify along its route and store in its handy inventory. Currently, samples are identified from geological probabilities and terrain densities, paired with visual markers from the Nano Banana-generated live-feed frames. We discuss the future potential of this feature in the sections below.
  • Progress Report Generation: As with any digitized tool, aggregation and reporting mechanisms are crucial, especially given the paper-based processes still in effect at NASA today. At the end of each drive, a comprehensive report can be downloaded that includes the inventory logs, the trajectory map, and other relevant information required by on-ground support.
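The sample-collection logic above can be sketched roughly as a prior-weighted detection over terrain. The mineral weights, threshold, and function names below are illustrative assumptions for demonstration, not values from the actual ASPER-I codebase (the real system also pairs these priors with visual markers from the generated live-feed frames):

```typescript
// Illustrative sketch of probability-based sample identification.
// All weights here are invented for demonstration purposes only.
type Mineral = "basalt" | "ilmenite" | "olivine";

interface TerrainCell {
  density: number; // 0..1 regolith density estimate for this patch
}

// Hypothetical geological priors per mineral (assumed, not mission data).
const PRIORS: Record<Mineral, number> = {
  basalt: 0.6,
  ilmenite: 0.25,
  olivine: 0.15,
};

// Scale each prior by the local terrain density, then report the most
// likely mineral if it clears a detection threshold; otherwise no sample.
function identifySample(cell: TerrainCell, threshold = 0.2): Mineral | null {
  let best: Mineral | null = null;
  let bestScore = 0;
  for (const mineral of Object.keys(PRIORS) as Mineral[]) {
    const score = PRIORS[mineral] * cell.density;
    if (score > bestScore) {
      bestScore = score;
      best = mineral;
    }
  }
  return bestScore >= threshold ? best : null;
}
```

In dense basaltic terrain the basalt prior dominates, while in sparse regolith no candidate clears the threshold and the rover simply drives on.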

Though not a feature on its own, we equipped ASPER-I with as close an attempt at a sentient, emotionally aware agent as possible, with special consideration for humour and sarcasm in its cadence. After all, a little humour is always welcome in the dark and lonely expanse of space :)

How we built it

We started by building the agent backend in Antigravity, with movement logic adapted from Apollo-era systems to create fictional coordinates and topologies. To ensure the rover could navigate these terrains safely, we integrated a gradient-analysis protocol that calculates the terrain slope angle \( \theta \) in real time. By processing the elevation change \( \Delta z \) relative to the horizontal distance across the xy-plane, the agent determines the pitch using: $$ \theta = \arctan \left( \frac{\Delta z}{\sqrt{\Delta x^2 + \Delta y^2}} \right) $$ This allows the system to autonomously reject paths that exceed the safety thresholds established in our initial movement logic. Once the ‘brain’ of the system was ready, with tested parameters and instructions, we migrated the project to Google AI Studio to turn a faceless, conceptual model into an interactive, high-fidelity mission-control infrastructure. The project was originally optimized for static maneuvering to imitate Claude Code’s capabilities, but we later pivoted to live control after realizing that Gemini 3’s capacity for complex decision-making could perform optimally in dynamic maneuvers and searches. Taking inspiration from science fiction, we considered voice-enabled interaction paramount to the futuristic, intuitive allure of the idea.
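The pitch formula and path-rejection step above can be sketched in a few lines. The `Waypoint` shape and the 25° safety threshold are illustrative assumptions, not values taken from the actual ASPER-I movement logic:

```typescript
// Hypothetical sketch of the gradient-analysis safety check.
interface Waypoint {
  x: number; // metres east
  y: number; // metres north
  z: number; // elevation in metres
}

const MAX_SAFE_PITCH_DEG = 25; // assumed safety threshold, for illustration

// Pitch angle θ = arctan(Δz / √(Δx² + Δy²)) between two waypoints, in degrees.
function pitchDegrees(a: Waypoint, b: Waypoint): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const dz = b.z - a.z;
  const horizontal = Math.sqrt(dx * dx + dy * dy);
  return (Math.atan2(Math.abs(dz), horizontal) * 180) / Math.PI;
}

// A planned path is accepted only if every segment stays under the threshold.
function isPathSafe(path: Waypoint[]): boolean {
  for (let i = 1; i < path.length; i++) {
    if (pitchDegrees(path[i - 1], path[i]) > MAX_SAFE_PITCH_DEG) {
      return false;
    }
  }
  return true;
}
```

A nearly flat traverse passes the check, while a segment climbing 10 m over 10 m of horizontal distance (a 45° pitch) is rejected outright.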

Challenges we ran into

The most significant challenge was simulating a projection of the rover’s trajectory using the static images available thanks to the Apollo 15 and 17 missions. Since those missions concluded more than 50 years ago, the rover imagery wasn’t detailed enough to rely on alone. To overcome this, we used lunar topography and geographic maps to fill in the gaps so that Gemini could generate accurate images with the measured distances required for autonomous trajectory control.

Accomplishments that we're proud of

We're immensely proud of the proof of concept we were able to engineer in the short time we had, especially the end-to-end autonomy we achieved in the agent. Given that we built the entire system as-is on the publicly available free version of Gemini 3.0 Pro, we were skeptical about achieving both the pathfinding capabilities and the continuous execution of sub-functions, particularly with infrequent but inconvenient API crashes under heavy thought signatures. But with sufficient fail-safe measures and a beautifully aesthetic interface that simulates the JARVIS experience, we are satisfied and proud!

What we learned

The vibe-coding phenomenon erupted upon our generation as quickly as it was celebrated, overestimated, and then antagonized. We had only minimal familiarity with the potential of AI code assistants before joining this hackathon and unlocking our productivity with the in-house Gemini 3 Pro code assistant. It turns out that when vibe coding, a developer’s responsibilities shift from writing lines of code or optimizing loops and logic to higher-level decisions, complex design problem-solving, and greater proficiency in debugging and error-checking.

What's next for ASPER-I

We’ve currently restricted ASPER-I to the Shackleton region due to data limitations, but if we reach the finals and gain recognition, we would expand the system and push Gemini’s capabilities further toward extraterrestrial exploration, bring in live-feed processing, and extend the scope to real-world Moon and Mars rovers. After reading about NASA’s Artemis II mission, we naturally thought of applying this strategy to simulated lunar rovers, because Gemini has yet to explore rover autonomy and agentic space exploration.

Built With

  • antigravity
  • gemini-2.5-flash-native-audio
  • gemini-3.0-pro
  • react.js
  • typescript