Inspiration
As a space science lover, I wanted to build something that helps a space scientist brainstorm. As we [me + Gemini :)] researched, we came across this problem: the Artemis program faces a critical, microscopic enemy, Lunar Regolith. NASA's Lunar Dust Mitigation Roadmap identifies "Shortfall 1561", the inability to effectively monitor dust-induced mechanical failure in real time. Lunar dust is electrostatically charged and jagged (there is no wind erosion to smooth it), so it acts like diamond-tipped sandpaper that destroys seals and bearings in hours. With a round-trip communication delay of roughly 2.6 seconds to Earth and periods of total signal occultation on the far side of the Moon, astronauts cannot rely on ground control to diagnose every squeak or rattle. We wanted to build an autonomous expert system: a digital forensic engineer that lives on the rover, understands physics, and can say "STOP" before a critical airlock failure occurs.
What it does
Vigilant-L is a Neuro-Symbolic AI Agent that acts as an onboard forensic tribologist.

- Multimodal Perception: It ingests raw telemetry, Input A (Audio/Spectrograms) and Input B (Macro-Optical Imagery).
- Physics-First Reasoning: Instead of guessing, it uses Google Gemini to extract observable variables (Material Type, Surface Roughness, Vibration Chaos) and feeds them into a deterministic physics kernel.
- Deterministic Calculation: It executes rigorous engineering equations (Archard's Wear Law, Blok's Flash Temperature, L10 Fatigue Life) to calculate physical damage; a minimal sketch of such a kernel follows below.
- Digital Twin Simulation: It renders a live 3D Digital Twin using Three.js. If the physics engine calculates that stress exceeds the yield strength of Titanium, the 3D model actually deforms, fractures, or melts in the browser to visualize the failure mode for the astronaut.

Beyond component forensics, it can simulate many other scenarios, including astrophysics, astrobiology, quantum physics, and critical evaluation of spaceships, all based on actual physics logic and reasoning to solve real-world problems.
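To make the Deterministic Calculation step concrete, here is a minimal sketch of what such a kernel can look like. The names, structure, and sample material values are illustrative, not the exact contents of our physicsEngine.ts:

```ts
// Illustrative physics kernel sketch (hypothetical names and sample values).

interface Material {
  hardnessPa: number;      // Indentation hardness H (Pa)
  yieldStrengthPa: number; // Yield strength (Pa)
}

// Tiny sample lookup table; real values belong in a materials handbook.
const MATERIALS: Record<string, Material> = {
  "Ti-6Al-4V": { hardnessPa: 3.4e9, yieldStrengthPa: 8.8e8 },
  "Viton":     { hardnessPa: 9.0e6, yieldStrengthPa: 1.0e7 },
};

// Archard's Wear Law: V = K * F * s / H
// V: worn volume (m^3), K: dimensionless wear coefficient,
// F: normal load (N), s: sliding distance (m), H: hardness (Pa).
function archardWearVolume(K: number, loadN: number, slidingM: number, mat: Material): number {
  return (K * loadN * slidingM) / mat.hardnessPa;
}

// Basic L10 fatigue life: L10 = (C / P)^p, in millions of revolutions.
// C: dynamic load rating (N), P: equivalent dynamic load (N), p = 3 for ball bearings.
function l10LifeMrev(ratingN: number, loadN: number, p = 3): number {
  return Math.pow(ratingN / loadN, p);
}

// Example: a dusty titanium contact under 500 N over 1 km of sliding.
const wear = archardWearVolume(1e-3, 500, 1000, MATERIALS["Ti-6Al-4V"]);
console.log(`Wear volume: ${wear.toExponential(2)} m^3`);
console.log(`L10 life: ${l10LifeMrev(20000, 500)} million revolutions`);
```

Because these are plain deterministic functions, the same inputs always produce the same damage numbers, which is what lets us trust the verdicts downstream.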
How we built it
We built it as a multimodal reasoning engine. It first accepts a spectrogram image or audio recording of the surface (for example, of the planet the mission is on) together with a macro-optical image of the component, i.e., a toroidal seal, bellows joint, helical actuator, and so on. From these two inputs it calculates a set of metrics using advanced physics reasoning and returns a final verdict:

- GO: normal conditions, essentially permitting the materials to be sent.
- NO-GO: critical conditions that do not permit the components to be sent, with the reasons justified.
- WARNING: conditions and environments could get worse, so the crew is warned before sending the components/materials to space.

A rough sketch of this verdict logic appears below. The digital 3D simulation was later built to extend that idea and visualize it effectively, and it then grew into what we believe is the first physics-reasoning-driven 3D simulator able to simulate anything described to it. It follows first-principles physics as the primary logic.
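As a rough illustration of how perception outputs map to a verdict (the field names and thresholds here are hypothetical, not our tuned values):

```ts
// Hypothetical verdict logic: structured perception in, GO/NO-GO/WARNING out.

type Verdict = "GO" | "NO-GO" | "WARNING";

interface Observation {
  contactStressPa: number; // Computed by the physics kernel
  yieldStrengthPa: number; // From the material lookup table
  vibrationChaos: number;  // 0..1, extracted from the spectrogram by the model
}

function assess(obs: Observation): Verdict {
  const stressRatio = obs.contactStressPa / obs.yieldStrengthPa;
  if (stressRatio >= 1.0) return "NO-GO"; // Plastic deformation predicted
  if (stressRatio >= 0.7 || obs.vibrationChaos > 0.5) return "WARNING"; // Degrading trend
  return "GO";
}
```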
We built it as follows:
- The Brain (AI): We used Google Gemini 3 Pro Preview and Gemini 3 Flash Preview via the GenAI SDK. We engineered a "Forensic Persona" prompt that strictly forbids the AI from guessing math. Its only job is to perceive materials (e.g., identifying "Ti-6Al-4V" vs. "Viton Rubber") and environmental conditions.
- The Engine (Physics): We built a custom TypeScript physics engine (physicsEngine.ts) that functions as a lookup table for material properties (Young's Modulus, Thermal Conductivity) and runs the Archard Wear Equation to calculate volume loss.
- The Visualization (3D): We used React Three Fiber. The 3D meshes are "smart": we wrote custom vertex manipulation logic that responds to the physics state. If the "Chaos" variable spikes, the mesh vertices vibrate. If "Flash Temp" exceeds 1600 °C, the material changes emissivity to glow red-hot based on Wien's Displacement Law. A simplified sketch of this follows below.
- The Interface: Built with React and Tailwind CSS, designed to mimic the high-contrast, data-dense displays used in aerospace HUDs.
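To illustrate the kind of vertex manipulation described above, here is a simplified sketch. It uses a plain temperature ramp for the glow rather than a full Wien's-law color mapping, and names like chaos and flashTempC are placeholders:

```ts
import * as THREE from "three";

// Simplified sketch: jitter mesh vertices by a "chaos" value and tint the
// emissive color by flash temperature. Names are placeholders, not our exact code.
function applyPhysicsState(mesh: THREE.Mesh, chaos: number, flashTempC: number): void {
  const position = mesh.geometry.attributes.position as THREE.BufferAttribute;

  // Cache undeformed vertices once so the jitter does not accumulate across frames.
  if (!mesh.userData.basePositions) {
    mesh.userData.basePositions = (position.array as Float32Array).slice();
  }
  const base = mesh.userData.basePositions as Float32Array;

  // Vibration: random displacement around the rest pose, scaled by chaos (0..1).
  for (let i = 0; i < position.count; i++) {
    position.setXYZ(
      i,
      base[3 * i]     + (Math.random() - 0.5) * 0.02 * chaos,
      base[3 * i + 1] + (Math.random() - 0.5) * 0.02 * chaos,
      base[3 * i + 2] + (Math.random() - 0.5) * 0.02 * chaos,
    );
  }
  position.needsUpdate = true;

  // Heat glow: hot metal starts to glow visibly around ~600 °C; ramp toward red-hot.
  const material = mesh.material as THREE.MeshStandardMaterial;
  const glow = THREE.MathUtils.clamp((flashTempC - 600) / 1000, 0, 1);
  material.emissive.setRGB(glow, glow * 0.25, 0);
  material.emissiveIntensity = 1 + glow;
}
```

In React Three Fiber, a function like this would run inside a useFrame callback so the jitter and glow update every frame.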
Challenges we ran into
- LLM Hallucination: Early versions of the AI tried to "predict" the Remaining Useful Life (RUL) metric simply by looking at the image, resulting in random numbers. We solved this by splitting the architecture: the AI sees, but the code calculates. This Neuro-Symbolic approach ensures the math is always grounded in real physics.
- Vertex Deformation: Mapping abstract physics values (like yield strength) to visual 3D deformation was tricky. We had to create a normalization layer that translates Pascal units of stress into Three.js vector displacement so the object bends realistically without exploding into a glitchy mess. A sketch of that normalization follows below.
- Context Switching: Teaching the AI to handle "Command Uplink" queries (e.g., "Simulate a dust storm") while maintaining the context of the current mechanical failure required managing a strict state machine between the Chat Interface and the Simulation View.
- Color Prediction: Earlier versions produced colours that wouldn't match the simulation's physics logic. Now the colour is reasoned from temperature, the environmental conditions of the location, wavelengths, and so on.
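For the vertex-deformation challenge, the normalization idea can be sketched like this (the constant and function names are illustrative):

```ts
// Illustrative normalization: map stress in Pascals to a bounded displacement in scene units.

const MAX_DISPLACEMENT = 0.15; // Scene units; tuned so deformation reads without "exploding".

function stressToDisplacement(stressPa: number, yieldStrengthPa: number): number {
  // Normalize against yield strength so every material shares the same 0..1 scale.
  const ratio = stressPa / yieldStrengthPa;
  // Soft clamp (tanh) keeps extreme stresses from producing glitchy vertex spikes.
  return MAX_DISPLACEMENT * Math.tanh(ratio);
}
```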
Accomplishments that we're proud of
- The Anti-Blob Geometry: We successfully implemented logic that prevents the 3D model from looking like a generic blob. It respects material stiffness: Titanium shatters, while Rubber melts (sketched below).
- Verification Trace: The app displays the raw "Physics Kernel Trace" log. Seeing the app output the exact step-by-step math it used to determine a "NO-GO" status gives us huge confidence in its reliability for mission-critical contexts.
- Aesthetic Fidelity: We achieved a "Hard Sci-Fi" UI aesthetic that feels like it truly belongs on a SpaceX Starship or an Artemis Lander, utilizing a distinct monochromatic palette with functional semantic coloring.
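The stiffness-aware failure-mode idea can be sketched like this (the thresholds and property names are illustrative, not our exact rules):

```ts
// Hypothetical failure-mode selection driven by material properties.

type FailureMode = "shatter" | "melt" | "bend";

interface MaterialProps {
  youngsModulusPa: number; // Stiffness
  meltingPointC: number;   // Or decomposition temperature for elastomers
}

function pickFailureMode(mat: MaterialProps, tempC: number): FailureMode {
  if (tempC >= mat.meltingPointC) return "melt"; // Rubber seals go first
  // Very stiff materials fracture rather than flow; compliant ones deform.
  return mat.youngsModulusPa > 50e9 ? "shatter" : "bend";
}
```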
What we learned
- Tribology is Hard: We had to do deep dives into mechanical engineering papers to understand how lunar vacuum environments affect friction (cold welding).
- AI as a Sensor: We learned that LLMs are best used as "fuzzy sensors" that convert unstructured data (images/audio) into structured JSON that traditional code can process reliably. The sketch below shows the shape of that pattern.
- Three.js Optimization: Managing high-poly deformable meshes in the browser required careful management of the render loop and React state to maintain 60 FPS.
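The "fuzzy sensor" pattern looks roughly like this with the GenAI SDK (the model id, prompt, and schema are placeholders rather than our production values):

```ts
import { GoogleGenAI, Type } from "@google/genai";

// Hypothetical "fuzzy sensor" call: the model only perceives and returns JSON;
// all math happens afterwards in the deterministic physics kernel.
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function perceive(base64Png: string) {
  const response = await ai.models.generateContent({
    model: "gemini-3-flash-preview", // Placeholder id; substitute an available Gemini model.
    contents: [
      { text: "Identify the material and surface condition. Do not compute any metrics." },
      { inlineData: { mimeType: "image/png", data: base64Png } },
    ],
    config: {
      responseMimeType: "application/json",
      responseSchema: {
        type: Type.OBJECT,
        properties: {
          material: { type: Type.STRING },
          surfaceRoughness: { type: Type.NUMBER },
          vibrationChaos: { type: Type.NUMBER },
        },
      },
    },
  });
  return JSON.parse(response.text ?? "{}"); // Structured input for physicsEngine.ts
}
```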
What's next for Vigilant-L
Adding more metrics, and possibly creating our own dataset or fine-tuning on a separate public dataset, to push toward extraordinary results that are 0% guessed and 100% real and calculated. Making the 3D simulation more robust and efficient, so it can become an actual tool used by real space engineers.