Inspiration The inspiration for this project stems from the stark realization that as humanity prepares for the 140-million-mile journey to Mars, we are leaving our most critical medical resource behind: instant communication. While current AI can detect patterns, it is "Earth-biased," trained on healthy humans in 1G gravity. I was inspired to bridge this "Medical Autonomy Gap" by creating a system that doesn't just predict what might happen, but understands why it is happening in the hostile, shifting environment of deep space.

What it does The project is a Biologically-Aware Digital Twin system that provides a 24/7 medical "flight surgeon" for every crew member.

Real-time Monitoring: Continuously tracks vitals and biomechanical markers.

Causal Reasoning: Unlike standard AI that looks for correlations, our engine uses Causal Inference to distinguish between harmless microgravity adaptations (like facial puffiness) and life-threatening pathologies (like SANS or radiation-induced cellular stress).

Predictive Simulation: It allows the crew to run "What-If" scenarios, simulating the effectiveness of a medication or procedure on their specific digital twin before administering it.

How we built it The Brain (Causal AI): Built using Structural Causal Models (SCMs) to map the physiological relationships between radiation, bone density, and fluid dynamics.

The Network (Federated Learning): Implemented a decentralized training loop using Flower and PyTorch, allowing models on separate "Edge" devices to learn from each other without sharing private biometric data.

The Body (Modeling): Integrated differential equations representing the cephalad fluid shift and calcium metabolism to make the Digital Twin "biologically aware."

The Hardware (Edge Computing): Optimized for low-latency inference on NVIDIA Jetson and similar edge-grade hardware to ensure zero reliance on a cloud link.

Challenges we ran into The biggest hurdle was the "Data Scarcity Paradox." There isn't a massive dataset of "Astronauts on Mars" to train on. We had to rely on high-fidelity synthetic data and ground-analog studies. Additionally, implementing Federated Learning in a high-latency, intermittent connectivity environment required building a custom aggregation protocol that could handle "dropped" nodes without crashing the global model.

Accomplishments that we're proud of Explainability: We successfully moved beyond "Black Box" AI. Our system can provide a "Causal Path" for its diagnosis, which is essential for earning a crew’s trust.

Efficiency: We achieved a 70% reduction in bandwidth requirements by only transmitting model "weights" rather than raw health data.

NASA Alignment: The framework aligns with NASA’s Artemis requirements for "Thinking" Autonomy and the Human Research Program (HRP) goals.

What we learned We learned that in life-critical environments, Causality > Correlation. A model that knows why bone density is dropping is infinitely more valuable than one that simply flags it. We also realized that medical privacy isn't just a legal requirement—in the psychological pressure-cooker of a Mars mission, it’s a mission-safety requirement.

What's next for Autonomous Health The next phase involves integrating Multi-Modal Foundation Models that can process medical imaging (like ultrasound) and text-based logs alongside sensor data. We are also looking into "Closed-Loop" integration, where the AI can provide real-time guidance for robotic surgery or autonomous medication dispensing systems, truly closing the gap between Earth and the stars

Built With

  • and
  • biologically-aware
  • built-with-causal-inference
  • digital
  • edge-ai
  • federated-learning
  • modeling
  • twin
Share this project:

Updates