🌌 Inspiration: The Signal Gap

In 2026, as humanity pushes further into space, the people onboard those vessels face The Signal Gap: long stretches where communication with Earth is delayed or impossible, and a medical emergency cannot wait for Mission Control to answer.

We built EIMO (Earth Independent Medical Operations) to solve this problem. We wanted to create an AI that doesn't just "chat," but takes command—bridging the gap between the expert care you need and the isolation you are in.

🚀 What it does

EIMO is an offline-first Autonomous Medical Officer. It is designed to function in zero-connectivity environments, acting as a high-speed diagnostic partner.

  • Protocol 1 (Voice Triage): EIMO listens to unstructured voice commands (e.g., "Subject has arterial bleed!") and instantly converts them into structured, life-saving protocols (Tourniquet application, Shock management). It filters out panic and focuses on the solution.
  • Multimodal Vision: Using Gemini 3's vision capabilities, EIMO acts as a second pair of eyes. Users can upload images of injuries or EKGs, and the AI verifies the diagnosis (e.g., "STEMI Confirmed"), providing clinical confidence when a specialist isn't available.
  • Dual-Context UI: The interface shifts between "Orbit Mode" (High-contrast, dark mode for space/night) and "Earth Mode" (High-visibility for field work), ensuring usability in any environment.
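Protocol 1 can be sketched as a mapping from an unstructured transcript to a structured protocol. In EIMO the interpretation is done by the Gemini model, not a lookup table, so the keywords, protocol names, and steps below are illustrative assumptions only:

```typescript
// Illustrative sketch of Protocol 1 (Voice Triage): map an unstructured
// voice transcript to a structured, directive protocol. The trigger
// phrases and protocol contents here are assumptions for illustration;
// the real app lets the model interpret the transcript.

type Protocol = {
  id: string;      // e.g. "TOURNIQUET"
  steps: string[]; // short, directive instructions (Brevity principle)
};

const PROTOCOLS: Record<string, Protocol> = {
  TOURNIQUET: {
    id: "TOURNIQUET",
    steps: [
      "Apply tourniquet above the wound.",
      "Tighten until bleeding stops.",
      "Note application time.",
    ],
  },
  SHOCK: {
    id: "SHOCK",
    steps: ["Lay subject flat.", "Elevate legs.", "Keep subject warm."],
  },
};

// Trigger phrases, checked against a lowercased transcript.
const TRIGGERS: [string, string][] = [
  ["arterial bleed", "TOURNIQUET"],
  ["bleeding out", "TOURNIQUET"],
  ["pale and clammy", "SHOCK"],
];

export function triageFromTranscript(transcript: string): Protocol | null {
  const text = transcript.toLowerCase();
  for (const [phrase, protocolId] of TRIGGERS) {
    if (text.includes(phrase)) return PROTOCOLS[protocolId];
  }
  return null; // no fast-path match: fall through to the model conversation
}
```

The point of the fast path is that panic words map straight to action: the transcript's emotional content is discarded and only the clinical signal survives.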

⚙️ How we built it

We built EIMO using a React frontend powered by the Gemini 3 API.

The core innovation lies in the System Prompt Engineering. We moved away from the standard "helpful assistant" persona and engineered a "Medical Officer" persona that prioritizes:

  1. Brevity: High-stakes situations require short, directive answers.
  2. Multimodality: We used Gemini's native ability to process audio and images in the same context window.
  3. State Management: The app tracks the "Patient State" (e.g., Bleeding -> Controlled -> Stable), simulating a real medical encounter rather than a static Q&A.
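The "Patient State" tracking above can be sketched as a small state machine. The states come from the write-up (Bleeding -> Controlled -> Stable); the event names and allowed transitions are assumptions for illustration:

```typescript
// Minimal sketch of Patient State tracking. States are from the write-up;
// the event names and transition table are illustrative assumptions.

type PatientState = "BLEEDING" | "CONTROLLED" | "STABLE";

const TRANSITIONS: Record<PatientState, Partial<Record<string, PatientState>>> = {
  BLEEDING: { TOURNIQUET_APPLIED: "CONTROLLED" },
  CONTROLLED: { VITALS_NORMAL: "STABLE", REBLEED: "BLEEDING" },
  STABLE: { REBLEED: "BLEEDING" },
};

export function nextState(state: PatientState, event: string): PatientState {
  // Unknown events leave the state unchanged, so the encounter
  // can never drift into an undefined state.
  return TRANSITIONS[state][event] ?? state;
}
```

Injecting the current state into the system prompt on every turn is what turns a static Q&A into a simulated encounter: the model always knows whether it is still fighting the bleed or monitoring a stable patient.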

🔧 Challenges we faced

  • The "Empathy vs. Efficiency" Balance: We struggled to make the AI sound authoritative without sounding cold. We iterated on the prompt to add "Calming Protocols" (detecting panic words) while keeping the medical advice strictly tactical.
  • Visual Latency: Processing high-resolution medical images in a browser environment required compressing each image before sending it to the Gemini API, so the "real-time" feel of the diagnosis stayed intact.
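The compression step can be sketched as a pure helper that computes downscaled dimensions before the image is redrawn to a canvas and re-encoded. The 1024px edge cap and JPEG quality below are assumptions, not EIMO's exact numbers:

```typescript
// Sketch of the client-side downscaling step: fit an image inside a
// maximum edge length before drawing it to a canvas and encoding it as
// JPEG. The 1024px cap is an assumed value for illustration.

export function fitWithin(
  width: number,
  height: number,
  maxEdge = 1024
): { width: number; height: number } {
  const longest = Math.max(width, height);
  if (longest <= maxEdge) return { width, height }; // already small enough
  const scale = maxEdge / longest;
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

// In the browser these dimensions feed a canvas, roughly:
//   canvas.width = w; canvas.height = h;
//   ctx.drawImage(img, 0, 0, w, h);
//   canvas.toBlob(upload, "image/jpeg", 0.8); // assumed quality setting
```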

🏆 Accomplishments that we're proud of

We are most proud of the "Flash Diagnosis" feature. Seeing the AI correctly identify a simulated arterial bleed from a voice command and instantly surface the correct tourniquet protocol felt like magic. It proved that AI can be more than a productivity tool—it can be a survival tool.

🧠 What we learned

We learned that Context is King. A generic LLM is useful, but a context-aware LLM that knows it is "in space" or "in a rural clinic" changes the entire interaction. Gemini 3 allowed us to build that context deeply into the application layer.

🔮 What's next for EIMO

  • Wearable Integration: Connecting EIMO to real-time biometric sensors (Heart Rate, O2 Saturation) for automatic "Red Alert" triggers.
  • Satellite Uplink: Implementing a "Burst Transmission" feature that compresses the medical encounter into a tiny text packet to be blasted back to Earth/Hospital when a signal window opens.
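The "Burst Transmission" idea can be sketched as flattening an encounter into one compact, pipe-delimited line of text. The field names and packet layout below are assumptions for illustration, not a finished wire format:

```typescript
// Sketch of Burst Transmission: compress a medical encounter into a tiny
// text packet that fits a brief signal window. Layout is an assumption.

type Encounter = {
  subjectId: string;
  state: string;     // e.g. "CONTROLLED"
  events: string[];  // chronological event codes
  timestamp: number; // unix seconds
};

export function encodeBurst(e: Encounter): string {
  // One line, fixed field order; events joined with commas.
  return ["EIMO1", e.subjectId, e.state, e.events.join(","), String(e.timestamp)].join("|");
}

export function decodeBurst(packet: string): Encounter {
  const [magic, subjectId, state, events, ts] = packet.split("|");
  if (magic !== "EIMO1") throw new Error("not an EIMO burst packet");
  return {
    subjectId,
    state,
    events: events ? events.split(",") : [],
    timestamp: Number(ts),
  };
}
```

A delimited text packet stays human-readable on the ground side and compresses well, which matters when the uplink window may last only seconds.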

Built With

  • React
  • Gemini 3 API