Inspiration

In the aftermath of major disasters, such as earthquakes or armed conflict, telecommunications infrastructure is often the first thing to fail. I realized that while AI has advanced rapidly, most "smart" tools become useless bricks the moment the internet cuts out. I asked myself: How can I bring Level 1 Trauma Center intelligence to a paramedic standing in the rubble, with zero signal?

What it does

ResQ-Node is the blueprint for the ultimate Offline-First Medical AI. My goal is to kill the dependency on the cloud for critical life-saving data and turn any ruggedized Android tablet into an autonomous triage partner.

It Listens: Designed to capture paramedic voice logs hands-free using on-device speech recognition (Whisper).

It Reasons: Unlike basic chatbots, it is architected around a Reasoning Model (DeepSeek-R1) that strictly follows the S.T.A.R.T. Triage Protocol logic chains.

It Acts: Instantly assigns a Triage Color Tag (Red/Yellow/Green/Black) and recommends immediate life-saving actions.

It Protects: I designed the system to guarantee 100% data sovereignty—medical records are encrypted on-device and never leave the blast zone.
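The S.T.A.R.T. decision chain the reasoning model follows can be sketched as a simple rule cascade. This is an illustrative simplification of the adult protocol, not ResQ-Node's actual implementation; the real protocol also covers airway repositioning steps and pediatric variants.

```python
def start_triage(can_walk: bool, breathing: bool, resp_rate: int,
                 cap_refill_s: float, obeys_commands: bool) -> str:
    """Simplified S.T.A.R.T. triage decision chain (adult, illustrative)."""
    if can_walk:
        return "GREEN"    # walking wounded: delayed, minor
    if not breathing:
        return "BLACK"    # no breathing after airway repositioning: expectant
    if resp_rate > 30:
        return "RED"      # respiratory distress: immediate
    if cap_refill_s > 2:
        return "RED"      # poor perfusion: immediate
    if not obeys_commands:
        return "RED"      # altered mental status: immediate
    return "YELLOW"       # delayed

print(start_triage(can_walk=False, breathing=True, resp_rate=40,
                   cap_refill_s=1.0, obeys_commands=True))  # prints: RED
```

Encoding the protocol as an ordered cascade like this is also what makes it testable: each tag has exactly one path through the rules.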

How I built it

Since this is an Ideathon, I focused on validating technical feasibility and user experience, using a suite of prototyping tools to prove that this architecture works:

Architecture: I mapped the complete offline data flow using Draw.io, showing that a unidirectional pipeline (Whisper -> DeepSeek-R1 -> SQLite) can function without the cloud.
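To make the unidirectional flow concrete, here is a minimal sketch of that pipeline. The `transcribe` and `reason` functions are stand-ins for the on-device Whisper and DeepSeek-R1 calls, and their names, the sample transcript, and the returned tag are my own illustrative assumptions, not the actual implementation.

```python
import sqlite3

def transcribe(audio_chunk) -> str:
    # Stand-in for on-device Whisper speech-to-text.
    return "adult male, breathing 40 per minute, radial pulse absent"

def reason(transcript: str) -> dict:
    # Stand-in for the local DeepSeek-R1 triage call.
    return {"tag": "RED", "action": "control bleeding, maintain airway"}

def store(db: sqlite3.Connection, transcript: str, result: dict) -> None:
    # Final stage: persist the record in a local SQLite database.
    db.execute("CREATE TABLE IF NOT EXISTS triage"
               " (transcript TEXT, tag TEXT, action TEXT)")
    db.execute("INSERT INTO triage VALUES (?, ?, ?)",
               (transcript, result["tag"], result["action"]))

# In-memory DB here; the real design would use an encrypted on-device file.
db = sqlite3.connect(":memory:")
transcript = transcribe(None)
store(db, transcript, reason(transcript))
print(db.execute("SELECT tag FROM triage").fetchone()[0])  # prints: RED
```

The point of the diagram, and of this sketch, is that data only ever moves downstream: nothing in the chain requires a network round trip.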

Logic Validation: I used Ollama locally to test whether a Small Language Model (DeepSeek-R1-Distill) could actually handle complex medical scenarios. The terminal outputs in my presentation show the model correctly reasoning through triage protocols.
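A sketch of the kind of protocol-constrained prompt used in these Ollama tests; the exact wording below is illustrative, not the prompt from my presentation.

```python
# Illustrative S.T.A.R.T. system prompt for local-model testing.
START_RULES = """You are an offline triage assistant. Apply S.T.A.R.T. strictly, in order:
1. Patient can walk -> GREEN.
2. Not breathing after airway repositioning -> BLACK.
3. Respiratory rate above 30 -> RED.
4. Capillary refill above 2 seconds or no radial pulse -> RED.
5. Does not obey simple commands -> RED.
6. Otherwise -> YELLOW."""

def build_prompt(observation: str) -> str:
    """Combine the fixed protocol rules with one field observation."""
    return f"{START_RULES}\n\nObservation: {observation}\nAnswer with one tag only."

prompt = build_prompt("adult, not walking, respiratory rate 40")
# Sent to the local reasoning model with, e.g.:
#   ollama run deepseek-r1:7b "<prompt>"
print(prompt)
```

Pinning the full rule list into every prompt is what lets a small model stay on the protocol's rails instead of free-associating.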

User Interface: I designed a high-contrast "Dark Mode" interface in Figma, tailored to cut battery consumption on OLED screens and stay readable in disaster environments.

Presentation: I compiled my vision and technical roadmap using Canva.

Challenges I ran into

The main challenge I faced was determining whether a "Small Language Model" (SLM) was smart enough for medical logic. I didn't want to just guess, so I spent hours testing prompts in Ollama. I found that standard models merely predicted plausible text, while the "Reasoning" models (like R1) were required to accurately follow the step-by-step S.T.A.R.T. protocol.

Accomplishments that I'm proud of

I am proud of the Proof of Logic. I didn't just make a slide deck; I used my local environment to prove that an offline AI can make the right life-saving decision (classifying a "Black Tag" patient correctly). I am also proud of the professional, field-ready UI design I executed in Figma.

What I learned

I learned that "Offline" is a feature, not a limitation. By removing the dependency on the cloud, I realized I could actually gain speed (zero network latency) and security (total data sovereignty).

What's next for ResQ-Node

With Phase 1 (the Offline Voice Triage Core) validated, my roadmap focuses on the next stages:

Phase 2: Implementing Peer-to-Peer Mesh Networking so devices can sync data with each other without a central server.

Phase 3: Integrating Edge Computer Vision to analyze wound severity and burn surface area directly via the device camera.

Built With

  • canva
  • deepseek-r1
  • draw.io
  • figma
  • ollama
  • sqlite
  • whisper