Inspiration
The worst emergencies tend to happen exactly where help is hardest to reach. A hiker breaks an ankle six hours from the trailhead. An earthquake knocks out the local cell tower for two days. A grandparent collapses in a rural village where the nearest ambulance is forty minutes away. In every one of those moments, the bystander has a phone in their pocket — and almost no app on that phone is built to work without a signal.
Existing first‑aid apps mostly assume connectivity, yet by some estimates up to 49% of cell towers can fail during a disaster. Generic chatbots are everywhere, but you really do not want a hallucinating model improvising the steps of CPR on a stranger's child. We wanted something different: an app that behaves like a calm, well‑trained first responder kneeling next to you — one that works offline, gives canonical, protocol‑grade instructions, and uses AI only to personalize and clarify, never to invent.
That is OASIS — a quiet place of guidance when the rest of the world has gone dark.
What it does
OASIS is an offline Android emergency response companion. The user describes — or shows — what is happening, and OASIS walks them through the right protocol, one step at a time.
- Deterministic triage. Every conversation starts in a structured decision tree (Entry → severity branching → specific protocol). Life‑threatening branching never depends on a language model's guess.
- Canonical protocols, locally stored. Steps come from structured emergency reference data based on Red Cross / wilderness first‑aid guidelines, not from the LLM's memory.
- On‑device AI for the human parts. A local LLM (LFM2.5-1.2B, served via Melange) handles personalization and step‑related Q&A. It rephrases the canonical step for this user, in this moment.
- Graceful fallback. If the LLM is slow, low on memory, or its output fails validation, the app falls back to the canonical protocol text. The flow never breaks.
- No network required. Core triage, protocol logic, and AI inference all run on‑device.
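To make the deterministic-triage idea concrete, here is a minimal sketch of a decision-tree state machine of the kind described above. All names and the node schema are illustrative, not OASIS's actual code; the point is that advancing through triage is a pure map lookup, with no model in the loop.

```kotlin
// Hypothetical triage tree node: either a question with answer branches,
// or a leaf that points at a canonical protocol.
data class TriageNode(
    val id: String,
    val question: String? = null,                    // null on leaf nodes
    val protocolId: String? = null,                  // set on leaf nodes
    val branches: Map<String, String> = emptyMap()   // answer -> next node id
)

// Traversal is deterministic and auditable: the same answers always
// produce the same path, independent of any LLM output.
class TriageMachine(private val nodes: Map<String, TriageNode>, entryId: String) {
    var current: TriageNode = nodes.getValue(entryId)
        private set

    fun answer(choice: String): TriageNode {
        val nextId = current.branches[choice]
            ?: error("no branch '$choice' from node '${current.id}'")
        current = nodes.getValue(nextId)
        return current
    }
}
```

Because a leaf only carries a `protocolId`, the step text itself stays in the canonical protocol store, keeping tree logic and protocol content separately testable.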
How we built it
We split the system into three layers and kept them strictly independent, so AI changes can never silently bend protocol meaning:
- Protocol & state layer — JSON decision trees plus canonical protocol text, loaded from local assets. A pure Kotlin state machine traverses them. This layer is deterministic and testable.
- AI orchestration layer — Wraps the on‑device LLM, prompts it with the current canonical step as ground truth, normalizes its output, and validates that the response did not change the meaning of the step.
- UI layer — Jetpack Compose, designed for a stressed user: large tap targets, single primary action per screen, persistent "Call Emergency Services" affordance, and minimal cognitive load.
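The fallback contract between the AI orchestration layer and the UI can be sketched roughly like this. The real validator is richer than a token check; `criticalTokens` and the function name are invented here for illustration.

```kotlin
// Illustrative gate between the LLM and the UI: the personalized rephrasing
// is shown only if it preserves every safety-critical token of the canonical
// step. A blank, missing, or drifting output falls back to canonical text,
// so the flow never breaks and meaning is never silently bent.
fun textToShow(
    canonicalStep: String,
    llmOutput: String?,
    criticalTokens: List<String>
): String {
    if (llmOutput.isNullOrBlank()) return canonicalStep
    val preservesMeaning = criticalTokens.all { llmOutput.contains(it, ignoreCase = true) }
    return if (preservesMeaning) llmOutput else canonicalStep
}
```

The design choice to make this a pure function matters: it can be unit-tested exhaustively without a model, a device, or a network.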
We currently ship 20+ emergency decision trees covering CPR / unresponsive patients, choking (adult and infant), severe bleeding, anaphylaxis, drowning, chest pain, stroke, seizures, head injury, fractures, burns, hypothermia, heat illness, hypoglycemia, poisoning, and more — each backed by structured canonical text.
For the model side, we picked the LFM2.5-1.2B variant because its effective memory footprint stays close to that of a 2B‑parameter model, which is about what realistic phones can host alongside a UI process. We tuned prompts so generation is constrained personalization of canonical text rather than free composition.
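As a rough illustration of "constrained personalization" prompting, a template might look like the following. The wording is invented; OASIS's actual tuned prompts are not reproduced here.

```kotlin
// Hypothetical prompt template: the canonical step is passed in as ground
// truth and the model is told to rephrase only — never to add, drop, or
// reorder care instructions. Drifting output is discarded downstream.
fun buildStepPrompt(canonicalStep: String, userContext: String): String = """
    You are relaying one step of an emergency protocol.
    Rephrase the CANONICAL STEP for the USER CONTEXT below.
    Do not add, remove, or reorder any instruction. Keep all numbers exact.

    CANONICAL STEP: $canonicalStep
    USER CONTEXT: $userContext
""".trimIndent()
```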
Challenges we ran into
- Running a local LLM on a real phone. Memory budget, thermals, cold‑start latency, and battery all conspire against you. We had to manage resources very carefully — quantization, lazy loading, aggressive cancellation when the user moves to the next step, and a strict ceiling so the LLM can never starve the UI thread.
- Designing UI/UX that is genuinely human. Designing for a panicking user is the opposite of designing for a power user. Every screen had to assume shaking hands, divided attention, and adrenaline. We rewrote the step screen four times before it stopped feeling like a settings menu and started feeling like a person talking to you.
- Keeping the model honest. Life‑critical instructions cannot be hallucinated. We spent meaningful time on the validator between the LLM and the UI: if the personalized output drifts from the canonical step, we discard it and show the canonical text. Safer is always better than slicker.
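One piece of that resource discipline can be sketched as a simple heap-headroom check before inference starts. The 256 MB default is an invented placeholder, not OASIS's actual budget, and a production guard would also watch thermals and `onTrimMemory` signals.

```kotlin
// Hypothetical guard: only start LLM inference when the heap still has
// enough headroom, so a generation pass can never starve the UI process.
// On a miss, the app simply shows the canonical protocol text instead.
fun hasHeadroomForLlm(minFreeBytes: Long = 256L * 1024 * 1024): Boolean {
    val rt = Runtime.getRuntime()
    val usedBytes = rt.totalMemory() - rt.freeMemory()
    val freeBytes = rt.maxMemory() - usedBytes
    return freeBytes >= minFreeBytes
}
```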
Accomplishments that we're proud of
- A fully offline‑first emergency assistant that still feels modern.
- A hybrid architecture — deterministic state machine + constrained on‑device LLM — that we believe is the right pattern for any high‑stakes consumer AI, not just first aid.
- 20+ protocols wired end‑to‑end against canonical reference text, with clean fallback behavior when AI is degraded.
- A UI a stressed bystander can actually follow — tested with people who had never seen the app before.
- An on‑device LLM that runs on phones people already own, not a $1,500 flagship.
What we learned
- For life‑critical AI, structure beats scale. A small model + strong protocol scaffolding beats a giant model with no guardrails.
- Validation is a feature, not infrastructure. Building the AI ↔ canonical‑text validator changed how we thought about every other feature.
- Designing for stress is its own discipline. The "obvious" interaction in a calm room is not the obvious interaction at 2 a.m. on a mountainside.
- Local LLMs in 2026 are actually viable for serious products, if you respect the device.
What's next for OASIS
- App Store / Play Store release so OASIS is one tap away when people need it.
- Specialized vision models for structured visual tasks — kit detection, step verification, wound classification — feeding back into the same protocol layer.
- Wearable companion (Wear OS) so guidance reaches your wrist when both hands are on the patient.
- Multi‑language protocols for travelers and multilingual households.
- Partnerships with Red Cross chapters, alpine clubs, and rural EMS programs so our canonical protocol layer is reviewed and signed off by the people who actually run rescues.
- A long‑term goal: become the offline layer that any emergency app, in any country, can build on top of.