Inspiration
Disasters shatter connectivity exactly when it is most needed. Local responders still require triage checklists, resource plans, and transparent SITREPs—without the internet. Open models + LM Studio make offline, private AI practical on commodity laptops.
What it does
Runs gpt-oss-20b locally through LM Studio to power an offline-first agent:
- Streaming triage guidance with [Sx] citations drawn from on-device documents.
- Fair resource-allocation plans across multiple sites.
- One-click PDF exports (Triage Checklist, SITREP) for field use.
- A "Mesh" simulator panel that previews peer-to-peer sync (bridgeable to LoRa/WebRTC).
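A minimal sketch of how a client might consume LM Studio's OpenAI-compatible streaming endpoint over SSE (the base URL, model name, and helper names here are illustrative assumptions, not our exact code):

```python
import json
import urllib.request

# Assumption: LM Studio's local server, which serves an OpenAI-compatible
# API (http://127.0.0.1:1234/v1 is its usual default).
BASE_URL = "http://127.0.0.1:1234/v1"
MODEL = "gpt-oss-20b"


def parse_sse_line(line: str):
    """Extract the text delta from one SSE data line, or None if it carries none."""
    line = line.strip()
    if not line.startswith("data: ") or line == "data: [DONE]":
        return None
    chunk = json.loads(line[len("data: "):])
    return chunk["choices"][0]["delta"].get("content")


def stream_chat(prompt: str):
    """Yield text chunks from a streaming chat completion."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # the server emits one SSE "data:" line per chunk
            delta = parse_sse_line(raw.decode())
            if delta:
                yield delta
```

Keeping the SSE parsing in its own small function made it easy to unit-test the de-duplication and streaming logic without a live model.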
How we built it
- Backend: FastAPI + Uvicorn making OpenAI-compatible calls to LM Studio; SSE streaming; robust model auto-resolution; health/config endpoints.
- RAG: FAISS (faiss-cpu) + SentenceTransformers (all-MiniLM-L6-v2); chunking; inline [Sx] citations; PDF ingestion with pypdf.
- Frontend: Vite + React + TypeScript; dark, minimalist UI; streaming de-dup; live header status; Knowledge Pack upload; Ops Board; export buttons.
- PDFs: ReportLab templates for triage and SITREP.
- Quality: pytest suite (health, chat, streaming, print); Windows CI workflow.
- UX polish: custom dark scrollbars, background-clip fixes, and fallback text when the model is down.
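The chunking and [Sx] citation plumbing behind the RAG step can be sketched roughly as follows (a simplified illustration; the real pipeline embeds these chunks with all-MiniLM-L6-v2 and retrieves them via FAISS, and these function names are assumptions):

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks for embedding.

    Overlap keeps sentences that straddle a boundary retrievable from
    either side.
    """
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks


def label_sources(chunks: list[str]) -> dict[str, str]:
    """Assign [Sx] citation tags so streamed answers can point back to
    the exact on-device passage they were grounded in."""
    return {f"S{i + 1}": chunk for i, chunk in enumerate(chunks)}
```

The retrieved chunks are injected into the prompt with their `[Sx]` tags, and the model is instructed to cite those tags inline, which is what makes the triage guidance auditable offline.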
Challenges we ran into
- LM Studio model-ID mismatches → added /models probing and auto-resolution.
- Windows-native dependencies (FAISS) → pinned faiss-cpu and adjusted setup.
- Token repetition in long outputs → lowered temperature, added penalties, and de-duped on both server and client.
- Startup latency from embeddings → lazy-loaded the SentenceTransformer.
- CORS errors and 502s in dev → tighter 127.0.0.1 binding and health checks.
- Visual bugs (white scroll corners) → global CSS and layout adjustments.
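The model auto-resolution fix boils down to comparing the configured name against whatever the `/models` endpoint actually reports. A sketch of that matching logic (the function name and example IDs are illustrative assumptions):

```python
def resolve_model(requested: str, available: list[str]) -> str:
    """Pick the served model ID that best matches the configured name.

    LM Studio can expose a model under a longer ID than the one in our
    config (e.g. a publisher prefix or quantization suffix), so an exact
    match sometimes fails where a substring match succeeds.
    """
    if requested in available:
        return requested
    # Fall back to case-insensitive substring match, e.g. "gpt-oss-20b"
    # inside "openai/gpt-oss-20b".
    for model_id in available:
        if requested.lower() in model_id.lower():
            return model_id
    raise LookupError(f"No served model matches {requested!r}")
```

At startup the backend probes `/models`, resolves the ID once, and caches it; the health endpoint reports the resolved name so mismatches surface immediately instead of as opaque 404s mid-chat.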
Accomplishments that we're proud of
- A completely local, offline agent with streaming chat, RAG, and printable outputs.
- End-to-end reliability: health checks, graceful fallbacks, tests, and CI.
- Simple submission docs: README quickstart, SUBMISSION.md, .env.example.
What we learned
Open models + LM Studio make private, resilient AI feasible in crisis environments. Small engineering touches (stream parsing, de-dup, model resolution) strongly shape perceived quality. Windows-first packaging matters for accessibility in field settings.
What's next for Lifeline Mesh — Local Disaster Response Agent
- Real mesh transport: WebRTC data channels and a low-bandwidth LoRa bridge.
- Voice loop: on-device ASR/TTS for eyes-free use.
- Stricter citation modes and red-team prompts for safety.
- More printable templates (resource manifests, handoff forms).
- PWA packaging and lightweight device profiles; optional fine-grained domain variants.
Built With
- faiss
- fastapi
- github-actions
- gpt-oss-20b
- lm-studio
- node.js
- powershell
- pytest
- python
- react
- reportlab
- sentencetransformers
- typescript
- vite