## Inspiration

Women and marginalized individuals often navigate unsafe environments — from public harassment to domestic tension — without fast, personalized support tools. Most existing resources are:

- Static
- Generic
- Hard to access in urgent moments
- Not tailored to real-world context

We wanted to build something different: a fast, contextual, privacy-respecting AI assistant that helps users think clearly and act safely — in under 60 seconds.

## 🎯 Problem Statement

Women navigating high-risk environments lack immediate, personalized safety-planning tools that are practical, non-judgmental, and accessible. Safety advice is often:

- Overwhelming
- Too generic
- Hidden behind paywalls or logins
- Not trauma-informed

This creates hesitation and inaction during critical moments.

## 👤 Defined User

A woman commuting alone at night, meeting someone for the first time, or navigating escalating tension at home — who needs a clear, actionable safety plan quickly.

## ✅ Success Test

From a clean browser, the user:

1. Opens SHEild (no login)
2. Completes the 4-step form
3. Clicks "Generate My Safety Plan"
4. Receives a structured, context-aware safety plan in under 10 seconds
5. Downloads the PDF successfully

If this works, the demo passes.

## 🤖 What It Does

SHEild uses Goose (Block's open-source agentic AI framework) to:

- Analyze user context
- Assess risk level
- Generate:
  - A pre-event preparation plan
  - During-event de-escalation actions
  - Emergency steps
  - A trusted-contact message template
  - Verified hotline guidance (no fabricated sources)

All advice is trauma-informed and bounded by safety guardrails (see the implementation sketches at the end of this write-up).

## 🧠 How We Built It

**Frontend**

- Next.js
- Accessible UI (WCAG AA contrast)
- Dark mode by default
- No login required
- No personal data stored

**Backend**

- Node.js API
- Goose agent framework
- Risk classification layer (sketched at the end of this write-up)
- Structured JSON output renderer
- PDF generator

**AI Guardrails**

- No weapons
- No vigilante advice
- No illegal suggestions
- Structured output only
- Escalation to a hotline for high-risk scenarios

## 🧪 Proof

- Demo runs from a clean start
- No authentication required
- No external data dependencies
- One-command setup in the README
- Evidence log included
- Risk mitigation documented
- Commit history post-Jan 6, 2026

## ♿ Accessibility

- High-contrast mode toggle
- Large typography
- Simple language
- No flashing elements
- Keyboard navigable
- Captions on the demo video

## ⚠️ Risk & Rigor

**Identified risk:** the AI could generate unsafe or unrealistic advice.

**Mitigation:**

- Structured JSON output only
- Pre-approved action templates
- Policy-enforced boundaries
- No free-form, open-ended advice
- High-risk scenario escalation protocol

**Tradeoff:** less creative AI → more reliable and responsible output.

## 🌱 UN SDG Alignment

- Primary: SDG 5 – Gender Equality
- SDG 16 – Peace, Justice & Strong Institutions

SHEild advances preventative safety empowerment through responsible AI.

## 📈 What's Next

- Localized hotline database
- SMS share integration
- Offline-first mode
- Multilingual support
- Anonymous analytics for impact measurement
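## Implementation Sketches

The sketches below illustrate the ideas described above. They are minimal, hedged examples: the type names, field names, and blocked-term list are assumptions made for illustration, not SHEild's actual schema or policy.

```typescript
// Minimal sketch of a structured safety-plan output plus a guardrail check.
// All names here (SafetyPlan, RiskLevel, blockedTerms) are illustrative
// assumptions, not the project's real schema.
type RiskLevel = "low" | "moderate" | "high";

interface SafetyPlan {
  riskLevel: RiskLevel;
  preEvent: string[];            // pre-event preparation steps
  duringEvent: string[];         // de-escalation actions
  emergencySteps: string[];
  trustedContactMessage: string; // template the user can forward
  hotlines: { name: string; number: string }[]; // verified sources only
}

// Reject output that drifts outside the policy-enforced boundaries
// (no weapons, no vigilante or illegal advice).
const blockedTerms = ["weapon", "retaliate", "confront physically"];

function enforceGuardrails(plan: SafetyPlan): SafetyPlan {
  const allText = [
    ...plan.preEvent,
    ...plan.duringEvent,
    ...plan.emergencySteps,
    plan.trustedContactMessage,
  ]
    .join(" ")
    .toLowerCase();

  if (blockedTerms.some((term) => allText.includes(term))) {
    throw new Error("Guardrail violation: regenerate from approved templates");
  }
  return plan;
}
```

Because the agent must return this structure rather than free-form prose, the renderer and PDF generator stay simple and the guardrail check stays cheap.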
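The risk classification layer and high-risk escalation could look roughly like this. The form fields, thresholds, and escalation wording are hypothetical; the real logic sits behind the Goose agent and is not reproduced here.

```typescript
// Hypothetical risk classification: map the 4-step form answers to a
// risk level, and surface an escalation notice first for high risk.
type RiskLevel = "low" | "moderate" | "high"; // same shape as the previous sketch

interface FormContext {
  situation: "commute" | "first-meeting" | "home-tension";
  aloneAtNight: boolean;
  feelsImmediateThreat: boolean;
}

function classifyRisk(ctx: FormContext): RiskLevel {
  if (ctx.feelsImmediateThreat) return "high";
  if (ctx.situation === "home-tension" || ctx.aloneAtNight) return "moderate";
  return "low";
}

function escalationNotice(level: RiskLevel): string | null {
  // High-risk scenarios get an escalation notice before any other advice.
  // The wording is a placeholder, not SHEild's actual copy.
  return level === "high"
    ? "If you are in immediate danger, contact your local emergency number now."
    : null;
}

// Example: a first meeting where an immediate threat is reported escalates.
const level = classifyRisk({
  situation: "first-meeting",
  aloneAtNight: false,
  feelsImmediateThreat: true,
});
console.log(level, escalationNotice(level)); // "high" plus the escalation notice
```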
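Finally, a sketch of the PDF download step. The write-up only states that a PDF generator exists; pdf-lib is used here as an assumed stand-in and may not be the library SHEild actually ships.

```typescript
import { PDFDocument, StandardFonts } from "pdf-lib";

// Render a titled set of sections (e.g. "Before", "During", "Emergency")
// into a single-page PDF the browser can download. Layout is deliberately
// simple; pagination and styling are out of scope for this sketch.
async function planToPdf(
  title: string,
  sections: Record<string, string[]>
): Promise<Uint8Array> {
  const doc = await PDFDocument.create();
  const page = doc.addPage();
  const font = await doc.embedFont(StandardFonts.Helvetica);
  let y = page.getHeight() - 50;

  const write = (text: string, size: number) => {
    page.drawText(text, { x: 50, y, size, font });
    y -= size + 6;
  };

  write(title, 18);
  for (const [heading, items] of Object.entries(sections)) {
    write(heading, 14);
    for (const item of items) write(`- ${item}`, 11);
  }
  return doc.save(); // bytes served to the client as the downloadable plan
}
```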

## Built With

- Next.js
- Node.js
- Goose