Inspiration
In the most dangerous moments, asking for help is what makes the situation worse.
We asked: What if the safety app were invisible? What if your phone could call for help without anyone knowing, not even someone standing right next to you?
1 in 3 women experience physical violence from a partner, and 97% of domestic violence survivors report that their abuser monitors their phone. Existing safety apps fail these users precisely because they are visible: if an abuser finds one, it escalates the danger.
This reframed the problem: Safety ≠ Visibility
And led us to a core idea: What if the safest system is one that cannot be detected even while it’s running?
What it does
SafeHaven is a covert, real-time safety system disguised as an everyday app.
To an observer, it is just a weather app. Underneath, it is a live, peer-to-peer incident pipeline. https://share.icloud.com/photos/00dPJB5PloK8Py5gRESlRkQgw
The user activates it through natural behaviour: a gesture, a phrase, or nothing at all. When they're in danger, they open the app covertly (Back Tap, Action Button, Control Centre, Lock Screen widget, or "Hey Siri") and say a natural codeword phrase. Three phrases trigger three tiers:
- Tier 1: "I need to check the forecast" → audio stream + GPS
- Tier 2: "What's the temperature outside?" → adds video
- Tier 3: "Red" → everything + emergency alert with "Call Police" button
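The tier mapping above can be sketched as a small matcher. This is an illustrative sketch, not the app's actual code: the codeword phrases come from the list above, but the function and structure names are assumptions.

```javascript
// Sketch: map a recognised transcript to an escalation tier.
// Phrases are taken from the three tiers above; everything else is illustrative.
const TIERS = [
  { tier: 1, phrase: "i need to check the forecast", streams: ["audio", "gps"] },
  { tier: 2, phrase: "what's the temperature outside", streams: ["audio", "gps", "video"] },
  { tier: 3, phrase: "red", streams: ["audio", "gps", "video", "alert"] },
];

// Lowercase and strip punctuation so speech-recognition output matches cleanly.
const normalize = (s) => s.toLowerCase().replace(/[^a-z0-9' ]/g, "").trim();

function classifyPhrase(transcript) {
  const padded = ` ${normalize(transcript)} `;
  // Whole-word match; if several codewords appear, the highest tier wins.
  let match = null;
  for (const t of TIERS) {
    if (padded.includes(` ${t.phrase} `) && (!match || t.tier > match.tier)) match = t;
  }
  return match; // null → no trigger; keep looking like a weather app
}
```

Matching on word boundaries rather than raw substrings matters here: a substring check would fire tier 3 on any word containing "red" (e.g. "scared"), which is exactly the kind of false trigger a covert app cannot afford.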
The victim's screen shows the weather. The trusted contact's browser shows a live dashboard with video, AI audio labels, GPS, incident timeline, and a risk assessment.
If the user cannot act, on-device AI takes over, detecting distress signals and escalating autonomously: Apple's Neural Engine classifies ambient sound via CoreML and auto-triggers the appropriate tier when it detects shouting, screaming, impact sounds, or breaking glass. No user input is needed. The AI acts on the user's behalf.
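The escalation policy behind that auto-trigger can be sketched as follows. In the app this logic runs in Swift against CoreML/SoundAnalysis output; the class labels, confidence threshold, and function names below are assumptions made for the sketch.

```javascript
// Illustrative escalation policy for ambient-sound classification results.
// Labels and the 0.8 threshold are assumed values, not the app's real config.
const DANGER_SOUNDS = { shouting: 2, impact: 2, screaming: 3, glass_breaking: 3 };
const CONFIDENCE_THRESHOLD = 0.8;

function autoTier(detections, currentTier = 0) {
  // detections: [{ label, confidence }] from the on-device sound classifier.
  let tier = currentTier;
  for (const { label, confidence } of detections) {
    const t = DANGER_SOUNDS[label];
    // Only escalate, never de-escalate: a quiet moment mid-incident
    // must not silently downgrade the stream.
    if (t && confidence >= CONFIDENCE_THRESHOLD && t > tier) tier = t;
  }
  return tier;
}
```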
All data streams P2P over Hypercore: no cloud, no server. Every event is cryptographically signed in a tamper-proof append-only log for legal evidence.
How we built it
Sender (iOS): Swift/SwiftUI with interchangeable disguise skins. CoreML SoundAnalysis for ambient danger detection on the Neural Engine. CoreML Vision for person detection and pose estimation on the camera feed. Speech recognition for codeword detection. A Bare Worklet runtime runs Hypercore independently of the UI lifecycle.
The system continues recording truth even if the UI changes, reloads, or disappears.
Data layer (Pear protocol): Hypercore append-only logs synced via the Hyperswarm DHT. Audio/video chunks, AI metadata, GPS, and timeline events all append to the Hypercore and replicate P2P to the receiver. No backend, no signalling server, no cloud database.
Receiver: connects via hyperswarm-web over WebRTC data channels. Replicates the Hypercore log in real time. Renders a live, structured view of the incident.
Challenges we ran into
Browser + Hyperswarm. Browsers can't run raw TCP/UDP sockets. We used hyperswarm-web to tunnel over WebRTC data channels, which required significant debugging.
Latency vs. decentralisation. Streaming over Hypercore adds a 2-3 second delay compared with raw WebRTC. We leaned on near-instant AI metadata sync to keep the dashboard feeling responsive despite the video lag.
Disguise believability. The weather skin needed to be pixel-perfect. If it looks like a cheap imitation, the entire premise collapses.
Accomplishments that we're proud of
Multi-layered resilience
- Voice triggers
- Gesture triggers
- AI auto-detection
Zero infrastructure. No cloud, no backend, no central database of DV evidence that could be breached. Data exists only on the two peers' devices.
Tamper-proof evidence. Hypercore's append-only log means every entry is cryptographically chained — genuinely more credible for legal proceedings than a database export.
What we learned
Decentralisation is a product decision. We didn't choose P2P for performance; we chose it because centralised safety systems are inherently unsafe in this context.
Design for the threat model, not the user flow. Instead of "what does the user want?" we started with "what can the adversary see?" That inversion changed everything.
P2P is harder but worth it for the right use case. Firebase would have been faster. But a central server storing DV evidence is a central point of surveillance and breach. The complexity was justified.
The Neural Engine is underutilised. Apple's SoundAnalysis framework ships pre-trained classifiers for hundreds of sounds on dedicated silicon. It took hours to integrate and gave us the most powerful feature in the app.
SafeHaven isn’t just a safety app.
It’s a system that treats truth as a stream, trust as cryptography, and safety as invisibility.
Built With
- corestore
- hypercore
- hyperswarm
- javascript
- node.js
- react (PWA)
- react-native (Expo)
- react-native-bare-kit
- react-native-webrtc
- typescript
- webrtc
- websocket (ws)