Safe Signal
Inspiration
For individuals living with social anxiety, a history of trauma, or neurodivergent shutdown/freeze patterns, the world can feel intensely overwhelming. Often, conscious awareness of panic or dissociation arrives too late, after the nervous system has already hijacked the body. We were inspired by the emerging science of neuroception, the body's pre-conscious ability to sense safety versus danger. While some researchers count more than 33 distinct human senses, neuroception is the invisible alarm system that dictates our emotional and physical state.
We wanted to build a tool that makes this invisible sense visible. If a young professional or student could perceive their nervous system shifting from "Safe" to "Guarded" before hitting "Overloaded," they could intervene early and reclaim their mental and emotional wellbeing.
What it does
SafeSignal is a speculative, near-future wellness platform that functions as a wearable extension of your nervous system. By connecting to popular BLE heart rate monitors (like Apple Watch, Garmin, or Fitbit), it translates biometric data into a clear, three-state neuroception model: Safe, Guarded, and Overloaded.
When SafeSignal detects rising threat responses, such as shortening breaths or spikes in skin conductance, it intervenes silently. The app provides grounding haptic pulses, suppresses non-essential digital alerts, suggests quieter physical routes, and offers a 60-second guided micro-reset. It actively shapes the sensory environment to prevent a physical and emotional crash.
How we built it
We built SafeSignal as a full-stack web application designed for maximum privacy, responsiveness, and scale.
- Frontend: React 19, TypeScript, and Vite power a seamless, glassmorphic UI, with state managed via Zustand. We used the Web Bluetooth API to interface directly with wearables that expose the standard BLE Heart Rate profile, so raw biometric data never passes through third-party servers.
- Backend: Node.js, Express, and Prisma ORM backed by SQLite handle secure session storage. We implemented robust security with JWT authentication, Helmet, and bcrypt.
- AI & Analysis: For end-of-day reflection, we integrated the Google Gemini API to analyze a user's chronological "window of tolerance" data, generating personalized insights and behavioral recommendations.
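To give a sense of the timeline-analysis approach, here is a minimal sketch of how a day's window-of-tolerance data could be assembled into a prompt before being sent to Gemini. The `TimelineEntry` shape and the prompt wording are illustrative assumptions, not SafeSignal's actual schema:

```typescript
// Hypothetical shape of one timeline entry; SafeSignal's real schema may differ.
interface TimelineEntry {
  time: string;                               // e.g. "14:30"
  state: "Safe" | "Guarded" | "Overloaded";
  score: number;                              // threat score in [0, 1]
}

// Builds the reflection prompt from the day's chronological state data.
function buildReflectionPrompt(timeline: TimelineEntry[]): string {
  const lines = timeline
    .map((e) => `${e.time} - ${e.state} (score ${e.score.toFixed(2)})`)
    .join("\n");
  return [
    "You are a gentle wellness coach analyzing a user's window-of-tolerance data.",
    "Summarize when they left their window, likely triggers, and one concrete",
    "behavioral recommendation for tomorrow. Today's timeline:",
    lines,
  ].join("\n");
}
```

The resulting string would then be passed to the Gemini API's content-generation endpoint, with the response rendered as the daily reflection card.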
- Neuroception Engine: We built a custom scoring algorithm that aggregates 6 micro-signals. To calculate the final threat score $S$, we used a weighted sum model:
$$ S = \sum_{i=1}^{n} (w_i \cdot x_i) $$
where $x_i$ represents normalized sensor inputs (e.g., breath pace, jaw tension, skin conductance) and $w_i$ represents their physiological reliability weights.
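The weighted sum above can be sketched in a few lines. The signal weights, the normalization, and the three-state thresholds below are illustrative assumptions, not SafeSignal's calibrated values:

```typescript
// One normalized micro-signal x_i with its reliability weight w_i.
type MicroSignal = { value: number; weight: number }; // value in [0, 1]

type NeuroState = "Safe" | "Guarded" | "Overloaded";

// S = sum(w_i * x_i), divided by total weight so S stays in [0, 1].
function threatScore(signals: MicroSignal[]): number {
  const totalWeight = signals.reduce((acc, s) => acc + s.weight, 0);
  const weighted = signals.reduce((acc, s) => acc + s.weight * s.value, 0);
  return totalWeight > 0 ? weighted / totalWeight : 0;
}

// Hypothetical thresholds mapping the score onto the three-state model.
function classify(score: number): NeuroState {
  if (score < 0.35) return "Safe";
  if (score < 0.7) return "Guarded";
  return "Overloaded";
}
```

Normalizing by the total weight keeps the score comparable even when some of the six micro-signals are unavailable and the engine falls back to fewer inputs.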
Challenges we ran into
- Quantifying an Invisible Sense: The biggest challenge was mathematically modeling "neuroception." Since it relies on micro-signals (like vocal flattening or posture collapse), we had to design a robust algorithm that could gracefully fall back to simulated inputs when advanced hardware (like IMUs or continuous facial tracking cameras) wasn't available.
- Web Bluetooth Complexities: Getting the Web Bluetooth API to reliably stream live heart rate and R-R intervals across different browser security contexts required deep diving into GATT profiles and asynchronous data buffering.
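For context on the GATT work, here is a sketch of parsing the standard Bluetooth Heart Rate Measurement characteristic (UUID 0x2A37), the payload a Web Bluetooth notification delivers. The field layout follows the Bluetooth SIG Heart Rate Service specification; the surrounding types are our own illustration:

```typescript
interface HeartRateSample {
  bpm: number;
  rrIntervalsMs: number[]; // R-R intervals, converted from 1/1024-second units
}

// Decodes one Heart Rate Measurement notification per the Bluetooth SIG spec.
function parseHeartRateMeasurement(view: DataView): HeartRateSample {
  const flags = view.getUint8(0);
  const is16Bit = (flags & 0x01) !== 0;   // bit 0: HR value is uint16 (else uint8)
  const hasEnergy = (flags & 0x08) !== 0; // bit 3: energy-expended field present
  const hasRR = (flags & 0x10) !== 0;     // bit 4: RR-interval fields present

  let offset = 1;
  const bpm = is16Bit ? view.getUint16(offset, true) : view.getUint8(offset);
  offset += is16Bit ? 2 : 1;
  if (hasEnergy) offset += 2; // skip the 16-bit energy-expended field

  const rrIntervalsMs: number[] = [];
  if (hasRR) {
    for (; offset + 1 < view.byteLength; offset += 2) {
      // RR values are little-endian uint16 in units of 1/1024 second.
      rrIntervalsMs.push((view.getUint16(offset, true) / 1024) * 1000);
    }
  }
  return { bpm, rrIntervalsMs };
}
```

In the browser, a function like this would run inside the `characteristicvaluechanged` handler after calling `startNotifications()` on the characteristic.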
- Designing for Overload: We had to ensure the app itself wasn't overwhelming. If a user is approaching a panic attack, a flashy, alert-heavy UI is dangerous. We had to design an intervention system that was ambient, calm, and easily muted.
Accomplishments that we're proud of
- The Device Picker & Fallback System: We successfully built a branded, universal wearable picker that dynamically connects to physical smartwatches via Bluetooth, while seamlessly falling back to a realistic data simulator if a user doesn't own a device.
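The fallback idea can be illustrated with a tiny simulator that emits plausible heart-rate samples when no physical device is paired. The baseline, drift, and stress model here are invented for demonstration and are not SafeSignal's actual simulator:

```typescript
// Returns a stateful sample generator; `stress` in [0, 1] raises the baseline
// while damping the breathing-linked sinusoidal variation.
function makeHeartRateSimulator(baselineBpm = 68) {
  let phase = 0;
  return function nextSample(stress: number): { bpm: number; rrMs: number } {
    phase += 0.25;
    const bpm = Math.round(
      baselineBpm + stress * 40 + Math.sin(phase) * (4 - 3 * stress)
    );
    // Derive a matching R-R interval so downstream HRV code sees consistent data.
    return { bpm, rrMs: 60000 / bpm };
  };
}
```

Because the simulator emits the same `{ bpm, rrMs }` shape as the BLE path, the rest of the pipeline cannot tell the two sources apart.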
- Ethical Safeguards: Giving users an "extra sense" comes with huge responsibility. We implemented aggressive, privacy-first safeguards: consent-first sensing (no passive monitoring of others), encrypted on-device storage, an instant "manual override" to stop haptics, and a strict anti-misuse boundary ensuring data can never be accessed by employers or schools.
- Three Concrete Use Cases: SafeSignal demonstrates three real-world scenarios end-to-end: predicting commute overload on a crowded train, detecting vocal flattening during a stressful meeting to suppress notifications, and generating an end-of-day AI reflection on the user's window-of-tolerance.
What we learned
We learned that wellness apps often fail because they demand too much conscious effort from users who are already dysregulated. By tapping directly into the autonomic nervous system via wearables, we discovered how powerful ambient interventions (like a slow haptic pulse or delaying push notifications) can be. We also deepened our understanding of the Gemini API, leveraging it not just as a chatbot, but as an analytical engine for timeline data.
What's next for Safe Signal
The future of SafeSignal involves bringing the other 4 micro-signals out of simulation and into physical reality. We plan to:
- Integrate real-time camera-based jaw tension detection using lightweight, on-device machine learning models.
- Develop a native mobile app (React Native) to maintain background Bluetooth connections more reliably than PWA web constraints allow.
- Build Apple HealthKit & Google Health Connect server-side APIs to ingest historical sleep and HRV data, making the neuroception baseline even more accurate.
- Introduce a Clinical Dashboard where users can explicitly opt-in to share PDF reports of their window-of-tolerance with their therapist or mental health professional.
Built With
- express.js
- gemini
- node.js
- prisma
- react
- sqlite
- tailwindcss
- typescript
- vite
- web-bluetooth-api
- zustand