SilentSOS 🚨✋
Developer Week 2026 – Devpost Submission
When you can’t speak, SilentSOS does.
Inspiration
In emergency situations like kidnapping, domestic violence, or medical distress, victims often cannot shout or call for help. Inspired by real-world silent distress signals and the need for discreet emergency communication, we built SilentSOS — an AI-powered system that detects emergency hand gestures in real time and instantly alerts trusted contacts.
We wanted to create a solution where help can be requested without making a sound.
What it does
SilentSOS is a real-time emergency hand gesture detection and alert system.
- Detects predefined emergency gestures (SOS, distress, kidnap alert)
- Uses computer vision to track hand landmarks
- Automatically sends SMS / WhatsApp alerts via Twilio
- Shares live GPS location
- Captures a screenshot as incident proof
- Works in real time using a standard webcam
It transforms silent gestures into immediate emergency notifications.
How we built it
SilentSOS is built using:
- Python for backend logic
- Mediapipe Hands for real-time hand landmark detection
- OpenCV for webcam processing and image capture
- Twilio API for sending SMS / WhatsApp alerts
- Virtualenv for environment management
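The Twilio alert step can be sketched as below. The function names, message format, and environment-variable names are illustrative assumptions, not necessarily what the repository uses; the Twilio calls themselves (`Client`, `messages.create`) are the real helper-library API. The message bundles the coordinates into a Google Maps link so a recipient can open the location in one tap.

```python
import os

def build_alert_message(gesture: str, lat: float, lon: float) -> str:
    """Compose the SMS body: detected gesture plus a tappable map link.

    (Hypothetical helper -- the repo's actual message format may differ.)
    """
    maps_link = f"https://maps.google.com/?q={lat},{lon}"
    return f"SilentSOS ALERT: '{gesture}' gesture detected! Location: {maps_link}"

def send_alert(body: str) -> None:
    """Send the alert via Twilio; credentials come from environment variables."""
    from twilio.rest import Client  # imported lazily so the pure helper above stays testable
    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    client.messages.create(
        body=body,
        from_=os.environ["TWILIO_FROM_NUMBER"],  # Twilio number (or "whatsapp:+...")
        to=os.environ["ALERT_TO_NUMBER"],        # trusted contact
    )
```

Keeping message composition separate from delivery makes the formatting logic testable without hitting the Twilio API.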
Pipeline:
- Webcam captures live video.
- Mediapipe extracts 21 hand landmarks.
- Gesture logic classifies specific emergency patterns.
- When a gesture is detected:
  - A screenshot is captured
  - The GPS location is fetched
  - Twilio sends the alert message instantly
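The classification step works on the 21 (x, y) landmarks Mediapipe returns per hand. The exact rules in the repository aren't reproduced here; the sketch below shows one plausible pattern, the "signal for help" gesture (thumb folded across the palm, fingers closed over it), written as simple landmark comparisons. Landmark indices follow Mediapipe's hand model (4 = thumb tip, 8/12/16/20 = fingertips, 6/10/14/18 = PIP joints), and image y grows downward.

```python
# Landmarks: a list of 21 (x, y) tuples in normalized image coordinates,
# as produced by Mediapipe Hands (y increases toward the bottom of the frame).
THUMB_TIP, INDEX_TIP, MIDDLE_TIP, RING_TIP, PINKY_TIP = 4, 8, 12, 16, 20
INDEX_PIP, MIDDLE_PIP, RING_PIP, PINKY_PIP = 6, 10, 14, 18

def fingers_folded(lm) -> bool:
    """True when all four fingertips sit below their PIP joints (closed fist)."""
    pairs = [(INDEX_TIP, INDEX_PIP), (MIDDLE_TIP, MIDDLE_PIP),
             (RING_TIP, RING_PIP), (PINKY_TIP, PINKY_PIP)]
    return all(lm[tip][1] > lm[pip][1] for tip, pip in pairs)

def thumb_tucked(lm) -> bool:
    """True when the thumb tip lies horizontally inside the index-pinky span."""
    lo, hi = sorted([lm[INDEX_PIP][0], lm[PINKY_PIP][0]])
    return lo < lm[THUMB_TIP][0] < hi

def classify(lm):
    """Map one frame's landmarks to an emergency gesture label, or None."""
    if fingers_folded(lm) and thumb_tucked(lm):
        return "SOS"
    return None
```

In the live loop, `classify` would be called on each frame's `hand_landmarks` from Mediapipe, and a non-`None` result would trigger the screenshot, GPS lookup, and Twilio alert.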
Challenges we ran into
- Reducing false positives in gesture detection
- Handling real-time performance without lag
- Securely managing Twilio credentials using .env
- Ensuring reliable message delivery
- Designing gestures that are easy to perform but distinct enough for detection
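One simple way to tackle the false-positive problem (the approach below is illustrative, not necessarily what the project ships) is to require the same gesture across N consecutive frames before firing, plus a cooldown so one held gesture doesn't trigger a flood of alerts:

```python
class GestureDebouncer:
    """Confirm a gesture only after `required` consecutive frames agree,
    then pause for `cooldown_frames` so one held gesture fires once."""

    def __init__(self, required: int = 10, cooldown_frames: int = 90):
        self.required = required              # ~0.3 s at 30 fps
        self.cooldown_frames = cooldown_frames
        self.streak = 0
        self.last_label = None
        self.cooldown = 0

    def update(self, label):
        """Feed one frame's classification; return the label only when confirmed."""
        if self.cooldown > 0:
            self.cooldown -= 1
            return None
        if label is not None and label == self.last_label:
            self.streak += 1
        else:
            self.streak = 1 if label is not None else 0
        self.last_label = label
        if label is not None and self.streak >= self.required:
            self.streak = 0
            self.cooldown = self.cooldown_frames
            return label
        return None
```

A single noisy frame (a hand passing through the pose) then never reaches Twilio, while a deliberately held gesture is confirmed within a fraction of a second.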
Accomplishments that we're proud of
- Successfully built a working real-time emergency detection system
- Integrated gesture recognition + live alert automation
- Achieved low-latency, real-time detection
- Built a socially impactful AI solution
- Designed a system that can scale to mobile and IoT devices
What we learned
- Practical implementation of computer vision using Mediapipe
- Real-time ML integration with external APIs
- Handling environment variables securely
- Building AI systems with real-world social impact
- Importance of user-centric design in emergency tech
What's next for SilentSOS
- Add voice-based emergency detection
- Improve gesture classification using ML models
- Deploy as a mobile application (Flutter + TensorFlow Lite)
- Smartwatch integration
- Cloud dashboard for monitoring alerts
- Expand gesture library for elderly & differently-abled users
🔹 Features
- Detects emergency gestures using Mediapipe Hands
- Sends SMS / WhatsApp alerts with live GPS location using Twilio
- Real-time screenshot capture for incident proof
- Lightweight and easy to deploy
🛠 Tech Stack
- Python
- Mediapipe
- OpenCV
- Twilio
- Virtualenv
⚙️ Setup
```bash
git clone https://github.com/Harinath333/SilentSOS.git
cd help_me_alert_system
pip install -r requirements.txt
```
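Twilio credentials belong in a `.env` file rather than in source. The variable names below are assumed for illustration; match them to whatever the code actually reads:

```ini
# .env -- keep out of version control; variable names are illustrative
TWILIO_ACCOUNT_SID=ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TWILIO_AUTH_TOKEN=your_auth_token
TWILIO_FROM_NUMBER=+1xxxxxxxxxx
ALERT_TO_NUMBER=+91xxxxxxxxxx
```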