Inspiration


Women face disproportionate safety risks in and around their vehicles: from being followed in parking lots to vandalism, break-ins, and suspicious loitering. Most vehicle security systems are passive and reactive, triggering alarms only after an incident has occurred and offering little context or real-time situational awareness.

We wanted to explore how edge AI and embedded hardware could transform a car into an active, perceptive security system, one that observes its surroundings, reasons about potential threats, and keeps users informed in real time.

Sentryx was inspired by the gap between existing vehicle security systems and the real-world safety concerns many women navigate daily.

What it does


Sentryx is an AI-powered vehicle security system that continuously monitors a car’s surroundings and alerts the user when suspicious activity is detected. It identifies people and unfamiliar faces near the vehicle, detects motion, impacts near the tires, and trunk disturbances, and responds in real time.

A servo-mounted dashcam physically rotates toward detected motion or sound, actively tracking areas of interest instead of passively recording. A mobile app streams live video, sends real-time alerts, and enables location sharing with trusted contacts, giving users immediate awareness and peace of mind.

How we built it


Sentryx integrates computer vision, edge inference, embedded hardware, and a mobile application into a single system:

  • OpenCV’s DNN (cv2.dnn) runs lightweight neural networks for real-time face recognition and person detection directly on the Raspberry Pi
  • Google Gemini performs higher-level reasoning to assess unfamiliar faces and contextualize potential threats
  • Raspberry Pi handles on-device vision processing and live video streaming from the dashcam
  • An ESP32 manages sensor input and IoT communication
  • Motion, light, and impact sensors detect tampering, break-ins, or suspicious proximity
  • A servo motor physically rotates the dashcam toward detected motion or sound, enabling active visual tracking
  • A React Native mobile app streams live video and delivers real-time alerts and location sharing
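As a concrete sketch of the servo-tracking idea above, the snippet below maps a detection’s bounding box to a pan angle so the camera centers on the target. The frame width, servo limits, and the function name are illustrative assumptions, not the actual Sentryx code:

```python
# Sketch: turn a detection's horizontal bounding box into a servo pan
# angle. SERVO limits, frame width, and bbox_to_pan_angle are assumed
# values/names for illustration only.

SERVO_MIN_DEG = 0      # assumed servo range
SERVO_MAX_DEG = 180
FRAME_WIDTH = 640      # assumed camera frame width in pixels

def bbox_to_pan_angle(x, w, frame_width=FRAME_WIDTH):
    """Convert a detection box (left edge x, width w) to a pan angle.

    The box center is normalized to [0, 1] across the frame, then scaled
    onto the servo's angular range so the camera points at the target.
    """
    center = x + w / 2.0
    normalized = min(max(center / frame_width, 0.0), 1.0)
    return SERVO_MIN_DEG + normalized * (SERVO_MAX_DEG - SERVO_MIN_DEG)

# A person whose box is centered in the frame maps to the servo midpoint.
mid = bbox_to_pan_angle(x=260, w=120)   # box center at pixel 320 -> 90.0
left = bbox_to_pan_angle(x=80, w=120)   # box center at pixel 140 -> left of center
```

On real hardware the resulting angle would be written to the servo driver (e.g., a PWM library), which is omitted here.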

Sensor signals and vision-based detections are combined to trigger alerts only when activity appears genuinely suspicious, prioritizing responsiveness without overwhelming the user.
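One way to picture that fusion step is a simple scoring rule: sensor events and vision detections each contribute to a suspicion score, and an alert fires only past a threshold. The event names, weights, and threshold below are illustrative assumptions, not Sentryx’s actual rules:

```python
# Sketch: fuse sensor events with vision detections so alerts only fire
# when activity looks genuinely suspicious. Weights and the threshold
# are assumed values for illustration.

SENSOR_WEIGHTS = {"motion": 1, "light": 1, "impact": 3}
ALERT_THRESHOLD = 3  # assumed minimum combined score before notifying

def should_alert(sensor_events, person_detected, face_known):
    """Return True when the combined evidence crosses the alert threshold.

    A recognized (enrolled) face suppresses the alert entirely; an
    unfamiliar person near the car adds strongly to the score.
    """
    if face_known:
        return False  # e.g., the owner approaching their own car
    score = sum(SENSOR_WEIGHTS.get(event, 0) for event in sensor_events)
    if person_detected:
        score += 2  # unfamiliar person raises suspicion
    return score >= ALERT_THRESHOLD

# Motion alone with an empty frame stays quiet; motion plus an
# unfamiliar person, or a hard impact, triggers an alert.
quiet = should_alert(["motion"], person_detected=False, face_known=False)
alert = should_alert(["motion"], person_detected=True, face_known=False)
```

The point of the sketch is the shape of the decision, not the exact numbers: weak signals alone stay silent, while corroborating evidence escalates.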

How it works (End-to-End Trigger Flow)


Sentryx follows an event-driven workflow, combining sensor signals, on-device vision inference, and real-time alerts:

  1. Initial Trigger: Motion, light, or impact sensors connected to the ESP32 detect activity near the vehicle (e.g., proximity, vibration, or tampering).
  2. Physical Response: If triggered, a servo motor physically rotates the dashcam toward the source of motion or sound, enabling active visual tracking and clearer situational capture.
  3. Edge Vision Activation: The Raspberry Pi activates the dashcam feed and processes frames in real time using OpenCV’s DNN for person detection and face recognition directly on-device.
  4. Threat Contextualization: Vision outputs (e.g., presence of a person or unfamiliar face) are combined with sensor events and passed to Google Gemini for higher-level reasoning to help contextualize whether the activity may be suspicious.
  5. User Alert & Awareness: The mobile app delivers a real-time alert via SMS and push notification, streams live video from the Raspberry Pi, saves the incident to the History log as evidence, and lets the user share their live GPS location with trusted contacts or call 911 for added safety.
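The five steps above can be sketched as a single event-driven handler. The function is a stand-in for the real ESP32/Raspberry Pi/Gemini integrations; the event fields and log labels are hypothetical:

```python
# Sketch of the trigger flow as one event-driven pass. Field names
# ("type", "direction", "person") and log labels are illustrative
# stand-ins for the actual hardware and API calls.

def handle_event(event):
    """Run one pass of the trigger pipeline for a single sensor event."""
    log = []
    log.append(f"trigger:{event['type']}")        # 1. sensor trigger (ESP32)
    log.append(f"servo:{event['direction']}")     # 2. rotate dashcam toward source
    person = bool(event.get("person"))
    log.append("vision:person" if person else "vision:clear")  # 3. edge inference
    if person:
        log.append("gemini:assess")               # 4. contextual reasoning
        log.append("app:alert+stream+history")    # 5. notify and record
    return log

# A motion event with a person in frame walks all five steps;
# a light-only event with a clear frame stops after vision.
full = handle_event({"type": "motion", "direction": "left", "person": True})
short = handle_event({"type": "light", "direction": "right"})
```

Note the early exit after step 3 when no person is found, which is how the pipeline avoids waking the cloud path (and the user) for benign triggers.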

This pipeline prioritizes low latency, on-device inference, and rapid user notification without relying on constant cloud connectivity.

Challenges we ran into


One of our biggest challenges was integrating real-time computer vision with physical hardware under tight latency constraints. Coordinating sensor data from the ESP32 with on-device inference and live video streaming on the Raspberry Pi required careful synchronization.

Actuating the servo to rotate the camera toward motion or sound, without interrupting the video pipeline, was non-trivial. We also spent significant time tuning detection thresholds to reduce false positives while still erring on the side of user safety.
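A small debounce captures the kind of threshold tuning described above: require several positive frames within a short window before raising an alert, trading a little latency for fewer false positives. The window sizes are illustrative assumptions:

```python
# Sketch: debounce per-frame detections to cut false positives. Window
# and required counts are assumed tuning values, not Sentryx's actual
# thresholds.

from collections import deque

class Debouncer:
    """Fire only when `required` of the last `window` frames are positive."""

    def __init__(self, window=5, required=3):
        self.frames = deque(maxlen=window)
        self.required = required

    def update(self, detected):
        """Record one frame's detection result; return whether to alert."""
        self.frames.append(bool(detected))
        return sum(self.frames) >= self.required

d = Debouncer(window=5, required=3)
# A single noisy frame never fires; sustained detection does.
results = [d.update(x) for x in [True, False, True, True]]
```

Raising `required` suppresses more noise but delays the alert by extra frames, which is exactly the accuracy-versus-latency trade-off we kept tuning.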

Accomplishments that we’re proud of


We built a fully functional end-to-end edge AI system that combines perception, actuation, and mobile interaction. Sentryx can detect people and unfamiliar faces near a vehicle, physically orient a camera toward suspicious activity, and stream live video with real-time alerts to a mobile app.

We’re especially proud of successfully running on-device computer vision using OpenCV DNN alongside multiple hardware sensors in a system that feels responsive and practical for real-world use.

What we learned


Through this project, we gained hands-on experience integrating AI-powered perception with embedded systems and managing real-time constraints. We learned how to deploy lightweight neural networks using OpenCV’s DNN module, perform higher-level reasoning with Google Gemini, and coordinate multiple sensors with live video streaming on resource-constrained devices.

We also learned how critical it is to balance detection accuracy, latency, and user trust when building safety-focused systems.

What’s next for Sentryx


Next, we plan to improve detection accuracy and reduce false positives through better sensor fusion and model tuning. We’d like to enhance the mobile app experience, explore secure cloud storage for recorded clips, and investigate ways to make Sentryx more deployable at scale.

Long-term, we envision Sentryx as a proactive, adaptive vehicle safety platform that responds intelligently to its environment and evolves with user needs.

Built With

  • ESP32
  • Google Gemini
  • OpenCV
  • Raspberry Pi
  • React Native