Inspiration
Modern security systems force a tradeoff between simplicity and intelligence. Motion sensors trigger constant false alarms, while always-on cloud-based vision systems are expensive and unreliable in poor lighting. We wanted to rethink this model by building a system that thinks before it escalates: lightweight edge hardware handles constant monitoring, and powerful cloud vision is invoked only when it's actually needed.
What it does
Our project is an edge-first intelligent security system that combines an Arduino-based edge device with a cloud-based computer vision service. The Arduino continuously monitors activity using low-power sensing and decides when a situation is suspicious enough to escalate. When it does, it triggers our cloud vision service.

At this point, the system runs a YOLO-based detection pipeline that processes the incoming camera frame and tracks objects in real time. Rather than treating every detection equally, the code filters results by class and confidence, focusing only on people, animals, and vehicles. For each frame, it evaluates overall image quality by measuring brightness and sharpness, then separately checks for low-light conditions that could reduce detection reliability. These signals are combined with detection confidence to determine how trustworthy the result actually is.

From there, the system assigns a threat score based on what was detected. Using this information, the service decides whether the event can be handled locally, needs cloud verification, or should be escalated immediately. Each incident is logged as a structured report that includes a risk score, a confidence value, a summary, and a snapshot of the frame with bounding boxes overlaid. To avoid spamming alerts, the system enforces a cooldown period between incidents, ensuring that only meaningful events are reported.
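The quality-and-escalation logic above can be sketched in a few lines. This is an illustrative NumPy-only approximation, not our exact pipeline: the thresholds, class list, risk weights, and function names are all assumptions, and in the real service the detections come from YOLO and the sharpness check uses an OpenCV Laplacian.

```python
import numpy as np

# Hypothetical class filter and per-class risk weights (assumed values).
WATCHED_CLASSES = {"person", "dog", "cat", "car", "truck"}
CLASS_RISK = {"person": 60, "car": 30, "truck": 30, "dog": 10, "cat": 10}

def frame_quality(gray: np.ndarray) -> dict:
    """Score a grayscale frame on brightness and sharpness."""
    brightness = float(gray.mean())  # 0..255
    # Sharpness proxy: variance of a 4-neighbour Laplacian.
    # A blurry frame has little high-frequency detail, so variance is low.
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    sharpness = float(lap.var())
    low_light = brightness < 60  # assumed low-light threshold
    return {"brightness": brightness, "sharpness": sharpness,
            "low_light": low_light}

def assess(detections: list, gray: np.ndarray) -> dict:
    """Filter detections, fold in image quality, and pick an action."""
    q = frame_quality(gray.astype(np.float32))
    # Keep only watched classes above a minimum confidence.
    kept = [d for d in detections
            if d["cls"] in WATCHED_CLASSES and d["conf"] >= 0.4]
    if not kept:
        return {"action": "handle_locally", "risk": 0, **q}
    risk = max(CLASS_RISK.get(d["cls"], 0) for d in kept)
    conf = max(d["conf"] for d in kept)
    if q["low_light"]:
        conf *= 0.7  # distrust detections made in poor lighting
    if risk >= 50 and conf >= 0.6:
        action = "escalate"
    elif conf >= 0.35:
        action = "cloud_verify"
    else:
        action = "handle_locally"
    return {"action": action, "risk": risk,
            "confidence": round(conf, 2), **q}
```

A person detected with high confidence in a well-lit frame would escalate immediately, while the same detection in a dark frame would have its confidence discounted and may only be queued for verification.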
How we built it
The edge layer is powered by an Arduino device responsible for motion detection, trigger logic, and escalation decisions. This ensures low latency, low power consumption, and always-on reliability. The cloud layer is implemented in Python using YOLO for real-time object detection and tracking. The system evaluates detection confidence, image quality, and low-light conditions to determine whether a situation should be handled locally, verified, or escalated further. Incidents are packaged into structured payloads with risk scores, summaries, and base64-encoded snapshots and sent to a backend API for logging and review. This separation of responsibilities allows the system to scale efficiently while remaining cost-aware and responsive.
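The incident payload described above can be sketched with the standard library alone. The field names and structure here are assumptions for illustration, not our backend's actual API contract; in the real service the JPEG bytes come from the annotated frame and the payload is POSTed to the logging API.

```python
import base64
import json
import time

def build_incident_payload(jpeg_bytes: bytes, risk: int,
                           confidence: float, summary: str) -> str:
    """Package one incident as a JSON string with a base64 snapshot.

    Field names are hypothetical; the real schema may differ.
    """
    payload = {
        "timestamp": time.time(),
        "risk_score": risk,
        "confidence": confidence,
        "summary": summary,
        # Base64 keeps the binary JPEG safe inside a JSON body.
        "snapshot_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
    }
    return json.dumps(payload)
```

Encoding the snapshot into the payload keeps each incident self-contained: the backend can store, review, and replay the evidence without a separate image upload step.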
Challenges we ran into
- Designing a clean separation between edge decision-making and cloud intelligence
- Handling unreliable lighting conditions that reduce model confidence
- Balancing detection sensitivity with false-positive reduction
Accomplishments that we're proud of
- Building a true edge-to-cloud escalation pipeline, not just a vision demo
- Implementing explainable threat scoring instead of binary alerts
- Making the system robust to low-light and low-quality images
- Designing a scalable architecture where one cloud service supports many edge devices
- Producing structured, evidence-backed incident reports suitable for real-world use
What we learned
- Edge-first architectures dramatically improve scalability and cost efficiency
- Image quality and environmental context are just as important as raw detection accuracy
- Explainable heuristics can make AI systems more trustworthy and practical
- Building resilient systems requires planning for uncertainty, not just ideal conditions
What's next for SentinelQ
- Full multi-camera support
- Integration with smart locks that engage automatically on escalation
- Integration with real estate systems to track safety scores across housing locations
Built With
- arduino
- nextjs
- numpy
- opencv
- pil
- python
- rest
- tailwind
- typescript
- yolo
