Inspiration
By the time danger looks obvious, it’s too late. Campus emergency response systems across the nation rely on human reporting, which is unreliable and often happens after the fact. That delay in reaching help can be the difference between life and death. AngelBox is a proactive threat detection system that constantly scans for threats using computer vision and responds to emergencies in real time.
What it does
AngelBoxes are standalone modules or add-ons to existing emergency call boxes, transforming a call box from a single point of contact into coverage of a wider area. Edge compute modules analyze surveillance video to detect threats to human safety in real time and report them to a centralized dispatch system. These incident reports appear in AngelDash, enabling authorities to respond accordingly.
How we built it
AngelBox combines a wide-angle camera with an NVIDIA Jetson Nano, applying a modified YOLOv8 model for real-time pose estimation. Detections are streamed live to the backend server over a WebSocket connection.
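As a rough illustration of the edge pipeline, here is a minimal sketch of running pose estimation on camera frames and streaming the keypoints out over a WebSocket. It assumes the stock `ultralytics` YOLOv8 pose weights and the `websockets` library; the endpoint URL and payload schema are illustrative, not our actual API.

```python
# Edge-side sketch: YOLOv8 pose estimation on camera frames, keypoints
# streamed to the backend over a WebSocket connection.
import asyncio
import json

import cv2
import websockets
from ultralytics import YOLO

BACKEND_WS = "wss://example-backend/detections"  # hypothetical endpoint
model = YOLO("yolov8n-pose.pt")                  # stock pose weights as a stand-in

async def stream_detections():
    cap = cv2.VideoCapture(0)  # wide-angle USB camera
    async with websockets.connect(BACKEND_WS) as ws:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = model(frame, verbose=False)
            keypoints = results[0].keypoints
            if keypoints is not None:
                payload = {
                    "keypoints": keypoints.xy.tolist(),  # per-person (x, y) landmarks
                    "conf": keypoints.conf.tolist() if keypoints.conf is not None else None,
                }
                await ws.send(json.dumps(payload))

asyncio.run(stream_detections())
```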
AngelDash uses React paired with Vite and FastAPI for a faster, leaner development experience, with the interface designed in Figma Make.
The backend uses PostgreSQL hosted on Supabase for persistent data storage and managed database operations, with a Cloudflare reverse proxy providing request routing, TLS termination, and basic edge protection.
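For context, here is a minimal sketch of what the ingestion side could look like, assuming a FastAPI WebSocket endpoint and the `supabase-py` client; the table name, columns, and environment variable names are assumptions rather than our actual schema, and disconnect handling is omitted for brevity.

```python
# Backend sketch: accept incident reports from edge nodes over a WebSocket
# and persist them to the Supabase-hosted Postgres table that AngelDash reads.
import json
import os

from fastapi import FastAPI, WebSocket
from supabase import create_client

app = FastAPI()
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

@app.websocket("/detections")
async def detections(ws: WebSocket):
    await ws.accept()
    while True:
        message = json.loads(await ws.receive_text())
        # Persist each reported incident so the dashboard can query and display it.
        supabase.table("incidents").insert({
            "node_id": message.get("node_id"),
            "keypoints": message.get("keypoints"),
        }).execute()
```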
The physical prototype was designed in Autodesk Inventor and 3D-printed on a Bambu Lab P1S.
Challenges we ran into
Model fine-tuning
Our original YOLO (You Only Look Once) model with pose and gesture tracking could not distinguish exaggerated arm movement during ordinary conversation and expression from genuinely threatening motion. To address this, we project arm velocity vectors toward the target, which separates punching and striking from waving, though edge cases such as sweeping arm blocks still get penalized by gesture suppression. A sketch of the velocity-projection idea follows below.
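The sketch below shows the core of that idea: estimate wrist velocity between frames and project it onto the direction toward a nearby person, so motion toward the target scores high while lateral waving scores low. The threshold, frame interval, and helper function are illustrative assumptions, not our production detector.

```python
# Velocity-projection sketch: score wrist motion by how fast it moves toward a target.
import numpy as np

STRIKE_THRESHOLD = 300.0  # pixels/second toward the target; tuned empirically (assumed value)

def strike_score(wrist_prev, wrist_curr, target_center, dt):
    """Project wrist velocity onto the unit vector pointing at the target."""
    velocity = (np.asarray(wrist_curr, dtype=float) - np.asarray(wrist_prev, dtype=float)) / dt
    to_target = np.asarray(target_center, dtype=float) - np.asarray(wrist_curr, dtype=float)
    norm = np.linalg.norm(to_target)
    if norm == 0:
        return 0.0
    return float(np.dot(velocity, to_target / norm))

# Example: wrist moves 40 px toward the target in 0.1 s -> ~400 px/s, flagged as a strike.
score = strike_score((100, 200), (140, 200), (300, 200), dt=0.1)
is_strike = score > STRIKE_THRESHOLD
```

Waving produces velocity mostly perpendicular to the target direction, so its projection stays near zero, which is what lets this separation work in the common case.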
Jetson Nano EOL
The Jetson Nano 2GB developer kit was the only edge computer we had, but it had reached its end-of-life and was no longer supported by NVIDIA. That meant the newer USB Wi-Fi adapter we had wasn't supported, so the board couldn't connect to our backend endpoint.
What we learned
- David: "I learned about the complexities of integrating theoretical concepts into real world use cases. For instance, building the directional velocity system that tracked wrist movement towards YOLO pose estimation landmarks was a creative solution I'm really proud of."
- Kevin: "This was my first time using Figma Make and, as a computer engineer focused on low-level hardware, it really opened my eyes to how accessible frontend and web design has become."
- Jasmine: "I'm used to working with large datasets, but integrating our Supabase SQL database with the edge compute modules and our frontend gave me insight on the intricacies of how data is handled in the real world."
- Shivan: "When you have no familiarity, trial and error is your best friend: it bridges the gap, and these foreign concepts become less vague as time goes on."
What's next for AngelBox
- Sleeker, more streamlined physical design.
- More accurate classification model.
- Hardening security of edge device data.
- Multi-camera support, targeting scalability for hundreds of nodes.
- Reduce false positives through reinforcement learning.
Built With
- 3dprinting
- fastapi
- figma
- jetson-nano
- kaggle
- postgresql
- python
- react
- supabase
- vite