Both of us were intrigued and excited about the possibility of using computer vision in our hack. Computer vision has never been more accessible, and we intended to take advantage of some of the advanced techniques that can run even on modest machines. We also wanted to work on a project with the potential to make a positive impact. We went on to build a system that may buy civilians and authorities only a few minutes, but those minutes could be responsible for saving many lives.

What it does

The goal of Guardyn is real-time threat analysis: specifically, detecting and reporting active shooters as soon as their presence is made known. It processes and recognizes both people and objects we deem dangerous (for testing purposes, a water bottle), all in real time. Once Guardyn detects a deadly weapon, it immediately sends alerts to authorities and civilians, making sure they are aware of the situation.
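The detect-then-alert flow above can be sketched as a simple filter over the detector's output. This is a hypothetical illustration, not Guardyn's actual code: the `Detection` class, label names, and confidence threshold are all assumptions.

```python
from dataclasses import dataclass

# For the demo, a water bottle stood in for a real weapon.
DANGEROUS_LABELS = {"water bottle"}
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; tune against false positives


@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x, y, w, h) in pixels


def find_threats(detections):
    """Return only the detections that should trigger an alert."""
    return [
        d for d in detections
        if d.label in DANGEROUS_LABELS and d.confidence >= CONFIDENCE_THRESHOLD
    ]


frame_detections = [
    Detection("person", 0.98, (10, 20, 80, 200)),
    Detection("water bottle", 0.83, (40, 60, 20, 50)),
    Detection("water bottle", 0.31, (200, 90, 15, 40)),  # too uncertain to alert on
]
threats = find_threats(frame_detections)
```

In a real deployment this check would run once per frame, with an alert dispatched the first time `threats` is non-empty.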

We decided to take it a step further and provide users of the system with additional information, for added awareness and safety. Notifications sent to the user contain data about the detected shooter, including an image of the suspect as they were detected by Guardyn. In addition, Guardyn analyzes the suspect's face and can even estimate skin tone / complexion.

Guardyn sends alerts to civilians instantly, providing data about the emergency at hand: what the attacker looks like, where they were seen with the weapon, and an image of the suspect at the time of detection.

The alert system isn't just a one-way street, however. We've implemented two ways for a civilian who receives an alert to respond quickly and safely: sending an "SOS" distress signal indicating the attacker is nearby with a single tap, and providing the police with a tip on the situation or other information to support the apprehension of the suspect.

Outside of the alert system, we also implemented a dashboard application that visualizes all the data we collect. Officers and security officials can use it to keep track of the threats Guardyn detects and keep an eye on unfolding incidents. As civilians use the "SOS" response to an emergency alert, their current locations are collected and submitted to the system. This lets the dashboard show a visual of the path the attacker has taken thus far and where they may be headed next.
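The path visualization boils down to ordering incoming SOS reports by time. A minimal sketch, assuming each report carries the civilian's coordinates and a timestamp (the field names here are illustrative, not Guardyn's actual schema):

```python
def attacker_path(sos_reports):
    """Return (lat, lon) points in chronological order of the SOS taps."""
    ordered = sorted(sos_reports, key=lambda r: r["sent_at"])
    return [(r["lat"], r["lon"]) for r in ordered]


# Three SOS reports arriving out of order from different civilians
reports = [
    {"lat": 40.7130, "lon": -74.0062, "sent_at": 1700000120},
    {"lat": 40.7128, "lon": -74.0060, "sent_at": 1700000000},
    {"lat": 40.7135, "lon": -74.0065, "sent_at": 1700000240},
]
path = attacker_path(reports)
# The last point in the path hints at where the attacker may be headed next.
```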

How we built it

The computer vision component of Guardyn was built using TensorFlow's Object Detection API and a Haar Cascade classifier. Using these two in combination allows us to distinguish, in real time, between harmless objects and dangerous ones, as well as the people in possession of them. We use the Haar Cascade to attempt to detect the location of the suspect's face and eyes (if they are facing the camera and not wearing a mask), and then we analyze that portion of the frame with our own algorithm to extrapolate skin tone, one of the pieces of data we feed to users of the system.
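The skin-tone step can be sketched as averaging the pixels inside the face box the cascade returns. This is an illustrative, simplified version of that idea, not the authors' actual algorithm: the real pipeline crops an OpenCV frame, while here the "frame" is a plain nested list of (R, G, B) pixels, and the brightness buckets are made-up thresholds.

```python
def average_color(frame, box):
    """Average the RGB values inside box = (x, y, w, h)."""
    x, y, w, h = box
    totals = [0, 0, 0]
    count = 0
    for row in frame[y:y + h]:
        for r, g, b in row[x:x + w]:
            totals[0] += r
            totals[1] += g
            totals[2] += b
            count += 1
    return tuple(t // count for t in totals)


def describe_complexion(rgb):
    """Very rough bucketing by brightness; thresholds are purely illustrative."""
    brightness = sum(rgb) / 3
    if brightness > 170:
        return "light"
    if brightness > 85:
        return "medium"
    return "dark"


# 4x4 toy frame where the detected face occupies the top-left 2x2 region
frame = [[(200, 170, 150)] * 4 for _ in range(4)]
tone = average_color(frame, (0, 0, 2, 2))
label = describe_complexion(tone)
```

A production version would also mask out non-skin pixels (hair, background) inside the box before averaging.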

We decided to use push notifications to alert civilians because they proved to be the fastest delivery method compared to SMS and other options. Our system is set up so that notifications can be subscribed to across iOS, Android, and Web notification systems.

Push notifications offer a lot of flexibility in embedding data such as images, so civilians can get more information about the current emergency. They also allowed us to implement the "SOS" and "Tip" features with minimal user input; the SOS feature in particular collects geolocation data and sends it to police with a single tap.
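The one-tap actions amount to pre-built payloads that need no typing from the user. A hypothetical sketch of what the two actions might send over the push channel (the field names and structure are assumptions, not Guardyn's actual wire format):

```python
import json
import time


def build_sos_payload(incident_id, lat, lon, timestamp=None):
    """Message a single "SOS" tap would send: just the device's location."""
    return {
        "type": "SOS",
        "incident_id": incident_id,
        "location": {"lat": lat, "lon": lon},
        "sent_at": timestamp if timestamp is not None else int(time.time()),
    }


def build_tip_payload(incident_id, text):
    """The "Tip" action carries free-form text instead of a location."""
    return {"type": "TIP", "incident_id": incident_id, "text": text}


sos = build_sos_payload("inc-42", 40.7128, -74.0060, timestamp=1700000000)
wire = json.dumps(sos)  # what would actually travel over the push channel
```

Keeping the SOS payload this small is what makes the single-tap interaction possible: everything except the tap itself is filled in automatically.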

Challenges we ran into

The use of computer vision was a challenge at this event, as we were forced to run object detection and classification models on our laptops, neither of which has a graphics card to accelerate performance in this area. Despite this, we did end up making use of some powerful but lightweight models that run at an acceptable framerate of about 3-5 FPS on our MacBooks. We believe the right hardware could make this system even better at detection, and we're excited to try it out at home.

Accomplishments that we're proud of

Both of us have dabbled in computer vision but had never taken on a challenge of this scale. That made it a really fun project to build, as we were forced to dive in head first and see what we could hack together. We're proud of our end product because we were not only able to implement a fairly complex computer vision pipeline on not-so-high-end hardware, but also to build a handful of features that put its results to good use.

What we learned

We learned what felt like a semester's worth of knowledge about computer vision and machine learning in general. We also enjoyed having the opportunity to explore strategies for mass-deploying push notifications and the ways users can interact with those alerts.

What's next for Guardyn

Guardyn seems like the start of a project you might see in a futuristic society, and we hope that is the case. If a system like ours were fully implemented in society, one obvious improvement would be for its notification system to be deeply integrated into our phones without users having to connect manually.

The ideal scenario for us is to become the equivalent of modern AMBER Alerts, reaching users in a geographic range with important information about critical events around them. We believe Guardyn shows that current systems like AMBER, weather, or government alerts could deliver richer data to the population during emergencies.
