Gun violence is a dire problem in the United States. Case studies of mass shootings in the US often include surveillance footage of the shooter carrying their firearm before the attack began. That's both the problem and the solution. Right now, surveillance footage is used as an "after-the-fact" resource: it's used to look back at what transpired during a crisis. This is because even the biggest surveillance systems have only a handful of human operators, who simply can't monitor all the incoming footage. But think about it: most schools, malls, etc. have security cameras in almost every hallway and room. It's a wasted resource. What if we could use surveillance footage as an active and preventive safety measure? That's why we turned surveillance into SmartVeillance.
What it does
SmartVeillance is a system of security cameras with automated firearm detection. Our system simulates a CCTV network that can intelligently classify and communicate threats for a single operator to easily understand and act upon. When a camera in the system detects a firearm, the camera number is announced and displayed on every screen. The screen associated with that camera is highlighted with a red banner so the operator can find it quickly. The still image from the moment of detection is displayed so the operator can determine whether a firearm is actually present or the detection was a false positive. Lastly, the history of detections across cameras is displayed at the bottom of the screen so that the operator can understand the movement of the shooter when informing law enforcement.
How we built it
Since we obviously can't have real firearms here at TreeHacks, we used IBM's Cloud Annotations tool to train a TensorFlow object detection model on printed cutouts of guns. We integrated this into a React.js web app to detect firearms visible in the computer's webcam. We then used PubNub to broadcast three things between the computers in the system: which camera detected a firearm, the image from the moment of detection, and the recent history of detections. Lastly, we built onto the React app to add features like object highlighting, sounds, etc.
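The message each camera broadcasts can be sketched as a small JSON payload. This is a minimal sketch of the idea, not our exact production code; names like `buildAlert`, `cameraId`, and `MAX_HISTORY` are illustrative:

```javascript
// Sketch of the alert payload a camera might publish over PubNub when
// the model flags a firearm. All names here are illustrative.

// Keep a rolling history so operators can trace the shooter's movement.
const MAX_HISTORY = 10;

function buildAlert(cameraId, imageBase64, history) {
  const alert = {
    cameraId,                // which camera fired, e.g. 3
    timestamp: Date.now(),   // moment of detection (ms since epoch)
    imageBase64,             // still frame from that moment, base64-encoded
  };
  // Append to the shared detection history, trimming old entries.
  const newHistory = [...history, { cameraId, timestamp: alert.timestamp }]
    .slice(-MAX_HISTORY);
  return { alert, newHistory };
}

// On the receiving side, each screen checks whether the alert came from
// "its" camera to decide if it should show the red banner.
function shouldShowBanner(alert, myCameraId) {
  return alert.cameraId === myCameraId;
}
```

In the real app the payload travels through PubNub's publish/subscribe channels and every screen receives it through a listener; we left that wiring out of the sketch so it stays self-contained.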
Challenges we ran into
Our biggest challenge was creating our gun detection model. It was really poor the first two times we trained it, and it basically recognized everything as a gun. However, after some guidance from some lovely mentors, we understood the variety of angles, lighting conditions, and backgrounds that go into a good training set. On our third attempt, we were able to take that advice and create a very reliable model.
Accomplishments that we're proud of
We're definitely proud of having excellent object detection at the core of our project despite coming here with no experience in the field. We're also proud of figuring out how to transfer images between our devices by encoding and decoding them as base64 and sending the string through PubNub, making communication between cameras almost instantaneous. But above all, we're just proud to come here and build a 100% functional prototype of something we're passionate about. We're excited to demo!
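The base64 trick is simple enough to show in a few lines. Here's a minimal Node sketch of the idea; in the browser we'd get the base64 from a canvas instead, and the helper names are ours:

```javascript
// Minimal sketch of the base64 encode/decode trick we used to ship
// still frames through PubNub as plain strings (Node version; in the
// browser, canvas.toDataURL() produces the base64 for you).

function encodeFrame(bytes) {
  // bytes: raw image data (Uint8Array) -> base64 string for publishing
  return Buffer.from(bytes).toString('base64');
}

function decodeFrame(base64) {
  // base64 string from the received message -> raw image bytes again
  return new Uint8Array(Buffer.from(base64, 'base64'));
}

// Round trip: what goes in comes back out unchanged.
const frame = new Uint8Array([137, 80, 78, 71]); // the PNG magic bytes
const wire = encodeFrame(frame);                 // "iVBORw=="
const back = decodeFrame(wire);                  // [137, 80, 78, 71]
```

Because the encoded frame is just a string, it rides inside the same JSON message as the camera number and detection history, so no separate file-transfer backend is needed.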
What we learned
We learned A LOT during this hackathon. At the forefront, we learned how to build a model for object detection, and we learned what kinds of data we should train it on to get the best results. We also learned how we can use data streaming networks, like PubNub, to have our devices communicate with each other without having to build a whole backend.
What's next for SmartVeillance
Real cameras and real guns! Legitimate surveillance cameras are much better quality than our laptop webcams, and they usually capture a wider field of view too. We would love to see how far our object detection goes when run through these cameras. And obviously, we'd like to see how our system fares when trained to detect real firearms. Paper guns are definitely appropriate for a hackathon, but we have to make sure SmartVeillance can detect the real thing if we want to save lives in the real world :)