Inspiration
Philadelphia has the ultimate underdog story. From the mind-numbing defeat of the Union in the last minute of the MLS Cup final to the Super Bowl that was snatched from us in the last eight seconds, this year has been a rollercoaster for Philadelphians.
Despite the crushing defeats we suffered, the spirit of the city brought us together all year. That's why, when we were brainstorming project ideas, we immediately focused our attention on Philly's problems. The city's horrifying crime rate topped the list, so we decided to build something to help address it.
What it does
Our system runs a computer vision model over real-time security footage to detect the presence of guns or knives. When a weapon is detected, the model sends an alert to a Raspberry Pi, which rings an alarm (a different one for guns than for knives), and pushes an alert to the mobile app's database. The mobile app then notifies users within the radius of the alert's epicenter; each user chooses their own alert radius. The server also sends an alert to the police system, since brandishing a gun or any other deadly weapon in public is illegal in Philadelphia and throughout Pennsylvania.
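The alert flow described above can be sketched in a few lines. This is an illustrative sketch, not our production code: the alarm tone names, the confidence threshold, and the payload fields are all assumptions made for the example.

```python
# Sketch of the alert flow: a detection above a confidence threshold
# triggers a weapon-specific alarm tone and an alert payload destined
# for the app database. All names and values here are illustrative.

# Hypothetical alarm tones: one pattern for guns, another for knives.
ALARM_TONES = {"gun": "rapid_beep", "knife": "slow_beep"}
CONFIDENCE_THRESHOLD = 0.6  # assumed tuning value

def route_detection(label, confidence, camera_location):
    """Return (alarm_tone, alert_payload), or None if the detection
    is below threshold or not a weapon class we handle."""
    if label not in ALARM_TONES or confidence < CONFIDENCE_THRESHOLD:
        return None
    payload = {
        "weapon": label,
        "confidence": round(confidence, 2),
        "epicenter": camera_location,   # (lat, lon) of the camera
        "notify_police": True,          # brandishing is illegal statewide
    }
    return ALARM_TONES[label], payload

# Example: a gun detected at 87% confidence near City Hall.
print(route_detection("gun", 0.87, (39.9526, -75.1652)))
```

In the real pipeline the payload would be written to the app database and the tone name forwarded to the Raspberry Pi's alarm.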
How we built it
We split the work three ways. Annafy worked on the hardware side of the project, integrating the AI model deployed on Google Cloud with the Raspberry Pi and handling the inputs fed to the YOLOv5 model. He also helped Rafi and Khang set up the real-time database for the mobile app.
Rafi and Khang built the mobile app in React Native; it determines the user's location, takes their preferred radius, and alerts them about any danger in their zone.
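The "is this user inside the alert radius" check reduces to a great-circle distance test. The app itself is written in React Native, so the following Python sketch only illustrates the math (the haversine formula); the coordinates in the example are assumptions for demonstration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius ~6371 km

def should_alert(user_pos, epicenter, radius_km):
    """True if the user sits within their chosen radius of the alert."""
    return haversine_km(*user_pos, *epicenter) <= radius_km

# City Hall to the Liberty Bell is roughly 1.3 km.
print(should_alert((39.9526, -75.1652), (39.9496, -75.1503), 2.0))  # True
```

The app would run this check against every incoming alert, using the radius stored in the user's settings.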
Sami and Adith worked on the computer vision model that accesses real-time footage and applies a YOLOv5 model they trained to detect firearms and knives. YOLOv5 is a convolutional neural network (CNN), and they trained it on annotated JPEG images and XML label files of weapons. This YOLOv5 model was later deployed on Google Cloud with Annafy's help.
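YOLOv5 expects its labels as normalized plain-text files rather than XML, so a conversion step along the following lines is typically needed before training. This is a sketch under the assumption that the XML files follow the common Pascal VOC layout, and the class list is our guess; the team's actual annotation schema may differ.

```python
import xml.etree.ElementTree as ET

CLASSES = ["gun", "knife"]  # assumed class order; list index = YOLO class id

def voc_to_yolo(xml_string):
    """Convert one Pascal VOC-style annotation into YOLO label lines:
    'class_id x_center y_center width height', all normalized to [0, 1]."""
    root = ET.fromstring(xml_string)
    size = root.find("size")
    img_w = float(size.find("width").text)
    img_h = float(size.find("height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:
            continue  # skip classes the model does not detect
        box = obj.find("bndbox")
        xmin = float(box.find("xmin").text)
        ymin = float(box.find("ymin").text)
        xmax = float(box.find("xmax").text)
        ymax = float(box.find("ymax").text)
        x_c = (xmin + xmax) / 2 / img_w   # box center, normalized
        y_c = (ymin + ymax) / 2 / img_h
        w = (xmax - xmin) / img_w         # box size, normalized
        h = (ymax - ymin) / img_h
        lines.append(f"{CLASSES.index(name)} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")
    return lines

sample = """<annotation><size><width>640</width><height>480</height></size>
<object><name>gun</name><bndbox><xmin>100</xmin><ymin>120</ymin>
<xmax>300</xmax><ymax>240</ymax></bndbox></object></annotation>"""
print(voc_to_yolo(sample))  # ['0 0.312500 0.375000 0.312500 0.250000']
```

One such `.txt` file per image, alongside the JPEGs, is the format YOLOv5's training script consumes.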
Challenges we ran into
1. The first and biggest challenge was the computational cost of training an accurate model on the laptops we had. We ran the training script on the most powerful computer available to us, and it still took about 14 hours to complete the 100 epochs needed to reach reasonable accuracy. To work around this, we used the compute we have remote access to through Google Colab.
2. The next challenge was running the trained AI model on the Raspberry Pi. Our intention was to do real-time analysis directly on the RPi, but when we tried, the video was far too laggy for the model to be functional, so we had to look for other ways to deploy it.
3. Rafi and Khang had trouble integrating Firebase with React Native, and it took them a long while to work around that.
Accomplishments that we're proud of
We designed and built a fully functional mobile app in a very short time, and we even managed to add a few visually appealing graphical features along the way.
Something else we are proud of is clearing every hurdle involved in designing our computer vision model. We absorbed a lot about the nuances of neural networks in the short span of the hackathon.
We also spent a lot of time integrating the Google Cloud framework with the machine learning model, and we learned a great deal about APIs in the process.
What we learned
We learned a lot about developing computer vision models and how weights and biases are adjusted during training. We had to fine-tune the confidence thresholds for inputs in multiple cases to achieve maximum accuracy. The learning curve for the Google Cloud integration was steep because we had very little prior knowledge of APIs.
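The confidence fine-tuning mentioned above can be sketched as a per-class threshold filter over raw detections. The class names and threshold values below are illustrative assumptions, not the values we actually converged on.

```python
# Hypothetical per-class confidence thresholds, tuned so that knives
# (easier to confuse with everyday objects) require more certainty
# than guns before an alert fires. Values are illustrative.
THRESHOLDS = {"gun": 0.55, "knife": 0.70}

def filter_detections(detections):
    """Keep only (label, confidence) pairs that clear their class's
    threshold; unknown classes never pass."""
    return [
        (label, conf)
        for label, conf in detections
        if conf >= THRESHOLDS.get(label, 1.0)
    ]

raw = [("gun", 0.62), ("knife", 0.66), ("knife", 0.81), ("phone", 0.90)]
print(filter_detections(raw))  # [('gun', 0.62), ('knife', 0.81)]
```

Tuning these per-class thresholds against validation footage is how we traded off missed detections against false alarms.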
What's next for Weapon Detection System
We are planning to deploy the computer vision model in an NVIDIA Jetson Nano later on so that we can have a fully remote implementation of the model.
We are also planning to expand the scope of the model by training it with more datasets and adding more dangerous items to the classes that the model detects.
We also want to add more features to the app, such as the ability to send Amber-style alerts in very dangerous situations.
Built With
- firebase
- google-cloud
- google-colab
- javascript
- matlab
- opencv
- python
- pytorch
- raspberry-pi
- react-native
- tensorflow
- twilio
- yolov5

