Inspiration

In many mass shootings, there is a significant delay between the time police arrive at the scene and the time they engage the shooter, often because they have difficulty determining the number of shooters and their locations. ViGCam addresses this problem.

What it does

ViGCam spots and tracks weapons as they move through buildings. It uses existing camera infrastructure, location tags, and Google Vision to recognize weapons. The information is displayed in an app that alerts users to the threat's location.

Our system could also be used to identify wounded people after an emergency incident, such as an earthquake.

How we built it

We used Raspberry Pis and Pi Cameras to simulate existing camera infrastructure. Each Pi runs a Python script that sends every image its camera captures to our Django server. The server then forwards the images to the Google Vision API, which returns a list of classifications. All the data collected from the Raspberry Pis can be visualized in our React app; a sketch of the server-side step follows.
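For illustration, here is a minimal sketch of that server-side step, assuming the google-cloud-vision client library. The view name `classify_frame`, the `image` and `location` field names, and the keyword-based threat check are assumptions for this sketch, not verbatim project code.

```python
# Minimal sketch: a Django view that receives one frame from a Pi and
# asks Google Vision for label annotations. Names are illustrative.
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # reads GOOGLE_APPLICATION_CREDENTIALS

@csrf_exempt
def classify_frame(request):
    # Each Pi POSTs a JPEG frame plus the location tag of its camera.
    content = request.FILES["image"].read()
    location = request.POST.get("location", "unknown")

    # Run label detection on the raw image bytes.
    response = client.label_detection(image=vision.Image(content=content))
    labels = [
        {"description": label.description, "score": label.score}
        for label in response.label_annotations
    ]

    # Hypothetical threat check: flag the frame if any label looks
    # weapon-like. A real system would use a tuned model and thresholds.
    threat = any(
        "gun" in l["description"].lower() or "weapon" in l["description"].lower()
        for l in labels
    )
    return JsonResponse({"location": location, "labels": labels, "threat": threat})
```

The React app can then poll or subscribe to these results and plot flagged locations on a floor plan.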

Challenges we ran into

SSH connections do not work on the HackMIT network, so our current setup involves turning on one camera before activating the second. In a real-world deployment, we would use an existing camera network rather than our Raspberry Pi cameras to collect video data.

We also had a hard time getting our objects consistently identified as weapons, largely because, for obvious reasons, we could not bring actual weapons to the venue. Up close, however, the system consistently identifies the stand-in items our team members carry.

With our current server setup, we consistently hit server-overload errors, so we added an extended delay between image uploads (sketched below). Given more time, we would deploy on a real camera network and modify the system to perform object recognition on video rather than individual still images, which would improve accuracy. WebSockets could be used to display the collected data in real time.
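As a sketch of that throttling workaround, the Pi-side loop looks roughly like the following. The endpoint URL, location tag, and five-second interval are illustrative assumptions, not our exact values.

```python
# Sketch of the Pi-side capture loop with a fixed delay between uploads
# to avoid overloading the server. Values below are placeholders.
import time
import requests
from picamera import PiCamera

SERVER_URL = "http://example-server.local/classify/"  # hypothetical endpoint
SEND_INTERVAL = 5  # seconds between uploads (assumed throttle)

camera = PiCamera()
camera.resolution = (1024, 768)

while True:
    # Capture a single still frame, then POST it with this camera's
    # location tag so the server knows where the image came from.
    camera.capture("/tmp/frame.jpg")
    with open("/tmp/frame.jpg", "rb") as f:
        try:
            requests.post(
                SERVER_URL,
                files={"image": f},
                data={"location": "hallway-2"},  # hypothetical tag
                timeout=10,
            )
        except requests.RequestException:
            pass  # drop this frame and retry on the next cycle
    time.sleep(SEND_INTERVAL)
```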

Accomplishments that we’re proud of

1) It works! We successfully completed our project in 24 hours.
2) We learned to use the Google Cloud Vision API.
3) We learned how to use the Raspberry Pi; before this, no one on our team had any hardware experience.

What we learned

1) We learned about coding in a real-world environment.
2) We learned about working as a team.

What's next for ViGCam

We plan to work through the kinks and add video analysis. We could add gunshot sound detection to recognize emergencies more reliably, and use additional machine learning models to predict where a threat is heading and to distinguish threats from police officers. Making the app update in real time would make the system more robust. Finally, we would integrate with law enforcement emergency-alert infrastructure to broadcast the shooter's location to people in the area in real time. If these efforts succeed, we hope to either start a company or sell the idea.

Built With

django, google-cloud-vision, python, raspberry-pi, react
