Inspiration

According to Washington D.C.'s ShotSpotter program, nearly eight out of every ten gunshot incidents in D.C. go unreported. ShotSpotter is a government-sponsored program that uses acoustic tracking technology to better monitor and control gun violence in cities. It was first deployed in Historic Anacostia in 2005, and it took roughly $2 million and four years to roll out across the district. Around 300 sensors are currently placed in D.C., and each time a noise is suspected to be a gunshot, it is verified by a human reviewer before being passed to the Metropolitan Police Department. Though the idea is sound, its implementation is expensive, which keeps it from being widely adopted. ShotSpotter can also be slow because it relies on human grading of reports. With this in mind, we introduce our fully automated solution for shooting detection, with the goal of saving more lives, faster.

Our project proposes a crowd-sourcing solution that not only uses significantly fewer resources and less time to implement, but also requires less maintenance and will likely provide faster and more accurate responses. We spent a significant amount of time on the mathematical modeling behind our solution, as outlined below. Unlike ShotSpotter, our solution does not rely on new infrastructure and can be installed on users' phones easily, adding close to no overhead cost and allowing much faster distribution. Our app automates gunshot identification and localization, and we crowd-source the verification step to people near the area of interest. Under this structure, we expect to reduce both false positives and response delay, since we get input from many users instead of a single human reviewer, and users can provide feedback more quickly. This is because users are only asked about events that occur near them, as opposed to ShotSpotter employees, who have to respond to events across the whole city.

What it does

Our app automates the process of gunshot identification and localization. We propose a crowd-sourcing solution that not only takes significantly fewer resources and less time to implement, but also requires less maintenance and will likely provide faster and more accurate responses.

How I built it

We implemented algorithms (Closed-Form Least-Squares Source Location Estimation) for localizing the sound source using the time delays, geo-locations, and sound amplitude differences experienced by a collection of receivers scattered around the source. We aggregate crowd-sourced data with an Android application that detects the sound, sends it to a server for analysis, and alerts the user with the approximate location of the source.
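For illustration, here is a minimal sketch of the kind of closed-form least-squares TDOA solver this describes, covering only the time-delay part (amplitude weighting is omitted). Treating receiver 0 as the reference and writing the range differences as d_i = c * tdoa_i, each other receiver contributes one linear equation in the source position and the reference range, and the stacked system is solved in the least-squares sense. The function and variable names below are illustrative assumptions, not our actual server code.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air at 20 C

def locate_source(sensors, tdoas, c=SPEED_OF_SOUND):
    """Closed-form least-squares source location from TDOA measurements.

    sensors : (N, 2) array of receiver positions in metres (local x/y frame),
              with sensors[0] as the reference receiver.
    tdoas   : (N-1,) array of arrival-time differences (seconds) of receivers
              1..N-1 relative to receiver 0.
    Returns the estimated (x, y) source position.
    """
    sensors = np.asarray(sensors, dtype=float)
    d = c * np.asarray(tdoas, dtype=float)   # range differences d_i = R_i - R_0
    s0 = sensors[0]
    si = sensors[1:]

    # Linearised equations:
    #   2*(s_i - s_0)^T x + 2*d_i*R_0 = ||s_i||^2 - ||s_0||^2 - d_i^2
    A = np.hstack([2.0 * (si - s0), 2.0 * d[:, None]])
    b = np.sum(si**2, axis=1) - np.sum(s0**2) - d**2

    # Unknowns are [x, y, R_0]; solve in the least-squares sense.
    solution, *_ = np.linalg.lstsq(A, b, rcond=None)
    return solution[:2]

# Example: four receivers around a source at (12, 7)
if __name__ == "__main__":
    true_source = np.array([12.0, 7.0])
    sensors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
    ranges = np.linalg.norm(sensors - true_source, axis=1)
    tdoas = (ranges[1:] - ranges[0]) / SPEED_OF_SOUND
    print(locate_source(sensors, tdoas))  # approximately [12. 7.]
```

In practice the phones' GPS coordinates would be projected into a local metric frame before being fed to a solver like this, and noisy timestamps mean more receivers give a better fit.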

Challenges I ran into

We faced challenges in localizing the sound source accurately. Additionally, due to hardware limitations and limited access to data sets, we faced challenges in implementing a machine learning model that could correctly distinguish gunshot sounds. While developing the Android application, we also had only two Android phones available for testing, so we instead simulated the detection of a gunshot event by sending multiple reports to the server.
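A minimal sketch of that simulation idea is below: several "phones" post a report for the same event, each with a slightly different arrival time and location. The endpoint URL and payload fields are hypothetical placeholders for illustration, not the real API we used.

```python
import time
import requests

# Hypothetical endpoint and payload shape, for illustration only.
SERVER_URL = "http://localhost:5000/report"

def send_simulated_reports():
    """Simulate several phones reporting the same gunshot-like event."""
    base_time = time.time()
    simulated_phones = [
        {"device_id": "phone-1", "lat": 38.8672, "lon": -76.9831, "delay_s": 0.000},
        {"device_id": "phone-2", "lat": 38.8690, "lon": -76.9800, "delay_s": 0.012},
        {"device_id": "phone-3", "lat": 38.8655, "lon": -76.9858, "delay_s": 0.021},
    ]
    for phone in simulated_phones:
        payload = {
            "device_id": phone["device_id"],
            "lat": phone["lat"],
            "lon": phone["lon"],
            "timestamp": base_time + phone["delay_s"],  # simulated arrival time
            "amplitude": 0.8,
        }
        requests.post(SERVER_URL, json=payload, timeout=5)

if __name__ == "__main__":
    send_simulated_reports()
```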

Accomplishments that I'm proud of

We managed to implement a localization scheme based on Closed-Form Least-Squares Source Location Estimation. We also built a front end and back end that work seamlessly together. Lastly, we spent a good amount of time optimizing the app experience so that users are notified of any important news in a clear and concise manner.

What I learned

We learned the concepts underlying sound source localization and its difficulties, as well as the challenges of multi-threaded programming for processing time-series data.
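To illustrate the second point, the usual shape of the problem is a producer/consumer split: one thread captures audio frames while another scans them for gunshot-like impulses. The sketch below uses synthetic samples in place of a real microphone feed and a crude amplitude threshold; it shows the general pattern, not our app's actual detection code.

```python
import queue
import threading
import numpy as np

FRAME_SIZE = 1024          # samples per frame (illustrative)
AMPLITUDE_THRESHOLD = 0.7  # crude "loud impulse" threshold (illustrative)

frames = queue.Queue(maxsize=64)  # bounded queue so the producer cannot run away

def capture_audio(n_frames=100):
    """Producer: push audio frames onto the queue (synthetic noise here)."""
    rng = np.random.default_rng(0)
    for _ in range(n_frames):
        frames.put(rng.uniform(-0.2, 0.2, FRAME_SIZE))
    frames.put(None)  # sentinel: no more data

def detect_events():
    """Consumer: scan each frame for loud impulses as it arrives."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        if np.max(np.abs(frame)) > AMPLITUDE_THRESHOLD:
            print("possible gunshot-like impulse detected")

if __name__ == "__main__":
    producer = threading.Thread(target=capture_audio)
    consumer = threading.Thread(target=detect_events)
    producer.start(); consumer.start()
    producer.join(); consumer.join()
```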

What's next for Safe City

Refining the localization scheme and obtaining more data so that the machine learning approach becomes tractable.

Built With
