Inspiration

911 call centers across the US are suffering from an understaffing crisis. According to the International Academies of Emergency Dispatch, half of all US states are experiencing severe staffing shortages alongside rising call volumes, feeding a cycle of high turnover and burnout. According to the Stanford Center for Racial Justice, this has led to more inaccurate reporting, fewer available operators, and an overall decline in service quality.

What it does

CallSense is an aid for 911 operators that ensures accurate reporting and provides emotional support. When an operator receives a call, CallSense transcribes it, highlights the key information, assigns a severity score, and suggests clarifying questions in real time. It then displays a clear summary report that helps dispatchers make faster, more informed decisions and lets operators learn from their previous calls. It also manages dispatched officers by priority and can estimate arrival times. This reduces the burden on 911 operators and the risk of inattention.
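
The priority-based management of dispatched officers could be sketched with a simple heap-backed queue. Everything here (severity values, unit names, ETAs) is illustrative, not CallSense's real data model:

```python
# Illustrative sketch: ordering dispatches by severity with heapq.
# Priority values, unit names, and ETAs are made up for the example.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Dispatch:
    priority: int                       # lower value = more urgent
    unit: str = field(compare=False)    # not used in heap ordering
    eta_minutes: int = field(compare=False)

queue: list[Dispatch] = []
heapq.heappush(queue, Dispatch(2, "Unit 7", 12))
heapq.heappush(queue, Dispatch(1, "Unit 3", 5))   # critical call
heapq.heappush(queue, Dispatch(3, "Unit 9", 20))

next_out = heapq.heappop(queue)  # most urgent dispatch comes out first
print(next_out.unit, next_out.eta_minutes)
```

A min-heap keeps the most urgent dispatch at the front in O(log n) per operation, which matches the "manage by priority" behavior described above.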

How we built it

We built CallSense with a Next.js/Tailwind CSS frontend and a Flask (Python) backend, using Google Colab for model development and scikit-learn for machine learning. The predictive model is a Random Forest classifier.
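
The severity model can be sketched roughly as below. The features and labels here are synthetic stand-ins (the real pipeline's feature engineering and class scheme aren't shown in this writeup), so treat this as a shape of the approach rather than our actual training code:

```python
# Minimal sketch of a Random Forest severity classifier with scikit-learn.
# Features and severity labels are synthetic; the real feature set
# (e.g. keyword counts, distress indicators) is not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# 500 calls x 6 engineered features (placeholder values).
X = rng.random((500, 6))
# Severity classes 0 (low) .. 2 (critical), driven by the first feature.
y = (X[:, 0] * 3).astype(int).clip(0, 2)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

A Random Forest is a reasonable fit here: it handles mixed, messy features without heavy preprocessing and gives class probabilities that can back a severity score.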

Challenges we ran into

A big challenge we ran into was working with the Gemini 2.0 Flash-Live API. The endpoint required converting the standard webm/opus audio recorded in the browser to 16-bit PCM, which proved more trouble than it was worth; in the end we found another way to handle speech-to-text quickly. Additionally, the API's rate limits made it hard to test our prompts effectively.
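
For reference, one common way to do that conversion server-side is to shell out to ffmpeg. This is a hedged sketch of that approach (it assumes ffmpeg is installed; the file names and 16 kHz sample rate are illustrative), not the workaround we ultimately shipped:

```python
# Sketch: decoding browser webm/opus audio to raw 16-bit mono PCM via ffmpeg.
# Assumes ffmpeg is on PATH; paths and sample rate are illustrative.
import subprocess


def build_pcm_cmd(src: str, dst: str, rate: int = 16000) -> list[str]:
    """ffmpeg arguments that decode webm/opus into raw signed
    16-bit little-endian mono PCM at the given sample rate."""
    return [
        "ffmpeg", "-y",
        "-i", src,               # input webm/opus file from the browser
        "-f", "s16le",           # raw signed 16-bit little-endian container
        "-acodec", "pcm_s16le",  # 16-bit PCM samples
        "-ac", "1",              # downmix to mono
        "-ar", str(rate),        # resample (16 kHz is common for speech APIs)
        dst,
    ]


def webm_to_pcm16(src: str, dst: str, rate: int = 16000) -> None:
    subprocess.run(build_pcm_cmd(src, dst, rate), check=True)
```

Doing this per audio chunk in real time adds latency and moving parts, which is part of why we dropped this path.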

Accomplishments that we're proud of

We were able to provide helpful advice in real time, showing the app could actually work in a high-stakes scenario. We're proud that our model returned accurate priority scores and displayed them in a sleek, clean UI. The key details from each call are displayed in the app as well, reducing the time dispatchers need to take detailed notes and prompting faster action.

What we learned

We learned how to build a full AI pipeline, from taking in 911 audio to producing a final output. We figured out how to use the Gemini API to summarize conversations in a useful way. We trained and evaluated a Random Forest model to predict severity, which taught us the fundamentals of supervised machine learning. We learned how to handle messy data and still get reliable results. We also saw how important fast and clear communication is in emergency response. Most importantly, we better understand the real challenges dispatchers face and how technology can help solve them.
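
The summarization step can be sketched like this. The prompt wording and model name are guesses for illustration (this writeup doesn't include our production prompt), and the call assumes the google-generativeai package with a configured API key:

```python
# Sketch of transcript summarization with the Gemini API.
# Prompt text and model name are illustrative assumptions.
def build_summary_prompt(transcript: str) -> str:
    """Ask for the key details a dispatcher needs, plus one follow-up."""
    return (
        "Summarize this 911 call transcript. List the location, the nature "
        "of the emergency, and any injuries, then suggest one clarifying "
        "question the operator should ask.\n\n"
        f"Transcript:\n{transcript}"
    )


def summarize_call(transcript: str) -> str:
    # Assumes google-generativeai is installed and an API key is configured;
    # the model name here is a placeholder, not necessarily the one we used.
    import google.generativeai as genai
    model = genai.GenerativeModel("gemini-2.0-flash")
    return model.generate_content(build_summary_prompt(transcript)).text
```

Keeping the prompt in a small builder function made it easier to iterate on wording despite the rate limits mentioned earlier.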

What's next for CallSense

Next, we want to add further detail to the app, refining the design so it prompts the right agency, and the specific employees and departments within it, into action. One thing we could experiment with is having the app generate a series of detailed instructions similar to the current standard, making dispatch more efficient by reducing response time even further. Finally, we hope to test CallSense in real-world environments by partnering with public safety agencies, continuously refining the system with real call data and dispatcher feedback.

Built With

flask, gemini, google-colab, next.js, python, scikit-learn, tailwindcss