Inspiration
A dispatcher is a critical part of the first-responder care process. A good dispatcher can save someone's life, and a bad one can cripple the efficiency of the whole system. In EMS, that lost efficiency has life-or-death implications, while a well-run system offers competent advice and guidance to help people through some of the worst days of their lives.
There is rigorous training, built around strict protocols, that drastically improves a dispatcher's performance, and ideally every dispatcher would have it. The certification system is called EMD: Emergency Medical Dispatching.
The only drawback of EMD certification is the training itself: counties often lack the resources to hire trained dispatchers, and their EMS services suffer as a result.
ambuLLMce is our solution to this.
What it does
This application is designed as a support tool for 911 call operators, providing automated assistance during calls. Users can either upload an audio file or record a voice message; the system then produces the codes most relevant to the situation and displays them to the user.
How we built it
We built this project as four concurrent subsystems.

Front-end: The front end was written in JavaScript using React. We started from the base create-react-app scaffold, then implemented and tested each React component individually. The code was originally tested against mock back-end systems before being connected to the real back end via HTTP POST and GET requests. The front-end server was deployed on Netlify. Once everything was wired up, we made the components more interactive, adding background animations, smooth transitions, and separate pages using React Router.
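As a rough illustration of the front-end/back-end contract, the request below posts recorded audio and receives the generated directions. It is shown in Python (the language of our other sketches here) rather than the JavaScript fetch call the React app actually makes, and the endpoint URL and field name are placeholders, not our real API:

```python
import requests

# Placeholder URL and field name; the real endpoint differs.
BACKEND_URL = "https://ambullmce-backend.example.com/process-call"

with open("call_recording.wav", "rb") as f:
    # Upload the recorded audio as a multipart form file.
    response = requests.post(BACKEND_URL, files={"audio": f})

response.raise_for_status()
# The back end responds with the transcript and EMD directions as JSON.
print(response.json())
```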
Back-end: The back end was written with FastAPI. It receives the audio data posted by the front-end service and calls the in-house audio transcriber to turn the call audio into text. Once the text version of the call is ready, it is sent to the LLM, whose answers are grounded via RAG in EMD protocols, and the generated set of directions is returned to the front end. The back end was deployed to the cloud using Defang.
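A minimal sketch of that endpoint, with hypothetical transcribe and generate_directions helpers standing in for the transcriber and RAG pipeline described below:

```python
from fastapi import FastAPI, UploadFile

app = FastAPI()

def transcribe(audio_bytes: bytes) -> str:
    """Placeholder for the in-house Azure Speech transcriber."""
    ...

def generate_directions(transcript: str) -> list[str]:
    """Placeholder for the RAG pipeline described in the AI section."""
    ...

@app.post("/process-call")
async def process_call(audio: UploadFile):
    # Read the raw audio bytes posted by the front end.
    audio_bytes = await audio.read()

    # Transcribe the call, then feed the transcript to the RAG
    # pipeline to get the relevant EMD codes and directions.
    transcript = transcribe(audio_bytes)
    directions = generate_directions(transcript)

    # Return JSON for the front end to display.
    return {"transcript": transcript, "directions": directions}
```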
AI: The AI system was designed as a modular in-house API for the back end to call. First we developed the RAG system using Azure OpenAI and llama-index. Once the RAG system worked, we refined it with prompt structuring to ensure consistent output. We then integrated Azure Speech to convert audio to text so it could be fed into the RAG pipeline.
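In outline, the pipeline looks something like the sketch below. The llama-index and Azure Speech calls follow their documented APIs, but the keys, deployment names, paths, and prompt wording are placeholders; this is a simplified illustration of our pipeline, not the exact code:

```python
import azure.cognitiveservices.speech as speechsdk
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.llms.azure_openai import AzureOpenAI

# Placeholder Azure credentials and deployment names.
SPEECH_KEY, SPEECH_REGION = "<speech-key>", "eastus"
AOAI_ENDPOINT, AOAI_KEY = "https://<resource>.openai.azure.com/", "<key>"

def transcribe(wav_path: str) -> str:
    # Azure Speech: recognize a single utterance from a WAV file.
    speech_config = speechsdk.SpeechConfig(subscription=SPEECH_KEY, region=SPEECH_REGION)
    audio_config = speechsdk.audio.AudioConfig(filename=wav_path)
    recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, audio_config=audio_config
    )
    return recognizer.recognize_once().text

# Point llama-index at Azure OpenAI for both the LLM and embeddings.
Settings.llm = AzureOpenAI(
    engine="<chat-deployment>", azure_endpoint=AOAI_ENDPOINT,
    api_key=AOAI_KEY, api_version="2024-02-01",
)
Settings.embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002", deployment_name="<embed-deployment>",
    azure_endpoint=AOAI_ENDPOINT, api_key=AOAI_KEY, api_version="2024-02-01",
)

# Index the EMD protocol documents and build a query engine over them.
documents = SimpleDirectoryReader("emd_protocols/").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

transcript = transcribe("call.wav")
# Structured prompt so the model returns consistent, code-first output.
response = query_engine.query(
    "Given this 911 call transcript, list the relevant EMD codes "
    f"and dispatcher directions:\n{transcript}"
)
print(response)
```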
Project structure and deployment: We used Defang to deploy the back end and Netlify to deploy the front end.
Challenges we ran into
We had no back-end specialists, so everything on the back end had to be learned on-site, and we fought through many small oddities and type errors along the way. Early on, rate limiting was a serious problem when using the OpenAI API directly. We solved it by migrating to Azure OpenAI, which also turned out to be much faster.
Accomplishments that we're proud of
This project came together almost exactly as we imagined it. We believe it could be highly beneficial in its intended use case, and we are very happy to have the entire system connected and working as expected.
What we learned
We learned that we need a back-end dev next time, and that Azure is very helpful. We also learned a lot about developing with LLMs and deploying to the cloud.
What's next for ambuLLMce
Asynchronous microphone streaming, LLM fine-tuning, more data for the RAG pipeline, and a dedicated vector database.
Built With
- azure
- fastapi
- javascript
- llama-index
- netlify
- python
- react