Inspiration

Often, when a patient walks into a doctor's office, more time is spent documenting the visit than giving the patient quality care. In fact, about 6 hours of an 11-hour workday are spent entering information into the [Electronic Health Record](https://en.wikipedia.org/wiki/Electronic_health_record). This contributes to physician burnout: one study found that more than half of all doctors report experiencing at least one symptom of burnout. To reduce stress and improve the quality of patient care, the use of medical scribes has risen. However, there are not nearly enough scribes for all doctors, so most doctors are forced to split their attention between a computer and the patient. For a patient to receive the best care possible, they need the complete attention of their doctor. Kronos aims to solve this problem.

What it does

Instead of relying on a scribe, the doctor uses Kronos to transcribe the conversation into a text document. Kronos then uses text analysis to summarize the visit, sorting its contents into categories such as prescriptions and procedures.

How we built it

We first built a Flask-based website that can send a signal to the Raspberry Pi to start recording. The website acts as a panel for doctors, showing patient history, body measurements (heart rate, oxygen levels, etc.), and extra notes the doctor can add. After the page refreshes, the doctor stops the recording so the transcript and summary can become available.

We use MQTT to transfer commands as well as audio files between the Raspberry Pi (with a USB microphone attached) and the computer running the Flask web server. Once the audio file is on the computer and the doctor presses "print summary/transcript", our pipeline converts the audio into text.

For the text analysis we used a bidirectional RNN with LSTM cells and an attention mechanism. We ran into trouble finding large amounts of data for the neural network to train on, so we researched algorithms we could implement in the future. We came across a relatively new and trending NLP model called BioBERT, which is pretrained on more commonly available text and then fine-tuned for high-level medical vocabulary and conversations.
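As a rough illustration of this flow, here is a minimal sketch of the web-server side, assuming a local Mosquitto-style MQTT broker, the paho-mqtt client, and Google Cloud Speech-to-Text as the recognizer. The topic names (`kronos/command`, `kronos/audio`) and routes are made up for the example; this is an outline of the approach, not our exact code.

```python
# Web-server side sketch: the Flask panel publishes start/stop commands over MQTT
# and transcribes the .wav that the Raspberry Pi sends back.
# Assumes a local MQTT broker, and the packages flask, paho-mqtt, google-cloud-speech
# (Google credentials configured via GOOGLE_APPLICATION_CREDENTIALS).
from flask import Flask
import paho.mqtt.client as mqtt
from google.cloud import speech

app = Flask(__name__)
mqttc = mqtt.Client()  # paho-mqtt 1.x style (2.x requires a CallbackAPIVersion argument)

latest_audio = {"wav": None}  # most recent recording received from the Pi


def on_audio(client, userdata, msg):
    # The Pi publishes the raw bytes of the recorded .wav file.
    latest_audio["wav"] = msg.payload


mqttc.on_message = on_audio
mqttc.connect("localhost", 1883)
mqttc.subscribe("kronos/audio")  # illustrative topic name
mqttc.loop_start()


@app.route("/record/<action>")
def record(action):
    # "start" or "stop" is forwarded to the Pi, which controls the USB mic.
    mqttc.publish("kronos/command", action)
    return f"Sent '{action}' to the recorder."


@app.route("/transcript")
def transcript():
    if latest_audio["wav"] is None:
        return "No recording received yet."
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(content=latest_audio["wav"])
    config = speech.RecognitionConfig(language_code="en-US")  # WAV header supplies encoding/rate
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```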

Challenges we ran into

Initially, we planned on using an Amazon Echo to record conversations. However, we found out late that the Echo is not designed for recording conversations, only short commands, so we switched to a regular USB microphone attached to the Raspberry Pi.

We then had to work with the Google speech API, which was relatively hard to use because the library wasn't stable, and we had to replace the API key multiple times before it worked. Our NLP algorithm was also hard to train, since conversational data between doctors and patients is not readily available due to confidentiality issues. As mentioned above, this could be addressed in the future by a BioBERT-based model.

On the hardware side, there were big issues with using MQTT to communicate between the website on the computer and the Raspberry Pi. The Wi-Fi card on the Pi was too slow to handle the load, so for the time being we cannot transmit large files. Because our MQTT flow is asynchronous, we cannot do live transcription; instead we have to create a .wav file on the Pi and send it over MQTT to the web server (see the sketch below), which is why there is latency when we press stop or request a transcript. MQTT was also difficult to set up, as it often would not connect to the web server. Thankfully, we got MQTT working, so the process is now nearly effortless, apart from a few bugs.
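For reference, here is a similarly hedged sketch of the Pi-side recorder that produces the .wav and ships it over MQTT. It assumes the same broker and made-up topic names as the server sketch above, and uses ALSA's `arecord` to capture audio from the USB mic; the broker address and file path are placeholders, not our exact setup.

```python
# Raspberry Pi side sketch: listen for start/stop commands, record a .wav from the
# USB microphone with ALSA's arecord, then publish the file bytes back over MQTT.
import subprocess

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # address of the machine running the broker/Flask server (placeholder)
WAV_PATH = "/tmp/visit.wav"

recorder = None  # handle to the running arecord process, if any


def on_command(client, userdata, msg):
    global recorder
    command = msg.payload.decode()

    if command == "start" and recorder is None:
        # 16 kHz, mono, 16-bit capture from the default USB mic.
        recorder = subprocess.Popen(
            ["arecord", "-f", "S16_LE", "-r", "16000", "-c", "1", WAV_PATH]
        )
    elif command == "stop" and recorder is not None:
        recorder.terminate()  # arecord closes the .wav cleanly on SIGTERM
        recorder.wait()
        recorder = None
        with open(WAV_PATH, "rb") as f:
            client.publish("kronos/audio", f.read())  # ship the whole file to the server


client = mqtt.Client()  # paho-mqtt 1.x style client
client.on_message = on_command
client.connect(BROKER, 1883)
client.subscribe("kronos/command")
client.loop_forever()
```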

Accomplishments that we're proud of

We are very proud of the fully functioning application we built over the past 24 hours. Not only did we set up an MQTT broker and connect it to a Raspberry Pi, but we also moved computation to the cloud to save time. This is thanks to our constant work ethic: our team slept a combined total of 4 hours and managed our time quite well despite the obstacles we faced while building the project.

What we learned

We learned that when we work together and coordinate with each other, we can accomplish an incredible amount of work in 24 hours. We had very good team chemistry and were able to plan out what we were going to do. This was an important lesson for us: without planning, it is very difficult to create a product. To plan and work efficiently, we adopted Scrum, and we finished the brunt of the project 5 hours before the deadline, so we could properly prepare our presentation and give our best to the judges. After all, we are not only creating the best product we can, but also competing against others.

What's next for Kronos

We think Kronos is something that can be implemented in hospitals worldwide. For hospitals that already have existing software, our idea is to create a software overlay that sits on top of it. This allows universal use, from well-funded hospitals to hospitals in developing countries. We also want to lower the cost of the hardware, moving from a Raspberry Pi with a USB mic to something like a GPIO microphone connected to a Pi Zero. We believe this project can do real good, improving the healthcare system not just in our local area, but worldwide.

Built With

flask, mqtt, python, raspberry-pi
