Doctors spend a lot of time on notes, both with patients and outside the office. Over the years, my parents have regularly spent their evenings catching up on note taking because of slow EMRs. (If my memory serves me, I think they were faster back when they used paper.)
What it does
If we can automate parts of the note-taking process, especially the chief complaint and HPI, we can save doctors a lot of time and headache. Our app works by listening to the conversation between the doctor and their patient. The transcript is saved and can be accessed later, which is handy when the doctor needs to recall something specific from the visit. A summary is generated and key points are highlighted, and everything remains editable for later adjustment.
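The pieces described above (transcript, summary, key points, all editable) can be sketched as one record per visit. The field names below are illustrative, not the app's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class VisitNote:
    """Hypothetical shape of a saved visit: everything stays editable."""
    transcript: list[str]  # conversation, segment by segment
    summary: str = ""      # auto-generated, then freely editable
    key_points: list[str] = field(default_factory=list)  # highlighted items

# A note starts from the raw transcript; the doctor can adjust any field later.
note = VisitNote(transcript=["Patient reports knee pain for two weeks."])
note.summary = "Chief complaint: knee pain, 2 weeks."
```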
How we built it
The app uses the phone's built-in microphone to listen to the conversation, and a gRPC streaming API performs accurate speech-to-text recognition in near-realtime. Firestore saves the transcript on the fly as the discussion happens, so there is no risk of data loss. Later, on the website, the doctor can review the transcript along with the summary and other details.
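The save-as-you-go idea can be sketched as follows. This is a simplified illustration, not our actual code: streaming recognition results are modeled as `(text, is_final)` pairs, and `write` stands in for a Firestore document update, so only finalized segments ever need to be re-spoken after a crash.

```python
def persist_stream(results, write):
    """Persist each finalized segment the moment it arrives.

    `results` yields (text, is_final) pairs, a stand-in for streaming
    recognition responses; interim hypotheses are skipped, and each final
    segment triggers a write of the full transcript so far (mimicking an
    overwrite-style Firestore document update).
    """
    saved = []
    for text, is_final in results:
        if is_final:
            saved.append(text)
            write({"segments": list(saved)})
    return saved
```

With this pattern, a mid-visit failure loses at most the current interim phrase, never the transcript accumulated so far.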
Challenges we ran into
Building out the entire platform in such a short amount of time was quite taxing.
Accomplishments that we're proud of
The UI melds quite seamlessly with the core features, which is really exciting to us. It's always nice to make something that feels more polished, rather than hacked together.
What we learned
We pushed our comfort zone, technically. And diving into the world of EMRs has shown us that we have barely scratched the surface.
What's next for Khaki Pants and a Pen
Improving the technology's robustness, and tailoring the feature set more specifically to a given specialty.