One of our members works in the medical field and has noticed a big issue in the emergency room: the latency in providing non-English-speaking patients with translation services. In an emergency every second counts, so delaying a patient's treatment is a dangerous waste of invaluable time. Our team member has seen firsthand that translators are often late or unavailable, or that no one on hand can translate. In remote areas and during natural disasters, an internet connection may not be available either, so our goal was to create a kiosk-like translation application that runs as an offline desktop app and remains available in emergency situations.
What it does
The application takes audio input from a microphone, uses a Microsoft speech API to detect which language is being spoken, and then provides a written translation as output. The application also displays possibly relevant ailments and terms, as selectable buttons, in the patient's language. This lets the doctor quickly assess the severity of the situation and begin treatment as soon as possible.
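The button flow above can be sketched in a few lines. This is a minimal illustration, not the app's actual code: the term list and the pre-translated strings below are hypothetical placeholders (the real app translates its options through the Microsoft Translator Text API), and the language codes are assumed BCP-47 tags.

```python
# Hypothetical pre-translated emergency terms, keyed by language code.
# In the real app these strings come from the Translator Text API.
TERM_BUTTONS = {
    "en": ["Chest pain", "Difficulty breathing", "Allergic reaction"],
    "es": ["Dolor de pecho", "Dificultad para respirar", "Reacción alérgica"],
}

def buttons_for_language(detected_lang, fallback="en"):
    """Return the ailment/term buttons to display for the detected language,
    falling back to English if that language is not in the local table."""
    return TERM_BUTTONS.get(detected_lang, TERM_BUTTONS[fallback])
```

Keeping a local table like this is also what makes an offline fallback possible: once the translations are cached, button labels can be served without a network call.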
How we built it
We used the Microsoft Cognitive Services Speech API to detect the language being spoken and the Microsoft Translator Text API to translate the options into that language. We displayed the application in a Windows Forms (WinForms) UI.
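The Translator Text side can be sketched as a plain REST call to the v3.0 API. The sketch below only builds the request; the key and region are placeholders, since actually sending it requires a valid Azure subscription.

```python
import json

ENDPOINT = "https://api.cognitive.microsofttranslator.com"

def build_translate_request(texts, to_lang, key="YOUR_KEY", region="YOUR_REGION"):
    """Build the URL, headers, and JSON body for a Translator Text v3.0
    /translate call. The source language is omitted so the service
    auto-detects it and reports it in the response."""
    url = f"{ENDPOINT}/translate?api-version=3.0&to={to_lang}"
    headers = {
        "Ocp-Apim-Subscription-Key": key,          # placeholder subscription key
        "Ocp-Apim-Subscription-Region": region,    # placeholder Azure region
        "Content-Type": "application/json",
    }
    body = json.dumps([{"Text": t} for t in texts])
    return url, headers, body
```

A real call would POST `body` to `url` with these headers (e.g. `requests.post(url, headers=headers, data=body)`); the JSON response carries the detected source language alongside the translations.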
Challenges we ran into
We initially got stuck on how to use the Speech SDK that Microsoft offers, and our development really was not a straight path: at one point we even got sidetracked building a way to record audio from the user as a step before translating it. But that took us back to square one: doing the actual language detection! Thankfully, we got a lot of help from Glenn Nethercutt from Genesys with using Microsoft Cognitive Services, and it got easier from there.
Azure's language detection currently supports only two candidate languages at a time, and its library does not include acoustic models for every language. The translations Azure provides are also not completely accurate. Most of these services cost money as well, so we had to work within the limited free tiers.
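One way to cope with the two-candidate-language limit is to try candidates in pairs across successive detection attempts. The helper below is a hypothetical sketch of that idea, not our production code; it just chunks a candidate list into the pair-sized groups a detection call could accept.

```python
def candidate_pairs(languages):
    """Split a candidate-language list into groups of at most two, since the
    detection call we used accepts only two candidates at a time. The last
    group holds a single language when the list has odd length."""
    return [languages[i:i + 2] for i in range(0, len(languages), 2)]
```

Each pair would then be passed to a separate detection attempt until one returns a confident match, trading extra round-trips for broader language coverage.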
Accomplishments that we're proud of
We got the translation software working correctly, providing options for patients to select. We also had some success with direct translation and immediate output in written English, but that is a feature we are still improving.
What we learned
We learned a lot about the capabilities of these Microsoft APIs. We learned to try everything, to work fast, and to fail fast so we could move on. We also reinforced the importance of asking for help.
What's next for Emergency Medical Translator
We would like to do more research into other APIs that offer broader language and acoustic-model coverage. Azure is currently upgrading its system to add more languages, so this may work in the future. We ran out of time during this event, but we would like to try IBM's API and the Google Translate API to see whether we can expand the application's functionality.