Our inspiration for the Nightingale concept came from two sources.
The first was our interest in applying speech recognition technology to a tangible purpose, one that delivers real-world value over and above a novelty factor. The second was our desire to help deliver greater efficiencies in a hospital setting, with the aim of achieving a better level of care for patients.
The Nightingale concept we will be presenting today is the product of speaking with practising nurses, hospital clinicians, and former hospital patients.
What it does
In a nutshell, Nightingale’s raison d’être is freeing time to care.
Nightingale gives nurses more time to focus on providing hands-on clinical care to patients by automating responses to simple, non-clinical questions asked by hospital patients. These include questions such as “what time is my operation?” or “when are visiting hours?”, which do not need to be answered by a busy nurse who has more pressing clinical issues to attend to.
In providing an automated real-time response to such questions, Nightingale also ensures that patients receive a better quality of service from their healthcare provider, as it eliminates what is often a long wait for answers to simple non-clinical queries.
How we built it
Components: Intel Edison, IBM Watson, bespoke Q&A database + matching logic, Bluetooth speaker, microphone.
We started by using an Intel Edison as the platform for our stand-alone IoT device, which can facilitate speech-to-text and text-to-speech conversion using IBM Watson, provide basic patient vitals readings, and interact with remote databases as well as with existing hospital hardware and systems. We chose to build a complete all-in-one package for the practical usability benefit this offers, as it allows us to take the full burden of set-up and maintenance away from the client. We also built a remotely hosted bespoke Q&A database containing questions commonly asked by patients, in their many phrasings, which are routed to relevant answer templates by a matching algorithm based on a vector space model.
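The matching step can be sketched as a simple vector space model: each question is reduced to a bag-of-words term-frequency vector, and the stored question templates are ranked by cosine similarity. This is an illustrative reconstruction under our own naming, not the production code:

```python
import math
import re
from collections import Counter

def vectorise(text):
    """Tokenise a question and count term frequencies (bag-of-words)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine of the angle between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(question, templates):
    """Return the (template, score) pair closest to the patient's question."""
    qv = vectorise(question)
    return max(((t, cosine_similarity(qv, vectorise(t))) for t in templates),
               key=lambda pair: pair[1])
```

With templates like "what time is lunch" and "when are visiting hours", a query such as "when is lunch served" scores highest against the lunch template, which is exactly why differently worded versions of the same question can share one answer.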
When a patient asks a question (e.g. “what time is lunch?”), it is recorded and saved as an audio file. Watson then converts the audio file to text, outputting JSON with the question text and confidence measure. The Q&A database takes this question and looks for a good match, as patients may ask the same question in different ways (e.g. “what time is lunch?” is the same question as “when is lunch?”). The database then returns a matching answer (e.g. “lunch is served at 13:30 every day”), which is then converted back into speech by Watson and spoken back to the patient via the Nightingale device speaker.
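The hand-off from Watson to the Q&A database hinges on parsing that JSON output. A minimal sketch, assuming the standard Watson Speech to Text v1 `/recognize` response shape (a `results` list of `alternatives`, each carrying a `transcript` and a `confidence`); the function name is our own:

```python
import json

def extract_transcript(watson_json):
    """Pull the top transcript and its confidence out of a Watson
    Speech to Text response (v1 /recognize JSON shape)."""
    data = json.loads(watson_json)
    alternative = data["results"][0]["alternatives"][0]
    # Watson pads transcripts with trailing whitespace; confidence
    # may be absent on interim results, hence .get().
    return alternative["transcript"].strip(), alternative.get("confidence")
```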
If a patient asks a question that has not been asked before, and no match is found in the existing Q&A database, the question is recorded in the database and an alert is raised to the on-duty nurse. If the question is asked frequently enough, the Q&A database is updated with the relevant answer, or the question is routed to an existing answer for a similar query.
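The routing decision above amounts to a threshold check on the match score. The 0.4 cut-off, function names, and fallback wording below are illustrative assumptions rather than values from our build:

```python
def answer_or_escalate(question, match, score, answers,
                       unmatched_log, alerts, threshold=0.4):
    """Answer from the Q&A database when the match is confident enough;
    otherwise log the question and alert the on-duty nurse.
    `match` and `score` come from the matching step; 0.4 is an
    assumed cut-off that would be tuned empirically."""
    if score >= threshold and match in answers:
        return answers[match]
    unmatched_log.append(question)          # stored for later review
    alerts.append(f"Unanswered patient question: {question!r}")
    return ("I'm not sure - a nurse has been notified "
            "and will be with you shortly.")
```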
Every question asked by a patient is also stored in a database, complete with the patient’s unique identifier and a date-time stamp, enabling business intelligence capabilities. This can be as simple as generating a report that shows the ward manager key diagnostics, such as increases in the incidence of certain questions being asked. This information can then be used to improve patient experience through simple initiatives, like providing certain information to patients as part of the admission process. There is also potential for more complex analytics as the database becomes richer, which can be used to learn more about different patient groups.
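A minimal sketch of such a question log, using SQLite purely for illustration (our remote database differs); the report query is the kind of simple diagnostic a ward manager might see:

```python
import sqlite3
from datetime import datetime, timezone

def open_log(path=":memory:"):
    """Open (or create) the question log. In-memory by default for the demo."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS question_log (
                        patient_id TEXT,
                        question   TEXT,
                        asked_at   TEXT)""")
    return conn

def log_question(conn, patient_id, question):
    """Store a question with the patient's identifier and a UTC timestamp."""
    conn.execute("INSERT INTO question_log VALUES (?, ?, ?)",
                 (patient_id, question,
                  datetime.now(timezone.utc).isoformat()))
    conn.commit()

def question_frequency_report(conn):
    """Ward-manager report: how often each question has been asked."""
    return conn.execute("""SELECT question, COUNT(*) AS n
                           FROM question_log
                           GROUP BY question
                           ORDER BY n DESC""").fetchall()
```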
Challenges we ran into
Our aim from the outset was to use IBM Watson technology for both the speech-to-text and text-to-speech applications. In practice, we ran into issues with the speech-to-text application, which initially proved time-consuming to resolve. Given the intense time pressure of the Hackathon setting, we decided to temporarily alter our approach and use Google’s conversational API for speech-to-text, while keeping IBM Watson for text-to-speech. With this workaround in place, we were able to focus on developing other key aspects of the concept. Once these were more fully developed, we went back to the speech-to-text errors we had encountered, and found the problem was the result of an API call issue combined with an incorrect audio bit rate. Once we had diagnosed these issues, we implemented the relevant bug fixes, which allowed us to use IBM Watson for the full concept offering, in line with our original plan.
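The bit-rate issue is worth illustrating: raw PCM audio sent to Watson's v1 `/recognize` endpoint must declare its sample rate in the `Content-Type` header, otherwise the service decodes the stream at the wrong rate and transcription quality collapses. A minimal sketch that only builds the request headers (the token handling here is a simplified assumption, not our actual authentication flow):

```python
def recognize_headers(access_token, sample_rate_hz=16000):
    """Headers for a Watson Speech to Text v1 /recognize call with raw PCM.
    audio/l16 has no self-describing header, so the sample rate must be
    declared explicitly - omitting or mismatching it was the root of our
    'bit rate' bug."""
    return {
        "Authorization": f"Bearer {access_token}",        # simplified auth
        "Content-Type": f"audio/l16; rate={sample_rate_hz}",
    }
```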
Accomplishments that we’re proud of
We are proud of our overall organisation, strong project management, and teamwork from the outset of the Hackathon. All of these were key to delivering an MVP we are proud of within an incredibly tight deadline.
What we learned
We learned how to integrate a variety of Python packages with IBM Watson and our bespoke Q&A database, and how to combine these with the Intel Edison hardware to form a single system made up of those three components. We also learned how to implement a number of minor bug fixes in our text-mining algorithm to achieve consistently accurate output across a variety of inputs.
What’s next for Nightingale
For the next phase of the Nightingale concept, we plan to explore the opportunities presented by IBM Intu technology. We want to use Intu’s machine learning and AI capabilities to expand the Q&A database organically, and use its emotional intelligence technology to implement an intuitive user feedback diagnostic that does not require manual feedback prompts. This will ensure that Nightingale is a continually evolving smart application that effectively tailors itself to the needs of the user.
We also plan to explore the possibility of using Nightingale to deliver some aspects of routine clinical care. Specifically, we would like to carry out clinical trials where Nightingale will prompt patients to submit routine vitals readings such as temperature and blood pressure, and guide them through the process of doing so, before uploading the data captured to the patient’s electronic file. We would seek to compare the data set captured by Nightingale to that logged by hospital staff, in the hope of proving a strong correlation between the two. This would allow us to develop a strong case for clinical application, as this would deliver an even more significant time saving to nurses than the existing non-clinical Q&A offering.