Inspiration

myNursebot was born out of the need to care for my mom, who despises tracking medications or readings on any screen, be it a smartphone or a tablet. She is 76 years old and has had high blood pressure for quite some time. Last year, she was rushed to the hospital in an emergency because of very high blood pressure. The cause was non-compliance with medication, combined with having no way to log symptoms regularly. Had she taken her medications regularly, the emergency visit could have been prevented. She also did not know that her medications had side effects like nausea, leg pain, and confusion. Had she known, she could have consulted her doctor earlier and switched to medications without these side effects.

Wouldn't it be nice if there were a way to log health data with a simple verbal command? It is much easier than picking up a device and opening an app, or writing it down with old-school paper and pen. Our quest began in mid-2016. We searched for easier solutions but could not find any voice-enabled app that could do the job, so we set out to build one ourselves. My wife and I tested the feel of Amazon Alexa with our parents, and they liked it. We surveyed a group of friends who had similar trouble logging symptoms and medications. With a strong background in software and a passion for solving real-world problems, specifically in health, we decided to build a voice skill.

What it does

myNursebot logs health data through voice and generates a health summary. It captures body vitals, symptoms, medications, and appointments using only the user's voice. No screen is required. myNursebot also emails a health summary whenever the user asks for it. The user can choose to share the summary with a doctor at the next visit for a better treatment plan, or share it with loved ones. By logging symptoms regularly, patients can give the doctor a detailed history rather than relying on memory, which can be unreliable.

How we built it

We started with the product definition. In this phase, we defined the feature set and the minimum viable product (MVP) to showcase to our user community and test the waters. After defining the feature set, we proceeded on two parallel paths: the product management team developed the interaction model while the engineering team laid out the architecture.

On the architecture side, we discussed different tool sets, and a heated Java vs. PHP debate went on for a couple of weeks. We knew we needed a response time under 100 ms for the voice application to function normally. Our architecture team ultimately chose to build the server side in PHP. Because Alexa did not provide an official PHP SDK, we wrote our own.
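
At its core, such a handler parses the JSON envelope that Alexa POSTs to the skill endpoint, dispatches on the request and intent type, and returns a speech response in the Alexa response format. The sketch below is a minimal, simplified illustration, not our production SDK; the intent and slot names are hypothetical.

```php
<?php
// Minimal sketch of an Alexa request handler in plain PHP.
// Intent and slot names are illustrative, not production code.

// Alexa POSTs a JSON envelope describing each request.
$body = json_decode(file_get_contents('php://input'), true);

$type   = $body['request']['type'] ?? '';
$speech = 'Sorry, I did not understand that.';

if ($type === 'LaunchRequest') {
    $speech = 'Welcome to myNursebot. What would you like to log?';
} elseif ($type === 'IntentRequest') {
    $intent = $body['request']['intent'];
    if ($intent['name'] === 'AddBloodPressureIntent') {
        // Slot values arrive as strings resolved by Alexa's NLP.
        $systolic  = $intent['slots']['Systolic']['value'] ?? 'unknown';
        $diastolic = $intent['slots']['Diastolic']['value'] ?? 'unknown';
        $speech = "Logged blood pressure $systolic over $diastolic.";
    }
}

// Reply in the Alexa response envelope format.
header('Content-Type: application/json');
echo json_encode([
    'version'  => '1.0',
    'response' => [
        'outputSpeech'     => ['type' => 'PlainText', 'text' => $speech],
        'shouldEndSession' => true,
    ],
]);
```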

On the product management side, the team wrote the interaction model. They tested several models before settling on one, since Alexa has its own nuances in how the voice interface is implemented. Once the interaction model for a particular product feature was stable, the engineers started implementing it.

Testing - We went through the normal cycles of testing (feature tests, regression testing, negative testing, etc.). It took quite some time to foolproof the application, especially when users aborted the current context or spoke out of turn; one way to guard against that is sketched below.
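
For example, a multi-turn dialog has to yield gracefully when the user says "cancel" or "stop" mid-flow, or answers a different question than the one asked. Here is a simplified sketch of that kind of guard, assuming a session-attribute convention of our own invention (the pending_question key is hypothetical):

```php
<?php
// Sketch: keep a dialog from derailing when the user aborts or
// answers out of turn. The 'pending_question' key is hypothetical.

function handleIntent(array $body): array
{
    $intent  = $body['request']['intent']['name'] ?? '';
    $session = $body['session']['attributes'] ?? [];

    // Built-in intents fire whenever the user says "cancel" or "stop",
    // even in the middle of answering a question we asked.
    if ($intent === 'AMAZON.CancelIntent' || $intent === 'AMAZON.StopIntent') {
        return buildResponse('Okay, cancelled. Nothing was logged.', true);
    }

    // If we asked for a glucose reading but a different intent matched,
    // re-prompt instead of misinterpreting the answer.
    if (($session['pending_question'] ?? '') === 'glucose'
            && $intent !== 'AddGlucoseIntent') {
        return buildResponse('Sorry, I was expecting a glucose reading. '
            . 'What is your glucose?', false, $session);
    }

    // ... normal intent dispatch goes here ...
    return buildResponse('Done.', true);
}

function buildResponse(string $text, bool $endSession, array $attrs = []): array
{
    return [
        'version'           => '1.0',
        'sessionAttributes' => $attrs,
        'response'          => [
            'outputSpeech'     => ['type' => 'PlainText', 'text' => $text],
            'shouldEndSession' => $endSession,
        ],
    ];
}
```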

Challenges we ran into

Interaction Model Challenges - The interaction model defines how voice utterances are recognized, interpreted, and converted into data (text) that is meaningful to the app. For example, “Add my blood pressure reading of 120 over 80” converts to reading_type = “blood pressure”, Systolic BP = 120, and Diastolic BP = 80.

  • Reducing the number of utterances - Alexa allows only 200,000 characters for defining utterances. We went through many iterations to reduce the number of utterances while preserving the quality of the application.

  • Foolproofing the interaction model - With a voice interface, the user can say anything they want. For example, if the app asks “What is your glucose?” and the user answers “1:30 PM”, the app needs to react in the right way: either record the glucose as 130 or ask again. Foolproofing the voice app was quite a challenge for the product management team; a sketch of one such guard follows this list.
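
As an illustration of that guarding, the handler can sanity-check a slot value before logging it. The accepted range and the function name below are hypothetical, just to show the shape of the check:

```php
<?php
// Sketch: validate a glucose slot value before logging it.
// The accepted range is illustrative, not clinical guidance.

function parseGlucose(?string $raw): ?int
{
    // Alexa delivers slot values as strings; non-numeric input
    // (e.g. the user answered "1:30PM") must be rejected.
    if ($raw === null || !ctype_digit($raw)) {
        return null;
    }
    $value = (int) $raw;
    // Reject readings outside a plausible mg/dL window.
    return ($value >= 20 && $value <= 600) ? $value : null;
}

var_dump(parseGlucose('130'));    // int(130) -> safe to log
var_dump(parseGlucose('1:30PM')); // NULL     -> re-prompt the user
```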

Architecture Challenges - We aimed for an application response time under 100 ms.

  • Java vs. PHP - Our engineering team debated for more than three weeks, built prototypes, and finally chose PHP as it performed better than Java in our tests.

  • Rules System - myNursebot requires a rules engine that can execute decision trees. It took many weeks to select a rules engine that could deliver responses in under 30 ms, with rules execution that included querying the database in real time. A toy example of such a decision tree is sketched below.
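
To make the idea concrete, here is a toy decision tree evaluated in PHP. It is not our actual rules engine; the node names, thresholds, and messages are all hypothetical. Each node tests a fact (in production, fetched from the database in real time) and routes to the next node or to an action.

```php
<?php
// Toy decision tree, to illustrate the shape of a rules engine.
// Node names, thresholds, and messages are all hypothetical.

$tree = [
    'start'      => ['fact' => 'systolic',     'gte' => 180, 'yes' => 'urgent', 'no' => 'check_dose'],
    'check_dose' => ['fact' => 'missed_doses', 'gte' => 2,   'yes' => 'remind', 'no' => 'ok'],
];

$actions = [
    'urgent' => 'Your reading is very high. Please contact your doctor.',
    'remind' => 'You have missed doses recently. Time to take your medication.',
    'ok'     => 'Reading logged. Everything looks on track.',
];

// In production these facts would come from real-time database queries.
$facts = ['systolic' => 150, 'missed_doses' => 3];

$node = 'start';
while (isset($tree[$node])) {
    $rule = $tree[$node];
    $node = ($facts[$rule['fact']] >= $rule['gte']) ? $rule['yes'] : $rule['no'];
}

echo $actions[$node]; // "You have missed doses recently. ..."
```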

Accomplishments that we're proud of

We are seeing good traction from the user community and the health-provider community. A couple of reviews worth mentioning:

  • “Instead of relying on a single read of vital signs in clinic, I have access to a temporal log of vital signs that makes my decisions sound.”

  • “My grandpa measures blood pressure 3 times a day. myNursebot makes it easier to log the details hands free and create a quick summary to show to the doctor.”

We have seen more than 75 new user signups in a single day, and people are buying into the vision of myNursebot. We are proud of our team's agility: we went from concept to a stable, released product in less than 5 months.

What we learned

Building voice user interfaces (VUIs) requires a different mindset than building graphical user interfaces (GUIs). Product design, implementation, and testing are all quite different for a VUI-based application. Natural language processing (NLP) is still quite new and is going through major development and refinement cycles, which slows down application developers like us. The Alexa and Google Home platforms also differ in how applications are defined, since their architectures differ, so each requires a VUI tailored to it. It is much like developing a native mobile app for the iOS, Android, and Microsoft platforms: each requires a different skill set.

What's next for myNursebot

Being in stealth mode, we cannot reveal our upcoming features. However, our vision is to provide a health companion that patients can talk to and get actionable information from. myNursebot will strengthen its AI algorithms to interact proactively with patients and strives to be a market leader in its category.

Built With

PHP, Amazon Alexa
