QUICCC NextGen presenting: VADER

Who are we?

John Piette: John is a Professor in the Department of Health Behavior and Health Education and Co-Director of the Center for Managing Chronic Disease at the University of Michigan School of Public Health. He is also a VA Senior Research Scientist, Associate Director of the Michigan Diabetes Translational Research Center, and a Fulbright scholar with ongoing projects focused on mobile health and chronic illness care in Latin America. Much of his work focuses on improving patient health monitoring through behavior change. John's position as a global leader in innovation for chronic disease self-management gives us a very strong foundation upon which to develop new and exciting technologies.

Toan Tran: Toan is a software engineer with over 10 years' experience collaborating with scientists and universities in the United States and Australia. He has created numerous behavioral intervention programs, including web-based applications, mobile apps, Interactive Voice Response (IVR) systems, and text messaging (SMS) programs for type 1 and type 2 diabetes, depression, and chronic pain research. Toan graduated from Portland State University with a BS in Computer Science and a background in Artificial Intelligence. Combined with his knowledge and expertise in Amazon Web Services (AWS), Amazon Machine Learning, databases, and security, he is an excellent fit for creating and maintaining the behavioral application with an emphasis on data security.

Maxwell Geisendorfer: A type 1 diabetic of 20 years, Max brings a great deal of disease-management experience to the table, and with it some very valuable perspective as well as passion for the topic. Max graduated from the University of Michigan with a BS in Cognitive Science, which marries together the fields of computer science, linguistics, and psychology; this makes him a particularly good fit for developing software based on natural language interactions, as well as for offering insight regarding patient behaviors and how best to change them positively.

The namesake of our team is the department led by John at the VA: Quality Improvement for Complex Chronic Conditions.

Inspiration

The global prevalence of diabetes is projected to reach 439 million by 2030. Management of type 2 diabetes relies heavily on the individual, requiring them to follow new treatment regimens that are challenging and repetitive. In an effort to lessen the burden on the newly diagnosed diabetic patient, we aim to use the Amazon Echo/Alexa as a disruptive technology to provide gentle reminders and informed recommendations to patients. Given Amazon's growing reach and the availability of its technology, the Echo can seamlessly integrate into people's lives, enabling hands-free documentation of glucose readings and providing personally tailored medical advice. The availability of the Echo within patients' homes provides several opportunities for artificial intelligence (AI) to optimize the experience for the patient and her care partner. One way this could be done is by having an AI agent learn, through patterns in the patient's use of the Echo or in the self-reported glucose readings, when to contact the care partner about the patient, or what message Alexa should give to the patient to encourage adherence to reporting her glucose readings.

What it does

Our technology is a skill for Amazon Alexa: a hands-free glucose log which analyzes your blood glucose data and provides recommendations for treatment and behavior improvement using artificial intelligence. Tentatively, it is named VADER: Voice-Activated Diabetes Event Recorder. It also has the capacity to notify designated caretakers of any urgent blood glucose events encountered by the patient. The goals are two-fold: to encourage better monitoring behavior and to bring long-term blood glucose averages toward a target number.

The user begins a session with VADER by invoking Alexa, ideally immediately following a blood glucose test. The trigger phrase is: "Alexa, open VADER". Alexa then asks the user to report their most recent blood glucose reading, which the user speaks aloud. Alexa logs this value and responds with one of a variety of responses; for example, if a user reports a low (hypoglycemic) number, Alexa may recommend consuming a fast-acting sugar and re-testing in 20 minutes. Following this recommendation, she can set a reminder for the user, and in 20 minutes she will awaken and remind the user to check their BG level. In the case of extreme events (such as acute hypoglycemia), the skill can generate an alert message to send to a designated caretaker or assisting family member. As discussed above, an AI agent could streamline this process by helping to determine whether a particular course of action, such as contacting a care partner, would be worthwhile. Using an AI agent to determine key aspects of the interaction, such as which recommendation to make or when to contact the care partner, allows the system to adapt to the needs of individual patients and respond to changes in those needs over time.
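The response-selection step described above could be sketched as a simple decision function. This is an illustrative sketch only: the threshold values and messages below are hypothetical placeholders, not the project's actual clinical logic or medical guidance.

```javascript
// Hypothetical sketch of response selection after a reported reading.
// Thresholds (mg/dL) and messages are illustrative, not clinical guidance.
function classifyReading(mgDl) {
  if (mgDl < 54) return "severe_low"; // acute hypoglycemia -> alert caretaker
  if (mgDl < 70) return "low";        // recommend fast-acting sugar, re-test
  if (mgDl <= 180) return "in_range";
  return "high";
}

function buildResponse(mgDl) {
  switch (classifyReading(mgDl)) {
    case "severe_low":
      return {
        speech: "That reading is dangerously low. Please treat it now. I am notifying your care partner.",
        notifyCaretaker: true,
        reminderMinutes: 20,
      };
    case "low":
      return {
        speech: "That is a low reading. Consider a fast-acting sugar and re-testing in 20 minutes. Shall I set a reminder?",
        notifyCaretaker: false,
        reminderMinutes: 20,
      };
    case "in_range":
      return {
        speech: "Great, that reading is in range. Keep it up!",
        notifyCaretaker: false,
        reminderMinutes: null,
      };
    default:
      return {
        speech: "That reading is high. Keep an eye on it and re-test later.",
        notifyCaretaker: false,
        reminderMinutes: null,
      };
  }
}
```

In the real skill, an AI agent could replace the fixed thresholds and canned messages with decisions learned from each patient's history.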

We intend it to be simple to use but sophisticated in terms of what it can provide the user: encouraging better monitoring, offering useful recommendations, and automatically connecting the user to those most involved in their care, all hands-free!

How we built it

The basic structure of the interaction with VADER was built using the Alexa Skills Kit provided by Amazon. We defined a simple back-and-forth exchange where the user wakes Alexa, who then asks for a piece of data (a blood glucose reading) and uses that data to make a decision. That decision-making is powered by AWS Lambda, which allows us to run backend functions (Node.js) for this skill without having to worry about server space. Long-term data for the skill is stored in AWS and can be accessed during each invocation of the skill, so Alexa may incorporate this data into any decisions being made.

Challenges we ran into

The Alexa development landscape is currently a bit bare -- the developer community isn't yet very large, and Alexa's basic functionality is still changing and has some notable gaps. For example, we encountered an issue where Alexa does not support recurring reminders: she supports recurring alarms (which simply play a tone, with no associated reminder) and single-occurrence reminders, but not recurring reminders. This is part of the reason that the skill prompts the user to set a reminder: since Alexa cannot handle recurring reminders, we simply set a new one each time the skill is run, which ideally will promote better BG testing adherence in our users.
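The workaround amounts to scheduling a fresh one-shot reminder on every invocation. A sketch of building that request is below; the payload shape follows the Alexa Reminders API as we understand it (a `SCHEDULED_RELATIVE` trigger with an offset in seconds), but field names should be verified against Amazon's current documentation before use.

```javascript
// Sketch of the recurring-reminder workaround: since Alexa lacks recurring
// reminders, each session schedules a new one-shot reminder instead.
// Payload shape is our reading of the Alexa Reminders API; verify against
// Amazon's documentation before relying on it.
function buildRetestReminder(minutesFromNow, locale = "en-US") {
  return {
    requestTime: new Date().toISOString(),
    trigger: {
      type: "SCHEDULED_RELATIVE",          // fire once, relative to now
      offsetInSeconds: minutesFromNow * 60,
    },
    alertInfo: {
      spokenInfo: {
        content: [{ locale, text: "Time to re-check your blood glucose." }],
      },
    },
    pushNotification: { status: "ENABLED" },
  };
}
```

Each time the skill runs, the backend would POST a payload like this to the Reminders endpoint, so regular use of the skill keeps the reminder chain going.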

When dealing with health data, HIPAA is always a concern -- in production, the database behind this skill would have stringent security requirements, and the penalties for HIPAA violations would need to be avoided at all costs. Our group's experience working with the VA has exposed us to rigorous health data security requirements, and we know at the very least that products like AWS GovCloud are sufficient for these elevated requirements.

Accomplishments that we're proud of

Learning the capabilities and limitations of the Alexa platform. Even outside the context of this competition, this has kickstarted our efforts to develop new and innovative interventions for chronic disease. We're pleased with the way we worked around some of these limitations, such as the recurring-reminder issue discussed above. In addition, we are proud of getting our feet wet in a particularly new and exciting development space -- we hope that our early involvement with the Alexa platform can position us to remain pioneers as Amazon expands its capabilities and influence.

What's next?

We plan to expand our capacity with Alexa-based interventions along several vectors. Of particular interest to us is the support of other languages (besides English) -- we have a great deal of experience working in Latin America, and as soon as Amazon rolls out a Spanish-speaking version of Alexa we will be prepared to create a Spanish version of our application.

We may also develop similar applications for competing home assistant devices, such as Google Home, to reach a broader potential population. We are also considering new devices like the Echo Spot, which includes a video display along with the voice component. Expansion to a broader population will also allow us to gather enough data from patient interactions to refine the decision-making process of the AI agent.

Built With: Alexa Skills Kit, AWS Lambda (Node.js), AWS
