The pandemic has greatly accelerated existing trends towards delivering civil services like education remotely over the internet, without requiring in-person contact. Health-care services differ in that they traditionally require one-on-one interactions between providers and patients, and this model cannot scale to meet the drastically increased demand for both physical and mental health services from a population coping with a global pandemic. The substantial, rapid advances being made in machine learning and natural language understanding today can greatly multiply the efforts of mental health professionals by giving patients access to mental health self-management, self-care, education, knowledge, skills and assessments at any time and place they are needed, using a familiar one-to-one social interaction mode, while maintaining a desired level of privacy and confidentiality.
Selma is a multimodal conversational user interface (CUI) that provides an inclusive interface to self-management tools: medication trackers, mood and symptom trackers, dream and sleep journals, time, activity and exercise trackers, personal planners, reliable knowledge bases on health conditions and diseases, and similar tools used in the management of chronic physical and mental conditions like ADHD or chronic pain, where self-management skills for life activities are critical.
Selma follows in the tradition of 'therapy bots' like ELIZA, but is updated with powerful ML-trained NLU models for interacting with users in real time using both typed text and speech. Patients interact with Selma using simple natural-language commands or questions and enter journal, medication or symptom-tracking entries by speech or text. The captured audio and text are analyzed by NLU models trained to extract the relevant details spoken by the patient, such as medication intake, mood, activities and symptoms, which are then added to the user's self-management journals.
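As an illustrative sketch of this flow, the extracted details can be modeled with algebraic data types before being appended to a journal. The type and function names below are assumptions for illustration, not Selma's actual API, and the toy pattern matcher merely stands in for the trained NLU model:

```fsharp
// Illustrative only: hypothetical shapes for extracted self-management details.
type SelfManagementEntry =
    | MedicationIntake of drug: string * doseMg: int
    | MoodRating of score: int
    | SymptomReport of symptom: string * severity: int

/// Toy stand-in for the trained NLU model: map a few recognized
/// utterance patterns to a structured journal entry.
let extractEntry (utterance: string) : SelfManagementEntry option =
    match utterance.ToLowerInvariant().Split ' ' with
    | [| "took"; dose; "of"; drug |] when dose.EndsWith "mg" ->
        match System.Int32.TryParse (dose.TrimEnd ('m', 'g')) with
        | true, mg -> Some (MedicationIntake (drug, mg))
        | _ -> None
    | [| "mood"; "is"; score |] ->
        match System.Int32.TryParse score with
        | true, s -> Some (MoodRating s)
        | _ -> None
    | _ -> None

// extractEntry "took 200mg of ibuprofen"
// → Some (MedicationIntake ("ibuprofen", 200))
```

Returning an `Option` keeps unrecognized utterances explicit: the dialogue layer can ask the user to rephrase rather than silently recording a bad entry.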
Health-care software must by necessity be extremely security-conscious and comply with regulatory standards like HIPAA, but as user-facing software it must also strive to be accessible, easy to use and not overly frustrating for its users. Multimodal biometric authentication is a great fit for dialogue systems and chatbots in sectors like healthcare and finance: it balances a high level of security with ease of use, without requiring complex passwords or authentication schemes.
What it does
How I built it
Selma is written in F#, running on .NET Core and using PostgreSQL as the storage back-end. The Selma front-end is a browser-based CUI which applies natural language understanding to both text and speech, together with voice, text and graphic output using HTML5 features like the Web Speech API, to provide an inclusive interface to self-management data for one or more self-management programs the user enrolls in. The front-end is designed to be accessible to all mobile and desktop users and does not require any additional software beyond an HTML5-compatible browser.
The CUI, server logic and core of Selma are written in F# and make heavy use of functional language features: first-class functions, algebraic types, pattern matching, immutability by default, and avoiding nulls using Option types. This yields code that is concise and easy to understand and eliminates many common coding errors, an important property for health-care management software. CUI rules are implemented declaratively using F# pattern matching, which greatly reduces the complexity of the branching logic required for a rule-based chatbot.
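A minimal sketch of what such declarative rules can look like, assuming hypothetical `Intent` cases that are illustrative rather than Selma's actual rule set:

```fsharp
// Illustrative only: a rule-based CUI expressed as pattern matching over
// an algebraic intent type instead of nested if/else branching.
type Intent =
    | Greet
    | AddJournalEntry of text: string
    | QueryKnowledgeBase of topic: string
    | Unknown of utterance: string

/// Each rule is one match case; adding a rule means adding a case,
/// and the compiler warns if any intent is left unhandled.
let respond intent =
    match intent with
    | Greet -> "Hello! How are you feeling today?"
    | AddJournalEntry text -> sprintf "Added to your journal: %s" text
    | QueryKnowledgeBase topic -> sprintf "Here is what I found about %s." topic
    | Unknown u -> sprintf "Sorry, I didn't understand '%s'." u
```

Because the match is exhaustive over the `Intent` type, forgetting to handle a new kind of user request becomes a compile-time warning rather than a runtime dead end in the conversation.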
The Selma server is designed around a set of micro-services running on the OpenShift Container Platform which talk to the client and store data in the storage backend.
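As one hedged example of the storage side, a micro-service persisting a journal entry to PostgreSQL from F# could use the Npgsql ADO.NET driver roughly as below. The table and column names are assumptions for illustration, not Selma's actual schema:

```fsharp
// Illustrative sketch: persist a journal entry via Npgsql.
// The journal_entries table and its columns are hypothetical.
open Npgsql

let saveJournalEntry (connString: string) (userId: int) (entry: string) =
    use conn = new NpgsqlConnection (connString)
    conn.Open ()
    use cmd =
        new NpgsqlCommand (
            "INSERT INTO journal_entries (user_id, body, created_at) \
             VALUES (@u, @b, now())",
            conn)
    cmd.Parameters.AddWithValue ("u", userId) |> ignore
    cmd.Parameters.AddWithValue ("b", entry) |> ignore
    cmd.ExecuteNonQuery () |> ignore
```

Parameterized queries like this also matter for the security posture discussed above, since they rule out SQL injection from user-supplied journal text.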