Gaia Mental Health Risk Assessment Bot

Background and motivation

1 in 4 people in the UK will experience a mental health problem each year. People are finding it increasingly difficult to deal with mental health problems as the number of people who self-harm or have suicidal thoughts gradually increases [1]. With the stigma around mental health decreasing thanks to campaigns such as #HereForYou, #BPDChat, #ColourTogether and #IWishMyFriendsKnew, more people want to open up and access mental health services. This increased demand has placed a significant strain on current medical resources. With consultant-led mental health services in the UK having waiting times of around 18 weeks [2], getting the right help at the moment it matters most isn’t always possible. During a medical assessment, doctors must work through lengthy questionnaires of 100+ questions to assess an individual’s mental health state, which can be daunting and time-consuming for patients. While there are many organisations with trained individuals who can provide immediate assistance [3], not many people are aware of these services. There is a pressing need to connect individuals with mental health organisations that can provide temporary support until the right medical resources become available, and to cut down on the lengthy process of evaluating an individual’s mental health state in clinic. Any proposed approach must adopt a friendly and non-judgemental manner if we are to continue to encourage people to open up about their mental health issues.

Our solution

Gaia is a friendly chatbot that can perform a real-time assessment of an individual’s mental health state and recommend alternative services to support them until medical resources become available. Gaia makes use of GRiST, a powerful mental health risk assessment tool used by the NHS. GRiST draws on the expertise of 3,000 practitioners, 50,000 users and 500,000 completed assessments, together with machine learning algorithms, to aid the assessment of an individual’s mental health state and decrease assessment time. While the questions in GRiST are blunt and unfriendly, Gaia provides a supportive and sympathetic environment through positive reinforcement during the user-chatbot interaction, including messages such as “Thanks for reaching out Tom. Mental health is just as important as physical health”, which reinforces the user’s decision to reach out and change their situation. Through Gaia, individuals access GRiST in a more conversational and friendly manner that encourages them to vent about how they are feeling without realising that they are completing the assessment. This moves the current GRiST technology forward, making it easily accessible to the general public in the form of a chatbot rather than a tool used only by clinicians. Furthermore, Gaia can help identify individuals at higher risk of harm so that they can be prioritised for care by existing mental health services, lowering the level of resources required by medical services to help individuals with poor mental health. Here we present a relatively early prototype of Gaia and envision further developments that will enhance the user experience. In this first prototype, Gaia provides self-assessment only for suicidal thoughts and helps users connect with organisations that can support them until they can access the appropriate medical resources.

How we built it

We used Python to create an AWS Lambda function that drives the conversation logic behind an Amazon Lex chatbot: Lex collects the user’s answers as slots, and the Lambda function validates them, decides which question to ask next and produces the final risk assessment.
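A minimal sketch of that handler is shown below. It assumes the Lex (V1) Lambda event format, and the intent and slot names (e.g. FirstName) are placeholders for illustration rather than the exact ones in our bot.

```python
# Minimal sketch of a Lex (V1) Lambda code hook. Slot and intent names are
# illustrative placeholders, not the exact names used in Gaia.

def delegate(session_attributes, slots):
    """Hand control back to Lex so it elicits the next required slot."""
    return {
        "sessionAttributes": session_attributes,
        "dialogAction": {"type": "Delegate", "slots": slots},
    }


def close(session_attributes, message):
    """End the conversation with a final message."""
    return {
        "sessionAttributes": session_attributes,
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        },
    }


def lambda_handler(event, context):
    intent = event["currentIntent"]
    slots = intent["slots"]
    session_attributes = event.get("sessionAttributes") or {}

    if event["invocationSource"] == "DialogCodeHook":
        # Validate the answers collected so far and let Lex ask the next question.
        return delegate(session_attributes, slots)

    # FulfillmentCodeHook: all slots filled, return a supportive closing message.
    name = slots.get("FirstName") or "there"
    return close(
        session_attributes,
        f"Thanks for reaching out, {name}. Mental health is just as "
        "important as physical health.",
    )
```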

GRiST

GRiST uses a computational model of psychological processes to represent the structured clinical judgements of multidisciplinary mental-health practitioners. It models clinicians' expertise and how they perceive and assess mental health risk, with the aim of assisting the early detection of mental-health risks. GRiST represents clinical expertise as a hierarchical knowledge structure, elicited in the first instance from interviews with 46 experts. In this model, risk nodes such as suicide are hierarchical 'trees' that are deconstructed into progressively more granular concepts (branches), such as current intention and feelings & emotions, until the input data nodes, or leaves of the tree (e.g. anger), are reached.
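To make the hierarchy concrete, the fragment below sketches such a risk tree as a nested Python structure; the branch and leaf names are illustrative only and do not reproduce the actual GRiST knowledge base.

```python
# Illustrative fragment of a hierarchical risk tree in the GRiST style.
# Node names are examples only, not the real GRiST knowledge base.
risk_tree = {
    "suicide": {                                     # top-level risk node
        "current intention": {                       # branch
            "frequency of suicidal thoughts": None,  # leaf (user-answered datum)
            "plans made to end own life": None,
        },
        "feelings & emotions": {                     # branch
            "anger": None,                           # leaf
            "hopelessness": None,
        },
    },
}


def leaves(tree):
    """Walk the tree and yield the leaf data nodes (the questions actually asked)."""
    for name, subtree in tree.items():
        if subtree is None:
            yield name
        else:
            yield from leaves(subtree)
```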

GRiST is being used by various NHS secondary Mental Health Trusts, charities and IAPT services. Recently, a version tailored to the needs of paramedics, A&E and other front-line agencies was introduced to enable non-experts to assess the initial risk of mental health patients in the same way that clinicians do. This allows high-risk patients to be directed to the right place for detailed monitoring. It is this version of GRiST that we implement here in Gaia, since the assessment is conducted by the users themselves; however, we add our own conversational layer on top of it. For more information about GRiST and its latest version myGRACE, we direct the reader to the following references: Rezaei-Yazdi, Ali, and Christopher Buckingham. "Introducing a pilot data collection model for real-time evaluation of data redundancy." Procedia Computer Science 96 (2016): 577-586; and Rezaei-Yazdi, Ali, and Christopher Buckingham. "Capturing Human Intelligence for Modelling Cognitive-based Clinical Risk Assessment." Communications in Computer and Information Science, no. 732, Springer.

Challenges we ran into

There were many problems with slot-type recognition of user input, where the chatbot would execute the error handler without any errors appearing in the CloudWatch logs. This became more pronounced after we added more than 10 slots. We ran out of time to resolve these issues; however, we are still keen to use the Amazon Lex platform as it would then be easy to extend the bot to Alexa. Alternative approaches would be to implement several intents so that the chatbot did not throw errors, or to create a separate chatbot for each set of questions.

As it stands, we have implemented a hierarchical sequence of questions, which, from a technical standpoint, was the most difficult part of the submission. We needed to ensure that we only asked certain questions based on the responses to previous questions. For example, we would not ask about previous attempts if the user answered no to ‘Have you previously attempted to end your own life?’ To facilitate this conditional hierarchy, we marked every slot for the intent as required, so that we could store the answers and leverage the slot types for validating user responses. In the Lambda function, we inspected the responses collected so far to decide whether to set default values for the dependent questions or to use the platform's native slot-elicitation mechanism. As we did not have sufficient data for the GRiST API call, we implemented a function that uses the relative question weightings to return a risk assessment; a sketch of both parts is given below.
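The sketch below illustrates both workarounds. The slot names, answer encodings, weights and thresholds are placeholders for illustration; the real values in our bot differ.

```python
# Sketch of the two workarounds described above. Slot names, answer encodings
# and weights are illustrative placeholders, not Gaia's exact configuration.

# 1) Conditional hierarchy: if a gating question is answered "no", pre-fill its
#    dependent (required) slots with a default so Lex never asks them.
DEPENDENT_SLOTS = {
    "PreviousAttempt": ["NumberOfAttempts", "MostRecentAttempt"],
}


def prefill_dependent_slots(slots):
    """Fill dependent slots with a default value when their gating answer is 'no'."""
    for gate, dependents in DEPENDENT_SLOTS.items():
        if (slots.get(gate) or "").lower() == "no":
            for dependent in dependents:
                if slots.get(dependent) is None:
                    slots[dependent] = "not applicable"
    return slots


# 2) Stand-in risk assessment: without enough data for the GRiST API call, we
#    combine the answers using relative question weightings instead.
QUESTION_WEIGHTS = {
    "SuicidalThoughts": 0.4,
    "PreviousAttempt": 0.35,
    "Hopelessness": 0.25,
}


def assess_risk(answers):
    """Return 'low', 'medium' or 'high' from weighted yes/no style answers."""
    score = sum(
        weight for question, weight in QUESTION_WEIGHTS.items()
        if (answers.get(question) or "").lower() in ("yes", "often", "always")
    )
    if score >= 0.6:
        return "high"
    if score >= 0.3:
        return "medium"
    return "low"
```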

Accomplishments that we're proud of

  - Deciding on branding: coming up with the name and logo for our chatbot was challenging. We wanted to incorporate the GRiST logo and the AWS squirrel mascot to show the fusion of both technologies coming together to tackle mental health issues.
  - Creating a video that highlights the problem space and how we see the technology being used in practice, with a neat and clean brand.
  - Building a team of individuals with different research backgrounds, including neuroscience, applied mathematics and computer science, to provide a multidisciplinary solution to tackling mental health issues.

What we learned

While our aim is to make the GRiST questions more conversational, we have learnt that some questions must still be asked plainly. For example, it is important to ask whether someone is thinking of committing suicide in a clear and unambiguous manner. Researching how to ask questions about mental health has been eye-opening, and we have come to the realisation that, for an accurate risk assessment, not all of the GRiST questions can be rephrased in a friendly and conversational manner.

What's next for Gaia Mental Health Risk Assessment

Currently, Gaia can run through a small number of the GRiST questions related to suicide. We implemented a link to the GRiST platform in case of an error with the chatbot, and we provide the contact details of PAPYRUS, a charity whose helpline is staffed by people trained to help individuals having suicidal thoughts. We aim to expand Gaia so that it embeds all of the questions in GRiST and contains the contact details of a variety of organisations trained to help individuals with mental health issues. To do this, we aim to create a chatbot for each topic in the mind map above and trigger these chatbots from utterances in the initial user-chatbot interaction. If the user tells Gaia that they are having suicidal thoughts, Gaia will then run the chatbot that covers all of the questions on suicide, or on any other topic that the user has on their mind. By creating a dedicated chatbot for each topic, we can extend the use of chatbots for self-assessment in the medical space.

At the end of the assessment, and by making use of GRiST, we hope to be able to classify an individual’s mental health risk as low, medium or high, and provide them with the support they need accordingly. For example, individuals assessed as low risk will be given the contact details of support forums where they can get in touch with others who also have concerns about their mental health, while individuals assessed as high risk will be given the details of organisations that can provide immediate support until medical resources become available. In addition, we aim to increase the utterances for each question so that the user has more freedom in expressing their thoughts and feelings when interacting with Gaia. Again, this aims to make the user feel like they are talking to a friend, as opposed to running through a clinical assessment. Moreover, we aim to modify the GRiST questions to be more friendly and conversational in the hope that this will encourage users to open up. However, we are aware that some questions cannot be made more conversational and must be asked clearly; for example, it is important to ask an individual who is having suicidal thoughts whether they are thinking of committing suicide very clearly [4].
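As a rough illustration of this planned triage step, the sketch below maps the assessed risk tier to the kind of support Gaia would offer. The messages and organisation details are placeholders, not final wording.

```python
# Illustrative mapping from risk tier to support offered; placeholder wording only.
SUPPORT_BY_RISK = {
    "low": "Here are some peer-support forums where you can talk to others "
           "who share similar concerns.",
    "medium": "It may help to speak to someone. Here are organisations that "
              "offer support while you wait for an appointment.",
    "high": "Please consider calling the PAPYRUS helpline now. Would you like "
            "me to show you their contact details?",
}


def support_message(risk_tier):
    """Return the support message for a risk tier, defaulting to 'medium'."""
    return SUPPORT_BY_RISK.get(risk_tier, SUPPORT_BY_RISK["medium"])
```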
Furthermore, we aim to build on flagging up questions that indicate an individual’s life is at immediate risk; these questions will be chosen carefully through research and conversations with clinical experts. For example, individuals having suicidal thoughts will be put through to a helpline immediately if their answers indicate a high risk of suicide, in which case Gaia will skip the remaining questions in the assessment to provide the user with the support they need. In addition, we aim to have Gaia send a text to a loved one about the user’s current situation; this group of people will be chosen by the user when they sign up. Moreover, this information can be used to forward individuals at high risk to medical health services so that they can be prioritised on the waiting list, helping to filter the increasing demand on mental-health services.

The immediate next step for Gaia is to add the chatbot to the GRiST web platform and allow users to access it from home. After a short test period, we want to show the prototype to the UK National 111 helpline and offer telephone operators the ability to run through the smallest set of questions before coming to an outcome for the person they are speaking to. Once this proves successful, and we have been able to iterate through solutions to any issues raised, we would look to embed Gaia within popular online platforms such as Facebook, Twitter and Slack, and even on the National Health Service website in the UK. Gaia will be accessible across different digital devices such as desktops, laptops, smartphones and tablets, through platforms that users are comfortable using. A further development would be to add voice interaction through Alexa for users with poor vision or who have difficulty typing.

References:

  1. https://www.mind.org.uk/information-support/types-of-mental-health-problems/statistics-and-facts-about-mental-health/how-common-are-mental-health-problems/
  2. http://www.nhs.uk/NHSEngland/AboutNHSservices/mental-health-services-explained/Pages/accessing%20services.aspx
  3. http://www.nhs.uk/Service-Search/Mental-health-information-and-support/LocationSearch/330
  4. https://www.papyrus-uk.org/help-advice/im-worried-about-someone/starting-the-conversation
