Inspiration

By the time you finish reading this sentence, there will be two more diagnosed cases of dementia in the world. By the time you finish reading this submission, there will be a total of 100 new cases. And by 2030, nearly 80 million people across the globe will be affected by this devastating neurological disorder.

The scientific and medical community is racing against the clock to find a treatment for dementia before the disease reaches epidemic proportions. However, roughly 99.6% of clinical trials for dementia treatments have failed. One likely reason is that by the time symptoms emerge, it is already too late to intervene effectively. Efforts are therefore concentrated on diagnosing the disease in its early stages.

Most tools used to detect dementia rely on neuroimaging such as CT, PET, and MRI scans, which are expensive and may dissuade individuals from checking whether they have early signs of the disease.

Our goal at Mobile Memories is to introduce natural language processing as a new, cost-effective way to screen for dementia in its early stages and allow a greater window of opportunity for treatment. We also store the user's audio recordings in the app's Memory Bank so their memories are documented before dementia affects them. Our team chose the aging and resilience track to improve how we diagnose dementia in a cost-effective manner and to preserve what makes us who we are: our memories.

What it does

The Mobile Memories app has two main functions. The first is the natural language processor: an algorithm we developed to analyze speech patterns and check for the earliest signs of dementia, which can appear decades before other cognitive symptoms.

The app collects speech samples by posing a prompt to the user, such as "What's on your mind?" or "How was your day?", along with other customizable questions. Users can choose how often they want to be prompted and over what time period (e.g., once a day vs. twice a month). Once an audio recording is obtained, the app applies a machine learning pipeline built on Google Cloud to extract linguistic metrics such as pause duration and speech segment duration.
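To illustrate how metrics like pause duration and speech segment duration can be derived from a transcript with per-word timestamps, here is a minimal TypeScript sketch. The WordTiming shape, the 0.25-second pause threshold, and the metric names are illustrative assumptions, not the exact values or code used in the app.

```typescript
// Minimal sketch: deriving pause and speech-segment metrics from per-word
// timestamps. The WordTiming shape and the 0.25 s pause threshold are
// illustrative assumptions, not the app's production values.
interface WordTiming {
  word: string;
  start: number; // seconds from the beginning of the recording
  end: number;
}

interface SpeechMetrics {
  meanPauseDuration: number;   // average silence between words, in seconds
  meanSegmentDuration: number; // average length of uninterrupted speech runs
  pauseRate: number;           // pauses per spoken word
}

function computeSpeechMetrics(words: WordTiming[], pauseThreshold = 0.25): SpeechMetrics {
  const pauses: number[] = [];
  const segments: number[] = [];
  let segmentStart = words[0]?.start ?? 0;

  for (let i = 1; i < words.length; i++) {
    const gap = words[i].start - words[i - 1].end;
    if (gap >= pauseThreshold) {
      pauses.push(gap);
      segments.push(words[i - 1].end - segmentStart); // close the current speech run
      segmentStart = words[i].start;                  // and start a new one
    }
  }
  if (words.length > 0) {
    segments.push(words[words.length - 1].end - segmentStart); // final run
  }

  const mean = (xs: number[]) => (xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0);
  return {
    meanPauseDuration: mean(pauses),
    meanSegmentDuration: mean(segments),
    pauseRate: words.length ? pauses.length / words.length : 0,
  };
}
```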

If our algorithm detects a significant decline in speech patterns, the app notifies the user, the assigned caregiver or family member, and the user's physician. From there, the physician can conduct standard mental exams to assess the extent of the symptoms and determine the next steps.
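As a rough illustration of what a "significant decline" check could look like, the sketch below compares the user's recent average pause duration against their own earlier baseline. It reuses the SpeechMetrics shape from the previous sketch; the window size and the 20% threshold are illustrative, not clinically validated values.

```typescript
// Sketch of the decline check: compare recent pause behaviour against the
// user's own baseline. The window size and 20% threshold are illustrative,
// not clinically validated values.
function hasSignificantDecline(
  history: SpeechMetrics[],
  recentWindow = 5,
  threshold = 0.2
): boolean {
  if (history.length < recentWindow * 2) return false; // not enough data yet

  const baseline = history.slice(0, history.length - recentWindow);
  const recent = history.slice(-recentWindow);
  const avgPause = (xs: SpeechMetrics[]) =>
    xs.reduce((sum, m) => sum + m.meanPauseDuration, 0) / xs.length;

  // Trigger the user/caregiver/physician notifications if pauses have
  // lengthened markedly relative to the user's baseline.
  return avgPause(recent) > avgPause(baseline) * (1 + threshold);
}
```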

The second function is the Memory Bank. After audio files are analyzed by the natural language processor, they are stored in the Memory Bank so that users and their loved ones can revisit past memories and relive pleasant experiences. This feature is meant to preserve the identity of the user, so that individuals are characterized not by their disease but by the character and humanity they showed before it. The Memory Bank is empowering because it reminds everyone of the user's authentic identity, not the one imposed by dementia.

How we built it

The project was divided into two main components: 1) developing the machine learning model and 2) building the Mobile Memories app.

For the machine learning model, we used Google Cloud and its available libraries to build a speech analyzer that measures linguistic metrics such as pause duration and speech segment duration. We were able to access real datasets from people living with dementia and used them to train our model, making it more accurate and authentic.
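For context, the sketch below shows one way per-word timestamps could be requested from Google Cloud's Speech-to-Text Node.js client (@google-cloud/speech) and flattened into the WordTiming shape used by the earlier metric sketch. The encoding, sample rate, and language settings are assumptions that would need to match the actual recordings; this is not necessarily the exact configuration our analyzer uses.

```typescript
// Sketch: requesting per-word timestamps from Google Cloud Speech-to-Text.
// The encoding, sample rate, and language are assumptions and must match
// the actual recording format.
import { SpeechClient } from '@google-cloud/speech';

async function transcribeWithTimings(audioBase64: string): Promise<WordTiming[]> {
  const client = new SpeechClient();
  const [response] = await client.recognize({
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
      enableWordTimeOffsets: true, // required for pause/segment metrics
    },
    audio: { content: audioBase64 },
  });

  // Flatten the per-word timings into the shape used by computeSpeechMetrics.
  return (response.results ?? []).flatMap(result =>
    (result.alternatives?.[0]?.words ?? []).map(w => ({
      word: w.word ?? '',
      start: Number(w.startTime?.seconds ?? 0) + (w.startTime?.nanos ?? 0) / 1e9,
      end: Number(w.endTime?.seconds ?? 0) + (w.endTime?.nanos ?? 0) / 1e9,
    }))
  );
}
```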

On the app side, we built Mobile Memories as a React Native app, using Expo CLI to make it available for both Android and iOS. The app contains a variety of features, including audio recording, audio storage in the form of audio diaries, and analytics of the speech patterns extracted from those diaries. For the analytics, we link our model to the user's speech metrics so their cognitive ability can be monitored over time.
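The recording and storage flow could look roughly like the sketch below, which uses expo-av for audio capture and AsyncStorage for a simple local Memory Bank list. The storage key, entry shape, and single-function structure are illustrative; a real screen would start and stop the recording from separate handlers and sync entries beyond the device.

```typescript
// Sketch of the recording flow with expo-av plus a simple local Memory Bank
// list in AsyncStorage. The storage key and entry shape are illustrative,
// and a real screen would start/stop recording from separate handlers.
import { Audio } from 'expo-av';
import AsyncStorage from '@react-native-async-storage/async-storage';

export async function recordMemory(prompt: string): Promise<void> {
  await Audio.requestPermissionsAsync();
  await Audio.setAudioModeAsync({ allowsRecordingIOS: true, playsInSilentModeIOS: true });

  // Start recording with the library's high-quality preset.
  const { recording } = await Audio.Recording.createAsync(
    Audio.RecordingOptionsPresets.HIGH_QUALITY
  );

  // ...the user answers the prompt; stop when they tap "done"...
  await recording.stopAndUnloadAsync();
  const uri = recording.getURI();

  // Append the new entry to the Memory Bank list.
  const entries = JSON.parse((await AsyncStorage.getItem('memoryBank')) ?? '[]');
  entries.push({ prompt, uri, recordedAt: new Date().toISOString() });
  await AsyncStorage.setItem('memoryBank', JSON.stringify(entries));
}
```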

Challenges we ran into

Since we are creating an app to help identify early signs of dementia, we had to determine which speech metrics to analyze. It was a challenge to comb through past research papers for scientific backing on those metrics and to implement the findings in our code.

We also found it challenging to obtain the data needed to develop the machine learning model. Since the model requires a large amount of data, we asked an organization for real recordings. We also had to write a program to parse through the data and extract the records we wanted.
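Since we cannot reproduce the dataset's actual format here, the sketch below only gestures at that filtering step with a hypothetical CSV layout (id, diagnosis, has_audio); the real parser worked with the fields the organization's data actually provided.

```typescript
// Hypothetical parsing step: read a CSV of transcript records and keep only
// the ones with audio. The column layout (id, diagnosis, has_audio) is an
// assumption; the real dataset's format differed.
import { readFileSync } from 'fs';

interface TranscriptRecord {
  id: string;
  diagnosis: string; // e.g. "dementia" or "control"
  hasAudio: boolean;
}

function loadUsableRecords(path: string): TranscriptRecord[] {
  const lines = readFileSync(path, 'utf8').trim().split('\n');
  return lines
    .slice(1) // skip the header row
    .map(line => line.split(','))
    .map(([id, diagnosis, hasAudio]) => ({ id, diagnosis, hasAudio: hasAudio === 'true' }))
    .filter(record => record.hasAudio); // only records we can extract timings from
}
```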

It was also a challenge to familiarize ourselves with React Native and to learn how to use it to build the Mobile Memories app.

Accomplishments that we're proud of

We are very proud of being able to use real datasets from people living with dementia. We think it's important to build this project on a credible, authentic foundation, and being able to access real data and incorporate it into our machine learning model was an encouraging accomplishment for our team.

What we learned

On the technical side, we learned more about 1) app development, 2) how to ground an ML model in scientific findings and data, and 3) how to parse through datasets.

We also developed a deeper sense of empathy for those living with dementia. Researching and developing the Mobile Memories app made us aware of how dementia affects a person's ability to comprehend and interact with the world, and it put the impact of this neurological disorder into perspective. It also taught us to consider inclusivity when building apps and programs in the future, reminding us that technological innovation should include everybody.

What's next for Mobile Memories

We see the opportunities Mobile Memories can open up and plan to improve the machine learning model with more data, possibly collaborating with researchers to better tailor it to current needs. We also plan to integrate an official screening test into the app that, once administered, can be sent to a physician for further review and action. Finally, we hope to do our part in early intervention by adding cognitive games that patients can play in the hope of improving their cognitive abilities.
