Inspiration

We were inspired by the lack of accessible and accurate options for diagnosing Alzheimer's disease (AD) and monitoring its progression. The most widely used options today are radioactive biochemical tracers and brief cognitive impairment tests. Looking for an alternative, we noted that most AD patients experience mild to severe cognitive impairment, which manifests as problems with language as the disease progresses. This makes language a quantifiable marker of AD progression that is less invasive, time-consuming, and expensive than biochemical tracers, and more accurate and specific than brief cognitive impairment tests.

What it does

Our app takes audio as input, processes it, and produces graphs from the processed data. First, we convert the audio to text using Android's built-in speech-to-text feature. We then set up a WebSocket connection between our Java client and a Python server so we can send the transcript to our Python program. There, we use Python's Natural Language Toolkit (NLTK), the lexicalrichness library, and Matplotlib to process the data: we remove stopwords, tokenize the text by word, and compute the type-token ratio and related lexical-diversity measures. Finally, we produce graphs of all of the data with Matplotlib, which makes it easy for users to understand the results and gauge their condition.
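As a rough illustration of the graphing step, here is a minimal Matplotlib sketch; the function name and the example metric values are purely illustrative, not taken from the project code.

```python
# Minimal sketch of plotting computed lexical metrics as a bar chart.
import matplotlib.pyplot as plt

def plot_metrics(metrics: dict, out_path: str = "metrics.png") -> None:
    """Render the computed metrics so a user can read them at a glance."""
    names = list(metrics.keys())
    values = [metrics[name] for name in names]
    plt.figure(figsize=(6, 4))
    plt.bar(names, values)
    plt.ylabel("Score")
    plt.title("Lexical richness of transcript")
    plt.tight_layout()
    plt.savefig(out_path)

# Example (illustrative values only):
# plot_metrics({"TTR": 0.62, "Root TTR": 5.1, "Corrected TTR": 3.6})
```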

How we built it

Natural Language Processing

We started with NLTK, the natural language processing library we relied on heavily to turn the audio transcript into meaningful chunks. We first split the text into sentence tokens, then into word tokens, which we tagged by part of speech with the pos_tag function so we could identify the role of each word in the context of its sentence. To keep our output free of unnecessary data, we filtered out words that NLTK pre-categorizes as stop words. From there, we wanted to quantify the cohesion of sentences, so we computed five measures with the lexicalrichness library: word count, unique word count, type-token ratio (TTR), root TTR, and corrected TTR. We then wrote all of this information to a CSV file and created bar graphs from that data.
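The sketch below shows roughly how these steps fit together; the function name and CSV path are our own for illustration, and it assumes the relevant NLTK data packages have been downloaded.

```python
# Minimal sketch of the text-processing step (assumes nltk data: punkt,
# averaged_perceptron_tagger, stopwords have been downloaded).
import csv
import nltk
from nltk.corpus import stopwords
from lexicalrichness import LexicalRichness

def analyze_transcript(text: str, csv_path: str = "metrics.csv") -> dict:
    sentences = nltk.sent_tokenize(text)                      # sentence tokens
    words = [w for s in sentences for w in nltk.word_tokenize(s)]
    tagged = nltk.pos_tag(words)                              # part-of-speech tags
    stop = set(stopwords.words("english"))
    content = [w for w, _ in tagged if w.isalpha() and w.lower() not in stop]

    lex = LexicalRichness(" ".join(content))
    metrics = {
        "words": lex.words,    # word count
        "terms": lex.terms,    # unique word count
        "ttr": lex.ttr,        # type-token ratio
        "rttr": lex.rttr,      # root TTR
        "cttr": lex.cttr,      # corrected TTR
    }
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(metrics.keys())
        writer.writerow(metrics.values())
    return metrics
```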

WebSocket

We set up a Python server and a Java client so we could easily send data back and forth between our Java and Python programs, and we used Ngrok to create a public URL for the server so the two could communicate. The text converted from speech is stored in an ArrayList of strings; to send the data more easily, we concatenated all of the words from the ArrayList into a single string, which we passed as a parameter to our AsyncTask to send it to the Python server.
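A minimal sketch of the server side is shown below, assuming a recent version of the Python websockets library; the handler name and port are illustrative rather than taken from our code.

```python
# Minimal sketch of a websockets server that receives the transcript string.
import asyncio
import websockets

async def handle_transcript(websocket):
    # Receive the concatenated transcript string sent by the Java client.
    transcript = await websocket.recv()
    # In the real app this is where the transcript would be handed to the
    # NLP pipeline; here we just acknowledge what was received.
    await websocket.send(f"received {len(transcript.split())} words")

async def main():
    # Ngrok can expose this local port at a public URL for the Android client.
    async with websockets.serve(handle_transcript, "localhost", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```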

App Interface

We built our app interface using Android Studio. In our activity_main.xml file, we included a toolbar with tabs so the user can switch between the input, data, and home screens of our app.

Challenges we ran into

We planned on using Python to easily process data, but we also wanted to use Java so we could easily build our interface. This created a challenge: we needed to send data back and forth between our Java program and our Python program. That led us to look more into WebSockets, which provide a channel for the data. First, we created our Python server using the Python websockets library. Then, we used Ngrok to expose our Python server to the internet. Creating our WebSocket client in Java took a little more time, since we needed to send and receive data in the background so our app would not freeze whenever data was being transferred. To accomplish this, we wrapped all of our WebSocket communication in an AsyncTask, which kept the app fast and responsive.

Accomplishments that we're proud of

We are proud that we were able to start building a minimum viable product that we can develop further in the future into a much more effective way for both caregivers and AD patients to stay healthy. Learning about natural language processing and how it can be applied in real-world scenarios allowed us to give back to our community, which we are very proud of. We were also able to expand our technical skills, exploring different libraries and tools along the way.

What we learned

Throughout the hackathon, we developed both technical and soft skills. Technically, we explored the functionality of various libraries, worked with WebSockets and servers, and built an app interface. We learned how the semantics of a sentence can be quantified through values such as the type-token ratio, and we delved into techniques such as stemming, chunking, tokenization, and lemmatization as ways to make sense of the order of different parts of speech (a small example is sketched below). On the soft-skills side, we learned to be persistent and to delegate and organize tasks so we could create the most effective solution we could.
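For readers unfamiliar with these terms, here is a small NLTK illustration of stemming, lemmatization, and chunking; the example words and the noun-phrase grammar are ours, and it assumes the punkt, tagger, and WordNet data packages are installed.

```python
# Small illustration of stemming, lemmatization, and chunking with NLTK.
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

words = ["running", "studies", "better"]
print([stemmer.stem(w) for w in words])                    # crude suffix stripping
print([lemmatizer.lemmatize(w, pos="v") for w in words])   # dictionary-based base forms

# Chunking groups tagged words into phrases, e.g. simple noun phrases:
tagged = nltk.pos_tag(nltk.word_tokenize("The quick brown fox jumps"))
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>}")
print(chunker.parse(tagged))
```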

What's next for MetisTracker

In the future, we’d like to incorporate artificial intelligence into MetisTracker so it can suggest treatments and possible medication options to users of the app. Additionally, we’d like to be able to differentiate between different voices so the program can understand the context of a conversation and therefore analyze the data more accurately. We’d also like to integrate MetisTracker with the Google Maps API so caregivers and family members can track their patient’s location.
