The recent wave of self-isolation brought on by COVID-19 has created an urgent need for self-care and monitoring. While measures have been taken to address mental health broadly, there is a lack of mental health platforms targeted at the welfare of elderly people.

What it does

The app receives speech audio input from the user and classifies it into an emotional state based on the user's tone of voice. From this classification, the app produces a custom list of questions designed to assess anger, depression, and anxiety levels. All communication occurs via speech. After analyzing the responses and quantifying the user's anger, depression, and anxiety, it produces an action-based recommendation to help the user cope. The algorithm also takes into account the user's age and mobility level, and it immediately surfaces suicide hotline resources if it detects statements of self-harm, hopelessness, or suicidal intent.
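The flow above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the routing logic, not the app's actual code: the keyword list, question sets, and function names are all assumptions made for the example.

```python
# Hypothetical sketch of the question-routing step: pick a follow-up
# question set for the detected emotion, but escalate to crisis resources
# first if the transcript contains self-harm language. All names and
# contents are illustrative, not the app's real data.
CRISIS_KEYWORDS = {"suicide", "hopeless", "self-harm", "end my life"}

QUESTION_SETS = {
    "anger": ["What triggered these feelings today?"],
    "sadness": ["How have your sleep and appetite been lately?"],
    "anxiety": ["What situations make you feel most uneasy?"],
}

def next_step(emotion: str, transcript: str) -> dict:
    """Choose follow-up questions, escalating on crisis language."""
    if any(kw in transcript.lower() for kw in CRISIS_KEYWORDS):
        return {"action": "escalate", "resource": "suicide hotline"}
    return {"action": "ask", "questions": QUESTION_SETS.get(emotion, [])}
```

Checking the transcript for crisis cues before anything else mirrors the behavior described above, where hotline resources take priority over the normal question flow.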

How we built it

Our process began with intensive research, followed by curating a list of questions to ask the user based on guidance from reputable sources, including the World Health Organization, the Centers for Disease Control and Prevention, Mental Health America, and a multitude of academic papers. These questions were mostly open-ended and designed to assess characteristics of a user's response.

We used the Empath API to gather information on cues in tone of voice (anger, sadness, anxiety) and IBM Watson to analyze the content of the speech input. This information was then fed into a Python script that makes action-based recommendations based on the user's age, mobility, and mood. We used FastAPI to create an endpoint that our React frontend can access, using these recommendations to interact with the user in real time.
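The recommendation step fed into that endpoint might look something like the following. This is a simplified, hypothetical sketch of how mood, age, and mobility could be combined; the rules and suggestion strings here are invented for illustration, not the scoring the app actually uses.

```python
# Illustrative sketch of the recommendation logic: combine the detected
# mood with the user's age and mobility to pick an action-based suggestion.
# Rules and strings are hypothetical placeholders.
def recommend(mood: str, age: int, mobility: str) -> str:
    """Return a coping suggestion tailored to mood and mobility."""
    if mood == "anxiety":
        return "guided breathing exercise"
    if mood == "anger":
        # A physically active suggestion only if mobility allows it.
        return "short walk outside" if mobility == "high" else "seated stretching"
    if mood == "sadness":
        return "call a friend or family member"
    return "keep a short gratitude journal"
```

In the app, a function along these lines would sit behind the FastAPI endpoint so the React frontend can request a recommendation after each round of questions.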

We analyzed the seven quality-of-life domains to open dialogue and shape our recommendations. In addition to our customized recommendations, we link users to a centralized network of resources, including applications, videos, articles, and events, to continue supporting their journey.

Challenges we ran into

Fetching the audio file into the back end proved to be a somewhat cumbersome and complicated process. Currently we use file encoding to transmit it as text, but this is not very efficient. The Empath API also limits the size of audio files that can be submitted, so refinements would be needed before deploying the app to production. Moreover, we had initially planned to use Voiceflow as our main front-end component; however, it did not give us the flexibility to integrate the custom backend API we needed, so we had to switch to building our own web app halfway through. As a result, we were not able to finish all of our front-end code.
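The encode-as-text workaround can be sketched as follows. The size cap here is an assumed placeholder (the actual Empath limit is not stated above), and the function name is illustrative.

```python
# Minimal sketch of the workaround: base64-encode the .wav bytes so they
# can travel as text, guarding against an assumed upload size cap before
# calling the tone-analysis API. The 2 MB limit is a placeholder, not the
# real Empath limit.
import base64

MAX_BYTES = 2 * 1024 * 1024  # hypothetical size cap

def encode_audio(wav_bytes: bytes) -> str:
    """Return the audio as base64 text, rejecting oversized files."""
    if len(wav_bytes) > MAX_BYTES:
        raise ValueError("audio exceeds the API size limit; trim or compress")
    return base64.b64encode(wav_bytes).decode("ascii")
```

Base64 inflates the payload by roughly a third, which is one reason this approach is inefficient compared with sending the file as a binary multipart upload.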

Accomplishments that we're proud of

In less than 36 hours, we were able to build a fully functional back-end program! The application is able to receive a .wav file, classify the user into an emotional state based on the tone of their voice, provide a series of questions based on that classification, and produce a final recommendation based on the content of the user's answers and previously stated mobility levels. Moreover, even with the time crunch, we were able to develop preliminary code for frontend-backend communication.
As a team, we were very adaptive, and we communicated and collaborated effectively while supporting each other. We respected each other's commitments, time zones, and skill ranges, and we made sure everyone felt able to raise questions, feedback, and concerns and to make a meaningful contribution.

What we learned

As a whole, the process of conceptualizing and developing an idea from scratch delivered a lot of insight into product strategy and marketing. Moreover, for much of the team this was their first time developing and assembling a full-stack web application, especially using tools such as the FastAPI framework.

What's next for Mealth

We hope to be able to expand this project in several ways.

  • Develop a mobile application to increase accessibility among the 55+ population.
  • Build an Alexa skill to integrate our capabilities with Alexa's virtual assistant.
  • Add automated testing to improve the reliability of our backend API.
  • Conduct further research to incorporate more emotional states and combinations.
  • Support different languages and analyze changes in sentiment.
