Loneliness and depression affect many people in our communities. According to Forbes, in 2019 "More than two in ten adults in the United States (22%) [...] say they always or often feel lonely, lack companionship, or feel left out or isolated." We built this application to address that issue.
What it does
Our iOS mobile application targets mental health care. It lets a user "talk" to any person they choose: they can pick a pre-trained model of a celebrity or supply their own data for someone they know personally. The app creates a digital likeness of that person which acts as a mental care agent. It then talks to the user, in the voice of the person they chose, with the goal of empathizing with and comforting the user while monitoring their mental health via sentiment analysis.
How we built it
We created pre-trained DeepFake videos using the DeepFake Web service.
Step 1: We converted the user's audio into text using Google Speech-to-Text, then ran sentiment analysis on the transcript using Google's sentiment analysis service.
Step 2: The new sentiment score was compared against the previous score to estimate the user's current distress level.
Step 3: The distress level was passed to the iOS device to select the next response to play to the user.
Step 4: The response was played to the user, and the user's reply to the DeepFake agent was captured and fed back into Step 1, closing the loop.
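Steps 2 and 3 above can be sketched as a small scoring function. This is an illustrative sketch, not the app's actual code: the function names and thresholds are hypothetical, and `current_score`/`previous_score` stand in for the sentiment scores (ranging from -1.0 to 1.0) that Google's sentiment analysis returns.

```python
# Hypothetical sketch of Steps 2-3: compare the latest sentiment score to the
# previous one, map the pair to a coarse distress level, and pick a canned
# response for the DeepFake agent to speak. Thresholds are illustrative.

def distress_level(current_score: float, previous_score: float) -> str:
    """Classify distress from the current sentiment score (-1.0 .. 1.0)
    and its change since the user's last utterance."""
    delta = current_score - previous_score
    if current_score < -0.5 or delta < -0.4:
        # Strongly negative, or a sharp drop since the last turn.
        return "high"
    if current_score < 0.0:
        return "moderate"
    return "low"

# Example response lines keyed by distress level (placeholder copy).
RESPONSES = {
    "high": "I'm here with you. Do you want to talk about what's weighing on you?",
    "moderate": "That sounds hard. What happened today?",
    "low": "I'm glad to hear that. Tell me more!",
}

def pick_response(current_score: float, previous_score: float) -> str:
    """Select the next line for the agent to play back to the user."""
    return RESPONSES[distress_level(current_score, previous_score)]
```

In the real pipeline the selected response would be rendered in the chosen person's voice and played over the DeepFake video, and the user's reply would produce the next sentiment score.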
Challenges we ran into
Building DeepFake models in real time was a large hurdle. We were unable to accomplish this, as real-time DeepFake generation is still an open research problem. Instead, we used the DeepFake Web service to map a person's image onto a pre-recorded video and create a DeepFake video ahead of time.
Accomplishments that we're proud of
We completed a working prototype of our application.
What we learned
We learned to use the Google Speech API and Firebase. We also became more aware of how tough it is to deal with mental health issues, and how challenging it is to help people out of loneliness and depression.
What's next for DeepCare
Real-time video generation, speech synthesis, and automatic response generation.