Around the world, the loss of social interaction has taken a toll on people as mental health has come to the forefront of our attention. In the midst of a pandemic that keeps us socially distanced, we need solutions that improve our mental health now more than ever.

What it does

Kare N Care leverages video streaming to connect people with one another, uplifting our users' mental health while also providing access to professional emotional support. The app also analyzes participants' emotions via voice recognition.

How we built it


We use React to build the user interface and host the application on Google App Engine. For emotion classification, we built a neural network model on Google Colab using a compute instance hosted on Google Compute Engine. The API call is written in Python and hosted on Google Cloud Functions. The output is streamed through Dataflow in 10-second windows so the participant's voice can be analyzed in near real time. Since working from home is a new trend, we use Twilio to notify supporters if they happen to be away from their desks.
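The 10-second windowing idea can be sketched in plain Python (this is a simplified illustration, not our actual Dataflow pipeline; the sample rate is an assumption for the example):

```python
# Sketch: group an incoming audio stream into fixed 10-second windows,
# so each window can be sent off for emotion analysis independently.
# SAMPLE_RATE is an illustrative assumption, not the app's actual rate.

SAMPLE_RATE = 16_000            # samples per second (assumed)
WINDOW_SECONDS = 10
WINDOW_SIZE = SAMPLE_RATE * WINDOW_SECONDS

def window_stream(samples, window_size=WINDOW_SIZE):
    """Yield successive full windows; the trailing partial window is dropped."""
    for start in range(0, len(samples) - window_size + 1, window_size):
        yield samples[start:start + window_size]

# Example: 25 seconds of audio yields two full 10-second windows.
stream = [0.0] * (SAMPLE_RATE * 25)
windows = list(window_stream(stream))
```

In the real pipeline, Dataflow's built-in fixed windowing handles this grouping over the unbounded stream.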


The data is collected from spoken-emotion-recognition datasets covering five emotion classes ranging from angry to amused. The audio is transformed into spectrogram images, which are then used to train and validate the model. We use Convolutional Neural Network techniques to build the model.
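The audio-to-spectrogram step can be sketched with NumPy (the frame size and hop length below are illustrative choices, not our exact training parameters, and the CNN itself is omitted):

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Return a magnitude spectrogram of shape (num_frames, frame_size//2 + 1).

    Each row is the magnitude of the FFT of one Hann-windowed frame;
    stacked rows form the image fed to the CNN.
    """
    window = np.hanning(frame_size)
    frames = [
        np.abs(np.fft.rfft(signal[start:start + frame_size] * window))
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    return np.array(frames)

# Example: one second of a 440 Hz tone at an assumed 16 kHz sample rate.
sr = 16_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)    # energy concentrates near bin 440 / (sr / 256) ≈ 7
```

In practice a mel-scaled, log-magnitude spectrogram is a common choice for speech tasks, since it better matches human hearing.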

Challenges we ran into

We have a team of four, including one new member with little experience. On the front end, we spent a lot of time debugging JavaScript, since we are not very experienced with it. On the back end, data preprocessing and model building posed many challenges: our team has little to no Machine Learning experience, and for some members this was their first project of this kind.

Accomplishments that we're proud of

We are proud of the front end's aesthetic look as well as its functionality. We were able to utilize many Google Cloud services for the application, which made deployment much easier. We are also proud of the deep learning model that we created from scratch.

What we learned

This hackathon gave us time to learn new technologies such as Twilio and Google Cloud Platform, and to try out different Machine Learning techniques for audio classification.

What's next for Kare N Care

We will try to improve the model's accuracy by refining its architecture. In the future, we also plan to add facial emotion recognition using Computer Vision and to expand the set of emotion classes.
