💡Inspiration

We believe mental wellness is just as important as physical health, yet the web offers few tools for those who suffer from mental illness, despite over 450 million people living with one. A lack of coping mechanisms and tools can cause the affected person to suffer greatly and function poorly at work, at school, and in the family. At its worst, poor mental wellness, when left to fester, can lead to suicide: close to 800,000 people die by suicide every year, and suicide is the second leading cause of death among 15-29-year-olds. The inspiration for this project came from our experience with the global pandemic and the way it has disrupted our own mental well-being and that of others. During the pandemic, 4 in 10 adults in the U.S. reported symptoms of anxiety or depression, up from 1 in 10 adults who reported these symptoms from January to June 2019. Being mentally healthy makes you feel good about yourself. It also allows you to enjoy the pleasures of life, to grow, and to try new things. Maintaining good mental health is also one of the best ways to prepare for life's difficult moments, both at a personal and a professional level.
❓What it does
Safespace is a community where people can give and get advice about issues and topics related to mental health. A person signs up, creates a profile, and then has two options: create a post about an issue related to mental health, or respond to other people's posts and share advice and kind words. Unlike other websites such as Reddit, Safespace uses machine learning to filter out toxic and harmful posts, ensuring a safe community for everyone while providing you with the best quality counseling online.
🏗️How we built it
Safespace's frontend was built with HTML, CSS, React, and Material UI components. Firebase handles the backend logic: security, user authentication, and get and post requests. The backend also includes the text toxicity model, a TensorFlow.js model that detects whether user messages contain toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language. The model was trained on the Civil Comments dataset (https://figshare.com/articles/data_json/7376747), which contains roughly 2 million comments labeled for toxicity, and is built on top of the Universal Sentence Encoder.
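As a sketch of how a moderation gate can consume the toxicity model's output: the shape below mirrors what the TensorFlow.js toxicity package's `model.classify()` resolves to, while the `isPostSafe` helper name is our own illustration, not code from the project.

```javascript
// Sketch: decide whether a post is publishable from the TensorFlow.js
// toxicity model's output. model.classify(sentences) resolves to one
// entry per label; results[i].match is true/false/null per input
// sentence (null means the model was unsure at the chosen threshold).
function isPostSafe(predictions, inputIndex = 0) {
  // Block the post only if some toxicity label confidently matched.
  return predictions.every((p) => p.results[inputIndex].match !== true);
}

// Example predictions shaped like the model's output (threshold 0.9):
const toxic = [
  { label: "insult", results: [{ probabilities: [0.05, 0.95], match: true }] },
  { label: "threat", results: [{ probabilities: [0.98, 0.02], match: false }] },
];
const clean = [
  { label: "insult", results: [{ probabilities: [0.99, 0.01], match: false }] },
  { label: "threat", results: [{ probabilities: [0.97, 0.03], match: null }] },
];
```

Treating `null` (unsure) as safe is a design choice; a stricter gate could hold such posts for human review instead.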
🚧Challenges we ran into
The first problem we had was with the TensorFlow.js model. Since the model runs on the website itself, predictions are made on the client side, which hurt client-side performance; making the prediction on a server and returning the result to the client would have improved the performance of the web application. The other problem we had was with users posting comments: we had a lot of trouble linking users' comments to posts and properly displaying those comments under the right post. Another issue was connecting the Vonage API to Firebase functions, which required upgrading to a more premium tier of Firebase; due to time limitations, we weren't able to add video calls and private messages with professionals for a 1-on-1 experience. As a team, we don't believe that projects should wither away in GitHub repositories; they should have an entrepreneurial and real-world impact. Safespace is the very manifestation of those beliefs, so we would like to push this project into the hands of those who really need it.
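The post-to-comment linking problem above boils down to keying each comment by its parent post before rendering. A minimal sketch of that grouping step, assuming field names like `postId` that are illustrative rather than the project's actual schema:

```javascript
// Sketch: group a flat list of comment records under their parent post
// so each post can render its own comment thread. Field names are
// illustrative; in the app the records would come from Firebase.
function groupCommentsByPost(comments) {
  const byPost = {};
  for (const c of comments) {
    if (!byPost[c.postId]) byPost[c.postId] = [];
    byPost[c.postId].push(c);
  }
  return byPost;
}

const comments = [
  { postId: "p1", author: "a", text: "Hang in there!" },
  { postId: "p2", author: "b", text: "You are not alone." },
  { postId: "p1", author: "c", text: "This helped me too." },
];
const grouped = groupCommentsByPost(comments);
```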
✅Accomplishments that we're proud of
We are proud of making such an accurate model in TensorFlow. We are not very experienced with deep learning and TensorFlow, so getting an accurate machine learning model is a big accomplishment for us. Optimizing the code for the TensorFlow.js model was also a huge accomplishment, because we are able to run the model on the client side while still keeping the app responsive. We are also proud that we created an end-to-end ML solution that can help people stay safe in a community where sensitive topics are discussed, and that we overcame the challenge of user security, meaning that posts cannot be manipulated or changed.
🙋‍♂️What we learned
💭What's next for SafeSpace
We envision SafeSpace becoming more scalable and reliable. We aim to deploy the model on a cloud service like Google Cloud or AWS and serve predictions from the cloud, which would ultimately improve client-side performance. We also plan to collect the toxic comments users upload into a database, pass them through a preprocessing pipeline, and add them to a dataset the model retrains on weekly. This keeps the model up to date and constantly improves its accuracy. We also want to recommend posts based on each user's recent searches and post history. Finally, we plan to update our business model: therapists and other professionals in the field could sign up to offer help at an hourly rate. Given the exposure Safespace provides, we would take a cut of the hourly rate while still providing people with help related to mental illness. We would add video calls and private messages for a completely private 1-on-1 with a professional.
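The weekly retraining idea above amounts to a small preprocessing step that turns flagged comments into labeled training rows. A minimal sketch, assuming a simple text-normalization pass; the helper names and row shape are ours, not an existing pipeline:

```javascript
// Sketch: normalize a flagged comment and turn it into a labeled row
// for the weekly retraining dataset. Normalization here is deliberately
// simple: lowercase, collapse whitespace, trim.
function preprocess(text) {
  return text.toLowerCase().replace(/\s+/g, " ").trim();
}

function toTrainingRow(comment) {
  return {
    text: preprocess(comment.text),
    // Labels would mirror the toxicity categories the model predicts.
    labels: comment.labels,
  };
}

const row = toTrainingRow({
  text: "  You are SO   wrong\n and stupid ",
  labels: { insult: 1, threat: 0 },
});
```

A real pipeline would also deduplicate comments and strip personally identifying information before anything is stored for training.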