Harassment, intimidation, and suicidal thoughts are prominent problems in our society. We want to create a safer community with the help of local volunteer supporters and voice assistants placed at hot spots to detect harmful and hurtful words.

What it does

  • Detects hurtful or harmful words.
  • Uses machine learning to generate cogent responses.
  • Replies automatically to defuse a potentially dangerous situation.
  • Sends a message to a volunteer Facebook support group in the local community.
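The steps above can be sketched as one small pipeline. The keyword lists and reply text below are placeholders for illustration only; in the real system, Dialogflow performs the detection:

```javascript
// Hypothetical keyword lists standing in for Dialogflow's trained intents.
const TOXIC = ["stupid", "loser"];
const DEPRESSIVE = ["hopeless", "worthless"];

// Rough stand-in for intent detection.
function classify(text) {
  const t = text.toLowerCase();
  if (TOXIC.some((w) => t.includes(w))) return "toxic";
  if (DEPRESSIVE.some((w) => t.includes(w))) return "depressive";
  return "neutral";
}

// Detect, reply automatically, and alert volunteers in one pass.
// notifyVolunteers is injected so the messaging backend can vary.
function handleUtterance(text, notifyVolunteers) {
  const category = classify(text);
  if (category === "neutral") return null;          // no action needed
  notifyVolunteers(category, text);                 // message to the support group
  return category === "toxic"
    ? "Please keep this conversation respectful."   // automatic de-escalation
    : "You are not alone; help is available nearby.";
}
```

The volunteer notification is passed in as a callback so the same detection logic works whether the alert goes to Facebook, SMS, or a test harness.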

How we built it

We trained Soap to detect "toxic" and "depressive" conversation using Dialogflow with specialized intents. Because the agent learns from each new training phrase, Soap's recognition of toxic conversation improves with use. We plan to install the assistant in public places near schools and connect it to local support communities.

Challenges we ran into

Connecting all of the cloud services together with proper authentication was difficult.

Accomplishments that we're proud of

Creating a voice assistant able to get help for an at-risk individual facing depression or bullying. In addition, we were able to have Soap try to stop others from saying offensive things.

What we learned

How to use Google Cloud Platform, Firebase, and the Facebook Graph API to post to Facebook; the difficulties that individuals facing depression and bullying experience; and that implementing APIs requires careful checking of code.

What's next for Soap: Voice Activated Assistant Combating Harassment

We hope to deploy Soap in the real world and monitor at-risk individuals to help combat bullying and depression. Doctors and



Built With

  • dialogflow
  • google-cloud-functions
  • node.js
  • google-natural-language-api