Cyberbullying is an issue that affects 18.6% of American children, according to the US Department of Education. As both victims and perpetrators of cyberbullying ourselves, we felt compelled to take action against it. We believe that a platform where children feel safe communicating with each other would help them learn not only to be successful communicators, but also to be politely social, bearing in mind the realities around them. While we understand that reading and analyzing messages is an invasion of privacy, we also recognize the value of doing so, as long as it serves a positive end goal.

By teaching bullies to "keep calm" and helping victims of bullying to "carry on", we hope to guide children into a more respectful future.

And, we also just sort of wanted to mess with Azure :)

What it does

Our app is a simple messaging interface that allows person-to-person communication, but also acts as a check on the content of every message being sent. By analyzing the sentiment of a message and then classifying it with machine learning algorithms, we identify when and how a person is bullying, and alert the sender before the message reaches the recipient. While reading every message is a clear invasion of privacy, we believe this infraction can be justified by the reduction in bullying.
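The flow above can be sketched in a few lines: score each outgoing message, and warn the sender instead of delivering it when the sentiment looks hostile. This is a hypothetical illustration, not our actual code; the word list, threshold, and function names are all made up, and `sentiment_score` stands in for the real Azure sentiment call.

```python
NEGATIVE_THRESHOLD = 0.3  # scores below this are treated as hostile (illustrative value)

def sentiment_score(text: str) -> float:
    """Stand-in for the Azure sentiment call: 0.0 = negative, 1.0 = positive."""
    hostile_words = {"stupid", "loser", "hate", "ugly"}  # toy list for the sketch
    words = [w.strip(".,!?") for w in text.lower().split()]
    return 0.1 if any(w in hostile_words for w in words) else 0.9

def check_message(text: str) -> dict:
    """Decide whether to deliver the message or warn the sender first."""
    score = sentiment_score(text)
    if score < NEGATIVE_THRESHOLD:
        # Intercept: the message never reaches the recipient as-is.
        return {"deliver": False, "action": "warn_sender", "score": score}
    return {"deliver": True, "action": "send", "score": score}
```

In the real app the scoring happens server-side before the message is written to the recipient's feed, so the warning fires before delivery rather than after.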

How we built it

We built a web app hosted on Firebase, using the Firebase Realtime Database for storage, and integrated Microsoft Azure's sentiment analysis tools to adjudicate every message. We used Material Design Lite for the front-end design.
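For each message, the adjudication step boils down to one REST call to Azure's Text Analytics sentiment endpoint. Below is a minimal sketch of how that request could be assembled; the payload shape follows the Text Analytics v3.0 REST API, while the resource name and key are placeholders (filling in the wrong key here is exactly the authentication problem we hit).

```python
import json

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-api-key>"  # placeholder (and the part we initially got wrong!)

def build_sentiment_request(message: str) -> tuple:
    """Build the URL, headers, and JSON body for one sentiment call."""
    url = AZURE_ENDPOINT + "/text/analytics/v3.0/sentiment"
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,  # authentication header
        "Content-Type": "application/json",
    }
    # The API scores a batch of documents; we send one message at a time.
    body = json.dumps({"documents": [{"id": "1", "language": "en", "text": message}]})
    return url, headers, body
```

The response assigns the document a sentiment score, which we then feed into the classification step.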

Challenges we ran into

This was our first college hackathon, so we were learning how hackathons work and learning web development as we went. We had quite a few issues authenticating with Azure's API (it turns out we were using the wrong API keys). We also hit issues with Firebase's API, which is not as straightforward as SQL, but we were able to overcome those with a little help from our good friend, StackOverflow.

Accomplishments that we're proud of

We implemented a messaging service from scratch, incorporating Material Design and Google account sign-in. We added text and image sharing while maintaining a sleek UI. We used Azure APIs to perform complex language analytics and detect the feeling behind messages. We also implemented a Naive Bayes machine learning algorithm to distinguish between self-deprecation, bullying, and different types of offensive statements.
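The Naive Bayes idea can be sketched as follows: count word frequencies per category, then score a new message against each category with Laplace smoothing. The tiny training set below is invented purely for illustration; our real training data and features differ.

```python
import math
from collections import Counter, defaultdict

# Invented toy examples -- NOT our real training set.
TRAIN = [
    ("i am such an idiot", "self-deprecation"),
    ("i hate myself so much", "self-deprecation"),
    ("you are such an idiot", "bullying"),
    ("nobody likes you loser", "bullying"),
    ("this homework is garbage", "offensive"),
    ("what a stupid rule", "offensive"),
]

def train(examples):
    """Count per-class word frequencies and class frequencies."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Pick the class with the highest log-probability for the message."""
    total = sum(class_counts.values())
    best_label, best_logp = None, -math.inf
    for label in class_counts:
        logp = math.log(class_counts[label] / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            # Laplace (add-one) smoothing so unseen words don't zero out the score.
            logp += math.log((word_counts[label][word] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

model = train(TRAIN)
```

The same scaffold extends to more categories as the training set grows, which matters for the future work described below.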

What we learned

We learned a lot about algorithm design, Firebase/NoSQL data structures, Azure, and Material Design! It was a lot of fun, overall!

What's next

We hope to continue improving the fundamental service we provide: a safe space for messaging that prevents microaggressions and bullying. We hope to add more language analytics to detect the type of offensive statement someone may be making, and to look at subject-verb sentence structure to more accurately identify harassment. We also want to be able to distinguish between different emotions and feelings in the text.

We also hope to turn this into a full-fledged messaging service, with group chat capabilities and an infrastructure that can support a large number of users.

Finally, we hope to use the Twitter and Wikipedia APIs to give more depth to our training set, and to let users report questionable behavior so that data can be added to the training set as well.
