Inspiration

We were inspired to do this project because, after acts of mass violence, we so often hear that warning signs were missed. To help combat this, we wrote a Social Media Threat Identification (SMTI) program that looks for word choices and combinations that signal potential threats and links related user accounts across social networks.

What it does

Using the power of artificial intelligence, we trained a model to identify hate-oriented and threatening speech in online postings. The program also maps people across multiple platforms to build a fuller picture of what is going on. Through these efforts, it identifies potential threats and exposes an API that can alert authorities so they can follow up on what has been flagged.
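As a rough illustration of that flow (a sketch, not our exact implementation), a post is scored by the trained classifier and, if it crosses a threshold, forwarded to an authority-facing alert endpoint. The model path, endpoint URL, and payload fields below are placeholders we made up for this example.

```python
# Hypothetical sketch: score a post, and if the score crosses a threshold,
# notify an alert endpoint. Model path, URL, and fields are placeholders.
import json
import urllib.request

import tensorflow as tf

THRESHOLD = 0.9
ALERT_URL = "https://example.org/smti/alerts"  # placeholder endpoint

model = tf.keras.models.load_model("threat_classifier")  # assumed saved model

def report_if_threatening(post_text: str, author: str, platform: str) -> None:
    # Assumes the saved model maps raw text strings to a threat probability.
    score = float(model.predict([post_text])[0][0])
    if score < THRESHOLD:
        return
    payload = json.dumps({
        "author": author,
        "platform": platform,
        "text": post_text,
        "score": score,
    }).encode("utf-8")
    req = urllib.request.Request(
        ALERT_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    urllib.request.urlopen(req)  # fire the alert for human follow-up
```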

How we built it

We built SMTI using Docker, Google Cloud, and TensorFlow. One end of the program scrapes various social networks and dumps the data into an Elasticsearch database hosted on Google Cloud. The other end filters that content through data-processing algorithms implemented in TensorFlow and OpenCV to mark threatening posts. Once threatening content is located, "investigative worker" processes try to relate posts and accounts back to a single person to identify a threat.
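Below is a minimal sketch of the ingest-and-classify loop under some assumptions: an Elasticsearch cluster reachable from the workers and a saved TensorFlow text classifier. The index name, field names, cluster URL, and model path are all placeholders rather than our actual configuration.

```python
# Minimal sketch of the ingest/classify loop. Index name, field names,
# cluster URL, and model path are placeholders.
from elasticsearch import Elasticsearch
import tensorflow as tf

es = Elasticsearch("https://elastic.example.internal:9200")  # assumed cluster
model = tf.keras.models.load_model("threat_classifier")      # assumed model

def ingest(post: dict) -> None:
    # Scrapers call this with e.g. {"platform": ..., "user": ..., "text": ...}.
    es.index(index="social_posts", document=post)

def mark_threats(batch_size: int = 100) -> None:
    # Pull posts that have not been scored yet, score them, and write
    # the result back so the investigative workers can pick them up.
    hits = es.search(
        index="social_posts",
        query={"bool": {"must_not": {"exists": {"field": "threat_score"}}}},
        size=batch_size,
    )["hits"]["hits"]
    for hit in hits:
        score = float(model.predict([hit["_source"]["text"]])[0][0])
        es.update(
            index="social_posts",
            id=hit["_id"],
            doc={"threat_score": score, "is_threat": score > 0.9},
        )
```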

Challenges we ran into

It was incredibly difficult to train the model reliably in the limited amount of time we had. The training data is also biased, because it was collected from known white supremacists and frequent posters of hate speech. In addition, due to the lack of API support, mining Facebook has been a challenge, since we literally have to parse raw HTML pages to get the information we need.
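The HTML scraping looks roughly like the sketch below. The URL, request headers, and CSS selector are illustrative placeholders, since the real page markup changes frequently and has to be rediscovered each time it does.

```python
# Illustrative sketch of HTML-based scraping when no API is available.
# The selector "div.userContent" is a placeholder for whatever the page
# currently uses; it must be updated whenever the markup changes.
import requests
from bs4 import BeautifulSoup

def scrape_public_posts(page_url: str) -> list[dict]:
    html = requests.get(
        page_url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30
    ).text
    soup = BeautifulSoup(html, "html.parser")
    posts = []
    for node in soup.select("div.userContent"):  # hypothetical selector
        posts.append({"text": node.get_text(" ", strip=True), "url": page_url})
    return posts
```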

Accomplishments that we're proud of

We are very proud that our program actually identifies hate speech and marks users as potential threats. We are also very pleased with the functionality we were able to achieve within the 30-hour deadline.

What we learned

We learned that data mining and machine learning are very challenging subjects, but when applied properly they can be powerful tools in the fight against extremism. We also learned that TensorFlow is really cool!

What's next for Social Media Threat Identification

Continuing to improve the neural network model so it makes more accurate predictions.
