Inspiration

Thorn has made removing Child Sexual Abuse Material (CSAM) from the internet one of its core objectives. Thorn knows this is no small feat, but beyond rescuing a child, it is the next top priority in Thorn's battle to empower every child to simply be a kid.

What it does

The model searches Google using a dictionary of abusive keywords that we supply. Every URL containing those keywords is then checked for images and videos, which are fed to our already-trained model to determine whether the content is abusive. If it is, a report is sent to the relevant authority.
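As an illustration, the scanning step might look like the minimal sketch below, assuming a list of candidate page URLs has already been gathered from the keyword search. The helper name extract_image_urls and the use of requests and BeautifulSoup are our assumptions, not the project's actual code.

```python
# Minimal sketch: given a candidate page URL, collect the image URLs it embeds.
# Video tags would be handled the same way. Illustrative only.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def extract_image_urls(page_url):
    """Fetch a page and return absolute URLs of the images it embeds."""
    resp = requests.get(page_url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [urljoin(page_url, img["src"])
            for img in soup.find_all("img") if img.get("src")]

# Each extracted image would then be downloaded and passed to the trained
# classifier; flagged results are forwarded to the reporting step.
```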

How I built it

We built it by first training the model on a set of abusive images. TensorFlow, with an optimizer enabled and using feature extraction, learns to differentiate between abusive and non-abusive content. After training, the actual content pulled from the URLs is fed to the model, which reports whether each item is abusive or not.
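As a concrete illustration, here is a minimal transfer-learning sketch of that training step. Only TensorFlow, feature extraction, and an optimizer are stated above; the MobileNetV2 backbone, the train_dir/ folder layout with one subfolder per class, and the learning rate are our assumptions.

```python
# Minimal sketch of binary image classification via feature extraction,
# assuming images organised as train_dir/{abusive,non_abusive}/ folders.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "train_dir", image_size=(224, 224), batch_size=32, label_mode="binary")

# Frozen pretrained backbone: only the new classification head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # abusive vs. non-abusive
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

At inference time, model.predict on a batch of images returns a probability per image; anything above a chosen threshold would be flagged for the reporting step.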

Challenges I ran into

Distinguishing commercial images and videos (movies, advertisements, etc.) from actual abusive content. Coordinating the many packages the pipeline uses.

Accomplishments that I'm proud of

We achieved the overall flow of the model we intended to build. Our model could help clear unwanted and abusive content from the internet.

What I learned

We learned to work in an agile way, completing each piece of work as soon as we could; this kept the multiple steps involved moving to completion.

What's next for Find Abusive Image & Video

We intend to automate the entire pipeline and provide an app for users.

Built With

python
tensorflow
