Inspiration

Our own struggles with toxicity and the mental toll of dealing with undesirable web content.

  • Lack of proper content filtering tools.

What it does

FilterMyFeeds uses NLP to recognize the context of text on the web and filter content according to the user's preferences.

How we built it

After creating our dataset, we tokenized and preprocessed the texts, then encoded the preprocessed text as a sparse matrix to pass to a LinearSVC model.
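The steps above can be sketched as a minimal scikit-learn pipeline. This is an illustration, not the project's actual code: the tiny dataset, labels, and preprocessing choices here are hypothetical stand-ins for our curated dataset.

```python
# Hedged sketch of the described pipeline: tokenize/preprocess text,
# encode it as a sparse matrix, and fit a LinearSVC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical toy data (placeholder for our curated dataset)
texts = ["you are awful", "have a great day", "this is terrible", "lovely weather"]
labels = [1, 0, 1, 0]  # 1 = undesirable, 0 = fine (hypothetical labels)

# TfidfVectorizer tokenizes and lowercases the text, then produces
# a scipy sparse matrix of TF-IDF features
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(texts)

clf = LinearSVC()
clf.fit(X, labels)

# Classify unseen text through the same vectorizer
print(clf.predict(vectorizer.transform(["what a terrible thing"])))
```

In the real extension, the predicted label would decide whether a piece of page content is hidden or shown.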

Challenges we ran into

  • Choosing the best model for our task
  • Lack of a proper dataset
  • Legal and regulatory constraints

Accomplishments that we're proud of

  • Curating our own dataset
  • Implementing an NLP model with high accuracy
  • A state-of-the-art extension with novel features
  • Testing out various models such as XGBoost, DistilBERT, LinearSVC, a Naive Bayes classifier, and a random forest classifier
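The classical models in the list above can be compared on the same features with cross-validation. This is a hedged sketch, not our actual evaluation code: the dataset is a hypothetical placeholder, and XGBoost and DistilBERT are omitted because they require their own libraries and training loops.

```python
# Sketch: comparing several of the listed classifiers on shared
# TF-IDF features via cross-validation (scikit-learn only).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Hypothetical toy data (placeholder for the real dataset)
texts = [
    "you are awful", "such a hateful comment", "this is toxic garbage", "I hate you",
    "have a great day", "what a lovely post", "thanks for sharing", "really helpful answer",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = undesirable, 0 = fine

X = TfidfVectorizer().fit_transform(texts)

for name, model in [
    ("LinearSVC", LinearSVC()),
    ("Naive Bayes", MultinomialNB()),
    ("Random forest", RandomForestClassifier(n_estimators=50, random_state=0)),
]:
    scores = cross_val_score(model, X, labels, cv=2)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

Running each candidate through the same split keeps the comparison fair; the model with the best held-out score is the one worth shipping in the extension.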

What we learned

  • Encoding strings into numerical values without losing semantic context
  • Real-world data challenges
  • Project organization

What's next for FilterMyFeeds

  • Multi-modal content blocking that includes images, videos, and other media formats
  • Launch of the extension as an open-source tool
  • Real-time model and dataset updates using user input

Built With
