Team name: 60% of the time, we win all the time


Inspiration

We were inspired by everyday issues in our lives. As frequent browsers of Reddit, we were unable to tell the difference between posts from r/AnimalCrossing and r/Doom. Therefore, we decided to create two models, one that takes the title of a post and one that takes its image, and accurately classifies which subreddit it came from.

What it does

Classifies Reddit posts as r/Doom or r/AnimalCrossing using the title and the image from a post

How I built it

We used TensorFlow to train BERT, Google's NLP (natural language processing) model, and MobileNetV2, an image classification model. Given our limited time frame, we used transfer learning to extract the most performance in the shortest time. Our BERT model trained for about 2 hours on an NVIDIA P100, while our image classification model took about 30 minutes on the same GPU. Together, these models achieve about 90% validation accuracy. This is close to the ceiling: many images are simply tweets or screenshots of text that cannot be easily classified, and many titles could plausibly belong to either subreddit. We also implemented Bayesian hyperparameter tuning to squeeze out additional performance.

For the app itself, Flask serves the backend API, which receives the information and stores the uploaded images, while HTML templates provide the frontend. File uploads were handled with Flask-Uploads, plus some basic validation.
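The image side of the pipeline follows the standard Keras transfer-learning recipe: freeze a pretrained backbone and train a small new head. This is a minimal sketch rather than our exact training code; the input size and head layers here are assumptions:

```python
import tensorflow as tf

# Pretrained MobileNetV2 backbone without its ImageNet classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # transfer learning: freeze the pretrained weights

# New binary head: r/Doom vs. r/AnimalCrossing
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the backbone means only the small head is trained, which is what makes a roughly 30-minute training run on a single P100 feasible.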

Challenges I ran into

24 hours is a very short time frame to train one model, let alone two. We had a fairly large codebase from previous machine learning work, which made this somewhat easier. It was also surprisingly difficult to find a humorous yet high-quality dataset. Our biggest issue, however, was Hugging Face Transformers. Hugging Face is the standard NLP framework, but using it with TensorFlow presented issues, as the models had to be saved separately. We had to rewrite a significant portion of our training code to overcome this and save our models. We also had issues setting up Flask initially, but after enough experimentation we were able to fix them.
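The save workaround boils down to using the Transformers-native save path instead of a plain Keras save. A sketch, assuming a standard `bert-base-uncased` checkpoint; the output directory name is illustrative:

```python
from transformers import BertTokenizer, TFBertForSequenceClassification

# Two labels: one per subreddit
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Keras-style model.save() does not round-trip Transformers models cleanly,
# so the model and its tokenizer are saved separately with save_pretrained()
model.save_pretrained("bert_subreddit")
tokenizer.save_pretrained("bert_subreddit")

# ...and restored the same way
reloaded = TFBertForSequenceClassification.from_pretrained("bert_subreddit")
```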

Our training notebook is here, along with downloadable models:

Accomplishments that I'm proud of

  • 2 machine learning models in 24 hours
  • Working frontend for machine learning models

What I learned

  • This was only our second experience with NLP, so we learned a ton about working with Transformer models and about deploying ML in general

How to use

  • Download the models from our Kaggle notebook
  • Clone our GitHub repo and put the models inside the folder
  • In a terminal, run pip install -r requirements.txt
  • Finally, run npm run flask-start-api
  • Uploaded images should be smaller than 500 KB

Built With

  • tensorflow
  • bert
  • mobilenetv2
  • flask
  • html