In an early morning conversation with the Product Hunt team, we learned about a pain point of theirs: they are inundated daily with submissions that they need to review and decide which will be posted to the site. Reviewing each submission individually is a lot of work for the PH team.

Our team took this as a challenge: how could we keep the high-quality hunts that PH is known for while reducing the workload?

After brainstorming, the idea we settled on was to give the Product Hunt community the opportunity to review all submissions and vote on them. These votes would be summarized and presented to the PH staff, dramatically reducing review time while maintaining the high quality standards the site is known for.

We implemented a machine learning tool to scan the entire Product Hunt database and assign a score to each user. This score was based on three factors: number of comments, number of followers, and number of up-votes. We augmented the score by rewarding users who had voted for the hunts that turned out to be the most popular. Any user whose score falls in the top 10% is categorized as an expert.
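The scoring step above can be sketched roughly as follows. This is a minimal illustration, not our production code: the weights, the `score_user` helper, and the bonus for popular-hunt votes are all hypothetical stand-ins, since the write-up does not specify the exact formula.

```python
import math

def score_user(comments, followers, upvotes, popular_hunt_votes):
    """Score one user from activity counts.

    The equal weighting and the 2x bonus for voting on hunts that
    later became most popular are assumed values for illustration.
    """
    base = comments + followers + upvotes
    return base + 2 * popular_hunt_votes

def expert_threshold(scores, top_fraction=0.10):
    """Return the score at or above which a user is in the top 10%."""
    ranked = sorted(scores, reverse=True)
    cutoff_index = max(0, math.ceil(len(ranked) * top_fraction) - 1)
    return ranked[cutoff_index]

def is_expert(score, threshold):
    return score >= threshold
```

With scores computed for every user, `expert_threshold` gives the cutoff once over the whole population, and each user is then tagged expert or non-expert with a single comparison.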

The summary statistics presented to the PH staff contain two metrics: the approval rate among non-expert users and the approval rate among expert users. Based on these summary stats, the PH staff can quickly approve, deny, or dig in further.
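Computing those two metrics for a submission is a simple aggregation. The sketch below assumes votes arrive as `(user_id, approved)` pairs and that the expert set comes from the scoring step; the function and field names are illustrative, not from our actual implementation.

```python
def summarize_votes(votes, expert_ids):
    """Split one submission's votes into expert / non-expert approval rates.

    votes: iterable of (user_id, approved: bool) pairs.
    expert_ids: set of user ids categorized as experts.
    Rates are None when a group cast no votes.
    """
    expert_approvals = expert_total = 0
    other_approvals = other_total = 0
    for user_id, approved in votes:
        if user_id in expert_ids:
            expert_total += 1
            expert_approvals += bool(approved)
        else:
            other_total += 1
            other_approvals += bool(approved)

    def rate(approvals, total):
        return approvals / total if total else None

    return {
        "expert_approval_rate": rate(expert_approvals, expert_total),
        "non_expert_approval_rate": rate(other_approvals, other_total),
    }
```

Presenting just these two numbers per submission lets staff scan the queue quickly and only open the full vote detail when the groups disagree.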

This concept of crowdsourcing quality review to a community has many broad applications. Employees of any organization could use it to review résumés and surface strong applicants, and colleges could let the student body review applications to identify students they want at the university.
