You have just won a hackathon! Yay! But have you ever wondered how your idea was evaluated? Of course, the good ol' rubrics. Sometimes you are confident your idea is a winning one, yet it doesn't win in the end. Why is that? More often than not, your idea may be too complex to realize in a short time. This is especially true for any hackathon run under a corporate name, whose main objective is to crowdsource ideas for improving an area the company is currently struggling with. This is where the judges, usually corporate and business-oriented, look closely at how feasible your idea is. What if there were a tool to "judge" your idea before you pitch it to the judges? Stay tuned!
💡 Inspiration
Automated essay scoring (AES) has been around since 1966, letting a computer score an essay. With the introduction of the Transformer architecture in 2017, many Natural Language Processing (NLP) tasks saw a huge leap in performance. And recently, with the world amazed by OpenAI's ChatGPT, we can say a computer can tackle almost every task under the sun!
Now, hackathon judges are human and full of emotion, and scoring differs from one judge to another. But since DevPost archives past hackathon projects along with their winner lists, this data could be used to train an AI model to grade an idea!
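To make the idea concrete, here is a minimal sketch of what such a training dataset might look like. The field names, outcome labels, and score mapping are all assumptions for illustration; DevPost exposes no such schema, and the records below are invented.

```python
# Hypothetical sketch of training data scraped from DevPost project pages.
# Field names, outcome labels, and example records are assumptions.
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    title: str
    story: str       # the full DevPost project story
    hackathon: str
    outcome: str     # e.g. "winner", "finalist", "crowd_favourite", "not_selected"

# A toy corpus of labelled past entries (invented for illustration).
corpus = [
    ProjectRecord("EcoRoute", "An app that plans low-carbon commutes...",
                  "GreenHack", "winner"),
    ProjectRecord("MemeBank", "A meme storage service...",
                  "GreenHack", "not_selected"),
]

# Map outcomes to numeric targets so a model can be trained on them.
OUTCOME_SCORE = {"winner": 1.0, "finalist": 0.7,
                 "crowd_favourite": 0.5, "not_selected": 0.0}
labels = [OUTCOME_SCORE[p.outcome] for p in corpus]
```

Pairs of `(story, label)` like these would form the supervised signal for the grading model.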
Zooming into a participant view
More often than not, we as participants receive little feedback and rarely see the marks given by judges. The best we get is comments from the mentors, who are not the ones judging and awarding prizes. Besides, the rubrics and requirements differ from one hackathon to another. Most of the time, I would skim through the judge list (if available), rubrics, and scoring criteria to tailor my presentation pitch to tick all the requirements. Of course, this is proven to work by many experienced hackers. But can a novice hacker know this? And can we even know how judges react and think during the presentation/pitching session? There are many "human factors" involved in scoring projects that participants cannot know, short of going through every previous hackathon the judges have judged.
🕵️‍♀️ Judges' view, from a participant's perspective
From my humble experience, I would generally categorise judges into two types: corporate judges and invited judges. Corporate judges are usually found at company-organized or company-sponsored hackathons, and are typically C-level executives or directors of a particular department. Invited judges are those invited to a hackathon, usually a non-company-organized or general one, to judge the projects.
Corporate judges
Corporate judges usually look into the possibility of a project being realized or implemented in the company they work for. They might be business people or technical people, depending on the position they hold. Even though rubrics are given, they might have a "golden button" to alter the final placing of ideas, especially C-level judges. Therefore, as a participant, to win over these judges I would look into the current direction of the company and produce ideas in accordance with it. However, this adds hassle, since most corporations do not disclose their future direction publicly.
Invited judges
Invited judges usually follow the rubrics more closely, since they are not biased toward any affiliation. They judge according to the rubrics, within their expertise and knowledge. These judges usually offer useful comments and constructive feedback on the idea presented.
Experiences from past hackathons
Generally, I would say the key ingredient for securing a top-3 finish is convincing the judges. As stated earlier, the judges are the key evaluators who determine your prize! Therefore, it is important to "simulate" how a judge would react to your idea before pitching it to them. Imagine if an AI model could give you all this information so you could nail every single judge! Your idea would then be solid and question-proof, since you would have addressed all its weaknesses based on the judges' personalities the AI model inferred.
ValueAnIdea
In short, ValueAnIdea is a tool that judges your idea from an AI perspective, based on judges' behaviour in previous hackathons. Since DevPost archives all previous entries for every hackathon and maintains lists of winners, this information is useful when analyzed. The data could be used to train a specialized AI model that judges whether your idea is capable of winning the hackathon.
Think of it this way: you might have already tried OpenAI's ChatGPT. ValueAnIdea is similar in the sense that you submit your DevPost project story to the AI model, and the model rates it against the previous stories available on DevPost.
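As a rough illustration of "rating a story against previous stories", here is a toy sketch that scores a new story by its lexical similarity to past winning stories. This is not the ValueAnIdea model (which would be a trained neural network); the past stories and the scoring rule are invented for illustration.

```python
# Illustrative only: rate a story by cosine similarity of word counts
# against a toy set of past winning stories (all text is made up).
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy stand-ins for past winning project stories.
past_winners = [
    "an accessible app that helps commuters plan greener routes",
    "a dashboard that helps judges track hackathon submissions",
]

def rate_idea(story: str) -> float:
    """Score in [0, 1]: closeness to the nearest past winning story."""
    vec = vectorize(story)
    return max(cosine(vec, vectorize(w)) for w in past_winners)

score = rate_idea("an app that helps commuters plan greener routes")
```

A story resembling a past winner scores close to 1, while an unrelated story scores near 0; a real model would of course capture meaning rather than word overlap.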
Of course, there is a paradox: if everyone wins, who loses? That comes down to the final mile of how you pitch to the judges. ValueAnIdea provides insights into how good your idea is on paper, not into how you pitch it. Remember, you are pitching to human judges, who have complex emotions and biases. A perfect idea is only useful if you can communicate it well and cover every flaw with mitigations.
🧪 What's the secret potion?
The AI model is trained using Reinforcement Learning from Human Feedback (RLHF). First, an initial model is trained in a supervised manner, with AI trainers providing past DevPost projects and their results (winner / proceeded to final round / crowd favourite / not selected). Note that RLHF is what powers ChatGPT! More info on RLHF is available on HuggingFace's blog.
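The RLHF recipe sketched above has three stages. The following is a high-level sketch with toy stand-ins for every component (no real neural networks, no PPO optimizer); it only shows how the stages fit together, under invented data.

```python
# Toy sketch of the three RLHF stages; every "model" below is a stand-in.

# Stage 1: supervised fine-tuning on labelled past projects.
# Here the "model" is just a lookup from known stories to their outcome scores.
sft_data = {"winning story text": 1.0, "losing story text": 0.0}

def sft_model(story: str) -> float:
    return sft_data.get(story, 0.5)  # unseen stories get a neutral score

# Stage 2: train a reward model from human preference pairs.
# Humans compare two outputs; the preferred one should receive a higher reward.
preferences = [("winning story text", "losing story text")]  # (preferred, rejected)

def reward(story: str) -> float:
    # Toy reward: how often this story was the preferred one.
    return sum(1.0 for good, _ in preferences if good == story)

# Stage 3: optimise the policy against the reward model (PPO in ChatGPT;
# here we simply pick the candidate with the highest reward).
def improved_policy(candidates):
    return max(candidates, key=reward)

best = improved_policy(["losing story text", "winning story text"])
```

In the real recipe, each stage would update neural network weights; the toy version only makes the data flow between the stages visible.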
Ideally, the methodology for training this AI model is similar to ChatGPT's. However, training such a huge model takes multiple GPU-years. Therefore, this project only provides ideas on how to achieve it. Of course, ValueAnIdea welcomes collaboration to make the project happen!
What's next for ValueAnIdea
ValueAnIdea is in its infancy, or more specifically, the ideation phase. Ideally, we would like to make this project happen!