Inspiration

Our inspiration came directly from the chaotic and overwhelming experience of hackathon judging. We saw judges, who are often industry experts volunteering their time, inundated with hundreds of projects in a very short period. It's impossible to give each project deep, contextual attention. We believe many innovative projects get overlooked simply because their value isn't immediately obvious or their presentation isn't as polished. We wanted to build a tool that helps judges discover the most relevant projects based on their unique professional interests, moving beyond generic lists to a personalized, data-driven experience.

What it does

Hac-a-valuator is an AI-powered platform designed to streamline the hackathon evaluation process.

For Participants: You simply submit the URL of your deployed project.

For Judges: You get a personalized dashboard that ranks all submitted projects by their relevance to your specific expertise and interests.

Our application automatically crawls each submitted project's website, analyzes its content, and uses an LLM to score it against a pre-compiled profile for each judge. The key feature is that alongside a relevance score, the judge receives a concise, AI-generated summary explaining why a particular project is a good match for them, enabling them to focus their valuable time on the most promising entries.
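The scoring step hinges on turning the LLM's text response into a number plus a "why". A minimal sketch of that parsing step is below; the JSON shape ({ score, reason }) and the function name are illustrative assumptions, not our exact schema.

```typescript
// Hypothetical result shape for one judge/project pairing.
interface Relevance {
  score: number;  // 0-10 relevance of the project to the judge's profile
  reason: string; // the AI-generated explanation of the match
}

// Parse a raw LLM response into a typed result; returns null on bad output
// so the caller can retry instead of polluting the dashboard.
function parseRelevance(raw: string): Relevance | null {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed.score !== "number" || typeof parsed.reason !== "string") {
      return null;
    }
    // Clamp to the expected range so a stray model output can't break ranking.
    return {
      score: Math.min(10, Math.max(0, parsed.score)),
      reason: parsed.reason.trim(),
    };
  } catch {
    return null; // malformed JSON from the model
  }
}
```

Treating malformed output as a retryable condition, rather than trusting the model, is what keeps the ranked dashboard consistent.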

How we built it

We built the Hac-a-valuator MVP on Bolt.new and later migrated it to the Cloudflare Developer Platform, creating a robust, serverless application.

Frontend: The judge's dashboard is a modern, responsive Single-Page Application (SPA) built with React and TypeScript, deployed on Cloudflare Pages for global, low-latency access.

Backend & APIs: All server-side logic is handled by Cloudflare Workers. These TypeScript-based Workers expose API endpoints for project submission, data retrieval, and triggering analysis.

AI Processing: The core intelligence comes from Workers AI, which we use to generate judge interest profiles and to perform the relevance scoring. All AI calls are routed through Cloudflare AI Gateway for analytics, caching, and rate limiting, which helps us manage costs and improve performance.

Database & Storage: We used a hybrid storage model. Our relational data (judges, projects, scores) is stored in Cloudflare D1, a serverless SQLite database, while the large, unstructured text content scraped from project websites is stored as markdown files in Cloudflare R2 object storage, which is more efficient and scalable for that workload.
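The hybrid D1/R2 split can be sketched as follows. The table definition and key scheme here are simplified assumptions to show the pattern, not our production schema: D1 rows stay small by holding only a pointer into R2, where the bulky scraped markdown lives.

```typescript
// Assumed (simplified) D1 schema: the content_key column points into R2
// so large scraped text never bloats the relational rows.
const PROJECTS_TABLE = `
  CREATE TABLE IF NOT EXISTS projects (
    id TEXT PRIMARY KEY,
    url TEXT NOT NULL,
    content_key TEXT NOT NULL
  );
`;

// Deterministic R2 object key for a project's scraped markdown.
function contentKey(projectId: string): string {
  return `scraped/${projectId}.md`;
}
```

A Worker handling a submission would insert the D1 row and write the markdown to R2 under the same key, so either store can be rebuilt from the other.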

Challenges we ran into

  1. Effective Prompt Engineering: Getting the LLM to provide consistent, accurate numerical scores and, more importantly, meaningful and concise justifications was a significant challenge. It required dozens of iterations to craft prompts that balanced creativity with structured output, ensuring the "why" was genuinely helpful to the judge.
  2. Reliable Scraping: Websites are incredibly diverse. We ran into issues handling different web frameworks (especially client-side rendered SPAs), complex layouts, and anti-scraping measures. Building a resilient scraper that could consistently extract clean, relevant text content and convert it into useful markdown for the LLM was a major engineering effort.
  3. Asynchronous Data Flow: The process of a user submitting a URL, a worker picking it up, scraping the site, calling the AI, and finally writing to the database is a multi-step, asynchronous flow. Managing the state and ensuring reliability across this entire pipeline, especially handling potential failures at any step, required careful architectural design.
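The prompt pattern we converged on for challenge 1 looks roughly like this (the exact wording differed; the function and field names are illustrative): pin the model to a strict JSON contract so scores and justifications come back in a machine-readable shape.

```typescript
// Hypothetical prompt builder: constrain the model to a strict JSON
// output so the score and the one-sentence "why" can be parsed reliably.
function scoringPrompt(judgeProfile: string, projectMarkdown: string): string {
  return [
    "You are scoring a hackathon project for one specific judge.",
    `Judge profile: ${judgeProfile}`,
    `Project content:\n${projectMarkdown}`,
    'Respond with ONLY a JSON object: {"score": <0-10 integer>, "reason": "<one concise sentence>"}',
  ].join("\n\n");
}
```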
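For challenge 2, the core HTML-to-text step can be sketched naively as below. This is a deliberately minimal illustration: it only handles static markup, whereas our real pipeline also had to render client-side SPAs and cope with anti-scraping measures before any text extraction could happen.

```typescript
// Naive sketch of stripping HTML down to plain text for the LLM.
// Handles static pages only; client-rendered SPAs need a real renderer first.
function htmlToText(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop inline JS
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop CSS
    .replace(/<[^>]+>/g, " ")                    // strip remaining tags
    .replace(/\s+/g, " ")                        // collapse whitespace
    .trim();
}
```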
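For challenge 3, each stage of the scrape → score → persist pipeline can be wrapped in a retry helper like the sketch below; the attempt counts and delays are illustrative, not our production values.

```typescript
// Sketch: wrap one async pipeline stage (scrape, score, or persist) with
// retries and exponential backoff so a transient failure doesn't lose a
// submission mid-flow.
async function withRetry<T>(
  step: () => Promise<T>,
  attempts = 3,
  delayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off 1x, 2x, 4x... before the next attempt.
        await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** i));
      }
    }
  }
  throw lastError; // surface the failure so the submission can be re-queued
}
```

Throwing after the final attempt, rather than swallowing the error, is what lets the caller mark the submission for re-processing.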

Accomplishments that we're proud of

We are incredibly proud of building a complete, end-to-end AI application that solves a real-world problem. Specifically, we're proud of:

The AI-generated "Why": The relevance score is useful, but the AI-generated explanation is our biggest accomplishment. It's the feature that delivers the "aha!" moment for judges and truly saves them time.

A Fully Integrated Cloudflare Solution: We successfully composed multiple services from the Cloudflare developer platform (Pages, Workers, D1, R2, and Workers AI) into a single, cohesive, and powerful application.

The Personalized Dashboard: We created a user experience that feels personalized and intelligent. Rather than just presenting data, we're providing actionable insights for our target users.

What we learned

This project was a massive learning experience. We gained deep, practical knowledge in several key areas:

Applied AI: We learned that the true power of LLMs in applications isn't generic chat, but steering them toward specific, structured tasks like scoring and justification. Prompt engineering is as much an art as a science.

Serverless Architecture: We learned how to design and build a complex application without a traditional backend server, leveraging the power and scalability of a serverless platform like Cloudflare.

The Importance of Data Modeling: Our decision to use a hybrid storage approach (D1 for structured data, R2 for unstructured text) was critical. We learned how to model data effectively for a serverless environment to ensure performance and cost-efficiency.

What's next for hac-a-valuator

Our vision for Hac-a-valuator extends far beyond the current MVP. We plan to:

Implement Real-Time Analysis: Move from a manual trigger to a system that analyzes projects the moment they are submitted.

Enable Deeper Project Analysis: Go beyond just the project's website by scraping and analyzing linked GitHub repositories to evaluate code quality, technology stack, and contribution patterns.

Introduce Historical Context: Allow judges to compare current projects to winners and notable entries from past hackathons to better gauge impact and innovation.

Build Full User Accounts: Allow judges to create persistent profiles, save notes on projects, and collaborate with other judges directly on the platform.
