💡 What it does
.gitcheck is an automated integrity engine that converts a Devpost hackathon gallery into structured, evidence-backed audit results for every submission. It ingests project data, resolves linked repositories, and evaluates whether submissions were genuinely built during the hackathon and whether their stated features are actually implemented.
The platform performs a timeline audit by analyzing GitHub commit history against the official hackathon window to detect pre-existing or suspicious development activity. Beyond timestamps, .gitcheck runs AI-driven feature audits that translate unstructured project descriptions into testable claims and verify those claims by inspecting real code artifacts. All results are delivered through a private, organization-scoped dashboard that gives judges clear, explainable signals instead of raw data and can guide their questions during the live presentation period.
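A minimal sketch of the timeline check, using the public GitHub commits endpoint; the window dates, token handling, and flagged-result shape are illustrative, not our exact implementation:

```python
from datetime import datetime, timezone
import requests

# Illustrative hackathon window; the real values come from the event config.
HACKATHON_START = datetime(2025, 1, 10, tzinfo=timezone.utc)
HACKATHON_END = datetime(2025, 1, 12, tzinfo=timezone.utc)

def audit_timeline(owner: str, repo: str, token: str) -> list[dict]:
    """Flag commits authored outside the official hackathon window."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        headers={"Authorization": f"Bearer {token}"},
        params={"per_page": 100},
        timeout=10,
    )
    resp.raise_for_status()
    flagged = []
    for commit in resp.json():
        authored = datetime.fromisoformat(
            commit["commit"]["author"]["date"].replace("Z", "+00:00")
        )
        if not (HACKATHON_START <= authored <= HACKATHON_END):
            flagged.append({"sha": commit["sha"], "authored": authored.isoformat()})
    return flagged
```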
⚙️ How we built it
We designed .gitcheck around an AI-agent-first workflow rather than a rule-based checker. Using LangChain and LangGraph, we built a structured multi-agent system where each agent performs a focused task: extracting feature claims from descriptions, investigating repositories for supporting evidence, and verifying claims against concrete code references. LangGraph allows these agents to operate as a coordinated workflow with state, retries, and evidence requirements, ensuring decisions are grounded rather than speculative.
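A simplified sketch of how such a pipeline can be wired with LangGraph; the state fields and node bodies below are placeholders for the real agents:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AuditState(TypedDict):
    description: str
    claims: list[str]
    evidence: dict
    verdicts: dict

def extract_claims(state: AuditState) -> dict:
    # Placeholder for the LLM call that turns a project description
    # into testable feature claims.
    return {"claims": ["uses websockets", "stores data in Postgres"]}

def investigate_repo(state: AuditState) -> dict:
    # Placeholder for the agent that searches the repository for code
    # supporting each claim.
    return {"evidence": {c: [] for c in state["claims"]}}

def verify_claims(state: AuditState) -> dict:
    # Placeholder for the agent that accepts or rejects each claim
    # based on the collected evidence.
    return {"verdicts": {c: "unverified" for c in state["claims"]}}

graph = StateGraph(AuditState)
graph.add_node("extract", extract_claims)
graph.add_node("investigate", investigate_repo)
graph.add_node("verify", verify_claims)
graph.set_entry_point("extract")
graph.add_edge("extract", "investigate")
graph.add_edge("investigate", "verify")
graph.add_edge("verify", END)
app = graph.compile()
```

Structuring the flow as an explicit graph is what makes retries and evidence requirements enforceable: each node can only advance the state it is responsible for.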
This workflow is orchestrated by a Python FastAPI backend that handles Devpost scraping, GitHub API interaction, agent execution, and result aggregation while managing rate limits and partial failures. Results are stored in a PostgreSQL database (Supabase) using Prisma for consistency and traceability. A Next.js 15 frontend presents the findings through a high-contrast, brutalist dashboard, with Auth0 enforcing secure, multi-tenant access for organizers.
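To illustrate the single-click scan flow, here is a hedged sketch of what the backend entry point can look like in FastAPI; the route path and the run_audit_pipeline helper are hypothetical names, not our actual API:

```python
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

@app.post("/hackathons/{hackathon_id}/scan")
async def start_scan(hackathon_id: str, background_tasks: BackgroundTasks):
    # Kick off the scrape -> analyze -> verify pipeline in the background
    # and return immediately so the dashboard can poll for progress.
    background_tasks.add_task(run_audit_pipeline, hackathon_id)
    return {"status": "queued", "hackathon_id": hackathon_id}

async def run_audit_pipeline(hackathon_id: str) -> None:
    ...  # scrape Devpost, resolve repos, run agents, persist results
```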
🚧 Challenges we ran into
Scaling GitHub analysis across many projects introduced immediate API rate limit constraints, forcing us to carefully optimize request patterns through batching, caching, and graceful degradation. We also discovered that determining legitimacy is rarely binary. Hackathon submissions often describe technologies and features in vague or marketing-heavy language, and the same capability can be implemented in many different ways, making naive keyword matching unreliable.
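A condensed sketch of the caching-and-degradation pattern, built on GitHub's documented X-RateLimit-* response headers; the in-memory cache and wait threshold are illustrative:

```python
import time
import requests

_cache: dict[str, dict] = {}

def fetch_with_backoff(url: str, token: str) -> dict | None:
    """Honor GitHub's rate-limit headers; degrade instead of failing the scan."""
    if url in _cache:
        return _cache[url]
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 403 and resp.headers.get("X-RateLimit-Remaining") == "0":
        reset = int(resp.headers.get("X-RateLimit-Reset", "0"))
        wait = max(0, reset - int(time.time()))
        if wait > 60:
            return None  # degrade: mark this check incomplete and move on
        time.sleep(wait)
        resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    _cache[url] = resp.json()
    return _cache[url]
```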
Another challenge was maintaining consistency across an asynchronous pipeline. Scraping, repository analysis, AI reasoning, and database writes all occur independently, so we had to ensure idempotent operations, partial-result handling, and stable identifiers to prevent duplicate or mismatched records. Designing for real-world messiness rather than ideal inputs was a constant challenge.
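For example, keying every write on a stable identifier and upserting makes re-runs safe. A sketch with the Prisma Python client, assuming a hypothetical Project model with a unique devpostUrl field:

```python
from prisma import Prisma

async def save_result(db: Prisma, devpost_url: str, verdict: str) -> None:
    # The Devpost URL is the stable identifier: re-running a scan updates
    # the same row instead of creating a duplicate record.
    await db.project.upsert(
        where={"devpostUrl": devpost_url},
        data={
            "create": {"devpostUrl": devpost_url, "verdict": verdict},
            "update": {"verdict": verdict},
        },
    )
```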
🏆 Accomplishments that we're proud of
We built a true dual-verification system that evaluates both when code was written and what the code actually does, significantly raising the standard for hackathon integrity checks. Our AI agents produce evidence-backed explanations instead of opaque flags, making results actionable for judges. We also successfully delivered a single-click scan experience that hides a complex, multi-stage pipeline behind a simple interaction. On top of that, we implemented secure organization-level access control and a distinctive, signal-focused UI that prioritizes clarity over decoration.
🧠 What we learned
We learned that AI agents are most effective when treated as components in a workflow rather than a single prompt. Breaking reasoning into specialized roles and enforcing evidence requirements dramatically improved reliability and interpretability. We also learned that building AI systems is only half the challenge. The real difficulty lies in integrating agents with APIs, databases, and user interfaces in a way that remains resilient under failures, rate limits, and incomplete data.
🚀 What's next for .gitcheck
Next, we plan to extend verification beyond a single event by checking submissions against recent hackathons to detect recycled or lightly modified projects. This includes comparing repository structure and code patterns across events to surface repeat submissions even when commit windows appear valid, as sketched below. Longer term, we aim to expose .gitcheck as a public API so other hackathon platforms can integrate automated integrity checks directly into their judging workflows.
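One possible starting point for that cross-event comparison: fingerprint each repository's file tree and measure overlap between submissions. The functions below are purely illustrative, not a committed design:

```python
import hashlib

def structure_fingerprint(file_paths: list[str]) -> str:
    """Hash the sorted file paths so structurally identical repos collide."""
    normalized = sorted(p.lower() for p in file_paths)
    return hashlib.sha256("\n".join(normalized).encode()).hexdigest()

def jaccard_similarity(paths_a: set[str], paths_b: set[str]) -> float:
    """Softer signal than an exact hash match: overlap of two file trees."""
    if not paths_a or not paths_b:
        return 0.0
    return len(paths_a & paths_b) / len(paths_a | paths_b)
```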