Problem Statement

Current State: Many hackathons explicitly prohibit submitting pre-existing work, requiring all demonstrated functionality to be built during the event timeframe. Despite these rules, teams regularly submit projects built beforehand—recycled from previous hackathons, existing startups, or side projects—to gain an unfair competitive advantage.

Impact:

  • Legitimate teams working under time constraints lose to polished, pre-built projects
  • Prize money rewards rule violations rather than hackathon effort
  • Organizers lack scalable tools to verify submission authenticity
  • Manual investigation is time-prohibitive (5-10 minutes per project × 50-200 submissions)

Why This Persists:

  • Teams rename projects to avoid detection
  • Judges lack time to investigate GitHub histories and previous submissions
  • Current detection relies on judges recognizing familiar projects
  • No systematic cross-referencing of participant histories exists

Solution Overview

DROPS deploys autonomous AI agents that investigate each submission across multiple data sources, perform semantic analysis to detect renamed/rebranded projects, and flag high-risk submissions for judge review. The system operates continuously during and after hackathons, learning from patterns to improve detection accuracy.
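The investigate-score-flag pipeline described above can be illustrated with a minimal sketch. All names here (`Finding`, `Submission`, `triage`) are hypothetical, not the actual DROPS implementation; the point is how independent signals from multiple sources combine into a single risk score for judge review.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    source: str        # e.g. "devpost", "github", "web"
    evidence: str      # human-readable evidence trail for judges
    confidence: float  # 0.0 - 1.0, how strongly this finding suggests prior work

@dataclass
class Submission:
    team: str
    title: str
    description: str
    findings: list = field(default_factory=list)

    @property
    def risk_score(self) -> float:
        # Treat findings as independent signals: 1 - prod(1 - c) climbs toward
        # 1.0 as corroborating evidence accumulates from different sources.
        remaining = 1.0
        for f in self.findings:
            remaining *= (1.0 - f.confidence)
        return 1.0 - remaining

def triage(submissions, threshold=0.7):
    """Return submissions at or above the risk threshold, highest risk first."""
    flagged = [s for s in submissions if s.risk_score >= threshold]
    return sorted(flagged, key=lambda s: s.risk_score, reverse=True)
```

With two moderate findings (0.5 and 0.6), the combined risk is 1 - 0.5 × 0.4 = 0.8, enough to surface the submission for judge review under the sketched 0.7 threshold.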

Core Capabilities:

  1. Multi-Source Investigation

    • Scans all team members' DevPost submission histories
    • Analyzes public GitHub repositories (commits, timestamps, project structure)
    • Performs general web search for matching startups/products
    • Cross-references project descriptions, features, and technical stacks
  2. Semantic Matching

    • Identifies projects with different names but identical functionality
    • Detects feature overlap (e.g., "AI meeting assistant" vs "conference productivity tool")
    • Compares use cases, target users, and value propositions
    • Flags commits predating hackathon start time
  3. Autonomous Learning

    • Builds pattern database of common evasion tactics (name changes, team splitting)
    • Improves confidence scoring based on judge feedback
    • Adapts search strategies based on detection success rates
    • Identifies repeat offenders across multiple events
  4. Judge-Optimized Reporting

    • Prioritizes findings by confidence level and severity
    • Presents evidence trails (commit history, previous submissions, web presence)
    • Provides side-by-side comparisons of current vs. prior work
    • Enables one-click deep dives into flagged evidence
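The semantic-matching step (capability 2) can be sketched with a simple bag-of-words cosine similarity over project descriptions. This is a deliberately minimal stand-in: a production system would more likely use sentence embeddings, and every name below is illustrative rather than DROPS's actual API.

```python
import math
import re
from collections import Counter

def _tokens(text: str) -> Counter:
    # Lowercase word counts; a crude proxy for semantic content.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def description_similarity(a: str, b: str) -> float:
    """Cosine similarity between two project descriptions (0.0 - 1.0)."""
    ta, tb = _tokens(a), _tokens(b)
    dot = sum(ta[w] * tb[w] for w in set(ta) & set(tb))
    norm = math.sqrt(sum(v * v for v in ta.values())) * \
           math.sqrt(sum(v * v for v in tb.values()))
    return dot / norm if norm else 0.0

# Renamed projects keep most of their description vocabulary, so they
# still score high even though the titles differ.
prior   = "An AI meeting assistant that transcribes calls and summarizes action items"
current = "A conference productivity tool that transcribes calls and summarizes action items"
score = description_similarity(prior, current)  # ~0.64 despite different names
```

Even this crude measure catches the "AI meeting assistant" vs. "conference productivity tool" rebrand from the example above, since the shared feature vocabulary dominates the renamed surface.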
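The commit-timestamp check (flagging commits that predate the hackathon start) might look like the following. Commit dates are assumed to arrive as timezone-aware ISO-8601 strings, as returned by common Git hosting APIs; the function name is an assumption for illustration.

```python
from datetime import datetime, timezone

def commits_before_start(commit_dates, hackathon_start):
    """Return the ISO timestamps of commits authored before the event start.

    Note: author dates can be forged (e.g. `git commit --date=...`), so this
    signal should corroborate other evidence, not replace it.
    """
    start = datetime.fromisoformat(hackathon_start).astimezone(timezone.utc)
    return [d for d in commit_dates
            if datetime.fromisoformat(d).astimezone(timezone.utc) < start]
```

A single pre-event commit is strong but not conclusive evidence, which is why the reporting layer presents the full commit history as an evidence trail rather than auto-disqualifying.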
