Inspiration

Every developer has opened a legacy codebase and felt overwhelmed - "where do I even start?" We wanted to build a tool that could instantly diagnose repository health without requiring hours of manual review or complex configuration. The breakthrough came when we realized code quality issues are like haunting problems: ghosts (missing tests), curses (structural issues), and zombies (misplaced code). This spooky metaphor made code quality memorable and fun, perfect for the Kiroween hackathon.

What it does

RepoForge is a smart code auditor for JavaScript/TypeScript projects that:

- Automatically detects 15+ frameworks (React, Next.js, Express, Vue, Nuxt, Angular, etc.) with zero configuration
- Finds quality issues - missing tests, structural problems, misplaced code, security vulnerabilities, and naming violations
- Provides smart recommendations based on your tech stack and architecture patterns
- Integrates with CI/CD - fail builds on critical issues, pass on minor ones
- Works 100% locally - no API keys, no external services, complete privacy
- Generates AI-powered code from natural language prompts
- Connects to Kiro IDE via MCP server for conversational code auditing

All with a single command: `repoforge audit`

How we built it

Core Architecture:

- Built entirely in TypeScript for type safety and developer experience
- Smart Detection Engine analyzes package.json, file structures, and import patterns to identify frameworks with confidence scoring
- Rule-Based Analysis System with severity levels (CRITICAL → SUGGESTION) and configurable thresholds
- Framework-Aware Validators that understand Next.js pages belong in /pages, React components in /components, etc.
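As a rough illustration of the weighted confidence scoring described above, a detection signal could be modeled like this. The `Signal` shape, the signal names, and the weights are all hypothetical placeholders, not RepoForge's actual implementation:

```typescript
// Hypothetical sketch of weighted confidence scoring for framework detection.
// Signal names and weights are illustrative only.

interface ProjectContext {
  dependencies: Set<string>; // from package.json
  files: Set<string>;        // relative paths in the repo
}

interface Signal {
  name: string;
  weight: number; // relative importance of this signal
  present: (ctx: ProjectContext) => boolean;
}

const nextJsSignals: Signal[] = [
  { name: "dependency",  weight: 0.5, present: (c) => c.dependencies.has("next") },
  { name: "config file", weight: 0.3, present: (c) => c.files.has("next.config.js") },
  { name: "pages dir",   weight: 0.2, present: (c) => Array.from(c.files).some((f) => f.startsWith("pages/")) },
];

// Confidence is the weighted fraction of signals that fire, so no single
// indicator decides the outcome on its own.
function confidence(signals: Signal[], ctx: ProjectContext): number {
  const total = signals.reduce((sum, s) => sum + s.weight, 0);
  const hit = signals
    .filter((s) => s.present(ctx))
    .reduce((sum, s) => sum + s.weight, 0);
  return hit / total;
}
```

With `next` in the dependencies and a `next.config.js` present but no `pages/` directory, this sketch scores 0.8 for Next.js: strong evidence from two signals outweighs one missing one.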

Key Technical Components:

  1. Static Analysis - AST parsing, pattern matching, dependency graph analysis
  2. Security Scanner - Multi-pattern validation for hardcoded credentials and vulnerabilities
  3. MCP Server - Built with @modelcontextprotocol/sdk for AI assistant integration
  4. CLI Interface - Using Commander.js for intuitive command-line experience
  5. Parallel Processing - Performance optimization for large codebases (10k+ files)
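To show how severity levels could feed the CI/CD gate described above, here is a minimal sketch. The `Severity` ordering, the `Finding` shape, and the `shouldFailBuild` helper are assumptions for illustration, not RepoForge's real API:

```typescript
// Hypothetical sketch of severity-gated CI checks; not RepoForge's real API.

// Lower value = more severe, matching the CRITICAL → SUGGESTION ordering.
enum Severity { CRITICAL = 0, ERROR = 1, WARNING = 2, SUGGESTION = 3 }

interface Finding {
  ruleId: string;
  severity: Severity;
  message: string;
  file: string;
}

// A CI run fails only when some finding is at or above the configured
// threshold, so critical issues break the build while suggestions pass.
function shouldFailBuild(findings: Finding[], threshold: Severity): boolean {
  return findings.some((f) => f.severity <= threshold);
}
```

A team could then run the gate with `Severity.CRITICAL` to block only the worst issues, or loosen it to `Severity.WARNING` once the backlog is under control.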

Challenges we ran into

- Framework Detection Accuracy: Different projects structure files differently. We solved this by implementing a weighted confidence-scoring system that evaluates multiple signals (dependencies, config files, directory patterns) rather than relying on a single indicator.
- Performance at Scale: Initial implementations took 30+ seconds on large repos. We added parallel file processing, smart caching, and lazy evaluation to handle massive codebases efficiently.
- Reducing False Positives: Generic structural rules created too much noise. We made rules framework-aware - for example, Next.js API routes in /pages/api are valid, not "zombies." This required deep understanding of each framework's conventions.
- Security Scanning Reliability: Detecting hardcoded credentials without false positives was tricky. We implemented context-aware pattern matching that considers variable names, file types, and surrounding code.
- MCP Integration Complexity: Making RepoForge work globally across all projects in Kiro IDE required careful setup scripts and configuration management for both Windows and Unix systems.
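A context-aware credential check of the kind described above might combine variable names, file paths, and value shape before flagging anything. This is a simplified sketch under assumed heuristics; the patterns and thresholds are illustrative, not RepoForge's actual rules:

```typescript
// Minimal sketch of context-aware secret detection. The regexes and
// length threshold are illustrative assumptions only.

const SECRET_NAME = /(password|secret|api[_-]?key|token)/i;
const LITERAL_ASSIGNMENT = /(const|let|var)?\s*([A-Za-z_$][\w$]*)\s*[:=]\s*["']([^"']+)["']/;

function looksLikeHardcodedSecret(line: string, filePath: string): boolean {
  // Skip files where literals are expected (tests, examples) to cut noise.
  if (/\.(test|spec)\.[jt]s$/.test(filePath) || filePath.includes("examples/")) {
    return false;
  }

  const m = LITERAL_ASSIGNMENT.exec(line);
  if (!m) return false;

  const [, , name, value] = m;
  // Require both a suspicious variable name and a non-trivial value, so
  // `const password = ""` or `const label = "API Key field"` don't fire.
  return SECRET_NAME.test(name) && value.length >= 8 && !/\s/.test(value);
}
```

Checking several weak signals together, rather than any one pattern alone, is what keeps the false-positive rate down.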

Accomplishments that we're proud of

✅ True Zero-Config Experience - Works instantly on any JavaScript/TypeScript project without setup
✅ High Detection Accuracy - 95%+ confidence in framework detection across diverse project structures
✅ Production-Ready CI/CD Integration - Teams can enforce code quality standards with configurable severity thresholds
✅ Privacy-First Design - 100% local analysis, no data leaves your machine
✅ Extensible Rule Engine - Teams can write custom rules for their specific standards
✅ Natural Language Interface - Chat with Kiro to audit repos conversationally
✅ Comprehensive Documentation - Clear guides for setup, configuration, and custom rule authoring
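To give a feel for the extensible rule engine, a team-specific custom rule might look like this. The `Rule` and `RuleContext` shapes are hypothetical placeholders, not RepoForge's real plugin API:

```typescript
// Illustrative sketch of a custom rule; the interfaces are assumptions,
// not RepoForge's actual extension API.

interface RuleContext {
  filePath: string;
  source: string;
  report: (message: string) => void;
}

interface Rule {
  id: string;
  check: (ctx: RuleContext) => void;
}

// Example team standard: flag TODO comments left in committed code.
const noTodoComments: Rule = {
  id: "team/no-todo-comments",
  check: (ctx) => {
    ctx.source.split("\n").forEach((line, i) => {
      if (/\/\/\s*TODO\b/.test(line)) {
        ctx.report(`TODO comment at ${ctx.filePath}:${i + 1}`);
      }
    });
  },
};
```

Keeping rules as plain objects with a single `check` function makes them easy to unit-test in isolation before wiring them into an audit run.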

What we learned

- Developer Experience is Everything: A tool can be technically impressive but fail if it requires complex setup. Making RepoForge "just work" required anticipating dozens of edge cases and providing sensible defaults.
- Local-First Tools are Highly Valued: Developers care deeply about privacy and not needing API keys. The 100% local architecture became a major selling point.
- Severity Levels Transform Usefulness: Without severity filtering, audit results were overwhelming. Adding CRITICAL → SUGGESTION levels made RepoForge practical for real-world CI/CD pipelines.
- Framework Context Matters: Generic linting tools miss nuances. Understanding that Next.js components can live in /app or /pages depending on the router version made our analysis far more accurate.
- Good Defaults + Configurability = Adoption: Zero-config for quick wins, but configurable rules for team-specific standards struck the right balance.

What's next for RepoForge

🚀 Multi-Language Support - Extend beyond JavaScript/TypeScript to Python, Go, Rust, and Java
🔍 Advanced Security Analysis - Deeper vulnerability scanning with CVE database integration
🤖 AI-Powered Auto-Fixes - Not just detect issues, but automatically generate pull requests to fix them
📊 Project Health Dashboard - Visual analytics showing code quality trends over time
🔗 IDE Extensions - Native VS Code, JetBrains, and Vim plugins for real-time feedback
🏢 Team Analytics - Aggregate metrics across multiple repositories for engineering leadership
⚡ Incremental Analysis - Only re-analyze changed files for lightning-fast CI/CD checks
🌐 Cloud Service - Optional hosted version for teams wanting centralized dashboards (while keeping local analysis option)
