## Inspiration

Every team wastes hours in code review catching the same recurring issues — SQL injection, hardcoded secrets, missing error handling. I wanted a tool that does the boring, pattern-based catching automatically so human reviewers can focus on architecture and logic.

The bigger frustration: every AI code tool requires an expensive API key. Most developers and small teams can't afford $20/month just to review code. That felt wrong.

## What I built

PRReviewer — an automated code review dashboard that combines two engines:

  • Static analysis — instant, zero-cost pattern matching for 10+ vulnerability types, including SQL injection, XSS, hardcoded secrets, command injection, path traversal, and weak crypto
  • AI analysis — deep contextual review using free OpenRouter models with a 7-model fallback chain, so you never hit a dead end from rate limits
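
As a sketch of what a single pattern-based detector on the static side might look like (the `Finding` shape, rule name, and regex here are illustrative, not PRReviewer's actual API):

```typescript
interface Finding {
  rule: string;
  line: number;
  message: string;
}

// Flag SQL keywords followed by string concatenation — a classic injection smell.
// (Illustrative pattern; a real detector needs more cases and exclusions.)
const SQL_INJECTION = /(SELECT|INSERT|UPDATE|DELETE)[^;]*\+\s*\w+/i;

function scanForSqlInjection(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    if (SQL_INJECTION.test(text)) {
      findings.push({
        rule: "sql-injection",
        line: i + 1, // report 1-based line numbers, as in the dashboard
        message: "Possible SQL built via string concatenation",
      });
    }
  });
  return findings;
}
```

Each detector returns plain `Finding` objects, so static and AI results can be merged into one annotated list.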

Connect your GitHub account, select a PR, and get findings annotated by file and line number in seconds. Or just paste a code snippet for instant manual review — no GitHub needed.

## How I built it

  • Monorepo with Turborepo: apps/web (Next.js 14), services/api (Express), packages/core (shared analysis engine)
  • Static analysis runs entirely server-side with zero external calls — regex detectors with Shannon entropy scoring for secret detection
  • AI layer is provider-agnostic: Claude, Ollama (local), and OpenRouter (free cloud models)
  • OpenRouter integration uses raw fetch against the OpenAI-compatible endpoint — no new npm packages, 7 fallback models across different backends so one rate limit doesn't kill the whole chain
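
The Shannon entropy scoring behind secret detection boils down to a few lines. A minimal sketch, with an illustrative threshold (the shipped tuning differs):

```typescript
// Shannon entropy of a string in bits per character. High-entropy tokens
// (API keys, random secrets) score well above ordinary identifiers.
function shannonEntropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let entropy = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

const SECRET_THRESHOLD = 4.0; // bits/char — illustrative; tune against real code

function looksLikeSecret(token: string): boolean {
  // Short tokens can't carry enough entropy to be meaningful signals.
  return token.length >= 20 && shannonEntropy(token) > SECRET_THRESHOLD;
}
```

English-like identifiers sit around 3 to 3.7 bits/char, while random base62 keys approach 5, which is what makes a simple threshold workable.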

## Challenges

False positives were brutal. The entropy-based secret detector kept flagging Tailwind CSS class strings like `min-h-screen bg-[oklch(0.10_0.04_265)]` as hardcoded secrets. I had to build CSS-pattern exclusion logic and skip entropy checks on `className=` lines entirely.
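
The exclusion pass can be sketched roughly like this (regexes and names are illustrative and simpler than what actually ships):

```typescript
// Kebab-case utility tokens like "min-h-screen" or "bg-[oklch(0.10_0.04_265)]":
// lowercase segments joined by hyphens, allowing Tailwind's bracket syntax.
const CSS_LIKE = /^(?:[a-z0-9]+(?:-[a-z0-9[\]().%_]+)+)(?:\s|$)/i;

// Returns true when a token should be exempt from entropy scoring.
function shouldSkipEntropyCheck(line: string, token: string): boolean {
  if (/className\s*=/.test(line)) return true;     // JSX class lists
  if (/class\s*=\s*["']/.test(line)) return true;  // HTML class attributes
  return CSS_LIKE.test(token);                     // CSS-ish utility tokens
}
```

Running this gate before `looksLikeSecret` keeps genuinely random tokens flaggable while whole classes of styling strings never reach the entropy check.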

Free models use chain-of-thought. Qwen3-Coder emits `<think>...</think>` blocks before its JSON response. The greedy regex matched from a `[` inside the thinking prose to the last `]` of the findings array, producing invalid JSON that silently returned empty results. I fixed it with a multi-strategy parser that strips the think blocks first.
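
A rough sketch of such a multi-strategy parser (illustrative, not the exact shipped code):

```typescript
// Parse a model response into a findings array, surviving chain-of-thought noise.
function parseFindings(raw: string): unknown[] {
  // Strategy 1: strip reasoning blocks entirely, including an unclosed trailing one.
  const cleaned = raw
    .replace(/<think>[\s\S]*?<\/think>/g, "")
    .replace(/<think>[\s\S]*$/, "");
  // Strategy 2: try the whole remainder as JSON first.
  try {
    const parsed = JSON.parse(cleaned.trim());
    if (Array.isArray(parsed)) return parsed;
  } catch { /* fall through */ }
  // Strategy 3: take the outermost [...] span of what's left.
  const start = cleaned.indexOf("[");
  const end = cleaned.lastIndexOf("]");
  if (start !== -1 && end > start) {
    try {
      return JSON.parse(cleaned.slice(start, end + 1));
    } catch { /* fall through */ }
  }
  return []; // nothing parseable: return empty rather than crash
}
```

Stripping the think blocks first means the bracket match in strategy 3 can no longer start inside the model's reasoning prose.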

Model availability. Free OpenRouter models go offline constantly. I built a fallback chain spanning 7 models across Qwen, NVIDIA, Mistral, Google, and Meta backends, so if one provider rate-limits, the next picks up automatically.
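
A minimal sketch of that fallback loop against OpenRouter's OpenAI-compatible chat endpoint (model IDs and error handling are simplified here; the real chain and ordering live in the app's config):

```typescript
const FALLBACK_MODELS = [
  "qwen/qwen3-coder:free",
  "mistralai/mistral-small:free",
  // ...remaining models in the chain
];

async function reviewWithFallback(apiKey: string, prompt: string): Promise<string> {
  let lastError: unknown;
  for (const model of FALLBACK_MODELS) {
    try {
      const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiKey}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model,
          messages: [{ role: "user", content: prompt }],
        }),
      });
      // Rate-limited or temporarily offline: move on to the next model.
      if (res.status === 429 || res.status === 503) continue;
      if (!res.ok) throw new Error(`OpenRouter returned ${res.status}`);
      const data = await res.json();
      return data.choices[0].message.content;
    } catch (err) {
      lastError = err; // network failure: also try the next model
    }
  }
  throw new Error(`All fallback models failed: ${lastError}`);
}
```

Because each model talks the same OpenAI-compatible wire format, the loop needs no per-provider branching — only the `model` field changes between attempts.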

## What I learned

You don't need an expensive LLM to catch 80% of real code issues. A well-tuned static analyser gets you there instantly and for free. AI fills the gap for logic bugs and context-sensitive issues that patterns can't catch.
