AI Code Review Assistant — Project Story

🌟 Inspiration

This project was inspired by recurring pain points in modern software development. Code reviews are essential, yet they often suffer from time pressure, shallow inspection, or missed risks. I noticed that developers frequently switch between the Pull Request page, external documentation, and AI tools. When Chrome introduced Built-in AI APIs (Prompt, Writer, Multimodal), I saw an opportunity to embed an intelligent assistant directly into the PR workflow. The goal was simple:

Reduce context-switching and improve review quality through ambient, in-browser AI.


📚 What I Learned

Throughout development, I gained deeper insight into several areas:

Chrome Built-in AI Integration

I learned how to:

  • Trigger on-device model downloads,
  • Stream suggestions with the Prompt API,
  • Generate documentation with the Writer API,
  • Prepare for future architectural-diagram analysis with the Multimodal API.
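As a hedged sketch of the streaming piece (the Prompt API surface is experimental and has changed across Chrome versions; the session shape and `promptStreaming` name here are assumptions based on the public explainer, so the session type is kept structural):

```typescript
// Minimal sketch of streaming a review suggestion into the sidebar.
// The structural type lets the same code run against Chrome's
// experimental Prompt API session or a test double.
type PromptSession = {
  promptStreaming(input: string): AsyncIterable<string>;
};

async function streamReview(
  session: PromptSession,
  diff: string,
  onChunk: (text: string) => void,
): Promise<string> {
  let full = "";
  const prompt = `Review this diff for bugs, risks, and style issues:\n${diff}`;
  for await (const chunk of session.promptStreaming(prompt)) {
    full += chunk;  // accumulate the complete suggestion
    onChunk(chunk); // render incrementally as text arrives
  }
  return full;
}
```

Streaming matters here because on-device generation is not instant; showing partial text keeps the sidebar feeling responsive.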

Prompt Engineering

Fine-tuning prompts for concise, actionable suggestions was crucial. I used a simple risk model:

Risk Score = Impact / Mitigation Effort
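To make the model concrete, here is a minimal sketch of ranking findings by that score (the `Finding` shape and the 1–10 scales are hypothetical, not the extension's actual data model):

```typescript
// Hypothetical finding shape: impact and mitigation effort on a 1-10 scale.
interface Finding {
  title: string;
  impact: number;           // how bad the issue is if shipped
  mitigationEffort: number; // how costly it is to fix (must be > 0)
}

// Risk Score = Impact / Mitigation Effort
function riskScore(f: Finding): number {
  return f.impact / f.mitigationEffort;
}

// Surface high-impact, cheap-to-fix findings first.
function prioritize(findings: Finding[]): Finding[] {
  return [...findings].sort((a, b) => riskScore(b) - riskScore(a));
}
```

Dividing by effort biases the review toward fixes that pay off immediately, rather than only toward severity.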

Front-End Architecture

By building with React 18, TypeScript, TailwindCSS, and Vite, I strengthened my understanding of:

  • Content script injection,
  • Manifest V3 constraints,
  • Chrome Storage API,
  • Multilingual UI workflows.
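For orientation, a stripped-down manifest.json along these lines wires the pieces together under Manifest V3 (file names and host patterns here are illustrative, not the project's actual manifest):

```json
{
  "manifest_version": 3,
  "name": "AI Code Review Assistant",
  "version": "0.1.0",
  "permissions": ["storage"],
  "background": { "service_worker": "background.js" },
  "content_scripts": [
    {
      "matches": ["https://github.com/*", "https://gitlab.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

Manifest V3's service-worker background model is what forces the message-passing design described below: the worker can be suspended at any time, so state lives in chrome.storage rather than in memory.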

🏗️ How I Built It

The project’s architecture combines several parts:

  • Content Script injects a sidebar into GitHub/GitLab PR pages.
  • Background Service Worker manages messaging and permissions.
  • AI Integration Layer bundles file diffs, comments, and metadata into prompts.
  • React UI Tabs display analysis categories: bugs, performance, security, documentation, tests, file complexity, and more.
  • i18n System toggles English and Traditional Chinese.
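The AI Integration Layer step above can be sketched as a pure function (the `PrContext` shape, category names, and prompt wording are hypothetical):

```typescript
interface PrContext {
  title: string;
  diff: string;
  comments: string[];
}

type Category = "bugs" | "performance" | "security" | "documentation" | "tests";

// Bundle the diff, existing discussion, and PR metadata into one
// category-focused prompt, so each UI tab asks a narrow question.
function buildPrompt(ctx: PrContext, category: Category): string {
  return [
    `You are reviewing the pull request "${ctx.title}".`,
    `Focus only on ${category}. Be concise and actionable.`,
    `Existing reviewer comments:\n${ctx.comments.join("\n") || "(none)"}`,
    `Diff:\n${ctx.diff}`,
  ].join("\n\n");
}
```

One prompt per tab keeps each request small enough for the on-device model and makes the output easy to route back to the right UI section.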

The build workflow:

```shell
npm install       # install dependencies
npm run dev       # watch mode
npm run build     # production build
```

Developers can then load the dist/ folder as an unpacked extension from chrome://extensions.


🧩 Challenges Faced

DOM Complexity

PR pages evolve frequently. Diff extraction required robust selectors and mutation observers.
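One pattern that helped is probing an ordered list of known selectors, so a markup change degrades gracefully instead of breaking extraction outright (a simplified sketch; the selectors shown are placeholders, not the exact ones the extension ships):

```typescript
// Minimal structural type so this works with a real Document or a test double.
interface QueryRoot {
  querySelector(selector: string): unknown;
}

// Try newer markup first, then older fallbacks, and return the first match.
function findDiffContainer(
  root: QueryRoot,
  selectors: string[],
): unknown | null {
  for (const selector of selectors) {
    const el = root.querySelector(selector);
    if (el) return el;
  }
  return null;
}
```

A MutationObserver then re-runs this lookup when the PR page re-renders, which is what keeps extraction alive across GitHub/GitLab UI updates.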

On-Device Model Requirements

Users must:

  • Run Chrome 127+,
  • Enable experimental flags,
  • Trigger model download manually.

Providing clear fallback guidance was necessary.
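Mapping the model's state to user guidance can be sketched as follows (the availability states follow the built-in AI explainer and may differ between Chrome releases; the messages are illustrative):

```typescript
// Availability states as described in the built-in AI explainer (assumption).
type ModelAvailability =
  | "unavailable"
  | "downloadable"
  | "downloading"
  | "available";

// Translate a model state into actionable guidance for the sidebar.
function availabilityGuidance(state: ModelAvailability): string {
  switch (state) {
    case "available":
      return "Ready: analysis runs fully on-device.";
    case "downloading":
      return "The on-device model is downloading; analysis starts when it finishes.";
    case "downloadable":
      return "Click Analyze once to trigger the one-time on-device model download.";
    case "unavailable":
      return "Built-in AI is unavailable. Use Chrome 127+ and enable the experimental flags.";
  }
}
```

Surfacing the state explicitly turned a confusing silent failure into a one-line instruction the user can act on.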

Prompt Robustness

Early versions produced vague output. Iterative prompting reduced hallucinations and noise.

Performance Considerations

Rendering an attached sidebar on large PRs caused layout shifts. Debouncing and lazy rendering mitigated this.
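The debouncing mentioned above is standard trailing-edge debouncing; a minimal version is:

```typescript
// Collapse bursts of calls (e.g. rapid DOM mutations on a large PR)
// into one trailing invocation after `waitMs` of quiet.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

Wrapping the sidebar's re-render in this means a thousand-file diff triggers one layout pass instead of one per mutation.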

Multilingual Consistency

Keeping the analysis output and the UI strings synchronized across both languages added maintenance overhead.
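A minimal sketch of the lookup side, with an English fallback so a missing translation never blanks the UI (the keys and strings here are illustrative, not the extension's real message catalog):

```typescript
type Locale = "en" | "zh-TW";

// Illustrative message catalog; the real one covers every UI string.
const messages: Record<Locale, Record<string, string>> = {
  en: { analyze: "Analyze", security: "Security issues" },
  "zh-TW": { analyze: "分析", security: "安全性問題" },
};

// Look up a key in the active locale, falling back to English, then the key.
function t(locale: Locale, key: string): string {
  return messages[locale][key] ?? messages.en[key] ?? key;
}
```

The fallback chain is what makes an out-of-sync catalog a cosmetic issue rather than a broken screen.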


✅ Closing Thoughts

This project taught me that impactful developer tools must integrate directly into existing workflows rather than require new ones. By combining UI ergonomics, prompt engineering, and Chrome’s emerging AI capabilities, the assistant helps catch issues earlier and reduces cognitive load during reviews. In short:

Developer Efficiency = Context Retained / Tools Switched

I look forward to improving multimodal support, quality scoring, team collaboration features, and cross-platform compatibility.
