Inspiration

Open-source maintainers are currently overwhelmed by a massive wave of AI-generated pull requests. Several major projects have restricted or closed external contributions because reviewing low-quality code became unmanageable. We were inspired by this crisis and asked a simple question: what if AI could help maintainers handle AI-generated code? OpenGuard was created to protect the open-source ecosystem and to help contributors improve their code.

What it does

OpenGuard is an AI-powered tool that analyzes and automatically corrects GitHub pull requests using Gemini 3. Users paste a pull request URL, and OpenGuard analyzes the code quality, detects problems and best-practice violations, assigns a quality score, generates corrected code, produces a detailed educational report, and provides a dashboard that helps maintainers prioritize reviews.
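As a minimal sketch of the intake step, the pasted pull request URL can be parsed into the owner, repository, and PR number that the GitHub API expects. The helper name and error handling below are our own illustration, not necessarily OpenGuard's exact implementation:

```javascript
// Parse a GitHub pull request URL into the pieces the GitHub API needs.
// Hypothetical helper; the real code may handle more URL variants.
function parsePullRequestUrl(url) {
  const match = new URL(url).pathname.match(/^\/([^/]+)\/([^/]+)\/pull\/(\d+)/);
  if (!match) throw new Error(`Not a pull request URL: ${url}`);
  const [, owner, repo, pullNumber] = match;
  return { owner, repo, pull_number: Number(pullNumber) };
}
```

For example, `parsePullRequestUrl("https://github.com/facebook/react/pull/12345")` yields `{ owner: "facebook", repo: "react", pull_number: 12345 }`, which maps directly onto the parameters of the GitHub REST pulls endpoints.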

How we built it

We built OpenGuard using a modern full-stack architecture:

- Gemini 3 Pro for code analysis and correction
- GitHub REST API (Octokit) to fetch pull request data
- Node.js and Express for the backend API
- React, Tailwind CSS, and Monaco Editor for the frontend
- A diff viewer and analytics dashboard for maintainers
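To illustrate one piece of the stack above, a diff viewer needs the raw unified-diff lines turned into row data the UI can render. The sketch below is an assumed simplification (it does not align paired additions and removals the way a full side-by-side viewer would):

```javascript
// Turn the body of a unified-diff hunk into rows for a two-column viewer.
// Hypothetical helper illustrating how diff data can feed the frontend.
function toSideBySide(hunkLines) {
  return hunkLines.map((line) => {
    const text = line.slice(1); // drop the leading "+", "-", or " " marker
    if (line.startsWith("+")) return { left: null, right: text }; // added
    if (line.startsWith("-")) return { left: text, right: null }; // removed
    return { left: text, right: text }; // unchanged context line
  });
}
```

Each row then maps cleanly onto a two-pane layout such as Monaco's diff view or a custom table.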

Challenges we ran into

One major challenge was analyzing large pull requests while respecting API rate limits and context constraints. We also had to design prompts that produce consistent, structured outputs instead of generic AI responses. Another challenge was building a clean UI for code comparison and making the results understandable for both contributors and maintainers.
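One common tactic for the structured-output problem is to prompt the model for JSON and then validate the reply before trusting it. The field names and fence-stripping below are illustrative assumptions, not OpenGuard's exact schema:

```javascript
// Validate the model's raw reply against the structure we prompted for.
// Hypothetical schema: { score: number, issues: array }.
function parseAnalysis(rawReply) {
  // Models sometimes wrap JSON in a markdown code fence; strip it first.
  const cleaned = rawReply
    .trim()
    .replace(/^`{3}(?:json)?\s*/i, "")
    .replace(/`{3}\s*$/, "");
  const data = JSON.parse(cleaned);
  if (typeof data.score !== "number" || !Array.isArray(data.issues)) {
    throw new Error("Model reply did not match the expected schema");
  }
  return { score: data.score, issues: data.issues };
}
```

Rejecting malformed replies early makes it possible to retry the model call instead of surfacing a generic or broken report to the user.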

Accomplishments that we're proud of

We successfully built a working MVP that: analyzes real public pull requests, generates corrected code and downloadable reports, provides a quality scoring system, and offers a dashboard for maintainers. We are especially proud of the educational aspect, where OpenGuard explains mistakes and teaches contributors how to improve their code.

What we learned

We learned that generative AI can be both a problem and a solution. Prompt engineering and structured output constraints are critical for reliable AI tools. We also learned how to integrate large language models into real developer workflows using GitHub APIs and modern frontend tooling.

What's next for OpenGuard

Next, we plan to extend OpenGuard beyond pull request analysis into the developer workflow itself.

We will build a code editor extension (for VS Code and other IDEs) that provides real-time AI feedback during development. This will help developers detect and fix issues instantly, before submitting their pull requests.

We also plan to integrate a secure sandbox environment where OpenGuard can automatically apply corrections, run static checks, and validate the repository. Once the code passes all quality checks, OpenGuard will generate a link to a downloadable corrected version of the project, allowing developers to test the improvements locally before resubmitting their pull request.

In the future, we aim to integrate GitHub webhooks for continuous PR monitoring, add authentication and persistent storage, support multiple programming languages, and open-source OpenGuard for the global developer community.

Our long-term vision is to make OpenGuard a standard AI quality gate for open-source development worldwide.
