Inspiration

Modern AI models can reason over entire codebases, but they still fail on real-world repositories.
The problem is not intelligence — it’s reliability.

Real projects often have broken builds, missing environment documentation, unclear structure, and unsafe workflows. When AI tools encounter these issues, they either hallucinate fixes, give up, or suggest changes that developers cannot trust.

We were inspired to build Autopatch after repeatedly seeing AI coding tools fail on messy but realistic repositories. We wanted to explore what AI looks like in the Action Era — not suggesting code, but executing, verifying, and delivering fixes safely.


What it does

Autopatch takes a GitHub repository URL and autonomously produces a verified Pull Request.

Instead of generating suggestions, Autopatch:

  • Clones the repository
  • Detects its framework, package manager, and build commands
  • Generates a machine-readable project steering file (ai.project.yml)
  • Runs the real install and build process inside Docker
  • Analyzes actual build errors
  • Uses Gemini 3 to generate a minimal unified diff patch
  • Applies the patch safely on a dedicated branch
  • Rebuilds to verify the fix
  • Commits the change and opens a Pull Request with full artifacts

Every fix is executed, verified, and delivered through a standard GitHub workflow.
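
A minimal sketch of that loop in Python follows. The helper functions (detect_stack, run_build_in_docker, generate_patch, apply_patch, open_pull_request) are hypothetical stand-ins for the real components described under "How we built it", and the retry budget and branch name are assumptions, not Autopatch's exact configuration:

```python
# Illustrative sketch of the Autopatch loop; helper functions are hypothetical
# stand-ins for the real components, and defaults are assumptions.
import subprocess
import tempfile
from pathlib import Path

import yaml  # PyYAML, used here only to write the steering file

MAX_ATTEMPTS = 3  # assumed retry budget for the patch -> rebuild loop


def autopatch(repo_url: str, branch: str = "autopatch/fix-build") -> None:
    workdir = Path(tempfile.mkdtemp())
    subprocess.run(["git", "clone", repo_url, str(workdir)], check=True)
    # All work happens on a dedicated branch; main is never touched.
    subprocess.run(["git", "-C", str(workdir), "checkout", "-b", branch], check=True)

    # Detect framework, package manager, and build commands, then persist them
    # as the machine-readable steering file.
    steering = detect_stack(workdir)  # hypothetical helper returning a dict
    (workdir / "ai.project.yml").write_text(yaml.safe_dump(steering))

    for _ in range(MAX_ATTEMPTS):
        ok, build_log = run_build_in_docker(workdir)   # real install + build
        if ok:
            break
        diff = generate_patch(build_log, steering)     # minimal unified diff from Gemini
        apply_patch(workdir, diff)                     # validated, then applied on the branch
    else:
        raise RuntimeError("build still failing after retry budget")

    # Commit tracked changes (the real system is stricter about what it stages),
    # push the branch, and deliver the fix as a Pull Request.
    subprocess.run(["git", "-C", str(workdir), "commit", "-am", "autopatch: verified fix"], check=True)
    subprocess.run(["git", "-C", str(workdir), "push", "-u", "origin", branch], check=True)
    open_pull_request(repo_url, branch)
```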


How we built it

Autopatch is built as an orchestration system, not a chat interface.

We built Autopatch around the Gemini 3 API, which sits at the core of the workflow:

  • Backend: FastAPI orchestrates the job lifecycle, patch loop, and artifact management.
  • Gemini 3 API is used for:
    • Generating ai.project.yml to help the agent understand the repository
    • Producing readiness reports that explain missing or unclear project context
    • Generating unified diff patches based on real compiler and runtime errors
  • Docker is used to run installs and builds in a clean, reproducible environment.
  • GitHub integration ensures all changes are delivered via Pull Requests, never touching the main branch.
  • Frontend: A Next.js UI provides live job status, logs, artifacts, and PR links.

This closed-loop design ensures that AI output is always verified by execution.
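
To make those pieces concrete, here is a hedged sketch of the helpers used in the loop above, assuming the google-genai Python SDK, the Docker CLI, and the GitHub REST API. The model id, base image, build command, and prompt are placeholders rather than Autopatch's exact configuration:

```python
import os
import subprocess
from pathlib import Path

import requests
from google import genai  # google-genai SDK

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
MODEL = "gemini-3-pro"  # placeholder: substitute the actual Gemini 3 model id


def run_build_in_docker(workdir: Path, command: str = "npm ci && npm run build"):
    """Run the real install/build in a throwaway container and capture the log."""
    result = subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{workdir}:/app", "-w", "/app",
         "node:20", "sh", "-lc", command],  # base image and command are assumptions
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr


def generate_patch(build_log: str, steering: dict) -> str:
    """Ask Gemini for a minimal unified diff that addresses the observed errors."""
    prompt = (
        "You are repairing a failing build.\n"
        f"Project context (ai.project.yml): {steering}\n\n"
        f"Build log:\n{build_log}\n\n"
        "Respond with a minimal unified diff only, no commentary."
    )
    response = client.models.generate_content(model=MODEL, contents=prompt)
    return response.text


def open_pull_request(repo_url: str, branch: str, body: str = "") -> str:
    """Deliver the verified fix as a Pull Request against main via the GitHub REST API."""
    owner, repo = repo_url.rstrip("/").removesuffix(".git").split("/")[-2:]  # assumes an https URL
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": "Autopatch: verified build fix", "head": branch,
              "base": "main", "body": body},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]
```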


Challenges we ran into

  • Applying AI-generated patches safely required strict diff normalization and validation (a minimal sketch follows this list).
  • Many repositories lacked .gitignore files, which caused accidental commits of build artifacts.
  • Some build failures required multiple verification loops before stabilizing.
  • Handling different project structures and package managers in a generic way was non-trivial.
  • Ensuring the system never modified the main branch required careful Git workflow design.

Each challenge pushed us toward a more production-safe architecture.
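
For the first of these, the guard looks roughly like the sketch below: normalize the model output (strip any Markdown fences, ensure a trailing newline) and let `git apply --check` reject anything that does not apply cleanly before the working tree is touched. Autopatch's real validation is stricter; this only shows its shape:

```python
import subprocess
from pathlib import Path

FENCE = "`" * 3  # avoids embedding a literal fence marker in this snippet


def normalize_diff(raw: str) -> str:
    """Drop Markdown code fences the model may wrap around the diff and
    ensure the patch ends with a newline, which git apply expects."""
    lines = [line for line in raw.splitlines() if not line.strip().startswith(FENCE)]
    return "\n".join(lines).strip() + "\n"


def apply_patch(workdir: Path, raw_diff: str) -> None:
    patch_file = workdir / "autopatch.diff"
    patch_file.write_text(normalize_diff(raw_diff))
    # Dry run first: --check validates the patch without modifying anything.
    subprocess.run(["git", "-C", str(workdir), "apply", "--check", str(patch_file)], check=True)
    subprocess.run(["git", "-C", str(workdir), "apply", str(patch_file)], check=True)
```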


Accomplishments that we're proud of

  • Built a fully autonomous patch → verify → PR loop with no human intervention
  • Designed ai.project.yml as a steering layer for AI understanding
  • Achieved real build verification using Docker
  • Delivered fixes exclusively through Pull Requests
  • Generated full audit artifacts for every job
  • Created a system that works on messy, real-world repositories

What we learned

  • AI becomes far more reliable when placed inside a verification loop
  • Execution and feedback matter more than raw context size
  • Small, well-scoped patches outperform large speculative changes
  • Trust in AI systems comes from reproducibility and transparency, not intelligence alone

Most importantly, we learned that autonomous agents must be designed like production systems, not demos.


What's next for Autopatch

  • Support for additional ecosystems (Python, Java, Go)
  • Deeper test and lint verification stages
  • Multi-PR workflows for complex failures
  • CI integration for continuous autonomous repair
  • Enterprise features such as policy enforcement and approval rules

Autopatch is a step toward AI systems that don’t just reason — they ship verified results.

Built With

  • FastAPI
  • Gemini 3 API
  • Docker
  • GitHub
  • Next.js