About the project

ArchMind started from a simple frustration: most AI coding tools can generate plausible code, but they often struggle with reliability in unfamiliar repositories. We wanted to build a system that behaves less like autocomplete and more like a disciplined engineer: understand architecture first, patch surgically, verify aggressively, and learn from failures.

Inspiration

Our core inspiration was to make AI coding safer for real software maintenance. Instead of “just generate code,” we focused on:

  • architectural awareness,
  • strict verification,
  • and iterative self-correction.

What ArchMind does

ArchMind is an architecture-first coding agent powered by Gemini 3.
For each task, it:

  1. Parses and reframes the problem intent.
  2. Runs graph-oriented codebase exploration (overview + zoom).
  3. Selects focused context windows.
  4. Generates targeted patch strategies.
  5. Applies and audits fixes with command-based verification.
  6. Uses memory signatures from failed attempts to avoid repeating mistakes.
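The six-step loop above can be sketched as a small repair routine. Everything here is a hypothetical stand-in for illustration, not ArchMind's actual API: the `Patch` shape, the `verify` callback (standing in for command-based verification, step 5), and the string-based failure signature (a simplified version of step 6's memory signatures).

```typescript
// Illustrative sketch of the repair loop; all names are assumptions.
type Patch = { target: string; diff: string };

interface Attempt {
  patch: Patch;
  passed: boolean;
}

// Step 6 (simplified): a "memory signature" here is just a stable key for a
// failed patch, so the loop never retries an identical strategy.
const signature = (p: Patch) => `${p.target}::${p.diff}`;

function repairLoop(
  candidates: Patch[],                     // step 4: generated patch strategies
  verify: (p: Patch) => boolean,           // step 5: command-based verification
  failureMemory: Set<string> = new Set(),  // step 6: lessons from past attempts
): Attempt | undefined {
  for (const patch of candidates) {
    if (failureMemory.has(signature(patch))) continue; // skip known failures
    if (verify(patch)) {
      return { patch, passed: true };
    }
    failureMemory.add(signature(patch)); // record the failed attempt
  }
  return undefined; // no candidate survived verification
}
```

Passing the same `failureMemory` set across tasks is what lets failed attempts inform later runs instead of being repeated.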

How we built it

We implemented a modular TypeScript/Node harness with Python-based verification environments.
Key building blocks:

  • Planner + autonomous repair loop
  • Graph-driven target localization
  • Smart context budgeting
  • Structured patch application and fallback recovery
  • Strict audit/verification guards
  • Memory layer for failure patterns and lesson reuse
  • Gemini model routing/fallback for robustness under quota and response variance
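The last building block, model routing/fallback under quota pressure, can be sketched as follows. The model names, the `QuotaError` class, and the synchronous `call` signature are illustrative assumptions (a real Gemini client would be asynchronous), not the actual client API:

```typescript
// Hedged sketch of quota-aware model routing; names are assumptions.
class QuotaError extends Error {}

interface ModelRoute {
  name: string;
  call: (prompt: string) => string; // simplified to sync for illustration
}

// Try each model in priority order. A quota error triggers fallback to the
// next model; any other error is surfaced immediately.
function routeWithFallback(
  models: ModelRoute[],
  prompt: string,
): { model: string; output: string } {
  let lastErr: unknown = new Error("no models configured");
  for (const m of models) {
    try {
      return { model: m.name, output: m.call(prompt) };
    } catch (err) {
      if (!(err instanceof QuotaError)) throw err; // only quota falls through
      lastErr = err;
    }
  }
  throw lastErr; // every route exhausted its quota
}
```

Keeping the routing policy separate from the prompt logic is what makes the harness robust to quota limits and response variance without touching the planner.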

Challenges we faced

  • JSON/schema instability under long prompts
  • Model quota limits and fallback coordination
  • False positives from weak verification signals
  • Environment drift across benchmark tasks
  • Balancing speed with strict correctness constraints
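A common guard against the JSON/schema instability listed above is to strictly validate model output before acting on it, so that a malformed response triggers a re-prompt rather than a bad patch. The `PatchPlan` shape below is an assumed example, not ArchMind's real schema:

```typescript
// Sketch of a strict-parse guard; the schema is a hypothetical example.
interface PatchPlan {
  target: string;
  diff: string;
}

// Returns a validated plan or null; never trusts raw model text.
function parsePatchPlan(raw: string): PatchPlan | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // malformed JSON → caller re-prompts or falls back
  }
  if (
    typeof data === "object" && data !== null &&
    typeof (data as { target?: unknown }).target === "string" &&
    typeof (data as { diff?: unknown }).diff === "string"
  ) {
    return data as PatchPlan;
  }
  return null; // parseable JSON, but schema violation
}
```

Returning `null` instead of throwing keeps the decision about retrying, falling back to another model, or aborting in the orchestration layer.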

What we learned

The biggest lesson: reliability is primarily a systems design problem, not a single-prompt problem.
Better orchestration, constraints, validation, and recovery policies can outperform raw generation quality alone in difficult coding tasks.

Current state and next steps

This submission shows a working, evolving prototype that runs end to end, from task intake through patch generation to verification.
Our next steps are:

  • improve pass rate on benchmark suites,
  • strengthen task-local verification,
  • improve memory ranking and retrieval quality,
  • and package ArchMind into a production-ready developer workflow.

This demo illustrates current capabilities; the project is actively evolving.
