Inspiration:

We’ve all felt the “big codebase paralysis” when joining a project or revisiting an old one. Architect AI was inspired by the idea that developers should be able to “see” architecture instantly—components, dependencies, health, and hotspots—without days of manual spelunking. We wanted an assistant that makes large systems legible.

What it does:

- Ingests a codebase (zip archives or individual files)
- Parses files to extract components, layers, and relationships
- Builds a dependency graph and detects circular dependencies
- Computes architecture health metrics (complexity, coupling, maintainability)
- Streams progress and memory stats during analysis
- Surfaces actionable recommendations
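The circular-dependency check can be sketched as a depth-first search over the dependency graph. This is a hypothetical illustration, not the project's actual implementation: the `DepGraph` shape and file names are assumptions.

```typescript
// Hypothetical sketch: detect circular dependencies with a DFS that
// tracks the current recursion stack. A back edge to a node already on
// the stack means we have found a cycle.
type DepGraph = Map<string, string[]>;

function findCycles(graph: DepGraph): string[][] {
  const cycles: string[][] = [];
  const visited = new Set<string>();
  const stack: string[] = [];
  const onStack = new Set<string>();

  function dfs(node: string): void {
    visited.add(node);
    stack.push(node);
    onStack.add(node);
    for (const dep of graph.get(node) ?? []) {
      if (!visited.has(dep)) {
        dfs(dep);
      } else if (onStack.has(dep)) {
        // Back edge: slice the current path to recover the cycle.
        cycles.push(stack.slice(stack.indexOf(dep)));
      }
    }
    stack.pop();
    onStack.delete(node);
  }

  for (const node of graph.keys()) {
    if (!visited.has(node)) dfs(node);
  }
  return cycles;
}

// Example: a → b → c → a closes a loop.
const graph: DepGraph = new Map([
  ["a.ts", ["b.ts"]],
  ["b.ts", ["c.ts"]],
  ["c.ts", ["a.ts"]],
]);
console.log(findCycles(graph)); // one cycle: a.ts → b.ts → c.ts
```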

How I built it:

Built with the Kiro IDE.

- Frontend: React + TypeScript + Vite, Tailwind for styling, and custom components for progress, the graph, and reports.
- Backend: Node.js + Express + TypeScript, with services for parsing, dependency analysis, metrics, and uploads (Multer).
- AI layer: OpenRouter API (pluggable for future LLM-powered refactoring/reviews).
- Architecture:
  - /api/upload saves files and produces an analysisId
  - /api/analysis/:id/parse extracts components, builds the dependency graph, and computes metrics
  - /api/analysis/:id/* exposes results (components, dependencies, health, progress)
- Extra ergonomics: batch scripts to start both servers, and a Vite proxy for clean /api/* calls.
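The upload → parse flow above can be sketched as two handlers sharing an in-memory store keyed by analysisId. This is a minimal sketch with assumed names (`handleUpload`, `handleParse`, the `Analysis` shape); the real services behind the Express routes are richer.

```typescript
// Hypothetical sketch of the route flow: /api/upload creates an
// analysisId, /api/analysis/:id/parse fills in results for that id.
import { randomUUID } from "node:crypto";

interface Analysis {
  id: string;
  files: string[];
  components?: string[];
}

const analyses = new Map<string, Analysis>();

// Backing logic for POST /api/upload: store the files, return an id.
function handleUpload(files: string[]): { analysisId: string } {
  const id = randomUUID();
  analyses.set(id, { id, files });
  return { analysisId: id };
}

// Backing logic for POST /api/analysis/:id/parse: look up the upload
// and extract components (stubbed here as filenames sans extension).
function handleParse(analysisId: string): Analysis {
  const analysis = analyses.get(analysisId);
  if (!analysis) throw new Error("unknown analysisId");
  analysis.components = analysis.files.map((f) => f.replace(/\.tsx?$/, ""));
  return analysis;
}
```

Keeping parse strictly dependent on a prior upload (via the analysisId lookup) is also what tames the upload→parse race condition mentioned below.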

Challenges we ran into:

- IDE constraints: the hackathon-mandated Kiro IDE intermittently broke frontend↔backend integration (proxy and port conflicts), causing 404s under time pressure.
- Race conditions: upload→parse orchestration needed careful sequencing.
- Large uploads: balancing file-count limits (up to 5,000 files) against memory safeguards.
- Robust parsing across multiple languages without deep language servers.

Kiro does well in the early stages, but it falls short when building complex structures: it could not resolve the integration issues toward the end well enough to produce a full, solid build. It worked when running on mock data, but beyond that I could not get it to fix the issues without burning through my credits.

Accomplishments that I'm proud of:

- End-to-end architecture implemented with clean, modular services.
- Real-time progress via SSE, including memory stats.
- Clear, typed APIs and predictable response envelopes.
- Practical metrics with useful defaults (coupling, complexity, maintainability).
- A mock mode for reliable demos when infrastructure is flaky.
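The SSE progress events can be sketched as a small formatter that serialises the current stage plus process memory stats into the text/event-stream wire format. The function name and payload fields are illustrative assumptions, not the project's actual event schema.

```typescript
// Hypothetical sketch: format one Server-Sent Event carrying analysis
// progress and heap usage, ready to be res.write()-ten to a client
// that opened the stream with EventSource.
function formatProgressEvent(stage: string, percent: number): string {
  const mem = process.memoryUsage();
  const payload = {
    stage,
    percent,
    heapUsedMb: Math.round(mem.heapUsed / 1024 / 1024),
  };
  // SSE frames are "event:"/"data:" lines terminated by a blank line.
  return `event: progress\ndata: ${JSON.stringify(payload)}\n\n`;
}
```

On the server this would be written to a response with `Content-Type: text/event-stream`; the React frontend can then subscribe with `new EventSource("/api/analysis/<id>/progress")` and update the progress component per message.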

What I learned:

- Reliability beats cleverness under demo conditions: have a mock/fallback path.
- Observability from the start (logs, health checks, config endpoints) saves hours.
- Strict typing across boundaries reduces integration bugs.
- Simple models often go far: coupling and complexity signals correlate well with real hotspots. For code complexity we use intuitive measures, and these can be extended to metrics such as average cyclomatic complexity.
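An "intuitive measure" of complexity in this spirit can be sketched as counting branch points per function: 1 plus the number of branching keywords and operators. This is a hedged heuristic of my own, not the project's actual metric, and it deliberately avoids a full parser.

```typescript
// Hypothetical sketch: a regex-based estimate of cyclomatic complexity
// (1 + branch points), averaged across function bodies. A heuristic,
// not a parser-backed measure.
const BRANCH_RE = /\b(if|for|while|case|catch)\b|&&|\|\||\?/g;

function estimateComplexity(source: string): number {
  return 1 + (source.match(BRANCH_RE) ?? []).length;
}

function averageComplexity(functions: string[]): number {
  if (functions.length === 0) return 0;
  const total = functions.reduce((sum, fn) => sum + estimateComplexity(fn), 0);
  return total / functions.length;
}

// "if" and "&&" each add a branch point: 1 + 2 = 3.
console.log(estimateComplexity("if (a && b) { }")); // 3
```

Regex counting over-counts `?` in strings and optional chaining, which is exactly why a parser-backed metric is the natural next step.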

What’s next for Architect AI:

- LLM-powered architectural-smell detection and remediation suggestions
- Automated refactoring plans and PR diffs
- Deeper language support (TypeScript/JS first-class; Python/Java hardening)
- CI integration: fail quality gates, post inline review comments
- Interactive graph with filtering by risk, domain, and ownership
- Cloud deployment and one-click links for judges to run in "mock mode" or full mode
