Inspiration

Every AI coding tool has the same silent problem. You ask it something about your project and it either hallucinates, asks you to paste files, or burns 100K tokens scanning everything — then forgets it all next session. I got tired of re-explaining my codebase to AI every single time.

The core insight: your codebase isn't a list of files, it's a graph. Functions call functions. Routes hit services. Services write to databases. A senior engineer who's been on a project for a year doesn't re-read files to answer questions — they query a mental model. So I built that model for the AI.
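To make that concrete, here's a minimal sketch of the kind of graph I mean. The node kinds come from the Blueprint, but the type names and query helper are illustrative, not Atlarix's actual schema:

```typescript
// Minimal typed code graph (illustrative names, not Atlarix's real schema).
type NodeKind = "endpoint" | "function" | "class" | "dbOperation" | "webhook";

interface GraphNode {
  id: string;
  kind: NodeKind;
  name: string;
  file: string;
}

interface GraphEdge {
  from: string; // source node id
  to: string;   // target node id
  relation: "calls" | "writesTo" | "handles";
}

const nodes: GraphNode[] = [
  { id: "n1", kind: "endpoint", name: "POST /orders", file: "routes/orders.ts" },
  { id: "n2", kind: "function", name: "createOrder", file: "services/orders.ts" },
  { id: "n3", kind: "dbOperation", name: "insert orders", file: "db/orders.ts" },
];

const edges: GraphEdge[] = [
  { from: "n1", to: "n2", relation: "calls" },
  { from: "n2", to: "n3", relation: "writesTo" },
];

// "What does POST /orders touch?" — answered by walking edges, not re-reading files.
function reachableFrom(id: string): GraphNode[] {
  const out = edges.filter(e => e.from === id).map(e => e.to);
  return nodes.filter(n => out.includes(n.id));
}

console.log(reachableFrom("n1").map(n => n.name)); // → ["createOrder"]
```

The point is that a question like "what writes to the orders table?" becomes a graph query instead of a full-text scan.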

What I Learned

Building the RTE (Round-Trip Engineering) parser across multiple languages was the hardest part — extracting meaningful typed nodes (API endpoints, functions, classes, DB operations, webhooks) and preserving their relationships accurately. I learned that code structure is far more regular than it seems once you commit to parsing it properly rather than treating it as text.

I also learned that the permission layer matters as much as the AI itself. Developers don't trust tools that act without asking. Building the approve/reject flow — where every file change is proposed before execution — changed how people interacted with Atlarix completely.
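The shape of that flow is simple: the agent can only propose, and execution is gated on an explicit decision. A minimal sketch (hypothetical API, not Atlarix's actual implementation):

```typescript
// Propose-then-approve flow: the agent never writes directly, it only proposes.
interface ProposedChange {
  file: string;
  diff: string;
  status: "pending" | "approved" | "rejected";
}

class ChangeQueue {
  private changes: ProposedChange[] = [];

  // Every file change enters as a pending proposal.
  propose(file: string, diff: string): ProposedChange {
    const change: ProposedChange = { file, diff, status: "pending" };
    this.changes.push(change);
    return change;
  }

  // Only a human decision moves a change out of "pending".
  decide(change: ProposedChange, approved: boolean): void {
    change.status = approved ? "approved" : "rejected";
  }

  // Execution ever touches approved changes only.
  toExecute(): ProposedChange[] {
    return this.changes.filter(c => c.status === "approved");
  }
}

const queue = new ChangeQueue();
const a = queue.propose("src/app.ts", "+ added handler");
const b = queue.propose("src/db.ts", "- dropped table");
queue.decide(a, true);
queue.decide(b, false);
console.log(queue.toExecute().length); // → 1
```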

How I Built It

Atlarix is built on Electron + React + TypeScript for the desktop shell. The RTE pipeline parses TypeScript and Python projects into a typed node graph cached at ~/.atlarix/blueprints/{projectHash}/. A file watcher monitors changes and re-parses only affected nodes incrementally.
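The `{projectHash}` cache layout and file-to-node invalidation can be sketched like this. The directory scheme is from the text above; the hashing details and the file-to-node map are my assumptions for illustration:

```typescript
import { createHash } from "node:crypto";
import { join } from "node:path";
import { homedir } from "node:os";

// Assumed scheme: hash the project root to get a stable cache key.
function projectHash(projectRoot: string): string {
  return createHash("sha256").update(projectRoot).digest("hex").slice(0, 16);
}

// Blueprint cache lives under ~/.atlarix/blueprints/{projectHash}/
function blueprintDir(projectRoot: string): string {
  return join(homedir(), ".atlarix", "blueprints", projectHash(projectRoot));
}

// Incremental invalidation: remember which nodes each file produced,
// so a file-watcher event re-parses only those nodes.
const nodesByFile = new Map<string, string[]>([
  ["src/routes.ts", ["endpoint:POST /orders"]],
  ["src/service.ts", ["fn:createOrder", "db:insert orders"]],
]);

function affectedNodes(changedFile: string): string[] {
  return nodesByFile.get(changedFile) ?? [];
}

console.log(affectedNodes("src/service.ts")); // → ["fn:createOrder", "db:insert orders"]
```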

The Blueprint canvas is built on React Flow. The agent system runs three tiers — Direct, Guided, and Autonomous — with four specialist agents (Research, Architect, Builder, Reviewer) each scoped to a specific tool set and permission level.
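Scoping each specialist to its own tool set and permission level looks roughly like this in data. The agent names are from the text; the tool names and permission labels are illustrative:

```typescript
// Specialist agents scoped to tool sets and permission levels (tool names are illustrative).
type Permission = "read" | "propose" | "write";

interface AgentSpec {
  name: "Research" | "Architect" | "Builder" | "Reviewer";
  tools: string[];
  maxPermission: Permission;
}

const agents: AgentSpec[] = [
  { name: "Research",  tools: ["queryGraph", "readFile"],  maxPermission: "read" },
  { name: "Architect", tools: ["queryGraph", "writeSpec"], maxPermission: "propose" },
  { name: "Builder",   tools: ["editFile", "runTests"],    maxPermission: "propose" },
  { name: "Reviewer",  tools: ["readDiff", "comment"],     maxPermission: "read" },
];

// A tool call is allowed only if it's inside the agent's scoped set.
function canUse(agent: AgentSpec, tool: string): boolean {
  return agent.tools.includes(tool);
}

console.log(canUse(agents[0], "editFile")); // → false (Research can't edit files)
```

The design choice is that scope lives in data, not in agent prompts, so a specialist physically can't call a tool outside its lane.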

Context management uses .atlarix/memory.md and spec.md for cross-session persistence, inspired by Claude Code's approach to project memory.
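The persistence side is deliberately boring: append notes to a markdown file, re-read it next session. A sketch, assuming an append-only note format (the `.atlarix/memory.md` path is from the text; the bullet format is my assumption):

```typescript
import { mkdtempSync, readFileSync, appendFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Append a project note to .atlarix/memory.md (assumed one-bullet-per-note format).
function appendMemory(projectRoot: string, note: string): void {
  const dir = join(projectRoot, ".atlarix");
  mkdirSync(dir, { recursive: true });
  appendFileSync(join(dir, "memory.md"), `- ${note}\n`);
}

// Load accumulated memory at session start; empty string if none exists yet.
function loadMemory(projectRoot: string): string {
  try {
    return readFileSync(join(projectRoot, ".atlarix", "memory.md"), "utf8");
  } catch {
    return "";
  }
}

const root = mkdtempSync(join(tmpdir(), "atlarix-"));
appendMemory(root, "Orders service owns all DB writes");
appendMemory(root, "Use zod for request validation");
console.log(loadMemory(root));
```

Because it's plain markdown, the developer can read and edit the memory directly, which keeps it trustworthy.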

Challenges

Multi-language parsing — getting accurate node extraction across TypeScript and Python with different AST structures required building separate parsers with a shared registry interface.
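The shared registry interface is the part that kept this manageable. A sketch of the shape (real parsers walk ASTs; the regex bodies here are placeholders just to make the example runnable):

```typescript
// Per-language parsers behind one registry interface (illustrative names).
interface BlueprintNode {
  kind: string;
  name: string;
}

interface LanguageParser {
  extensions: string[];
  parse(source: string): BlueprintNode[];
}

// Each language gets its own parser; the pipeline only sees the interface.
// (Placeholder regex extraction — a real parser uses the language's AST.)
const tsParser: LanguageParser = {
  extensions: [".ts", ".tsx"],
  parse: (src) =>
    [...src.matchAll(/function\s+(\w+)/g)].map(m => ({ kind: "function", name: m[1] })),
};

const pyParser: LanguageParser = {
  extensions: [".py"],
  parse: (src) =>
    [...src.matchAll(/def\s+(\w+)/g)].map(m => ({ kind: "function", name: m[1] })),
};

const registry = new Map<string, LanguageParser>();
for (const p of [tsParser, pyParser]) {
  for (const ext of p.extensions) registry.set(ext, p);
}

// Dispatch on file extension; unknown languages yield no nodes.
function parseFile(path: string, source: string): BlueprintNode[] {
  const ext = path.slice(path.lastIndexOf("."));
  return registry.get(ext)?.parse(source) ?? [];
}

console.log(parseFile("app.py", "def handler():\n    pass")); // → [{ kind: "function", name: "handler" }]
```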

The token budget — making the RAG query return the right nodes (not just any nodes), so the AI gets useful context in ~5K tokens instead of irrelevant noise, took significant iteration on the graph traversal logic.
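The traversal itself can be sketched as a breadth-first expansion from the nodes that match the query, cut off near the budget. The selection heuristics here are assumptions for illustration, not Atlarix's actual ranking:

```typescript
// Budget-bounded BFS: expand from seed nodes, stop near a token budget.
interface CtxNode { id: string; summary: string; neighbors: string[]; }

const graph = new Map<string, CtxNode>([
  ["route", { id: "route", summary: "POST /orders endpoint", neighbors: ["svc"] }],
  ["svc",   { id: "svc",   summary: "createOrder service fn", neighbors: ["db", "mail"] }],
  ["db",    { id: "db",    summary: "insert into orders table", neighbors: [] }],
  ["mail",  { id: "mail",  summary: "send confirmation email", neighbors: [] }],
]);

// Rough heuristic: ~4 characters per token.
const estimateTokens = (s: string) => Math.ceil(s.length / 4);

function contextFor(seedIds: string[], budget: number): string[] {
  const picked: string[] = [];
  const seen = new Set<string>();
  let used = 0;
  const queue = [...seedIds]; // BFS order: graph-nearest nodes first
  while (queue.length > 0) {
    const node = graph.get(queue.shift()!);
    if (!node || seen.has(node.id)) continue;
    seen.add(node.id);
    const cost = estimateTokens(node.summary);
    if (used + cost > budget) break; // stop before blowing the budget
    used += cost;
    picked.push(node.summary);
    queue.push(...node.neighbors);
  }
  return picked;
}

console.log(contextFor(["route"], 20).length); // → 3 (the email node doesn't fit)
```

Getting the seeds and the expansion order right is where the real iteration went — BFS from good seeds keeps the context relevant; BFS from bad seeds fills the budget with noise.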

Permission UX — designing an approval flow that doesn't feel like constant interruption but still gives developers full control over what the AI touches was a careful balance.

What's Next

Full ANTLR4 parsing across Java, Go, Rust, C/C++. SQLite persistence for the Blueprint graph. Atlarix Workforce — Slack integration, GitHub Actions, live database connections, and team-shared project memory.

Built With

Electron, React, TypeScript, React Flow
