Inspiration
Modern codebases are expanding at a rate that outpaces human (and even AI) comprehension. Most AI coding tools suffer from "Context Drift": they treat code as flat text and lose the overarching architectural logic. Existing Code Graph solutions, meanwhile, are notoriously heavy, requiring complex infrastructure (a graph database, a vector database, and so on). We were inspired to build GCA: a Zero-Infra, neuro-symbolic platform that provides deep architectural understanding without the "infrastructure tax."
What it does
GCA transforms raw source code into actionable Wisdom by bridging the gap between neural reasoning and symbolic logic.
- The Reasoning Engine (Gemini 3 Flash): We leverage Gemini 3 Flash for its massive context window and superior reasoning. It acts as the "Architect’s Brain," synthesizing complex Datalog queries and ensuring that even the most intricate architectural patterns are correctly mapped.
- The Embedded Core (Mangle + BadgerDB): GCA features a custom-built embedded database. By combining the Mangle logic engine with BadgerDB, we've created a hybrid system capable of graph, vector, and complex relational searches (see the sketch after this list).
- Massive Scale, Tiny Footprint: Unlike engines that must hold the entire graph in RAM, GCA is storage-backed. It can manage and query tens of thousands of nodes directly from disk, providing "Enterprise-grade" analysis on consumer hardware.
- Zero-Infra Strategy: GCA is a single binary. It requires no external databases or cloud setup. Memory efficiency isn't just a feature; it’s a fundamental consequence of our architecture.
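To make the neuro-symbolic loop concrete, here is a minimal, self-contained sketch of the kind of query this enables. The Datalog rules are shown as the reasoning engine would emit them, and a naive fixpoint in plain Go stands in for the Mangle evaluation so the snippet runs on its own; the predicate names, module names, and facts are illustrative assumptions, not GCA's real schema.

```go
// Stand-in for the neuro-symbolic loop. The rules the engine might emit
// for "what does payment transitively depend on?":
//
//	depends_on(X, Y) :- imports(X, Y).
//	depends_on(X, Z) :- imports(X, Y), depends_on(Y, Z).
//
// Below, a naive bottom-up fixpoint evaluates the same closure.
package main

import (
	"fmt"
	"sort"
)

type edge struct{ from, to string }

func main() {
	// imports(X, Y) facts extracted from the codebase (hypothetical).
	imports := []edge{
		{"payment", "billing"}, {"payment", "auth"},
		{"billing", "db"}, {"auth", "crypto"},
	}

	// Apply both rules until no new facts appear. This is what makes
	// the symbolic half deterministic.
	depends := map[edge]bool{}
	for changed := true; changed; {
		changed = false
		for _, e := range imports {
			if !depends[e] { // rule 1: base case
				depends[e], changed = true, true
			}
		}
		var derived []edge
		for _, e := range imports {
			for d := range depends {
				if d.from == e.to { // rule 2: join imports with depends_on
					derived = append(derived, edge{e.from, d.to})
				}
			}
		}
		for _, e := range derived {
			if !depends[e] {
				depends[e], changed = true, true
			}
		}
	}

	var out []string
	for e := range depends {
		if e.from == "payment" {
			out = append(out, e.to)
		}
	}
	sort.Strings(out)
	fmt.Println("payment depends on:", out)
	// Output: payment depends on: [auth billing crypto db]
}
```

Because the closure is computed bottom-up from explicit facts, the answer is identical on every run: the neural half proposes the query, the symbolic half proves the result.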
How we built it
Engineered with Giants: We utilized Antigravity in tandem with Gemini 3 Pro to architect and implement the complex Go backend. This high-order reasoning combination allowed us to solve difficult symbolic logic challenges and optimize the Mangle engine at record speed.
High-Performance Backend: Developed in Go for its strong concurrency support and performance. We embedded Mangle to run directly on BadgerDB, creating a "Disk-First" fact storage system that scales without exhausting memory (sketched below).
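A rough sketch of the "Disk-First" idea, assuming a hypothetical `fact/<predicate>/<args>` key encoding on top of BadgerDB: facts are written once and then streamed back from disk via prefix scans, so a query never needs the whole graph in RAM.

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Open a disk-backed store; nothing is pre-loaded into memory.
	db, err := badger.Open(badger.DefaultOptions("/tmp/gca-facts"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Write a few imports/2 facts. The "fact/<pred>/<from>/<to>" key
	// layout is a hypothetical encoding for illustration.
	err = db.Update(func(txn *badger.Txn) error {
		for _, k := range []string{
			"fact/imports/payment/billing",
			"fact/imports/payment/auth",
			"fact/imports/billing/db",
		} {
			if err := txn.Set([]byte(k), nil); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}

	// Query by prefix scan: only matching keys are paged in from disk.
	err = db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		opts.PrefetchValues = false // keys only; keeps the scan cheap
		it := txn.NewIterator(opts)
		defer it.Close()
		prefix := []byte("fact/imports/payment/")
		for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
			fmt.Printf("%s\n", it.Item().Key())
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Disabling value prefetching keeps a scan to little more than the keys it touches, which is what lets fact retrieval stay cheap at scale.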
Pre-generated Virtual Paths: To handle complex or deeply nested project structures, we implemented a pre-generation layer that maps physical file hierarchies into a consistent Logical Virtual Namespace. This provides stable identifiers for every symbol, making it significantly easier for the AI to navigate and reason about the graph.
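In code, the idea might look like the following sketch. The `virtualID` helper, the `//repo/path#Symbol` scheme, and the prefix-stripping rules are all hypothetical; they just show how a physical path collapses into a stable logical identifier.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// virtualID maps a physical file path and symbol name to a stable
// identifier in a logical namespace. The scheme and stripping rules
// are illustrative assumptions, not GCA's actual format.
func virtualID(repo, physicalPath, symbol string) string {
	p := filepath.ToSlash(physicalPath)
	// Drop layout-specific prefixes so moves like src/ -> lib/
	// do not change the identifier.
	for _, prefix := range []string{"src/", "lib/", "internal/"} {
		p = strings.TrimPrefix(p, prefix)
	}
	p = strings.TrimSuffix(p, filepath.Ext(p))
	return fmt.Sprintf("//%s/%s#%s", repo, p, symbol)
}

func main() {
	// Both physical layouts yield the same virtual path, so graph
	// facts keep pointing at the same node after a refactor.
	fmt.Println(virtualID("payments", "src/billing/invoice.go", "NewInvoice"))
	fmt.Println(virtualID("payments", "lib/billing/invoice.go", "NewInvoice"))
	// Output (both lines): //payments/billing/invoice#NewInvoice
}
```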
Advanced Optimization: We pushed the limits of efficiency by implementing mmap (memory-mapping) for off-heap vector storage and by adopting Matryoshka Representation Learning (MRL) embeddings. Together these enable high-speed semantic search and instant "Zero-load" serving on modest hardware.
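A sketch of how the two techniques might compose, using golang.org/x/exp/mmap: vectors live as raw float32s in a memory-mapped file (off the Go heap, paged in lazily by the OS), and the Matryoshka property lets us read and compare only a renormalized prefix of each vector. The file name, layout, and dimensions are assumptions for illustration.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"math"

	"golang.org/x/exp/mmap"
)

const fullDim = 768 // assumed full embedding width

// vectorAt reads the first dim components of one embedding from a
// memory-mapped file of contiguous float32 vectors; only the pages
// actually touched are faulted in.
func vectorAt(r *mmap.ReaderAt, idx, dim int) ([]float32, error) {
	buf := make([]byte, dim*4)
	off := int64(idx) * int64(fullDim) * 4
	if _, err := r.ReadAt(buf, off); err != nil {
		return nil, err
	}
	v := make([]float32, dim)
	for i := range v {
		v[i] = math.Float32frombits(binary.LittleEndian.Uint32(buf[i*4:]))
	}
	return v, nil
}

// truncateMRL keeps the first k Matryoshka dimensions and
// renormalizes so cosine similarity stays meaningful.
func truncateMRL(v []float32, k int) []float32 {
	v = v[:k]
	var sum float64
	for _, x := range v {
		sum += float64(x) * float64(x)
	}
	out := make([]float32, k)
	if sum == 0 {
		return out // zero vector stays zero
	}
	n := float32(math.Sqrt(sum))
	for i, x := range v {
		out[i] = x / n
	}
	return out
}

func main() {
	r, err := mmap.Open("embeddings.f32") // hypothetical vector file
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	v, err := vectorAt(r, 0, 256) // read only the 256-dim prefix
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(truncateMRL(v, 256))) // 256
}
```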
Challenges we ran into
- Bridging Neural Reasoning with Rigid Logic
The primary hurdle was ensuring Gemini 3 Flash could reliably translate complex architectural intent into Mangle's deterministic Datalog facts. We solved this by dynamically injecting the graph's predicate schema and relevant code symbols directly into the Gemini API's large context window (see the prompt-assembly sketch after this list). This backend-driven approach gives the LLM the exact "symbolic dictionary" it needs to reason through code structures and reliably emit syntactically valid logic.
- Scaling the "Zero-Infra" Embedded Engine
To avoid the overhead of external databases, we embedded Mangle to run directly on top of BadgerDB, creating a storage-backed fact-retrieval system. This setup allows GCA to query thousands of facts and compressed documentation on disk rather than exhausting RAM. To further optimize performance, we implemented Matryoshka (MRL) embeddings, enabling dynamic vector truncation for high-speed semantic search with an ultra-minimal memory footprint.
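Here is what that schema-injection step might look like, sketched with the github.com/google/generative-ai-go/genai SDK. The model ID, the predicate schema, and the prompt wording are placeholders of ours, not GCA's actual prompt construction.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/google/generative-ai-go/genai"
	"google.golang.org/api/option"
)

// A hypothetical slice of the graph's predicate schema; in GCA this
// would be generated from the live fact store.
const predicateSchema = `
imports(Module, Module).
defines(Module, Symbol).
calls(Symbol, Symbol).
`

// datalogFor packs the schema and candidate symbols into the prompt
// so the model only uses predicates and names that exist in the graph.
func datalogFor(ctx context.Context, client *genai.Client, question string, symbols []string) (string, error) {
	model := client.GenerativeModel("gemini-flash-latest") // placeholder model ID

	prompt := fmt.Sprintf(
		"You write Mangle Datalog.\nSchema:\n%s\nKnown symbols:\n%s\nQuestion: %s\nReply with Datalog only.",
		predicateSchema, strings.Join(symbols, "\n"), question)

	resp, err := model.GenerateContent(ctx, genai.Text(prompt))
	if err != nil {
		return "", err
	}
	return fmt.Sprint(resp.Candidates[0].Content.Parts[0]), nil
}

func main() {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, option.WithAPIKey(os.Getenv("GEMINI_API_KEY")))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	q, err := datalogFor(ctx, client, "What does payment transitively depend on?",
		[]string{"payment", "billing", "auth"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(q)
}
```

The returned Datalog is then checked against the schema and executed by Mangle, so any model slip is caught before it can affect results.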
Accomplishments that we're proud of
- Seamless Neuro-Symbolic Interaction: We successfully enabled users to perform deep architectural queries using natural language. By leveraging Gemini 3 Flash to translate intent into deterministic Datalog, we’ve removed the barrier of complex query languages, allowing any developer to gain high-fidelity insights into code logic and dependencies instantly.
- "Zero-Infra" Resource Sovereignty: We are proud to have built a massive-scale analysis engine that maintains an ultra-low RAM footprint. By embedding the logic and storage layers into a single binary, we proved that enterprise-grade architectural reasoning can be achieved on consumer-grade hardware without requiring heavy database clusters.
- Cross-Service Architectural Visibility: GCA goes beyond the single-repo limit. We successfully implemented the ability to trace logic and dependencies across microservice boundaries, providing a unified, bird's-eye view of the entire system's topology that traditional tools often miss.
What we learned
The Neuro-Symbolic Breakthrough: We learned that the sweet spot for code analysis is the neuro-symbolic approach. Gemini 3 Flash provides the "Architect's Intuition", understanding messy human intent and complex code patterns, while Mangle (Datalog) provides the "Mathematical Proof." This synergy ensures that AI-driven insights are not just fast and intuitive, but also deterministically verifiable, sharply curbing hallucinations in architectural analysis.
Reasoning as a Core Engine: This project proved that Gemini 3 isn't just a chatbot; it’s a high-order reasoning engine. By feeding it the graph schema and symbolic facts, we saw Gemini handle complex architectural logic and "new knowledge" with ease. It demonstrated an incredible ability to navigate deep dependencies and architectural constraints that are nearly impossible to solve with traditional hard-coded algorithms.
What's next for GCA
From Analysis to Autonomous Evolution: We've only scratched the surface of Gemini's reasoning capabilities. Our next milestone is "Autonomous Architectural Refactoring": instead of merely identifying patterns, GCA will leverage the Knowledge Graph to reason through and propose safe, large-scale structural migrations, so that as a system grows, its architecture can evolve without breaking critical dependencies.
Predictive Architectural Foresight: We aim to expand GCA’s reasoning across entire microservice ecosystems. By feeding Gemini cross-service telemetry alongside our graph data, the system will move beyond reactive analysis. It will reason about potential system-wide failures and bottlenecks before they manifest, transforming GCA into a proactive engine for architectural health.
Universal Knowledge Protocol (MCP): We are refining GCA into a fully standardized Model Context Protocol (MCP) server. This turns our complex analytical engine into a "Plug-and-Play" context hub, allowing any AI Agent (from Claude to specialized coding bots) to instantly inherit GCA’s deep architectural wisdom as if it were their own native memory.