Inspiration
Every personal AI assistant we tried had the same problem: memory that forgets everything, or memory that never forgets the wrong things and ends up burning too many tokens.
Flat-file systems fill up with stale context. Vector stores recall embarrassing things from months ago with full confidence. And when your assistant lives across five different chat platforms, the fragmentation makes it feel like you're talking to different people every time.
We wanted something smarter — an assistant that treats memory the way humans do: with recency, relevance, and decay. And one that could actually do things — deploy apps, write code, search the web, manage skills — not just answer questions.
The result is Graphclaw: a graph-native multi-agent AI platform where five specialist agents share one structured memory, accessible from any channel, all in a language purpose-built for AI — Jac.
What It Does
Graphclaw is a personal AI platform with five layers working together:
🤖 1. Multi-Agent Routing
A Coordinator agent classifies your intent and routes to the right specialist:
| Agent | Responsibilities |
|---|---|
| DevOps | Infrastructure, deployments, CI/CD, skill installation |
| Builder | Code writing, file editing, git, shell commands |
| Planner | Task breakdowns, project milestones, dependency trees |
| Researcher | Web search, knowledge synthesis, source citation |
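The routing step above can be sketched in a few lines. The real Coordinator classifies intent with an LLM; this keyword-based stand-in is purely illustrative, and every name in it is a hypothetical, not Graphclaw's actual API:

```python
# Illustrative sketch of the Coordinator's dispatch. The real system uses
# LLM intent classification; keywords here are a stand-in for the shape.
AGENT_KEYWORDS = {
    "DevOps": ["deploy", "ci/cd", "pipeline", "install skill"],
    "Builder": ["write code", "edit", "git", "shell"],
    "Planner": ["plan", "milestone", "break down"],
    "Researcher": ["search", "find", "sources"],
}

def route(message: str) -> str:
    """Return the specialist agent name for a message (fallback: Researcher)."""
    text = message.lower()
    for agent, keywords in AGENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return agent
    return "Researcher"
```

The fallback matters: an unclassifiable message still lands somewhere useful rather than erroring out.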
🧠 2. Graph Memory with Decay
Facts are stored as typed nodes (User, Project, Feedback, Reference) in a property graph. Each fact has a confidence score that decays 1% per day. At ~90 days, dead facts are tombstoned automatically. The Dream background walker revalidates decaying facts, auto-tags orphaned memories, and prunes the graph every 2 hours — with no manual intervention.
⚡ 3. Dynamic Skill System
Two skill types:
- Native Skills — typed Python functions with `skill.json` manifests (built-in support for Base44 and Loveable app scaffolding)
- ClawHub Skills — 13,000+ community skills from clawhub.ai, installed dynamically at runtime via a controlled approval flow
🌐 4. Universal Channel Support
One unified message bus handles Telegram, Discord, Slack, Email, and WhatsApp. One agent. One memory. Smart auth policies (pairing codes for unknown DMs, allowlists for group chats) keep it secure without manual setup.
🔀 5. Multi-Provider LLM
LiteLLM abstraction supports OpenRouter, Anthropic, OpenAI, DeepSeek, Groq, Ollama, and Azure — switch models via config, no code changes.
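"Switch models via config" boils down to resolving a provider-prefixed model string for LiteLLM. A minimal sketch, assuming a simple dict-shaped config; the helper name and config keys are our assumptions, and the prefixing convention follows LiteLLM's provider routing format:

```python
def resolve_model(config: dict) -> str:
    """Map a provider + model from config to a LiteLLM model string."""
    provider = config.get("provider", "ollama")
    model = config.get("model", "llama3")
    # OpenAI models need no prefix in LiteLLM; other providers are prefixed.
    return model if provider == "openai" else f"{provider}/{model}"

# Swapping providers is then a config edit, not a code change:
# litellm.completion(model=resolve_model(cfg), messages=[...])
```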
🔧 How We Built It
The entire platform is written in Jac (version 0.13.5), an AI-native full-stack language that compiles to Python 3.12+. Jac's `by llm()` semantic typing is what makes the agent definitions clean and declarative — the LLM generates the function body at runtime based on type contracts, with no prompt engineering buried in strings.
Memory architecture uses workspace-backed JSON stores (memories.json, profile.json) with a graph-traversal layer on top. The Dream walker runs on an asyncio scheduler, performs graph maintenance, and keeps the memory model honest.
The tool-calling loop in BaseAgent is iterative:
prompt → tool calls → execution → context injection → repeat (up to 200×)
Graceful truncation at 50k characters per result prevents context blowout.
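The loop above, with its two guardrails (the 200-iteration cap and 50k-character truncation), can be sketched as follows. The `llm` and tool callables are placeholders, not the real `BaseAgent` API:

```python
MAX_ITERATIONS = 200       # circuit breaker from the write-up
MAX_RESULT_CHARS = 50_000  # per-result truncation to prevent context blowout

def truncate(result: str) -> str:
    return result[:MAX_RESULT_CHARS]

def run_agent(llm, tools: dict, prompt: str) -> str:
    """prompt -> tool calls -> execution -> context injection -> repeat."""
    context = prompt
    for _ in range(MAX_ITERATIONS):
        reply = llm(context)
        if reply.get("tool") is None:      # no tool call means a final answer
            return reply["text"]
        result = tools[reply["tool"]](reply["args"])
        context += "\n" + truncate(result)  # inject truncated result and loop
    return "stopped: iteration limit reached"
```

The hard cap is what turns a potentially infinite tool-calling recursion into a bounded, recoverable failure.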
The message bus is a singleton asyncio queue system — one inbound queue from all channels, per-channel outbound queues, and a broadcast queue for multi-channel delivery.
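The singleton-queue design can be sketched with `asyncio.Queue`. Names here are illustrative, not Graphclaw's actual bus API:

```python
import asyncio

class MessageBus:
    """Sketch: one inbound queue, per-channel outbound queues, broadcast."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:            # singleton: one bus per process
            cls._instance = super().__new__(cls)
            cls._instance.inbound = asyncio.Queue()
            cls._instance.outbound = {}      # channel name -> outbound Queue
        return cls._instance

    def register_channel(self, name: str) -> asyncio.Queue:
        return self.outbound.setdefault(name, asyncio.Queue())

    async def broadcast(self, message: str) -> None:
        """Deliver one message to every registered channel's outbound queue."""
        for queue in self.outbound.values():
            await queue.put(message)
```

The singleton matters because every channel adapter must feed the same inbound queue; two bus instances would silently split the conversation.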
ClawHub skill loading fetches ZIP archives from the registry, extracts SKILL.md, parses YAML frontmatter, and hands the instruction markdown to the DevOps agent, which executes it step-by-step using ShellTool — no code generation required.
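The extract-and-split step can be sketched stdlib-only. This returns the frontmatter raw rather than parsing it as YAML, and the function name is an assumption:

```python
import io
import zipfile

def load_skill(zip_bytes: bytes) -> tuple[str, str]:
    """Return (frontmatter, instructions) from a skill archive's SKILL.md."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        text = archive.read("SKILL.md").decode("utf-8")
    if text.startswith("---"):
        # split off the ----delimited YAML frontmatter from the body
        _, frontmatter, body = text.split("---", 2)
        return frontmatter.strip(), body.strip()
    return "", text.strip()
```

The body — plain instruction markdown — is what gets handed to the DevOps agent for step-by-step execution.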
😤 Challenges We Ran Into
- Jac toolchain constraints — Full OSP property graph persistence isn't stable yet on the current Jac build. We pragmatically implemented a workspace-backed JSON layer with graph semantics on top, so the architecture is graph-correct even if the physical store is flat files today.
- Tool loop reliability — Getting agents to cleanly handle tool errors, malformed LLM outputs, and recursive tool calls without infinite loops required careful circuit-breaking and truncation strategies.
- Channel auth that isn't annoying — Pairing codes for unknown DMs, allowlists for groups, and mention detection across three different APIs (Telegram, Discord, Slack) — each with different event models — proved surprisingly tricky to unify under a single `AuthEvent` interface.
- ClawHub skill isolation — Skills installed dynamically can conflict. We implemented version lock files and per-skill sandboxing to prevent collisions without requiring containers.
🏆 Accomplishments We're Proud Of
- A fully working multi-agent platform in pure Jac — not a Python project with Jac sprinkled on top
- Memory decay that actually works autonomously — the Dream walker has been running unattended for days during testing and the graph stays clean
- 13,000+ skills accessible from day one via ClawHub without writing a line of code
- A clean channel-agnostic auth layer that protects group chats without annoying legitimate users
- Sub-2-second routing from message receipt to specialist agent on a local Ollama model
📚 What We Learned
- Jac's `by llm()` semantic is genuinely powerful for agent definitions — it removes the gap between "what the function should do" and "what the function does"
- Graph memory with decay is a better default than both ephemeral context and permanent vector stores — it matches how memory actually works
- The hardest part of a multi-agent system isn't the agents — it's the message bus and auth layer
- Dynamic skill loading from a community registry is a force multiplier that changes what "personal AI" can mean
🚀 What's Next
- Full OSP graph persistence — migrate from workspace JSON to the native Jac property graph once the toolchain stabilizes
- Dashboard — React UI for browsing and editing the memory graph visually
- Graphclaw Cloud — hosted multi-user mode with per-user graph isolation
- MCP plugin expansion — more Model Context Protocol tools exposed to every agent
- Voice channels — Discord voice + phone call routing via Twilio
