Inspiration

Every product team ships faster than they update docs. APIs change, Figma designs change, but the documentation stays wrong — and users pay the price. We've all experienced the frustration of following a tutorial only to find the screenshots don't match the actual UI, or an API parameter was renamed three versions ago. We wanted to build an agent that doesn't just flag this problem, but autonomously fixes it and gets smarter over time.

What it does

DocAlive is an autonomous documentation freshness agent. It compares your current docs against code diffs, Figma design changes, and support logs to detect exactly which sections are stale. For each stale section, it scores the severity (0–100), drafts a publication-ready fix, and presents it through a human-in-the-loop review interface. Reviewers can approve, edit, or reject each suggestion via a chat-based UI.
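The per-section report that flows into the review UI can be pictured as a small data model like this (field names are illustrative assumptions, not DocAlive's actual schema):

```typescript
// Illustrative shape of one stale-section finding as it moves through review.
type ReviewDecision = "approved" | "edited" | "rejected";

interface StaleSectionReport {
  sectionId: string;
  severity: number;        // 0-100, higher = more urgently stale
  evidence: string[];      // code diffs / Figma changes / support logs that triggered the flag
  draftFix: string;        // publication-ready replacement text
  decision?: ReviewDecision;
  reviewerNotes?: string;
}

// Clamp a raw model score into the 0-100 severity range before storing it.
function clampSeverity(raw: number): number {
  return Math.max(0, Math.min(100, Math.round(raw)));
}
```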

The key differentiator: DocAlive improves itself. After each review cycle, a prompt optimizer agent analyzes the feedback — what got rejected and why, how reviewers edited the drafts — and rewrites the auditor's system prompt. It then runs an eval suite to verify the new prompt performs better before auto-deploying it. The agent literally writes better instructions for itself.
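The optimize-then-gate cycle above can be sketched as follows. This is a minimal stand-in, not DocAlive's real implementation; `optimizePrompt` and `evalScore` are hypothetical names for the LLM-backed optimizer and eval runner:

```typescript
// Structured feedback collected from one review cycle.
interface ReviewFeedback {
  rejectionReasons: string[]; // why reviewers rejected drafts
  editDiffs: string[];        // how reviewers changed the drafts they kept
}

// One improvement cycle: rewrite the auditor's prompt from feedback,
// then only deploy the rewrite if it beats the current prompt on evals.
function improvementCycle(
  currentPrompt: string,
  feedback: ReviewFeedback,
  optimizePrompt: (prompt: string, fb: ReviewFeedback) => string,
  evalScore: (prompt: string) => number, // higher = better on the eval suite
): string {
  const candidate = optimizePrompt(currentPrompt, feedback);
  return evalScore(candidate) > evalScore(currentPrompt) ? candidate : currentPrompt;
}
```

The eval gate is the important part: a rewrite that scores worse is silently discarded, so the loop can only ratchet the prompt upward.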

How we built it

  • Lovable — Generated the full-stack app (React frontend + Supabase backend with edge functions + database + hosting) entirely through prompting
  • Kimi K2.5 on Eigen AI — Powers the core analysis: comparing code diffs and Figma changes against documentation to detect staleness, generating draft fixes, and optimizing the system prompt
  • Senso — Verifies each suggested doc update against ground truth to prevent hallucinated or inaccurate fixes from reaching reviewers
  • Unkey — Protects the API endpoints with key verification and rate limiting
  • Supabase Edge Functions — Serverless backend running the analysis pipeline, review workflow, prompt optimization, and eval runner

The architecture is also designed to incorporate Nexla for real-time data ingestion from Git webhooks, the Figma API, and support tools, and Google Gemini for multimodal visual comparison of screenshots.

Challenges we ran into

  • LLM JSON parsing — Kimi K2.5 doesn't support response_format: json_object, so we had to build robust parsing with markdown fence stripping and regex JSON extraction to handle inconsistent outputs
  • Solo time management — Building solo meant scoping was critical. I focused on making the core loop (analyze → review → self-improve) bulletproof rather than spreading thin across every sponsor integration
  • Prompt engineering for precision — Getting the auditor agent to flag genuinely stale sections without false positives (e.g., internal code changes that don't affect public docs) required careful prompt iteration
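The fence-stripping-plus-regex fallback mentioned above looks roughly like this (a simplified stand-in for our actual parser):

```typescript
// Parse JSON out of an LLM response that may be wrapped in markdown fences
// or surrounded by extra prose.
function extractJson(raw: string): unknown {
  // Strip ```json ... ``` fences if the model wrapped its output.
  const unfenced = raw.replace(/```(?:json)?\s*([\s\S]*?)\s*```/g, "$1").trim();
  try {
    return JSON.parse(unfenced);
  } catch {
    // Fallback: grab the outermost {...} span and try again.
    const match = unfenced.match(/\{[\s\S]*\}/);
    if (!match) throw new Error("No JSON object found in model output");
    return JSON.parse(match[0]);
  }
}
```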

Accomplishments that we're proud of

  • Real self-improving loop — The agent doesn't just record feedback; it rewrites its own system prompt, runs evals, and auto-deploys improvements. This is a genuine closed-loop learning system, not just a dashboard showing stats
  • Built entirely through AI-assisted development — The entire app was generated through Lovable with natural language prompts, demonstrating that context engineering + the right tools can produce a production-quality app in hours
  • 3 real sponsor integrations working in a single afternoon as a solo developer

What we learned

  • Context engineering is about designing the right information flow, not just writing better prompts. The self-improving loop works because we feed structured feedback (rejections with reasons, edit diffs) back into the prompt — giving the optimizer agent the right context to improve
  • Soft integrations (fail-open when API keys aren't configured) are essential for hackathon velocity — you can wire up the architecture without being blocked by missing credentials
  • Scoping ruthlessly matters more than building comprehensively. A working demo of one complete loop beats a half-built demo of five features
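The fail-open pattern can be wrapped in a single helper like this (a generic sketch, not tied to any sponsor's SDK):

```typescript
// "Soft integration": if a service's API key is missing or the call fails,
// return a fallback value so the rest of the pipeline keeps running.
function softCall<T>(apiKey: string | undefined, call: () => T, fallback: T): T {
  if (!apiKey) return fallback; // key not configured: feature disabled, not broken
  try {
    return call();
  } catch {
    return fallback; // fail open on runtime errors too
  }
}
```

With this wrapper, every integration point degrades gracefully during development and only becomes active once its credential is supplied.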

What's next for DocAlive

  • Gemini multimodal integration — Visual comparison of Figma screenshots vs doc images, not just text-based change descriptions (I had trouble getting credits from DeepMind during the hackathon)
  • Nexla real-time pipeline — Continuous ingestion from Git webhooks, Figma webhooks, and support tools so the agent runs automatically on every change, not just on-demand
  • Git PR integration — Auto-create pull requests with doc fixes so approved changes go directly into the repo
  • Multi-repo support — Monitor documentation across multiple products and learn cross-project patterns about what types of docs go stale fastest

Built With

  • augment-code-context-engine
  • kimi-k2.5-(eigen-ai)
  • lovable
  • nexla
  • postgresql
  • react
  • senso-api
  • supabase
  • supabase-edge-functions
  • typescript
  • unkey