Documatey

Inspiration

Documentation often sits unused or grows increasingly complex. We wanted a way to make documentation more actionable by tying it to a goal, the desired outcome the reader wants to achieve.

What it does

Documatey transforms raw documentation into structured, actionable plans tailored to the user's goal for reading it.

How we built it

  • Frontend: Built with Next.js + shadcn UI for a clean, responsive interface.
  • Backend: Next.js's built-in API routes handle requests between the AI agent and the database.
  • AI Agent: An agentic pipeline using Gemini as the LLM to generate follow-up questions that clarify the user's goal, and to power in-context chat for each plan step.
  • Database: TiDB Serverless with vector search for indexing, semantic search, and retrieving relevant documentation snippets.
  • Integration: The system chains LLM calls and queries the TiDB vector index to generate step-by-step, goal-aligned plans.
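As a rough sketch, the chaining described above might look like the following; the function names (`embed`, `searchSnippets`, `generateSteps`) are illustrative stand-ins, not Documatey's actual API:

```typescript
// Hypothetical sketch of the plan-generation chain. The concrete
// embedding, search, and LLM calls are injected as functions so the
// chaining logic itself stays testable.
type Snippet = { id: number; text: string };
type Step = { title: string; citations: number[] };

async function generatePlan(
  goal: string,
  embed: (text: string) => Promise<number[]>,
  searchSnippets: (vec: number[], k: number) => Promise<Snippet[]>,
  generateSteps: (goal: string, context: Snippet[]) => Promise<Step[]>,
): Promise<Step[]> {
  // 1. Embed the user's stated goal.
  const vec = await embed(goal);
  // 2. Retrieve the top-k relevant snippets from the vector index.
  const snippets = await searchSnippets(vec, 5);
  // 3. Ask the LLM for goal-aligned steps grounded in those snippets.
  return generateSteps(goal, snippets);
}
```

In this shape, swapping the LLM or the vector store only changes the injected functions, not the pipeline.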

Features and functionality

Documatey turns documentation into an agentic workflow:

  • Index docs (crawl or paste text) into a vector store.
  • Ask clarifying questions to firm up requirements.
  • Generate a step-by-step plan with citations.
  • Chat with retrieval grounded to the selected plan step.
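The retrieval behind these features can be sketched as a single TiDB query; the table and column names (`doc_chunks`, `embedding`, `chunk_text`) are assumptions, but `VEC_COSINE_DISTANCE` is TiDB's vector-distance function for ranking `VECTOR` columns:

```typescript
// Hypothetical top-k semantic search over indexed doc chunks in
// TiDB Serverless. The placeholder (?) would be bound to the query
// embedding produced for the user's goal or chat message.
const topK = 5;
const retrievalSql = `
  SELECT id, chunk_text,
         VEC_COSINE_DISTANCE(embedding, ?) AS distance
  FROM doc_chunks
  ORDER BY distance
  LIMIT ${topK};
`;
```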

Challenges we ran into

  • Over-broad retrieval: Unconstrained vector search often pulled in off‑domain snippets, degrading plan quality.
  • System prompts and strict JSON: Gemini's adherence to strictly valid JSON output was inconsistent, even with careful system prompts.
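One mitigation for over-broad retrieval is to cut results by a distance threshold rather than always keeping the top-k; the sketch below (threshold value included) is illustrative, not Documatey's tuned configuration:

```typescript
// Drop retrieved hits whose cosine distance is too large, so
// off-domain snippets never reach the LLM's context window.
type Hit = { text: string; distance: number };

function filterHits(hits: Hit[], maxDistance = 0.4, k = 5): Hit[] {
  return hits
    .filter(h => h.distance <= maxDistance) // remove off-domain hits
    .sort((a, b) => a.distance - b.distance) // closest first
    .slice(0, k); // cap context size
}
```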

What we learned

  • How to effectively combine AI agents with vector databases for real-world productivity tools.
  • The importance of data indexing and embeddings in making search and retrieval reliable.

What's next for Documatey

  • Response robustness and schema validation: Strictly validate LLM JSON output against zod schemas.
  • Multi-model support and fallbacks: Allow model routing for LLMs with per-task prompts.
