💡 Inspiration

We constantly confront a challenge that slows us down and tests our patience: a codebase with outdated or missing documentation. New team members struggle to find their footing. Delivery timelines expand because understanding the code requires time-consuming exploration. Valuable hours are spent piecing together context that should already exist.

Out-of-date documentation, undocumented logic and stale runbooks are not just small annoyances — they are real impediments to productivity and collaboration. What should be straightforward becomes unnecessarily complex. What should be efficient becomes frustratingly slow.


We realize this is not just our problem — it is a pain shared across engineering teams everywhere: knowledge gaps that hinder progress, strain communication and inflate delivery cycles.

This shared challenge inspires our solution.

We have built a unified, intelligent platform that brings together Bitbucket and Confluence, powered by the latest Claude Sonnet model and seamlessly built on Forge with responsive web triggers, intuitive UI elements and efficient queues. Our platform does more than retrieve documentation — it interprets, connects and illuminates the context engineers need, right where they work.

What once took hours of guesswork now resolves in moments. Onboarding becomes smooth and empowering. Collaboration across teams becomes natural. Navigating complex systems feels — dare we say — inspired.

We are not merely addressing our own frustrations. We are redefining how teams understand, explore and engage with their code — because brilliant engineering deserves clarity, confidence and momentum.


🔍 What it does

At its core, our solution automates the hardest part of software engineering: keeping documentation accurate, up-to-date and deeply insightful — every time code changes. It works like this:

  • When a developer merges a commit in Bitbucket, our system reacts instantly. A web trigger listens for merge events and places the update into a Forge queue.


From there, the magic begins:

  • Inside Forge, we prepare a structured payload representing the newest code changes. This payload is sent through our API Gateway to the Claude Sonnet model, where the AI carefully interprets and enriches the content. The model doesn’t just regurgitate text — it understands context, analyzes the change in relation to the existing documentation and produces updated, meaningful content with explanations that elevate understanding of the codebase.
  • Using Confluence’s APIs, we then update the service documentation directly where the team already collaborates. This includes:
    • Clear, relevant context for the service
    • Relevant sequence diagrams that visually illustrate service behaviour
    • Insightful sections that deepen comprehension and support learning
  • To keep everyone in sync, we send updates to Slack as soon as the documentation is refreshed, so developers and stakeholders are immediately aware that fresh context is available.

In practice, this means that what used to take hours of digging and guesswork now happens automatically the moment code is merged. Engineers spend less time searching for context and more time building features with confidence. And because the documentation lives in Confluence and is linked to real development activity in Bitbucket, teams always have a single source of truth that evolves with their code.
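As a rough illustration, the structured payload handed to the model might look like this (field names are our own sketch for this write-up, not the app's actual schema):

```javascript
// Illustrative payload builder — field names are assumptions, not the
// app's actual schema.
function buildEnrichmentPayload(mergeEvent, existingDoc) {
  return {
    repository: mergeEvent.repo,
    commit: mergeEvent.commitHash,
    changedFiles: mergeEvent.changedFiles,
    // Sending the current page lets the model update rather than rewrite.
    currentDocumentation: existingDoc,
    instructions:
      'Update the service documentation to reflect these changes; ' +
      'preserve sections that are still accurate.',
  };
}
```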


🛠 How We Built It

We built an automated documentation sync solution that updates Confluence pages whenever changes occur in a Bitbucket repository — eliminating manual copy-paste and keeping docs always current.

At the core, this is an Atlassian Forge app that integrates Bitbucket Cloud, Confluence and, optionally, Slack. The frontend uses Forge Custom UI with React to capture configuration (Confluence page link, API tokens, Slack webhook) and persists it in Forge KV storage, scoped to the workspace and repository.
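A minimal sketch of that per-repository scoping, assuming a composite storage key — `buildConfigKey` is an illustrative name, not the app's actual helper:

```javascript
// Sketch: a KV-storage key scoped to workspace + repository, so two
// repositories never read each other's page links or tokens.
// (Illustrative; the real app's key format may differ.)
function buildConfigKey(workspace, repoSlug) {
  // Normalise so "My-Workspace" and "my-workspace" map to the same entry.
  const ws = workspace.trim().toLowerCase();
  const repo = repoSlug.trim().toLowerCase();
  return `config:${ws}:${repo}`;
}

// With the Forge Storage API this key would be used roughly as:
//   import { storage } from '@forge/api';
//   await storage.set(buildConfigKey(ws, repo), config);
```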

On the backend, Forge resolver functions handle config storage and retrieval. A Forge web trigger receives Bitbucket push/commit webhooks, parses repository metadata, fetches stored config and retrieves existing Confluence content via the Confluence REST API. To avoid the 55-second timeout limit of web triggers, we enqueue a background job to do the heavy work asynchronously.

The background processor is where the magic happens: it invokes an AWS Lambda (via AWS API Gateway) to generate updated documentation, using AI on AWS Bedrock (Claude Sonnet 4.5) with the existing Confluence content and Bitbucket metadata as input. We implemented a retry mechanism over the API Gateway call to handle intermittent failures robustly. The Lambda returns the generated content to the processor, which then pushes it to the Confluence page via the REST API.
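A retry wrapper of this kind might look like the following sketch (illustrative names; the delays and attempt counts are assumptions, not our production values):

```javascript
// Sketch: retry transient failures with exponential backoff before
// giving up and surfacing the last error.
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off: 500 ms, 1000 ms, 2000 ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage (hypothetical endpoint call):
//   const doc = await withRetry(() => callApiGateway(payload));
```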

Every doc update attempt — success or error — triggers a Slack webhook notification, ensuring users are notified of the result.

Instead of static templates, the solution leverages AI for context-aware doc generation. The architecture is model-agnostic and easily adaptable — capable of plugging in Rovo Chat or other models (Note: Rovo wasn’t accessible during implementation). On top of automation, our built-in recommendations engine surfaces intelligent improvement suggestions based on usage patterns and content gaps.

This implementation balances real-time triggers, async processing, resilient API integration and AI-driven content synthesis to deliver seamless, always-up-to-date documentation.

Tech Stack:

  • Forge UI: React + Forge Reconciler
  • Backend: Forge Resolver (Node.js)
  • KV Storage: Forge Storage API
  • Event Queue: Forge Events (Queue)
  • AI Backend: AWS API Gateway → Lambda → AWS Bedrock (Claude Sonnet 4.5)
  • Integrations: Confluence REST API, Bitbucket REST API
  • Optional Notifications: Slack Webhooks

🧠 Challenges we ran into

  • First-Time Forge Development: Building on Atlassian Forge was new to the team, and navigating its security model, API surface and execution constraints was an obvious early hurdle. Understanding how Forge scopes storage, handles triggers and interacts with external services involved a steep learning curve.
  • Crafting High-Quality AI Prompts: Generating meaningful, context-aware documentation hinged on well-designed prompts for the AI. It took several iterations to refine prompts that consistently produced clear, accurate and relevant updates for a variety of doc structures and code changes.
  • No Access to Rovo Chat: Although we architected our system to be model-agnostic, we didn’t have access to Rovo Chat at the time of implementation. This pushed us to choose a third-party AI solution — AWS Bedrock (Claude Sonnet 4.5) — while keeping the door open for future model integrations.
  • Overcoming Web Trigger Timeouts: Forge web triggers have a 55-second execution limit, but our AI content generation often took 3+ minutes. To handle this, we split the workflow: the web trigger immediately enqueues a job and returns a job ID, while the background processor handles the heavy AI work asynchronously.
  • Rendering Rich, Full-Width Images: We initially received HTML content with images that did not render at full width on Confluence. Confluence’s storage format requires specific XHTML structures and CSS for consistent layout. To solve this, we built an HTML-to-Confluence XHTML image converter that rewrites <img> tags into proper <ac:image><ri:url> elements with CSS tweaks so images always scale to 100% width, preserving rich content quality.
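A simplified sketch of that last rewrite (the real converter handles more attributes and CSS tweaks than this single regex):

```javascript
// Sketch: rewrite HTML <img> tags into Confluence storage-format
// <ac:image><ri:url> elements with a 100% width, so images span the page.
// (Simplified illustration of the approach, not the full converter.)
function imgToConfluenceXhtml(html) {
  return html.replace(
    /<img\s+[^>]*src="([^"]+)"[^>]*\/?>/g,
    (_, src) =>
      `<ac:image ac:width="100%"><ri:url ri:value="${src}" /></ac:image>`
  );
}
```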

🎯 Accomplishments that we're proud of


  • Automatic, Context-Aware Documentation: Every commit merge triggers a fully automated pipeline that analyzes code changes and updates service documentation in Confluence — without manual intervention.
  • Rich Visual Aids: The system generates relevant diagrams, including sequence and class diagrams, to visually represent how services behave and interact, making documentation easier to understand and more actionable.
  • Multi-Language Support: Our solution works seamlessly across multiple programming languages, ensuring that documentation stays accurate and helpful regardless of the tech stack in your repositories.
  • Real-Time Team Updates: Developers receive instant notifications via Slack whenever documentation is updated, keeping the entire team informed and aligned without extra effort.
  • Deep Integration with Atlassian Products: We leverage Bitbucket and Confluence APIs to create a tight, dependable sync between code changes and living documentation, bringing context straight into the tools engineers use every day.
  • Scalable Architecture: By thoughtfully handling event queues and caching where appropriate, our system is built to scale as teams and codebases grow — ensuring fast feedback and efficient processing even under load.

📚 What we learned

Building this solution taught us a great deal about designing tools that are scalable, reliable and deeply integrated with the Atlassian ecosystem.

We strengthened our skills in Atlassian Forge, especially in building reactive UI with Forge UI (React + Forge Reconciler) and handling backend logic with Forge Resolver (Node.js). Working with Forge Events (Queue) and KV Storage (Forge Storage API) showed us early on how important efficient state management and event handling are, particularly as workflows grow with usage.

One key lesson was around scalability: as we processed merge events and interacted with external systems, we learned when and where to use caching to reduce redundant processing and keep response times fast — essential for a system that operates in the background but delivers real-time value.
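One way to sketch that kind of caching is a commit-keyed memo (illustrative only; a production version would live in Forge storage with expiry rather than an in-memory map):

```javascript
// Sketch: skip redundant processing when the same commit arrives twice,
// e.g. via a duplicate webhook delivery or a queue retry.
const processedCommits = new Map();

async function processOnce(commitHash, work) {
  if (processedCommits.has(commitHash)) {
    // Already handled this commit: reuse the earlier result.
    return processedCommits.get(commitHash);
  }
  const result = await work();
  processedCommits.set(commitHash, result);
  return result;
}
```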

Integrating with Bitbucket and Confluence APIs reinforced best practices in API design, error handling and content validation. We also learned how thoughtful notifications (via Slack) help keep teams aligned and confident that documentation is always current.

Above all, we learned that automating documentation is not just about generating content — it’s about creating a smooth, dependable experience that genuinely reduces cognitive load for engineers and keeps knowledge flowing naturally with development.


🔮 What’s Next for SherlockSync

📍 Integrate with Atlassian Compass for Documentation Quality

We plan to connect SherlockSync with Atlassian Compass to elevate service documentation quality. By surfacing documentation metrics, tracking coverage gaps and aligning docs with software components and dependencies, we aim to make documentation a first-class metric in engineering health dashboards.

🛠 Integrate with Jira Service Management (JSM) for Runbooks & Service Docs

We’ll build deep integration with Jira Service Management so that on-call insights and runbooks become a source of truth for SherlockSync. On-call teams often log detailed business logic, troubleshooting steps and real-world scenarios — information we can leverage to automatically enhance service documentation and runbooks. This creates a closed loop where operational knowledge feeds back into living documentation.

📈 Enterprise-Ready Roadmap

While SherlockSync’s hackathon prototype successfully enables automated, AI-driven updates for Confluence documentation based on Bitbucket events, we have an ambitious roadmap ahead to make it enterprise-ready and even more intelligent. We plan to optimize performance and scale, improving model efficiency and processing so that larger repositories and high-velocity codebases are handled seamlessly, including smarter incremental (diff-based) updates.
