Inspiration

We were inspired by the scalability, flexibility, and huge potential of agentic teams. According to the 2025 Agency Report, agencies that use AI extensively see 40 to 60 percent higher margins than traditional agencies, so an AI-native approach cuts costs, maximises profit, and optimises performance. The 2026 YC call for startups further highlights this market opportunity.

What it does

Pragma watches how work happens in a company (commits, tickets, messages, reviews, and so on) and builds a mathematical model of it. Every employee gets a Sentinel, an AI observer that discovers their recurring task patterns, creates an AI Shadow, and estimates human vs. AI performance for each one. When the AI can match or outperform the human, the Sentinel proposes a Shadow agent to take over that piece of work, built collaboratively with the person rather than imposed on them.

The result: a living, queryable model of the entire organisation that turns implicit knowledge into actionable, structured insights.
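The human vs. AI comparison described above can be sketched as a simple decision rule. This is an illustrative sketch only: the `Efficacy` shape, the tolerance, and the speedup factor are all invented names and values, not Pragma's actual model.

```typescript
// Hypothetical sketch: per-task efficacy comparison between a human
// baseline and their AI Shadow. All names and thresholds are illustrative.

interface Efficacy {
  quality: number;      // 0..1, e.g. acceptance rate of the task's output
  latencyHours: number; // mean time to complete the task
}

// A Shadow is proposed only when AI quality matches the human's
// (within a tolerance) and the AI is meaningfully faster.
function shouldProposeShadow(
  human: Efficacy,
  ai: Efficacy,
  qualityTolerance = 0.05,
  speedupFactor = 2
): boolean {
  const qualityOk = ai.quality >= human.quality - qualityTolerance;
  const fasterOk = ai.latencyHours * speedupFactor <= human.latencyHours;
  return qualityOk && fasterOk;
}
```

In practice the estimates on both sides would come from the Sentinel's observed event history rather than fixed numbers.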

How we built it

Brainstorming

Paper-based sketches and MIRO diagrams to flesh out the flows and architecture

Design of the role algorithm and knowledge graph structure

Generating spec documentation from the material above with Claude

Iterative development of the backend with Claude Code

Components:

Role Graph engine (Effect-TS) — typed state machines that power both Sentinels and Shadows. Compile-time safe, recursively composable.
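A minimal sketch of what a typed Sentinel state machine might look like, written in plain TypeScript rather than Effect-TS (the states, events, and transition names here are invented for illustration, not the engine's actual API):

```typescript
// Illustrative Sentinel lifecycle as a discriminated-union state machine.
// The compiler checks that every state handles its events exhaustively.

type SentinelState =
  | { tag: "observing" }
  | { tag: "proposing"; taskId: string }
  | { tag: "shadowing"; taskId: string };

type SentinelEvent =
  | { tag: "patternFound"; taskId: string }
  | { tag: "proposalAccepted" }
  | { tag: "proposalRejected" };

function transition(state: SentinelState, event: SentinelEvent): SentinelState {
  switch (state.tag) {
    case "observing":
      // A discovered task pattern moves the Sentinel to proposing a Shadow.
      return event.tag === "patternFound"
        ? { tag: "proposing", taskId: event.taskId }
        : state;
    case "proposing":
      if (event.tag === "proposalAccepted") return { tag: "shadowing", taskId: state.taskId };
      if (event.tag === "proposalRejected") return { tag: "observing" };
      return state;
    case "shadowing":
      // Terminal for this sketch; the Shadow now handles the task.
      return state;
  }
}
```

Because the state and event types are closed unions, an unhandled combination is a compile-time error, which is the "compile-time safe" property mentioned above.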

Federated knowledge graph — each person owns their subgraph (events, observations, task embeddings, efficacy estimates). The company-level graph stores the org structure. Write-disjoint, read-open. No conflicts.
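The write-disjoint, read-open rule can be illustrated with a small sketch (class and method names are assumptions for illustration, not the real store):

```typescript
// Hypothetical sketch of the federation rule: each person may only write
// to their own subgraph, while reads are open to everyone.

interface Subgraph {
  ownerId: string;
  observations: string[];
}

class FederatedGraph {
  private subgraphs = new Map<string, Subgraph>();

  // Anyone can read any subgraph.
  read(ownerId: string): Subgraph | undefined {
    return this.subgraphs.get(ownerId);
  }

  // Writes are accepted only from the subgraph's owner, so no two
  // writers ever touch the same subgraph and conflicts cannot occur.
  write(writerId: string, ownerId: string, observation: string): boolean {
    if (writerId !== ownerId) return false;
    const sub = this.subgraphs.get(ownerId) ?? { ownerId, observations: [] };
    sub.observations.push(observation);
    this.subgraphs.set(ownerId, sub);
    return true;
  }
}
```

Conflict-freedom falls out of the ownership rule itself: since the set of writers per subgraph has size one, there is nothing to reconcile.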

Event pipeline — platform adapters (GitHub, Jira) normalise webhooks → rule-based extraction for structured events, LLM fallback for unstructured ones → task embedding → micro-role clustering → efficacy estimation.
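The rule-first, LLM-fallback stage of the pipeline can be sketched as follows. The payload shapes, extractor signature, and fallback are assumptions made for the example, not the actual adapter code:

```typescript
// Illustrative pipeline stage: try cheap rule-based extraction first,
// fall back to an LLM only for unstructured payloads.

interface TaskEvent { personId: string; kind: string; summary: string }

type Extractor = (payload: Record<string, unknown>) => TaskEvent | null;

// Rule-based extractor for a structured GitHub-style webhook payload.
const githubRule: Extractor = (p) =>
  typeof p.action === "string" && typeof p.sender === "string"
    ? { personId: p.sender, kind: `github:${p.action}`, summary: String(p.title ?? "") }
    : null;

function extract(
  payload: Record<string, unknown>,
  rules: Extractor[],
  llmFallback: Extractor
): TaskEvent | null {
  for (const rule of rules) {
    const hit = rule(payload);
    if (hit) return hit; // structured path: deterministic and cheap
  }
  return llmFallback(payload); // unstructured path: expensive, used last
}
```

Downstream of this stage, the extracted `TaskEvent`s would be embedded, clustered into micro-roles, and fed into efficacy estimation as described above.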

Challenges we ran into

Ensuring the architecture scales to a large number and high frequency of events

Reducing redundancy when processing events while maintaining separation of concerns

Balancing accuracy and data purity against read/write speed

Designing and separating the individual data stores and the company-level knowledge graph, and defining the interactions and relations between the two.

Context management and precision when working with Claude.

Timely and clean event processing.

Cross-person events, where two or more subgraphs need updating: getting fan-out right without duplication or ordering bugs required careful event partitioning and management.
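The duplication half of that fan-out problem reduces to idempotent delivery per (event, person) pair. A minimal sketch, assuming an invented idempotency-key format:

```typescript
// Sketch of cross-person fan-out: one event may touch several subgraphs,
// so we derive one idempotency key per (event, person) pair and skip
// pairs that were already applied. Key format is an assumption.

interface CrossEvent { id: string; participants: string[] }

class FanOut {
  private applied = new Set<string>();

  // Returns the participants whose subgraphs were updated by this call;
  // re-delivering the same event is a no-op.
  deliver(event: CrossEvent): string[] {
    const updated: string[] = [];
    for (const person of event.participants) {
      const key = `${event.id}:${person}`;
      if (this.applied.has(key)) continue;
      this.applied.add(key);
      updated.push(person);
    }
    return updated;
  }
}
```

Ordering is the harder half and is not shown here; partitioning the event stream per person (so each subgraph sees its updates in sequence) is the approach the paragraph above refers to.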

Accomplishments that we're proud of

The system is proactive and essentially autonomous: it discovers real work patterns from raw event streams. For example, it can find "code review", "bug fixing", "data pulls", and "sprint planning" as micro-role clusters without being told to look for them.
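The flavour of that unsupervised discovery can be shown with a toy clusterer: group task embeddings greedily by cosine similarity, with no labels given up front. This is a deliberately simplified sketch; the real clustering method and threshold are not specified here and everything below is an assumption:

```typescript
// Toy sketch of micro-role discovery: greedily assign each task
// embedding to the first cluster whose seed vector is similar enough,
// otherwise start a new cluster. Threshold and method are illustrative.

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Returns one cluster label per embedding; cluster count is emergent.
function cluster(embeddings: number[][], threshold = 0.9): number[] {
  const labels: number[] = [];
  const seeds: number[][] = [];
  for (const e of embeddings) {
    let best = -1;
    let bestSim = threshold;
    for (let c = 0; c < seeds.length; c++) {
      const s = cosine(e, seeds[c]);
      if (s >= bestSim) { bestSim = s; best = c; }
    }
    if (best === -1) {
      seeds.push(e);
      labels.push(seeds.length - 1);
    } else {
      labels.push(best);
    }
  }
  return labels;
}
```

The point of the sketch is that the number and identity of clusters ("code review", "bug fixing", ...) emerge from the data rather than from a predefined taxonomy.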

What we learned

tbu

What's next for #pragma

  1. Fully stand up the autonomous enterprise and monitor its performance.
  2. Deploy it in a real environment and enable revenue generation.
  3. Once our business assumptions are validated through the pilot and the model is proven with concrete, measurable metrics, pitch the solution to potential customers. Our endgame: the company factory, which discovers viable task surfaces, spawns enterprise clones, deploys Shadow portfolios, and manages AI-operated organisations as optimised mathematical tuples.

Built With
