Problem Statement
AI coding agents can now write production-grade code, run tests, and ship pull requests. Teams are starting to use them in real workflows.
But once you move beyond isolated tasks, a limit shows up quickly: agents don’t fail at coding -- they fail at deciding what’s worth doing.
In real engineering teams, there isn’t a single place that reflects the true state of a project. The codebase shows what’s been built. Tickets show what was planned. But the decisions that drive execution -- like a blocker mentioned in Slack, a scope change agreed on in a meeting, or a dependency slipping in a PR thread -- often never get reflected back into those systems.
This is the gap. Much of engineering time goes to coordination: PMs and engineers spend hours each week manually tracking status and reconciling what changed. None of this is captured in a structured way. Humans keep track of it informally and adjust as things change. Agents can’t.
So even strong agents hit a ceiling. They work on stale or misprioritized tasks, miss dependencies that were never recorded, and generate output that can’t be shipped -- not because they can’t code, but because they don’t have an accurate view of the work.
Orbit turns unstructured updates and task dependencies into a continuously updated project state that agents can query via MCP -- while automatically inferring status so humans don’t have to track it manually.
What it does
Orbit turns a project into a live execution layer that agents can actually operate on.
It starts with a program charter and generates a structured execution graph -- tasks, dependencies, owners, and timelines -- giving both humans and agents a shared model of the work.
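An execution graph like the one described might be modeled as follows — a minimal sketch with illustrative names and fields, not Orbit's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of an execution graph: task nodes with owners,
# timelines, and dependency edges. Names and fields are assumptions,
# not Orbit's actual schema.

@dataclass
class Task:
    id: str
    title: str
    owner: str
    due: str                      # ISO date, e.g. "2025-07-01"
    status: str = "todo"          # todo | in_progress | blocked | done
    depends_on: list = field(default_factory=list)  # upstream task ids

@dataclass
class ExecutionGraph:
    tasks: dict = field(default_factory=dict)       # id -> Task

    def add(self, task: Task) -> None:
        self.tasks[task.id] = task

    def is_unblocked(self, task_id: str) -> bool:
        """A task is unblocked when every upstream dependency is done."""
        task = self.tasks[task_id]
        return all(self.tasks[d].status == "done" for d in task.depends_on)

graph = ExecutionGraph()
graph.add(Task("t1", "Design API", "alice", "2025-07-01", status="done"))
graph.add(Task("t2", "Build backend", "bob", "2025-07-10", depends_on=["t1"]))
graph.add(Task("t3", "Ship UI", "carol", "2025-07-15", depends_on=["t2"]))

print(graph.is_unblocked("t2"))  # True: t1 is done
print(graph.is_unblocked("t3"))  # False: t2 is not done
```

Keeping dependencies as explicit edges is what lets both humans and agents answer "what can start now?" mechanically instead of by tribal knowledge.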
Orbit then keeps this system continuously up to date. It ingests signals from Slack and GitHub, detects events like blockers, progress, and scope changes, and maps them back to the execution graph. It also proposes updates -- such as marking tasks as blocked or identifying slipped dependencies -- and shows downstream impact across the graph. All updates are reviewed and approved in Slack, ensuring the system stays accurate without requiring manual tracking.
The result is a continuously updated project snapshot that reflects the current state of execution. Orbit exposes this state through an MCP server, allowing agents to query what’s unblocked, what’s at risk, how dependencies are evolving, and what to prioritize.
For example, if a dependency slips in a Slack thread, Orbit flags downstream tasks as at-risk before an agent ever starts working on them.
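That downstream-impact step reduces to a forward traversal over dependency edges. A minimal sketch, with illustrative data shapes rather than Orbit's actual model:

```python
from collections import deque

# Minimal sketch: when one task slips, walk the dependency graph forward
# and collect every transitively downstream task. Data shapes are
# illustrative, not Orbit's actual model.

# depends_on[task] = set of upstream tasks it waits on
depends_on = {
    "backend": {"api-design"},
    "ui": {"backend"},
    "launch": {"ui", "docs"},
}

def downstream_of(slipped: str) -> set:
    """Return all tasks that transitively depend on `slipped`."""
    # Invert edges: upstream -> its direct dependents
    dependents = {}
    for task, ups in depends_on.items():
        for up in ups:
            dependents.setdefault(up, set()).add(task)

    at_risk, queue = set(), deque([slipped])
    while queue:
        current = queue.popleft()
        for dep in dependents.get(current, ()):
            if dep not in at_risk:
                at_risk.add(dep)
                queue.append(dep)
    return at_risk

print(downstream_of("api-design"))  # {'backend', 'ui', 'launch'}
```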
This allows agents to choose the right work, avoid wasted effort, and adapt in real time, turning them from isolated executors into participants in real-world execution.
How we built it
Orbit is built as a lightweight, end-to-end system that connects three pieces agents normally can’t bridge:
1. Planning → structure: We use Claude to convert free-text program charters into a structured execution graph of tasks, dependencies, owners, and timelines. This becomes the canonical representation of the project.
2. Communication → signals: Orbit ingests Slack messages and GitHub activity, classifying them into execution signals like blockers, progress, and completion. These signals are semantically matched back to the relevant tasks in the graph.
3. State → action: From these signals, Orbit continuously updates task state, detects dependency impacts, and generates proposed changes (e.g., marking tasks as blocked or at-risk). Instead of applying updates automatically, we route them through a Slack approval workflow to maintain trust and correctness.
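The classification-and-matching step has roughly this shape. Orbit does this with Claude; the keyword rules and token-overlap matching below are a deliberately naive stand-in for illustration only:

```python
# Stand-in sketch of signal extraction. In Orbit, classification and
# task matching are done with Claude; here, keyword rules and naive
# token overlap illustrate the shape of the pipeline.

SIGNAL_KEYWORDS = {
    "blocker": ["blocked", "waiting on", "stuck"],
    "progress": ["started", "working on", "halfway"],
    "completion": ["done", "shipped", "merged"],
}

def classify(message: str) -> str:
    """Bucket a message into a signal type via keyword rules."""
    text = message.lower()
    for signal, words in SIGNAL_KEYWORDS.items():
        if any(w in text for w in words):
            return signal
    return "unknown"

def match_task(message: str, task_titles: list) -> str:
    """Pick the task whose title shares the most words with the message."""
    msg_words = set(message.lower().split())
    return max(task_titles,
               key=lambda t: len(msg_words & set(t.lower().split())))

msg = "still blocked on the payments API review"
tasks = ["Integrate payments API", "Ship onboarding UI"]
print(classify(msg))           # blocker
print(match_task(msg, tasks))  # Integrate payments API
```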
All state is persisted as a live project snapshot and exposed through an MCP server, giving agents direct access to execution context -- including current status, blocked work, priority ordering, and linked artifacts like commits.
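The MCP server itself isn't reproduced here, but the logic behind a tool like "what's unblocked, in priority order" reduces to a filter-and-sort over the snapshot. Field names below are hypothetical:

```python
# Hypothetical snapshot shape; field names are illustrative. An MCP tool
# exposing "next unblocked work" reduces to a filter + sort like this.

snapshot = [
    {"id": "t1", "title": "Design API", "status": "done", "priority": 1},
    {"id": "t2", "title": "Build backend", "status": "todo",
     "priority": 1, "depends_on": ["t1"]},
    {"id": "t3", "title": "Ship UI", "status": "todo",
     "priority": 2, "depends_on": ["t2"]},
    {"id": "t4", "title": "Write docs", "status": "todo",
     "priority": 3, "depends_on": []},
]

def unblocked_work(tasks: list) -> list:
    """Open tasks whose dependencies are all done, best priority first."""
    done = {t["id"] for t in tasks if t["status"] == "done"}
    ready = [t for t in tasks
             if t["status"] != "done"
             and all(d in done for d in t.get("depends_on", []))]
    return sorted(ready, key=lambda t: t["priority"])

print([t["id"] for t in unblocked_work(snapshot)])  # ['t2', 't4']
```

An agent calling this tool would pick up t2 rather than t3, whose dependency hasn't landed yet.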
Challenges we ran into
The hardest part was turning messy human communication into something reliable.
Slack messages are vague, inconsistent, and context-dependent. Mapping them to the right task without creating noise or incorrect updates required careful prompting, normalization, and human-in-the-loop design.
We also had to balance usefulness with trust. Fully autonomous updates are risky, so we designed an approval system that keeps humans in control while still reducing manual work.
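The shape of that approval loop — propose, review, apply only on approval — can be sketched as follows. Slack delivery is omitted and names are illustrative:

```python
from dataclasses import dataclass

# Minimal sketch of the human-in-the-loop update flow: detected changes
# become pending proposals, and task state mutates only after a human
# approves. Slack delivery is omitted; names are illustrative.

@dataclass
class Proposal:
    task_id: str
    new_status: str
    reason: str
    state: str = "pending"   # pending | approved | rejected

task_status = {"t2": "in_progress"}
proposal = Proposal("t2", "blocked", "Dependency slipped per Slack thread")

def review(p: Proposal, approved: bool) -> None:
    """Apply the proposed status change only if a human approves it."""
    if p.state != "pending":
        return  # already decided; never apply twice
    if approved:
        task_status[p.task_id] = p.new_status
        p.state = "approved"
    else:
        p.state = "rejected"

review(proposal, approved=True)
print(task_status["t2"], proposal.state)  # blocked approved
```

Making the proposal a first-class object (rather than mutating state directly) is what keeps a rejected or stale suggestion from ever touching the graph.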
Accomplishments that we're proud of
- Building a full pipeline from charter → structured plan → live project system
- Making Slack a usable signal source for project state
- Creating a human-in-the-loop update system that agents can rely on
- Exposing project cognition cleanly to agents through MCP
Most importantly, we gave agents access to something they fundamentally lack today: the ability to understand and reason about execution.
What we learned
The biggest limitation of AI agents isn’t intelligence -- it’s access to the right context.
Code is only part of the system. Execution happens in conversations, decisions, and dependencies. Until agents can see that, they’ll always hit walls.
We also learned that you don’t need perfect understanding to be useful -- just enough signal to guide better decisions.
What's next for Orbit
Right now, Orbit helps agents understand project state.
Next, we want agents to act on it more directly:
- tighter integration with coding agents so they can choose and unblock their own work
- better reasoning over long-running projects and complex dependencies
- deeper integrations across communication and tooling
- more autonomous (but still safe) action loops
