Inspiration
Deployment lags development.
AI tools have cut the time it takes to write software, but deployment still requires someone to manually inspect the repo, pick a runtime, find the right commands, and sort out ports. That overhead hasn't changed.
Platforms like Vercel and Railway solve this, but only for specific frameworks; they're fast because they already know what you're building. That breaks down for solo developers and small teams working with Flask, Go binaries, Streamlit apps, CLI tools, or monorepos. dploy is the thing you reach for when you just want to share what you built: paste a repo URL and get a live link within minutes.
What It Does
Point dploy at a GitHub repo and it provisions an ephemeral sandbox, figures out how the project runs, and returns a public URL. Web apps get an HTTPS link, CLI tools get a shareable browser terminal, and monorepos get per-service URLs.
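For a sense of the shape of the interaction, here's a hypothetical client call (the endpoint and field names are illustrative, not dploy's actual API):

```python
import requests

# Illustrative only: the real endpoint, auth, and response shape may differ.
resp = requests.post(
    "https://dploy.example.com/api/deployments",
    json={"repo_url": "https://github.com/someuser/flask-demo"},
)
deployment = resp.json()
print(deployment["url"])  # HTTPS link for web apps, browser-terminal URL for CLIs
```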
How We Built It
The core insight: you can be fast without being framework-specific, as long as you're smart about when to use AI.
Heuristics first. For the 80% of repos with a recognizable layout — Node, Python, Go — we synthesize the install/build/start plan from file inspection alone, skipping the LLM entirely and saving 60–90 seconds.
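A sketch of what that fast path looks like, assuming a simple dict-shaped plan (the file checks and commands here are simplified; the real heuristics cover more layouts):

```python
from pathlib import Path

def heuristic_plan(repo: Path) -> dict | None:
    """Synthesize an install/build/start plan from file inspection alone.
    Returns None when the layout is ambiguous, deferring to the agents."""
    if (repo / "package.json").exists():
        has_lockfile = (repo / "package-lock.json").exists()
        return {"install": "npm ci" if has_lockfile else "npm install",
                "build": None, "start": "npm start"}
    if (repo / "requirements.txt").exists() and (repo / "app.py").exists():
        return {"install": "pip install -r requirements.txt",
                "build": None, "start": "python app.py"}
    if (repo / "go.mod").exists():
        return {"install": None, "build": "go build -o app .", "start": "./app"}
    return None  # unrecognized or ambiguous layout -> LLM path
```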
Agents for the rest. When the layout is ambiguous, Agent #1 inspects the codebase and produces a structured plan. Agent #2 runs install and build, starts the server, discovers the actual bound port, verifies HTTP, and reports back. Both agents run inside the sandbox where the code already lives.
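The contract between the two agents is just a structured plan; something like this minimal shape (the actual schema is richer):

```python
from dataclasses import dataclass

@dataclass
class DeployPlan:
    """What Agent #1 (analyze) emits and Agent #2 (expose) executes.
    The heuristic fast path produces the same shape, so the expose
    agent never knows or cares which path built the plan."""
    install: list[str]          # e.g. ["pip install -r requirements.txt"]
    build: list[str]            # empty for interpreted projects
    start: str                  # the long-running server command
    expected_port: int | None   # a hint only; expose verifies the real port
```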
Speed by design. A warm sandbox pool eliminates cold-start latency. Heuristics cut most LLM round-trips. The result: a typical repo goes from request to live URL in a couple of minutes.
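The warm pool is conceptually just a pre-filled queue with a background refill loop (a sketch; `create_sandbox` stands in for whatever boots the ephemeral environment):

```python
import queue
import threading

POOL_SIZE = 4
pool = queue.Queue(maxsize=POOL_SIZE)

def create_sandbox():
    """Placeholder for booting an ephemeral sandbox (we run on Modal)."""
    ...

def refill():
    # put() blocks once the pool is full, so we never over-provision.
    while True:
        pool.put(create_sandbox())

threading.Thread(target=refill, daemon=True).start()

def acquire_sandbox():
    # Usually instant; only blocks if demand briefly outruns the refill loop.
    return pool.get()
```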
CLI repos as a first-class target. Instead of failing on repos with no web server, dploy exposes them as browser terminals via ttyd.
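Inside the sandbox this is little more than wrapping the tool's entry command in ttyd and tunneling ttyd's port (the entry command and port here are illustrative):

```python
import subprocess

# ttyd serves any command as an interactive terminal over HTTP.
# 7681 is ttyd's default port; -W makes the session writable.
proc = subprocess.Popen(["ttyd", "-W", "-p", "7681", "python", "cli.py"])
# The orchestrator tunnels 7681 and returns that URL as the deployment link.
```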
Multi-service support. We pre-provision tunnel URLs right after sandbox creation and inject them into the build step, so a frontend can be told its backend URL before it compiles.
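In code, that ordering constraint looks roughly like this (`reserve_tunnel` is an illustrative helper, and VITE_API_URL is just one example of a compile-time variable a frontend might read):

```python
from dataclasses import dataclass

@dataclass
class Tunnel:
    url: str

def reserve_tunnel(port: int) -> Tunnel:
    """Placeholder: maps a sandbox port to a stable public URL."""
    return Tunnel(url=f"https://sandbox-{port}.example.dev")

# Tunnels exist immediately after sandbox creation -- before install/build.
tunnels = {"backend": reserve_tunnel(8000), "frontend": reserve_tunnel(5173)}

# The frontend bakes the backend URL in at compile time, so it must be
# present in the environment of the build step, not just at runtime.
build_env = {"VITE_API_URL": tunnels["backend"].url}
```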
Challenges
Heuristic conservatism. A wrong synthesized plan means Agent #2 fails and the deployment breaks. Because a false negative (falling back to the LLM) costs only seconds while a false positive costs the whole deployment, we made the heuristics bail on anything ambiguous and fire only when we're confident.
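The guard itself is a handful of "any hint of a nonstandard setup" checks (illustrative; the real list is longer):

```python
from pathlib import Path

def looks_ambiguous(repo: Path) -> bool:
    """If any of these fire, skip the fast path and let the agent decide."""
    return any([
        (repo / "Dockerfile").exists(),              # custom runtime we shouldn't second-guess
        (repo / "Makefile").exists(),                # build steps we can't infer from layout
        len(list(repo.glob("*/package.json"))) > 1,  # probable monorepo
    ])
```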
Port discovery. Servers bind to unpredictable ports and addresses. The expose agent probes and verifies; the orchestrator tunnels whatever actually responded.
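A simplified sketch of that probe-and-verify step, using only the standard library (the actual expose agent does more, but the shape is the same):

```python
import socket
import urllib.request
from urllib.error import HTTPError

def find_live_port(candidates=range(3000, 9100)) -> int | None:
    """Return the first port that both accepts TCP and answers HTTP."""
    for port in candidates:
        with socket.socket() as s:
            s.settimeout(0.05)
            if s.connect_ex(("127.0.0.1", port)) != 0:
                continue  # nothing listening on this port
        try:
            urllib.request.urlopen(f"http://127.0.0.1:{port}/", timeout=2)
            return port
        except HTTPError:
            return port   # a 4xx/5xx still proves an HTTP server is bound here
        except Exception:
            continue      # listening, but not speaking HTTP
    return None
```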
Accomplishments
- The same pipeline handles Node, Python, Go, CLIs, and multi-service repos without framework-specific code paths
- Heuristics skip the LLM for common projects; agents cover the rest
- CLI repos exposed as browser terminals rather than rejected
- Full observability: agent transcripts, structured plans, and build logs per deployment
What We Learned
Separating analyze and expose was the most important decision. Analyze produces a plan that can be inspected and short-circuited by heuristics. Expose executes against it. Keeping them separate made the fast path trivial to add.
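With that split, the orchestrator's core reduces to a few lines (names are illustrative, reusing the sketches above):

```python
def deploy(repo, sandbox) -> str:
    # Analyze: cheap heuristics first; the agent runs only when they decline.
    plan = heuristic_plan(repo) or run_analyze_agent(sandbox, repo)
    # Expose: execute the plan, verify the bound port, return the tunnel URL.
    return run_expose_agent(sandbox, plan)
```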
What's Next
- Local project uploads (no GitHub URL required)
- More heuristic coverage so fewer repos need an LLM pass
- Better failure output when a build breaks
Built With
- claude
- fastapi
- ink
- modal
- openclaw
- python
- react
- sqlalchemy
- tailwind
- tanstack
- typescript
- vite
- xterm.js