Inspiration
Development and Sales teams are often bottlenecked by "Environment Hell." Sales engineers spend hours manually prepping demos, while QA teams struggle to reproduce bugs in environments that don't match production. We wanted to build Borderless: a system that creates a "living" sandbox that doesn't just host code, but understands it. We were inspired by the idea of making "Environment-as-Code" accessible to everyone through autonomous AI agents.
What it does
Borderless is a sandbox orchestration platform that generates on-demand, production-parity environments.
Instant Environment: It boots a Dockerized MicroVM containing the user’s full stack and a VNC-accessible desktop.
Autonomous Seeding: It uses AI to "sweep" the codebase, understand the database schema, and inject synthetic data for specific personas (Admin vs. User).
Agentic Walkthroughs: An AI agent (using Browser-Use) navigates the UI in real-time to "pre-warm" the demo or verify a QA scenario.
Scenario Reusability: Users can capture a walkthrough once; Borderless saves the DB state and the agent’s trajectory as a reusable "Golden Record" for future launches.
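The "Golden Record" described above pairs a database snapshot with the agent's recorded UI trajectory. A minimal sketch of what that capture might look like, assuming a hypothetical `GoldenRecord` shape (the class name, fields, and snapshot path are illustrative, not Borderless's actual schema):

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class GoldenRecord:
    """Hypothetical shape of a saved walkthrough: a reference to the
    snapshotted DB state plus the agent's ordered UI trajectory."""
    scenario: str
    db_snapshot: str  # e.g. tag/path of the snapshotted sandbox volume
    trajectory: list = field(default_factory=list)

    def add_step(self, action: str, selector: str) -> None:
        # Each step records what the agent did and where, so the
        # walkthrough can be replayed deterministically later.
        self.trajectory.append({"action": action, "selector": selector})

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Capture once...
record = GoldenRecord("admin-checkout-demo", "snapshots/demo-v1.tar")
record.add_step("click", "#login-button")
record.add_step("fill", "input[name=email]")

# ...and replay later by restoring the snapshot and walking the trajectory.
replayed = json.loads(record.to_json())
```

Persisting the trajectory alongside the DB state is what makes a launch reproducible: restoring only one half would leave the UI and the data out of sync.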
How we built it
Virtualization: Docker-in-Docker containers paired with Xvfb and noVNC to provide a browser-based interactive desktop.
Brain: Gemini 3 Flash coupled with code2prompt to ingest entire repositories and map out application logic and data schemas.
Agentic Automation: Browser-Use and Playwright served as the "hands" of our AI, allowing it to interact with the UI based on natural language prompts.
Backend: A Node.js orchestrator managing the lifecycle (TTL, reset, snapshotting) of the MicroVMs.
Data Layer: A custom Python tool that allows the AI agent to be "state-aware," querying the sandboxed database to verify that UI actions successfully triggered backend changes.
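The state-awareness idea in the Data Layer can be sketched as a single verification helper: after the agent performs a UI action, query the sandboxed database to confirm the action actually landed instead of trusting the rendered page. This is a simplified stand-in using SQLite and an invented `orders` table; the real tool's queries and schema would depend on the user's stack:

```python
import os
import sqlite3
import tempfile

def verify_backend_change(db_path: str, query: str, expected: int) -> bool:
    """Run a COUNT query against the sandboxed DB to confirm that a
    UI action produced the expected backend change."""
    conn = sqlite3.connect(db_path)
    try:
        (count,) = conn.execute(query).fetchone()
        return count >= expected
    finally:
        conn.close()

# Simulate the sandbox: seed a throwaway DB, mimic the side effect of
# the agent's click, then let the agent verify it happened.
fd, db_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders (status) VALUES ('placed')")  # the click's effect
conn.commit()
conn.close()

action_landed = verify_backend_change(
    db_path, "SELECT COUNT(*) FROM orders WHERE status = 'placed'", 1
)
os.remove(db_path)
```

Closing the feedback loop this way turns "the page looked right" into a checkable assertion against ground truth.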
Challenges we ran into
The "Blind" Agent Problem: Initially, the AI didn't know if a button click actually worked. We had to build a "SQL-Awareness" bridge so the agent could verify its actions against the database.
Context Window Management: Feeding a whole repo into an LLM is noisy. We optimized this using code2prompt to flatten the codebase and focused the AI’s attention only on "structural" files like Prisma schemas and route definitions.
VNC Latency: Ensuring a smooth UI experience for the Sales Engineer while an AI agent was simultaneously clicking elements required fine-tuning the X11 framebuffer and WebSocket proxying.
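The context-window fix above amounts to a file filter that keeps only "structural" files before handing the repo to the LLM. A rough sketch, assuming a hypothetical marker allow-list (the marker strings and function name are illustrative; code2prompt's own include/exclude flags would do the equivalent):

```python
import tempfile
from pathlib import Path

# Hypothetical allow-list: schemas, route definitions, and migrations
# describe structure; implementation bodies are skipped to save tokens.
STRUCTURAL_MARKERS = ("schema.prisma", "routes", "migrations")

def select_structural_files(repo_root: str) -> list[str]:
    """Walk the repo and keep only paths matching a structural marker."""
    root = Path(repo_root)
    keep = []
    for p in root.rglob("*"):
        if p.is_file():
            rel = p.relative_to(root).as_posix()
            if any(marker in rel for marker in STRUCTURAL_MARKERS):
                keep.append(rel)
    return sorted(keep)

# Tiny fake repo to show the filter in action.
repo = tempfile.mkdtemp()
for rel in ("prisma/schema.prisma", "src/routes/users.ts", "src/utils/helpers.ts"):
    f = Path(repo) / rel
    f.parent.mkdir(parents=True, exist_ok=True)
    f.write_text("// stub")

kept = select_structural_files(repo)
```

Filtering at the path level is cheap and keeps the LLM's attention on the files that define the data model and API surface rather than on business-logic noise.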