As Peter Thiel once described, the Forward-Deployed Engineer model is about doing the hard, unscalable work directly with customers. Experiencing that firsthand while building our own companies made us realize something important: even though everyone talks about “talking to customers,” most teams still miss dozens of tiny insights hidden in those early conversations—the small comments, feature hints, or pain points that never make it to your notes but could completely change the direction of your MVP.
That’s what inspired us to build Lamp: an AI FDE that listens in on your customer calls, extracts what truly matters, and turns those insights into actionable product specs. It doesn’t stop there: it actually builds the first version of your product through a conversational workflow. You can talk to it like a teammate, review what it built, and instantly tweak or expand it.
We learned that most of the value in product discovery lies not in what users say they want, but in what they unintentionally reveal. Capturing that in real time and translating it into something you can immediately build became our north star.
Technically, Lamp leverages Fetch.ai to power its deployment agent and follows Anthropic’s “Building Effective Agents” framework to coordinate multiple specialized AI agents under a unified orchestration layer. The system integrates Whisper for high-accuracy audio transcription with timestamped speaker labeling, while an internal orchestrator structures these transcripts into well-defined specifications that outline tasks, features, and dependencies. This structured context is passed through a network of specialized agents:

- a code generation agent that scaffolds full applications, from backend routes to frontend UI components,
- a testing agent that automatically generates and executes tests to validate functionality and reliability,
- a security agent that performs static analysis and vulnerability checks before deployment, and
- a deployment agent that executes safe, automated deployments through controlled environments.
Together, these components form an autonomous development pipeline that transforms natural language product discussions into production-ready software.
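To make the orchestrator's spec-extraction step concrete, here is a minimal Python sketch. The `TranscriptSegment` and `SpecItem` shapes, and the keyword heuristic, are our own illustrative assumptions; in Lamp this step is driven by an LLM rather than keyword matching.

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptSegment:
    start: float   # seconds into the call (from Whisper timestamps)
    end: float
    speaker: str   # e.g. "customer" or "founder", from speaker labeling
    text: str

@dataclass
class SpecItem:
    feature: str          # candidate feature or pain point
    source_quote: str     # the exact customer utterance
    timestamp: float      # where in the call it was said
    depends_on: list = field(default_factory=list)

# Hypothetical cue phrases standing in for the real LLM extraction step.
FEATURE_CUES = ("i wish", "it would be great", "we need", "can it")

def extract_spec(segments):
    """Turn customer-side transcript segments into draft spec items."""
    items = []
    for seg in segments:
        if seg.speaker != "customer":
            continue  # only mine the customer's side of the call
        lowered = seg.text.lower()
        if any(cue in lowered for cue in FEATURE_CUES):
            items.append(SpecItem(
                feature=seg.text.strip(),
                source_quote=seg.text,
                timestamp=seg.start,
            ))
    return items
```

Keeping the source quote and timestamp on every spec item is what lets a founder trace each generated feature back to the exact moment in the call that motivated it.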
Our biggest challenge was keeping the entire pipeline reliable in real time. Getting accurate timestamps and speaker segmentation from long calls, mapping unstructured conversation data into structured build instructions, and ensuring the generated code didn’t break between iterations took multiple design loops.
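One way we kept iterations from silently breaking each other was to gate every stage behind the next one. The sketch below shows the shape of that pipeline with toy stand-ins for each agent; the stage functions and artifact dict are illustrative assumptions, not Lamp's actual implementation.

```python
class StageError(Exception):
    """Raised when a pipeline stage rejects the artifact."""

def code_gen(spec):
    # Stand-in for the code generation agent: scaffold from a spec string.
    return {"code": f"# scaffold for: {spec}", "tests_passed": None}

def run_tests(artifact):
    # Stand-in for the testing agent: validate before anything ships.
    artifact["tests_passed"] = "scaffold" in artifact["code"]
    if not artifact["tests_passed"]:
        raise StageError("tests failed")
    return artifact

def security_scan(artifact):
    # Stand-in for the security agent: a trivial static check.
    if "eval(" in artifact["code"]:
        raise StageError("static analysis flagged an unsafe call")
    return artifact

def deploy(artifact):
    # Stand-in for the deployment agent: only reached if all gates passed.
    artifact["deployed"] = True
    return artifact

PIPELINE = [code_gen, run_tests, security_scan, deploy]

def run_pipeline(spec):
    """Run each agent in order; any StageError halts the deployment."""
    result = spec
    for stage in PIPELINE:
        result = stage(result)
    return result
```

Because each stage raises rather than passing a broken artifact along, a failed test or security check short-circuits the run and nothing reaches deployment.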
But the end result is powerful: a system that lets a founder, PM, or FDE literally talk their MVP into existence. It captures every small user insight, translates it into code, and evolves with every conversation.
Built With
- anthropic
- fetchai
- genai
- nextjs
- python
- react
- ruby-on-rails
- supabase
- vercel