🤖 Kapstra - From Kiro to Platform

💡 Inspiration

The spark came from a simple frustration: why is building AI agents so unnecessarily hard?

We started by building Kiro, an AI assistant that could help developers with code reviews, bug fixes, and project management. But as we dove deeper, we realized we were spending 80% of our time on infrastructure - authentication, memory management, API integrations, deployment configs - and only 20% on the actual AI logic that made Kiro intelligent.

That's when it hit us: every AI agent project faces the same foundational challenges. What if we could solve this once and for all?

🛠 What it does

Kapstra is the missing infrastructure layer for AI agent development.

While building Kiro, we realized we had accidentally created something bigger - a complete platform that eliminates the repetitive work of AI agent development:

  • Agent Core: Containerized AI logic with persistent memory (what powers Kiro's intelligence)
  • Mastra SDK: The client library we wished existed when building Kiro
  • Authentication & Sessions: Battle-tested from handling Kiro's user management
  • GitHub Integration: Refined through Kiro's code review capabilities
  • Chat Threading: Perfected for Kiro's conversation management

🔨 How we built it

The Kiro Origin Story:

  1. Started with Kiro: Built an AI coding assistant to solve our own development pain points
  2. Hit Infrastructure Wall: Spent weeks on auth, memory, APIs instead of AI features
  3. Abstracted the Pain: Extracted reusable components into what became Kapstra
  4. Microservices Evolution: Separated concerns - Express for integrations, dedicated agent containers
  5. SDK-First Approach: Built the Mastra client we needed for Kiro's frontend

Tech Stack Journey:

  • Backend: Express.js → Containerized microservices
  • Agent Logic: Node.js with persistent memory system
  • Frontend: React with real-time streaming
  • Infrastructure: Docker + Caddy (learned from Nginx complexity)
  • Database: Started local, migrating to PostgreSQL

🚧 Challenges we ran into

The Authentication Nightmare: Getting Clerk integration working with our containerized architecture took days of debugging CORS and session management.

Container Coordination: Making Express and Agent containers communicate seamlessly while maintaining clean separation of concerns was trickier than expected.
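The coordination pattern above can be sketched with a minimal Compose file; the service names, ports, and paths here are illustrative, not our actual config. The key idea is that Compose service names double as DNS hostnames on a shared network, so the Express gateway can reach the agent container without hardcoded IPs:

```yaml
# Hypothetical docker-compose sketch - service and volume names are made up.
services:
  express:
    build: ./express
    ports:
      - "3000:3000"
    environment:
      AGENT_URL: http://agent:4111   # Express reaches the agent by service name
  agent:
    build: ./agent
    volumes:
      - agent-memory:/data           # persist agent state across restarts
volumes:
  agent-memory:
```

Keeping the agent off the host network also enforces the separation of concerns: only the Express container is publicly reachable.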

Memory Persistence: Building agent memory that survives container restarts without losing conversation context required multiple iterations.
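A minimal sketch of the restart-survival idea, assuming a JSON file on a mounted volume (the class and field names here are illustrative, not Kapstra's actual API): conversation turns are appended to disk with an atomic rename, so a container can be killed and restarted without losing thread context.

```typescript
// Hypothetical sketch: file-backed conversation memory that survives
// container restarts when `path` points at a mounted volume.
import * as fs from "node:fs";

interface Turn {
  role: "user" | "assistant";
  content: string;
}

class ConversationStore {
  constructor(private path: string) {}

  // Read all stored turns for one chat thread (empty if nothing persisted yet).
  load(threadId: string): Turn[] {
    if (!fs.existsSync(this.path)) return [];
    const all = JSON.parse(fs.readFileSync(this.path, "utf8"));
    return all[threadId] ?? [];
  }

  // Append one turn and flush to disk.
  append(threadId: string, turn: Turn): void {
    const all = fs.existsSync(this.path)
      ? JSON.parse(fs.readFileSync(this.path, "utf8"))
      : {};
    all[threadId] = [...(all[threadId] ?? []), turn];
    // Write to a temp file, then rename: a crash mid-write can't corrupt the store.
    const tmp = this.path + ".tmp";
    fs.writeFileSync(tmp, JSON.stringify(all));
    fs.renameSync(tmp, this.path);
  }
}
```

A new process (or restarted container) reconstructs the thread simply by calling load() with the same thread ID.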

SDK Design: Creating a client library that felt natural for both console testing and React integration demanded careful API design.

Caddy Configuration: Switching from Nginx to Caddy seemed simpler, but still required a deep dive into the documentation for our specific use case.
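For flavor, a sketch of the kind of Caddyfile that routing ends up looking like - the domain, paths, and upstream names are hypothetical, not our production config. Caddy's reverse_proxy directive plus automatic HTTPS is what makes it simpler than the equivalent Nginx setup:

```caddyfile
# Hypothetical Caddyfile sketch - hostnames and ports are illustrative.
example.com {
    handle /api/* {
        reverse_proxy express:3000   # Express integration container
    }
    handle /agent/* {
        reverse_proxy agent:4111     # dedicated agent container
    }
}
```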

🏆 Accomplishments that we're proud of

Kiro Actually Works: Our original AI assistant successfully helps with code reviews and project management - proving our infrastructure approach works.

Container Architecture Success: Agent containers run flawlessly with proper isolation and resource management.

Mastra SDK Elegance: Achieved our goal of making agent integration feel as simple as a single mastraClient.streamVNext() call.

Real-time Streaming: Built smooth frontend-to-agent communication that feels responsive and natural.
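The shape of that streaming loop can be sketched in a few lines; the names below are illustrative stand-ins (in production the chunks arrive over SSE or WebSockets rather than from an in-memory generator). The frontend treats agent output as an async iterable of text chunks and appends each one to the UI as it lands:

```typescript
// Illustrative stand-in for a network stream of agent output tokens.
async function* fakeAgentStream(): AsyncGenerator<string> {
  for (const chunk of ["Review", " looks", " good."]) {
    yield chunk; // each chunk would arrive over SSE/WebSocket in production
  }
}

// Consume the stream chunk-by-chunk, as the chat UI would.
async function renderStream(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk; // append incrementally so the reply "types itself"
  }
  return text;
}
```

Because the consumer only depends on AsyncIterable&lt;string&gt;, the same rendering code works against the fake stream in tests and the real agent stream in production.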

Memory System: Created persistent conversation storage that maintains context across sessions.

Developer Experience: Reduced setup from days to minutes - just mastra dev plus npm run start.

🎓 What we learned

Infrastructure is 80% of the Work: Every AI project rebuilds the same foundation - that's the real opportunity.

Containerization is Non-Negotiable: Proper service isolation makes debugging and scaling infinitely easier.

SDK-First Thinking: Building the client library early forced us to create better APIs.

Memory is Critical: AI agents without persistent memory feel broken - users expect continuity.

Documentation Drives Adoption: Even the best architecture fails without clear setup instructions.

Simplicity Wins: Choosing Caddy over Nginx, and a local DB over a complex setup initially - start simple, scale later.

🚀 What's next for Kapstra

Short-term (Next 30 days):

  • Complete Express container debugging
  • Finish Caddy webserver integration
  • Launch React frontend for Kiro
  • PostgreSQL migration for production readiness

Medium-term (3 months):

  • Open source the core platform
  • Add more AI model integrations (GPT-4, Claude, local models)
  • Build visual agent workflow designer
  • Create marketplace for pre-built agent templates

Long-term Vision:

  • The GitHub of AI Agents: Where developers discover, fork, and deploy intelligent agents
  • Enterprise SaaS: Managed hosting with team collaboration and analytics
  • Agent Ecosystem: Plugin architecture for extending agent capabilities

Our Bold Prediction: By 2025, no one will build AI agents from scratch anymore - they'll start with Kapstra and focus on the intelligence, not the infrastructure.


From solving our own problem with Kiro to building the platform we wish existed - that's the Kapstra story.
