Baud Again: Resurrecting the BBS with AI
This hackathon wasn't just about building a BBS. It was an experiment: how complex a project can AI take on with minimal human intervention?
Inspiration
It was 1994, and I was a secondary school student in Hong Kong. I'd just bought my first modem—a US Robotics Sportster 14,400 Fax Modem—and discovered something called a "BBS" in a PC magazine. What happened next changed how I saw the world.
That screeching handshake sound. The interference bleeding into my AdLib sound card. Waiting hours for a file download only to get disconnected at 99%. But none of that mattered. Through that slow, noisy connection, I was making friends, reading forums, discussing everything under the sun. My world expanded beyond anything I'd imagined. I was too young for the serious discussions, but old enough to feel the magic of connecting with strangers through text on a screen.
Three decades later, I find myself reflecting on what we lost.
On a BBS with 50 users, you actually knew people. Conversations went deep. You'd wait for replies, and that waiting made each message meaningful. Today, on Facebook or LinkedIn, we see endless newsfeeds but rarely dive deep. Social media became "one-to-many broadcasting" instead of genuine one-to-one communication. We traded intimacy for scale.
BBSes died so social media could live. Baud Again is my attempt to fix that.
But this project has a dual purpose. I'm driving AI transformation in my organization, and agentic coding is a major frontier we're exploring. This hackathon became an experiment: can AI handle a complex, full-stack project with minimal human steering? The BBS was the vehicle; the real test was the development process itself.
Why This Matters Now
The BBS revival isn't just nostalgia—it's about decentralization. Taking back control from social media giants. Letting users own their networks instead of being products on someone else's platform.
And with AI, this old technology becomes surprisingly powerful:
- AI-powered content curation — Summarize busy discussions, surface what matters
- AI game masters — Revolutionary potential for community-driven D&D and interactive fiction
- AI community building — Intelligently seed discussions, welcome newcomers, moderate thoughtfully
- Extensibility — Plugins for RSS ingestion, news curation, and more
The combination of intimate community + AI assistance could be something genuinely new.
What it does
Baud Again is a BBS system enhanced with AI capabilities:
- AI-Generated ANSI Art: Describe what you want, and Claude generates the art directly as text, complete with ANSI color codes and decorative borders. LLMs weren't trained for ASCII art, yet they produce surprisingly good results by reasoning about spatial relationships through text.
- The Oracle: An AI-powered door game where users consult a mystical fortune teller. Classic BBS door game vibes, modern AI brains.
- AI Message Summaries: Catch up on busy message bases with AI-generated summaries of discussions—a glimpse of AI's potential to enhance community communication.
- AI Conversation Starters: Help seed new discussions and keep the community engaged.
The terminal client delivers an authentic experience—ANSI graphics, the "dumb terminal" paradigm where the server controls everything, simulated modem sounds if you want them. It's not a simulation of a BBS; it's the real thing with modern plumbing.
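To give a flavor of what "ANSI color codes and decorative borders" means in practice, here is a minimal hand-written sketch (not Baud Again's actual generator, and the function names are illustrative) that wraps text in a colored box the way a BBS screen might:

```typescript
// Minimal illustration of ANSI escape sequences in the spirit of the
// AI-generated art described above. Hand-written sketch, not the
// project's actual generator.
const ESC = "\x1b[";
const cyan = (s: string): string => `${ESC}36m${s}${ESC}0m`;

// Wrap lines of text in a simple box-drawing border, padded to equal width.
function ansiBox(lines: string[]): string {
  const width = Math.max(...lines.map((l) => l.length));
  const top = cyan(`\u250c${"\u2500".repeat(width + 2)}\u2510`);
  const bottom = cyan(`\u2514${"\u2500".repeat(width + 2)}\u2518`);
  const body = lines.map(
    (l) => `${cyan("\u2502")} ${l.padEnd(width)} ${cyan("\u2502")}`
  );
  return [top, ...body, bottom].join("\r\n"); // terminals expect CRLF
}

console.log(ansiBox(["BAUD AGAIN", "Connecting at 14,400 bps..."]));
```

The same escape-sequence vocabulary is what the AI emits when it "draws" directly as text.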
How we built it
The Experiment: Minimal Human Intervention
I set out to test how much AI could own the development process:
- AI-driven ideation: I discussed with AI to shape the initial product concept, exploring classic BBS features and deciding what to preserve vs. enhance
- AI-scaffolded documentation: AI generated the initial specs, steering docs, and project structure
- Human as strategic advisor: I stayed at the high level—architecture decisions, feature priorities, scope choices—but took AI's suggestions whenever possible
- AI chooses the tech stack: I let AI recommend technologies best suited for the project, regardless of my familiarity with them
- Longer autonomous sessions: I encouraged AI to work through complex tasks without constant check-ins
- Closed feedback loops: Using MCP integrations to let AI see its own output, identify issues, and fix them
The result? Over 80% of development was done in Kiro with minimal hand-holding. While there were bumps (more on that below), I'm genuinely impressed by how much complexity AI can handle in a real software project.
How We Used Kiro
Spec-Driven Development
We created multiple specs for different project phases:
| Spec | Purpose |
|---|---|
| `baudagain.md` | Overall product specification—features, architecture, AI integration points |
| `ansi-rendering-refactor.md` | Focused spec for refactoring a critical rendering system |
| `user-journey-testing-and-fixes.md` | Comprehensive end-to-end testing scenarios |
| `deployment-and-open-source.md` | Containerization, deployment pipeline, open source preparation |
This multi-spec approach let us tackle different concerns with appropriate depth. The main product spec guided overall direction; focused specs drove specific implementation phases.
Agent Hooks for Automation
We implemented two agent hooks that monitor `tasks.md`:
- Architecture Compliance Check: When a major task is marked complete, automatically review the changes against our architectural principles
- Documentation Updates: After compliance check passes, trigger updates to keep docs in sync with implementation
This created a sustainable workflow: complete task → verify architecture → update docs → move on. No manual reminder needed.
MCP Integration
We used the Chrome DevTools MCP to close the feedback loop. AI could:
- See the actual rendered output in the browser
- Identify visual bugs and UX issues
- Fix problems without waiting for human screenshots
This was crucial for the ANSI rendering work, where visual accuracy matters enormously.
Steering Documents
We generated steering docs to guide Kiro's responses. Since our experiment prioritized AI autonomy over human micro-management, we were satisfied with the default generated steering and didn't heavily customize it—letting AI work within reasonable guardrails.
Tech Stack
AI recommended this stack, and we went with it:
Backend: Node.js 20 + Fastify + TypeScript, SQLite via better-sqlite3, WebSocket for terminal connections, Anthropic Claude API for AI features
Frontend: Terminal Client (Vanilla TypeScript + xterm.js + Vite), Control Panel (React 18 + Tailwind CSS + Vite)
DevOps: Docker (multi-stage builds) + Docker Compose, Nginx Proxy Manager, Vitest + fast-check for testing
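For the DevOps layer, a Compose setup for this kind of stack might look roughly like the sketch below; the service name, port, and paths are illustrative assumptions, not the project's actual configuration:

```yaml
# Hypothetical sketch of a Compose file for a stack like this one;
# service name, port, and volume paths are illustrative.
services:
  bbs:
    build: .               # multi-stage Dockerfile: compile TS, ship only dist/
    ports:
      - "8080:8080"        # WebSocket + REST served by one Fastify process
    volumes:
      - bbs-data:/app/data # persist the SQLite database across restarts
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
volumes:
  bbs-data:
```

The key design point is that SQLite keeps the whole data layer in one mounted volume, which is what makes one-command self-hosting realistic.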
Architecture: The Hybrid Approach
| Component | Protocol | Why |
|---|---|---|
| Terminal Client | WebSocket only | Authentic BBS feel—server controls the flow |
| Control Panel | REST API | Standard admin interface, stateless |
| Notifications | WebSocket | Real-time updates |
The terminal uses WebSocket exclusively because classic BBS systems had the server in complete control—the "dumb terminal" paradigm. This architectural choice preserves authenticity.
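The "dumb terminal" paradigm can be sketched as a pure server-side function: the client only sends keystrokes and prints whatever bytes come back, while the server holds all session state. The types and screen names below are illustrative, not Baud Again's real protocol:

```typescript
// Sketch of the "dumb terminal" idea: the server holds all session state
// and maps each keystroke to the bytes the terminal should print next.
// Screen names and messages are illustrative.
type Screen = "main" | "messages" | "oracle";

interface Session {
  screen: Screen;
}

// Given the current session and one keypress, return the new session
// state plus the output to send down the WebSocket.
function handleKey(session: Session, key: string): [Session, string] {
  if (session.screen === "main") {
    switch (key.toUpperCase()) {
      case "M":
        return [{ screen: "messages" }, "\r\n== Message Bases ==\r\n"];
      case "O":
        return [{ screen: "oracle" }, "\r\nThe Oracle awaits...\r\n"];
      default:
        return [session, "\x07"]; // unknown key: ring the terminal bell
    }
  }
  // Q returns to the main menu from any sub-screen.
  if (key.toUpperCase() === "Q") {
    return [
      { screen: "main" },
      "\r\n== Main Menu ==\r\n[M]essages  [O]racle\r\n",
    ];
  }
  return [session, ""];
}

let session: Session = { screen: "main" };
let out: string;
[session, out] = handleKey(session, "m"); // server decides what the client sees
console.log(out);
```

Because the server is the single source of truth, the client needs no menu logic at all, which is exactly how 1990s BBS terminals worked.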
Challenges we ran into
The Integration Trap
Early architectural decisions that seemed minor created hours of debugging later. We experimented with different approaches, and transitions between them weren't always clean. Leftover inconsistencies haunted us.
Lesson learned: Get a deployable skeleton with solid architecture first, then add features incrementally. Don't build a castle of features on an untested foundation.
AI's Visual Blind Spots
Even with Chrome DevTools MCP, AI struggled to reliably detect UX issues from what it could see. Many times it would report "everything looks good" when the interface was clearly broken. Human eyes remain essential for visual verification.
This is a real limitation of current agentic development—the feedback loop for visual/UX work isn't fully closed yet.
Ambition vs. Reality
Our original roadmap was too aggressive. We had grand plans—AI SysOp for natural language configuration, LORD-style multiplayer dungeon games, one-click desktop hosting. But time and LLM quota are finite. We scoped down to what we could actually ship.
The current version isn't the "finished product" I envisioned, but it demonstrates the core concept and AI integrations effectively.
Accomplishments that we're proud of
The Experiment Worked
This was the big one. AI handled a complex, multi-layered project—backend, frontend, WebSocket protocols, AI integrations, ANSI rendering, Docker deployment—with minimal human intervention. Not perfectly, but substantially. That's a meaningful data point for the future of software development.
AI ANSI Art Actually Works
We weren't sure if LLMs could generate decent ASCII art directly. They can! Claude produces charming results by understanding spatial relationships through text. The output fits the retro aesthetic perfectly.
Authentic BBS Experience
The terminal client genuinely feels like connecting to a 1990s BBS—server-controlled menus, ANSI graphics, the works. We preserved what made BBSes special while adding AI capabilities that enhance rather than overwhelm.
Structured Agentic Workflow
The combination of multiple specs + agent hooks + MCP integration created a sustainable development rhythm. This isn't just "vibe coding"—it's a repeatable process for complex projects.
A Working, Deployable Product
Despite the challenges, we shipped something real—containerized, tested, and ready to run. You can connect and experience a BBS enhanced with AI.
What we learned
Architecture decisions compound. A small choice early on can save (or cost) hours later. This is true whether you're coding alone or with AI. Maybe especially with AI, which can propagate early mistakes at scale.
Multi-spec development works. Breaking a project into focused specs for different phases (product → refactoring → testing → deployment) gives AI appropriate context without overwhelming it.
Agent hooks create sustainability. Automated architecture checks and doc updates mean you don't accumulate technical debt as fast. The system maintains itself.
MCP closes loops, but not all loops. Chrome DevTools MCP was valuable for catching functional issues, but visual/UX verification still needs human eyes. The feedback loop for design work isn't fully autonomous yet.
AI augments, doesn't replace. AI handled the bulk of implementation, but humans are still essential for strategic decisions, visual verification, and knowing when to scope down. The collaboration is what works.
Ship the skeleton first. Get deployment working with minimal features, then iterate. We learned this the hard way.
What's next for Baud Again
The roadmap includes:
- AI SysOp: Natural language configuration ("make my BBS pirate-themed") and automated moderation
- Realm of Echoes: A LORD-inspired multiplayer door game with AI Dungeon Master—async gameplay where players share a persistent world
- One-click desktop hosting: Tauri wrapper + Cloudflare Tunnel for trivially easy self-hosting
- Plugin system: RSS ingestion, news curation, and other AI-powered extensions
The dream is to make hosting a BBS as simple as running any desktop app—bringing back intimate, user-owned communities for anyone who wants one.
And the experiment continues. Each project teaches us more about where AI excels and where humans remain essential. Baud Again was a great teacher.
Building Baud Again wasn't just a hackathon project. It was a conversation with my younger self—that teenager in Hong Kong, amazed that a phone line and a modem could connect him to the world. And it was an experiment in how we'll build software tomorrow.
The technology has changed. The magic doesn't have to.
The 90s are calling. Time to pick up.
Built With
- docker
- fastify
- node.js
- react
- sqlite
- tailwind-css
- typescript
- vite
- websocket
- xterm.js