Inspiration

I built Baseline MCP because I kept hitting the same friction point while coding: I could never tell, quickly and confidently, whether a CSS/JS feature or a property-value pair was safe to ship. Browsers move fast, features enter "Baseline" at different times, and the mental context switch between MDN, Can I Use, and my editor slowed me down. I wanted an in-editor, AI-friendly service that could answer “vibe checks” — is this feature safe to use right now? — without leaving my workflow.

Baseline MCP is inspired by:

  • the Baseline concept (a small, practical compatibility threshold),
  • MDN/browser-compat-data and the web-features package as canonical sources,
  • the Model Context Protocol (MCP) idea: surface useful tools inside editors and AI assistants.

What it does

Baseline MCP is an MCP-compatible service that exposes curated web-features and Baseline compatibility data via:

  • REST endpoints (/api/features/:name, /api/search, /api/meta/*, /api/baseline/:year),
  • MCP tools over stdio/streamable HTTP (tools like getFeatureSupport, findFeatureId, compareSupport),
  • an SSE server mode for MCP.
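As a taste of the REST surface, a minimal client might look like the sketch below. The `FeatureData` fields shown are an assumption for illustration; the authoritative shapes live in src/types.ts.

```typescript
// Minimal REST client sketch. The endpoint path comes from the list above;
// the FeatureData fields are assumed, not the server's exact contract.
interface FeatureData {
  id: string;
  name: string;
  baseline?: "high" | "low" | false; // assumed Baseline status encoding
}

function featureUrl(base: string, id: string): string {
  return `${base}/api/features/${encodeURIComponent(id)}`;
}

async function getFeatureSupport(base: string, id: string): Promise<FeatureData> {
  const res = await fetch(featureUrl(base, id));
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${id}`);
  return (await res.json()) as FeatureData;
}

// Usage: await getFeatureSupport("https://baseline.rcht.dev", "css-grid")
```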

How we built it

High-level architecture:

  • src/data-loader.ts — central loader that:
    • dynamically imports web-features when available,
    • maps raw feature objects into our FeatureData shape,
    • provides a fallback sample dataset for local/dev use,
    • caches results with a TTL for efficiency.
  • src/mcp-server.ts — MCP server exposing tools via the Model Context Protocol (stdio transport support).
  • src/sse-server.ts — Express-based REST + SSE server for browser/HTTP clients.
  • src/types.ts — TypeScript interfaces defining data shapes.
  • Build & deployment:
    • TypeScript + tsc build,
    • Dockerfile (multi-stage) for container deployments (Coolify, Fly, Cloud Run, etc.),
    • README, examples, and a live deployment (https://baseline.rcht.dev/).
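The loader pattern above can be sketched in a few lines. Names like `SAMPLE_FEATURES`, `loadFeatures`, and `TTL_MS` are illustrative, not the real module's API; the variable import specifier stands in for the .d.ts shim the real project uses.

```typescript
// Sketch of the DataLoader pattern: dynamic import of web-features with a
// sample-data fallback and a TTL cache.
type FeatureMap = Record<string, { name: string }>;

const SAMPLE_FEATURES: FeatureMap = {
  "css-grid": { name: "Grid" }, // tiny stand-in for the bundled sample dataset
};

const TTL_MS = 60 * 60 * 1000; // refresh the cache after one hour
let cache: { data: FeatureMap; loadedAt: number } | null = null;

async function loadFeatures(): Promise<FeatureMap> {
  if (cache && Date.now() - cache.loadedAt < TTL_MS) return cache.data;
  let data: FeatureMap;
  try {
    // Optional dependency, resolved at runtime so dev setups work without it.
    // (A computed specifier keeps this sketch compiling without the package.)
    const specifier = "web-features";
    const mod: { features?: FeatureMap } = await import(specifier);
    data = mod.features ?? SAMPLE_FEATURES;
  } catch {
    data = SAMPLE_FEATURES; // graceful fallback for local/dev use
  }
  cache = { data, loadedAt: Date.now() };
  return data;
}
```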

Implementation highlights:

  • Dynamic import for web-features so devs can run without the package installed; DataLoader gracefully falls back to sample data.
  • findFeatures(query) performs fuzzy/substring matching across id/name/description/compat_features for quick search.
  • Exposed both a mapped FeatureData for compatibility with the rest of the app and the raw web-features object (for advanced tooling).
  • Added MCP tools and REST endpoints so editors and assistants can call the server using JSON-RPC or simple HTTP.
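The search highlight above boils down to a case-insensitive substring filter; the version below is illustrative (ranking is omitted, and field names mirror the FeatureData shape described earlier).

```typescript
// Illustrative findFeatures(query): substring matching across
// id, name, description, and compat_features.
interface Feature {
  id: string;
  name: string;
  description?: string;
  compat_features?: string[];
}

function findFeatures(features: Feature[], query: string): Feature[] {
  const q = query.toLowerCase();
  return features.filter((f) =>
    [f.id, f.name, f.description ?? "", ...(f.compat_features ?? [])].some(
      (field) => field.toLowerCase().includes(q),
    ),
  );
}
```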

Files of interest:

  • src/data-loader.ts — loading & mapping logic
  • src/mcp-server.ts — MCP tool registration and handlers
  • src/sse-server.ts — REST & SSE endpoints
  • Dockerfile — multi-stage build for production

Challenges we ran into

  • Packaging & bundle size:
    • web-features is large. Importing it directly in serverless or edge environments (Workers, Vercel) can explode bundle size.
    • Solution: dynamic import and an option to prebuild/trim the dataset to a JSON snapshot that serverless functions can load cheaply.
  • SSE in serverless/edge environments:
    • Long-lived connections don’t work well on Vercel or Cloudflare Workers (timeouts, no persistent state). We ended up:
      • keeping the Express SSE server for dedicated container hosts (Coolify),
      • providing a Worker/REST prototype for cases where only REST is needed.
  • TypeScript + dynamic imports:
    • Type checking against packages that might not be present needed a small declaration (src/web-features.d.ts) and careful runtime checks.
  • Docker build gotchas:
    • .dockerignore accidentally excluded src/**/*.ts in the first iteration, so tsc found no inputs.
    • We iterated on the build context until it included exactly the right files.
  • Mapping differences across data sources:
    • web-features objects have a richer shape (compat_features, description_html, status), while our FeatureData shape is intentionally compact. Mapping while preserving raw access required careful design to avoid losing fields.
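One way to implement the snapshot fix mentioned above is a build-time script that projects the rich objects down to a compact features.json. The field selection here is an assumption based on the shapes described in this section, not the project's actual trimming logic.

```typescript
// Build-time trimming sketch: keep only what the serverless endpoints serve,
// dropping heavyweight fields like description_html and compat_features.
import { writeFileSync } from "node:fs";

interface RawFeature {
  name: string;
  description_html?: string;   // dropped in the snapshot
  compat_features?: string[];  // dropped in the snapshot
  status?: { baseline?: "high" | "low" | false };
}

interface TrimmedFeature {
  id: string;
  name: string;
  baseline: "high" | "low" | false;
}

function trimFeatures(raw: Record<string, RawFeature>): TrimmedFeature[] {
  return Object.entries(raw).map(([id, f]) => ({
    id,
    name: f.name,
    baseline: f.status?.baseline ?? false,
  }));
}

// Build step (run where web-features is installed):
//   const { features } = await import("web-features");
//   writeFileSync("static/features.json", JSON.stringify(trimFeatures(features)));
```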

Accomplishments that we're proud of

  • Full end-to-end pipeline:
    • Data ingestion (web-features) → normalized API → MCP tools → editor/assistant integration patterns.
  • Built an editor/assistant-first API with real tooling in mind (search, per-feature support, raw metadata).
  • Implemented both MCP tool handlers and a developer-friendly REST API with a search endpoint and meta endpoints (/api/meta/*).
  • Made the project deployable:
    • Multi-stage Dockerfile for container hosts (Coolify).
    • Clear instructions and examples to integrate into VS Code, Cursor, and CLI.
    • A live instance: https://baseline.rcht.dev/
  • Designed the system with graceful fallbacks (sample data, well-typed interfaces, simple caching).

What we learned

Technical lessons:

  • Edge vs. container trade-offs: long-lived SSE needs a persistent process; serverless shines for on-demand REST.
  • Packaging matters: huge dependency graphs require trimming or prebuilding datasets for serverless deployments.
  • Runtime vs. compile-time safety: dynamic imports are powerful but require extra type shims and runtime guards.
  • Tooling ergonomics: exposing both mapped, stable shapes and raw objects is important so simpler clients can use compact structures while advanced tooling can use the full web-features data.

What's next for Baseline MCP Server

Planned improvements and roadmap:

  • Per-BCD-key checks using compute-baseline so linters and CSS parsers can ask about property-value pairs (not just feature ids).
  • Durable real-time updates:
    • Add a publish/subscribe layer (Redis/managed pubsub, or Durable Objects for Workers) so multiple hosts/clients can subscribe to updates robustly.
  • Bundle optimization:
    • Add a build step that writes a trimmed features.json snapshot for serverless deployment and a separate full dataset for container deployment.
  • Authentication & rate limits:
    • Add API keys / token-based auth and basic quotas to protect the public endpoint in production.
  • Editor/assistant integrations:
    • Create a tiny VS Code extension that wires Baseline MCP into hovercards and completions.
    • Provide a sample MCP config for popular AI assistants and a ready-to-use mcp.json snippet for teams.
  • Tests & metrics:
    • Add unit tests for DataLoader and integration tests for endpoints.
    • Add telemetry for query patterns (what devs search for most) to drive product decisions.
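For the mcp.json item on that list, clients that use the common mcpServers schema could point at a local stdio build roughly like this; the exact schema varies by assistant, so treat it as a starting point rather than a final snippet:

```json
{
  "mcpServers": {
    "baseline": {
      "command": "node",
      "args": ["dist/mcp-server.js"]
    }
  }
}
```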

The first concrete artifacts on that roadmap:

  • A trimmed static/features.json generator script for serverless deployment.
  • A minimal VS Code extension prototype that shows a hover card using the MCP getFeatureSupport tool.
  • Authentication support (JWT/api key) and a README section on protecting your endpoint.
