Inspiration

Canada Pooch is a premium Canadian dog apparel brand — great products, loyal customers — but when we asked AI assistants and search engines "what are the best dog boots for winter?", the brand barely showed up. Meanwhile competitors like Ruffwear dominated every answer.

That gap between brand quality and AI visibility is the new SEO problem. We wanted to build a tool that measures it objectively and tells brands exactly what to do about it.

What it does

Canada Pooch · GEO Audit measures how often Canada Pooch appears when real users search for dog boots, coats, and clothing — then uses AI to explain why competitors rank higher and generate a concrete action plan.

  • Runs 10 real benchmark search queries (the kind pet parents actually type) through Tavily web search
  • Detects brand presence across every result and computes a Share of Model (SoM) score — the % of queries where Canada Pooch appears
  • Identifies which competitors dominate the results and what authority signals they have (Wikipedia, press coverage, "best-of" list inclusion)
  • Uses Gemini 2.5 Pro to generate a competitive gap analysis, a Wikipedia-style brand summary, a comparison page outline, and a prioritized action plan
  • Displays the exact query sent, Tavily results received, and brand detection decision for every single query — fully transparent
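The detection and SoM steps above can be sketched roughly like this. Function names (`detectBrand`, `computeSoM`) and the keyword list are illustrative, not the exact production code:

```javascript
// Illustrative keyword list for brand detection — assumed, not our exact list.
const BRAND_KEYWORDS = ["canada pooch", "canadapooch"];

// A query counts as a "hit" if any Tavily result mentions the brand
// in its title, content snippet, or URL.
function detectBrand(results) {
  return results.some((r) =>
    BRAND_KEYWORDS.some((kw) =>
      `${r.title} ${r.content} ${r.url}`.toLowerCase().includes(kw)
    )
  );
}

// SoM = % of benchmark queries where the brand appears at least once.
function computeSoM(queryResults) {
  const hits = queryResults.filter(detectBrand).length;
  return Math.round((hits / queryResults.length) * 100);
}
```

So if the brand is detected in 3 of the 10 benchmark queries, SoM is 30%.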

How we built it

Two-layer architecture:

  1. Tavily web search — all 10 benchmark queries run in parallel. Results are scanned for brand mentions using keyword matching. This produces the SoM score: an objective, web-grounded metric.

  2. Gemini 2.5 Pro — receives the full Tavily dataset and runs deep agentic reasoning: competitor authority analysis, content gap identification, and content generation (Wikipedia draft, comparison page, action plan).
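The hand-off between the two layers can be sketched as a prompt-assembly step: the full Tavily dataset is flattened into evidence and sent to Gemini in one analysis pass. The prompt wording and field names below are assumptions, not our production prompt:

```javascript
// Illustrative sketch: package the Tavily dataset into a single
// analysis prompt for the Gemini 2.5 Pro pass.
function buildAnalysisPrompt(brand, queryResults) {
  const evidence = queryResults
    .map(
      (q, i) =>
        `Query ${i + 1}: "${q.query}"\n` +
        q.results.map((r) => `- ${r.title} (${r.url})`).join("\n")
    )
    .join("\n\n");

  return [
    `You are auditing the AI-search visibility of the brand "${brand}".`,
    "Using ONLY the web evidence below, produce:",
    "1. A competitor authority analysis.",
    "2. A content gap analysis.",
    "3. A prioritized action plan.",
    "",
    evidence,
  ].join("\n");
}
```

Grounding the prompt in the search evidence (rather than asking Gemini open-ended questions) is what keeps the analysis tied to the same data the SoM score was computed from.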

Stack: Node.js + Express backend, vanilla JS frontend, deployed to Vercel as serverless functions. Tavily handles search; AI Builder Space provides the Gemini 2.5 Pro endpoint.

Key technical decisions:

  • All Tavily queries run in parallel (Promise.all) — cutting audit time from ~85s sequential to ~8s
  • SoM is sourced from web search results only, not LLM opinions — making it reproducible and auditable
  • Deep analysis is a separate pass, keeping scoring honest and fast
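The parallelization decision is a straightforward `Promise.all` fan-out. In this sketch `searchTavily` is a stand-in for the real Tavily API call, simulated with a short delay so the timing difference is visible:

```javascript
const delay = (ms) => new Promise((res) => setTimeout(res, ms));

// Stand-in for the real Tavily search call (assumed interface).
async function searchTavily(query) {
  await delay(50); // simulates network latency
  return { query, results: [] };
}

// Sequential: total time ≈ queries.length × per-query latency.
async function runSequential(queries) {
  const out = [];
  for (const q of queries) out.push(await searchTavily(q));
  return out;
}

// Parallel: total time ≈ a single query's latency.
async function runParallel(queries) {
  return Promise.all(queries.map(searchTavily));
}
```

With 10 real queries at roughly 8s each, this is the difference between ~85s (over Vercel's limit) and ~8s.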

Challenges we ran into

  • Vercel function timeouts — the original sequential search layer hit the 60s limit. Parallelizing all queries was the fix.
  • Inflated SoM scores — the first version used OR logic across multiple sources, so if Tavily's web results mentioned the brand, the score jumped to 100% even when AI models didn't. We rebuilt scoring to be Tavily-only and strictly per-query.
  • Honest metrics — it was tempting to query multiple LLMs and inflate the "AI mentions" story, but the more defensible approach was one search tool (Tavily), one analysis tool (Gemini), and a clear methodology.
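The scoring fix can be illustrated by contrasting the two approaches. Both functions here are simplified sketches of the before/after logic, not our exact code:

```javascript
// Buggy first version: OR logic across sources — a hit in ANY source
// pushed the whole score to 100%.
function inflatedSoM(sources) {
  // sources: { web: bool[], llm: bool[] } — per-query hit flags
  const anyHit = sources.web.some(Boolean) || sources.llm.some(Boolean);
  return anyHit ? 100 : 0;
}

// Fixed version: web search only, counted strictly per query.
function strictSoM(webHits) {
  const hits = webHits.filter(Boolean).length;
  return Math.round((hits / webHits.length) * 100);
}
```

With the strict version, 5 hits out of 10 queries reads as 50% — a score that actually moves when visibility changes.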

Accomplishments that we're proud of

  • A genuinely useful metric — SoM from web search is reproducible, explainable, and doesn't depend on which LLM you ask
  • Full transparency: every query, every result, and every detection decision is visible in the UI — no black box
  • End-to-end in one audit: from raw search data → competitor map → gap analysis → ready-to-use content drafts → prioritized action plan
  • Fast: quick mode finishes in ~30 seconds on Vercel

What we learned

  • GEO (Generative Engine Optimization) is really about web presence that AI can find and cite — structured content, authority signals, third-party mentions — not gaming specific models
  • The simplest honest metric (web search presence %) is more actionable than a complex multi-model average
  • Tavily's include_answer field gives you the AI-synthesized answer alongside raw results — useful signal for where the brand appears in the narrative vs just in a URL
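That narrative-vs-URL distinction can be checked directly. This sketch assumes a response shaped like Tavily's output with `include_answer` enabled (`{ answer, results: [{ url, ... }] }`); the function name and labels are illustrative:

```javascript
// Classify the brand's presence: in the AI-synthesized answer (strong
// signal), only in a source URL (weak signal), or absent.
function brandSignal(response, brand) {
  const kw = brand.toLowerCase();
  const inAnswer = (response.answer || "").toLowerCase().includes(kw);
  const inUrls = response.results.some((r) =>
    r.url.toLowerCase().includes(kw.replace(/\s+/g, ""))
  );
  if (inAnswer) return "narrative"; // part of the answer's story
  if (inUrls) return "url-only";    // indexed, but not cited
  return "absent";
}
```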

What's next for Canada Pooch · GEO Audit

  • Scheduled monitoring — run the audit weekly, track SoM over time, alert when it drops
  • Generic brand support — the backend already supports any brand/segment; connect a UI for it
  • Deeper citation tracking — detect whether the brand appears in Tavily's AI-synthesized answer vs just in a source URL (different weight)
  • Competitor drill-down — click a competitor to see exactly which queries they dominate and what content drives it
  • Content publishing pipeline — take the Wikipedia draft and comparison outline directly into a CMS

Built With

  • claude
  • tavily
  • vercel