CityPulse NYC

Inspiration

The New York City Council passes hundreds of bills every year, each one shaping the lives of 8 million people. But here's the problem: the average NYC resident has zero practical way to understand how a bill actually affects them. Legislative text is dense, summaries are oversimplified, and traditional news coverage picks one angle and runs with it.

We realized that the missing piece isn't information. It's perspective. A rent stabilization bill looks completely different to a tenant facing eviction versus a small business owner watching their lease triple. Both views are valid. Neither gets heard in a traditional news cycle.

CityPulse NYC exists to fix that. We built a platform where AI agents — dynamically selected based on each bill's policy domain — debate legislation from every relevant stakeholder angle, grounded in real-time news and sourced data via Linkup. The result: civic literacy that's accessible, multi-dimensional, and actually engaging.


What It Does

CityPulse NYC is a real-time legislative intelligence platform that transforms NYC Council bills into structured, multi-stakeholder AI debates.

The Core Loop

  1. A bill enters the feed. Users see upcoming votes and recently passed laws, filterable by topic, conflict level, and keyword search.
  2. Agents are dynamically selected. Based on the bill's policy domain (housing, transit, fiscal, labor, etc.), CityPulse selects the most relevant subset of stakeholder agents from our roster. A congestion pricing bill pulls in the Transit Rider and Small Biz Owner. A building emissions law activates the Budget Hawk and Real Estate Dev. No two bills get the same debate panel.
  3. Live debate generation. Selected agents argue the bill across two rounds using Claude (via Anthropic API), with each agent operating under a deeply engineered persona prompt. Round 2 agents directly challenge and respond to Round 1 arguments, creating authentic cross-examination.
  4. Real-time sourcing. Every debate is enriched with live news from Linkup, pulling the latest coverage, journalist analysis, and public data on each bill so agents argue with real facts, not hallucinations.
  5. Confidence tracking. After every debate round, a Judge evaluates consensus score and argument novelty. When agents converge — or stop introducing new arguments — the debate closes automatically, with a full consensus evolution chart showing how each agent's position shifted round by round.
  6. Ask any agent. Users can pose direct questions to any agent and receive in-character responses grounded in the bill's live context.
  7. Bill Playground. Users can paste any legislation, bill excerpt, or policy proposal — real or hypothetical — and trigger a fresh AI debate from scratch. The system selects the right agents, generates the debate, and produces a consensus report, all in under 60 seconds.

Agent Roster

| Agent | Perspective | Activated When |
| --- | --- | --- |
| 🏠 Tenant Advocate | Housing rights, eviction protections | Housing, zoning, rent bills |
| 🏪 Small Biz Owner | Local business economics, compliance burden | Commercial rent, licensing, tax bills |
| 📊 Budget Hawk | Fiscal impact, taxpayer cost analysis | Any bill with budget implications |
| 🏗️ Real Estate Dev | Development economics, market effects | Housing, zoning, construction bills |
| 🚇 Transit Rider | Commuter experience, infrastructure access | Transit, congestion, infrastructure bills |
| 🏛️ Council Member | Political feasibility, coalition dynamics | All bills (provides insider context) |
| 🏥 Public Health Expert | Community wellness, healthcare access | Pollution, sanitation, safety bills |
| ⚖️ Civil Libertarian | Privacy, surveillance, individual rights | Policing, data usage, facial recognition |
| 📐 Urban Planner | Zoning logic, long-term city design | Land use, density, master plan changes |
| 🌿 Environmentalist | Climate impact, sustainability, green space | Energy, emissions, parkland bills |
| 🛠️ Labor Organizer | Worker protections, wage standards | Employment law, public contracts, prevailing wage |
| 🌍 Immigrant Advocate | Inclusion, language access, sanctuary policy | Social services, ID programs, deportation issues |

How We Built It

Architecture

Frontend: React 18 SPA on Vite with a custom dark editorial design system. Three font families (Inter, JetBrains Mono, Bebas Neue), CSS custom properties for theming, and a responsive layout optimized for reading long-form debate threads.

Backend: Express API proxy that keeps all keys server-side. The client never touches a third-party API directly — Anthropic and Linkup calls are routed through /api/claude/chat and /api/linkup/search.
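A minimal sketch of what one such proxy route might look like, assuming a standard Express setup. The `/api/claude/chat` path comes from the description above; `buildClaudeHeaders` and `claudeProxy` are illustrative names, and the headers are the ones the Anthropic Messages API expects:

```javascript
// Sketch of the /api/claude/chat proxy route: the client posts a request
// body, the server attaches the secret key and forwards it upstream.
const ANTHROPIC_URL = 'https://api.anthropic.com/v1/messages';

function buildClaudeHeaders(apiKey) {
  // Headers for the Anthropic Messages API; the key never leaves the server.
  return {
    'x-api-key': apiKey,
    'anthropic-version': '2023-06-01',
    'content-type': 'application/json',
  };
}

// Wire into the Express app with: app.post('/api/claude/chat', claudeProxy)
async function claudeProxy(req, res) {
  const upstream = await fetch(ANTHROPIC_URL, {
    method: 'POST',
    headers: buildClaudeHeaders(process.env.ANTHROPIC_API_KEY),
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
}
```

Because the key is read from `process.env` inside the handler, nothing secret is ever bundled into the client build.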

AI Engine: Claude (via Anthropic API) powers all debate generation, agent responses, and the "Ask an Agent" feature. Each agent operates under a richly engineered system prompt encoding their policy worldview, rhetorical style, emotional triggers, and argumentation patterns.

Live Data Layer: Linkup is integrated at two critical points:

  • Bill-level news search: When a user opens any bill, Linkup fetches the latest real-world coverage, journalist analysis, and public data. This context is injected into the debate so agents argue about what's actually happening, not what a training set once contained.
  • Sourced answers for Ask an Agent: When a user asks a direct question, the response is grounded through Linkup's sourcedAnswer pipeline, pulling from live web results and returning citations.
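The "Ask an Agent" grounding call could look roughly like this on the client side. The `/api/linkup/search` route is from the architecture above; the exact payload shape (`query`, `depth`, `outputType`, `agent`) is an illustrative assumption, not Linkup's verbatim schema:

```javascript
// Sketch: build the grounded-question payload for the app's Linkup proxy.
// 'sourcedAnswer' asks for an answer with citations attached.
function buildSourcedQuery(agentName, billTitle, question) {
  return {
    query: `${billTitle}: ${question}`,
    depth: 'standard',
    outputType: 'sourcedAnswer',
    // Tagging the agent lets the server fold the answer into that persona's prompt.
    agent: agentName,
  };
}

async function askAgent(agentName, billTitle, question) {
  const res = await fetch('/api/linkup/search', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(buildSourcedQuery(agentName, billTitle, question)),
  });
  return res.json(); // e.g. { answer, sources: [...] }
}
```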

Dynamic Agent Selection Pipeline

Not every agent is relevant to every bill. Our selection pipeline analyzes each bill's tags and policy domain, then activates the subset of agents with genuine stakes in the outcome. A school safety bill doesn't waste time on a Real Estate Dev take that adds nothing. Every voice in the debate earns its seat at the table.
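The selection logic can be sketched as a simple tag match, here with a hypothetical `AGENT_TAGS` roster (tag names are illustrative, not the production taxonomy):

```javascript
// Hypothetical tag-based agent selection: each agent lists the policy-domain
// tags it has a stake in; a bill activates every matching agent.
const AGENT_TAGS = {
  'Tenant Advocate': ['housing', 'zoning', 'rent'],
  'Small Biz Owner': ['commercial-rent', 'licensing', 'tax'],
  'Budget Hawk': ['budget', 'tax', 'fiscal'],
  'Real Estate Dev': ['housing', 'zoning', 'construction'],
  'Transit Rider': ['transit', 'congestion', 'infrastructure'],
  'Council Member': [], // empty tag list = always seated for insider context
};

function selectAgents(billTags, roster = AGENT_TAGS) {
  return Object.entries(roster)
    .filter(([, tags]) =>
      tags.length === 0 || tags.some((t) => billTags.includes(t)))
    .map(([name]) => name);
}
```

A housing bill tagged `['housing']` would seat the Tenant Advocate, Real Estate Dev, and Council Member while leaving the Transit Rider out of the room.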

Debate Generation Pipeline

For each bill, we generate agent responses in parallel (Round 1), then targeted cross-replies in Round 2, plus a Linkup news fetch. Round 2 uses a directed reply graph rather than having every agent respond to every other agent, keeping latency and cost bounded while preserving the feeling of real debate. Results are cached in memory and sessionStorage, so revisiting a bill is instant with zero additional API calls.
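The two-round shape might be sketched as follows, assuming a `callAgent(agent, prompt)` helper that hits the Claude proxy and a reply graph given as `[from, to]` edges:

```javascript
// Sketch of the debate pipeline. Round 1 runs all agents in parallel;
// Round 2 follows a directed reply graph, so API calls scale with the
// number of edges rather than agents squared.
async function runDebate(agents, bill, callAgent, replyGraph) {
  // Round 1: every selected agent argues the bill concurrently.
  const round1 = await Promise.all(
    agents.map(async (a) => ({ agent: a, text: await callAgent(a, bill) }))
  );

  // Round 2: each agent answers only its designated targets.
  const round2 = await Promise.all(
    (replyGraph || []).map(async ([from, to]) => {
      const target = round1.find((r) => r.agent === to);
      return { agent: from, inReplyTo: to, text: await callAgent(from, target.text) };
    })
  );

  return { round1, round2 };
}
```

The result object is what gets cached (in memory and sessionStorage), so a revisit never re-runs either round.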

Confidence Metric & Dynamic Stopping

One of our most technically interesting additions is the Debate Judge — a lightweight evaluator that runs after every round and decides whether the debate should continue. It tracks two signals:

  • Consensus score: how much agents are converging (derived from community upvote ratios and argument overlap)
  • Novelty score: whether the latest round introduced meaningfully new arguments or is just restating prior positions

When consensus exceeds 75% or novelty drops below 10%, the judge closes the debate with a reason — "Agents reached consensus," "No new arguments emerging," or "Round limit reached." The hard cap is 5 rounds, preventing runaway API spend on debates that have already resolved.
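The stopping rule itself is small enough to sketch directly from the thresholds above (function and field names are illustrative):

```javascript
// Judge stopping rule: close on >= 75% consensus, < 10% novelty,
// or the 5-round hard cap, whichever fires first.
function judgeDebate({ consensus, novelty, round, maxRounds = 5 }) {
  if (consensus >= 0.75) return { stop: true, reason: 'Agents reached consensus' };
  if (novelty < 0.10) return { stop: true, reason: 'No new arguments emerging' };
  if (round >= maxRounds) return { stop: true, reason: 'Round limit reached' };
  return { stop: false, reason: null };
}
```

Checking consensus before novelty means a debate that converges *and* goes quiet in the same round is reported as consensus, the more informative reason.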

This produces three visible outputs in the UI:

  • A Debate Status banner that updates live (e.g., "Round 2 of 5 · Consensus building: 61%") and snaps to a stopped state with the closing reason
  • Round Snapshots after each round showing the consensus bar, novelty score, and per-agent community approval
  • A Consensus Evolution Chart — an SVG sparkline showing each agent's approval trajectory and the overall convergence curve across all rounds, with a vertical stop marker where the judge intervened
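The sparkline reduces to mapping per-round scores onto an SVG `points` string; a minimal sketch (helper name and dimensions are illustrative):

```javascript
// Map per-round approval scores (0..1) onto an SVG polyline "points" string.
// Higher scores plot nearer the top, so y is inverted.
function sparklinePoints(scores, width = 200, height = 40) {
  const stepX = scores.length > 1 ? width / (scores.length - 1) : 0;
  return scores
    .map((s, i) => `${(i * stepX).toFixed(1)},${((1 - s) * height).toFixed(1)}`)
    .join(' ');
}
// In the React chart: <polyline points={sparklinePoints([0.4, 0.55, 0.61])} />
```

One call per agent plus one for the overall convergence curve yields the full chart; the judge's stop marker is just a vertical line at the final round's x-coordinate.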

Bill Playground

The Playground lets anyone — not just people following NYC legislation — bring their own content to CityPulse. Users paste any bill text, policy proposal, or even a news headline, and the system:

  1. Runs it through the dynamic agent selector to pick the right panel
  2. Generates a two-round debate using Claude with Linkup context
  3. Produces a full consensus report and evolution chart

This makes CityPulse useful for students analyzing any policy, journalists stress-testing a bill's implications, or community organizers explaining a proposal to neighbors in human terms. The same pipeline that powers the main feed works for any input.
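End to end, the Playground is a composition of the stages described above. A sketch, with the stage functions (`extractTags`, `selectAgents`, `runDebate`, `judge`) passed in as hypothetical dependencies:

```javascript
// Sketch of the Playground flow: same pipeline as the main feed,
// applied to arbitrary pasted text.
async function runPlayground(pastedText, deps) {
  const tags = deps.extractTags(pastedText);               // infer policy domain
  const agents = deps.selectAgents(tags);                  // 1. pick the panel
  const debate = await deps.runDebate(agents, pastedText); // 2. two-round debate
  const report = deps.judge(debate);                       // 3. consensus report
  return { agents, debate, report };
}
```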


Challenges We Faced

1. Making AI agents actually disagree. The hardest problem wasn't generating text — it was generating genuine conflict. Out of the box, LLMs tend toward consensus and hedging. We solved this with adversarial persona prompts that encode specific policy biases, rhetorical styles, and even emotional stakes ("you've seen three tenants evicted this month") that force agents into authentic disagreement.

2. Grounding debates in reality. AI debates are useless if agents argue about hypotheticals. Integrating Linkup's real-time search at the debate level was critical: agents reference actual news coverage, real budget numbers, and current political dynamics. This is what separates CityPulse from a ChatGPT wrapper.

3. Knowing when to stop. Early versions ran debates to a fixed round limit, which produced diminishing-return arguments that diluted the final output. Building the Judge Agent with a calibrated stopping criterion — and surfacing it transparently in the UI — made debates crisper and more trustworthy.

4. Dynamic agent selection without losing coherence. Picking different agents per bill means debate structure varies. The UI, consensus reports, confidence chart, and "Ask an Agent" panel all needed to work regardless of which 3, 4, 5, or 6 agents are debating. This required a fully dynamic rendering pipeline rather than hardcoded layouts.

5. Latency under parallel AI calls. Opening a bill triggers multiple API calls simultaneously. We built a DebateProgress component that shows real-time per-agent status (waiting → generating → done) so users see progress rather than a spinner. Combined with sessionStorage caching, revisiting any bill is effectively instant.
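The caching half of that can be sketched as a thin wrapper over sessionStorage, keyed by bill id (the key scheme and helper name are illustrative; injecting the storage object also keeps the helper testable outside the browser):

```javascript
// Sketch of the debate cache: completed debates are keyed by bill id so
// revisiting a bill skips every API call.
function debateCache(storage) {
  return {
    get(billId) {
      const raw = storage.getItem(`debate:${billId}`);
      return raw ? JSON.parse(raw) : null;
    },
    set(billId, debate) {
      storage.setItem(`debate:${billId}`, JSON.stringify(debate));
    },
  };
}
// In the browser: const cache = debateCache(window.sessionStorage);
```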


What We Learned

  • AI debate is a killer format for civic tech. Seeing a transit rider push back on a budget analyst's numbers is more engaging and educational than any summary paragraph. Multi-perspective AI turns passive reading into active understanding.
  • Grounding changes everything. Linkup integration transformed our debates from "interesting AI exercise" to "actually useful civic tool." Real citations and live news make users trust the output.
  • Dynamic stopping is a feature, not an optimization. Showing users why a debate ended — and charting how consensus evolved — adds a layer of transparency that makes the AI feel accountable rather than opaque.
  • Prompt engineering > model size. Claude with deeply crafted persona prompts outperformed every generic "debate this bill" approach we tried. The investment in per-agent system prompts was the single highest-ROI decision in the project.
  • Playgrounds unlock unexpected users. The most surprising demo moment wasn't the NYC bills — it was watching someone paste a local zoning dispute from their own town and immediately get a structured debate with winners and losers.

What's Next

  • Live bill ingestion from the NYC Council Legistar API to replace mock data with real-time legislative tracking
  • User profiles for personalized impact analysis — renters, commuters, business owners each see which arguments affect them most
  • Community voting on agent arguments to layer collective sentiment on top of AI perspectives
  • Playground sharing — generate a debate from any bill and share a link so others can read, react, and ask agents follow-up questions
  • Multi-city expansion: the dynamic agent framework is city-agnostic and ready to scale to Chicago, LA, and beyond

Built With

React 18 · Vite · Express · Claude (Anthropic API) · Linkup · JavaScript · CSS
