CivicBite
Governance & Accessibility Track
About
CivicBite turns local government — usually buried in 80-page PDFs and 2-hour council meetings — into 90-second civic missions. Users get an AI-generated plain-English summary of a real local issue (zoning changes, school board votes, transit funding), see arguments from both supporters and opponents side-by-side, then work through a tradeoff question that surfaces what they actually value (e.g., "Lower property taxes or more parks funding — which matters more?"). Each mission builds out their civic profile, which tracks not just opinions but the underlying values driving them. At the end, they can send an editable public comment that goes from draft to submitted in under a minute.
Think Duolingo's daily streak meets your city council's agenda — civic learning that's frictionless, neutral, and actually finishable on a lunch break.
Who we built this for and why they need it
Local elections regularly see turnout below 20%, and the gap is widest among 18–29 year olds — not because young people don't care, but because the on-ramp is brutal. The information exists, but it's locked behind dense agendas, jargon, and a time tax most people can't afford.
We built CivicBite for:
- The college student who heard about a controversial development in their neighborhood and wants to weigh in but can't find a 5-minute version of what's happening.
- The first-time voter who doesn't know what a "comprehensive plan amendment" actually is.
- The working parent who cares about school board decisions but isn't going to read 200 pages of board materials.
These aren't disengaged people — they're underserved ones. CivicBite meets them where they already are: on their phone, with five minutes between classes or commutes.
How we used Claude / AI in our project
We use Claude across four distinct stages of the pipeline, each with carefully designed prompts:
- Issue discovery — parsing meeting agendas, public notices, and local news to identify which items actually have stakes for residents, filtering out routine procedural votes that don't warrant attention.
- Plain-language summarization — collapsing dense policy text into a 3–5 sentence brief at roughly an 8th-grade reading level, with explicit instructions against editorializing or inserting recommendations.
- Steelmanning both sides — we prompt Claude to generate the strongest version of each viewpoint, not strawmen. Each side gets equal word count and is written as if by someone who genuinely holds the position, citing real interest groups and concerns.
- Personalized comment drafting — once a user works through a mission, Claude generates a first-person draft based on their tradeoff answers and civic profile, which they can edit before submitting.
We also designed the prompts to flag uncertainty rather than fabricate (if an agenda item is ambiguous, Claude says so) and to surface tradeoffs rather than collapse them into yes/no.
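As a rough illustration of how the plain-language summarization stage encodes those constraints, here is a minimal TypeScript sketch. The `AgendaItem` type and `buildSummaryPrompt` helper are hypothetical names for this example, not our exact production code, but the rules baked into the prompt (3–5 sentences, 8th-grade level, no editorializing, flag ambiguity, source-only facts) mirror the ones described above.

```typescript
// Illustrative sketch of the plain-language summarization prompt (stage 2).
// Names and prompt wording are simplified for this example.

interface AgendaItem {
  title: string;
  body: string;      // raw policy text extracted from the agenda
  sourceUrl: string; // link back to the original notice, for citations
}

function buildSummaryPrompt(item: AgendaItem): string {
  return [
    "Summarize the agenda item below in 3-5 sentences at roughly",
    "an 8th-grade reading level.",
    "Rules:",
    "- Do NOT editorialize or recommend a position.",
    "- If any part of the item is ambiguous, say so explicitly",
    "  rather than guessing.",
    "- Use only facts present in the source text, and cite the",
    "  source for every claim.",
    `Source: ${item.sourceUrl}`,
    "",
    `Title: ${item.title}`,
    `Text: ${item.body}`,
  ].join("\n");
}
```

The resulting string is sent as the user message in a normal Claude API call; keeping the prompt as a pure function makes the constraints easy to unit-test.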
What could go wrong and how we addressed it
We treated risk mitigation as a core design constraint, not an afterthought:
- Bias creep in summaries. AI summarization can subtly favor whichever side is more loudly represented in source material. We mitigate this with side-by-side perspective generation, enforced word-count parity, and a final "neutrality check" pass where Claude audits its own summary for loaded language and one-sided framing.
- Oversimplification of high-stakes issues. Civic decisions aren't binary. Instead of a support/oppose vote, every mission ends in a tradeoff question that forces engagement with real tensions — cost vs. coverage, density vs. character, speed vs. process — preserving the actual texture of the decision.
- AI-generated comments flooding officials at scale. Comments are always editable and never auto-sent. We tag each draft as AI-assisted in the metadata so officials can weight feedback honestly, and we cap the number of comments a single user can submit per cycle. Authenticity is a feature, not a bug to engineer around.
- Hallucination on local facts. Claude is constrained to summarize from source documents we provide rather than generate from memory, with citations back to the original agenda or notice for every claim.
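The word-count parity check mentioned above can be sketched as a small pure function. This is an assumed implementation for illustration (the function name and 10% tolerance are our choices for this sketch): if the two steelmanned perspectives differ too much in length, the pair is sent back for regeneration before the mission ships.

```typescript
// Hypothetical sketch of the enforced word-count parity check between
// the two steelmanned perspectives in a mission.

function wordCount(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

function perspectivesAreBalanced(
  supportText: string,
  opposeText: string,
  tolerance = 0.1, // allow up to a 10% difference in length
): boolean {
  const a = wordCount(supportText);
  const b = wordCount(opposeText);
  if (a === 0 || b === 0) return false; // an empty side always fails
  return Math.abs(a - b) / Math.max(a, b) <= tolerance;
}
```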
What we'd build next if we had more time
- Live data integration. Direct ingestion of city council agendas, public notice feeds, and local news sources (Patch, neighborhood Substacks, public radio transcripts) so missions reflect what's actually being decided this week — not last month.
- Smarter civic profiles. Move beyond opinion tracking to value mapping — surfacing the tradeoffs a user consistently makes — with friend comparison features that show where you actually agree vs. where you only think you disagree. Most political division online is over labels, not values.
- Officials' dashboard. Anonymized, aggregated feedback views for council members and staff that show not just sentiment but the values and tradeoffs constituents are weighing. Right now, officials mostly hear from the loudest 1% at public hearings; we want to give them a representative read.
- Jurisdiction-specific mission packs. Partner with civic organizations, student governments, and city clerks to deploy CivicBite for specific cities and campuses, with hyperlocal issue feeds. UMD's College Park council would be a natural pilot.
Built With
- ai-api / llm-api
- claude
- css
- html
- node.js
- react
- typescript
- vite