# EcoJustice MD

AI classifies. Community corroborates. Maryland gets heard.


Inspiration

Environmental violations in Maryland don't affect everyone equally. Illegal dumping, industrial discharge, and air pollution concentrate in lower-income communities like Curtis Bay in Baltimore and the Route 1 corridor in Prince George's County. Residents notice something is wrong but have no idea what law is being broken, who to call, or whether anyone else has noticed the same thing. The existing reporting system assumes knowledge most people don't have: which agency handles which violation, what permit status a facility has, what language makes a complaint credible. That gap means violations go unreported and the communities closest to the harm have the least power to act.


What It Does

EcoJustice MD lets a resident describe what they observed and get back plain-language answers: what this might be, what Maryland law could apply, and whether anyone else in their ZIP code has seen the same thing. Claude classifies the issue type, estimates permit likelihood, and determines whether the incident is acute (routed immediately to MDE emergency contacts) or chronic (enters a community corroboration flow). Neighbors can add structured testimonies to the same report. When enough independent observations accumulate, Claude synthesizes them into a collective complaint citing the Maryland Environmental Justice Act 2021 and routes it to the right agency.
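The acute-versus-chronic split described above can be sketched as a small routing function. This is illustrative only: the field and type names (`issueType`, `"active_spill"`) are assumptions, and in the real app the classification comes from Claude, not a hardcoded set.

```javascript
// Sketch of the acute-vs-chronic triage decision. Field names and
// issue-type strings are illustrative, not the production schema.
const ACUTE_TYPES = new Set(["active_spill", "immediate_health_risk"]);

function routeReport(classification) {
  // Acute incidents bypass the community layer entirely and get
  // emergency contact info right away.
  if (ACUTE_TYPES.has(classification.issueType)) {
    return { route: "acute", contact: "MDE emergency line" };
  }
  // Everything else enters the corroboration flow and waits for neighbors.
  return { route: "chronic", contact: null };
}

console.log(routeReport({ issueType: "active_spill" }).route);    // "acute"
console.log(routeReport({ issueType: "illegal_dumping" }).route); // "chronic"
```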

The AI educates. The community decides. The people closest to the harm get heard.


How We Built It

Node and Express backend with a single server.js, vanilla JS frontend with no build step, and Supabase for persistence. AI calls go through a provider wrapper (core/ai.js) that supports both Anthropic Claude and Google Gemini, toggled with a single environment variable. The community layer uses session-based deduplication and IP rate limiting to keep testimony counts credible. A threshold engine watches testimony counts and triggers collective draft generation at 15 corroborated observations. Claude drafts the complaint from anonymized testimony summaries, always framing it as a request for investigation rather than a conclusion of wrongdoing. Every screen shows a liability disclaimer baked into the UI, not hidden in a footer.
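The provider toggle in core/ai.js can be sketched like this. The real wrapper calls the Anthropic and Gemini SDKs; `callAnthropic` and `callGemini` here are stand-ins, and `AI_PROVIDER` is an assumed name for the environment variable.

```javascript
// Minimal sketch of a provider wrapper toggled by one env var.
// The injected call functions are placeholders for the real SDK calls.
function makeClassifier({ provider, callAnthropic, callGemini }) {
  // Default to Claude; switch to Gemini only when explicitly requested.
  const impl = provider === "gemini" ? callGemini : callAnthropic;
  return (prompt) => impl(prompt);
}

const classify = makeClassifier({
  provider: process.env.AI_PROVIDER || "anthropic",
  callAnthropic: (p) => ({ provider: "anthropic", prompt: p }),
  callGemini: (p) => ({ provider: "gemini", prompt: p }),
});
```

The rest of the app only ever sees `classify`, so swapping providers never touches route handlers.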

Built with: Node.js, Express, Vanilla JavaScript, Supabase, Anthropic Claude API, Google Gemini API, EPA ECHO API
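The threshold engine's core check is simple; the 15-observation threshold is from the description above, while the function and flag names are ours.

```javascript
// Sketch of the threshold check that triggers collective-draft generation.
// 15 corroborated observations is the threshold described in the text.
const CORROBORATION_THRESHOLD = 15;

function shouldGenerateDraft(testimonyCount, draftAlreadyGenerated) {
  // Generate at most one draft per report, and only once enough
  // independent observations have accumulated.
  return !draftAlreadyGenerated && testimonyCount >= CORROBORATION_THRESHOLD;
}
```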


Challenges We Ran Into

The hardest design problem was not the code — it was calibrating what the AI should and should not claim. Environmental classification from a text description is genuinely uncertain, and overstating certainty creates real harm: a misclassified observation could generate a complaint against someone who did nothing wrong. We built confidence grading into every Claude response and made the innocent explanation of each observation mandatory reading before a user can post. Getting that UX tight without making the app feel like a legal waiver factory took most of our iteration time.
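The confidence grading can be enforced as a hard check on every classification before it reaches the UI. This is a sketch with illustrative field names (`confidence`, `innocentExplanation`), not the production schema.

```javascript
// Sketch of the shape check applied to every classification response.
// A response is unusable unless it carries a confidence grade AND an
// innocent explanation the user must read before posting.
const GRADES = ["low", "medium", "high"];

function validateClassification(resp) {
  if (!GRADES.includes(resp.confidence)) {
    throw new Error("classification missing a confidence grade");
  }
  if (!resp.innocentExplanation || resp.innocentExplanation.trim() === "") {
    throw new Error("classification missing the innocent explanation");
  }
  return resp;
}
```

Failing closed here means an overconfident or incomplete model response never becomes a posted report.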

The second challenge was testimony manipulation: a coordinated group could flood a report with fake corroborations. Our session and IP deduplication is pragmatic rather than cryptographically airtight, and we are honest about that in the complaint text itself.
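The pragmatic deduplication can be sketched as one testimony per (session, report) and per (IP, report). Names are illustrative; the real version persists this state in Supabase rather than in memory.

```javascript
// Sketch of session + IP testimony deduplication, in-memory only.
function makeDedup() {
  const seen = new Set();
  return function accept({ reportId, sessionId, ip }) {
    const sessionKey = `s:${reportId}:${sessionId}`;
    const ipKey = `i:${reportId}:${ip}`;
    // Reject a second testimony from the same session OR the same IP
    // on the same report; neither check is cryptographically airtight.
    if (seen.has(sessionKey) || seen.has(ipKey)) return false;
    seen.add(sessionKey);
    seen.add(ipKey);
    return true;
  };
}
```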


Accomplishments We Are Proud Of

The collective complaint draft is the thing we are most proud of. It synthesizes anonymized observations from multiple independent witnesses into a structured document that cites real Maryland law, notes permit status, and requests a written response timeline from MDE. It reads like something a legal aid organization would produce. The fact that it is generated from structured community testimony rather than a single person's account is what gives it weight.

We are also proud of the acute triage routing: anyone describing an active spill or immediate health risk bypasses the community layer entirely and gets MDE's emergency line immediately, regardless of how Claude classified the report.


What We Learned

The liability wrapper is not a legal formality — it is a core product decision. Every choice about what language Claude uses, what confidence thresholds trigger which actions, and what users must acknowledge before posting is a policy decision dressed in UI.

We also learned that the framing of "request for investigation" versus "complaint of violation" is not just semantics. It changes what the document is, who can credibly sign it, and what MDE is obligated to do with it. Getting that framing right across every Claude prompt took more passes than any other part of the build.


What's Next for EcoJustice MD

Three things would make this meaningfully more powerful:

  1. Permit database integration. Right now permit status comes from Claude's interpretation of the description. The real version would query MDE's permit database directly so the collective complaint cites actual permit numbers and compliance history.

  2. Public accountability dashboard. Once a complaint is filed, what happens? The next version tracks MDE response times publicly. A report showing "Filed, MDE Response Pending, Day 34" is a public fact that journalists and state delegates can act on without the tool doing anything else.

  3. Voice and SMS input. The residents most affected by environmental violations are often not the ones most comfortable typing into a web form. A version that lets you call a number and describe what you saw in plain speech would reach the people who need it most.
