GreenPrompt — Responsible AI Footprint Coach
Project Writeup
Inspiration
Every week, headlines celebrate what AI can do — but almost none ask what it costs. Training GPT-4 emitted roughly as much CO₂ as 300 round-trip flights from New York to San Francisco. And that's just training. Every single query you send — every "explain this", "rewrite that", "can you please help me with..." — draws power from a data center, consumes cooling water, and leaves a small but real carbon trace.
The uncomfortable truth is that most of us using AI tools daily have no idea what our usage actually costs the planet. There's no fuel gauge. No receipt. Just a blinking cursor and an answer.
That's what inspired GreenPrompt. Not to guilt people out of using AI — AI is genuinely useful and that's not going away — but to make the invisible visible. If you knew that three sloppy, filler-heavy prompts cost the same as one clear one, would you write more carefully? We think most people would. They just need to see it.
We also noticed something ironic: people use AI to be more productive, but wasteful prompting actively makes it worse — you get vague answers, send follow-ups, regenerate responses. Efficiency and sustainability point in the exact same direction. GreenPrompt sits at that intersection.
What We Built
GreenPrompt is a Chrome extension that runs silently across six major AI platforms — ChatGPT, Claude, Gemini, Copilot, Perplexity, and Mistral. It tracks the environmental cost of your AI usage in real time and nudges you toward more efficient habits through a lightweight gamification system.
Core features:
- Real-time footprint tracking — estimates CO₂ (grams), water usage (ml), and energy (Wh) per query using published research on LLM inference costs. Numbers update live as you chat.
- Cross-platform detection — works across all six platforms using a three-method detection system: keyboard events, smart click detection, and a MutationObserver watching for new message bubbles in the DOM.
- Prompt Optimizer Lite — a floating 🌿 button injected into every AI chat interface. Click it before sending and it suggests a shorter, cleaner version of your prompt using local heuristics. No API calls, no data leaves your browser. Applying a suggestion earns bonus XP and logs the CO₂ you saved.
- CO₂ Saved counter — tracks not just what you've emitted, but what you've actively prevented through optimization. Seeing your personal impact number grow is meaningfully different from watching an abstract score go up.
- Weekly Challenges — a rotating set of five weekly missions ("Batch Master", "Less is More", "Light Footprint", "Platform Explorer", "Daily Habit") that give people a concrete goal to chase each week with XP rewards on completion.
- Streak system with multipliers — daily streaks earn escalating XP (streak × 5 per day). First query of the day gives a bonus. Concise prompts under 80 characters earn extra points — rewarding the exact behavior that reduces emissions.
- 15 achievements — unlocked automatically with desktop notifications. Designed with real emotional hooks: "A tree absorbed that!" at 10g saved, "Real habit now." at a 14-day streak.
- Full analytics dashboard — 14-day CO₂ bar chart, platform breakdown, real-world equivalents (car distance, bulb time, tree absorption), and a full achievement gallery.
- Privacy by design — the extension never reads prompt content. It measures prompt length for token estimation and nothing else. All data lives in chrome.storage.local. No server, no account, no telemetry.
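To make the tracking concrete, here is a minimal sketch of what a per-query footprint estimate could look like. The constants, platform factors, and function names are illustrative assumptions for this writeup, not the extension's actual figures; the real numbers come from the published research cited in the dashboard.

```javascript
// Illustrative per-token cost factors (ASSUMED values, not the real ones).
const PLATFORM_FACTORS = {
  chatgpt: { co2PerToken: 0.0042, waterPerToken: 0.016, whPerToken: 0.009 },
  mistral: { co2PerToken: 0.0022, waterPerToken: 0.009, whPerToken: 0.005 },
};

function estimateFootprint(promptLength, platform = 'chatgpt') {
  // Rough heuristic: ~4 characters per token. No artificial minimum floor,
  // so short prompts genuinely cost less than long ones.
  const tokens = Math.max(1, Math.round(promptLength / 4));
  const f = PLATFORM_FACTORS[platform] || PLATFORM_FACTORS.chatgpt;
  return {
    co2Grams: tokens * f.co2PerToken,   // grams of CO2
    waterMl:  tokens * f.waterPerToken, // ml of cooling water
    energyWh: tokens * f.whPerToken,    // watt-hours
  };
}
```

The point of the sketch is the shape of the calculation: length in, three small but nonzero numbers out, updated on every query.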
How We Built It
GreenPrompt is a pure Manifest V3 Chrome extension — no framework, no build step, no bundler. Deliberately lean.
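For orientation, a Manifest V3 extension like this would be wired together with a manifest roughly like the following. This is a hypothetical sketch: the hostnames, permissions, and file names are plausible assumptions based on the platforms and files named in this writeup, not the project's actual manifest.

```json
{
  "manifest_version": 3,
  "name": "GreenPrompt",
  "background": { "service_worker": "background.js" },
  "content_scripts": [{
    "matches": [
      "https://chatgpt.com/*",
      "https://claude.ai/*",
      "https://gemini.google.com/*",
      "https://copilot.microsoft.com/*",
      "https://www.perplexity.ai/*",
      "https://chat.mistral.ai/*"
    ],
    "js": ["content.js"],
    "css": ["content.css"]
  }],
  "permissions": ["storage", "notifications"]
}
```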
The architecture has three layers:
1. Content Script (content.js + content.css)
Injected into every supported AI platform. It uses three independent query-detection methods to handle the wildly different DOM structures across platforms — some intercept keyboard events at different stages, some use React synthetic events, and others rely on ProseMirror or Quill editors. Rather than maintaining brittle per-platform CSS selectors, we use: (a) listeners on all three keyboard event phases (keydown, keypress, keyup), (b) a smart click detector that identifies send buttons by aria-label, data-testid, SVG presence, and proximity to input fields, and (c) a MutationObserver that watches for new user message bubbles appearing in the DOM — the most reliable signal of all. The optimizer button is injected as a fixed floating element so it keeps working when platforms update their layouts.
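The send-button scoring idea in (b) can be sketched as a small heuristic. In the real content script this would inspect live DOM elements; here the button is a plain object so the logic is testable outside the browser, and the signal weights and threshold are assumptions for illustration.

```javascript
// Hypothetical sketch of the smart click detector's scoring heuristic.
// `btn` stands in for a DOM element: { ariaLabel, dataTestId, hasSvgIcon,
// nearInputField } capture the signals named in the writeup.
function looksLikeSendButton(btn) {
  let score = 0;
  const label = (btn.ariaLabel || '').toLowerCase();
  if (/send|submit/.test(label)) score += 3;      // explicit accessible label
  if (/send/.test(btn.dataTestId || '')) score += 3; // test-id hint
  if (btn.hasSvgIcon) score += 1;                 // icon-only buttons are common
  if (btn.nearInputField) score += 2;             // sits next to the prompt box
  return score >= 3;                              // threshold is an assumption
}
```

Combining several weak signals this way is what lets one detector survive six different frontends without per-platform selectors.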
2. Background Service Worker (background.js)
The persistent brain of the extension. Handles all data writes, XP calculations, streak logic, weekly challenge tracking, and achievement unlocks. Impact estimates are calculated using published figures from Luccioni et al. (2023) and Patterson et al. (2021) — CO₂ per token varies by platform (e.g. Mistral is ~48% cleaner per token than GPT-4 based on available data). The service worker also manages the prompt optimizer, running a suite of regex-based filler-phrase strippers entirely locally.
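The optimizer's local filler stripping can be sketched in a few lines. The phrase list and regexes below are illustrative examples, not the extension's actual suite; the point is that everything runs as plain string replacement in the service worker, with no network call.

```javascript
// Illustrative filler-phrase patterns (ASSUMED list, not the real one).
const FILLER_PATTERNS = [
  /\b(?:can|could|would) you please\s*/gi,
  /\bplease\s+/gi,
  /\bI was wondering if\s*/gi,
  /\bkind of\s+/gi,
];

function optimizePrompt(prompt) {
  let out = prompt;
  for (const re of FILLER_PATTERNS) out = out.replace(re, ' ');
  // Collapse whitespace left behind by the removals.
  return out.replace(/\s{2,}/g, ' ').trim();
}
```

A shorter prompt means fewer estimated tokens, which is exactly what the CO₂-saved counter records when a suggestion is applied.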
3. Popup + Dashboard (popup.html/js, dashboard.html/js)
The popup reads directly from chrome.storage.local — no messaging required, so it loads instantly even if the service worker is sleeping. The dashboard uses a hand-written minimal Chart.js replacement (bundled locally, no CDN) to render the 14-day bar chart without any external network calls, which Manifest V3's Content Security Policy would block anyway.
Challenges We Faced
The CSP wall. Manifest V3 enforces a strict Content Security Policy that blocks inline <script> tags, inline onclick handlers, and external CDN script loads. Every button had to be built with document.createElement and addEventListener. The Google Fonts link we initially used hung the popup silently — the entire UI showed "Loading..." indefinitely because the font request was blocked and the script was waiting behind it. Debugging this took longer than expected because Chrome gives no visible error for blocked resource loads in popup pages.
Cross-platform detection was genuinely hard. Each AI platform has a completely different DOM architecture. ChatGPT uses a Lexical rich text editor. Gemini uses Quill. Copilot and Mistral use ProseMirror. Perplexity uses a standard textarea but intercepts events differently. Claude uses a custom contenteditable. A single detection strategy failed on at least two platforms in every iteration. The three-method approach (keyboard + click + MutationObserver) was the eventual solution — but getting the MutationObserver to not falsely trigger on our own modal injections required careful node-level filtering.
The false-positive problem. The MutationObserver approach introduced a subtle bug: opening and closing our own optimizer modal was counted as new AI queries because our DOM insertions triggered the observer. Fixed by checking every added node against our own gp- class prefixes before counting it.
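The fix can be shown as a small filter. This is a simplified sketch, assuming all GreenPrompt elements share a `gp-` class prefix as described above; the real observer callback would walk `MutationRecord.addedNodes`, but plain objects with a `className` are enough to show the logic.

```javascript
// Ignore DOM nodes the extension itself injected (modal, optimizer button).
function isOwnInjection(node) {
  const classes = typeof node.className === 'string' ? node.className : '';
  return classes.split(/\s+/).some(cls => cls.startsWith('gp-'));
}

// Count only mutation nodes that are genuine new message bubbles,
// not our own UI insertions.
function countNewUserMessages(addedNodes) {
  return addedNodes.filter(n => !isOwnInjection(n)).length;
}
```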
Making the numbers honest. The CO₂ estimates are approximations — real inference cost varies with server load, data center location, model version, and grid carbon intensity. We were careful to present numbers as estimates for awareness rather than precise measurements, citing the underlying research in the dashboard footer. We also removed an artificial 50-token minimum floor that was making short and long prompts appear identical in cost.
Incentive design is harder than engineering. The first version had XP and achievements that felt hollow — tracking 50 queries doesn't feel meaningful. The breakthrough was reframing around savings rather than just usage. Showing a user "you've saved 0.24g CO₂ through optimization" with a "tree-days offset" equivalent creates a fundamentally different emotional response than "you've earned 120 XP." People want to see their positive impact, not just their score.
What We Learned
- Environmental cost is invisible by design in most software. Making it visible changes behavior — but only if the feedback is immediate, specific, and connected to action the user can take right now.
- Privacy-first constraints (no prompt reading, no server) turned out to be a creative forcing function, not a limitation. The local heuristics optimizer is faster and more trustworthy than an API call would be.
- Manifest V3 is significantly more restrictive than V2 and the error messages are often silent. Budget extra time for CSP debugging.
- The most motivating metric isn't what you've consumed — it's what you've saved.