Inspiration
We noticed how often people (us included) type things like “hi!”, “please,” and “thank you!!” into AI prompts. While polite, those extras cost tokens, and recent coverage highlighted how small bits of prompt bloat add up to real compute, energy, and cooling water at data-center scale. That sustainability angle inspired us to build a Chrome extension that trims unnecessary fluff and nudges you toward leaner prompts without getting in your way.
What it does
WasteNotPromptNot is a Chrome MV3 extension for ChatGPT. As you type, it detects “wasteful” patterns (greetings, sign-offs, hedges), estimates removable characters and approximate tokens, and shows a small floating badge you can dismiss. If you press Enter on a wasteful prompt, a simple two-button modal appears: Clear (apply cleanup) or Send anyway.
How we built it
Default path: On Enter, the content script sends the prompt to a small Cloud Run (Firebase) HTTP endpoint that returns a waste score and a suggested cleaned version.
Fallback path: If the network call fails, lightweight regex rules power the on-page badge estimate and cleanup.
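As a rough illustration, the fallback might look like this — the rule names, patterns, and the ~4-characters-per-token heuristic here are our assumptions for the sketch, not the extension's actual ruleset:

```javascript
// Minimal sketch of the regex fallback. Rules are illustrative only.
const FALLBACK_RULES = [
  { name: "greeting", pattern: /^\s*(hi|hello|hey)[!,.\s]*/i },
  { name: "please",   pattern: /\bplease\b[,\s]*/gi },
  { name: "signoff",  pattern: /[,\s]*(thanks|thank you)[!.\s]*$/i },
];

// Estimate removable characters and approximate tokens (~4 chars/token)
// and produce a cleaned version of the prompt.
function analyzePrompt(prompt) {
  let cleaned = prompt;
  for (const rule of FALLBACK_RULES) {
    cleaned = cleaned.replace(rule.pattern, " ");
  }
  cleaned = cleaned.replace(/\s+/g, " ").trim();
  const removedChars = Math.max(prompt.length - cleaned.length, 0);
  return {
    cleaned,
    removedChars,
    approxTokens: Math.round(removedChars / 4),
  };
}
```

The same analysis drives both the badge estimate and the one-click cleanup, so the two never disagree.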
MV3 plumbing: The content script finds the ChatGPT editor (SPA/shadow-DOM robust), shows the badge, intercepts Enter, and applies cleanup. A background service worker relays messages and performs async fetches while keeping the sendResponse channel alive.
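A minimal sketch of that relay pattern — the endpoint URL and message type below are placeholders, not our real endpoint or payload:

```javascript
// Background service worker relay (message shape and URL are illustrative).
// Returning true from the listener tells Chrome to keep sendResponse
// valid until the async fetch settles.
const SCORER_URL = "https://example-scorer.a.run.app/score"; // placeholder

async function scorePrompt(prompt) {
  try {
    const res = await fetch(SCORER_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    return await res.json();
  } catch (err) {
    // Network failure: tell the content script to use its regex fallback.
    return { ok: false, fallback: true };
  }
}

function handleMessage(message, sender, sendResponse) {
  if (message.type === "SCORE_PROMPT") {
    scorePrompt(message.prompt).then(sendResponse);
    return true; // keep the message channel open for the async reply
  }
}

// Registered only when running inside the extension's service worker.
if (typeof chrome !== "undefined" && chrome.runtime?.onMessage) {
  chrome.runtime.onMessage.addListener(handleMessage);
}
```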
Challenges we ran into
MV3 service worker lifecycle (“Receiving end does not exist”). Issue: Popup messages failed if the content script wasn’t ready. Fix: Added a background relay, returned true to keep sendResponse alive, injected the content script on demand, then retried.
Finding the ChatGPT editor in SPA/shadow DOM. Issue: Static selectors missed the live editor. Fix: Used a MutationObserver, scanned textarea, [contenteditable], and [role="textbox"], walked open shadow roots, and listened globally with e.composedPath() to catch closed roots. Tracked attached nodes with a WeakSet to avoid double listeners.
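The shadow-DOM-aware target resolution and the WeakSet guard can be sketched like this (the helper names are ours):

```javascript
// Track editors we've already hooked so re-renders don't double-attach.
const hookedEditors = new WeakSet();

// composedPath()[0] is the innermost event target, even inside open
// shadow roots; for events escaping closed roots it still starts at the
// deepest node the page is allowed to see.
function deepTarget(event) {
  const path =
    typeof event.composedPath === "function" ? event.composedPath() : null;
  return (path && path[0]) || event.target;
}

function hookEditor(node, onKeydown) {
  if (hookedEditors.has(node)) return false; // already attached
  hookedEditors.add(node);
  node.addEventListener?.("keydown", onKeydown);
  return true;
}
```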
Badge placement & lingering UI. Issue: The badge looked off and sometimes stuck around after edits/submits. Fix: Switched to position: fixed, clamped it to the viewport, and hid it on empty input/blur.
Enter vs. newline vs. IME. Issue: We accidentally blocked newlines or composition. Fix: Only treat plain Enter as submit; ignore Shift+Enter, Ctrl/Cmd+Enter, and isComposing.
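That decision boils down to a small predicate over the keydown event:

```javascript
// True only for a plain Enter press: Shift/Ctrl/Cmd+Enter (newline or
// alternate submit) and in-progress IME composition are left alone.
function isSubmitEnter(e) {
  return (
    e.key === "Enter" &&
    !e.shiftKey &&
    !e.ctrlKey &&
    !e.metaKey &&
    !e.isComposing
  );
}
```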
“Clear” accidentally submitting. Issue: The early “Clear & Send” flow surprised users. Fix: The modal now has two explicit buttons: Clear (just clear the textbox) and Send anyway (submit as-is).
Network/CORS & where to fetch. Issue: Content-script fetches hit CORS; service-worker timing dropped messages. Fix: Added host_permissions for the API and a background-fetch path via the service worker; the popup injects on demand and retries.
Accomplishments that we’re proud of
Solid MV3 wiring: Background service worker, content script, and popup communicate reliably with async sendResponse handling and inject-then-retry logic.
Editor detection that actually works on ChatGPT: Resilient to SPA updates, shadow DOM, and re-renders via MutationObserver + global event path.
Graceful offline fallback: If the Cloud Run scorer is unreachable, local regex still estimates savings and enables cleanup.
Defensive UX: Guards around every querySelector, EMPTY_SUMMARY defaults, and a fallback ruleset so the UI never shows “undefined.”
What we learned
MV3 lessons: You need the right permissions, host_permissions, and web_accessible_resources (for JSON rules), plus scripting.insertCSS/executeScript for fallback injection.
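Those moving parts map to manifest entries roughly like the fragment below — the origin, match pattern, and file name are placeholders, not our actual manifest:

```json
{
  "manifest_version": 3,
  "permissions": ["scripting"],
  "host_permissions": ["https://example-scorer.a.run.app/*"],
  "web_accessible_resources": [
    { "resources": ["rules.json"], "matches": ["https://chatgpt.com/*"] }
  ]
}
```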
SPA + (shadow) DOM realities: ChatGPT re-renders without full page reloads; listeners vanish unless re-attached.
A MutationObserver + periodic scanEditors() keeps hooks fresh; e.composedPath() sees through shadow roots.
Treat nodes as ephemeral and use a WeakSet to avoid double-attaching.
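Putting those pieces together, the re-scan loop might look like this (the selector list and rescan interval are illustrative):

```javascript
// Keep editor hooks fresh across SPA re-renders.
const EDITOR_SELECTOR = 'textarea, [contenteditable], [role="textbox"]';

function scanEditors(root, attach) {
  let attached = 0;
  for (const node of root.querySelectorAll(EDITOR_SELECTOR)) {
    if (attach(node)) attached++; // attach() dedupes (e.g. via a WeakSet)
  }
  return attached;
}

function watchForEditors(root, attach) {
  scanEditors(root, attach); // initial pass
  const observer = new MutationObserver(() => scanEditors(root, attach));
  observer.observe(root, { childList: true, subtree: true });
  // Periodic rescan as a belt-and-braces fallback for missed mutations.
  const timer = setInterval(() => scanEditors(root, attach), 2000);
  return () => { observer.disconnect(); clearInterval(timer); };
}
```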
Privacy & scope discipline: No keystroke streaming; we only send the prompt on Enter. Everything else (badge counts, highlights) runs locally.
Packaging & edge cases: Some pages block content scripts; the popup now injects on demand and reports clear errors. We hit CORS/timeouts and shifted fetches to the background worker for reliability.
Accessibility polish: The modal has roles/labels, focuses itself, and supports Esc to dismiss—small touches that make it feel native.
What’s next for WasteNotPromptNot
Inline highlights + undo: Show exactly what will be removed, with one-click restore.
Auto-clean (opt-in): Clean as you type; intercept only edge cases.
Personalization: Toggle rule categories (greetings, hedges, punctuation), per-site settings, and custom phrases.
Smarter scoring: Add on-device scoring (TF.js/ONNX) for low-latency, private judgments; cache results to cut calls.
Privacy hardening: No prompt logging, transparent policies, explicit “local-only” mode.
Savings dashboard: Token/$ estimates plus CO₂ / water equivalents; weekly “impact” recap.
Multi-site support: Extend beyond ChatGPT to Claude, Gemini, Perplexity, and generic textareas.
Internationalization: Language-aware rules and UI (starting with ES/FR/DE).
Perf & reliability: Debounce scans, trim MutationObserver noise, stronger SPA/shadow-DOM detection, robust retries.
Org/Team mode: Central rule sets, enforced defaults, opt-in aggregate sustainability metrics.
Packaging & reach: Ship to Firefox/Edge; explore Safari; open-source core with Playwright tests and CI.

