Inspiration

We were tired of seeing corporate sustainability claims that didn't match reality. While researching a "carbon neutral" shipping company, we spent 8 hours digging through SEC filings, news articles, and academic studies to discover that its "neutrality" relied on questionable carbon offsets. That frustration sparked the idea: what if AI could do this verification in minutes? With 1,841 greenwashing events documented in 2024 alone and 30% of offenders being repeat violators, we saw an urgent need for accessible verification tools. Existing Environmental, Social, and Governance (ESG) platforms cost thousands of dollars annually, leaving consumers, journalists, and advocates without the resources to hold corporations accountable.

What it does

Greenwash Radar is an AI-powered search engine that detects corporate climate misinformation. Users paste any environmental claim or news article, and our platform extracts specific statements, gathers real-time evidence from regulatory reports and academic studies, calculates quantitative risk scores, and generates interactive visualizations. The system renders a 3D globe showing 150+ corporate climate pledges, creates dynamic flowcharts mapping commitments to outcomes, and delivers color-coded verdicts with source verification. Built entirely with vanilla JavaScript and a Mapbox integration, it democratizes ESG verification that previously cost $5,000-$50,000 per year.
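
For a sense of the full flow, here is roughly what one round trip looks like from the browser. This is an illustrative sketch only: the endpoint path and response fields are assumptions, not our exact API.

```javascript
// Illustrative sketch: the '/api/analyze' route and the response fields
// are assumed names, not our production contract.
async function analyzeClaim(claimText) {
  const res = await fetch('/api/analyze', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ claim: claimText }),
  });
  if (!res.ok) throw new Error(`Analysis failed: ${res.status}`);
  // Hypothetical shape: extracted claims, evidence sources, a quantitative
  // risk score, and a color-coded verdict the frontend renders directly.
  return res.json(); // e.g. { claims: [...], evidence: [...], riskScore: 72, verdict: "misleading" }
}
```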

How we built it

We split into two teams: frontend focused on the vanilla JavaScript interface, backend on AI orchestration. Frontend used pure HTML5/CSS3/ES6+ to build the hero interface and the reporting page, Mapbox GL JS for the 3D globe, and Mermaid.js for flowcharts. No frameworks were used. Backend built a Flask API integrating the OpenAI Responses API with web search capabilities, creating a multi-agent pipeline: GPT-4o extracts claims, GPT-5-mini retrieves evidence, and a final stage synthesizes a verdict on the level of greenwashing. We stored 150+ companies in GeoJSON format, and the modular service architecture lets us swap AI providers without frontend changes.
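
As a concrete example, the core of the globe looks roughly like the sketch below; the access token, style, and GeoJSON path are placeholders, and the real interface adds tooltips and custom styling on top of this.

```javascript
// Minimal 3D globe with Mapbox GL JS (v2.9+ supports projection: 'globe').
// Token, style, and data path are placeholders for this sketch.
mapboxgl.accessToken = 'YOUR_MAPBOX_TOKEN';

const map = new mapboxgl.Map({
  container: 'map',                         // <div id="map"> on the hero page
  style: 'mapbox://styles/mapbox/dark-v11',
  projection: 'globe',                      // render as a 3D globe
  center: [0, 20],
  zoom: 1.5,
});

map.on('load', () => {
  map.setFog({});                           // default atmosphere halo
  map.addSource('pledges', {
    type: 'geojson',
    data: '/data/pledges.geojson',          // the 150+ companies stored as GeoJSON
  });
  map.addLayer({
    id: 'pledge-points',
    type: 'circle',
    source: 'pledges',
    paint: { 'circle-radius': 5, 'circle-color': '#2ecc71' },
  });
});
```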

Challenges we ran into

The biggest challenge was getting OpenAI's web search to work reliably within the Responses API: ensuring responses came back in a structure we could parse downstream was critical to the product's reliability. Rendering the 3D globe smoothly in vanilla JavaScript was another hurdle, along with keeping tooltips responsive to foster user interaction. Mapbox GL JS documentation assumes framework integration, so we had to manually optimize WebGL rendering loops and implement our own event delegation system. Styling was a natural problem given our no-framework approach: with multiple views and objects presenting the same data across slightly different modalities, managing screen space was extremely difficult.
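
The event delegation system ultimately reduced to a single listener on a stable ancestor instead of hundreds of per-element handlers. A simplified sketch, where the selector and handler name are illustrative rather than our exact code:

```javascript
// One delegated listener on a stable container instead of a handler per
// element; class and function names here are illustrative.
document.querySelector('#results').addEventListener('click', (event) => {
  const card = event.target.closest('.claim-card');
  if (!card) return;                        // click landed outside any card
  showVerdictDetail(card.dataset.claimId);  // hypothetical detail renderer
});
```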

Accomplishments that we're proud of

Achieving consistent results across performance, accessibility, and best practices with a feature-rich vanilla JavaScript application. Most hackathon projects rely on React or Vue for speed; we proved that deep Document Object Model (DOM) manipulation delivers superior results. Our multi-agent AI pipeline proved robust at identifying behaviour that contradicts a company's stated pledges, particularly when cross-referenced against real-world research. The intelligent caching architecture enabled free public access at scale, something commercial ESG platforms can't match without sacrificing affordability. We're also proud of building a stakeholder-aware design: the 3D globe serves regulators, the flowcharts help everyday readers, and the curated resources aid journalists in their role as climate advocates.
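
To illustrate the caching idea: an identical claim should never re-run the full AI pipeline. A minimal browser-side sketch, where the key scheme, the 24-hour TTL, and even the cache location are assumptions (the real architecture may cache behind the API instead):

```javascript
// Illustrative cache: identical claims skip the AI pipeline. The key scheme,
// 24-hour TTL, and localStorage location are assumptions for this sketch.
const TTL_MS = 24 * 60 * 60 * 1000;

async function cachedAnalyze(claimText) {
  const key = `gw:${claimText.trim().toLowerCase()}`;
  const hit = localStorage.getItem(key);
  if (hit) {
    const { savedAt, result } = JSON.parse(hit);
    if (Date.now() - savedAt < TTL_MS) return result;  // fresh cache hit
  }
  const result = await analyzeClaim(claimText);  // hypothetical helper from the earlier sketch
  localStorage.setItem(key, JSON.stringify({ savedAt: Date.now(), result }));
  return result;
}
```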

What we learned

Vanilla JavaScript forces architectural discipline that frameworks often hide. We learned to optimize every DOM operation, implement efficient event delegation, and write modular code without component abstractions. We also found that real-time web search surfaces "greenhushing" trends, where companies reduce transparency to avoid scrutiny; this live-evidence approach found 23% more contradictions than historical-data methods. We learned that hackathon projects can prioritize both technical excellence and social impact without compromise. The modular service architecture taught us to design APIs that anticipate provider changes, future-proofing the platform against AI vendor shifts.
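
The provider-agnostic design boils down to a small, stable contract. Sketched in JavaScript purely for illustration (our actual service layer lives in the Flask backend, and every name below is hypothetical):

```javascript
// Illustrative adapter pattern: each vendor is wrapped behind the same two
// methods, so the pipeline never touches a specific SDK. All names are
// hypothetical; the real service layer is implemented in Flask.
function createProvider(name) {
  return {
    async extractClaims(text) {
      // A real provider would call its vendor's API here; stubbed for the sketch.
      return [{ id: 1, statement: text.slice(0, 80) }];
    },
    async retrieveEvidence(claim) {
      return { claimId: claim.id, sources: [], provider: name };
    },
  };
}

// The pipeline depends only on the contract, not the vendor.
async function runPipeline(provider, articleText) {
  const claims = await provider.extractClaims(articleText);
  return Promise.all(claims.map((c) => provider.retrieveEvidence(c)));
}

// Swapping vendors is a one-line change:
runPipeline(createProvider('openai'), 'We are 100% carbon neutral.').then(console.log);
```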

What's next for Greenwashing Radar

Immediate: Expand to satellite imagery analysis for deforestation claims, build industry-specific models for high-risk sectors like oil and gas, and launch a Progressive Web App with offline capabilities for field journalists.

Medium-term: Integrate directly with SEC EDGAR and EU CSRD databases, create a Corporate Transparency Score (like a credit rating for sustainability), and add a secure whistleblower submission portal.

Long-term: Position as the public-interest alternative to proprietary ESG ratings, provide regulators with systemic pattern detection, and release an open dataset for academic research.

The AI-in-ESG market is projected to reach $846.75B by 2032, and we're building the transparent, accessible foundation for verified corporate accountability.

Built With

css3, flask, html5, javascript, mapbox-gl-js, mermaid.js, openai