Inspiration

In a world flooded with climate misinformation, education must go beyond facts—it must engage. This game transforms passive awareness into active understanding by letting players experience the real-world impact of their choices. Through interactive storytelling and dynamic simulation, it cuts through confusion, debunks myths, and empowers users to explore evidence-based solutions. You can’t fact-check the future—but you can simulate it.

What it does

Our platform is an interactive climate change simulator that lets users explore how their decisions—like enacting policies, adopting technologies, or shifting behaviors—impact the planet. Through a dynamic, game-like experience, users can test the outcomes of climate strategies in real time, learning how local actions cascade into global consequences. The goal is to combat misinformation and promote systems thinking by making climate science engaging, visual, and experiential.

How we built it

The simulation runs on a multi-layered AI stack. Fetch.ai agents, powered by Grok, model decentralized decision-making and policy interactions across sectors and regions. Anthropic’s Claude labels, classifies, and contextualizes user-generated data and in-game scenarios, keeping them semantically coherent and accurate. The core climate engine is driven by Google’s Gemini, which runs high-fidelity simulations that translate user actions into environmental and socio-economic outcomes—such as CO₂ levels, sea-level rise, energy transitions, and ecosystem shifts. The interplay of these layers produces emergent behavior that reflects the real-world complexity and uncertainty of climate dynamics.
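A rough sketch of how these layers fit together, with the model calls stubbed out: `classify_scenario` and `run_climate_step` are hypothetical names standing in for the Claude labeling step and the Gemini simulation step, and the numbers are placeholders rather than calibrated climate parameters.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    co2_ppm: float
    sea_level_cm: float
    renewables_share: float

def classify_scenario(user_action: str) -> dict:
    """Labeling layer (stub): map a free-text player action to a
    structured policy record. In the real stack this is a Claude call."""
    if "solar" in user_action.lower():
        return {"sector": "energy", "kind": "technology", "intensity": 0.3}
    return {"sector": "policy", "kind": "regulation", "intensity": 0.1}

def run_climate_step(state: WorldState, policy: dict) -> WorldState:
    """Simulation layer (stub): advance the world one tick. In the real
    stack this is the Gemini-driven climate engine."""
    mitigation = policy["intensity"]
    return WorldState(
        co2_ppm=state.co2_ppm + 2.5 * (1.0 - mitigation),  # slower growth under mitigation
        sea_level_cm=state.sea_level_cm + 0.3,
        renewables_share=min(1.0, state.renewables_share + 0.05 * mitigation),
    )

state = WorldState(co2_ppm=420.0, sea_level_cm=0.0, renewables_share=0.3)
policy = classify_scenario("Subsidize rooftop solar installations")
state = run_climate_step(state, policy)
```

The point of the structure is the boundary: the labeling layer turns open-ended player input into a typed record, so the simulation layer never has to parse natural language.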

Challenges we ran into

Accessing high-quality satellite imagery for climate modeling posed significant hurdles. Public archives often contain data plagued by cloud cover, poor lighting conditions, or inconsistent sensor saturation, making it difficult to isolate usable frames for training or visualization. Finding cloud-free, seasonally consistent, and radiometrically calibrated imagery—especially for specific time periods or locations—requires intensive filtering and cross-referencing across multiple datasets and metadata fields.

Compounding this is the challenge of ingesting such data at scale. Many satellite data APIs are rate-limited or backed by slow, legacy servers not optimized for large, parallel workloads. Efficiently querying these sources requires careful orchestration: parallelizing API requests without overloading endpoints, managing retries, caching intermediate results, and balancing throughput with stability. These bottlenecks directly impact the system's ability to deliver timely and reliable simulations grounded in real-world geospatial data.

Accomplishments that we're proud of

One of our key achievements is the development of an AI agent capable of interfacing seamlessly with a wide range of major weather and climate APIs. This includes services from NASA, NOAA, ECMWF, Open-Meteo, and commercial platforms, each with its own data formats, authentication schemes, and domain-specific quirks. The agent dynamically adapts to different endpoints—whether it's retrieving historical climate trends, real-time weather, forecast models, or satellite data layers—and intelligently harmonizes the responses into a unified schema for downstream analysis or simulation. It can parse heterogeneous metadata, handle missing or uncertain inputs, and resolve spatial and temporal mismatches across sources. This interoperability transforms fragmented climate intelligence into actionable context, enabling real-time scenario generation and significantly enhancing the responsiveness and realism of the game’s simulation engine.
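The harmonization step looks roughly like this: one adapter per provider maps that provider's response shape onto a single unified record. The payload shapes below are invented for illustration and are not the providers' actual schemas; the unit and coordinate-order conversions are the kind of mismatch the adapters absorb.

```python
from datetime import datetime, timezone

def unify(source: str, lat: float, lon: float,
          time_utc: datetime, temp_c: float) -> dict:
    """The single schema every adapter targets."""
    return {"source": source, "lat": lat, "lon": lon,
            "time": time_utc, "temp_c": round(temp_c, 2)}

def from_open_meteo(payload: dict) -> dict:
    # Illustrative Open-Meteo-style shape: flat lat/lon, ISO timestamps, Celsius.
    t = datetime.fromisoformat(payload["current"]["time"]).replace(tzinfo=timezone.utc)
    return unify("open-meteo", payload["latitude"], payload["longitude"],
                 t, payload["current"]["temperature_2m"])

def from_noaa(payload: dict) -> dict:
    # Illustrative NOAA-style shape: GeoJSON [lon, lat] order, Unix time, Fahrenheit.
    lon, lat = payload["geometry"]["coordinates"]
    t = datetime.fromtimestamp(payload["properties"]["timestamp"], tz=timezone.utc)
    f = payload["properties"]["temperature"]["value"]
    return unify("noaa", lat, lon, t, (f - 32) * 5 / 9)  # °F -> °C

record_a = from_open_meteo({
    "latitude": 52.52, "longitude": 13.41,
    "current": {"time": "2024-06-01T12:00", "temperature_2m": 21.3},
})
record_b = from_noaa({
    "geometry": {"coordinates": [13.41, 52.52]},
    "properties": {"timestamp": 1717243200, "temperature": {"value": 70.34}},
})
```

Because both records now share one schema, downstream simulation code never needs to know which provider a reading came from.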

What we learned

Working with climate data at scale taught us a hard truth: big data analysis is slow—painfully slow—especially when juggling high-resolution geospatial inputs, real-time simulation, and fragmented API ecosystems. It’s easy to fall into the trap of endlessly optimizing data pipelines or chasing more datasets in the name of accuracy, but at some point, we had to ask: Is this making the experience better, or just slower? One of our key takeaways was the importance of separating the pursuit of "data completeness" from the job of shipping a functional MVP. A simulation that runs, teaches, and responds—even on a smaller, cleaner dataset—is far more valuable than an endlessly delayed product waiting on the perfect archive. Prioritizing responsiveness, clarity, and user experience over data volume helped us stay lean, ship faster, and focus on what really matters: insight, not just information.

What's next for EcoSim

Looking ahead, we’re focused on expanding both the depth and expressiveness of the simulation. On the data side, we plan to integrate additional climate and socio-environmental datasets—including land use change, wildfire risk, migration projections, and climate vulnerability indices—to enrich scenario realism and provide more nuanced user feedback. In parallel, we’re developing a fine-tuned Stable Diffusion model trained on curated satellite and climate impact imagery. This will allow the simulator to generate visual futures—photorealistic, localized renderings of environments under different climate trajectories—bringing abstract data to life. By combining quantitative simulation with evocative AI-generated visuals, we aim to make the consequences of climate decisions more immediate, visceral, and personal.
