Inspiration
Earlier this year, a school activity introduced me to climate change as the greatest threat facing humanity. Is it really that dire? In search of answers, I embarked on a project in my data science class, using a Bayesian time series model to predict global warming. The conclusion was alarming: the 1.5°C and 2°C thresholds set by the Paris Agreement would be surpassed within 50 years. I questioned the accuracy of my model and extensively researched other models, only to discover that experts also cast doubt on such predictions, labeling many models as "too hot" due to their excessive sensitivity to parameters. Evaluating these models and establishing unified standards for them poses a challenge for macro-level prediction. As my research deepened, I became increasingly interested in global warming, a challenge that humanity must face collectively. I am now preparing to apply to universities for majors in Earth Sciences and Sustainable Development.
What it does
This project uses a Bayesian Structural Time Series (BSTS) model to predict future global temperatures from historical temperature data and from factors known to drive global warming.
How we built it
Temperature data is obtained from the National Oceanic and Atmospheric Administration (NOAA). Carbon dioxide (CO2) concentration data is obtained from the Scripps CO2 Program. Oceanic Niño Index (ONI) data is also obtained from NOAA.
In this project, the Bayesian Structural Time Series (BSTS) model was selected. BSTS is a statistical framework for forecasting time series data, first introduced by Scott and Varian in their 2013 paper "Predicting the Present with Bayesian Structural Time Series." BSTS is a state space model: it assumes the system can be represented by a set of latent (unobserved) variables that evolve over time according to transition equations, while the observed data is related to these latent states through an observation equation. This structure makes it straightforward to incorporate external variables as predictors through a regression component. Model parameters and latent states are estimated with a Markov chain Monte Carlo (MCMC) sampling algorithm.
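The state-space idea (latent state, transition equation, observation equation) can be sketched with a minimal local-level model and a Kalman filter. This is a hand-rolled Python illustration with made-up parameters, not the project's actual R/BSTS code:

```python
import numpy as np

# Minimal local-level state space model (illustrative sketch only):
# a latent level alpha_t follows a random walk (transition equation),
# and the observation y_t is the level plus noise (observation equation):
#   alpha_t = alpha_{t-1} + eta_t,   eta_t ~ N(0, q)
#   y_t     = alpha_t + eps_t,       eps_t ~ N(0, r)

def kalman_local_level(y, q=0.01, r=0.09):
    """Filter a 1-D series; returns the estimated latent level."""
    n = len(y)
    level = np.zeros(n)          # filtered state estimates
    a, p = y[0], 1.0             # initial state mean and variance
    for t in range(n):
        p = p + q                # predict: state variance grows by q
        k = p / (p + r)          # Kalman gain
        a = a + k * (y[t] - a)   # update with observation y[t]
        p = (1 - k) * p
        level[t] = a
    return level

rng = np.random.default_rng(0)
true_level = np.cumsum(rng.normal(0, 0.1, 200))  # latent random walk
y = true_level + rng.normal(0, 0.3, 200)         # noisy observations
est = kalman_local_level(y)
```

A full BSTS model adds trend, seasonal, and regression components on top of this skeleton and samples their parameters with MCMC instead of fixing `q` and `r` by hand.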
Four models were fit to historical temperature data from 1980 to 2015 and used to predict temperatures from 2015 to 2023. The first model contains no regression component. The second takes CO2 levels as its sole predictor. The third takes the ONI index as its sole predictor. The fourth takes both CO2 and ONI as predictors.
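This train-on-1980-2015, test-on-2015-2023 comparison can be illustrated with a deliberately simplified holdout experiment, using ordinary least squares on synthetic data in place of BSTS. Everything below (series, coefficients, split) is synthetic and assumed, not the project's data or models:

```python
import numpy as np

# Illustrative holdout comparison on synthetic data (NOT the project's
# NOAA/Scripps series): fit a model with and without an external
# regressor on a training window, then score forecasts on a held-out
# window, mirroring the 1980-2015 fit / 2015-2023 test split.

rng = np.random.default_rng(1)
n = 44                                        # 44 annual points, ~1980-2023
t = np.arange(n, dtype=float)
co2 = 340 + 1.5 * t + 0.02 * t**2 + rng.normal(0, 0.5, n)  # CO2-like curve
temp = 0.01 * (co2 - 340) + rng.normal(0, 0.05, n)         # anomaly-like series

train, test = slice(0, 36), slice(36, n)      # first 36 years vs last 8

def fit_predict(x_train, y_train, x_test):
    """Ordinary least squares with intercept; returns test predictions."""
    A = np.column_stack([np.ones(len(x_train)), x_train])
    beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.column_stack([np.ones(len(x_test)), x_test]) @ beta

def mae(pred, actual):
    return float(np.mean(np.abs(pred - actual)))

# "No regression component" stand-in: extrapolate a linear time trend.
mae_trend = mae(fit_predict(t[train], temp[train], t[test]), temp[test])
# "CO2 as sole predictor": regress temperature on CO2 instead.
mae_co2 = mae(fit_predict(co2[train], temp[train], co2[test]), temp[test])
```

The point of the design is the same as in the project: each candidate model is judged by out-of-sample forecast error on the held-out years, not by in-sample fit.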
Challenges we ran into
My model estimates CO2 levels to reach 560 ppm by the year 2083. Using the predicted CO2 levels, global warming is predicted to reach 3.44°C by 2083, with a 25% quantile of 0.99 and a 95% credible interval of (-4.51, 11.48)°C. This estimate of 3.44°C is close to the equilibrium climate sensitivity of around 3.2°C presented by the IPCC (Intergovernmental Panel on Climate Change), which suggests that the model makes reasonable predictions of climate sensitivity and is not a "too-hot" model.

The term "too-hot models" arose in 2019 from the Coupled Model Intercomparison Project (CMIP), which combines the results of the world's models in advance of the major IPCC reports that come out every 7 or 8 years. In previous rounds of CMIP, most models projected a "climate sensitivity" (the warming expected when atmospheric carbon dioxide is doubled over preindustrial times) of between 2°C and 4.5°C. But in the 2019 CMIP6 round, 10 of the 55 models had sensitivities higher than 5°C, a stark departure.

Accomplishments that we're proud of
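As an aside on how summaries like the 25% quantile and 95% credible interval above are computed: they are simply quantiles of the MCMC posterior predictive draws. A minimal sketch with synthetic draws (the distribution here is hypothetical, not the model's actual output):

```python
import numpy as np

# Given posterior predictive draws for a forecast quantity (synthetic
# normal draws here, for illustration only), the reported summaries
# are sample statistics and quantiles of the draws.
rng = np.random.default_rng(42)
draws = rng.normal(loc=3.44, scale=4.0, size=10_000)  # hypothetical draws

point = float(np.mean(draws))                  # posterior mean
q25 = float(np.quantile(draws, 0.25))          # 25% quantile
lo, hi = np.quantile(draws, [0.025, 0.975])    # 95% credible interval
```

A wide interval like the one reported reflects large posterior uncertainty in the forecast, which is itself useful information that a single point estimate hides.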
What we learned
Climate models are important tools for predicting global warming, but to what extent can they accurately reflect future conditions? Current climate model simulations tend to overestimate the magnitude of future warming, a problem known as the "hot model problem." How can we avoid this systematic bias? The solution lies in establishing model evaluation criteria and encouraging more people to participate in prediction, promoting models designed with different methods to increase model diversity. Additionally, open-sourcing models and publishing detailed parameter information can help prevent shared prediction biases from forming.
What's next for Unleashing Diversity for Accurate Climate Predictions
We want to study how to evaluate the accuracy of models, preferably by designing a clear set of evaluation criteria and utilizing AI for parameter assessment.
Built With
- bayesian
- r