Inspiration

Climate Continuity was inspired by the urgent need to visualize the long-term impacts of climate change driven by urban growth in a way that is immediate, personal, and visually resonant. We wanted to go beyond graphs and raw data, using art to provoke curiosity, reflection, and awareness of climate change. By showing people what familiar, perhaps even dear, places could look like in 50, 100, or even 500 years, we aim to bridge the gap between scientific data, such as graphs and trends, and visual storytelling through art. Building the project around an AI-driven program broadens its audience, as AI has become widely popular in recent years; this grows the awareness and impact of our project and makes it accessible to all.

What it does

Climate Continuity is an AI-powered contextual art project that takes two inputs: an aerial image of a modern city or landscape, and a time horizon (50, 100, 250, or 500 years). It generates a future version of that place, transformed by worsening global temperatures due to climate change: grassy landscapes turn brown, beachside towns flood, and clear skies grow grey with smog from excess greenhouse gases. The result is a piece of art in the style of academic realism that helps people contemplate the path our planet will take if we do not change the actions and behaviour that drive our emissions.
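The mapping from time horizon to visual transformation described above could be sketched as follows; the function, dictionary, and exact prompt wording are our illustrative assumptions, not the deployed code:

```python
# Sketch: map each supported time horizon (in years) to prompt fragments
# describing the climate effects named above. All names are illustrative.
CLIMATE_EFFECTS = {
    50: "browning grass, hazier skies, a slightly higher waterline",
    100: "dry brown landscapes, grey smog-filled sky, coastal flooding",
    250: "parched terrain, heavy smog, flooded beachside streets",
    500: "barren brown land, dense grey smog, submerged coastal districts",
}

def build_prompt(years: int) -> str:
    """Compose an image-to-image prompt for a given time horizon."""
    if years not in CLIMATE_EFFECTS:
        raise ValueError(f"unsupported horizon: {years}")
    return (
        f"the same place {years} years in the future, "
        f"transformed by climate change: {CLIMATE_EFFECTS[years]}, "
        "academic realism painting style"
    )
```

Restricting the horizons to a fixed set keeps the prompts curated rather than free-form, so every output stays on the project's climate theme.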

How we built it

  • Hugging Face's stable-diffusion-xl-refiner-1.0 model for image-to-image generation
  • Torch (PyTorch) computer-vision library
  • Google Colab for development
  • Python, with Gradio for the UI
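Wiring these pieces together might look like the sketch below, assuming the Hugging Face `diffusers` library for the SDXL refiner and Gradio for the UI; the helper names and the strength/tuning values are our illustrative assumptions, not the project's exact code:

```python
# Sketch: SDXL-refiner image-to-image plus a minimal Gradio front end.
# Helper names and tuning constants are illustrative assumptions.

def strength_for(years: int) -> float:
    """Denoising strength: longer horizons allow heavier transformation.
    (The 0.35 base and 0.85 cap are illustrative tuning choices.)"""
    return min(0.35 + years / 1000, 0.85)

def generate_future(init_image, prompt: str, years: int):
    """Run SDXL-refiner image-to-image on an aerial photo."""
    import torch
    from diffusers import StableDiffusionXLImg2ImgPipeline

    pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        torch_dtype=torch.float16,
        variant="fp16",
    ).to("cuda")  # Colab GPU runtime
    result = pipe(
        prompt=prompt,
        image=init_image,
        strength=strength_for(years),  # how far to drift from the input
    )
    return result.images[0]

def launch_ui():
    """Minimal Gradio front end: image upload + time-horizon dropdown."""
    import gradio as gr

    def run(image, years):
        prompt = f"this place {years} years in the future, transformed by climate change"
        return generate_future(image, prompt, int(years))

    gr.Interface(
        fn=run,
        inputs=[gr.Image(type="pil"), gr.Dropdown([50, 100, 250, 500])],
        outputs=gr.Image(),
    ).launch()
```

Scaling the denoising strength with the time horizon is one way to make a 500-year image visibly more transformed than a 50-year one while keeping the original layout recognizable.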

Challenges we ran into

While shaping the idea for our program, we encountered many challenges. One was deciding whether our program should conform to framed art or contextual art. We wanted our product to be artistic as well as informative, which proved difficult: once the generated image carried the level of information we desired, it barely resembled art. As a result, we switched to generating more artistic pieces, trying to stick to the "art" theme of SAAI. In the end, however, we realised we should stick with what we know best: our original idea! We developed and discussed our original product until it was well-rounded and had an impact on those who use it. Successfully overcoming this problem motivated us to keep working.

For the mock product, another problem was how to generate our sample output. At first, we wanted to train a stable-diffusion image generator on our desired images so it would produce our intended product, which would then serve as the sample. However, with only three days left and school work overlapping with the hackathon, we didn't have time to complete the technical work of training the AI model to our liking. We worried that the resulting lack of technical depth in our project proposal would hurt us. In the end, we decided we just needed to give a taste of our idea, which could be done using existing models like ChatGPT!

What we learned

  • Art can make the future feel visceral and emotionally moving in a way data alone cannot, thanks to its visual presence.
  • People respond emotionally to images of familiar places that have changed over time.
  • Climate change is one of the biggest concerns of the modern day; if it is not addressed and highlighted as a major issue, our generated art could one day become a real representation. This is why we must raise awareness of climate change through various means, one of them being Climate Continuity.

What's next

  • We would love to add a more detailed user interface: a genuinely friendly face through which users interact with our product.
  • We would love to strike a specific balance between realism and more artistic styles in our product.

Built With

  • gradio
  • huggingface
  • python
  • stable-diffusion
  • torch