This AI music video was inspired by my mother-in-law who recently passed away. She enjoyed a full, happy life, but after suffering a stroke in her later years, her mind became trapped in a body that no longer worked. Although she remained mentally sharp, she was bedridden for the last two years of her life. This piece is a reflection on mortality and how the mind can still travel freely, even when the body cannot, especially while we dream.

When we dream, isn’t it strange how images blend and scenery shifts into something else without us ever questioning it in the moment? I tried to capture that surreal quality in this piece through constantly changing body parts and intentionally glitchy effects, to mimic the way a dream feels.

As with most of my videos, I began with the music before I had any visual ideas. I used Suno to spark inspiration, generating hundreds of songs until one stood out as catchy and worth developing. I wrote the lyrics with ChatGPT and adjusted them by hand before importing them into Suno, since I prefer that to Suno's built-in lyric tool. Once the song felt right, I started building the visuals.

Most of my images were created locally with Stable Diffusion, but Krea served as my primary workstation: it streamlined my workflow for editing images, testing models, and organizing animations. For video, I chose the WAN 2.1 model for many of the shots because its glitchy, fast-paced movement felt dreamlike, and I used Kling for lip-syncing.

This video took about two weeks to complete, mostly during my free time outside of work. I had a great time creating it and I hope you enjoy watching it.

Built With

  • klingai
  • krea
  • stablediffusion
  • suno
  • wan2.1