Inspiration

Happy Radio Time was inspired by the visual language of Korean stage plays and the emotional pacing of folk storytelling. The video reimagines a mythic tale of separation, transformation, and missed reunion—told through modular AI visuals and anchored by a shared radio that connects father and son across time.

What it does

This music video tells a nonlinear story:

  • It opens with a man shipwrecked on a remote island, where he meets a woman and starts a family
  • Then flashes back to his childhood, listening to an old radio with his sailor father at the docks
  • Before leaving, the father gives the boy the radio
  • As a young adult, the boy sets out to find his father, clutching the radio as a lifeline
  • He’s shipwrecked, builds a life on the island, and eventually returns to civilization with his family
  • In a quiet twist, the father is shown shipwrecked on the same island, sitting beside the radio the son left behind
  • The final scene shows the family sharing a joyful Korean meal, unaware that the father has arrived on the very island they left behind

The radio becomes a symbol of memory, connection, and emotional inheritance.

How we built it

The video was built organically, without storyboards. We broke the story into three parts and developed each scene using modular prompts, one per shot, to maintain runtime safety and emotional clarity. Because visual consistency is difficult to maintain across AI generations, we relied heavily on reference images to preserve continuity.

The music had already been written, produced, and released before the video. We restructured and trimmed the track, making sample changes while keeping the runtime consistent, to fit the constraints of platforms like Instagram and TikTok. The video was built entirely around the emotional rhythm of the song, with transitions timed to match its pacing.

We used a mix of cuts and fading transitions depending on the emotional tone of each scene. Each shot was treated as a discrete visual unit, allowing for precise control over pacing, lighting, and symbolic continuity.
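Treating each shot as a discrete unit with its own outgoing transition can be sketched as a small timing plan. This is only an illustration of the approach: the shot names, durations, and fade lengths below are hypothetical, not taken from the actual edit.

```python
# Sketch of a shot-by-shot timeline where each shot either hard-cuts
# or crossfades into the next. A crossfade overlaps the two shots,
# so the next shot starts before the current one ends.

from dataclasses import dataclass

@dataclass
class Shot:
    name: str
    duration: float          # seconds
    transition: str = "cut"  # "cut" or "fade" into the NEXT shot
    fade: float = 1.0        # crossfade length when transition == "fade"

def timeline(shots):
    """Return (name, start, end) for each shot; fades overlap neighbors."""
    out, t = [], 0.0
    for i, s in enumerate(shots):
        out.append((s.name, t, t + s.duration))
        t += s.duration
        if s.transition == "fade" and i < len(shots) - 1:
            t -= s.fade  # next shot begins during the outgoing fade
    return out

# Illustrative shot list (names/durations are placeholders):
shots = [
    Shot("shipwreck", 8.0, "fade"),
    Shot("docks_flashback", 10.0, "cut"),
    Shot("radio_gift", 6.0, "fade"),
]
for name, start, end in timeline(shots):
    print(f"{name}: {start:.1f}s -> {end:.1f}s")
```

Because each shot carries its own transition choice, emotional beats can be retimed individually: softening a cut into a fade shifts every later start time without touching the shots themselves.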

Challenges we ran into

  • Maintaining visual consistency across scenes
  • Avoiding runtime overload and motion conflicts in multi-character shots
  • Preserving nonlinear clarity without dialogue or voiceover
  • Balancing emotional pacing with platform runtime constraints

Accomplishments that we're proud of

  • Created a visually coherent, emotionally resonant music video
  • Embedded symbolic storytelling through props and scene structure
  • Delivered a complete mythic arc in under 2 minutes without exposition
  • Remixed the original track to fit platform constraints while preserving emotional integrity

What we learned

We learned how to adapt nonlinear storytelling into modular AI video formats, simulate theatrical logic without dialogue, and balance emotional pacing with runtime safety. We refined our workflow for reference-based consistency, symbolic continuity, and remix logic across platforms.

What's Next

We’ll continue producing music videos across different genres, adapting each one visually to match the emotional rhythm and structure of its track. Two new music videos are currently in production.

Built With

  • kling
  • openart
  • runway