Inspiration

The inspiration for DAWNFALL came from a central paradox in science fiction:
If humanity could rewrite time, would we truly escape destruction—or simply repeat it?

The film’s worldbuilding—a future where a mysterious “Black Stone” can reverse time, yet civilization collapses into nuclear war—demanded visual ambition far beyond traditional shooting. This challenge motivated us to explore AI not only as a tool, but as a creative engine capable of shaping environments, moods, and even narrative symbolism.

DAWNFALL became a way to answer a larger artistic question:
Can AI participate in storytelling, not just image generation?


What it does

DAWNFALL is a hybrid sci-fi short film combining AI generation, CGI, and live action. More than 50% of the shots are AI-generated or AI-assisted.

The film showcases:

  • Post-war ruined cities
  • UE5-rendered spacecraft and planetary environments
  • AI-generated cosmic phenomena, explosions, and energy fields
  • Seamless visual integration achieved through color grading
  • A climactic scene where a pilot dives into the Sun to activate a time-reversal event

Beyond visuals, the film reflects on a philosophical core: if human nature remains unchanged, time itself may be trapped in a cycle.


How we built it

1. Identifying Missing Shots

After completing the rough cut, we compared the script with the edit to determine which scenes were missing—ruined landscapes, spacecraft motion, special lighting environments, and more.
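In practice this comparison boils down to a set difference between the shots the script calls for and the shots present in the cut. A minimal sketch of that checklist step (file names and the CSV layout are illustrative, not our actual assets):

```python
# Hypothetical shot-gap check: compare the script breakdown against the
# rough cut's shot list to find scenes that still need to be generated.
import csv

def load_shot_ids(path: str) -> set[str]:
    """Read one shot ID per row from the first CSV column."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0].strip() for row in csv.reader(f) if row}

scripted = load_shot_ids("script_breakdown.csv")   # every shot the script calls for
edited   = load_shot_ids("rough_cut_shots.csv")    # shots present in the rough cut

for shot_id in sorted(scripted - edited):
    print(f"Missing shot: {shot_id}")  # e.g. ruins, spacecraft motion, lighting setups
```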

2. Static Frame Drafting (ComfyUI + Flux + Sora I2I)

We used leftover on-set frames and UE5 block-out scenes as inputs.
The workflow included:

  • Resampling via KSampler nodes
  • Latent noise injection and multi-step denoising for detail enhancement
  • Tuning steps and denoise to balance structure vs. creativity

This iterative generation process required extensive “rerolling” to achieve consistency.
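Rerolling like this can be scripted against ComfyUI's HTTP API. A minimal sketch, assuming a workflow exported via "Save (API Format)" whose KSampler node happens to have ID "3" (node IDs, file names, and the server address will differ per setup):

```python
# Sketch: queue KSampler "rerolls" against a local ComfyUI server.
import json
import random
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188/prompt"
KSAMPLER_NODE = "3"  # check your exported JSON for the actual node ID

with open("dawnfall_i2i_workflow.json", encoding="utf-8") as f:
    workflow = json.load(f)

# Sweep denoise strength to trade structure (low) against creativity (high),
# rerolling several random seeds at each setting.
for denoise in (0.35, 0.5, 0.65):
    for _ in range(4):
        workflow[KSAMPLER_NODE]["inputs"]["denoise"] = denoise
        workflow[KSAMPLER_NODE]["inputs"]["seed"] = random.randint(0, 2**32 - 1)
        req = urllib.request.Request(
            COMFYUI_URL,
            data=json.dumps({"prompt": workflow}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # ComfyUI queues the job and renders asynchronously
```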

3. AI Video Generation (T2V / I2V / F-I2V)

We chose different generation modes based on the needs of the shot:

  • T2V (text-to-video) — ideal for abstract or dreamlike sequences
  • I2V (image-to-video) — ensures stable camera motion and consistent style
  • F-I2V — integrates multiple reference frames; essential for character continuity

The spacecraft chase scene, for example, was built through F-I2V using many UE5 spacecraft renders as reference.
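The mode choice itself reduces to a simple dispatch on how many reference frames a shot has. The sketch below is illustrative only; `client` stands in for whichever backend (Runway, Vidu, Sora) serves the shot, and its method names are hypothetical, not a real SDK:

```python
# Illustrative mode dispatch; none of the client method names come from a real SDK.
from dataclasses import dataclass, field

@dataclass
class Shot:
    prompt: str
    reference_frames: list[str] = field(default_factory=list)  # paths to stills

def generate(shot: Shot, client):
    if not shot.reference_frames:
        # No visual anchor: T2V suits abstract or dreamlike sequences.
        return client.text_to_video(prompt=shot.prompt)
    if len(shot.reference_frames) == 1:
        # One anchor frame: I2V keeps camera motion and style stable.
        return client.image_to_video(image=shot.reference_frames[0],
                                     prompt=shot.prompt)
    # Multiple anchors (e.g. UE5 spacecraft renders): F-I2V preserves
    # character and vehicle continuity across the sequence.
    return client.frames_to_video(images=shot.reference_frames,
                                  prompt=shot.prompt)
```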

4. Post-production (DaVinci Resolve)

All AI, CGI, and live-action shots were imported and unified through:

  • Color grading to ensure style coherence
  • AI re-generation or paint-over for problematic frames
  • Topaz Video AI for super-resolution and cleanup

The result is a visually unified film where AI shots blend naturally with practical and CGI footage.
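Part of that unification can be scripted through DaVinci Resolve's built-in Python API. A minimal sketch that pushes one shared LUT onto every clip on a video track (the LUT path is a placeholder, and the script must run inside Resolve's scripting environment):

```python
# Sketch: apply one shared LUT to every clip on video track 1 so AI, CGI,
# and live-action shots start from a common grade.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
timeline = project.GetCurrentTimeline()

SHARED_LUT = "/path/to/dawnfall_unify.cube"  # placeholder path

for item in timeline.GetItemListInTrack("video", 1):
    # SetLUT(nodeIndex, lutPath) targets node 1 of each clip's grade.
    if not item.SetLUT(1, SHARED_LUT):
        print(f"LUT failed on clip: {item.GetName()}")
```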


Challenges we ran into

1. AI unpredictability

AI models produced inconsistent outputs across iterations, especially for character identity, facial continuity, and multi-perspective scenes.

2. Maintaining style consistency

Different AI tools (Stable Diffusion, Runway, Vidu, Sora) each had distinct aesthetics. Unifying them required heavy grading and careful referencing.

3. Ensuring dynamic coherence

I2V and F-I2V struggled with maintaining subtle continuity—lighting shifts, reflections, texture stability—requiring manual refinement or regeneration.

4. Complex multi-tool workflow

The pipeline involved UE5, ComfyUI, DaVinci Resolve, Topaz, and multiple AI models. Keeping assets consistent across platforms was a major operational challenge.
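One mitigation is a single manifest that every tool reads and writes, keyed by shot ID, so UE5 renders, ComfyUI outputs, and the Resolve conform all reference the same record. The sketch below is illustrative; field names and paths are assumptions:

```python
# Illustrative cross-tool manifest: one JSON record per shot.
import json

manifest = {
    "SC042_SHOT07": {
        "ue5_render":   "renders/ue5/SC042_SHOT07_blockout.exr",
        "comfyui_pass": "gen/comfyui/SC042_SHOT07_v03.png",
        "video_pass":   "gen/video/SC042_SHOT07_fi2v.mp4",
        "status":       "awaiting_grade",  # tracked through the Resolve conform
    },
}

with open("dawnfall_manifest.json", "w", encoding="utf-8") as f:
    json.dump(manifest, f, indent=2)
```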


Accomplishments that we're proud of

  • Over 50% AI-generated footage, integrated smoothly into the final film
  • A hybrid AI + CGI workflow where UE5 provides structure and AI expands detail
  • Recognition from several film festivals, including Best Visual Effects awards
  • Demonstrating that AI can be a storytelling partner, not just a patch tool

The project shows that AI filmmaking is evolving from experimentation toward true production capability.


What we learned

  • AI is a creative paradigm, not a single tool
  • Color grading is the key to visual coherence
  • Strong results come from the synergy of live action + CGI + AI + post
  • Controllability is more important than raw fidelity
  • Image-generation and video-generation models require different strategies
  • Model ensembles outperform relying on one model alone

What's next for DAWNFALL

1. Narrative-level AI assistance

We aim to let AI contribute to story rhythm, previz, and scene design—beyond just image synthesis.

2. More controllable production pipelines

Developing a robust workflow based on Flux, Sora, and next-gen controllable models.

3. Expanding the DAWNFALL universe

Possible directions:

  • Origins of the Black Stone
  • Civilizations surviving post-nuclear collapse
  • Parallel worlds after time reversal

4. Toward an AI-driven filmmaking studio

Building a unified pipeline: Script → Storyboard → 3D Previz → AI Generation → Post. A fully scalable model for future AI-assisted filmmaking.


Built With

  • comfyui
  • davinci-resolve
  • flux
  • runway
  • sora
  • stable-diffusion
  • topaz
  • unreal-engine-5
  • vidu