Inspiration
The Awakened began as a sci-fi novella I co-authored, inspired by a question: if reality is a simulation, what is truly real? We placed Genghis Khan inside a virtual world, forcing him to confront his existence as a mere character in a game. The story explores identity, free will, and the human condition across history and myth. For years as a filmmaker, I found my imagination constrained by budget; The Awakened, a high-concept, high-budget vision, was shelved by those traditional limitations. Now, with generative AI, we can finally bring this universe to life, breaking free of industrial constraints and empowering new creators. Embracing AI is not just about realizing my own vision; it is about democratizing creativity for the next generation of storytellers. This trailer is the prologue to a multi-season saga that draws on the scale of Game of Thrones, the cybernetic mood of Ghost in the Shell, and the paradigm shifts of The Matrix.
What it does
“The Awakened” is a cinematic AI-generated trailer that visualizes a high-concept sci-fi world where Genghis Khan discovers he is only a character inside a virtual game. The trailer introduces the core conflict of the larger multi-season saga: identity vs. simulation, free will vs. control, and humanity vs. algorithmic power. It functions as both a narrative proof-of-concept and a worldbuilding prototype, demonstrating how generative AI can bring epic-scale storytelling, spanning ancient history, mythology, and cyberpunk futures, to life without traditional production barriers.
How we built it
We built The Awakened trailer through a multi-model AI filmmaking pipeline that mirrors real cinematic production. The process began with transforming our 200,000-word novel into a 10,000-word outline using ChatGPT, then distilling it into action beats, trailer beats, and a four-act trailer script. Each act was broken into scenes and then shot-level direction, allowing us to design the trailer with the precision of traditional pre-production.
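The novel-to-shotlist breakdown described above can be sketched as a recursive distillation loop. The sketch below is illustrative only: the `distill` stub stands in for the LLM summarization calls we made through ChatGPT, and the class names and word targets are assumptions, not our actual scripts.

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    description: str          # shot-level direction (framing, action, mood)

@dataclass
class Scene:
    summary: str
    shots: list = field(default_factory=list)

@dataclass
class Act:
    beat: str                 # the trailer beat this act dramatizes
    scenes: list = field(default_factory=list)

def distill(text: str, target_words: int) -> str:
    """Stand-in for an LLM summarization call (e.g. via ChatGPT).
    Here it simply truncates to the target length for illustration."""
    return " ".join(text.split()[:target_words])

def break_down(novel: str) -> list:
    """Novel -> outline -> four trailer acts -> scenes -> shots."""
    outline = distill(novel, target_words=10_000)  # 200k-word novel -> 10k-word outline
    acts = [Act(beat=distill(outline, 50)) for _ in range(4)]
    for act in acts:
        act.scenes = [Scene(summary=distill(act.beat, 20)) for _ in range(3)]
        for scene in act.scenes:
            scene.shots = [Shot(description=scene.summary)]
    return acts

acts = break_down("word " * 200_000)
print(len(acts))  # 4 acts, each broken into scenes and shot-level direction
```

In practice each level of this hierarchy was reviewed and rewritten by hand before moving to the next, which is what gave the trailer the precision of traditional pre-production.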
Midjourney guided early visual exploration, while Flora became our primary multimodal engine for generating images and video. We attempted fully automated storyboards through Wan 2.2 with custom scripting; the attempt ultimately failed, but it helped refine our workflow. To keep character identities consistent, we trained nano-banana LoRA models for Genghis Khan, Mulan, Daji, and Tudur.
In CapCut, we edited the AI outputs like raw dailies: selecting takes, shaping performance rhythm, and identifying missing pickups. Some scripted beats were adjusted when certain visuals proved impossible to generate, echoing the improvisation found on real sets.
We used Hailuo for violent sequences and Sora 2 for large-scale battles and performance ideation; Sora 2 was essential in helping our AI artists grasp a shot's emotional energy when refining prompts. Dialogue came from Wan 2.5 and Sora 2, and narration from ElevenLabs. Except for the music, every element was created with AI.
This pipeline enabled us to realize epic, high-budget imagery once impossible under traditional production constraints.
Challenges we ran into
One of our biggest challenges was navigating the reluctance of real actors to participate in AI-related projects. Due to industry concerns and SAG guidelines, multiple performers declined involvement, which forced us to rely entirely on AI-generated performances. This meant we had to master directing AI “actors” from scratch, a process that required extremely precise prompts and left no room for the kind of spontaneous human interpretation we’d typically rely on.
We also faced hurdles with the battle scenes. The opening clash between the Mongolian and Greek armies at the Edge of Heaven demanded intense, violent visuals that some AI tools couldn’t handle or refused to generate. While Hailuo worked well for violent scenes, it sometimes lacked the spatial grandeur we needed, especially for maintaining character consistency in close-ups during fights. We often had to turn to tools like Runway for face-swapping when close-ups were unavoidable and continuously adjust our approach.
Ultimately, these challenges pushed us to innovate and refine our AI filmmaking pipeline, ensuring that we could still deliver a cinematic, cohesive trailer despite the constraints.
Accomplishments that we're proud of
We’re proud that we created a cinematic trailer for a large-scale sci-fi universe almost entirely with AI — a project that would traditionally require a full studio, large crews, and a multimillion-dollar budget. We transformed a 200,000-word novel into a four-act trailer with defined character arcs, emotional close-ups, large-scale battle sequences, and worldbuilding that spans epic historical warfare, futuristic societies, and distinct character designs.
We developed a multi-model pipeline integrating Flora, Sora 2, Hailuo, Runway, nano-banana LoRA training, and custom prompt systems, allowing multiple AI artists to collaborate efficiently across hundreds of shots. We also built a detailed production-tracking spreadsheet to manage workflows, versions, and shotlists, making the entire pipeline structured and scalable.
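The production-tracking spreadsheet can be thought of as a shot-status table that the team filtered for outstanding work. The sketch below shows the idea in code; the column names and status values are assumptions for illustration, not the actual schema of our Excel sheet.

```python
import csv
import io

# Illustrative columns for a shot-tracking sheet (assumed, not our real schema).
FIELDS = ["shot_id", "act", "scene", "tool", "version", "status"]

rows = [
    {"shot_id": "A1-S2-03", "act": 1, "scene": 2, "tool": "flora",
     "version": "v3", "status": "approved"},
    {"shot_id": "A2-S1-01", "act": 2, "scene": 1, "tool": "hailuo",
     "version": "v1", "status": "needs pickup"},
]

# Export the sheet as CSV so every AI artist works from the same shotlist.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

# Filter for outstanding shots, the way we scanned for missing pickups.
pending = [r for r in rows if r["status"] != "approved"]
print(len(pending))  # 1
```

Keeping version and tool per shot is what made the multi-tool pipeline scalable: any artist could see which model produced a take and whether a newer version superseded it.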
We are especially proud of the emotional performances we achieved — from Daji’s subtle, dangerous elegance to Genghis Khan’s internal conflict and awakening — as well as the action sequences created by strategically combining the strengths of different AI tools. Despite actor rejections, tool limitations, and the technological difficulty of battle scenes and character consistency, we produced a trailer that captures the scope, philosophy, and emotional heartbeat of The Awakened.
What we learned
What's next for The Awakened | Film Trailer | Sci-Fi
Built With
- alt-takes
- capcut
- chatgpt (story breakdown)
- controlla
- dialogue
- elevenlabs
- excel
- flora
- hailuo (violent/action scenes)
- midjourney (visual ideation & concept design)
- nano-banana (LoRA fine-tuning for main characters)
- performance-ideation
- runway (face replacement & shot fixes)
- scenes
- script
- sora2
- wan2.2 / wan2.5 (storyboards)