Inspiration
Station-0 grew out of real research into the future of high-power industries. As I learned about proposals to relocate data centers off-world to reduce Earth's energy load and environmental impact, I became fascinated by the risks involved, especially the dependence on a tethered link to send data back down and how vulnerable that link could be. That tension between innovation and vulnerability became the foundation of the story.
I was also driven by a technical goal: to blend traditional CGI in Unreal Engine with AI-generated visuals in a way neither tool could achieve alone. Unreal handles the dynamic space shots AI can't yet create with the same fidelity, while AI powers the characters and the more grounded sections of the film, and was also used to create and iterate on the overall visual style.
What it does
This project is a sci-fi film trailer proof of concept that showcases the story, tone, and visual approach for a larger feature set in the first orbital data center.
How we built it
This project was created using a wide range of image and video generation tools: Krea, Google Flow (VEO 3.1) for video, and MidJourney. I used Unreal Engine to build the space station and orbital environments from a variety of assets, so the shots AI couldn't reliably produce could be rendered with the kind of camera movement only Unreal makes possible, elevating the film. All final imagery and sequences were assembled and refined in DaVinci Resolve Studio, using its effects, color tools, and post-processing to unify the look and add motion, depth, and polish to the AI-generated material.
Challenges we ran into
The biggest challenges came from AI’s struggle with physics and precise character motions, especially anything involving hands, tools, or complex actions inside a cramped environment. Even with the newest image tools like NanoBanana and Seedream, getting consistent character positioning—sometimes even in a clean reference pose against a white wall—proved difficult. Maintaining continuity across shots and keeping characters correctly oriented also required workarounds and careful curation.
Accomplishments that we're proud of
I'm especially proud of how well the Unreal Engine environments blended with the AI-generated imagery, creating a cohesive visual style that feels cinematic and large-scale. The hybrid workflow proved the project’s core idea—that traditional 3D and modern AI tools can meaningfully complement each other. Seeing those two worlds merge smoothly was a major win.
What we learned
Going forward, integrating a controllable image model like Qwen inside ComfyUI should make positioning, continuity, and motion far more accurate. Combining these improvements with Unreal Engine will allow for even more precise, cinematic sequences and a smoother fusion of AI-generated imagery and traditional 3D rendering.
What's next for Station-0
I'm excited to push this even further by building more interior backdrops entirely in Unreal and then using pose-controlled AI to merge characters and environments with even greater accuracy and realism. Additionally, I plan to expand the scope of the trailer to add more to the story and consider options to turn it into a full narrative film (probably in the 20-minute range, though a feature-length version would, of course, allow the world to grow further).
Built With
- davinci-resolve
- google-flow
- krea
- multiple-additional-img&video-models
- unreal-engine
