Inspiration

The idea for this project first came to me in 2013. I wanted to make it as a short film. Later I discovered the music of Dance With The Dead and realized that their music aligned perfectly with this story. The music triggered very specific shots in my head.

At that time I had no team and no skills to realize it, so I postponed the project. I occasionally remembered it and thought it would be good to return to it in the future.

The emergence of AI tools became the turning point. I understood that this was a real chance to finally produce the film. I was inspired by 1980s thrillers and horror films, and by the directing style of John Carpenter, Steven Spielberg, Robert Zemeckis and that entire era. I aimed for the visual and emotional atmosphere of 1980s genre cinema.

What I Learned

Working on this project taught me several things:

  • AI tools have strong limits when you try to match a very precise mental image.
  • Simple actions, such as a close-up of a hand holding an object, can be harder to generate than spectacular fantasy shots.
  • Achieving a convincing 1980s film look requires more than adding a “film” filter. It depends on contrast, lighting style and camera behavior.
  • Clear communication with a model is a skill. I had to invent many tricks to show the model exactly what I wanted.
  • Perfection is not always possible. At some point I had to accept good results instead of ideal ones to finish the project.

How I Built the Project

My workflow was a combination of several tools and many iterations:

  1. I generated video mainly with Google Veo. I may have also used some ByteDance tools, although the interface did not clearly show which model ran each generation.
  2. I set up sequences of shots using reference frames (often a StartFrame and an EndFrame).
  3. I created reference images in NanoBanana (using one root reference) and Midjourney.
  4. I relied heavily on NanoBanana. I generated thousands of frames and tried many variations.
  5. When NanoBanana could not understand the pose or composition, I prepared rough sketches in Photoshop and asked the model to correct and refine them.
  6. I constantly pushed the image toward a realistic 1980s cinematic look, adjusting prompts, reference frames and compositions to keep the contrast and sharp lighting typical of that period.
  7. For each important shot I made dozens of generations in order to approximate the frame that had existed in my head since 2013.

Challenges

The project faced several technical and artistic difficulties:

  • NanoBanana often produced a slightly cartoonish style instead of the realistic look that I wanted.
  • Asking for “cinematic” or “film-like” images often led to low-contrast pictures that looked like modern filters, not like real 1980s or 1990s film scans.
  • Many faces shifted toward stylized or animated appearances. I sometimes accepted this due to time limits, although I wanted more natural faces.
  • Some of the most basic actions were hard to generate correctly. For example, a hand holding an object in a specific way often required many attempts.
  • Long shots were difficult. When a shot needed to stay dynamic and clean for its entire duration, artifacts and glitches became a serious problem.
  • Video models often produced fixed durations, such as 5 seconds. When I needed a 1-second shot, I had to design prompts so that the useful action happened in that short interval, then cut off the rest during editing.
  • Even after many iterations, some frames never reached the exact composition, light and camera movement that I imagined. The models simply could not reproduce every nuance.
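On the fixed-duration point above: the write-up does not say which editor was used for the cut, but as one practical sketch, a standard tool like ffmpeg can trim a longer generated clip down to the usable first second. The filenames here are hypothetical, and a synthetic test-pattern clip stands in for a real model output:

```shell
# Hypothetical sketch: synthesize a 5-second stand-in clip
# (a real workflow would start from the model's generated file).
ffmpeg -y -f lavfi -i testsrc=duration=5:size=320x240:rate=24 generated_shot.mp4

# Keep only the first second: -t 1 limits the output duration.
# -c copy avoids re-encoding; drop it for frame-accurate cuts
# when the cut point does not land on a keyframe.
ffmpeg -y -i generated_shot.mp4 -t 1 -c copy shot_trimmed.mp4
```

The same `-ss`/`-t` pattern works for pulling a short action out of the middle of a clip rather than the start.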

Despite these limitations, the results sometimes surprised me. Certain AI-generated shots exceeded the images I had originally held in my mind and expanded my sense of what this story could look like on screen.

Built With

  • envato
  • googleveo3
  • midjourney
  • nanobanana