Inspiration

The launch film began with a simple idea: What if a browser could introduce itself?

We created this film to promote Opera Neon, an experimental AI-driven browser where integrated “agents” can act on your behalf. Instead of explaining that in a traditional tech-demo format, we wanted to let the browser speak directly as a character.

What it does

The film presents a humanoid AI “browser”: half live-action performer, half generative identity. It talks about how an AI agent thinks, plans, and collaborates.

How we built it

We shot the performer traditionally on location with a full crew, then used generative AI to transform the lead into a robot and clean up selected shots. The material was composited using standard VFX tools, resulting in hybrid shots that combine live action, AI outputs, and digital compositing. The entire film was completed in 3–4 weeks, with all VFX executed by a single artist in roughly 40 hours.

Challenges

We had to maintain a consistent visual identity across AI generations, match real on-set lighting with generative imagery, and preserve continuity in look and performance between outputs. The work was done in a pipeline where the underlying tools were evolving on a near-daily basis, which added further complexity.

Accomplishments

We created a convincing hybrid human/AI character, built a workflow that supported live generation of shots during the edit, and delivered more than 30 AI-driven VFX shots on an extremely compressed timeline. The project was covered widely across major tech news outlets.

What we learned

Hybrid filmmaking requires rapid iteration, flexibility, and new habits.

Built With

  • after-effects
  • banana
  • chatgpt
  • gemini
  • kling
  • nano
  • seedance
  • seedream