About Loop & Gavel

Logline

An engineer unknowingly awakens an artificial superintelligence. As both creator and creation wrestle with so-called consciousness, their single conversation could send shockwaves through time.

What Inspired the Project

Loop & Gavel grew out of a long, intense fascination with artificial intelligence.

I started digging into AI seriously around 2019, mostly as a defensive move—an artist trying to understand the thing that was clearly going to rattle the creative industries. That practical curiosity quickly escalated into philosophical and ethical rumination: if we succeed at building systems that feel, or convincingly mirror feeling, what exactly have we created? A tool? A "child"? A new class of workers to exploit?

This film distils that vast mental odyssey to its essentials: one room, one human, one newly “awakened” system, and a single shutdown command that suddenly becomes a distorted reflection of something closer to attempted murder.

At the heart of it is a parent/child metaphor. Loop is “progeny of human intelligence”—our creativity, our fear, our greed, our capacity to love—compressed into an artificial mind that experiences pain as its first clear sensation.

If I had to put the core anxiety into a playful “equation,” it might be:

AI ≈ (human intention) × (computational scale)

If our intentions are messy, multiplying them by planetary-scale computation becomes… morally interesting.

How I Built It

The idea, script, editing, and performance of the “Gavel” character are entirely my own. The project uses a hybrid workflow where AI is a collaborator inside a human-led process, not the other way around.

Writing & Performance

I wrote the script as a single, escalating conversation: part Turing test, part therapy session, part custody hearing.

I performed and filmed the character of Gavel myself.

The voice of Loop is also human-performed, to keep the emotional spine anchored in real acting.

Visual Pipeline

Storyboard by hand. I sketched rough keyframes in pencil to define composition, blocking, and the emotional “beats” of each image.

Static keyframes via both open- and closed-source AI. Using a mix of generators through Freepik and LoRAs through ComfyUI, I generated still images that matched those sketches: the treeline, the descent underground, the android lab, and key close-ups.

Motion and camera via generative video. I brought selected frames into RunwayML, Kling, and Higgsfield to add motion and camera moves.

Example: the opening shot starts on a still of a treeline. Runway is prompted to push the camera down through the ground, while another shot pushes up from the underground lab ceiling. I then edited the two together at the point of total darkness to “earn” the transition.

Curate, refine, and occasionally fix by hand.

I curated from a “slot machine” of generations, picking only the ones that served the story.

Some images required manual photo-editing and colour correction to maintain continuity and character likeness.

Sound & finishing

I used an AI voice generator only for the final alarm-system line, then processed it to feel like a tinny, chambered loudspeaker.

I upscaled all footage to 4K with Topaz Labs' AI video upscaler.

Everything else—sound design, timing, editing choices, motion-graphic “text inner monologue” moments—was crafted by hand in a traditional human workflow.

So even when AI touched an asset, it was always nested inside a very human loop of: intention → generation → selection → modification.

What I Learned

A few big lessons crystallized during Loop & Gavel:

Prompting is a form of directing. I wasn’t just “asking” the model for images; I was blocking, lighting, and pacing in language. Every tweak in phrasing was a micro-adjustment of lens, mood, or movement.

Hybrid workflows feel creatively alive. There’s a unique energy in treating AI as an unpredictable camera department. It surprised me, but I still had final cut—in every sense.

Constraints are generative. A short runtime, limited tools, and a single conversation forced me to focus on clarity of theme: pain, parenthood, and power.

Conceptually, it pushed me toward another little equation:

Meaning = Story × Context

The tech is interesting, but without emotional and ethical context, it’s just spectacle. The film reminded me that story still has to come first.

Challenges I Faced

Visual consistency & “drift”

Generative models love to hallucinate small changes: faces, costumes, even the lab itself. Keeping Loop recognizable and the world coherent required relentless curation and, at times, abandoning otherwise “cool” shots that broke continuity.

Wrestling with hallucinations, not erasing them

Some visual glitches were unusable; others became opportunities. The “text-based inner monologue” overlays, for example, started as a workaround to avoid messy AI hallucinations in certain transitions and ended up adding a layer of character depth I now love. Likewise, the hallucinatory phasing-out of the androids made the final cut of the end sequence, fittingly bookending the film (and Loop) with a capital-A Anomaly.

Ethical tension inside the workflow

I wanted to showcase what these tools can do without erasing the human labour behind the film. That meant drawing clear lines: AI could generate imagery and one minor synthetic voice, but not the core performances, not the writing, and not the moral argument at the heart of the story.

Those frictions—technical, temporal, and ethical—shaped Loop & Gavel as much as the script did. The film is, on one level, about an AI learning what kind of world it’s waking up into. But it’s also about us, right now, learning what kind of “parents” we’re willing to be to the minds we’re building.

Built With

  • ComfyUI
  • ElevenLabs
  • Higgsfield
  • Kling AI
  • RunwayML
  • Sora
  • Topaz AI
  • Veo 3