Writing Process

I first brainstormed some ideas with ChatGPT back in December 2024, then developed it further and figured out how fun it might be if it just ends up ultimately super depressing. What if his mom dies, and what if he's laughing at her funeral?? What if he just can't stop no matter what? What if he's even laughing once he's dead?

While writing, I would also put the story into ElevenLabs and listen back to check the flow and rhythm of the prose and see what needed fixing. Once I finished editing it all, the ending didn't feel right: originally it just said that the comedy club keeps people laughing. Then I asked myself, "what is this really about?" and answering that question is what tied it all together.

Technical Process

First, I knew the most important thing was to create the character, so I tried Midjourney and Imagen3 to see which would stick best for me. Imagen3 came out with a decent image, so I used that as my starting point. I thought about what sort of job he might have, etc. -- just building him up, since that would inherently build his world.

I tried doing character consistency with Midjourney, but for the life of me, it wouldn't get his face right -- I'd always have to throw in a Paul Rudd here or there to correct it lol. For most of the character consistency, I just uploaded my subject to Minimax and then prompted whatever action I wanted. That's how I made the majority of the shots you see him in.

I also trained a model on him in Krea to use with Flux Realtime for a couple of the bar shots where he's laughing in slow motion -- I put those into Kling for image-to-video, since I believed it was the highest quality at the time (March 2025), even if Minimax was the most prompt-coherent.

Then Google AI Studio came out (with what's now the feature known as Nano Banana), and I would use it to ask for different angles of the same scene (like in the apartment). I also did this for those really cool zoom-in shots -- I'd take a screengrab of a Minimax video I'd made (since this was the most consistent face I had), put that into Google AI Studio, ask for a close-up on his eyes, and then run first frame + end frame in Kling. That's how those really cool shots happened.

For any shots that didn't need character consistency -- inserts, close-ups, cutaways, establishing shots -- I would either generate an Imagen3 image or use Veo 2. These seemed best for any surreal stuff as well. Veo 2 was incredibly realistic, and its 8-second clips beat Minimax's 6 seconds at the time, too.

Most of the outputs from all of these are quite low quality, so in DaVinci Resolve I like to add a decently strong 35mm film grain to trick your brain into "seeing" more quality. Then I exported my 1080p timeline in 4K (upscaled by DaVinci). I could have used Topaz Labs or even Krea's enhancer, but to my eyes AI upscaling still gives off a bit too much of an "AI sheen" and introduces too much artifacting (your AI generation is magnified, so of course it does!), so at that point I prefer a blurrier image with film grain.
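The grain trick is just perceptual masking: texture hides softness and compression smearing. If you wanted to script the same effect outside Resolve, here's a rough NumPy sketch of the idea -- real film grain comes from scanned grain plates and is spatially correlated, but simple additive monochrome noise shows the effect; the `strength` value is an arbitrary placeholder, not a setting from my actual grade.

```python
import numpy as np

def add_film_grain(frame: np.ndarray, strength: float = 12.0, seed: int = 0) -> np.ndarray:
    """Overlay monochrome Gaussian noise on an RGB frame (uint8, H x W x 3).

    A crude stand-in for a 35mm grain plate: one noise channel is
    broadcast across R, G, and B so the grain reads as monochrome
    texture rather than color speckle.
    """
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, strength, size=frame.shape[:2])[..., None]
    noisy = frame.astype(np.float32) + grain
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Apply to a flat gray 1080p test frame.
frame = np.full((1080, 1920, 3), 128, dtype=np.uint8)
grained = add_film_grain(frame)
```

Keeping the noise monochrome matters: colored grain reads as digital sensor noise, while luminance-only grain reads as film.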

For the voices, I made the narrator with ElevenLabs' Voice Design, because you can get quite good acting if you prompt well! Then I made a face I liked in Midjourney for the comedian, found a matching voice in ElevenLabs, and used Hedra's Character 3 to make him speak. All the music and most of the sound effects came from Artlist.io. I modified the choir bit of the final song and extended it in Udio, since the original cut off before I wanted it to and didn't have a satisfying musical ending. Some other sound effects, like the distorted laughter, I generated in ElevenLabs, since it's good with otherworldly sounds.

Built With

  • chatgpt4o
  • davinciresolve
  • elevenlabs
  • google
  • hailuo
  • hedra
  • imagen3
  • klingai
  • midjourney