Inspiration
Malignant Lunacy started with a revolutionary rapper from Washington, DC whose lyrics already hit like a system audit. The song calls out the insanity of a machine that curates destruction, glamorizes it, and sells it back to the very communities it harms.
Because this is his introduction to the world, I didn’t just want a cool visualizer. I wanted to:
- Brand him as an artist + activist, not just an entertainer.
- Visually honor heroes of the past and present.
- Give people stuck inside the system a sense that there is a way out.
I translated that into a visual language of:
- Puppet strings attached not only to artists, but also executives and politicians. It’s easy to point the finger, but many who appear “empowered” are also trapped and incentivized by the system (without using any real likenesses).
- Split screens, warping, and following the strings to guide viewers from spectacle → system → liberation.
At the heart of it, I wanted audiences to walk away feeling that even though our collective circumstances may seem bleak, we can move toward a better life. That’s why the film ends on a hopeful note, not despair.
What it does
Malignant Lunacy is an AI-enhanced narrative music film that:
- Follows a truth-telling emcee through a seductive world of industry spectacle and systemic control.
- Exposes how politicians, executives, and rappers collude to glamorize violence for profit—while communities bear the cost.
- Uses high-gloss club imagery, split-screen contrasts, and surreal chorus sequences to fracture the illusion of “harmless entertainment.”
- Turns the artist’s voice into a literal, cinematic force that snaps puppet strings attached to artists, institutions, and media.
Functionally, the project:
- Delivers a cinematic music video at a level of production value far beyond the artist’s actual budget.
- Blends AI-generated imagery, fair-use news clips, licensed stock footage, and minimal real footage into a cohesive story.
- Serves as a proof-of-concept for how AI can democratize high-end visuals for independent artists and socially engaged storytellers.
How we built it
This project was built under three major constraints:
1) The artist is on the East Coast, I’m on the West Coast.
2) He had almost no media footprint.
3) The budget was far below what this level of production usually costs.
1. From One Selfie to a Lead Character
- I started with a single selfie—no professional photos, no performance footage.
- Using AI editing and image-to-video tools, I:
- Refined his face.
- Developed consistent “hero looks.”
- Ensured he felt like the same person across multiple shots and sequences.
- The entire workflow was image-to-video, which helped maintain consistency even across different AI models.
2. Rebuilding DC Without Leaving My Desk
The story is deeply rooted in Washington, DC—its landmarks and political symbolism—because that’s where we’re both from.
- Many “location” shots are AI-generated DC-like environments that had to feel truly on-location.
- I generated multiple variants per landmark and:
- Kept shots where the architecture, geography, and mood felt grounded.
- Cut otherwise beautiful shots that contained hallucinations or impossible structures.
- The result: many viewers assume we shot on real DC streets and at real landmarks, even though a significant portion is AI-built.
3. Working Within Violence & Policy Constraints
The narrative includes violence because that’s part of the reality being critiqued—but AI models are (rightly) cautious about violent imagery.
To navigate this:
- I crafted prompts that implied violence without fetishizing it.
- I tested multiple models to see which could “cooperate” enough to tell an emotionally honest story while respecting policies.
- I leaned heavily on editing and implication—aftermath, reactions, shadows, symbolic imagery—instead of explicit gore.
4. The Hardest Problem: Ancestral Liberation
The final act depicts summoning ancestors and liberating both the industry and the city through music.
- I had a very specific, non-standard visual style in mind for these sequences.
- I got one generation that nailed the look—but the exact same prompt never reproduced it.
- I needed multiple shots of the same summoning effect for each ancestor.
Solution: I relied on coverage and creative editing to extend and remix the successful look across several shots, and used similar strategies across much of the liberation sequence.
5. Resource-Constrained Workflow Design
Because I was finishing this for The Chroma Awards on a tight budget:
- Many promo codes were already used up by other participants by the time I decided to submit, so I had to hunt for specials and free tiers, and use paid generations sparingly.
- I secured a few days of free use on top models, but could often only run a couple of generations at a time.
- I pulled multiple all-nighters to stay within resource limits and still deliver on quality.
- I used the cheapest viable models whenever possible—some of them produced shockingly good results.
- I discovered better prompting methods for color grading late in the process, so I manually matched and corrected the look in the edit.
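Matching the look of clips that came from different models can be roughed in programmatically before a manual pass. Below is a minimal sketch of statistical (Reinhard-style) color transfer; it illustrates the general idea of pulling one shot's color statistics toward a reference shot, not the Adobe-based grading workflow actually used on the film:

```python
import numpy as np

def match_grade(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift each channel of `source` so its mean/std match `reference`.

    A rough statistical color transfer: useful as a starting point before
    grading by eye, not a replacement for it.
    """
    s = source.astype(np.float64)
    r = reference.astype(np.float64)
    for c in range(s.shape[-1]):
        s_mean, s_std = s[..., c].mean(), s[..., c].std()
        r_mean, r_std = r[..., c].mean(), r[..., c].std()
        if s_std > 1e-6:  # skip flat channels to avoid dividing by zero
            s[..., c] = (s[..., c] - s_mean) / s_std * r_std + r_mean
    return np.clip(s, 0, 255).astype(np.uint8)
```

The same mean/std matching can be applied in a perceptual color space (e.g. Lab) for better results; per-channel RGB keeps the sketch short.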
Stack / Tools
- AI video & image tools:
- AI video & image tools: Higgsfield, Kling, Wan, Hailuo, Dreamina, Sync, Google tools (Veo, NanoBanana, Whisk), Saga, Seedance, Seedream, MiniMax
- Traditional tools: Adobe tools for editing, compositing, color, and finishing
- Source material: fair-use news clips and licensed stock footage from various libraries
I treated the AI tools like a distributed VFX and second-unit crew—designing sequences, batch-generating options, then hand-selecting and assembling the pieces into a cohesive film.
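The batch-generate-then-curate loop described above can be sketched as follows. Here `generate_variants` is a purely hypothetical stand-in for a model call (every tool in the stack has its own interface), and the example shot prompts are illustrative:

```python
import random

def generate_variants(prompt, n, seed=0):
    # Hypothetical stand-in for an AI video API call; a real pipeline
    # would hit each tool's API, and "score" would be a human judgment
    # during review, not a random number.
    rng = random.Random(seed)
    return [{"prompt": prompt, "take": i, "score": rng.random()} for i in range(n)]

def hand_select(candidates, keep=2):
    # The curation pass: keep only the strongest takes per shot.
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:keep]

shot_list = [
    "emcee under club strobes, puppet strings overhead",
    "strings snapping over the city skyline at dawn",
]
selects = {shot: hand_select(generate_variants(shot, n=6)) for shot in shot_list}
```

The point of the structure is the ratio: many cheap options generated per shot, few kept, with the director's judgment at the selection step.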
Challenges we ran into
1. No Real Footage of the Artist
- I had no professional footage or studio material of the artist.
- Building a believable, stable on-screen persona from a single selfie required:
- Experimentation with facial consistency, expression, lighting, and style.
- A lot of rejection of outputs that drifted off-model or looked “AI-weird.”
- Any mismatch would have broken immersion and undermined the seriousness of the message.
2. Iconic Locations With Zero Margin for Error
- DC landmarks are iconic; locals know every angle.
- Some of the most beautiful generations had to be thrown away because the architecture was impossible or subtly wrong.
- I used compositing hacks and grading tricks to make AI shots look less “AI,” to the point that many are indistinguishable from real footage.
- This pushed my editing software to its limits:
- Heavy effect stacks on each clip.
- Crashes and export issues.
- I had to hack my way through renders and, in the process, developed a more efficient method I’ll use next time.
3. Violence Without Exploitation
- Representing real harm and stakes without glorifying violence or violating content policies required:
- Careful framing.
- Symbolism and editorial rhythm.
- Emphasis on consequences, not spectacle.
4. Mixing Real Footage and AI Seamlessly
- Combining real footage with AI is risky:
- If AI looks fake, it breaks the world.
- If AI is too perfect, it can flatten the real footage.
- This forced me to refine:
- How I graded and processed each source.
- How I cut between AI and real shots so the audience never “falls out” of the film.
From a craft standpoint, this project convinced me to move toward more adjustment layers and/or node-based editing so I’m not overloading individual clips with effects in future projects.
Accomplishments that we're proud of
- As an army of one, I was amazed by how much became achievable by leveraging AI tools.
- We delivered a visually rich, politically sharp music film despite lacking the budget something like this typically demands.
- We introduced a new artist to the world not just as a rapper, but as a conscious voice and activist, with a fully branded visual identity built from a single selfie.
- We recreated the feeling of being in DC using AI-generated imagery so convincing that many viewers assume it was shot on real locations.
- We used AI in a way that supports the story and ethics:
- No glamorization of violence.
- Clear critique of systems, not communities.
- Respect for the lived reality of people on the ground.
- We built and battle-tested a resource-conscious AI workflow that can now be used to support other under-resourced artists.
What we learned
AI can radically democratize production value.
This film carries far more production value than the artist’s budget could ever traditionally buy. With thoughtful use of AI, that gap can be closed.

AI is not a magic button; it’s like a studio of unpredictable interns.
You still need a director. The real power is in:
- Shot design.
- Iteration and rejection.
- Editorial judgment.
Grounding matters.
By tying the visuals to real DC geography, history, and social context, the piece stopped feeling like an AI experiment and became a story that people can feel.

Constraints made the film better.
Limits on violence, budget, and compute forced more inventive choices:
- More metaphor and implication.
- Smarter editing and compositing.
- Clearer ethical decisions.
Most importantly, Malignant Lunacy proved that AI can open doors for artists who are usually left out of the high-production-value ecosystem. With careful design, ethical intent, and persistence, AI can be used not just to accelerate content—but to amplify difficult truths, reclaim narrative power, and give underrepresented voices a cinematic platform they’ve never had before.
What's next for Malignant Lunacy
Festival Run & Screenings:
The artist hasn’t officially released this record or the video, but I’ve been cleared to submit to additional AI, music, and social-issues festivals and to curate screenings in communities that relate directly to the themes.

Workshops:
I would love to offer workshops on this workflow to help further democratize this kind of storytelling for the communities that need it, but the artist is under-resourced, so something like this would require third-party financing.

Workflow for Other Artists:
Adapt the pipeline used here into a repeatable service model for independent artists—especially those with powerful messages but limited access to high-end production.

Artmerse & VIP Studios Slate:
Apply the lessons from Malignant Lunacy across my broader slate at VIP (Visionary Image Production) Studios and Artmerse, using AI to preserve culture, elevate underrepresented voices, and bring visionary imagery to life at scale.
Built With
- adobe
- dreamina
- google-veo
- hailuo
- higgsfield
- kling
- minimax
- nanobanana
- saga
- seedance
- seedream
- sync
- wan
- whisk
