Inspiration
The idea behind Eccedentesiast developed over time as I observed the world around me and focused on something I find incredibly prevalent today: the smile we wear while carrying things we don’t say out loud (thus the name). I wanted to create a music video that dives into that emotional duality, the tension between the smile we present and the feelings we suppress, and show how love and kindness still have the power to pull us back toward ourselves.
Every visual in the video explores a different human emotion, while love and kindness are the ones that stay present throughout the lyrics, eventually embedding themselves into the soul of the listener and viewer.
The skepticism around AI in creation also inspired this project. I wanted to show that even through AI, emotion can be expressed in a way that feels intimate, cinematic, and slightly surreal. It always comes down to intent.
What it does
It’s not an app or tool, so at first glance Eccedentesiast might look like it “does nothing.” But it does.
It explores a spectrum of human emotions (pain, love, kindness, confusion, inner noise) and translates them into a moving visual story. Its purpose is emotional, not functional.
How we built it
The video was created through a multi-model AI workflow:
- Midjourney for the base visuals of the artworks featured in the video
- Adobe Photoshop for editing and combining multiple images into final compositions
- Krea AI for quality enhancement and stylistic consistency
- Suno to create the music (sound + lyrics)
- Veo-3 to generate each video scene (20+ scenes in total)
- CapCut for assembling the video: editing, transitions, timing, and scene composition
- Topaz Astra for final 4K upscaling
- Custom prompting throughout to maintain a unified visual identity
Each scene was built individually using symbolic prompts, then carefully arranged to follow the emotional arc of the song, from tension to clarity through love and kindness.
Challenges we ran into
- Maintaining visual consistency across different AI models
- Working with limited upscaler credits and balancing quality vs. speed
- Preserving emotional tone across multiple generative pipelines
- Matching each scene precisely to the rhythm and meaning of the lyrics
- Ensuring the symbolism remained clear without becoming too literal
- Making the most out of very limited resources
These challenges required constant iteration, experimentation, and fine-tuning of prompts, styles, and transitions.
Accomplishments that we're proud of
- Completing a fully AI-generated multi-scene music video from scratch
- Creating the most cohesive and intentional AI video I’ve made so far
- Maintaining a consistent emotional and aesthetic language across the entire project
- Designing symbolic scenes that closely reflect the narrative of the song
- Establishing a smooth workflow between all tools involved
- Turning a personal emotional concept into a visual experience people can connect with
What we learned
- How to maintain stylistic coherence across different tools and models
- The importance of iteration, as almost every scene required multiple rounds of refinement
- How to guide AI models to match specific emotional tones
- That AI doesn’t replace artistic vision, it expands what a single creator can achieve
What's next for Eccedentesiast
I plan to expand this visual language into future music projects and continue exploring emotional storytelling through AI-driven visuals.
Built With
- capcut
- krea
- midjourney
- photoshop
- suno
- topaz
- veo-3