IgniteAI Update — Building the Director-in-a-Box

Over the past few days, IgniteAI’s video engine has been going through a major architectural upgrade.

We’re currently migrating the pipeline to a new V3 Parallel-Core architecture designed to generate UGC-style ad videos faster, more consistently, and with better creative control.

Here’s what’s new under the hood:

**Director → Scene Workers → Conductor architecture** Instead of generating everything sequentially, the system now behaves more like a real production crew:

  • A Director node decides the creative strategy
  • Parallel scene workers generate hook, feature, and CTA shots simultaneously
  • A Conductor stage assembles the final video with music, voice, and captions
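A minimal sketch of how such a three-stage pipeline could be wired. All names here are illustrative assumptions; the actual IgniteAI internals aren't public:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    role: str      # "hook", "feature", or "cta"
    prompt: str

def director(brief: str) -> list[Scene]:
    """Decide the creative strategy and emit one prompt per scene."""
    return [
        Scene("hook", f"Attention-grabbing opener for: {brief}"),
        Scene("feature", f"Product demo shot for: {brief}"),
        Scene("cta", f"Call-to-action close for: {brief}"),
    ]

def scene_worker(scene: Scene) -> str:
    """Stand-in for a video-generation call; returns a clip id."""
    return f"clip-{scene.role}"

def conductor(clips: list[str]) -> str:
    """Assemble clips (plus music, voice, captions) into one video."""
    return "+".join(clips)

video = conductor([scene_worker(s) for s in director("running shoes")])
print(video)  # clip-hook+clip-feature+clip-cta
```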

**Parallel scene generation** Scenes are now rendered concurrently using a worker pool, significantly reducing total generation time.
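Concurrent scene rendering could look roughly like this sketch using a thread pool (purely illustrative; the real workers presumably wrap slow network calls to a video model, which is exactly the I/O-bound case where a thread pool overlaps the waits):

```python
from concurrent.futures import ThreadPoolExecutor
import time

SCENES = ["hook", "feature", "cta"]

def render_scene(name: str) -> str:
    time.sleep(0.1)  # stand-in for a slow generation call
    return f"{name}.mp4"

# Sequentially these three sleeps take ~0.3 s; in the pool they overlap.
with ThreadPoolExecutor(max_workers=3) as pool:
    clips = list(pool.map(render_scene, SCENES))

print(clips)  # ['hook.mp4', 'feature.mp4', 'cta.mp4']
```

`pool.map` preserves input order, so the Conductor can rely on hook → feature → CTA ordering even though the renders finish in any order.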

**Strategy-aware ads** The engine now extracts:

  • target persona
  • pain point
  • hook angle

This allows IgniteAI to structure ads more like high-performing UGC creatives.
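One way the three extracted strategy fields could be represented downstream, as a hypothetical shape rather than the actual schema:

```python
from dataclasses import dataclass

@dataclass
class AdStrategy:
    target_persona: str   # who the ad speaks to
    pain_point: str       # the problem the product solves
    hook_angle: str       # how the opening shot grabs attention

# Example values for a meal-kit product (invented for illustration).
strategy = AdStrategy(
    target_persona="busy new parents",
    pain_point="no time to cook healthy meals",
    hook_angle="POV: dinner ready in 10 minutes",
)
```

Making the strategy an explicit object means every later stage (Director, scene workers, Conductor) can condition on the same persona and hook instead of re-deriving them.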

**Modular AI “skills” architecture** Video generation, captions, voice, images, and fallback animation are now handled by independent skills — making the system easier to extend and upgrade.
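A modular skills layer like this is often built as a small registry behind a common interface, so new capabilities plug in without touching the pipeline. A hypothetical sketch, not IgniteAI's actual code:

```python
from typing import Protocol

class Skill(Protocol):
    def run(self, payload: dict) -> dict: ...

class CaptionSkill:
    def run(self, payload: dict) -> dict:
        return {**payload, "captions": f"[captions for {payload['scene']}]"}

class VoiceSkill:
    def run(self, payload: dict) -> dict:
        return {**payload, "voice": f"[voiceover for {payload['scene']}]"}

# Registering another Skill implementation is all an upgrade requires.
SKILLS: dict[str, Skill] = {"captions": CaptionSkill(), "voice": VoiceSkill()}

payload = {"scene": "hook"}
for name in ("captions", "voice"):
    payload = SKILLS[name].run(payload)
print(payload["captions"])  # [captions for hook]
```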

**Powered by Veo 3.1 + Gemini Image** The new pipeline fully leverages Google’s latest video and image generation models to create short-form ad content.

Still early days, but the goal is clear:

Turn IgniteAI into the first true “Director-in-a-Box” for UGC ads.

More updates soon as the V3 engine comes online.
