AdBlitz - AI That Markets For You
How It Started
A few months ago, my teammates and I were talking about how painful marketing is for small businesses. One teammate mentioned a friend who runs a small candle brand and had spent close to three weeks just getting her first Instagram campaign together: finding a designer, writing copy, figuring out which platforms to post on, trying to make a short video ad. Three weeks for what was essentially one product launch.
That conversation stuck with me. I kept thinking we have AI models that can analyze images, generate text, create visuals, and even produce video. What if we could wire all of that together into something that feels like having an entire marketing team, but it starts with just one product photo?
That's where AdBlitz came from. Not from a technical idea first, but from a real frustration that real people have.
What We Built
AdBlitz is a multi-agent AI system that takes a single product image and generates a complete, launch-ready marketing campaign. You upload a photo of your product, and within minutes you get:
- A full brand identity, personality, voice, color palette, taglines, target audience
- Ad copy tailored for Instagram, Facebook, Google Search, TikTok, Twitter/X, and Email
- AI-generated creative images in multiple formats: feed posts, stories, lifestyle shots, banners
- A 6-second video ad with AI voiceover
- Three detailed audience personas with pain points, buying triggers, and platform targeting
- A 7-day media launch plan with budget allocation, platform strategies, and A/B testing recommendations
The whole thing is powered by 7 specialized AI agents, each responsible for one piece of the campaign, coordinated by an orchestrator that manages the pipeline.
How We Built It
We started by mapping out what a real marketing campaign actually looks like. We talked to a few people who run small businesses and freelance marketers. The pattern was pretty consistent — brand strategy first, then copy, then visuals, then distribution plan. So we modeled our agents the same way.
The Brand Agent runs first because everything else depends on it. It takes the product image and uses Amazon Nova Lite to extract what we call the "brand DNA" — the vibe, voice, color palette, emotional angle, audience. We spent a lot of time on this prompt because if the brand brief is off, every downstream agent produces something that feels disconnected.
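As a rough sketch, the Brand Agent's request might be assembled for the Bedrock Converse API like this. The model identifier, prompt wording, and field names for the brand brief are illustrative assumptions, not AdBlitz's exact values:

```python
# Hypothetical sketch of a Nova Lite request for brand-DNA extraction.
# The model ID and prompt are assumptions, not the project's exact ones.
BRAND_PROMPT = (
    "Analyze this product photo and return a JSON brand brief with keys: "
    "vibe, voice, color_palette, emotional_angle, audience."
)

def build_brand_request(image_bytes: bytes, image_format: str = "png") -> dict:
    """Build a single image-plus-text turn for the Converse API."""
    return {
        "modelId": "amazon.nova-lite-v1:0",  # assumed model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {"image": {"format": image_format,
                               "source": {"bytes": image_bytes}}},
                    {"text": BRAND_PROMPT},
                ],
            }
        ],
    }
```

The payload would then be sent with something like `boto3.client("bedrock-runtime").converse(**request)`, and the brief parsed out of the response text.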
Once the brand brief is ready, we parallelize the next three agents (Copy, Audience, and Visual) using Python's concurrent.futures. This was a deliberate choice. Running them one after another took almost four minutes; running them in parallel brought it down to about 90 seconds. We realized early on that if the demo takes too long, nobody watches it through.
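The fan-out stage can be sketched like this; the agent functions here are stand-ins for the real ones, which call Bedrock models:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for the three agents that only need the brand brief.
def copy_agent(brief):     return {"copy": f"ad copy for {brief['vibe']}"}
def audience_agent(brief): return {"personas": ["persona-1"]}
def visual_agent(brief):   return {"images": ["feed-post.png"]}

def run_parallel_stage(brief: dict) -> dict:
    """Run the three independent agents concurrently and collect results."""
    agents = {"copy": copy_agent, "audience": audience_agent, "visual": visual_agent}
    results = {}
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {name: pool.submit(fn, brief) for name, fn in agents.items()}
        for name, future in futures.items():
            results[name] = future.result()  # re-raises any agent exception
    return results
```

Threads (rather than processes) fit here because the agents spend almost all their time waiting on network I/O, so the GIL is not the bottleneck.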
The Copy Agent generates platform-specific content. Instagram copy is different from Google Search headlines, which is different from email subject lines. Each platform has its own structure: hooks, body copy, CTAs, hashtags, scene breakdowns for TikTok. Getting the agent to produce consistently structured JSON for each platform took a lot of prompt iteration.
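One way to keep that per-platform structure honest is a lightweight schema check after parsing. This is a minimal sketch; the field names are illustrative, not AdBlitz's exact schema:

```python
# Required fields per platform (illustrative, not the project's exact schema).
REQUIRED_FIELDS = {
    "instagram": {"hook", "body", "cta", "hashtags"},
    "google_search": {"headlines", "descriptions"},
    "tiktok": {"hook", "scenes", "cta"},
}

def validate_copy(platform: str, payload: dict) -> list:
    """Return the sorted list of required fields missing from a payload."""
    return sorted(REQUIRED_FIELDS.get(platform, set()) - payload.keys())
```

A non-empty return value can trigger a retry with the missing fields named in the follow-up prompt.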
The Visual Agent plans image compositions and generates them using Amazon Nova Canvas. We experimented with different aspect ratios and prompts. One thing we learned is that lifestyle images showing the product in a real-world context work much better as video source frames than plain product shots.
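A Nova Canvas text-to-image request body might be built as follows. The structure follows AWS's documented TEXT_IMAGE task format, but the dimensions and prompt here are illustrative:

```python
import json

# Hedged sketch of a Nova Canvas request body; prompt and dimensions are
# illustrative, not the project's exact values.
def build_canvas_body(prompt: str, width: int = 1024, height: int = 1024) -> str:
    """Serialize a TEXT_IMAGE generation request for invoke_model."""
    return json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "width": width,
            "height": height,
        },
    })
```

Different aspect ratios (feed post vs. story vs. banner) then come down to varying the width/height pair per composition.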
The Video Agent was probably the most challenging piece. Amazon Nova Reel generates video asynchronously, meaning you start a job and then poll until it's done. This can take three to five minutes. On our local machines, that was fine. On the cloud, it became a real problem because the connection would drop before the video finished. We ended up running video generation in a background thread and polling in small intervals to keep the connection alive.
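The polling pattern we ended up with looks roughly like this. `fetch_status` is a stand-in for the actual Bedrock job-status call, and the interval/timeout values are illustrative:

```python
import threading
import time

# Sketch of background polling for an async video job: start the job,
# return immediately, and check status on a daemon thread.
def poll_video_job(fetch_status, on_done, interval: float = 0.01,
                   timeout: float = 5.0) -> threading.Thread:
    """Poll fetch_status() until a terminal state, then call on_done(status)."""
    def _poll():
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            status = fetch_status()
            if status in ("Completed", "Failed"):
                on_done(status)
                return
            time.sleep(interval)
        on_done("TimedOut")

    thread = threading.Thread(target=_poll, daemon=True)
    thread.start()
    return thread
```

Because the thread owns the waiting, the web request that kicked off generation can return right away and the UI can re-check progress on each rerun.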
The Audio Agent selects an AI voice from Amazon Polly and generates a voiceover from the video script. Then we use MoviePy and FFmpeg to merge the voiceover with the generated video. Getting audio and video to sync properly, especially when the voiceover duration didn't match the video length exactly, required some trial and error.
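The sync decision can be reduced to a small pure function: if the durations are close, time-stretch the audio slightly; otherwise trim or pad it. This is a sketch with an assumed 10% stretch threshold, not the project's exact logic:

```python
# Decide how to fit a voiceover of audio_s seconds to a video of video_s
# seconds. The 10% stretch threshold is an illustrative assumption.
def reconcile_durations(audio_s: float, video_s: float,
                        max_stretch: float = 0.1) -> tuple:
    """Return (strategy, parameter) for matching audio to video length."""
    if video_s <= 0:
        raise ValueError("video duration must be positive")
    ratio = audio_s / video_s
    if abs(ratio - 1.0) <= max_stretch:
        return ("stretch", ratio)          # time-stretch audio by this factor
    if audio_s > video_s:
        return ("trim", video_s)           # cut audio at the video's end
    return ("pad", video_s - audio_s)      # append this much silence
```

The stretch factor can then be applied during the FFmpeg merge (e.g., via an audio-tempo filter), which avoids a hard cut mid-sentence for small mismatches.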
The Media Plan Agent ties everything together: a 7-day launch roadmap, a budget split across platforms, targeting strategies for each persona, and A/B test recommendations.
All generated assets (images, video, audio, campaign JSON) are stored in Amazon S3. We also built an S3-based caching layer so that if someone uploads the same product image twice, the entire campaign loads instantly from cache instead of regenerating.
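The cache can be content-addressed: hash the uploaded image bytes and use the digest as the S3 key, so an identical upload resolves to the same cached campaign. The bucket layout shown is an illustrative assumption:

```python
import hashlib

# Content-addressed cache key: identical image bytes always map to the same
# S3 object, so a repeat upload hits the cached campaign. The key layout
# ("campaigns/<digest>/campaign.json") is illustrative.
def campaign_cache_key(image_bytes: bytes) -> str:
    digest = hashlib.sha256(image_bytes).hexdigest()
    return f"campaigns/{digest}/campaign.json"
```

A lookup is then a single `head_object`/`get_object` on that key: hit means serve the stored JSON, miss means run the pipeline and write the results under the same prefix.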
The Frontend
We wanted the UI to feel like a real product, not a hackathon demo with default styling. So we built a custom dark-themed dashboard in Streamlit with sidebar navigation, platform-branded icons, progress indicators during generation, and dedicated pages for each section of the campaign.
The Campaign Dashboard shows metric cards with estimated ROAS, audience reach, and brand sentiment. The Ad Copy page has cards for each platform with their actual logos. The Video Ad page shows the generated video alongside the AI script, scene breakdown, and performance estimates. The Media Plan page has a visual timeline with phase-coded days and budget allocation bars.
We went through multiple iterations on the UI. Early versions had raw HTML breaking because Streamlit has character limits on inline HTML rendering. We had to learn to keep each HTML block short and self-contained, which honestly taught us a lot about how Streamlit actually works under the hood.
Challenges We Faced
Getting agents to agree. The hardest part wasn't building individual agents; it was making sure they all stayed consistent with each other. If the Brand Agent says the vibe is "minimal and premium" but the Copy Agent writes something playful and casual, the whole campaign feels off. We solved this by passing the complete brand brief to every downstream agent and being very explicit in prompts about maintaining consistency.
Video generation on the cloud. Nova Reel's async pattern works great locally, but cloud deployment platforms have connection timeouts. We tried several approaches: increasing timeouts, running generation in threads, saving partial campaigns before attempting video. Eventually we got it working by combining threaded execution with a fallback that serves the raw video without voiceover if the merge step fails.
JSON reliability. AI models don't always return clean JSON. Sometimes they add a preamble, sometimes the response is empty, sometimes it's valid JSON but with unexpected field names. We built parsing utilities with fallback logic, but honestly this is still the most fragile part of the system. About one in every fifteen or twenty requests will return something we can't parse.
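Our fallback parsing follows roughly this shape: try a direct parse, then strip markdown code fences, then grab the outermost brace span, and return None instead of raising so the caller can retry or degrade gracefully. This is a simplified sketch of the idea:

```python
import json
import re

# Defensive parsing for model output: direct parse, then fence-stripped
# parse, then the outermost {...} span. Returns None on failure so callers
# can retry rather than crash.
def parse_model_json(raw):
    if not raw or not raw.strip():
        return None
    text = raw.strip()
    fence_stripped = re.sub(r"^```(?:json)?|```$", "", text, flags=re.M).strip()
    for candidate in (text, fence_stripped):
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            pass
    # Last resort: ignore any preamble/epilogue around the outermost braces.
    start, end = text.find("{"), text.rfind("}")
    if 0 <= start < end:
        try:
            return json.loads(text[start:end + 1])
        except json.JSONDecodeError:
            return None
    return None
```

Schema drift (valid JSON with unexpected field names) still gets past this, which is why it remains the fragile part; a field-level validation pass with a retry prompt is the natural next layer.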
Secrets and deployment. This sounds trivial, but we accidentally committed AWS keys to GitHub and had to learn about git history rewriting, push protection, and proper secrets management. That was a detour we didn't plan for, but it taught us something practical about working with cloud credentials.
What We Learned
Building AdBlitz changed how we think about AI applications. It's not about one model doing one thing; it's about orchestrating multiple specialized agents that each handle a piece of a larger workflow. The orchestration layer, the error handling, the caching, the parallelization: that's where the real engineering challenge is.
We also learned that the gap between "it works on my machine" and "it works in production" is wider than we expected. Local development gave us a false sense of confidence. Cloud deployment introduced constraints around timeouts, memory limits, system dependencies, and API authentication that we had to solve one by one.
Most importantly, we learned to start with the user's problem. We didn't build AdBlitz because multi-agent AI is cool (though it is). We built it because a small business owner shouldn't need three weeks and a marketing agency to launch one product. If one photo and a few minutes can get them 80% of the way there, that's a meaningful difference.
What's Next
We see AdBlitz evolving into a full marketing co-pilot. Here's where we'd take it:
- Longer, richer video ads: right now we generate 6-second clips. With more compute time and higher-tier model access, we want to support 15-, 30-, and 60-second video ads with multi-scene transitions, background music, and dynamic text overlays.
- Tiered access model: similar to how platforms like ChatGPT and Claude offer free tiers with usage limits and pro tiers with extended capabilities, we'd offer a free tier that generates basic campaigns (brand brief, copy, and static images) and a Pro tier that unlocks video generation, voiceover, advanced persona targeting, and unlimited regenerations.
- One-click publishing: direct integration with Meta Ads Manager, Google Ads, and TikTok Ads so users can go from product photo to live campaign without leaving AdBlitz.
- Multi-product campaigns: upload an entire product line and get a coordinated campaign with consistent branding across all products.
- Brand memory: save your brand guidelines once and reuse them across future campaigns, so every new product launch stays on-brand without starting from scratch.
- Campaign performance analytics: connect real ad platform data back into AdBlitz so the agents can learn what's working and optimize future campaigns based on actual performance, not just estimates.
- Team collaboration: shared workspaces where marketing teams can review, edit, and approve AI-generated campaigns before publishing.
The foundation is built. The agents work. The pipeline scales. Now it's about turning it from a powerful demo into a product that small businesses actually use every day.
AdBlitz - because your product deserves a campaign, not just a post.