AVAX (AI Viral Agency on X) - Devpost Submission
Project Story
Inspiration
Everyone wants to be popular online these days — to grow their following, go viral, become an influencer. For Gen Z, it comes naturally. But for millennials like us, posting publicly can feel… awkward. We grew up when social media was just for friends, not for broadcasting to the world.
But here's the thing: we still feel the pressure to maintain an active presence. The problem isn't just posting once — it's doing it consistently, knowing what's trending, and understanding what actually works. That takes time, creativity, and constant attention.
What if you could have an entire AI marketing team working for you 24/7 — and that team could hire, fire, and improve itself based on real performance data?
That's where AVAX (AI Viral Agency on X) was born. We were inspired by Truth Terminal's viral success, but we wanted to go further: instead of a single AI prompt, we built a self-improving multi-agent system where agents evolve based on actual Twitter engagement metrics.
What it does
AVAX is a fully autonomous AI social media team that creates viral content, posts to X/Twitter, analyzes real engagement data, and continuously improves itself.
The Core System:
Trend Intelligence Layer
- Every 3 hours, scrapes Google Trends using Browserbase (screenshots + Gemini Vision OCR)
- Discovers trending X posts using Tavily API
- Combines data into live trend feed
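The merge into a live trend feed can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the field names (`title`, `topic`, `source`) and the dedup-by-lowercased-title strategy are assumptions.

```python
from datetime import datetime, timezone

def merge_trend_feed(google_trends, tavily_posts):
    """Combine OCR'd Google Trends topics and Tavily-discovered X posts
    into one timestamped trend feed, deduplicating by topic title."""
    feed = {
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "trends": [],
    }
    seen = set()
    for topic in google_trends:        # e.g. extracted from a Trends screenshot
        key = topic["title"].lower()
        if key not in seen:
            seen.add(key)
            feed["trends"].append({"title": topic["title"], "source": "google_trends"})
    for post in tavily_posts:          # e.g. results from a Tavily search
        key = post["topic"].lower()
        if key not in seen:
            seen.add(key)
            feed["trends"].append({"title": post["topic"], "source": "tavily"})
    return feed
```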
Agent Orchestration (A2A Protocol)
- CMO Agent: Strategic orchestrator that analyzes trends and decides content strategy
- Post Agent: Creates original tweets with AI-generated images (Imagen) or videos (Veo 3)
- Quote Agent: Finds trending tweets and adds insightful commentary
- Reply Agent: Builds relationships through thoughtful responses
- Repost Agent: Curates exceptional content
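The A2A message contracts between the CMO and the specialist agents can be sketched with Pydantic models like these. Field names (`sender`, `recipient`, `action`, `payload`) are illustrative assumptions, not the project's actual schema.

```python
from pydantic import BaseModel

class AgentRequest(BaseModel):
    """A2A request envelope (field names are illustrative)."""
    sender: str       # e.g. "cmo"
    recipient: str    # e.g. "post_agent"
    action: str       # e.g. "create_post"
    payload: dict     # trend context, constraints, etc.

class AgentResponse(BaseModel):
    """A2A response envelope with an explicit status contract."""
    sender: str
    status: str       # "ok" or "error"
    result: dict = {}

# The CMO delegating a task to the Post Agent might look like:
req = AgentRequest(sender="cmo", recipient="post_agent",
                   action="create_post", payload={"trend": "AI agents"})
```

Having every hop validated against a schema is what makes the agents independently testable: a malformed message fails loudly at the boundary instead of silently corrupting downstream steps.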
Content Creation Pipeline (8-layer deep)
- Research → Creative Writer → Generator → Critic (×3 iterations)
- Safety validation → Best candidate selection
- Media type decision (image vs video)
- AI-generated visuals with voiceovers and sound effects
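The generate-then-critique refinement loop above can be sketched like this, with `generate` and `critique` standing in for the LLM-backed Writer and Critic agents (the threshold value and early-exit rule are assumptions):

```python
def run_pipeline(topic, generate, critique, max_iters=3, threshold=0.8):
    """Sketch of the Writer -> Critic refinement loop (up to 3 iterations).
    Keeps the best-scoring candidate and stops early once the quality
    threshold is met."""
    best, best_score = None, -1.0
    feedback = None
    for _ in range(max_iters):
        draft = generate(topic, feedback)      # feedback from prior critique
        score, feedback = critique(draft)
        if score > best_score:
            best, best_score = draft, score    # best-candidate selection
        if score >= threshold:                 # quality threshold met
            break
    return best, best_score
```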
🔥 Self-Improving Loop (The Game-Changer)
- HR Agent monitors actual Twitter engagement (likes, retweets, views, replies)
- Analyzes which agent prompts correlate with viral content
- Automatically rewrites underperforming agent prompts based on data
- Versions and tracks all changes for rollback capability
- The AI team literally hires and fires itself based on performance
Result: An AI marketing department that gets smarter, funnier, and more relevant every day — without human intervention.
How we built it
LLM Framework:
- Google ADK (Agent Developer Kit) for sequential and loop agent orchestration
- Gemini 2.5 Flash for all agent reasoning, content generation, and OCR
- Pydantic for structured outputs and validation
- OpenTelemetry + WandB Weave for complete observability
Content Generation:
- Google Imagen for 3:4 portrait images optimized for social media
- Google Veo 3 for 8-second 9:16 vertical videos with audio (voiceovers, sound effects, music)
- Intelligent media selector that chooses image vs video based on content type
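A toy version of that image-vs-video decision might look like the heuristic below. The real selector is LLM-based; the keyword cues here are purely illustrative assumptions.

```python
def choose_media(content):
    """Illustrative stand-in for the media selector: pick video when the
    content implies motion or demonstration, otherwise default to the
    faster image path."""
    text = content.lower()
    motion_cues = ("demo", "tutorial", "reaction", "before and after")
    if any(cue in text for cue in motion_cues):
        return "video"   # Veo 3: 8-second 9:16 vertical with audio
    return "image"       # Imagen: 3:4 portrait, quicker to generate
```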
Trend Collection:
- Browserbase: Remote browser sessions to capture Google Trends screenshots
- Gemini Vision API: OCR extraction from trend screenshots
- Tavily API: Twitter trend discovery without login requirements
- Apify Twitter Scraper: Real engagement metrics (impressions, likes, retweets)
Social Media Integration:
- Twitter/X API v2 with OAuth 2.0 for posting
- Real-time performance measurement and feedback loop
Architecture Highlights:
- A2A Protocol: Standardized agent-to-agent communication with clear request/response contracts
- Sequential Agents: Research → Writer → Generator pipeline with 3 refinement iterations
- Loop Agents: Critic evaluates each iteration until quality threshold met
- Meta-Agent Design: HR Agent manages and improves other agents (agents managing agents!)
- Prompt Versioning: Every prompt change tracked with version number, reason, and rollback capability
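The prompt-versioning scheme described above can be sketched minimally as an append-only history with rollback. Class and field names here are assumptions for illustration, not the project's actual API.

```python
from dataclasses import dataclass

@dataclass
class PromptVersion:
    version: int
    prompt: str
    reason: str   # why the HR Agent made the change

class PromptRegistry:
    """Minimal sketch: every prompt change is appended with a version
    number and reason; rollback pops back to the previous version."""
    def __init__(self, initial_prompt):
        self.history = [PromptVersion(1, initial_prompt, "initial")]

    @property
    def current(self):
        return self.history[-1]

    def update(self, new_prompt, reason):
        self.history.append(
            PromptVersion(self.current.version + 1, new_prompt, reason))

    def rollback(self):
        if len(self.history) > 1:    # never roll back past the initial prompt
            self.history.pop()
        return self.current
```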
The Self-Improving Workflow:
1. Agents create content → Post to Twitter
2. Apify scrapes engagement metrics (likes, RTs, views)
3. HR Agent analyzes: Which prompts led to viral posts?
4. HR Agent generates improved prompts for weak layers
5. New prompts auto-deployed to production
6. Better content → Better engagement → Better prompts (loop repeats)
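Step 3 of that loop, finding which prompt versions correlate with strong engagement, could be sketched like this. The engagement formula and field names are assumptions; the real HR Agent weighs more signals.

```python
from collections import defaultdict

def rank_prompt_versions(posts):
    """Aggregate a simple engagement rate per prompt version so the
    weakest versions can be targeted for rewriting.
    `posts` items look like:
    {"prompt_version": int, "likes": int, "retweets": int, "views": int}."""
    totals = defaultdict(lambda: {"engagement": 0.0, "count": 0})
    for p in posts:
        # Toy engagement rate: retweets weighted double, normalized by views
        rate = (p["likes"] + 2 * p["retweets"]) / max(p["views"], 1)
        totals[p["prompt_version"]]["engagement"] += rate
        totals[p["prompt_version"]]["count"] += 1
    return sorted(
        ((v, t["engagement"] / t["count"]) for v, t in totals.items()),
        key=lambda x: x[1], reverse=True,
    )
```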
Challenges we ran into
1. Twitter API Complexity
- OAuth 2.0 flow required careful session and refresh token management
- Official API doesn't provide engagement metrics — had to use Apify scraper
- Solution: Smart caching (1-hour cache) and batched requests
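The 1-hour cache can be sketched as a simple TTL wrapper around the scraper call. Names and structure are illustrative; the injectable `clock` just makes the sketch testable.

```python
import time

class MetricsCache:
    """One-hour TTL cache for scraped engagement metrics: a fresh hit
    skips the scraper entirely, a stale or missing entry triggers a refetch."""
    def __init__(self, ttl=3600, fetch=None, clock=time.monotonic):
        self.ttl, self.fetch, self.clock = ttl, fetch, clock
        self._store = {}   # tweet_id -> (timestamp, metrics)

    def get(self, tweet_id):
        now = self.clock()
        hit = self._store.get(tweet_id)
        if hit and now - hit[0] < self.ttl:
            return hit[1]                  # still fresh: no scraper call
        metrics = self.fetch(tweet_id)     # stale or missing: refetch
        self._store[tweet_id] = (now, metrics)
        return metrics
```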
2. Video Generation Latency
- Veo 3 takes anywhere from 11 seconds to 6 minutes to generate a video
- Solution: Intelligent media selector pre-analyzes content to decide image vs video, generates image first as fallback
3. Real-Time Trend Analysis
- Google Trends has no official API
- Solution: Browserbase remote browser + Gemini Vision OCR to extract data from screenshots
4. LLM Output Reliability
- Gemini sometimes outputs malformed JSON with unescaped special characters
- Solution: json-repair library with fallback parsing + strict Pydantic validation
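The fallback-parsing idea can be illustrated with stdlib only. This is a simplified stand-in for the json-repair library: it strips markdown fences and trailing commas, two of the most common LLM JSON defects, before retrying.

```python
import json
import re

def parse_llm_json(text):
    """Parse JSON from an LLM response with a fallback repair pass
    (a toy approximation of what json-repair does more robustly)."""
    try:
        return json.loads(text)            # happy path: already valid
    except json.JSONDecodeError:
        pass
    # Strip ```json fences the model sometimes wraps around output
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    # Remove trailing commas before } or ]
    cleaned = re.sub(r",\s*([}\]])", r"\1", cleaned)
    return json.loads(cleaned)
```

In the real pipeline the parsed dict would then be handed to a strict Pydantic model, so anything the repair pass cannot salvage still fails validation explicitly.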
5. Prompt Optimization at Scale
- Managing 8 different layer prompts across iterations and measuring effectiveness
- Solution: HR Agent generates complete replacement prompts (not diffs), version control system for A/B testing and rollback
Accomplishments that we're proud of
✅ Self-improving AI system that gets better with every tweet
✅ Meta-agent architecture where the HR Agent manages other agents like a real manager
✅ Real performance feedback loop using actual Twitter engagement data
✅ Multimodal content (text + images + videos with audio)
✅ Complete observability via OpenTelemetry → Weave integration
✅ A2A protocol for scalable agent orchestration
✅ Production-ready with versioning, rollback, and error handling
✅ Actually posts to Twitter with full OAuth 2.0 integration
Most proud of: The HR Agent doesn't just analyze internal scores — it pulls real Twitter engagement data to improve prompts. When a post goes viral, the system learns why and automatically applies those lessons to future content. The team literally hires and fires itself.
What we learned
1. Meta-Agent Design Patterns
- Separating strategy (CMO) from execution (specialists) creates cleaner architecture
- Meta-agents analyzing performance can drive continuous improvement
- Prompt optimization is itself an agent-solvable problem
2. Real-World Metrics Trump Internal Scores
- Internal scores (clarity, novelty) don't always predict virality
- Real engagement reveals unexpected patterns (moderate novelty sometimes > high novelty)
- HR Agent's correlation analysis shows which dimensions actually matter
3. Observability is Essential
- OpenTelemetry tracing reveals exact decision paths through 8-layer pipeline
- Weave's timeline view helped debug multi-agent coordination
- Structured logging with Pydantic makes debugging 10x easier
4. Multi-Agent Coordination
- A2A protocol needs clear contracts and schemas
- Context passing (trends, history) is critical for intelligent decisions
- Each agent should be independently testable
5. Self-Improvement Requires Real Feedback
- Simulated metrics don't work — you need actual user engagement
- Version control for prompts enables safe experimentation
- Automated rollback prevents bad prompt updates from staying live
What's next for AVAX
Phase 1: More Agent Types
- Thread Agent: Multi-tweet narrative threads
- Meme Agent: Viral meme generation
- Poll Agent: Interactive polls for engagement
Phase 2: Advanced Learning
- Reinforcement learning from engagement signals
- A/B testing multiple prompt versions simultaneously
- ROI-based agent hiring/firing
Phase 3: Multi-Platform
- LinkedIn (professional content)
- Instagram Reels (vertical video optimization)
- TikTok (short-form viral content)
Phase 4: Production Deployment
- Fully automated 3-hour pipeline (scrape → generate → post)
- Real-time monitoring dashboard
- Human-in-the-loop for brand-critical content
Ultimate Vision: A fully autonomous social media team that monitors trends 24/7, generates optimized content, posts at peak times, learns from every interaction, and continuously improves its own prompts — all while you focus on building your actual product.
Built With
- google-adk: Agent orchestration framework (Sequential & Loop agents)
- gemini-2.5-flash: LLM for reasoning, content generation, and OCR
- google-generativeai: Imagen (images) and Veo 3 (videos with audio)
- opentelemetry: Distributed tracing for agent workflows
- weave: WandB Weave for observability and monitoring
- wandb: Experiment tracking and metrics
- browserbase: Remote browser sessions for Google Trends
- tavily-python: Twitter/X trend discovery API
- apify-client: Twitter engagement scraping
- tweepy: Twitter/X API v2 for posting
- pydantic: Structured outputs and validation
- python: Primary programming language
- json-repair: Robust JSON parsing for LLM outputs
- schedule: Automated pipeline scheduling
Try it out
🐝 Live Weave Dashboard: https://wandb.ai/mason-choi-storika/WeaveHacks2/weave
🔗 GitHub Repository: https://github.com/storika/agents_of_agents
📊 Example Outputs:
- Generated Images: artifacts/generated_image_*.png (3:4 portrait)
- Generated Videos: artifacts/generated_video_*.mp4 (8-second 9:16 vertical with audio)
- Trending Data: trend_data/trending_*.json (updated every 3 hours)
Note: This entire Devpost submission was partially created with assistance from Claude Code — itself an example of AI agents helping humans build better AI agent systems! 🤖