Inspiration

LLMs are trained on static datasets, blind to real-time cultural shifts. So when Pepsi drops a Kendall Jenner ad and the world revolts, or Netflix flops in a new market, these models can’t warn you—it’s like using a 2020 map for 2025 terrain. TasteEngine was born to close that gap, merging OpenAI’s reasoning power with Qloo’s real-time cultural intelligence.

What it does

TasteEngine is a multi-agent LLM orchestration platform that taps into Qloo’s API and other web intelligence sources to provide live cultural analysis. It can:

  • Predict how different demographics will perceive a brand or campaign.
  • Surface real-time trends by region, age, or interest.
  • Compare cultural preferences across markets—e.g., Gen Z in Brazil vs Japan.
  • Generate visuals that match local taste using GPT-4 Vision.
  • Ingest proprietary data, perform scraping, and synthesize insights across domains.

The system orchestrates 22 AI tools across 5 microservices and streams its reasoning live over Server-Sent Events (SSE).
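As a minimal sketch of what streaming the agent's thought process over SSE can look like: each event is serialized as a `data:` frame terminated by a blank line. The `AgentEvent` shape and `toSseFrame` helper here are illustrative assumptions, not TasteEngine's actual API.

```typescript
// Hypothetical event shape for one step of the agent's reasoning.
// Field names are illustrative, not TasteEngine's real schema.
interface AgentEvent {
  tool: string;                 // which AI tool is running, e.g. "qloo.trends"
  phase: "start" | "result";    // lifecycle of the tool call
  payload: string;              // human-readable detail for the UI
}

// SSE frames are plain text: "data: <json>" followed by a blank line.
function toSseFrame(event: AgentEvent): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}
```

On the frontend, an `EventSource` subscribed to this stream can render each frame as it arrives, which is what makes the "live thought process" effect possible.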

How we built it

TasteEngine is built on a modular NestJS microservices architecture:

  • ai-service: LLM orchestration with OpenAI function calling.
  • qloo-service: Integrates 13 Qloo endpoints for cultural data.
  • scraper-service: Traditional scraping + vector-based semantic search.
  • scraper-v2-service: Firecrawl’s FIRE-1 agent for screenshot & visual analysis.
  • api-gateway: Manages routing, load balancing, and health checks.

All services are containerized via Docker. We also built a powerful CLI for dev testing and a real-time frontend UI powered by streaming AI outputs.
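To illustrate the gateway's health-check role, here is a minimal sketch of aggregating per-service statuses into one gateway-level verdict. The `summarize` function and the `"up"`/`"down"` status values are assumptions for illustration; the real api-gateway presumably polls each container's health endpoint.

```typescript
type Status = "up" | "down";

// Roll per-service health statuses up into a single gateway verdict:
// healthy only if every microservice reports "up".
function summarize(statuses: Record<string, Status>): { healthy: boolean; down: string[] } {
  const down = Object.entries(statuses)
    .filter(([, status]) => status === "down")
    .map(([name]) => name);
  return { healthy: down.length === 0, down };
}
```

Keeping this logic in the gateway means clients get one `/health` answer instead of probing five services themselves.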

Challenges we ran into

  • Managing token limits across 22 tools while preserving reasoning coherence was brutal; we had to engineer custom token truncation and JSON reducers.
  • Qloo’s data is powerful but complex; translating vague user queries into precise API calls required careful function-schema tuning.
  • Streaming over SSE while juggling async scraping and image generation introduced race conditions and retry nightmares.
  • Coordinating multiple AI agents without them stepping on each other’s toes required a full multi-agent state manager.
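The token-truncation challenge above can be sketched as a simple budget loop: drop the oldest tool results until the transcript fits, always keeping the newest result so the model retains its most recent context. The `estimateTokens` heuristic (roughly four characters per token) and function names are assumptions; a real implementation would use the model's tokenizer.

```typescript
// Rough heuristic: ~4 characters per token. A real system would use the
// model's actual tokenizer instead of this approximation.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Evict the oldest tool results until the total fits the token budget,
// never dropping the most recent result (needed for reasoning coherence).
function truncateToolResults(results: string[], budget: number): string[] {
  const kept = [...results];
  while (
    kept.length > 1 &&
    kept.reduce((sum, r) => sum + estimateTokens(r), 0) > budget
  ) {
    kept.shift(); // oldest result goes first
  }
  return kept;
}
```

The same idea extends to JSON reducers: instead of dropping whole results, you can strip verbose fields from each tool's JSON payload before it re-enters the context window.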

Accomplishments that we're proud of

  • Pulled off what we believe is the most comprehensive integration of Qloo’s API to date, spanning 13 endpoints.
  • Created a real-time LLM orchestration engine that performs complex cultural analyses in under 10 seconds.
  • Designed a CLI that lets developers test 30+ cultural tools interactively.
  • Engineered true cross-domain intelligence—connecting real-time trends, images, e-commerce data, and global cultural graphs.

What we learned

  • Cultural data isn’t just a “nice-to-have”—it’s the missing layer for any serious global-facing AI application.
  • Tool calling is far more powerful when treated as an orchestration graph, not a single turn.
  • Real-time intelligence beats stale embeddings every time—especially when working with dynamic markets like fashion, entertainment, and consumer tech.
  • Privacy-first data doesn’t mean you compromise insight—you just need smarter signals (thanks Qloo).
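The "orchestration graph, not a single turn" lesson can be made concrete: if each tool declares which other tools' outputs it depends on, a depth-first traversal yields a valid execution order, so independent branches can even run in parallel. The tool names and `executionOrder` helper below are illustrative assumptions, not TasteEngine's actual planner.

```typescript
// Given a map of tool -> tools it depends on, return an execution order
// in which every tool runs after its dependencies (a topological sort
// via depth-first traversal; assumes the dependency graph is acyclic).
function executionOrder(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (tool: string): void => {
    if (seen.has(tool)) return;
    seen.add(tool);
    for (const dep of deps[tool] ?? []) visit(dep);
    order.push(tool); // pushed only after all dependencies
  };
  Object.keys(deps).forEach(visit);
  return order;
}
```

Treating a user request this way lets one query fan out into trend lookups, scraping, and image generation, then converge into a final synthesis step.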

What's next for TasteEngine

We’re commercializing this. The plan:

  • Launch as an enterprise SaaS targeting media and CPG brands.
  • Offer API access to marketing platforms for white-labeled cultural analysis.
  • Expand to real-time influencer matching and campaign localization tools.
  • Scale infrastructure from hackathon-grade to AWS-grade, with cost optimization and region-specific deployments.

Market research is a $76B industry, and by our estimate 60% of that spend is wasted on cultural misfires. TasteEngine is built to fix that.
