Inspiration

Marketing teams waste millions on creative that falls flat in specific markets because they can't pinpoint where and why their ads fracture across cultural segments. Swayable's RCT pre-testing reveals when creative underperforms with certain demographics, but brands still spend weeks manually reviewing footage to guess which scenes caused the backlash. We saw an opportunity to fuse Twelve Labs' video understanding with Swayable's gold-standard testing data to build the attribution layer that's been missing: one that turns passive scores into active creative intelligence with timecoded recommendations.

What it does

PulsePoint transforms pre-test data into actionable creative edits by:

  1. Analyzing videos using Twelve Labs' multimodal AI to identify scenes, objects, emotions, cultural symbols, and voiceover tone with precise timestamps.
  2. Ingesting Swayable RCT data (quantitative metrics + qualitative comments) segmented by demographics, geography, and values.
  3. Detecting divergence by identifying which segments react differently and linking their feedback to specific video moments through semantic matching.
  4. Generating recommendations with timecoded edits, cultural explanations, alternatives, and projected lift estimates.
  5. Visualizing insights through an interactive mindmap designed for CMO-level presentations with drill-down capability.

Example output: "Timestamp 0:34-0:42 shows champagne toast causing -18% favorability in Saudi Arabia. Replace with traditional coffee ceremony. Projected lift: +22%."
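An output like this maps naturally onto a small validated schema. A minimal sketch with Pydantic (field names are illustrative, not our exact production model):

```python
from pydantic import BaseModel

class TimecodedRecommendation(BaseModel):
    """One actionable edit tied to a video moment (illustrative shape)."""
    start_s: float             # scene start, in seconds
    end_s: float               # scene end, in seconds
    segment: str               # audience segment, e.g. "Saudi Arabia"
    favorability_delta: float  # observed drop/lift in the pre-test, e.g. -0.18
    issue: str                 # what the scene shows and why it diverges
    replacement: str           # suggested alternative content
    projected_lift: float      # estimated lift if replaced, e.g. 0.22

rec = TimecodedRecommendation(
    start_s=34, end_s=42, segment="Saudi Arabia",
    favorability_delta=-0.18,
    issue="Champagne toast conflicts with local norms",
    replacement="Traditional coffee ceremony",
    projected_lift=0.22,
)
print(rec.segment, rec.projected_lift)
```

Validating at this boundary means a malformed model response fails loudly at parse time instead of surfacing as a broken card in the UI.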

How we built it

Stack:

Frontend: Next.js + React + Tailwind for UI, interactive mindmap, and data visualization

Backend: NestJS + FastAPI dual-layer architecture

NestJS handles job orchestration, queue management, and the API gateway; FastAPI runs the intensive AI processing pipeline.

AI Integration: Twelve Labs Pegasus 1.2 for video understanding with structured JSON responses

Data Processing: Pandas for CSV parsing, Pydantic for type validation, OpenAI GPT-4 for synthesis

User Flow:

  1. User uploads video + Swayable CSV through Next.js interface
  2. FastAPI parses CSVs, caching treatment/segment/metric combinations with statistical data
  3. Video uploaded to Twelve Labs index (with the pegasus1.2 generative model, required by the Analyze API)
  4. For each node (breakdown/segment/metric combination):
     • Twelve Labs analyzes the video with a segment-specific prompt, returning structured JSON with key scenes, cultural fit analysis, and initial recommendations
     • Comments from the parsed CSV are filtered to that specific segment
     • OpenAI synthesizes the Twelve Labs analysis + quantitative stats + qualitative comments into final insights with strengths, weaknesses, and recommendations
  5. Results stored and rendered in the interactive UI
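The per-node fan-out in step 4 can be sketched as an async loop with bounded concurrency. This is a simplified stand-in: the Twelve Labs and OpenAI calls are stubbed, and all names and signatures here are ours, not the real SDKs'.

```python
import asyncio

# Stubbed stand-ins for the real network calls (Twelve Labs Analyze,
# OpenAI synthesis); illustrative only.
async def analyze_video(video_id: str, prompt: str) -> dict:
    await asyncio.sleep(0)  # placeholder for the API round-trip
    return {"video_id": video_id, "prompt": prompt, "scenes": []}

async def synthesize(analysis: dict, comments: list[str]) -> dict:
    await asyncio.sleep(0)
    return {"insight": f"{len(comments)} comments considered", **analysis}

async def compute_node(video_id: str, segment: str, metric: str,
                       comments_by_segment: dict[str, list[str]]) -> dict:
    prompt = f"Analyze cultural fit for the {segment} segment on {metric}."
    analysis = await analyze_video(video_id, prompt)
    comments = comments_by_segment.get(segment, [])  # segment-filtered comments
    return await synthesize(analysis, comments)

async def compute_all(video_id, nodes, comments_by_segment, concurrency=20):
    sem = asyncio.Semaphore(concurrency)  # cap concurrent API calls
    async def guarded(segment, metric):
        async with sem:
            return await compute_node(video_id, segment, metric, comments_by_segment)
    return await asyncio.gather(*(guarded(s, m) for s, m in nodes))

nodes = [("US", "favorability"), ("KSA", "favorability")]
results = asyncio.run(compute_all("vid_123", nodes, {"KSA": ["too western"]}))
print(len(results))  # one insight per node
```

The semaphore is the key piece: it lets hundreds of nodes queue up without hammering the upstream APIs, while `gather` preserves node order for the UI.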

Key Technical Decisions:

  • Structured JSON schemas for Twelve Labs responses ensure parseable, validated data
  • Smart caching: video uploaded once, reused across all segment analyses
  • Pydantic models throughout for type safety and validation
  • Comprehensive CSV parser handling test-only studies (no baseline) and string boolean values
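The string-boolean handling can be as small as a defensive coercion helper. A sketch (column names like `is_control` are illustrative, not the exact Swayable export schema):

```python
def parse_bool(value, default=False):
    """Coerce CSV cell values like 'True', 'FALSE', '1', '' to real booleans."""
    if isinstance(value, bool):
        return value
    if value is None:
        return default
    s = str(value).strip().lower()
    if s in {"true", "t", "yes", "1"}:
        return True
    if s in {"false", "f", "no", "0"}:
        return False
    return default

def has_baseline(row: dict) -> bool:
    """Test-only studies ship no control rows; treat a missing or blank
    'is_control' field as 'no baseline' rather than crashing."""
    return parse_bool(row.get("is_control"), default=False)

print(parse_bool("TRUE"), parse_bool("0"), has_baseline({"is_control": ""}))
# → True False False
```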

Why the dual backend?

  • NestJS: Enterprise-grade job orchestration, TypeScript type safety, clean API layer
  • FastAPI: Python ecosystem for AI/ML libraries, async processing, Pydantic validation
  • Bull/Redis: Handles 100+ concurrent node computations, provides progress tracking, enables horizontal scaling

This architecture allows us to process entire campaigns (20+ segments × 13+ metrics = 260+ nodes) in parallel while giving users real-time feedback on computation progress.

Challenges we ran into

Twelve Labs Integration Hell:

  • Initially hit "index_not_supported_for_generate" errors because we created indexes with only marengo2.7 (embedding model). The Analyze API requires pegasus1.2 (generative model). Took hours to debug the caching layer—videos were stuck on old indexes even after fixing the code.
  • Had to implement cache invalidation that checks index_id to detect when videos need re-uploading to the correct index.

Orchestrating this many nodes was also a challenge: a single campaign fans out into 260+ breakdown/segment/metric combinations, each needing its own analysis, progress tracking, and error handling.

Accomplishments that we're proud of

  • Full Twelve Labs + Swayable integration working end-to-end: real video analysis linked to real pre-test data, generating actionable insights
  • Type-safe data flow: Pydantic models catch schema mismatches early, preventing runtime errors deep in the pipeline
  • Smart caching architecture: videos upload once, analysis reuses the cached video_id across 100+ segment combinations, making iteration fast after the initial upload
  • Structured JSON everywhere: Twelve Labs returns validated schemas, OpenAI synthesis follows a strict format, the frontend gets clean, typed data
  • Comprehensive CSV parser: handles test-only studies, string booleans, missing columns, and multiple segment formats gracefully
  • Actually solved the attribution problem: not just sentiment analysis; we're linking specific scenes to specific audience reactions with cultural context

What we learned

  • API integrations are never simple: What looks like a straightforward REST API in docs becomes a debugging nightmare with streaming vs non-streaming responses, model requirements, and caching edge cases.
  • Structured data beats prompt hacking: Forcing Twelve Labs and OpenAI to return strict JSON schemas with Pydantic validation saved hours of debugging malformed responses.
  • Real data is messy: Sample CSVs don't have the edge cases. Production data has empty columns, mismatched types, inconsistent naming. Parsing robustly requires defensive coding and extensive validation.
  • Caching is critical for AI pipelines: Without smart caching, every iteration would re-upload videos (5-10 min) making development impossible. Cache invalidation is hard but essential.

What's next for PulsePoint

Immediate (Post-Hackathon):

  • Visual heatmap showing which timestamps have highest divergence across segments
  • A/B variant generator that automatically creates localized cuts based on recommendations
  • Cultural rules database expansion beyond hardcoded examples
  • Historical accuracy tracking to refine lift projections
