Inspiration

Trend dashboards are great at ranking movement, but teams still ask the same question:

"What actually caused this spike or drop?"

We wanted to make trend analysis actionable by adding explainability, not just analytics.

What it does

Our product adds an explainability layer on top of topic movement data.

  • Aggregates signals from YouTube, Reddit, X, Google Trends, and News
  • Computes weighted topic movement and likely contributing factors
  • Generates explanations that are concise yet comprehensive
  • Surfaces key drivers, dampeners, timeline evidence, and counterfactual impact
  • Adds a dedicated Market Context tab with stock-driven engagement analysis
  • Labels sentiment and confidence for stock-to-engagement relationships
  • Visualizes current-day stock movement for publicly listed entities (e.g., Uber)
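The explanation payload described above (drivers, dampeners, timeline evidence, counterfactual impact, confidence) can be sketched as a structured schema. Field names here are illustrative, not the product's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str     # e.g. "reddit", "youtube", "google_trends"
    timestamp: str  # ISO-8601 time of the signal
    summary: str    # one-line description of what happened

@dataclass
class TopicExplanation:
    topic: str
    movement: float  # weighted topic-movement score
    drivers: list[str] = field(default_factory=list)    # factors pushing the trend
    dampeners: list[str] = field(default_factory=list)  # factors pulling against it
    timeline: list[Evidence] = field(default_factory=list)
    counterfactual: str = ""  # e.g. "without factor X, movement would be ~Y"
    confidence: float = 0.0   # 0..1 confidence in the explanation
```

Keeping the output this structured is what lets the UI render drivers, dampeners, and timeline evidence as separate panels rather than one blob of prose.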

How we built it

  • Backend: FastAPI with async adapters and structured schemas
  • Data connectors: YouTube, Reddit, X, Google Trends, News RSS, Yahoo Finance
  • Scoring engine: semantic relevance + timing + engagement + reliability + direction alignment
  • Reasoning layer: LLM-powered summaries with deterministic automated reasoning fallback
  • Frontend: lightweight interactive UI with topic tabs, engagement chart, and market context panel
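The scoring bullet above combines five factors; one way to sketch that combination is a weighted sum signed by direction alignment, so that signals pointing against the movement dampen it rather than drive it. The weights and names below are hypothetical; the write-up does not specify the actual formula:

```python
from dataclasses import dataclass

@dataclass
class SignalScores:
    relevance: float    # semantic similarity of the signal to the topic, 0..1
    timing: float       # recency relative to the movement window, 0..1
    engagement: float   # normalized engagement volume, 0..1
    reliability: float  # source reliability prior, 0..1
    direction: float    # +1 if the signal agrees with the movement direction, -1 if not

# Hypothetical weights summing to 1; the real engine's weights aren't given.
WEIGHTS = {"relevance": 0.35, "timing": 0.20, "engagement": 0.25, "reliability": 0.20}

def contribution(s: SignalScores) -> float:
    """Signed evidence score: aligned signals add to the explanation's
    drivers, misaligned signals surface as dampeners."""
    base = (WEIGHTS["relevance"] * s.relevance
            + WEIGHTS["timing"] * s.timing
            + WEIGHTS["engagement"] * s.engagement
            + WEIGHTS["reliability"] * s.reliability)
    return s.direction * base
```

A signal that is maximal on every dimension and aligned with the movement contributes approximately +1; the same signal pointing the other way contributes approximately -1, which is how strict direction alignment keeps the ranking honest.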

Challenges we ran into

  • External APIs had uneven reliability and access constraints
  • Stock endpoints behaved inconsistently across quote vs chart APIs
  • Ranking can be misleading without strict direction alignment
  • Needed to keep output stable and demo-friendly under partial data failures
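One common pattern for staying demo-friendly under partial data failures is to run every connector concurrently and keep whatever succeeds, so a single flaky API degrades the analysis instead of breaking it. A minimal sketch with `asyncio` (the connector functions are stand-ins, not the project's real adapters):

```python
import asyncio

# Hypothetical stand-ins for external connectors; fetch_reddit simulates a flaky API.
async def fetch_reddit(topic: str) -> dict:
    raise TimeoutError("reddit unavailable")

async def fetch_trends(topic: str) -> dict:
    return {"source": "google_trends", "points": [3, 5, 9]}

async def gather_signals(topic: str) -> list[dict]:
    """Run all connectors concurrently; return_exceptions=True turns
    failures into values we can filter out instead of a raised error."""
    results = await asyncio.gather(
        fetch_reddit(topic), fetch_trends(topic), return_exceptions=True
    )
    return [r for r in results if not isinstance(r, Exception)]

signals = asyncio.run(gather_signals("uber"))
```

The downstream scoring and reasoning layers then operate only on the surviving sources, which is what keeps the output stable when one upstream API misbehaves.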

Accomplishments that we're proud of

  • Built a full end-to-end explainability MVP in hackathon time
  • Converted raw movement into structured, defensible narratives
  • Added market-aware reasoning without overwhelming the UI
  • Delivered a reliable, presentation-ready product despite external API volatility

What we learned

  • "Why" features need both scoring rigor and narrative clarity
  • Robust fallback architecture is essential for real-world demos
  • Small wording choices dramatically affect trust and interpretability
  • Explainability is most useful when paired with confidence and uncertainty

What's next for Explainable Insights for Forum

  • Stronger event-level sentiment and stance classification
  • Better causal inference signals beyond correlation
  • More entities and richer market proxies for private companies
  • Alerting and workflow integrations for product, growth, and research teams

Video Link

https://youtu.be/loPP5q8kpc8
