Inspiration

The average professional spends 31 hours per month in unproductive meetings. 71% of meetings are considered unproductive according to Harvard Business Review. The #1 reason? Conversational looping — people restating the same positions in slightly different words, believing they're making progress while going nowhere. Nobody notices loops while they're happening. Tools like Otter.ai and Microsoft Copilot answer "what was said?" but nothing answers "where is this conversation stuck and why?" We built LoopBreaker to fill that gap.

What it does

LoopBreaker is an AI-powered conversation debugger with 24 features:

  • Semantic Loop Detection — sentence embeddings detect when the same argument repeats in different words (semantic matching, not keyword matching)
  • Loop Pattern Classification — categorizes loops as Deadlock, Ping-Pong, Escalation, Avoidance, or Echo Chamber
  • AI Interventions — Groq LLaMA 3.3 diagnoses the unstated assumption and generates a loop-breaking question
  • Meeting Cost Calculator — shows the dollar cost of loops ("$187 wasted, $15.6k/year projected")
  • Conversation DNA Fingerprint — unique radial visualization where each ring is a turn, creating a visual fingerprint per conversation
  • Debate Scoreboard — grades each speaker A-F on idea novelty vs repetition
  • Emotional Escalation Timeline — tracks frustration, defensiveness, and urgency over time
  • Audio Upload — upload MP3/WAV files, Groq Whisper transcribes, then analyzes for loops
  • Live Meeting Mode — real-time transcription via Web Speech API with loop alerts every 30 seconds
  • Conversation Comparison — compare two conversations side-by-side (before/after coaching)
  • Similarity Heatmap, Speaker Analytics, Conversation Replay, PDF Export, Dark/Light Theme, and more
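To make the Meeting Cost Calculator concrete, here is a minimal sketch of the kind of arithmetic behind it. The formula (attendees × blended hourly rate × minutes spent in loops, projected over a year) is our assumption for illustration; the writeup doesn't spell out the actual one, and the rate and meeting-frequency defaults are hypothetical.

```python
# Hypothetical sketch of the Meeting Cost Calculator logic.
# Assumed formula: attendees x blended hourly rate x time spent in loops,
# projected over 52 weeks. Defaults are illustrative, not from the app.

def loop_cost(loop_minutes: float, attendees: int, hourly_rate: float = 75.0) -> float:
    """Dollar cost of the time spent looping in one meeting."""
    return attendees * hourly_rate * (loop_minutes / 60.0)

def yearly_projection(cost_per_meeting: float, meetings_per_week: int = 2) -> float:
    """Naive projection: assume the same waste recurs every meeting, all year."""
    return cost_per_meeting * meetings_per_week * 52

# e.g. 6 attendees looping for 25 minutes at $75/hr -> $187.50 wasted
wasted = loop_cost(loop_minutes=25, attendees=6)
```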

How we built it

Backend: Python with FastAPI, managed by uv. The core pipeline uses sentence-transformers (all-MiniLM-L6-v2) for semantic embeddings, scikit-learn for cosine similarity matrices, and a union-find algorithm for loop cluster detection. Groq API powers the AI diagnosis (LLaMA 3.3 70B) and audio transcription (Whisper). Tone analysis uses regex pattern matching across 5 emotional dimensions.
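The embed → similarity → union-find pipeline described above can be sketched as follows. In the real app the vectors come from sentence-transformers (all-MiniLM-L6-v2); here toy vectors stand in so the clustering logic runs stand-alone, and the 0.72 threshold is the one from the writeup.

```python
import math

# Sketch of the loop-detection core: pairwise cosine similarity over turn
# embeddings, then union-find to merge similar turns into loop clusters.
# Toy vectors replace the all-MiniLM-L6-v2 embeddings used in the real app.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class UnionFind:
    def __init__(self, n: int):
        self.parent = list(range(n))

    def find(self, i: int) -> int:
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i: int, j: int) -> None:
        self.parent[self.find(i)] = self.find(j)

def loop_clusters(embeddings: list[list[float]], threshold: float = 0.72) -> list[list[int]]:
    """Group conversation turns whose pairwise similarity exceeds the threshold."""
    n = len(embeddings)
    uf = UnionFind(n)
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                uf.union(i, j)
    clusters: dict[int, list[int]] = {}
    for i in range(n):
        clusters.setdefault(uf.find(i), []).append(i)
    # Only clusters with 2+ turns represent repetition (a potential loop)
    return [turns for turns in clusters.values() if len(turns) > 1]
```

Union-find keeps the clustering near-linear after the O(n²) similarity pass, which is what makes re-running it on every live-mode window cheap.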

Frontend: React 19 with TypeScript, Vite 8, Tailwind CSS v4, Zustand for state management, and Framer Motion for animations. All visualizations (heatmap, path diagram, DNA fingerprint, tone timeline) are rendered with the Canvas API. Live meeting mode uses the Web Speech API for browser-native transcription.

Key technical decisions:

  • Volume-based similarity threshold (0.72) tuned specifically for conversational repetition
  • Union-find clustering to group related loop segments efficiently
  • Groq for ultra-fast inference (~200ms) enabling real-time diagnosis
  • Groq is optional — the app fully functions without an API key
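The "Groq is optional" decision amounts to graceful degradation: detection and analytics are pure NLP/statistics, and the LLM only adds the diagnosis layer. A minimal sketch of that fallback, with hypothetical function names (not the actual codebase):

```python
import os

# Hypothetical sketch of graceful degradation around the AI diagnosis layer.
# Loop detection itself never touches the network; only diagnosis does.

def diagnose_loop(loop_text: str) -> str:
    api_key = os.environ.get("GROQ_API_KEY")
    if not api_key:
        # No key: fall back to a template summary instead of failing
        return "Loop detected: the same position keeps repeating. (Set GROQ_API_KEY for AI diagnosis.)"
    return call_groq_llm(api_key, loop_text)  # LLaMA 3.3 70B diagnosis

def call_groq_llm(api_key: str, prompt: str) -> str:
    # Network call omitted in this sketch
    raise NotImplementedError
```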

Challenges we ran into

  • Threshold tuning — set too high, real loops were missed; set too low, normal conversation was flagged. We solved this by making the threshold user-adjustable with a live slider so users can tune sensitivity themselves.
  • Speaker attribution — inconsistent speaker names (e.g., "Alice (PM)" vs "Alice") split the analysis. We standardized the transcript parser to handle this.
  • Real-time performance — running embeddings + similarity on every 30-second window in live mode needed careful optimization. We used a lightweight model (MiniLM) that runs in milliseconds.
  • Making loops visible — the hardest UX challenge was making an abstract concept (semantic repetition) feel tangible. The path visualizer with curved arcs was the breakthrough — you literally see the conversation going backward.
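The speaker-attribution fix above boils down to normalizing names before aggregating. A sketch of that normalization, assuming role annotations appear as a parenthetical suffix ("Alice (PM)"); the real transcript parser's rules may differ:

```python
import re

# Sketch of speaker-name normalization so "Alice (PM)", "alice", and
# " Alice " all attribute to the same speaker. Assumes role annotations
# are a trailing parenthetical; the actual parser may handle more cases.

def normalize_speaker(name: str) -> str:
    name = re.sub(r"\s*\([^)]*\)\s*$", "", name)  # strip trailing "(PM)" etc.
    return name.strip().title()                    # unify whitespace and casing
```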

Accomplishments that we're proud of

  • 24 features shipped in a hackathon — from core NLP to polished UI with skeleton loaders, toasts, keyboard shortcuts, and PDF export
  • The "aha moment" — when you see the loop arcs in the path visualizer, you instantly understand what went wrong. Nobody has visualized conversations this way before.
  • Meeting Cost Calculator — turning loops into dollar amounts makes the problem impossible to ignore
  • Conversation DNA — a completely novel visualization that gives every conversation a unique fingerprint
  • It works without AI — loop detection, heatmaps, speaker analytics, and tone tracking all function without a Groq API key. AI adds diagnosis on top but isn't required.

What we learned

  • Semantic similarity is surprisingly effective at detecting conversational repetition — people rarely use the exact same words, but embeddings catch the meaning
  • Union-find is the perfect data structure for clustering related conversation segments
  • Making a technical tool feel intuitive requires multiple visualization approaches — heatmaps work for data people, path diagrams work for visual thinkers, and the cost calculator works for executives
  • Groq's inference speed makes real-time AI analysis genuinely viable — sub-200ms responses enable live meeting mode

What's next for LoopBreaker

  • Browser extension — analyze Slack, Teams, and Discord conversations in-place without copy-pasting
  • Zoom/Teams plugin — real-time loop detection directly inside video calls
  • Historical trends — track loop patterns across meetings over time to measure improvement
  • Team analytics — aggregate loop data across an organization to identify systemic communication issues
  • Therapy/counseling mode — specialized pattern detection for therapeutic conversations and couples counseling
  • Multi-language support — detect loops in any language using multilingual embeddings
  • B2B SaaS — the meeting productivity tools market is $18.7B; LoopBreaker creates a new sub-category within it
