Inspiration

During online meetings, we noticed that people often accidentally leak sensitive information — passwords, client details, project names — just by speaking naturally. In fact, over 6% of all data leaks happen due to human error, often through slips in conversation. We wanted to fix that problem in real time — not after the fact. That’s why we built TakeBack, an AI-powered Chrome extension that predicts when you’re about to say something sensitive and mutes your mic automatically, protecting both individuals and organizations from unintentional voice-based leaks.

What it does

TakeBack continuously listens to your speech during virtual meetings (like Google Meet or Zoom), transcribes it locally, and uses predictive AI to detect high-risk phrases or patterns. If it senses you’re about to say something confidential — even before you finish — it instantly mutes your microphone and shows a clean, subtle visual alert. The AI understands context, not just keywords, meaning it can catch risky intent like “let me tell you my…” before you reveal something sensitive. Everything runs entirely on your device, ensuring full privacy and zero data transmission.
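To make the "catch risky intent before the phrase finishes" idea concrete, here is a minimal sketch of a prefix-plus-keyword check on an interim transcript. The phrase lists and function names are illustrative assumptions, not TakeBack's actual model:

```javascript
// Hypothetical sketch (not TakeBack's real classifier): flag both explicit
// keywords and "about to disclose" prefixes, so the mic can be cut before
// the sensitive value itself is spoken.
const RISK_KEYWORDS = ["password", "api key", "client list"]; // example terms
const INTENT_PREFIXES = [
  /let me tell you my\b/i,    // speaker is about to reveal something
  /the password is\b/i,
  /our client'?s name is\b/i,
];

function looksRisky(interimTranscript) {
  const text = interimTranscript.toLowerCase();
  if (RISK_KEYWORDS.some((kw) => text.includes(kw))) return true;
  return INTENT_PREFIXES.some((re) => re.test(interimTranscript));
}
```

In practice a semantic model replaces the hard-coded lists, but the shape is the same: the check runs on every partial transcript update, not just completed sentences.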

How we built it

We built TakeBack as a Manifest V3 Chrome extension using JavaScript and ONNX Runtime Web, integrating the Web Speech API for real-time transcription of live speech. We used the TinyBERT transformer model (quantized for browser performance) to perform local inference and predict risky intent; combining keyword detection with semantic embeddings let us catch both explicit and implicit data leaks. Looks matter too: a sleek black-and-purple overlay UI gives subtle feedback — no harsh alerts, just smooth visual cues. For control, we made a settings pop-up where users can toggle predictive muting, adjust sensitivity, and view recent mute events.
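One way the "keyword detection plus semantic embeddings" fusion could work is sketched below. The thresholds, scoring rule, and helper names are assumptions for illustration; in the extension, the embeddings would come from the quantized TinyBERT model running under ONNX Runtime Web:

```javascript
// Illustrative fusion of explicit and semantic signals (made-up scoring).
// Embedding vectors are assumed to be precomputed by the on-device model.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A keyword hit is decisive; otherwise fall back to the closest match
// against embeddings of known risky phrases.
function riskScore({ keywordHit, embedding, riskyEmbeddings }) {
  if (keywordHit) return 1.0;
  return Math.max(...riskyEmbeddings.map((r) => cosineSimilarity(embedding, r)));
}
```

The semantic path is what lets the system catch implicit leaks ("the thing we use to log in is...") that a pure keyword list would miss.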

Challenges we ran into

- Latency: early versions muted only after a word had been spoken. We re-engineered the pipeline to process interim transcripts for real-time muting.
- Noisy feedback: our initial "beep" alert was intrusive, so we replaced it with animated visual pulses.
- Semantic accuracy: teaching the model to detect intent rather than keywords was tricky; it required fine-tuning thresholds and optimizing embeddings.
- Chrome Manifest V3 restrictions: the move to service workers forced us to restructure background scripts and event handling.
- UI logic: making every button in the extension (e.g., the predictive-mode toggle and the sensitivity slider) fully functional across content and popup scripts took time and debugging.
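The interim-transcript fix for the latency problem can be sketched as follows. The `classify` and `muteMicrophone` callbacks and the threshold are placeholders standing in for the extension's ONNX classifier and mic-control code:

```javascript
// Sketch of acting on interim (partial) speech results instead of waiting
// for the final transcript. `classify` and `muteMicrophone` are placeholders
// injected by the caller.
function makeInterimHandler(classify, muteMicrophone, threshold = 0.8) {
  let muted = false;
  return function onSpeechResult(transcript, isFinal) {
    if (!muted) {
      const score = classify(transcript); // runs on every partial update
      if (score >= threshold) {
        muted = true;
        muteMicrophone(); // cut the mic before the phrase finishes
      }
    }
    if (isFinal) muted = false; // re-arm for the next utterance
  };
}
```

Because the handler fires on every partial result, the mute can land mid-word rather than a full utterance later, which is where the seconds-to-milliseconds latency gain comes from.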

Accomplishments that we're proud of

We built a fully working predictive AI mute system that runs entirely locally and cut muting latency from seconds to under 100 ms. We designed a clean, professional overlay that raises user awareness without distracting, and we developed an architecture that fuses semantic AI with live speech recognition inside the browser — something rarely achieved in real time.

What we learned

- How to run transformer models (TinyBERT) efficiently in the browser with ONNX Runtime Web.
- How to merge real-time speech recognition with AI-based inference for predictive decision-making.
- The importance of UX subtlety: clean, non-intrusive cues drastically improve user trust.
- The value of edge AI: running everything locally protects privacy while maintaining responsiveness.
- Collaboration, debugging event-driven systems, and optimizing for human speech variation were all valuable learning experiences.

What's next for TakeBack

- Fine-tuning the AI on real conversational datasets to better predict intent and tone.
- Adding voice emotion analysis (e.g., stress or hesitation cues) for deeper leak prediction.
- Building a dashboard for enterprises to monitor aggregate leak-prevention statistics without recording audio.
- Extending compatibility to Microsoft Teams and Discord.
- Creating a mobile version for on-the-go voice protection.
- Long-term: evolving TakeBack into a real-time privacy assistant for both corporate and personal communication.

Built With

  • chromeextension
  • chromestorageapi
  • css
  • domapi
  • hashmap
  • html
  • javascript
  • minilm-l6-v2
  • onnxruntime
  • python
  • pytorch(offline)
  • shell
  • webaudioapi
  • webrtcapi
  • webspeechapi