Inspiration

Modern digital platforms increasingly rely on fear, outrage, and urgency to capture attention. During global crises, elections, and even daily news consumption, we observed how emotionally manipulative language quietly influences decisions without users realizing it. This inspired us to build MindGuard, a tool that gives people cognitive autonomy while consuming digital content.

What it does

MindGuard runs entirely offline on a user's device and analyzes on-screen text in real time. It detects emotional manipulation techniques such as fear amplification, urgency bias, moral pressure, and dark persuasion patterns, then warns users instantly, without blocking content or censoring speech.
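To illustrate the detection categories above, here is a minimal keyword-based sketch. The cue lists are hypothetical examples for illustration only; the actual system uses a learned model rather than pattern matching:

```python
import re

# Hypothetical cue patterns per manipulation category (illustrative only;
# MindGuard itself uses a quantized reasoning model, not keyword lists).
CUES = {
    "fear_amplification": [r"\bdevastating\b", r"\bcatastrophe\b", r"\bterrifying\b"],
    "urgency_bias": [r"\bact now\b", r"\blast chance\b", r"\bbefore it'?s too late\b"],
    "moral_pressure": [r"\bif you really cared\b", r"\bany decent person\b"],
}

def detect_manipulation(text: str) -> list[str]:
    """Return the manipulation categories whose cues appear in the text."""
    lowered = text.lower()
    return [
        category
        for category, patterns in CUES.items()
        if any(re.search(p, lowered) for p in patterns)
    ]
```

For example, `detect_manipulation("Act now! This is your last chance.")` would flag `urgency_bias`, while neutral text returns an empty list, which is what lets the system warn without blocking anything.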

How we built it

We built MindGuard using on-device OCR and text capture, followed by a quantized reasoning model for manipulation detection. A lightweight inference pipeline ensures low latency while preserving privacy. All analysis happens locally, with no cloud access or data logging.
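The pipeline described above can be sketched structurally as follows. The OCR capture and quantized-model inference are stubbed out as hypothetical callables, since the point here is the local-only data flow, not the model itself:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    category: str
    excerpt: str
    explanation: str

def run_pipeline(
    capture_text: Callable[[], str],       # stand-in for on-device OCR / text capture
    classify: Callable[[str], list[str]],  # stand-in for quantized-model inference
) -> list[Alert]:
    """Capture on-screen text, classify it locally, and emit explainable alerts.

    Both callables run in-process: there are no network calls and no
    logging anywhere on this path, so no data ever leaves the device.
    """
    text = capture_text()
    return [
        Alert(
            category=category,
            excerpt=text[:80],
            explanation=f"Detected cues associated with {category.replace('_', ' ')}.",
        )
        for category in classify(text)
    ]
```

Keeping the capture and inference stages behind plain callables is one way to keep the inference pipeline lightweight and swappable while preserving the privacy guarantee at the architectural level.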

Challenges we ran into

Balancing accuracy with real-time performance on mobile hardware was challenging. We also had to carefully avoid false positives while keeping the system explainable and non-judgmental.

Accomplishments that we're proud of

  • Achieved real-time manipulation detection fully offline
  • Zero data collection or tracking
  • Clear, explainable alerts that respect user autonomy

What we learned

We learned that ethical AI is not just about what models do, but where they run and who controls them. Offline-first design significantly changes trust dynamics.

What's next for MindGuard

We plan to expand manipulation detection to audio and video, add user-adjustable sensitivity controls, and conduct academic validation studies with psychologists and media researchers.
