SafeScreen: Intelligent & Adaptive Media Safety for Families

What Inspired Us

As media becomes more accessible, parents face a growing challenge: general age ratings (like PG-13 or R) are too broad and fail to account for a child's unique sensitivities. A movie might be rated PG-13 for language, but contain a specific kind of intense emotional grief that deeply triggers a particular 8-year-old. We saw projects like Skipit doing a fantastic job of identifying video triggers on the fly, and we wanted to take that concept a step further. Instead of just reacting to triggers as they happen, what if we could proactively build a holistic, psychologically aware viewing plan before the movie even starts? We were inspired to build a system that doesn't just block content, but facilitates healthy, structured media consumption, empowering parents to navigate intense scenes through co-viewing, planned intermissions, and personalized decompression.

How We Built It

SafeScreen is structured around a modern, AI-first architecture designed for deep contextual understanding.

  • Frontend: Built with React and Vite, featuring a responsive, family-friendly UI.
  • Backend: A highly concurrent FastAPI (Python) server handling data routing and AI orchestration.
  • Database: Supabase (PostgreSQL) for secure profile storage, movie metadata, and caching LLM analysis to ensure fast load times.
  • AI Integration (Google Gemini):
  • We use Gemini 2.5 Pro to parse full movie SRT transcripts, identifying distinct narrative scenes, generating exact timestamps, and assigning specific content flags (e.g., violence, bullying, loud sensory moments).
    • We use Gemini for the Adaptive Profile Questionnaire, creating a conversational agent that deduces nuance (e.g., "How does your child handle slapstick vs. realistic violence?").
    • Finally, we cross-reference the parsed movie data against the child's profile using Gemini to generate the custom Viewing Plan (mute, skip, co-view) and use Gemini 2.5 Flash to suggest personalized YouTube calming videos post-watch.
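The plan that comes out of this cross-referencing step can be pictured as a small data model along these lines. This is a hypothetical sketch using stdlib dataclasses; the field names, action labels, and example values are illustrative, not SafeScreen's actual schema.

```python
# Hypothetical shape of a generated viewing plan (illustrative names only).
from dataclasses import dataclass
from typing import Literal

Action = Literal["mute", "skip", "co-view", "pause-and-prompt"]

@dataclass
class PlanEntry:
    start: str          # SRT-style timestamp, e.g. "00:42:10,500"
    end: str
    flags: list[str]    # e.g. ["violence", "loud sensory"]
    action: Action      # what the parent/player should do for this scene
    note: str           # parent-facing guidance generated by the LLM

@dataclass
class ViewingPlan:
    movie_id: str
    child_profile_id: str
    entries: list[PlanEntry]

# Example instance (placeholder IDs).
plan = ViewingPlan(
    movie_id="tt0000000",
    child_profile_id="child-1",
    entries=[PlanEntry("00:42:10,500", "00:44:02,000",
                       ["intense emotional grief"], "co-view",
                       "Watch together and check in afterwards.")],
)
```

Typed structures like this also make it easy to reject malformed LLM output at the backend boundary before it ever reaches the frontend.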
Challenges We Ran Into

  • Contextual Transcript Analysis: SRT files are just raw text and timestamps. Getting the LLM (Gemini) to accurately group disjointed subtitle lines into cohesive "scenes" without losing timestamp accuracy took heavy prompt engineering. We also had to implement a chunking strategy to keep the LLM within its context limits while maintaining narrative flow.
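A chunking strategy of this kind can be sketched roughly as follows. This is a minimal illustration assuming simple cue parsing and a character budget; the real pipeline, helper names, and limits may differ.

```python
import re

def parse_srt(text: str) -> list[dict]:
    """Split raw SRT text into cues of {start, end, text}."""
    cues = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        m = re.match(r"(\S+)\s*-->\s*(\S+)", lines[1])
        if not m:
            continue
        cues.append({"start": m.group(1), "end": m.group(2),
                     "text": " ".join(lines[2:])})
    return cues

def chunk_cues(cues: list[dict], budget: int = 8000) -> list[list[dict]]:
    """Group consecutive cues into chunks that fit a context budget,
    so each LLM call sees an unbroken run of dialogue."""
    chunks, current, size = [], [], 0
    for cue in cues:
        n = len(cue["text"])
        if current and size + n > budget:
            chunks.append(current)
            current, size = [], 0
        current.append(cue)
        size += n
    if current:
        chunks.append(current)
    return chunks
```

Chunking on cue boundaries (rather than raw character offsets) keeps every timestamp intact, which matters later when validating the LLM's output against the source file.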
  • Deterministic vs. Generative Logic: Initially, we tried using hard-coded rules to decide if a scene should be skipped. However, human emotion is nuanced. Transitioning the viewing plan generation from strict rule-based logic to an LLM-driven approach allowed us to generate much more empathetic, context-aware advice (like suggesting "pause and prompt" for complex themes rather than just blindly skipping them).
  • Handling Hallucinations: When generating exact JSON arrays of timestamps and flags, early LLM iterations would sometimes invent timestamps that didn't exist in the SRT. We overcame this by implementing a robust parsing and normalization layer in our FastAPI backend that validates every timestamp against the source file.

What We Learned
  • Prompt Engineering as Code: We learned to treat LLM prompts like fragile but powerful code APIs: they require strict typing, fallback mechanisms, and robust JSON parsing. We also learned how to gracefully recover in the backend when the LLM returns slightly malformed markdown.
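The recovery and validation logic can be sketched along these lines. The helper names are illustrative and the production parser is more involved, but the two ideas are the ones described above: strip markdown wrapping before parsing JSON, then reject any timestamps the source SRT never mentions.

```python
import json
import re

def extract_json(raw: str):
    """Recover a JSON payload from an LLM reply that may wrap it in
    markdown fences or surrounding chatter."""
    # Prefer the contents of a fenced ```json block if one is present.
    m = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    candidate = m.group(1) if m else raw
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Fallback: grab the outermost JSON array or object.
        m = re.search(r"(\[.*\]|\{.*\})", candidate, re.DOTALL)
        if m:
            return json.loads(m.group(1))
        raise

def validate_timestamps(entries: list, srt_timestamps: set) -> list:
    """Drop plan entries whose timestamps do not appear in the source SRT,
    filtering out hallucinated scenes."""
    return [e for e in entries
            if e["start"] in srt_timestamps and e["end"] in srt_timestamps]
```

A stricter variant could snap near-miss timestamps to the nearest real cue boundary instead of dropping the entry outright; exact-match filtering is just the simplest version of the idea.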
  • The Nuance of Media Safety: We learned that "safety" isn't binary. A big part of the learning curve was figuring out how to implement calming strategies and decompression metrics. Fast-forwarding past a scary scene isn't enough; sometimes the viewer just needs a 5-minute break with a box-breathing exercise before continuing.

What's Next for SafeScreen

Currently, SafeScreen works on pre-analyzed transcripts. Our next major step is building a browser extension that hooks directly into streaming platforms (like Netflix or Disney+). By intercepting the video player, we can automatically execute the SafeScreen viewing plan: muting the audio, applying CSS blur overlays, or pausing the video at the exact timestamps our API generated, creating a completely zero-touch parental control experience.

Built With

React, Vite, FastAPI (Python), Supabase (PostgreSQL), Google Gemini (2.5 Pro / 2.5 Flash)