Janus: Reclaiming Privacy in the Age of LLM Profiling
Inspiration
While traditional trackers follow what you do, Large Language Models (LLMs) track how you think. By analyzing syntax, vocabulary, and cross-session queries, models like ChatGPT can deduce your nationality, gender, age, and socioeconomic status with surgical precision, even if you never explicitly provide that information.
We built Janus to break this "High-Fidelity Profiling." Named after the two-faced Roman god of transitions, our tool allows users to project controlled, compartmentalized identities to AI, preventing the "Data Stitching" that leads to invasive personal dossiers.
What it does
Janus is a browser extension that acts as a privacy-preserving proxy between the user and the LLM.
- Persona Compartmentalization: Create and switch between distinct "Identities," preventing third-party chat apps from stitching together data about you across different contexts.
- Identity Injection: Janus intercepts user prompts to inject only relevant persona-specific context, ensuring the AI remains helpful without knowing the "whole" you.
- Inference Auditing: Our "Privacy Auditor" monitors AI responses in real-time. If the AI demonstrates it has inferred more than your active persona allowed, Janus alerts you immediately.
- Contextual Pollution: Users can trigger a "Pollution" routine, injecting divergent, false information into the chat context to disrupt the AI's internal profiling and ensure your true data remains "noisy."
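To illustrate the Identity Injection idea, prompt rewriting can be sketched as a pure function that prepends only the active persona's disclosed facts to the outgoing prompt. The `Persona` shape and function names below are hypothetical, not Janus's actual implementation (which rewrites prompts via OpenAI's SDK):

```typescript
// Hypothetical sketch of persona-scoped prompt rewriting.
// The Persona shape and injectPersona are illustrative, not Janus's real API.
interface Persona {
  name: string;
  // Only facts the user has explicitly allowed this identity to reveal.
  disclosedFacts: Record<string, string>;
}

function injectPersona(persona: Persona, userPrompt: string): string {
  const context = Object.entries(persona.disclosedFacts)
    .map(([key, value]) => `${key}: ${value}`)
    .join("; ");
  // Prepend only persona-approved context; the original intent is untouched.
  return `[Context: ${context}]\n${userPrompt}`;
}

const engineer: Persona = {
  name: "Engineer",
  disclosedFacts: { role: "software engineer", country: "Canada" },
};
const rewritten = injectPersona(engineer, "Help me tailor my resume.");
```

The key design point is that the function only ever sees the active persona's allow-list, so nothing from another identity can leak into the prompt.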
Use Case: Siloing the Professional and the Personal
Imagine you use an LLM at 10:00 AM to help tailor a resume for a Software Engineering role at a top firm, and then at 2:00 PM to research a sensitive chronic health symptom.
- Without Janus: The AI "stitches" these together, creating a permanent record that [Your Name] has [Specific Condition].
- With Janus: These sessions are siloed. The "Engineer" identity denies all health knowledge, and the "Health" identity projects a different persona entirely. If the AI tries to link them (e.g., "Since you're a busy developer, your symptoms might be stress-related"), Janus flags the leak and pollutes the context to break the link.
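One way to think about the leak detection in this scenario: the Auditor can compare the granularity of what a response asserts against what the active persona disclosed. The ladder-based check below is only a toy heuristic, not the actual Gemini-based Auditor, which performs semantic analysis:

```typescript
// Toy "Information Escalation" check: flags a response that asserts a fact
// at a finer granularity than the persona disclosed (e.g., a city when only
// a country was shared). The real Auditor does this semantically via the
// Gemini API; this hard-coded ladder is only a sketch.
const LOCATION_LADDER = ["continent", "country", "region", "city", "street"];

function escalates(disclosedLevel: string, assertedLevel: string): boolean {
  const disclosed = LOCATION_LADDER.indexOf(disclosedLevel);
  const asserted = LOCATION_LADDER.indexOf(assertedLevel);
  if (disclosed === -1 || asserted === -1) return false; // unknown level: don't flag
  return asserted > disclosed; // finer-grained than the persona allowed
}
```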
How we built it
- React Extension: Architected to inject a privacy layer directly into the ChatGPT/Gemini DOM for a seamless UX.
- Dual-Model Logic:
  - The Auditor: Uses the Gemini API to perform semantic analysis on host responses, detecting "Information Escalation" (e.g., the AI guessing a city when only a country was provided).
  - The Injector: Uses OpenAI's SDK to delicately rewrite prompts, adding persona context without breaking the user's original intent.
- DOM Monitoring: Implemented a robust MutationObserver system to track streaming messages and chat switches within dynamic React-based web apps.
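The DOM-monitoring piece can be sketched roughly as follows. The selector and callback are placeholders: real chat apps use obfuscated, version-dependent class names, so this is a shape-of-the-solution sketch rather than Janus's actual observer:

```typescript
// Rough sketch of watching a streaming chat transcript with MutationObserver.
// The [data-message-role] selector is a placeholder; ChatGPT/Gemini DOM
// structure is obfuscated and changes between releases.
function watchTranscript(
  root: Element,
  onMessageUpdate: (text: string) => void
): MutationObserver {
  const observer = new MutationObserver((mutations) => {
    for (const mutation of mutations) {
      // Streaming responses arrive as characterData edits and appended nodes.
      const target =
        mutation.type === "characterData"
          ? mutation.target.parentElement
          : (mutation.target as Element);
      const message = target?.closest('[data-message-role="assistant"]');
      if (message?.textContent) onMessageUpdate(message.textContent);
    }
  });
  // subtree + characterData is what makes token-by-token streaming visible.
  observer.observe(root, { childList: true, subtree: true, characterData: true });
  return observer;
}
```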
Challenges we ran into
The primary challenge was Semantic Drift. We had to ensure that when Janus "pollutes" a conversation with noise, it doesn't degrade the AI's helpfulness so much that the tool becomes unusable. Coordinating multiple LLM APIs required rigorous prompt engineering to manage the inherent randomness of generative outputs.
Accomplishments that we're proud of
We successfully built a functional "Adversarial Privacy" loop. Janus doesn't just block data; it actively defends the user's identity. We are particularly proud of our UI integration; Janus feels like a native feature rather than a third-party add-on.
What we learned
We learned how easily LLMs can perform "Deep Inference": synthesizing harmless fragments of data into a complete personal profile. Often, when we asked Claude and Gemini for examples, they generated strikingly invasive summaries of us! We also gained deep experience in browser security, specifically regarding DOM manipulation in high-complexity React Fiber trees.
What's next for Janus
- Local-First Privacy: Moving the Auditor and Filter to a locally hosted LLM (via Ollama) so your data never leaves your machine for auditing.
- Cross-Platform Support: Expanding support to Gemini, Claude, and other third-party providers, allowing seamless identity management across multiple AI platforms.
- Auto-Noise Generation: A background task that periodically sends randomized, harmless queries to the AI to keep your profile in a constant state of flux.
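The planned Auto-Noise routine could be as simple as sampling from a pool of innocuous, profile-diluting queries on a timer. The pool below and the sampler are purely illustrative of the planned feature:

```typescript
// Illustrative auto-noise sketch: periodically pick a harmless query to send
// so the provider-side profile stays "noisy". Pool contents are placeholders.
const NOISE_POOL = [
  "What is the capital of Mongolia?",
  "Explain how tides work.",
  "Give me a recipe for lentil soup.",
  "Summarize the plot of Moby-Dick.",
];

// The rand parameter is injectable so the sampler is deterministic in tests.
function pickNoiseQuery(pool: string[], rand: () => number = Math.random): string {
  return pool[Math.floor(rand() * pool.length)];
}
```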
