Inspiration
My dad works in national security, where the use of LLMs like OpenAI's ChatGPT has been banned since 2022, drastically limiting the productivity of all employees.
What it does
Placeholder.AI intercepts prompts containing sensitive data, scrubs PII before it reaches any LLM, then re-identifies the response so employees get full AI output with zero data exposure.
How we built it
We built a privacy proxy layer that integrates via the MCP standard, using pattern-matching and configurable policy rules to detect and replace sensitive fields in real time.
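As a minimal sketch of the scrub-and-restore idea (the policy rules, placeholder format, and function names here are illustrative assumptions, not our production code), sensitive fields are swapped for numbered placeholder tokens on the way in, and the mapping is kept locally so the response can be re-identified on the way out:

```python
import re

# Hypothetical policy rules: each maps a PII category to a regex.
# Real deployments would load these from configurable policy files.
POLICIES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str):
    """Replace sensitive fields with numbered placeholders, returning
    the scrubbed prompt and a local mapping for re-identification."""
    mapping = {}
    counter = 0

    def substitute(category):
        def repl(match):
            nonlocal counter
            token = f"<{category}_{counter}>"
            counter += 1
            mapping[token] = match.group(0)  # original value never leaves the proxy
            return token
        return repl

    for category, pattern in POLICIES.items():
        prompt = pattern.sub(substitute(category), prompt)
    return prompt, mapping

def restore(response: str, mapping: dict) -> str:
    """Swap placeholders in the LLM's response back to the originals."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response
```

Because only the placeholder tokens are ever sent upstream, the model sees `<EMAIL_0>` instead of the real address, and the proxy alone holds the mapping needed to reverse the substitution.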
Challenges we ran into
The hardest problem was ensuring re-identification accuracy after LLM processing, since models sometimes rephrase or reorder content in ways that break naive placeholder mapping.
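One mitigation for this, sketched below with hypothetical names and a made-up mapping, is to make restoration order-independent (look placeholders up by value, not by position) and tolerant of small formatting drift the model may introduce around the tokens:

```python
import re

# Hypothetical mapping produced during scrubbing.
mapping = {"NAME_0": "Alice Smith", "NAME_1": "Bob Jones"}

def restore(response: str, mapping: dict) -> str:
    """Re-identify placeholders by value rather than position, and
    tolerate extra whitespace the model may insert inside a token."""
    for key, original in mapping.items():
        # Matches "<NAME_0>", "< NAME_0 >", etc.
        response = re.sub(rf"<\s*{key}\s*>", original, response)
    return response

# The model reordered and reformatted the placeholders while rephrasing:
reordered = "< NAME_1 > received a note from <NAME_0>."
print(restore(reordered, mapping))
# Bob Jones received a note from Alice Smith.
```

Unique, self-describing tokens are what make this work: as long as the token survives in any recognizable form, it can be mapped back regardless of where the model moved it.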
Accomplishments that we're proud of
We got the full scrub-and-restore pipeline working end-to-end inside Claude via MCP in a single hackathon session, with zero sensitive data ever touching the model.
What's next for Placeholder.AI
We're expanding policy configurability for industry-specific regulations like HIPAA and GDPR, and building a dashboard so compliance teams can audit exactly what was scrubbed and when.