SENTRY
SENTRY (Secure ENgine for Trusted RAG Yield) makes sure your AI assistant only sees what you’re allowed to see — nothing more, nothing less — and always explains why.
Inspiration
With more companies using Retrieval-Augmented Generation (RAG) systems, we noticed a big worry: privacy.
RAG helps AI give more accurate, knowledge-based answers, but it can also:
- Accidentally mix up data between tenants
- Reveal sensitive information in responses
- Be tricked by malicious prompt injections
- Have embeddings manipulated by adversarial inputs
We asked: How can we make RAG safe, transparent, and trustworthy—without slowing it down?
That question led to SENTRY — a privacy-first gateway for RAG systems.
What it does
SENTRY makes sure your AI only sees what it’s allowed to see. It:
- Blocks unsafe or malicious queries
- Filters out sensitive data automatically
- Enforces access rules by user or tenant
- Keeps a full audit trail to show why every decision was made
In short: your AI stays smart, but now it’s also safe.
How we built it
We created SENTRY with a lightweight, modular design:
- React frontend for easy interaction
- FastAPI backend with SQLite for storage
- FAISS + Sentence Transformers for fast vector search
- Embedding scrambling and encryption for privacy
- Git/GitHub for version control and collaboration
SENTRY works as a middleware that sits between your queries and your RAG system, enforcing rules at every step.
Challenges we ran into
- Balancing privacy vs. answer quality: showing less context reduces exposure but can make answers less complete.
- Keeping security lightweight: too many checks can slow down real-time responses.
- Handling complex enterprise access rules without making the system hard to use.
Accomplishments we’re proud of
- Built dual-layer gates to protect queries and retrieved data
- Policy-driven filtering that works out of the box
- Transparent logging so every action can be traced for compliance
- Smooth integration without slowing down the RAG workflow
What we learned
- Privacy matters as much as AI performance: protecting data is a core feature, not an afterthought
- Trade-offs are inevitable: the more we protect, the more careful we must be about context
- Lightweight security works best: it’s easy to add, and it doesn’t frustrate users
What’s next for SENTRY
- Expand support for more AI platforms and vector stores
- Improve context minimization techniques without losing answer quality
- Add real-time alerting for suspicious queries
- Continue refining privacy and safety policies
Log in or sign up for Devpost to join the conversation.