Inspiration

While building and selling enterprise Customer Success tools, we repeatedly encountered the same hesitation from customers.

They were willing to grant software access to CRMs and internal systems, but AI raised new concerns:

  • Employees using unsanctioned (“shadow”) AI tools
  • Prompts leaking IP or PII
  • No visibility into how AI was actually being used

AI adoption was already happening, but governance and visibility were missing. Sentient-X was built on the belief that AI governance must be continuous, autonomous, and enterprise-grade.


What it does

Sentient-X is an autonomous enterprise AI governance system.

It continuously monitors live AI tool usage, enriches events with enterprise context, detects risk, and responds in real time — without requiring human intervention.

It:

  • Monitors AI tool usage across the organization
  • Classifies licensed vs. shadow AI usage
  • Detects potential PII and sensitive data exposure
  • Autonomously creates governance incidents
  • Explains decisions using an AI agent with live system context

The system runs as a closed loop: observe → reason → act → explain.


How we built it

We focused on real enterprise integrations and production realism.

  • Auth0 for employee, department, and organization lookup
  • AWS Lambda for telemetry ingestion and forwarding
  • Claude as the MCP-style reasoning agent answering governance questions using live context
  • Tonic Fabricate to generate synthetic telemetry logs for safe, repeatable demos
  • Deterministic replay via a hidden admin control plane
  • Fallback paths so the demo never depends on external availability

The UI reflects system state, but governance decisions are driven by the agent.
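As a rough sketch of the ingestion path (the handler shape is standard AWS Lambda; the in-memory directory is a hypothetical stand-in for the Auth0 lookup, and the fallback mirrors our never-drop-an-event rule):

```python
import json

# Hypothetical stand-in for an Auth0 employee directory lookup
EMPLOYEE_DIRECTORY = {
    "jdoe": {"department": "Customer Success", "org": "acme"},
}

def enrich(event: dict) -> dict:
    """Attach employee, department, and org context to a raw telemetry event."""
    identity = EMPLOYEE_DIRECTORY.get(event.get("employee"))
    if identity is None:
        # Fallback path: never drop the event; flag it for review instead
        return {**event, "department": "unknown", "org": "unknown", "enriched": False}
    return {**event, **identity, "enriched": True}

def lambda_handler(event, context):
    """Lambda-style entry point: parse, enrich, and forward telemetry."""
    record = json.loads(event["body"])
    enriched = enrich(record)
    # Forwarding to the governance agent would happen here
    return {"statusCode": 200, "body": json.dumps(enriched)}

resp = lambda_handler(
    {"body": json.dumps({"employee": "jdoe", "tool": "claude-team"})}, None
)
```

Keeping enrichment a pure function over the event made deterministic replay straightforward: the hidden admin control plane can re-feed recorded events and get identical incidents.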


Challenges we ran into

  • Designing autonomy without unpredictability
  • Ensuring identity enrichment always mapped to real employees
  • Preventing synthetic data from breaking governance logic
  • Keeping the system explainable while acting independently

Accomplishments that we're proud of

  • Built a fully autonomous AI governance loop
  • Integrated real enterprise identity and telemetry systems
  • Demonstrated real-time PII and policy risk detection
  • Delivered a stable, deterministic, demo-safe system
  • Made the AI agent the decision-maker, not just an assistant

What we learned

  • Enterprises need visibility before governance
  • AI governance must run continuously, not as audits
  • Context (employee, org, domain) is critical to meaningful risk detection
  • Autonomy only works with strong guardrails and explainability

What's next for Sentient-X

  • Self-improving governance agents that learn what constitutes safe vs. unsafe AI usage
  • Policies and risk models tailored to agent behavior, not just human users
  • Deeper integrations with enterprise security tools (e.g., secure web gateways)
  • More advanced ML-based behavioral and anomaly detection
  • Domain- and business-specific PII models

Our goal is simple: AI governance that runs itself, even as AI agents become first-class users inside enterprises.
