Private Investigator: Autonomous OAuth Security Agent

A sovereign AI security and digital hygiene assistant built for the modern connected life.

It audits connected accounts, detects stale OAuth access, surfaces risky permissions, discovers wasted subscriptions, and helps users safely clean up their digital footprint through trusted AI, clear explanations, and human-in-the-loop approval.


Inspiration

The inspiration for Private Investigator came from a simple but frustrating reality: people accumulate access over time, and almost none of it is visible.

Most users sign into tools, connect accounts, grant permissions, try a service once, and then move on. Months or years later, those same connections can still have access to email, cloud storage, GitHub repositories, calendars, chat systems, and payment-related data. The problem is not just clutter. It is silent authority.

That hidden authority creates a long tail of risk. It also creates confusion. People do not know which apps still have access, which permissions are broad, which subscriptions are still charging, or which integrations are safe to keep. The experience is fragmented across vendors, dashboards, and settings pages. Cleanup is tedious. Revocation is inconsistent. Trust erodes.

Private Investigator was inspired by the idea that AI should help users regain control, not lose it. If an AI agent is going to act on behalf of a person, then the system behind it must make authority visible, constrained, auditable, and reversible. The product should feel like a calm control center for connected digital life, not a black box.

That led us to a simple product thesis:

Users should be able to see what still has access, understand the risk, approve high-impact actions, and clean up safely in one place.

This project turns that thesis into a working product experience.


What it does

Private Investigator helps users understand and manage their connected digital accounts through a secure, AI-assisted workflow.

At a high level, it does four things:

1. Discovers connected accounts and permissions

The system scans connected services and surfaces apps, integrations, and delegated access. It identifies where the user has granted permissions, how broad those permissions are, and how long it has been since the app was last active.

2. Scores risk and explains it clearly

Each connected app or subscription is evaluated for risk based on factors such as permission breadth, write access, staleness, and potential blast radius. The product does not just show a score. It explains why something is risky in plain language.
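As a hedged sketch, the scoring idea above could look like the following. The factor names, weights, and thresholds are illustrative assumptions, not the shipped model; the key design point is that every increment carries a plain-language reason the UI can show.

```typescript
// Illustrative risk-scoring sketch. Factor names, weights, and
// thresholds are assumptions, not the production model.
type ConnectedApp = {
  name: string;
  scopes: string[];           // granted OAuth scopes
  hasWriteAccess: boolean;    // any scope that can modify data
  daysSinceLastUse: number;   // staleness signal
  reachesSharedData: boolean; // rough "blast radius" proxy
};

type RiskResult = { score: number; reasons: string[] };

function scoreRisk(app: ConnectedApp): RiskResult {
  let score = 0;
  const reasons: string[] = [];

  if (app.scopes.length > 5) {
    score += 25;
    reasons.push(`Broad access: ${app.scopes.length} scopes granted`);
  }
  if (app.hasWriteAccess) {
    score += 30;
    reasons.push("Can modify or delete data, not just read it");
  }
  if (app.daysSinceLastUse > 90) {
    score += 25;
    reasons.push(`Unused for ${app.daysSinceLastUse} days`);
  }
  if (app.reachesSharedData) {
    score += 20;
    reasons.push("Touches data shared with other people");
  }
  return { score: Math.min(score, 100), reasons };
}
```

Pairing each score contribution with a human-readable reason is what lets the product explain the number instead of just displaying it.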

3. Finds subscription waste and account sprawl

The product highlights recurring charges, dormant subscriptions, forgotten trials, and low-value services that continue billing in the background. It helps users separate what is useful from what is just lingering.

4. Cleans up safely with human control

When an action is low risk, the system can recommend safe cleanup. When the action is high risk, it requests approval before proceeding. That approval step is designed to preserve user trust and prevent destructive automation.

The result is a product that feels like a trusted digital investigator: it finds hidden access, shows the user what matters, and helps them take action safely.


How we built it

We built Private Investigator as a product-first AI security system with a strong emphasis on UX, trust, and delegation control.

Product architecture

The system is designed around a layered architecture:

  • Frontend for dashboards, filters, detail views, approvals, and reports
  • Backend API for audit orchestration, data access, policy enforcement, and revocation
  • AI/agent layer for scanning, reasoning, and recommending cleanup
  • Identity layer for delegated access and secure authentication
  • Approval layer for human-in-the-loop review of risky actions
  • Audit layer for logs, traceability, and reporting

This separation is important because the product must do more than “call an AI.” It must mediate authority safely.

Secure delegated access

Instead of storing broad credentials directly in the application, the product uses secure delegated access patterns so the agent can obtain only the permissions each task needs, without relying on long-lived secrets.
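As a minimal sketch of that pattern, the agent can exchange a stored refresh token for a short-lived access token narrowed to only the scopes the current audit step needs, using the standard OAuth 2.0 token endpoint parameters. The endpoint, client id, and scope names here are placeholders:

```typescript
// Delegated-access sketch: build a standard OAuth 2.0 refresh-token
// request (RFC 6749 §6) that narrows the scope to what this step needs.
// Client id and scope names are placeholders, not real values.
function buildTokenRequest(opts: {
  refreshToken: string;
  clientId: string;
  neededScopes: string[]; // must be a subset of the originally granted scopes
}): URLSearchParams {
  return new URLSearchParams({
    grant_type: "refresh_token",
    refresh_token: opts.refreshToken,
    client_id: opts.clientId,
    scope: opts.neededScopes.join(" "), // space-delimited per the OAuth spec
  });
}
```

The resulting access token is short-lived and scoped down, so even if the agent process is compromised, the blast radius is bounded by what that one step was allowed to do.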

Policy-driven control

We added guardrails to decide whether actions should be allowed, blocked, or escalated for approval. The policy layer is what keeps the system trustworthy. It ensures that automation does not become silent autonomy.
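A hedged sketch of that guardrail check, where every proposed action resolves to allow, escalate, or block (the field names and thresholds are assumptions, not the shipped policy engine):

```typescript
// Illustrative guardrail sketch: maps a proposed action to one of
// three verdicts. Thresholds and fields are assumptions.
type ProposedAction = {
  kind: "revoke_token" | "cancel_subscription" | "delete_account";
  riskScore: number;   // 0-100, produced by the risk model
  reversible: boolean; // can the user undo this later?
};

type Verdict = "allow" | "escalate" | "block";

function enforce(action: ProposedAction): Verdict {
  // Irreversible actions are never automated, regardless of score.
  if (!action.reversible) return "block";
  // Low-risk, reversible cleanup may proceed automatically.
  if (action.riskScore < 30) return "allow";
  // Everything else goes to the approvals inbox for a human decision.
  return "escalate";
}
```

The point of routing everything through one function like this is that "silent autonomy" becomes structurally impossible: there is no code path that performs a risky action without a recorded verdict.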

Human-in-the-loop approval

For sensitive operations, the product shifts from automation to confirmation. That makes the system suitable for real-world use cases where trust is more important than speed.

Product-grade UX

We also invested heavily in the user experience so the system feels polished and usable:

  • clean overview dashboard
  • apps and subscriptions pages
  • permissions matrix
  • policy editor
  • approvals inbox
  • reports and audit trail
  • settings and help pages
  • interactive filters, drawers, and modals

The goal was to make the app feel like a real control center rather than a prototype.

Tech story

The implementation is centered on modern web patterns with typed models, reusable services, clear separation of concerns, and a data flow that can scale from mock data to real integrations.

The final product feels like a secure SaaS platform for personal digital hygiene and AI-assisted cleanup.


Challenges we ran into

A project like this has a lot of moving parts, and the biggest challenge was not just technical. It was designing trust.

1. Balancing automation with control

The first challenge was deciding how much the agent should do automatically. If the system is too cautious, it becomes annoying. If it is too autonomous, it becomes risky.

We solved this by separating actions into clear trust tiers:

  • safe to surface automatically
  • safe to recommend
  • safe to execute after approval
  • unsafe without manual review

That allowed us to keep the product helpful without making it feel reckless.
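The four tiers above can be expressed as a small classifier; the inputs and cutoffs here are illustrative assumptions:

```typescript
// Illustrative trust-tier classifier for the four tiers described
// above. Inputs and thresholds are assumptions.
type TrustTier =
  | "surface"                // safe to show automatically
  | "recommend"              // safe to suggest as cleanup
  | "execute_with_approval"  // safe to run once a human approves
  | "manual_review";         // unsafe without manual review

function classify(riskScore: number, isDestructive: boolean): TrustTier {
  if (isDestructive && riskScore >= 70) return "manual_review";
  if (isDestructive) return "execute_with_approval";
  if (riskScore >= 40) return "recommend";
  return "surface";
}
```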

2. Making risk understandable

Security products often overwhelm users with jargon, dashboards, and red flags. We wanted a product that explained risk in human terms.

That meant translating permission scopes, stale access, and subscription waste into simple explanations. The challenge was to be accurate without becoming technical noise.

3. Designing for a fragmented problem

Connected-account sprawl is inherently fragmented. It spans many services, many permission systems, and many UX conventions. The challenge was to turn that fragmentation into one coherent product story.

We solved this by designing a unified visual language across all pages and by making the overview dashboard the central entry point.

4. Building a product that feels complete

Hackathon projects often look like demos. We wanted this one to feel like a complete product.

That meant adding the kinds of pages and flows that make an app feel real:

  • settings
  • help
  • activity history
  • reports
  • approval states
  • empty states
  • loading states
  • error handling
  • responsive layouts

5. Working across system boundaries

Another challenge was thinking through how the frontend, backend, agent logic, and identity system fit together without becoming tangled. The solution was a strong separation of concerns and reusable abstractions.


Accomplishments that we're proud of

We are especially proud that this project does not just visualize the problem. It helps solve it.

1. A strong product narrative

The app tells a clear story from problem to solution:

  • discover what is connected
  • explain what is risky
  • approve what matters
  • clean up safely
  • show the outcome

That narrative is easy for judges, users, and stakeholders to follow.

2. A trustworthy AI model

We are proud that the AI does not feel like a mysterious all-powerful assistant. It feels constrained, explainable, and respectful of user control.

That design decision is central to the product.

3. A premium interface

The frontend was designed to feel polished and serious. We invested in visual hierarchy, responsive layouts, interactive states, and clear navigation so the app feels like a real SaaS product.

4. Full product surface area

We built more than a landing page. The product includes the pages and flows needed to feel like a mature application:

  • Overview
  • Apps
  • Subscriptions
  • Permissions
  • Policies
  • Approvals
  • Reports
  • Activity
  • Settings
  • Help

5. Clear business value

The product makes it easy to understand why it matters:

  • reduces security exposure
  • finds stale permissions
  • cuts recurring waste
  • saves time
  • preserves trust
  • creates a clean audit trail

That combination of user value and security value is what makes the product compelling.

6. A flexible architecture

We also built the system in a way that can evolve. The product can grow into more services, more integrations, more agents, and more policy controls without needing to be redesigned from scratch.


What we learned

This project taught us that security products are really trust products.

1. Trust is a UX problem

Users do not trust a system because it is “AI-powered.” They trust it because it is understandable, predictable, and reversible.

That means the interface matters just as much as the backend logic.

2. Authorization is part of the product

We learned that access control is not just infrastructure. It is a user experience.

The way a system asks for permission, explains permission, and revokes permission becomes part of its brand and usability.

3. People need control, not just insight

A dashboard that only shows problems is not enough. Users also need a path to action.

That is why approval flows, safe revocation, and clear remediation matter so much.

4. Hidden risk is everywhere

One of the strongest lessons was how much digital risk comes from things people forget about:

  • old logins
  • dormant apps
  • forgotten subscriptions
  • excessive scopes
  • unused automation

The problem is not dramatic in isolation, but it adds up over time.

5. AI works best when it is bounded

The most practical AI systems are not the ones that can do everything. They are the ones that can do a focused job very well inside clear guardrails.

Private Investigator is designed around that idea.

6. Good product design reduces cognitive load

We learned that users are more willing to act when the UI is calm, structured, and easy to scan. If the product feels chaotic, the user hesitates. If it feels orderly, the user moves forward.

That was a major design principle throughout the build.


What’s next for Private Investigator

The current product is a strong foundation, but there is a lot more we want to build.

1. More integrations

The next major step is expanding the number of connected services the product can understand. That includes more workplace apps, consumer apps, and identity providers.

The more services we support, the more useful the product becomes.

2. Smarter recommendations

We want the agent to do more than identify risky access. We want it to prioritize what matters most and recommend the next best action.

That means better ranking, better context awareness, and more personalized remediation.

3. Stronger policy controls

We plan to add richer policy editing so users and organizations can define exactly how automation should behave.

Examples include:

  • always require approval for certain services
  • auto-clean low-risk stale access
  • never allow write permissions without review
  • escalate suspicious patterns immediately

4. Better reporting and export

We want the product to produce cleaner reports for both users and teams. That includes downloadable audit bundles, summary reports, and evidence packs that show what changed and why.

5. Enterprise and team support

Private Investigator can grow beyond an individual consumer tool. We see a strong path toward team, workspace, and enterprise use cases where admins want visibility across many accounts.

6. Stronger mobile approval experiences

Approval is one of the most important trust moments in the product. We want to make the mobile step-up experience even smoother and more delightful.

7. More autonomous cleanup with guardrails

Over time, the system can safely automate more cleanup tasks as policy and confidence improve. The key is to keep the approval model strong while reducing friction for low-risk actions.

8. A broader “digital hygiene” category

Longer term, we believe Private Investigator points to a larger product category: AI-assisted digital hygiene. That includes access cleanup, subscription cleanup, permission management, and trust-preserving automation.


Private Investigator is about more than connected accounts. It is about restoring control.

As people use more SaaS tools, more AI assistants, and more delegated access, the need for a trusted control plane becomes more urgent. We built this project to show that AI can help users clean up their digital life without taking away their authority.

The product’s mission is simple:

See what still has access. Understand the risk. Approve what matters. Clean up safely.

Built With

  • auth0