Inspiration

Modern infrastructure is breaking — not because systems are weak, but because humans are forced to understand them through the wrong interface.

Logs are linear. Systems are not.

We kept running into the same bottleneck:

Engineers don’t lack data — they lack clarity.

At the same time, AI has evolved from answering questions to reasoning about systems. That led to a sharper question:

What if infrastructure wasn’t debugged — but **observed, reasoned about, and repaired in real time by AI**?

This led to a complete redefinition of observability:

$$ \text{Observability} = f(\text{Topology}, \text{State}, \text{Autonomy}) \quad \text{not} \quad f(\text{Logs}) $$

RIFT was built to make infrastructure visible, reactive, and self-healing — not just monitored.


What it does

RIFT is a multi-agent cyber-topology engine that turns raw, chaotic inputs into live, visual, autonomous system behavior.

Instead of reading logs, users watch their infrastructure think and respond.

  • Systems are rendered as a live topology graph
  • Threats are mapped spatially, not abstractly
  • AI doesn’t just explain — it acts

The core loop is brutally simple and powerful:

  1. Any input (logs, images, GitHub, voice) enters the system
  2. AI maps it to a specific node in the architecture
  3. The attack is visualized instantly (node turns red)
  4. A real DevSecOps patch is generated
  5. The system heals itself in real-time (node turns green)

Mathematically:

$$ G_{t+1} = \text{Heal}\big(\text{Detect}(\text{Input}, G_t)\big) $$
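One tick of this loop can be sketched in code. The node names, the `Threat` shape, and the two-function split are illustrative assumptions, not RIFT's actual types:

```typescript
// Minimal sketch of one closed-loop tick: G_{t+1} = Heal(Detect(Input, G_t)).
// Node names and the Threat interface are illustrative, not RIFT's real schema.
type Status = "healthy" | "under_attack";
type Graph = Record<string, Status>;

interface Threat {
  targetNode: string; // the node the AI mapped the input to
}

// Detect: mark the mapped node as under attack (node turns red).
function detect(graph: Graph, threat: Threat): Graph {
  return { ...graph, [threat.targetNode]: "under_attack" };
}

// Heal: apply the generated patch and restore every attacked node (turns green).
function heal(graph: Graph): Graph {
  const next: Graph = {};
  for (const node of Object.keys(graph)) next[node] = "healthy";
  return next;
}

const g0: Graph = { api: "healthy", db: "healthy" };
const g1 = detect(g0, { targetNode: "db" }); // db is now under_attack
const g2 = heal(g1);                         // db is healthy again
```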

This is not monitoring.

This is closed-loop autonomous infrastructure.

Key systems:

  • AEGIS → continuous voice AI (hands-free control)
  • AI Assist → instant contextual chat
  • Chaos Mode → live attack simulation
  • Temporal VCR → replay system history
  • Resolution Registry → exportable AI-generated postmortems

How we built it

RIFT is built as a zero-backend, real-time AI system — everything runs in the browser.

Core stack:

  • React 19 + Vite
  • ReactFlow (graph engine)
  • Gemini 2.5 Flash (multi-agent AI)
  • Web Audio API + MediaRecorder (voice system)
  • Native TTS (speech synthesis)
  • WebGL + WASM (visual computation layer)
  • jsPDF (report generation)

We engineered RIFT around three specialized AI agents:

  • Intake Agent → understands and maps threats
  • Resolution Agent → generates precise remediation
  • Conversation Agent → powers voice + chat

Pipeline:

$$ \text{Input} \rightarrow \text{Parse} \rightarrow \text{Map}(G) \rightarrow \text{Act} \rightarrow \text{Update}(G) $$

The critical innovation:

$$ \text{AI Context} = \text{System State, not just User Input} $$

This makes RIFT fundamentally different — the AI is not answering questions, it is operating inside a system.
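Concretely, this means the prompt sent to the model embeds the live graph, not just the user's message. The `serializeGraph` helper and node schema below are our own invention for illustration:

```typescript
// Sketch: build a prompt whose context is the system state itself.
// The node schema and helper names are illustrative assumptions.
interface TopologyNode {
  id: string;
  status: string;
}

// Flatten the live topology into text the model can reason over.
function serializeGraph(nodes: TopologyNode[]): string {
  return nodes.map((n) => `${n.id}: ${n.status}`).join("\n");
}

function buildPrompt(userInput: string, nodes: TopologyNode[]): string {
  return [
    "Current system topology:",
    serializeGraph(nodes),
    "User input:",
    userInput,
  ].join("\n");
}

const prompt = buildPrompt("suspicious traffic spike", [
  { id: "gateway", status: "healthy" },
  { id: "db", status: "under_attack" },
]);
```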


Challenges we ran into

1. Making AI spatially aware

AI models don’t naturally think in graphs. We had to force structure:

$$ \text{Output} = \{\text{target node}, \text{attack type}, \text{action}\} $$

Getting consistent, reliable mapping across modalities was difficult.
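One way to enforce that structure is to reject any model reply that does not parse into the exact expected shape before it touches the graph. The field names below are illustrative, not RIFT's actual schema:

```typescript
// Sketch: validate model output against the required
// {targetNode, attackType, action} shape; reject anything else
// instead of guessing. Field names are illustrative.
interface ThreatMapping {
  targetNode: string;
  attackType: string;
  action: string;
}

function parseMapping(raw: string): ThreatMapping | null {
  try {
    const obj = JSON.parse(raw);
    if (
      typeof obj.targetNode === "string" &&
      typeof obj.attackType === "string" &&
      typeof obj.action === "string"
    ) {
      return obj as ThreatMapping;
    }
  } catch {
    // fall through: unparseable output is treated as invalid
  }
  return null;
}

const ok = parseMapping(
  '{"targetNode":"db","attackType":"sql_injection","action":"patch"}'
);
const bad = parseMapping("the database looks compromised"); // free text → rejected
```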

2. Real-time system coherence

Every part of RIFT — graph, logs, AI, animation — had to behave like a single organism.

Even small delays break the illusion of intelligence.

3. Natural voice interaction

We eliminated push-to-talk and built continuous voice using silence detection:

$$ \text{Latency} \leftrightarrow \text{False Activation} $$

Balancing this took multiple iterations.
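The core of that silence detector can be sketched as an RMS-energy check over an audio frame, the kind of data a Web Audio `AnalyserNode` supplies. The threshold value here is an illustrative assumption; raising it suppresses false activations at the cost of added latency:

```typescript
// Sketch of silence detection: RMS energy of an audio frame vs. a threshold.
// The 0.01 default threshold is an illustrative assumption, tuned in practice.
function rms(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length);
}

function isSilent(samples: Float32Array, threshold = 0.01): boolean {
  return rms(samples) < threshold;
}

const quiet = new Float32Array(1024);          // all zeros → silence
const loud = new Float32Array(1024).fill(0.5); // sustained signal → speech
```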

4. Controlled autonomy

We needed AI outputs that feel real and actionable without being unsafe.


Accomplishments that we're proud of

  • Built a system where AI closes the loop — not just observes it
  • Transformed observability from text → visual causality
  • Achieved real-time attack → detect → heal cycles
  • Created a dual-interface AI system (voice + chat)
  • Designed time-travel debugging for infrastructure
  • Delivered a fully client-side AI architecture (no backend dependency)

Most importantly:

We made infrastructure legible, reactive, and alive.


What we learned

  • The biggest bottleneck in engineering is understanding, not data
  • AI becomes exponentially more powerful when given structure + context
  • Interfaces define capability — not just models
  • Autonomy is a design problem, not just a technical one

Core insight:

$$ \text{Insight Speed} \propto \text{Visual Clarity} $$


What's next for RIFT

Immediate roadmap:

  • Live integrations (AWS, GCP, Kubernetes)
  • Streaming observability (real-time logs)
  • Executable remediation (actual system actions)
  • Collaborative infrastructure graphs

Intelligence layer:

$$ \text{Risk Score} = \sum_i w_i \cdot \text{Vulnerability}_i $$
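The formula translates directly into a weighted sum over per-node vulnerability scores. The weights and scores below are illustrative placeholders:

```typescript
// Direct translation of: Risk Score = Σ_i w_i · Vulnerability_i
// Weights and scores are illustrative placeholders.
interface Vulnerability {
  weight: number; // w_i: how much this class of issue matters
  score: number;  // Vulnerability_i: severity of the finding
}

function riskScore(vulns: Vulnerability[]): number {
  return vulns.reduce((acc, v) => acc + v.weight * v.score, 0);
}

const risk = riskScore([
  { weight: 0.5, score: 8 }, // e.g. exposed endpoint
  { weight: 0.3, score: 5 }, // e.g. outdated dependency
]);
// risk = 0.5·8 + 0.3·5 = 5.5
```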

Long-term vision:

RIFT becomes an autonomous AI-SRE (Site Reliability Engineer):

$$ \text{Human} \rightarrow \text{Strategic Oversight}, \qquad \text{AI} \rightarrow \text{Operational Control} $$

From dashboards… to decision-making systems.

