Inspiration
Modern Zero Trust sounds great on slides, but it often ends up as static firewall rules or opaque policy engines that engineers can’t easily see or debug. I wanted something closer to what Cilium does in production: use eBPF to get per-flow visibility and policy decisions directly from the kernel, and then surface that in a simple, understandable way.
So the inspiration was:
- Turn abstract Zero Trust ideas into concrete flow-level signals.
- Use eBPF/XDP to do this in a high-performance, cloud-native way.
- Build a demo that’s small enough to run on a laptop/VM, but conceptually similar to how serious systems like Cilium enforce policy.
What it does
The Cilium Zero Trust Policy Visualizer attaches an eBPF/XDP program to a network interface (e.g., ens160) and:
- Parses each packet into a 5-tuple flow key: `src_ip`, `dst_ip`, `src_port`, `dst_port`, `proto`.
- Looks up the flow in a policy map that says allow or deny.
- Updates a stats map with per-flow counters: how many packets would be allowed vs. denied.
- Exposes those decisions back to user space so they can be:
  - Printed in the console, or
  - Served via an API and displayed in a UI (e.g., a graph or table).
In the current demo mode it does not drop traffic – it acts as a “Zero Trust lens” over your traffic, showing what would be allowed or denied under your policy.
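Conceptually, the decision the visualizer records for each flow looks like this (a minimal Python sketch of the model, not the actual kernel code; the names are illustrative):

```python
# Conceptual model of the per-flow decision: deny by default, count
# instead of drop (user-space pseudocode, not the actual XDP program).
from collections import defaultdict

policy = {}   # (src_ip, dst_ip, src_port, dst_port, proto) -> True/False
stats = defaultdict(lambda: {"allowed": 0, "denied": 0})

def observe(src_ip, dst_ip, src_port, dst_port, proto):
    key = (src_ip, dst_ip, src_port, dst_port, proto)
    allowed = policy.get(key, False)        # Zero Trust: no rule means deny
    stats[key]["allowed" if allowed else "denied"] += 1
    return "PASS"                           # demo mode: never drop, only record
```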
How I built it
eBPF/XDP program (kernel side)
- Written in C using BCC-style helpers (`BPF_HASH`, `lookup`, `update`).
- Parses Ethernet + IPv4 + TCP/UDP headers to construct a `flow_key`.
- Uses two BPF maps:
  - `policy_map`: `flow_key -> __u8` (1 = allow, 0 = deny)
  - `stats_map`: `flow_key -> flow_stats { allowed, denied }`
- Classifies each packet and updates the corresponding counters.
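A heavily abridged sketch of how that kernel side fits together, shown here inlined as the text string a BCC loader would compile (in the repo it lives in zt_policy_kern.c with the structs in zt_policy_common.h; header parsing is elided, and the exact field order and function name are assumptions):

```python
# Abridged BCC-style kernel program, inlined as a string for illustration.
KERNEL_SRC = r"""
#include <uapi/linux/bpf.h>

/* In the real project these structs live in zt_policy_common.h. */
struct flow_key   { u32 src_ip, dst_ip; u16 src_port, dst_port; u8 proto; };
struct flow_stats { u64 allowed, denied; };

BPF_HASH(policy_map, struct flow_key, u8);
BPF_HASH(stats_map, struct flow_key, struct flow_stats);

int xdp_zt_policy(struct xdp_md *ctx) {
    struct flow_key key = {};
    /* ... parse Ethernet/IPv4/TCP-UDP headers into key, else return XDP_PASS ... */

    u8 *verdict = policy_map.lookup(&key);
    struct flow_stats zero = {};
    struct flow_stats *st = stats_map.lookup_or_try_init(&key, &zero);
    if (st) {
        if (verdict && *verdict == 1)
            st->allowed++;
        else
            st->denied++;   /* Zero Trust: no rule means deny */
    }
    return XDP_PASS;        /* visualizer mode: observe, never drop */
}
"""
```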
Shared header (zt_policy_common.h)
- Defines `struct flow_key` and `struct flow_stats`, used on both sides (C + Python via BCC types).
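On the Python side, mirroring those structs can look roughly like this (a sketch only: BCC can also generate equivalent types directly from the C source, and the real layout/padding must match zt_policy_common.h exactly):

```python
import ctypes as ct

# Python-side mirror of the shared structs. Field names follow the
# write-up; the exact layout and any padding are assumptions.
class FlowKey(ct.Structure):
    _fields_ = [("src_ip",   ct.c_uint32),
                ("dst_ip",   ct.c_uint32),
                ("src_port", ct.c_uint16),
                ("dst_port", ct.c_uint16),
                ("proto",    ct.c_uint8)]

class FlowStats(ct.Structure):
    _fields_ = [("allowed", ct.c_uint64),
                ("denied",  ct.c_uint64)]
```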
Python loader (loader.py)
- Uses BCC to:
  - Compile the eBPF program from `zt_policy_kern.c`.
  - Attach it to the chosen interface using XDP.
- Reads `policy.yaml` with rules like:
rules:
  - src_ip: 192.168.86.250
    dst_ip: 1.1.1.1
    dst_port: 80
    proto: tcp
    action: allow
- Converts each rule into a `flow_key` and writes it into `policy_map`.
- Periodically iterates over `stats_map` and prints each flow with its allowed and denied counts.
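Put together, the loader's flow is roughly the following (an abridged, hypothetical sketch of loader.py; helper names, byte-order handling, and the `src_port` treatment are assumptions, not necessarily what the real loader does):

```python
# Hypothetical sketch of loader.py: load + attach the XDP program,
# translate policy.yaml rules into policy_map entries, then poll stats_map.
import ctypes as ct, socket, struct, time, yaml
from bcc import BPF

PROTOS = {"tcp": 6, "udp": 17}

def ip_to_u32(ip):
    # Keep the address in network byte order, matching a kernel program
    # that copies iph->saddr / iph->daddr straight from the header.
    return struct.unpack("I", socket.inet_aton(ip))[0]

def u32_to_ip(v):
    return socket.inet_ntoa(struct.pack("I", v))

b = BPF(src_file="zt_policy_kern.c")
fn = b.load_func("xdp_zt_policy", BPF.XDP)      # function name as in the sketch above
b.attach_xdp("ens160", fn, 0)

policy_map, stats_map = b["policy_map"], b["stats_map"]

with open("policy.yaml") as f:
    for rule in yaml.safe_load(f)["rules"]:
        key = policy_map.Key(src_ip=ip_to_u32(rule["src_ip"]),
                             dst_ip=ip_to_u32(rule["dst_ip"]),
                             src_port=0,   # example rules have no src_port; wildcarding is up to the real loader
                             dst_port=socket.htons(rule["dst_port"]),
                             proto=PROTOS[rule["proto"]])
        policy_map[key] = ct.c_ubyte(1 if rule["action"] == "allow" else 0)

try:
    while True:
        for key, st in stats_map.items():
            print(f"{u32_to_ip(key.src_ip)} -> {u32_to_ip(key.dst_ip)} "
                  f"proto={key.proto} allowed={st.allowed} denied={st.denied}")
        time.sleep(2)
finally:
    b.remove_xdp("ens160", 0)
```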
Demo harness (scripts/demo.sh)
- Single command: `sudo ./scripts/demo.sh ens160`
- Starts the loader and guides the user to generate traffic (e.g., `curl`).
Challenges I ran into
BCC vs libbpf confusion
I initially mixed CO-RE/libbpf style (`__uint`, `__type`, `SEC(".maps")`) with BCC-style macros (`BPF_HASH`, `helpers.h`), which caused compilation and loader errors. I fixed this by standardizing on pure BCC-style maps and helpers.

Getting BCC and kernel headers right
I needed the correct kernel headers (`linux-headers-$(uname -r)`) and BCC packages. Along the way I hit errors like "Failed to compile BPF module", missing `bpf_helpers.h`, and map type mismatches.

XDP attach and permission issues
XDP requires root and a compatible interface. I ran into verifier/attach errors like "Permission denied" and `R1 type=scalar expected=map_ptr` until the maps and function signatures were declared correctly.

Map value typing in Python
BCC maps expect ctypes objects, not raw `bytes`. I fixed errors like `byref() argument must be a ctypes instance` by using `ct.c_ubyte(allowed)` and reading back `value.value`.

Making the demo simple enough
The hardest non-technical challenge was resisting the urge to overbuild. I refactored down to a minimal UX: one script, one YAML file, clear console output.
Accomplishments that I'm proud of
End-to-end eBPF pipeline actually running
I wired everything together from `policy.yaml` → Python loader → kernel XDP program → BPF maps → back to user-space stats.

Clear Zero Trust semantics
Everything is deny by default unless there is an explicit allow rule in `policy_map`. You can literally see which flows comply with policy and which are "violations".

Safe demo mode
The program currently does not drop packets, which makes it safe to run on a dev machine or VM without locking yourself out. Changing a single line to `return is_allowed ? XDP_PASS : XDP_DROP;` would turn it into a real enforcement point.

What I learned
Working on this project forced me to go beyond theory and really understand how eBPF/XDP and Zero Trust policies behave in practice.
Practical eBPF/XDP programming
I learned how to write, verify, and debug XDP programs using BCC, and how BPF maps behave when sharing structured keys/values between kernel and user space.

Zero Trust in concrete terms
Zero Trust is no longer just a buzzword for me: it maps nicely to an explicit allow list plus deny-by-default, implemented at the per-flow level.

Tooling & ecosystem gotchas
I experienced the differences between BCC and libbpf styles first-hand and saw how mixing them breaks things. I also saw how critical correct kernel headers, verifier expectations, and small type details are.

Designing for demoability
I learned that a small, focused demo with a clear story often has more impact than a huge, complex system: one command to run, one YAML file to edit, obvious output.
What's next for Cilium Zero Trust Policy Visualizer
There are several directions to take this from a demo into something closer to a production-style Zero Trust tool.
Real enforcement mode
Add a flag to toggle between visualizer mode (`XDP_PASS`) and enforcement mode (`XDP_DROP` on denied flows), and show side-by-side views of "what would be blocked" vs "what is actually blocked".

Web UI / dashboard
Build a small frontend that polls `/api/flows` and `/api/policy`, and renders flow graphs, heatmaps (src ↔ dst), and policy hit/miss statistics.

More expressive policies
Extend the policy model to support CIDR prefixes (e.g., `10.0.0.0/24`), protocol `any`, port ranges, and higher-level tags like `frontend`/`database` instead of raw IPs (one simple way to handle CIDR rules is sketched after this list).

Integration with Cilium concepts
Map Kubernetes-style identities or labels onto these flows for a more Cilium-like feel, and emit metrics to Prometheus / Grafana to visualize Zero Trust posture over time.

CO-RE / libbpf implementation
Port the BCC-based prototype to a libbpf CO-RE implementation to make it more production-friendly and aligned with how Cilium operates in real clusters.
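Of these, basic CIDR support could even be prototyped without touching the kernel program: a hypothetical sketch that expands a small prefix into individual `policy_map` entries at load time (for larger prefixes, a BPF LPM trie map keyed by prefix would be the proper kernel-side design):

```python
# Hypothetical sketch: expand a small CIDR prefix into individual
# policy_map entries at load time. For large prefixes, a BPF LPM_TRIE
# map keyed by prefix would be the better kernel-side design.
import ipaddress, socket, struct

def expand_cidr(cidr):
    for host in ipaddress.ip_network(cidr).hosts():
        yield struct.unpack("I", socket.inet_aton(str(host)))[0]

# e.g. for a rule like {"src_ip": "10.0.0.0/30", "action": "allow", ...}:
# for ip in expand_cidr(rule["src_ip"]):
#     key = policy_map.Key(src_ip=ip, ...)
#     policy_map[key] = ct.c_ubyte(1)
```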