Inspiration
The idea for Spectre came from watching the aftermath of major DeFi exploits. When the Ronin Bridge was drained for 625 million dollars, the stolen funds sat in the mempool for minutes before confirmation, and no automated system flagged them in time. Every existing blockchain analytics tool operates on confirmed blocks, meaning they perform post-mortem analysis after the damage is already permanent. I wanted to build something that could detect threats in the mempool itself, before the transaction is ever mined, giving institutions a real-time defensive window.
The "Quantum" theme of this hackathon pushed me further. The public-key cryptography securing today's blockchains will be broken by Shor's algorithm once fault-tolerant quantum computers arrive. I wanted to explore what a post-quantum-ready intelligence platform would look like today.
What I Learned
Systems-level memory management. I learned why high-frequency trading firms use custom allocators like jemalloc instead of standard malloc: thread-local memory arenas avoid heap fragmentation by isolating each worker's allocation pool.

Zero-copy FFI boundaries. Using pybind11's buffer protocol, you can pass a raw memory pointer across the C++/Python boundary with no copy overhead, which is essential for sub-millisecond latency.
Post-Quantum Cryptography. I studied Kyber-512 KEM (NIST FIPS 203), which derives security from the Module-Learning With Errors problem over polynomial rings \(R_q = \mathbb{Z}_q[X]/(X^{256}+1)\) where \(q = 3329\).
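To make the Module-LWE ring concrete, here is a minimal, non-cryptographic Python sketch of arithmetic in \(R_q\). Real Kyber implementations use the number-theoretic transform; the schoolbook version below only illustrates the negacyclic reduction \(X^{256} \equiv -1\).

```python
Q, N = 3329, 256   # Kyber's modulus q and ring degree n

def ring_mul(a: list[int], b: list[int]) -> list[int]:
    """Negacyclic multiplication in R_q = Z_q[X]/(X^N + 1).

    Schoolbook O(N^2) for clarity; production Kyber code uses the NTT.
    """
    res = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                k = i + j
                if k < N:
                    res[k] = (res[k] + ai * bj) % Q
                else:
                    # X^N ≡ -1, so a wrapped coefficient flips sign
                    res[k - N] = (res[k - N] - ai * bj) % Q
    return res

x255 = [0] * N; x255[255] = 1          # the monomial X^255
x1   = [0] * N; x1[1]   = 1            # the monomial X
assert ring_mul(x255, x1)[0] == Q - 1  # X^256 ≡ -1 ≡ 3328 (mod q)
```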
GPU-accelerated rendering. Using THREE.InstancedMesh and uploading all transformation matrices in a single draw call keeps 1000+ nodes at a locked 60 FPS.
How I Built It
Spectre is a three-layer architecture:
Layer 0: C++17 Engine
The core module maintains an in-memory directed acyclic graph of UTXOs. It runs Depth-First Search to detect Peeling Chains, transactions where a tiny peel output satisfies \(O_p / I < 0.05\) while the change output carries nearly all the value, \(O_c / I > 0.90\), and computes Shannon Entropy on OP_RETURN bytecodes:
$$H(X) = -\sum P(x_i) \log_2 P(x_i)$$
Payloads approaching \(H > 7.5\) bits per byte are flagged as encrypted C2 commands.
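The two heuristics above can be sketched in a few lines of Python (the engine itself is C++17; the function names and the example thresholds passed in below are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(payload: bytes) -> float:
    """H(X) = -sum p(x_i) * log2 p(x_i), in bits per byte."""
    if not payload:
        return 0.0
    n = len(payload)
    return -sum((c / n) * math.log2(c / n) for c in Counter(payload).values())

def is_peeling_hop(total_in: float, peel_out: float, change_out: float) -> bool:
    """One hop of a peeling chain: a tiny peel plus a near-total change output."""
    return peel_out / total_in < 0.05 and change_out / total_in > 0.90

# Uniformly distributed bytes hit the 8-bit ceiling; structured data sits lower.
uniform = bytes(range(256)) * 16
assert shannon_entropy(uniform) == 8.0
assert shannon_entropy(uniform) > 7.5          # would be flagged as possible C2
assert is_peeling_hop(100.0, 3.0, 96.0)        # 3% peel, 96% change
assert not is_peeling_hop(100.0, 50.0, 50.0)   # an even split is not a peel
```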
Layer 1: Python AsyncIO Router
Imports the compiled .so via PyBind11, passes raw bytes to C++ via zero-copy Buffer Protocol, and broadcasts results as JSON over WebSockets.
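The zero-copy idea can be illustrated on the Python side alone: a memoryview exposes an existing buffer without duplicating it, the same way pybind11's buffer protocol hands the C++ engine a raw pointer. This is only a sketch of the concept; the actual binding code is not shown.

```python
raw = bytearray(b"\xde\xad\xbe\xef")   # incoming transaction bytes
view = memoryview(raw)                 # shares raw's memory; no copy is made

raw[0] = 0x00                          # mutate the underlying buffer...
assert view[0] == 0x00                 # ...and the view sees it immediately

assert view[1:].obj is raw             # even slicing a memoryview is zero-copy
```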
Layer 2: Next.js + Three.js Dashboard
Renders a 3D force-directed topology graph with GPU post-processing (Bloom, Chromatic Aberration). A secondary /engine route exposes live thread states, jemalloc metrics, and a Kyber-512 entropy pool. Every anomaly is scored via an Exponential Moving Average Z-Score:
$$Z_t = \frac{X_t - \text{EMA}_t}{\sqrt{\text{EMV}_t}}$$
Transactions exceeding \(|Z_t| > 3.0\) (a 3-sigma statistical outlier) trigger immediate alerts.
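A minimal sketch of the streaming EMA Z-score update. The smoothing factor alpha and the seeding rule are assumptions, since the write-up doesn't state them, and there are several equivalent formulations of the exponentially weighted variance.

```python
import math

class EmaZScore:
    """Streaming Z-score against an exponential moving average."""

    def __init__(self, alpha: float = 0.1):   # alpha is a hypothetical choice
        self.alpha, self.ema, self.emv = alpha, None, 0.0

    def update(self, x: float) -> float:
        if self.ema is None:                  # first sample seeds the average
            self.ema = x
            return 0.0
        delta = x - self.ema
        self.ema += self.alpha * delta
        # exponentially weighted variance, updated incrementally
        self.emv = (1 - self.alpha) * (self.emv + self.alpha * delta * delta)
        return delta / math.sqrt(self.emv) if self.emv > 0 else 0.0

scorer = EmaZScore()
for _ in range(50):
    assert scorer.update(10.0) == 0.0   # a steady stream never alerts
z = scorer.update(100.0)                # sudden spike in observed value
assert abs(z) > 3.0                     # 3-sigma outlier: trigger an alert
```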
Challenges
PyBind11 compilation pipeline. Getting CMake to locate the correct Python headers and output a working .so cost hours of debugging version mismatches between the system Python and conda environments.

WebGL performance at scale. The first implementation rendered each node as a separate React component, dropping to 12 FPS at 500 nodes. Refactoring to InstancedMesh with a single GPU draw call was the breakthrough that restored a smooth 60 FPS.

Aesthetics vs. usability. Heavy CRT scanlines and glitch noise looked incredible in screenshots but were painful to interact with. I stripped post-processing from the 3D interaction layer while keeping the effects on ambient UI elements.
Vercel deployment. Strict TypeScript checks in the production build caught type errors the local dev server silently ignored, requiring multiple rounds of prop type fixes on the Three.js post-processing components.
Built With
- asyncio
- c++17
- cmake
- docker
- jemalloc
- next.js
- pybind11
- python
- react
- react-three-fiber
- three.js
- typescript
- vercel
- webgl
- websockets