Inspiration
We kept thinking about one image: a 6-year-old lying inside an MRI machine for 45 minutes, unable to move, while a doctor waits outside deciding whether to sedate her.
40% of pediatric MRI patients require general anesthesia — not because the scanner is slow, but because the math forces it to collect every single data point before it can show you anything. The magnet fires in microseconds. The theorem makes you wait.
We wanted to break that rule.
What We Built
L1-MAGIC is a compressed sensing engine that reconstructs full MRI brain scans from just 20% of the measurements — running entirely in the browser, with no server, no API, and no neural networks.
The result: 7.5× faster scans, +10.9 dB better image quality than standard interpolation, and a tool that fits in 293 KB of WebAssembly.

How We Built It
The Core Idea: Compressed Sensing
The Shannon–Nyquist theorem says you must sample a signal at twice its bandwidth. MRI follows this exactly — every k-space line, no skipping.
But in 2006, Candès and Tao proved something remarkable: if a signal is sparse in some basis, you can recover it from far fewer measurements. Formally:
$$m \geq O\left(k \cdot \log\frac{n}{k}\right) \ll n$$
where \(k\) is the number of non-zero coefficients and \(n\) is the signal length. Brain MRI images are highly sparse in the DCT domain — most of the energy sits in just a few frequency components. This is the mathematical unlock.
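As a toy illustration (a minimal sketch, not our production code), a naive 1-D DCT-II makes the sparsity visible: a smooth, anatomy-like signal collapses almost all of its energy into the first few coefficients.

```rust
// Sketch only: naive O(n^2) 1-D DCT-II, enough to show energy concentration.
fn dct2(signal: &[f64]) -> Vec<f64> {
    let n = signal.len() as f64;
    (0..signal.len())
        .map(|k| {
            signal.iter().enumerate()
                .map(|(i, &x)| {
                    x * (std::f64::consts::PI / n * (i as f64 + 0.5) * k as f64).cos()
                })
                .sum::<f64>()
        })
        .collect()
}

fn main() {
    // Smooth "anatomy-like" signal: a slow cosine ramp over 64 samples.
    let n = 64;
    let signal: Vec<f64> = (0..n)
        .map(|i| (std::f64::consts::PI * i as f64 / n as f64).cos())
        .collect();
    let coeffs = dct2(&signal);
    let total: f64 = coeffs.iter().map(|c| c * c).sum();
    let head: f64 = coeffs.iter().take(4).map(|c| c * c).sum();
    // Nearly all the energy lands in the first few DCT coefficients.
    println!("energy in first 4 of {} coefficients: {:.4}", n, head / total);
}
```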
Algorithm 1: Orthogonal Matching Pursuit (OMP)
We recover the sparse signal \(\hat{x}\) by solving the basis-pursuit problem:
$$\min_x \|x\|_1 \quad \text{subject to} \quad \Phi x = y$$
OMP approximates this greedily:
- Start with residual \(r = y\)
- Find atom \(\phi_j\) most correlated with \(r\): pick \(j^* = \arg\max_j |\langle \phi_j, r \rangle|\)
- Add \(j^*\) to the active set, recompute least-squares projection
- Update residual, repeat \(k\) times
For a 128×128 image split into 8×8 patches, this runs in milliseconds.
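The loop above can be sketched in a few lines of Rust. This is an illustrative simplification, not our engine: it assumes the dictionary columns are orthonormal, so the least-squares projection reduces to an inner product and a residual subtraction.

```rust
// OMP sketch assuming orthonormal atoms (for a general dictionary the
// coefficient update is a least-squares solve over the active set).
// `dict[j]` is atom j, `y` is the measurement, `k` is the target sparsity.
fn omp(dict: &[Vec<f64>], y: &[f64], k: usize) -> Vec<(usize, f64)> {
    let mut residual = y.to_vec();
    let mut support = Vec::new();
    for _ in 0..k {
        // Pick the atom most correlated with the current residual.
        let (j_star, corr) = dict.iter().enumerate()
            .map(|(j, atom)| {
                (j, atom.iter().zip(&residual).map(|(a, r)| a * r).sum::<f64>())
            })
            .max_by(|a, b| a.1.abs().partial_cmp(&b.1.abs()).unwrap())
            .unwrap();
        // Orthonormal atoms: the projection coefficient is the correlation itself.
        for (r, a) in residual.iter_mut().zip(&dict[j_star]) {
            *r -= corr * a;
        }
        support.push((j_star, corr));
    }
    support
}

fn main() {
    // 2-sparse signal in the standard (identity) basis of R^4.
    let dict: Vec<Vec<f64>> = (0..4)
        .map(|j| (0..4).map(|i| if i == j { 1.0 } else { 0.0 }).collect())
        .collect();
    let y = [0.0, 3.0, 0.0, -2.0];
    println!("{:?}", omp(&dict, &y, 2)); // recovers atoms 1 and 3
}
```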
Algorithm 2: K-SVD Dictionary Learning
DCT is a universal basis — it doesn't know anything about your scan. K-SVD learns one that does:
- Extract all 8×8 patches from the image
- Sparse-code each patch with OMP
- For each atom, compute SVD of the error matrix and replace it with the leading singular vector
- Repeat 15 iterations → converged dictionary \(D \in \mathbb{R}^{64 \times 64}\)
The learned dictionary adapts to the patient's anatomy in ~1 second, pushing PSNR from 35.2 dB (DCT) to 37.8 dB.
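The atom-update step only needs the leading singular pair of the error matrix, which is why a few power iterations on \(EE^\top\) can replace a full SVD. A hedged sketch of that trick (function name and layout are illustrative, not our exact code):

```rust
// Leading LEFT singular vector of `e` (row-major, rows = patch dimension)
// via power iteration on E·Eᵀ — the only piece of the SVD K-SVD needs.
fn leading_left_singular(e: &[Vec<f64>], iters: usize) -> Vec<f64> {
    let m = e.len();
    let n = e[0].len();
    let mut u = vec![1.0 / (m as f64).sqrt(); m]; // arbitrary unit start vector
    for _ in 0..iters {
        // etu = Eᵀ·u, then w = E·etu, i.e. w = (E·Eᵀ)·u
        let mut etu = vec![0.0; n];
        for i in 0..m {
            for j in 0..n {
                etu[j] += e[i][j] * u[i];
            }
        }
        let mut w = vec![0.0; m];
        for i in 0..m {
            for j in 0..n {
                w[i] += e[i][j] * etu[j];
            }
        }
        let norm = w.iter().map(|x| x * x).sum::<f64>().sqrt();
        u = w.iter().map(|x| x / norm).collect();
    }
    u
}

fn main() {
    // Rank-1 matrix [3,4]ᵀ·[1,2]: leading left singular vector is [3,4]/5.
    let e = vec![vec![3.0, 6.0], vec![4.0, 8.0]];
    let u = leading_left_singular(&e, 20);
    println!("{:.3} {:.3}", u[0], u[1]); // ≈ 0.600 0.800
}
```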

Implementation Stack
Everything is written from scratch — no linear algebra libraries, no ML frameworks.
- Rust — OMP loop, DCT basis, K-SVD SVD updates, patch extraction, PSNR calculation
- wasm-bindgen + wasm-pack — compiles Rust to a 53 KB WASM module
- React + Vite — UI, image upload, canvas rendering, parameter controls
- Framer Motion — presentation slide animations
The entire bundle is 293 KB. No backend. No login. Patient data never leaves the device.
Results
| Method | PSNR | Sampling |
|---|---|---|
| Bilinear interpolation (baseline) | 28.5 dB | 100% |
| OMP + DCT | 35.2 dB | 20% |
| OMP + K-SVD | 37.8 dB | 20% |
| OMP + K-SVD | 39.4 dB | 33% |
Every 3 dB of PSNR gain halves the mean squared reconstruction error. Our method gains +10.9 dB over baseline — the difference between a blurry image and one a radiologist can actually diagnose from.
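The 3 dB rule falls straight out of the PSNR definition — a quick sketch (peak value 255 for 8-bit images):

```rust
// PSNR in dB from mean squared error; halving the MSE adds 10·log10(2) ≈ 3.01 dB.
fn psnr(mse: f64, max_val: f64) -> f64 {
    10.0 * (max_val * max_val / mse).log10()
}

fn main() {
    let mse = 50.0;
    let gain = psnr(mse / 2.0, 255.0) - psnr(mse, 255.0);
    println!("halving MSE gains {:.2} dB", gain); // ≈ 3.01 dB
}
```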

Challenges
The hardest part was not the math — it was making the math fast enough to feel instant.
SVD inside K-SVD runs on 64×64 matrices for every atom, 15 iterations, hundreds of patches. Our first Rust implementation took 40 seconds. We got it under 1 second by:
- Avoiding heap allocations inside the hot loop using fixed-size arrays
- Computing only the leading singular vector instead of full SVD decomposition
- Processing patches in column-major order to improve cache locality
Getting WASM to handle SharedArrayBuffer correctly also required specific COOP/COEP HTTP headers — Vercel's config had to be tuned manually for this.
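For reference, a minimal `vercel.json` sketch with the standard cross-origin-isolation header values (the `source` pattern here is illustrative):

```json
{
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        { "key": "Cross-Origin-Opener-Policy", "value": "same-origin" },
        { "key": "Cross-Origin-Embedder-Policy", "value": "require-corp" }
      ]
    }
  ]
}
```

These two headers together make the page cross-origin isolated, which is what browsers require before exposing SharedArrayBuffer.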
What We Learned
- Compressed sensing is provably correct — not a heuristic. The guarantee comes from the Restricted Isometry Property, which random Gaussian matrices satisfy with high probability.
- Rust's ownership model made the numerical code surprisingly safe — we caught several off-by-one errors in patch indexing that would have been silent UB in C.
- WebAssembly is genuinely near-native for this workload. The bottleneck was algorithm design, not the runtime.
- K-SVD is a beautiful algorithm. It's essentially EM for sparse coding — and watching it converge on an 8×8 atom grid in real time is deeply satisfying.
Built by
Pratik & Dipendra — ALGOfest 2026, HealthTech Track
