Inspiration
The glitch aesthetic has 2.4 billion hashtag impressions on Instagram and is the defining visual language of an entire generation of digital creators, yet the tools to create it are either locked behind a $55/month Adobe subscription or relegated to janky mobile apps with two sliders. We kept asking the same question: browsers have had GPU acceleration through WebGL for years, so why does every browser-based image editor still feel like it's running on a 2008 netbook? The answer was that nobody had bothered to build a proper rendering architecture for the browser. We built ImageHax because the technology was ready and the tool didn't exist.
What It Does
ImageHax is a real-time, GPU-accelerated glitch art and image processing application that runs entirely in the browser, with no installs and no accounts required. You drop an image in and immediately start applying a library of 18+ layered effects in real time:
GPU Effects (WebGL via PixiJS v8, running at 60fps): Chromatic Aberration, RGB Channel Split, Bloom, Neon Edge Glow, Glitch Slice, Color Grade, Vignette, and Noise.
CPU Effects (processed in a dedicated Web Worker so the UI never stutters): Scanlines, Film Grain, VHS Noise, JPEG Artifacts, Dither, Pixel Block, Data Mosh, Halftone, and Noise Overlay.
Every effect is its own independent, non-destructive layer — you can toggle it, reorder it, dial its opacity, and set its blend mode (Normal, Screen, Overlay, Multiply, Add). One-click presets like Cyber Glitch, VHS Nostalgia, and Digital Corruption get you to a great result instantly. When you're done, export to PNG, JPEG, WebP, PDF, or a multi-layer Photoshop PSD that preserves your entire effect stack.
How We Built It
The core architectural decision was a Hybrid GPU/CPU split — and getting that right shaped every other decision.
GPU effects (chromatic aberration, bloom, glitch slices) are fundamentally GLSL shader operations that belong on the graphics card. We used PixiJS v8 with its WebGL/WebGPU renderer and the pixi-filters v6 library to build a layer-per-effect scene graph: every effect is its own Container node with its own filters[] array, composited in sequence. This means real-time parameter adjustment with sub-16ms frame latency.
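In sketch form, the layer-per-effect scene graph looks roughly like this. It's a simplified illustration using PixiJS v8 and pixi-filters; the names, sizes, and parameter values are illustrative, not the production code:

```ts
import { Application, Assets, Container, Sprite } from 'pixi.js';
import { RGBSplitFilter } from 'pixi-filters';

// Sketch of the layer-per-effect idea: each effect gets its own Container
// with its own sprite, filters, opacity, and blend mode.
async function buildScene(imageUrl: string): Promise<Application> {
  const app = new Application();
  await app.init({ width: 1280, height: 720, preference: 'webgl' });
  document.body.appendChild(app.canvas);

  const texture = await Assets.load(imageUrl);

  // Base layer: the untouched source image.
  const baseLayer = new Container();
  baseLayer.addChild(new Sprite(texture));

  // Effect layer: its own copy of the sprite, its own filters[] array.
  const rgbSplitLayer = new Container();
  rgbSplitLayer.addChild(new Sprite(texture));
  rgbSplitLayer.filters = [new RGBSplitFilter()]; // default channel offsets; a real layer exposes these as params
  rgbSplitLayer.alpha = 0.8;
  rgbSplitLayer.blendMode = 'screen';

  // Compositing order is just child order, so drag-to-reorder is an array splice.
  app.stage.addChild(baseLayer, rgbSplitLayer);
  return app;
}
```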
CPU effects (scanlines, data mosh, halftone) are pixel-enumeration operations that would block the main thread. We offloaded these entirely to an OffscreenCanvas Web Worker — the processing happens on a background thread, the output is uploaded as a PixiJS texture, and the UI stays completely responsive throughout.
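A simplified sketch of that handoff, with illustrative names and a toy scanline pass standing in for the real effect kernels:

```ts
// cpu-effects.worker.ts (illustrative; the real worker source ships differently, see below)
/// <reference lib="webworker" />
declare const self: DedicatedWorkerGlobalScope;

self.onmessage = async (e: MessageEvent<{ bitmap: ImageBitmap }>) => {
  const { bitmap } = e.data;
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  const ctx = canvas.getContext('2d')!;
  ctx.drawImage(bitmap, 0, 0);

  const img = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const px = img.data;
  // Toy "scanlines" pass: darken every other row of pixels.
  for (let y = 0; y < canvas.height; y += 2) {
    for (let x = 0; x < canvas.width; x++) {
      const i = (y * canvas.width + x) * 4;
      px[i] *= 0.6;
      px[i + 1] *= 0.6;
      px[i + 2] *= 0.6;
    }
  }
  ctx.putImageData(img, 0, 0);

  // Hand the result back as a transferable bitmap, so nothing is copied.
  const out = await createImageBitmap(canvas);
  self.postMessage({ bitmap: out }, [out]);
};
```

On the main thread, the worker's output becomes a PixiJS texture on the CPU-effect layer:

```ts
import { Container, Sprite, Texture } from 'pixi.js';

// Wire the worker's output back into the scene graph (the worker and layer
// are whatever the app created earlier).
function attachCpuLayer(worker: Worker, cpuEffectLayer: Container): void {
  worker.onmessage = (e: MessageEvent<{ bitmap: ImageBitmap }>) => {
    const texture = Texture.from(e.data.bitmap); // ImageBitmap is a valid texture source in Pixi v8
    cpuEffectLayer.removeChildren();
    cpuEffectLayer.addChild(new Sprite(texture));
  };
}
```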
The state model uses React Context + useReducer (no Redux, no Zustand). Session persistence is handled via localStorage. Export uses pixiApp.renderer.extract.base64() for raster formats, pdf-lib for PDF, and ag-psd for multi-layer PSD. The whole thing runs on Next.js 16 with App Router, React 19, and Tailwind CSS v4.
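The layer state itself is just a plain array driven by a reducer. Roughly, with simplified and illustrative field names:

```ts
// Illustrative shape of the layer state and reducer, not the actual source.
type BlendMode = 'normal' | 'screen' | 'overlay' | 'multiply' | 'add';

interface EffectLayer {
  id: string;
  effect: string;                 // e.g. 'chromaticAberration', 'scanlines'
  params: Record<string, number>;
  opacity: number;
  blendMode: BlendMode;
  visible: boolean;
}

type Action =
  | { type: 'ADD_LAYER'; layer: EffectLayer }
  | { type: 'SET_PARAM'; id: string; key: string; value: number }
  | { type: 'REORDER'; from: number; to: number };

function layersReducer(state: EffectLayer[], action: Action): EffectLayer[] {
  switch (action.type) {
    case 'ADD_LAYER':
      return [...state, action.layer];
    case 'SET_PARAM':
      return state.map((l) =>
        l.id === action.id
          ? { ...l, params: { ...l.params, [action.key]: action.value } }
          : l
      );
    case 'REORDER': {
      // Reordering a layer is just moving an element within the array.
      const next = [...state];
      const [moved] = next.splice(action.from, 1);
      next.splice(action.to, 0, moved);
      return next;
    }
  }
}
```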
Challenges We Ran Into
The PixiJS v8 SSR problem. PixiJS expects a DOM and a WebGL context — neither of which exist during Next.js server-side rendering. Getting the canvas component to load only client-side without creating hydration mismatches required careful dynamic imports with ssr: false and ref management patterns that don't exist in any tutorial.
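The shape of the fix, roughly (simplified, with illustrative component names):

```tsx
'use client';

import dynamic from 'next/dynamic';

// PixiJS touches `document` and WebGL at init time, so the canvas component
// must never be rendered on the server. In the App Router, `ssr: false`
// only works from inside a Client Component, hence 'use client' above.
const GlitchCanvas = dynamic(() => import('./GlitchCanvas'), {
  ssr: false,
  loading: () => <div className="h-full w-full animate-pulse bg-zinc-900" />,
});

export default function EditorShell() {
  return <GlitchCanvas />;
}
```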
Web Worker + Next.js bundling. Next.js doesn't have a clean story for inline Web Workers. Standard new Worker(new URL(...)) syntax breaks in the App Router. We ended up building the worker as an inline code string instantiated via Blob URL — not pretty, but it works across all environments.
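A simplified sketch of the Blob-URL workaround (the real worker source string is far longer):

```ts
// The worker body lives in a plain string, so the bundler never sees it.
const workerSource = `
  self.onmessage = (e) => {
    // ...pixel processing...
    self.postMessage({ done: true });
  };
`;

function createInlineWorker(): Worker {
  const blob = new Blob([workerSource], { type: 'application/javascript' });
  const url = URL.createObjectURL(blob);
  const worker = new Worker(url);
  // URL.revokeObjectURL(url) can be called later, once the worker is terminated.
  return worker;
}
```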
Layer compositing accuracy. Getting the GPU layer blend modes to match Photoshop's compositing math precisely was harder than expected. The PixiJS BLEND_MODES enum doesn't map 1:1 to CSS or Photoshop blend modes in edge cases. We had to empirically verify each mode against reference outputs.
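One way the resulting mapping can look, assuming PixiJS v8's string blend modes (illustrative; note that 'overlay' is one of the "advanced" modes in v8 and needs an extra import):

```ts
import 'pixi.js/advanced-blend-modes'; // registers 'overlay' and other advanced modes
import { Container } from 'pixi.js';

// Map the UI's Photoshop-style names onto PixiJS v8 blend modes.
// 'normal', 'add', 'screen', and 'multiply' are built in.
const BLEND_MODE_MAP = {
  Normal: 'normal',
  Screen: 'screen',
  Overlay: 'overlay',
  Multiply: 'multiply',
  Add: 'add',
} as const;

function applyBlendMode(layer: Container, uiMode: keyof typeof BLEND_MODE_MAP): void {
  layer.blendMode = BLEND_MODE_MAP[uiMode];
}
```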
CPU effect performance on large images. Running scanlines or halftone on a 50MB image in a Web Worker still exhausted memory. We added createImageBitmap() with resizeWidth/resizeHeight to downsample before processing and scale the result back up afterward, keeping quality high while staying within browser memory constraints.
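A simplified sketch of the downsampling step (the 2048px cap is illustrative, not our actual threshold):

```ts
const MAX_DIM = 2048;

// Produce a working-size bitmap for the CPU pass without ever holding the
// full-resolution pixels in a JS-visible buffer.
async function toWorkingBitmap(file: Blob): Promise<ImageBitmap> {
  const full = await createImageBitmap(file);
  const scale = Math.min(1, MAX_DIM / Math.max(full.width, full.height));
  if (scale === 1) return full;

  // resizeWidth/resizeHeight let the browser do the scaling natively.
  return createImageBitmap(full, {
    resizeWidth: Math.round(full.width * scale),
    resizeHeight: Math.round(full.height * scale),
    resizeQuality: 'high',
  });
}
```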
Accomplishments That We're Proud Of
The layer-per-effect compositing system is the thing we're most proud of. Treating every effect as an independent PixiJS Container — with its own opacity, blend mode, visibility toggle, and drag-to-reorder — is Photoshop's core layer model rebuilt from scratch for a real-time GPU renderer. That's not a feature list item; it's a fundamental architectural achievement that makes the entire creative experience feel non-destructive and professional.
We're also proud of the Web Worker CPU pipeline. The fact that you can have a Data Mosh processing a 12-megapixel image in the background while simultaneously dragging a Chromatic Aberration slider and getting 60fps GPU preview — with zero UI jank — is the kind of technical outcome that takes significant engineering to make feel effortless.
And honestly: the visual output. The Cyber Glitch and Digital Corruption presets produce genuinely beautiful, shareable imagery that stands up against anything you'd get out of Photoshop.
What We Learned
The browser GPU is ready for professional creative tools. We went in with some skepticism about whether WebGL could really compete with native rendering for a use case like this. It absolutely can. The PixiJS v8 WebGL/WebGPU pipeline is fast enough that the bottleneck is never the GPU — it's always CPU-side JavaScript or network.
Non-destructive architecture is worth the upfront cost. Building the layer-per-effect scene graph took significantly longer than a simpler "apply filter to a single sprite" approach would have. But every feature we added afterward — preset system, layer reordering, blend modes, export — was easier because the foundation was right. Shortcuts in the data model cost you twice as much later.
Invisible architecture is the goal. The Web Worker, the GPU/CPU split, the scene graph — none of this is visible to the user. The measure of whether we got it right is whether the app just feels fast and responsive. That invisibility is harder to achieve than a feature you can demo.
What's Next for ImageHax
Effect expansion is the immediate roadmap. We have the architecture; adding new GPU effects is now a matter of writing GLSL shaders and wiring them into the existing layer system. Priority effects: Pixel Sort, Displacement Map (image-driven), Recursive Feedback Loop, and animated glitch cycles with timeline control.
Mobile optimization. The architecture works on mobile WebGL today, but the UI is desktop-first. Rebuilding the panel system for touch and a vertical layout is the next major product milestone, unlocking the creators who live on their phones.
API. The ImageHax rendering engine — GPU compositing + CPU Worker pipeline + export — is a general-purpose tool. Exposing it as an API (POST /render with an image + effect config, get back a processed image) opens up use cases for game asset pipelines, social media scheduling tools, NFT generators, and brand teams doing batch processing.
Creator preset marketplace. The preset system is already built. The next step is letting creators publish and sell their custom effect stacks. A 70/30 revenue split creates a flywheel: the best creators build the best presets, the best presets attract more users, more users grow the marketplace.
WebGPU compute shaders. We're currently on WebGL. WebGPU's compute shader model would let us run significantly more complex algorithms (real-time neural style transfer, frequency-domain glitch effects, physics-based noise) that WebGL, with no compute shader support, can't express efficiently. That's the long-term technical frontier.