Inspiration

Plum Engine was inspired by the idea of turning raw 3D point-cloud data into something people can actually inspect, reason about, and act on in real time. Rather than treating spatial data as a static snapshot, we wanted to model how a 3D world changes over time and make those changes visible through a live voxel engine and audit-style analytics layer.

What it does

Plum Engine ingests 3D point data, converts it into voxels, tracks how each voxel changes across a rolling 32-tick temporal window, and visualises the result in an interactive 3D frontend. On top of the live world model, it computes metrics like entropy, anomaly density, structural health, volatility, and clustering patterns, then presents them in an audit-style report with optional AI-generated narrative summaries.
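
The core of that loop can be sketched in Rust. The 32-tick window comes from the design above; the voxel size, field names, and the exact entropy definition are illustrative assumptions (the real backend also uses DashMap and glam rather than plain std containers):

```rust
use std::collections::VecDeque;

const WINDOW: usize = 32;    // rolling temporal window, in ticks
const VOXEL_SIZE: f32 = 0.5; // hypothetical voxel edge length

/// Quantise a raw 3D point to integer voxel coordinates.
fn to_voxel(p: [f32; 3]) -> (i32, i32, i32) {
    (
        (p[0] / VOXEL_SIZE).floor() as i32,
        (p[1] / VOXEL_SIZE).floor() as i32,
        (p[2] / VOXEL_SIZE).floor() as i32,
    )
}

/// Per-voxel state: occupancy over the last WINDOW ticks.
#[derive(Default)]
struct Voxel {
    history: VecDeque<bool>,
}

impl Voxel {
    /// Record this tick's occupancy, evicting the oldest entry at capacity.
    fn record(&mut self, occupied: bool) {
        if self.history.len() == WINDOW {
            self.history.pop_front();
        }
        self.history.push_back(occupied);
    }

    /// One plausible per-voxel "entropy": Shannon entropy of the
    /// occupied/empty split across the window (0.0 = stable,
    /// 1.0 = flickering half the time).
    fn entropy(&self) -> f64 {
        let n = self.history.len() as f64;
        if n == 0.0 {
            return 0.0;
        }
        let p = self.history.iter().filter(|&&b| b).count() as f64 / n;
        if p == 0.0 || p == 1.0 {
            return 0.0;
        }
        -(p * p.log2() + (1.0 - p) * (1.0 - p).log2())
    }
}
```

A voxel that stays occupied scores 0.0, while one that flickers between occupied and empty scores close to 1.0, which is what makes the metric useful for spotting volatile regions.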

How we built it

We built Plum Engine as a two-part system:

  • Backend: A Rust service using Axum, Tokio, DashMap, parking_lot, and glam to store and update a mutable voxel world. The backend ingests streamed point data, maps points to integer voxel coordinates, and tracks occupancy, history, entropy, anomaly state, and sequence IDs.
  • Frontend: A React + Vite app using react-three-fiber, three.js, Zustand, Framer Motion, and Recharts to render the voxel world, support temporal scrubbing, upload datasets, and generate live analytics reports.
  • Data pipeline: The browser parses CSV, JSON, and PLY files, splits them into batches, streams them to the backend, then continuously polls for the latest world state so the visualisation and telemetry can update in near real time.
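
As a minimal sketch of the parsing step in the pipeline above (shown in Rust for consistency with the backend, even though the real parsing runs in the browser; the `x,y,z` field layout is an assumption), the CSV path reduces to:

```rust
/// Parse one "x,y,z" record into a point; malformed rows yield None
/// so the caller can skip them. Extra trailing fields are ignored.
fn parse_csv_line(line: &str) -> Option<[f32; 3]> {
    let mut fields = line.split(',').map(str::trim);
    let x = fields.next()?.parse().ok()?;
    let y = fields.next()?.parse().ok()?;
    let z = fields.next()?.parse().ok()?;
    Some([x, y, z])
}
```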

Challenges we ran into

One of our biggest challenges was ingesting large files efficiently. Because Plum Engine works with dense 3D datasets, bigger uploads quickly become expensive to parse, batch, send, and process in real time. We had to handle large CSV, JSON, and PLY files in the browser, then stream that data to the backend without freezing the UI or noticeably degrading responsiveness.

This became especially difficult because the system is not just uploading files for storage — it is transforming them into a live voxel world while also updating analytics and visualisation state. As file sizes grow, that puts pressure on both the frontend and backend, and it exposed limitations in our current ingestion pipeline. Figuring out how to make large-file ingestion faster, smoother, and more scalable has been one of the main technical challenges we are still working through.
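
The batching half of that pipeline is conceptually simple; a sketch in Rust (the `Batch` shape and batch sizing are illustrative, though the backend does track sequence IDs):

```rust
/// A single upload unit: a sequence ID plus a slice of points.
struct Batch {
    seq: u64,
    points: Vec<[f32; 3]>,
}

/// Split a parsed point list into fixed-size batches so each request
/// (and each backend insert) stays small and predictable, instead of
/// shipping one giant payload that stalls both ends.
fn into_batches(points: &[[f32; 3]], batch_size: usize) -> Vec<Batch> {
    points
        .chunks(batch_size)
        .enumerate()
        .map(|(i, chunk)| Batch {
            seq: i as u64,
            points: chunk.to_vec(),
        })
        .collect()
}
```

The hard part is everything around this: parsing without blocking the main thread, pacing the requests, and keeping the visualisation responsive while batches land.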

Accomplishments that we're proud of

We are proud that Plum Engine already delivers a full end-to-end pipeline:

  • Real-time 3D voxel rendering
  • Temporal hindsight with a 32-tick scrubber
  • Multi-format ingestion for CSV, JSON, and PLY/LiDAR-style data
  • Live telemetry and forensic-style audit reporting
  • Anomaly detection based on recent voxel behaviour
  • Optional AI-assisted narrative reporting with a local fallback

What makes this especially exciting is that we did not just build a visualisation demo: we built a system that combines simulation, spatial analytics, anomaly detection, and user-facing reporting into one coherent experience.

What we learned

We learned that building a spatial analytics engine is really about connecting layers: ingestion, state management, rendering, and interpretation. We also learned the value of separating the system into a high-performance backend and a highly interactive frontend. On the technical side, we saw firsthand where simple decisions help speed up development — like polling full snapshots — and where they become bottlenecks that should later evolve into delta-sync, persistence, and better environment configuration.

What's next for Plum Engine

Next, we want to make Plum Engine more scalable, production-ready, and intelligent. Our next steps include:

  • Switching from full snapshot polling to delta-based synchronisation
  • Adding persistence for world state and snapshots
  • Moving API endpoints into environment-based configuration
  • Wiring the multipart upload endpoint into a true server-side ingestion pipeline
  • Adding automated testing across the backend and frontend
  • Improving performance for larger datasets and reducing frontend bundle size

The broader goal is to evolve Plum Engine from a strong prototype into a more robust platform for real-time 4D world modelling and spatial anomaly analysis.
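
The delta-sync step could look roughly like this: diff consecutive snapshots and ship only the voxels that changed or vanished (the `u8` state value and the `Delta` shape are placeholders, not the actual wire format):

```rust
use std::collections::HashMap;

type Coord = (i32, i32, i32);

/// Voxels that changed since the last snapshot, plus voxels that vanished.
struct Delta {
    changed: Vec<(Coord, u8)>,
    removed: Vec<Coord>,
}

/// Compare two snapshots and keep only the differences, so clients
/// no longer have to poll the full world state every tick.
fn diff(prev: &HashMap<Coord, u8>, curr: &HashMap<Coord, u8>) -> Delta {
    let mut changed = Vec::new();
    let mut removed = Vec::new();
    for (&coord, &state) in curr {
        if prev.get(&coord) != Some(&state) {
            changed.push((coord, state));
        }
    }
    for &coord in prev.keys() {
        if !curr.contains_key(&coord) {
            removed.push(coord);
        }
    }
    Delta { changed, removed }
}
```

For a mostly static world this shrinks each update from the whole voxel map to a handful of entries, which is exactly where full-snapshot polling hurts today.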