Inspiration

The spark for Project Laminar didn't come from a classroom, but from looking at the data bills of the people trying to access one.

During our research on education in rural Rajasthan, we noticed a pattern: modern education has an "Economic Firewall". The bottleneck isn't coverage anymore; it's cost. We found that a standard 1080p lecture uses about 2.5 GB per hour. That's huge for someone on a budget. For a student in a bandwidth-constrained region, that one hour can cost 10-20% of their daily disposable income.

We realized this is an engineering inefficiency. We are treating a teacher's handwriting (sparse, simple lines) as a high-density video stream. We asked: why stream the pixels when we can just send the vectors?

What it does

Project Laminar is a conceptual Vector Reconstruction Engine designed to replace video streaming with a protocol we call "Ghost-Packet" delivery.

The core concept is simple but powerful: Instead of recording a video of a whiteboard, our system is designed to capture the teacher's pen strokes as raw mathematical coordinates (JSON) and audio. The student's device would then re-draw the lecture in real-time using HTML5 Canvas.
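To make the re-drawing side concrete, here is a minimal sketch of how a recorded stroke stream could be flattened into timed line segments for a Canvas render loop. The `{ t, points }` stroke shape and the function name are illustrative assumptions, not a finished spec:

```javascript
// Flatten recorded strokes into timestamped line segments that a
// playback loop could replay with ctx.moveTo/ctx.lineTo on a canvas.
// Assumed stroke shape: { t: startTimeMs, points: [[x, y], ...] }.
function toSegments(strokes) {
  const segments = [];
  for (const stroke of strokes) {
    for (let i = 1; i < stroke.points.length; i++) {
      segments.push({ t: stroke.t, from: stroke.points[i - 1], to: stroke.points[i] });
    }
  }
  return segments;
}

// In the browser, a render loop would draw every segment whose
// timestamp has passed:
//   for (const s of segments) {
//     if (s.t <= playbackMs) { ctx.moveTo(...s.from); ctx.lineTo(...s.to); }
//   }
```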

Projected Savings: Our calculations show this would shrink a 2.5GB lecture down to <15 MB. That’s small enough to fit on roughly 11 standard floppy disks.
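As a quick sanity check on those figures (assuming 2.5 GB per hour of 1080p video, ~15 MB of vector data, and a 1.44 MB standard floppy):

```javascript
// Back-of-envelope math behind the projected savings.
const videoMB = 2.5 * 1024;                  // 2.5 GB of 1080p video, in MB
const vectorMB = 15;                         // projected vector lecture size
const ratio = videoMB / vectorMB;            // how many times smaller
const floppies = Math.ceil(vectorMB / 1.44); // 1.44 MB per standard floppy

console.log(`${Math.round(ratio)}x smaller, ${floppies} floppy disks`);
// → "171x smaller, 11 floppy disks"
```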

Infinite Resolution: Because the design relies on vectors, the text would remain mathematically crisp on any screen, unlike 240p video, which becomes unreadable.

Offline Mode: We also designed it to work without a signal. Using a P2P setup, one student can download the file and just beam it to others nearby using Wi-Fi Direct, no internet required.

How we built it

Since this is an ideathon project, we focused on Systems Architecture and Feasibility Analysis. We didn't write the production code yet, but we engineered the blueprint:

The Input Logic: We designed the system around a "Semantic Recorder". We mapped out how to capture mousedown and touch events in a browser and serialize them into a timestamped JSON stream.
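A sketch of what that recorder could look like; the class name and JSON field names are illustrative assumptions, and the browser wiring appears only in comments:

```javascript
// DOM-free core of a "Semantic Recorder": collects pen strokes and
// serializes them into a timestamped JSON stream.
class SemanticRecorder {
  constructor(now = Date.now) {
    this.now = now;
    this.t0 = now();        // recording start, for relative timestamps
    this.strokes = [];      // finished strokes
    this.current = null;    // stroke in progress
  }
  penDown(x, y) {
    this.current = { t: this.now() - this.t0, points: [[x, y]] };
  }
  penMove(x, y) {
    if (this.current) this.current.points.push([x, y]);
  }
  penUp() {
    if (this.current) { this.strokes.push(this.current); this.current = null; }
  }
  toJSON() {
    return JSON.stringify({ version: 1, strokes: this.strokes });
  }
}

// Browser wiring (sketch). Pointer events cover both mouse and touch:
//   canvas.addEventListener('pointerdown', e => rec.penDown(e.offsetX, e.offsetY));
//   canvas.addEventListener('pointermove', e => rec.penMove(e.offsetX, e.offsetY));
//   canvas.addEventListener('pointerup',   () => rec.penUp());
```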

Audio Selection: We chose the Opus codec for the audio layer because our research confirmed it offers the best speech intelligibility at low bitrates (12 kbps).

Optimization Algorithms: We identified that raw drawing data can be noisy. To solve this, we incorporated the Ramer-Douglas-Peucker algorithm into our design spec, which reduces redundant vector points by ~40% while preserving the visible shape of each stroke, keeping file sizes low.
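The simplification pass itself is the textbook algorithm; a compact sketch (the ~40% figure would depend on the `epsilon` tolerance chosen, which is a tuning parameter, not a fixed value from the spec):

```javascript
// Perpendicular distance from point p to the line through a and b.
function perpendicularDistance(p, a, b) {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Ramer-Douglas-Peucker: drop points that deviate from the stroke's
// overall shape by less than epsilon, recursing on the farthest point.
function rdp(points, epsilon) {
  if (points.length < 3) return points.slice();
  const first = points[0], last = points[points.length - 1];
  let maxDist = 0, index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpendicularDistance(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist > epsilon) {
    const left = rdp(points.slice(0, index + 1), epsilon);
    const right = rdp(points.slice(index), epsilon);
    return left.slice(0, -1).concat(right); // avoid duplicating the pivot
  }
  return [first, last]; // everything in between is within tolerance
}
```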

Network Protocol: We wireframed a "Ghost-Packet" transfer method that splits the file into 1MB chunks, optimized for unstable connections where standard HTTP streaming fails.
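A sketch of the chunking side of that transfer; the field names (`seq`, `total`, `data`) are illustrative, not from a finished protocol definition:

```javascript
const CHUNK_SIZE = 1024 * 1024; // 1 MB per chunk, per the design spec

// Split a lecture file into indexed chunks so an unstable connection
// can resume from the last chunk received instead of restarting.
function toChunks(bytes) {
  const total = Math.ceil(bytes.length / CHUNK_SIZE);
  const chunks = [];
  for (let seq = 0; seq < total; seq++) {
    chunks.push({ seq, total, data: bytes.subarray(seq * CHUNK_SIZE, (seq + 1) * CHUNK_SIZE) });
  }
  return chunks;
}

// Reassemble on the receiving device, tolerating out-of-order arrival.
function fromChunks(chunks) {
  const sorted = [...chunks].sort((a, b) => a.seq - b.seq);
  const out = new Uint8Array(sorted.reduce((n, c) => n + c.data.length, 0));
  let offset = 0;
  for (const c of sorted) { out.set(c.data, offset); offset += c.data.length; }
  return out;
}
```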

Challenges we ran into

The "Trust" Paradox: Designing for offline P2P networks created a security hole: if students share files directly, how do we prevent malware? We spent a lot of time researching cryptographic solutions and settled on a SHA-256 Checksum verification step that runs locally on the receiving device.

Syncing Theory: Theoretically, keeping audio synced with a drawing that is being "re-drawn" by code is difficult, especially if a student skips ahead. We had to design a data structure that indexes stroke arrays against audio timestamps to ensure they stay locked together.
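One way to build that index, as a sketch: keep strokes sorted by start time and binary-search on every seek, so sync recovery stays O(log n) even for an hour-long lecture. The stroke shape is an assumption:

```javascript
// Given strokes sorted by start time t (ms), return how many strokes
// should already be drawn when the audio is at `seekMs`. On a skip,
// the player redraws strokes[0 .. index-1] instantly, then resumes
// animating from there so drawing and audio stay locked together.
function strokeIndexAt(strokes, seekMs) {
  let lo = 0, hi = strokes.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (strokes[mid].t <= seekMs) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}
```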

Defining the MVP: We initially wanted to use Computer Vision to convert existing videos (like YouTube) into vectors. However, we realized that was too computationally heavy for a first step. We had to pivot our design to a "Recorder-first" approach to make the MVP feasible for low-end devices.

Accomplishments that we're proud of

The Math Works: We proved the statistical feasibility of the project. By comparing the byte-size of JSON text vs. h.264 video pixels, we confirmed a theoretical reduction of over 99% in data usage (2.5 GB down to under 15 MB).

The "Floppy Disk" Benchmark: We validated that a 60-minute STEM lecture could technically fit into ~15 MB. This number confirms that we can break the "Economic Firewall" for students on 2G networks.

Solid Architecture: We moved beyond a vague idea and created a concrete, step-by-step technical pipeline (Input -> Serialization -> P2P Transfer -> Canvas Rendering) that is ready for development.

What we learned

We learned that the industry is trying to solve O(n) storage problems with O(n²) bandwidth solutions. By rethinking the fundamental protocol of how we transmit "knowledge," we realized that legibility doesn't require high-fidelity video; it requires high semantic fidelity.

What's next for Project Laminar

Prototyping: First on our to-do list is coding the CanvasRecorder.js module. We need to prove that our simplification algorithm (Ramer-Douglas-Peucker) actually runs smoothly in a standard web browser.

Accessibility: Because we aren't using pixels, we can do something video can't. Our text-based data can feed directly into Screen Readers or Braille displays, opening up whiteboard lectures to visually impaired students by default.
