V-22 Labs
The Seamless Audio-Visual Performing Station
Inspiration
The traditional DJ booth is a mess of wires, heavy hardware, and disconnected visuals. As a digital growth architect, DJ, and artist manager, I saw a gap: performers need a unified "cockpit" where sound and sight are born from the same gesture.
V-22 Labs was inspired by the idea of the Flow State—removing the technical friction so the artist can focus entirely on the energy of the crowd.
What it does
V-22 Labs is a browser-based, high-performance VDJ (Video DJ) station. It allows users to mix high-fidelity audio while simultaneously manipulating real-time video textures. It bridges the gap between a standard DJ deck and a VJ suite, ensuring that every beat drop is felt visually and every visual transition is heard.
How we built it
We built the station on the Web Audio API and Canvas/WebGL, leveraging Gemini’s multimodal capabilities to assist with complex signal routing and UI logic. The core engine manages two high-resolution "Decks" that handle independent audio and video streams, synchronized through a centralized master clock.
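The master-clock architecture described above can be sketched as a small class that both decks query for the next beat boundary. The names (`MasterClock`, `nextBeatTime`) are our own illustration, not the actual V-22 Labs code; in the real app the time source would be `AudioContext.currentTime`, injected here so the logic runs anywhere:

```javascript
// Minimal sketch of a shared master clock that both decks query.
// The `now` argument stands in for AudioContext.currentTime.
class MasterClock {
  constructor(bpm, startTime = 0) {
    this.bpm = bpm;
    this.startTime = startTime; // seconds on the shared timeline
  }

  // Seconds between beats: 60 / BPM.
  get beatDuration() {
    return 60 / this.bpm;
  }

  // First beat boundary at or after `now`, so a deck can
  // schedule its start exactly on the grid.
  nextBeatTime(now) {
    const elapsed = now - this.startTime;
    const beatsPassed = Math.ceil(elapsed / this.beatDuration);
    return this.startTime + beatsPassed * this.beatDuration;
  }
}

const clock = new MasterClock(128, 0);
const startAt = clock.nextBeatTime(1.0); // next grid point after t = 1 s
```

A deck would then pass `startAt` to `AudioBufferSourceNode.start()` so both audio and video transitions land on the same grid.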
Challenges we ran into
The Sync Struggle: Getting two independent audio buffers to beat-match in a browser environment was the hardest problem we faced. The Web Audio API isn't naturally "aware" of beats; we had to build a custom logic layer to handle microscopic timing differences and prevent "phasing," where the tracks sound like a "galloping horse" instead of a unified rhythm.
Accomplishments that we're proud of
The UI/UX Fusion: We are incredibly proud of how the UI is coming together. We’ve moved away from the cluttered look of legacy software toward a sleek, "Neon" aesthetic that prioritizes visibility in dark club environments while keeping the most critical performance data (BPM, Waveform, Video Preview) front and center.
What we learned
Building V-22 Labs required a deep dive into the physics of sound and time. We mastered two critical calculations:
Calculating the Playback Rate: To match Deck B to Deck A, we calculate the ratio of the target tempo to the source tempo: $$PlaybackRate = \frac{BPM_{target}}{BPM_{source}}$$ Example: matching a 124 BPM track to a 128 BPM master requires a playback rate of $\frac{128}{124} \approx 1.0323$ (a $3.2\%$ increase).
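The ratio above is a one-liner in code; the function name is our own illustration:

```javascript
// Playback rate needed to match a source track to the master tempo:
// rate = BPM_target / BPM_source.
function playbackRate(bpmTarget, bpmSource) {
  return bpmTarget / bpmSource;
}

// Matching a 124 BPM track to a 128 BPM master:
const rate = playbackRate(128, 124); // ≈ 1.0323, a ~3.2% speed-up
// With the Web Audio API this would be applied as
// source.playbackRate.value = rate;
```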
Calculating the Phase Offset (The "Nudge"): Matching the tempo isn't enough; the beats must land at the exact same millisecond. Beat Duration: we first find the time between beats: $$BeatDuration\,(sec) = \frac{60}{BPM}$$ The Sync: by comparing the `currentTime` of the AudioContext for both decks, we calculate the phase offset $\Delta$ and apply a microscopic "nudge" to the start time of the slave deck to align the transients.
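Under the same definitions, the beat duration and the nudge can be sketched as pure functions. `phaseNudge` and its sign convention are our assumptions, not the production logic; the inputs stand in for each deck's position on the shared `AudioContext` timeline:

```javascript
// Time between beats in seconds: 60 / BPM.
function beatDuration(bpm) {
  return 60 / bpm;
}

// Given each deck's position (seconds since its first beat) on the
// shared timeline, return the smallest offset to add to the slave
// deck's start time so its beats land on the master's grid.
// Result lies in (-beat/2, +beat/2]: always the shortest nudge.
function phaseNudge(masterTime, slaveTime, bpm) {
  const beat = beatDuration(bpm);
  let delta = (masterTime - slaveTime) % beat;
  if (delta > beat / 2) delta -= beat;   // shorter to nudge backward
  if (delta <= -beat / 2) delta += beat; // shorter to nudge forward
  return delta;
}
```

For example, at 120 BPM (0.5 s per beat) a slave deck trailing the master by 0.4 s is nudged backward by 0.1 s rather than forward by 0.4 s, since the grid repeats every beat.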
What's next for V-22 Labs
The browser is just the beginning. We see a future where DJs aren't looking down at laptops but are wearing Smart Glasses. The next evolution of Neon-V will involve Spatial Motion Control, allowing artists to mix audio and sculpt 3D visuals using only hand gestures and finger tracking—turning the entire stage into a playable instrument.

