No existing visualizer provides a flexible API for customizing elegant 3D visualizations. This project strives to fill that gap.
What it does
The application captures an input audio stream from an external application such as Spotify, Netflix, or Pandora, performs an FFT on the input signal, and distorts a set of objects in a scene graph according to filters mapped to the scene graph nodes.
How I built it
I used PortAudio to capture the audio stream, Irrlicht and OpenGL for the scene graph and core graphics, and Aquila DSP for the FFT of the input audio signal. Post-processing of the FFT output (noise reduction, range adjustment, windowing functions, etc.) was implemented by hand.
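As one example of the hand-rolled post-processing, a windowing function is typically applied to each block of samples before the FFT to reduce spectral leakage. A minimal sketch of a Hann window (the function name and use of a plain float vector are my assumptions, not the project's actual code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Apply a Hann window in place to one block of samples before the FFT.
// Tapering the block's edges to zero reduces spectral leakage.
std::vector<float> hannWindow(std::vector<float> samples) {
    const float kPi = 3.14159265358979f;
    const std::size_t n = samples.size();
    for (std::size_t i = 0; i < n; ++i) {
        // w(i) = 0.5 * (1 - cos(2*pi*i / (n-1))): 0 at the edges, 1 at the center.
        float w = 0.5f * (1.0f - std::cos(2.0f * kPi * i / (n - 1)));
        samples[i] *= w;
    }
    return samples;
}
```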
Challenges I ran into
Synchronization was a bit of an issue: on most systems the audio callback runs at interrupt level, which prohibits the use of blocking synchronization primitives such as locks. I ended up using hardware atomic primitives to perform atomic writes to an array of floats that the main thread reads.
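The pattern above can be sketched with `std::atomic<float>`, which is lock-free on mainstream hardware (the array size, function names, and memory orderings here are illustrative assumptions, not the project's actual code):

```cpp
#include <atomic>
#include <cstddef>

// Shared spectrum buffer: written by the audio callback, read by the
// render thread, with no locks anywhere on the audio path.
constexpr std::size_t kBins = 256;
std::atomic<float> gSpectrum[kBins];

// Called from the audio callback. Each bin is published with a release
// store; the callback never blocks, so it is safe at interrupt level.
void publishSpectrum(const float* magnitudes) {
    for (std::size_t i = 0; i < kBins; ++i)
        gSpectrum[i].store(magnitudes[i], std::memory_order_release);
}

// Called from the main/render thread: snapshot the latest bin value.
float readBin(std::size_t i) {
    return gSpectrum[i].load(std::memory_order_acquire);
}
```

Note that the bins are individually atomic, not atomic as a whole, so a reader may observe a mix of two consecutive FFT frames; for a visualizer that tearing is usually imperceptible.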
Accomplishments that I'm proud of
The simulation runs at 60 FPS using under 20% CPU, and the core filter structure allows many scene graph objects to be affected by the same filter and, likewise, allows multiple filters to affect a single scene graph object. This structure made it easy to build complex scenes quickly, and will be used in the future to support a user-centric editor.
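The many-to-many relationship between filters and scene graph objects could be sketched like this (the `Node` and `Filter` types and the multiplicative composition rule are my assumptions for illustration, not the project's actual design):

```cpp
#include <functional>
#include <vector>

// A stand-in for a scene graph node; here a filter only drives its scale.
struct Node { float scale = 1.0f; };

// One filter can target many nodes, and a node can appear in the target
// list of many filters, giving the many-to-many mapping described above.
struct Filter {
    std::function<float(float)> shape;  // maps an FFT magnitude to a gain
    std::vector<Node*> targets;         // every node this filter affects

    void apply(float magnitude) {
        float gain = shape(magnitude);
        for (Node* n : targets)
            n->scale *= gain;           // multiple filters compose multiplicatively
    }
};
```

Per frame, each node's scale would be reset to 1 and then every filter applied with its current FFT magnitude, so overlapping filters stack on shared nodes.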
What I learned
I learned about digital signal processing, the Irrlicht engine, and how to use the PortAudio library.
What's next for 3D Music Visualizer
The next task is to build a user-centric editor that allows users to create and share scenes via a simple UI.