During an inspiring talk on Saturday afternoon, we learned about the boundaries pushed by engineers and artists at Abbey Road Studios throughout its history. We wanted to be part of that legacy, so we decided to create a music player in Spark AR that lets a user move around the different instruments (or stems) and interact with them. We thought this interaction would be even more immersive with spatial audio, especially because we have an amazing L-ISA 12.1 speaker layout available at the studio! With these premises, we started to think about how to combine both technologies in one project.

What it does

Mode 1: It can work with the built-in audio engine of Spark AR. Since the audio output of Spark AR is only mono, we recreate spatial immersion using level, reverb and filter changes. When the user gets closer to an AR object, that music stem becomes louder than the others, its reverb decreases and filtering is applied. When the user moves closer to another object, that object becomes the focus of the processing.
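As a rough illustration of the Mode 1 behaviour, here is a minimal sketch of the distance-to-processing mapping. The function name, ranges and curves are our own invention for this write-up, not Spark AR's API; the real version lives in the patch editor and scripts.

```python
def focus_mix(distances, max_dist=5.0):
    """Given camera-to-stem distances (metres), return per-stem
    (gain, reverb_mix, lowpass_amount) so the nearest stem is
    loud, dry and bright, and distant stems recede."""
    params = []
    for d in distances:
        # 1.0 when the camera is at the object, 0.0 at max_dist or beyond
        closeness = max(0.0, 1.0 - min(d, max_dist) / max_dist)
        gain = 0.3 + 0.7 * closeness          # never fully mute a stem
        reverb_mix = 0.6 * (1.0 - closeness)  # reverb decreases as we approach
        lowpass = 1.0 - closeness             # distant stems get darker
        params.append((gain, reverb_mix, lowpass))
    return params
```

Keeping a gain floor (0.3 here) means every stem stays audible in the mono mix while the nearest one clearly dominates.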

Mode 2: When the user enables the power of immersion within the Spark AR app, the system becomes a 3D interactive venue. In this case, the user can move the camera through the venue space, and their position is mapped to the audio objects in L-ISA Controller.
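The core of that mapping is converting camera and source positions in the AR scene into the polar coordinates a spatial engine works with. A minimal sketch, with our own function name and angle convention (the actual values travel over the audio bridge described under "How we built it"):

```python
import math

def camera_to_polar(cam_x, cam_z, src_x, src_z):
    """Map a source position relative to the camera (metres, on the
    horizontal plane) to (azimuth_degrees, distance_metres).
    Convention here: 0 degrees = straight ahead, positive = to the right."""
    dx = src_x - cam_x
    dz = src_z - cam_z
    azimuth = math.degrees(math.atan2(dx, dz))
    distance = math.hypot(dx, dz)
    return azimuth, distance
```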

How we built it

The AR engine is built in Spark AR, using both the patch editor and scripting. Music stems were sourced from the UMG library. The 3D objects in the AR interface respond to the audio, moving and scaling according to data extracted with the audio analyser. We also track the phone's position relative to the objects and use it to change the level, reverb and filter parameters.
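The analyser-driven scaling boils down to mapping an audio level to a scale factor. A minimal sketch of that idea, with names and ranges that are our own rather than Spark AR's reactive API:

```python
def scale_from_level(level_db, base=1.0, amount=0.5):
    """Map an analyser level in dBFS (roughly -60..0) to an object
    scale factor, so louder stems make their 3D object grow."""
    # Normalise -60 dB..0 dB to 0..1, clamping out-of-range input
    norm = min(max((level_db + 60.0) / 60.0, 0.0), 1.0)
    return base + amount * norm
```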

For the L-ISA connection, we had to extract the position of the objects as well as the position of the phone in relation to those objects. We found that it was not possible to extract this data from Spark AR, so we had to hack it! As audio guys, the best way we found was to use the phone's headphone jack and play sine waves, modulating the volume of each sine according to the parameter we wanted to control. Then, with bridge software we wrote in C++ with JUCE, we translated those volume changes back into data to send via OSC. To control more than one parameter at the same time, we simply used different sine frequencies and analysed the spectral output with an FFT.
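The principle behind the bridge, boiled down: each parameter gets its own carrier frequency, its value sets that carrier's amplitude, and the receiver reads the amplitudes back out of the spectrum. Our real bridge is C++/JUCE; this Python sketch just demonstrates the encoding, with illustrative carrier frequencies and block size chosen so each carrier lands exactly on a DFT bin.

```python
import math

SR = 48000                    # assumed output sample rate
FREQS = [1000, 3000, 5000]    # one carrier per parameter (illustrative choice)
N = 4800                      # block size: every carrier completes whole cycles

def encode(values):
    """Encode parameter values (0..1) as the amplitudes of sine carriers,
    summed into one audio block (what the phone plays out of the jack)."""
    return [sum(v * math.sin(2 * math.pi * f * i / SR)
                for v, f in zip(values, FREQS))
            for i in range(N)]

def decode(block):
    """Recover each parameter by projecting the block onto its carrier:
    a single-bin DFT, i.e. reading the FFT at that frequency. This sketch
    assumes the block is phase-aligned with the carriers; the real bridge
    reads FFT magnitudes, which is phase-independent."""
    out = []
    for f in FREQS:
        s = sum(x * math.sin(2 * math.pi * f * i / SR)
                for i, x in enumerate(block))
        out.append(2.0 * s / N)  # projection onto a unit sine yields N/2 * v
    return out
```

Because the carriers complete an integer number of cycles per block, they are orthogonal, so each parameter decodes cleanly even when all carriers play at once.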

Challenges we ran into

The greatest challenge we ran into was integrating the two systems. Spark AR is a closed system in terms of data extraction, so, as explained above, we had to work out how to export data and build bridge software to connect the two technologies.

Accomplishments that we're proud of

Hacking Spark AR and getting data out of it, and controlling L-ISA through Spark that way. Learning a new programming environment in a short period of time.

What we learned

We learned how to use and work with Spark AR. The demos on Saturday morning were really interesting and gave us a good picture of the technology available for the hackathon. We also learned how to program in the reactive paradigm.

What's next for Pocket Venue

Getting spatial audio (binaural) directly into the Facebook app. Adding new music tracks. Allowing audio sources to be moved around the venue. Customisable sound sources.
