I wanted to create a musical visual effect in Spark AR Studio using the audio spectrum analyzer. For musical content stories on Instagram, it adds an interesting visual that encourages viewers to turn on the sound.

So I experimented with an effect that uses code-generated musical pitch tones from a sampled sound to drive the animated visuals of a Synth Bot.

Some of the interactions (sketched in code below):

* Tap and hold the platform button to change the number of circling rods, altering the pace of the drum beat.
* Tap and hold while dragging the bot to alter the tone pitch.
* Tap the bot to play a randomized, scripted synth tone progression.
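Here is a minimal sketch of wiring those gestures in the script. It assumes the Touch Gestures capability is enabled and uses hypothetical scene object names ('synthBot', 'platformButton'); the handlers just log, standing in for the real drum/pitch logic:

```javascript
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');
const Diagnostics = require('Diagnostics');

(async function () {
  // Hypothetical object names; the real names come from the Scene panel.
  const bot = await Scene.root.findFirst('synthBot');
  const button = await Scene.root.findFirst('platformButton');

  // Tap the bot: trigger the randomized scripted progression.
  TouchGestures.onTap(bot).subscribe(() => {
    Diagnostics.log('tap: play randomized progression');
  });

  // Long-press the platform button: cycle the rod count / beat pace.
  TouchGestures.onLongPress(button).subscribe(() => {
    Diagnostics.log('long press: change rod count');
  });

  // Drag the bot: map vertical pan distance to a pitch offset.
  TouchGestures.onPan(bot).subscribe((gesture) => {
    gesture.translation.y.monitor().subscribe((event) => {
      Diagnostics.log('pan y: ' + event.newValue);
    });
  });
})();
```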

In the latest version of Spark AR Studio, I used audio analysis to drive various interactive visual outputs based on the pitch frequency of the sound clip. The procedurally generated tones follow a note-progression dictionary of semitone numbers in the JavaScript.
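To give the flavor of that approach, here is a sketch of a semitone dictionary driving a repitched sample. The note values, progression, tempo, and patch input names ('pitchSemitones', 'playbackRatio') are hypothetical; the ratio formula 2^(n/12) is standard equal temperament:

```javascript
const Patches = require('Patches');
const Time = require('Time');

// Semitone offsets from the sampled base note (major scale).
const NOTES = { C: 0, D: 2, E: 4, F: 5, G: 7, A: 9, B: 11 };
// Hypothetical progression; the real effect randomizes the steps.
const PROGRESSION = ['C', 'E', 'G', 'A', 'G', 'E'];

let step = 0;
Time.setInterval(() => {
  const semitones = NOTES[PROGRESSION[step % PROGRESSION.length]];
  // Pitch ratio for an equal-temperament semitone offset: 2^(n/12).
  const ratio = Math.pow(2, semitones / 12);
  // Send to patch inputs, e.g. a pitch-shifting patch on the sample.
  Patches.inputs.setScalar('pitchSemitones', semitones);
  Patches.inputs.setScalar('playbackRatio', ratio);
  step++;
}, 400); // 400 ms per step; tempo is arbitrary here
```

Sending the values into the Patch Editor keeps the audio routing in patches while the note logic stays in script.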

I imported the 3D model of the musical bot, which I created in Blender and textured in Substance Painter. I then modeled the platform and vortex objects, rearranging the UV mapping in the model so I could apply an animated shader effect.
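One way to drive that kind of animated shader is a time-based UV offset sent from script to the material's texture transform in the Patch Editor; 'vortexUvOffset' is a hypothetical patch input name, and the scroll speed is arbitrary:

```javascript
const Time = require('Time');
const Patches = require('Patches');

// Loop a 0..1 V-offset every 5 seconds to scroll the vortex texture.
const offset = Time.ms.mul(0.0002).mod(1);
Patches.inputs.setScalar('vortexUvOffset', offset);
```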

With this project I learned the new Spark AR API changes since the previous version, as well as some tips on creating procedurally generated tones from a sampled sound file.
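The biggest change I hit was that scene lookups moved from synchronous calls to Promises. A quick before/after sketch ('synthBot' is again a hypothetical object name):

```javascript
const Scene = require('Scene');

// Previously, lookups were synchronous:
//   const bot = Scene.root.find('synthBot');

// Now findFirst() returns a Promise, so the script awaits it:
(async function () {
  const bot = await Scene.root.findFirst('synthBot');
  // ...drive the bot's transforms and animations here...
})();
```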

I hit some obstacles during development. Organizing components into Blocks was fiddly, and Patches has limited support for interconnecting Audio data types. I also encountered corrupted project files while playing with the latest version: the same project wouldn't load after restarting the editor. Thankfully I recovered after several restarts and a rebuild from older stable check-ins.

After the contest, I plan to modify the project to place the character as head decor for front-camera selfies, controlling the animation with the microphone and facial expressions. Perhaps I'll create a new AR instrument or a musical training game based on this project. The possibilities are endless.
