Inspiration

  • Tbh I've always been a fan of music festivals and concerts, and I love screaming out the names of the artists while grooving to their music.
  • I happened to attend one such event recently, and I decided to build something that would let me make the most of it.
  • Then I thought: why not actually build something that lets me use AR to scream my words out into the sky??
  • And thus the Music Festival Lens was born.

What it does

  • It's a fun and engaging way of expressing yourself at any music festival or concert. It offers a cartoon-avatar style option as well as live transcription of whatever you chant at the event, which then throws your words out of your mouth directly into the sky šŸŖ‚
  • It also has a cool frame overlay with the user's name on it for a more personalized touch.

How I built it

  • On the surface the lens looks simple enough, but it actually uses some of the novel features SnapAR offers:
  • Sky Segmentation combined with 3D world tracking places jolly characters and scenery around the user.
  • ML Style Transfer custom components turn users into cartoon characters that match the vibe of the overall theme.
  • A dynamic day & night cycle toggles sky segmentation programmatically: it turns sky segmentation off after sunset, since sky segmentation doesn't work well against the night sky, preventing a poor experience for the user.
  • Dynamic text displays the user's display name on the frame overlay.
  • Speech recognition ML transcribes the user's speech in real time and converts it into floating text that users can throw out of their mouth while chanting their favourite artist's name.
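The transcription-to-floating-text step can be sketched as a small diffing helper: as the live transcript grows, only the newly spoken words should be spawned as text objects. This is a minimal TypeScript sketch of that logic in isolation; the function name `newFloatingWords` is hypothetical, and it is not the actual Lens Studio VoiceML API, which would deliver the transcript through its own update events.

```typescript
// Hypothetical helper: given the words already shown and the latest live
// transcript from speech recognition, return only the new words that
// should be spawned as floating text from the user's mouth.
function newFloatingWords(shownSoFar: string, liveTranscript: string): string[] {
  const shown = shownSoFar.trim().split(/\s+/).filter((w) => w.length > 0);
  const live = liveTranscript.trim().split(/\s+/).filter((w) => w.length > 0);
  // The live transcript only grows during a chant, so spawn just the
  // suffix that hasn't been displayed yet.
  return live.slice(shown.length);
}
```

In the actual lens, something like this would run on every transcription update, creating one floating text object per newly returned word.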

Challenges we ran into

  • Getting the speech recognition transcription tuned properly was a bit tedious.
  • Sky segmentation would break against the night sky, so I included a script that automatically turns sky segmentation off after sunset.
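The sunset toggle described above boils down to a time-of-day check. Here's a minimal sketch of that decision logic, assuming fixed placeholder hours for sunrise and sunset; the real lens would read the device's local time, and wiring the result to the sky segmentation component is Lens-Studio-specific and omitted here.

```typescript
// Placeholder daylight window (assumption): a real lens could use the
// device clock plus an actual sunset time for the user's location.
const SUNRISE_HOUR = 6;
const SUNSET_HOUR = 18;

// Enable sky segmentation only during daytime hours, since segmentation
// is unreliable against a night sky.
function skySegmentationEnabled(localHour: number): boolean {
  return localHour >= SUNRISE_HOUR && localHour < SUNSET_HOUR;
}
```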

Accomplishments that we're proud of

  • This is probably the first lens of mine that I've personally used a lot :P
  • It's also already trending at #1 on Snapchat šŸ„ŗā£ļø
  • The happy vibes I get every time I use this lens are inexpressible.

What we learned

  • I learned a lot more about speech recognition, especially the transcription use case. I had worked with keyword detection before, but this is my first lens with live transcription.

What's next for Music Festival

  • There are endless possibilities. One thing I personally want to implement is dynamic style changes using remote assets; I later found out we can't store GIFs as remote assets yet. I'd love to dynamically change the look and feel of this lens according to seasons, times, dates, and events across the world.
  • Imagine using this in India: it would have an Indian theme, and if used in the US, it would be themed American, all without compromising asset quality, by using Lens Cloud storage and dynamically loading data according to the user's location.
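That location-to-theme idea could be sketched as a simple lookup from the user's country code to a remotely stored theme bundle. Everything here is hypothetical: the country codes, theme names, and the `themeFor` helper are illustrative placeholders, not any real Lens Cloud API.

```typescript
// Hypothetical mapping from a user's country code to a theme bundle
// name that would be fetched from remote storage.
const THEME_BY_COUNTRY: Record<string, string> = {
  IN: "india_festival_theme",
  US: "us_festival_theme",
};
const DEFAULT_THEME = "global_festival_theme";

// Fall back to a global theme for countries without a dedicated bundle.
function themeFor(countryCode: string): string {
  return THEME_BY_COUNTRY[countryCode.toUpperCase()] ?? DEFAULT_THEME;
}
```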
