Inspiration
As musicians, the COVID-19 pandemic has disrupted our ability to physically perform pieces together in bands, orchestras, and small ensembles. Meeting in online video calls to perform is also difficult because of audio lag. Another option is to record videos individually and then manually edit them together, which is a laborious and painstaking process. This inspired us to create Musync: an app that synchronizes recordings through an easy-to-use interface and efficiently combines them into a polished video.
What it does
Each musician creates a Musync account and joins a "class". The lead musician can then upload a lead recording (or a metronomic click track) for the class to play along with. Each member records themselves playing their individual part (while listening to the lead recording with headphones), then uploads the video file to Musync. Our app synchronizes the parts by listening for a distinctive clap at the beginning of each recording. The final output is a synchronized music video that features the musicians on screen, grouped by instrument.
How we built it
To create this application, we used Bulma as our frontend CSS framework, Google Cloud Services for our backend, and the ffmpeg multimedia framework to manipulate the video and audio files. To process the recordings, we start by splitting each user's mp4 into separate video and audio streams. For each audio file, we detect the distinctive clap by analyzing spikes in volume, then trim the audio and video files at the clap time so they are all synchronized. Finally, we overlay the trimmed audio files into a master audio track and arrange the trimmed videos into the final on-screen layout.
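A minimal sketch of this pipeline is shown below. It assumes ffmpeg is on the PATH and that Python with NumPy and SciPy is available on the backend; the file names, the RMS-spike threshold, and the simple side-by-side hstack layout are illustrative assumptions rather than the production code, which groups musicians by instrument.

```python
"""Rough sketch of the Musync processing steps described above."""
import subprocess
import numpy as np
from scipy.io import wavfile


def extract_audio(video_path: str, wav_path: str) -> None:
    # Split the uploaded mp4: pull out a mono 44.1 kHz wav for clap analysis.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-ac", "1", "-ar", "44100", wav_path],
        check=True,
    )


def find_clap_time(wav_path: str, window_ms: int = 20, ratio: float = 0.6) -> float:
    # Detect the first sharp spike in volume, i.e. the synchronization clap.
    rate, samples = wavfile.read(wav_path)
    samples = samples.astype(np.float64)
    win = int(rate * window_ms / 1000)
    n_windows = len(samples) // win
    rms = np.sqrt(np.mean(samples[: n_windows * win].reshape(n_windows, win) ** 2, axis=1))
    # First window whose loudness reaches a fraction of the loudest window.
    spike = int(np.argmax(rms >= ratio * rms.max()))
    return spike * win / rate


def trim_from(video_path: str, start: float, out_path: str) -> None:
    # Re-encode while trimming so the cut lands at the clap, not at a keyframe.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", f"{start:.3f}", "-i", video_path, out_path],
        check=True,
    )


def mix_and_stack(trimmed: list[str], out_path: str) -> None:
    # Overlay all audio tracks with amix and tile the (equal-height) videos with hstack.
    inputs = []
    for path in trimmed:
        inputs += ["-i", path]
    n = len(trimmed)
    filt = (
        "".join(f"[{i}:v]" for i in range(n)) + f"hstack=inputs={n}[v];"
        + "".join(f"[{i}:a]" for i in range(n)) + f"amix=inputs={n}[a]"
    )
    subprocess.run(
        ["ffmpeg", "-y", *inputs, "-filter_complex", filt, "-map", "[v]", "-map", "[a]", out_path],
        check=True,
    )
```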
Challenges we ran into
One of the hardest parts of developing this application was working with ffmpeg and the Google Cloud server, because of the time it took to load dependencies each time we deployed the application to test its functionality.