We wanted to make it easy for beginners and content creators to find suitable background music for their videos.
What it does
It takes a video as input and analyzes what type of video it is using a custom model trained on the Clarifai platform. We divided videos into broad categories such as action, calm, etc., and the end result is a combination of these categories. Based on those categories, we add suitable music to the video.
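The combination step can be sketched as follows. This is a minimal illustration, not our exact code: the per-frame scores, the 0.5 threshold, the category names, and the track paths in MUSIC_LIBRARY are all hypothetical.

```python
from collections import defaultdict

def combine_frame_predictions(frame_predictions, threshold=0.5):
    """Average per-frame category scores and keep categories above a threshold."""
    totals = defaultdict(float)
    for scores in frame_predictions:
        for category, confidence in scores.items():
            totals[category] += confidence
    n = len(frame_predictions)
    averages = {category: total / n for category, total in totals.items()}
    return sorted(category for category, avg in averages.items() if avg >= threshold)

# Hypothetical mapping from category combinations to music files.
MUSIC_LIBRARY = {
    ("action",): "tracks/upbeat.mp3",
    ("calm",): "tracks/ambient.mp3",
    ("action", "calm"): "tracks/mixed.mp3",
}

def pick_track(categories):
    """Look up a track for the detected categories, with a fallback."""
    return MUSIC_LIBRARY.get(tuple(categories), "tracks/default.mp3")
```

For example, two frames scoring `{"action": 0.9, "calm": 0.2}` and `{"action": 0.7, "calm": 0.4}` average out to action only, which maps to the upbeat track.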
How we built it
We take in a video and extract individual frames every n seconds, where n depends on the length of the video. We then send these frames to our custom model, which categorizes the video. Finally, we overlay the video with suitable music and output the new video with the background music.
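The pipeline above could look roughly like this. The frame budget of 10, the choice of n, and the use of ffmpeg are assumptions for illustration; the source does not say which tool we used for extraction and overlay.

```python
def sampling_interval(duration_seconds, target_frames=10):
    """Pick n so that roughly target_frames frames are extracted (assumed budget)."""
    return max(1, duration_seconds // target_frames)

def frame_timestamps(duration_seconds, target_frames=10):
    """Timestamps (in seconds) at which to grab one frame each."""
    n = sampling_interval(duration_seconds, target_frames)
    return list(range(0, duration_seconds, n))

def extract_frame_cmd(video, timestamp, out_image):
    """ffmpeg invocation to grab a single frame at the given timestamp."""
    return ["ffmpeg", "-ss", str(timestamp), "-i", video, "-frames:v", "1", out_image]

def overlay_audio_cmd(video, music, output):
    """ffmpeg invocation that keeps the video stream, swaps in the chosen
    music track as audio, and stops at the shorter of the two inputs."""
    return ["ffmpeg", "-i", video, "-i", music, "-map", "0:v", "-map", "1:a",
            "-c:v", "copy", "-shortest", output]
```

A 60-second clip with a budget of 10 frames yields n = 6 and timestamps 0, 6, ..., 54; each command list can then be run with `subprocess.run`.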
Challenges we ran into
We initially planned to use Clarifai's video analysis API, but we found that it was only available for demo purposes. We decided to analyze individual frames instead, and it took us some time to figure out how to actually extract them. Our biggest challenge was getting our model to categorize videos properly: we couldn't find any dataset to train it on, so we had to find a way around that. Labeling images manually would have taken a long time, so we wrote a scraper to collect our training images.
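The core of such a scraper can be sketched with the standard library alone. This is a simplified stand-in for whatever we actually wrote: it only pulls `img` URLs out of an already-fetched HTML page, and the fetching step (e.g. with urllib or requests) is omitted.

```python
from html.parser import HTMLParser

class ImageSrcParser(HTMLParser):
    """Collect the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.urls.append(src)

def image_urls(html):
    """Return all image URLs found in an HTML document string."""
    parser = ImageSrcParser()
    parser.feed(html)
    return parser.urls
```

Each returned URL can then be downloaded into a per-category folder to build the training set.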
Accomplishments that we're proud of
The project does exactly what we wanted it to do. With a little more polish, we have an actual product here, and that is something we are very proud of.
What we learned
We learned how to use Clarifai's API, how to write web scrapers, and a bunch of useful Python libraries.
What's next for Musify
The first step is to move from the CLI to a proper web interface. We have also requested access to JukeDeck's API, which generates music; we would like to integrate it so that every video gets unique music.