Inspiration
We are all passionate about music, and we've always wondered what it would be like if we had our favorite artists featured on other tracks!
What it does
We start by separating the vocals and instrumentals of the original track. From there, we determine the main 'cycle' or pattern in the original beat and generate AI lyrics for the desired featured artist. The lyrics come from a DeepSeek R1 model driven by carefully engineered prompts, which lets us concisely feed in the original song and the desired featured artist; the model then infers the lyrics that artist would write, based on their own style and the tempo of the beat. This forms a snippet, which is passed to DiffRhythm to generate a tone-adapted AI song. We then run that song through Open-Unmix to isolate the vocals, which are fed into RVC; it uses latent diffusion to adapt the voice to one of several preselected artists, each a neural net trained from a repository on HuggingFace. Finally, we overlay the converted vocals on the beat and splice the result back into the song, preserving the rhythm with peak-trough alignment.
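As a rough illustration of the final peak-trough alignment step, the idea can be sketched in NumPy: compute a coarse amplitude envelope for the vocal and the beat, then cross-correlate the envelopes to find the offset that lines the vocal's peaks up with the beat's. This is a minimal sketch under our own assumptions (mono arrays at the same sample rate, simple max-per-frame envelopes), not the project's actual implementation.

```python
import numpy as np

def amplitude_envelope(signal: np.ndarray, frame: int = 512) -> np.ndarray:
    """Max absolute amplitude per frame -- a cheap peak/trough contour."""
    n = len(signal) // frame
    return np.abs(signal[: n * frame]).reshape(n, frame).max(axis=1)

def best_offset(vocal: np.ndarray, beat: np.ndarray, frame: int = 512) -> int:
    """Sample offset that best aligns the vocal's peaks with the beat's.

    A negative result means the vocal should be shifted earlier.
    """
    env_v = amplitude_envelope(vocal, frame) 
    env_b = amplitude_envelope(beat, frame)
    # Remove the DC component so quiet stretches don't dominate the score.
    env_v = env_v - env_v.mean()
    env_b = env_b - env_b.mean()
    # Full cross-correlation; the peak marks the best-matching lag.
    corr = np.correlate(env_b, env_v, mode="full")
    lag_frames = int(np.argmax(corr)) - (len(env_v) - 1)
    return lag_frames * frame
```

In the real pipeline the envelopes would come from the isolated vocal stem and the instrumental, and the returned offset would decide where to splice the vocal back over the beat.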
Accomplishments that we're proud of
We're very proud that we were able to create this product in such a short timeframe with no prior experience in programmatic audio manipulation.
What's next for Feature.ai
We hope to build it up, specifically the methods by which we identify the core beat components. We'd like to add support for more complicated beats, such as beats with significant noise or a major switch halfway through. We also plan to improve our lyric generation by analyzing specific tonal frequencies of the generated beat, giving the model a better foundation to work from.
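One common way to find the repeating cycle in a beat is to autocorrelate its amplitude envelope and pick the strongest non-trivial lag. The sketch below is illustrative only (it assumes a precomputed envelope and a clean, steady beat), not the project's current detector, but it shows the kind of core-beat analysis we want to make more robust.

```python
import numpy as np

def cycle_length(envelope: np.ndarray, min_lag: int = 1) -> int:
    """Estimate the beat cycle as the lag (in envelope frames) where the
    envelope is most similar to a shifted copy of itself."""
    env = envelope - envelope.mean()
    # Autocorrelation, keeping only non-negative lags.
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # Skip lag 0 (trivially maximal) and lags shorter than min_lag.
    return int(np.argmax(ac[min_lag:])) + min_lag
```

A noisy beat or a mid-song switch breaks the single-global-peak assumption here, which is exactly why we want to move beyond this kind of simple approach.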
Built With
- diffrhythm
- googlecolab
- huggingface
- open-unmix
- python
- rvc-webui