We were inspired by our love of music to create an API that any developer could use to make a musician's life simpler and more productive. Sampling is common practice among musicians, but it is a time-consuming process that could be better automated.

What it does

Our API gives developers automatic sound-editing capabilities directly in the cloud. It offers several functions: audio source separation, time stretching, pitch shifting, and remixing. On our demo web application, which shows how developers can use the API, a user can upload a video or audio file, have it converted to .wav format, and then choose an edit to apply.
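The upload-then-edit flow can be sketched as a simple dispatch table. This is a hypothetical illustration, not our actual DSP code: the handler name and its crude implementation are placeholders standing in for real routines such as librosa.effects.time_stretch.

```python
# Hypothetical sketch of the edit-dispatch step. Handler names and
# signatures are illustrative only.

def time_stretch(samples, rate):
    # Placeholder: crude decimation stands in for a real phase-vocoder
    # time stretch, just to make the dispatch runnable.
    step = max(1, int(rate))
    return samples[::step]

EDITS = {"time_stretch": time_stretch}

def apply_edit(samples, edit, **params):
    """Look up the requested edit by name and apply it to decoded samples."""
    if edit not in EDITS:
        raise ValueError(f"unknown edit: {edit}")
    return EDITS[edit](samples, **params)
```

A request naming an unsupported edit fails fast with a ValueError, which maps cleanly onto a 400 response at the API layer.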

How we built it

We built the audio transformations and effects, along with audio analysis to automatically detect beats. We built out a database using Cloudinary to manage the songs and uploads, and built the server with Sanic.
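librosa's beat tracker returns beat positions as spectrogram frame indices, which must be converted to raw sample offsets before slicing. A minimal sketch of that conversion (the librosa call itself is shown only in a comment, and the helper mirrors what librosa.frames_to_samples does with default settings):

```python
# With librosa, beat tracking is roughly:
#   tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
# beat_frames are frame indices, not raw sample positions.

HOP_LENGTH = 512  # librosa's default hop length between frames

def frames_to_samples(beat_frames, hop_length=HOP_LENGTH):
    """Convert frame indices to sample offsets (frame * hop_length)."""
    return [f * hop_length for f in beat_frames]
```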

Challenges we ran into

Authentication and security (user login) was the biggest challenge we came across. We also ran into some JSON problems on the Sanic side.

Accomplishments that we're proud of

Being able to automatically track the beats in a song and sample them without requiring the user to pick a specific part. We also created a front-end web application so users can see how developers can use the API.
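Given the beat positions as sample offsets, automatic sampling amounts to slicing the signal between consecutive beats. A sketch under that assumption (the beat offsets are taken as input; the function name is illustrative):

```python
def slice_at_beats(samples, beat_samples):
    """Cut audio into clips spanning consecutive beat positions,
    so each clip is one beat-to-beat segment ready to sample or remix."""
    return [samples[a:b] for a, b in zip(beat_samples, beat_samples[1:])]
```

Because every clip starts and ends on a detected beat, the resulting segments loop cleanly without the user hand-picking boundaries.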

What we learned

How to use the Cloudinary API and new functions of the Python librosa library.

What's next for Cloud DSP

Cloud DSP currently has four main features, built in under 36 hours. Next, we plan to expand the list of features musicians use most frequently and improve the transformations we already offer.
