Countless individuals search YouTube for meditation melodies such as "Ocean Waves," "Wind Ambiance," and "Trees Swaying." With AMPLIFY, a listener hears these meditation melodies fused with Lo-fi musical elements generated from the environment they are in at that moment, whether they are at the beach, in the mountains, or in a metropolitan area, and whether they are walking, running, cycling, or road-tripping.
Process Of Building
- Upload Image File → Drag File From Local → Send Image → Base64 Payload
- Send File To Backend
- Backend (Google Cloud Platform + Node)
- Process File
- Send File To Google Cloud Platform Vision API
- Formulate Results To Send To Magenta
- Machine Learning (Python / Magenta / MusicVAE)
- Train MusicVAE
- Generate Meditation Melodies By Interpolating Between Note Sequences
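The first step of the pipeline above can be sketched as follows. This is a minimal illustration, assuming a hypothetical `/analyze` endpoint and `image` field name; these are not AMPLIFY's actual API:

```python
import base64
import json

def build_image_payload(image_bytes: bytes) -> str:
    """Wrap raw image bytes in a JSON payload with a Base64-encoded body."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"image": encoded})

# The frontend would then POST this payload to the backend, e.g.:
#   urllib.request.urlopen(
#       urllib.request.Request("https://backend.example/analyze",
#                              data=payload.encode(), method="POST"))
payload = build_image_payload(b"\x89PNG...")  # placeholder bytes, not a real PNG
```

Base64 encoding lets the binary image travel inside a plain JSON body, which is why the upload step ends with a Base64 payload.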
Backend (Google Cloud Platform + Node)
- The Google Cloud Platform Vision API could not recognize our images accurately, so we instead characterized each image by detecting its dominant colors. For example, if the detected colors were red, orange, or other bright colors, the generated melodies would be more upbeat and cheerful; if the detected colors were blue, purple, or other dark colors, the melodies would be more calming and serene.

Machine Learning (Python / Magenta / MusicVAE)
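The backend's color-to-mood heuristic described above could look roughly like this. The hue and lightness thresholds here are illustrative assumptions, not AMPLIFY's exact values:

```python
import colorsys

def mood_from_color(r: int, g: int, b: int) -> str:
    """Map one detected RGB color to a melody mood.

    Bright, warm hues (red/orange/yellow) -> "upbeat";
    cool or dark colors (blue/purple, low lightness) -> "calm".
    Thresholds are assumed for illustration.
    """
    h, l, _ = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    # Hues from red through yellow sit roughly in [0, 0.17] on the HLS wheel;
    # require a minimum lightness so dark reds still count as calming.
    if h <= 0.17 and l >= 0.4:
        return "upbeat"
    return "calm"
```

In practice the backend would aggregate this decision over all dominant colors the image analysis returns, then forward the resulting mood to Magenta.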
- MusicVAE is a hierarchical variational autoencoder that learns a compressed representation of musical qualities as a latent space: it encodes a musical sequence into a latent vector, which can later be decoded back into a musical sequence. We initially wanted to use the pretrained MusicVAE model, but we needed to personalize it, so we trained our own MusicVAE model to generate from the regions of latent space we needed.
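The core operation behind the interpolation step can be sketched with toy latent vectors. This is a conceptual illustration of linear interpolation in latent space, not the actual Magenta API; in MusicVAE each intermediate vector would be decoded back into a note sequence, producing a smooth morph between the two melodies:

```python
import numpy as np

def interpolate_latents(z_start: np.ndarray, z_end: np.ndarray,
                        num_steps: int) -> np.ndarray:
    """Return num_steps latent vectors evenly spaced from z_start to z_end."""
    alphas = np.linspace(0.0, 1.0, num_steps)[:, None]  # column of blend weights
    return (1.0 - alphas) * z_start + alphas * z_end

z_a = np.array([0.0, 1.0])  # toy latent code of encoded melody A
z_b = np.array([1.0, 0.0])  # toy latent code of encoded melody B
steps = interpolate_latents(z_a, z_b, num_steps=5)
```

Walking this path through latent space is what lets the generated meditation melodies drift gradually from one musical character to another instead of jumping abruptly.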