Please note: our files are too large to upload via Devpost. Check out the GitHub repository instead.
EPILEPSY WARNING - This project contains flashing lights and colors. Please be safe.
We were inspired by the movie Ratatouille, which features a smell visualizer. Music is an important part of all our lives, and the merging of technology and art is always exciting. Now that we have the capability to immerse listeners in a world created by their music, we were excited to try it!
What it does
Visualizes music as a 360° video for use with virtual reality!
How we built it
First we analyze a music file to produce a frequency-amplitude-over-time representation. Next, we run a Fourier transform and a peak analyzer on the file to determine the dominant frequencies at each moment. Color is then calculated by a novel algorithm that matches the timbre of each frame, derived from those dominant frequencies, to a color.
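The pipeline above can be sketched roughly as follows. This is a minimal illustration, not the actual raVR code: the function names, frame sizes, and the log-scale frequency-to-hue mapping are all assumptions made for the example.

```python
import numpy as np

def dominant_frequencies(samples, sample_rate, frame_size=2048, hop=1024):
    """Return the dominant frequency (Hz) for each short-time analysis frame."""
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
    peaks = []
    for start in range(0, len(samples) - frame_size, hop):
        # Window the frame to reduce spectral leakage, then take the FFT.
        frame = samples[start:start + frame_size] * np.hanning(frame_size)
        spectrum = np.abs(np.fft.rfft(frame))
        # Simple peak analysis: take the strongest bin as the dominant frequency.
        peaks.append(freqs[np.argmax(spectrum)])
    return np.array(peaks)

def frequency_to_hue(freq, f_min=20.0, f_max=20000.0):
    """Map a frequency to a hue in [0, 1) on a log scale (one possible color mapping)."""
    freq = np.clip(freq, f_min, f_max)
    return np.log(freq / f_min) / np.log(f_max / f_min)
```

A real implementation would consider several peaks per frame (to capture timbre rather than just pitch), but the per-frame FFT-then-peak-then-color flow is the same.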
Challenges we ran into
Keeping up with the speed of the actual song was difficult: rendering is slow, and our AWS credits didn't seem to work. Because of this constraint, we pre-rendered the visualization for our demo, and we are looking at ways to achieve a better, hopefully live, render.
What we learned
An uncommon but cool lesson: we learned about the importance of art and design in technology. Despite none of us being artists, we learned more about Blender and about putting together a whole project pipeline. As always, time management within a 24-hour window proved to be an important takeaway.
Accomplishments that we're proud of
Jay — Carrying the team and handling the major task of generating the demo render. Warren — Proud of truly committing to a project and seeing it through. Alex — This was my first hackathon! I'm glad I was able to contribute to the project more deeply than I thought I'd be able to, and I'm proud that I pulled through in the end. Jeremiah — Being (maybe?) one of the first people to explore visualization of timbre, a relatively arcane musical phenomenon.
What's next for raVR - VR music visualizer
We're looking at ways to expand our rendering capability, such as using the Blender Game Engine to render the visualizer as an explorable environment. We also hope to fine-tune how we visualize different aspects of music, and we're looking into other projects (like Melisma) that can analyze additional aspects of music, such as voice.