We really enjoy Virtual Reality and wanted to work with the Oculus DK2. Combining this with our love for music, we thought it would be interesting and fun to find a way to visualize music in a three-dimensional context.

What it does

HearVR is a program that combines Machine Learning with Virtual Reality to create an environment where SoundCloud tracks can be played and visualized through a frequency spectrum and user-created comments. Music files and their comments are gathered from SoundCloud, and sentiment analysis is performed on the comments. The songs are then played in a virtual three-dimensional environment where each song has a corresponding frequency spectrum and a stream of comments, each color-coded to show how positive or negative it is. The user can traverse the virtual space to explore different songs.

How we built it

We wrote a Python script that uses multiprocessing to concurrently download and process SoundCloud files. As it retrieves each track's comments, it sends them to an Azure web service that runs sentiment analysis and assigns each comment a score based on how positive or negative it is. Each track's comments, their scores, and additional metadata are written to a Comma-Separated Values (CSV) file. Unity then reads each song's CSV through a C# script (written in Visual Studio) that parses the data and builds the corresponding frequency spectrum and comment stream. Finally, we designed the virtual environment in Unity to display each song's frequency spectrum and stream its comments according to their SoundCloud timestamps.
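A minimal sketch of the comment-scoring stage described above. The `score_sentiment` stub stands in for the Azure web-service call (the real request details aren't shown here), and the field names are illustrative, not our actual schema; only the multiprocessing fan-out and the per-song CSV output follow the pipeline as described.

```python
import csv
import multiprocessing

# Tiny keyword lists so the stub runs offline; the real pipeline asked
# an Azure web service for each comment's sentiment score instead.
POSITIVE = {"love", "great", "amazing"}
NEGATIVE = {"hate", "bad", "boring"}

def score_sentiment(comment):
    """Placeholder for the Azure call: returns a score in [-1, 1]."""
    words = comment["body"].lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def process_comment(comment):
    # Attach the sentiment score to the fields Unity will need.
    return {
        "timestamp_ms": comment["timestamp_ms"],
        "body": comment["body"],
        "score": score_sentiment(comment),
    }

def write_song_csv(path, comments):
    # Score comments concurrently, then write one CSV per song;
    # Unity parses this file to stream the comments during playback.
    with multiprocessing.Pool() as pool:
        rows = pool.map(process_comment, comments)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["timestamp_ms", "body", "score"])
        writer.writeheader()
        writer.writerows(rows)
```

In the real script the pool also handled downloading and converting the audio, so the slow network and transcoding work overlapped across tracks.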

Challenges we ran into

Since we were working with VR, much of the technology we used was immature. We initially started with Unreal Engine, but quickly found its audio engine buggy and unreliable. After too many lost hours, we switched to Unity, a tool none of us had worked with before. Unity had a steep learning curve, but we pushed through. Then we hit another problem: Unity can't decode MP3s, which killed our plan to stream directly from SoundCloud. Instead, we did some trickery using Python to preprocess the MP3s into WAV files before feeding them to Unity. On the backend, we struggled through Microsoft Azure, which was also new technology to us.
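A sketch of the MP3-to-WAV preprocessing step, assuming `ffmpeg` is on the PATH (the conversion tool and output format here are assumptions, not a record of the exact trickery we used):

```python
import subprocess
from pathlib import Path

def wav_command(mp3_path):
    """Build the ffmpeg command that converts one MP3 to a PCM WAV."""
    wav_path = Path(mp3_path).with_suffix(".wav")
    # 44.1 kHz stereo 16-bit PCM: a plain format Unity imports natively.
    cmd = ["ffmpeg", "-y", "-i", str(mp3_path),
           "-ar", "44100", "-ac", "2", "-sample_fmt", "s16", str(wav_path)]
    return cmd, wav_path

def convert(mp3_path):
    # Run the conversion and return the path Unity should load.
    cmd, wav_path = wav_command(mp3_path)
    subprocess.run(cmd, check=True)
    return wav_path
```

Running this per track before launching Unity sidesteps the missing MP3 decoder entirely, at the cost of larger uncompressed files on disk.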

Accomplishments that we're proud of

We're really proud of combining our interests in Machine Learning and Virtual Reality to create a unique program that enhances the experience of listening to music.

What we learned

We learned new technologies such as Microsoft Azure and Unity, and how to integrate them with one another. We also picked up a new language, C#.

What's next for HearVR

We would like to make HearVR a world that generates music automatically based on a user's preferences and listening history. We would also like to make it a networked experience, so multiple people can listen in on the same session.
