Inspiration

Last year, one of our team members attended HackMann 2019, the first hackathon he had ever taken part in. Not really knowing what he was doing, he foolishly attempted a machine learning program in the language Processing, which didn’t go so smoothly (it ended up crashing during the live presentation). However, his group did have excellent taste in music, playing Patrick Open Sesame Lo-Fi for the entirety of the hackathon and garnering compliments from hackers and mentors alike. Now, for HackMann 2020, this competitor returns with more machine learning knowledge and more lo-fi music.

We believe music is the perfect medium for bridging gaps and making connections between people. Our group has bonded over a shared love of making and listening to music, and we want to share this with as many people as we can. We see our project as the perfect intersection of our shared interests in technology and music: it let us explore music beyond physical instruments and create a unique, accessible experience. Lo-FAi is a YouTube livestream that provides a constant stream of lo-fi music generated in real time by a machine learning model.

What it does

We built an LSTM neural network to generate lo-fi music. The music is played over a dynamic background we created and streamed to YouTube, which can be accessed through our website. This gives viewers constant access to a stream of newly generated music.

How we built it

To build the neural network, we used Keras with a TensorFlow backend in Google Colab, which let us train the LSTM model with GPU acceleration and host our code online so we could collaborate. To host the livestream, we initially tried the pylivestream library on a Raspberry Pi but ultimately used OBS Studio on Windows instead. For our landing site, we used repl.it as the host and built it with HTML, CSS, and JavaScript. For the stream itself, we used PyGame to create the animated stream screen and OBS Studio to control the audio and video input.
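
To make the model side concrete, here is a minimal sketch of the kind of Keras LSTM we trained; the layer sizes, sequence length, and note-vocabulary size below are placeholder assumptions rather than our exact configuration.

```python
# Minimal sketch of an LSTM next-note model in Keras (illustrative settings).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

SEQ_LEN = 32      # how many past notes the model sees at once (assumed)
N_PITCHES = 128   # size of the note vocabulary (assumed)

model = Sequential([
    LSTM(256, input_shape=(SEQ_LEN, N_PITCHES), return_sequences=True),
    Dropout(0.3),
    LSTM(256),
    Dense(N_PITCHES, activation="softmax"),  # distribution over the next note
])
model.compile(loss="categorical_crossentropy", optimizer="adam")

# X: (num_windows, SEQ_LEN, N_PITCHES) one-hot windows of past notes
# y: (num_windows, N_PITCHES) one-hot next note for each window
# model.fit(X, y, epochs=50, batch_size=64)
```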

Challenges we ran into

There are few sources that supply lo-fi music in MIDI format for free. After searching and not finding anything, we downloaded lo-fi sample kits that included MIDI files, and we also converted some WAV files to MIDI. When figuring out how to livestream from a Raspberry Pi, we first tried streaming in Python with the pylivestream module. It took two hours just to find the settings file, and then we hit more problems with the internet connection and camera detection. Once we got pylivestream working on a different computer, YouTube flagged our first test as explicit because only a head was in the frame and it assumed we weren’t wearing clothes. We had to set up a new livestream after that, and we decided to switch to OBS Studio for more reliable streaming, which also meant moving to a Windows computer because OBS would not install on the Raspberry Pi.
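
For context on the data side, the preprocessing boils down to turning those MIDI files into a flat sequence of note names. The sketch below uses music21 purely as an illustration; the library choice and folder name are assumptions, not necessarily our exact pipeline.

```python
# Illustrative MIDI preprocessing (music21 and the folder name are assumptions).
from pathlib import Path
from music21 import converter, note, chord

notes = []
for midi_path in Path("lofi_midis").glob("*.mid"):    # hypothetical folder
    score = converter.parse(str(midi_path))
    for element in score.flatten().notes:             # every note/chord event
        if isinstance(element, note.Note):
            notes.append(str(element.pitch))           # e.g. "C4"
        elif isinstance(element, chord.Chord):
            notes.append(".".join(str(p) for p in element.pitches))

# `notes` can then be mapped to integers / one-hot vectors to build the
# training windows for the LSTM.
```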

Accomplishments that we're proud of

Finishing the project, mainly: the neural net alone took around two hours to train on a Google Colab GPU, not counting the time spent collecting MIDI files and converting WAV files to MIDI. Add to that figuring out how to livestream on YouTube, trying (and failing) to host the livestream on a Raspberry Pi, and a general need not to stay up for 48 hours straight, and this project was a time crunch from beginning to end.

However, the part of this project that we feel is most impressive is, without question, the livestream. Channels that stream music around the clock typically pull from a predefined playlist or from a source so long it might as well be infinite. Our stream generates the music in real time, so what viewers hear is guaranteed to be new and unique.
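
The loop behind that claim is conceptually simple: sample one note at a time from the trained model and feed each prediction back in as context. The sketch below is illustrative, reusing the assumed sequence length from the model sketch above; the sampling details are not our production code.

```python
# Hedged sketch of the note-generation loop that feeds the stream (illustrative).
import numpy as np

def generate_notes(model, seed, n_notes=64, seq_len=32):
    """Roll the LSTM forward, feeding each predicted note back in as context."""
    history = list(seed)                   # seed: list of one-hot note vectors
    generated = []
    for _ in range(n_notes):
        x = np.expand_dims(np.array(history[-seq_len:]), axis=0)
        probs = model.predict(x, verbose=0)[0]
        probs = probs / probs.sum()        # guard against floating-point drift
        idx = np.random.choice(len(probs), p=probs)   # sample, don't argmax
        history.append(np.eye(len(probs))[idx])
        generated.append(idx)
    return generated   # note indices, later rendered to audio and picked up by OBS
```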

What we learned

We learned so much through this project that it’s almost impossible to figure out where to start. Starting with the audio, we looked into all sorts of approaches to music generation. From researching the structure of lo-fi music to learning neural network architectures and ultimately figuring out how to program the LSTM model in Python, the music generation was a wild ride that taught us a lot about how computers can see music. For the dynamic backgrounds, we learned how to use PyGame.
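
As an example of the PyGame side, a loop like the one below is enough to drive a simple animated background that OBS can capture as a window source; the drifting-circle visual is just a stand-in for our actual artwork.

```python
# Minimal PyGame animation loop of the kind used for the stream background.
import math
import pygame

pygame.init()
screen = pygame.display.set_mode((1280, 720))
clock = pygame.time.Clock()

t = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((25, 20, 40))                         # dark lo-fi backdrop
    x = 640 + int(200 * math.sin(t * 0.02))           # slow horizontal drift
    pygame.draw.circle(screen, (255, 170, 120), (x, 360), 60)
    pygame.display.flip()                             # OBS captures this window

    t += 1
    clock.tick(30)                                    # cap at 30 FPS

pygame.quit()
```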

None of us had ever gone live on YouTube before. In fact, since we came up with this idea shortly before the hackathon’s opening ceremony, one of us had to register with YouTube to be verified for livestreaming, which requires a 24-hour waiting period. Almost everything in this project was new to us. We learned that the pylivestream library really only works from the command line, and that the Raspberry Pi is terrible at handling any sort of streaming, whether a static image or webcam video. But in the end, we learned a lot about streaming with OBS, going live on YouTube, managing live audio and video generation, and much more.

What's next for Lo-FAi

There are a lot of places to go with Lo-FAi. For starters, we could layer on more instruments to give the music a more complete sound, as we only trained on the melody due to time and processing constraints. This could be done by building a network that learns how drum beats and lead parts fit into a typical lo-fi piece and generates those parts from the current network’s melody output.
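
Nothing like this exists yet, but one way to prototype that layering idea would be a small model that reads the current melody window and predicts drum and lead tokens for it; every name, shape, and vocabulary size in the sketch below is a speculative assumption.

```python
# Speculative sketch of a future "layering" network (does not exist yet).
from tensorflow.keras import layers, Model

SEQ_LEN = 32          # melody context length (assumed)
N_PITCHES = 128       # melody note vocabulary (assumed)
N_DRUM_TOKENS = 64    # hypothetical vocabulary of drum-hit patterns

melody_in = layers.Input(shape=(SEQ_LEN, N_PITCHES))   # current melody window
h = layers.LSTM(128)(melody_in)
drums_out = layers.Dense(N_DRUM_TOKENS, activation="softmax", name="drums")(h)
lead_out = layers.Dense(N_PITCHES, activation="softmax", name="lead")(h)

layering_model = Model(melody_in, [drums_out, lead_out])
layering_model.compile(optimizer="adam", loss="categorical_crossentropy")
```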

Another possible future endeavor is figuring out how to properly host the livestream on a Raspberry Pi or another cheap, low-power computer. This would make it more feasible to run the livestream for longer periods of time, since a full desktop computer would no longer be tied up feeding the stream.

Built With

css, html, javascript, keras, obs, pygame, python, raspberry-pi, repl.it, tensorflow
