What it does

Around Us hosts a 24/7 live stream on YouTube. When there are no song requests in the chat, songs are queued automatically based on the previously queued ones. Each track is processed with a panning surround effect that emulates an "8D" audio environment, with boosted bass. Lyrics are displayed line by line and, when the original language is not English, transliterated into Latin letters for singing along.

How we built it

We use the YouTube Live Streaming API to automatically create a live broadcast and a live stream. A web scraper watches the stream's chat for song requests and stores them in a queue hosted on Google Firebase. An event listener fires shortly before a song's turn: we use node-ytdl to search YouTube for the requested song and download it into a buffer, then use ffmpeg to manipulate the song's frequency content and amplitude and generate an audio file.
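As a rough illustration, the surround-and-bass manipulation can be expressed as an ffmpeg audio filter chain. This is a minimal sketch, not the project's actual settings: `apulsator` and `bass` are real ffmpeg filters, but the parameter values here are assumptions.

```javascript
// Minimal sketch of the ffmpeg invocation for an "8D"-style effect.
// apulsator slowly auto-pans the audio between channels (the rotating
// feel); bass boosts the low end. The exact values are illustrative.
function buildSurroundArgs(inputPath, outputPath) {
  const filters = [
    'apulsator=hz=0.125', // slow left/right pan, one cycle every ~8 s
    'bass=g=8',           // boost bass by 8 dB
  ].join(',');
  return ['-i', inputPath, '-af', filters, '-c:a', 'aac', outputPath];
}

// Usage: spawn ffmpeg with these arguments, e.g.
//   require('child_process').spawn('ffmpeg', buildSurroundArgs('in.m4a', 'out.m4a'));
```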

If there are no requests in the queue, we take the previously queued songs and find related songs to add to the queue.
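The fallback refill step could look like the sketch below; `findRelated` stands in for whatever related-song lookup is used and is a hypothetical helper, as is the choice of three songs per refill.

```javascript
// Sketch of the auto-queue fallback (hypothetical helper names).
// If no requests are pending, seed the queue with songs related to the
// most recently played track.
function refillQueue(queue, history, findRelated) {
  if (queue.length > 0) return queue; // chat requests take priority
  const seed = history[history.length - 1];
  return seed ? findRelated(seed).slice(0, 3) : [];
}
```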

At the same time, we make HTTP calls in the background to fetch the lyrics and the timestamp of each line of the song. We use Canvas to render each line onto a static background image, producing one image buffer per line. We then combine these images with the audio file from earlier to create a video, convert it to the proper stream format with ffmpeg, and send it to YouTube's RTMP server.
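The timing side of this step can be sketched as follows. The input shape `[{ time, text }]` with times in seconds is an assumption about the lyric data, not necessarily what the lyric source returns.

```javascript
// Sketch: compute how long each lyric image stays on screen, derived
// from the per-line timestamps. Input shape [{ time, text }] (seconds)
// is an assumed format.
function lineDurations(lines, songLengthSec) {
  return lines.map((line, i) => ({
    text: line.text,
    // a line is shown until the next line starts (or the song ends)
    duration: (i + 1 < lines.length ? lines[i + 1].time : songLengthSec) - line.time,
  }));
}
```

These durations can then feed the video-assembly step, pairing each rendered image with how long it should appear.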

Finally, we bind this RTMP stream to the broadcast we created earlier so the content goes live on YouTube.
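With the googleapis Node client, this corresponds to a `liveBroadcasts.bind` call. The sketch below only builds the request parameters; the IDs are placeholders and the OAuth setup is omitted.

```javascript
// Sketch: request parameters for youtube.liveBroadcasts.bind, which
// attaches the live stream (the RTMP ingest) to the broadcast.
// Broadcast/stream IDs are placeholders; auth setup is omitted.
function bindRequest(broadcastId, streamId) {
  return {
    id: broadcastId,
    part: ['id', 'contentDetails'],
    streamId,
  };
}

// Usage with the googleapis client (assumed setup):
//   const { google } = require('googleapis');
//   const youtube = google.youtube({ version: 'v3', auth });
//   await youtube.liveBroadcasts.bind(bindRequest('BROADCAST_ID', 'STREAM_ID'));
```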

Challenges we ran into

We were unable to send the audio stream from ffmpeg to YouTube's RTMP server for the first 14 hours, which cost us most of our time and left parts of the project unfinished.

Accomplishments that we're proud of

We successfully figured out how to pipe a music buffer through ffmpeg to YouTube, and implemented the music filters and lyric overlay.

What's next for Around Us

To be finished
