After taking a look at the three possible use cases posted by T-Mobile, I narrowed my choice down to "Simulcasting" for two simple reasons: (1) general interest and (2) expertise.

As for general interest, there are many streaming platforms out there (YouTube, Facebook Video, Vimeo, Twitch, etc.), and building a universal platform where one can "stream once and stream everywhere" is an interesting problem to solve.

As for expertise, while I don't have any experience with the other two use cases, "precise location" and "augmented and virtual reality (AR/VR)," I can relate to Simulcasting from both a user perspective and a developer perspective, as I am a cloud and DevOps expert who has worked with many API tools. Solving this problem requires cloud expertise and the ability to quickly understand different end-to-end APIs, which is exactly where I fit.

What it does

The vision for this "Universal Simulcasting Platform" is to present the user with an interface where a "streamer" can click a single button to stream to multiple live-stream endpoints, such as YouTube, Facebook, Twitch, and Vimeo.

In addition, there will be a live chat feed that aggregates the live chats from the different streaming platforms (YouTube, Facebook, Twitch, Vimeo, etc.), so the "streamer" can interact with audiences across channels through a bi-directional communication process.

How we built it

An app built with a Node.js + Express.js + Socket.IO backend was deployed to Azure App Service. On the client side, there are two main functions: (1) a video feed that goes live to different channels (YouTube, Facebook, Twitch, Vimeo, etc.) with a single "Start Broadcasting" button click, and (2) a chat box that aggregates the messages from the different live-stream channels.

When the "streamer" hits the "Start Broadcasting" button, a few things happen in the backend. The video is encoded with FFmpeg and the Socket.IO functions are executed. After processing the video, the app makes a call to Azure Media Services through an RTMPS endpoint, and it simultaneously makes a REST API call to another app service. That service authenticates with the different streaming services and then sends the output to the different streaming channels.
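One common way to push a single encoded stream to several RTMP(S) ingest endpoints at once is FFmpeg's tee muxer, which encodes once and fans the output out to every target. The sketch below only builds the argument list (the actual `spawn` call is left commented so it runs without FFmpeg installed), and the ingest URLs and `STREAM_KEY` placeholders are illustrative, not real credentials.

```javascript
// Build an FFmpeg argument list that fans one input stream out to several
// RTMP(S) ingest URLs using the tee muxer.
// const { spawn } = require('child_process'); // uncomment to actually run FFmpeg

function buildFanOutArgs(inputUrl, rtmpTargets) {
  // Each tee target is wrapped in [f=flv] because RTMP delivery expects FLV.
  const tee = rtmpTargets.map((url) => `[f=flv]${url}`).join('|');
  return [
    '-i', inputUrl,
    '-c:v', 'libx264', '-preset', 'veryfast', // encode the video once...
    '-c:a', 'aac',                            // ...and the audio once
    '-f', 'tee', '-map', '0', tee,            // ...then deliver everywhere
  ];
}

const args = buildFanOutArgs('rtmp://localhost/live/source', [
  'rtmp://a.rtmp.youtube.com/live2/STREAM_KEY',
  'rtmps://live-api-s.facebook.com:443/rtmp/STREAM_KEY',
]);
// spawn('ffmpeg', args); // would start the actual broadcast
```

The advantage of the tee approach is that the expensive H.264 encode happens only once, no matter how many channels receive the stream.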

For chat, a messaging queue such as Redis aggregates and manages the chats coming from audiences on the different live-streaming channels and displays them back to the "streamer". The "streamer" can also broadcast a message through the "unified simulcasting portal" chat window, which is then sent out to the different live-streaming channels.
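Before messages from different platforms can share one queue, each platform's payload has to be mapped into a unified shape. The sketch below shows that normalization step; the per-platform field names (`authorName`, `messageText`, `username`) are illustrative assumptions, not the platforms' actual API schemas, and in the full design the normalized messages would be published to a Redis channel.

```javascript
// Normalize chat payloads from different platforms into one unified shape
// before pushing them onto the aggregation queue.
function normalizeChat(platform, payload) {
  switch (platform) {
    case 'youtube': // hypothetical field names, not YouTube's real schema
      return { platform, user: payload.authorName, text: payload.messageText };
    case 'twitch':  // hypothetical field names, not Twitch's real schema
      return { platform, user: payload.username, text: payload.message };
    default:
      return { platform, user: payload.user || 'unknown', text: payload.text || '' };
  }
}

// In the full design these would go to Redis, e.g.:
// redis.publish('chat:inbound', JSON.stringify(normalizeChat(platform, payload)));
const unified = [
  normalizeChat('youtube', { authorName: 'alice', messageText: 'hello from YT' }),
  normalizeChat('twitch', { username: 'bob', message: 'hello from Twitch' }),
];
```

Outbound messages from the streamer's portal would take the reverse path: one unified message mapped back into each platform's own chat API format.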

Challenges we ran into

The time constraint was definitely the main issue, as I was the only person on the team and this was a side activity during breaks from my main job. In addition, researching the different live-streaming services and how to authenticate with them, reading about RTMPS, and managing Azure Media Services were all challenging.

As for the live-streaming APIs, it was quite hard to create a testing environment for each channel, since nothing is fixed: each live-streaming channel has to be created on demand. For example, I struggled to figure out exactly how to create a live-streaming channel for YouTube, and at one point it asked me to wait 24 hours for approval. I found a workaround by using a different Google account, but it was quite challenging.

Accomplishments that we're proud of

Although the end-to-end flow is not complete, I am proud that I learned so many things during the process, and I know exactly where to go from here.

What we learned

I got to know RTMPS and the Socket.IO platform better. I also gained experience building a project with Azure Media Services and RTMPS.

I learned that authentication & live streaming flows for different platforms are all different.

What's next for Universal Live Casting Platform

I definitely want to continue developing this.
