The world is full of talented people who never get the chance to share their talents. Today, a performer who wants to be discovered can either hope to overcome stiff competition and be featured on a traditional television program, or post to websites like YouTube or Facebook and hope to go viral amid an endless flood of digital content. CurtnCall was born out of the idea that every performer deserves a chance to perform for the world.
What it does
CurtnCall is the first time-limited, turn-based live video performance site on the Internet: only one performer is live at a time, and each turn lasts a fixed amount of time. During the performance, viewers react through a reaction panel (with reactions such as applause and laugh). Afterward, viewers vote on the performance with a simple like or dislike (we call them money and tomato). Performers can immediately re-watch their performance and see how viewers reacted moment by moment. Performers compete to top the CurtnCall leaderboards.
CurtnCall is unique in that it introduces scarcity to live video on the Internet. By limiting how long any one performer can hold the stage, CurtnCall pushes performers to create entertaining content while giving them the opportunity to perform for more viewers than they typically would reach on other platforms. Viewers take part in a shared live experience, watching different performers try to entertain them, one at a time.
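The mechanics above (reactions during the show, money/tomato votes afterward, a leaderboard) can be sketched as a simple data model. This is an illustrative sketch only; the type and function names are hypothetical, not CurtnCall's production schema.

```typescript
// Hypothetical event shapes for reactions and votes (illustrative names).
type Reaction = { performerId: string; kind: "applause" | "laugh"; at: number };
type Vote = { performerId: string; verdict: "money" | "tomato" };

// Tally votes per performer and rank them by net score (money minus tomato)
// to produce a simple leaderboard.
function leaderboard(
  votes: Vote[],
): Array<{ performerId: string; money: number; tomato: number }> {
  const tally = new Map<string, { money: number; tomato: number }>();
  for (const v of votes) {
    const row = tally.get(v.performerId) ?? { money: 0, tomato: 0 };
    row[v.verdict] += 1;
    tally.set(v.performerId, row);
  }
  return [...tally.entries()]
    .map(([performerId, counts]) => ({ performerId, ...counts }))
    .sort((a, b) => (b.money - b.tomato) - (a.money - a.tomato));
}
```

In practice the real leaderboard is computed by the aggregate jobs described below, but the ranking idea is the same.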
How I built it
CurtnCall is a complex, real-time web application built on modern messaging, video-streaming, and event-driven open-source technologies. In building CurtnCall, we had to answer:
- How do we maintain and communicate real-time state for the audience and the performer in a scalable fashion?
- How do we facilitate real-time, seamless turn-based video across many performers in a scalable fashion?
- How do we capture audience feedback so performers can gauge their effectiveness, given that they do not have a live, in-person audience?
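The second question, turn-based rotation, can be sketched as a FIFO queue with a fixed time limit per performer. This is a minimal sketch under assumptions we made up for illustration (a five-minute turn, a `TurnQueue` class); the real rotation logic lives in our Lambda functions.

```typescript
// Assumed fixed performance length; the real value is a product decision.
const TURN_MS = 5 * 60 * 1000;

interface Turn { performerId: string; startedAt: number }

// Hypothetical turn rotation: performers join a waiting line, and the
// head of the line goes live until their time expires.
class TurnQueue {
  private waiting: string[] = [];
  private current: Turn | null = null;

  join(performerId: string, now: number): void {
    this.waiting.push(performerId);
    if (!this.current) this.advance(now);
  }

  // Called on a timer tick; rotates when the current turn expires.
  tick(now: number): void {
    if (this.current && now - this.current.startedAt >= TURN_MS) {
      this.advance(now);
    }
  }

  private advance(now: number): void {
    const next = this.waiting.shift();
    this.current = next ? { performerId: next, startedAt: now } : null;
  }

  live(): string | null {
    return this.current?.performerId ?? null;
  }
}
```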
To support the real-time and data layer, we use GraphQL as the interface between a real-time Redis database and MongoDB for longer-term persistence. As events arrive in our data layer, such as a new performer getting in line to perform, an audience member selecting a laugh emoticon, or a performer's time running out, we update the application's state and broadcast it in real time to every visitor on the site using AWS IoT. The broadcast and state logic lives in AWS Lambda functions, which use push notifications to reach the audience and performer immediately.

Every event carries a timestamp, which lets us show performers how audience feedback evolved over the course of a performance. We also run regular jobs to update aggregate statistics for each performance (how many total laughs did it get?), each performer (how many total viewers have watched them?), and CurtnCall as a whole (which performer leads in all-time laughs?).
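One way to turn those timestamped events into the feedback-over-time view is to bucket them into one-second bins. This sketch uses an illustrative event shape; in production the events flow through Redis and Lambda rather than an in-memory array.

```typescript
// Illustrative event shape: `at` is milliseconds since performance start.
interface ReactionEvent { kind: string; at: number }

// Bucket reactions into one-second bins so a performer replaying the
// recording can see feedback evolve: timeline[second][kind] = count.
function reactionTimeline(
  events: ReactionEvent[],
): Map<number, Map<string, number>> {
  const timeline = new Map<number, Map<string, number>>();
  for (const e of events) {
    const second = Math.floor(e.at / 1000);
    const bin = timeline.get(second) ?? new Map<string, number>();
    bin.set(e.kind, (bin.get(e.kind) ?? 0) + 1);
    timeline.set(second, bin);
  }
  return timeline;
}
```

The aggregate jobs mentioned above are then just sums over these bins (per performance, per performer, or site-wide).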
To support real-time video, we use Red5 as our video server together with WebRTC. The video server receives a WebRTC feed from the current performer and republishes it to viewers in real time. We also record each performance and run post-processing so that performers can play it back.
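The glue between the turn logic and the video server amounts to a handoff: when the live performer changes, stop the outgoing stream and start the incoming one. In this sketch, `stopStream` and `startStream` are hypothetical callbacks standing in for the real Red5 API calls, which are not shown here.

```typescript
// Hypothetical handoff between turn logic and the video server.
// `stopStream`/`startStream` are placeholders for real server calls.
function handoff(
  previous: string | null,
  next: string | null,
  stopStream: (performerId: string) => void,
  startStream: (performerId: string) => void,
): void {
  if (previous === next) return;      // same performer: nothing to do
  if (previous) stopStream(previous); // tear down the outgoing feed
  if (next) startStream(next);        // publish the incoming feed
}
```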
Challenges I ran into
One challenge was keeping an accurate record of the users currently on the site. To do this, we use AWS IoT and establish a WebSocket connection for every performer and viewer. If someone disconnects for any reason, we are notified immediately and can run the necessary cleanup logic in AWS Lambda. We also use AWS IoT to push data from the server to clients via GraphQL subscriptions.
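The bookkeeping driven by those connect/disconnect notifications can be sketched as a small presence set. The `Presence` class and its method names are illustrative, not our Lambda code; the real cleanup (e.g. pulling a disconnected performer out of line) hangs off the same two events.

```typescript
// Illustrative presence tracking driven by connection lifecycle events.
class Presence {
  private online = new Set<string>();

  connected(clientId: string): void {
    this.online.add(clientId);
  }

  // Returns true if the client was actually online, i.e. cleanup
  // logic should run exactly once per disconnect.
  disconnected(clientId: string): boolean {
    return this.online.delete(clientId);
  }

  count(): number {
    return this.online.size;
  }
}
```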
Accomplishments that I'm proud of
I'm proud that CurtnCall records each performance and lets performers watch the reactions play back in real time. It's great that the recorded performance is delivered almost instantly and that all the video processing runs serverlessly.
What I learned
I learned how to build a front-end with React, pass data between React components with Redux, structure client-server communication with Apollo and GraphQL, manage and access data quickly with Redis, and publish and receive video in the browser with WebRTC and Red5 Pro.
What's next for CurtnCall
We’d like to take CurtnCall public and see how performers and viewers interact with the website. We’ve thought about doing an open mic night to introduce CurtnCall to performers and viewers alike.
If many performers use the channel and performance time becomes scarcer, CurtnCall could expand from a single-channel launchpad for performers into a decentralized, audience-driven Internet broadcast network with multiple channels, scheduled or turn-based.