Users upload videos of the same event, and my app produces a master video that lets you scroll between different perspectives while it plays. The clips are aligned with each other through audio processing: by matching their soundtracks, the app figures out how they overlap in time. This gives you multiple perspectives on an event, and it even lets you watch moments you didn't record yourself. Because the audio processing and syncing turned out to be such a big task, the front-end suffered a bit.
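For a sense of how audio-based alignment can work, here is a minimal sketch (the submission includes no code, so the function, file paths, and the use of SciPy's cross-correlation are my assumptions, not the app's actual implementation): normalize both soundtracks, then take the lag that maximizes their cross-correlation as the time offset between the two clips.

```python
# Hypothetical sketch: estimate the time offset between two clips
# by cross-correlating their audio tracks. Assumes both tracks are
# WAV files at the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def estimate_offset(ref_path: str, other_path: str) -> float:
    """Return the offset (in seconds) of `other` relative to `ref`."""
    rate_a, a = wavfile.read(ref_path)
    rate_b, b = wavfile.read(other_path)
    assert rate_a == rate_b, "resample the tracks to a common rate first"

    # Collapse stereo to mono and normalize so loudness differences
    # between recordings don't dominate the correlation.
    a = a.astype(float)
    b = b.astype(float)
    if a.ndim > 1:
        a = a.mean(axis=1)
    if b.ndim > 1:
        b = b.mean(axis=1)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)

    # The lag at which the cross-correlation peaks is the best alignment.
    corr = correlate(a, b, mode="full")
    lag = corr.argmax() - (len(b) - 1)
    return lag / rate_a
```

A positive result means the second clip starts earlier than the reference; once every clip has an offset against a common reference, a player can switch perspectives at the matching timestamp.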