Inspiration

Long hours spent scrubbing through lengthy video streams to edit out the stretches where nothing is happening.

What it does

It takes in a video file, goes through it, and removes all frames where nothing significant is happening. Currently, if there has been no motion for a set interval, anything that was previously in motion is treated as part of the background. Frames that contain only background and no significant foreground are marked for removal. The output is a new video file with those frames removed.

How we built it

We built it using Python, OpenCV, and MoviePy.

Challenges we ran into

A lack of domain knowledge about which tools to use for this project, and inconsistent documentation that hadn't been updated for the library versions we were using.

Accomplishments that we're proud of

We had no prior domain knowledge, but we now understand conceptually how motion tracking and background subtraction work.

We were able to produce the output we wanted and finished what we set out to build within the duration of the hackathon.

What we learned

We learned how to use the OpenCV API. We learned about the different methods people use to detect motion and found common patterns across various examples.

What's next for Automatic Video Editor

At the moment, it cuts out the frames, but the audio track is left untouched, so we want to cut the audio corresponding to the removed frames as well.
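One way to keep the audio in sync, sketched here with MoviePy's 1.x API: merge the kept frame indices into time ranges and cut matching subclips, which carry their audio along with the video. The function names, paths, and fps value are our own illustration:

```python
def frames_to_ranges(keep, fps):
    """Merge consecutive kept frame indices into (start, end) times in seconds."""
    ranges = []
    for i in keep:
        t0, t1 = i / fps, (i + 1) / fps
        if ranges and ranges[-1][1] == t0:
            ranges[-1][1] = t1          # extend the current run
        else:
            ranges.append([t0, t1])     # start a new run
    return [(a, b) for a, b in ranges]

def cut_video_with_audio(path, keep, fps, out_path):
    """Write a new video containing only the kept ranges, audio included.

    subclip() keeps audio and video together, so concatenating the kept
    ranges trims the audio in sync with the frames (MoviePy 1.x API).
    """
    from moviepy.editor import VideoFileClip, concatenate_videoclips
    clip = VideoFileClip(path)
    parts = [clip.subclip(a, b) for a, b in frames_to_ranges(keep, fps)]
    concatenate_videoclips(parts).write_videofile(out_path)
```

For example, `frames_to_ranges([0, 1, 2, 5, 6], 10)` yields `[(0.0, 0.3), (0.5, 0.7)]`.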

Staying with audio, we also want to use it as another criterion for whether something is happening in the scene.

We know that not all of the frames we cut out should necessarily be considered filler or garbage, so we want to take extra input to specify the threshold for what counts as filler. This includes taking in images to label specific objects in the scene and considering the frequency of the objects in motion. We also want to give the user the option to keep or restore cut frames after filtering completes, in case our program misjudged something they wanted to keep.
