I have an old animated film that I've always wanted a better-quality copy of. Because the film is old, the best copy available is very poor. With this tool and a lot of time, I'll be able to convert it to a higher quality.
What it does
Asks the user for a video file input, then analyzes the video to determine its frame rate (fps). Uses the fps to split the video into individual frames and a separate audio file. Upscales those frames using waifu2x's deep convolutional neural networks, then recombines the enhanced frames with the audio file to produce an improved video.
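The steps above could be sketched in Bash roughly as follows. This is a sketch of the pipeline, not the project's actual scripts: ffmpeg/ffprobe for splitting and recombining, and a waifu2x CLI such as waifu2x-converter-cpp with `-i`/`-o`/`--scale-ratio` flags, are assumptions.

```shell
#!/usr/bin/env bash
set -euo pipefail

# ffprobe reports a frame rate as a fraction such as "24000/1001";
# convert it to a decimal so it can be reused with ffmpeg's -framerate.
fps_to_decimal() {
  echo "$1" | awk -F/ '{ printf "%.3f", $1 / $2 }'
}

enhance_video() {  # usage: enhance_video input.mp4 output.mp4
  local in="$1" out="$2" fps
  fps=$(ffprobe -v error -select_streams v:0 \
        -show_entries stream=r_frame_rate \
        -of default=noprint_wrappers=1:nokey=1 "$in")
  fps=$(fps_to_decimal "$fps")

  mkdir -p frames enhanced
  ffmpeg -i "$in" frames/%06d.png          # split video into frames
  ffmpeg -i "$in" -vn -c:a copy audio.m4a  # separate audio (assumes AAC; adjust as needed)

  # Upscale each frame (hypothetical waifu2x-converter-cpp flags;
  # check your build's options).
  for f in frames/*.png; do
    waifu2x-converter-cpp -i "$f" -o "enhanced/$(basename "$f")" --scale-ratio 2
  done

  # Recombine the enhanced frames with the original audio at the original fps.
  ffmpeg -framerate "$fps" -i enhanced/%06d.png -i audio.m4a \
         -c:v libx264 -pix_fmt yuv420p -c:a copy "$out"
}
```

The fps is detected up front and reused at the recombination step so the enhanced video keeps the original timing.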
How I built it
Using Perl and small Bash scripts.
Users need the following dependencies installed: Perl, Bash, waifu2x, and a tool to split and recombine frames and audio (typically ffmpeg).
Challenges I ran into
Time: it came down to the last few minutes. Waifu2x uses deep convolutional neural networks evaluated on the GPU, so longer videos take a significant amount of time to process.
Accomplishments that I'm proud of
I finished and it works!
What I learned
I need more time for documentation. Maybe get some sleep and a better keyboard for next time (my fingers hurt).
What's next for WAVE: Waifu2x-based Animated Video Enhancer
Optimization: it currently takes about 15 minutes to improve a 20-second video clip. Splitting the conversion of each frame into separate processes might speed things up. Documentation updates.
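One cheap way to try the per-frame parallelism idea is `xargs -P`, which fans the frames out to N worker processes. A minimal self-contained sketch follows; the waifu2x invocation is stood in for by a placeholder `cp` so it runs anywhere, and in practice GPU memory, not CPU cores, would likely bound how much parallelism actually helps.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Make a throwaway frames directory with a few dummy "frames".
workdir=$(mktemp -d)
mkdir -p "$workdir/frames" "$workdir/enhanced"
for i in 1 2 3 4 5 6 7 8; do
  printf 'frame-%d' "$i" > "$workdir/frames/$i.png"
done

# Fan the frames out to 4 worker processes. In the real pipeline the
# `cp` placeholder would be the per-frame waifu2x command.
ls "$workdir/frames" | xargs -P 4 -I{} \
  cp "$workdir/frames/{}" "$workdir/enhanced/{}"

echo "enhanced $(ls "$workdir/enhanced" | wc -l | tr -d ' ') frames"
```

Because each frame is independent, this needs no coordination between workers; the only shared state is the output directory.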