Inspiration

Since all of our members have Asian backgrounds, we began by studying the needs and difficulties people in our home countries face around modern internet use and accessible video consumption, and that research led us to this idea.

What it does

LittleVid is a lossy video compression service built on deep learning models that enables faster video transfer and access with far less data consumption. The project aims to make video delivery faster and more accessible across the world, with a specific focus on regions with poor connectivity, by allowing downloads at a fraction of the original size.

This is made possible by our LSTM-based machine learning algorithm, which compresses the original video into a limited number of frames plus an encoding pattern, a drastically smaller representation from which the entire video can later be reconstructed.
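
Conceptually, the compressed artifact is just a few keyframes plus a learned pattern. Here is a tiny illustrative sketch of what that could look like; the field names are hypothetical, not our actual on-disk format:

```python
from dataclasses import dataclass
import numpy as np


# Hypothetical shape of the compressed artifact: a handful of keyframes plus
# the learned encoding pattern the decoder uses to rebuild every other frame.
@dataclass
class CompressedVideo:
    keyframes: list[np.ndarray]  # e.g. just the first and last frame
    pattern: np.ndarray          # encoding pattern emitted by the LSTM
    fps: float                   # playback timing for reconstruction
    num_frames: int              # how many frames to reconstruct in total
```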

How we built it

We start by splicing the video into individual labeled frames, which are then fed to an auto-encoder CNN. The CNN selectively pools each frame into groups of four-to-six-pixel blocks, assigning a location and color parameter to each group. These are then sent to a seq2seq LSTM which, based on the Boltzmann equation, assigns an entropy value to the labeled frames and outputs the compressed video data as only the first and last frames plus an encoded pattern. This drastically reduces the size of the media, allowing for efficient, faster data transfer even at slower connection speeds.
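
A minimal PyTorch sketch of how a pipeline like this could be wired together; the OpenCV frame splicing, the module names (`splice_frames`, `FrameAutoencoder`, `PatternLSTM`), and all hyperparameters here are illustrative assumptions, not our exact code:

```python
import cv2                      # OpenCV, assumed here for frame splicing
import numpy as np
import torch
import torch.nn as nn


def splice_frames(path, size=(64, 64)):
    """Step 1: splice the video into individual frames."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, size))
    cap.release()
    # (N, 3, H, W) float tensor scaled to [0, 1]
    return torch.from_numpy(np.stack(frames)).permute(0, 3, 1, 2).float() / 255.0


class FrameAutoencoder(nn.Module):
    """Step 2: auto-encoder CNN that pools each frame into a compact latent code."""

    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


class PatternLSTM(nn.Module):
    """Step 3: seq2seq-style LSTM over the per-frame latents that emits the
    'encoding pattern' used to reconstruct intermediate frames at decode time."""

    def __init__(self, latent_dim=128, hidden_dim=256):
        super().__init__()
        self.rnn = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, latents):          # latents: (1, N, latent_dim)
        pattern, _ = self.rnn(latents)
        return self.head(pattern)        # one predicted latent per frame


# Putting it together: keep only the first and last frames plus the pattern.
# frames = splice_frames("input.mp4")            # "input.mp4" is a placeholder
# _, latents = FrameAutoencoder()(frames)
# pattern = PatternLSTM()(latents.unsqueeze(0))
# compressed = (frames[0], frames[-1], pattern)
```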

Challenges we ran into

This project makes heavy use of NVIDIA CUDA, the cuDNN framework, and GPU optimization through the PyTorch library. We ran into challenges installing and properly using the CUDA drivers and NVIDIA graphics stack throughout the event. At one point we even dealt with a boot loop and an almost-dead laptop!
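
For anyone fighting a similar setup, a few stock PyTorch calls (standard library APIs, not project code) make it quick to check whether CUDA and cuDNN are actually visible:

```python
import torch

# Sanity checks worth running after (re)installing the NVIDIA stack:
print(torch.__version__)                     # PyTorch build
print(torch.version.cuda)                    # CUDA version PyTorch was built against
print(torch.backends.cudnn.is_available())   # is cuDNN visible to PyTorch?
print(torch.cuda.is_available())             # can PyTorch see the GPU at all?
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))     # which GPU was detected
```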

Accomplishments that we're proud of

We managed to solve all of the above issues with the graphics dependencies and GPU optimization at the last second! One of our biggest accomplishments was successfully getting the LSTM to work in conjunction with the auto-encoder CNN. All of this was done with incredible help from the PyTorch library in a Linux environment.

What we learned

When we use the terms AI or machine learning, we tend to think of them as something to be cautious about. Throughout our development process, we've learned that a brighter future of AI and humans living together will be one where they are "blended," with no need to distinguish one intelligence from another. LittleVid is a service that doesn't emphasize the AI behind the user experience, which we feel is the most successful aspect of our work: it aims for the human experience.

What's next for LittleVid

  1. The current project depends heavily on many existing libraries, which limits its scope. Writing our own implementations could increase the program's efficiency.
  2. In the future, this project can be applied to devices with no GPU or with less powerful ones, such as low-end smartphones, by utilising only the CPU's processing power (see the sketch after this list).
  3. We expect this project to be adopted by major video-sharing websites and cloud platforms for further development.
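
As a minimal illustration of item 2, the same PyTorch code can fall back to the CPU when no usable GPU is present; the model and tensor shapes below are dummy stand-ins:

```python
import torch
import torch.nn as nn

# Pick the device at runtime: GPU if available, otherwise plain CPU,
# so the same code runs on low-end hardware with no NVIDIA GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.LSTM(input_size=128, hidden_size=256, batch_first=True).to(device)
latents = torch.randn(1, 300, 128, device=device)  # dummy frame-latent sequence

with torch.no_grad():       # inference only; keeps memory low on weak hardware
    pattern, _ = model(latents)
print(pattern.shape, "on", device)
```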

Built With

Python, PyTorch, CUDA, cuDNN, Linux