Inspiration

Compact Discs store discretized samples of a waveform and use them to reproduce an exact copy of that waveform. I thought that if you could run that process in reverse using image noise, you could store just a mathematical function that represents the noise of an image, along with a function that represents its color gradient. If you did this over square chunks of the image, you could control the trade-off between how compressed the image is and how much noise it contains. The only information needed to reproduce a chunk is the probability of a 1 and a PRNG seed that we optimize for, which fits in 8 bytes per chunk regardless of the chunk's size. The more time given to seed optimization, the closer the noise can be made to the exact data; in most cases, with 64x64-pixel chunks, less than 12% noise can be obtained.
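Concretely, the per-chunk record described above might look like this (a sketch: the field names and layout are an assumption for illustration, not our exact on-disk format):

```c
#include <stdint.h>

/* Hypothetical 8-byte per-chunk record: a probability and an optimized
 * PRNG seed are all that is needed to regenerate the chunk's noise.
 * 4 + 4 = 8 bytes, regardless of how many pixels the chunk covers. */
struct chunk_record {
    float    prob;  /* probability that any given pixel in the chunk is a 1 */
    uint32_t seed;  /* PRNG seed found by the optimizer */
};
```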

How we built it

Our implementation is quite simple: it uses the C standard library's PRNG, starting from a base seed of 0 and slowly optimizing it by probing random candidate seeds for a user-configurable number of iterations.
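A minimal sketch of that search, assuming binary (dithered) chunks; the helper names and the xorshift candidate generator are illustrative, not our exact code:

```c
#include <stdint.h>
#include <stdlib.h>

#define CHUNK 64                   /* chunk side length in pixels */
#define NPIX  (CHUNK * CHUNK)

/* Count how many of the chunk's binary pixels disagree with the noise
 * that srand(seed) produces, where each pixel is a 1 with probability p. */
static int mismatches(const uint8_t *pix, unsigned seed, double p)
{
    srand(seed);
    int bad = 0;
    for (int i = 0; i < NPIX; i++) {
        int bit = ((double)rand() / RAND_MAX) < p;
        bad += (bit != pix[i]);
    }
    return bad;
}

/* xorshift32: a cheap generator for candidate seeds, kept separate from
 * rand() so the mismatch test can reseed the C library PRNG freely. */
static unsigned next_candidate(unsigned *state)
{
    *state ^= *state << 13;
    *state ^= *state >> 17;
    *state ^= *state << 5;
    return *state;
}

/* Start from the base seed 0, then randomly probe `tries` candidate
 * seeds, keeping whichever reproduces the chunk most faithfully. */
static unsigned best_seed(const uint8_t *pix, double p, int tries)
{
    unsigned state = 0x9e3779b9u;
    unsigned best = 0;
    int best_bad = mismatches(pix, 0, p);
    for (int t = 1; t < tries; t++) {
        unsigned cand = next_candidate(&state);
        int bad = mismatches(pix, cand, p);
        if (bad < best_bad) { best_bad = bad; best = cand; }
    }
    return best;
}
```

Decoding is just the inner loop of `mismatches` run once with the stored seed and probability, writing the bits out instead of comparing them.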

Challenges we ran into

Up to the very end, I was unable to get the chunks to align properly in the generated image: each row is offset by a few chunks from where it should be, so the output is misaligned. We spent so much time trying to solve this that we were unable to work on generating color gradients. Moreover, this is obviously a lossy process that produces dithered images, so it's not well suited to replacing something like JPEG.

Accomplishments that we're proud of

The compression ratio on some images was as high as 1200x, and the mathematical ceiling is as high as 24,000x on large images with large chunks. Given a long time to optimize PRNG seeds, the resulting image is still quite accurate. Even with heavy seed optimization on a large image, compression takes only on the order of single-digit seconds.
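The arithmetic behind those figures, assuming 24-bit RGB input (3 bytes per pixel) and one 8-byte record per chunk:

```c
/* Ceiling on the compression ratio for an n-by-n chunk of 24-bit RGB
 * pixels: n*n*3 raw bytes collapse to a single 8-byte record. */
static double ratio_ceiling(int n)
{
    return (double)n * n * 3 / 8.0;
}
/* ratio_ceiling(64)  -> 1536:  the ballpark of the ~1200x we measured */
/* ratio_ceiling(256) -> 24576: the ~24,000x ceiling for large chunks  */
```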

The name is a list of our initials: Ben, Ria, and Felicity.

What we learned

We learned a lot about probability and optimization.

What's next for BARF Stochastic Image Compression

I'd like to go back and fix the remaining alignment issues in the generated images. I think this compression scheme would be well suited to transmission over low-throughput networks like Bluetooth Low Energy. In situations where fidelity is not necessary and only high-contrast images are needed, it can work quite well. Moreover, for smaller chunk sizes, precomputed tables of seeds whose noise approximately matches common probabilities could skip the optimization step in many cases, though at the cost of image accuracy.
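That table-lookup idea could be sketched like this (hypothetical names; the 8-bit quantization of the probability is an assumption):

```c
#include <stdint.h>

#define PROB_LEVELS 256   /* quantize p to 8 bits */

/* Precomputed offline: for each quantized probability, one seed whose
 * noise pattern is approximately right for that p. */
static uint32_t seed_table[PROB_LEVELS];

/* Map a probability in [0, 1] to its table slot. */
static int prob_index(double p)
{
    int idx = (int)(p * (PROB_LEVELS - 1) + 0.5);
    if (idx < 0) idx = 0;
    if (idx > PROB_LEVELS - 1) idx = PROB_LEVELS - 1;
    return idx;
}

/* Skip the per-chunk seed search entirely, at some cost in accuracy. */
static uint32_t lookup_seed(double p)
{
    return seed_table[prob_index(p)];
}
```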
