Neural Texture Transfer
Neural texture transfer is a technique primarily designed to integrate the texture of one image into another. It is heavily inspired by the paper "A Neural Algorithm of Artistic Style", which presents an algorithm demonstrating the power of backpropagation in optimization.

We start by loading and preprocessing the content image and the texture image. A function, 'Slicer', splits the picture into arbitrarily many pieces, since the model (the VGG19 architecture in this case) will not maintain the resolution of an HD image. It also resizes the texture to match the content's format, so the two can be processed in parallel. The variable 'Var Image' (defined in the computation graph) is initialized with the values of 'Content img' after preprocessing. From it arise two batches: a color version and a grayscale version. The four inputs are fed to the model as follows: the preprocessed 'Content img' is concatenated with the preprocessed 'Texture img', and 'Color img' is paired with 'Grayscale img'. There are three outputs, which contain the values of the model's hidden layers:
- 'color_outputs' contains the features of 'Content img' and 'Color img'
- 'bw_outputs' contains the gram matrices of 'Texture img' and 'Grayscale img'
- 'nogram_outputs' contains the raw features of the latter pair
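
The gram matrices mentioned above summarize texture by measuring correlations between feature channels, discarding spatial layout. A minimal sketch of the idea in NumPy (the function name and normalization choice are assumptions, not taken from the article's code):

```python
import numpy as np

def gram_matrix(features):
    """Compute the Gram matrix of a hidden-layer activation map.

    features: array of shape (H, W, C).
    Returns a (C, C) matrix of channel-to-channel correlations,
    normalized by the number of spatial positions.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # flatten the spatial dimensions
    return flat.T @ flat / (h * w)      # channel correlations
```

Comparing the gram matrix of 'Texture img' features with that of the grayscale variable's features is what drives the texture to transfer, independently of where each pattern sits in the image.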
Every pair of the previous outputs, as well as the pair ('Content img', 'Color img'), is fed to the 'get_loss' function, which computes the sum of squared differences between two similarly shaped tensors. The sum of these four losses forms the 'total_loss'. The key ingredient in the optimization loop is that the image variable must be 'de-processed', turned into a grayscale version that is fed into one branch of the computation graph, and then reprocessed. The algorithm can also be applied to images with completely different colors, like the leaf and crocodile skin attached, although these are harder to optimize. To see what role these losses play in the optimization, and for more examples, read more on Medium.
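
A minimal sketch of the loss and the grayscale branch, assuming plain NumPy arrays; 'get_loss' is the name used in the text, while 'to_grayscale' and the luminance weights are my own assumptions for illustration:

```python
import numpy as np

def get_loss(a, b):
    """Sum of squared differences between two tensors of the same shape."""
    return float(np.sum((a - b) ** 2))

def to_grayscale(img):
    """Collapse an RGB image to luminance and replicate it back to three
    channels so it can pass through the same network branch.
    The 0.299/0.587/0.114 weights are the standard luminance
    coefficients -- an assumption, not taken from the article.
    """
    gray = img @ np.array([0.299, 0.587, 0.114])
    return np.repeat(gray[..., None], 3, axis=-1)

# One optimization step might then look like this (hypothetical names):
#   var_image  = deprocess(var_image)       # undo VGG preprocessing
#   gray_image = to_grayscale(var_image)    # input for the grayscale branch
#   feed preprocess(var_image) and preprocess(gray_image) to the model,
#   then sum get_loss over the four output pairs to obtain total_loss.
```

'total_loss' is simply the sum of 'get_loss' applied to each of the four pairs, so every pair pulls the image variable toward matching either the content's features or the texture's gram statistics.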