[Image gallery captions: cloud synthesis for the 452 nm and 827 nm bands; real clouds for the 452 nm and 827 nm bands; an RGB-enhanced real image with clouds; a synthesized cloud (452 nm band).]
As the amount of satellite image data increases, it finds numerous applications in areas such as natural environment monitoring, disaster response, and transportation and infrastructure improvement, among many others. However...
Clouds are fuzzy
They blur the image or, worse, obstruct it entirely. Clouds are a major issue with satellite imagery. Machine learning has only recently been applied to this problem, and the literature is buzzing with excitement about its potential.
Hyperspectral is not only fancy
Instead of only 3 or 4 bands per image (RGB/RGBA, multispectral), hyperspectral imagery offers many wavelengths across the electromagnetic spectrum. So much for storage space... but very helpful for data analysis and image processing purposes.
What are GANs?
Generative Adversarial Networks are a class of unsupervised deep learning architectures that have proven extremely successful at a wide variety of challenging tasks such as image synthesis, completion, or colorization. The basic idea is to train one network - the generator - to generate images that look like X, while another network - the discriminator - is trained to discriminate between real images of X and fake ones produced by the first network. It is thus a competition between two neural networks which, after careful training, can yield surprisingly sharp and detailed 'fake' images.
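The competition can be made concrete with the two standard GAN loss functions. Below is an illustrative NumPy sketch (function names are mine, not from this project); `d_real` and `d_fake` stand for the discriminator's probability scores on real and generated images:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy the discriminator minimizes:
    it wants D(real) -> 1 and D(fake) -> 0."""
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def generator_loss(d_fake):
    """Non-saturating generator loss: the generator wants
    the discriminator to score its fakes as real, D(fake) -> 1."""
    return -np.log(d_fake).mean()

# A discriminator that separates real from fake well has low loss...
good_d = discriminator_loss(np.array([0.9]), np.array([0.1]))
# ...while one that is fooled by the generator has high loss.
fooled_d = discriminator_loss(np.array([0.5]), np.array([0.9]))
```

Training alternates gradient steps on these two losses; the non-saturating generator loss shown here is the variant commonly used in practice because it gives stronger gradients early in training.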
In this project I focused on conditional GANs; you can find more examples like pix2pix online.
What can GANs do for us?
The basic idea is: since clouds become more transparent towards IR wavelengths, why not use this extra information and have GANs generate the missing patches from below the clouds? I could only find one very recent (Feb. 2018) paper tackling this idea: Enomoto et al. (referenced below) showed that conditional GANs can be useful when combined with 4-band (RGB + NIR) images. It is only natural to think that hyperspectral data should yield even better results.
Ideally we would like to have something that can
- detect clouds reliably
- replace them with relevant pixels
- run fast enough to be implemented up there in space!
But deep learning is a data ogre!
Yes, deep learning usually requires huge datasets. We have 30 images from the Satellogic hyperspectral dataset published for the datathon. Each image has 32 bands, and each band can be divided into 300~800 small images (256x256) depending on its size and orientation. That means we could reach a dataset of ~15k hyperspectral images. The pix2pix GAN reportedly works with as few as several hundred images in some cases, so dataset size should not be a major issue.
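The patch-count estimate above is simple arithmetic (numbers taken from the text):

```python
n_images = 30                   # hyperspectral rasters from the datathon
patches_per_image = (300, 800)  # 256x256 patches per image, depending on size/orientation

low, high = (n_images * p for p in patches_per_image)
print(low, high)  # 9000 24000 -> roughly 15k in the middle of the range
```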
What it currently does
"The two most powerful warriors are patience and time" Tolstoy
No, despite tremendous efforts, it does not yet run, so we cannot tell whether it does anything. Some pieces still need to be put together. :-(
Nonetheless, what works can be summarized as everything up to the input of the GAN:
- Generation of 256x256 patches from Satellogic hyperspectral data
- Cloud layer generation
- Classification between cloudy/clear patches using Convolutional Neural Networks (CNNs)
How I built it
With a bunch of Python scripts. My strategy was the following:
- Download rasters from Telluric Explorer.
- Tile them: since the rasters are rotated (relative to the Mercator grid) and do not fill the whole image, slice each one along a grid and extract as many 256x256 square patches as possible.
- Take a subset of 800 images and label them (cloudy/clear) using Labelbox for classification purposes.
- Train a simple classification network based on VGG to discriminate between images with and without clouds.
- Use this network to extract all non-cloudy images from the dataset. The cloudy images will form the testing dataset (to test the GAN's performance).
- Generate a cloud and its shadow using Perlin noise and some image processing.
- Duplicate the dataset of non-cloudy images and add a cloud layer to the duplicate. We need the clear image as ground truth for the discriminator; the cloudy image will be the input of the generator.
- Train the GAN.
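The tiling step can be sketched as follows - a minimal NumPy version of the idea (in practice the rasters would be read with a GIS library such as rasterio; the nodata value of 0 is an assumption):

```python
import numpy as np

def tile_raster(band, tile=256, nodata=0):
    """Slice a single band along a regular grid and keep only the
    tile x tile patches fully inside the (rotated) footprint, i.e.
    patches containing no nodata pixels."""
    patches = []
    h, w = band.shape
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            patch = band[i:i + tile, j:j + tile]
            if (patch != nodata).all():
                patches.append(patch)
    return patches

# A 512x512 band whose left half is nodata yields 2 usable tiles out of 4.
band = np.ones((512, 512))
band[:, :256] = 0
tiles = tile_raster(band)
```

A real implementation would also handle partially valid tiles (e.g. keep patches above some valid-pixel fraction) to squeeze more data out of each raster.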
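The cloud-layer step can be sketched with fractal value noise, a cheap stand-in for true Perlin noise (the function names, octave count, and threshold here are my own illustrative choices, not the project's actual parameters):

```python
import numpy as np

def _upsample(grid, size):
    """Bilinearly upsample a coarse (n, n) grid to (size, size)."""
    n = grid.shape[0]
    xs = np.linspace(0, n - 1, size)
    x0 = np.minimum(xs.astype(int), n - 2)
    t = xs - x0
    # interpolate along rows, then along columns
    rows = grid[x0] * (1 - t)[:, None] + grid[x0 + 1] * t[:, None]
    return rows[:, x0] * (1 - t) + rows[:, x0 + 1] * t

def cloud_noise(size=256, octaves=5, seed=0):
    """Sum upsampled random grids at doubling frequencies with halving
    amplitudes (fractal value noise); result is normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    noise = np.zeros((size, size))
    total = 0.0
    for o in range(octaves):
        amp = 0.5 ** o
        coarse = rng.random((2 ** (o + 1) + 1,) * 2)
        noise += amp * _upsample(coarse, size)
        total += amp
    return noise / total

def add_cloud(clear, noise, threshold=0.55, softness=4.0):
    """Alpha-blend a white cloud over a clear patch; pixels where the
    noise exceeds the threshold become fully opaque."""
    alpha = np.clip((noise - threshold) * softness, 0.0, 1.0)
    return (1 - alpha) * clear + alpha * 1.0

noise = cloud_noise()
cloudy = add_cloud(np.zeros((256, 256)), noise)
```

Lowering `threshold` makes the scene more overcast; the shadow layer would be generated the same way, shifted and blended darkly instead of white.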
Challenges I ran into
- How to work on a dataset of images that are ~12 GB each when your computer has less than 5 GB of free space left.
- How to train a deep neural network without a GPU? Or the struggle to get multiple GPU platforms to launch, install dependencies, and run the code.
Accomplishments that I'm proud of
- Labeling 800 images by hand to train the classification network
- I ended up downloading, processing, classifying and implementing the GAN on four different machines (with all the data uploading/downloading that this implies), which demanded serious organization and data management.
- Figuring out and setting up the whole deep learning pipeline for this problem, from the dataset creation to the end training, in less than 24h.
What I learned
- Had a wonderful time learning what GANs are, all the magical tasks they can accomplish, how they work, and how to implement them.
- The unexpected power and importance of satellite data analysis for so many areas, not only Earth science.
- "Deep learning is all about data." 80% of the time will be spent on the dataset creation, processing, etc.
What's next for Kumo-san
- Finding teammates? Never too late.
- Patching the pieces together and finally running it!
- Comparing its performance on multispectral and simple RGB images to assess the advantage of having hyperspectral data.
- Comparing its performance with other cloud removal algorithms that do not rely on machine learning.
K. Enomoto et al., "Filmy Cloud Removal on Satellite Imagery with Multispectral Conditional Generative Adversarial Nets" (arXiv:1710.04835).
'Kumo' is the Japanese word for 'cloud'.