Aug 9th, 8:15 AM

Evaluating Rotation-Equivariant Deep Learning Models for On-Orbit Cloud Segmentation

Session

Frank J. Redd Student Competition

Location

Utah State University, Logan, UT

Abstract

Cloud detection in satellite imagery is key for autonomously capturing and downlinking cloud-free images of a target region, as well as for studying cloud-climate interactions and calibrating microwave radiometers. We propose a C8-equivariant dense U-Net, a rotation-equivariant deep learning model, trained on visible-spectrum (VIS), long-wave infrared (LWIR), and short-wave infrared (SWIR) imagery for on-orbit cloud detection. We train this model on the SPARCS dataset of Landsat 8 images and compare it to three related deep learning models, two rule-based algorithms, and results from the literature. Additionally, we compare a C8-equivariant dense U-Net trained on VIS, LWIR, and SWIR imagery to the same algorithm trained on only VIS and LWIR, on only VIS and SWIR, and on only VIS imagery. We find that augmenting VIS imagery with SWIR imagery is most useful for missions where false positives (non-cloud pixels misidentified as cloud) are extremely costly, and that augmenting with LWIR imagery is most useful for missions where false negatives (cloud pixels misidentified as non-cloud) are extremely costly. We also demonstrate that our C8-equivariant dense U-Net achieves over 97% accuracy (over 99.5% when evaluated with a 2-pixel buffer at cloud boundaries) on cloud segmentation on the SPARCS dataset, outperforming existing state-of-the-art algorithms as well as human operators, while remaining computationally lightweight enough to be usable on resource-constrained missions such as CubeSats.
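The sketch below illustrates what a single C8-equivariant convolution block looks like in practice. It assumes the PyTorch e2cnn library; the band count (five scalar bands standing in for VIS + LWIR + SWIR), layer width, and block structure are illustrative choices, not the architecture reported in the paper.

```python
# Minimal C8-equivariant convolution block, assuming the e2cnn library.
# Channel counts and band ordering are illustrative assumptions only.
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# C8: the cyclic group of 8 discrete rotations (multiples of 45 degrees).
r2_act = gspaces.Rot2dOnR2(N=8)

# Input: 5 scalar bands (e.g. VIS + LWIR + SWIR), each a trivial representation.
in_type = enn.FieldType(r2_act, 5 * [r2_act.trivial_repr])
# Hidden features transform under the regular representation of C8.
hid_type = enn.FieldType(r2_act, 16 * [r2_act.regular_repr])

block = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=3, padding=1),
    enn.InnerBatchNorm(hid_type),
    enn.ReLU(hid_type, inplace=True),
)

x = torch.randn(1, 5, 64, 64)              # a batch of 5-band image patches
y = block(enn.GeometricTensor(x, in_type))  # output is equivariant to 45-degree rotations
print(y.tensor.shape)                       # (1, 128, 64, 64): 16 fields x 8 orientations
```

Rotating the input patch by a multiple of 45 degrees and passing it through the block gives, up to interpolation effects at the patch border, the same result as rotating the block's output; a C8-equivariant dense U-Net stacks this property through its encoder and decoder.

The buffered accuracy quoted above can be read as ignoring pixels that lie within 2 pixels of a ground-truth cloud/clear boundary, so that only region interiors are scored. The sketch below implements that reading with NumPy and SciPy; the exact buffer definition used in the paper is an assumption here.

```python
# Buffered segmentation accuracy: ignore pixels within `buffer` pixels of a
# ground-truth cloud boundary. One plausible reading of the paper's 2-pixel
# buffer, not the authors' exact evaluation code.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def buffered_accuracy(pred: np.ndarray, truth: np.ndarray, buffer: int = 2) -> float:
    """pred, truth: boolean HxW cloud masks (True = cloud)."""
    structure = np.ones((3, 3), dtype=bool)
    # Boundary band: ground truth dilated by `buffer` minus ground truth eroded by `buffer`.
    band = binary_dilation(truth, structure, iterations=buffer) & \
           ~binary_erosion(truth, structure, iterations=buffer)
    keep = ~band
    return float((pred[keep] == truth[keep]).mean())

# A prediction that only errs along the cloud boundary scores 1.0 once buffered.
truth = np.zeros((64, 64), dtype=bool); truth[16:48, 16:48] = True
pred = np.zeros((64, 64), dtype=bool);  pred[15:49, 15:49] = True
plain = float((pred == truth).mean())
print(plain, buffered_accuracy(pred, truth, buffer=2))  # ~0.968 vs 1.0
```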

Slides 4.pptx (34595 kB)
