Abstract
Cloud occlusion is common in optical remote sensing images. It can reduce or even completely obscure ground cover information, which limits ground observation, change detection, and land cover classification, so cloud removal is an important problem that urgently needs to be solved. Thin and thick clouds usually coexist in optical remote sensing images, and cloud removal algorithms for single-frame images are only suitable for thin cloud occlusion. Removing clouds using multi-temporal remote sensing images of the same area acquired at different times has therefore become a major research direction. This study aims to fully exploit cloud-free observations of the same location from other time periods to restore the ground areas occluded by clouds. To this end, a two-stage cloud removal algorithm for multi-temporal remote sensing images based on U-Net and a spatiotemporal generative network (STGAN) is proposed. The first stage is cloud segmentation: a U-Net model directly extracts cloud masks and removes thin clouds. The second stage is image restoration: the seven frames of ground imagery obtained after thin cloud removal are fed into the STGAN model, which outputs a single, detail-rich, cloud-free ground image. The STGAN generator adopts an improved multi-input U-Net that extracts key features from the seven co-located frames at once to recover the irregular regions covered by thick clouds. The thin cloud processing in the first stage helps the subsequent STGAN capture more ground information. The proposed algorithm thus overcomes U-Net's inability to handle thick cloud occlusion, captures more ground information than applying STGAN alone, and achieves a better cloud removal effect.
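The two-stage flow described above can be sketched in simplified form. This is not the authors' implementation: the brightness threshold stands in for the U-Net cloud-segmentation model, and a per-pixel temporal median over unoccluded observations stands in for the STGAN generator; all function names and parameters here are hypothetical illustrations of the pipeline structure only.

```python
import numpy as np

def segment_clouds(frame, threshold=0.8):
    """Stage 1 stand-in: a simple brightness threshold as a placeholder
    for the learned U-Net cloud-segmentation model."""
    return frame.mean(axis=-1) > threshold  # True where cloud

def fuse_frames(frames, masks):
    """Stage 2 stand-in: per-pixel median over the cloud-free observations
    across the seven temporal frames, as a placeholder for the STGAN
    generator's learned spatiotemporal fusion."""
    stack = np.where(masks[..., None], np.nan, frames)  # drop cloudy pixels
    fused = np.nanmedian(stack, axis=0)                 # fuse over time
    # fall back to the plain temporal median where every frame is occluded
    all_occluded = np.isnan(fused)
    fused[all_occluded] = np.median(frames, axis=0)[all_occluded]
    return fused

# Seven 8x8 RGB frames of the same area at different times
rng = np.random.default_rng(0)
frames = rng.uniform(0.0, 0.6, size=(7, 8, 8, 3))
frames[0, :4, :4] = 1.0  # a thick cloud patch in the first frame
masks = np.stack([segment_clouds(f) for f in frames])
result = fuse_frames(frames, masks)
assert result.shape == (8, 8, 3)
assert result.max() < 1.0  # occluded pixels filled from cloud-free frames
```

The key point the sketch illustrates is that pixels hidden in one frame are recovered from the other six co-located frames, which is why the paper's seven-frame multi-input design captures more ground information than any single frame.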
Experimental results on our dataset show that using only the first-stage U-Net model or only the second-stage STGAN model for cloud removal is inferior to the proposed two-stage algorithm in both subjective visual quality and objective metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), which verifies the effectiveness of the proposed algorithm. Compared with traditional cloud removal methods such as RPCA and TRPCA and with deep learning methods such as Pix2Pix, the proposed algorithm achieves a significant improvement, which verifies its advancement. The algorithm fully exploits the spatiotemporal information of multi-temporal cloudy satellite images of the same area at different times and delivers good cloud removal performance, facilitating the further utilization of optical remote sensing images. Nevertheless, the algorithm has certain limitations: its cloud removal effect is not ideal for image sequences in which thick clouds cover a large area. In follow-up research, the spatiotemporal features of sequence frames will be explored further to better reconstruct large regions covered by thick clouds.
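The PSNR metric used in the evaluation above is computed directly between a cloud-free reference image and the restored image. A minimal NumPy sketch, assuming images normalized to [0, 1] (the `psnr` helper is illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a cloud-free reference
    image and a restored image; higher means a closer reconstruction."""
    mse = np.mean((reference.astype(np.float64)
                   - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1        # uniform error of 0.1 per pixel
print(round(psnr(ref, noisy), 1))  # → 20.0
```

SSIM additionally compares local luminance, contrast, and structure rather than raw pixel error, which is why the paper reports both metrics.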
| Translated title of the contribution | Cloud removal in multitemporal remote sensing imagery combining U-Net and spatiotemporal generative networks |
| --- | --- |
| Original language | Chinese (Simplified) |
| Pages (from-to) | 2089-2100 |
| Number of pages | 12 |
| Journal | National Remote Sensing Bulletin |
| Volume | 28 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2024 Science Press. All rights reserved.
Keywords
- cloud removal
- image restoration
- multi-temporal
- remote sensing images
- STGAN
- U-Net