CAR1275
August 23, 2019 at 9:00am
Yansong Liu
Ph.D. Thesis Defense
Abstract

Earth observation through remote sensing images enables the accurate characterization of materials and objects on the Earth's surface from spaceborne and airborne platforms. With the increasing availability of multiple, heterogeneous imaging sources for the same geographical region (multispectral, hyperspectral, LiDAR, and multitemporal), a more complete description of a given scene can now be acquired. The combination, or fusion, of multi-sensor data opens great opportunities for improving the classification of individual objects or natural terrains in complex environments such as urban areas. As a result, multi-sensor semantic segmentation stands out as an in-demand technique for fully leveraging complementary imaging modalities.

 

In this dissertation, we focus on developing techniques for multi-sensor image fusion of very-high-resolution (VHR) aerial optical imagery and light detection and ranging (LiDAR) data in the context of dense semantic segmentation/classification. The fusion of these two modalities (optical imagery and LiDAR data) can usually be performed at the feature level or at the decision level. Our research first investigated feature-level fusion, which combines hand-crafted features derived from both the optical imagery and the LiDAR data. We then fed the combined features into various classifiers, and the results show clear advantages of using fused features. The pixel-wise classification results are then refined by higher-order conditional random fields (CRFs) to eliminate noisy labels and enforce label consistency and coherence within and between segments. As the recent use of pre-trained deep convolutional neural networks (DCNNs) for remote sensing image classification has been extremely successful, we propose a decision-level fusion approach that trains one DCNN on the optical imagery and one linear classifier on the LiDAR data. The two probabilistic outputs are then combined in various CRF frameworks (e.g., piecewise CRFs, higher-order CRFs, and fully connected CRFs) to generate the final classification results. We found in extensive experiments that the proposed decision-level fusion compares favorably with, or outperforms, state-of-the-art baseline methods that rely on feature-level fusion.
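The decision-level fusion described above can be illustrated with a minimal sketch. The function below is hypothetical and not the dissertation's actual implementation: it combines two per-pixel class-probability maps (e.g., softmax output of a DCNN on optical imagery and of a linear classifier on LiDAR-derived features) by log-linear pooling, a simple stand-in for the unary potentials that would feed one of the CRF frameworks mentioned above. The weight `alpha` is an assumed parameter balancing the two sources.

```python
import numpy as np

def fuse_decisions(p_optical, p_lidar, alpha=0.7, eps=1e-12):
    """Late (decision-level) fusion of two per-pixel probability maps.

    p_optical, p_lidar: arrays of shape (H, W, C) whose last axis sums
    to 1 at each pixel. alpha weights the optical source. Log-linear
    (geometric-mean) pooling is used here as a simple illustration of
    combining probabilistic outputs before a CRF refinement step.
    """
    log_fused = (alpha * np.log(p_optical + eps)
                 + (1.0 - alpha) * np.log(p_lidar + eps))
    fused = np.exp(log_fused)
    # Renormalize so each pixel again holds a proper distribution.
    fused /= fused.sum(axis=-1, keepdims=True)
    labels = fused.argmax(axis=-1)  # per-pixel class decision
    return fused, labels
```

In a full pipeline, `fused` would serve as the unary term of a CRF (piecewise, higher-order, or fully connected) rather than being thresholded directly, so that label consistency can still be enforced across segments.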