M. Kakooei; Y. Baleghi
Abstract
Shadow detection provides worthwhile information for remote sensing applications, e.g. building height estimation. Shadow areas form on the side of tall objects opposite to the incoming sunlight; thus, the solar illumination angle is required to locate probable shadow areas. In recent years, Very High Resolution (VHR) imagery has provided more detailed data about objects, including shadow areas. In this regard, this paper proposes a reliable feature, Shadow Low Gradient Direction (SLGD), to automatically determine the shadow and solar illumination direction in VHR data. The proposed feature is based on an inherent spatial property of fine-resolution shadow areas; therefore, it can facilitate shadow-based operations, especially when solar illumination information is not available in the remote sensing metadata. Shadow intensity is assumed to depend on two factors, surface material and sunlight illumination, which are analyzed via directional gradient values in low-gradient-magnitude areas. The feature captures the sunlight illumination while ignoring material differences. The method is fully implemented on the Google Earth Engine cloud computing platform and is evaluated on VHR data with 0.3 m resolution. Finally, SLGD performance is evaluated in determining shadow direction and in refining shadow maps.
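The core idea, analyzing directional gradients only where the gradient magnitude is low, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the percentile threshold, bin count, and use of Sobel filters are illustrative assumptions, and the actual SLGD feature and its Google Earth Engine implementation may differ.

```python
import numpy as np
from scipy import ndimage

def dominant_shadow_direction(image, low_grad_percentile=30, n_bins=36):
    """Estimate a dominant illumination direction from directional
    gradients inside low-gradient-magnitude (candidate shadow) areas.

    `low_grad_percentile` and `n_bins` are illustrative parameters,
    not values taken from the paper.
    """
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)   # horizontal derivative
    gy = ndimage.sobel(img, axis=0)   # vertical derivative
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)    # radians in [-pi, pi]

    # Keep only pixels whose gradient magnitude is low but nonzero:
    # inside shadows the surface texture is weak, so the remaining
    # gradient is dominated by the illumination, not the material.
    nonzero = magnitude > 0
    thresh = np.percentile(magnitude[nonzero], low_grad_percentile)
    mask = nonzero & (magnitude <= thresh)

    # Histogram of gradient directions over the low-gradient region;
    # the peak bin approximates the shadow/illumination direction.
    hist, edges = np.histogram(direction[mask], bins=n_bins,
                               range=(-np.pi, np.pi))
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])  # bin-center angle
```

A smooth intensity ramp, for example, yields a dominant direction aligned with the ramp, which is the behavior the feature relies on when the illumination angle is absent from the metadata.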
M. Kakooei; Y. Baleghi
Abstract
Semantic labeling is an active field in remote sensing applications. Although handling highly detailed objects in Very High Resolution (VHR) optical imagery and VHR Digital Surface Models (DSMs) is a challenging task, it can improve the accuracy of semantic labeling methods. In this paper, a semantic labeling method is proposed based on the fusion of optical and normalized DSM data. Spectral and spatial features are fused into a Heterogeneous Feature Map to train the classifier. The evaluation dataset classes are impervious surface, building, low vegetation, tree, car, and background. The proposed method is implemented on Google Earth Engine and consists of several levels. First, Principal Component Analysis is applied to vegetation indices to find the most separable color space between vegetation and non-vegetation areas. The Gray Level Co-occurrence Matrix is computed to provide texture information as spatial features. Several Random Forests are trained on an automatically selected training dataset. Several spatial operators follow the classification to refine the result. A Leaf-Less-Tree feature is used to solve the underestimation problem in tree detection. The area, major axis, and minor axis of connected components are used to refine building and car detection. Evaluation shows significant improvement in tree, building, and car accuracy, with appropriate overall accuracy and Kappa coefficient.
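The fusion pipeline described above can be sketched roughly as follows. This is a minimal hypothetical sketch, not the authors' Google Earth Engine implementation: the choice of vegetation indices, the toy labels, and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def vegetation_indices(red, nir, green):
    """A few common vegetation indices (illustrative choices)."""
    eps = 1e-6
    ndvi = (nir - red) / (nir + red + eps)
    gndvi = (nir - green) / (nir + green + eps)
    dvi = nir - red
    return np.stack([ndvi, gndvi, dvi], axis=-1)

def build_feature_map(red, nir, green, ndsm):
    """Fuse spectral bands, a PCA-reduced vegetation index, and the
    normalized DSM into a per-pixel heterogeneous feature map."""
    vi = vegetation_indices(red, nir, green)
    flat = vi.reshape(-1, vi.shape[-1])
    # PCA projects the vegetation indices onto their most separable axis.
    pc1 = PCA(n_components=1).fit_transform(flat).reshape(red.shape)
    return np.stack([red, nir, green, pc1, ndsm], axis=-1)

# Usage on random data with toy binary labels standing in for the
# six real classes (impervious surface, building, etc.).
h, w = 32, 32
rng = np.random.default_rng(0)
red, nir, green, ndsm = (rng.random((h, w)) for _ in range(4))
features = build_feature_map(red, nir, green, ndsm)
X = features.reshape(-1, features.shape[-1])
y = (ndsm.ravel() > 0.5).astype(int)  # toy "tall object" labels
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

In the paper, GLCM texture channels would also be appended to the feature map, and the result would be refined with spatial operators and connected-component shape criteria (area, major/minor axis); those stages are omitted here for brevity.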