H.5. Image Processing and Computer Vision
Fateme Namazi; Mehdi Ezoji; Ebadat Ghanbari Parmehr
Abstract
Paddy fields in the north of Iran are highly fragmented, which makes it difficult to map them accurately with remote sensing techniques. Cloudy weather often degrades image quality or renders images unusable, further complicating monitoring efforts. This paper presents a novel phenology-based paddy rice mapping method that addresses these challenges. The method uses time series data from the Sentinel-1 and Sentinel-2 satellites to derive a rice phenology curve, constructed from the cross ratio (CR) index of Sentinel-1 and the normalized difference vegetation index (NDVI) and land surface water index (LSWI) of Sentinel-2. Unlike existing methods, which often rely on single-point index values at specific times, this approach examines the entire time series behavior of each pixel, a strategy that significantly mitigates the impact of cloud cover on classification accuracy. The time series of each pixel is correlated with the rice phenology curve; the maximum correlation, typically reached over a roughly 50-day window in the middle of the cultivation season, identifies potential rice fields. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is then applied to the maximum correlation values of all three indices to classify pixels as rice paddy or other land cover. The implementation results validate the accuracy of the method, with an overall accuracy of 99%. All processing was carried out on the Google Earth Engine (GEE) platform.
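The following is a minimal conceptual sketch of the two core steps described in this abstract, the per-pixel correlation with a reference phenology curve and the RBF-SVM classification on the three maximum-correlation features. It is not the authors' GEE implementation; array names, shapes, and the sliding-window logic are illustrative assumptions only.

```python
# Conceptual sketch (not the authors' GEE code): correlate per-pixel index time
# series with a reference rice phenology curve and classify with an RBF-SVM.
import numpy as np
from sklearn.svm import SVC

def max_sliding_correlation(pixel_series, reference_curve):
    """Maximum Pearson correlation between a pixel's time series and the
    reference phenology curve over all sliding windows (window length ~50 days)."""
    w = len(reference_curve)
    best = -1.0
    for start in range(len(pixel_series) - w + 1):
        window = pixel_series[start:start + w]
        if np.std(window) == 0:          # a flat series cannot be correlated
            continue
        r = np.corrcoef(window, reference_curve)[0, 1]
        best = max(best, r)
    return best

# Assumed inputs: per-pixel time series for the three indices (CR, NDVI, LSWI),
# each of shape (n_pixels, n_dates), and reference curves of the window length.
def correlation_features(cr, ndvi, lswi, ref_cr, ref_ndvi, ref_lswi):
    return np.column_stack([
        [max_sliding_correlation(s, ref_cr) for s in cr],
        [max_sliding_correlation(s, ref_ndvi) for s in ndvi],
        [max_sliding_correlation(s, ref_lswi) for s in lswi],
    ])

# RBF-kernel SVM on the three maximum-correlation features, as in the paper.
# X_train / y_train (1 = rice paddy, 0 = other) are assumed to be available.
# clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
# rice_map = clf.predict(correlation_features(cr, ndvi, lswi, ref_cr, ref_ndvi, ref_lswi))
```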
M. Kakooei; Y. Baleghi
Abstract
Shadow detection provides worthwhile information for remote sensing applications, e.g. building height estimation. Shadow areas form on the side of tall objects opposite the sunlight direction, so the solar illumination angle is required to find probable shadow areas. In recent years, Very High Resolution (VHR) imagery has provided more detailed data about objects, including shadow areas. In this regard, this paper proposes a reliable feature, Shadow Low Gradient Direction (SLGD), to automatically determine the shadow and solar illumination direction in VHR data. The proposed feature is based on an inherent spatial property of fine-resolution shadow areas and can therefore facilitate shadow-based operations, especially when solar illumination information is not available in the remote sensing metadata. Shadow intensity is assumed to depend on two factors, the surface material and the sunlight illumination, which are analyzed through directional gradient values in low-gradient-magnitude areas. The feature accounts for the sunlight illumination while ignoring material differences. The method is fully implemented on the Google Earth Engine cloud computing platform and evaluated on VHR data with 0.3 m resolution. Finally, SLGD performance is evaluated in determining shadow direction and in refining shadow maps.
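Below is an illustrative sketch of the general idea behind a directional-gradient analysis in low-gradient-magnitude areas, not the authors' exact SLGD formulation: the weak gradients that remain inside smooth (possibly shadowed) regions are histogrammed to find a dominant direction. The percentile threshold and bin count are assumptions.

```python
# Illustrative sketch only (not the published SLGD definition): estimate a
# dominant gradient direction restricted to low-gradient-magnitude pixels.
import numpy as np

def estimate_shadow_direction(gray, mag_percentile=20):
    """Dominant gradient direction (degrees) inside low-gradient areas of a
    grayscale VHR image given as a 2-D float array."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)                      # radians in [-pi, pi]

    # Keep only low-gradient-magnitude pixels (smooth, possibly shadowed areas).
    low = magnitude < np.percentile(magnitude, mag_percentile)

    # Histogram of gradient directions within the low-gradient mask.
    hist, edges = np.histogram(direction[low], bins=36, range=(-np.pi, np.pi))
    peak = np.argmax(hist)
    dominant = 0.5 * (edges[peak] + edges[peak + 1])    # bin-center direction
    return np.degrees(dominant)

# Usage (assumed input): angle_deg = estimate_shadow_direction(vhr_gray_band)
```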
M. Kakooei; Y. Baleghi
Abstract
Semantic labeling is an active field in remote sensing applications. Although handling highly detailed objects in Very High Resolution (VHR) optical images and VHR Digital Surface Models (DSM) is a challenging task, it can improve the accuracy of semantic labeling methods. In this paper, a semantic labeling method is proposed based on the fusion of optical and normalized DSM data. Spectral and spatial features are fused into a Heterogeneous Feature Map to train the classifier. The evaluation dataset classes are impervious surface, building, low vegetation, tree, car, and background. The proposed method is implemented on Google Earth Engine and consists of several stages. First, Principal Component Analysis is applied to vegetation indices to find the color space that best separates vegetation from non-vegetation areas. The Gray Level Co-occurrence Matrix is computed to provide texture information as spatial features. Several Random Forests are trained on an automatically selected training dataset, and the classification is followed by several spatial operators that refine the result. A Leaf-Less-Tree feature is used to solve the underestimation problem in tree detection, while the area and the major and minor axes of connected components are used to refine building and car detection. Evaluation shows significant improvement in tree, building, and car accuracy, and the overall accuracy and Kappa coefficient are satisfactory.
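A rough sketch of the kind of pipeline this abstract describes, GLCM texture as a spatial feature, a Random Forest classifier, and a connected-component refinement based on area and axis lengths, is given below. It is not the authors' GEE implementation; the quantization levels, forest size, and shape thresholds are assumptions for illustration.

```python
# Sketch only (not the published pipeline): GLCM texture feature, Random Forest
# classification, and shape-based refinement of a class mask.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def glcm_contrast(patch, levels=32):
    """Single GLCM texture feature (contrast) for an 8-bit grayscale patch."""
    q = (patch // (256 // levels)).astype(np.uint8)     # quantize to `levels` gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

# Spectral + spatial features stacked per pixel; X_train / y_train are assumed
# to come from the automatically selected training samples.
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# labels_map = clf.predict(feature_stack).reshape(height, width)

def refine_by_shape(binary_mask, min_area, max_major_axis):
    """Drop connected components whose area or major-axis length is implausible
    for the target class (e.g. cars or buildings)."""
    refined = np.zeros_like(binary_mask, dtype=bool)
    for region in regionprops(label(binary_mask)):
        if region.area >= min_area and region.major_axis_length <= max_major_axis:
            refined[region.coords[:, 0], region.coords[:, 1]] = True
    return refined
```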