M. Rezaei; H. Nezamabadi-pour
Abstract
The present study aims to overcome some defects of the K-nearest neighbor (K-NN) rule. Two important data preprocessing methods for improving the K-NN rule are prototype selection (PS) and prototype generation (PG). The benefits of these techniques are usually investigated separately. In this paper, two hybrid schemes based on the gravitational search algorithm (GSA) are proposed in which the PG and PS problems are considered together. To evaluate the classification performance of these hybrid models, we perform a comparative experimental study on several benchmark datasets, comparing our proposals with approaches previously reported in the literature. The experimental results demonstrate that our hybrid approaches outperform most of the competing methods.
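As an illustration only (not the authors' implementation), the sketch below shows the wrapper-style fitness that metaheuristic prototype selection of this kind typically optimizes: a candidate solution is a binary mask over the training set, scored by 1-NN accuracy on held-out data plus a data-reduction term. A plain random search stands in for GSA here, and the iris data, the 0.9/0.1 weighting, and the 20% keep rate are assumptions chosen only to make the example runnable.

```python
# Sketch of a wrapper fitness for prototype selection (random search stands in for GSA).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def prototype_fitness(mask, alpha=0.9):
    """Score a binary prototype mask: weighted 1-NN accuracy plus reduction rate."""
    if mask.sum() == 0:                       # an empty prototype set is invalid
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train[mask], y_train[mask])
    acc = knn.score(X_val, y_val)             # accuracy of 1-NN over the kept prototypes
    reduction = 1.0 - mask.mean()             # fraction of training points discarded
    return alpha * acc + (1 - alpha) * reduction

# Random search over binary masks; GSA (or any metaheuristic) would search this same space.
rng = np.random.default_rng(0)
best_mask, best_fit = None, -1.0
for _ in range(200):
    mask = rng.random(len(X_train)) < 0.2     # candidate: keep roughly 20% of the points
    fit = prototype_fitness(mask)
    if fit > best_fit:
        best_mask, best_fit = mask, fit
print(f"best fitness: {best_fit:.3f}, prototypes kept: {best_mask.sum()}")
```

Prototype generation would additionally let the optimizer move the kept points in feature space rather than only choosing among the original training instances.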
H.5. Image Processing and Computer Vision
M. Saeedzarandi; H. Nezamabadi-pour; S. Saryazdi
Abstract
Removing noise from images is a challenging problem in digital image processing. This paper presents an image denoising method based on a maximum a posteriori (MAP) estimator, implemented in the wavelet domain because of its energy compaction property. The performance of the MAP estimator depends on the model proposed for the noise-free wavelet coefficients; thus, in wavelet-based image denoising, selecting a proper model for the wavelet coefficients is very important. In this paper, we model the wavelet coefficients in each sub-band with heavy-tailed distributions drawn from the scale mixture of normals family. The parameters of the distributions are estimated adaptively to model the correlation between coefficient amplitudes, so the intra-scale dependency of the wavelet coefficients is also taken into account. The denoising results confirm the effectiveness of the proposed method.
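To make the general idea of wavelet-domain MAP shrinkage concrete, the sketch below uses BayesShrink-style soft thresholding, which is the MAP rule under a Laplacian prior with Gaussian noise; the paper's scale-mixture-of-normals model would yield a different, adaptively parameterized shrinkage rule. The wavelet choice (db8), the decomposition depth, and the MAD-based noise estimate are assumptions for this example, and PyWavelets supplies the transform.

```python
# Sketch of wavelet-domain MAP shrinkage via BayesShrink-style soft thresholding.
import numpy as np
import pywt

def denoise_bayesshrink(noisy, wavelet="db8", levels=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    # Robust noise-std estimate from the finest diagonal subband (median absolute deviation).
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    new_coeffs = [coeffs[0]]                        # keep the approximation band untouched
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            # Per-subband signal std; clipped at a small value when noise dominates.
            sigma_x = np.sqrt(max(band.var() - sigma_n**2, 1e-12))
            t = sigma_n**2 / sigma_x                # BayesShrink threshold for this subband
            shrunk.append(pywt.threshold(band, t, mode="soft"))
        new_coeffs.append(tuple(shrunk))
    return pywt.waverec2(new_coeffs, wavelet)

# Usage: denoised = denoise_bayesshrink(noisy_image)  # noisy_image: 2-D float array
```

Estimating the shrinkage parameters per subband is what gives such schemes their adaptivity; the paper goes further by adapting them locally to capture intra-scale dependency.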
H.6.3.2. Feature evaluation and selection
Sh. Kashef; H. Nezamabadi-pour
Abstract
Multi-label classification has gained significant attention in recent years due to the increasing number of modern applications that involve multi-label data. Despite its short history, different approaches have been presented to solve the multi-label classification task. LIFT is a multi-label classifier that takes a new approach to multi-label learning by leveraging label-specific features: each class label is assumed to have its own characteristics and to be determined by the features that are most discriminative for that label. LIFT employs clustering to discover the properties of the data. More precisely, for each label, LIFT divides the training instances into positive and negative groups, consisting of the training examples with and without that label, respectively. It then selects representative centroids from the positive and negative instances of each label by k-means clustering and replaces the original features of a sample with its distances to these representatives. By constructing these new features, the dimensionality of the feature space is reduced significantly. However, the original features are still needed to construct the new ones, so in practice the complexity of the multi-label classification process does not diminish. In this paper, we modify LIFT to reduce the computational burden of the classifier while improving, or at least preserving, its performance. The experimental results show that the proposed algorithm achieves both goals simultaneously.
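The sketch below illustrates the LIFT-style construction of label-specific features described above, using scikit-learn's KMeans. The cluster ratio of 0.1 and the fallback for labels lacking positive (or negative) instances are assumptions, and the code follows the general LIFT recipe rather than the modification proposed in this paper.

```python
# Sketch of LIFT-style label-specific feature construction.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def lift_features(X, Y, ratio=0.1, random_state=0):
    """Map X (n x d) to one distance-based feature space per label in Y (n x q, binary)."""
    spaces = []
    for j in range(Y.shape[1]):
        pos, neg = X[Y[:, j] == 1], X[Y[:, j] == 0]
        if len(pos) == 0 or len(neg) == 0:
            spaces.append(X.copy())          # degenerate label: fall back to original features
            continue
        # Number of clusters per side, proportional to the smaller of the two groups.
        m = max(1, int(np.ceil(ratio * min(len(pos), len(neg)))))
        cp = KMeans(n_clusters=m, n_init=10, random_state=random_state).fit(pos).cluster_centers_
        cn = KMeans(n_clusters=m, n_init=10, random_state=random_state).fit(neg).cluster_centers_
        centers = np.vstack([cp, cn])
        # Label-specific features: distances from every instance to the 2m centroids.
        spaces.append(cdist(X, centers))
    return spaces                            # one (n x 2m) matrix per label, each fed to a binary classifier
```

Because each label's new representation has only 2m dimensions, the per-label classifiers are cheap; the cost that remains, and that the paper targets, is the need for the original features when computing these distances.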