Original/Review Paper
H.6.3.2. Feature evaluation and selection
Maryam Imani; Hassan Ghassemian
Abstract
When the number of training samples is limited, feature reduction plays an important role in the classification of hyperspectral images. In this paper, we propose a supervised feature extraction method based on discriminant analysis (DA) which uses the first principal component (PC1) to weight the scatter matrices. The proposed method, called DA-PC1, copes with the small sample size problem and is not subject to the limitation of linear discriminant analysis (LDA) on the number of extracted features. In DA-PC1, the dominant structure of the distribution is preserved by PC1, and the class separability is increased by DA. The experimental results show that DA-PC1 performs well compared to several state-of-the-art feature extraction methods.
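A minimal sketch of the general idea, assuming a PC1-based weighting of the LDA scatter matrices before the usual generalized eigenproblem; the abstract does not give the exact weighting scheme, so the form used here (adding the PC1 projector to both scatter matrices) is an illustrative assumption only.

```python
import numpy as np
from scipy.linalg import eigh

def da_pc1_sketch(X, y, n_features):
    """Toy PC1-weighted discriminant analysis (illustrative only).

    X: (n_samples, n_bands) data, y: integer class labels.
    """
    # First principal component of the whole data set.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = Vt[0]                                # (n_bands,)
    W = np.outer(pc1, pc1)                     # assumed PC1 weighting term

    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xcls = X[y == c]
        mc = Xcls.mean(axis=0)
        d = Xcls - mc
        Sw += d.T @ d                          # within-class scatter
        diff = (mc - mean_all)[:, None]
        Sb += len(Xcls) * (diff @ diff.T)      # between-class scatter

    # Assumed weighting: blend the scatter matrices with the PC1 projector.
    Sw_w = Sw + W
    Sb_w = Sb + W
    # Generalized eigenproblem as in classical DA.
    vals, vecs = eigh(Sb_w, Sw_w + 1e-6 * np.eye(Sw.shape[0]))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_features]]         # projection matrix
```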
Other
Mohsen Zare-Baghbidi; Saeid Homayouni; Kamal Jamshidi; A. R. Naghsh-Nilchi
Abstract
Anomaly Detection (AD) has recently become an important application of hyperspectral image analysis. The goal of these algorithms is to find objects in the image scene which are anomalous in comparison to their surrounding background. One way to improve the performance and runtime of these algorithms is to use Dimensionality Reduction (DR) techniques. This paper evaluates the effect of three popular linear dimensionality reduction methods on the performance of three benchmark anomaly detection algorithms. Principal Component Analysis (PCA), the Fast Fourier Transform (FFT) and the Discrete Wavelet Transform (DWT) act as DR pre-processing steps for the AD algorithms. The assessed AD algorithms are Reed-Xiaoli (RX), the kernel-based version of RX (Kernel-RX) and the Dual Window-Based Eigen Separation Transform (DWEST). The AD methods have been applied to two hyperspectral datasets acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperspectral Mapper (HyMap) sensors. Experiments were evaluated using Receiver Operating Characteristic (ROC) curves, visual inspection and algorithm runtime. Experimental results show that the DR methods can significantly improve the detection performance of the RX method, while the detection performance of the Kernel-RX and DWEST methods does not change. Moreover, these DR methods significantly reduce the runtime of RX and DWEST, making them suitable for real-time applications.
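A minimal sketch of the evaluated pipeline, assuming PCA as the DR pre-processing step followed by a global RX detector (Mahalanobis distance to the background statistics); the number of retained components and the global background estimate are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def rx_after_pca(cube, n_components=10):
    """Global RX anomaly scores on a PCA-reduced hyperspectral cube.

    cube: (rows, cols, bands) array.  Returns (rows, cols) scores.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)

    # DR pre-processing step: keep the leading principal components.
    Z = PCA(n_components=n_components).fit_transform(X)

    # RX: Mahalanobis distance of each pixel to the background,
    # here estimated from the whole image (the simplest RX variant).
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-8 * np.eye(Z.shape[1]))
    d = Z - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(rows, cols)
```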
Original/Review Paper
H.5. Image Processing and Computer Vision
R. Davarzani; S. Mozaffari; Kh. Yaghmaie
Abstract
Feature extraction is a main step in all perceptual image hashing schemes, in which robust features lead to better perceptual robustness. Simplicity, discriminative power, computational efficiency and robustness to illumination changes are distinguishing properties of Local Binary Pattern (LBP) features. In this paper, we investigate the use of local binary patterns for perceptual image hashing. For feature extraction, we propose to use both the sign and the magnitude information of local differences, so the algorithm utilizes a combination of gradient-based and LBP-based descriptors. To meet security needs, two secret keys are incorporated in the feature extraction and hash generation steps. The performance of the proposed hashing method is evaluated in an important application of perceptual image hashing: image authentication. Experiments show that the present method has acceptable robustness against content-preserving manipulations. Moreover, the proposed method is able to localize tampered areas, which is not possible in all hashing schemes.
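A hedged sketch of sign/magnitude coding of local differences, in the spirit of completed-LBP descriptors; this is not the paper's exact feature extractor, and the magnitude threshold (mean absolute local difference) is an illustrative assumption.

```python
import numpy as np

def sign_magnitude_lbp(img):
    """Sign and magnitude codes of 8-neighbour local differences.

    img: 2-D grayscale array.  Returns two (H-2, W-2) code maps.
    """
    img = img.astype(np.float64)
    H, W = img.shape
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    # Local differences between each neighbour and the centre pixel.
    diffs = [img[1+dy:H-1+dy, 1+dx:W-1+dx] - center for dy, dx in offsets]
    # Assumed magnitude threshold: the mean absolute local difference.
    thr = np.mean([np.abs(d).mean() for d in diffs])
    sign_code = np.zeros_like(center, dtype=np.uint8)
    mag_code = np.zeros_like(center, dtype=np.uint8)
    for bit, d in enumerate(diffs):
        sign_code |= ((d >= 0).astype(np.uint8) << bit)           # sign info
        mag_code |= ((np.abs(d) >= thr).astype(np.uint8) << bit)  # magnitude info
    return sign_code, mag_code
```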
Original/Review Paper
Hossein Shahamat; Ali A. Pouyan
Abstract
In this paper, we propose a new method for classifying subjects into schizophrenia and control groups using functional magnetic resonance imaging (fMRI) data. In the preprocessing step, the number of fMRI time points is reduced using principal component analysis (PCA). Then, independent component analysis (ICA) is used for further data analysis; it estimates the independent components (ICs) of the PCA results. For feature extraction, the local binary patterns (LBP) technique is applied to the ICs, transforming them into spatial histograms of LBP values. For feature selection, a genetic algorithm (GA) is used to obtain a set of features with large discriminative power. In the next feature selection step, linear discriminant analysis (LDA) is applied to further extract features that maximize the ratio of between-class to within-class variability. Finally, a test subject is classified into the schizophrenia or control group using a Euclidean distance based classifier and a majority vote method. Leave-one-out cross validation is used for performance evaluation. Experimental results show that the proposed method achieves acceptable accuracy.
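A hedged sketch of the front end of this pipeline (PCA for temporal reduction, FastICA for component estimation, LBP histograms as features); the data shape, component counts and LBP parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from skimage.feature import local_binary_pattern

def fmri_lbp_features(data, n_pcs=20, n_ics=10, shape=(64, 64)):
    """data: (n_voxels, n_timepoints) fMRI matrix for one subject."""
    # 1) Reduce the number of time points with PCA.
    reduced = PCA(n_components=n_pcs).fit_transform(data)    # (n_voxels, n_pcs)
    # 2) Estimate spatial independent components from the PCA result.
    ics = FastICA(n_components=n_ics, random_state=0).fit_transform(reduced)
    # 3) LBP histogram of each IC map, stacked into one feature vector.
    feats = []
    for k in range(n_ics):
        ic_map = ics[:, k].reshape(shape)                    # assumes 2-D slices
        codes = local_binary_pattern(ic_map, P=8, R=1, method='uniform')
        hist, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```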
Original/Review Paper
Timing analysis
Z. Izakian; M. Mesgari
Abstract
With the rapid development of information gathering technologies and access to large amounts of data, methods are always required for analyzing data and extracting useful information from large raw datasets; data mining is an important approach to this problem. Clustering analysis, the most commonly used function of data mining, has attracted many researchers in computer science. Due to its varied applications, the problem of clustering time series data has become highly popular, and many algorithms have been proposed in this field. Recently, Swarm Intelligence (SI), a family of nature-inspired algorithms, has gained huge popularity in the field of pattern recognition and clustering. In this paper, a technique for clustering time series data using a particle swarm optimization (PSO) approach is proposed, with the Pearson correlation coefficient, one of the most commonly used distance measures for time series, as the distance measure. The proposed technique is able to find (near-)optimal cluster centers during the clustering process. To reduce the dimensionality of the search space and improve the performance of the proposed method, a singular value decomposition (SVD) representation of the cluster centers is used. Experimental results over three popular data sets indicate the superiority of the proposed technique in comparison with fuzzy C-means and fuzzy K-medoids clustering techniques.
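A minimal sketch of the core idea, assuming each particle encodes a full set of candidate cluster centers and fitness is the sum of Pearson distances of every series to its nearest center; the swarm parameters are illustrative assumptions, and the paper's SVD representation of centers is omitted for brevity.

```python
import numpy as np

def pearson_distance(a, b):
    """1 - Pearson correlation, a common time-series distance."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

def pso_cluster(series, k, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """series: (n_series, length) array.  Returns k cluster centers."""
    n, length = series.shape
    rng = np.random.default_rng(0)
    pos = series[rng.integers(0, n, (n_particles, k))]   # init centers from data
    vel = np.zeros_like(pos)

    def fitness(centers):
        d = np.array([[pearson_distance(s, c) for c in centers] for s in series])
        return d.min(axis=1).sum()                       # sum of nearest distances

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()                   # global best particle
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g
```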
Original/Review Paper
B.3. Communication/Networking and Information Technology
A. Ghaffari; S. Nobahary
Abstract
Wireless sensor networks (WSNs) consist of a large number of sensor nodes which are capable of sensing different environmental phenomena and sending the collected data to the base station or sink. Since sensor nodes are made of cheap components and are deployed in remote and uncontrolled environments, they are prone to failure; thus, maintaining proper network functionality even when undesired events occur, known as fault tolerance, is necessary. Hence, fault management is essential in these networks. In this paper, a new method is proposed with particular attention to fault tolerance and fault detection in WSNs. The proposed method is based on majority voting: it detects permanently faulty sensor nodes with high detection accuracy and a low false alarm rate and excludes them from the network. Its performance was simulated in MATLAB. To investigate the efficiency of the new method, it was compared with the Chen, Lee, and hybrid algorithms. Simulation results indicate that the proposed method performs better in terms of detection accuracy (DA) and false alarm rate (FAR), even with a large set of faulty sensor nodes.
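A minimal sketch of neighbour majority voting for fault detection, not the paper's full protocol: each node compares its reading with its neighbours' and is flagged faulty when the majority disagree beyond a tolerance. The tolerance and the neighbourhood matrix are illustrative assumptions.

```python
import numpy as np

def majority_vote_faults(readings, adjacency, tol=2.0):
    """Flag nodes whose readings disagree with most of their neighbours.

    readings:  (n_nodes,) sensed values.
    adjacency: (n_nodes, n_nodes) boolean neighbourhood matrix.
    Returns a boolean array, True = suspected permanently faulty.
    """
    n = len(readings)
    faulty = np.zeros(n, dtype=bool)
    for i in range(n):
        neigh = np.where(adjacency[i])[0]
        if len(neigh) == 0:
            continue
        disagree = np.abs(readings[neigh] - readings[i]) > tol
        # Majority vote among neighbours decides the node's status.
        faulty[i] = disagree.sum() > len(neigh) / 2
    return faulty
```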
Research Note
C.3. Software Engineering
E. Ghandehari; F. Saadatjoo; M. A. Zare Chahooki
Abstract
Agent-oriented software engineering (AOSE) is an emerging field in computer science that proposes systematic ideas for the analysis, implementation and maintenance of multi-agent systems. Despite the various methodologies introduced in agent-oriented software engineering, the main challenge remains the defects in different aspects of these methodologies. In response to these weaknesses, a combined solution named ARA, built from ASPECS, ROADMAP and AOR, is proposed. The three methodologies were analyzed in a comprehensive analytical framework covering concepts and perceptions, modeling language, process and pragmatism. Owing to time and resource limitations, sample methodologies were selected for evaluation and integration; this selection was based on the methodologies' usage and their ability to be combined. The evaluation shows that the ROADMAP methodology supports the analysis stages of agent-oriented systems, but its design stage is incomplete because it does not model all semi-agents. On the other hand, since the AOR and ASPECS methodologies support the design stage and inter-agent interactions, a mixed methodology is proposed which combines the analysis stage of ROADMAP with the design stages of AOR and ASPECS. Furthermore, to increase the performance of the proposed methodology, actor models, a service model, and capability and programming models were also added. To describe its different phases, it was also applied in a case study. The results of this project can pave the way for future agent-oriented methodologies.
Original/Review Paper
C. Software/Software Engineering
H. Motameni
Abstract
To evaluate and predict component-based software security, a two-dimensional model of software security based on Stochastic Petri Nets (SPNs) is proposed in this paper. In this approach, software security is modeled using the graphical presentation ability of Petri nets, and quantitative prediction is provided by the evaluation capability of the Stochastic Petri Net and the computing power of Markov chains. Each vulnerable component is modeled by a Stochastic Petri Net together with two parameters: the Successful Attack Probability (SAP) and the Vulnerability Volume of each component with respect to another component. The second parameter, as a second dimension of security evaluation, is a metric added to the model to improve the accuracy of system security prediction. An isomorphic Markov chain (MC) is obtained from the corresponding SPN model, and the security prediction is calculated from the probability distribution of the MC in the steady state. To identify and trace back the critical points of system security, a sensitivity analysis method is applied by differentiating the security prediction equation. This makes it possible to investigate and compare different solutions with the target system in the design phase.
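The steady-state step can be illustrated with a small worked example: for a continuous-time Markov chain with generator Q, the stationary distribution π solves πQ = 0 with the entries of π summing to one. A minimal sketch, with a made-up generator matrix not taken from the paper:

```python
import numpy as np

# Toy generator matrix Q of a 3-state CTMC (rows sum to zero);
# the rates are illustrative, not taken from the paper.
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.1, -0.4,  0.3],
              [ 0.2,  0.2, -0.4]])

# Solve pi @ Q = 0 subject to sum(pi) = 1 by replacing one
# redundant balance equation with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)   # steady-state probabilities used in the security prediction
```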
Original/Review Paper
I.3.7. Engineering
A. Ardakani; V. R. Kohestani
Abstract
The prediction of the liquefaction potential of soil due to an earthquake is an essential task in civil engineering. A decision tree is a tree structure consisting of internal and terminal nodes which process the data to ultimately yield a classification. C4.5 is a well-known algorithm widely used to design decision trees; in this algorithm, a pruning process is carried out to solve the problem of over-fitting. This article examines the capability of the C4.5 decision tree for predicting the seismic liquefaction potential of soil based on Cone Penetration Test (CPT) data. The database contains information about the cone resistance (q_c), total vertical stress (σ₀), effective vertical stress (σ′₀), mean grain size (D₅₀), normalized peak horizontal acceleration at ground surface (a_max), cyclic stress ratio (τ/σ′₀) and earthquake magnitude (M_w). The overall classification success rate for the entire data set is 98%. The results of the C4.5 decision tree have been compared with the available artificial neural network (ANN) and relevance vector machine (RVM) models. The developed C4.5 decision tree provides a viable tool for civil engineers to determine the liquefaction potential of soil.
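A hedged sketch of this kind of study in scikit-learn; note that scikit-learn grows an entropy-based CART tree rather than exact C4.5, and the CPT database is not reproduced here, so X and y below are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Feature columns follow the abstract: q_c, sigma_0, sigma'_0, D_50,
# a_max, tau/sigma'_0, M_w.  Placeholder data stands in for the CPT records.
X = np.random.rand(100, 7)           # placeholder CPT features
y = np.random.randint(0, 2, 100)     # 1 = liquefied, 0 = not liquefied

# 'entropy' matches C4.5's information-gain criterion, and
# cost-complexity pruning plays the role of C4.5's pruning step.
clf = DecisionTreeClassifier(criterion='entropy', ccp_alpha=0.01)
print(cross_val_score(clf, X, y, cv=5).mean())
```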
Original/Review Paper
F.2.2. Interpolation
V. Abolghasemi; S. Ferdowsi; S. Sanei
Abstract
The focus of this paper is the compressed sensing problem. Compressed sensing theory, under certain conditions, relaxes the Nyquist sampling requirement and allows fewer samples to be taken. One of the important tasks in this theory is to carefully design the measurement matrix (sampling operator). Most existing methods in the literature attempt to optimize a randomly initialized matrix with the aim of decreasing the number of required measurements. However, these approaches mainly lead to a sophisticated measurement matrix structure which is very difficult to implement. In this paper we propose an intermediate structure for the measurement matrix based on random sampling. The main advantage of the proposed block-based technique is its simplicity, while still achieving performance comparable to conventional techniques. The experimental results clearly confirm that, despite its simplicity, the proposed approach is competitive with existing methods in terms of reconstruction quality. It also outperforms existing methods in terms of computation time.
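A hedged sketch of the block idea; the paper's exact construction is not reproduced here, so this assumes a block-diagonal measurement matrix built from small random Gaussian blocks, with block sizes chosen purely for illustration.

```python
import numpy as np

def block_measurement_matrix(n_blocks, m_block, n_block, rng=None):
    """Block-diagonal Gaussian measurement matrix (illustrative structure).

    Each signal segment of length n_block is measured independently by a
    small m_block x n_block random block, which keeps both construction
    and hardware implementation simple.
    """
    rng = rng or np.random.default_rng(0)
    Phi = np.zeros((n_blocks * m_block, n_blocks * n_block))
    for i in range(n_blocks):
        B = rng.standard_normal((m_block, n_block)) / np.sqrt(m_block)
        Phi[i*m_block:(i+1)*m_block, i*n_block:(i+1)*n_block] = B
    return Phi

# Sensing a toy sparse signal: y = Phi @ x.
Phi = block_measurement_matrix(n_blocks=8, m_block=4, n_block=16)
x = np.zeros(8 * 16)
x[[5, 40, 100]] = 1.0                 # three nonzero entries
y = Phi @ x                           # compressed measurements
```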
Original/Review Paper
F.2.7. Optimization
F. Tatari; M. B. Naghibi-Sistani
Abstract
In this paper, the optimal adaptive leader-follower consensus of linear continuous-time multi-agent systems is considered. The error dynamics of each agent depend on its neighbors' information. A detailed analysis of online optimal leader-follower consensus under known and unknown dynamics is presented. The introduced reinforcement-learning-based algorithms learn online the approximate solutions of the algebraic Riccati equations. An optimal adaptive control technique is employed to iteratively solve the algebraic Riccati equation based on the online measured error-state and input information of each agent, without requiring a priori knowledge of the system matrices. The decoupling of the global error dynamics of the multi-agent system facilitates the employment of policy iteration and optimal adaptive control techniques to solve the leader-follower consensus problem under known and unknown dynamics. Simulation results verify the effectiveness of the proposed methods.
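For the known-dynamics case, the policy-iteration step can be illustrated with Kleinman's algorithm, which solves the algebraic Riccati equation by iterating Lyapunov equations; the system matrices below are toy values, not the paper's simulation example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy single-agent dynamics and cost weights (illustrative values only).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # stable, so K = 0 is admissible
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Kleinman policy iteration: each policy-evaluation step solves a
# Lyapunov equation for the current gain, and the gain update is the
# policy-improvement step; P converges to the ARE solution.
K = np.zeros((1, 2))
for _ in range(20):
    Ak = A - B @ K
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)        # policy improvement
print(P)                                   # approximate ARE solution
```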
Original/Review Paper
A.2. Control Structures and Microprogramming
M. M. Fateh; S. Azargoshasb
Abstract
This paper presents a discrete-time robust control for electrically driven robot manipulators in the task space. A novel discrete-time model-free control law is proposed by employing an adaptive fuzzy estimator to compensate for uncertainty, including model uncertainty, external disturbances and discretization error. The parameters of the fuzzy estimator are adapted to minimize the estimation error using a gradient descent algorithm. The proposed discrete control is robust against all uncertainties, as verified by stability analysis. The proposed robust control law is simulated on a SCARA robot driven by permanent-magnet DC motors. Simulation results show the effectiveness of the control approach.
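A minimal sketch of the adaptation step only, not the paper's full controller: the consequent weights of a simple zero-order Takagi-Sugeno-style fuzzy estimator are updated by gradient descent on the squared estimation error. The membership functions, rule centers and learning rate are illustrative assumptions.

```python
import numpy as np

class FuzzyEstimator:
    """Zero-order TS fuzzy estimator with Gaussian memberships."""

    def __init__(self, centers, width=0.5, lr=0.1):
        self.centers = np.asarray(centers)   # assumed rule centers
        self.width = width                   # assumed membership width
        self.w = np.zeros(len(self.centers)) # adaptable consequent weights
        self.lr = lr                         # gradient-descent step size

    def _phi(self, x):
        m = np.exp(-((x - self.centers) / self.width) ** 2)
        return m / m.sum()                   # normalized firing strengths

    def predict(self, x):
        return self.w @ self._phi(x)

    def adapt(self, x, target):
        """One gradient step on e^2/2, with e = target - estimate."""
        phi = self._phi(x)
        e = target - self.w @ phi
        self.w += self.lr * e * phi          # since d(e^2/2)/dw = -e * phi
        return e

# Each control period, the estimator tracks the lumped uncertainty:
est = FuzzyEstimator(centers=np.linspace(-1, 1, 7))
for x, u in [(0.2, 0.35), (0.4, 0.61), (0.1, 0.18)]:   # toy samples
    est.adapt(x, u)
```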