Original/Review Paper
Mohammad AllamehAmiri; Vali Derhami; Mohammad Ghasemzadeh
Abstract
Quality of service (QoS) is an important issue in the design and management of web service composition. QoS in web services consists of various non-functional factors, such as execution cost, execution time, availability, successful execution rate, and security. In recent years, the number of available web services has proliferated, and increasingly many of them offer the same functionality. Functionally equivalent web services are distinguished by their quality parameters. Moreover, clients usually demand value-added composite services rather than those offered by single, isolated web services. Therefore, selecting, from among numerous candidate plans, a composition plan that satisfies client requirements has become a challenging and time-consuming problem. This paper proposes a new constrained composition-plan optimizer based on a genetic algorithm. The proposed method can efficiently find a composition plan that satisfies the user's constraints. The performance of the method is evaluated in a simulated environment.
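The selection problem the abstract describes can be sketched as a small genetic algorithm over discrete service choices. The QoS table, the time constraint, the penalty-based fitness, and the GA parameters below are illustrative assumptions, not the paper's actual encoding:

```python
import random

random.seed(0)

# Hypothetical QoS table: for each abstract task, a list of candidate
# services given as (execution_cost, execution_time) tuples.
CANDIDATES = [
    [(5, 2.0), (3, 4.0), (8, 1.0)],   # task 0
    [(2, 3.0), (6, 1.5)],             # task 1
    [(4, 2.5), (7, 1.0), (3, 3.5)],   # task 2
]
MAX_TOTAL_TIME = 8.0  # assumed user constraint on end-to-end execution time

def fitness(plan):
    """Total cost, with a penalty when the time constraint is violated."""
    cost = sum(CANDIDATES[t][s][0] for t, s in enumerate(plan))
    time = sum(CANDIDATES[t][s][1] for t, s in enumerate(plan))
    return cost + 100 * max(0.0, time - MAX_TOTAL_TIME)

def random_plan():
    # A chromosome picks one concrete service index per abstract task.
    return [random.randrange(len(c)) for c in CANDIDATES]

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    pop = [random_plan() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(CANDIDATES))
            child = a[:cut] + b[cut:]         # one-point crossover
            if random.random() < mutation_rate:
                t = random.randrange(len(CANDIDATES))
                child[t] = random.randrange(len(CANDIDATES[t]))  # reselect one service
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because elites are carried over unchanged, the best plan found never worsens across generations; the penalty term steers the search toward plans that respect the time constraint while minimizing cost.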
Original/Review Paper
Alireza Khosravi; Alireza Alfi; Amir Roshandel
Abstract
There are two significant goals in teleoperation systems: stability and performance. This paper introduces an LMI-based robust control method for bilateral transparent teleoperation systems in the presence of model mismatch. The uncertainties in the communication-channel time delay, the task environment, and the model parameters of the master and slave systems are collectively referred to as model mismatch. The time delay in the communication channel is assumed to be large, unknown, and asymmetric, but its upper bound is assumed to be known. The proposed method consists of two local controllers. One, the local slave controller, is located at the remote site to control motion tracking; the other, the local master controller, is located at the local site to preserve complete transparency by ensuring force tracking and the robust stability of the closed-loop system. To reduce the peak amplitude of the output signal with respect to the peak amplitude of the input signal at the slave site, the local slave controller is designed as a bounded peak-to-peak gain controller. To provide a realistic scenario, an external signal representing force-sensor noise is also considered. Simulation results show the effectiveness of the proposed control structure.
Research Note
Meysam Alikhani; Mohammad Ahmadi Livani
Abstract
Mobile ad-hoc networks (MANETs) are more vulnerable than other networks because of inherent properties such as a dynamic topology and the lack of infrastructure. A considerable challenge for these networks is therefore to develop a method that can detect anomalies with high accuracy as the network topology changes. In this paper, two methods are proposed for dynamic anomaly detection in MANETs, named IPAD and IAPAD. In both methods, the anomaly detection procedure consists of three main phases: training, detection, and updating. In the IPAD method, the training phase creates the normal profile from the normal feature vectors using principal component analysis. In the detection phase, during each time window, anomalous feature vectors are identified based on their projection distance from the first global principal component. In the updating phase, at the end of each time window, the normal profile is updated using the normal feature vectors of several previous time windows and incremental principal component analysis. IAPAD is similar to IPAD, except that each node uses an approximate first global principal component to identify anomalous feature vectors; in addition, the normal profile is updated using approximate singular descriptions from several previous time windows. Simulation results obtained with the NS2 simulator for several routing attacks show that the average detection rate and average false alarm rate are 95.14% and 3.02% for IPAD, and 94.20% and 2.84% for IAPAD, respectively.
Original/Review Paper
Morteza Haydari; Mahdi Banejad; Amin Hahizadeh
Abstract
Restructuring, recent developments in the power system, and the problems arising from the construction and maintenance of large power plants have led to increased use of Distributed Generation (DG) resources. Owing to their specifications, technology, and network connection location, DG units can improve system and load-point reliability indices. In this paper, the allocation and sizing of distributed generators in electricity distribution networks are determined using an optimization method. The objective function of the proposed method is based on improving reliability indices, such as the System Average Interruption Duration Index (SAIDI) and the Average Energy Not Supplied (AENS) per customer, at the lowest cost. The optimization uses the Modified Shuffled Frog Leaping Algorithm (MSFLA) to determine the optimal DG allocation and sizing in the distribution network. The MSFLA is a recent memetic meta-heuristic algorithm with efficient mathematical operators and global search capability. To evaluate the proposed algorithm, the IEEE 34-bus test system is used. In addition, the findings of comparative studies indicate that, with respect to the objective function used, the proposed method outperforms the genetic algorithm in finding the optimal sizing and location of DGs.
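Independently of the authors' modified variant, the basic shuffled frog leaping step can be sketched on a toy one-dimensional objective. The objective, population size, partitioning, and leap rule below are illustrative assumptions, not the MSFLA as proposed in the paper:

```python
import random

random.seed(1)

def f(x):
    """Toy objective to minimize, standing in for the reliability/cost
    objective evaluated for a candidate DG size or location."""
    return (x - 3.0) ** 2

# Population of "frogs" (candidate solutions).
frogs = [random.uniform(-10, 10) for _ in range(15)]
initial = list(frogs)

for _ in range(100):
    frogs.sort(key=f)                              # rank frogs by fitness
    memeplexes = [frogs[i::3] for i in range(3)]   # shuffled partitioning
    for mem in memeplexes:
        best, worst = min(mem, key=f), max(mem, key=f)
        # Basic SFLA leap: the worst frog jumps toward the memeplex best.
        candidate = worst + random.random() * (best - worst)
        if f(candidate) >= f(worst):
            # Failed leap: jump toward the global best instead,
            # and resample randomly if that also fails.
            candidate = worst + random.random() * (frogs[0] - worst)
            if f(candidate) >= f(worst):
                candidate = random.uniform(-10, 10)
        frogs[frogs.index(worst)] = candidate

print(min(frogs, key=f))
```

Only the worst frog of each memeplex is ever replaced, so the best solution found never degrades, while the random resampling step keeps some diversity in the population.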
Original/Review Paper
Hossein Marvi; Zeynab Esmaileyan; Ali Harimi
Abstract
The vast use of Linear Prediction Coefficients (LPC) in speech processing systems has intensified the importance of their accurate computation. This paper is concerned with computing LPC coefficients using evolutionary algorithms: the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), and Particle Swarm Optimization with Differentially perturbed Velocity (PSO-DV). In this approach, the evolutionary algorithms search for the LPC coefficients that predict the original signal with the minimum prediction error. To this end, the fitness function in all of the evolutionary algorithms is defined as the maximum prediction error. The coefficients computed by these algorithms are compared, in terms of prediction accuracy, with those obtained by the traditional autocorrelation method. Our results show that the coefficients obtained by the evolutionary algorithms predict the original signal with less prediction error than the autocorrelation method. The maximum prediction errors achieved by the autocorrelation method, GA, PSO, DE, and PSO-DV are 0.35, 0.06, 0.02, 0.07, and 0.001, respectively. This shows that the hybrid algorithm, PSO-DV, is superior to the other algorithms in computing linear prediction coefficients.
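The fitness function described above, the maximum prediction error of a linear predictor, can be illustrated with a minimal sketch. The synthetic signal, predictor order, and the simple (1+1) mutation search standing in for the GA/PSO/DE variants are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic signal from a known, stable order-2 autoregressive process.
n, order = 400, 2
true_a = np.array([1.2, -0.6])
s = np.zeros(n)
for t in range(order, n):
    s[t] = true_a @ s[t - order:t][::-1] + 0.01 * rng.standard_normal()

def max_prediction_error(a):
    """Fitness: worst-case residual of the order-p linear predictor
    s[t] ~ sum_k a[k] * s[t-1-k]."""
    pred = sum(a[k] * s[order - 1 - k:n - 1 - k] for k in range(order))
    return np.max(np.abs(s[order:] - pred))

# Classical baseline: the autocorrelation (Yule-Walker) solution.
r = np.array([s[:n - k] @ s[k:] for k in range(order + 1)])
a_ac = np.linalg.solve(
    np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)]),
    r[1:order + 1])

# Toy (1+1) evolutionary search: mutate the current best coefficients and
# keep the mutant whenever the max-error fitness improves.
a_best = a_ac.copy()
for _ in range(2000):
    a_new = a_best + 0.01 * rng.standard_normal(order)
    if max_prediction_error(a_new) < max_prediction_error(a_best):
        a_best = a_new

print(max_prediction_error(a_ac), max_prediction_error(a_best))
```

The autocorrelation solution minimizes the mean squared error, not the maximum error, so a search that optimizes the max-error fitness directly can only match or reduce the worst-case residual relative to that baseline.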
Review Article
Seyed Mahdi sadatrasoul; Mohammadreza gholamian; Mohammad Siami; Zeynab Hajimohammadi
Abstract
This paper presents a comprehensive review of the work done during 2000–2012 on the application of data mining techniques to credit scoring; to date, there has been no literature review covering data mining applications in credit scoring. Using a novel research approach, this paper conducts an academic and systematic literature review that includes all of the journals in the ScienceDirect online journal database. The articles are categorized into enterprise, individual, and small and medium-sized enterprise (SME) credit scoring. Data mining techniques are also categorized into single classifiers, hybrid methods, and ensembles. Variable selection methods are investigated separately, because variable selection is a major issue in the credit scoring problem. The findings of the review reveal that data mining techniques are mostly applied to individual credit scoring, and that there is little research on enterprise and SME credit scoring. Ensemble methods, support vector machines, and neural networks are the techniques most favored in recent work. Hybrid methods are investigated in four categories, of which two, the "classification and classification" and "clustering and classification" combinations, are the most used. The analysis provides a guide to future research, and the paper concludes with several suggestions for further studies.