Original/Review Paper
N. Mobaraki; R. Boostani; M. Sabeti
Abstract
Among the variety of meta-heuristic population-based search algorithms, particle swarm optimization (PSO) with adaptive inertia weight (AIW) has been considered a versatile optimization tool that incorporates the experience of the whole swarm into the movement of particles. Although the exploitation ability of this algorithm is great, it cannot comprehensively explore the search space and may be trapped in a local minimum within a limited number of iterations. To increase its diversity and enhance its exploration ability, this paper inserts a chaotic factor, generated by three chaotic systems, along with a perturbation stage into AIW-PSO to avoid premature convergence, especially in complex nonlinear problems. To assess the proposed method, a known optimization benchmark containing nonlinear complex functions was selected, and its results were compared to those of standard PSO, AIW-PSO, and the genetic algorithm (GA). The empirical results demonstrate the superiority of the proposed chaotic AIW-PSO over its counterparts on 21 functions, which confirms the promising role of inserting randomness into AIW-PSO. The behavior of the error across epochs shows that the proposed method smoothly finds proper minima in a timely manner without premature convergence.
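As a rough illustration of the idea, the sketch below folds a chaotic factor and a perturbation stage into a standard AIW-PSO velocity update. The logistic map stands in for the paper's three chaotic systems (which are not named above), and the coefficients, perturbation probability, and scale are assumptions rather than the authors' exact formulation.

```python
import numpy as np

def logistic_map(x):
    # Logistic map: a common chaotic generator, used here as a stand-in
    # for the paper's (unspecified) three chaotic systems.
    return 4.0 * x * (1.0 - x)

def chaotic_aiw_pso_step(pos, vel, pbest, gbest, w, c1=2.0, c2=2.0,
                         chaos_state=0.7, perturb_prob=0.05, rng=None):
    """One illustrative velocity/position update with a chaotic factor
    and a perturbation stage; parameter names are assumptions."""
    rng = rng or np.random.default_rng()
    n, d = pos.shape
    chaos_state = logistic_map(chaos_state)          # chaotic factor in (0, 1)
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    vel = (w * chaos_state * vel
           + c1 * r1 * (pbest - pos)
           + c2 * r2 * (gbest - pos))
    pos = pos + vel
    # Perturbation stage: randomly jolt a few particles to keep diversity.
    mask = rng.random(n) < perturb_prob
    pos[mask] += rng.normal(scale=0.1, size=(mask.sum(), d))
    return pos, vel, chaos_state
```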
Original/Review Paper
A. Salehi; B. Masoumi
Abstract
The Biogeography-Based Optimization (BBO) algorithm has recently been of great interest to researchers for its simplicity of implementation, efficiency, and low number of parameters. BBO is one of the newer algorithms developed on the basis of the biogeography concept. It uses the idea of animal migration to find suitable habitats for solving optimization problems. The BBO algorithm has three principal operators, called migration, mutation, and elite selection. The migration operator plays a very important role in sharing information among the candidate habitats. The original BBO algorithm, due to its poor exploration and exploitation, sometimes does not produce desirable results. On the other hand, Edge Assembly Crossover (EAX) has been one of the most powerful crossover operators for producing offspring and increasing the diversity of the population. Combining the biogeography-based optimization algorithm with EAX can provide high efficiency in solving optimization problems, including the traveling salesman problem (TSP). This paper proposes such a combination to solve the traveling salesman problem. The new hybrid approach was examined on standard TSP datasets from TSPLIB. In the experiments, the proposed approach performed better than the original BBO and four other widely used metaheuristic algorithms.
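For orientation, the sketch below shows the classic BBO migration operator on real-valued habitats with linear immigration/emigration rates. For the TSP's permutation encoding, per-gene copying like this breaks tour validity, which is one motivation for hybridizing BBO with the EAX crossover; the EAX procedure itself is too involved to reproduce here.

```python
import numpy as np

def bbo_migration(habitats, fitness, rng=None):
    """Classic BBO migration on real-valued habitats: fitter habitats
    emigrate features to less fit ones. Rates are linear in rank.
    (For TSP tours, this naive per-gene copy would break permutation
    validity; the paper pairs BBO with EAX instead.)"""
    rng = rng or np.random.default_rng()
    n, d = habitats.shape
    order = np.argsort(-fitness)              # best habitat first
    ranks = np.empty(n)
    ranks[order] = np.arange(n)
    mu = 1.0 - ranks / (n - 1)                # emigration rate (best -> 1)
    lam = 1.0 - mu                            # immigration rate
    new = habitats.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:
                # roulette-wheel choice of an emigrating habitat
                src = rng.choice(n, p=mu / mu.sum())
                new[i, j] = habitats[src, j]
    return new
```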
Original/Review Paper
M. Abdollahi; M. Aliyari Shoorehdeli
Abstract
There are various automatic programming models inspired by evolutionary computation techniques. Due to the importance of devising an automatic mechanism to explore the complicated search space of mathematical problems where numerical methods fail, evolutionary computation is widely studied and applied to solve real-world problems. One well-known optimization algorithm is the shuffled frog leaping algorithm (SFLA), which is inspired by the behaviour of frogs searching their environment both locally and globally to find the largest quantity of available food. The results of SFLA show that it is competitively effective at solving problems. In this paper, Shuffled Frog Leaping Programming (SFLP), inspired by SFLA, is proposed as a novel type of automatic programming model for solving symbolic regression problems based on a tree representation. In SFLP, a new mechanism for improving the constant numbers in the tree structure is also proposed. In this way, different domains of mathematical problems can be addressed with the proposed method. To evaluate the solutions generated by SFLP, various experiments were conducted using a number of benchmark functions. The results were also compared with those of other evolutionary programming algorithms such as BBP, GSP, GP, and many variants of GP.
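The sketch below shows one iteration of the underlying SFLA dynamics on real vectors: sort by fitness, partition frogs round-robin into memeplexes, move each memeplex's worst frog toward its best, then reshuffle. SFLP applies the same scheme to program trees, which is not reproduced here; this continuous version only illustrates the search dynamics.

```python
import numpy as np

def sfla_step(frogs, fitness_fn, n_memeplexes=4, rng=None):
    """One shuffled-frog-leaping iteration on real vectors (illustrative;
    SFLP in the paper operates on program trees instead)."""
    rng = rng or np.random.default_rng()
    fit = np.array([fitness_fn(f) for f in frogs])
    order = np.argsort(fit)                       # best (lowest) first
    frogs, fit = frogs[order], fit[order]
    for m in range(n_memeplexes):
        idx = np.arange(m, len(frogs), n_memeplexes)  # round-robin split
        best, worst = idx[0], idx[-1]
        # Local search: move the worst frog toward the memeplex best.
        step = rng.random() * (frogs[best] - frogs[worst])
        candidate = frogs[worst] + step
        if fitness_fn(candidate) < fit[worst]:
            frogs[worst] = candidate
    rng.shuffle(frogs)                            # shuffle the memeplexes
    return frogs

# Example: 20 frogs in 5-D minimizing the sphere function
pop = np.random.default_rng(0).uniform(-5, 5, (20, 5))
pop = sfla_step(pop, lambda v: float(np.sum(v ** 2)))
```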
Original/Review Paper
M. Zeynali; H. Seyedarabi; B. Mozaffari Tazehkand
Abstract
Network security is very important when sending confidential data through a network. Cryptography is the science of hiding information, and combining cryptographic solutions with cognitive science has started a new branch called cognitive cryptography, which guarantees the confidentiality and integrity of the data. Brain signals, as a biometric indicator, can be converted to a binary code that can be used as a cryptographic key. This paper proposes a new method for decreasing the error of the EEG-based key generation process. The Discrete Fourier Transform, Discrete Wavelet Transform, Autoregressive Modeling, Energy Entropy, and Sample Entropy were used to extract features. All features are used as the input of a new method based on a window segmentation protocol and are then converted to binary form. We obtain 0.76% and 0.48% mean Half Total Error Rate (HTER) for the 18-channel and single-channel cryptographic key generation systems, respectively.
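A toy version of the feature-to-bits conversion is sketched below: the feature vector is segmented into windows and each window emits one bit by comparison against a global threshold. The paper's actual window segmentation protocol, thresholds, and feature pipeline (DFT, DWT, AR modeling, entropies) are more elaborate than this.

```python
import numpy as np

def features_to_key(features, window=4):
    """Toy feature binarization: segment the feature vector into windows
    and emit 1 when a window's mean exceeds the global median. The window
    size and thresholding rule are illustrative assumptions."""
    features = np.asarray(features, dtype=float)
    n = len(features) - len(features) % window    # drop the ragged tail
    windows = features[:n].reshape(-1, window)
    threshold = np.median(features)
    bits = (windows.mean(axis=1) > threshold).astype(int)
    return "".join(map(str, bits))

# Example: a hypothetical 16-element feature vector -> 4-bit key fragment
print(features_to_key(np.random.default_rng(0).normal(size=16)))
```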
Original/Review Paper
M. Kakooei; Y. Baleghi
Abstract
Semantic labeling is an active field in remote sensing applications. Although handling highly detailed objects in Very High Resolution (VHR) optical images and VHR Digital Surface Models (DSM) is a challenging task, it can improve the accuracy of semantic labeling methods. In this paper, a semantic labeling method is proposed based on the fusion of optical and normalized DSM data. Spectral and spatial features are fused into a Heterogeneous Feature Map to train the classifier. The evaluation database classes are impervious surface, building, low vegetation, tree, car, and background. The proposed method is implemented on Google Earth Engine and consists of several levels. First, Principal Component Analysis is applied to vegetation indexes to find the maximally separable color space between vegetation and non-vegetation areas. The Gray Level Co-occurrence Matrix is computed to provide texture information as spatial features. Several Random Forests are trained with an automatically selected training dataset. Several spatial operators follow the classification to refine the result. A Leaf-Less-Tree feature is used to solve the underestimation problem in tree detection. The area and the major and minor axes of connected components are used to refine building and car detection. Evaluation shows significant improvement in tree, building, and car accuracy, with appropriate overall accuracy and Kappa coefficient.
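The feature-fusion step can be pictured offline with numpy and scikit-learn, as below: a vegetation index and the nDSM are stacked with the raw bands, PCA adds separable axes, and a Random Forest is trained on the resulting per-pixel feature map. This is only a sketch: the paper runs on Google Earth Engine, applies PCA to several vegetation indexes rather than NDVI alone, adds GLCM texture, and uses automatically selected training samples; the labels here are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# Toy 2-band optical patch plus a normalized DSM (nDSM); shapes (H, W).
rng = np.random.default_rng(1)
red, nir, ndsm = rng.random((3, 32, 32))

ndvi = (nir - red) / (nir + red + 1e-9)          # one vegetation index
stack = np.stack([red, nir, ndvi, ndsm], -1).reshape(-1, 4)

# PCA over the band/index space to find more separable axes.
pcs = PCA(n_components=2).fit_transform(stack)
features = np.hstack([stack, pcs])               # "heterogeneous feature map"

labels = (ndsm.ravel() > 0.5).astype(int)        # placeholder training labels
clf = RandomForestClassifier(n_estimators=50).fit(features, labels)
pred = clf.predict(features).reshape(32, 32)     # per-pixel class map
```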
Original/Review Paper
M. Salehi; J. Razmara; Sh. Lotfi
Abstract
Prediction of cancer survivability using machine learning techniques has become a popular approach in recent years. An important issue in this regard is that the preparation of some features may require difficult and costly experiments, while these features have a less significant impact on the final decision and can be omitted from the feature set. Therefore, developing a machine for survivability prediction that ignores these features for simple cases while yielding acceptable prediction accuracy has become a challenge for researchers. In this paper, we have developed an ensemble multi-stage machine for survivability prediction that ignores difficult features for simple cases. The machine employs three base learners, namely a multilayer perceptron (MLP), a support vector machine (SVM), and a decision tree (DT), in the first stage to predict survivability using simple features. If the learners agree on the output, the machine makes the final decision in the first stage. Otherwise, for difficult cases where the outputs of the learners differ, the machine makes the decision in the second stage using an SVM over all features. The developed model was evaluated using the Surveillance, Epidemiology, and End Results (SEER) database. The experimental results revealed that the developed machine obtains considerable accuracy while ignoring difficult features for most of the input samples.
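The decision logic lends itself to a compact sketch. Below is a minimal scikit-learn rendering of the two-stage scheme; which columns count as "simple" features is an assumption supplied by the caller, and the hyperparameters are library defaults rather than the paper's settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

class TwoStageSurvivabilityMachine:
    """Sketch of the two-stage scheme: three base learners vote on the
    simple features; on disagreement, an SVM over all features decides."""
    def __init__(self, simple_cols):
        self.simple_cols = simple_cols          # assumed known a priori
        self.stage1 = [MLPClassifier(max_iter=500), SVC(),
                       DecisionTreeClassifier()]
        self.stage2 = SVC()

    def fit(self, X, y):
        for m in self.stage1:
            m.fit(X[:, self.simple_cols], y)    # stage 1: simple features
        self.stage2.fit(X, y)                   # stage 2: all features
        return self

    def predict(self, X):
        votes = np.array([m.predict(X[:, self.simple_cols])
                          for m in self.stage1])
        agree = (votes == votes[0]).all(axis=0)
        out = votes[0].copy()
        if (~agree).any():                      # difficult cases only
            out[~agree] = self.stage2.predict(X[~agree])
        return out
```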
Original/Review Paper
B. Hassanpour; N. Abdolvand; S. Rajaee Harandi
Abstract
The rapid development of technology, the Internet, and electronic commerce has led to the emergence of recommender systems. These systems assist users in finding and selecting their desired items. The accuracy of the recommendations is one of the main challenges of these systems. Given the capability of fuzzy systems to determine the borders of user interests, it seems reasonable to combine them with social network information and the factor of time. Hence, this study, for the first time, assesses the efficiency of recommender systems by combining fuzzy logic, longitudinal data, and social network information such as tags, friendships, and membership in groups. The impact of the proposed algorithm on the accuracy of recommender systems was studied by specifying the neighborhood and the border between users' preferences over time. The results revealed that using longitudinal social network data in memory-based recommender systems improves their accuracy.
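One way to picture the fuzzy-border idea is to replace a crisp neighbor cutoff with a graded membership over similarity, as in the toy sketch below. The membership shape, thresholds, and the source of similarity (tags, friendships, groups) are illustrative assumptions, not the paper's model.

```python
import numpy as np

def fuzzy_neighborhood_weights(similarities, low=0.2, high=0.8):
    """Ramp-shaped fuzzy membership: below `low` a user is not a
    neighbor, above `high` fully one, with a gradual border between.
    Thresholds are illustrative, not taken from the paper."""
    s = np.asarray(similarities, dtype=float)
    return np.clip((s - low) / (high - low), 0.0, 1.0)

def predict_rating(neighbor_ratings, similarities):
    # Memory-based prediction: membership-weighted average of ratings.
    w = fuzzy_neighborhood_weights(similarities)
    return float(np.dot(w, neighbor_ratings) / (w.sum() + 1e-9))

# Hypothetical neighbors with ratings and social-network-based similarity
print(predict_rating([4, 5, 2], [0.9, 0.6, 0.1]))
```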
Original/Review Paper
V. Ghasemi; M. Javadian; S. Bagheri Shouraki
Abstract
In this work, a hierarchical ensemble projected clustering algorithm for high-dimensional data is proposed. The basic concept of the algorithm is based on the active learning method (ALM), a fuzzy learning scheme inspired by some behavioral features of human brain functionality. The high-dimensional unsupervised active learning method (HUALM) is a clustering algorithm that blurs the data points as one-dimensional ink drop patterns in order to summarize the effects of all data points, and then applies a threshold to the resulting vectors. It is based on an ensemble clustering method that performs one-dimensional density partitioning to produce an ensemble of clustering solutions. It then assigns a unique prime number to the data points in each partition as their label. A combination is subsequently performed by multiplying the labels of every data point in order to produce the absolute labels. Data points with identical absolute labels fall into the same cluster. The hierarchical property of the algorithm is intended to cluster complex data by zooming into each already formed cluster to find further sub-clusters. The algorithm is verified using several synthetic and real-world datasets. The results show that the proposed method has promising performance compared to some well-known high-dimensional data clustering algorithms.
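The prime-label combination step is concrete enough to sketch directly. In the toy version below, equal-width bins stand in for the ink-drop density partitioning: each one-dimensional partition receives a unique prime, and points whose per-dimension primes multiply to the same product land in the same cluster.

```python
import numpy as np

def first_primes(m):
    # Trial division against previously found primes.
    primes, x = [], 2
    while len(primes) < m:
        if all(x % p for p in primes):
            primes.append(x)
        x += 1
    return primes

def prime_label_combination(data, n_bins=4):
    """Sketch of the HUALM label combination: a unique prime per 1-D
    partition, absolute label = product of primes over dimensions.
    Equal-width bins replace the paper's density partitioning."""
    data = np.asarray(data, dtype=float)
    n, d = data.shape
    primes = iter(first_primes(d * n_bins))
    labels = np.ones(n, dtype=object)         # exact big-integer products
    for j in range(d):
        edges = np.linspace(data[:, j].min(), data[:, j].max(), n_bins + 1)
        part = np.clip(np.digitize(data[:, j], edges[1:-1]), 0, n_bins - 1)
        dim_primes = [next(primes) for _ in range(n_bins)]
        labels *= np.array([dim_primes[p] for p in part], dtype=object)
    return labels   # identical products -> same cluster

print(prime_label_combination(np.random.default_rng(0).random((8, 3))))
```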
Original/Review Paper
A.R. Tajary; E. Tahanian
Abstract
Wireless network-on-chip (WiNoC) is one of the promising on-chip interconnection networks for on-chip system architectures. In addition to wired links, these architectures also use wireless links, which let packets reach destination nodes faster and with less power consumption. These wireless links are provided by wireless interfaces in wireless routers. WiNoC architectures differ in the position of the wireless routers and how they interact with other routers, so the placement of wireless interfaces is an important step in designing WiNoC architectures. In this paper, we propose a simulated annealing (SA) placement method that considers the routing algorithm as a factor in designing the cost function. To evaluate the proposed method, Noxim, a cycle-accurate network-on-chip simulator, is used. The simulation results show that the proposed method can reduce flit latency by up to 24.6% with only about a 0.2% increase in power consumption.
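A generic SA placement loop of the kind described might look like the sketch below, where the cost function is the hook for routing awareness. The toy cost in the example merely spreads interfaces across an 8x8 mesh and is not the paper's routing-aware latency model; the cooling schedule and move operator are assumptions.

```python
import math, random

def sa_place_wireless(n_routers, n_wireless, cost_fn, iters=5000,
                      t0=1.0, alpha=0.999, rng=None):
    """Generic SA placement of wireless interfaces on a NoC: a state is
    a set of router indices; a move swaps one chosen router for an
    unchosen one. cost_fn should embed the routing algorithm."""
    rng = rng or random.Random(0)
    state = set(rng.sample(range(n_routers), n_wireless))
    cost, t = cost_fn(state), t0
    for _ in range(iters):
        out = rng.choice(sorted(state))
        inn = rng.choice([r for r in range(n_routers) if r not in state])
        cand = (state - {out}) | {inn}
        c = cost_fn(cand)
        # Accept improvements always, worsenings with Boltzmann probability.
        if c < cost or rng.random() < math.exp((cost - c) / t):
            state, cost = cand, c
        t *= alpha
    return state, cost

# Toy cost: reward spreading interfaces over an 8x8 mesh (illustrative).
cost = lambda s: -sum(abs(a % 8 - b % 8) + abs(a // 8 - b // 8)
                      for a in s for b in s)
print(sa_place_wireless(64, 4, cost))
```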
Original/Review Paper
Gh. Ahmadi; M. Teshnelab
Abstract
Because of the interactions among the variables of a multiple-input multiple-output (MIMO) nonlinear system, its identification is a difficult task, particularly in the presence of uncertainties. The cement rotary kiln (CRK) is a MIMO nonlinear system in the cement factory with a complicated mechanism and uncertain disturbances. The identification of the CRK is very important for different purposes such as prediction, fault detection, and control. In previous works, the CRK was identified after decomposing it into several multiple-input single-output (MISO) systems. In this paper, for the first time, the rough neural network (R-NN) is utilized for the identification of the CRK without the use of MISO structures. The R-NN is a neural structure designed on the basis of rough set theory for dealing with uncertainty and vagueness. In addition, a stochastic gradient descent learning algorithm is proposed for training the R-NNs. The simulation results show the effectiveness of the proposed methodology.
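A minimal sketch of the building block is shown below: a rough neuron pairs "upper" and "lower" sub-neurons and outputs the max and min of their activations, yielding an interval that captures uncertainty. This follows the common Lingras-style rough neuron; the paper's exact architecture and its stochastic gradient descent training are not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RoughNeuron:
    """Rough neuron sketch: paired upper/lower sub-neurons whose max/min
    activations bound an uncertainty interval (an assumption modeled on
    Lingras-style rough neurons, not the paper's exact design)."""
    def __init__(self, n_in, rng=None):
        rng = rng or np.random.default_rng(0)
        self.wu, self.wl = rng.normal(size=(2, n_in))  # upper/lower weights
        self.bu = self.bl = 0.0

    def forward(self, x):
        u = sigmoid(self.wu @ x + self.bu)
        l = sigmoid(self.wl @ x + self.bl)
        return max(u, l), min(u, l)   # (upper bound, lower bound)

x = np.array([0.5, -1.0, 2.0])
print(RoughNeuron(3).forward(x))
```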
Original/Review Paper
R. Asgarian Dehkordi; H. Khosravi
Abstract
Fine-grained vehicle type recognition is one of the main challenges in machine vision. Almost all of the methods presented so far identify the type of vehicle with the help of feature extraction and classifiers. Because of the apparent similarity between car classes, these methods may produce erroneous results. This paper presents a methodology that uses two criteria to identify common vehicle types: the first is feature extraction and classification, and the second is the use of the car's dimensions for classification. The method consists of three phases. In the first phase, the coordinates of the vanishing points are obtained. In the second phase, the bounding box and dimensions are calculated for each passing vehicle. Finally, in the third phase, the exact vehicle type is determined by combining the results of the first and second criteria. To evaluate the proposed method, a dataset of images and videos prepared by the authors has been used. This dataset was recorded from viewpoints similar to those of a roadside camera. Most existing methods use high-quality images for evaluation and are not applicable in the real world, whereas the proposed method uses real-world video frames to determine the exact type of vehicle and achieves an accuracy of 89.5%, which represents good performance.
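The third-phase fusion can be pictured as reweighting appearance-based class scores by whether the measured dimensions are plausible for each class, as in the hypothetical sketch below. The class names, dimension ranges, and penalty factor are illustrative assumptions, not values from the paper.

```python
def fuse_type_decision(clf_probs, dims, dim_ranges):
    """Sketch of fusing the two criteria: classifier class probabilities
    are downweighted when the measured vehicle length (from the
    vanishing-point bounding box) falls outside the class's range."""
    length, width, height = dims
    fused = {}
    for cls, p in clf_probs.items():
        lo, hi = dim_ranges[cls]               # plausible length range (m)
        fused[cls] = p * (1.0 if lo <= length <= hi else 0.2)
    return max(fused, key=fused.get)

# Appearance alone narrowly favors "sedan"; the measured 5.4 m length
# makes "pickup" the fused decision.
probs = {"sedan": 0.45, "pickup": 0.40, "bus": 0.15}
ranges = {"sedan": (4.0, 5.0), "pickup": (5.0, 6.0), "bus": (9.0, 13.0)}
print(fuse_type_decision(probs, (5.4, 1.9, 1.8), ranges))
```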
Original/Review Paper
H.4.7. Methodology and Techniques
Osman K. Erol; I. Eksin; A. Akdemir; A. Aydınoglu
Abstract
In general, hybridized evolutionary optimization algorithms all use the routine "first diversification, then intensification" approach. In other words, these hybridized methods begin in a global search mode using a highly random initial search population and then switch to an intense local search mode at some stage. Population initialization remains a crucial point in hybridized evolutionary optimization algorithms, since it can affect the speed of convergence and the quality of the final solution. In this study, we introduce a new approach that creates a paradigm shift by reversing the "diversification" and "intensification" routines. Here, instead of starting from a random initial population, we first find a unique starting point by conducting an initial exhaustive search, based on the coordinate exhaustive search local optimization algorithm run for only a single iteration, in order to collect rough but meaningful knowledge about the nature of the problem. Thus, our main assertion is that this approach will improve the convergence rate of any evolutionary optimization algorithm. In this study, we illustrate how one can use this unique starting point in the initialization of two evolutionary optimization algorithms, namely Big Bang-Big Crunch optimization and Particle Swarm Optimization. Experiments on a commonly used benchmark test suite, which consists mainly of rotated and shifted functions, show that the proposed initialization procedure leads to great improvement for the two above-mentioned evolutionary optimization algorithms.
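A minimal sketch of the reversed routine is given below: one pass of a coordinate exhaustive search picks an informed starting point, and the evolutionary population is then seeded around it instead of being drawn uniformly at random. The grid resolution, seeding spread, and test function are assumptions, not the paper's settings.

```python
import numpy as np

def coordinate_search_start(f, lo, hi, n_grid=16):
    """Single-iteration coordinate exhaustive search used only to pick a
    starting point: sweep each coordinate over a grid while the others
    stay at their current values, keeping the best value per sweep."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = (lo + hi) / 2.0                       # start from the midpoint
    for j in range(len(lo)):
        grid = np.linspace(lo[j], hi[j], n_grid)
        vals = []
        for g in grid:
            y = x.copy()
            y[j] = g
            vals.append(f(y))
        x[j] = grid[int(np.argmin(vals))]     # best grid value, coord j
    return x

sphere = lambda v: float(np.sum(v ** 2))
start = coordinate_search_start(sphere, [-5] * 3, [5] * 3)
# Seed the initial population around the informed start point rather
# than uniformly over the whole search space.
pop = start + 0.1 * np.random.default_rng(0).normal(size=(20, 3))
```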