ORIGINAL_ARTICLE
Chaotic-based Particle Swarm Optimization with Inertia Weight for Optimization Tasks
Among the variety of meta-heuristic population-based search algorithms, particle swarm optimization (PSO) with adaptive inertia weight (AIW) is considered a versatile optimization tool that incorporates the experience of the whole swarm into the movement of particles. Although the exploitation ability of this algorithm is strong, it cannot comprehensively explore the search space and may be trapped in a local minimum within a limited number of iterations. To increase its diversity and enhance its exploration ability, this paper inserts a chaotic factor, generated by three chaotic systems, along with a perturbation stage into AIW-PSO to avoid premature convergence, especially in complex nonlinear problems. To assess the proposed method, a well-known optimization benchmark containing complex nonlinear functions was selected, and its results were compared to those of standard PSO, AIW-PSO, and the genetic algorithm (GA). The empirical results demonstrate the superiority of the proposed chaotic AIW-PSO over its counterparts on 21 functions, which confirms the promising role of inserting randomness into AIW-PSO. The behavior of the error through the epochs shows that the proposed method can smoothly find proper minima in a timely manner without encountering premature convergence.
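The core idea described in the abstract, modulating an adaptive inertia weight with a chaotic sequence, can be sketched as follows. This is a minimal illustration using a single logistic map; the paper's three chaotic systems and its perturbation stage are not reproduced, and all parameter values here are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def chaotic_pso(f, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
                bounds=(-5.0, 5.0), c1=2.0, c2=2.0, seed=0):
    """Sketch of PSO with a logistic-map chaotic factor in the inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    z = 0.7  # logistic-map state; any generic value in (0, 1) off the fixed points
    for t in range(iters):
        z = 4.0 * z * (1.0 - z)                           # chaotic sequence in (0, 1)
        w = (w_max - (w_max - w_min) * t / iters) * z     # chaos-modulated adaptive inertia
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()
```

On a simple sphere function the chaos-modulated weight still lets the swarm converge while injecting extra variation into each step.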
https://jad.shahroodut.ac.ir/article_1823_1dac588a0355b8da10f195fb238d6f52.pdf
2020-07-01
303
312
10.22044/jadm.2020.8594.1993
PSO-AIW
randomness
chaotic factor
swarm experience
convergence rate
N.
Mobaraki
neda_mobaraki@yahoo.com
1
Department of Computer Engineering, Apadana Institute of Higher Education, Shiraz, Iran.
AUTHOR
R.
Boostani
boostani@shirazu.ac.ir
2
Department of CSE & IT, Faculty of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran.
LEAD_AUTHOR
M.
Sabeti
sabeti@shirazu.ac.ir
3
Department of Computer Engineering, North Tehran Branch, Islamic Azad University, Tehran, Iran.
AUTHOR
ORIGINAL_ARTICLE
Solving Traveling Salesman Problem based on Biogeography-based Optimization and Edge Assembly Cross-over
The Biogeography-Based Optimization (BBO) algorithm has recently attracted great interest from researchers for its simplicity of implementation, efficiency, and low number of parameters. BBO is one of the newer optimization algorithms developed around the concept of biogeography: it uses the idea of animal migration among habitats to solve optimization problems. The BBO algorithm has three principal operators, called migration, mutation, and elite selection. The migration operator plays a very important role in sharing information among the candidate habitats. The original BBO algorithm, due to its poor exploration and exploitation, sometimes does not produce desirable results. On the other hand, Edge Assembly Crossover (EAX) is one of the most powerful crossovers for generating offspring, and it increases the diversity of the population. Combining the biogeography-based optimization algorithm with EAX can therefore provide high efficiency in solving optimization problems, including the traveling salesman problem (TSP). This paper proposes a combination of these approaches to solve the TSP. The new hybrid approach was examined on standard TSP datasets from TSPLIB. In the experiments, the performance of the proposed approach was better than the original BBO and four other widely used metaheuristic algorithms.
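The migration operator mentioned above can be sketched as follows. This is the standard BBO migration with rank-based immigration/emigration rates, not the paper's EAX hybrid; the linear rate model and roulette-wheel source selection are common textbook choices, assumed here for illustration:

```python
import random

def bbo_migration(population, fitness, rng=random):
    """One BBO migration step (minimization): habitat i immigrates each
    feature with rate lam[i], choosing the source habitat by roulette
    wheel over the emigration rates mu.  Better habitats have high mu
    (they export features) and low lam (they rarely import)."""
    n = len(population)
    order = sorted(range(n), key=lambda i: fitness[i])   # best habitat first
    rank = {h: r for r, h in enumerate(order)}
    mu = [1.0 - rank[i] / (n - 1) for i in range(n)]     # emigration: best -> 1
    lam = [1.0 - m for m in mu]                          # immigration: best -> 0
    new = [list(h) for h in population]
    for i in range(n):
        for d in range(len(population[i])):
            if rng.random() < lam[i]:
                # roulette-wheel selection of the source habitat by mu
                r, acc, pick = rng.random() * sum(mu), 0.0, 0
                for j in range(n):
                    acc += mu[j]
                    if acc >= r:
                        pick = j
                        break
                new[i][d] = population[pick][d]
    return new
```

Because the best habitat's immigration rate is zero, it passes through migration unchanged, which is how BBO protects its current best solution.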
https://jad.shahroodut.ac.ir/article_1697_98c0bddece1cb575e0cb9c519d6fa4ec.pdf
2020-07-01
313
329
10.22044/jadm.2020.7835.1922
Biogeography-Based Optimization
Evolutionary Algorithms
Traveling Salesman Problem
A.
Salehi
abbas.s.q@gmail.com
1
Faculty of Computer and information Technology, Islamic Azad University, Qazvin Branch, Qazvin, Iran.
AUTHOR
B.
Masoumi
masoumi.bh@gmail.com
2
Faculty of Computer and information Technology, Islamic Azad University, Qazvin Branch, Qazvin, Iran.
LEAD_AUTHOR
ORIGINAL_ARTICLE
Shuffled Frog-Leaping Programming for Solving Regression Problems
There are various automatic programming models inspired by evolutionary computation techniques. Because of the importance of devising an automatic mechanism to explore the complicated search space of mathematical problems where numerical methods fail, evolutionary computation is widely studied and applied to solve real-world problems. One famous algorithm for optimization problems is the shuffled frog leaping algorithm (SFLA), inspired by the behavior of frogs searching their environment both locally and globally to find the richest available food. The results of SFLA show that it is competitively effective at solving problems. In this paper, Shuffled Frog Leaping Programming (SFLP), inspired by SFLA, is proposed as a novel automatic programming model for solving symbolic regression problems based on a tree representation. In addition, SFLP introduces a new mechanism for improving the constant numbers in the tree structure. In this way, different domains of mathematical problems can be addressed with the proposed method. To examine the performance of the solutions generated by SFLP, various experiments were conducted using a number of benchmark functions. The results were also compared with other evolutionary programming algorithms such as BBP, GSP, GP, and many variants of GP.
https://jad.shahroodut.ac.ir/article_1698_4cc451902ceefd66954d47fad2547b9d.pdf
2020-07-01
331
341
10.22044/jadm.2020.7847.1924
Genetic Programming
Shuffled Frog Leaping Algorithm
Shuffled Frog Leaping Programming
Regression Problems
M.
Abdollahi
m.abdollahi64@gmail.com
1
Department of Computer Engineering, K.N. Toosi University of Technology, Tehran, Iran.
LEAD_AUTHOR
M.
Aliyari Shoorehdeli
aliyari@kntu.ac.ir
2
Department of Electrical Engineering, K.N. Toosi University of Technology, Tehran, Iran.
AUTHOR
ORIGINAL_ARTICLE
Development of a Unique Biometric-based Cryptographic Key Generation with Repeatability using Brain Signals
Network security is very important when sending confidential data through a network. Cryptography is the science of hiding information, and combining cryptographic solutions with cognitive science opens a new branch, called cognitive cryptography, that guarantees the confidentiality and integrity of the data. Brain signals, as a biometric indicator, can be converted into a binary code that can be used as a cryptographic key. This paper proposes a new method for decreasing the error of the EEG-based key generation process. The Discrete Fourier Transform, Discrete Wavelet Transform, Autoregressive Modeling, Energy Entropy, and Sample Entropy were used to extract features. All features are used as the input of the new method, based on a window segmentation protocol, and are then converted into binary form. We obtain mean Half Total Error Rates (HTER) of 0.76% and 0.48% for the 18-channel and single-channel cryptographic key generation systems, respectively.
https://jad.shahroodut.ac.ir/article_1721_201fe0cafa680fc649fcaa27eabb1fdb.pdf
2020-07-01
343
356
10.22044/jadm.2020.7858.1923
Cryptography
Electroencephalogram (EEG)
Security
Biometric cryptosystem
M.
Zeynali
m_zeynali94@ms.tabrizu.ac.ir
1
Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran.
AUTHOR
H.
Seyedarabi
seyedarabi@tabrizu.ac.ir
2
Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran.
LEAD_AUTHOR
B.
Mozaffari Tazehkand
mozaffary@tabrizu.ac.ir
3
Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran.
AUTHOR
ORIGINAL_ARTICLE
VHR Semantic Labeling by Random Forest Classification and Fusion of Spectral and Spatial Features on Google Earth Engine
Semantic labeling is an active field in remote sensing applications. Although handling highly detailed objects in Very High Resolution (VHR) optical images and VHR Digital Surface Models (DSM) is a challenging task, it can improve the accuracy of semantic labeling methods. In this paper, a semantic labeling method is proposed based on the fusion of optical and normalized DSM data. Spectral and spatial features are fused into a Heterogeneous Feature Map to train the classifier. The evaluation database classes are impervious surface, building, low vegetation, tree, car, and background. The proposed method is implemented on Google Earth Engine and consists of several levels. First, Principal Component Analysis is applied to vegetation indexes to find the maximally separable color space between vegetation and non-vegetation areas. The Gray Level Co-occurrence Matrix is computed to provide texture information as spatial features. Several Random Forests are trained with an automatically selected training dataset. Several spatial operators follow the classification to refine the result. A Leaf-Less-Tree feature is used to solve the underestimation problem in tree detection. The area and the major and minor axes of connected components are used to refine building and car detection. The evaluation shows significant improvement in tree, building, and car accuracy, with appropriate overall accuracy and Kappa coefficient.
https://jad.shahroodut.ac.ir/article_1788_a0e317f5794069a86834fbc6a25802da.pdf
2020-07-01
357
370
10.22044/jadm.2020.8252.1964
VHR Semantic labeling
Spatial feature
Google Earth Engine
GLCM
Random Forest
M.
Kakooei
kakooei.mohammad@stu.nit.ac.ir
1
Electrical & Computer Engineering Department, Babol Noshirvani University of Technology, Babol, Iran
AUTHOR
Y.
Baleghi
y.baleghi@nit.ac.ir
2
Electrical & Computer Engineering Department, Babol Noshirvani University of Technology, Babol, Iran
LEAD_AUTHOR
ORIGINAL_ARTICLE
Development of an Ensemble Multi-stage Machine for Prediction of Breast Cancer Survivability
Prediction of cancer survivability using machine learning techniques has become a popular approach in recent years. An important issue in this regard is that preparing some features may require difficult and costly experiments even though these features have a less significant impact on the final decision and can be omitted from the feature set. Therefore, developing a machine for survivability prediction that ignores these features for simple cases while yielding acceptable prediction accuracy has become a challenge for researchers. In this paper, we develop an ensemble multi-stage machine for survivability prediction that ignores difficult features for simple cases. The machine employs three basic learners, namely a multilayer perceptron (MLP), a support vector machine (SVM), and a decision tree (DT), in the first stage to predict survivability using simple features. If the learners agree on the output, the machine makes the final decision in the first stage. Otherwise, for difficult cases where the learners' outputs differ, the machine makes the decision in the second stage using an SVM over all features. The developed model was evaluated using the Surveillance, Epidemiology, and End Results (SEER) database. The experimental results reveal that the developed machine obtains considerable accuracy while ignoring difficult features for most of the input samples.
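The two-stage decision logic described above can be sketched generically. The learners are passed in as plain callables, hypothetical stand-ins for the trained MLP, SVM, and DT, so the sketch captures only the agree-or-defer control flow, not the actual models:

```python
from typing import Callable, Sequence

def multi_stage_predict(sample_simple, sample_all,
                        stage1: Sequence[Callable], stage2: Callable):
    """Two-stage ensemble decision: if all stage-1 learners agree on the
    simple (cheap) features, return their unanimous label and stop there;
    otherwise defer to the stage-2 model, which sees the full feature
    vector including the costly features."""
    votes = [clf(sample_simple) for clf in stage1]
    if len(set(votes)) == 1:          # unanimous -> decide early, skip costly features
        return votes[0], "stage1"
    return stage2(sample_all), "stage2"
```

The payoff is that the expensive features in `sample_all` only ever need to be prepared for the minority of samples on which the cheap learners disagree.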
https://jad.shahroodut.ac.ir/article_1780_e3aa6f7b2b8463e031c1b4fc2785a103.pdf
2020-07-01
371
378
10.22044/jadm.2020.8406.1978
breast cancer survivability prediction
Ensemble learning
multi-stage machines
Feature Selection
M.
Salehi
yakjin.s@gmail.com
1
Department of Computer Science, Faculty of Mathematical Sciences, University of Tabriz, Tabriz, Iran.
AUTHOR
J.
Razmara
razmaraj@gmail.com
2
Department of Computer Science, Faculty of Mathematical Sciences, University of Tabriz, Tabriz, Iran.
LEAD_AUTHOR
Sh.
Lotfi
sh.lotfi@hotmail.com
3
Department of Computer Science, Faculty of Mathematical Sciences, University of Tabriz, Tabriz, Iran.
AUTHOR
ORIGINAL_ARTICLE
Improving Accuracy of Recommender Systems using Social Network Information and Longitudinal Data
The rapid development of technology, the Internet, and electronic commerce has led to the emergence of recommender systems, which assist users in finding and selecting their desired items. The accuracy of the recommendations is one of the main challenges of these systems. Given the capability of fuzzy systems in determining the borders of user interests, it seems reasonable to combine them with social network information and the factor of time. Hence, this study, for the first time, assesses the efficiency of recommender systems by combining fuzzy logic, longitudinal data, and social network information such as tags, friendships, and group memberships. The impact of the proposed algorithm on the accuracy of recommender systems was studied by specifying the neighborhood and the border between users' preferences over time. The results reveal that using longitudinal data together with social network information in memory-based recommender systems improves the accuracy of these systems.
https://jad.shahroodut.ac.ir/article_1787_e8ad8fe60751c9becdd1ff5b8f842cc3.pdf
2020-07-01
379
389
10.22044/jadm.2020.7326.1871
Recommender system
Social Network
Longitudinal Data
fuzzy logic
Tags
B.
Hassanpour
b.hassanpor@gmail.com
1
Department of Electrical, Computer and IT Engineering, Qazvin Islamic Azad University, Qazvin, Iran.
AUTHOR
N.
Abdolvand
abdolvand@gmail.com
2
Department of Management, Faculty of Social Sciences and Economics, Alzahra University, Tehran, Iran.
LEAD_AUTHOR
S.
Rajaee Harandi
rajaeeharandi.saeedeh@gmail.com
3
Department of Management, Faculty of Social Sciences and Economics, Alzahra University, Tehran, Iran.
AUTHOR
ORIGINAL_ARTICLE
High-Dimensional Unsupervised Active Learning Method
In this work, a hierarchical ensemble of projected clustering algorithms for high-dimensional data is proposed. The basic concept of the algorithm is based on the active learning method (ALM), a fuzzy learning scheme inspired by some behavioral features of human brain functionality. The high-dimensional unsupervised active learning method (HUALM) is a clustering algorithm that blurs the data points as one-dimensional ink-drop patterns in order to summarize the effects of all data points, and then applies a threshold to the resulting vectors. It is based on an ensemble clustering method that performs one-dimensional density partitioning to produce an ensemble of clustering solutions. It then assigns a unique prime number to the data points in each partition as their label. Subsequently, the labels of every data point are multiplied together to produce absolute labels, and data points with identical absolute labels fall into the same cluster. The hierarchical property of the algorithm clusters complex data by zooming into each already-formed cluster to find further sub-clusters. The algorithm is verified using several synthetic and real-world datasets. The results show that the proposed method has promising performance compared to some well-known high-dimensional data clustering algorithms.
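The prime-label combination step can be sketched as follows, under the assumption that each dimension's 1-D density partitioning has already assigned every point a partition index; the function names are illustrative, not from the paper:

```python
def first_primes(n):
    """Return the first n prime numbers (trial division is fine at this scale)."""
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    return primes

def prime_label_clusters(partition_labels):
    """Combine per-dimension partition labels by prime multiplication.

    partition_labels: list over dimensions; each entry lists, for every
    data point, the index of the 1-D density partition it fell into.
    Each distinct (dimension, partition) pair gets a unique prime; a
    point's absolute label is the product of its primes, so two points
    share an absolute label iff they fell into the same partition in
    every dimension (unique factorization guarantees no collisions)."""
    pairs = sorted({(d, p) for d, labs in enumerate(partition_labels) for p in labs})
    primes = dict(zip(pairs, first_primes(len(pairs))))
    absolute = [1] * len(partition_labels[0])
    for d, labs in enumerate(partition_labels):
        for i, p in enumerate(labs):
            absolute[i] *= primes[(d, p)]
    clusters = {}
    for i, lab in enumerate(absolute):
        clusters.setdefault(lab, []).append(i)
    return list(clusters.values())
```

Prime multiplication is what makes the combination order-free and collision-free: the product uniquely encodes the multiset of partitions a point belongs to.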
https://jad.shahroodut.ac.ir/article_1826_39827e6bc4a282f0784fb60b1391806f.pdf
2020-07-01
391
407
10.22044/jadm.2020.7826.1941
Ensemble Clustering
High Dimensional Clustering
Hierarchical Clustering
Unsupervised Active Learning Method
V.
Ghasemi
v.ghasemi@kut.ac.ir
1
Department of Computer Engineering, Kermanshah University of Technology. Kermanshah, Iran.
LEAD_AUTHOR
M.
Javadian
mo.javadian@gmail.com
2
Department of Computer Engineering, Kermanshah University of Technology. Kermanshah, Iran.
AUTHOR
S.
Bagheri Shouraki
bagheri-s@sharif.edu
3
Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran.
AUTHOR
ORIGINAL_ARTICLE
A Routing-Aware Simulated Annealing-based Placement Method in Wireless Network on Chips
Wireless network on chip (WiNoC) is one of the promising on-chip interconnection networks for on-chip system architectures. In addition to wired links, these architectures employ wireless links, which let packets reach their destination nodes faster and with less power consumption. The wireless links are provided by wireless interfaces in wireless routers. WiNoC architectures differ in the positions of the wireless routers and in how they interact with other routers, so the placement of wireless interfaces is an important step in designing WiNoC architectures. In this paper, we propose a simulated annealing (SA) placement method that considers the routing algorithm as a factor in designing the cost function. To evaluate the proposed method, Noxim, a cycle-accurate network-on-chip simulator, is used. The simulation results show that the proposed method can reduce flit latency by up to 24.6% with only about a 0.2% increase in power consumption.
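The SA search itself can be sketched generically. The paper's routing-aware cost function is not reproduced here; the usage below substitutes a toy hop-distance cost (placing one wireless interface on a line of routers) purely for illustration:

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=1.0, t_min=1e-3,
                        alpha=0.95, steps=50):
    """Generic simulated annealing: always accept improvements, accept a
    worse candidate with probability exp(-dE/T), and cool geometrically."""
    state, e = initial, cost(initial)
    best, best_e = state, e
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = neighbor(state)
            ce = cost(cand)
            if ce < e or random.random() < math.exp((e - ce) / t):
                state, e = cand, ce
                if e < best_e:
                    best, best_e = state, e
        t *= alpha
    return best, best_e
```

In the placement setting, `state` would encode which routers carry wireless interfaces and `cost` would fold in the routing algorithm, as the paper proposes; the acceptance of occasional worse placements is what lets SA escape poor local optima.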
https://jad.shahroodut.ac.ir/article_1824_6a9c6c412584f5f041052ce10ad7b55a.pdf
2020-07-01
409
415
10.22044/jadm.2020.8964.2034
Simulated annealing
Wireless Network on Chip
Placement
A.R.
Tajary
tajary@shahroodut.ac.ir
1
Faculty of Computer Engineering, Shahrood University of Technology, Shahrood, Iran.
LEAD_AUTHOR
E.
Tahanian
e.tahanian@shahroodut.ac.ir
2
Faculty of Computer Engineering, Shahrood University of Technology, Shahrood, Iran.
AUTHOR
ORIGINAL_ARTICLE
Identification of Multiple Input-multiple Output Non-linear System Cement Rotary Kiln using Stochastic Gradient-based Rough-neural Network
Because of the interactions among the variables of a multiple input-multiple output (MIMO) nonlinear system, its identification is a difficult task, particularly in the presence of uncertainties. The cement rotary kiln (CRK) is a MIMO nonlinear system in the cement factory with a complicated mechanism and uncertain disturbances. Identification of the CRK is very important for different purposes such as prediction, fault detection, and control. In previous works, the CRK was identified after decomposing it into several multiple input-single output (MISO) systems. In this paper, for the first time, a rough-neural network (R-NN) is utilized for the identification of the CRK without using MISO structures. The R-NN is a neural structure designed on the basis of rough set theory for dealing with uncertainty and vagueness. In addition, a stochastic gradient descent learning algorithm is proposed for training the R-NNs. The simulation results show the effectiveness of the proposed methodology.
https://jad.shahroodut.ac.ir/article_1804_39ce137f280721b29917428bca02e180.pdf
2020-07-01
417
425
10.22044/jadm.2020.8865.2021
Cement Rotary Kiln
Rough-Neural Network
Stochastic Gradient Descent Learning
System Identification
Uncertainty
Gh.
Ahmadi
ghasem453@gmail.com
1
Department of Mathematics, Payame Noor University, Tehran, Iran.
LEAD_AUTHOR
M.
Teshnelab
teshnehlab@eetd.kntu.ac.ir
2
Control Engineering Department, Faculty of Electrical Engineering, K.N. Toosi University of Technology, Tehran, Iran.
AUTHOR
ORIGINAL_ARTICLE
Vehicle Type Recognition based on Dimension Estimation and Bag of Word Classification
Fine-grained vehicle type recognition is one of the main challenges in machine vision. Almost all methods presented so far have identified the vehicle type with the help of feature extraction and classifiers. Because of the apparent similarity between car classes, these methods may produce erroneous results. This paper presents a methodology that uses two criteria to identify common vehicle types. The first criterion is feature extraction and classification; the second is the use of the car's dimensions for classification. The method consists of three phases. In the first phase, the coordinates of the vanishing points are obtained. In the second phase, the bounding box and dimensions are calculated for each passing vehicle. Finally, in the third phase, the exact vehicle type is determined by combining the results of the two criteria. To evaluate the proposed method, a dataset of images and videos prepared by the authors was used, recorded from viewpoints similar to those of a roadside camera. Most existing methods use high-quality images for evaluation and are not applicable in the real world; in the proposed method, real-world video frames are used to determine the exact vehicle type, and an accuracy of 89.5% is achieved, which represents good performance.
https://jad.shahroodut.ac.ir/article_1722_9749fae8e64658949e4fc6907940b5f7.pdf
2020-07-01
427
438
10.22044/jadm.2020.8375.1975
bag of words
camera calibration
dimension estimation
vehicle type recognition
R.
Asgarian Dehkordi
r_asgarian_dehkordi@yahoo.com
1
Faculty of Electrical Engineering and Robotics, Shahrood University of Technology, Shahrood, Iran.
AUTHOR
H.
Khosravi
hosseinkhosravi@gmail.com
2
Faculty of Electrical Engineering and Robotics, Shahrood University of Technology, Shahrood, Iran.
LEAD_AUTHOR
ORIGINAL_ARTICLE
Coordinate Exhaustive Search Hybridization Enhancing Evolutionary Optimization Algorithms
In general, hybridized evolutionary optimization algorithms follow the routine of "first diversification, then intensification". In other words, these hybridized methods all begin in a global search mode using a highly random initial search population and then switch at some stage to an intense local search mode. Population initialization remains a crucial point in hybridized evolutionary optimization algorithms, since it can affect the speed of convergence and the quality of the final solution. In this study, we introduce a new approach, a paradigm shift that reverses the "diversification" and "intensification" routines. Instead of starting from a random initial population, we first find a unique starting point by conducting an initial exhaustive search, based on the coordinate exhaustive search local optimization algorithm, for only a single iteration, in order to collect rough but meaningful knowledge about the nature of the problem. Our main assertion is that this approach improves the convergence rate of any evolutionary optimization algorithm. In this study, we illustrate how this unique starting point can be used in the initialization of two evolutionary optimization algorithms, namely Big Bang-Big Crunch optimization and Particle Swarm Optimization. Experiments on a commonly used benchmark test suite, which consists mainly of rotated and shifted functions, show that the proposed initialization procedure leads to great improvement for these two evolutionary optimization algorithms.
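The proposed initialization can be sketched as a single coordinate-wise grid pass whose result seeds the population. The grid resolution and the mid-range starting point are illustrative assumptions; the paper's exact coordinate exhaustive search procedure is not reproduced:

```python
import numpy as np

def coordinate_search_start(f, dim, lo, hi, n_grid=11):
    """One pass of coordinate exhaustive search: scan each coordinate
    over a grid while holding the others fixed, keeping each
    coordinate's best value before moving to the next."""
    x = np.full(dim, (lo + hi) / 2.0)           # assumed mid-range start
    grid = np.linspace(lo, hi, n_grid)
    for d in range(dim):
        vals = []
        for g in grid:
            trial = x.copy()
            trial[d] = g
            vals.append(f(trial))
        x[d] = grid[int(np.argmin(vals))]       # lock in the best grid value
    return x

def seeded_population(f, dim, lo, hi, pop_size, seed=0):
    """Random population with the coordinate-search point injected as member 0,
    so the evolutionary optimizer starts with a-priori knowledge of the problem."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    pop[0] = coordinate_search_start(f, dim, lo, hi)
    return pop
```

On a separable function the single pass lands very close to the optimum; on the rotated and shifted functions of the benchmark it only yields a rough starting point, which is exactly the "rough but meaningful knowledge" the abstract refers to.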
https://jad.shahroodut.ac.ir/article_1612_0f1ab56c8f1415583fc32a8fee40e40b.pdf
2020-07-01
439
449
10.22044/jadm.2019.7351.1875
Coordinate exhaustive search
evolutionary computation
Big Bang- Big Crunch optimization algorithm
hybridization
a-priori knowledge utilization
Osman K.
Erol
okerol@itu.edu.tr
1
Istanbul Technical University, Electric-Electronics Faculty, Control and Automation Dept., Maslak, Sariyer, Turkey.
LEAD_AUTHOR
I.
Eksin
2
Istanbul Technical University, Electric-Electronics Faculty, Control and Automation Dept., Maslak, Sariyer, Turkey.
AUTHOR
A.
Akdemir
3
Bogazici University, Engineering Faculty, Computer Engineering Dept., Bebek, Besiktas, Turkey.
AUTHOR
A.
Aydınoglu
eksin@itu.edu.tr
4
Istanbul Technical University, Electric-Electronics Faculty, Control and Automation Dept., Maslak, Sariyer, Turkey.
AUTHOR