Technical Paper
M. Gordan; Saeed R. Sabbagh-Yazdi; Z. Ismail; Kh. Ghaedi; H. Hamad Ghayeb
Abstract
A structural health monitoring system consists of two components: a data collection approach comprising a network of sensors that record the structural responses, and an extraction methodology for obtaining useful information about the structural health condition. In this regard, data mining, one of the emerging computer-based technologies, can be employed to extract valuable information from the obtained sensor databases. On the other hand, the data inverse analysis scheme, a problem-based procedure, has been developing rapidly. Therefore, this scheme and data mining should be combined to satisfy the increasing demand for data analysis, especially in complex systems such as bridges. Consequently, this study develops a damage detection methodology based on these strategies. To this end, an inverse analysis approach using data mining is applied to a composite bridge. The support vector machine (SVM) algorithm is utilized to generate the patterns from a vibration characteristics dataset. To compare the robustness and accuracy of the predicted outputs, four kernel functions, namely linear, polynomial, sigmoid, and radial basis function (RBF), are applied to build the patterns. The results demonstrate the feasibility of the proposed method for detecting damage in composite slab-on-girder bridges.
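As a rough illustration of the kernel comparison described above, the following minimal Python sketch trains an SVM with each of the four kernels and reports cross-validated accuracy. The feature matrix and labels are random placeholders, not the paper's vibration data.

```python
# Minimal sketch: comparing SVM kernels on a stand-in vibration-feature dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # placeholder for modal/vibration features
y = rng.integers(0, 2, size=200)     # placeholder damaged / undamaged labels

for kernel in ("linear", "poly", "sigmoid", "rbf"):
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0, gamma="scale"))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{kernel:8s} mean CV accuracy = {scores.mean():.3f}")
```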
Original/Review Paper
F. Kaveh-Yazdy; S. Zarifzadeh
Abstract
Due to their structure and usage conditions, water meters face degradation, breakage, freezing, and leakage problems. Various studies have sought to determine the appropriate time to replace degraded meters. Earlier studies have used several features, such as user meteorological parameters, usage conditions, water network pressure, and meter structure, to detect failed water meters. This article proposes a recommendation framework that uses registered water consumption values as input data and provides meter replacement recommendations. The framework takes time series of registered consumption values and preprocesses them in two rounds to extract effective features. Then, multiple un-/semi-supervised outlier detection methods are applied to the processed data and assign outlier/normal labels to them. At the final stage, a hypergraph-based ensemble method receives these labels and combines them to discover the most suitable label. Due to the unavailability of ground-truth labeled data for meter replacement, we compare our method with respect to its false positive rate (FPR) and two internal metrics: the Dunn index and the Davies-Bouldin index. Results of our comparative experiments show that the proposed framework detects more compact clusters with smaller variance.
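A minimal sketch of the ensemble idea, under the assumption of placeholder consumption features: several unsupervised detectors vote and the votes are combined by simple majority (a stand-in for the paper's hypergraph-based ensemble), with the Davies-Bouldin index used as an internal check.

```python
# Minimal sketch: ensemble of unsupervised outlier detectors on placeholder data.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.covariance import EllipticEnvelope
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(480, 6)),              # placeholder "normal" meters
               rng.normal(loc=6.0, size=(20, 6))])     # injected anomalous meters

detectors = [
    IsolationForest(contamination=0.05, random_state=0),
    LocalOutlierFactor(n_neighbors=20, contamination=0.05),
    EllipticEnvelope(contamination=0.05, random_state=0),
]
votes = np.column_stack([d.fit_predict(X) for d in detectors])  # +1 normal, -1 outlier
labels = np.where((votes == -1).sum(axis=1) >= 2, -1, 1)        # majority vote

print("flagged meters:", int((labels == -1).sum()))
print("Davies-Bouldin index:", davies_bouldin_score(X, labels))
```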
Original/Review Paper
S. Bayatpour; Seyed M. H. Hasheminejad
Abstract
Most of the methods proposed for segmenting image objects are supervised and therefore costly due to their need for large amounts of labeled data. In this article, however, we present a method for segmenting objects based on a meta-heuristic optimization that does not need any training data. The procedure consists of two main stages: edge detection and texture analysis. In the edge detection stage, we utilize invasive weed optimization (IWO) and local thresholding. Edge detection methods based on local histograms are efficient, but it is very difficult to determine the desired parameters manually, and these parameters must be selected specifically for each image. In this paper, a method is presented for the automatic determination of these parameters using an evolutionary algorithm. Evaluation of this method demonstrates its high performance on natural images.
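To make the parameter-tuning point concrete, here is a minimal sketch of a local-thresholding edge detector; the window size and offset are exactly the kind of per-image parameters an evolutionary search such as IWO would tune. The image is a random placeholder and the IWO loop itself is omitted.

```python
# Minimal sketch: local-thresholding edge detection with tunable parameters.
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def local_threshold_edges(image, window=15, offset=0.02):
    grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
    grad = grad / (grad.max() + 1e-12)
    local_mean = uniform_filter(grad, size=window)   # local average of gradient magnitude
    return grad > (local_mean + offset)              # edge where gradient exceeds local mean

img = np.random.rand(64, 64)          # placeholder grayscale image in [0, 1]
edges = local_threshold_edges(img)
print("edge pixels:", int(edges.sum()))
```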
Applied Article
M. Nasiri; H. Rahmani
Abstract
Determining the personality dimensions of individuals is very important in psychological research. The most well-known model of personality dimensions is the Five-Factor Model (FFM). There are two approaches for determining these dimensions: manual and automatic. In the manual approach, psychologists discover the dimensions through personality questionnaires. In the automatic approach, various personal input types (text, image, video) are gathered and analyzed for this purpose. In this paper, we propose a method called DENOVA (DEep learning based on the ANOVA), which predicts FFM using deep learning based on the analysis of variance (ANOVA) of words. DENOVA first applies ANOVA to select the most informative terms. Then, it employs Word2Vec to extract document embeddings. Finally, it uses Support Vector Machine (SVM), Logistic Regression, XGBoost, and Multilayer Perceptron (MLP) classifiers to predict FFM. The experimental results show that DENOVA outperforms the state-of-the-art methods in predicting FFM by 6.91% on average with respect to accuracy.
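A minimal sketch of the three-stage pipeline described above, assuming a toy corpus and placeholder trait labels: ANOVA (F-test) term selection, Word2Vec document embeddings averaged over the selected terms, and an SVM as one of the candidate classifiers.

```python
# Minimal sketch of a DENOVA-style pipeline on a toy corpus.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from gensim.models import Word2Vec

docs = ["i love meeting new people", "i prefer quiet evenings alone",
        "parties energise me", "reading by myself is relaxing"]
y = np.array([1, 0, 1, 0])            # placeholder labels for one FFM trait

# 1) ANOVA (F-test) keeps the most informative terms
counts = CountVectorizer()
X_counts = counts.fit_transform(docs)
selector = SelectKBest(f_classif, k=5).fit(X_counts, y)
keep = set(counts.get_feature_names_out()[selector.get_support()])

# 2) Word2Vec embeddings, averaged over each document's selected terms
tokens = [[w for w in d.split() if w in keep] for d in docs]
w2v = Word2Vec([d.split() for d in docs], vector_size=50, min_count=1, seed=0)
X_emb = np.array([np.mean([w2v.wv[w] for w in t], axis=0) if t else np.zeros(50)
                  for t in tokens])

# 3) A classifier (SVM here) predicts the trait from the embeddings
clf = SVC().fit(X_emb, y)
print("train accuracy:", clf.score(X_emb, y))
```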
Methodologies
A.H. Damia; M. Esnaashari; M.R. Parvizimosaed
Abstract
In structural software testing, test data generation is essential. Generating test data is a search problem, and search algorithms can be used to solve it. The genetic algorithm is one of the most widely used algorithms in this field, and adjusting its parameters helps to increase its effectiveness. In this paper, an Adaptive Genetic Algorithm (AGA) is used to maintain population diversity for test data generation based on the path coverage criterion; it adapts the recombination and mutation rates using the similarity between chromosomes and the chromosome fitness during each iteration of the algorithm. Experiments show that this method generates test data faster than other versions of the genetic algorithm used in previous work.
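A minimal sketch, in the spirit of the adaptation described above (and of the classic Srinivas-Patnaik scheme), of how crossover and mutation rates could be derived from fitness and parent similarity; the test-data-generation GA itself is omitted and all constants are illustrative assumptions.

```python
# Minimal sketch: adaptive crossover/mutation rates from fitness and similarity.
def adaptive_rates(f_parent, f_avg, f_max, similarity, pc_max=0.9, pm_max=0.1):
    """similarity in [0, 1]: Hamming-style closeness of the two parent chromosomes."""
    if f_parent >= f_avg and f_max > f_avg:
        scale = (f_max - f_parent) / (f_max - f_avg)   # fitter parents are disrupted less
        pc, pm = pc_max * scale, pm_max * scale
    else:
        pc, pm = pc_max, pm_max                        # weak parents keep the maximum rates
    pm = min(1.0, pm * (1.0 + similarity))             # similar parents get extra mutation
    return pc, pm

print(adaptive_rates(f_parent=0.8, f_avg=0.5, f_max=1.0, similarity=0.9))
```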
Original/Review Paper
E. Pejhan; M. Ghasemzadeh
Abstract
This research relates to the development of technology in the field of automatic text-to-image generation. Two main goals are pursued: first, the generated image should look as real as possible; second, the generated image should be a meaningful depiction of the input text. Our proposed method is a Multi-Sentence Hierarchical GAN (MSH-GAN) for text-to-image generation. In this research project, we consider two main strategies: 1) produce a higher-quality image in the first step, and 2) use two additional descriptions to improve the original image in the next steps. Our goal is to use more information to generate images with higher resolution by using more than one input sentence. We have proposed different models based on GANs and memory networks. We have also used a more challenging dataset called ids-ade; this is the first time this dataset has been used in this area. We evaluated our models using the IS, FID, and R-precision metrics. Experimental results demonstrate that our best model performs favorably against basic state-of-the-art approaches such as StackGAN and AttGAN.
Original/Review Paper
H. Khodadadi; V. Derhami
Abstract
A prominent weakness of dynamic programming methods is that they perform operations throughout the entire set of states in a Markov decision process in every updating phase. This paper proposes a novel chaos-based method to solve this problem. For this purpose, a chaotic system is first initialized, and the resultant numbers are mapped onto the environment states through initial processing. In each traverse of the policy iteration method, policy evaluation is performed only once, and only a few states are updated. These states are proposed by the chaotic system. In this method, the policy evaluation and improvement cycle lasts until an optimal policy is formulated for the environment. The same procedure is performed in the value iteration method, where only the values of the few states proposed by the chaotic system are updated in each traverse, while the values of the other states are left unchanged. Unlike the conventional methods, the proposed method can obtain an optimal solution by updating only a limited number of states, which are properly distributed over the environment by the chaotic system. The test results indicate the improved speed and efficiency of the chaotic dynamic programming methods in obtaining the optimal solution in different grid environments.
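The following minimal sketch illustrates the idea on a toy chain MDP (not the paper's grid environments): a logistic chaotic map proposes a handful of states per sweep, and only those states receive Bellman backups, instead of sweeping the whole state set.

```python
# Minimal sketch: value iteration updating only chaotically proposed states.
import numpy as np

n_states, gamma = 25, 0.9
rewards = np.zeros(n_states); rewards[-1] = 1.0           # goal reward at the last state
V = np.zeros(n_states)

def successor(s, a):                                       # actions: -1 (left), +1 (right)
    return min(max(s + a, 0), n_states - 1)

x = 0.3                                                    # chaotic seed
for sweep in range(2000):
    proposed = []
    for _ in range(5):                                     # only a few states per sweep
        x = 4.0 * x * (1.0 - x)                            # logistic map, r = 4
        proposed.append(int(x * n_states) % n_states)
    for s in proposed:                                     # Bellman backup on those states only
        V[s] = max(rewards[successor(s, a)] + gamma * V[successor(s, a)] for a in (-1, 1))

policy = [max((-1, 1), key=lambda a: V[successor(s, a)]) for s in range(n_states)]
print("value of start state:", round(V[0], 3))
```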
Original/Review Paper
Y. Sharafi; M. Teshnelab; M. Ahmadieh Khanesar
Abstract
A new multi-objective evolutionary optimization algorithm is presented based on the competitive optimization algorithm (COOA) to solve multi-objective optimization problems (MOPs). The competitive optimization algorithm is inspired by the natural competition among animals such as birds, cats, bees, and ants. The main contributions of the present study are as follows. First, a novel method is presented to prune the external archive while keeping the diversity of the Pareto front (PF). Second, a hybrid approach combining powerful mechanisms such as opposition-based learning and chaotic maps is used to maintain the diversity of the initial population in the search space. Third, a novel method is provided to transform a multi-objective optimization problem into a single-objective optimization problem. The simulation results of the proposed algorithm were compared with those of some well-known optimization algorithms. The comparisons show that the proposed approach is a strong candidate for solving MOPs.
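A minimal sketch of the second contribution only, under illustrative assumptions (a logistic map and a toy sphere objective): chaotic numbers seed the candidates, opposition-based learning adds their mirrored counterparts, and the fitter half of the combined pool forms the initial population.

```python
# Minimal sketch: chaotic-map + opposition-based population initialisation.
import numpy as np

def init_population(n, dim, lower, upper, objective, seed=0.7):
    x, chaotic = seed, []
    for _ in range(n * dim):
        x = 4.0 * x * (1.0 - x)                 # logistic map in (0, 1)
        chaotic.append(x)
    pop = lower + np.array(chaotic).reshape(n, dim) * (upper - lower)
    opposite = lower + upper - pop              # opposition-based learning
    pool = np.vstack([pop, opposite])
    fitness = np.array([objective(ind) for ind in pool])
    return pool[np.argsort(fitness)[:n]]        # keep the best n (minimisation)

sphere = lambda v: float(np.sum(v ** 2))        # toy single objective for the sketch
print(init_population(10, 3, -5.0, 5.0, sphere)[:2])
```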
Technical Paper
E. Feli; R. Hosseini; S. Yazdani
Abstract
In vitro fertilization (IVF) is one of the scientifically established methods of infertility treatment. This study aimed at improving the performance of predicting the success of IVF using machine learning and its optimization through evolutionary algorithms. A multilayer perceptron (MLP) neural network was proposed to classify the infertility dataset, and a genetic algorithm was used to improve its performance. The proposed model was applied to a dataset of 594 eggs from 94 patients undergoing IVF, of which 318 were good-quality embryos and 276 were lower-quality embryos. For performance evaluation of the MLP model, an ROC curve analysis was conducted and 10-fold cross-validation was performed. The results revealed that this intelligent model is highly efficient, with an accuracy of 96% for the multilayer perceptron neural network, which is promising compared to counterpart methods.
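A minimal sketch of the evaluation protocol described above, assuming a random placeholder feature matrix in place of the clinical embryo data: an MLP classifier assessed with 10-fold cross-validation and ROC AUC. The genetic-algorithm tuning step is not reproduced here.

```python
# Minimal sketch: MLP with 10-fold cross-validation and ROC AUC on placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(594, 10))                  # placeholder clinical/embryo features
y = np.array([1] * 318 + [0] * 276)             # 318 good-quality vs. 276 lower-quality embryos

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
acc = cross_val_score(mlp, X, y, cv=10)
auc = cross_val_score(mlp, X, y, cv=10, scoring="roc_auc")
print(f"10-fold accuracy = {acc.mean():.3f}, ROC AUC = {auc.mean():.3f}")
```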
Original/Review Paper
P. Farzi; R. Akbari
Abstract
Web services are a technology for defining self-describing, structured, and loosely coupled applications. They are accessible all over the web and provide a flexible platform. Although service registries such as Universal Description, Discovery, and Integration (UDDI) provide facilities for users to search for requirements, retrieving results that exactly satisfy users' needs is still a difficult task, since providers and requesters have different views of descriptions and explain them differently. Consequently, one of the most challenging obstacles in the discovery task is how to understand both sides, which is called knowledge-based understanding. This is of immense value for search engines, information retrieval tasks, and various NLP-based tasks. The goal is to help recognize matching degrees precisely and retrieve the most relevant services more directly. In this research, we introduce a conceptual similarity method that facilitates the discovery procedure with less dependency on the provider and user descriptions, reducing the manual intervention of both sides and being more explicit for machines. We provide a comprehensive knowledge-based approach by applying the Latent Semantic Analysis (LSA) model to the WordNet ontology scheme together with a domain-specific, in-sense context-based similarity algorithm. The evaluation of our similarity method on the OWL-S test collection shows that a sense-context similarity algorithm can boost the disambiguation of descriptions, which leads to conceptual clarity. The proposed method improves the performance of service discovery in comparison with recent keyword-based and semantic-based methods.
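As a rough illustration of the LSA component only, the following minimal sketch ranks toy service descriptions against a request by cosine similarity in a truncated-SVD space; the WordNet and sense-context refinements described above are not reproduced, and the descriptions are invented placeholders.

```python
# Minimal sketch: LSA (TF-IDF + truncated SVD) similarity between a request and services.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

services = ["book a hotel room and reserve accommodation",
            "currency exchange rate conversion service",
            "flight ticket booking and airline reservation"]
request = "reserve a plane ticket"

tfidf = TfidfVectorizer().fit(services + [request])
docs = tfidf.transform(services + [request])
vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(docs)
scores = cosine_similarity(vectors[-1:], vectors[:-1])[0]
for desc, score in sorted(zip(services, scores), key=lambda p: -p[1]):
    print(f"{score:+.2f}  {desc}")
```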
Original/Review Paper
Y. Dorfeshan; R. Tavakkoli-Moghaddam; F. Jolai; S.M. Mousavi
Abstract
Multi-criteria decision-making (MCDM) methods have received considerable attention in the last decade for solving problems with a set of alternatives and conflicting criteria. Previously, MCDM methods have primarily relied on the judgment and knowledge of experts for making decisions. This paper introduces a new data- and knowledge-driven MCDM method to reduce dependence on expert assessment. The weights of the criteria are specified using an extended data-driven DEMATEL method. Then, the ranking of the alternatives is determined through knowledge-driven ELECTRE and VIKOR methods. All proposed weighting and ranking methods are developed under grey numbers to cope with uncertainty. Finally, the practicality and applicability of the proposed method are demonstrated by solving an illustrative example.
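For orientation, here is a minimal sketch of classical crisp DEMATEL criterion weighting; the paper's data-driven, grey-number extension is not reproduced, and the direct-influence matrix below is an illustrative placeholder.

```python
# Minimal sketch: crisp DEMATEL weights from a direct-influence matrix.
import numpy as np

D = np.array([[0, 3, 2],      # pairwise direct influence among three criteria
              [1, 0, 4],
              [2, 1, 0]], dtype=float)

N = D / D.sum(axis=1).max()                      # normalise by the largest row sum
T = N @ np.linalg.inv(np.eye(3) - N)             # total-relation matrix T = N(I - N)^-1
prominence = T.sum(axis=1) + T.sum(axis=0)       # R + C: overall importance of each criterion
weights = prominence / prominence.sum()
print("criterion weights:", np.round(weights, 3))
```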
Applied Article
M. Shokohi nia; A. Dideban; F. Yaghmaee
Abstract
Despite the success of ontology in knowledge representation, its reasoning is still challenging. The most important challenge in the reasoning of ontology-based methods is improving realization in the reasoning process; the time complexity of solving the realization problem is NEXPTIME. Realization can be carried out by solving the subsumption and satisfiability problems. On the other hand, uncertainty and ambiguity in these characteristics are unavoidable, so the use of fuzzy theory is necessary. This study proposes a method for overcoming this problem, offering a new solution with suitable time performance. The purpose of this study is to model and improve reasoning and realization in an ontology using Fuzzy-Colored Petri Nets (FCPNs). To this end, an algorithm is presented to improve the realization problem. A Unified Modeling Language (UML) class diagram is used for standard description and representation of the efficiency characteristics, and the RDFS representation is converted to a UML diagram. Fuzzy concepts in fuzzy-colored Petri nets are then introduced. In the next step, an algorithm is presented to convert the ontology description, based on the UML class diagram, into an executable model based on FCPNs. Using this approach, a simple method is developed to form an executable model and perform reasoning based on FCPNs, which can be employed to obtain the results of interest by applying different queries. Finally, the efficiency of the proposed method is evaluated, with the results indicating improved performance from different aspects.
Original/Review Paper
A. Nozaripour; H. Soltanizadeh
Abstract
Sparse representation, owing to advantages such as noise resistance and a strong mathematical foundation, has been recognized as a powerful tool in recent decades. In this paper, using sparse representation, the kernel trick, and a different technique for Region of Interest (ROI) extraction that we presented in our previous work, a new method for dorsal hand vein recognition that is robust against rotation is introduced. In this method, the ROI is selected by changing the length and angle of its sides, so the undesirable effects of hand rotation during image acquisition are largely neutralized; depending on the amount of hand rotation, the ROI in each image will differ in size and shape. In addition, because the dorsal hand vein patterns share the same direction distribution, we use the kernel trick on sparse representation for classification, so most samples from different classes but with the same direction distribution are classified properly. Together, these two techniques yield a method for dorsal hand vein recognition that is effective against hand rotation. An increase of 2.26% in the recognition rate is observed for the proposed method when compared to three conventional SRC-based algorithms and three sparse-coding classification methods that use dictionary learning.
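For reference, here is a minimal sketch of plain sparse representation classification (SRC), the baseline family mentioned above: a test sample is coded over the training dictionary with an l1 solver and assigned to the class with the smallest reconstruction residual. The paper's kernel trick and rotation-robust ROI extraction are not reproduced, and the dictionary is random placeholder data.

```python
# Minimal sketch: plain SRC with class-wise reconstruction residuals.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_classes, per_class, dim = 3, 10, 64
labels = np.repeat(np.arange(n_classes), per_class)
A = rng.normal(size=(dim, n_classes * per_class))        # columns: training vein-pattern features
A /= np.linalg.norm(A, axis=0)                           # l2-normalise dictionary atoms
y = A[:, 4] + 0.05 * rng.normal(size=dim)                # noisy test sample from class 0

x = Lasso(alpha=0.01, max_iter=10000).fit(A, y).coef_    # sparse code of y over the dictionary
residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c]) for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))
```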
Original/Review Paper
Z. Hassani; M. Alambardar Meybodi
Abstract
A major pitfall of the standard version of Particle Swarm Optimization (PSO) is that it might get stuck in local optima. To address this issue, a novel hybrid model based on the combination of PSO and Ant Lion Optimization (ALO) is proposed in this study. The proposed method, called H-PSO-ALO, uses a local search strategy employing the Ant Lion algorithm to select a less correlated and salient feature subset. The objective is to improve the prediction accuracy and adaptability of the model on various datasets by balancing the exploration and exploitation processes. The performance of the method has been evaluated on 30 benchmark classification problems, the CEC 2017 benchmark problems, and some well-known datasets. To verify the performance, four algorithms, namely FDR-PSO, CLPSO, HFPSO, and MPSO, are selected for comparison with H-PSO-ALO. Considering the experimental results, the proposed method outperforms the others in many cases, so it appears to be a desirable candidate for optimization problems on real-world datasets.
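A minimal sketch of the underlying wrapper-style feature selection with plain PSO, under illustrative assumptions (a public scikit-learn dataset, a kNN fitness, a 0.5 binarisation threshold); the Ant Lion local search that makes H-PSO-ALO hybrid is not reproduced here.

```python
# Minimal sketch: PSO-based wrapper feature selection with a kNN fitness.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(4)
n_particles, dim, iters = 10, X.shape[1], 15

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

pos = rng.random((n_particles, dim))                      # continuous positions in [0, 1]
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p > 0.5) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", int((gbest > 0.5).sum()), "accuracy:", round(pbest_fit.max(), 3))
```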