Original/Review Paper
Vahid Kiani; Mahdi Imanparast
Abstract
In this paper, we present a bi-objective virtual-force local search particle swarm optimization (BVFPSO) algorithm that improves the placement of sensors in wireless sensor networks by simultaneously increasing the coverage rate and preserving the battery energy of the sensors. In most wireless sensor networks, the sensor nodes are first deployed randomly in the target area, and their deployment must then be adjusted so that the desired objectives are met. In the proposed BVFPSO algorithm, PSO serves as the basic meta-heuristic and the virtual-force operator acts as the local search. To the best of our knowledge, this is the first time a bi-objective PSO algorithm has been combined with a virtual-force operator to improve the coverage rate of sensors while preserving their battery energy. Simulations on initial random deployments with different numbers of sensors show that, by combining the two objectives and using the virtual-force local search, BVFPSO achieves a more efficient deployment than the competing algorithms PSO, GA, FRED, and VFA, providing the maximum coverage rate and the minimum energy consumption simultaneously.
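A minimal Python sketch of the two ingredients named above, written for illustration only (it is not the authors' implementation): a weighted bi-objective fitness that rewards area coverage and penalizes sensor movement as a proxy for battery energy, and a virtual-force step in which nearby sensors repel one another. The area size, sensing radius, weights, and distance threshold are hypothetical parameters.

```python
import numpy as np

def coverage_rate(sensors, area=100.0, r_sense=10.0, grid=50):
    """Fraction of grid points covered by at least one sensor (binary disc model)."""
    xs = np.linspace(0, area, grid)
    pts = np.array(np.meshgrid(xs, xs)).reshape(2, -1).T            # (grid^2, 2)
    d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
    return np.mean((d <= r_sense).any(axis=1))

def movement_energy(sensors, initial):
    """Proxy for battery cost: total distance moved from the initial random drop."""
    return np.sum(np.linalg.norm(sensors - initial, axis=1))

def fitness(sensors, initial, w_cov=0.8, w_energy=0.2, area=100.0):
    """Weighted bi-objective fitness: maximize coverage, minimize movement energy."""
    energy_norm = movement_energy(sensors, initial) / (len(sensors) * area)
    return w_cov * coverage_rate(sensors, area) - w_energy * energy_norm

def virtual_force_step(sensors, d_thresh=15.0, step=1.0):
    """One local-search step: sensors closer than d_thresh repel each other."""
    new = sensors.copy()
    for i, s in enumerate(sensors):
        diff = s - sensors                                           # vectors from others to s
        dist = np.linalg.norm(diff, axis=1)
        mask = (dist > 0) & (dist < d_thresh)
        if mask.any():
            force = (diff[mask] / dist[mask, None]).sum(axis=0)     # net repulsive direction
            new[i] = s + step * force / (np.linalg.norm(force) + 1e-9)
    return new
```

In a PSO loop, such a virtual-force step would be applied to each particle's decoded sensor positions before the particle's fitness is re-evaluated.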
Original/Review Paper
Ali Yousefi; Kambiz Badie; Mohammad Mehdi Ebadzadeh; Arash Sharifi
Abstract
Learning classifier systems have recently been used to control physical robots, sensory robots, and intelligent rescue systems. The most important challenge in these systems, which model real environments, is their non-Markov nature. It is therefore necessary to use memory to store system states so that decisions can be made based on a chain of previous states. In this research, a memory-based XCS is proposed that identifies efficient rules and thereby helps the classifier system apply more effective rules. The proposed model was implemented on five important maze maps and led to a reduction in the number of steps needed to reach the goal, as well as an increase in the number of successful runs, on these maps.
Original/Review Paper
S. Hosseini; M. Khorashadizade
Abstract
High dimensionality is the biggest problem when working with large datasets. Feature selection is a procedure for reducing the dimensionality of a dataset by removing redundant and irrelevant features; the most effective features remain, which increases algorithm performance. In this paper, a novel feature-selection procedure is presented based on a binary teaching-learning-based optimization algorithm with mutation (BMTLBO). The TLBO algorithm is one of the most efficient and practical optimization techniques. Although it converges quickly and benefits from a strong exploration capability, it may become trapped in a local optimum, so we try to establish a balance between exploration and exploitation. The proposed method has two parts. First, we use the binary version of the TLBO algorithm for feature selection and add a mutation operator to provide a strong local-search capability (BMTLBO). Second, we use a modified TLBO algorithm with a self-learning phase (SLTLBO) to train a neural network, demonstrating the application to classification problems and evaluating the performance of the method. We tested the proposed method on 14 datasets in terms of classification accuracy and the number of selected features. The results show that BMTLBO outperforms the standard TLBO algorithm and confirm the potency of the proposed method. The results are very promising and close to optimal.
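A hedged sketch, under our own assumptions, of the two operators the abstract emphasizes: a wrapper fitness that scores a binary feature mask by cross-validated accuracy (the KNN classifier and the accuracy/feature-count weighting are illustrative choices, not the paper's), and the bit-flip mutation used as a local-search step after the teacher and learner phases.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, alpha=0.99):
    """Wrapper fitness: cross-validated accuracy on the selected features,
    lightly penalized by the fraction of features kept."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.mean())

def mutate(mask, rate=0.05, rng=np.random.default_rng(0)):
    """Bit-flip mutation: each bit of the binary feature mask is flipped
    with a small probability."""
    flip = rng.random(mask.size) < rate
    child = mask.copy()
    child[flip] = 1 - child[flip]
    return child

def accept_if_better(mask, X, y):
    """Keep the mutated mask only if it improves the wrapper fitness."""
    cand = mutate(mask)
    return cand if fitness(cand, X, y) > fitness(mask, X, y) else mask
```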
Original/Review Paper
Amin Moradbeiky
Abstract
Managing software projects is full of challenges because of their intangible nature, particularly when predicting the effort needed for development. Accordingly, many studies have attempted to devise models for estimating the effort required to develop software. According to the literature, the accuracy of estimator models and methods can be improved by the correct application of data filtering or feature weighting techniques, and numerous models based on machine learning have been proposed for data modeling. This study proposes a new model that combines data filtering and feature weighting techniques to improve the estimation accuracy in the final data-modeling step. The proposed model consists of three layers. The tools and techniques in the first and second layers select the most effective features and weight them with the help of the Lightning Search Algorithm (LSA). By combining LSA with an artificial neural network in the third layer, an estimator model is built on top of the first and second layers; the upper layers filter and analyze the data of the lower layers, and this arrangement significantly increases the final estimation accuracy. Three datasets of real projects were used to evaluate the accuracy of the proposed model, and the results were compared with those obtained from different methods based on standard performance criteria, indicating that the proposed model effectively improves the estimation accuracy.
Technical Paper
S. Ahmadluei; K. Faez; B. Masoumi
Abstract
Deep convolutional neural networks (CNNs) have attained remarkable success in numerous visual recognition tasks. Two challenges arise when adopting CNNs in real-world applications: a) existing CNNs are computationally expensive and memory intensive, impeding their use in edge computing; and b) there is no standard methodology for designing a CNN architecture for a given problem. Network pruning/compression has emerged as a research direction to address the first challenge and has proven successful in moderating the computational load of CNNs. For the second challenge, various evolutionary algorithms have been proposed so far. The algorithm proposed in this paper can be viewed as a solution to both challenges. Instead of using constant predefined criteria to evaluate the filters of the CNN layers, the proposed algorithm establishes the evaluation criteria in an online manner during network training, based on the combined profit of each filter in its own layer and in the next layer. In addition, the proposed method inserts new filters into the CNN layers, so the algorithm is not simply a pruning strategy but also determines the optimal number of filters. Training on multiple CNN architectures allows us to demonstrate the efficacy of our approach empirically. Compared to current pruning algorithms, our algorithm yields a network with a remarkable prune ratio and accuracy. Although a single epoch of the proposed pruning procedure is relatively expensive, the algorithm reaches the resulting network faster than the other algorithms overall.
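The "profit" criterion is described only at a high level, so the following PyTorch sketch uses a plausible stand-in rather than the paper's exact rule: each filter's score combines the magnitude of its own weights with how strongly the next convolutional layer uses the feature map it produces, and the lowest-scoring filters become pruning candidates. The keep ratio is a hypothetical parameter.

```python
import torch

def filter_profit(conv, next_conv):
    """Illustrative 'profit' score for each filter of `conv`: combine the
    filter's own weight magnitude with the magnitude of the weights the
    next layer applies to the feature map that filter produces."""
    own = conv.weight.detach().abs().sum(dim=(1, 2, 3))         # (out_channels,)
    usage = next_conv.weight.detach().abs().sum(dim=(0, 2, 3))  # (in_channels,) == (out_channels,)
    return own * usage

def select_filters(conv, next_conv, keep_ratio=0.7):
    """Indices of the filters to keep: the top `keep_ratio` fraction by profit."""
    profit = filter_profit(conv, next_conv)
    k = max(1, int(keep_ratio * profit.numel()))
    return torch.topk(profit, k).indices.sort().values
```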
Original/Review Paper
Z. MohammadHosseini; A. Jalaly Bidgoly
Abstract
Social media is an inseparable part of human life, although information published through social media is not always true. Rumors may spread easily and quickly on social media; hence, it is vital to have a tool for rumor veracity detection. Previous work has shown that users' stances are an important tool for this goal. To the best of the authors' knowledge, no work has yet studied how the ordering of users' stances affects the achievable accuracy. In this work, we investigate the importance of stance ordering for the efficiency of rumor veracity detection. The paper introduces a concept called trust for ordering stance sequences and shows that a proper definition of this function can significantly improve veracity detection. The paper examines and compares different ways of defining trust. Then, by choosing the best definition, it outperforms state-of-the-art results on a well-known dataset in this field, namely SemEval 2019.
Original/Review Paper
M. Rahimi; A. A. Taheri; H. Mashayekhi
Abstract
Finding an effective way to combine the base learners is an essential part of constructing a heterogeneous ensemble of classifiers. In this paper, we propose a framework for heterogeneous ensembles that uses an artificial neural network to learn a nonlinear combination of the base classifiers. In the proposed framework, a set of heterogeneous classifiers is stacked to produce the first-level outputs. These outputs are then augmented using several combination functions to construct the inputs of the second-level classifier. We conduct an extensive set of experiments on 121 datasets and compare the proposed method with other established and state-of-the-art heterogeneous methods. The results demonstrate that the proposed scheme outperforms many heterogeneous ensembles and is superior to individually tuned classifiers. The proposed method is also compared to several homogeneous ensembles and performs notably better. Our findings suggest that the improvements are even more significant on larger datasets.
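A minimal scikit-learn sketch of the general idea, not the paper's exact pipeline (in particular, it omits the extra combination functions used to augment the first-level outputs): heterogeneous base classifiers are stacked, their predicted probabilities form the second-level inputs, and a small neural network learns the nonlinear combination.

```python
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Heterogeneous first-level classifiers whose probabilistic outputs
# form the inputs of the second-level learner.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("lr", LogisticRegression(max_iter=1000)),
]

# A small neural network learns a nonlinear combination of the base outputs.
ensemble = StackingClassifier(
    estimators=base_learners,
    final_estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    stack_method="predict_proba",
    cv=5,
)
# Usage: ensemble.fit(X_train, y_train); ensemble.score(X_test, y_test)
```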
Original/Review Paper
M. M. Jaziriyan; F. Ghaderi
Abstract
Most existing neural machine translation (NMT) methods translate sentences without considering the context. It has been shown that exploiting inter- and intra-sentential context can improve NMT models and yield better overall translation quality. However, providing document-level data is costly, so properly exploiting contextual data from monolingual corpora can help translation quality. In this paper, we propose a new method for context-aware neural machine translation (CA-NMT) that combines hierarchical attention networks (HAN) and automatic post-editing (APE) to fix discourse phenomena when context is lacking. HAN is used when only a small amount of document-level data is available, while APE can be trained on vast monolingual document-level data to improve the results further. Experimental results show that HAN and APE complement each other in mitigating contextual translation errors and further improve CA-NMT, achieving a reasonable improvement over HAN alone (a BLEU score of 22.91 on the En-De news-commentary dataset).
Original/Review Paper
H.3. Artificial Intelligence
M. Taghian; A. Asadi; R. Safabakhsh
Abstract
The quality of the features extracted from a long sequence of raw instrument prices greatly affects the performance of the trading rules learned by machine learning models. Employing a neural encoder-decoder structure to extract informative features from complex input time series has proved very effective in other popular tasks such as neural machine translation and video captioning. In this paper, a novel end-to-end model based on the neural encoder-decoder framework combined with deep reinforcement learning is proposed to learn single-instrument trading strategies from a long sequence of raw prices of the instrument. In addition, the effects of different encoder structures and various forms of the input sequences on the performance of the learned strategies are investigated. Experimental results show that the proposed model outperforms other state-of-the-art models in highly dynamic environments.
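A toy PyTorch sketch of the overall architecture, under our own assumptions about layer types and sizes: a recurrent encoder compresses a long window of raw prices into a state vector, and a policy head maps that state to trading-action logits that a reinforcement-learning algorithm could then optimize. The decoder side and the RL training loop are omitted.

```python
import torch
import torch.nn as nn

class EncoderPolicy(nn.Module):
    """Illustrative model: a GRU encoder summarizes a window of raw prices,
    and a policy head outputs logits for short / hold / long actions."""
    def __init__(self, input_dim=1, hidden_dim=64, n_actions=3):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(hidden_dim, 32), nn.ReLU(), nn.Linear(32, n_actions)
        )

    def forward(self, prices):             # prices: (batch, seq_len, 1)
        _, h = self.encoder(prices)        # h: (1, batch, hidden_dim)
        return self.policy(h.squeeze(0))   # action logits: (batch, n_actions)

# Usage: logits = EncoderPolicy()(torch.randn(8, 200, 1))  # 8 windows of 200 prices
```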
Original/Review Paper
P. Abdzadeh; H. Veisi
Abstract
Automatic Speaker Verification (ASV) systems have proven to be vulnerable to various types of presentation attacks, among which Logical Access attacks are manufactured using voice conversion and text-to-speech methods. In recent years, a great deal of work has concentrated on synthetic speech detection, and with the arrival of deep learning-based methods and their success in various fields of computer science, they have become the prevailing tool for this task as well. Most deep neural network-based techniques for synthetic speech detection employ acoustic features based on the Short-Time Fourier Transform (STFT), extracted from the raw audio signal. Recently, however, it has been found that using the Constant Q Transform (CQT) spectrogram can benefit both the performance and the processing time and power of deep learning-based synthetic speech detection. In this work, we compare the use of the CQT spectrogram with some of the most widely used STFT-based acoustic features. As secondary objectives, we improve the model's performance as much as possible using methods such as self-attention and one-class learning, and we also address short-duration synthetic speech detection. Finally, we find that the CQT spectrogram-based model not only outperforms the STFT-based acoustic feature extraction methods but also reduces the processing time and resources needed to distinguish genuine speech from fake. The CQT spectrogram-based model also ranks well among the best works on the LA subset of the ASVspoof 2019 dataset, especially in terms of Equal Error Rate.
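A small sketch of how a CQT spectrogram could be extracted as the model input, using librosa; the sampling rate, number of bins, and hop length below are illustrative values, not those used in the paper.

```python
import numpy as np
import librosa

def cqt_features(path, sr=16000, n_bins=84, hop_length=256):
    """Log-magnitude CQT spectrogram of an utterance: the kind of 2-D
    time-frequency input a CNN-based countermeasure could consume."""
    y, _ = librosa.load(path, sr=sr)
    C = librosa.cqt(y, sr=sr, n_bins=n_bins, hop_length=hop_length)
    return librosa.amplitude_to_db(np.abs(C), ref=np.max)   # shape: (n_bins, frames)
```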
Original/Review Paper
Seyed Mahdi Sadatrasoul; Omid Mahdi Ebadati; Amir Amirzadeh Irani
Abstract
Companies have different motivations for using smoothing in their financial statements, including annual general meetings, auditing, regulatory and supervisory institutions, and shareholder requirements. Smoothing is performed through the various possible and feasible choices available in recognizing a company's income, costs, expenses, assets, and liabilities. Smoothing can affect the reliability of credit scoring models: it can cause facilities to be granted to a non-creditworthy organization or denied to a creditworthy one. Both outcomes are known as decision errors, reported as type I and type II errors, and are very important for a bank's loan portfolio. To the best of the authors' knowledge, this paper investigates this issue in credit scoring studies for the first time. Data from companies associated with a major Asian bank are first modeled using logistic regression. Different smoothing scenarios are then tested; the Wilcoxon statistic indicates that traditional credit scoring models suffer significant errors when smoothing procedures change the parameters of a company's financial statements and balance sheets by more than 20%.
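A brief sketch of one way such a comparison could be set up, under our own assumptions: a logistic-regression scorecard is evaluated by cross-validation on the original and on the smoothed financial features, and the paired per-fold error rates are compared with the Wilcoxon signed-rank test from SciPy.

```python
from scipy.stats import wilcoxon
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def scoring_errors(X, y, cv=10):
    """Per-fold misclassification rate of a logistic-regression scorecard."""
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
    return 1.0 - acc

# X_original: features from the reported statements
# X_smoothed: the same features after applying a smoothing scenario
# errors_a = scoring_errors(X_original, y)
# errors_b = scoring_errors(X_smoothed, y)
# stat, p = wilcoxon(errors_a, errors_b)  # small p => smoothing significantly degrades the scorecard
```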
Original/Review Paper
H. Morshedlou; A.R. Tajari
Abstract
Edge computing is an evolving approach for meeting the growing computing and networking demands of end devices and smart things. It allows computation to be offloaded from cloud data centers to the network edge for lower latency, security, and privacy preservation. Although energy efficiency in cloud data centers has been widely studied, energy efficiency in edge computing has received little attention. In this paper, a new adaptive and decentralized approach is proposed to improve energy efficiency in edge environments. In the proposed approach, edge servers collaborate with each other to reach an efficient plan. The approach is adaptive and considers the workload status in local, neighboring, and global areas. The results of the conducted experiments show that the proposed approach can improve energy efficiency at the network edge; for example, at a task completion rate of 100%, it decreases the energy consumption of edge servers from 1053 kWh to 902 kWh.