H.3. Artificial Intelligence
Seyed M. H. Hasheminejad; Z. Salimi
Abstract
One of the recent strategies for increasing customer loyalty in the banking industry is the use of a customers' club system. In this system, customers receive scores based on the financial and club activities they perform and, according to the points achieved, obtain credits from the bank. In addition, with the advent of new technologies, fraud is growing in the banking domain as well. Therefore, given the importance of financial activities in the customers' club system, providing an efficient and practical method for detecting fraud is highly important in these types of systems. In this paper, we propose a novel sliding time and scores window-based method, called FDiBC (Fraud Detection in Bank Club), to detect fraud in bank clubs. In FDiBC, 14 features are first derived from each score obtained by a customer member of the bank club; then, based on all the scores of each customer member, five sliding time and scores window-based feature vectors are proposed. To generate training and test data sets from the scores of fraudulent and ordinary customers in the customers' club system of a bank, positive and negative labels are used, respectively. After generating the training data set, learning is performed through two approaches: 1) clustering and binary classification with the OCSVM method for positive data, i.e., fraudulent customers, and 2) multi-class classification with the SVM, C4.5, KNN, and Naïve Bayes methods. The results reveal that FDiBC can detect fraud with 78% accuracy and can therefore be used in practice.
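As an illustration of the one-class branch, the sketch below trains an OCSVM on positive (fraud) examples only; the data, the 14-feature derivation, and the sliding time/scores windows are stand-ins, not the paper's pipeline.

```python
# Minimal sketch of the one-class branch of an FDiBC-style learner.
# X_fraud stands in for window-based 14-dimensional feature vectors of
# fraudulent customers; the feature derivation itself is not reproduced.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_fraud = rng.normal(loc=1.0, size=(200, 14))   # stand-in training data
X_test = rng.normal(loc=0.0, size=(50, 14))     # mostly normal behavior

# Train on positive (fraud) examples only; nu bounds the training error.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_fraud)

# +1 = resembles the fraud training data, -1 = outlier with respect to it.
pred = ocsvm.predict(X_test)
print("flagged as fraud-like:", int((pred == 1).sum()))
```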
N. Zendehdel; S. J. Sadati; A. Ranjbar Noei
Abstract
This manuscript addresses the trajectory tracking problem of autonomous underwater vehicles (AUVs) on the horizontal plane. Adaptive sliding mode control is employed to achieve robust behavior against uncertainties and ocean current disturbances, assuming that the disturbance and its derivative are bounded by unknown levels. The proposed approach is based on a dual-layer adaptive law that is independent of any knowledge of the disturbance bound and its derivative. The approach significantly reduces the chattering effect that is prevalent in conventional sliding mode controllers. Lyapunov theory is used to guarantee the stability of the proposed control technique. Simulation results illustrate the validity of the proposed control scheme compared to the finite-time tracking control method.
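For intuition, a generic adaptive sliding-mode law is sketched below (the notation and the first-order sliding surface are illustrative; the paper's exact dual-layer adaptation is not reproduced). The switching gain k(t) grows with the sliding variable instead of being fixed from a known disturbance bound, and the dual-layer scheme keeps it near the smallest sufficient value, which is what curbs chattering.

```latex
% Generic adaptive sliding-mode control law (illustrative form only).
% e: tracking error, s: sliding variable, k(t): adaptive switching gain.
s = \dot{e} + \lambda e, \qquad
u = u_{\mathrm{eq}} - k(t)\,\operatorname{sign}(s), \qquad
\dot{k}(t) = \gamma\,\lvert s \rvert, \quad \lambda,\gamma > 0
```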
H.6.3.2. Feature evaluation and selection
A. Zangooei; V. Derhami; F. Jamshidi
Abstract
Phishing is one of the luring techniques used to exploit personal information. A phishing webpage detection system (PWDS) extracts features to determine whether a page is a phishing webpage or not. Selecting appropriate features improves the performance of a PWDS, where the performance criteria are detection accuracy and system response time. The major time consumed by a PWDS arises from feature extraction, which is treated as the feature cost in this paper. Here, two novel features are proposed. They use a semantic similarity measure to determine the relationship between the content and the URL of a page. Since the suggested features do not rely on third-party services such as search engine results, feature extraction time decreases dramatically. A login-form pre-filter is utilized to reduce unnecessary computations and the false positive rate. In this paper, a cost-based feature selection method is presented to identify the most effective features. The selected features are employed in the suggested PWDS, and the extreme learning machine algorithm is used to classify webpages. The experimental results demonstrate that the suggested PWDS achieves a high accuracy of 97.6% and a short average detection time of 120.07 milliseconds.
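A minimal sketch of a content-URL relationship feature of this kind, using TF-IDF cosine similarity as a stand-in for the paper's semantic similarity measure; the URL and page text are made up. No third-party service is queried, which is the property the paper exploits.

```python
# Illustrative content-URL similarity feature: tokenize the URL and
# compare it against the page text with TF-IDF cosine similarity.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def url_content_similarity(url: str, page_text: str) -> float:
    url_tokens = " ".join(re.split(r"[\W_]+", url.lower()))
    tfidf = TfidfVectorizer().fit_transform([url_tokens, page_text.lower()])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

print(url_content_similarity("http://paypa1-login.example.com/verify",
                             "Sign in to your PayPal account to verify"))
```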
G. Information Technology and Systems
M. Aghazadeh; F. Soleimanian Gharehchopogh
Abstract
The size and complexity of websites have grown significantly in recent years. In line with this growth, the need to maintain most of their resources has intensified. Content Management Systems (CMSs) are software tools introduced in response to the increased demands of users. With the advent of content management systems, factors such as domains, pre-designed module development, graphics, optimization, and support options have come to influence the cost of software and web-based projects. Consequently, these factors have challenged previously introduced cost estimation models. This paper provides a hybrid method for estimating the cost of websites designed with content management systems. The proposed method uses a combination of a genetic algorithm and a Multilayer Perceptron (MLP). Results have been evaluated by comparing the numbers of correctly and incorrectly classified data and the Kappa coefficient, which measures the agreement between the predicted and actual classes. According to the obtained results, the Kappa coefficient on the test data set is 0.82 for the proposed method, 0.06 for the genetic algorithm, and 0.54 for the MLP Artificial Neural Network (ANN). Based on these results, the proposed method can be used to estimate the cost of websites designed with content management systems.
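For reference, a Kappa value like the 0.82 reported above can be computed as Cohen's kappa, which scores agreement between predicted and actual classes beyond chance; the labels below are illustrative, not the paper's data.

```python
# Cohen's kappa: 1.0 = perfect agreement, 0 = chance-level agreement.
from sklearn.metrics import cohen_kappa_score

y_true = ["low", "low", "mid", "high", "mid", "high", "low", "mid"]
y_pred = ["low", "mid", "mid", "high", "mid", "high", "low", "low"]
print(cohen_kappa_score(y_true, y_pred))
```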
H.7. Simulation, Modeling, and Visualization
R. Ghanizadeh; M. Ebadian
Abstract
This paper presents a new control method for a three-phase four-wire Unified Power Quality Conditioner (UPQC) to deal with power quality problems under distorted and unbalanced load conditions. The proposed control approach combines instantaneous power theory and Synchronous Reference Frame (SRF) theory, is optimized by using a self-tuning filter (STF), and requires no measurement of load or filter currents. In this approach, load and source voltages are used to generate the reference voltages of the series active power filter (APF), and source currents are used to generate the reference currents of the shunt APF. Therefore, the number of current measurements is reduced and system performance is improved. The performance of the proposed control system is tested for power factor correction, source neutral current reduction, load balancing, and current and voltage harmonic mitigation in a three-phase four-wire system with distorted and unbalanced loads. Results obtained with MATLAB/SIMULINK show the effectiveness of the proposed control technique in comparison to the conventional p-q method.
D. Data
S. Taherian Dehkordi; A. Khatibi Bardsiri; M. H. Zahedi
Abstract
Data mining is an appropriate way to discover information and hidden patterns in large amounts of data, where the hidden patterns cannot be easily discovered by ordinary means. One of the most interesting applications of data mining is the discovery of diseases and disease patterns by investigating patients' records. Early diagnosis of diabetes can reduce the effects of this devastating disease. A common way to diagnose this disease is a blood test, which, despite its high precision, has disadvantages such as pain, cost, patient stress, and lack of access to a laboratory. Diabetic patients' information contains hidden patterns that can help assess the risk of diabetes in individuals without performing any blood tests. The use of neural networks, as powerful data mining tools, is an appropriate method to discover hidden patterns in diabetic patients' information. In this paper, in order to discover these hidden patterns and diagnose diabetes, a water wave optimization (WWO) algorithm, a precise metaheuristic, was used along with a neural network to increase the precision of diabetes prediction. The results of our implementation in the MATLAB programming environment, using a diabetes dataset, indicate that the proposed method diagnosed diabetes with a precision of 94.73%, sensitivity of 94.20%, specificity of 93.34%, and accuracy of 95.46%, and was more sensitive than methods such as support vector machines, artificial neural networks, and decision trees.
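A minimal sketch of how a metaheuristic such as WWO can train a network: each candidate solution is a flat weight vector and the fitness is the network's prediction error. The WWO update rules themselves are not reproduced, and the data and layer sizes are assumptions.

```python
# Fitness function for metaheuristic neural-network training: any
# population-based optimizer can minimize fitness(w, X, y) over w.
import numpy as np

def unpack(w, n_in=8, n_hid=10):
    """Split a flat vector into the weights of a 1-hidden-layer network."""
    i = n_in * n_hid
    W1 = w[:i].reshape(n_in, n_hid)
    b1 = w[i:i + n_hid]
    W2 = w[i + n_hid:i + 2 * n_hid]
    b2 = w[-1]
    return W1, b1, W2, b2

def fitness(w, X, y):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return np.mean((p - y) ** 2)              # error to minimize

rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)
w = rng.normal(size=8 * 10 + 10 + 10 + 1)     # one candidate solution
print(fitness(w, X, y))
```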
H.3.2.3. Decision support
F. Moslehi; A.R. Haeri; A.R. Moini
Abstract
In today's world, most financial transactions are carried out through electronic instruments in the context of information technology and the Internet. Disregarding new technologies in this field and relying on traditional methods will result in financial loss and customer dissatisfaction. The aim of the present study is to survey and analyze the use of electronic payment instruments in banks across the country using statistics and information retrieved from the Central Bank together with data mining techniques. For this purpose, a label was first assigned to each record according to the volume of transactions carried out, with the help of the K-Means algorithm; hidden patterns in e-payment instrument transactions were then detected using the CART algorithm. The results of this study enable bank administrators to align their future e-payment policies with the interests of both the bank and its customers, based on the detected patterns, and to provide higher-quality services.
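A minimal sketch of this two-stage pipeline, assuming illustrative transaction-volume features rather than the Central Bank data: K-Means assigns a volume-based label to each record, then a CART decision tree exposes the hidden patterns as readable rules.

```python
# Stage 1: cluster records into labels; Stage 2: explain the labels
# with a CART tree whose rules can be read off directly.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))  # e.g. card, mobile, internet volumes

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(cart, feature_names=["card", "mobile", "internet"]))
```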
H.3. Artificial Intelligence
M. Vahedi; M. Hadad Zarif; A. Akbarzadeh Kalat
Abstract
This paper presents an indirect adaptive system based on neuro-fuzzy approximators for the speed control of induction motors. The uncertainty, including parametric variations, external load disturbances, and unmodeled dynamics, is estimated and compensated by designing neuro-fuzzy systems. The contribution of this paper is a stability analysis for neuro-fuzzy speed control of induction motors. The online training of the neuro-fuzzy systems is based on Lyapunov stability analysis, and the reconstruction errors of the neuro-fuzzy systems are compensated in order to guarantee asymptotic convergence of the speed tracking error. Moreover, to improve control system performance and reduce chattering, a PI structure is used to produce the input of the neuro-fuzzy systems. Finally, simulation results verify the high performance and robustness of the proposed control system against plant parameter variation, external load, and input voltage disturbance.
I.3.5. Earth and atmospheric sciences
A. Jalalkamali; N. Jalalkamali
Abstract
The prediction of groundwater quality is very important for the management of water resources and environmental activities. The present study integrates methods such as Geographic Information Systems (GIS) and Artificial Intelligence (AI) methodologies to predict groundwater quality in the Kerman plain (including HCO3− concentrations and the Electrical Conductivity (EC) of groundwater). This research investigates the abilities of the Adaptive Neuro-Fuzzy Inference System (ANFIS), a hybrid of ANFIS with a Genetic Algorithm (GA), and Artificial Neural Network (ANN) techniques to predict groundwater quality. Various combinations of monthly variables, namely rainfall and groundwater levels in the wells, were used by two different neuro-fuzzy models (standard ANFIS and ANFIS-GA) and an ANN. The results show that the ANFIS-GA method yields a more parsimonious model with far fewer rules than the ANFIS model (about a four-fold reduction in the number of rules) while improving the fitness criteria and thus model efficiency at the same time (38.4% in R2 and 44% in MAPE). The study also reveals that groundwater level fluctuations and rainfall are two important factors in predicting groundwater quality indices.
H.4.7. Methodology and Techniques
Osman K. Erol; I. Eksin; A. Akdemir; A. Aydınoglu
Abstract
In general, hybridized evolutionary optimization algorithms follow a "first diversification, then intensification" routine. In other words, these hybridized methods all begin in a global search mode with a highly random initial search population and then switch to an intense local search mode at some stage. Population initialization remains a crucial point in hybridized evolutionary optimization algorithms, since it can affect the speed of convergence and the quality of the final solution. In this study, we introduce a new approach that reverses the usual diversification-then-intensification routine. Here, instead of starting from a random initial population, we first find a unique starting point by running a single iteration of the coordinate exhaustive search local optimization algorithm, collecting rough but meaningful knowledge about the nature of the problem. Our main assertion is that this approach will improve the convergence rate of evolutionary optimization algorithms. In this study, we illustrate how one can use this unique starting point in the initialization of two evolutionary optimization algorithms, Big Bang-Big Crunch optimization and Particle Swarm Optimization. Experiments on a commonly used benchmark test suite, which consists mainly of rotated and shifted functions, show that the proposed initialization procedure leads to substantial improvements for both algorithms.
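A sketch of this kind of initialization under illustrative assumptions (sphere objective, coarse grid): one pass of coordinate exhaustive search yields a single informed point, which then seeds the population of PSO, BB-BC, or any other evolutionary optimizer.

```python
# Single-iteration coordinate exhaustive search used as an informed
# initializer rather than as a full local optimizer.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def one_step_coordinate_search(f, lo, hi, dim, n_grid=11):
    """One iteration: scan each coordinate once over a coarse grid."""
    x = np.full(dim, (lo + hi) / 2.0)   # start at the domain center
    for d in range(dim):
        candidates = np.linspace(lo, hi, n_grid)
        best = min(candidates,
                   key=lambda c: f(np.concatenate([x[:d], [c], x[d+1:]])))
        x[d] = best                      # fix this coordinate, move on
    return x

x0 = one_step_coordinate_search(sphere, -5.0, 5.0, dim=4)
# Seed a swarm around x0 instead of using a fully random population.
swarm = x0 + 0.1 * np.random.default_rng(0).normal(size=(30, 4))
print(x0, sphere(x0))
```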
A.10. Power Management
F. Sabahi
Abstract
This paper develops an energy management approach for a multi-microgrid (MMG) that takes into account multiple objectives involving plug-in electric vehicles (PEVs), photovoltaic (PV) power, and a distribution static compensator (DSTATCOM) to improve power provision sharing. In the proposed approach, there is a pool of fuzzy microgrid granules that compete with each other to prolong their lives while being monitored and evaluated by specific fuzzy sets. In addition, based on the hourly reconfiguration of the microgrids (MGs), the granules learn to dispatch cost-effective resources. To promote interactive service, a well-defined multi-objective approach is derived from fuzzy granulation analysis to improve power quality in MMGs. A combination of the meta-heuristic genetic algorithm (GA) and particle swarm optimization (PSO) eliminates the computational difficulty of the nonlinearity and uncertainty analysis of the system and improves the precision of the results. The proposed approach is successfully applied to a 69-bus MMG test system, with results reported in terms of stored energy improvement, daily voltage profile improvement, MMG operations, and cost reduction.
H.5. Image Processing and Computer Vision
M. Saeedzarandi; H. Nezamabadi-pour; S. Saryazdi
Abstract
Removing noise from images is a challenging problem in digital image processing. This paper presents an image denoising method based on a maximum a posteriori (MAP) estimator, which is implemented in the wavelet domain because of its energy compaction property. The performance of the MAP estimator depends on the model proposed for the noise-free wavelet coefficients; thus, in wavelet-based image denoising, selecting a proper model for the wavelet coefficients is very important. In this paper, we model the wavelet coefficients in each sub-band by heavy-tailed distributions from the scale mixture of normals family. The parameters of the distributions are estimated adaptively to model the correlation between coefficient amplitudes, so the intra-scale dependency of wavelet coefficients is also taken into account. The denoising results confirm the effectiveness of the proposed method.
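In generic notation, a MAP estimator of this kind solves the following problem for each noisy coefficient y = w + n, where n is the noise coefficient; the paper's specific scale-mixture prior p_w is not reproduced here.

```latex
% MAP shrinkage of a noisy wavelet coefficient y = w + n.
\hat{w}(y) = \arg\max_{w} \, p_{w \mid y}(w \mid y)
           = \arg\max_{w} \big[\, \ln p_n(y - w) + \ln p_w(w) \,\big]
```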
H. Gholamalinejad; H. Khosravi
Abstract
In recent years, vehicle classification has been one of the most important research topics. However, due to the lack of a proper dataset, this field has not been as well developed as other fields of intelligent traffic management. Therefore, the preparation of large-scale vehicle datasets for each country is of great interest. In this paper, we introduce a new standard dataset of popular Iranian vehicles. This dataset, which consists of images of moving vehicles on urban streets and highways, can be used for vehicle classification and license plate recognition. It contains a large collection of vehicle images with different dimensions, viewing angles, weather, and lighting conditions. It took more than a year to construct this dataset. The images were taken with various types of mounted cameras, at different resolutions and altitudes. To estimate the complexity of the dataset, some classic methods alongside popular deep neural networks were trained and evaluated on it. Furthermore, two lightweight CNN structures are also proposed: one with three and another with five convolutional layers. The 5-Conv model, with 152K parameters, reached a recognition rate of 99.09% and can process 48 frames per second on a CPU, which makes it suitable for real-time applications.
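A hedged sketch of a small five-convolutional-layer CNN of the kind described; the input size, layer widths, and class count are assumptions, so the parameter count will not match the paper's 152K model exactly.

```python
# Illustrative 5-Conv lightweight classifier in Keras.
from tensorflow import keras
from tensorflow.keras import layers

def small_vehicle_cnn(input_shape=(64, 64, 3), n_classes=5):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),   # keeps the model tiny
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

small_vehicle_cnn().summary()
```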
J. Barazande; N. Farzaneh
Abstract
One of the crucial applications of the IoT is the development of smart cities. Smart cities are made up of smart components such as smart homes. In smart homes, a variety of sensors are used to make the environment smart, and the smart things in such homes can be used to detect the activities of the people inside them, such as making food or watching TV. Detecting the activities of smart home residents can tremendously help care for the elderly and children, and even enhance security. The information collected by the sensors can be used to detect the kind of activity; however, the main challenge is the poor precision of most activity detection methods. In the proposed method, to reduce the clustering error of the data mining techniques, a hybrid learning approach is presented using the Water Strider Algorithm. In the proposed method, the Water Strider Algorithm is used in the feature extraction phase to extract only the main features for machine learning. The analysis of the proposed method shows that it has a precision of 97.63%, an accuracy of 97.12%, and an F1 score of 97.45%. Compared with similar algorithms (such as the Butterfly Optimization, Harris Hawks Optimization, and Black Widow Optimization algorithms), it achieves higher precision in detecting users' activities.
Vahid Kiani; Mahdi Imanparast
Abstract
In this paper, we present a bi-objective virtual-force local search particle swarm optimization (BVFPSO) algorithm to improve the placement of sensors in wireless sensor networks, simultaneously increasing the coverage rate and preserving the battery energy of the sensors. Typically, the sensor nodes of a wireless sensor network are first randomly deployed in the target area, and their deployment is then modified to optimize certain objective functions. In the proposed BVFPSO algorithm, PSO is used as the basic meta-heuristic and the virtual-force operator is used as the local search. As far as we know, this is the first time a bi-objective PSO algorithm has been combined with a virtual-force operator to improve the coverage rate of sensors while preserving their battery energy. The results of simulations on initial random deployments with different numbers of sensors show that the BVFPSO algorithm, by combining the two objectives and using virtual-force local search, achieves a more efficient deployment than the competing algorithms PSO, GA, FRED, and VFA, simultaneously providing the maximum coverage rate and the minimum energy consumption.
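A sketch of the two objectives a candidate deployment can be scored on, under illustrative parameters; the PSO loop and the virtual-force operator itself are omitted.

```python
# Bi-objective scoring of a sensor deployment: grid-based coverage
# rate plus a movement-based proxy for battery energy.
import numpy as np

def coverage_rate(sensors, area=100.0, r_sense=10.0, n_grid=50):
    """Fraction of grid points within sensing range of any sensor."""
    g = np.linspace(0, area, n_grid)
    gx, gy = np.meshgrid(g, g)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (G, 2)
    d = np.linalg.norm(pts[:, None, :] - sensors[None], axis=2)
    return float(np.mean(d.min(axis=1) <= r_sense))

def energy_cost(initial, moved):
    """Proxy for battery drain: total distance the sensors are moved."""
    return float(np.linalg.norm(moved - initial, axis=1).sum())

rng = np.random.default_rng(0)
initial = rng.uniform(0, 100, size=(25, 2))   # random deployment
moved = initial + rng.normal(scale=2.0, size=initial.shape)
print(coverage_rate(moved), energy_cost(initial, moved))
```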
H.3. Artificial Intelligence
Farid Ariai; Maryam Tayefeh Mahmoudi; Ali Moeini
Abstract
In the era of pervasive internet use and the dominance of social networks, researchers face significant challenges in Persian text mining, including the scarcity of adequate datasets in Persian and the inefficiency of existing language models. This paper specifically tackles these challenges, aiming to amplify the efficiency of language models tailored to the Persian language. Focusing on enhancing the effectiveness of sentiment analysis, our approach employs an aspect-based methodology utilizing the ParsBERT model, augmented with a relevant lexicon. The study centers on sentiment analysis of user opinions extracted from the Persian website 'Digikala.' The experimental results not only highlight the proposed method's superior semantic capabilities but also showcase its efficiency gains with an accuracy of 88.2% and an F1 score of 61.7. The importance of enhancing language models in this context lies in their pivotal role in extracting nuanced sentiments from user-generated content, ultimately advancing the field of sentiment analysis in Persian text mining by increasing efficiency and accuracy.
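A hedged sketch of ParsBERT-based sentiment scoring with Hugging Face transformers; the checkpoint name below is an assumption (a public ParsBERT sentiment model), not necessarily the fine-tuned model used in the paper, and the aspect-based lexicon step is not reproduced.

```python
# Scoring a Persian review with a ParsBERT-family sentiment model.
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="HooshvareLab/bert-fa-base-uncased-sentiment-digikala")

# A Digikala-style product review ("The build quality of this phone
# is excellent").
print(clf("کیفیت ساخت این گوشی عالی است"))
```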
T. Askari Javaran; A. Alidadi; S.R. Arab
Abstract
Estimating the blurriness of an image is an important issue in image processing applications such as image deblurring. In this paper, a no-reference blur metric with low computational cost is proposed, based on the difference between the second-order gradients of a sharp image and those of its blurred version. The experiments in this paper were performed on four databases: CSIQ, TID2008, IVC, and LIVE. The experimental results indicate the capability of the proposed blur metric in measuring image blurriness, as well as its low computational cost compared with other existing approaches.
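An illustrative no-reference score in this spirit, not the paper's exact metric: re-blur the input and compare second-order gradient (Laplacian) energy before and after, since a sharp image loses far more of this energy than an already-blurry one.

```python
# Simplified re-blur blur score: high = sharp input, low = blurry input.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def blur_score(img):
    img = img.astype(np.float64)
    reblurred = gaussian_filter(img, sigma=2.0)
    e_orig = np.abs(laplace(img)).sum()        # second-order energy
    e_blur = np.abs(laplace(reblurred)).sum()  # energy after re-blur
    return 1.0 - e_blur / max(e_orig, 1e-12)

rng = np.random.default_rng(0)
sharp = rng.uniform(size=(128, 128))
print(blur_score(sharp), blur_score(gaussian_filter(sharp, 3.0)))
```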
Ali Yousefi; Kambiz Badie; Mohammad Mehdi Ebadzadeh; Arash Sharifi
Abstract
Recently, learning classifier systems have been used to control physical robots, sensory robots, and intelligent rescue systems. The most important challenge in these systems, which model real environments, is their non-Markov quality. It is therefore necessary to use memory to store system states so that decisions can be made based on a chain of previous states. In this research, a memory-based XCS is proposed that identifies efficient rules so that more effective rules are used in the classifier. The proposed model was implemented on five important maze maps; it reduced the number of steps needed to reach the goal and increased the number of successful runs on these maps.
Sh. Golzari; F. Sanei; M.R. Saybani; A. Harifi; M. Basir
Abstract
A Question Answering System (QAS) is a special form of information retrieval consisting of three parts: question processing, information retrieval, and answer selection. Determining the type of question is the most important part of a QAS, as it affects the subsequent parts. This study uses effective features and ensemble classification to improve QAS performance by increasing the accuracy of question type identification. We use the gravitational search algorithm to select the features and perform ensemble classification. The proposed system is extensively tested on different datasets using four types of experiments: (1) neither feature selection nor ensemble classification, (2) feature selection without ensemble classification, (3) ensemble classification without feature selection, and (4) feature selection with ensemble classification. These four kinds of experiments are carried out with both the differential evolution algorithm and the gravitational search algorithm. The experimental results show that the proposed method outperforms the state-of-the-art methods of previous research.
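A minimal sketch of the ensemble-classification stage on already-selected features; the gravitational-search selection step is omitted and the base learners are illustrative choices, not necessarily the paper's.

```python
# Soft-voting ensemble over three base classifiers, evaluated by
# cross-validation on synthetic multi-class data.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True)),
                ("nb", GaussianNB())],
    voting="soft")
print(cross_val_score(ensemble, X, y, cv=5).mean())
```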
F.2.11. Applications
Ali Sedehi; Alireza Alfi; Mohammadreza Mirjafari
Abstract
This paper addresses a key challenge in designing a suitable controller for DC-DC converters: regulating the output voltage effectively within a limited time frame. In addition to the non-minimum phase behavior of this type of converter, a significant issue, namely parametric uncertainty, can further complicate the task. Robust control theory is an efficient approach to this problem; however, its implementation often requires high-order controllers, which may not be practical due to hardware and computational constraints. Here, we propose a low-order robust controller satisfying the robust stability and performance criteria of conventional high-order controllers. To this end, a constrained optimization problem is formulated, and evolutionary algorithms are adopted to obtain the optimal parameter values of the controller. Both simulation and experimental outcomes are documented, and a comparative analysis with an optimal Proportional-Integral (PI) controller is conducted to substantiate the efficiency of the proposed methodology.
N. Alibabaie; A.M. Latif
Abstract
Periodic noise reduction is a fundamental problem in image processing that severely affects the visual quality and subsequent use of the data. Most conventional approaches are dedicated to either the frequency or the spatial domain alone. In this research, we propose a dual-domain approach that converts the periodic noise reduction task into an image decomposition problem. We introduce a bio-inspired computational model to separate the original image from the noise pattern without any a priori knowledge of its structure or statistics. Experiments on both synthetic and non-synthetic noisy images were carried out to validate the effectiveness and efficiency of the proposed algorithm. The simulation results demonstrate the effectiveness of the proposed method both qualitatively and quantitatively.
M. Taherinia; M. Esmaeili; B. Minaei Bidgoli
Abstract
The Influence Maximization Problem in social networks aims to find a minimal set of individuals that produces the highest influence on the other individuals in the network. In the last two decades, many algorithms have been proposed to address the time-efficiency and effectiveness challenges of this NP-hard problem. Undoubtedly, the CELF algorithm (besides the naive greedy algorithm) has the highest effectiveness among them, and it is also much faster than the naive greedy algorithm (about 700 times). This superiority has led many researchers to make extensive use of the CELF algorithm in their own approaches. However, the main drawback of the CELF algorithm is the very long running time of its first iteration, because it must estimate the influence spread of all nodes by expensive Monte Carlo simulations, just as the naive greedy algorithm does. In this paper, a heuristic approach, namely the Optimized-CELF algorithm, is proposed to mitigate this drawback by avoiding unnecessary Monte Carlo simulations. The proposed algorithm reduces the CELF running time and subsequently improves the time efficiency of other algorithms that employ CELF as a base algorithm. Experimental results on a wide spectrum of real datasets showed that the Optimized-CELF algorithm provided running-time gains of about 88-99% and 56-98% for k=1 and k=50, respectively, compared to the CELF algorithm, without losing effectiveness.
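A minimal CELF (lazy greedy) skeleton showing the mechanism discussed above: after the expensive first pass, cached marginal gains are re-checked lazily through a max-heap. Here spread() is a submodular stub standing in for Monte Carlo influence estimation, and the first-pass loop is exactly the step the Optimized-CELF variant tries to avoid.

```python
# Lazy-greedy (CELF) seed selection with a max-heap of cached gains.
import heapq

def spread(seeds):
    """Stub influence estimate; replace with Monte Carlo simulation."""
    return len(seeds) ** 0.5  # submodular placeholder

def celf(nodes, k):
    base = spread(set())
    # First iteration: evaluate every node (the costly step).
    heap = [(-(spread({v}) - base), v) for v in nodes]
    heapq.heapify(heap)
    seeds, cur = set(), base
    while len(seeds) < k and heap:
        neg_gain, v = heapq.heappop(heap)
        fresh = spread(seeds | {v}) - cur        # recompute lazily
        if not heap or fresh >= -heap[0][0]:     # still the best? take it
            seeds.add(v)
            cur += fresh
        else:
            heapq.heappush(heap, (-fresh, v))    # reinsert with fresh gain
    return seeds

print(celf(range(100), k=5))
```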
H.5. Image Processing and Computer Vision
Farima Fakouri; Mohsen Nikpour; Abbas Soleymani Amiri
Abstract
Due to the increased mortality caused by brain tumors, their accurate and fast diagnosis is necessary for treating this disease. In this research, brain tumor classification is performed using a network based on the ResNet architecture on MRI images. The MRI images, available in The Cancer Imaging Archive database, cover 159 patients. First, median and Gaussian filters were used to improve the quality of the images, and an edge-detection operator was used to identify the edges in each image. Second, the proposed network was trained with the original images of the database, then with the Gaussian-filtered and median-filtered images. Finally, accuracy, specificity, and sensitivity criteria were used to evaluate the results. The proposed method led to accuracies of 87.21%, 90.35%, and 93.86% for the original, Gaussian-filtered, and median-filtered images, respectively. The sensitivity and specificity for the original images were 82.3% and 84.3%, respectively. Sensitivity for the Gaussian- and median-filtered images was 90.8% and 91.57%, respectively, and specificity was 93.01% and 93.36%, respectively. In conclusion, image processing approaches in the preprocessing stage should be investigated to improve the performance of deep learning networks.
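The preprocessing step described above, sketched with OpenCV; the file name is hypothetical and Canny is an illustrative choice of edge-detection operator, as the paper does not name one.

```python
# Median and Gaussian filtering plus an edge map for an MRI slice.
import cv2
import numpy as np

img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
if img is None:                                      # fallback demo data
    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

median = cv2.medianBlur(img, 5)              # 5x5 kernel removes speckle
gaussian = cv2.GaussianBlur(img, (5, 5), 0)  # sigma derived from kernel
edges = cv2.Canny(gaussian, 50, 150)         # edge map for the network
```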
S. Hosseini; M. Khorashadizade
Abstract
High dimensionality is the biggest problem when working with large datasets. Feature selection is a procedure for reducing the dimensionality of a dataset by removing redundant and irrelevant features; the most effective features in the dataset remain, increasing the algorithms' performance. In this paper, a novel procedure for feature selection is presented that uses a binary teaching-learning-based optimization algorithm with mutation (BMTLBO). The TLBO algorithm is one of the most efficient and practical optimization techniques: it converges quickly and benefits from exploration capability, but it may still become trapped in a local optimum, so we try to establish a balance between exploration and exploitation. The proposed method has two parts. First, we used the binary version of the TLBO algorithm for feature selection and added a mutation operator to provide a strong local search capability (BMTLBO). Second, we used a modified TLBO algorithm with a self-learning phase (SLTLBO) to train a neural network, demonstrating a classification application and evaluating the performance of the method. We tested the proposed method on 14 datasets in terms of classification accuracy and the number of features. The results showed that BMTLBO outperformed the standard TLBO algorithm and proved the potency of the proposed method. The results are very promising and close to optimal.
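A sketch of the BMTLBO building blocks under illustrative choices: a binary feature mask scored by a wrapper classifier, plus the bit-flip mutation that provides the local search. The TLBO teacher and learner phases themselves are not reproduced.

```python
# Wrapper fitness for a binary feature mask, with bit-flip mutation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.mean()   # small penalty on feature count

def mutate(mask, rate=0.05, rng=np.random.default_rng(0)):
    flip = rng.random(mask.size) < rate
    return np.where(flip, ~mask, mask)  # flip a few bits for local search

rng = np.random.default_rng(1)
mask = rng.random(X.shape[1]) < 0.5      # one candidate feature subset
print(fitness(mask), fitness(mutate(mask)))
```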
S. Asadi Amiri; M. Rajabinasab
Abstract
Face recognition is a challenging problem because of variations in illumination, pose, facial expression, and occlusion. In this paper, a new robust face recognition method is proposed based on a color and edge orientation difference histogram. First, the color and edge orientation difference histogram is extracted using the color, color difference, edge orientation, and edge orientation difference of the face image. Then, backward feature selection is employed to reduce the number of features. Finally, the Canberra measure is used to assess the similarity between images. The color and edge orientation difference histogram captures uniform color differences and edge orientation differences between neighboring pixels. This histogram is effective for face recognition because different skin colors and different edge orientations in face images lead to different light reflections. The proposed method is evaluated on the Yale and ORL face datasets, which consist of gray-scale face images under different illuminations, poses, facial expressions, and occlusions. The recognition rates achieved on the Yale and ORL datasets are 100% and 98.75%, respectively. Experimental results demonstrate that the proposed method outperforms existing face recognition methods.
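The similarity stage, assuming the histogram extraction is already done; SciPy provides the Canberra distance directly, where a smaller value means more similar faces. The histogram vectors below are stand-ins.

```python
# Canberra distance between two feature histograms:
# sum_i |x_i - y_i| / (|x_i| + |y_i|).
import numpy as np
from scipy.spatial.distance import canberra

rng = np.random.default_rng(0)
h_query = rng.random(128)     # stand-in for a query-face histogram
h_gallery = rng.random(128)   # stand-in for a gallery-face histogram

print(canberra(h_query, h_gallery))
```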