N. Nekooghadirli; R. Tavakkoli-Moghaddam; V.R. Ghezavati
Abstract
An integrated model considers the parameters and elements of several interrelated sub-problems simultaneously. This paper presents a new integrated supply chain model that simultaneously considers facility location, vehicle routing, and inventory control, together with their interactions, in a single problem called the location-routing-inventory (LRI) problem. The model also considers stochastic demands representing customer requirements. The customers' uncertain demand follows a normal distribution, and each distribution center (DC) holds a certain amount of safety stock; shortages are not permitted at any DC. Furthermore, the routes are not always available. Decisions are made over a multi-period planning horizon. The two objectives are minimizing the total cost and maximizing the probability of delivery to customers. The stochastic availability of routes brings the model closer to real-world conditions. The presented model is solved by a multi-objective imperialist competitive algorithm (MOICA). A well-known multi-objective evolutionary algorithm, the non-dominated sorting genetic algorithm II (NSGA-II), is then used to evaluate the performance of the proposed MOICA. Finally, conclusions are presented.
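The safety-stock element of this model has a standard closed form under normally distributed demand: a DC that targets service level alpha holds z_alpha * sigma_d * sqrt(LT) units, where sigma_d is the per-period demand standard deviation and LT the replenishment lead time. A minimal sketch in Python, with illustrative numbers rather than values from the paper:

```python
from scipy.stats import norm

def safety_stock(service_level, demand_std, lead_time_periods):
    """Safety stock for normally distributed per-period demand."""
    z = norm.ppf(service_level)              # z-score of the target service level
    return z * demand_std * lead_time_periods ** 0.5

# Illustrative: 95% service level, per-period demand std of 40 units,
# 4-period replenishment lead time.
print(round(safety_stock(0.95, 40.0, 4), 1))  # ~131.6 units held at the DC
```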
F.1. General
A. Telikani; A. Shahbahrami; R. Tavoli
Abstract
Data sanitization is a process used to promote the sharing of transactional databases among organizations and businesses; it alleviates concerns for individuals and organizations regarding the disclosure of sensitive patterns. It transforms the source database into a released database such that counterparts cannot discover the sensitive patterns, thereby preserving data confidentiality against association rule mining methods. This process relies strongly on minimizing the impact of sanitization on data utility, i.e., on minimizing the number of lost patterns (non-sensitive patterns that can no longer be mined from the sanitized database). This study proposes a data sanitization algorithm that hides sensitive patterns, in the form of frequent itemsets, from the database while controlling the impact of sanitization on data utility by estimating the impact factor of each modification on non-sensitive itemsets. The proposed algorithm is compared with the Sliding Window size Algorithm (SWA) and Max-Min1 in terms of execution time, data utility, and data accuracy. Data accuracy is defined as the ratio of deleted items to the total support values of sensitive itemsets in the source dataset. Experimental results demonstrate that the proposed algorithm outperforms SWA and Max-Min1 in terms of maximizing data utility and data accuracy, and that it provides better execution times than SWA and Max-Min1 as the numbers of sensitive itemsets and transactions scale up.
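The utility trade-off is easy to see in a toy sanitizer that deletes items from supporting transactions until a sensitive itemset's support falls below a threshold, then measures the collateral loss in non-sensitive support. A sketch under hypothetical data and a deliberately naive victim-item choice; the paper's impact-factor estimation is more refined:

```python
def support(db, itemset):
    return sum(itemset <= t for t in db)      # transactions containing the itemset

def sanitize(db, sensitive, min_sup):
    """Delete one item from supporting transactions until support < min_sup."""
    db = [set(t) for t in db]
    victim = min(sensitive)                   # naive, deterministic victim choice
    for t in db:
        if support(db, sensitive) < min_sup:
            break
        if sensitive <= t:
            t.discard(victim)                 # modification that hides the pattern
    return db

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}, {"b", "c"}]
out = sanitize(db, sensitive={"a", "b"}, min_sup=2)
print(support(out, {"a", "b"}))               # 1: sensitive pattern hidden
print(support(out, {"a"}))                    # 1 (was 3): lost non-sensitive support
```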
F.4.4. Experimental design
V. Khoshdel; A. R Akbarzadeh
Abstract
This paper presents an application of design-of-experiments techniques to determine the optimized parameters of an artificial neural network (ANN) used to estimate force from surface electromyogram (sEMG) signals. The accuracy of an ANN model is highly dependent on the network parameter settings. Many algorithms have been used to obtain optimal ANN settings; however, to the best of our knowledge, they have not used regression analysis to model the effect of each parameter, nor reported the percent contribution and significance level of the ANN parameters for force estimation. In this paper, sEMG experimental data are collected, and the ANN parameters are regulated according to an orthogonal-array design table to train the ANN. The Taguchi method helps us find the optimal parameter settings. Next, the analysis of variance (ANOVA) technique is used to obtain the significance level and contribution percentage of each parameter, so as to optimize the ANN's modeling of human force estimation. The results indicate that design of experiments is a promising approach for estimating human force from sEMG signals.
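The ANOVA step reduces to comparing the variation explained by each factor's level means with the total variation across the runs. A minimal main-effects sketch for one two-level factor of a hypothetical orthogonal-array experiment (the response values are made up):

```python
import numpy as np

# Responses of 8 orthogonal-array runs (hypothetical force-estimation errors)
y = np.array([0.42, 0.39, 0.35, 0.33, 0.44, 0.41, 0.30, 0.28])
# Level (0 or 1) of one ANN factor, e.g. hidden-layer size, in each run
levels = np.array([0, 0, 1, 1, 0, 0, 1, 1])

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
# Factor sum of squares: runs per level times squared level-mean offsets
ss_factor = sum(
    y[levels == l].size * (y[levels == l].mean() - grand) ** 2
    for l in np.unique(levels)
)
print(f"percent contribution: {100 * ss_factor / ss_total:.1f}%")  # ~82.6%
```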
H.5. Image Processing and Computer Vision
S. Mavaddati
Abstract
In scientific and commercial fields associated with modern agriculture, the categorization of different rice types and the determination of their quality are very important. Various image processing algorithms have been applied in recent years to detect different agricultural products. In this paper, rice classification and quality detection are addressed using model-learning concepts, including sparse representation and dictionary learning techniques, to yield over-complete models in this processing field. Color-based, statistical, and texture-based features are used to represent the structural content of rice varieties. To achieve the desired results, different features are extracted from the recorded images and used to learn representative models of the rice samples. Also, sparse principal component analysis and sparse structured principal component analysis are employed to reduce the dimension of the classification problem, leading to an accurate detector with less computational time. The results of the proposed classifier based on the learned models are compared with those obtained from a neural network and a support vector machine. Simulation results, along with a meaningful statistical test, show that the proposed algorithm, based on dictionaries learned from the combined features, can detect the type of rice grain and determine its quality precisely.
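One common way to classify with learned dictionaries, which may approximate the setup described here, is to learn one dictionary per rice class and label a sample by whichever dictionary reconstructs it with the smallest sparse-coding error. A sketch on synthetic stand-in features using scikit-learn (class names, sizes, and data are illustrative):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-class feature vectors (e.g. color/texture features)
classes = {"basmati": rng.normal(0, 1, (100, 20)),
           "jasmine": rng.normal(2, 1, (100, 20))}

dicts = {c: DictionaryLearning(n_components=10, transform_algorithm="lasso_lars",
                               random_state=0).fit(X)
         for c, X in classes.items()}

def classify(x):
    """Label = class whose dictionary gives the smallest reconstruction error."""
    errs = {}
    for c, d in dicts.items():
        code = d.transform(x.reshape(1, -1))          # sparse code of the sample
        errs[c] = np.linalg.norm(x - code @ d.components_)
    return min(errs, key=errs.get)

print(classify(rng.normal(2, 1, 20)))                 # expected: "jasmine"
```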
H.3.15.3. Evolutionary computing and genetic algorithms
M. B. Dowlatshahi; V. Derhami
Abstract
A combinatorial auction is an auction where bidders may bid on bundles of items. The winner determination problem (WDP) in combinatorial auctions is the problem of finding winning bids that maximize the auctioneer's revenue under the constraint that each item can be allocated to at most one bidder. The WDP is known to be NP-hard, with practical applications in electronic commerce, production management, game theory, and resource allocation in multi-agent systems. This has motivated the quest for algorithms that are efficient in terms of both solution quality and computational time. This paper proposes a hybrid Ant Colony Optimization algorithm with a novel Multi-Neighborhood Local Search (ACO-MNLS) for solving the WDP in combinatorial auctions. Our MNLS algorithm exploits the fact that using various neighborhoods in local search can generate different local optima for the WDP, and that the global optimum of the WDP is a local optimum with respect to any given neighborhood. Therefore, the proposed MNLS algorithm simultaneously explores a set of three different neighborhoods to obtain different local optima and to escape from them. Comparisons between ACO-MNLS, Genetic Algorithm (GA), Memetic Algorithm (MA), Stochastic Local Search (SLS), and Tabu Search (TS) on various benchmark problems confirm the efficiency of ACO-MNLS in terms of both solution quality and computational time.
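The multi-neighborhood idea can be miniaturized as a WDP local search with two move types, adding a feasible bid and swapping one bid for another, keeping any improving move. The bids below are hypothetical, and the paper's MNLS uses three neighborhoods inside an ACO framework rather than this bare loop:

```python
# Each bid: (bundle of items, price). A feasible solution uses disjoint bundles.
bids = [({1, 2}, 8), ({2, 3}, 6), ({3}, 3), ({1}, 4), ({4}, 5)]

def revenue(sol):
    return sum(bids[i][1] for i in sol)

def feasible(sol):
    bundles = [bids[i][0] for i in sol]
    return sum(len(b) for b in bundles) == len(set().union(*bundles)) if bundles else True

def local_search(sol):
    improved = True
    while improved:
        improved = False
        # Neighborhood 1: add a bid. Neighborhood 2: swap one bid for another.
        moves = [sol | {j} for j in range(len(bids)) if j not in sol]
        moves += [(sol - {i}) | {j} for i in sol
                  for j in range(len(bids)) if j not in sol]
        for cand in moves:
            if feasible(cand) and revenue(cand) > revenue(sol):
                sol, improved = cand, True
    return sol

best = local_search(set())
print(sorted(best), revenue(best))   # [0, 2, 4] 16
```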
H.3.15.1. Adaptive hypermedia
M. Tahmasebi; F. Fotouhi; M. Esmaeili
Abstract
Personalized recommenders have proved useful in reducing the information overload problem. In adaptive hypermedia systems in particular, a recommender is the main module that delivers suitable learning objects to learners. Recommenders suffer from the cold-start and sparsity problems. Furthermore, obtaining a learner's preferences is cumbersome. Most studies have focused only on the similarity between the interest profile of a user and those of others. However, this can lead to the gray-sheep problem, in which users with consistently different opinions from the group do not benefit from the approach. On this basis, matching the learner's learning style with web page features and mining specific attributes is more desirable. The primary contribution of this research is to introduce a feature-based recommender system that delivers educational web pages according to the user's individual learning style. We propose an educational resource recommender system that interacts with users based on their learning style and cognitive traits. The learning style determination is based on the Felder-Silverman theory. Furthermore, we incorporate all explicit and implicit data features of a page, and the elements contained in them, that influence the quality of recommendation and help the system make more effective recommendations.
H.3.15.3. Evolutionary computing and genetic algorithms
A.M Esmilizaini; A.M Latif; Gh. Barid Loghmani
Abstract
Image zooming is one of the current issues in image processing, where maintaining the quality and structure of the zoomed image is important. To zoom an image, extra pixels must be inserted into the image data. The added data must be consistent with the texture of the image and must not create artificial blocks. In this study, the required pixels are estimated using radial basis functions, with the shape parameter c calculated by a genetic algorithm. Then, all the estimated pixels are revised by an edge-correction sub-algorithm. The proposed method is a non-linear method that preserves edges and minimizes blur and block artifacts in the zoomed image. The method is evaluated on several images to calculate the optimal shape parameter of the radial basis functions. Numerical results, reported using the PSNR and SSIM fidelity measures on different images, are compared with those of several other methods. The average PSNR between the original and the zoomed image is 33.16 dB, which shows that zooming by a factor of 2 yields an image close to the original and that the proposed method performs efficiently.
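The pixel-estimation step rests on radial basis function interpolation: known samples define weights w solving Phi w = f, where Phi_ij = phi(|x_i - x_j|) and, for example, the multiquadric phi(r) = sqrt(r^2 + c^2). A 1D sketch with a hand-set shape parameter (the paper tunes c with a genetic algorithm):

```python
import numpy as np

def multiquadric(r, c):
    return np.sqrt(r ** 2 + c ** 2)

# Known samples: stand-ins for known pixel intensities along a scan line
x = np.array([0.0, 1.0, 2.0, 3.0])
f = np.array([10.0, 14.0, 11.0, 9.0])
c = 0.8                                    # shape parameter (GA-tuned in the paper)

Phi = multiquadric(np.abs(x[:, None] - x[None, :]), c)
w = np.linalg.solve(Phi, f)                # solve Phi w = f for the RBF weights

def interpolate(xq):
    return multiquadric(np.abs(xq - x), c) @ w

print(interpolate(0.5))                    # estimated intensity of an inserted pixel
print(interpolate(1.5))
```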
A. Salehi; B. Masoumi
Abstract
The Biogeography-Based Optimization (BBO) algorithm has recently attracted great interest from researchers for its simplicity of implementation, efficiency, and small number of parameters. BBO is one of the newer optimization algorithms, developed from the concept of biogeography: it uses the idea of species migration to find suitable habitats and thereby solve optimization problems. The BBO algorithm has three principal operators, called migration, mutation, and elite selection. The migration operator plays a very important role in sharing information among candidate habitats. Due to its poor exploration and exploitation, the original BBO algorithm sometimes fails to produce desirable results. On the other hand, the Edge Assembly Crossover (EAX) is one of the most powerful crossover operators for generating offspring and increases population diversity. Combining biogeography-based optimization with EAX can therefore provide high efficiency in solving optimization problems, including the traveling salesman problem (TSP). This paper proposes such a combination to solve the TSP. The new hybrid approach was examined on standard TSP datasets from TSPLIB. In the experiments, the proposed approach performed better than the original BBO and four other widely used metaheuristic algorithms.
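The migration operator at the heart of BBO fits in a few lines: each habitat's rank yields an immigration rate (high for poor habitats) and an emigration rate (high for good ones), and solution features flow accordingly. A generic real-valued sketch, not the paper's hybrid with EAX or its TSP encoding:

```python
import numpy as np

rng = np.random.default_rng(1)

def migrate(pop, fitness):
    """One BBO migration sweep: better habitats emigrate features to worse ones."""
    n = len(pop)
    rank = np.argsort(np.argsort(-fitness))      # 0 = best habitat
    lam = (rank + 1) / n                         # immigration rate: high if poor
    mu = 1.0 - lam                               # emigration rate: high if good
    new = pop.copy()
    for i in range(n):
        for d in range(pop.shape[1]):
            if rng.random() < lam[i]:            # habitat i accepts a feature
                j = rng.choice(n, p=mu / mu.sum())   # donor drawn by emigration rate
                new[i, d] = pop[j, d]
    return new

pop = rng.uniform(-5, 5, (6, 4))                 # 6 habitats, 4 decision variables
fitness = -np.sum(pop ** 2, axis=1)              # toy objective to maximize
print(migrate(pop, fitness))
```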
H.6.2.2. Fuzzy set
N. Moradkhani; M. Teshnehlab
Abstract
The cement rotary kiln is the main part of the cement production process and has always attracted researchers' attention. However, this complex nonlinear system has not yet been modeled efficiently enough to perform well, especially under noisy conditions. In this paper, a Takagi-Sugeno neuro-fuzzy system (TSNFS) is used for identification of a cement rotary kiln, and the gradient descent (GD) algorithm is applied to tune the parameters of the antecedent and consequent parts of the fuzzy rules. In addition, the optimal inputs of the system are selected by a genetic algorithm (GA) to reduce the complexity of the fuzzy system. Data from the Saveh White Cement (SWC) factory are used in the simulations. The results demonstrate that the proposed identifier performs better than the neural and fuzzy models presented earlier for the same data. Furthermore, the TSNFS is evaluated under noisy conditions, which had not been investigated in related research before. Simulations show that the model performs well under different noise conditions.
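The Takagi-Sugeno computation itself is compact: Gaussian membership functions give each rule a firing strength, and the output is the firing-strength-weighted average of linear rule consequents; GD then tunes the centers, widths, and coefficients. A minimal first-order forward pass with arbitrary rule parameters, not ones identified from kiln data:

```python
import numpy as np

def ts_forward(x, centers, sigmas, coeffs):
    """First-order Takagi-Sugeno output for an input vector x."""
    # Rule firing strengths: product of Gaussian memberships over inputs
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2).prod(axis=1)
    # Linear consequents: y_r = a_r . x + b_r
    y_rules = coeffs[:, :-1] @ x + coeffs[:, -1]
    return (w * y_rules).sum() / w.sum()         # normalized weighted average

x = np.array([0.4, 0.7])                         # e.g. two GA-selected kiln inputs
centers = np.array([[0.2, 0.5], [0.8, 0.9]])     # 2 rules x 2 inputs
sigmas = np.full((2, 2), 0.3)
coeffs = np.array([[1.0, -0.5, 0.2],             # per rule: [a1, a2, b]
                   [0.3,  0.8, 0.0]])
print(ts_forward(x, centers, sigmas, coeffs))
```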
H.6. Pattern Recognition
A. Noruzi; M. Mahlouji; A. Shahidinejad
Abstract
A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by that individual. Iris recognition (IR) is known to be the most reliable and accurate biometric identification method. An iris recognition system (IRS) includes an automatic segmentation mechanism based on the Hough transform (HT). This paper presents a robust IRS for unconstrained environments. In this method, an image of the eye is first captured; edge detection and contrast adjustment are then performed in a pre-processing stage. The circular HT is subsequently used to localize the circular inner and outer iris boundaries; the purpose of this stage is to find circles even in imperfect image inputs. Also, by applying the parabolic HT, the boundaries of the upper and lower eyelids are localized. Compared with available IRSs, the proposed method not only achieves higher accuracy but is also competitive in terms of processing time. Experimental results on images from the UBIRIS, CASIA, and MMUI databases show that the proposed method achieves accuracy rates of 99.12%, 98.80%, and 98.34%, respectively.
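With OpenCV, the circular Hough stage reduces to a single call after pre-processing. A hedged sketch of that stage; the file name and every parameter value are placeholders to be tuned per dataset:

```python
import cv2
import numpy as np

img = cv2.imread("eye.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder input image
img = cv2.equalizeHist(img)                         # contrast adjustment
img = cv2.medianBlur(img, 5)                        # suppress noise before the HT

# Circular Hough transform; param2 is the accumulator threshold (lower values
# find more, possibly spurious, circles) and the radii bracket pupil/iris sizes.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
                           param1=100, param2=40, minRadius=20, maxRadius=120)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate boundary: center=({x},{y}), radius={r}")
```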
H.6. Pattern Recognition
Z. Imani; Z. Ahmadyfard; A. Zohrevand
Abstract
In this paper, we address the problem of recognizing Farsi handwritten words. Two types of gradient features, directional and intensity gradient features, are extracted from a sliding vertical stripe that sweeps across a word image. The feature vector extracted from each stripe is then coded using a Self-Organizing Map (SOM). In this method, each word is modeled using a discrete Hidden Markov Model (HMM). The FARSA dataset is used to evaluate the performance of the proposed method. The experimental results show that the proposed system, using directional gradient features, achieves a recognition rate of 69.07% and outperforms all other existing methods.
H.5. Image Processing and Computer Vision
R. Davarzani; S. Mozaffari; Kh. Yaghmaie
Abstract
Feature extraction is a main step in all perceptual image hashing schemes, in which robust features lead to better perceptual robustness. Simplicity, discriminative power, computational efficiency, and robustness to illumination changes are distinguishing properties of Local Binary Pattern (LBP) features. In this paper, we investigate the use of local binary patterns for perceptual image hashing. In feature extraction, we propose to use both the sign and magnitude information of local differences; thus, the algorithm utilizes a combination of gradient-based and LBP-based descriptors. To meet security needs, two secret keys are incorporated in the feature extraction and hash generation steps. The performance of the proposed hashing method is evaluated on an important application of perceptual image hashing: image authentication. Experiments show that the present method has acceptable robustness against perceptual, content-preserving manipulations. Moreover, the proposed method can localize tampered areas, which is not possible with all hashing schemes.
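The sign component of such descriptors is the classic LBP code: each pixel's eight neighbors are thresholded against the center and the resulting bits packed into one code. A plain-NumPy sketch of that component alone; the paper additionally uses magnitude information and two secret keys, omitted here:

```python
import numpy as np

def lbp_image(img):
    """8-neighbor LBP codes for the interior pixels of a 2D grayscale array."""
    c = img[1:-1, 1:-1]                              # center pixels
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),      # 8 neighbors, clockwise
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit         # sign of the local difference
    return code

img = np.arange(25, dtype=np.uint8).reshape(5, 5)
codes = lbp_image(img)
hist = np.bincount(codes.ravel(), minlength=256)     # histogram -> hash features
print(codes)
```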
H.5. Image Processing and Computer Vision
S. Memar Zadeh; A. Harimi
Abstract
In this paper, a new iris localization method for mobile devices is presented. Our system applies intensity and saturation thresholds to the captured eye images to determine the iris boundary and the sclera area, respectively. Estimated iris boundary pixels that fall outside the sclera are removed; the remaining pixels mainly lie on the iris boundary inside the sclera. Then, the circular Hough transform is applied to these boundary pixels to localize the iris. Experiments were performed on 60 iris images taken with an HTC mobile device from 10 different persons, with both left- and right-eye images available per person. We also evaluate the proposed algorithm on the MICHE datasets, which include images captured with an iPhone 5, a Samsung Galaxy S4, and a Samsung Galaxy Tab 2. Experimental evaluation shows that the proposed system can successfully localize the iris in the tested images.
D. Data
M. Hajizadeh-Tahan; M. Ghasemzadeh
Abstract
Learning models and their results depend on the quality of the input data. If raw data are not properly cleaned and structured, the results tend to be incorrect. Therefore, discretization, as one of the preprocessing techniques, plays an important role in learning processes. The most important challenge in the discretization process is to reduce the number of feature values. This operation should be applied in a way that maintains the relationships between the features and increases the accuracy of the classification algorithms. In this paper, a new evolutionary multi-objective algorithm is presented. The proposed algorithm uses three objective functions to achieve high-quality discretization. The first and second objectives minimize the number of selected cut points and the classification error, respectively. The third objective introduces a new criterion, called the normalized cut, which uses the relationships between feature values to preserve the nature of the data. The performance of the proposed algorithm was tested on 20 benchmark datasets. According to the comparisons and the results of nonparametric statistical tests, the proposed algorithm performs better than the other major existing methods.
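The first two objectives are easy to make concrete: a cut-point set partitions a numeric feature into intervals, one objective counts the cut points, and another counts misclassifications under a simple per-interval rule. A toy single-feature illustration with made-up data and cuts:

```python
import numpy as np

feature = np.array([1.2, 1.9, 2.4, 3.1, 3.8, 4.5, 5.2, 6.0])
labels  = np.array([0,   0,   0,   1,   1,   1,   0,   0  ])
cuts    = np.array([2.8, 4.9])                 # one candidate cut-point set

bins = np.digitize(feature, cuts)              # interval index of each sample
# Majority vote per interval, standing in for the classification-error objective
errors = 0
for b in np.unique(bins):
    y = labels[bins == b]
    errors += (y != np.bincount(y).argmax()).sum()

print(f"cut points: {len(cuts)}, classification errors: {errors}")   # 2, 0
```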
H.3.2.2. Computer vision
M. H. Khosravi
Abstract
Image segmentation is an essential and critical process in image processing and pattern recognition. In this paper, we propose a texture-based method to segment an input image into regions. In our method, an entropy-based texture map of the image is extracted, followed by a histogram equalization step to discriminate between different regions. Then, with the aim of eliminating unnecessary details and achieving more robustness against unwanted noise, a low-pass filter is used to smooth the image. Next, appropriate pixons are extracted and delivered to a fuzzy c-means clustering stage to obtain the final image segments. The results of applying the proposed method to several different images indicate better segmentation performance compared to other methods.
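The entropy-based texture map can be sketched directly: slide a window over the image and record the Shannon entropy of the gray-level histogram inside it, so textured regions score high and flat regions low. A minimal version (the window size and the random test image are illustrative):

```python
import numpy as np
from scipy.ndimage import generic_filter

def window_entropy(values):
    """Shannon entropy of the gray-level histogram inside one window."""
    counts = np.bincount(values.astype(int), minlength=256)
    p = counts[counts > 0] / values.size
    return -(p * np.log2(p)).sum()

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
texture_map = generic_filter(img.astype(float), window_entropy, size=9)
# Rescale to 0..255 so a histogram equalization step can follow
span = np.ptp(texture_map) + 1e-12
texture_map = (255 * (texture_map - texture_map.min()) / span).astype(np.uint8)
print(texture_map.shape, texture_map.min(), texture_map.max())
```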
H.5. Image Processing and Computer Vision
F. Abdali-Mohammadi; A. Poorshamam
Abstract
Accurate detection of retinal landmarks, such as the optic disc, is an important step in computer-aided diagnosis frameworks. This paper presents an efficient method for automatically detecting the center of the optic disc and estimating its boundary. The center and initial diameter of the optic disc are estimated by an ANN classifier, which uses visual features of the vessels and their background tissue to classify the extracted main retinal vessels into two groups: vessels inside and vessels outside the optic disc. To this end, the average intensity values and standard deviations of the RGB channels, the average width and orientation of the vessels, and the density of the detected vessels and their junction points in a window around each central pixel of the main vessels are employed. The center of the detected vessels belonging to the optic disc region is adopted as the optic disc center, and their average extent in the vertical and horizontal directions is selected as the initial diameter of the optic disc circle. The exact boundary of the optic disc is then extracted using radial analysis of the initial circle. The performance of the proposed method is measured on the publicly available DRIONS, DRIVE, and DIARETDB1 databases and compared with several state-of-the-art methods. The proposed method shows a much higher mean overlap (70.6%) in the same range of detection accuracy (97.7%) and center distance (12 pixels). The average sensitivity and predictive values of the proposed optic disc detection method are 80.3% and 84.6%, respectively.
Meysam Alikhani; Mohammad Ahmadi Livani
Abstract
Mobile ad-hoc networks (MANETs), in contrast to other networks, are more vulnerable because of inherent properties such as dynamic topology and the absence of infrastructure. Therefore, a considerable challenge for these networks is developing a method that can detect anomalies with high accuracy as the network topology changes. In this paper, two methods are proposed for dynamic anomaly detection in MANETs, named IPAD and IAPAD. In both methods, the anomaly detection procedure consists of three main phases: training, detection, and updating. In the training phase of IPAD, the normal profile is created from the normal feature vectors using principal component analysis. In the detection phase, during each time window, anomalous feature vectors are identified based on their projection distance from the first global principal component. In the updating phase, at the end of each time window, the normal profile is updated using the normal feature vectors from some previous time windows and incremental principal component analysis. IAPAD is similar to IPAD, except that each node uses an approximate first global principal component to identify anomalous feature vectors, and the normal profile is updated using approximate singular descriptions from some previous time windows. Simulation results using the NS2 simulator for several routing attacks show that the average detection rate and average false alarm rate are 95.14% and 3.02% for IPAD, and 94.20% and 2.84% for IAPAD, respectively.
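The detection rule in both methods comes down to the distance of a feature vector from the line spanned by the first principal component of normal traffic. A compact sketch of that rule; the data, the 99th-percentile threshold, and the anomaly vector are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (200, 5)) @ np.diag([3, 1, 1, 1, 1])  # training windows

mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pc1 = vt[0]                                   # first global principal component

def risk(x):
    """Projection distance of a feature vector from the first-PC line."""
    d = x - mean
    return np.linalg.norm(d - (d @ pc1) * pc1)

threshold = np.quantile([risk(x) for x in normal], 0.99)
anomaly = np.full(5, 8.0)                     # far from the normal profile
print(risk(anomaly) > threshold)              # True: flagged as anomalous
```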
Sh. Mehrjoo; M. Jasemi; A. Mahmoudi
Abstract
In this paper, after a general literature review on the concept of the efficient frontier (EF), an important inadequacy of variance-based models for deriving EFs is exemplified, together with the strong need for an alternative risk measure. In this regard, the first-order lower partial moment (LPM) is chosen to replace variance as the risk measure in this study. Because of the particular shape of the proposed risk measure, one part of the paper is devoted to developing a mechanism for deriving the EF on the basis of the new model. The superiority of the new model over the old one is then shown, and the shape of the new EFs under different situations is investigated. Finally, it is concluded that applying the first-order LPM in financial models when deriving the EF is entirely sound and justifiable.
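The first-order LPM has a direct empirical form, LPM_1(tau) = (1/n) * sum_i max(tau - r_i, 0): only returns below the target tau are penalized, which is exactly the asymmetry that variance lacks. A small sketch with illustrative returns and target:

```python
import numpy as np

def lpm1(returns, target):
    """First-order lower partial moment: mean shortfall below the target."""
    return np.maximum(target - returns, 0.0).mean()

returns = np.array([0.05, -0.02, 0.11, -0.07, 0.03, 0.01])
print(lpm1(returns, target=0.0))   # 0.015: only the two losses are penalized
print(np.var(returns))             # variance also counts upside deviations
```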
A.1. General
F. Alibakhshi; M. Teshnehlab; M. Alibakhshi; M. Mansouri
Abstract
The stability of the learning rate in neural network identifiers and controllers is one of the challenging issues that attracts great interest from neural network researchers. This paper suggests an adaptive gradient descent algorithm with stable learning laws for a modified dynamic neural network (MDNN) and studies the stability of this algorithm. A stable learning algorithm for the parameters of the MDNN is also proposed. With the proposed method, constraints on the learning rate are obtained. Lyapunov stability theory is applied to study the proposed algorithm and guarantees the stability of the learning process. In the proposed method, the learning rate can be calculated online, providing an adaptive learning rate for the MDNN structure. Simulation results are given to validate the approach.
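One standard way such analyses bound the step size, offered here only as a generic illustration and not as the paper's specific learning law, is to shrink the rate as the gradient grows, e.g. eta_t = eta_0 / (1 + ||g_t||^2):

```python
import numpy as np

def adaptive_gd(grad, w0, eta0=0.2, steps=500):
    """Gradient descent with an online, gradient-dependent learning rate."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        g = grad(w)
        eta = eta0 / (1.0 + g @ g)     # rate shrinks when gradients are large
        w -= eta * g
    return w

# Toy quadratic: minimize 0.5 * w' A w, whose gradient is A w
A = np.array([[4.0, 0.0], [0.0, 1.0]])
w = adaptive_gd(lambda w: A @ w, w0=[3.0, -2.0])
print(w)                               # approaches the minimizer [0, 0]
```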
D.3. Data Storage Representations
E. Azhir; N. Daneshpour; Sh. Ghanbari
Abstract
Technology assessment and selection have a substantial impact on organizational procedures regarding technology transfer. Technological decisions are usually made by a group of experts, and integrating their viewpoints into a single decision can be quite complex. Today, operational databases and data warehouses exist to manage and organize data with specific features; hence, a decision-aid approach is essential. The process of developing data warehouses involves time-consuming steps, complex queries, slow query response rates, and limited functions, which is also true for operational databases. In this regard, fuzzy multi-criteria procedures for choosing efficient data sources (data warehouses versus traditional relational databases) based on organizational requirements are addressed in this paper. In proposing an appropriate selection framework, the paper compares a triangular fuzzy number (TFN) based framework with the fuzzy analytic hierarchy process (AHP), considering data source models, business logic, data access, storage, and security. Results show that the two procedures rank data sources in a similar manner, supporting accurate decision-making.
H.3. Artificial Intelligence
M. Kurmanji; F. Ghaderi
Abstract
Despite considerable advances in recognizing hand gestures from still images, there are still many challenges in classifying hand gestures in videos. The latter task brings additional difficulties, including higher computational complexity and the arduous task of representing temporal features. Hand movement dynamics, represented by temporal features, have to be extracted by analyzing all the frames of a video. So far, both 2D and 3D convolutional neural networks have been used to handle the temporal dynamics of video frames. 3D CNNs can extract changes across consecutive frames and tend to be more suitable for video classification; however, they usually require more computation time. On the other hand, using techniques like tiling, it is possible to aggregate all the frames into a single matrix while preserving the temporal and spatial features. In this way, 2D CNNs, which are inherently simpler than 3D CNNs, can be used to classify the video instances. In this paper, we compare 2D and 3D CNNs for representing temporal features and classifying hand gesture sequences. Additionally, using a two-stage, two-stream architecture, we efficiently combine the color and depth modalities and the 2D and 3D CNN predictions. The effect of different types of augmentation techniques is also investigated. Our results confirm that appropriate usage of 2D CNNs outperforms a 3D CNN implementation on this task.
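The tiling trick is just a rearrangement: the T frames of a clip are laid out as one large 2D mosaic so that a standard 2D CNN sees every time step at once. A NumPy sketch of that rearrangement, with illustrative frame counts and sizes:

```python
import numpy as np

def tile_frames(frames, grid):
    """Arrange T frames of shape (H, W) into a (rows*H, cols*W) mosaic."""
    rows, cols = grid
    t, h, w = frames.shape
    assert t == rows * cols, "grid must hold exactly all frames"
    return (frames.reshape(rows, cols, h, w)
                  .transpose(0, 2, 1, 3)       # interleave rows of frames
                  .reshape(rows * h, cols * w))

video = np.random.rand(16, 32, 32)         # 16 grayscale frames of a gesture clip
mosaic = tile_frames(video, grid=(4, 4))   # one 128x128 input for a 2D CNN
print(mosaic.shape)                        # (128, 128)
```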
H.3.15.3. Evolutionary computing and genetic algorithms
Sh. Lotfi; F. Karimi
Abstract
In many real-world applications, optimization problems with conflicting objectives are very common. In this paper, we employ the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), a recently developed method, together with Tabu Search (TS), to obtain a new approach for solving multi-objective optimization problems (MOPs) with two or three conflicting objectives. This hybrid algorithm, named MOEA/D-TS, uses the parallel computing capacity of MOEA/D along with the neighborhood-search strength of TS to discover Pareto-optimal solutions. Our goal is to exploit the advantages of evolutionary algorithms and TS in an integrated method that covers the entire Pareto front with uniformly distributed solutions. To evaluate the capabilities of the proposed method, its performance on various metrics is compared with SPEA, COMOEATS, and SPEA2TS on the well-known Zitzler-Deb-Thiele (ZDT) test suite and on DTLZ test functions with separable objective functions. According to the experimental results, the proposed method significantly outperforms the previous algorithms and produces fully satisfactory results.
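MOEA/D's decomposition step can be made concrete with the common Tchebycheff scalarizing function, g(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i|, which turns one MOP into many single-objective subproblems, one per weight vector. A minimal sketch with illustrative objective vectors and weights:

```python
import numpy as np

def tchebycheff(F, weights, z_star):
    """Tchebycheff scalarization used by MOEA/D subproblems."""
    return np.max(weights * np.abs(F - z_star), axis=-1)

# Objective vectors of two candidate solutions, and the ideal point z*
F = np.array([[0.2, 0.9],
              [0.6, 0.4]])
z_star = np.array([0.0, 0.0])

# Each weight vector defines one subproblem along the Pareto front
for lam in np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]):
    g = tchebycheff(F, lam, z_star)
    print(lam, "-> preferred candidate:", g.argmin())
```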
B.3. Communication/Networking and Information Technology
Z. Shaeiri; J. Kazemitabar; Sh. Bijani; M. Talebi
Abstract
As fraudsters understand the detection time window and act fast, real-time fraud management systems have become necessary in the telecommunication industry. In this work, by analyzing traces collected from a nationwide cellular network over a period of one month, an online behavior-based anomaly detection system is developed. Over time, users' interactions with the network provide a vast amount of usage data. These usage data are modeled into profiles by which users can be identified. A statistical model is proposed that assigns a risk score to each incoming record, revealing its deviation from the normal behavior stored in the profiles. Based on the amount of this deviation, a decision is made to flag the record as normal or anomalous. If the activity is normal, the associated profile is updated; otherwise, the record is flagged as anomalous and considered for further investigation. For handling the big dataset and implementing the methodology, we used the Apache Spark engine, an open-source, fast, general-purpose cluster computing system for big data processing and analysis. Experimental results show that the proposed approach can accurately detect deviations from normal behavior and can be exploited for detecting anomalous patterns.
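The per-record scoring can be pictured as a running behavioral profile with a z-score-style deviation test: low-risk records fold back into the profile, high-risk ones are flagged. A plain-Python sketch of that loop; the features, threshold, and update rate are illustrative, and the deployed system runs this logic on Apache Spark:

```python
import numpy as np

profile = {"mean": np.array([30.0, 5.0]),    # e.g. daily call minutes, SMS count
           "std": np.array([10.0, 2.0])}

def risk(record):
    """Deviation of a usage record from the stored behavioral profile."""
    z = np.abs(record - profile["mean"]) / profile["std"]
    return z.max()

def process(record, threshold=3.0, alpha=0.05):
    if risk(record) > threshold:
        return "anomaly"                      # held out for investigation
    # Normal: fold the record into the profile (exponential moving average)
    profile["mean"] = (1 - alpha) * profile["mean"] + alpha * record
    return "normal"

print(process(np.array([33.0, 6.0])))         # normal -> profile updated
print(process(np.array([180.0, 40.0])))       # anomaly -> flagged
```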
F.4.17. Survival analysis
S. Miri Rostami; M. Ahmadzadeh
Abstract
Applying data mining methods as a decision support system is of great benefit in predicting the survival of new patients. It also has great potential for health researchers investigating the relationship between risk factors and cancer survival. However, due to the imbalanced nature of datasets associated with breast cancer survival, the accuracy of survival prognosis models is a challenging issue for researchers. This study aims to develop a predictive model for the 5-year survivability of breast cancer patients and to discover relationships between certain predictive variables and survival. The dataset was obtained from the SEER database. First, the effectiveness of two synthetic oversampling methods, borderline-SMOTE and the density-based synthetic oversampling (DSO) method, is investigated to address the class imbalance problem. Then, a combination of particle swarm optimization (PSO) and correlation-based feature selection (CFS) is used to identify the most important predictive variables. Finally, to build a predictive model, three classifiers, the C4.5 decision tree, Bayesian network, and logistic regression, are applied to the cleaned dataset. Assessment metrics such as accuracy, sensitivity, specificity, and G-mean are used to evaluate the performance of the proposed hybrid approach, and the area under the ROC curve (AUC) is used to evaluate the performance of the feature selection method. Results show that among all combinations, DSO + PSO_CFS + C4.5 achieves the best performance in terms of accuracy, sensitivity, G-mean, and AUC, with values of 94.33%, 0.930, 0.939, and 0.939, respectively.
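The oversampling stage can be illustrated with the borderline-SMOTE implementation in imbalanced-learn, which synthesizes minority samples near the class boundary; the data here are synthetic, and DSO has no stock implementation, so it is not shown:

```python
from collections import Counter

from imblearn.over_sampling import BorderlineSMOTE
from sklearn.datasets import make_classification

# Imbalanced stand-in for the survival dataset: roughly 10% positive class
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))                 # e.g. {0: ~900, 1: ~100}

X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))             # classes balanced for training
```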
M. Abdollahi; M. Aliyari Shoorehdeli
Abstract
There are various automatic programming models inspired by evolutionary computation techniques. Because of the importance of devising automatic mechanisms to explore the complicated search spaces of mathematical problems where numerical methods fail, evolutionary computation is widely studied and applied to solve real-world problems. One well-known optimization algorithm is the shuffled frog leaping algorithm (SFLA), inspired by the way frogs search their environment both locally and globally to find the largest quantity of available food. The results of SFLA show that it is competitively effective at solving problems. In this paper, Shuffled Frog Leaping Programming (SFLP), inspired by SFLA, is proposed as a novel automatic programming model for solving symbolic regression problems based on a tree representation. In SFLP, a new mechanism for improving the constant numbers in the tree structure is also proposed. In this way, different domains of mathematical problems can be addressed with the proposed method. To assess the performance of the solutions generated by SFLP, various experiments were conducted on a number of benchmark functions, and the results were compared with those of other evolutionary programming algorithms such as BBP, GSP, GP, and many variants of GP.
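The SFLA move that SFLP inherits is simple to state: within each memeplex, the worst frog jumps a random fraction of the way toward the best frog, and the jump is kept only if it improves fitness. A numeric sketch of that update; in SFLP the jump is redefined over program trees, so the vectors here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def frog_jump(worst, best, max_step=2.0):
    """Worst frog moves a random fraction of the way toward the best frog."""
    step = rng.random() * (best - worst)
    return worst + np.clip(step, -max_step, max_step)

def fitness(x):
    return -np.sum(x ** 2)                # toy objective to maximize

memeplex = rng.uniform(-4, 4, (5, 3))     # 5 frogs, 3 dimensions
for _ in range(50):
    order = np.argsort([fitness(f) for f in memeplex])
    worst, best = order[0], order[-1]
    cand = frog_jump(memeplex[worst], memeplex[best])
    if fitness(cand) > fitness(memeplex[worst]):
        memeplex[worst] = cand            # accept only improving jumps
print(memeplex[np.argmax([fitness(f) for f in memeplex])])
```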