Journal of AI and Data Mining
http://jad.shahroodut.ac.ir/

Optimization of fuzzy membership functions via PSO and GA with application to quad rotor
http://jad.shahroodut.ac.ir/article_747_105.html
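The abstract below optimizes fuzzy membership functions with PSO and GA. As a rough, hypothetical illustration of the idea (not the paper's actual controller or cost function), here is a minimal global-best PSO loop tuning the (a, b, c) parameters of a triangular membership function against sampled target memberships; all constants and names are illustrative:

```python
import random

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cost(params, samples):
    """Squared error between a candidate MF and target memberships."""
    a, b, c = sorted(params)
    if a == b or b == c:                      # degenerate triangle: penalize
        return float("inf")
    return sum((tri_mf(x, a, b, c) - t) ** 2 for x, t in samples)

def pso(samples, n_particles=30, iters=100, seed=1):
    """Plain global-best PSO over the three MF parameters."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 2.0) for _ in range(3)] for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: cost(p, samples))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(3):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i], samples) < cost(pbest[i], samples):
                pbest[i] = pos[i][:]
                if cost(pbest[i], samples) < cost(gbest, samples):
                    gbest = pbest[i][:]
    return sorted(gbest)

# Target memberships sampled from a triangle with (a, b, c) = (0, 0.5, 1)
samples = [(x / 10, tri_mf(x / 10, 0.0, 0.5, 1.0)) for x in range(11)]
a, b, c = pso(samples)
```

In the paper the fitness would instead come from simulating the quad rotor under the candidate controller; the squared-error target here merely stands in for that evaluation.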
The quad rotor is a renowned underactuated Unmanned Aerial Vehicle (UAV) with widespread military and civilian applications. Despite its simple structure, the vehicle suffers from inherent instability. Therefore, control designers always face a formidable challenge in stabilization and control. In this paper, the fuzzy membership functions of the quad rotor's fuzzy controllers are optimized using nature-inspired algorithms, namely Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA). Finally, the results of the proposed methods are compared, and a trajectory is defined to verify the effectiveness of the fuzzy controllers designed with the better-performing algorithm.
Tue, 28 Feb 2017 20:30:00 +0100

Sub-transmission sub-station expansion planning based on bacterial foraging optimization algorithm
http://jad.shahroodut.ac.ir/article_734_105.html
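The abstract below applies the Bacterial Foraging Optimization Algorithm (BFOA) to substation expansion planning. As a hedged, toy-scale sketch of BFOA's core chemotaxis step only (tumble in a random direction, then swim while the cost keeps improving), with a stand-in cost function rather than any real planning objective:

```python
import math
import random

def chemotaxis(cost, dim, bacteria=10, steps=30, swim_len=4, step_size=0.1, seed=0):
    """Minimal BFOA chemotaxis sketch: each bacterium tumbles in a random
    direction, then keeps swimming along it while the cost improves."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(bacteria)]
    for _ in range(steps):
        for b in pop:
            # tumble: pick a random unit direction
            d = [rng.gauss(0.0, 1.0) for _ in range(dim)]
            norm = math.sqrt(sum(x * x for x in d)) or 1.0
            d = [x / norm for x in d]
            last = cost(b)
            for _ in range(swim_len):          # swim while improving
                cand = [bi + step_size * di for bi, di in zip(b, d)]
                if cost(cand) < last:
                    b[:] = cand
                    last = cost(cand)
                else:
                    break
    return min(pop, key=cost)

# Toy stand-in "planning cost": squared distance to a hypothetical optimum
best = chemotaxis(lambda v: sum(x * x for x in v), dim=2)
```

The full algorithm adds reproduction and elimination-dispersal phases, and the paper's cost would encode substation sizes, locations, and reliability constraints; those are omitted here.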
In recent years, significant research efforts have been devoted to the optimal planning of power systems. Substation Expansion Planning (SEP), as a sub-problem of power system planning, consists of finding the most economical solution with the optimal location and size of future substations and/or feeders to meet the future load demand. The large number of design variables and the combination of discrete and continuous variables make substation expansion planning a very challenging problem. So far, various methods have been presented to solve such a complicated problem. Since the Bacterial Foraging Optimization Algorithm (BFOA) yields proper results in power system studies, and it has not yet been applied to SEP at the sub-transmission voltage level, this paper develops a new BFOA-based method to solve the Sub-Transmission Substation Expansion Planning (STSEP) problem. The technique discussed in this paper uses the BFOA to simultaneously optimize the sizes and locations of both the existing and newly installed substations and feeders while considering reliability constraints. To clarify the capabilities of the presented method, two test systems (a typical network and a real one) are considered, and the results of applying GA and BFOA to these networks are compared. The simulation results demonstrate that the BFOA has the potential to find better results than the other algorithm under the same conditions. Moreover, fast convergence, consideration of real-world network limitations as problem constraints, and simplicity of application to real networks are the main features of the proposed method.
Tue, 28 Feb 2017 20:30:00 +0100

Iris localization by means of adaptive thresholding and Circular Hough Transform
http://jad.shahroodut.ac.ir/article_731_105.html
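The abstract below localizes the iris by applying a circular Hough transform to thresholded boundary pixels. A minimal, hypothetical sketch of a fixed-radius circular Hough accumulator on synthetic edge points (a real pipeline would also sweep over candidate radii and use an image library's implementation):

```python
import math

def hough_circle(edge_points, radius, size):
    """Accumulate votes for circle centers at a fixed radius: each edge
    point votes for every center lying `radius` away from it."""
    acc = [[0] * size for _ in range(size)]
    for (x, y) in edge_points:
        for t in range(360):
            a = int(round(x - radius * math.cos(math.radians(t))))
            b = int(round(y - radius * math.sin(math.radians(t))))
            if 0 <= a < size and 0 <= b < size:
                acc[b][a] += 1
    # the best center is the accumulator cell with the most votes
    best = max((acc[b][a], (a, b)) for b in range(size) for a in range(size))
    return best[1]

# Synthetic iris boundary: points on a circle of radius 8 centered at (15, 12)
pts = [(15 + round(8 * math.cos(math.radians(t))),
        12 + round(8 * math.sin(math.radians(t)))) for t in range(0, 360, 10)]
center = hough_circle(pts, radius=8, size=32)
```

With clean boundary pixels the accumulator peak falls at (or within a pixel or two of) the true center, which is exactly why the paper removes boundary pixels outside the sclera before voting.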
In this paper, a new iris localization method for mobile devices is presented. Our system applies both an intensity threshold and a saturation threshold to the captured eye images to determine the iris boundary and the sclera area, respectively. Estimated iris boundary pixels that fall outside the sclera are removed; the remaining pixels mainly belong to the iris boundary inside the sclera. Then, the circular Hough transform is applied to these iris boundary pixels in order to localize the iris. Experiments were performed on 60 iris images taken with an HTC mobile device from 10 different persons, with both left- and right-eye images available per person. We also evaluated the proposed algorithm on the MICHE datasets, which include images captured with an iPhone 5, a Samsung Galaxy S4, and a Samsung Galaxy Tab 2. Experimental evaluation shows that the proposed system can successfully localize the iris in the tested images.
Tue, 28 Feb 2017 20:30:00 +0100

A stack-based chaotic algorithm for encryption of colored images
http://jad.shahroodut.ac.ir/article_735_105.html
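The abstract below drives a stack of image quadrants with a chaotic series in {0, 1, 2, 3}. As a simplified sketch of just that traversal (using the logistic map for the series; the paper's per-pixel Chen-system encryption stream and the actual pixel substitution are omitted):

```python
def logistic(x, n, r=3.99):
    """Generate n chaotic values in (0, 1) with the logistic map."""
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def quadrants(x0, y0, w, h):
    """Split a w x h block at (x0, y0) into four sub-blocks."""
    hw, hh = w // 2, h // 2
    return [(x0, y0, hw, hh), (x0 + hw, y0, w - hw, hh),
            (x0, y0 + hh, hw, h - hh), (x0 + hw, y0 + hh, w - hw, h - hh)]

def chaotic_order(n, key=0.3):
    """Visit the pixels of an n x n image by repeatedly popping a block,
    splitting it into quadrants, and pushing the quadrants back in a
    chaos-driven order until the stack is empty."""
    chaos = iter(logistic(key, 16 * n * n))
    stack = [(0, 0, n, n)]
    order = []
    while stack:
        x0, y0, w, h = stack.pop()
        if w == 0 or h == 0:
            continue
        if w == 1 and h == 1:
            order.append((x0, y0))             # a single pixel: emit it
            continue
        quads = quadrants(x0, y0, w, h)
        start = int(next(chaos) * 4) % 4       # chaotic value in {0, 1, 2, 3}
        for i in range(4):                     # rotate the push order
            stack.append(quads[(start + i) % 4])
    return order

order = chaotic_order(4)
```

The result is a key-dependent permutation of the pixel positions; an encryptor would then combine each visited pixel with a second chaotic keystream, as the abstract describes.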
In this paper, a new method is presented for the encryption of colored images. The method is based on a stack data structure and chaos, which make the image encryption algorithm more efficient and robust. In the proposed algorithm, a series of numbers ranging from 0 to 3 is generated using the chaotic logistic system. The original image is divided into four sub-images, which are pushed onto the stack in an order determined by the next number in the series. In the next step, the first element of the stack (one of the four sub-images) is popped and divided into four further parts; based on the next number in the series, these four sub-images are pushed onto the stack again. This procedure is repeated until the stack is empty. During this process, each pixel unit is encrypted using another series of chaotic numbers (generated by the Chen chaotic system), and the method is repeated until all pixels of the plain image are encrypted. Finally, several extensive simulations on the well-known USC datasets were conducted to show the efficiency of this encryption algorithm. The tests performed show that the proposed method has a very large key space and possesses a high-entropy distribution. Consequently, it outperforms the other competing algorithms in terms of security.
Tue, 28 Feb 2017 20:30:00 +0100

Feature extraction of hyperspectral images using boundary semi-labeled samples and hybrid criterion
http://jad.shahroodut.ac.ir/article_787_105.html
Feature extraction is a very important preprocessing step for the classification of hyperspectral images. The linear discriminant analysis (LDA) method fails to work in small sample size situations, and it has poor efficiency for non-Gaussian data. Moreover, LDA is optimized by a global criterion, so it is not sufficiently flexible to cope with multi-modal distributed data. In this paper, we propose a new feature extraction method that uses boundary semi-labeled samples to solve the small sample size problem. The proposed method, called hybrid feature extraction based on boundary semi-labeled samples (HFE-BSL), uses a hybrid criterion that integrates both local and global criteria for feature extraction; it is therefore robust and flexible. The experimental results on three real hyperspectral images show the good efficiency of HFE-BSL compared with some popular and state-of-the-art feature extraction methods.
Tue, 28 Feb 2017 20:30:00 +0100

Estimating scour below inverted siphon structures using stochastic and soft computing approaches
http://jad.shahroodut.ac.ir/article_757_105.html
This paper uses nonlinear regression, Artificial Neural Network (ANN), and Genetic Programming (GP) approaches to predict an important tangible issue, i.e., the scour dimensions downstream of inverted siphon structures. Dimensional analysis and nonlinear regression-based equations were proposed for estimating the maximum scour depth, the location of the scour hole, and the location and height of the dune downstream of the structures. In addition, the GP-based formulation results are compared with experimental results and other accurate equations. The analysis showed that the equations derived from the forward stepwise nonlinear regression method have correlation coefficients of R² = 0.962, 0.971, and 0.991, respectively, for the relative parameter of maximum scour depth (s/z), in comparison with the genetic programming (GP) and artificial neural network (ANN) models. Furthermore, the slope of the line fitted between computations and observations for the dimensionless parameters indicates the superiority of the ANN model, presenting a new achievement for the sediment engineering and scientific community.
Tue, 28 Feb 2017 20:30:00 +0100

The application of data mining techniques in manipulated financial statement classification: ...
http://jad.shahroodut.ac.ir/article_664_105.html
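The abstract below singles out "the Z-score" as the most distinctive indicator for separating correct from manipulated statements. Assuming this refers to Altman's classic Z-score (the abstract does not say which variant the authors used), a minimal sketch of its computation:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Altman's (1968) Z-score for public manufacturing firms:
    a weighted sum of five financial ratios."""
    a = working_capital / total_assets
    b = retained_earnings / total_assets
    c = ebit / total_assets
    d = market_equity / total_liabilities
    e = sales / total_assets
    return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e

# Entirely hypothetical balance-sheet figures (same currency units)
z = altman_z(working_capital=150, retained_earnings=300, ebit=120,
             market_equity=900, sales=1400, total_assets=1000,
             total_liabilities=600)
```

In Altman's original calibration, scores above roughly 2.99 indicate the "safe" zone and scores below roughly 1.81 the "distress" zone; a classifier would consume the raw score (or its zone) as one input feature alongside the other ratios.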
Predicting financially false statements to detect fraud in companies has been an increasing trend in recent studies. Manipulations in financial statements can be discovered by auditors when related financial records and indicators are analyzed in depth, combined with the auditors' experience, in order to create knowledge for a decision support system that classifies firms. Auditors may annotate firms' statements as "correct" or "incorrect" to encode their experience, and these annotations, together with the related indicators, can then be used in a learning process to generate a model. Once the model is trained and validated, it can be used to predict the class values of new firms. In this research, we attempted to demonstrate this benefit in the context of Turkish firms. The study aims at classifying financially correct and false statements of Turkish firms listed on Borsa İstanbul, using particular financial ratios as indicators of success or manipulation. The dataset was selected from the post-crisis period 2009 to 2013. Three classification methods commonly used in data mining were employed: decision tree, logistic regression, and artificial neural network. According to the results, although all three methods performed well, the artificial neural network had the best performance, outperforming the two classical methods. The common ground of the selected methods is that they pointed to the Z-score as the most distinctive indicator for classifying the financial statements under consideration.
Tue, 28 Feb 2017 20:30:00 +0100

Fractional Modeling and Analysis of Buck Converter in CCM mode operation
http://jad.shahroodut.ac.ir/article_738_0.html
In this paper, a fractional-order averaged model for a DC/DC buck converter operating in continuous conduction mode (CCM) is established. The DC/DC buck converter is one of the main components of the wind turbine system used in this research. Due to practical restrictions, the converter's input voltage and duty cycle were not available; therefore, the whole wind system was simulated in Matlab/Simulink, and the gathered data were used in the proposed trial-and-error method to find the fractional order of the converter. There is an obvious relationship between controller performance and the mathematical model: a more accurate model leads to a better controller.
Sat, 15 Oct 2016 20:30:00 +0100

Artificial neural networks, genetic algorithm and response surface methods: The energy ...
http://jad.shahroodut.ac.ir/article_782_105.html
In this study, the energy consumption of the food and beverage industries of Iran was investigated. The energy consumption in this sector was modeled using an artificial neural network (ANN), response surface methodology (RSM), and a genetic algorithm (GA). First, the input data to the model were calculated according to statistical sources, balance sheets, and the method proposed in this paper. Diesel and liquefied petroleum gas have, respectively, the highest and lowest shares of energy consumption among the carriers considered. For each of the evaluated energy carriers (diesel, kerosene, fuel oil, natural gas, electricity, liquefied petroleum gas, and gasoline), the best-fitting model was selected after averaging over runs of the developed models. Finally, the developed models, representing the energy consumption of the food and beverage industries by each energy carrier, were combined into a finalized model using the Simulink toolbox of Matlab. The data analysis indicated that the consumption of natural gas is increasing in Iran's food and beverage industries, whereas for fuel oil and liquefied petroleum gas a decreasing trend was estimated.
Tue, 28 Feb 2017 20:30:00 +0100

Using a new modified harmony search algorithm to solve multi-objective reactive power dispatch ...
http://jad.shahroodut.ac.ir/article_655_105.html
Optimal reactive power dispatch (ORPD) is a very important aspect of power system planning and is a highly nonlinear, non-convex optimization problem, because it involves both continuous and discrete control variables. Since the power system has inherent uncertainty, this paper presents both deterministic and stochastic models for the ORPD problem, in multi-objective and single-objective formulations, respectively. The deterministic model considers three main objectives of the ORPD problem: real power loss, voltage deviation, and a voltage stability index. In the stochastic model, uncertainty in the demand and in the equivalent availability of shunt reactive power compensators is investigated. To solve these models, a new modified harmony search algorithm (HSA) is proposed and implemented in single- and multi-objective forms. Like many other general-purpose optimization methods, the original HSA often becomes trapped in local optima. To cope with this, an efficient local search method called chaotic local search (CLS) and a global search operator are incorporated into the internal architecture of the original HSA to improve its ability to find the best solution, because the ORPD problem is very complex, with different types of continuous and discrete constraints, i.e., excitation settings of generators, sizes of fixed capacitors, tap positions of tap-changing transformers, and the amount of reactive compensation devices. Moreover, a fuzzy decision-making method is employed to select the best solution from the set of Pareto solutions.
Tue, 28 Feb 2017 20:30:00 +0100

Non-zero probability of nearest neighbor searching
http://jad.shahroodut.ac.ir/article_733_105.html
Nearest Neighbor (NN) searching is a challenging problem in data management and has been widely studied in data mining, pattern recognition, and computational geometry. The goal of NN searching is to efficiently report the data nearest to a given query object. Most studies assume that both the data and the query are precise; however, in real applications of NN searching, such as tracking and locating services, GIS, and data mining, both of them may be imprecise. In this situation, a natural way to handle the issue is to report the data that have a nonzero probability of being the nearest neighbor of a given query, called nonzero nearest neighbors. Formally, let P be a set of n uncertain points modeled by regions. We first consider the following variation of the NN searching problem under uncertainty: if both the query and the data are uncertain points modeled by distinct unit segments parallel to the x-axis, we propose an efficient algorithm that reports the nonzero nearest neighbors under the Manhattan metric with O(n² α(n²)) preprocessing and O(log n + k) query time, where α(·) is the extremely slowly growing functional inverse of Ackermann's function. Finally, for segments of arbitrary length parallel to the x-axis, we propose an approximation algorithm that reports the nonzero nearest neighbors with maximum error L in O(n² α(n²)) preprocessing and O(log n + k) query time, where L is the length of the query.
Tue, 28 Feb 2017 20:30:00 +0100

Robust state estimation in power systems using pre-filtering measurement data
http://jad.shahroodut.ac.ir/article_722_105.html
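The abstract below relies on the prediction property of the Extended Kalman Filter (EKF). As a stripped-down, hypothetical illustration of the predict/update cycle at its core, here is a scalar linear Kalman step with toy noise parameters (a power-system EKF would use the nonlinear network equations and their Jacobians instead of the constants f and h):

```python
def kalman_step(x, p, z, f=1.0, h=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x, p: state estimate and its variance; z: new measurement;
    f, h: state-transition and measurement gains; q, r: noise variances."""
    # predict: propagate the state and inflate the variance by process noise
    x_pred = f * x
    p_pred = f * p * f + q
    # update: blend prediction and measurement via the Kalman gain
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1 - k * h) * p_pred
    return x_new, p_new

# Track a constant true state of 1.0 through noisy measurements
x, p = 0.0, 1.0
for z in [1.02, 0.98, 1.01, 0.99, 1.00]:
    x, p = kalman_step(x, p, z)
```

The prediction step is also what makes bad-data screening possible: a measurement far from the predicted value h·x_pred can be flagged before it ever enters the update, which is the spirit of the pre-filtering in this paper.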
State estimation is the foundation of any control and decision making in power networks. The first requirement for a secure network is a precise and safe state estimator, so that decisions are based on accurate knowledge of the network status. This paper introduces a new estimator that is able to detect bad data with few calculations, without the need for iterations or estimation residual calculation. The estimator is equipped with a filter formed at different times according to a Principal Component Analysis (PCA) of the measurement data. In addition, the proposed estimator employs the dynamic relationships of the system and the prediction property of the Extended Kalman Filter (EKF) to estimate the states of the network quickly and precisely. It therefore makes real-time monitoring of the power network possible, and the proposed dynamic model enables the estimator to estimate the states of a large-scale system online. The state estimation results of the proposed algorithm for an IEEE 9-bus system show that, even in the presence of bad data, the estimator provides a valid and precise estimation of system states and tracks the network with appropriate speed.
Tue, 28 Feb 2017 20:30:00 +0100

Prediction of maximum surface settlement caused by earth pressure balance shield tunneling ...
http://jad.shahroodut.ac.ir/article_748_105.html
Due to urbanization and population increase, the need for metro tunnels has considerably increased in urban areas. Estimating the surface settlement caused by tunnel excavation is an important task, especially where tunnels are excavated in urban areas or beneath important structures. Many models have been established for this purpose by extracting the relationship between the settlement and the factors that influence it. In this paper, the Random Forest (RF) is introduced and investigated for the prediction of the maximum surface settlement caused by EPB shield tunneling. Various factors that affect this settlement, including geometrical, geological, and shield operational parameters, were considered. The results of the RF model have been compared with the available artificial neural network (ANN) model. It is shown that the proposed RF model provides more accurate results than the ANN model proposed in the literature.
Tue, 28 Feb 2017 20:30:00 +0100

Direct adaptive fuzzy control of flexible-joint robots including actuator dynamics using ...
http://jad.shahroodut.ac.ir/article_739_105.html
In this paper, a novel direct adaptive fuzzy system is proposed to control flexible-joint robots, including actuator dynamics. The design includes two interior loops: the inner loop controls the motor position using the proposed approach, while the outer loop controls the joint angle of the robot using a PID control law. One novelty of this paper is the use of a PSO algorithm for optimizing the control design parameters to achieve a desired performance. It is worth noting that, for practical reasons, only the available feedback signals are used to form the control law; this is beneficial for industrial applications where real-time computation is costly. The proposed control approach has a fast response with good tracking performance under well-behaved control efforts. Stability is guaranteed in the presence of both structured and unstructured uncertainties, and as a result all system states remain bounded. Simulation results on a two-link flexible-joint robot show the efficiency of the proposed scheme.
Tue, 28 Feb 2017 20:30:00 +0100

English-Persian plagiarism detection based on a semantic approach
http://jad.shahroodut.ac.ir/article_770_0.html
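The abstract below back-translates Persian text into English and then compares it against English sources with monolingual methods. The paper's comparison is semantic and Solr-backed; as a deliberately simpler, hypothetical stand-in for the final matching step, here is a bag-of-words Jaccard check between a back-translation and a candidate source (the threshold is illustrative, not from the paper):

```python
def jaccard(a_tokens, b_tokens):
    """Jaccard similarity between two token sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def is_plagiarised(suspect_back_translation, source, threshold=0.5):
    """Flag a suspect passage whose back-translation overlaps the English
    source beyond a hypothetical similarity threshold."""
    return jaccard(suspect_back_translation.lower().split(),
                   source.lower().split()) >= threshold

src = "the quick brown fox jumps over the lazy dog"
back = "the quick brown fox jumped over a lazy dog"
flagged = is_plagiarised(back, src)
```

Because back-translation rarely reproduces the source word for word ("jumps" vs. "jumped" above), set overlap rather than exact matching is what makes the comparison tolerant, and the quality of the translation tool directly moves the score, consistent with the 98.82% vs. 56.9% gap the abstract reports.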
Plagiarism, which is defined as "the wrongful appropriation of other writers' or authors' works and ideas without citing or informing them", poses a major challenge to the publication and spread of knowledge. Plagiarism has been placed in four categories: direct, paraphrasing (rewriting), translation, and combinatory. This paper addresses translational plagiarism, which is sometimes referred to as cross-lingual plagiarism: writers meld a translation with their own words and ideas. Building on monolingual plagiarism detection methods, this paper ultimately intends to find a way to detect cross-lingual plagiarism. A framework called Multi-Lingual Plagiarism Detection (MLPD) is presented for cross-lingual plagiarism analysis, with the ultimate objective of detecting plagiarism cases. English is the reference language, and Persian materials are back-translated using translation tools. The data for the assessment of MLPD were obtained from the English-Persian Mizan parallel corpus. Apache Solr was also applied to crawl and index the documents. The mean accuracy of the proposed method was 98.82% when employing highly accurate translation tools, which indicates the high accuracy of the proposed method, while with the Google translation service the mean accuracy was 56.9%. These tests demonstrate that improved translation tools enhance the accuracy of the proposed method.
Sat, 05 Nov 2016 20:30:00 +0100

Improved COA with chaotic initialization and intelligent migration for data clustering
http://jad.shahroodut.ac.ir/article_783_0.html
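The abstract below replaces the random initial population of the classical Cuckoo Optimization Algorithm with a chaos-based one. A minimal sketch of that idea: iterate the logistic map and rescale its iterates into the search bounds, so the initial population is deterministic given the chaotic seed (the map and constants here are a common choice, not necessarily the paper's):

```python
def chaotic_population(pop_size, dim, lower, upper, x0=0.7, r=4.0):
    """Initialize a population by iterating the logistic map instead of a
    uniform RNG; successive iterates are mapped into [lower, upper]."""
    x = x0
    pop = []
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = r * x * (1 - x)          # logistic map iterate, stays in [0, 1]
            ind.append(lower + x * (upper - lower))
        pop.append(ind)
    return pop

pop = chaotic_population(pop_size=5, dim=3, lower=-10.0, upper=10.0)
```

The appeal over uniform sampling is that chaotic sequences are ergodic yet fully reproducible from the seed, which tends to spread the initial habitats over the search space without clustering.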
K-means is a well-known clustering algorithm. Besides advantages such as high speed and ease of use, it suffers from the problem of local optima. In order to overcome this problem, many clustering studies have been carried out. This paper presents a hybrid of an Extended Cuckoo Optimization Algorithm (ECOA) and K-means, called ECOA-K. The COA has advantages such as a fast convergence rate, intelligent operators, and simultaneous local and global search, which are the motivations for choosing this algorithm. In the Extended Cuckoo Optimization Algorithm, we have enhanced the operators of the classical version. The proposed operator for producing the initial population is based on a chaotic map, whereas in the classical version it is random. Moreover, in the revised algorithm, the number of eggs allocated to each cuckoo is based on its fitness. Another improvement is in the cuckoos' migration, which is performed with different deviation degrees. The proposed method is evaluated on several standard datasets from the UCI repository, and its performance is compared with those of Black Hole (BH), Big Bang Big Crunch (BBBC), Cuckoo Search Algorithm (CSA), the traditional Cuckoo Optimization Algorithm (COA), and K-means. The results are compared in terms of purity degree, coefficient of variance, convergence rate, and time complexity. The simulation results show that the proposed algorithm yields optimized solutions with a higher purity degree, a faster convergence rate, and greater stability than the other compared algorithms.
Sun, 13 Nov 2016 20:30:00 +0100

A multi-objective approach to fuzzy clustering using ITLBO algorithm
http://jad.shahroodut.ac.ir/article_784_0.html
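The abstract below optimizes two contradictory objectives: compactness within clusters (minimize) and separation between cluster centers (maximize). A minimal sketch of how those two objective functions could be computed for one candidate clustering (crisp rather than fuzzy memberships, for brevity):

```python
import math

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def compactness(points, labels, centers):
    """Mean distance of each point to its own cluster center (minimize)."""
    return sum(dist(p, centers[l]) for p, l in zip(points, labels)) / len(points)

def separation(centers):
    """Minimum pairwise distance between cluster centers (maximize)."""
    return min(dist(centers[i], centers[j])
               for i in range(len(centers)) for j in range(i + 1, len(centers)))

points = [(0, 0), (0, 1), (10, 10), (10, 11)]
labels = [0, 0, 1, 1]
centers = [(0, 0.5), (10, 10.5)]
comp = compactness(points, labels, centers)
sep = separation(centers)
```

A multi-objective optimizer such as MOITLBO would evaluate both values for every candidate solution and keep the non-dominated ones, since improving compactness (more, tighter clusters) typically degrades separation and vice versa.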
Data clustering is one of the most important areas of research in data mining and knowledge discovery. Recent research in this area has shown that the best clustering results can be achieved using multi-objective methods. In other words, assuming more than one criterion as objective functions for clustering data can measurably increase the quality of the clustering. In this study, a model with two contradictory objective functions is proposed, based on maximum data compactness within clusters (the degree of proximity of the data) and maximum cluster separation (the degree of remoteness of the clusters' centers). To solve this model, a recently proposed optimization method, the Multi-objective Improved Teaching-Learning-Based Optimization (MOITLBO) algorithm, is used. The algorithm is tested on several datasets, and its clusters are compared with the results of some single-objective algorithms. Furthermore, with respect to noise, a comparison of the performance of the proposed model with another multi-objective model shows that it is robust to noisy datasets and can thus be used efficiently for multi-objective fuzzy clustering.
Sun, 13 Nov 2016 20:30:00 +0100

Ensemble classification and extended feature selection for credit card fraud detection
http://jad.shahroodut.ac.ir/article_788_0.html
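The abstract below reports results in terms of accuracy, recall, and F-measure. As a small reference sketch of how those F-measure figures are derived from confusion-matrix counts (the counts below are hypothetical, not from the paper's dataset):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F-measure from confusion-matrix counts.
    tp: frauds correctly flagged; fp: false alarms; fn: missed frauds."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts: 80 frauds caught, 20 false alarms, 10 frauds missed
p, r, f = precision_recall_f1(tp=80, fp=20, fn=10)
```

For heavily imbalanced problems like fraud detection, the F-measure is preferred over raw accuracy because a classifier that never flags fraud can still score high accuracy; this is also why the abstract pairs it with cost-sensitive decision trees.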
Due to the rise of technology, the possibility of fraud in areas such as banking has increased. Credit card fraud is a crucial problem in banking, and its danger is ever increasing. This paper proposes an advanced data mining method that considers both feature selection and decision cost to enhance the accuracy of credit card fraud detection. After selecting the best and most effective features using an extended wrapper method, ensemble classification is performed. The extended feature selection approach includes prior feature filtering and a wrapper approach using the C4.5 decision tree. Ensemble classification using cost-sensitive decision trees is performed in a decision forest framework. A locally gathered fraud detection dataset is used to evaluate the proposed method. The proposed method is assessed using accuracy, recall, and F-measure as evaluation metrics and compared with basic classification algorithms, including ID3, J48, Naïve Bayes, Bayesian Network, and NBTree. Experiments show that, with the F-measure as the evaluation metric, the proposed approach yields a 1.8 to 2.4 percent performance improvement over the other classifiers.
Fri, 25 Nov 2016 20:30:00 +0100

A case study for application of fuzzy inference and data mining in structural health monitoring
http://jad.shahroodut.ac.ir/article_813_0.html
In this study, a system for monitoring the structural health of a bridge deck and predicting various possible damages to this section was designed, implemented, and investigated; it is based on measuring temperature and humidity with wireless sensor networks. A scaled model of a conventional medium-sized bridge (50 meters long, 10 meters high, with 2 piers) was examined for the purpose of this study. The method involves installing two sensor nodes, capable of measuring temperature and humidity, on both sides of the bridge deck. The temperature and humidity values collected by the system are received by LabVIEW-based software to be analyzed and stored in a database. The proposed SHM system is equipped with a novel method that applies data mining techniques to a database of the past few years' climatic conditions at the bridge's location, in order to predict the occurrence and severity of future damage. In addition, the system has several alarm levels based on an analysis of bridge conditions with a fuzzy inference method, so it can issue proactive and precise warnings, in terms of the location and severity of possible damage in the bridge deck, to support total productive maintenance (TPM) and proactive maintenance. Very low costs, increased efficiency of the bridge service, and reduced maintenance costs make this SHM system practical and applicable. The data and results related to all the mentioned subjects are thoroughly discussed.
Fri, 09 Dec 2016 20:30:00 +0100

A hybrid meta-heuristic algorithm based on imperialist competition algorithm
http://jad.shahroodut.ac.ir/article_823_0.html
Humans have always sought the best in all things, and this perfectionism has led to the creation of optimization methods. The goal of optimization is to determine the variables and find the best acceptable answer, subject to the limitations of the problem, such that the objective function is minimized or maximized. Meta-heuristics are one class of approximate optimization methods; inspired by nature, they usually search for the optimal solution. In recent years, much effort has been devoted to improving existing meta-heuristic algorithms or creating new ones, and one way to improve meta-heuristic methods is hybridization. In this paper, a hybrid optimization algorithm based on the imperialist competitive algorithm is presented. The ideas used are an assimilation operation with a variable parameter and a war function based on a mathematical model of real-world war. These changes increase the speed of finding the global optimum and reduce the number of search steps in comparison with other meta-heuristics. In the evaluations, in more than 80% of the test cases, the proposed algorithm was superior to the Imperialist Competitive Algorithm, the Social-Based Algorithm, the Cuckoo Optimization Algorithm, and the Genetic Algorithm.
Tue, 20 Dec 2016 20:30:00 +0100

Evaluation of Classifiers in Software Fault-Proneness Prediction
http://jad.shahroodut.ac.ir/article_825_0.html
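The abstract below balances imbalanced fault-proneness classes with the Synthetic Minority Oversampling Technique (SMOTE). A minimal, hypothetical sketch of SMOTE's core move: pick a minority sample, pick one of its k nearest minority neighbours, and create a synthetic sample at a random point on the segment between them (feature vectors and parameters below are toy values):

```python
import random

def smote(minority, n_new, k=2, seed=42):
    """Generate synthetic minority samples by interpolating between a
    minority sample and one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest minority neighbours of the chosen sample (by identity,
        # so the base sample itself is excluded)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: d2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()                  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

# Toy minority class: four fault-prone modules in a 2-D metric space
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3), (1.1, 1.1)]
new = smote(minority, n_new=3)
```

Because every synthetic point lies on a segment between two real minority samples, oversampling enriches the minority region without duplicating instances, which is why it counteracts the majority-class bias the abstract describes.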
The reliability of software depends on its fault-prone modules: the fewer fault-prone units software contains, the more we may trust it. Therefore, if we can predict the number of fault-prone modules in software, it is possible to judge its reliability. In predicting software fault-prone modules, one of the contributing features is software metrics, by which one can classify software modules into fault-prone and non-fault-prone ones. To make such a classification, we investigated 17 classification methods whose features (attributes) are software metrics (39 metrics) and whose instances are the software modules of 13 datasets reported by NASA. However, two important issues influence prediction accuracy when data mining methods are used: (1) selecting the best/most influential features (i.e., software metrics) when there is a wide diversity of them, and (2) instance sampling to balance the imbalanced instances; with two imbalanced classes, a classifier biases towards the majority class. Based on feature selection and instance sampling, we considered 4 scenarios in the appraisal of the 17 classification methods for predicting software fault-prone modules. To select features, we used Correlation-based Feature Selection (CFS), and to sample instances we used the Synthetic Minority Oversampling Technique (SMOTE). Empirical results showed that suitable sampling of software modules significantly influences the accuracy of predicting software reliability, but metric selection has no considerable effect on the prediction.
Tue, 27 Dec 2016 20:30:00 +0100

A sensor-based scheme for activity recognition in smart homes using Dempster-Shafer theory of ...
http://jad.shahroodut.ac.ir/article_845_0.html
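The abstract below fuses sensor-derived belief masses with Dempster-Shafer theory. A minimal sketch of Dempster's rule of combination for two mass functions over a tiny activity frame (the activity names and mass values are illustrative only; the paper derives its masses from a beta distribution):

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements are
    frozensets; mass landing on the empty intersection (conflict) is
    renormalized away."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors (opinion owners) reporting on the frame {cooking, sleeping}
m1 = {frozenset({"cooking"}): 0.8, frozenset({"cooking", "sleeping"}): 0.2}
m2 = {frozenset({"cooking"}): 0.6, frozenset({"cooking", "sleeping"}): 0.4}
m = combine(m1, m2)
```

Note how agreement concentrates mass: each sensor alone assigns 0.8 or 0.6 to "cooking", but their combination assigns it 0.92, with the remainder staying on the uncommitted whole frame. This mass-concentration behaviour is what lets a single-layered architecture resolve an activity from several weak sensor opinions.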
This paper proposes a scheme for activity recognition in sensor-based smart homes using the Dempster-Shafer theory of evidence. In this work, opinion owners and their belief masses are constructed from sensors and employed in a single-layered inference architecture. The belief masses are calculated using the beta probability distribution function. The frames of the opinion owners are derived automatically for activities, to achieve more flexibility and extensibility. Our method is verified via two experiments. In the first experiment, it is compared to a naïve Bayes approach and three ontology-based methods; our method outperforms the naïve Bayes classifier, with 88.9% accuracy. It is comparable to the ontology-based schemes, but since no manual ontology definition is needed, our method is more flexible and extensible than the previous ones. In the second experiment, a larger dataset is used, and our method is compared to three approaches based on naïve Bayes classifiers, hidden Markov models, and hidden semi-Markov models. Three features are extracted from the sensors' data and incorporated into the benchmark methods, yielding nine implementations. In this experiment, our method shows an accuracy of 94.2%, which in most cases outperforms the benchmark methods or is comparable to them.
Sat, 07 Jan 2017 20:30:00 +0100

Winner determination in combinatorial auctions using hybrid ant colony optimization and ...
http://jad.shahroodut.ac.ir/article_880_0.html
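The abstract below solves the Winner Determination Problem (WDP) with a hybrid ACO and multi-neighborhood local search. As a much simpler baseline to make the problem itself concrete (this is NOT the paper's algorithm, just a greedy heuristic over the same feasibility constraint that each item is sold at most once):

```python
def greedy_wdp(bids):
    """Greedy winner determination: consider bids in decreasing order of
    price per item and accept a bid if none of its items is already
    allocated. bids: list of (bundle_set, price)."""
    chosen, taken, revenue = [], set(), 0.0
    for bundle, price in sorted(bids, key=lambda b: b[1] / len(b[0]),
                                reverse=True):
        if not (bundle & taken):          # feasible: no item sold twice
            chosen.append((bundle, price))
            taken |= bundle
            revenue += price
    return chosen, revenue

# Toy auction over items {a, b, c}
bids = [({"a", "b"}, 10.0), ({"b", "c"}, 12.0), ({"a"}, 4.0), ({"c"}, 5.0)]
winners, revenue = greedy_wdp(bids)
```

Greedy allocations like this can be far from optimal in general (the problem is NP-hard), which is exactly the gap that metaheuristics such as ACO-MNLS, with its three complementary neighborhoods, aim to close.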
A combinatorial auction is an auction in which bidders have the choice to bid on bundles of items. The Winner Determination Problem (WDP) in combinatorial auctions is the problem of finding winning bids that maximize the auctioneer's revenue under the constraint that each item can be allocated to at most one bidder. The WDP is known to be NP-hard, with practical applications in electronic commerce, production management, game theory, and resource allocation in multi-agent systems. This has motivated the quest for approximate algorithms that are efficient in terms of both solution quality and computational time. This paper proposes a hybrid Ant Colony Optimization algorithm with a novel Multi-Neighborhood Local Search (ACO-MNLS) for solving the WDP in combinatorial auctions. Our proposed MNLS algorithm uses the fact that different neighborhoods in local search can generate different local optima for the WDP, and that the global optimum of the WDP is a local optimum with respect to any given neighborhood. Therefore, the proposed MNLS algorithm simultaneously explores a set of three different neighborhoods to obtain different local optima and to escape from local optima. Comparisons between ACO-MNLS, Genetic Algorithm (GA), Memetic Algorithm (MA), Stochastic Local Search (SLS), and Tabu Search (TS) on various benchmark problems confirm the efficiency of ACO-MNLS in terms of both solution quality and computational time.
Sun, 19 Feb 2017 20:30:00 +0100

Drought monitoring and prediction using k-nearest neighbor algorithm
http://jad.shahroodut.ac.ir/article_881_0.html
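The abstract below predicts drought from SPI values with k-nearest neighbor modeling. A minimal sketch of nearest-neighbour "analog" forecasting on a toy SPI-like series: find the historical windows most similar to the most recent window and average the values that followed them (window length, k, and the series are illustrative, not the paper's configuration):

```python
def knn_forecast(series, window=3, k=1):
    """Predict the next value of a series by locating the k historical
    windows closest (squared distance) to the most recent window and
    averaging the values that followed them."""
    target = series[-window:]
    candidates = []
    for i in range(len(series) - window):      # windows with a known successor
        w = series[i:i + window]
        d = sum((a - b) ** 2 for a, b in zip(w, target))
        candidates.append((d, series[i + window]))
    candidates.sort(key=lambda t: t[0])
    nearest = candidates[:k]
    return sum(v for _, v in nearest) / len(nearest)

# Toy SPI-like series with a repeating wet/dry pattern of period 3
spi = [0.5, -0.5, -1.0, 0.5, -0.5, -1.0, 0.5, -0.5]
pred = knn_forecast(spi)
```

With k = 1 the forecast simply replays what followed the single closest historical analog; larger k smooths the prediction over several analogs, trading sharpness for robustness, which is the usual tuning knob in k-NN drought models.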
Drought is a climate phenomenon that may occur under any climate condition and in all regions of the earth. Effective drought management depends on the application of appropriate drought indices, which are variables used to detect and characterize drought conditions. In this study, we tried to predict drought occurrence, based on the standardized precipitation index (SPI), using k-nearest neighbor modeling. The model was tested using precipitation data from Kerman, Iran. Results showed that the model gives reasonable predictions of the drought situation in the region. Finally, the efficiency and precision of the model were quantified by several statistical coefficients. Appropriate values of the correlation coefficient (r = 0.874), mean absolute error (MAE = 0.106), root mean square error (RMSE = 0.119), and coefficient of residual mass (CRM = 0.0011) indicate that the present model is suitable and efficient.
Mon, 20 Feb 2017 20:30:00 +0100