Original/Review Paper
H.5. Image Processing and Computer Vision
M. Shakeri; M.H. Dezfoulian; H. Khotanlou
Abstract
The histogram equalization technique is one of the basic methods of image contrast enhancement. Applied to images with uniform gray levels (i.e., a narrow histogram), it causes loss of image detail and of the natural look of the image. To overcome this problem and achieve better contrast enhancement, a new two-step method is proposed. In the first step, the image histogram is partitioned into several sub-histograms according to the mean value and standard deviation, with the partitioning controlled by the PSNR measure. In the second step, each sub-histogram is enhanced separately and locally with traditional histogram equalization. Finally, all sub-histograms are combined to obtain the enhanced image. Experimental results show that this method not only keeps the visual details of the histogram but also enhances image contrast.
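As a minimal illustration of the two-step idea, here is a sketch in Python/NumPy, simplified to a single split at the mean; the paper's method instead chooses cut points from the mean and standard deviation and validates them with PSNR, but the recombination step is the same:

```python
import numpy as np

def equalize_range(img, lo, hi):
    """Histogram-equalize only the pixels whose gray level lies in [lo, hi]."""
    mask = (img >= lo) & (img <= hi)
    levels = img[mask]
    hist, _ = np.histogram(levels, bins=hi - lo + 1, range=(lo, hi + 1))
    cdf = np.cumsum(hist) / max(levels.size, 1)
    out = img.astype(np.float64)
    # Map each level inside the range back onto the same [lo, hi] span.
    out[mask] = lo + cdf[levels - lo] * (hi - lo)
    return out.astype(np.uint8)

def two_part_equalization(img):
    """Split the histogram at the mean and equalize each part independently."""
    m = min(int(img.mean()), 254)   # guard so the upper range is non-empty
    low = equalize_range(img, 0, m)
    high = equalize_range(img, m + 1, 255)
    # Recombine: each pixel takes the value produced by its own sub-histogram.
    return np.where(img <= m, low, high)
```

Because each sub-range is equalized onto itself, the overall brightness ordering of the image is preserved, which is what keeps the result looking natural compared with global equalization.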
Original/Review Paper
H.5.11. Image Representation
E. Sahragard; H. Farsi; S. Mohammadzadeh
Abstract
The aim of image restoration is to obtain a higher-quality desired image from a degraded image. Within this strategy, an image inpainting method fills the degraded or lost area of the image with appropriate information, in such a way that the resulting image is indistinguishable to a casual observer who is unfamiliar with the original. In this paper, different images are degraded by two procedures: one blurs the original image and adds noise to it, and the other discards a percentage of the original image's pixels. The degraded image is then restored by the proposed method and by two state-of-the-art methods. Image restoration requires optimization methods; here, we use a linear restoration method based on the total variation regularizer. The variable of the optimization problem is split, and the new optimization problem is solved using the augmented Lagrangian method. The experimental results show that the proposed method is faster and that the restored images have higher quality than those of the other methods.
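In standard form (the paper's exact operators may differ), the TV-regularized linear restoration problem, its split version, and the resulting augmented Lagrangian read:

```latex
\min_{x}\; \tfrac{1}{2}\|Hx - y\|_2^2 + \lambda\,\|Dx\|_1
\;\longrightarrow\;
\min_{x,u}\; \tfrac{1}{2}\|Hx - y\|_2^2 + \lambda\,\|u\|_1
\quad \text{s.t.}\quad u = Dx,
```

```latex
\mathcal{L}_{\mu}(x, u, d) \;=\; \tfrac{1}{2}\|Hx - y\|_2^2
 + \lambda\,\|u\|_1 + \tfrac{\mu}{2}\,\|u - Dx - d\|_2^2 .
```

Here H is the degradation operator (blur or pixel-loss mask), D the discrete gradient, and d the scaled dual variable. Alternating minimization makes the x-subproblem a linear least-squares solve and the u-subproblem a soft-thresholding step, followed by the dual update d ← d − (u − Dx).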
Original/Review Paper
H.5. Image Processing and Computer Vision
F. Abdali-Mohammadi; A. Poorshamam
Abstract
Accurate detection of retinal landmarks, such as the optic disc, is an important step in computer-aided diagnosis frameworks. This paper presents an efficient method for automatically detecting the center of the optic disc and estimating its boundary. The center and initial diameter of the optic disc are estimated by employing an ANN classifier, which uses visual features of the vessels and their background tissue to classify the extracted main vessels of the retina into two groups: vessels inside the optic disc and vessels outside it. To this end, the average intensity values and standard deviations of the RGB channels, the average width and orientation of the vessels, and the density of the detected vessels and their junction points in a window around each central pixel of the main vessels are employed. The center of the detected vessels belonging to the optic disc region is adopted as the optic disc center, and their average length in the vertical and horizontal directions is selected as the initial diameter of the optic disc circle. The exact boundary of the optic disc is then extracted using radial analysis of the initial circle. The performance of the proposed method is measured on the publicly available DRIONS, DRIVE and DIARETDB1 databases and compared with several state-of-the-art methods. The proposed method shows much higher mean overlap (70.6%) in the same range of detection accuracy (97.7%) and center distance (12 pixels). The average sensitivity and predictive values of the proposed optic disc detection method are 80.3% and 84.6%, respectively.
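A fragment of the feature extraction is easy to make concrete. The sketch below computes only the window color statistics fed to the ANN; the vessel width, orientation, and junction-density features are omitted, and the window size is an illustrative assumption:

```python
import numpy as np

def window_color_features(rgb, cy, cx, half=15):
    """Mean and std of each RGB channel in a (2*half+1)^2 window
    around a candidate vessel pixel (cy, cx)."""
    h, w, _ = rgb.shape
    y0, y1 = max(cy - half, 0), min(cy + half + 1, h)
    x0, x1 = max(cx - half, 0), min(cx + half + 1, w)
    win = rgb[y0:y1, x0:x1].reshape(-1, 3).astype(np.float64)
    # Six features per window: per-channel mean and standard deviation.
    return np.concatenate([win.mean(axis=0), win.std(axis=0)])
```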
Original/Review Paper
I. Computer Applications
M. Fateh; E. Kabir
Abstract
In this paper, we present a method for color reduction of Persian carpet cartoons that increases both the speed and accuracy of editing. Carpet cartoons fall into two categories, machine-printed and hand-drawn, and hand-drawn cartoons are divided into two groups: before and after discretization. The purpose of this study is color reduction of hand-drawn cartoons before discretization. The proposed algorithm consists of the following steps: image segmentation, finding the color of each region, color reduction around the edges, and final color reduction with C-means. The proposed method requires knowing the desired number of colors in each cartoon, and it reduces the number of colors to no more than about 1.3 times that number. The automatic color reduction is done in such a way that the final manual editing needed to reach the desired colors is very easy.
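The final reduction step can be sketched as clustering in color space. Here scikit-learn's KMeans (hard c-means) stands in for the paper's C-means step, an assumption, with n_colors the desired palette size:

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_colors(rgb, n_colors):
    """Quantize an H x W x 3 image to n_colors by clustering pixel colors
    and repainting every pixel with its cluster's center color."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    palette = km.cluster_centers_.astype(np.uint8)
    return palette[km.labels_].reshape(rgb.shape)
```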
Original/Review Paper
F.2.7. Optimization
R. Roustaei; F. Yousefi Fakhr
Abstract
Humans have always sought the best in all things, and this perfectionism has led to the creation of optimization methods. The goal of optimization is to determine the variables and, subject to the constraints of the problem, find the best acceptable answer, such that the objective function is minimized or maximized. One class of approximate optimization methods is the meta-heuristics, which, inspired by nature, usually search for the optimal solution. In recent years, much effort has been devoted to improving or creating meta-heuristic algorithms, and one way to improve meta-heuristic methods is hybridization. In this paper, a hybrid optimization algorithm based on the imperialist competitive algorithm is presented. The ideas used are an assimilation operation with a variable parameter and a war function based on a mathematical model of war in the real world. These changes increase the speed of finding the global optimum and reduce the number of search steps in comparison with other meta-heuristics. In more than 80% of the evaluated test cases, the proposed algorithm was superior to the Imperialist Competitive Algorithm, the Social-Based Algorithm, the Cuckoo Optimization Algorithm, and the Genetic Algorithm.
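The assimilation move with a variable parameter can be sketched as below; the linearly decaying schedule for the coefficient is an illustrative assumption, not necessarily the paper's schedule:

```python
import numpy as np

def assimilate(colony, imperialist, t, t_max, beta_max=2.0):
    """Move a colony's position vector toward its imperialist.
    The assimilation coefficient beta decays over iterations
    t = 0..t_max (one plausible 'variable parameter' schedule)."""
    beta = beta_max * (1.0 - t / t_max)
    step = beta * np.random.rand(colony.size) * (imperialist - colony)
    return colony + step
```

Early iterations take large steps (exploration); late iterations take small ones (exploitation), which is the usual rationale for making the assimilation parameter variable.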
Original/Review Paper
H.3.2.5. Environment
M. T. Sattari; M. Pal; R. Mirabbasi; J. Abraham
Abstract
This work reports the results of four ensemble approaches with the M5 model tree as the base regression model for predicting the Sodium Adsorption Ratio (SAR). Ensemble methods, which combine the outputs of multiple regression models, have been found to be more accurate than any of the individual models making up the ensemble. In this study, the additive boosting, bagging, rotation forest, and random subspace methods are used. The dataset, which consists of 488 samples with nine input parameters, was obtained from the Barandoozchay River in West Azerbaijan province, Iran. Three evaluation criteria, namely the correlation coefficient, root mean square error, and mean absolute error, were used to judge the accuracy of the different ensemble models. In addition to using the M5 model tree to predict SAR values, a wrapper-based variable selection approach, with an M5 model tree as the learning algorithm and a genetic algorithm as the search method, was used to select useful input variables. The encouraging performance motivates the use of this technique to predict SAR values.
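Of the four ensembles, bagging is the simplest to sketch: each base model is trained on a bootstrap resample and the predictions are averaged. The base learner below is a CART regression tree standing in for the M5 model tree, which scikit-learn does not implement:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_predict(X_train, y_train, X_test, n_models=10, seed=0):
    """Bagging for regression: average the predictions of base models
    trained on bootstrap resamples of (X_train, y_train) arrays."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap
        model = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_test))
    return np.mean(preds, axis=0)
```

Random subspace resamples features instead of rows, and rotation forest applies PCA to feature subsets before training; all three reduce variance relative to a single tree.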
Original/Review Paper
A.1. General
A. Zarei; M. Maleki; D. Feiz; M. A. Siahsarani kojuri
Abstract
Competitive intelligence (CI) has become one of the major subjects for researchers in recent years. The present research aims to capture a part of CI by investigating the scientific articles in this field through text mining, in three interrelated steps. In the first step, a total of 1143 articles released between 1987 and 2016 were retrieved by searching the phrase "competitive intelligence" in valid databases and search engines; then, by reviewing the topic, abstract, and main text of the articles and screening them in several steps, the authors eventually selected 135 relevant articles for the text mining process. In the second step, the data were pre-processed. In the third step, using non-hierarchical cluster analysis (k-means), 5 optimal clusters were obtained based on the Davies-Bouldin index, and a word cloud was drawn for each; the association rules of each cluster were then extracted and analyzed using the support, confidence, and lift indices. The results indicate increased interest in CI research in recent years and make tangible the strong and weak presence of the developed and developing countries, respectively, in the formation of these scientific products; further, the results show that information, marketing, and strategy are the main elements of CI that, along with other prerequisites, can lead to CI and, consequently, to economic development, competitive advantage, and sustainability in the market.
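For reference, the three rule-quality indices used in the third step are defined, for a rule X ⇒ Y, as:

```latex
\mathrm{supp}(X \Rightarrow Y) = P(X \cup Y), \qquad
\mathrm{conf}(X \Rightarrow Y) = \frac{P(X \cup Y)}{P(X)}, \qquad
\mathrm{lift}(X \Rightarrow Y) = \frac{P(X \cup Y)}{P(X)\,P(Y)},
```

where P(X ∪ Y) denotes the fraction of records (here, documents in a cluster) containing every term of both X and Y. A lift above 1 indicates that X and Y co-occur more often than independence would predict.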
Original/Review Paper
H.3.2.2. Computer vision
Seyyed A. Hoseini; P. Kabiri
Abstract
In this paper, a feature-based technique for camera pose estimation in a sequence of wide-baseline images is proposed. Camera pose estimation is an important issue in many computer vision and robotics applications, such as augmented reality and visual SLAM. The proposed method can track images captured by a hand-held camera in room-sized workspaces with a maximum scene depth of 3-4 meters. The system can be used in unknown environments, with no additional information available from the outside world except for the first two images, which are used for initialization. Pose estimation is performed using only natural feature points extracted and matched in successive images. In wide-baseline images, unlike consecutive frames of a video stream, the displacement of feature points between images is considerable, so they cannot easily be traced with patch-based methods. To handle this problem, a hybrid strategy is employed to obtain accurate feature correspondences: initial feature correspondences are first found using the similarity of their descriptors, and outlier matches are then removed by applying the RANSAC algorithm. Furthermore, to provide the required set of feature matches, a mechanism based on side results of the robust estimator is employed. The proposed method is applied to real indoor data with images in VGA quality (640×480 pixels); on average, the translation error of the camera pose is less than 2 cm, which indicates the effectiveness and accuracy of the proposed approach.
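The hybrid matching strategy (descriptor similarity first, RANSAC-based outlier removal second) maps naturally onto OpenCV. In the sketch below, ORB features and the specific thresholds are illustrative assumptions; the abstract does not name the detector used:

```python
import cv2
import numpy as np

def match_and_estimate_pose(img1, img2, K):
    """Two-view relative pose from wide-baseline images: match by
    descriptor similarity, then let RANSAC (inside findEssentialMat)
    prune the outlier matches.  K is the 3x3 camera intrinsics matrix."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # rotation and unit-scale translation of the second view
```

The RANSAC inlier mask is itself a useful side result, in the spirit of the mechanism the abstract mentions, since it identifies which correspondences are geometrically consistent.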
Original/Review Paper
H.5.10. Applications
S. Shoorabi Sani
Abstract
In this study, a system for monitoring the structural health of a bridge deck and predicting various possible damages to this section was designed, implemented, and investigated, based on measuring temperature and humidity with wireless sensor networks. A scaled model of a conventional medium-sized bridge (length of 50 meters, height of 10 meters, and two piers) was examined for the purpose of this study. The method involves installing two sensor nodes, capable of measuring temperature and humidity, on both sides of the bridge deck. The temperature and humidity values collected by the system are received by LabVIEW-based software to be analyzed and stored in a database. The proposed SHM system is equipped with a novel method that applies data mining techniques to a database of the climatic conditions of the past few years at the bridge's location in order to predict the occurrence and severity of future damage. In addition, the system has several alarm levels based on analyzing the bridge's condition with fuzzy inference, so it can issue proactive and precise warnings, in terms of the location and severity of possible damage to the bridge deck, to ensure total productive maintenance (TPM) and proactive maintenance. Its very low cost, the increased efficiency of the bridge service, and reduced maintenance costs make this SHM system practical and applicable. The data and results related to all of these subjects are thoroughly discussed.
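A toy version of the fuzzy alarm logic is sketched below; the membership breakpoints and the two rules are invented for illustration, since the abstract does not give the system's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def alarm_level(temp_c, humidity_pct):
    """Minimal Mamdani-style inference: each rule fires at the min of its
    antecedent memberships; the alarm strength is the max over rules."""
    hot = tri(temp_c, 30, 45, 60)
    damp = tri(humidity_pct, 60, 80, 100)
    freezing = tri(temp_c, -20, -5, 5)
    severe = max(min(hot, damp),       # hot and damp deck: corrosion risk
                 min(freezing, damp))  # freeze/thaw cycling with moisture
    return severe  # in [0, 1]; fixed thresholds map this to alarm levels
```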
Original/Review Paper
H.3.2.5. Environment
H. Fattahi; A. Agah; N. Soleimanpourmoghadam
Abstract
Pyrite oxidation, Acid Rock Drainage (ARD) generation, and the associated release and transport of toxic metals are a major environmental concern for the mining industry. Estimating the metal loading in ARD is a major task in developing an appropriate remediation strategy. In this study, an expert system, the Multi-Output Adaptive Neuro-Fuzzy Inference System (MANFIS), was used to estimate metal concentrations in the Shur River resulting from ARD at the Sarcheshmeh porphyry copper deposit, southeast Iran. Concentrations of Cu, Fe, Mn, and Zn are predicted using the pH and the sulphate (SO4) and magnesium (Mg) concentrations in the Shur River as inputs to the MANFIS. Three MANFIS models were implemented: Grid Partitioning (GP), the Subtractive Clustering Method (SCM), and the Fuzzy C-Means Clustering Method (FCM). A comparison between these three models shows the superiority of the MANFIS-SCM model. The results obtained indicate that the MANFIS-SCM model has the potential to estimate the metal concentrations with a high degree of accuracy and robustness.
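As a sketch of what each MANFIS output computes, a first-order Sugeno rule in ANFIS (of which MANFIS is the multi-output extension) has the form below; the actual rule bases and membership functions are what GP, SCM, or FCM partitioning learns from the data:

```latex
R_i:\ \text{IF } \mathrm{pH} \in A_i \ \wedge\ \mathrm{SO_4} \in B_i \ \wedge\ \mathrm{Mg} \in C_i
\ \text{THEN } \hat{c}_i = p_i\,\mathrm{pH} + q_i\,\mathrm{SO_4} + r_i\,\mathrm{Mg} + s_i,
\qquad
\hat{c} = \frac{\sum_i w_i\,\hat{c}_i}{\sum_i w_i},
```

where w_i is the firing strength of rule R_i and ĉ is one predicted metal concentration (Cu, Fe, Mn, or Zn).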
Original/Review Paper
H.3. Artificial Intelligence
F. Barani; H. Nezamabadi-pour
Abstract
The artificial bee colony (ABC) algorithm is a swarm intelligence optimization algorithm inspired by the intelligent behavior of honey bees when searching for food sources. Various versions of the ABC algorithm have been widely used to solve continuous and discrete optimization problems in different fields. In this paper, a new binary version of the ABC algorithm inspired by quantum computing, called the binary quantum-inspired artificial bee colony algorithm (BQIABC), is proposed. BQIABC combines the main structure of ABC with concepts and principles of quantum computing, such as the quantum bit, the quantum superposition state, and the rotation Q-gate strategy, to produce an algorithm with greater exploration ability. Owing to this higher exploration ability, the proposed algorithm provides a robust tool for solving binary optimization problems. To evaluate its effectiveness, several experiments are conducted on the 0/1 knapsack problem and the Max-Ones and Royal-Road functions. The results produced by BQIABC are compared with those of ten state-of-the-art binary optimization algorithms. The comparisons show that BQIABC yields results better than or similar to those of the other algorithms, and it can be regarded as a promising algorithm for solving binary optimization problems.
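The quantum-inspired part can be sketched as follows: each candidate solution is a vector of Q-bit angles that is measured into a binary string and rotated toward the best-so-far solution. The fixed rotation step below is a simplification of the usual lookup-table Q-gate:

```python
import numpy as np

def measure(theta):
    """Collapse each Q-bit to 0/1, with P(bit = 1) = sin^2(theta)."""
    return (np.random.rand(theta.size) < np.sin(theta) ** 2).astype(int)

def rotate(theta, bits, best_bits, delta=0.05 * np.pi):
    """Rotate each Q-bit angle toward the corresponding bit of the
    best solution found so far (fixed-step simplification)."""
    direction = np.where(best_bits > bits, 1,
                         np.where(best_bits < bits, -1, 0))
    return np.clip(theta + delta * direction, 0.0, np.pi / 2)

# A 20-bit food source initialized to the equal superposition state,
# so every bit is 0 or 1 with probability 0.5 when measured.
theta = np.full(20, np.pi / 4)
```

Because every measurement resamples the string from the angle vector, the population retains diversity far longer than a plain binary encoding, which is the claimed source of BQIABC's exploration ability.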
Original/Review Paper
H.3.11. Vision and Scene Understanding
Sh. Foolad; A. Maleki
Abstract
Visual saliency is a cognitive psychology concept describing how some stimuli in a scene stand out relative to their neighbors and attract our attention. Computing visual saliency is a topic of recent interest. Here, we propose a graph-based method for saliency detection that contains three stages: pre-processing, initial saliency detection, and final saliency detection. The initial saliency map is obtained by applying an adaptive threshold to the color differences relative to the background. In the final saliency detection, a graph is constructed and a ranking technique is exploited. In the proposed method, the background is suppressed effectively, and salient regions are usually selected correctly. Experimental results on the MSRA-1000 database demonstrate excellent performance and low computational complexity in comparison with state-of-the-art methods.
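One plausible reading of the initial-saliency stage, assuming the image border serves as the background estimate and the mean distance as the adaptive threshold (both are assumptions, not details given in the abstract):

```python
import numpy as np

def initial_saliency(lab):
    """Initial saliency from an H x W x 3 image in a perceptual color
    space (e.g. CIELab): distance of each pixel's color from the mean
    border color, kept only where it exceeds an adaptive threshold."""
    border = np.concatenate([lab[0], lab[-1], lab[:, 0], lab[:, -1]])
    bg = border.mean(axis=0)                      # background color estimate
    dist = np.linalg.norm(lab - bg, axis=2)
    sal = np.where(dist > dist.mean(), dist, 0.0) # adaptive threshold
    return sal / (sal.max() + 1e-12)              # normalize to [0, 1]
```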
Original/Review Paper
C.1. General
L. khalvati; M. Keshtgary; N. Rikhtegar
Abstract
Information security and Intrusion Detection Systems (IDS) play a critical role on the Internet. An IDS is an essential tool for detecting different kinds of attacks in a network and maintaining data integrity, confidentiality, and system availability against possible threats. In this paper, a hybrid approach to achieving high performance is proposed; in fact, the important goal of this paper is generating an efficient training dataset. Much work on intrusion detection combines clustering and feature selection to exploit the strengths of both, and the proposed method uses these techniques as well. First, a new training dataset is created by K-Medoids clustering and SVM-based feature selection; a Naïve Bayes classifier is then used for evaluation. The proposed method is compared with another hybrid algorithm from the literature and is also evaluated with 10-fold cross-validation. Experimental results based on the KDD CUP'99 dataset show that the proposed method achieves better accuracy, detection rate, and false alarm rate than the others.
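A minimal sketch of the pipeline's shape: per-class K-Medoids compacts the training set into representative records, and Gaussian Naive Bayes then classifies. The SVM-based feature selection step is omitted here, and KMedoids comes from the scikit-learn-extra package (both are simplifying assumptions):

```python
import numpy as np
from sklearn_extra.cluster import KMedoids   # pip install scikit-learn-extra
from sklearn.naive_bayes import GaussianNB

def compact_and_classify(X_train, y_train, X_test, k=100):
    """Replace each class's records with k medoids to form a small,
    cleaner training set, then fit and apply Gaussian Naive Bayes."""
    Xs, ys = [], []
    for label in np.unique(y_train):
        Xc = X_train[y_train == label]
        km = KMedoids(n_clusters=min(k, len(Xc)), random_state=0).fit(Xc)
        Xs.append(km.cluster_centers_)            # the medoid records
        ys.append(np.full(len(km.cluster_centers_), label))
    nb = GaussianNB().fit(np.vstack(Xs), np.concatenate(ys))
    return nb.predict(X_test)
```

Using medoids rather than means keeps the reduced training set made of actual network records, which matters for datasets like KDD CUP'99 that mix numeric and encoded categorical features.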
Original/Review Paper
H.6.4. Clustering
M. Manteqipour; A.R. Ghaffari Hadigheh; R. Mahmoodvand; A. Safari
Abstract
Grouping datasets plays an important role in many scientific studies. Depending on the data features and the application, different constraints are imposed on the groups, while having groups with similar members is always a main criterion. In this paper, we propose an algorithm for grouping objects with random labels and nominal features that have many possible attribute values; in addition, a size constraint on the groups is required. These conditions lead to a mixed-integer optimization problem that is neither convex nor linear; it is NP-hard, and exact solution methods are computationally costly. Our motivation for solving such a problem comes from grouping insurance data, which is essential for fair pricing. The proposed algorithm includes two phases. First, we rank the random labels using fuzzy numbers; then, an adjusted K-means algorithm is used to produce homogeneous groups satisfying a cluster size constraint. Fuzzy numbers are used to compare random labels with respect to both their observed values and their chance of occurrence. Moreover, an index is defined to measure the similarity of multi-valued attributes without perfect information to those accompanied by perfect information. Since all ranks are scaled into the interval [0,1], the ranking of the random labels needs no rescaling techniques. In the adjusted K-means algorithm, the optimal number of clusters is found using the coefficient of variation instead of the Euclidean distance. Experiments demonstrate that our proposed algorithm produces fairly homogeneous and significantly different groups that have the requisite mass.
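One plausible reading of the coefficient-of-variation criterion for choosing the number of clusters is sketched below; standard KMeans stands in for the adjusted, size-constrained variant, and the scoring rule is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_k_by_cv(X, k_range=range(2, 11)):
    """Score each k by the mean within-cluster coefficient of variation
    (std / mean of member-to-centroid distances); smaller means more
    homogeneous clusters, so return the k with the lowest score."""
    best_k, best_score = None, np.inf
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        cvs = []
        for c in range(k):
            d = np.linalg.norm(X[km.labels_ == c] - km.cluster_centers_[c],
                               axis=1)
            if d.mean() > 0:
                cvs.append(d.std() / d.mean())
        score = np.mean(cvs) if cvs else np.inf
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```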
Original/Review Paper
H.3.2.15. Transportation
S. Mostafaei; H. Ganjavi; R. Ghodsi
Abstract
In this paper, the relations among factors in the road transportation sector from March 2005 to March 2011 are analyzed. Most previous studies take an economic point of view on gasoline consumption. Here, a new approach is proposed in which different data mining techniques are used to extract meaningful relations between the aforementioned factors, with gasoline consumption as the main, dependent factor. First, the data gathered from different organizations are analyzed with a feature selection algorithm to investigate how many of the independent factors have an influential effect on the dependent factor; a few of these factors were determined to be unimportant and were removed from the analysis. Two association rule mining algorithms, Apriori and Carma, are used to analyze the data. Since these data are continuous and cannot be handled directly by the two algorithms, the two-step clustering algorithm is used to discretize them. The association rule mining analysis shows that fewer vehicles, gasoline rationing, and a high number of taxi trips are the main factors behind low gasoline consumption. The Carma results show that the number of taxi trips increased after gasoline rationing, and that Carma can reach all the rules found by the Apriori algorithm. Finally, the results show that the association rule mining results are more informative than a statistical correlation analysis.
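The discretize-then-mine pipeline can be sketched with pandas and mlxtend. Quantile binning stands in here for the paper's two-step clustering discretization, and the support/confidence thresholds are illustrative:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

def mine_rules(df, n_bins=3, min_support=0.1):
    """Bin each continuous column, one-hot encode the bins, and mine
    association rules with Apriori.  df: numeric DataFrame whose columns
    are the transport factors (names are whatever the data provides)."""
    binned = df.apply(lambda col: pd.qcut(col, n_bins, duplicates="drop"))
    onehot = pd.get_dummies(binned).astype(bool)   # Apriori needs booleans
    itemsets = apriori(onehot, min_support=min_support, use_colnames=True)
    return association_rules(itemsets, metric="confidence",
                             min_threshold=0.7)
```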
Original/Review Paper
F.2.7. Optimization
M. Mohammadpour; H. Parvin; M. Sina
Abstract
Many of the problems considered in optimization and learning assume that solutions exist in a dynamic environment. Hence, algorithms are required that dynamically adapt to the problem's changing conditions and search for the new optima. In most cases, using information from the past allows changes to be adapted to quickly. This is the idea underlying the use of memory in this field, and it involves key design issues concerning the memory content, the update process, and the retrieval process. In this article, we use a chaotic genetic algorithm (GA) with memory to solve dynamic optimization problems. A chaotic system gives a much more accurate prediction of the future than a random system. The proposed method uses a new memory with diversity maximization, together with new strategies for memory update and retrieval. An experimental study is conducted on the Moving Peaks Benchmark to test the performance of the proposed method in comparison with several state-of-the-art algorithms from the literature. The experimental results show the superiority and greater effectiveness of the proposed algorithm in dynamic environments.
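The "chaotic" ingredient in a chaotic GA is typically a deterministic map whose orbit replaces the uniform random numbers used in selection or mutation. A common choice (an assumption about this particular paper) is the logistic map at full chaos:

```python
def logistic_map(x0=0.3, r=4.0, n=1000):
    """Generate a chaotic sequence in (0, 1); at r = 4 the logistic map
    x <- r * x * (1 - x) is fully chaotic.  Avoid x0 in {0, 0.25, 0.5,
    0.75, 1}, which fall onto short periodic orbits."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs
```

Because the orbit is deterministic yet covers (0, 1) densely, it explores the search space more evenly than independent random draws, which is the usual argument for chaotic variants of evolutionary algorithms.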
Original/Review Paper
F.2.7. Optimization
E. Khodayari; V. Sattari-Naeini; M. Mirhosseini
Abstract
Developing an optimal flocking control procedure is an essential problem in mobile sensor networks (MSNs). Furthermore, finding parameters with which the sensors can reach the target in an appropriate time is an important issue. This paper offers an optimization approach based on meta-heuristic methods for flocking control in MSNs to follow a target. We develop a non-differentiable optimization technique based on the gravitational search algorithm (GSA). Finding the flocking parameters using swarm behaviors, so as to minimize the cost function, is the main contribution of this paper. The cost function is the average Euclidean distance of the center of mass (COM) from the moving target. One benefit of using the GSA is its applicability to multiple-target tracking, with satisfying results. Simulation results indicate that this scheme outperforms existing ones and demonstrate the ability of the approach in comparison with previous methods.
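The cost function itself is straightforward to state in code; the array shapes below are assumptions about how the simulation data would be laid out:

```python
import numpy as np

def flocking_cost(positions, target_traj):
    """Average Euclidean distance between the flock's center of mass (COM)
    and the moving target over time.  positions: (T, N, 2) array of N
    sensor tracks over T steps; target_traj: (T, 2) target trajectory.
    This is the objective the GSA minimizes over the flocking parameters."""
    com = positions.mean(axis=1)                    # (T, 2) COM per step
    return np.linalg.norm(com - target_traj, axis=1).mean()
```

Since each cost evaluation requires running a full flocking simulation, the objective is non-differentiable in the parameters, which is why a population-based method like the GSA is appropriate here.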
Original/Review Paper
H.3. Artificial Intelligence
Seyed M. H. Hasheminejad; Z. Salimi
Abstract
One of the recent strategies for increasing customer loyalty in the banking industry is the use of a customers' club system. In this system, customers receive scores on the basis of the financial and club activities they perform, and according to the achieved points, they get credits from the bank. In addition, with the advent of new technologies, fraud is growing in the banking domain as well. Therefore, given the importance of financial activities in the customers' club system, providing an efficient and applicable method for detecting fraud is highly important in these types of systems. In this paper, we propose a novel sliding time- and score-window-based method, called FDiBC (Fraud Detection in Bank Club), to detect fraud in a bank club. In FDiBC, 14 features are first derived from each score obtained by a club member; then, based on all the scores of each member, five sliding time- and score-window-based feature vectors are proposed. To generate the training and test datasets from the scores of fraudulent and ordinary customers in the bank's customers' club system, positive and negative labels are used, respectively. After generating the training dataset, learning is performed through two approaches: 1) clustering and binary classification with the OCSVM method for the positive data, i.e., fraudulent customers, and 2) multi-class classification with the SVM, C4.5, KNN, and Naïve Bayes methods. The results reveal that FDiBC can detect fraud with 78% accuracy and thus can be used in practice.
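The windowing idea can be sketched as follows; the summary statistics and window length below are illustrative, not FDiBC's actual 14 per-score features or 5 window definitions:

```python
import numpy as np

def sliding_window_vectors(scores, win=5):
    """Build feature vectors from one customer's time-ordered score
    sequence: for each sliding window, a few summary statistics
    (sum, mean, max, count) describing recent scoring behavior."""
    vecs = []
    for i in range(len(scores) - win + 1):
        w = np.asarray(scores[i:i + win], dtype=float)
        vecs.append([w.sum(), w.mean(), w.max(), len(w)])
    return np.array(vecs)
```

Each window's vector gets the customer's label (fraudulent or ordinary), producing the training rows on which the OCSVM and the multi-class classifiers are fit.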