G.2. Models and Principles
D. Qian; L. Yu
Abstract
This work proposes a neural-fuzzy sliding mode control scheme for a hydro-turbine speed governor system. Under the assumption of elastic water hammer, a nonlinear model of the hydro-turbine governor system is established. By linearizing this model, a sliding mode controller is designed. The linearized model is subject to uncertainties generated in the process of linearization. A radial basis function (RBF) neural network is introduced to compensate for these uncertainties, and the update formulas for the neural network are derived from the Lyapunov direct method. To suppress the chattering phenomenon of sliding mode control, a fuzzy logic inference system is adopted. The asymptotic stability of the system is guaranteed in the sense of Lyapunov. Numerical simulations, compared with internal model control and the conventional PID control method, verify the feasibility and robustness of the proposed scheme.
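The abstract does not include code; as a purely illustrative sketch of how an RBF network produces a compensation term for linearization uncertainty, consider the following (all centers, widths, and weights are hypothetical, not taken from the paper):

```python
import numpy as np

def rbf_output(x, centers, sigma, weights):
    """Generic RBF network: Gaussian hidden units, linear output layer.

    x       -- input vector (e.g. the system state)
    centers -- hidden-unit centers, shape (m, len(x))
    sigma   -- common Gaussian width
    weights -- output-layer weights, shape (m,)
    """
    # Gaussian activation of each hidden unit
    h = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
    # Linear combination gives the uncertainty estimate
    return weights @ h

# Hypothetical 2-state example with 3 hidden units
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
weights = np.array([0.5, -0.2, 0.1])
estimate = rbf_output(np.array([0.2, -0.1]), centers, 1.0, weights)
```

In the scheme described above, the weights would not be fixed but updated online by the Lyapunov-derived adaptation law.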
F.2.2. Interpolation
V. Abolghasemi; S. Ferdowsi; S. Sanei
Abstract
The focus of this paper is the compressed sensing problem. Under certain conditions, compressed sensing theory relaxes the Nyquist sampling requirement and allows fewer samples to be taken. One of the important tasks in this theory is to carefully design the measurement matrix (sampling operator). Most existing methods in the literature attempt to optimize a randomly initialized matrix with the aim of decreasing the number of required measurements. However, these approaches mainly lead to a sophisticated measurement-matrix structure that is very difficult to implement. In this paper we propose an intermediate structure for the measurement matrix based on random sampling. The main advantage of the proposed block-based technique is its simplicity, while it achieves performance comparable to conventional techniques. The experimental results clearly confirm that, in spite of its simplicity, the proposed approach is competitive with existing methods in terms of reconstruction quality. It also outperforms existing methods in terms of computation time.
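As a toy illustration of random sampling used as a measurement operator (the dimensions and the exact construction here are assumptions for the sketch, not the paper's proposed structure):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sampling_matrix(m, n):
    """Simplest random-sampling measurement matrix: each of the m rows
    selects one distinct coordinate of the length-n signal."""
    cols = rng.choice(n, size=m, replace=False)
    phi = np.zeros((m, n))
    phi[np.arange(m), cols] = 1.0
    return phi

n, m = 64, 16
x = np.zeros(n)
x[[3, 17, 42]] = [1.0, -2.0, 0.5]   # a 3-sparse signal
phi = random_sampling_matrix(m, n)
y = phi @ x                          # m compressed measurements
```

A sparse recovery algorithm (e.g. basis pursuit) would then reconstruct x from the shorter vector y.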
E.3. Analysis of Algorithms and Problem Complexity
A. Mesrikhani; M. Davoodi
Abstract
Nearest Neighbor (NN) searching is a challenging problem in data management and has been widely studied in data mining, pattern recognition and computational geometry. The goal of NN searching is to efficiently report the nearest data to a given query object. Most studies assume that both the data and the query are precise; however, in real applications of NN searching, such as tracking and locating services, GIS and data mining, both of them may be imprecise. In this situation, a natural way to handle the issue is to report the data that have a nonzero probability, called nonzero nearest neighbors, of being the nearest neighbor of a given query. Formally, let P be a set of n uncertain points modeled by regions. We first consider the following variation of the NN searching problem under uncertainty. If both the query and the data are uncertain points modeled by distinct unit segments parallel to the x-axis, we propose an efficient algorithm that reports nonzero nearest neighbors under the Manhattan metric in O(n^2 α(n^2)) preprocessing and O(log n + k) query time, where α(.) is the extremely slowly growing functional inverse of Ackermann's function. Finally, for segments of arbitrary length parallel to the x-axis, we propose an approximation algorithm that reports nonzero nearest neighbors with maximum error L in O(n^2 α(n^2)) preprocessing and O(log n + k) query time, where L is the length of the query.
Morteza Haydari; Mahdi Banejad; Amin Hahizadeh
Abstract
Recent restructuring developments in the power system, together with problems arising from the construction and maintenance of large power plants, have led to increased use of Distributed Generation (DG) resources. DG units, owing to their specifications, technology and network connection location, can improve system and load-point reliability indices. In this paper, the allocation and sizing of distributed generators in distribution electricity networks are determined using an optimization method. The objective function of the proposed method is based on improving the reliability indices, such as the System Average Interruption Duration Index (SAIDI) and the Average Energy Not Supplied (AENS) per customer index, at the lowest cost. The optimization is based on the Modified Shuffled Frog Leaping Algorithm (MSFLA), aiming at determining the optimal DG allocation and sizing in the distribution network. The MSFLA is a new memetic meta-heuristic algorithm with an efficient mathematical function and global search capability. To evaluate the proposed algorithm, the 34-bus IEEE test system is used. In addition, the findings of comparative studies indicate the better capability of the proposed method compared with the genetic algorithm in finding the optimal sizing and location of DGs with respect to the objective function used.
H.7. Simulation, Modeling, and Visualization
A.R. Ebrahimi; Gh. Barid Loghmani; M. Sarfraz
Abstract
In this paper, a new technique has been designed to capture the outline of 2D shapes using cubic Bézier curves. The proposed technique avoids the traditional method of optimizing the global squared fitting error and emphasizes the local control of data points. A maximum error threshold keeps the absolute fitting error below a criterion and governs the process of curve subdivision. Depending on the specified maximum error, the proposed technique itself subdivides complex segments, and curve fitting is done simultaneously. A comparative study of experimental results highlights various advantages of the proposed technique, such as accurate representation, low approximation errors and efficient computational complexity.
Document and Text Processing
A. Ahmadi Tameh; M. Nassiri; M. Mansoorizadeh
Abstract
WordNet is a large lexical database of the English language in which nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms (synsets). Each synset expresses a distinct concept. Synsets are interlinked by both semantic and lexical relations. WordNet is widely used for word sense disambiguation, information retrieval, and text translation. In this paper, we propose several automatic methods to extract Information and Communication Technology (ICT)-related data from Princeton WordNet. We then add these extracted data to our Persian WordNet. The advantage of automated methods is reducing the interference of human factors and accelerating the development of our bilingual ICT WordNet. In our first proposed method, based on a small subset of ICT words, we use the definition of each synset to decide whether that synset is ICT-related. The second mechanism is to extract synsets that are in a semantic relation with ICT synsets. We also use two similarity criteria, namely LCS and S3M, to measure the similarity between a synset definition in WordNet and the definition of any word in the Microsoft dictionary. Our last method is to verify the coordinate terms of ICT synsets. Results show that our proposed mechanisms are able to extract ICT data from Princeton WordNet with a good level of accuracy.
H.3.2.5. Environment
H. Fattahi; A. Agah; N. Soleimanpourmoghadam
Abstract
Pyrite oxidation, Acid Rock Drainage (ARD) generation, and the associated release and transport of toxic metals are a major environmental concern for the mining industry. Estimation of the metal loading in ARD is a major task in developing an appropriate remediation strategy. In this study, an expert system, the Multi-Output Adaptive Neuro-Fuzzy Inference System (MANFIS), was used to estimate metal concentrations in the Shur River, resulting from ARD at the Sarcheshmeh porphyry copper deposit, southeast Iran. Concentrations of Cu, Fe, Mn and Zn are predicted using pH, sulphate (SO4) and magnesium (Mg) concentrations in the Shur River as inputs to the MANFIS. Three MANFIS models were implemented: Grid Partitioning (GP), the Subtractive Clustering Method (SCM) and the Fuzzy C-Means Clustering Method (FCM). A comparison was made between these three models, and the results show the superiority of the MANFIS-SCM model. The results obtained indicate that the MANFIS-SCM model has potential for estimating the metals with a high degree of accuracy and robustness.
R. Satpathy; V. B. Konkimalla; J. Ratha
Abstract
The present work was designed to classify and differentiate dehalogenase enzymes from non-dehalogenases (other hydrolases) by taking the amino acid propensity at the core, at the surface, and in both parts. The data sets were made on an individual basis by selecting the 3D structures of proteins available in the PDB (Protein Data Bank). The core amino acids were predicted by the IPFP tool, and their structural propensity calculation was performed by in-house software, Propensity Calculator, which is available online. All datasets were finally grouped into two categories, namely dehalogenase and non-dehalogenase, using the Naïve Bayes, J-48, Random Forest, K-means clustering and SMO classification algorithms. In a comparison of the various classification methods, the proposed tree method (Random Forest) performs best, with a classification accuracy of 98.88% (maximum) for the core propensity data set. Therefore we propose that the core amino acid propensity could serve as a novel potential descriptor for the classification of enzymes.
H.3.14. Knowledge Management
M. Sakenian Dehkordi; M. Naderi Dehkordi
Abstract
Due to the rapid growth of data mining technology, obtaining private data on users through this technology has become easier. Association rule mining is one of the data mining techniques used to extract useful patterns in the form of association rules. One of the main problems in applying this technique to databases is the disclosure of sensitive data, endangering security and privacy. Hiding association rules is one of the methods to preserve privacy, and it is a main subject in the field of data mining and database security, for which several algorithms with different approaches have been presented so far. An algorithm to hide sensitive association rules with a heuristic approach is presented in this article: a perturbation technique based on reducing the confidence or support of rules is applied, allocating weights to items and transactions and attempting to remove the considered item from the transaction with the highest weight. Efficiency is measured by the hiding failure criterion, the number of lost rules and ghost rules, and the execution time. The results of this study are assessed and compared with the two known FHSAR and RRLR algorithms, based on two real databases (one dense and one sparse). The results indicate that the number of lost rules in all experiments is reduced by 47% in comparison with RRLR and by 23% in comparison with FHSAR. Moreover, the other undesirable side effects of the proposed algorithm are, in the worst case, equal to those of the base algorithms.
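The support-reduction idea described above can be sketched generically: delete the sensitive item from the highest-weight transaction until the rule's support falls below threshold. This is a minimal illustration with toy data, not the article's exact weighting scheme:

```python
def hide_item_by_support(transactions, weights, item, min_support):
    """Heuristic sketch: repeatedly delete `item` from the highest-weight
    transaction containing it until its support drops below min_support.

    transactions -- list of sets of items (modified in place)
    weights      -- one weight per transaction (higher = preferred victim)
    """
    def support():
        return sum(1 for t in transactions if item in t) / len(transactions)

    while support() >= min_support:
        # pick the containing transaction with the highest weight
        idx = max((i for i, t in enumerate(transactions) if item in t),
                  key=lambda i: weights[i])
        transactions[idx].discard(item)
    return transactions

db = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
w = [0.2, 0.9, 0.5, 0.1]
hide_item_by_support(db, w, "a", min_support=0.5)
```

Each deletion lowers the support (and hence the confidence) of every rule involving the item, which is what hides the sensitive rules.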
F.3.3. Graph Theory
A. Jalili; M. Keshtgari
Abstract
Software-Defined Networking (SDN) is a decoupled architecture that enables administrators to build a customizable and manageable network. Although the decoupled control plane provides flexible management and facilitates the task of operating the network, it is the vulnerable point of failure in SDN. To achieve a reliable control plane, multiple controllers are often needed, so that each switch must be assigned to more than one controller. In this paper, a Reliable Controller Placement Problem Model (RCPPM) is proposed to solve such a problem, so as to maximize the reliability of software-defined networks. Unlike previous works that only consider latency parameters, the new model takes into account the load of control traffic and reliability metrics as well. Furthermore, a near-optimal algorithm is proposed to solve the NP-hard RCPPM in a heuristic manner. Finally, through extensive simulation, a comprehensive analysis of the RCPPM is presented for various topologies extracted from the Internet Topology Zoo. Our performance evaluations show the efficiency of the proposed framework.
Document and Text Processing
F. Safi-Esfahani; Sh. Rakian; M.H. Nadimi-Shahraki
Abstract
Plagiarism, defined as "the wrongful appropriation of other writers' or authors' works and ideas without citing or informing them", poses a major challenge to the spread and publication of knowledge. Plagiarism has been placed in four categories: direct, paraphrasing (rewriting), translation, and combinatory. This paper addresses translational plagiarism, which is sometimes referred to as cross-lingual plagiarism. In cross-lingual plagiarism, writers meld a translation with their own words and ideas. Based on monolingual plagiarism detection methods, this paper ultimately intends to find a way to detect cross-lingual plagiarism. A framework called Multi-Lingual Plagiarism Detection (MLPD) is presented for cross-lingual plagiarism analysis with the ultimate objective of detecting plagiarism cases. English is the reference language, and Persian materials are back-translated using translation tools. The data for assessment of MLPD were obtained from the English-Persian Mizan parallel corpus. Apache Solr was also applied to crawl the documents and index them. The mean accuracy of the proposed method was 98.82% when employing highly accurate translation tools, indicating the high accuracy of the proposed method, while with the Google translation service the mean accuracy was 56.9%. These tests demonstrate that improved translation tools enhance the accuracy of the proposed method.
H.3.2.6. Games and infotainment
A.H. Khabbaz; A. Pouyan; M. Fateh; V. Abolghasemi
Abstract
This paper presents an adaptive serious game for rating social ability in children with autism spectrum disorder (ASD). The required measurements are obtained through the challenges of the proposed serious game. The game uses reinforcement learning concepts to be adaptive, and it is based on fuzzy logic to evaluate the social ability level of children with ASD. The game adapts itself to the level of the autistic patient by reducing or increasing the challenges in the game via an intelligent agent during play time. This task is accomplished by creating more elements, reshaping them into a variety of real-world shapes, and redesigning their motions and speed. If the autistic patient's communication level grows during play, the challenges of the game may become harder, making evaluation a dynamic procedure. At each step or state, using fuzzy logic, the level of the player is estimated based on attributes such as the average of the distances between the fixed points gazed at by the player, or the number of correct answers selected by the player divided by the number of questioned objects. This paper offers a dynamic AI difficulty system as a concept to enhance conversation skills in autistic children. The proposed game was tested with the participation of three autistic children, each of whom played the game in five turns. The results show that the method is useful in the long term.
H.3.15.3. Evolutionary computing and genetic algorithms
H.R Keshavarz; M. Saniee Abadeh
Abstract
In Web 2.0, people are free to share their experiences, views, and opinions. One of the problems that arises in Web 2.0 is the sentiment analysis of texts produced by users in outlets such as Twitter. One of the main tasks of sentiment analysis is subjectivity classification. Our aim is to classify the subjectivity of tweets. To this end, we create subjectivity lexicons in which words are divided into objective and subjective classes. To create these lexicons, we make use of three metaheuristic methods. We extract two meta-level features, which give the counts of objective and subjective words in tweets according to the lexicons, and then classify the tweets based on these two features. Our method outperforms the baselines in terms of accuracy and F-measure. Among the three metaheuristics, the genetic algorithm performs better than simulated annealing and asexual reproduction optimization, and it also outperforms all the baselines in terms of accuracy on two of the three assessed datasets. The created lexicons also give insight into the objectivity and subjectivity of words.
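The two meta-level features described above are simple word counts against the lexicons. A minimal sketch, using toy lexicons rather than the learned ones from the paper:

```python
def subjectivity_features(tweet, subjective_lexicon, objective_lexicon):
    """Meta-level features: counts of subjective and objective tokens
    in a tweet according to the given lexicons."""
    tokens = tweet.lower().split()
    n_subj = sum(1 for t in tokens if t in subjective_lexicon)
    n_obj = sum(1 for t in tokens if t in objective_lexicon)
    return n_subj, n_obj

# Toy lexicons for illustration only
subj = {"love", "awful", "great"}
obj = {"today", "movie", "city"}
features = subjectivity_features("I love this great movie today", subj, obj)
```

The resulting pair (subjective count, objective count) is the two-dimensional feature vector fed to the classifier.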
Gh. Ahmadi; M. Teshnelab
Abstract
Because of the interactions among the variables of a multiple-input multiple-output (MIMO) nonlinear system, its identification is a difficult task, particularly in the presence of uncertainties. The cement rotary kiln (CRK) is a MIMO nonlinear system in the cement factory with a complicated mechanism and uncertain disturbances. The identification of the CRK is very important for different purposes such as prediction, fault detection, and control. In previous works, the CRK was identified after decomposing it into several multiple-input single-output (MISO) systems. In this paper, for the first time, the rough-neural network (R-NN) is utilized for the identification of the CRK without the use of MISO structures. The R-NN is a neural structure designed on the basis of rough set theory for dealing with uncertainty and vagueness. In addition, a stochastic gradient descent learning algorithm is proposed for training the R-NNs. The simulation results show the effectiveness of the proposed methodology.
Document and Text Processing
A. Shojaie; F. Safi-Esfahani
Abstract
With the advent of the internet and easy access to digital libraries, plagiarism has become a major issue. Applying search engines is one of the plagiarism detection techniques that converts plagiarism patterns to search queries. Generating suitable queries is the heart of this technique, and existing methods suffer from inaccurate queries and from the low precision and speed of the retrieved results. This research proposes a framework called ParaMaker. It generates accurate paraphrases of any sentence, similar to human behavior, and sends them to a search engine to find the plagiarism patterns. For the English language, ParaMaker was examined against six known methods on the standard PAN2014 datasets. Results showed an improvement of 34% in terms of recall, while precision and speed were maintained. For the Persian language, statements of suspicious documents were examined in comparison with an exact-search approach. ParaMaker showed an improvement of at least 42%, while precision and speed were maintained.
I.3.7. Engineering
F. Nosratian; H. Nematzadeh; H. Motameni
Abstract
The World Wide Web is growing at a very fast pace and makes a lot of information available to the public. Search engines use conventional methods to retrieve information on the Web; however, the search results of these engines can still be refined, and their accuracy is not high enough. One of the methods for web mining is evolutionary algorithms, which search according to the user's interests. The proposed method, based on a genetic algorithm, optimizes important relationships among links on web pages and also presents a way of classifying web documents. Likewise, the proposed method finds the best pages among those retrieved by the engines, and it calculates the quality of pages from web-page features, independently or dependently. The proposed algorithm is complementary to search engines. In the proposed method, after implementation of the genetic algorithm in MATLAB 2013 with a crossover rate of 0.7 and a mutation rate of 0.05, the best and most similar pages are presented to the user. The optimal solutions remained fixed over several runs of the proposed algorithm.
H.3. Artificial Intelligence
Y. Vaghei; A. Farshidianfar
Abstract
In recent years, trajectory tracking of underactuated nonlinear dynamic systems, such as space robots and manipulators with structural flexibility, has become a major field of interest due to the complexity and high computational load of these systems. Hierarchical sliding mode control has been investigated recently for these systems; however, instability phenomena may occur, especially in long-term operations. In this paper, a new design approach for an adaptive fuzzy hierarchical terminal sliding-mode controller (AFHTSMC) is proposed. The sliding surfaces of the subsystems construct the hierarchical structure of the proposed method, in which the top layer includes all of the subsystems' sliding surfaces. Moreover, terminal sliding mode has been implemented in each layer to ensure error convergence to zero in finite time, besides chattering reduction. In addition, online fuzzy models are employed to approximate the two nonlinear functions of the dynamic system. Finally, a simulation example of an inverted pendulum is presented to confirm the effectiveness and robustness of the proposed controller.
F.2.7. Optimization
F. Tatari; M. B. Naghibi-Sistani
Abstract
In this paper, the optimal adaptive leader-follower consensus of linear continuous-time multi-agent systems is considered. The error dynamics of each player depend on its neighbors' information. A detailed analysis of online optimal leader-follower consensus under known and unknown dynamics is presented. The introduced reinforcement-learning-based algorithms learn online the approximate solution to the algebraic Riccati equations. An optimal adaptive control technique is employed to iteratively solve the algebraic Riccati equation based on the online measured error-state and input information for each agent, without requiring a priori knowledge of the system matrices. The decoupling of the multi-agent system's global error dynamics facilitates the employment of policy iteration and optimal adaptive control techniques to solve the leader-follower consensus problem under known and unknown dynamics. Simulation results verify the effectiveness of the proposed methods.
Hossein Marvi; Zeynab Esmaileyan; Ali Harimi
Abstract
The vast use of Linear Prediction Coefficients (LPC) in speech processing systems has intensified the importance of their accurate computation. This paper is concerned with computing LPC coefficients using evolutionary algorithms: the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE) and Particle Swarm Optimization with Differentially perturbed Velocity (PSO-DV). In this method, the evolutionary algorithms try to find the LPC coefficients that can predict the original signal with minimum prediction error. To this end, the fitness function is defined as the maximum prediction error in all of the evolutionary algorithms. The coefficients computed by these algorithms are compared to the coefficients obtained by the traditional autocorrelation method in terms of prediction accuracy. Our results showed that the coefficients obtained by the evolutionary algorithms predict the original signal with less prediction error than the autocorrelation method. The maximum prediction errors achieved by the autocorrelation method, GA, PSO, DE and PSO-DV are 0.35, 0.06, 0.02, 0.07 and 0.001, respectively. This shows that the hybrid algorithm, PSO-DV, is superior to the other algorithms in computing linear prediction coefficients.
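The fitness function named in the abstract, the maximum prediction error of a linear predictor, can be written directly; the toy signal below is illustrative, not from the paper:

```python
def lpc_fitness(signal, coeffs):
    """Fitness as described above: the maximum absolute prediction error
    of an order-p linear predictor with coefficients `coeffs`."""
    p = len(coeffs)
    errors = []
    for n in range(p, len(signal)):
        # linear prediction: weighted sum of the previous p samples
        pred = sum(coeffs[k] * signal[n - 1 - k] for k in range(p))
        errors.append(abs(signal[n] - pred))
    return max(errors)

# A constant signal is predicted perfectly by a 1-tap predictor with coefficient 1.0
err = lpc_fitness([1.0, 1.0, 1.0, 1.0], [1.0])
```

An evolutionary algorithm would minimize this value over candidate coefficient vectors.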
I.3.7. Engineering
Mohsen Khosravi; Mahdi Banejad; Heydar Toosian Shandiz
Abstract
State estimation is the foundation of any control and decision making in power networks. The first requirement for a secure network is a precise and safe state estimator, so that decisions are made based on accurate knowledge of the network status. This paper introduces a new estimator that is able to detect bad data with few calculations, without the need for iterations or estimation-residual calculation. The estimator is equipped with a filter formed at different times according to Principal Component Analysis (PCA) of the measurement data. In addition, the proposed estimator employs the dynamic relationships of the system and the prediction property of the Extended Kalman Filter (EKF) to estimate the states of the network quickly and precisely, making real-time monitoring of the power network possible. The proposed dynamic model also enables the estimator to estimate the states of a large-scale system online. The state estimation results of the proposed algorithm for an IEEE 9-bus system show that, even in the presence of bad data, the estimator provides a valid and precise estimation of the system states and tracks the network with appropriate speed.
A.1. General
S. Asadi Amiri
Abstract
Removing salt-and-pepper noise is an active research area in image processing. In this paper, a two-phase method is proposed for removing salt-and-pepper noise while preserving edges and fine details. In the first phase, noise candidate pixels, which are likely to be contaminated by noise, are detected. In the second phase, only the noise candidate pixels are restored using an adaptive median filter. For noise detection, a two-stage method is utilized. First, thresholding is applied to the image for an initial estimate of the noise candidate pixels. Since some pixels in the image may resemble salt-and-pepper noise, these pixels can be mistakenly identified as noise. Hence, in the second step of noise detection, pixon-based segmentation is used to identify the salt-and-pepper noise pixels more accurately. A pixon is a set of neighboring pixels with similar gray levels. The proposed method was evaluated on several noisy images, and the results show the accuracy of the proposed method in salt-and-pepper noise removal and that it outperforms several existing methods.
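The first-stage thresholding step can be sketched simply: salt-and-pepper noise drives pixels to the extremes of the gray range, so extreme-valued pixels are flagged as candidates. The thresholds below are illustrative choices, not the paper's values:

```python
def salt_pepper_candidates(image, low=10, high=245):
    """First-stage detection sketch: flag pixels near the extreme ends of
    the 8-bit gray range as salt-and-pepper noise candidates."""
    return [[(p <= low or p >= high) for p in row] for row in image]

# Toy 3x3 "image" of gray levels
img = [[0, 128, 255],
       [200, 255, 64],
       [12, 0, 100]]
mask = salt_pepper_candidates(img)
```

The second stage (pixon-based segmentation) would then prune false positives such as genuinely dark or bright image regions before the adaptive median filter restores the remaining candidates.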
Document and Text Processing
N. Nazari; M. A. Mahdavi
Abstract
Text summarization endeavors to produce a summary version of a text while maintaining its original ideas. The textual content on the web, in particular, is growing at an exponential rate. The ability to sift through such a massive amount of data in order to extract the useful information is a major undertaking and requires an automatic mechanism to aid with the extant repository of information. Text summarization systems intend to assist with content reduction by keeping the relevant information and filtering out the non-relevant parts of the text. In terms of input, there are two fundamental approaches among text summarization systems. The first approach summarizes a single document: the system takes one document as input and produces a summary version as its output. The alternative approach takes several documents as input and produces a single summary document as output. In terms of output, summarization systems are also categorized into two major types. One approach extracts exact sentences from the original document to build the summary output. The alternative is a more complex approach, in which the rendered text is a rephrased version of the original document. This paper offers an in-depth introduction to automatic text summarization. We also discuss some techniques for evaluating the quality of automatic text summarization.
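The first output type the survey describes, extraction, can be illustrated in a few lines: score each sentence and keep the top-k in their original order. The frequency-based scoring and naive tokenization here are illustrative choices, not a method from the paper:

```python
# Toy extractive summarizer: rank sentences by the total corpus
# frequency of their words, keep the top-k, and emit them in the
# original order. Tokenization and scoring are deliberately naive.
from collections import Counter

def summarize(text, k=1):
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    freq = Counter(text.lower().split())
    # Score each sentence by the summed frequency of its words.
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in s.lower().split()),
                    reverse=True)
    top = set(scored[:k])
    # Restore original document order for the kept sentences.
    return '. '.join(s for s in sentences if s in top) + '.'

text = "Compression saves space. Compression saves time. Cats sleep."
summary = summarize(text, 1)
```

An abstractive system, the second type, would instead rephrase the selected content, which is why it is the harder problem.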
H.3. Artificial Intelligence
F. Barani; H. Nezamabadi-pour
Abstract
The artificial bee colony (ABC) algorithm is a swarm intelligence optimization algorithm inspired by the intelligent behavior of honey bees searching for food sources. Various versions of the ABC algorithm have been widely used to solve continuous and discrete optimization problems in different fields. In this paper, a new binary version of the ABC algorithm inspired by quantum computing, called the binary quantum-inspired artificial bee colony algorithm (BQIABC), is proposed. BQIABC combines the main structure of ABC with concepts and principles of quantum computing, such as the quantum bit, the quantum superposition state, and a rotation Q-gate strategy, to obtain an algorithm with greater exploration ability. Owing to this higher exploration ability, the proposed algorithm provides a robust tool for solving binary optimization problems. To evaluate its effectiveness, several experiments are conducted on the 0/1 knapsack problem and the Max-Ones and Royal-Road functions. The results produced by BQIABC are compared with those of ten state-of-the-art binary optimization algorithms. The comparisons show that BQIABC yields results better than or similar to those of the other algorithms, and it can be regarded as a promising approach to binary optimization problems.
H.3.2.7. Industrial automation
M. Aghaei; A. Dastfan
Abstract
Harmonics in distribution systems have become an important problem due to the increase in nonlinear loads. This paper presents a new approach based on a graph algorithm for the optimal placement of passive harmonic filters in a multi-bus system that suffers from harmonic current sources. The objective is to minimize the network loss, the cost of the filters, and the total harmonic distortion of voltage, while also effectively enhancing the voltage profile at each bus. Four types of sub-graph are used to define the search space of the optimization. The method handles standard capacitor sizes and their associated costs in planning the filters. The objective function used in this paper is not differentiable, but it eases the solving process. The IEEE 30-bus test system is used for the placement of the passive filters, and simulations have been performed to show the applicability of the proposed method. The simulation results prove that the method is effective and suitable for passive filter planning in a power system.
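One term of the objective, total harmonic distortion (THD) of voltage, has a standard definition: the RMS of all harmonics above the fundamental, divided by the fundamental. A quick helper, independent of the paper's placement algorithm:

```python
# Standard voltage THD: sqrt(sum of squared harmonic magnitudes,
# orders > 1) divided by the fundamental magnitude. The example
# harmonic levels are illustrative, not from the paper.
import math

def thd(harmonics):
    """THD given RMS magnitudes {order: value}; order 1 is the fundamental."""
    v1 = harmonics[1]
    distortion = math.sqrt(sum(v * v for h, v in harmonics.items() if h > 1))
    return distortion / v1

# 5th and 7th harmonics at 4% and 3% of the fundamental:
ratio = thd({1: 1.0, 5: 0.04, 7: 0.03})   # -> 0.05, i.e. 5% THD
```

Passive filters tuned near the dominant harmonic orders reduce exactly these magnitudes, which is how the placement lowers the THD term of the objective.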
H.3. Artificial Intelligence
M. Heidarian; H. Jalalifar; F. Rafati
Abstract
Uniaxial compressive strength (UCS) and the internal friction coefficient (µ) are the most important strength parameters of rock. They can be determined either by laboratory tests or from empirical correlations. Laboratory analysis is sometimes not possible for many reasons. On the other hand, due to changes in rock compositions and properties, none of the correlations can be applied as an exact universal correlation. In such conditions, artificial intelligence can be an appropriate method for estimating the strength parameters. In this study, the Adaptive Neuro-Fuzzy Inference System (ANFIS), one of the artificial intelligence techniques, was used as the main tool to predict the strength parameters in one of the Iranian southwest oil fields. A total of 655 data sets (including depth, compressional wave velocity, and density data) were used; 436 and 219 data sets were randomly selected for constructing and verifying the intelligent model, respectively. To evaluate the performance of the model, the root mean square error (RMSE) and the correlation coefficient (R2) between the values reported from the drilling site and the estimated values were computed. A comparison between the RMSE of the proposed model and those of recent intelligent models shows that the proposed model is more accurate. Acceptable accuracy and the use of conventional well-logging data are the main advantages of the proposed intelligent model.
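The two evaluation metrics used above are standard and easy to compute; a short sketch with made-up measured/predicted values (not the paper's data):

```python
# RMSE and coefficient of determination R^2 between measured and
# predicted values, as used to score the ANFIS model. The sample
# data below is illustrative only.
import math

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def r_squared(y, yhat):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))  # residual sum
    ss_tot = sum((a - mean) ** 2 for a in y)             # total sum
    return 1.0 - ss_res / ss_tot

measured  = [50.0, 60.0, 70.0, 80.0]   # e.g. UCS from the drilling site
predicted = [52.0, 59.0, 71.0, 78.0]   # e.g. model estimates
err = rmse(measured, predicted)
fit = r_squared(measured, predicted)
```

A lower RMSE and an R² close to 1 together indicate the model tracks the site-reported values closely, which is the basis of the comparison with earlier intelligent models.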