H.6.3.2. Feature evaluation and selection
Maryam Imani; Hassan Ghassemian
Abstract
When the number of training samples is limited, feature reduction plays an important role in the classification of hyperspectral images. In this paper, we propose a supervised feature extraction method based on discriminant analysis (DA) which uses the first principal component (PC1) to weight the scatter matrices. The proposed method, called DA-PC1, copes with the small sample size problem and does not have the limitation of linear discriminant analysis (LDA) on the number of extracted features. In DA-PC1, the dominant structure of the distribution is preserved by PC1, and the class separability is increased by DA. The experimental results show the good performance of DA-PC1 compared to some state-of-the-art feature extraction methods.
H.3.8. Natural Language Processing
A. Pakzad; B. Minaei Bidgoli
Abstract
Dependency parsing is a form of syntactic parsing of natural language that automatically analyzes the dependency structure of sentences and creates a dependency graph for each input sentence. Part-Of-Speech (POS) tagging is a prerequisite for dependency parsing. Generally, dependency parsers perform the POS tagging task along with dependency parsing in a pipeline mode. Unfortunately, in pipeline models, tagging errors propagate to the parser, and the tagger is not able to exploit useful syntactic information. The goal of joint models is to reduce the errors of the POS tagging and dependency parsing tasks simultaneously. In this research, we applied the joint model to the Persian and English languages using the Corbit software, optimizing the model's features and improving its accuracy concurrently. Corbit is an implementation of a transition-based approach for word segmentation, POS tagging, and dependency parsing. In this research, the joint accuracy of POS tagging and dependency parsing over the Persian test data reached 85.59% for coarse-grained and 84.24% for fine-grained POS. We also attained 76.01% for coarse-grained and 74.34% for fine-grained POS on English.
F.2.7. Optimization
B. Safaee; S. K. Kamaleddin Mousavi Mashhadi
Abstract
The quadrotor is a well-known underactuated Unmanned Aerial Vehicle (UAV) with widespread military and civilian applications. Despite its simple structure, the vehicle suffers from inherent instability, so control designers always face a formidable challenge in stabilization and control. In this paper, the fuzzy membership functions of the quadrotor's fuzzy controllers are optimized using nature-inspired algorithms, namely Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA). Finally, the results of the proposed methods are compared, and a trajectory is defined to verify the effectiveness of the fuzzy controllers designed with the better-performing algorithm.
H.5. Image Processing and Computer Vision
M. Shakeri; M.H. Dezfoulian; H. Khotanlou
Abstract
The histogram equalization technique is one of the basic methods of image contrast enhancement. Applying it to images with uniform gray levels (narrow histograms) causes loss of image detail and of the natural look of the image. To overcome this problem and achieve better contrast enhancement, a new two-step method is proposed. In the first step, the image histogram is partitioned into several sub-histograms according to the mean value and standard deviation, controlled by the PSNR measure. In the second step, each sub-histogram is improved separately and locally with traditional histogram equalization. Finally, all sub-histograms are combined to obtain the enhanced image. Experimental results show that this method not only keeps the visual details of the histogram but also enhances image contrast.
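As an illustration of the two-step idea, the following minimal sketch (Python) partitions the gray range at the mean and at one standard deviation on either side and equalizes each sub-range separately; the PSNR-controlled partitioning described above is not reproduced, so the cut points here are purely illustrative.

```python
import numpy as np

def partitioned_histogram_equalization(img):
    """Illustrative two-step contrast enhancement (hypothetical variant):
    split the gray-level range at mean +/- std, equalize each sub-range
    separately, then merge the results."""
    img = img.astype(np.uint8)
    mu, sigma = img.mean(), img.std()
    # Boundaries of the sub-histograms, clipped to the valid gray range.
    cuts = np.clip([0, mu - sigma, mu, mu + sigma, 255], 0, 255).astype(int)
    out = np.zeros_like(img)
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        mask = (img >= lo) & (img <= hi)
        if not mask.any() or hi == lo:
            continue
        # Classical histogram equalization restricted to [lo, hi].
        levels = img[mask]
        hist, _ = np.histogram(levels, bins=hi - lo + 1, range=(lo, hi))
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]
        out[mask] = (lo + cdf[levels - lo] * (hi - lo)).astype(np.uint8)
    return out
```

Keeping each sub-range's output inside its own interval is what preserves the overall brightness structure while still stretching contrast locally.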
H.5. Image Processing and Computer Vision
Seyed M. Ghazali; Y. Baleghi
Abstract
Observation in absolute darkness and in daytime under every atmospheric situation is one of the advantages of thermal imaging systems. Despite the increasing use of these systems, there are still many difficulties in analyzing thermal images due to the variable features of pedestrians and atmospheric situations. In this paper, an efficient method is proposed for detecting pedestrians in outdoor thermal images that adapts to variable atmospheric situations. In the first step, the type of atmospheric situation is estimated based on global features of the thermal image. Then, for each situation, a relevant algorithm is performed for pedestrian detection. To do this, thermal images are divided into three classes of atmospheric situations: a) fine, such as sunny weather; b) bad, such as rainy and hazy weather; and c) hot, such as hot summer days where pedestrians are darker than the background. A three-level 2-Dimensional Double Density Dual Tree Discrete Wavelet Transform (2D DD DT DWT) is then applied to the input images, and the energy of the low-frequency coefficients at the third level is calculated as the discriminating feature for atmospheric situation identification. A feed-forward neural network (FFNN) classifier is trained on this feature vector to determine the category of atmospheric situation. Finally, a predetermined algorithm relevant to that category is applied for pedestrian detection. The proposed method achieves high performance: the pedestrian detection accuracy on two popular databases exceeds 99%.
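A minimal sketch of the situation-classification step, assuming the PyWavelets and scikit-learn libraries; a standard 2-D DWT stands in for the double-density dual-tree transform, and the training images and class labels (fine/bad/hot) are hypothetical placeholders.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def situation_feature(thermal_img):
    """Energy of the level-3 low-frequency sub-band of a 2-D DWT, used as a
    global feature for atmospheric-situation identification (a plain DWT is
    used here instead of the 2D DD DT DWT of the paper)."""
    coeffs = pywt.wavedec2(thermal_img.astype(float), "db2", level=3)
    approx = coeffs[0]                      # low-frequency coefficients at level 3
    return [np.sum(approx ** 2) / approx.size]

# Hypothetical training call: labels 0/1/2 stand for fine/bad/hot situations.
# clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(
#     [situation_feature(img) for img in train_images], train_labels)
```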
H.6.5.2. Computer vision
M. Karami; A. Moosavie nia; M. Ehsanian
Abstract
In this paper, we address the problem of automatic arrangement of cameras in a 3D system to enhance the performance of the depth acquisition procedure. Lacking ground truth or a priori information, a measure of uncertainty is required to assess the quality of reconstruction. The mathematical model of iso-disparity surfaces provides an efficient way to estimate the depth estimation uncertainty, which depends on the baseline length, focal length, panning angle, and pixel resolution of a stereo vision system. Accordingly, we first present analytical relations for fast estimation of the uncertainty embedded in depth acquisition; these relations, along with the 3D sampling arrangement, are then employed to define a cost function. The optimal camera arrangement is determined by minimizing the cost function with respect to the system parameters and the required constraints. Finally, the proposed algorithm is implemented on several 3D models. The simulation results demonstrate a significant reduction (up to 35%) in the depth uncertainty of the obtained depth maps compared with the traditional rectified camera setup.
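For reference, in the simplest rectified stereo configuration the depth uncertainty implied by the iso-disparity model takes the following standard form (this is the textbook relation, not the paper's full cost function):

```latex
Z=\frac{fB}{d},\qquad
\Delta Z\approx\left|\frac{\partial Z}{\partial d}\right|\Delta d
=\frac{fB}{d^{2}}\,\Delta d=\frac{Z^{2}}{fB}\,\Delta d,
```

where B is the baseline, f the focal length, d the disparity, and Delta d the pixel (disparity) resolution; the spacing between iso-disparity surfaces grows quadratically with depth, which is exactly the kind of penalty a depth-uncertainty cost function captures.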
Mohammad AllamehAmiri; Vali Derhami; Mohammad Ghasemzadeh
Abstract
Quality of service (QoS) is an important issue in the design and management of web service composition. QoS in web services consists of various non-functional factors, such as execution cost, execution time, availability, successful execution rate, and security. In recent years, the number of available web services has proliferated, and many of them offer the same functionality; functionally equivalent web services are distinguished by their quality parameters. Moreover, clients usually demand value-added services rather than those offered by single, isolated web services. Therefore, selecting, from among numerous candidate plans, a composition plan of web services that satisfies client requirements has become a challenging and time-consuming problem. This paper proposes a new constrained composition plan optimizer based on a genetic algorithm. The proposed method can efficiently find a composition plan that satisfies the user constraints. The performance of the method is evaluated in a simulated environment.
Seyed M. Hosseinirad; M. Niazi; J. Pourdeilami; S. K. Basu; A. A. Pouyan
Abstract
In Wireless Sensor Networks (WSNs), localization algorithms can be range-based or range-free. The Approximate Point in Triangle (APIT) algorithm is a range-free approach. We propose a modification of the APIT algorithm, referred to as modified-APIT. We select suitable triangles with an appropriate distance between anchors to reduce PIT test errors (the edge effect and non-uniform placement of neighbours) in the APIT algorithm. To reduce the computational load and avoid selecting useless anchors, we propose segmenting the application area into four non-overlapping and four overlapping sub-regions. Our results show that modified-APIT has better performance in terms of average error and time requirement for all network sizes with both random and grid deployments. Variations in the size of the network and the radio communication radius of the anchors affect the average error and time requirement. To increase localization accuracy and reduce computation time, each sub-region should contain a minimum of 5 anchors; 5 to 10 anchors per sub-region are effective in modified-APIT.
H.6. Pattern Recognition
J. Hamidzadeh
Abstract
In instance-based learning, a training set is given to a classifier for classifying new instances. In practice, not all information in the training set is useful for classifiers, so it is convenient to discard irrelevant instances from the training set. This process is known as instance reduction, which is an important task for classifiers since it can reduce the time needed for classification or training. Instance-based learning methods are often confronted with the difficulty of choosing which instances must be stored to be used during an actual test; storing too many instances may result in large memory requirements and slow execution speed. In this paper, first, a Distance-based Decision Surface (DDS) is proposed, which is used as a separating surface between the classes; then an instance reduction method based on the DDS, namely IRDDS (Instance Reduction based on Distance-based Decision Surface), is proposed. Using the DDS together with a genetic algorithm, a reference set for classification is selected. IRDDS selects the most representative instances, satisfying both of the following objectives: high accuracy and high reduction rate. The performance of IRDDS has been evaluated on real-world data sets from the UCI repository using 10-fold cross-validation. The experimental results are compared with some state-of-the-art methods and show the superiority of the proposed method over the surveyed literature in terms of both classification accuracy and reduction percentage.
H.8. Document and Text Processing
Sh. Rafieian; A. Baraani dastjerdi
Abstract
With due respect to authors' rights, plagiarism detection is one of the critical problems in the field of text mining that many researchers are interested in, and it is considered a serious issue in higher academic institutions. Language-independent tools exist, but they do not yield reliable results because they ignore the special features of each language. Considering the paucity of work on the Persian language and the lack of reliable Persian plagiarism checkers, a method is needed to improve the accuracy of detecting plagiarized Persian phrases. This article presents the PCP solution, a combinational method that, in addition to the meaning and stem of words, handles synonyms and pluralization by applying a document tree representation based on fingerprinting the text with word 3-grams. The obtained 3-grams are extracted from the text, hashed with the BKDR hash function, and stored as the fingerprint of a document in a repository of reference-document fingerprints used for checking suspicious documents. The proposed PCP method is evaluated in eight experiments on seven different sets, which include suspicious documents and reference documents, from the Hamshahri newspaper website. The results indicate that the accuracy of the proposed method in detecting similar texts shows an average improvement of 21.15 percent over the localized "Winnowing" method, and an average improvement of 31.65 percent over the language-independent tool.
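A minimal sketch of the fingerprinting step (Python), assuming the conventional BKDR string hash with seed 131; the stemming, synonym handling, and document-tree representation of the PCP method are omitted here.

```python
def bkdr_hash(text, seed=131, mod=2**32):
    """Conventional BKDR string hash (seed 131 is the usual choice)."""
    h = 0
    for ch in text:
        h = (h * seed + ord(ch)) % mod
    return h

def fingerprint(document, n=3):
    """Hash every word n-gram of the document into a fingerprint set."""
    words = document.split()
    grams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return {bkdr_hash(g) for g in grams}

def similarity(suspicious, reference):
    """Fraction of the suspicious document's n-grams found in the reference fingerprint."""
    fp_s, fp_r = fingerprint(suspicious), fingerprint(reference)
    return len(fp_s & fp_r) / max(len(fp_s), 1)
```

Storing only the hashed 3-grams keeps the reference repository compact while still allowing overlap-based similarity checks against suspicious documents.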
C.3. Software Engineering
F. Karimian; S. M. Babamir
Abstract
The reliability of software depends on its fault-prone modules: the fewer fault-prone units a piece of software contains, the more we may trust it. Therefore, if we are able to predict the fault-prone modules of software, it becomes possible to judge its reliability. In predicting software fault-prone modules, one of the contributing features is software metrics, by which one can classify software modules into fault-prone and non-fault-prone ones. To make such a classification, we investigated 17 classification methods whose features (attributes) are 39 software metrics and whose instances (software modules) come from 13 datasets reported by NASA. However, two important issues influence prediction accuracy when data mining methods are used: (1) selecting the best/most influential features (i.e., software metrics) when there is a wide diversity of them, and (2) instance sampling in order to balance the imbalanced instances of mining, since with two imbalanced classes the classifier is biased towards the majority class. Based on feature selection and instance sampling, we considered 4 scenarios in the appraisal of the 17 classification methods for predicting software fault-prone modules. To select features, we used Correlation-based Feature Selection (CFS), and to sample instances, we used the Synthetic Minority Oversampling Technique (SMOTE). Empirical results showed that suitable sampling of software modules significantly influences the accuracy of predicting software reliability, whereas metric selection has no considerable effect on the prediction.
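A minimal sketch of one such scenario (feature selection followed by instance sampling), assuming scikit-learn and imbalanced-learn; CFS is not available in scikit-learn, so a generic univariate selector stands in for it, a single example classifier stands in for the 17 methods compared in the paper, and the synthetic module/metric data are placeholders.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# X: module-by-metric matrix (e.g. 39 software metrics), y: fault-prone labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 39))
y = (rng.random(1000) < 0.1).astype(int)                  # imbalanced classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)   # stand-in for CFS
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_sel, y_tr)  # balance classes

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te_sel)))
```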
J.10.3. Financial
S. Beigi; M.R. Amin Naseri
Abstract
Due to today's advancement in technology and business, fraud detection has become a critical component of financial transactions. Considering the vast amounts of data in large datasets, it becomes more difficult to detect fraudulent transactions manually. In this research, we propose a combined method using both data mining and statistical tasks, utilizing feature selection, resampling, and cost-sensitive learning for credit card fraud detection. In the first step, useful features are identified using a genetic algorithm. Next, the optimal resampling strategy is determined based on design of experiments (DOE) and response surface methodologies. Finally, the cost-sensitive C4.5 algorithm is used as the base learner in the AdaBoost algorithm. Using a real data set, the results show that the proposed method significantly reduces the misclassification cost, by at least 14%, compared with decision tree, Naïve Bayes, Bayesian network, neural network, and artificial immune system classifiers.
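A minimal sketch of the final learning stage, assuming scikit-learn; C4.5 itself is not available there, so an entropy-based decision tree with class weights stands in for the cost-sensitive C4.5 base learner, and the cost values are hypothetical.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical misclassification costs: missing a fraud (class 1) is assumed
# far more costly than flagging a legitimate transaction (class 0).
cost = {0: 1, 1: 20}

base = DecisionTreeClassifier(criterion="entropy",   # C4.5-like splitting rule
                              max_depth=5,
                              class_weight=cost)     # cost-sensitive learning

model = AdaBoostClassifier(estimator=base,           # base_estimator in scikit-learn < 1.2
                           n_estimators=100,
                           random_state=0)
# model.fit(X_train, y_train); y_pred = model.predict(X_test)
```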
H.3. Artificial Intelligence
Z. Karimi Zandian; M. R. Keyvanpour
Abstract
Fraud detection is one of the ways to cope with the damage associated with fraudulent activities, which have become common due to the rapid development of the Internet and electronic business. Methods are needed that detect fraud accurately and fast. To achieve accuracy, fraud detection methods need to consider both kinds of features: user-level features and network-level features. In this paper, a method called MEFUASN is proposed to extract features based on social network analysis; the obtained features are then combined with user-level features and used to detect fraud via semi-supervised learning. Evaluation results show that using the proposed feature extraction as a pre-processing step in fraud detection improves detection accuracy remarkably while keeping the runtime under control in comparison with other methods.
H.5. Image Processing and Computer Vision
M. Amin-Naji; A. Aghagolzadeh
Abstract
The purpose of multi-focus image fusion is gathering the essential information and the focused parts from the input multi-focus images into a single image. These multi-focus images are captured with different depths of focus of the cameras. Many multi-focus image fusion techniques have been introduced that consider focus measurement in the spatial domain. However, multi-focus image fusion is much more time-efficient and appropriate in the discrete cosine transform (DCT) domain, especially when JPEG images are used in visual sensor networks (VSN). Therefore, most researchers are interested in calculating focus measurements and performing the fusion process directly in the DCT domain, and many techniques have been developed that replace the spatial-domain fusion process with a DCT-domain one. Previous works in the DCT domain have shortcomings in selecting suitable divided blocks according to their focus-measurement criterion. In this paper, the calculation of two powerful focus measurements, energy of Laplacian (EOL) and variance of Laplacian (VOL), is proposed directly in the DCT domain. In addition, two other new focus measurements, which work by measuring the correlation coefficient between source blocks and artificially blurred blocks, are developed completely in the DCT domain. Furthermore, a new consistency verification method is introduced as a post-processing step, improving the quality of the fused image significantly. The proposed methods significantly reduce the drawbacks due to unsuitable block selection. The output image quality of the proposed methods is demonstrated by comparing their results with those of previous algorithms.
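A minimal sketch of block-wise focus-based selection (Python), shown with a spatial-domain energy of Laplacian for brevity; the paper computes EOL/VOL directly from the DCT coefficients of 8x8 JPEG blocks, which is not reproduced here, and no consistency verification is applied.

```python
import numpy as np
from scipy.ndimage import laplace

def fuse_multifocus(img_a, img_b, block=8):
    """For every block, keep the source block with the higher energy of
    Laplacian (EOL), i.e. the better-focused one (spatial-domain illustration)."""
    fused = np.empty_like(img_a, dtype=float)
    lap_a, lap_b = laplace(img_a.astype(float)), laplace(img_b.astype(float))
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            sl = (slice(i, i + block), slice(j, j + block))
            eol_a, eol_b = (lap_a[sl] ** 2).sum(), (lap_b[sl] ** 2).sum()
            fused[sl] = img_a[sl] if eol_a >= eol_b else img_b[sl]
    return fused
```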
N. Mobaraki; R. Boostani; M. Sabeti
Abstract
Among the variety of meta-heuristic population-based search algorithms, particle swarm optimization (PSO) with adaptive inertia weight (AIW) has been considered a versatile optimization tool that incorporates the experience of the whole swarm into the movement of particles. Although the exploitation ability of this algorithm is great, it cannot comprehensively explore the search space and may be trapped in a local minimum within a limited number of iterations. To increase its diversity and enhance its exploration ability, this paper inserts a chaotic factor, generated by three chaotic systems, along with a perturbation stage into AIW-PSO to avoid premature convergence, especially in complex nonlinear problems. To assess the proposed method, a known optimization benchmark containing nonlinear complex functions was selected, and the results were compared to those of standard PSO, AIW-PSO, and the genetic algorithm (GA). The empirical results demonstrate the superiority of the proposed chaotic AIW-PSO over its counterparts on 21 functions, which confirms the promising role of inserting randomness into AIW-PSO. The behavior of the error over the epochs shows that the proposed approach can smoothly find proper minima in a timely manner without encountering premature convergence.
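A minimal sketch of the idea (Python), assuming a logistic map as the chaotic generator and a linearly decreasing inertia weight; the paper's three chaotic systems, its specific adaptive-inertia rule, and its perturbation stage are only represented schematically here.

```python
import numpy as np

def chaotic_aiw_pso(f, dim, n_particles=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    """PSO with an adaptive inertia weight plus a chaotic factor (logistic map)
    injected into the velocity update to help escape local minima."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    z = rng.uniform(0.1, 0.9, (n_particles, dim))          # chaotic state
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                          # adaptive inertia weight
        z = 4.0 * z * (1.0 - z)                            # logistic map update
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x) + 0.1 * (z - 0.5)
        x = np.clip(x + v, lb, ub)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example on the sphere function:
# best, val = chaotic_aiw_pso(lambda p: np.sum(p**2), dim=10)
```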
H.6.3.2. Feature evaluation and selection
Sh. Kashef; H. Nezamabadi-pour
Abstract
Multi-label classification has gained significant attention in recent years due to the increasing number of modern applications associated with multi-label data. Despite its short life, different approaches have been presented to solve the task of multi-label classification. LIFT is a multi-label classifier that utilizes a new strategy for multi-label learning by leveraging label-specific features. Label-specific features mean that each class label is supposed to have its own characteristics and is determined by some specific features that are the most discriminative for that label. LIFT employs clustering methods to discover the properties of the data. More precisely, LIFT divides the training instances into positive and negative clusters for each label, which respectively consist of the training examples with and without that label. It then selects representative centroids in the positive and negative instances of each label by k-means clustering and replaces the original features of a sample with the distances to these representatives. By constructing the new features, the dimensionality of the space is reduced significantly. However, to construct these new features, the original features are still needed; therefore, the complexity of the multi-label classification process does not diminish in practice. In this paper, we make a modification to LIFT to reduce the computational burden of the classifier while improving, or at least preserving, its performance. The experimental results show that the proposed algorithm achieves both goals simultaneously.
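A minimal sketch of the original LIFT feature construction for a single label, assuming scikit-learn; the ratio parameter is illustrative, and the modification proposed in the paper is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def lift_features(X, y_label, ratio=0.1, seed=0):
    """Build label-specific features for one label: cluster the positive and
    negative instances separately and use the distances to the centroids as
    the new representation (as in LIFT). Assumes both groups are non-empty."""
    pos, neg = X[y_label == 1], X[y_label == 0]
    k = max(1, int(ratio * min(len(pos), len(neg))))   # clusters per group
    cen_pos = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pos).cluster_centers_
    cen_neg = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(neg).cluster_centers_
    centroids = np.vstack([cen_pos, cen_neg])
    return pairwise_distances(X, centroids)            # new 2k-dimensional features
```

The new representation has only 2k dimensions, but note that computing it still requires the original feature vectors, which is the complexity issue the paper targets.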
H.5. Image Processing and Computer Vision
J. Darvish; M. Ezoji
Abstract
Detection of diabetic retinopathy lesions, such as exudates, in fundus images of the retina can lead to early diagnosis of the disease. A retinal image includes dark areas, such as the main blood vessels and retinal tissue, and bright areas, such as the optic disc, optical fibers, and lesions (e.g., exudates). In this paper, a multistage algorithm for the detection of exudates in the foreground is proposed. The algorithm segments the dark background areas in the proper channels of the RGB color space using morphological processing such as closing, opening, and top-hat operations. Then an appropriate edge detector discriminates between exudates and cotton-wool spots or other artifacts. To tackle the problem of optical fibers and to discriminate between these bright regions and exudates, the main vessels are first detected from the green channel of the RGB color space, and the optical fiber areas around the vessels are then marked. An algorithm that uses PCA-based reconstruction error is proposed to discard another bright fundus structure, the optic disc. Several experiments have been performed with the HEI-MED standard database and evaluated by comparison with ground truth images. The results show that the proposed algorithm has a detection accuracy of 96%.
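A minimal sketch of the bright-lesion candidate stage, assuming OpenCV; the vessel detection, edge-based discrimination, optical-fiber masking, and PCA-based optic-disc removal are omitted, and the kernel size and threshold are illustrative.

```python
import cv2
import numpy as np

def bright_lesion_candidates(color_image, kernel_size=15, thresh=20):
    """Highlight bright structures (exudate candidates) in a fundus image by
    a white top-hat on the green channel, then threshold the response."""
    green = color_image[:, :, 1]                            # green channel (index 1 in RGB or BGR)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(green, cv2.MORPH_TOPHAT, kernel)   # bright details on dark background
    _, mask = cv2.threshold(tophat, thresh, 255, cv2.THRESH_BINARY)
    return mask
```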
Mohammad Ahmadi Livani; mahdi Abadi; Meysam Alikhany; Meisam Yadollahzadeh Tabari
Abstract
Detecting anomalies is an important challenge for intrusion detection and fault diagnosis in wireless sensor networks (WSNs). To address the problem of outlier detection in WSNs, in this paper we present a PCA-based centralized approach and a DPCA-based distributed energy-efficient approach for detecting outliers in the sensed data of a WSN. Outliers in sensed data can be caused by compromised or malfunctioning nodes. In the distributed approach, we use distributed principal component analysis (DPCA) and fixed-width clustering (FWC) to establish a global normal pattern and to detect outliers; the process of establishing the global normal pattern is distributed among all sensor nodes. We also use weighted coefficients and a forgetting curve to periodically update the established normal profile. We demonstrate that the proposed distributed approach achieves accuracy comparable to that of the centralized approach, while the communication overhead in the network and the energy consumption are significantly reduced.
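A minimal sketch of the centralized PCA idea, assuming a simple reconstruction-error score; the distributed DPCA, fixed-width clustering, weighted coefficients, and forgetting curve are not shown, and the threshold is left to the caller.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_outlier_scores(train, test, n_components=2):
    """Fit PCA on normal sensed data, then score new readings by their
    reconstruction error; large residuals indicate outliers."""
    pca = PCA(n_components=n_components).fit(train)
    recon = pca.inverse_transform(pca.transform(test))
    return np.linalg.norm(test - recon, axis=1)

# Usage (hypothetical arrays of sensor readings):
# outliers = pca_outlier_scores(normal_readings, new_readings) > threshold
```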
H.3.2.2. Computer vision
M. Askari; M. Asadi; A. Asilian Bidgoli; H. Ebrahimpour
Abstract
For many years, researchers have studied high-accuracy methods for handwriting recognition and achieved many significant improvements. However, an issue that has rarely been studied is the speed of these methods. Considering computer hardware limitations, it is necessary for these methods to run at high speed. One way to increase the processing speed is to use the computer's parallel processing power. This paper introduces one of the best feature extraction methods for handwriting recognition, called DPP (Derivative Projection Profile), which is employed for isolated Persian handwritten character recognition. In addition to achieving good results, this computationally light feature can easily be processed. Moreover, a Hamming neural network is used as the classifier of the system. To increase the speed, part of the recognition method is executed on GPU (graphics processing unit) cores using the CUDA platform. The HADAF database (the largest isolated Persian character database) is utilized to evaluate the system. The results show 94.5% accuracy, and we achieved a speed-up of about 5.5 times using the GPU.
Mohsen Zare-Baghbidi; Saeid Homayouni; Kamal Jamshidi; A. R. Naghsh-Nilchi
Abstract
Anomaly Detection (AD) has recently become an important application of hyperspectral image analysis. The goal of these algorithms is to find the objects in the image scene that are anomalous in comparison to their surrounding background. One way to improve the performance and runtime of these algorithms is to use Dimensionality Reduction (DR) techniques. This paper evaluates the effect of three popular linear dimensionality reduction methods on the performance of three benchmark anomaly detection algorithms. Principal Component Analysis (PCA), the Fast Fourier Transform (FFT), and the Discrete Wavelet Transform (DWT), as DR methods, act as a pre-processing step for the AD algorithms. The assessed AD algorithms are Reed-Xiaoli (RX), the kernel-based version of RX (Kernel-RX), and Dual Window-Based Eigen Separation Transform (DWEST). The AD methods have been applied to two hyperspectral datasets acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the Hyperspectral Mapper (HyMap) sensors. The experiments have been evaluated using Receiver Operating Characteristic (ROC) curves, visual investigation, and the runtime of the algorithms. Experimental results show that the DR methods can significantly improve the detection performance of the RX method, whereas the detection performance of neither the Kernel-RX method nor the DWEST method changes when the DR methods are used. Moreover, these DR methods significantly reduce the runtime of RX and DWEST and make them suitable for implementation in real-time applications.
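A minimal sketch of the global RX detector with PCA as the DR pre-processing step, assuming NumPy and scikit-learn; Kernel-RX and DWEST are not shown, and the number of retained components is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def rx_detector(cube, n_components=10):
    """Global RX anomaly detector on a PCA-reduced hyperspectral cube of shape
    (rows, cols, bands): Mahalanobis distance of each pixel from the background mean."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    reduced = PCA(n_components=n_components).fit_transform(pixels)  # DR pre-processing
    mu = reduced.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(reduced, rowvar=False))
    diff = reduced - mu
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)          # squared Mahalanobis distances
    return scores.reshape(rows, cols)
```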
A.1. General
H. Kiani Rad; Z. Moravej
Abstract
In recent years, significant research efforts have been devoted to the optimal planning of power systems. Substation Expansion Planning (SEP), as a sub-problem of power system planning, consists of finding the most economical solution with the optimal location and size of future substations and/or feeders to meet the future load demand. The large number of design variables and the combination of discrete and continuous variables make substation expansion planning a very challenging problem. So far, various methods have been presented to solve such a complicated problem. Since the Bacterial Foraging Optimization Algorithm (BFOA) yields proper results in power system studies and has not yet been applied to SEP problems at the sub-transmission voltage level, this paper develops a new BFO-based method to solve the Sub-Transmission Substation Expansion Planning (STSEP) problem. The technique discussed in this paper uses BFOA to simultaneously optimize the sizes and locations of both the existing and newly installed substations and feeders while considering reliability constraints. To clarify the capabilities of the presented method, two test systems (a typical network and a real one) are considered, and the results of applying GA and BFOA to these networks are compared. The simulation results demonstrate that BFOA has the potential to find better results than the other algorithm under the same conditions. The fast convergence, the consideration of real-world network limitations as problem constraints, and the simplicity of applying it to real networks are the main features of the proposed method.
H.5.11. Image Representation
E. Sahragard; H. Farsi; S. Mohammadzadeh
Abstract
The aim of image restoration is to obtain a higher-quality desired image from a degraded image. In this strategy, an image inpainting method fills the degraded or lost area of the image with appropriate information, in such a way that the obtained image is indistinguishable to a casual observer who is unfamiliar with the original image. In this paper, different images are degraded by two procedures: one is to blur and add noise to the original image, and the other is to lose a percentage of the pixels of the original image. The degraded image is then restored by the proposed method as well as by two state-of-the-art methods. Image restoration requires optimization methods; in this paper, we use a linear restoration method based on the total variation regularizer. The variable of the optimization problem is split, and the new optimization problem is solved using the augmented Lagrangian method. The experimental results show that the proposed method is faster, and the restored images have higher quality compared to the other methods.
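A sketch of the split formulation implied above, in standard notation (H the degradation operator, f the degraded image, \nabla the discrete gradient, anisotropic TV shown for simplicity); the exact formulation in the paper may differ:

```latex
\min_{u}\ \tfrac{1}{2}\|Hu-f\|_2^2+\lambda\,\mathrm{TV}(u)
\;\Longleftrightarrow\;
\min_{u,z}\ \tfrac{1}{2}\|Hu-f\|_2^2+\lambda\|z\|_1
\quad\text{s.t.}\quad z=\nabla u,
\qquad
\mathcal{L}_\mu(u,z,\eta)=\tfrac{1}{2}\|Hu-f\|_2^2+\lambda\|z\|_1
+\eta^{\top}(\nabla u-z)+\tfrac{\mu}{2}\|\nabla u-z\|_2^2 .
```

The augmented Lagrangian is then minimized alternately: the u-subproblem is a linear (quadratic) problem, the z-subproblem reduces to soft-thresholding, and the multiplier eta is updated after each sweep.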
F.4. Probability and Statistics
Z. Shaeiri; M. R. Karami; A. Aghagolzadeh
Abstract
The sufficient number of noisy linear measurements for exact and approximate sparsity pattern/support set recovery in the high-dimensional setting is derived. Although this problem has been addressed in the recent literature, there are still considerable gaps between those results and the exact limits of perfect support set recovery. To reduce this gap, in this paper the sufficient condition is enhanced. A specific form of a joint typicality decoder is used for the support recovery task. Two performance metrics are considered for recovery validation: one considers exact support recovery, and the other seeks partial support recovery. First, an upper bound is obtained on the error probability of sparsity pattern recovery. Next, using this upper bound, the sufficient number of measurements for reliable support recovery is derived. It is shown that the sufficient condition for reliable support recovery depends on three key parameters of the problem: the noise variance, the minimum nonzero entry of the unknown sparse vector, and the sparsity level. Simulations are performed for different sparsity rates, noise variances, and distortion levels. The results show that, for all the mentioned cases, the proposed methodology significantly increases the convergence rate of the upper bound on the error probability of support recovery, which leads to a lower error probability bound compared with previously proposed bounds.
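For concreteness, the standard measurement model underlying these results can be written as follows (notation assumed for illustration, not taken verbatim from the paper):

```latex
y = Ax + w,\qquad A\in\mathbb{R}^{n\times p},\qquad
w\sim\mathcal{N}\!\left(0,\sigma^{2}I_{n}\right),
```

where x has at most k nonzero entries whose smallest magnitude is x_min, and the task is to recover the support S = {i : x_i \neq 0} from (y, A); the derived sufficient condition on the number of measurements n is then expressed in terms of the noise variance sigma^2, x_min, and the sparsity level k.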
H.3.12. Distributed Artificial Intelligence
M. Rezaei; V. Derhami
Abstract
Nonnegative Matrix Factorization (NMF) algorithms have been utilized in a wide range of real applications. NMF has been used by several researchers because of its part-based representation property, especially in the facial expression recognition problem. It decomposes a face image into its essential parts (e.g., nose, lips, etc.), but in all previous attempts it has been neglected that not all features obtained by NMF are needed for the recognition problem; for example, some facial parts do not carry any useful information for facial expression recognition. To address this challenge of defining and calculating the contribution of each part, the Shapley value is used. It is applied to identify the contribution of each feature to the classification problem; then, the less effective features are removed. Experiments on the JAFFE dataset and the MUG Facial Expression Database, as benchmark facial expression datasets, demonstrate the effectiveness of our approach.
Alireza Khosravi; Alireza Alfi; Amir Roshandel
Abstract
There are two significant goals in teleoperation systems: stability and performance. This paper introduces an LMI-based robust control method for bilateral transparent teleoperation systems in the presence of model mismatch. The uncertainties in the communication-channel time delay, the task environment, and the model parameters of the master and slave systems are called model mismatch. The time delay in the communication channel is assumed to be large, unknown, and asymmetric, but the upper bound of the delay is assumed to be known. The proposed method consists of two local controllers. One, the local slave controller, is located at the remote site to control motion tracking; the other, the local master controller, is located at the local site to preserve complete transparency by ensuring force tracking and the robust stability of the closed-loop system. To reduce the peak amplitude of the output signal with respect to the peak amplitude of the input signal at the slave site, the local slave controller is designed as a bounded peak-to-peak gain controller. In order to provide a realistic case, an external signal representing force sensor noise is also considered. Simulation results show the effectiveness of the proposed control structure.