D.4. Data Encryption
H. Khodadadi; A. Zandvakili
Abstract
This paper presents a new method for encryption of color images based on a combination of chaotic systems, which makes the image encryption more efficient and robust. The proposed algorithm generates three series of data, each ranging between 0 and 255, using a chaotic Chen system. Another Chen system is then started with different initial values, and its output is converted into three series of numbers from 0 to 10. The red, green, and blue values of the first pixel are combined with three values of the first Chen system to encrypt that pixel, while values of the second Chen system are used to distort the order in which the values of the first Chen system are combined with the pixels of the image. The process is repeated until all pixels of the image are encrypted. The innovative aspect of this method lies in the combination of the two chaotic systems, which makes the encryption process more complicated. Tests performed on standard images (USC datasets) indicate the effectiveness and robustness of this encryption method.
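A minimal sketch of the byte-series generation and per-pixel combination described above (the Euler step size, the state-to-byte mapping, and the XOR combination rule are illustrative assumptions, not the authors' exact construction):

```python
def chen_series(x, y, z, n, a=35.0, b=3.0, c=28.0, dt=0.001):
    """Iterate the Chen system with a simple Euler step and map each
    state to a triple of bytes in [0, 255]. Step size and byte mapping
    are illustrative choices, not the paper's exact construction."""
    out = []
    for _ in range(n):
        dx = a * (y - x)
        dy = (c - a) * x - x * z + c * y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        # fold each coordinate into the 0..255 range
        out.append(tuple(int(abs(v) * 1e6) % 256 for v in (x, y, z)))
    return out

def encrypt_pixel(rgb, key_bytes):
    # XOR each channel with one chaotic byte (one plausible combination rule)
    return tuple(p ^ k for p, k in zip(rgb, key_bytes))
```

Because XOR is its own inverse, applying `encrypt_pixel` twice with the same key bytes recovers the original pixel.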
H.5.7. Segmentation
V. Naghashi; Sh. Lotfi
Abstract
Image segmentation is a fundamental step in many image processing applications. In most cases, the pixels of an image are clustered based only on their intensity or color information, and neither spatial nor neighborhood information is used in the clustering process. Including the spatial information of pixels improves the quality of image segmentation, and using the information of neighboring pixels enhances the accuracy of segmentation. In this paper, the idea of combining the K-means algorithm and the Improved Imperialist Competitive Algorithm is proposed. Before applying the hybrid algorithm, a new image is created, and the hybrid algorithm is then employed. Finally, a simple post-processing step is applied to the clustered image. Comparing the results of the proposed method with those of other methods on different images shows that, in most cases, the accuracy of the NLICA algorithm is better than that of the other methods.
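As a rough illustration of how spatial information can enter pixel clustering, the following sketch runs plain K-means on (intensity, row, column) feature vectors; it is a stand-in for the paper's K-means + Improved Imperialist Competitive hybrid, with a deterministic initialization chosen for reproducibility:

```python
import numpy as np

def segment_with_spatial_kmeans(img, k=2, n_iter=20):
    """Plain K-means on (intensity, row, col) feature vectors: a minimal
    stand-in for the paper's hybrid algorithm that shows how spatial
    coordinates can enter the clustering. Evenly spaced deterministic
    initialization is an illustrative choice."""
    h, w = img.shape
    rows, cols = np.indices((h, w))
    feats = np.stack([img.ravel(), rows.ravel(), cols.ravel()], axis=1).astype(float)
    idx = np.linspace(0, len(feats) - 1, k).astype(int)
    centers = feats[idx].copy()
    for _ in range(n_iter):
        # assign each pixel to the nearest center in feature space
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```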
B.3. Communication/Networking and Information Technology
M. Zahedi; A. Arjomandzadeh
Abstract
In English, multi-part words are hyphenated, with the hyphen separating the parts. The Persian language contains multi-part words as well. Based on Persian morphology, a half-space character is needed to separate the parts of multi-part words, but in many cases people incorrectly use the space character instead. This common misuse of the space character leads to serious issues in Persian text processing and text readability. To cope with these issues, this work proposes a new model to correct spacing in multi-part words. The proposed method is based on the statistical machine translation paradigm, in which text in a source language is translated into text in a destination language on the basis of statistical models whose parameters are derived from the analysis of bilingual text corpora. The proposed method applies statistical machine translation techniques, treating unedited multi-part words as the source language and space-edited multi-part words as the destination language. The results show that the proposed method can edit and improve the spacing of Persian multi-part words with a statistically significant accuracy rate.
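A toy illustration of the correction task itself (not the paper's statistical machine translation model): given a small dictionary of known multi-part words, spaces between their parts are replaced with the half-space character, i.e., the zero-width non-joiner U+200C:

```python
ZWNJ = "\u200c"  # Persian half-space (zero-width non-joiner)

# Toy dictionary of part pairs that must be joined with a half-space; a
# real system would learn such mappings from corpora, as the paper does
# with statistical machine translation.
MULTI_PART = {("می", "رود"), ("کتاب", "ها")}

def fix_spacing(text):
    """Replace an ordinary space with a half-space wherever two adjacent
    tokens form a known multi-part word."""
    words = text.split(" ")
    out = [words[0]]
    for w in words[1:]:
        if (out[-1].split(ZWNJ)[-1], w) in MULTI_PART:
            out[-1] = out[-1] + ZWNJ + w
        else:
            out.append(w)
    return " ".join(out)
```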
H.5. Image Processing and Computer Vision
H. Khodadadi; O. Mirzaei
Abstract
In this paper, a new method is presented for encryption of colored images. This method is based on a stack data structure and chaos, which make the image encryption algorithm more efficient and robust. In the proposed algorithm, a series of data ranging between 0 and 3 is generated using a chaotic logistic system. The original image is divided into four sub-images, and these sub-images are pushed into the stack in an order determined by the next number in the series. In the next step, the first element of the stack (one of the four sub-images) is popped and divided into four further parts. Then, based on the next number in the series, these four sub-images are pushed into the stack again. This procedure is repeated until the stack is empty. During this process, each pixel unit is encrypted using another series of chaotic numbers (generated by a Chen chaotic system), and the method is repeated until all pixels of the plain image are encrypted. Finally, extensive simulations on the well-known USC datasets have been conducted to show the efficiency of this encryption algorithm. The tests performed show that the proposed method has a very large key space and a high-entropy distribution. Consequently, it outperforms competing algorithms in terms of security.
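The stack-driven subdivision can be sketched as follows; the traversal below visits every pixel exactly once in an order controlled by the chaotic series, while the per-pixel encryption step (XOR with a Chen-system byte stream, per the abstract) is omitted for brevity:

```python
def subdivision_order(size, series):
    """Traverse an image of side `size` (a power of two) quadrant by
    quadrant with an explicit stack; each number in `series` (0..3)
    rotates the push order of the four sub-images, so the pixel visit
    order depends on the chaotic sequence."""
    it = iter(series)
    stack = [(0, 0, size)]
    order = []
    while stack:
        x, y, s = stack.pop()
        if s == 1:
            order.append((x, y))          # reached a single pixel
            continue
        h = s // 2
        quads = [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
        r = next(it) % 4                  # chaotic value picks the rotation
        stack.extend(quads[r:] + quads[:r])
    return order
```

Whatever the series, every pixel is visited exactly once; the series only scrambles the order, which is what the encryption stage exploits.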
Hossein Shahamat; Ali A. Pouyan
Abstract
In this paper, we propose a new method for classification of subjects into schizophrenia and control groups using functional magnetic resonance imaging (fMRI) data. In the preprocessing step, the number of fMRI time points is reduced using principal component analysis (PCA). Then, independent component analysis (ICA) is used for further data analysis; it estimates the independent components (ICs) of the PCA results. For feature extraction, the local binary patterns (LBP) technique is applied to the ICs, transforming them into spatial histograms of LBP values. For feature selection, a genetic algorithm (GA) is used to obtain a set of features with large discrimination power. In the next step of feature selection, linear discriminant analysis (LDA) is applied to further extract features that maximize the ratio of between-class to within-class variability. Finally, a test subject is classified into the schizophrenia or control group using a Euclidean distance based classifier and a majority vote method. A leave-one-out cross validation method is used for performance evaluation. Experimental results show that the proposed method has an acceptable accuracy.
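Of the pipeline stages above, the LBP feature-extraction step is the most self-contained; a minimal version for a single 2-D component might look like this (the 8-neighbour, 256-bin formulation is a common default and an assumption here):

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary patterns of the interior pixels,
    returned as a 256-bin histogram -- a minimal version of the
    feature-extraction step the paper applies to each IC."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dr, dc) in enumerate(offsets):
        # compare each neighbour with the centre pixel and set one bit
        neigh = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        codes |= (neigh >= img[1:-1, 1:-1]).astype(int) << bit
    return np.bincount(codes.ravel(), minlength=256)
```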
H.6.3.1. Classifier design and evaluation
Z. Mirzamomen; Kh. Ghafooripour
Abstract
Multi-label classification has many applications in text categorization, biology, and medical diagnosis, in which multiple class labels can be assigned to each training instance simultaneously. As it is often the case that there are relationships between the labels, extracting the existing relationships between the labels and taking advantage of them during the training or prediction phases can bring about significant improvements. In this paper, we introduce positive, negative, and hybrid relationships between the class labels for the first time, and we propose a method to extract these relations for a multi-label classification task and, consequently, to use them to improve the predictions made by a multi-label classifier. We have conducted extensive experiments to assess the effectiveness of the proposed method. The obtained results advocate the merits of the proposed method in improving multi-label classification results.
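One simple way to make the notion of positive and negative label relationships concrete is to threshold conditional probabilities estimated from a binary label matrix; the thresholds below are illustrative, and the paper's extraction procedure is more elaborate:

```python
import numpy as np

def label_relations(Y, pos=0.7, neg=0.3):
    """From a binary label matrix Y (instances x labels), estimate
    P(label j | label i) and flag pairs as 'positive' or 'negative'
    relationships. The probability thresholds are illustrative."""
    n_labels = Y.shape[1]
    rels = {}
    for i in range(n_labels):
        if Y[:, i].sum() == 0:
            continue                      # label i never occurs
        for j in range(n_labels):
            if i == j:
                continue
            # fraction of instances with label i that also carry label j
            p = Y[Y[:, i] == 1][:, j].mean()
            if p >= pos:
                rels[(i, j)] = "positive"
            elif p <= neg:
                rels[(i, j)] = "negative"
    return rels
```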
H.6.5.13. Signal processing
F. Sabahi
Abstract
Frequency control is one of the key elements in managing the performance of a microgrid (MG) system. Theoretically, model-based controllers may be the ideal control mechanisms; however, they are highly sensitive to model uncertainties and have difficulty preserving robustness. The presence of serious disturbances, the increasing number of MGs, varying voltage supplies of MGs, and both the independent operation of MGs and their interaction with the main grid make the design of model-based frequency controllers for MGs inherently challenging. This paper proposes an approach that takes advantage of interval Type-II fuzzy logic for modeling an MG system in the process of its robust H∞ frequency control. Specifically, the main contribution of this paper is that the parameters of the MG system are modeled by an interval Type-II fuzzy system (IT2FS) while the MG simultaneously deals with perturbation using the H∞ index to control its frequency. The performance of the microgrid equipped with the proposed modeling and controller is then compared with that of other controllers, such as H2 and μ-synthesis, during changes in the microgrid parameters and under perturbations. The comparison shows the superiority and effectiveness of the proposed approach in terms of robustness against uncertainties in the modeling parameters and perturbations.
I. Computer Applications
M. Fateh; E. Kabir
Abstract
In this paper, we present a method for color reduction of Persian carpet cartoons that increases both the speed and accuracy of editing. Carpet cartoons fall into two categories, machine-printed and hand-drawn, and hand-drawn cartoons are divided into two groups: before and after discretization. The purpose of this study is color reduction of hand-drawn cartoons before discretization. The proposed algorithm consists of the following steps: image segmentation, finding the color of each region, color reduction around the edges, and final color reduction with C-means. The proposed method requires knowing the desired number of colors in each cartoon. In this method, the number of colors is not reduced to more than about 1.3 times the desired number. Automatic color reduction is done in such a way that the final manual editing needed to reach the desired colors is very easy.
F. Solaimannouri; M. Haddad zarif; M. M. Fateh
Abstract
This paper presents the design of an optimal adaptive controller for tracking control of robot manipulators based on the particle swarm optimization (PSO) algorithm. The PSO algorithm is employed to optimize the parameters of the controller and hence to minimize the integral square of errors (ISE) as a performance criterion. In this paper, an improved PSO using fuzzy logic is proposed to increase the convergence speed. The performance of PSO variants such as improved PSO (IPSO), improved PSO using fuzzy logic (F-PSO), PSO with a linearly decreasing inertia weight (LDW-PSO), and PSO with a nonlinearly decreasing inertia weight (NDW-PSO) is compared in terms of parameter accuracy and convergence speed. The simulation results show that the F-PSO approach achieves better performance in the tracking control of robot manipulators than the other algorithms.
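A compact sketch of the underlying idea, PSO searching for a controller gain that minimizes an ISE cost, is shown below for a toy first-order plant standing in for the robot manipulator; all plant and swarm parameters are illustrative:

```python
import random

def ise(gain, setpoint=1.0, dt=0.01, steps=200):
    """Integral of squared tracking error for a toy first-order plant
    x' = gain * (r - x), standing in for the manipulator dynamics."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - x
        cost += err * err * dt
        x += dt * gain * err
    return cost

def pso_minimize(f, lo, hi, n=10, iters=40, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Basic PSO with a fixed inertia weight over a 1-D bounded search space."""
    random.seed(seed)
    pos = [random.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    best = pos[:]                         # per-particle best positions
    gbest = min(pos, key=f)               # swarm-best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (best[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
            if f(pos[i]) < f(gbest):
                gbest = pos[i]
    return gbest
```

The fuzzy-logic variant in the paper adapts the inertia weight `w` during the run instead of keeping it fixed.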
C. Software/Software Engineering
D. Darabian; H. Marvi; M. Sharif Noughabi
Abstract
The Mel-frequency cepstral coefficients (MFCCs) are the most widely used features in speech recognition, but they are very sensitive to noise. In this paper, to achieve satisfactory performance in Automatic Speech Recognition (ASR) applications, we introduce a noise-robust new set of MFCC vectors estimated through the following steps. First, spectral mean normalization is applied as a pre-processing step to the noisy original speech signal. The pre-emphasized speech is segmented into overlapping time frames and windowed by a modified Hamming window. Higher-order autocorrelation coefficients are then extracted, and the lower-order autocorrelation coefficients are eliminated. The result is passed through an FFT block, and the power spectrum of the output is calculated. A Gaussian-shaped filter bank is applied to the result, followed by a logarithm block and two compensator blocks, one performing mean subtraction and the other a root operation; a DCT transformation is the last step. We use an MLP neural network to evaluate the performance of the proposed MFCC method and to classify the results. Speech recognition experiments on various tasks indicate that the proposed algorithm is more robust than traditional ones in noisy conditions.
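The first stages of the pipeline can be sketched as follows (frame length, hop, pre-emphasis coefficient, and the number of discarded low-order lags are illustrative choices; the filter bank, compensators, and DCT are omitted):

```python
import numpy as np

def robust_frontend(signal, frame_len=64, hop=32, drop=4):
    """Early stages of the described pipeline: mean normalization,
    pre-emphasis, Hamming-windowed framing, autocorrelation with the
    low-order lags (where stationary noise concentrates) discarded,
    then an FFT magnitude spectrum per frame."""
    s = signal - signal.mean()                    # spectral mean normalization
    s = np.append(s[0], s[1:] - 0.97 * s[:-1])    # pre-emphasis
    win = np.hamming(frame_len)
    spectra = []
    for start in range(0, len(s) - frame_len + 1, hop):
        frame = s[start:start + frame_len] * win
        # one-sided autocorrelation of the frame
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        ac[:drop] = 0.0                           # eliminate low-order lags
        spectra.append(np.abs(np.fft.rfft(ac)))
    return np.array(spectra)
```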
C. Software/Software Engineering
S. Beiranvand; M.A. Z.Chahooki
Abstract
Software project management is one of the significant activities in the software development process. Software Development Effort Estimation (SDEE) is a challenging task in software project management. SDEE is an old activity in the computer industry, dating from the 1940s, and has been reviewed several times. An SDEE model is appropriate if it provides both accuracy and confidence before the software project contract. Due to the uncertain nature of development estimates, and in order to increase accuracy, researchers have recently focused on machine learning techniques. Choosing the most effective features is crucial to achieving higher accuracy in machine learning. In this paper, to narrow the semantic gap in SDEE, a hierarchical method of filter and wrapper Feature Selection (FS) techniques and a fused measurement criterion are developed in a two-phase approach. In the first phase, two-stage filter FS methods provide starting sets for the wrapper FS techniques. In the second phase, a fused criterion is proposed for measuring accuracy in the wrapper FS techniques. Experimental results show the validity and efficiency of the proposed approach for SDEE over a variety of standard datasets.
C.3. Software Engineering
N. Rezaee; H. Momeni
Abstract
Model checking is an automatic technique for software verification in which all reachable states are generated from an initial state in order to find errors and desirable patterns. In the model checking approach, the behavior and structure of the system should be modeled. A graph transformation system is a graphical formal modeling language used to specify and model the system. However, modeling large systems with a graph transformation system suffers from the state space explosion problem, which usually requires huge amounts of computational resources. In this paper, we propose a hybrid meta-heuristic approach to deal with this search problem in graph transformation systems, because meta-heuristic algorithms are efficient at traversing the state graphs of large systems. Our approach, using Artificial Bee Colony and Simulated Annealing, replaces full state space generation by producing only part of the state space, checking safety, and finding errors (e.g., deadlock). The experimental results show that our proposed approach is more efficient and accurate than other approaches.
H.4.6. Computational Geometry and Object Modeling
A. Mousavi; A. Sheikh Mohammad Zadeh; M. Akbari; A. Hunter
Abstract
Mobile technologies have deployed a variety of Internet-based services via location-based services. The adoption of these services by users has led to mammoth amounts of trajectory data. To use these services effectively, analysis of such data across different application domains is required in order to identify the activities that users might need to perform in different places. Researchers from different communities have developed models and techniques to extract activity types from such data, but they have mainly focused on the geometric properties of trajectories and do not consider the semantic aspect of moving objects. This work proposes a new ontology-based approach for recognizing human activity from GPS data in order to understand and interpret mobility data. The performance of the approach was tested and evaluated using a dataset acquired by a user over a year within the urban area of the City of Calgary in 2010. It was observed that the accuracy of the results was related to the availability of points of interest around the places where the user had stopped. Moreover, an evaluation experiment revealed the effectiveness of the proposed method, with a 50% performance improvement and an O(n) complexity trend.
H.6.2.2. Fuzzy set
Sh. Asadi; Seyed M. b. Jafari; Z. Shokrollahi
Abstract
Each semester, students go through the process of selecting appropriate courses. It is difficult to find information about each course and ultimately make decisions. The objective of this paper is to design a course recommender model that takes student characteristics into account to recommend appropriate courses. The model uses clustering to identify students with similar interests and skills. Once similar students are found, dependencies between their course selections are examined using fuzzy association rule mining. The application of clustering and fuzzy association rules results in appropriate recommendations and a predicted score. In this study, a collection of data on undergraduate students at the Management and Accounting Faculty of the College of Farabi at the University of Tehran, with records from 2004 to 2015, is used. The students are divided into two clusters according to educational background and demographics. Finally, recommended courses and predicted scores are given to the students. The mined rules facilitate decision-making regarding course selection.
D. Data
M. Abdar; M. Zomorodi-Moghadam
Abstract
In this paper, the accuracy of two important machine learning algorithms, SVM and Bayesian Network, is investigated for the diagnosis of Parkinson's disease. We use Parkinson's disease data from the University of California, Irvine (UCI) repository. In order to optimize the SVM algorithm, different kernel functions and C parameters have been used, and our results show that SVM with a C parameter (C-SVM) and a polynomial kernel function, with an average accuracy of 99.18% in the testing step, performs better than the other kernel functions, such as RBF and sigmoid, as well as the Bayesian Network algorithm. It is also shown that the ten most important factors in the SVM algorithm are Jitter (Abs), Subject #, RPDE, PPE, Age, NHR, Shimmer APQ 11, Total-UPDRS, Shimmer (dB), and Shimmer. We also show that the accuracy of the proposed C-SVM and RBF approaches is directly proportional to the value of the C parameter, such that increasing C increases the accuracy of both kernel functions; unlike the polynomial and RBF kernels, however, the sigmoid kernel has an inverse relation with the value of C. Using these methods, we can find the most effective factors common to both genders (male and female). To the best of our knowledge, there is no prior study on Parkinson's disease that identifies the most effective factors common to both genders.
M. Zeynali; H. Seyedarabi; B. Mozaffari Tazehkand
Abstract
Network security is very important when sending confidential data through a network. Cryptography is the science of hiding information, and combining cryptographic solutions with cognitive science has started a new branch, called cognitive cryptography, that guarantees the confidentiality and integrity of data. Brain signals, as a biometric indicator, can be converted into a binary code usable as a cryptographic key. This paper proposes a new method for decreasing the error of the EEG-based key generation process. The Discrete Fourier Transform, Discrete Wavelet Transform, autoregressive modeling, energy entropy, and sample entropy were used to extract features. All features are used as the input of the new method, based on a window segmentation protocol, and are then converted into binary mode. We obtain 0.76% and 0.48% mean Half Total Error Rate (HTER) for the 18-channel and single-channel cryptographic key generation systems, respectively.
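A much-simplified picture of the binarization step, quantizing a feature vector against reference values and comparing the resulting keys bit by bit, is sketched below; the paper's window-segmentation protocol and channel handling are omitted, and the thresholding rule is an assumption:

```python
def features_to_key(features, reference):
    """Quantize feature values into a binary code by thresholding each
    feature against a reference value (e.g. an enrolment median).
    A minimal, assumed binarization rule -- not the paper's protocol."""
    return "".join("1" if f >= r else "0" for f, r in zip(features, reference))

def hamming_error(key_a, key_b):
    """Fraction of disagreeing bits between two generated keys -- the
    kind of quantity error rates such as HTER are built from."""
    return sum(a != b for a, b in zip(key_a, key_b)) / len(key_a)
```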
G.3.2. Logical Design
H. Tavakolaee; Gh. Ardeshir; Y. Baleghi
Abstract
Adders, as one of the major components of digital computing systems, have a strong influence on their performance. There are various types of adders, each of which uses a different algorithm to perform addition with a certain delay. In addition to low computational delay, minimizing power consumption is also a main priority in adder circuit design. In this paper, the proposed adder is divided into several sub-blocks, and the circuit of each sub-block is designed based on multiplexers and NOR gates to calculate the output carry, i.e., the input carry of the next sub-block. This method reduces the critical path delay (CPD) and therefore increases the speed of the adder. Simulation and synthesis of the proposed adder are done for the 8, 16, 32, and 64-bit cases, and the results are compared with those of other fast adders. Synthesis results show that the proposed 16- and 32-bit adders have the lowest computation delay and also the best power delay product (PDP) among all recent popular adders.
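The carry computation idea can be modeled at the bit level: a sub-block's carry-out equals its generate signal unless every bit position propagates, in which case a multiplexer steers the block's carry-in through. The sketch below checks this mux-style formulation against ordinary addition; it illustrates the principle only, not the paper's exact NOR/multiplexer circuit:

```python
def ripple_add(a, b, width):
    """Reference bit-level ripple-carry addition (carry-in fixed at 0)."""
    c, s = 0, 0
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s |= (ai ^ bi ^ c) << i
        c = (ai & bi) | (c & (ai ^ bi))
    return s, c

def mux(sel, d0, d1):
    return d1 if sel else d0

def block_carry(a, b, cin, width):
    """Carry-out of a sub-block computed mux-style: the block carry-in is
    steered through only when every bit position propagates; otherwise the
    block's own generate signal decides."""
    p = all(((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width))
    _, g = ripple_add(a, b, width)     # block generate: carry with cin = 0
    return mux(p, g, cin)
```

Because carry-out = G OR (P AND carry-in) holds exactly for any block, the mux form agrees with ordinary addition for all inputs.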
H.5. Image Processing and Computer Vision
A. R. Yamghani; F. Zargari
Abstract
Video abstraction allows searching, browsing, and evaluating videos by accessing only the useful content. Most studies operate in the pixel domain, which requires a decoding process and consumes more time and processing than compressed-domain video abstraction. In this paper, we present a new video abstraction method in the H.264/AVC compressed domain, AVAIF. The method is based on the normalized histogram of I-frame prediction modes extracted from the H.264 standard. Frame similarity is calculated by intersecting the histograms of I-frame prediction modes. Moreover, fuzzy c-means clustering is employed to categorize similar frames and extract key frames. The results show that the proposed method achieves, on average, 85% accuracy and a 22% error rate in compressed-domain video abstraction, which is higher than the other tested methods in the pixel domain. Moreover, on average, it generates video key frames that are closer to human summaries, and it shows robustness to coding parameters.
F.2.7. Optimization
M. Maadi; M. Javidnia; M. Ghasemi
Abstract
Nowadays, due to the inherent complexity of real optimization problems, developing solution algorithms for them has always been a challenging issue. The single row facility layout problem (SRFLP) is an NP-hard problem of arranging a number of rectangular facilities of varying length on one side of a straight line, with the aim of minimizing the weighted sum of the distances between all facility pairs. In this paper, two new algorithms based on cuckoo optimization and forest optimization are applied and compared for solving the SRFLP for the first time. The operators of the two algorithms are adapted to the characteristics of the SRFLP, and the results are compared for two groups of benchmark instances from the literature: instances with fewer than 30 facilities and instances with more than 30. Results on both groups show that the proposed cuckoo optimization based algorithm performs better than the proposed forest optimization based algorithm, both in finding the best solution and in computational time.
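The objective being minimized is easy to state in code: given an arrangement (permutation) of facilities, their centre positions follow from the facility lengths, and the cost is the weighted sum of pairwise centre distances. A direct implementation:

```python
def srflp_cost(perm, lengths, weights):
    """Weighted sum of centre-to-centre distances for facilities arranged
    in order `perm` along a line -- the SRFLP objective. `weights[i][j]`
    is the interaction weight between facilities i and j."""
    # centre position of each facility in the given arrangement
    pos, x = {}, 0.0
    for f in perm:
        pos[f] = x + lengths[f] / 2.0
        x += lengths[f]
    cost = 0.0
    n = len(perm)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = perm[i], perm[j]
            cost += weights[a][b] * abs(pos[a] - pos[b])
    return cost
```

Meta-heuristics such as the cuckoo and forest algorithms then search over permutations for the one with the lowest such cost.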
Timing analysis
Z. Izakian; M. Mesgari
Abstract
With the rapid development of information gathering technologies and access to large amounts of data, we increasingly require methods for analyzing data and extracting useful information from large raw datasets; data mining is an important method for solving this problem. Clustering analysis, the most commonly used function of data mining, has attracted many researchers in computer science. Because of its many applications, the problem of clustering time series data has become highly popular, and many algorithms have been proposed in this field. Recently, Swarm Intelligence (SI), a family of nature-inspired algorithms, has gained huge popularity in the fields of pattern recognition and clustering. In this paper, a technique for clustering time series data using a particle swarm optimization (PSO) approach is proposed, with the Pearson correlation coefficient, one of the most commonly used distance measures for time series, as the distance measure. The proposed technique is able to find (near) optimal cluster centers during the clustering process. To reduce the dimensionality of the search space and improve the performance of the proposed method, a singular value decomposition (SVD) representation of the cluster centers is used. Experimental results on three popular datasets indicate the superiority of the proposed technique over fuzzy C-means and fuzzy K-medoids clustering techniques.
H.6.3.2. Feature evaluation and selection
M. Imani; H. Ghassemian
Abstract
Feature extraction is a very important preprocessing step for the classification of hyperspectral images. The linear discriminant analysis (LDA) method fails to work in small sample size situations. Moreover, LDA has poor efficiency for non-Gaussian data. LDA is optimized by a global criterion; thus, it is not sufficiently flexible to cope with multi-modally distributed data. In this paper, we propose a new feature extraction method that uses boundary semi-labeled samples to solve the small sample size problem. The proposed method, called hybrid feature extraction based on boundary semi-labeled samples (HFE-BSL), uses a hybrid criterion that integrates both local and global criteria for feature extraction; thus, it is robust and flexible. Experimental results with three real hyperspectral images show the good efficiency of HFE-BSL compared to some popular and state-of-the-art feature extraction methods.
G.3.9. Database Applications
M. Shamsollahi; A. Badiee; M. Ghazanfari
Abstract
Heart disease is one of the major causes of morbidity in the world. Currently, large proportions of healthcare data are not processed properly and thus fail to be used effectively for decision making. The risk of heart disease may be predicted by investigating heart disease risk factors coupled with data mining knowledge. This paper presents a model, developed using combined descriptive and predictive data mining techniques, that aims to aid specialists in the healthcare system to effectively predict patients with Coronary Artery Disease (CAD). To achieve this objective, clustering and classification techniques are used. First, the number of clusters is determined using clustering indexes. Next, several types of decision tree methods and an Artificial Neural Network (ANN) are applied to each cluster in order to predict CAD patients. The results obtained show that the C&RT decision tree method performs best on the data used in this study, with an error of 0.074. All data used in this study are real and were collected from a heart clinic database.
H.6.2.2. Fuzzy set
N. Mohammadkarimi; V. Derhami
Abstract
This paper proposes fuzzy modeling from observed data. A fuzzy system is known as a knowledge-based or rule-based system, and its most important part is the rule base. One problem in generating fuzzy rules from training data is inconsistent data. The presence of inconsistent and uncertain states ...
Read More
This paper proposes fuzzy modeling from observed data. A fuzzy system is known as a knowledge-based or rule-based system, and its most important part is the rule base. One problem in generating fuzzy rules from training data is inconsistent data. The presence of inconsistent and uncertain states in the training data causes large modeling errors. Here, a probabilistic fuzzy system is presented to address this challenge. A zero-order Sugeno fuzzy model is used as the fuzzy system structure. First, clustering is used to obtain the number of rules and the input membership functions. A set of candidate values for the consequent parts of the fuzzy rules is then considered. For each training pair, the probabilities of the candidate consequents are updated according to which rules fire and what the output of the pair is. In the next step, the eligibility probability of each candidate consequent is determined for every rule. Finally, using these probabilities, two probable outputs are generated for each input. The experimental results show the superiority of the proposed approach over several well-known approaches, while reducing the number of rules and the system complexity.
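The probabilistic-consequent idea can be illustrated with a toy zero-order Sugeno rule base: when the same input appears with conflicting outputs, probability mass is shifted among candidate consequents instead of forcing one crisp value. The membership functions, candidate set, and update rule below are my own simplifications, not the paper's exact algorithm:

```python
# Toy sketch of probabilistic consequents for a zero-order Sugeno system.
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

rules = [  # each rule: membership params and {candidate consequent: probability}
    {"mf": (0.0, 0.25, 0.5), "p": {0.0: 0.5, 1.0: 0.5}},
    {"mf": (0.5, 0.75, 1.0), "p": {0.0: 0.5, 1.0: 0.5}},
]

def update(x, y_target, lr=0.2):
    """Shift probability mass toward the candidate closest to the target
    output, weighted by how strongly each rule fires on x."""
    for r in rules:
        w = tri(x, *r["mf"])
        if w == 0.0:
            continue
        best = min(r["p"], key=lambda c: abs(c - y_target))
        for c in r["p"]:
            r["p"][c] += lr * w * (1.0 if c == best else -1.0 / (len(r["p"]) - 1))
        total = sum(r["p"].values())
        r["p"] = {c: v / total for c, v in r["p"].items()}

# Inconsistent data: the same input paired with two different outputs.
for x, y in [(0.25, 1.0), (0.25, 0.0), (0.25, 1.0)]:
    update(x, y)

probs = rules[0]["p"]
print(probs[1.0] > probs[0.0])  # True: output 1.0 was seen more often at x=0.25
```

Instead of one crisp consequent, each rule keeps a distribution over candidates, which is how the inconsistency in the training pairs is absorbed rather than amplified.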
F.2.7. Optimization
R. Roustaei; F. Yousefi Fakhr
Abstract
Humans have always sought the best in all things, and this perfectionism has led to the creation of optimization methods. The goal of optimization is to determine the variables and find the best acceptable answer, subject to the constraints of the problem, so that the objective function is minimized or ...
Read More
Humans have always sought the best in all things, and this perfectionism has led to the creation of optimization methods. The goal of optimization is to determine the variables and find the best acceptable answer, subject to the constraints of the problem, so that the objective function is minimized or maximized. One class of approximate optimization methods is the meta-heuristics, which are usually inspired by nature and search for the optimal solution. In recent years, much effort has been devoted to improving or creating meta-heuristic algorithms, and one way to improve meta-heuristic methods is hybridization. In this paper, a hybrid optimization algorithm based on the imperialist competitive algorithm is presented. The ideas used are an assimilation operation with a variable parameter, and a war function based on a mathematical model of war in the real world. These changes increase the speed of finding the global optimum and reduce the number of search steps compared with other meta-heuristics. In the evaluations, in more than 80% of the test cases the proposed algorithm was superior to the Imperialist Competitive Algorithm, the Social Based Algorithm, the Cuckoo Optimization Algorithm, and the Genetic Algorithm.
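The assimilation move with a variable parameter can be gestured at in a few lines: each colony steps a random fraction toward its imperialist, and the step-size coefficient beta is varied over iterations. The decay schedule and all numbers below are my assumptions, not taken from the paper:

```python
import random

random.seed(42)

def assimilate(colony, imperialist, beta):
    """Move each coordinate of a colony a random fraction (uniform in
    [0, beta]) of the way toward its imperialist."""
    return [c + random.uniform(0.0, beta) * (i - c)
            for c, i in zip(colony, imperialist)]

imperialist = [0.0, 0.0]          # hypothetical best-known solution
colony = [10.0, -10.0]            # a colony far from it
beta = 2.0                        # assimilation coefficient
for _ in range(50):
    colony = assimilate(colony, imperialist, beta)
    beta *= 0.95                  # the "variable parameter" idea, assumed here

dist = sum(abs(c - i) for c, i in zip(colony, imperialist))
print(dist < 1.0)  # True: the colony has contracted onto the imperialist
```

Starting beta above 1 lets colonies overshoot the imperialist early on (exploration), while the decay pulls later steps into pure contraction (exploitation).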
Farzaneh Zahedi; Mohammad-Reza Zare-Mirakabad
Abstract
Drug addiction is a major social, economic, and health challenge that affects the whole community and needs serious attention. Available treatments are successful only in the short term unless the underlying reasons that make individuals prone to the phenomenon are investigated. Nowadays, there are treatment ...
Read More
Drug addiction is a major social, economic, and health challenge that affects the whole community and needs serious attention. Available treatments are successful only in the short term unless the underlying reasons that make individuals prone to the phenomenon are investigated. Nowadays, there are treatment centers that hold comprehensive information about addicted people. Given these huge data sources, data mining can be used to explore the knowledge implicit in them, and the results can be employed as the knowledge base of decision support systems for decisions regarding addiction prevention and treatment. We studied 471 participants of such clinics, of whom 86.2% were male and 13.8% were female. The study aimed to extract rules from the collected data using association models. The results can be used by rehab clinics to provide more knowledge about the relationships between various parameters and to help them deliver better and more effective treatments. For example, according to the findings of the study, there are relationships between individual characteristics and LSD abuse; between individual characteristics, the kind of narcotics taken, and committing crimes; and between a family history of drug addiction and a family member's drug addiction.
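The support/confidence machinery behind such association rules can be shown on a toy transaction set; the records and attribute names below are synthetic, not the clinic's data:

```python
# Toy association-rule metrics over transaction-style patient records.
# Each record is the set of attributes observed for one (synthetic) patient.
records = [
    {"male", "family_history", "LSD"},
    {"male", "family_history", "LSD"},
    {"male", "opiates"},
    {"female", "family_history"},
    {"male", "family_history", "LSD"},
]

def support(itemset):
    """Fraction of records containing every item in the set."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the records."""
    return support(antecedent | consequent) / support(antecedent)

rule_conf = confidence({"male", "family_history"}, {"LSD"})
print(rule_conf)  # 1.0: in this toy data, every male with a family history also shows LSD use
```

An association model such as Apriori enumerates all itemsets whose support clears a threshold and keeps the rules whose confidence clears another; the functions above are the two measures it optimizes over.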