Original/Review Paper
H.6.3.2. Feature evaluation and selection
Sh. Kashef; H. Nezamabadi-pour
Abstract
Multi-label classification has gained significant attention in recent years due to the increasing number of modern applications involving multi-label data. Despite its relatively short history, various approaches have been proposed for the multi-label classification task. LIFT is a multi-label classifier that takes a new approach to multi-label learning by leveraging label-specific features. Label-specific features means that each class label is assumed to have its own characteristics, determined by the features that are most discriminative for that label. LIFT employs clustering to discover the properties of the data: for each label, it divides the training instances into positive and negative groups, consisting of the training examples with and without that label, respectively. It then selects representative centroids among the positive and negative instances of each label by k-means clustering and replaces the original features of a sample with its distances to these representatives. By constructing these new features, the dimensionality of the new space is reduced significantly. However, the original features are still needed to construct the new ones, so in practice the complexity of the multi-label classification process does not diminish. In this paper, we modify LIFT to reduce the computational burden of the classifier while improving, or at least preserving, its performance. The experimental results show that the proposed algorithm achieves both goals simultaneously.
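The feature mapping described in this abstract can be sketched as follows. This is a minimal illustration, not LIFT itself: it uses a single centroid per side (the mean of the positive and of the negative instances of one label) in place of LIFT's full k-means step, and all data values are invented for the example.

```python
import math

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def lift_features(X, y):
    """Map each instance to its distances from the positive and negative
    centroids of one label (a k = 1 simplification of LIFT's clustering)."""
    pos = [x for x, lab in zip(X, y) if lab == 1]
    neg = [x for x, lab in zip(X, y) if lab == 0]
    cp, cn = centroid(pos), centroid(neg)
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return [[dist(x, cp), dist(x, cn)] for x in X]

X = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.9, 5.0]]
y = [0, 0, 1, 1]
Z = lift_features(X, y)  # each row: [dist to positive centroid, dist to negative centroid]
```

Note how the new space has only two dimensions per label regardless of the original dimensionality, yet computing it still touches every original feature, which is the complexity issue the paper addresses.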
Original/Review Paper
H.6.2.2. Fuzzy set
N. Moradkhani; M. Teshnehlab
Abstract
The cement rotary kiln is the main part of the cement production process and has always attracted researchers' attention. However, this complex nonlinear system has not yet been modeled efficiently enough to perform well, especially under noisy conditions. In this paper, a Takagi-Sugeno neuro-fuzzy system (TSNFS) is used to identify the cement rotary kiln, and the gradient descent (GD) algorithm is applied to tune the parameters of the antecedent and consequent parts of the fuzzy rules. In addition, the optimal inputs of the system are selected by a genetic algorithm (GA) to reduce the complexity of the fuzzy system. Data from the Saveh White Cement (SWC) factory are used in the simulations. The results demonstrate that the proposed identifier performs better than the neural and fuzzy models presented earlier for the same data. Furthermore, in this paper the TSNFS is evaluated under noisy conditions, which had not been examined in previous related research. Simulations show that the model performs well under different noise conditions.
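The forward pass of a first-order Takagi-Sugeno system can be sketched as below. This shows only the inference step for a scalar input with two invented rules; the paper's GD tuning of the antecedent (center, width) and consequent (linear coefficient) parameters, and the GA input selection, are not shown.

```python
import math

def ts_infer(x, rules):
    """First-order Takagi-Sugeno inference for a scalar input x.
    Each rule is (center, sigma, a, b): the firing strength is a Gaussian
    membership of x, and the consequent is the linear function a*x + b."""
    weights = [math.exp(-((x - c) ** 2) / (2 * s ** 2)) for c, s, _, _ in rules]
    outputs = [a * x + b for _, _, a, b in rules]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total

# Two illustrative rules covering "low" and "high" regions of the input.
rules = [(0.0, 1.0, 1.0, 0.0),   # near 0: y ≈ x
         (4.0, 1.0, -1.0, 8.0)]  # near 4: y ≈ 8 - x
y0 = ts_infer(0.0, rules)
y4 = ts_infer(4.0, rules)
```

Gradient descent would differentiate this weighted average with respect to each rule parameter; the smooth Gaussian memberships make all parameters trainable.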
Original/Review Paper
D.4. Data Encryption
H. Khodadadi; A. Zandvakili
Abstract
This paper presents a new method for encrypting color images based on a combination of chaotic systems, which makes the image encryption more efficient and robust. The proposed algorithm generates three series of data, ranging between 0 and 255, using a chaotic Chen system. A second Chen system is then started with different initial values, whose outputs are converted into three series of numbers from 0 to 10. The red, green, and blue values of pixel 1 of the image are combined with the three values of the first Chen system to encrypt that pixel, while the values of the second Chen system are used to scramble the order in which the values of the first Chen system are combined with the pixels of the image. The process is repeated until all pixels of the image are encrypted. The innovative aspect of this method is the combination of the two chaotic systems, which makes the encryption process more complicated. Tests performed on standard images (the USC datasets) indicate the effectiveness and robustness of this encryption method.
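The general idea of masking pixel bytes with a stream derived from a Chen attractor can be sketched as follows. This is an illustrative simplification, not the paper's scheme: it uses one Chen system and a plain XOR combination, with invented initial conditions, a toy "image", and Euler integration.

```python
def chen_stream(n, x=0.1, y=0.3, z=0.5, a=35.0, b=3.0, c=28.0, dt=0.001):
    """Generate n key bytes by Euler-integrating the Chen system
    (dx = a(y-x), dy = (c-a)x - xz + cy, dz = xy - bz) and quantizing
    the x state into the range 0..255."""
    out = []
    for _ in range(n):
        dx = a * (y - x)
        dy = (c - a) * x - x * z + c * y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out.append(int(abs(x) * 1e6) % 256)
    return out

pixels = [12, 200, 45, 99, 0, 255]               # a toy "image"
key = chen_stream(len(pixels))
cipher = [p ^ k for p, k in zip(pixels, key)]    # encrypt
plain = [cp ^ k for cp, k in zip(cipher, key)]   # XOR is its own inverse
```

Because the chaotic trajectory is extremely sensitive to its initial values, those values act as the secret key: the same stream, and hence the plaintext, can only be regenerated from the exact initial conditions.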
Original/Review Paper
G.3.2. Logical Design
H. Tavakolaee; Gh. Ardeshir; Y. Baleghi
Abstract
Adders, as one of the major components of digital computing systems, have a strong influence on their performance. There are various types of adders, each of which uses a different algorithm to perform addition with a certain delay. In addition to low computational delay, minimizing power consumption is also a main priority in adder circuit design. In this paper, the proposed adder is divided into several sub-blocks, and the circuit of each sub-block is designed based on multiplexers and NOR gates to calculate the output carry, i.e., the input carry of the next sub-block. This method reduces the critical path delay (CPD) and therefore increases the speed of the adder. Simulation and synthesis of the proposed adder are carried out for the 8-, 16-, 32-, and 64-bit cases, and the results are compared with those of other fast adders. The synthesis results show that the proposed 16- and 32-bit adders have the lowest computational delay and also the best power-delay product (PDP) among all recent popular adders.
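The sub-block carry scheme can be illustrated with a behavioral model in the carry-select style: each sub-block's carry-out is precomputed for both possible input carries, and a multiplexer picks the correct one once the real carry arrives. This is a sketch of the general principle, not the paper's MUX/NOR gate-level circuit.

```python
def block_carry(a_bits, b_bits, cin):
    """Ripple the carry through one sub-block (bits given LSB first)."""
    c = cin
    for a, b in zip(a_bits, b_bits):
        c = (a & b) | (c & (a ^ b))  # carry = generate OR (carry AND propagate)
    return c

def mux_select_carry(a_bits, b_bits, cin):
    """Carry-select style: precompute the block's carry-out for both
    possible input carries, then a 2:1 multiplexer picks one with cin."""
    c0 = block_carry(a_bits, b_bits, 0)
    c1 = block_carry(a_bits, b_bits, 1)
    return c1 if cin else c0         # the MUX on the critical path

# 4-bit sub-block, LSB first: a = 11 (0b1011), b = 5 (0b0101)
a = [1, 1, 0, 1]
b = [1, 0, 1, 0]
carry0 = mux_select_carry(a, b, 0)   # carry-out when cin = 0
carry1 = mux_select_carry(a, b, 1)   # carry-out when cin = 1
```

Because both candidate carries are computed in parallel before cin arrives, the critical path through a chain of sub-blocks reduces to one multiplexer delay per block.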
Original/Review Paper
H.6.3.1. Classifier design and evaluation
M. Moradi; J. Hamidzadeh
Abstract
Recommender systems have been widely used in e-commerce applications. They are a subclass of information filtering systems, used either to predict whether a user will prefer an item (the prediction problem) or to identify a set of k items that will interest the user (the top-k recommendation problem). Obtaining enough ratings to make robust predictions and suggesting qualified recommendations are two significant challenges in recommender systems. However, the latter is far from satisfactory, because human decisions are affected by environmental conditions and may change over time. In this paper, we introduce an innovative method for imputing ratings to the missing entries of the rating matrix. We also design an ensemble-based method to obtain top-k recommendations. To evaluate the performance of the proposed method, several experiments have been conducted using 10-fold cross-validation on real-world data sets. The experimental results show that the proposed method is superior to state-of-the-art competing methods on the applied evaluation metrics.
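The two pieces of the pipeline can be sketched as below. Plain item-mean imputation stands in for the paper's (unspecified here) imputation method, and a simple sort stands in for the ensemble ranker; the rating matrix is invented.

```python
def impute(matrix):
    """Fill missing ratings (None) with the item's column mean,
    falling back to the global mean for never-rated items."""
    known = [v for row in matrix for v in row if v is not None]
    gmean = sum(known) / len(known)
    col_means = []
    for col in zip(*matrix):
        vals = [v for v in col if v is not None]
        col_means.append(sum(vals) / len(vals) if vals else gmean)
    return [[v if v is not None else col_means[j]
             for j, v in enumerate(row)] for row in matrix]

def top_k(user_row, k):
    """Indices of the k items with the highest predicted ratings."""
    return sorted(range(len(user_row)), key=lambda j: -user_row[j])[:k]

R = [[5, None, 3],      # rows: users, columns: items
     [4, 2, None],
     [None, 1, 4]]
R_full = impute(R)
recs = top_k(R_full[2], k=2)   # top-2 items for user 2
```

Completing the matrix first means the top-k step can rank every item for every user, including items the user has never rated.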
Original/Review Paper
H.7. Simulation, Modeling, and Visualization
J. Peymanfard; N. Mozayani
Abstract
In this paper, we present a data-driven method for crowd simulation with a holonification model. This extra module increases the accuracy of the simulation and generates more realistic agent behaviors. First, we show how the concept of a holon can be used in crowd simulation and how effective it is, using simple holonification rules. Using real-world data, we then model the rules by which each agent joins and leaves a holon with random forests, and use this model in the simulation. Because the data come from a specific environment, we also test the model in a different environment. The results show that the rules derived from the first environment hold in the second one, confirming the generalization capability of the proposed method.
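A hand-written join rule of the kind the paper replaces with learned random-forest rules might look as follows. The thresholds, the agent/holon state representation (position plus heading), and the scenario values are all invented for illustration.

```python
import math

def join_holon(agent, holons, radius=2.0, max_angle=0.5):
    """Illustrative hand-written join rule: an agent joins the nearest
    holon whose centre is within `radius` and whose heading differs by
    less than `max_angle` radians. Agents and holons are (x, y, heading)."""
    ax, ay, ah = agent
    best = None
    for i, (hx, hy, hh) in enumerate(holons):
        d = math.hypot(ax - hx, ay - hy)
        if d <= radius and abs(ah - hh) <= max_angle:
            if best is None or d < best[1]:
                best = (i, d)
    return best[0] if best else None   # None: the agent stays on its own

holons = [(0.0, 0.0, 0.0), (10.0, 0.0, 3.1)]
joined = join_holon((1.0, 0.5, 0.1), holons)   # near and aligned with holon 0
alone = join_holon((5.0, 5.0, 0.0), holons)    # too far from both
```

In the data-driven version, a classifier trained on observed crowd trajectories replaces the fixed thresholds, so the join/leave decision reflects real behavior rather than designer intuition.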
Original/Review Paper
H.3. Artificial Intelligence
A.R. Hatamlou; M. Deljavan
Abstract
Gold price forecasting is of great importance, and researchers have presented many models to forecast the gold price. Although different models can forecast the gold price under different conditions, new factors affecting the forecast are important and can significantly increase forecast accuracy. In this paper, factors beyond those considered in previous studies on gold price forecasting were examined. In terms of time span, the collected data were divided into three groups: daily, monthly, and annual. The tests using the new factors indicate accuracy improvements of up to 2% for the neural network methods, 7.3% for the time series method, and 5.6% for the linear regression method.
Original/Review Paper
G.4. Information Storage and Retrieval
V. Derhami; J. Paksima; H. Khajeh
Abstract
The principal aim of a search engine is to provide results sorted according to the user's requirements. To achieve this aim, it employs ranking methods to rank web documents based on their significance and relevance to the user's query. The novelty of this paper is a user-feedback-based ranking algorithm using reinforcement learning. The proposed algorithm is called RRLUFF; in it, the ranking system is the agent of the learning system, and displaying a selection of documents to the user is the agent's action. The reinforcement signal is calculated from the user's clicks on the documents. Action-values in the RRLUFF algorithm are calculated for each feature of the document-query pair. In the RRLUFF method, each feature is scored based on the number of documents related to the query and their positions in that feature's ranked list. For learning, documents are sorted according to the modified scores for the next query. Then, according to each document's position in the ranking list, some documents are selected for display to the user based on the random distribution of their scores. The OHSUMED and DOTIR benchmark datasets are used to evaluate the proposed method. The evaluation results indicate that the proposed method is more effective than related methods in terms of P@n, NDCG@n, MAP, and NWN.
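The core learning step, a per-feature action-value updated from click feedback, can be sketched as below. The update rule, learning rate, and feature names (`bm25`, `pagerank`) are illustrative assumptions, not RRLUFF's exact formulas.

```python
def update_action_values(q, feature_scores, clicked, alpha=0.1):
    """One reinforcement step: the user's click acts as the reward, and
    each feature's action-value moves toward it in proportion to how
    strongly that feature scored the shown document-query pair."""
    reward = 1.0 if clicked else 0.0
    for f, score in feature_scores.items():
        q[f] = q.get(f, 0.0) + alpha * score * (reward - q.get(f, 0.0))
    return q

q = {}
# One document-query pair described by two illustrative ranking features;
# the user clicks it twice across two presentations.
q = update_action_values(q, {"bm25": 0.8, "pagerank": 0.3}, clicked=True)
q = update_action_values(q, {"bm25": 0.8, "pagerank": 0.3}, clicked=True)
```

Features that score clicked documents highly accumulate larger action-values, so subsequent rankings weight them more heavily for similar queries.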
Original/Review Paper
Document and Text Processing
S. Momtazi; A. Rahbar; D. Salami; I. Khanijazani
Abstract
Text clustering and classification are two main tasks of text mining. Feature selection plays a key role in the quality of the clustering and classification results. Although word-based features such as term frequency-inverse document frequency (TF-IDF) vectors have been widely used in different applications, their shortcomings in capturing the semantic concepts of text have motivated researchers to use semantic models for document vector representations. Latent Dirichlet allocation (LDA) topic modeling and doc2vec neural document embedding are two well-known techniques for this purpose. In this paper, we first study the conceptual difference between the two models and show that they behave differently, capturing semantic features of texts from different perspectives. We then propose a hybrid approach to document vector representation that benefits from the advantages of both models. Experimental results on the 20 Newsgroups dataset show the superiority of the proposed model over each of the baselines on both text clustering and classification tasks: a 2.6% improvement in F-measure for text clustering and a 2.1% improvement in F-measure for text classification compared to the best baseline model.
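One plausible way to combine the two representations, not necessarily the paper's exact scheme, is to normalize each vector and concatenate them so neither part dominates by scale. The topic distribution and embedding values below are invented placeholders for real LDA and doc2vec outputs.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit Euclidean length (no-op for a zero vector)."""
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def hybrid_vector(topic_dist, doc_embedding):
    """Concatenate an LDA-style topic distribution with a doc2vec-style
    embedding, normalizing each part so both contribute comparably."""
    return l2_normalize(topic_dist) + l2_normalize(doc_embedding)

topics = [0.7, 0.2, 0.1]          # e.g. a 3-topic LDA distribution
embed = [0.3, -1.2, 0.5, 0.9]     # e.g. a 4-dimensional doc2vec vector
h = hybrid_vector(topics, embed)  # 7-dimensional hybrid representation
```

The hybrid vector then feeds any standard clustering or classification algorithm, letting it exploit the corpus-level topics and the local-context embedding at once.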
Original/Review Paper
Document and Text Processing
A. Shojaie; F. Safi-Esfahani
Abstract
With the advent of the internet and easy access to digital libraries, plagiarism has become a major issue. Applying search engines is one plagiarism detection technique; it converts plagiarism patterns into search queries. Generating suitable queries is the heart of this technique, and existing methods suffer from inaccurate queries and from the low precision and speed of the retrieved results. This research proposes a framework called ParaMaker. It generates accurate paraphrases of any sentence, similar to human behavior, and sends them to a search engine to find plagiarism patterns. For English, ParaMaker was examined against six known methods on the standard PAN2014 datasets. The results showed an improvement of 34% in recall while precision and speed were maintained. For Persian, statements of suspicious documents were examined against an exact-search approach. ParaMaker showed an improvement of at least 42% while precision and speed were maintained.
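The paraphrase-to-query idea can be sketched with naive synonym substitution; ParaMaker's actual paraphrase generation is far richer, and the synonym table here is a tiny invented example.

```python
from itertools import product

SYNONYMS = {                 # tiny illustrative synonym table
    "big": ["large", "huge"],
    "fast": ["quick", "rapid"],
}

def paraphrase_queries(sentence, limit=5):
    """Enumerate paraphrases of a sentence by swapping each word for its
    synonyms, yielding candidate search-engine queries (original first)."""
    options = [[w] + SYNONYMS.get(w, []) for w in sentence.split()]
    queries = [" ".join(words) for words in product(*options)]
    return queries[:limit]

qs = paraphrase_queries("big data is fast", limit=5)
```

Sending the paraphrases, rather than only the original sentence, to the search engine is what lets the detector catch rewritten (not just copy-pasted) plagiarism, raising recall.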
Research Note
H.3.8. Natural Language Processing
S. Lazemi; H. Ebrahimpour-komleh
Abstract
The dependency parser is one of the most important fundamental tools in natural language processing; it extracts the structure of sentences and determines the relations between words based on dependency grammar. Dependency parsing is well suited to free-word-order languages such as Persian. In this paper, a data-driven dependency parser is developed for Persian with the help of a phrase-structure parser. The feature space defined in a parser is one of the important factors in its success, and our goal is to generate and extract appropriate features for dependency parsing of Persian sentences. To achieve this goal, new semantic and syntactic features are defined and added to the MSTParser by stacking. The semantic features are obtained with word clustering algorithms based on syntagmatic analysis, and the syntactic features are obtained with the Persian phrase-structure parser and used as bit strings. Experiments are conducted on the Persian Dependency Treebank (PerDT) and the Uppsala Persian Dependency Treebank (UPDT). The results indicate that the new features improve the performance of the dependency parser for Persian: the achieved unlabeled attachment scores for PerDT and UPDT are 89.17% and 88.96%, respectively.
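Encoding clustering output as bit-string features, the general mechanism behind the semantic features described above, can be sketched as follows. The cluster assignments and the 4-bit width are invented for illustration.

```python
def cluster_bitstring(word, clusters, bits=4):
    """Encode a word's cluster id as a fixed-width bit-string feature, a
    common way to feed clustering output to a feature-based parser
    (prefixes of the string give coarser cluster granularities)."""
    cid = clusters.get(word, 0)          # unknown words fall in cluster 0
    return format(cid, "0{}b".format(bits))

clusters = {"book": 5, "read": 9}        # illustrative cluster assignments
f = cluster_bitstring("book", clusters)
```

A parser can then use both the full string and its prefixes as features, so words in related clusters share feature values even when the words themselves were never seen together in training.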
Original/Review Paper
N. Zendehdel; S. J. Sadati; A. Ranjbar Noei
Abstract
This manuscript addresses the trajectory tracking problem of autonomous underwater vehicles (AUVs) on the horizontal plane. Adaptive sliding mode control is employed to achieve robust behavior against model uncertainty and ocean current disturbances, assuming that the disturbance and its derivative are bounded by unknown levels. The proposed approach is based on a dual-layer adaptive law that does not require knowledge of the disturbance bound or of the bound on its derivative. The approach significantly reduces the chattering effect that is prevalent in conventional sliding mode controllers. The stability of the proposed control technique is guaranteed using Lyapunov theory. Simulation results illustrate the validity of the proposed control scheme in comparison with a finite-time tracking control method.
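The principle of adapting the switching gain without knowing the disturbance bound can be illustrated on a scalar toy plant. This is a single-layer simplification of the paper's dual-layer law, with invented plant, disturbance, and gains, and no AUV dynamics.

```python
import math

def simulate(steps=2000, dt=0.005):
    """Adaptive sliding-mode sketch for the scalar plant x' = u + d(t):
    the switching gain k grows while the sliding variable is nonzero, so
    it eventually dominates the disturbance without its bound being
    known in advance."""
    x, k, gamma = 1.0, 0.0, 5.0           # state, adaptive gain, adaptation rate
    for i in range(steps):
        t = i * dt
        d = 0.8 * math.sin(3 * t)         # unknown bounded disturbance
        s = x                             # sliding surface s = x
        k += gamma * abs(s) * dt          # adaptation layer: gain grows with |s|
        u = -k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)  # switching control
        x += (u + d) * dt                 # Euler step of the plant
    return x, k

x_final, k_final = simulate()
```

The state is driven to a small neighborhood of zero once k exceeds the disturbance amplitude; the dual-layer version in the paper additionally lets the gain decrease again, which is what tames chattering.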