G.4. Information Storage and Retrieval
V. Derhami; J. Paksima; H. Khajah
Abstract
The main challenge of a search engine is ranking web documents to provide the best response to a user's query. Despite the huge number of results extracted for a user's query, only a small number of the first results are examined by users; therefore, placing the relevant results in the first ranks is of great importance. In this paper, a ranking algorithm based on reinforcement learning and user feedback, called RL3F, is proposed. In the proposed algorithm, the ranking system is considered the agent of the learning system, and selecting documents to display to the user is the agent's action. The reinforcement signal in the system is calculated according to the user's clicks on documents. The action values of the proposed algorithm are computed for each feature. In each learning cycle, the documents are ranked for the next query and, according to their positions in the ranked list, documents are selected at random to show to the user. The learning process continues until training is completed. The LETOR3 benchmark is used to evaluate the proposed method. Evaluation results indicate that the proposed method is more effective than the other methods considered for comparison in this paper. The superiority of the proposed algorithm lies in using several document features and user feedback simultaneously.
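As an illustration of the click-driven update described above, the following minimal Python sketch maintains one action value per ranking feature and nudges it toward the observed click reward; the learning rate, reward definition, and feature scores are assumptions for illustration, not the exact RL3F formulation.

```python
# Illustrative sketch of a click-driven, per-feature value update for ranking,
# in the spirit of the description above; the learning rate, reward, and
# feature scores are assumptions, not the paper's exact formulation.

ALPHA = 0.1  # learning rate (assumed)

def update_feature_values(values, shown_docs, clicked_ids, feature_scores):
    """values[f] is the current action value of feature f.
    feature_scores[doc_id][f] is feature f's normalized score for that document.
    Clicked documents yield reward 1, shown-but-ignored documents yield 0."""
    for doc_id in shown_docs:
        reward = 1.0 if doc_id in clicked_ids else 0.0
        for f, score in feature_scores[doc_id].items():
            # move each feature's value toward the observed reward,
            # weighted by how strongly the feature supported this document
            values[f] += ALPHA * score * (reward - values[f])
    return values

def rank(doc_ids, values, feature_scores):
    # score each document as a value-weighted sum of its feature scores
    def doc_score(d):
        return sum(values[f] * s for f, s in feature_scores[d].items())
    return sorted(doc_ids, key=doc_score, reverse=True)
```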
A.2. Control Structures and Microprogramming
A. Karami-Mollaee
Abstract
A new approach for pole placement of nonlinear systems using state feedback and a fuzzy system is proposed. We use a new online fuzzy training method to identify and obtain a fuzzy model for the unknown nonlinear system using only the system input and output. Then, this identified model is linearized at each sampling time to yield an approximate linear time-varying system. In order to stabilize the obtained linear system, we first choose the desired time-invariant closed-loop matrix, and then a time-varying state feedback is used. The behavior of the closed-loop nonlinear system is then that of a linear time-invariant (LTI) system. Therefore, the advantage of the proposed method is global asymptotic exponential stability of the unknown nonlinear system. Because of the fast convergence of the proposed adaptive fuzzy training method, the closed-loop system is robust against uncertainty in system parameters. Finally, a comparison is made with boundary-layer sliding mode control (SMC).
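For the state-feedback step, the following minimal sketch shows pole placement for a single linearized model using SciPy's place_poles; the system matrices and desired poles are placeholders, since the actual A and B would come from linearizing the identified fuzzy model at each sampling time.

```python
import numpy as np
from scipy.signal import place_poles

# Minimal sketch: pole placement by state feedback for one linearized model,
# as would be done at each sampling time. The matrices below are illustrative;
# the actual A, B would come from linearizing the identified fuzzy model.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

desired_poles = np.array([-4.0, -5.0])          # desired closed-loop poles (assumed)
K = place_poles(A, B, desired_poles).gain_matrix

A_cl = A - B @ K                                 # closed-loop matrix A - B*K
print("Feedback gain K:", K)
print("Closed-loop eigenvalues:", np.linalg.eigvals(A_cl))
```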
D. Data
M. Zarezade; E. Nourani; Asgarali Bouyer
Abstract
Community structure is vital for discovering the important structures and potential properties of complex networks. In recent years, improving the quality of local community detection approaches has become a hot topic in the study of complex networks, owing to their linear time complexity and applicability to large-scale networks. However, these methods have many shortcomings, such as instability, low accuracy, and randomness. The G-CN algorithm is a local method that uses the same label propagation as the LPA method but, unlike LPA, updates only the labels of boundary nodes at each iteration, which reduces its execution time. However, it suffers from the resolution limit and low accuracy. To overcome these problems, this paper proposes an improved community detection method called SD-GCN, which uses hybrid node scoring and synchronous label updating of boundary nodes, along with disabling random label updating in the initial updates. In the first phase, it updates the labels of boundary nodes synchronously using a score based on degree centrality and common-neighbor measures. In addition, a new method for merging communities is defined in the second phase, which is faster than modularity-based methods. An extensive set of experiments is conducted to evaluate the performance of SD-GCN on small and large-scale real-world networks and on artificial networks. These experiments verify a significant improvement in the accuracy and stability of community detection, along with shorter execution time and linear time complexity.
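The sketch below illustrates one synchronous label-updating pass over boundary nodes, scoring neighbors by degree centrality and common neighbors; the exact scoring, phase ordering, and merging rules of SD-GCN are not reproduced, so treat this only as an assumed simplification.

```python
import networkx as nx

# Illustrative sketch of synchronous label updating on boundary nodes with a
# degree / common-neighbor score, in the spirit of the description above.

def node_score(G, u, v):
    # combine the neighbor's degree centrality with the number of common
    # neighbors between u and v (this particular combination is an assumption)
    common = len(list(nx.common_neighbors(G, u, v)))
    return G.degree(v) / G.number_of_nodes() + common

def update_boundary_labels(G, labels, boundary):
    """One synchronous pass: every boundary node adopts the label with the
    highest accumulated neighbor score; all updates are applied together."""
    new_labels = dict(labels)
    for u in boundary:
        votes = {}
        for v in G.neighbors(u):
            votes[labels[v]] = votes.get(labels[v], 0.0) + node_score(G, u, v)
        if votes:
            new_labels[u] = max(votes, key=votes.get)
    return new_labels
```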
H.3.15.2. Computational neuroscience
A. Goshvarpour; A. Abbasi; A. Goshvarpour
Abstract
Emotion, as a psychophysiological state, plays an important role in human communication and daily life. Emotion studies based on physiological signals have recently become the subject of much research. In this study, a hybrid feature-based approach is proposed to examine affective states. To this end, electrocardiogram (ECG) signals of 47 students were recorded using a pictorial emotion elicitation paradigm. Affective pictures were selected from the International Affective Picture System and assigned to four different emotion classes. After extracting the approximation and detail coefficients of the Wavelet Transform (WT, Daubechies 4 at level 8), two measures of the second-order difference plot (CTM and D) were calculated for each wavelet coefficient series. Subsequently, a Least Squares Support Vector Machine (LS-SVM) was applied to discriminate between the affective states and the rest condition. The statistical analysis indicated that the density of CTM at rest is distinct from that of the emotional categories. In addition, the second-order difference plot measures at the last level of the WT coefficients showed significant differences between the rest and emotion categories. Applying the LS-SVM, a maximum classification rate of 80.24% was reached for discriminating between rest and fear. The results of this study indicate the usefulness of the WT in combination with nonlinear techniques for characterizing emotional states.
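The following sketch, under assumed parameters (a placeholder signal and radius r), shows the kind of feature computation described: a db4 wavelet decomposition at level 8 followed by the CTM and the mean distance (D) of the second-order difference plot for each coefficient series.

```python
import numpy as np
import pywt

# Illustrative sketch: Daubechies-4 wavelet decomposition at level 8 followed by
# second-order difference plot measures (CTM and mean distance D) for each
# coefficient series. The radius r and the ECG segment are assumptions.

def ctm(x, r):
    """Fraction of second-order difference plot points inside a circle of radius r."""
    d1 = x[1:-1] - x[:-2]      # x[n+1] - x[n]
    d2 = x[2:] - x[1:-1]       # x[n+2] - x[n+1]
    return np.mean(np.sqrt(d1**2 + d2**2) < r)

def mean_distance(x):
    """Mean distance of second-order difference plot points from the origin (D)."""
    d1 = x[1:-1] - x[:-2]
    d2 = x[2:] - x[1:-1]
    return np.mean(np.sqrt(d1**2 + d2**2))

ecg = np.random.randn(4096)                    # placeholder for a real ECG segment
coeffs = pywt.wavedec(ecg, 'db4', level=8)     # [cA8, cD8, ..., cD1]
features = [(ctm(c, r=0.5), mean_distance(c)) for c in coeffs]
```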
H.5. Image Processing and Computer Vision
S. Mavaddati
Abstract
In this paper, the face detection problem is addressed using concepts from the compressive sensing technique. This technique includes a dictionary learning procedure and a sparse coding method to represent the structural content of input images. In the proposed method, dictionaries are learned in such a way that the trained models have the least possible coherence with each other. The novelty of the proposed method lies in learning comprehensive models with atoms that have the highest atom/data coherence with the training data and the lowest within-class and between-class coherence parameters. Each of these goals can be achieved through the proposed procedures. In order to achieve the desired results, a variety of features are extracted from the images and used to learn the characteristics of face and non-face images. Also, the results of the proposed classifier based on the incoherent dictionary learning technique are compared with those obtained from other common classifiers such as the neural network and the support vector machine. Simulation results, along with a statistical significance test, show that the proposed method based on the incoherent models learned from the combined features is able to detect face regions with a high accuracy rate.
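For reference, the sketch below computes the two coherence quantities the abstract alludes to, mutual coherence within a dictionary and cross-coherence between two class dictionaries; random matrices stand in for the learned face and non-face models.

```python
import numpy as np

# Minimal sketch of the coherence quantities referred to above: mutual coherence
# within a dictionary and cross-coherence between two class dictionaries.
# Random matrices stand in for the learned face / non-face models.

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized atoms of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

def cross_coherence(D1, D2):
    """Largest absolute inner product between atoms of two dictionaries."""
    D1n = D1 / np.linalg.norm(D1, axis=0, keepdims=True)
    D2n = D2 / np.linalg.norm(D2, axis=0, keepdims=True)
    return np.abs(D1n.T @ D2n).max()

D_face, D_nonface = np.random.randn(64, 32), np.random.randn(64, 32)
print(mutual_coherence(D_face), cross_coherence(D_face, D_nonface))
```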
H.3. Artificial Intelligence
Z. Sedighi; R. Boostani
Abstract
Although many studies have been conducted to improve clustering efficiency, most state-of-the-art schemes suffer from a lack of robustness and stability. This paper proposes an efficient approach to elicit prior knowledge, in terms of must-link and cannot-link constraints, from the estimated distribution of raw data in order to convert a blind clustering problem into a semi-supervised one. To estimate the density distribution of the data, a Weibull Mixture Model (WMM) is utilized due to its high flexibility. Another contribution of this study is a new hill and valley seeking algorithm to find the constraints for the semi-supervised algorithm. It is assumed that each density peak stands on a cluster center; therefore, neighboring samples of each center are considered must-link samples, while samples near the centroids of different clusters are considered cannot-link ones. The proposed approach is applied to a standard image dataset (designed for clustering evaluation) along with some UCI datasets. The results achieved on both databases demonstrate the superiority of the proposed method compared to conventional clustering methods.
M. Kakooei; Y. Baleghi
Abstract
Semantic labeling is an active field in remote sensing applications. Although handling highly detailed objects in Very High Resolution (VHR) optical images and VHR Digital Surface Models (DSM) is a challenging task, it can improve the accuracy of semantic labeling methods. In this paper, a semantic labeling method is proposed based on the fusion of optical and normalized DSM data. Spectral and spatial features are fused into a Heterogeneous Feature Map to train the classifier. The evaluation database classes are impervious surface, building, low vegetation, tree, car, and background. The proposed method is implemented on Google Earth Engine and consists of several steps. First, Principal Component Analysis is applied to vegetation indices to find the most separable color space between vegetation and non-vegetation areas. The Gray Level Co-occurrence Matrix is computed to provide texture information as spatial features. Several Random Forests are trained with an automatically selected training dataset. Several spatial operators follow the classification to refine the result. A Leaf-Less-Tree feature is used to solve the underestimation problem in tree detection. The area, major axis, and minor axis of connected components are used to refine building and car detection. The evaluation shows significant improvement in tree, building, and car accuracy, and the overall accuracy and Kappa coefficient are satisfactory.
H.6.3.1. Classifier design and evaluation
M. Moradi; J. Hamidzadeh
Abstract
Recommender systems have been widely used in e-commerce applications. They are a subclass of information filtering systems, used either to predict whether a user will prefer an item (the prediction problem) or to identify a set of k items that will interest the user (the top-k recommendation problem). Obtaining sufficient ratings to make robust predictions and suggesting qualified recommendations are two significant challenges in recommender systems. However, the latter is far from satisfactory because human decisions are affected by environmental conditions and may change over time. In this paper, we introduce an innovative method to impute ratings for the missing entries of the rating matrix. We also design an ensemble-based method to obtain top-k recommendations. To evaluate the performance of the proposed method, several experiments have been conducted based on 10-fold cross-validation over real-world datasets. Experimental results show that the proposed method is superior to the state-of-the-art competing methods with respect to the applied evaluation metrics.
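As a rough, assumed sketch of the data flow (not the paper's imputation or ensemble scheme), the code below fills missing rating-matrix entries with a simple user/item mean and then selects top-k unrated items from the completed matrix.

```python
import numpy as np

# Illustrative sketch only: simple user/item-mean imputation of missing entries
# in a rating matrix and top-k selection from the completed matrix. Here 0
# denotes a missing rating, which is an assumption.

def impute(R):
    R = R.astype(float)
    filled = R.copy()
    global_mean = R[R > 0].mean()
    for u in range(R.shape[0]):
        for i in range(R.shape[1]):
            if R[u, i] == 0:
                user_r, item_r = R[u, R[u] > 0], R[R[:, i] > 0, i]
                user_mean = user_r.mean() if user_r.size else global_mean
                item_mean = item_r.mean() if item_r.size else global_mean
                filled[u, i] = (user_mean + item_mean) / 2.0
    return filled

def top_k(filled, R, user, k=3):
    unseen = np.where(R[user] == 0)[0]                 # recommend only unrated items
    return unseen[np.argsort(filled[user, unseen])[::-1][:k]]

R = np.array([[5, 0, 3, 0], [4, 2, 0, 1], [0, 5, 4, 0]])
completed = impute(R)
print(top_k(completed, R, user=0, k=2))
```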
H.3.14. Knowledge Management
A. Soltani; M. Soltani
Abstract
High utility itemset mining (HUIM) is an emerging field in data mining that has gained growing interest due to its various applications. The goal is to discover all itemsets whose utility exceeds a minimum threshold. The basic HUIM problem does not consider the length of itemsets in its utility measurement, so utility values tend to be higher for itemsets containing more items. Hence, HUIM algorithms discover an enormous number of long patterns. High average-utility itemset mining (HAUIM) is a variation of HUIM that selects patterns by considering both their utilities and their lengths. In recent decades, several algorithms have been introduced to mine high average-utility itemsets. To speed up the HAUIM process, a new algorithm is proposed here that uses a new list structure and a new pruning strategy. Several experiments performed on real and synthetic datasets show that the proposed algorithm outperforms the state-of-the-art HAUIM algorithms in terms of runtime and memory consumption.
S. Arastehfar; Ali A. Pouyan; A. Jalalian
Abstract
In this paper, a novel decision-based median (DBM) filter for enhancing MR images is proposed. The method is based on eliminating impulse noise from MR images. A median-based method to remove impulse noise from digital MR images has been developed, where each pixel is represented by a gray level ranging from black to white. The method decides whether the median operation should be applied to a pixel. The main deficiency of conventional median filter approaches is that all pixels are filtered, with no concern for healthy pixels. In this research, to overcome this deficiency, noisy pixels are first detected and the filtering operation is then applied only to them. The proposed decision method (DM) is simple and leads to fast filtering, and the results are more accurate than those of other conventional filters. Moreover, the DM adjusts itself based on the conditions of local detections; in other words, whether the DM detects a pixel as noisy depends on the previous decision. As a considerable advantage, some unnecessary median operations are eliminated, and the number of median operations is drastically reduced by using the DM. The decision method also leads to more acceptable results in scenarios with high noise density. Furthermore, the proposed method reduces the probability of detecting noise-free pixels as noisy pixels and vice versa.
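A minimal sketch of a decision-based median filter is given below, assuming salt-and-pepper impulses at the extreme gray levels; only pixels flagged as noisy are replaced by the median of their healthy 3x3 neighbors, which captures the decision step described, though not the paper's adaptive detection rule.

```python
import numpy as np

# Minimal sketch of a decision-based median filter for salt-and-pepper impulse
# noise: only pixels detected as noisy (extreme gray levels here, an assumption)
# are replaced by the median of their 3x3 neighborhood; healthy pixels are kept.

def dbm_filter(img, low=0, high=255):
    out = img.copy()
    padded = np.pad(img, 1, mode='edge')
    noisy = (img == low) | (img == high)              # simple impulse-noise detection
    for r, c in zip(*np.nonzero(noisy)):
        window = padded[r:r + 3, c:c + 3]
        good = window[(window != low) & (window != high)]
        # use only healthy neighbors when available, otherwise the full window
        out[r, c] = int(np.median(good)) if good.size else int(np.median(window))
    return out

noisy_img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
denoised = dbm_filter(noisy_img)
```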
B.3. Communication/Networking and Information Technology
A. Ghaffari; S. Nobahary
Abstract
Wireless sensor networks (WSNs) consist of a large number of sensor nodes that are capable of sensing different environmental phenomena and sending the collected data to the base station or sink. Since sensor nodes are made of cheap components and are deployed in remote and uncontrolled environments, they are prone to failure; thus, maintaining the network's proper functions even when undesired events occur, known as fault tolerance, is necessary. Hence, fault management is essential in these networks. In this paper, a new method is proposed with particular attention to fault tolerance and fault detection in WSNs. The performance of the proposed method was simulated in MATLAB. The proposed method is based on majority voting, which can detect permanently faulty sensor nodes with high detection accuracy and a low false alarm rate and exclude them from the network. To investigate the efficiency of the new method, it was compared with the Chen, Lee, and hybrid algorithms. Simulation results indicate that the proposed method performs better in terms of detection accuracy (DA) and false alarm rate (FAR), even with a large set of faulty sensor nodes.
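The sketch below illustrates the majority-vote idea in its simplest form: a node is flagged as faulty when most of its neighbors report readings that differ from its own beyond a threshold; the threshold and neighbor lists are assumptions.

```python
# Illustrative sketch of majority-vote fault detection: a node is flagged as
# permanently faulty if most of its neighbors report readings that disagree
# with it beyond a threshold. The threshold and topology are assumptions.

def detect_faulty(readings, neighbors, theta=5.0):
    """readings: dict node -> sensed value; neighbors: dict node -> list of nodes."""
    faulty = set()
    for n, value in readings.items():
        votes = [abs(value - readings[m]) > theta for m in neighbors[n]]
        if votes and sum(votes) > len(votes) / 2:      # majority disagrees
            faulty.add(n)
    return faulty

readings = {1: 20.1, 2: 19.8, 3: 45.0, 4: 20.5}        # node 3 behaves abnormally
neighbors = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2, 4], 4: [1, 2, 3]}
print(detect_faulty(readings, neighbors))               # expected: {3}
```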
E.3. Analysis of Algorithms and Problem Complexity
M. Asghari; H. Nematzadeh
Abstract
Suspended particles have deleterious effects on human health, and Tehran's geographical location is one of the reasons it suffers from air pollution. One of the most important ways to reduce air pollution is to predict the concentration of pollutants. This paper proposes a hybrid method to predict air pollution in Tehran based on particulate matter smaller than 10 microns (PM10), using data from the Aghdasiyeh Air Quality Control Station and Mehrabad Weather Station from 2007 to 2013. In total, 11 inputs were fed to the model to predict the daily concentration of PM10. For this purpose, an Artificial Neural Network with Back Propagation (BP), with one hidden layer and a sigmoid activation function, and its hybrid with a Genetic Algorithm (BP-GA) were used, and ultimately the performance of the proposed method was compared with that of the basic BP neural network based on the R2, RMSE, and MAE criteria. The findings show that BP-GA has higher accuracy and performance. It was also found that the results are more accurate for shorter time periods, because large fluctuations of the data over the long term negatively affect network performance. Also, unrecorded data have a negative effect on the predictions. The simulations were conducted in Microsoft Excel and MATLAB 2013.
F.2.11. Applications
M. Fatahi; B. Lashkar-Ara
Abstract
This paper uses nonlinear regression, Artificial Neural Network (ANN), and Genetic Programming (GP) approaches for predicting an important tangible issue, i.e., scour dimensions downstream of inverted siphon structures. Dimensional analysis and nonlinear regression-based equations were proposed for estimating the maximum scour depth, the location of the scour hole, and the location and height of the dune downstream of the structure. In addition, the GP-based formulation results are compared with experimental results and other accurate equations. The analysis showed that the equations derived from the forward stepwise nonlinear regression method, the genetic programming (GP) model, and the artificial neural network (ANN) model yield correlation coefficients of R2 = 0.962, 0.971, and 0.991, respectively, for the relative maximum scour depth parameter (s/z). Furthermore, the slope of the line fitted between computed and observed dimensionless parameters indicates the superiority of the ANN model, presenting a new achievement for the sediment engineering and scientific community.
H.3. Artificial Intelligence
A. Moradi; A. Abdi Seyedkolaei; Seyed A. Hosseini
Abstract
A software-defined network is a new computer network architecture that separates the control and data layers in network devices such as switches and routers. With the emergence of software-defined networks, a class of location problems, called the controller placement problem, has attracted much research attention. The task in this problem is to simultaneously find the optimal number and locations of controllers satisfying a set of routing and capacity constraints. In this paper, we suggest an effective solution method based on the so-called Iterated Local Search (ILS) strategy. We then compare our method with an existing standard mathematical programming solver on an extensive set of problem instances. It turns out that our suggested method is computationally much more effective and efficient on medium to large instances of the problem.
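The following generic Iterated Local Search skeleton shows the strategy named above; the cost function, neighborhood, and perturbation are left as parameters, since the paper's controller-placement-specific definitions are not reproduced here.

```python
# Generic Iterated Local Search (ILS) skeleton, given only as an illustration
# of the strategy named in the abstract; cost, neighbors, and perturb are
# problem-specific callables (placeholders, not the paper's formulation).

def local_search(solution, cost, neighbors):
    improved = True
    while improved:
        improved = False
        for cand in neighbors(solution):
            if cost(cand) < cost(solution):
                solution, improved = cand, True
                break
    return solution

def iterated_local_search(initial, cost, neighbors, perturb, iters=100):
    best = local_search(initial, cost, neighbors)
    for _ in range(iters):
        candidate = local_search(perturb(best), cost, neighbors)
        if cost(candidate) < cost(best):     # acceptance criterion: keep improvements
            best = candidate
    return best
```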
H.3. Artificial Intelligence
N. Emami; A. Pakzad
Abstract
Breast cancer has become a widespread disease among young women around the world. Expert systems, developed with data mining techniques, are valuable tools for the diagnosis of breast cancer and can help physicians in the decision-making process. This paper presents a new hybrid data mining approach to classify two groups of breast cancer patients (malignant and benign). The proposed approach, AP-AMBFA, consists of two phases. In the first phase, the Affinity Propagation (AP) clustering method is used as an instance reduction technique that can find noisy instances and eliminate them. In the second phase, feature selection and classification are conducted using the Adaptive Modified Binary Firefly Algorithm (AMBFA) to select the predictor variables most related to the target variable, with the Support Vector Machine (SVM) technique as the classifier. This can reduce the computational complexity and speed up the data mining process. Experimental results on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset show high predictive accuracy. The obtained classification accuracy of 98.606% is a very promising result compared to the current state-of-the-art classification techniques applied to the same database. Hence, this method can help physicians make a more accurate diagnosis of breast cancer.
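As an assumed, simplified sketch of the two-phase idea, the code below runs Affinity Propagation for instance reduction (dropping samples far from their exemplar) and then trains an SVM; the firefly-based feature selection step is omitted.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVC

# Rough sketch of the two-phase idea: Affinity Propagation for instance
# reduction followed by an SVM classifier. Treating samples far from their
# cluster exemplar as noisy is a simplifying assumption; the paper's
# firefly-based feature selection step is omitted.

def reduce_instances(X, y, keep_ratio=0.9):
    ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
    exemplars = ap.cluster_centers_[ap.labels_]        # exemplar of each sample
    dist = np.linalg.norm(X - exemplars, axis=1)
    keep = dist <= np.quantile(dist, keep_ratio)       # drop the farthest samples
    return X[keep], y[keep]

X = np.random.randn(200, 10)                           # placeholder data
y = (X[:, 0] + 0.1 * np.random.randn(200) > 0).astype(int)
X_red, y_red = reduce_instances(X, y)
clf = SVC(kernel='rbf').fit(X_red, y_red)
print("training accuracy:", clf.score(X_red, y_red))
```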
H.3.2.5. Environment
M. T. Sattari; M. Pal; R. Mirabbasi; J. Abraham
Abstract
This work reports the results of four ensemble approaches with the M5 model tree as the base regression model to predict the Sodium Adsorption Ratio (SAR). Ensemble methods that combine the output of multiple regression models have been found to be more accurate than any of the individual models making up the ensemble. In this study, additive boosting, bagging, rotation forest, and random subspace methods are used. The dataset, which consisted of 488 samples with nine input parameters, was obtained from the Barandoozchay River in West Azerbaijan province, Iran. Three evaluation criteria, namely the correlation coefficient, root mean square error, and mean absolute error, were used to judge the accuracy of the different ensemble models. In addition to using the M5 model tree to predict the SAR values, a wrapper-based variable selection approach, using an M5 model tree as the learning algorithm together with a genetic algorithm, was also used to select useful input variables. The encouraging performance motivates the use of this technique to predict SAR values.
H.3.15.3. Evolutionary computing and genetic algorithms
V. Majidnezhad
Abstract
In this paper, an initial feature vector for vocal fold pathology diagnosis is first proposed. Then, a genetic algorithm is proposed for optimizing the initial feature vector. Experiments are carried out to evaluate and compare the classification accuracies obtained with different classifiers (an ensemble of decision trees, discriminant analysis, and K-nearest neighbours) and different feature vectors (the initial and the optimized ones). Finally, a hybrid of the decision tree ensemble and the genetic algorithm is proposed for vocal fold pathology diagnosis based on the Russian language. The experimental results show better performance (higher classification accuracy and lower response time) of the proposed method in comparison with the others. While using the pure decision tree leads to a classification accuracy of 85.4% for vocal fold pathology diagnosis based on the Russian language, the proposed method yields an 8.5% improvement (an accuracy of 93.9%).
H.6.3.2. Feature evaluation and selection
E. Golpar-Rabooki; S. Zarghamifar; J. Rezaeenour
Abstract
Opinion mining deals with the analysis of user reviews to extract their opinions, sentiments, and demands in a specific area, which can play an important role in making major decisions in that area. In general, opinion mining extracts user reviews at three levels: document, sentence, and feature. Opinion mining at the feature level receives more attention than the other two levels because it analyzes the orientation of different aspects of an area. In this paper, two methods are introduced for feature extraction. The recommended methods consist of four main stages. In the first stage, an opinion-mining lexicon for Persian is created; this lexicon is used to determine the orientation of users' reviews. The second is the preprocessing stage, which includes writing unification, tokenization, part-of-speech tagging, and syntactic dependency parsing of the documents. The third stage involves the extraction of features using two methods: frequency-based feature extraction and association rule-based feature extraction. In the fourth stage, the features and the polarities of the review words extracted in the previous stage are refined, and the final polarity of the features is determined. To assess the suggested techniques, a set of user reviews in both the university and cell phone domains was collected, and the results of the two methods were compared.
G. Information Technology and Systems
M. Dehghani; S. Emadi
Abstract
Nowadays, organizations require an effective governance framework for their service-oriented architecture (SOA), one that enables them to evaluate their current governance state, determine their governance requirements, and then offer a suitable governance model. Various frameworks have been developed to evaluate SOA governance. In this paper, a brief introduction to the COBIT internal control framework is given, and it is used to show how to develop a framework for evaluating SOA governance within an organization. Surveys of SOA and information technology experts are carried out to evaluate the proposed framework, and the results of these surveys verify it.
B.3. Communication/Networking and Information Technology
A. Azimi Kashani; M. Ghanbari; A. M. Rahmani
Abstract
Vehicular ad hoc networks are an emerging technology with extensive capabilities in various applications, including vehicle safety, traffic management, and intelligent transportation systems. Considering the high mobility of vehicles and their inhomogeneous distribution, designing an efficient routing protocol seems necessary. Given that a road is crowded in some sections and not in others, the routing protocol should be able to make decisions dynamically. On the other hand, the VANET environment is vulnerable during data transmission. Broadcast routing, like opportunistic routing, can offer better efficiency than other protocols. In this paper, a fuzzy logic opportunistic routing (FLOR) protocol is presented in which the packet rebroadcasting decision is made by a fuzzy logic system with three input parameters: packet advancement, local density, and the number of duplicate packets delivered. The rebroadcasting procedure uses the values of these parameters as inputs to the fuzzy logic system to resolve the multicasting issue, taking both crowded and sparse zones into account. The NS-2 simulator is used to evaluate the performance of the proposed FLOR protocol in terms of packet delivery ratio, end-to-end delay, and network throughput, compared with existing protocols such as FLOODING, P-PERSISTENCE, and FUZZBR; the comparison also emphasizes effective utilization of resources. Simulations in a highway environment show that the proposed protocol has better QoS efficiency than the above methods published in the literature.
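The sketch below shows a toy fuzzy rebroadcast decision with the three inputs named above; the triangular membership functions and the single min-based rule are assumptions, not the FLOR rule base.

```python
# Illustrative sketch of a fuzzy rebroadcast decision with the three inputs
# named above (packet advancement, local density, duplicate count). The
# membership functions and the rule are assumptions, not the FLOR rule base.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rebroadcast_probability(advancement, density, duplicates):
    adv_high = tri(advancement, 0.3, 1.0, 1.7)      # normalized advancement in [0, 1]
    den_low = tri(density, -10.0, 0.0, 10.0)        # few neighbors -> low density
    dup_low = tri(duplicates, -3.0, 0.0, 3.0)       # few duplicate copies heard
    # simple rule: rebroadcast when advancement is high AND density is low
    # AND few duplicates were received (min as the AND operator)
    return min(adv_high, den_low, dup_low)

p = rebroadcast_probability(advancement=0.8, density=4, duplicates=1)
print("rebroadcast" if p > 0.5 else "discard", round(p, 2))
```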
H.5.11. Image Representation
M. Nikpour; R. Karami; R. Ghaderi
Abstract
Sparse coding is an unsupervised method that learns a set of over-complete bases to represent data such as images and video. Sparse coding has attracted increasing attention for image classification applications in recent years. However, in cases where there are similar images from different classes, such as in face recognition applications, different images may be classified into the same class, and the classification performance may therefore decrease. In this paper, we propose an Affine Graph Regularized Sparse Coding approach for the face recognition problem. Experiments on several well-known face datasets show that the proposed method can significantly improve the face classification accuracy. In addition, some experiments have been conducted to illustrate the robustness of the proposed method to noise. The results show the superiority of the proposed method over some other methods in face classification.
F.2.7. Optimization
F. Fouladi Mahani; A. Mahanipour; A. Mokhtari
Abstract
Recently, the potential use of aluminum nanostructures as plasmonic color filters, as promising alternatives to commercial color filters based on dye films or pigments, has attracted significant interest. These color filters offer potential applications in LCDs, LEDs, color printing, CMOS image sensors, and multispectral imaging. However, engineering the optical characteristics of these nanostructures to design a color filter with the desired pass-band spectrum and high color purity requires accurate optimization techniques. In this paper, an optimization procedure integrating a genetic algorithm with FDTD Solutions is utilized to design plasmonic color filters automatically. Our proposed aluminum nanohole arrays were successfully realized as additive (red, green, and blue) color filters using the automated optimization procedure. Despite all the considerations for fabrication simplicity, the designed filters feature transmission efficiencies of 45-50 percent with FWHMs of 40 nm, 50 nm, and 80 nm for the red, green, and blue filters, respectively. The obtained results demonstrate an efficient integration of the genetic algorithm and FDTD Solutions, revealing the potential of the proposed method for the automated design of similar nanostructures.
B.3. Communication/Networking and Information Technology
Seyed M. Hosseinirad
Abstract
Due to resource constraints and dynamic parameters, reducing energy consumption has become the most important issue in wireless sensor network topology design. Existing hierarchical methods cluster a WSN into different cluster layers in a single step of evolutionary algorithm usage with complicated parameters, which may reduce efficiency and performance. In fact, in WSN topology design, adding a cluster layer is a trade-off between time complexity and energy efficiency. In this study, considering the most important WSN design parameters, a novel dynamic multilayer hierarchical clustering approach using evolutionary algorithms for densely deployed WSNs is proposed. Different evolutionary algorithms, such as the Genetic Algorithm (GA), Imperialist Competitive Algorithm (ICA), and Particle Swarm Optimization (PSO), are used to find an efficient evolutionary algorithm for implementing the proposed clustering method. The obtained results demonstrate that PSO is more efficient than the other algorithms in providing maximum network coverage, efficient cluster formation, and network traffic reduction. The simulation results of the multilayer WSN clustering design using the PSO algorithm show that this novel approach significantly reduces communication energy and increases the network lifetime by up to 2.29 times, while providing full network coverage (100%) up to round 350 (56% of the network lifetime), compared with WEEC and LEACH-ICA clustering.
M. Salehi; J. Razmara; Sh. Lotfi
Abstract
Prediction of cancer survivability using machine learning techniques has become a popular approach in recent years. In this regard, an important issue is that preparing some features may require difficult and costly experiments, while these features have a less significant impact on the final decision and can be omitted from the feature set. Therefore, developing a machine for survivability prediction that ignores these features for simple cases and still yields acceptable prediction accuracy has become a challenge for researchers. In this paper, we have developed an ensemble multi-stage machine for survivability prediction that ignores difficult features for simple cases. The machine employs three base learners, namely a multilayer perceptron (MLP), a support vector machine (SVM), and a decision tree (DT), in the first stage to predict survivability using the simple features. If the learners agree on the output, the machine makes the final decision in the first stage. Otherwise, for difficult cases where the outputs of the learners differ, the machine makes the decision in the second stage using an SVM over all features. The developed model was evaluated using the Surveillance, Epidemiology, and End Results (SEER) database. The experimental results revealed that the developed machine achieves considerable accuracy while ignoring the difficult features for most of the input samples.
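A minimal sketch of the two-stage decision logic is given below with placeholder data, an assumed simple/full feature split, and default hyperparameters: three base learners vote on the simple features, and a second-stage SVM trained on all features decides when they disagree.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Sketch of the two-stage idea described above: three base learners vote on the
# simple features; if they disagree, a second-stage SVM trained on all features
# decides. Data, feature split, and hyperparameters are placeholders.

X = np.random.randn(300, 12)
y = (X[:, 0] + X[:, 5] > 0).astype(int)
simple, full = X[:, :6], X                           # assumed simple / full feature split

stage1 = [MLPClassifier(max_iter=500), SVC(), DecisionTreeClassifier()]
for m in stage1:
    m.fit(simple, y)
stage2 = SVC().fit(full, y)

def predict(x_simple, x_full):
    votes = [m.predict(x_simple.reshape(1, -1))[0] for m in stage1]
    if len(set(votes)) == 1:                         # all three agree: decide in stage 1
        return votes[0]
    return stage2.predict(x_full.reshape(1, -1))[0]  # disagreement: use all features

print(predict(simple[0], full[0]))
```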
H.7. Simulation, Modeling, and Visualization
J. Peymanfard; N. Mozayani
Abstract
In this paper, we present a data-driven method for crowd simulation with a holonification model. With this extra module, the accuracy of the simulation increases and agents exhibit more realistic behaviors. First, we show how to use the concept of a holon in crowd simulation and how effective it is; to this end, we use simple rules for holonification. Using real-world data, we model the rules for an agent joining and leaving a holon with random forests, and we then use this model in the simulation. Also, because the data come from a specific environment, we test the model in a second environment. The results show that the rules derived from the first environment also hold in the second one, which confirms the generalization capability of the proposed method.