H.6.4. Clustering
M. Owhadi-Kareshki; M.R. Akbarzadeh-T.
Abstract
The increasingly large scale of available data and the increasingly restrictive concerns about privacy are among the challenging aspects of data mining today. In this paper, Entropy-based Consensus on Cluster Centers (EC3) is introduced for clustering in distributed systems with consideration for the confidentiality of data; i.e., only negotiations among local cluster centers are used in the consensus process, so no private data are transferred. With the proposed use of entropy as an internal measure of consensus clustering validation at each machine, the cluster centers of local machines with higher expected clustering validity have more influence on the final consensus centers. We also employ the relative cost function of the local Fuzzy C-Means (FCM) and the number of data points in each machine as measures of a machine's relative validity compared with other machines and of its reliability, respectively. The utility of the proposed consensus strategy is examined on 18 datasets from the UCI repository in terms of clustering accuracy and speedup against the centralized version of FCM. Several experiments confirm that the proposed approach yields higher speedup and accuracy while maintaining data security due to its protected and distributed processing approach.
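As a rough illustration of the consensus step described above (not the authors' exact formulation), the sketch below combines local cluster centers into consensus centers, weighting each machine by an entropy-based validity score, its relative FCM cost, and its data count; the variable names and the exact weighting form are assumptions.

```python
import numpy as np

def consensus_centers(local_centers, entropies, costs, sizes):
    """Combine per-machine FCM centers into consensus centers.

    local_centers: list of (k, d) arrays, one per machine
    entropies:     per-machine entropy validity measure (lower = better)
    costs:         per-machine relative FCM cost (lower = better)
    sizes:         number of data points on each machine
    """
    entropies = np.asarray(entropies, dtype=float)
    costs = np.asarray(costs, dtype=float)
    sizes = np.asarray(sizes, dtype=float)

    # Hypothetical weighting: machines with lower entropy and cost,
    # and more data, get more influence on the consensus centers.
    validity = 1.0 / (entropies * costs + 1e-12)
    weights = validity * sizes
    weights = weights / weights.sum()

    # Weighted average of corresponding centers (assumes centers are
    # already aligned across machines, e.g. by nearest-center matching).
    stacked = np.stack(local_centers)           # (m, k, d)
    return np.tensordot(weights, stacked, 1)    # (k, d)
```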
Mohammad Ghasemzadeh
Abstract
A Binary Decision Diagram (BDD) is a data structure proven to be compact in representing and efficient in manipulating Boolean formulas. The use of binary decision diagrams in network reliability analysis has already been investigated by several researchers. In this paper, we show how an exact algorithm for network reliability can be improved and implemented efficiently using CUDD, the Colorado University Decision Diagram package.
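For intuition, the sketch below computes exact two-terminal reliability by the factoring (Shannon expansion) principle that BDD-based methods exploit; it is a plain recursive illustration, not the paper's CUDD-based algorithm, and the graph representation is an assumption.

```python
from functools import lru_cache

def st_reliability(n, edges, probs, s, t):
    """Exact two-terminal (s-t) reliability of an undirected network.

    n: number of nodes (0..n-1); edges: tuple of (u, v) pairs; probs:
    tuple of edge up-probabilities.  Factors on each edge in turn
    (Shannon expansion) and memoizes on the node partition induced by
    the edges decided 'up' so far -- the same subproblem sharing a BDD
    package such as CUDD performs automatically.
    """
    def renumber(part):
        seen = {}
        return tuple(seen.setdefault(c, len(seen)) for c in part)

    @lru_cache(maxsize=None)
    def solve(i, part):
        if part[s] == part[t]:
            return 1.0                 # s and t already connected
        if i == len(edges):
            return 0.0                 # all edges decided, still apart
        (u, v), p = edges[i], probs[i]
        down = solve(i + 1, part)      # edge i failed
        merged = tuple(part[v] if c == part[u] else c for c in part)
        up = solve(i + 1, renumber(merged))   # edge i works
        return p * up + (1 - p) * down

    return solve(0, renumber(range(n)))

# Example: a 4-node bridge-less square, all edges 90% reliable.
print(st_reliability(4, ((0, 1), (1, 3), (0, 2), (2, 3)), (0.9,) * 4, 0, 3))
```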
H.6. Pattern Recognition
A. Ebrahimzadeh; M. Ahmadi; M. Safarnejad
Abstract
Classification of heart arrhythmias is an important step in developing devices for monitoring the health of individuals. This paper proposes a three-module system for the classification of electrocardiogram (ECG) beats. These modules are: a denoising module, a feature extraction module, and a classification module. In the first module, the stationary wavelet transform (SWT) is used for noise reduction of the ECG signals. The feature extraction module extracts a balanced combination of the Hermite features and three timing-interval features. Then a number of multi-layer perceptron (MLP) neural networks with different numbers of layers and eight training algorithms are designed. Seven files from the MIT/BIH arrhythmia database are selected as test data, and the performance of the networks, in terms of speed of convergence and classification accuracy, is evaluated. Generally, all of the proposed algorithms have good training times; however, the resilient backpropagation (RP) algorithm shows the best overall training time among the different training algorithms. The conjugate gradient backpropagation (CGP) algorithm shows the best recognition accuracy, about 98.02%, using a small number of features.
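As a rough sketch of the denoising step (not the paper's exact settings; the wavelet, decomposition level, and threshold rule are assumptions), soft-thresholding the detail coefficients of a stationary wavelet transform with PyWavelets looks like this:

```python
import numpy as np
import pywt

def swt_denoise(signal, wavelet="db4", level=3):
    """Denoise a 1-D ECG segment with the stationary wavelet transform.

    The signal length must be divisible by 2**level for pywt.swt.
    """
    coeffs = pywt.swt(signal, wavelet, level=level)   # [(cA, cD), ...]
    # Universal threshold estimated from the finest detail band.
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [(cA, pywt.threshold(cD, thr, mode="soft"))
                for cA, cD in coeffs]
    return pywt.iswt(denoised, wavelet)

# Example on a synthetic noisy beat:
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(np.std(noisy - clean), np.std(swt_denoise(noisy) - clean))
```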
C.3. Software Engineering
E. Ghandehari; F. Saadatjoo; M. A. Zare Chahooki
Abstract
Agent-oriented software engineering (AOSE) is an emerging field in computer science that proposes systematic ideas for the analysis, implementation, and maintenance of multi-agent systems. Despite the various methodologies introduced in agent-oriented software engineering, the main challenges are defects in different aspects of those methodologies. In view of these defects, a combined solution named ARA, drawing on ASPECS, ROADMAP, and AOR, is proposed. The three methodologies were analyzed in a comprehensive analytical framework covering concepts and perceptions, modeling language, process, and pragmatism. Owing to time and resource limitations, sample methodologies were selected for evaluation and integration; this selection was based on the methodologies' usage and their ability to be combined. The evaluation shows that the ROADMAP methodology supports the analysis stages of agent-oriented systems, but its design stage is incomplete because it does not model all semi-agents. On the other hand, since the AOR and ASPECS methodologies support the design stage and inter-agent interactions, a mixed methodology is proposed that combines the analysis stage of ROADMAP with the design stages of AOR and ASPECS. Furthermore, to increase the performance of the proposed methodology, actor, service, capability, and programming models were also added to it. It was also applied in a case study to describe its different phases. The results of this project can pave the way for future agent-oriented methodologies.
J.10.3. Financial
G. Ozdagoglu; A. Ozdagoglu; Y. Gumus; G. Kurt Gumus
Abstract
Predicting false financial statements to detect fraud in companies is an increasing trend in recent studies. Manipulations in financial statements can be discovered by auditors when the related financial records and indicators are analyzed in depth, together with the auditors' experience, in order to create the knowledge needed to develop a decision support system for classifying firms. Auditors may annotate firms' statements as "correct" or "incorrect" to encode their experience, and these annotations, with the related indicators, can then be used in a learning process to generate a model. Once the model is learned and validated, it can be used to predict the class values of new firms. In this research, we attempt to realize this benefit in the context of Turkish firms. The study thus aims at classifying financially correct and false statements of Turkish firms listed on Borsa İstanbul, using selected financial ratios as indicators of success or manipulation. The dataset was drawn from the post-crisis period 2009 to 2013. Three classification methods commonly used in data mining were employed: decision tree, logistic regression, and artificial neural network. According to the results, although all three methods performed well, the artificial neural network had the best performance and outperformed the two other classical methods. Common to all three methods is that they identified the Z-score as the first distinctive indicator for classifying the financial statements under consideration.
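A minimal sketch of this kind of three-way model comparison on labeled financial ratios follows; the features, data, and hyperparameters are illustrative assumptions, not the paper's dataset or settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: rows = firms, columns = financial ratios
# (e.g. liquidity, leverage, profitability, Z-score); y = 1 for
# statements flagged as manipulated by auditors.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "neural network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```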
H.5. Image Processing and Computer Vision
A. Azimzadeh Irani; R. Pourgholi
Abstract
Ray casting is a direct volume rendering technique for visualizing 3D arrays of sampled data. It has vital applications in medical and biological imaging. Nevertheless, it is inherently prone to cluttered classification results: it suffers from overlapping transfer function values and lacks a sufficiently powerful voxel parsing mechanism for object distinction. In this work, we propose an image processing based approach to enhancing the ray casting technique for the object distinction process. The rendering mode is modified to accommodate masking information generated by a K-means based hybrid segmentation algorithm. An effective set of image processing techniques is employed in the construction of a generic segmentation system capable of generating object membership information.
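A minimal sketch of the masking idea, assuming a plain K-means clustering of voxel intensities stands in for the paper's hybrid segmentation (the cluster count and array names are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_voxel_mask(volume, n_objects=3, seed=0):
    """Label each voxel of a 3-D scalar volume with a cluster id.

    The label volume can then be passed to the ray caster so each
    sample is classified by object membership instead of raw intensity.
    """
    voxels = volume.reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=n_objects, n_init=10,
                    random_state=seed).fit_predict(voxels)
    return labels.reshape(volume.shape)

# Example: mask a synthetic 32^3 volume with two embedded intensities.
vol = np.zeros((32, 32, 32), dtype=np.float32)
vol[8:16, 8:16, 8:16] = 0.5
vol[20:28, 20:28, 20:28] = 1.0
mask = kmeans_voxel_mask(vol)
print(np.unique(mask, return_counts=True))
```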
H.3. Artificial Intelligence
R. Yarinezhad; A. Sarabi
Abstract
Vehicular ad hoc networks (VANETs) are a particular type of mobile ad hoc network (MANET) in which the vehicles are the nodes. Rapid topology changes and frequent disconnections make it difficult to design an efficient protocol for routing data among vehicles. In this paper, a new routing protocol based on the glowworm swarm optimization algorithm is presented. Using the glowworm algorithm, the proposed protocol detects the optimal routes between three-way junctions and intersections, and packets are then delivered along the selected routes. Using the glowworm swarm optimization algorithm, which is a distributed heuristic algorithm, the proposed protocol assigns a value to each route from a source to the destination; the route with the highest value is then selected to send messages from the source to the destination. Simulation results show that the proposed algorithm performs better than similar algorithms.
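For orientation, the core glowworm swarm update that such a protocol builds on can be sketched as below. This is generic GSO, not the paper's route-valuation details, and all parameter values are assumptions.

```python
import numpy as np

def gso_step(positions, luciferin, fitness, rho=0.4, gamma=0.6,
             step=0.05, radius=1.0, rng=np.random.default_rng(0)):
    """One glowworm swarm optimization step.

    Each agent's luciferin decays and is recharged by the fitness of
    its position; agents then move probabilistically toward brighter
    neighbors within their sensing radius.
    """
    luciferin = (1 - rho) * luciferin + gamma * fitness(positions)
    new_pos = positions.copy()
    for i, x in enumerate(positions):
        dist = np.linalg.norm(positions - x, axis=1)
        nbrs = np.where((dist < radius) & (luciferin > luciferin[i]))[0]
        if nbrs.size:
            w = luciferin[nbrs] - luciferin[i]
            j = rng.choice(nbrs, p=w / w.sum())     # brighter = likelier
            d = positions[j] - x
            new_pos[i] = x + step * d / (np.linalg.norm(d) + 1e-12)
    return new_pos, luciferin
```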
A.1. General
A. Zarei; M. Maleki; D. Feiz; M. A. Siahsarani kojuri
Abstract
Competitive intelligence (CI) has become one of the major subjects for researchers in recent years. The present research aims to capture part of the CI field by investigating its scientific articles through text mining, in three interrelated steps. In the first step, a total of 1143 articles released between 1987 and 2016 were retrieved by searching the phrase "competitive intelligence" in valid databases and search engines; then, by reviewing the topic, abstract, and main text of the articles and screening them in several rounds, the authors eventually selected 135 relevant articles for the text mining process. In the second step, the data were pre-processed. In the third step, using non-hierarchical cluster analysis (k-means), 5 optimal clusters were obtained based on the Davies–Bouldin index, and a word cloud was drawn for each; the association rules of each cluster were then extracted and analyzed using the support, confidence, and lift indices. The results indicated increased interest in research on CI in recent years and made tangible the respectively strong and weak presence of developed and developing countries in the formation of these scientific products; further, the results showed that information, marketing, and strategy are the main elements of CI which, along with other prerequisites, can lead to CI and, consequently, to economic development, competitive advantage, and sustainability in the market.
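The cluster-count selection described above can be sketched with scikit-learn's Davies-Bouldin score; the document-vector input and the range of k below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def best_k_by_davies_bouldin(X, k_range=range(2, 11), seed=0):
    """Return the k in k_range minimizing the Davies-Bouldin index."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(X)
        scores[k] = davies_bouldin_score(X, labels)  # lower is better
    return min(scores, key=scores.get), scores

# X would be, e.g., a TF-IDF matrix of the 135 articles (random here).
X = np.random.default_rng(0).random((135, 50))
k, scores = best_k_by_davies_bouldin(X)
print("chosen k:", k)
```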
M. Banejad; H. Ijadi
Abstract
This paper presents a method combining the wavelet transform and fuzzy function approximation (FFA) for high impedance fault (HIF) detection in electricity distribution networks. The discrete wavelet transform (DWT) is used as a tool for signal analysis; by studying different mother wavelets, detail levels, and feeder signals, the best case is selected. The DWT is used to extract the best features, and the extracted features serve as the inputs of the FFA system. The FFA system uses input-output pairs to create a function approximation of the features and is able to classify new features. The combined model is used to model the HIF and has a high ability to model different types of HIF. In the proposed method, different kinds of loads, including nonlinear and asymmetric loads, and different HIF types are studied. The results show that the proposed method is able to distinguish between the no-fault and HIF states with high accuracy.
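A minimal sketch of DWT-based feature extraction of the kind described follows; the wavelet choice, decomposition level, and energy features are assumptions, not the paper's selected best case.

```python
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    """Energy of each DWT detail band, a common HIF feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    details = coeffs[1:]                      # [cD_level, ..., cD1]
    return np.array([np.sum(d ** 2) for d in details])

# Example: a distorted current waveform vs. a clean one.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.2, 2000)                 # 0.2 s at 10 kHz
clean = np.sin(2 * np.pi * 50 * t)
faulty = clean + 0.2 * np.sign(clean) * rng.random(t.size)
print(dwt_features(clean))
print(dwt_features(faulty))                   # higher detail-band energy
```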
H.6.3.3. Pattern analysis
M. Imani; H. Ghassemian
Abstract
Hyperspectral sensors provide a large number of spectral bands. This massive and complex data structure of hyperspectral images presents a challenge to traditional data processing techniques. Therefore, reducing the dimensionality of hyperspectral images without losing important information is a very important issue for the remote sensing community. We propose overlap-based feature weighting (OFW) for supervised feature extraction of hyperspectral data. In the OFW method, the feature vector of each pixel of the hyperspectral image is divided into segments, and the weighted mean of the adjacent spectral bands in each segment is calculated as an extracted feature. The smaller the overlap between classes, the greater the class discrimination ability; therefore, the inverse of the overlap between classes in each band (feature) is used as the weight for that band. The superiority of OFW over other supervised feature extraction methods, in terms of classification accuracy and computation time, is established on three real hyperspectral images in the small sample size situation.
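As a rough numerical illustration of the weighting rule above: the overlap measure used here, a simple shared-interval length between per-class value ranges, is an assumption, not the paper's definition.

```python
import numpy as np

def ofw_features(pixels, labels, n_segments):
    """Overlap-weighted mean of adjacent bands in each segment.

    pixels: (n_samples, n_bands) spectra; labels: class per sample.
    Bands whose per-class value ranges overlap less get larger weights.
    """
    n_bands = pixels.shape[1]
    weights = np.empty(n_bands)
    classes = np.unique(labels)
    for b in range(n_bands):
        los = [pixels[labels == c, b].min() for c in classes]
        his = [pixels[labels == c, b].max() for c in classes]
        overlap = max(min(his) - max(los), 0.0)   # shared interval length
        weights[b] = 1.0 / (overlap + 1e-6)       # inverse-overlap weight
    feats = []
    for seg in np.array_split(np.arange(n_bands), n_segments):
        w = weights[seg]
        feats.append(pixels[:, seg] @ (w / w.sum()))
    return np.column_stack(feats)                 # (n_samples, n_segments)
```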
H.6.5.4. Face and gesture recognition
S. Shafeipour Yourdeshahi; H. Seyedarabi; A. Aghagolzadeh
Abstract
Video-based face recognition has attracted significant attention over the past decade in many applications, such as media technology, network security, human-machine interfaces, and automatic access control systems. The usual approach to face recognition is based on the grayscale image produced by combining the three color component images. In this work, we consider the color space as well as the grayscale image in the recognition process. For key frame extraction from a video sequence, the input video is converted into a number of clusters, each of which acts as a linear subspace, and the center of each cluster is taken as the cluster representative. For comparing the key frames, the three popular color spaces RGB, YCbCr, and HSV are used for mathematical representation, and graph-based discriminant analysis is applied for the recognition process. It is also shown that by introducing the intra-class and inter-class similarity graphs into the color space, the problem is reduced to determining a color component combination vector and a mapping matrix. We introduce an iterative algorithm to determine this optimal vector and matrix simultaneously. Finally, the results for the three color spaces and the grayscale image are compared with those obtained from other available methods. Our experimental results demonstrate the effectiveness of the proposed approach.
H.3.8. Natural Language Processing
L. Jafar Tafreshi; F. Soltanzadeh
Abstract
Named Entity Recognition (NER) is an information extraction technique that identifies named entities in a text. Three kinds of methods have conventionally been used to extract named entities from a text: rule-based, machine-learning-based, and hybrids of the two. Machine-learning-based methods perform well on the Persian language if they are trained with good features. To obtain good performance in Conditional Random Field based Persian Named Entity Recognition, several syntactic features based on dependency grammar, along with some morphological and language-independent features, were designed in order to extract suitable features for the learning phase. In this implementation, the designed features were applied to a Conditional Random Field to build our model. To evaluate our system, the Persian syntactic dependency treebank, with about 30,000 sentences, prepared at the NOOR Islamic sciences computer research center, was used. This treebank has named-entity tags such as Person, Organization, and Location. The results of this study show that our approach achieved 86.86% precision, 80.29% recall, and 83.44% F-measure, which are relatively higher than the values reported for other Persian NER methods.
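A minimal sketch of CRF-based NER training with the sklearn-crfsuite package follows; the token features shown are generic stand-ins, not the paper's dependency-grammar features, and the toy data is invented.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Simple per-token features; the paper adds dependency-based ones."""
    word = sent[i]
    return {
        "word": word,
        "is_title": word.istitle(),
        "suffix3": word[-3:],
        "prev": sent[i - 1] if i > 0 else "<BOS>",
        "next": sent[i + 1] if i < len(sent) - 1 else "<EOS>",
    }

# Toy training data: parallel lists of tokens and BIO tags.
sents = [["Ali", "works", "at", "NOOR", "center"]]
tags = [["B-PER", "O", "O", "B-ORG", "I-ORG"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X, tags)
print(crf.predict(X))
```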
H.3. Artificial Intelligence
F. Fadaei Noghani; M. Moattar
Abstract
With the rise of technology, the possibility of fraud in areas such as banking has increased. Credit card fraud is a crucial problem in banking, and its danger is ever increasing. This paper proposes an advanced data mining method that considers both feature selection and decision cost to enhance the accuracy of credit card fraud detection. After selecting the best and most effective features using an extended wrapper method, ensemble classification is performed. The extended feature selection approach comprises prior feature filtering and a wrapper approach using a C4.5 decision tree. Ensemble classification with cost-sensitive decision trees is then performed in a decision forest framework. A locally gathered fraud detection dataset is used to evaluate the proposed method. The method is assessed using accuracy, recall, and F-measure as evaluation metrics and compared with basic classification algorithms, including ID3, J48, Naïve Bayes, Bayesian Network, and NB tree. Experiments show that, with the F-measure as the evaluation metric, the proposed approach yields a 1.8 to 2.4 percent performance improvement compared with the other classifiers.
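A minimal sketch of cost-sensitive decision trees in a forest, assuming misclassification costs enter through class weights; the cost values and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical imbalanced transactions: roughly 2% fraud.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (rng.random(5000) < 0.02).astype(int)
X[y == 1] += 1.0                    # make fraud weakly separable
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Missing a fraud is assumed 20x more costly than a false alarm.
tree = DecisionTreeClassifier(class_weight={0: 1, 1: 20}, max_depth=6)
forest = BaggingClassifier(tree, n_estimators=50, random_state=0)
forest.fit(X_tr, y_tr)
print("F-measure:", f1_score(y_te, forest.predict(X_te)))
```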
H.5. Image Processing and Computer Vision
V. Patil; T. Sarode
Abstract
Image hashing tolerates compression, enhancement, and other signal processing operations on digital images, which are usually acceptable manipulations, whereas cryptographic hash functions are very sensitive to even single-bit changes in an image. An image hash is a summary of important quality features in quantized form. In this paper, we propose a novel image hashing algorithm for authentication that is more robust against various kinds of attacks. In the proposed approach, a short hash code is obtained using the minimum-magnitude Center-Symmetric Local Binary Pattern (CSLBP). The desirable discrimination power of the image hash is maintained by a modified Local Binary Pattern (LBP) based edge weight factor generated from the gradient image. The proposed hashing method extracts texture features using CSLBP, and the discrimination power of the hash is increased by the weight factor applied during CSLBP histogram construction. The generated histogram is compressed to 1/4 of the original histogram by the minimum-magnitude CSLBP. The proposed method has a twofold advantage: a small hash length and acceptable discrimination power. Experimental results are reported in terms of Hamming distance, TPR, FPR, and ROC curves, and show that the proposed method achieves a fair classification of content-preserving versus content-changing images.
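For reference, the basic CSLBP code and its 16-bin histogram can be sketched as follows; this is plain CSLBP without the paper's weighting or minimum-magnitude compression, and the threshold value is an assumption.

```python
import numpy as np

def cslbp_histogram(img, thr=0.01):
    """16-bin Center-Symmetric LBP histogram of a grayscale image.

    Each pixel compares 4 center-symmetric neighbor pairs, giving a
    4-bit code in [0, 15].  Expects intensities scaled to [0, 1].
    """
    img = img.astype(np.float64)
    interior = img[1:-1, 1:-1]
    # Four center-symmetric pairs (N/S, NE/SW, E/W, SE/NW).
    pairs = [
        (img[:-2, 1:-1], img[2:, 1:-1]),
        (img[:-2, 2:],   img[2:, :-2]),
        (img[1:-1, 2:],  img[1:-1, :-2]),
        (img[2:, 2:],    img[:-2, :-2]),
    ]
    code = np.zeros_like(interior, dtype=np.int32)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > thr).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=16)
    return hist / hist.sum()
```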
B. Computer Systems Organization
F. Hoseini; A. Shahbahrami; A. Yaghoobi Notash
Abstract
One of the most important and typical applications of wireless sensor networks (WSNs) is target tracking. Although organizing large-scale WSNs into clusters benefits target tracking, tracking a moving target in cluster-based WSNs suffers from a boundary problem. The main goal of this paper is to introduce an efficient and novel mobility management protocol, Target Tracking Based on Virtual Grid (TTBVG), which integrates on-demand dynamic clustering into a cluster-based WSN for target tracking. The protocol converts on-demand dynamic clusters into scalable cluster-based WSNs using boundary nodes, and facilitates the collaboration of sensors around clusters. In this manner, each sensor node has a probability of becoming a cluster head and perceives the trade-off between energy consumption and local sensor collaboration in cluster-based sensor networks. The simulation results of this study demonstrate the efficiency of the proposed protocol in both one-hop and multi-hop cluster-based sensor networks.
B. Hassanpour; N. Abdolvand; S. Rajaee Harandi
Abstract
The rapid development of technology and the Internet and the growth of electronic commerce have led to the emergence of recommender systems, which assist users in finding and selecting their desired items. The accuracy of the advice is one of the main challenges of these systems. Given the capability of fuzzy systems in determining the borders of user interests, it seems reasonable to combine them with social network information and the factor of time. Hence, this study, for the first time, assesses the efficiency of recommender systems that combine fuzzy logic, longitudinal data, and social network information such as tags, friendships, and group memberships. The impact of the proposed algorithm on the accuracy of recommender systems was studied by specifying the neighborhood and the borders between users' preferences over time. The results reveal that using longitudinal data and social network information in memory-based recommender systems improves their accuracy.
H.3. Artificial Intelligence
A.R. Hatamlou; M. Deljavan
Abstract
Gold price forecasting is of great importance, and many models have been presented by researchers to forecast the gold price. It seems that although different models can forecast the gold price under different conditions, new factors affecting the forecast are of significant importance for increasing its accuracy. In this paper, factors different from those of previous studies on gold price forecasting were examined. In terms of time span, the collected data were divided into three groups: daily, monthly, and annual. Tests conducted with the new factors indicate accuracy improvements of up to 2% for the neural network methods, 7.3% for the time series method, and 5.6% for the linear regression method.
H.3.2.2. Computer vision
H. Hosseinpour; Seyed A. Moosavie nia; M. A. Pourmina
Abstract
Virtual view synthesis is an essential part of computer vision and 3D applications. A high-quality depth map is the main requirement of virtual view synthesis, because the resolution of the depth image is low compared with that of the corresponding color image. In this paper, an efficient and reliable method based on the gradual omission of outliers is proposed to compute reliable depth values. In the proposed method, depth values that are far from the mean of the depth values are omitted gradually. Simulation results show that, compared with other state-of-the-art methods, on average the PSNR is 2.5 dB (8.1%) higher, the SSIM is 0.028 (3%) higher, the UNIQUE measure is 0.021 (2.4%) higher, the running time is 8.6 s (6.1%) lower, and wrong pixels are 1.97 (24.8%) fewer.
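The gradual omission step can be illustrated with a short sketch; the shrinking threshold schedule and iteration count are assumptions, not the paper's exact rule.

```python
import numpy as np

def gradual_outlier_omission(depths, n_rounds=5, start_k=3.0, end_k=1.0):
    """Estimate a reliable depth by repeatedly dropping far-from-mean values.

    depths: candidate depth values for one pixel (e.g. from several
    hypotheses).  The cutoff tightens from start_k to end_k standard
    deviations over the rounds.
    """
    vals = np.asarray(depths, dtype=float)
    for k in np.linspace(start_k, end_k, n_rounds):
        if vals.size <= 1:
            break
        mu, sd = vals.mean(), vals.std()
        vals = vals[np.abs(vals - mu) <= k * sd + 1e-12]
    return vals.mean()

print(gradual_outlier_omission([2.0, 2.1, 1.9, 2.05, 7.5]))  # ~2.0
```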
Sekine Asadi Amiri; Ehsan Moudi
Abstract
One of the most common positioning errors in panoramic radiography is the palatoglossal air space above the apices of the roots of the maxillary teeth, which causes a radiolucency obscuring those apices. In the case of this positioning error, the imaging must be repeated, exposing the patient to radiation again. To avoid repeatedly exposing the patient to harmful X-rays, it is necessary to improve the panoramic images. This paper presents a new automatic panoramic image enhancement method that reduces the effect of this positioning error. Experimental results indicate that the enhanced panoramic images provide adequate diagnostic information, especially in the maxillary sinus region; hence, this technique dispenses with the need to repeat the X-ray imaging.
H.3. Artificial Intelligence
H. Motameni
Abstract
This paper proposes a method for solving multi-objective problems using an improved Particle Swarm Optimization. We propose leader particles that guide the other particles inside the problem domain. Two techniques are suggested for the selection and deletion of such particles to improve the optimal solutions: the first is based on the mean of the m optimal particles, and the second appoints a leader particle for every n particles found. An intensity criterion is used to delete particles in both techniques. The proposed techniques were evaluated on three standard tests for multi-objective evolutionary optimization problems. The evaluation criteria in this paper are the number of particles in the Pareto-optimal set, error, and uniformity. The results show that the proposed method finds more optimal particles, with higher intensity and less error, than basic MOPSO, SIGMA, CMPSO, NSGA-II, microGA, and PAES, and can thus serve as a proper technique for solving multi-objective optimization problems.
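The leader-guided velocity update at the heart of such a MOPSO variant can be sketched as follows; the coefficients and the way a leader is drawn from the Pareto archive are assumptions.

```python
import numpy as np

def mopso_velocity_step(x, v, pbest, archive, w=0.4, c1=1.5, c2=1.5,
                        rng=np.random.default_rng(0)):
    """One velocity/position update with a leader from the Pareto archive.

    x, v, pbest: (n_particles, dim) arrays; archive: (n_leaders, dim)
    non-dominated solutions.  Each particle follows its personal best
    and a randomly chosen leader.
    """
    n = x.shape[0]
    leaders = archive[rng.integers(0, len(archive), size=n)]
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (leaders - x)
    return x + v, v
```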
C. Software/Software Engineering
H. Motameni
Abstract
To evaluate and predict component-based software security, a two-dimensional model of software security based on Stochastic Petri Nets is proposed in this paper. In this approach, software security is modeled through the graphical presentation ability of Petri nets, and quantitative prediction is provided by the evaluation capability of the Stochastic Petri Net and the computing power of the Markov chain. Each vulnerable component is modeled by a Stochastic Petri net and two parameters: the Successful Attack Probability (SAP) and the Vulnerability Volume of each component with respect to another component. The second parameter, as the second dimension of the security evaluation, is a metric added to the model to improve the accuracy of the system security prediction. An isomorphic Markov chain is obtained from the corresponding SPN model, and the security prediction is calculated from the probability distribution of the Markov chain in the steady state. To identify and trace back the critical points of system security, a sensitivity analysis method is applied by differentiating the security prediction equation. This provides the possibility of investigating and comparing different solutions with the target system in the design phase.
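The steady-state computation that underlies the prediction step can be illustrated numerically: solve pi Q = 0 with sum(pi) = 1 for the generator matrix Q of the Markov chain derived from the SPN. The 3-state Q below is a made-up example, not a model from the paper.

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution pi of a CTMC: pi @ Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])       # stack normalization row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical 3-state generator (rows sum to zero): secure,
# under attack, compromised.
Q = np.array([[-0.2,  0.2,  0.0],
              [ 0.5, -0.7,  0.2],
              [ 0.1,  0.0, -0.1]])
print(steady_state(Q))   # long-run fraction of time in each state
```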
H.3.2.4. Education
Seyed M. H. Hasheminejad; M. Sarvmili
Abstract
Nowadays, new methods are required to take advantage of the rich and extensive gold mine of data created, in particular, by educational systems. Data mining algorithms have been applied in educational systems, especially e-learning systems, due to their broad usage. Providing a model to predict students' final results in an educational course is one reason for using data mining in educational systems. In this paper, we propose a novel rule-based classification method, called S3PSO (Students' Performance Prediction based on Particle Swarm Optimization), to extract hidden rules that can be used to predict students' final outcomes. The proposed S3PSO method is based on the Particle Swarm Optimization (PSO) algorithm in a discrete space, and its particle encoding yields rules that are more interpretable, even for ordinary users such as instructors. In S3PSO, the support, confidence, and comprehensibility criteria are used to calculate the fitness of each rule. Comparing the results obtained from S3PSO with those of other rule-based classification methods such as CART, C4.5, and ID3 reveals that S3PSO improves the fitness measure by 31% on the Moodle data set. Additionally, comparison with other classification methods such as SVM, KNN, Naïve Bayes, Neural Network, and APSO reveals that S3PSO improves accuracy by 9% on the Moodle data set and yields promising results for predicting students' final outcomes.
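The three rule-fitness criteria named above can be illustrated concretely; the comprehensibility definition and the way the criteria are combined below are assumptions, since the paper defines its own fitness function.

```python
import numpy as np

def rule_fitness(X, y, conditions, predicted_class, max_terms=10):
    """Support, confidence, and a simple comprehensibility for one rule.

    conditions: dict {feature_index: required_value} forming the rule
    antecedent; predicted_class: the rule consequent.
    """
    match = np.ones(len(X), dtype=bool)
    for f, val in conditions.items():
        match &= X[:, f] == val
    covered = match.sum()
    correct = (match & (y == predicted_class)).sum()
    support = correct / len(X)
    confidence = correct / covered if covered else 0.0
    comprehensibility = 1.0 - len(conditions) / max_terms  # shorter = clearer
    return support * confidence * comprehensibility

# Toy categorical data: columns = quiz passed, forum activity level.
X = np.array([[1, 2], [1, 1], [0, 2], [1, 2], [0, 0]])
y = np.array([1, 0, 0, 1, 0])                 # 1 = passes the course
print(rule_fitness(X, y, {0: 1, 1: 2}, 1))
```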
I.3.7. Engineering
B. Hosseinzadeh Samani; H. HouriJafari; H. Zareiforoush
Abstract
In this study, energy consumption in the food and beverage industries of Iran was investigated and modeled using an artificial neural network (ANN), response surface methodology (RSM), and a genetic algorithm (GA). First, the input data to the model were calculated according to statistical sources, balance sheets, and the method proposed in this paper. Diesel and liquefied petroleum gas were found to have, respectively, the highest and lowest shares of energy consumption among the carriers. For each of the evaluated energy carriers (diesel, kerosene, fuel oil, natural gas, electricity, liquefied petroleum gas, and gasoline), the best-fitting model was selected after averaging the runs of the developed models. Finally, the developed models, representing the energy consumption of the food and beverage industries by each energy carrier, were combined into a finalized model using the Simulink toolbox of Matlab. The data analysis indicated that the consumption of natural gas in Iran's food and beverage industries is increasing, while decreasing trends were estimated for fuel oil and liquefied petroleum gas.
H.3. Artificial Intelligence
S. Adeli; P. Moradi
Abstract
Since most organizations present their services electronically, the number of functionally equivalent web services is increasing, as is the number of users that employ them. Consequently, so much information is generated by users and web services that users have trouble finding their appropriate web services. It is therefore necessary to provide a recommendation method for predicting the quality of web services (QoS) and recommending web services. Most existing collaborative filtering approaches do not operate efficiently in recommending web services because they ignore some effective factors, such as the dependency among users/web services, the popularity of users/web services, and the location of web services/users. In this paper, a web service recommendation method called Popular-Dependent Collaborative Filtering (PDCF) is proposed. The proposed method handles the QoS differences experienced by users, as well as the dependency of users on a specific web service, via a user/web service dependency factor. Additionally, the user/web service popularity factor is considered in the PDCF, which significantly enhances its effectiveness. We also propose a location-aware variant called LPDCF, which incorporates the location of web services into the recommendation process of the PDCF. A set of experiments was conducted on two real-world datasets to evaluate the performance of the PDCF and to investigate the effect of the matrix factorization model on its efficiency. The results indicate that the PDCF outperforms other competing methods in most cases.
H.3.2.2. Computer vision
Seyyed A. Hoseini; P. Kabiri
Abstract
In this paper, a feature-based technique for camera pose estimation in a sequence of wide-baseline images is proposed. Camera pose estimation is an important issue in many computer vision and robotics applications, such as augmented reality and visual SLAM. The proposed method can track images captured by a hand-held camera in room-sized workspaces with a maximum scene depth of 3-4 meters. The system can be used in unknown environments with no additional information available from the outside world, except for the first two images, which are used for initialization. Pose estimation is performed using only natural feature points extracted and matched in successive images. In wide-baseline images, unlike consecutive frames of a video stream, the displacement of feature points between consecutive images is large and hence cannot be traced easily using patch-based methods. To handle this problem, a hybrid strategy is employed to obtain accurate feature correspondences: initial feature correspondences are first found using the similarity of their descriptors, and outlier matchings are then removed by applying the RANSAC algorithm. Furthermore, to provide the required set of feature matchings, a mechanism based on the side results of the robust estimator is employed. The proposed method is applied to indoor real data with images in VGA quality (640×480 pixels); on average, the translation error of the camera pose is less than 2 cm, which indicates the effectiveness and accuracy of the proposed approach.
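A minimal sketch of the descriptor-matching-plus-RANSAC stage with OpenCV follows; ORB is an assumed descriptor here, the intrinsics are placeholders, and the paper's own detector and refinement steps are not reproduced.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate R, t between two wide-baseline grayscale images.

    1) match ORB descriptors, 2) reject outliers with RANSAC inside
    findEssentialMat, 3) recover the relative camera pose.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

# Placeholder intrinsics for a 640x480 camera (fx, fy, cx, cy assumed).
K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
```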