Original/Review Paper
J.10.3. Financial
S. Beigi; M.R. Amin Naseri
Abstract
Due to today’s advancements in technology and business, fraud detection has become a critical component of financial transactions. Given the vast amounts of data in large datasets, it is increasingly difficult to detect fraudulent transactions manually. In this research, we propose a combined method using both data mining and statistical techniques, utilizing feature selection, resampling and cost-sensitive learning for credit card fraud detection. In the first step, useful features are identified using a genetic algorithm. Next, the optimal resampling strategy is determined based on design of experiments (DOE) and response surface methodologies. Finally, the cost-sensitive C4.5 algorithm is used as the base learner in the AdaBoost algorithm. Using a real-time data set, the results show that applying the proposed method significantly reduces the misclassification cost, by at least 14%, compared with decision tree, naïve Bayes, Bayesian network, neural network and artificial immune system classifiers.
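The cost-sensitive objective this abstract optimizes can be illustrated with a minimal sketch; the cost values and toy labels below are hypothetical, not the paper's actual cost matrix:

```python
# Hypothetical illustration of the cost-sensitive objective: with asymmetric
# costs, total misclassification cost -- not raw accuracy -- is what matters.
def misclassification_cost(y_true, y_pred, cost_fn=10.0, cost_fp=1.0):
    """Sum asymmetric costs: missing a fraud (FN) is costlier than a false alarm (FP)."""
    cost = 0.0
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 0:      # fraud missed: false negative
            cost += cost_fn
        elif t == 0 and p == 1:    # legitimate flagged: false positive
            cost += cost_fp
    return cost

# Two classifiers with the same accuracy (4/6) can differ sharply in cost.
y_true = [1, 1, 0, 0, 0, 0]
pred_a = [0, 0, 0, 0, 0, 0]   # misses both frauds
pred_b = [1, 1, 1, 1, 0, 0]   # catches both frauds, two false alarms
print(misclassification_cost(y_true, pred_a))  # 20.0
print(misclassification_cost(y_true, pred_b))  # 2.0
```

A cost-sensitive learner such as the paper's C4.5 base classifier is trained to minimize exactly this kind of weighted loss rather than error rate.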
Original/Review Paper
H.5. Image Processing and Computer Vision
S. Mavaddati
Abstract
In scientific and commercial fields associated with modern agriculture, the categorization of different rice types and the determination of their quality are very important. Various image processing algorithms have been applied in recent years to detect different agricultural products. In this paper, the problem of rice classification and quality detection is addressed using model learning concepts, including sparse representation and dictionary learning techniques, to yield over-complete models in this processing field. Color-based, statistical and texture-based features are used to represent the structural content of rice varieties. To achieve the desired results, different features are extracted from the recorded images and used to learn representative models of the rice samples. In addition, sparse principal component analysis and sparse structured principal component analysis are employed to reduce the dimension of the classification problem, leading to an accurate detector with less computational time. The results of the proposed classifier based on the learned models are compared with those obtained from a neural network and a support vector machine. Simulation results, along with a meaningful statistical test, show that the proposed algorithm, based on dictionaries learned from the combined features, can detect the type of a rice grain and determine its quality precisely.
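The dictionary-based classification idea can be sketched minimally: assign a sample to the class whose learned dictionary reconstructs it with the smallest residual. The dictionaries and data below are random stand-ins, and ordinary least squares replaces a true sparse coder (e.g., OMP), so this is an illustration of the decision rule only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-class dictionaries (columns = atoms); in the paper these would be
# learned from color, statistical and texture features of rice images.
dictionaries = {
    "class_A": rng.normal(size=(8, 4)),
    "class_B": rng.normal(size=(8, 4)),
}

def classify(x, dicts):
    """Assign x to the class whose dictionary reconstructs it with least residual.
    Least squares stands in here for a proper sparse coding step."""
    best, best_res = None, np.inf
    for label, D in dicts.items():
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
        res = np.linalg.norm(x - D @ coeffs)
        if res < best_res:
            best, best_res = label, res
    return best

# A sample synthesized from class_A's atoms is assigned back to class_A.
x = dictionaries["class_A"] @ np.array([1.0, -0.5, 0.0, 2.0])
print(classify(x, dictionaries))  # class_A
```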
Original/Review Paper
H.3. Artificial Intelligence
M. Kurmanji; F. Ghaderi
Abstract
Despite considerable advances in recognizing hand gestures from still images, many challenges remain in classifying hand gestures in videos. The latter task comes with additional difficulties, including higher computational complexity and the arduous task of representing temporal features. Hand movement dynamics, represented by temporal features, have to be extracted by analyzing all the frames of a video. So far, both 2D and 3D convolutional neural networks have been used to capture the temporal dynamics of video frames. 3D CNNs can extract the changes across consecutive frames and tend to be more suitable for video classification, but they usually require more computation time. On the other hand, using techniques such as tiling, it is possible to aggregate all the frames into a single matrix while preserving the temporal and spatial features. In this way, 2D CNNs, which are inherently simpler than 3D CNNs, can be used to classify the video instances. In this paper, we compare the application of 2D and 3D CNNs for representing temporal features and classifying hand gesture sequences. Additionally, we propose a two-stage, two-stream architecture that efficiently combines color and depth modalities and 2D and 3D CNN predictions. The effect of different types of augmentation techniques is also investigated. Our results confirm that appropriate use of 2D CNNs outperforms a 3D CNN implementation in this task.
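The tiling step can be sketched as follows; the grid size and frame dimensions are arbitrary choices for illustration, not the paper's configuration:

```python
import numpy as np

def tile_frames(frames, grid=(4, 4)):
    """Aggregate a fixed number of video frames into one 2D mosaic, so a
    2D CNN can read temporal order as spatial layout."""
    rows, cols = grid
    h, w = frames[0].shape
    mosaic = np.zeros((rows * h, cols * w), dtype=frames[0].dtype)
    for i, f in enumerate(frames[:rows * cols]):
        r, c = divmod(i, cols)          # frame index -> grid cell, row-major
        mosaic[r*h:(r+1)*h, c*w:(c+1)*w] = f
    return mosaic

# 16 grayscale frames of 32x32 become one 128x128 input image.
frames = [np.full((32, 32), t, dtype=np.uint8) for t in range(16)]
print(tile_frames(frames).shape)  # (128, 128)
```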
Original/Review Paper
C.3. Software Engineering
N. Rezaee; H. Momeni
Abstract
Model checking is an automatic technique for software verification in which all reachable states are generated from an initial state in order to find errors and desirable patterns. In the model checking approach, the behavior and structure of the system must be modeled. A graph transformation system is a graphical formal modeling language for specifying and modeling systems. However, modeling large systems with a graph transformation system suffers from the state space explosion problem, which usually requires huge amounts of computational resources. In this paper, we propose a hybrid meta-heuristic approach to this search problem in graph transformation systems, because meta-heuristic algorithms are efficient at traversing the state graphs of large systems. Our approach, which uses Artificial Bee Colony and Simulated Annealing, replaces full state space generation by producing only the part of it needed to check safety properties and find errors (e.g., deadlocks). The experimental results show that our proposed approach is more efficient and accurate than other approaches.
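The simulated annealing half of such a hybrid can be sketched on a toy transition system; the state names, fitness function and parameters are illustrative, and the Artificial Bee Colony component is omitted:

```python
import math, random

# Toy transition system: each state maps to its successors; a state with no
# successors is a deadlock. Only a path toward it is explored, never the
# full state space.
transitions = {
    "s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"],
    "s3": ["s4"], "s4": [],   # s4 has no outgoing transitions: a deadlock
}

def fitness(state):
    """Fewer outgoing transitions = closer to a deadlock (lower is better)."""
    return len(transitions[state])

def sa_deadlock_search(start="s0", temp=2.0, cooling=0.95, steps=200, seed=1):
    random.seed(seed)
    current = start
    for _ in range(steps):
        if not transitions[current]:
            return current                      # deadlock found
        candidate = random.choice(transitions[current])
        delta = fitness(candidate) - fitness(current)
        # Accept improving moves always; worse moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        temp *= cooling
    return None

print(sa_deadlock_search())  # s4
```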
Original/Review Paper
D. Data
M. Zarezade; E. Nourani; Asgarali Bouyer
Abstract
Community structure is vital for discovering the important structures and potential properties of complex networks. In recent years, improving the quality of local community detection approaches has become a hot topic in the study of complex networks, owing to their linear time complexity and applicability to large-scale networks. However, these methods have many shortcomings, such as instability, low accuracy and randomness. The G-CN algorithm is a local method that uses the same label propagation as the LPA method; unlike LPA, however, only the labels of boundary nodes are updated at each iteration, which reduces its execution time. It nevertheless suffers from the resolution limit and low accuracy. To overcome these problems, this paper proposes an improved community detection method called SD-GCN, which uses hybrid node scoring and synchronous label updating of boundary nodes, along with disabling random label updating in the initial updates. In the first phase, it updates the labels of boundary nodes synchronously, using a score based on degree centrality and common-neighbor measures. In addition, we define a new method for merging communities in the second phase that is faster than modularity-based methods. An extensive set of experiments is conducted to evaluate the performance of SD-GCN on small and large-scale real-world networks as well as artificial networks. These experiments verify a significant improvement in the accuracy and stability of community detection, together with a shorter execution time at linear time complexity.
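The idea of scoring a boundary node's candidate labels by degree and common neighbors can be sketched minimally; the graph, the exact score formula and the tie-breaking below are illustrative, not SD-GCN's actual definitions:

```python
# Toy undirected graph as adjacency sets, with an initial label per node.
graph = {
    "a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"a", "c", "e"}, "e": {"d", "f"}, "f": {"e"},
}
labels = {"a": 1, "b": 1, "c": 1, "d": 1, "e": 2, "f": 2}

def node_score(node, neighbor):
    """Illustrative score: neighbor's degree plus number of shared neighbors."""
    return len(graph[neighbor]) + len(graph[node] & graph[neighbor])

def best_label(node):
    """Pick the label whose supporting neighbors have the highest total score."""
    totals = {}
    for nb in graph[node]:
        totals[labels[nb]] = totals.get(labels[nb], 0) + node_score(node, nb)
    return max(totals, key=totals.get)

# Boundary node "e" touches both communities; the scores decide its label.
print(best_label("e"))  # 1
```

In a synchronous update, this rule is evaluated for all boundary nodes against the current labeling before any label is rewritten, which removes the order-dependence of asynchronous label propagation.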
Original/Review Paper
B.3. Communication/Networking and Information Technology
A. Azimi Kashani; M. Ghanbari; A. M. Rahmani
Abstract
Vehicular ad hoc networks (VANETs) are an emerging technology with extensive capabilities in various applications, including vehicle safety, traffic management and intelligent transportation systems. Considering the high mobility of vehicles and their inhomogeneous distribution, designing an efficient routing protocol is necessary. Given that a road is crowded at some sections and sparse at others, the routing protocol should be able to make decisions dynamically. On the other hand, VANET environments are vulnerable during data transmission. Broadcast routing, similar to opportunistic routing, can offer better efficiency than other protocols. In this paper, a fuzzy logic opportunistic routing (FLOR) protocol is presented in which the packet rebroadcasting decision is made by a fuzzy logic system with three input parameters: packet advancement, local density, and the number of duplicate delivered packets. The rebroadcasting procedure uses the values of these parameters as inputs to the fuzzy logic system to resolve the multicasting issue in both crowded and sparse zones. The NS-2 simulator is used to evaluate the performance of the proposed FLOR protocol in terms of packet delivery ratio, end-to-end delay, and network throughput, compared with existing protocols such as FLOODING, P-PERSISTENCE and FUZZBR. The performance comparison also emphasizes effective utilization of resources. Simulations in a highway environment show that the proposed protocol offers better QoS efficiency than the above published methods.
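A fuzzy rebroadcast decision over these three inputs can be sketched with triangular membership functions; the breakpoints and the single rule below are illustrative, not FLOR's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rebroadcast_probability(advancement, density, duplicates):
    """Toy Mamdani-style inference over the three inputs named in the abstract,
    all normalized to [0, 1]. Breakpoints and the rule are made up."""
    high_adv = tri(advancement, 0.3, 1.0, 1.7)
    low_density = tri(density, -0.7, 0.0, 0.7)
    few_dups = tri(duplicates, -0.7, 0.0, 0.7)
    # Rule: rebroadcast when advancement is high AND (density is low OR few duplicates).
    return min(high_adv, max(low_density, few_dups))

# A packet that advances far in a sparse zone is a strong rebroadcast candidate.
print(round(rebroadcast_probability(0.9, 0.2, 0.1), 3))  # 0.857
```

A node would rebroadcast when this degree of support exceeds some threshold, suppressing redundant copies in dense zones while keeping sparse zones connected.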
Original/Review Paper
H.3.8. Natural Language Processing
L. Jafar Tafreshi; F. Soltanzadeh
Abstract
Named entity recognition (NER) is an information extraction technique that identifies named entities in a text. Three approaches have conventionally been used to extract named entities from text: rule-based, machine-learning-based, and hybrids of the two. Machine-learning-based methods perform well on the Persian language if they are trained with good features. To obtain good performance in conditional random field (CRF)-based Persian NER, several syntactic features based on dependency grammar, along with some morphological and language-independent features, have been designed to provide suitable features for the learning phase. In this implementation, the designed features are applied to a conditional random field to build our model. To evaluate our system, the Persian syntactic dependency treebank, with about 30,000 sentences, prepared at the NOOR Islamic science computer research center, is used. This treebank has named-entity tags such as Person, Organization and Location. The results of this study show that our approach achieves 86.86% precision, 80.29% recall and an 83.44% F-measure, which are relatively higher than the values reported for other Persian NER methods.
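The kind of per-token feature dictionary a CRF consumes can be sketched as follows; the features, the English toy sentence and the dependency head indices are hypothetical stand-ins, not the paper's Persian feature set:

```python
# Hypothetical feature extractor of the kind fed to a CRF: morphological clues
# plus a dependency-based feature (the token's syntactic head).
def token_features(sentence, i, dep_heads):
    tok = sentence[i]
    return {
        "word": tok,
        "is_capitalized": tok[:1].isupper(),   # weak cue in Persian script, shown for English
        "suffix3": tok[-3:],                   # simple morphological feature
        "prev_word": sentence[i - 1] if i > 0 else "<BOS>",
        "dep_head": sentence[dep_heads[i]] if dep_heads[i] >= 0 else "<ROOT>",
    }

sentence = ["Ali", "works", "at", "Noor", "center"]
dep_heads = [1, -1, 1, 4, 2]   # illustrative head indices, not a real parse
print(token_features(sentence, 3, dep_heads)["dep_head"])  # center
```

Each sentence becomes a sequence of such dictionaries, and the CRF learns weights over feature-label and label-label combinations.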
Original/Review Paper
H.6.3.2. Feature evaluation and selection
E. Enayati; Z. Hassani; M. Moodi
Abstract
Breast cancer is one of the most common cancers in the world. Early detection of cancer significantly reduces morbidity rates and treatment costs. Mammography is a known, effective method for diagnosing breast cancer. One way to identify mammography screening behavior is to evaluate women's awareness of participating in mammography screening programs. Today, intelligent systems can identify the main factors behind a specific outcome; these can help experts in a wide range of areas, especially in health domains such as prevention, diagnosis and treatment. In this paper, we use a hybrid model called H-BwoaSvm, in which BWOA is used to detect the factors affecting mammography screening behavior and an SVM is used for classification. Our model is applied to a data set collected in a segmental analytical descriptive study of 2,256 women. The proposed model operates on the data set with 82.27 and 98.89 percent accuracy and selects the features that affect mammography screening behavior.
Original/Review Paper
H. Haghshenas Gorgani; A. R. Jahantigh Pak
Abstract
Identifying the factors affecting the teaching quality of engineering drawing, and the interactions between them, is necessary in order to determine which interventions will improve the quality of teaching this course. Since this is a multi-criteria decision making (MCDM) problem and, on the other hand, we are dealing with human factors, the fuzzy DEMATEL method is suggested for solving it. Also, because DEMATEL analysis does not yield weights for the criteria, it is combined with the ANP, and a hybrid fuzzy DEMATEL-ANP (FDANP) methodology is used. The results of investigating 7 dimensions and 21 criteria show that the quality of teaching this course increases if updated teaching methods and contents are used, the evaluation policy is tailored to the course, the course professor and his/her assistants are available to correct students' mistakes, and an interactive system based on student comments is in place.
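The DEMATEL core computation (normalizing the direct-influence matrix D and forming the total-relation matrix T = D(I - D)^-1) can be sketched with made-up influence values; a real study would first aggregate and defuzzify expert judgments:

```python
import numpy as np

# Illustrative 3x3 direct-influence matrix among teaching-quality factors.
A = np.array([[0.0, 3.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

# DEMATEL: normalize by the largest row sum, then T = D (I - D)^-1.
D = A / A.sum(axis=1).max()
T = D @ np.linalg.inv(np.eye(3) - D)

# Row sums (influence given, D) and column sums (influence received, R)
# rank the factors: D+R is prominence, D-R separates causes from effects.
prominence = T.sum(axis=1) + T.sum(axis=0)
relation   = T.sum(axis=1) - T.sum(axis=0)
print(np.round(prominence, 2))
```

The ANP step then uses the (normalized) total-relation matrix as a supermatrix to derive criteria weights, which plain DEMATEL does not provide.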
Original/Review Paper
F.3.3. Graph Theory
A. Jalili; M. Keshtgari
Abstract
Software-defined networking (SDN) is a decoupled architecture that enables administrators to build a customizable and manageable network. Although the decoupled control plane provides flexible management and facilitates the task of operating the network, it is the vulnerable point of failure in SDN. To achieve a reliable control plane, multiple controllers are often needed, so each switch must be assigned to more than one controller. In this paper, a reliable controller placement problem model (RCPPM) is proposed to solve this problem, so as to maximize the reliability of software-defined networks. Unlike previous works that only consider latency parameters, the new model also takes into account the control traffic load and reliability metrics. Furthermore, a near-optimal algorithm is proposed to solve the NP-hard RCPPM heuristically. Finally, through extensive simulation, a comprehensive analysis of the RCPPM is presented for various topologies extracted from the Internet Topology Zoo. Our performance evaluations show the efficiency of the proposed framework.
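A latency-only baseline for controller placement can be sketched by brute force on a toy topology; RCPPM additionally models control load and reliability, which this sketch omits, and the latency values are invented:

```python
import itertools

# Toy switch-to-switch latency matrix (symmetric, illustrative values).
latency = {
    ("s1", "s2"): 1, ("s1", "s3"): 3, ("s1", "s4"): 4,
    ("s2", "s3"): 1, ("s2", "s4"): 2, ("s3", "s4"): 1,
}
nodes = ["s1", "s2", "s3", "s4"]

def dist(a, b):
    if a == b:
        return 0
    return latency.get((a, b)) or latency.get((b, a))

def worst_case_latency(controllers):
    """Each switch attaches to its nearest controller; report the worst latency."""
    return max(min(dist(s, c) for c in controllers) for s in nodes)

def best_placement(k):
    """Brute force over all k-subsets: fine for toy sizes, NP-hard in general,
    which is why RCPPM needs a heuristic."""
    return min(itertools.combinations(nodes, k), key=worst_case_latency)

print(best_placement(2), worst_case_latency(best_placement(2)))  # ('s1', 's3') 1
```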
Original/Review Paper
Seyed M. Sadatrasoul; O. Ebadati; R. Saedi
Abstract
The purpose of this study is to reduce the uncertainty of early-stage startup success prediction and to fill the gap left by previous studies in the field by identifying and evaluating success variables and developing a novel business success/failure (S/F) data mining classification prediction model for Iranian start-ups. To this end, the paper seeks to extend the variables and algorithms of Bill Gross's and Robert Lussier's S/F prediction models to the new context of Iranian start-ups that begin in accelerators, in order to build a new S/F prediction model. A sample of 161 Iranian start-ups based in accelerators from 2013 to 2018 is used, and 39 variables are extracted from the literature and organized into five groups. The sample is then fed into six well-known classification algorithms. A two-stage stacking model is the best performer among the six classification-based S/F prediction models, predicting the binary dependent variable of success or failure with an average accuracy of 89%. The findings also show that "starting from accelerators", "creativity and problem-solving ability of founders", "first-mover advantage" and "amount of seed investment" are the four most important variables affecting start-up success, while the other 15 variables are less important.
Original/Review Paper
H.5. Image Processing and Computer Vision
M. Saeedzarandi; H. Nezamabadi-pour; S. Saryazdi
Abstract
Removing noise from images is a challenging problem in digital image processing. This paper presents an image denoising method based on a maximum a posteriori (MAP) estimator, which is implemented in the wavelet domain because of its energy compaction property. The performance of the MAP estimator depends on the model proposed for the noise-free wavelet coefficients; thus, in wavelet-based image denoising, selecting a proper model for the wavelet coefficients is very important. In this paper, we model the wavelet coefficients in each sub-band with heavy-tailed distributions from the scale-mixture-of-normals family. The parameters of the distributions are estimated adaptively to model the correlation between coefficient amplitudes, so the intra-scale dependency of the wavelet coefficients is also taken into account. The denoising results confirm the effectiveness of the proposed method.
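One concrete instance of MAP shrinkage under a heavy-tailed prior is the Bayesian soft-threshold that arises from a Laplacian prior, itself a member of the scale-mixture-of-normals family. This sketch illustrates that closed form, not the paper's exact adaptive estimator:

```python
import numpy as np

def map_shrink(y, sigma_n, sigma_w):
    """MAP estimate of noise-free wavelet coefficients from noisy ones y,
    under additive Gaussian noise (std sigma_n) and a Laplacian prior
    (std sigma_w): soft-thresholding at t = sqrt(2) * sigma_n**2 / sigma_w."""
    t = np.sqrt(2.0) * sigma_n**2 / sigma_w
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Small coefficients (mostly noise) are zeroed; large ones shrink toward zero.
y = np.array([-3.0, -0.2, 0.1, 2.5])
print(map_shrink(y, sigma_n=1.0, sigma_w=2.0))
```

In practice sigma_n is estimated from the finest sub-band and sigma_w per sub-band, which is where the adaptive, intra-scale modeling of the paper comes in.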