Technical Paper
J. Barazande; N. Farzaneh
Abstract
One of the crucial applications of the IoT is the development of smart cities. Smart cities are made up of smart components such as smart homes. In smart homes, a variety of sensors are used to make the environment smart, and the smart things in such homes can be used to detect the activities of the people inside them, such as making food or watching TV. Detecting the activities of smart home residents can be of great help in caring for the elderly and children, and can even improve security. The information collected by the sensors can be used to detect the kind of activity; however, the main challenge is the poor precision of most activity detection methods. In the proposed method, to reduce the clustering error of the data mining techniques, a hybrid learning approach is presented using the Water Strider Algorithm. The Water Strider Algorithm is applied in the feature extraction phase to extract only the main features for machine learning. The analysis of the proposed method shows that it achieves a precision of 97.63%, an accuracy of 97.12%, and an F1 score of 97.45%, and that, in comparison with similar algorithms (such as the Butterfly Optimization Algorithm, the Harris Hawks Optimization Algorithm, and the Black Widow Optimization Algorithm), it detects users' activities with higher precision.
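As a rough illustration of the kind of wrapper feature selection described above, the sketch below runs a generic population-based search over binary feature masks and scores each subset by cross-validated classification accuracy. It is a minimal sketch only: the Water Strider Algorithm's actual operators are not given in the abstract, and the KNN fitness classifier, the move probabilities, and the array interface (`X`, `y`) are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def subset_fitness(mask, X, y):
    if mask.sum() == 0:                       # empty subsets are invalid
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def select_features(X, y, pop_size=20, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5     # random binary feature masks
    scores = np.array([subset_fitness(m, X, y) for m in pop])
    for _ in range(iters):
        best = pop[scores.argmax()]
        for i in range(pop_size):
            cand = pop[i].copy()
            pull = rng.random(n) < 0.3        # drift toward the best mask so far
            cand[pull] = best[pull]
            flip = rng.random(n) < 0.05       # small random exploration
            cand[flip] = ~cand[flip]
            s = subset_fitness(cand, X, y)
            if s > scores[i]:                 # greedy acceptance
                pop[i], scores[i] = cand, s
    return pop[scores.argmax()]               # boolean mask of selected features
```

A typical use would be `mask = select_features(X_train, y_train)`, followed by training the final activity classifier on `X_train[:, mask]`.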
Original/Review Paper
Sh. Golzari; F. Sanei; M.R. Saybani; A. Harifi; M. Basir
Abstract
A Question Answering System (QAS) is a special form of information retrieval that consists of three parts: question processing, information retrieval, and answer selection. Determining the type of question is the most important part of a QAS, as it affects the subsequent parts. This study uses effective features and ensemble classification to improve QAS performance by increasing the accuracy of question type identification. We use the gravitational search algorithm to select the features and to perform ensemble classification. The proposed system is extensively tested on different datasets using four types of experiments: (1) neither feature selection nor ensemble classification, (2) feature selection without ensemble classification, (3) ensemble classification without feature selection, and (4) feature selection with ensemble classification. These four types of experiments are carried out with both the differential evolution algorithm and the gravitational search algorithm. The experimental results show that the proposed method outperforms state-of-the-art methods from previous research.
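The ensemble-classification step could look roughly like the sketch below, which evaluates a soft-voting ensemble on a feature subset assumed to have been chosen by the gravitational search algorithm. The base learners and the scoring are illustrative assumptions; the abstract does not name the ensemble members.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def evaluate_ensemble(X, y, selected):
    """selected: boolean mask assumed to come from the GSA feature selection step."""
    ensemble = VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("rf", RandomForestClassifier(n_estimators=200)),
                    ("nb", GaussianNB())],
        voting="soft")                        # average the predicted probabilities
    return cross_val_score(ensemble, X[:, selected], y, cv=5).mean()
```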
Original/Review Paper
M. Taherinia; M. Esmaeili; B. Minaei Bidgoli
Abstract
The Influence Maximization Problem in social networks aims to find a minimal set of individuals that produces the highest influence on the other individuals in the network. In the last two decades, many algorithms have been proposed to address the time efficiency and effectiveness challenges of this NP-hard problem. Undoubtedly, the CELF algorithm (besides the naive greedy algorithm) has the highest effectiveness among them, and it is about 700 times faster than the naive greedy algorithm. This superiority has led many researchers to make extensive use of the CELF algorithm in their approaches. However, the main drawback of the CELF algorithm is the very long running time of its first iteration, because, like the naive greedy algorithm, it has to estimate the influence spread of every node with expensive Monte-Carlo simulations. In this paper, a heuristic approach, the Optimized-CELF algorithm, is proposed to mitigate this drawback by avoiding unnecessary Monte-Carlo simulations. The proposed algorithm reduces the CELF running time and subsequently improves the time efficiency of other algorithms that employ CELF as a base algorithm. Experimental results on a wide spectrum of real datasets show that the Optimized-CELF algorithm provides a running time gain of about 88-99% and 56-98% for k=1 and k=50, respectively, compared to the CELF algorithm, without losing effectiveness.
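For reference, the standard CELF lazy-greedy loop that the paper optimizes can be sketched as below. The Monte-Carlo independent-cascade estimator and the adjacency-dict graph format are simplifying assumptions, and the Optimized-CELF heuristic for skipping first-iteration simulations is not reproduced, since the abstract does not detail it.

```python
import heapq
import random

def estimate_spread(graph, seeds, p=0.01, mc=1000, rng=random.Random(0)):
    """graph: dict node -> list of neighbours; independent cascade with probability p."""
    total = 0
    for _ in range(mc):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    frontier.append(nb)
        total += len(active)
    return total / mc

def celf(graph, k):
    # first pass: one expensive spread estimate per node (the costly first iteration)
    heap = [(-estimate_spread(graph, [v]), v, 0) for v in graph]
    heapq.heapify(heap)
    seeds, spread = [], 0.0
    while len(seeds) < k:
        gain, v, last = heapq.heappop(heap)
        if last == len(seeds):               # marginal gain is up to date: pick it
            seeds.append(v)
            spread += -gain
        else:                                # lazily re-evaluate the marginal gain
            new_gain = estimate_spread(graph, seeds + [v]) - spread
            heapq.heappush(heap, (-new_gain, v, len(seeds)))
    return seeds, spread
```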
Original/Review Paper
M. Tavakkoli; A. Ebrahimzadeh; A. Nasiraei Moghaddam; J. Kazemitabar
Abstract
MRI is one of the most advanced non-invasive medical imaging methods and provides good contrast between soft tissues. Its main limitation is the time required for data acquisition, particularly in dynamic imaging. Radial sampling is an alternative for faster data acquisition and has several advantages over Cartesian sampling; among them, robustness to motion artifacts makes this acquisition scheme useful in cardiac imaging. Recently, compressed sensing (CS) has been used to accelerate data acquisition in dynamic MRI. Cartesian acquisition relies on irregular undersampling patterns to create incoherent artifacts and meet the incoherent sampling requirement of CS, whereas radial acquisition, whose artifacts are incoherent even for regular sampling, is inherently suited to CS reconstruction. In this study, we reconstruct 3D stack-of-stars data in cardiac imaging using a combination of the TV penalty function and the GRASP algorithm. We reduced the number of spokes from 21 to 13 and then to 8 to observe the performance of the algorithm at a high acceleration factor. We compared the output images of the proposed algorithm with those of the GRASP and NUFFT algorithms. In all three modes (21, 13, and 8 spokes), the average image similarity increased by at least 0.4 and 0.1 compared to NUFFT and GRASP, respectively, and streaking artifacts were significantly reduced. According to the results, the proposed method can be used in clinical studies for fast dynamic MRI, such as cardiac imaging, with high image quality from low-rate sampling.
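The reconstruction idea can be illustrated with a toy gradient-descent solver that combines a data-consistency term with a TV penalty. This is only a sketch under strong simplifications: it uses a Cartesian undersampling mask and plain FFTs instead of the radial stack-of-stars trajectory and the NUFFT/GRASP machinery used in the study, and the step size and regularization weight are arbitrary.

```python
import numpy as np

def tv_grad(x):
    # gradient of a smoothed total-variation term via forward differences
    dx = np.roll(x, -1, 0) - x
    dy = np.roll(x, -1, 1) - x
    mag = np.sqrt(np.abs(dx) ** 2 + np.abs(dy) ** 2 + 1e-8)
    div = (dx / mag - np.roll(dx / mag, 1, 0)) + (dy / mag - np.roll(dy / mag, 1, 1))
    return -div

def cs_recon(kspace, mask, lam=0.005, step=0.5, iters=100):
    """Minimize ||M F x - y||^2 / 2 + lam * TV(x) by plain gradient descent."""
    x = np.fft.ifft2(kspace * mask)                  # zero-filled starting image
    for _ in range(iters):
        resid = mask * (np.fft.fft2(x) - kspace)     # data-consistency residual
        grad = np.fft.ifft2(resid) + lam * tv_grad(x)
        x = x - step * grad
    return np.abs(x)
```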
Original/Review Paper
M. Kakooei; Y. Baleghi
Abstract
Shadow detection provides worthwhile information for remote sensing applications, e.g. building height estimation. Shadow areas form on the side of tall objects opposite the sunlight direction, so the solar illumination angle is required to find probable shadow areas. In recent years, Very High Resolution (VHR) imagery has provided more detailed data about objects, including shadow areas. In this regard, the motivation of this paper is to propose a reliable feature, Shadow Low Gradient Direction (SLGD), to automatically determine the shadow and solar illumination direction in VHR data. The proposed feature is based on an inherent spatial property of fine-resolution shadow areas, and can therefore facilitate shadow-based operations, especially when solar illumination information is not available in the remote sensing metadata. Shadow intensity is assumed to depend on two factors, the surface material and the sunlight illumination, which are analyzed through directional gradient values in low-gradient-magnitude areas. The feature captures the sunlight illumination while ignoring material differences. The method is fully implemented on the Google Earth Engine cloud computing platform and is evaluated on VHR data with 0.3 m resolution. Finally, the performance of SLGD is evaluated for determining the shadow direction and compared for refining shadow maps.
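A crude version of the underlying idea, examining gradient directions only where the gradient magnitude is low and taking the dominant direction as the shadow/illumination direction estimate, is sketched below. The percentile threshold and binning are assumptions; the SLGD feature defined in the paper is richer than this.

```python
import numpy as np

def dominant_low_gradient_direction(image, mag_percentile=30, bins=36):
    """Estimate a dominant gradient direction (degrees) over low-gradient-magnitude pixels."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360
    low = mag < np.percentile(mag, mag_percentile)   # keep only weak-gradient pixels
    hist, edges = np.histogram(ang[low], bins=bins, range=(0, 360))
    peak = hist.argmax()
    return (edges[peak] + edges[peak + 1]) / 2       # centre of the dominant bin
```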
Original/Review Paper
H. Fathi; A.R. Ahmadyfard; H. Khosravi
Abstract
Recently, significant attention has been paid to the development of virtual reality systems in several fields, such as commerce. Trying on virtual clothes is becoming a solution for the online clothing industry. In this paper, we propose a method for virtual clothing based on 3D point matching between a selected garment and the customer's body. For this purpose, we build 3D models of the customer and of the selected clothes, worn by a mannequin, using a Kinect camera. Since the size of the customer's abdominal region differs from that of the mannequin, after pre-processing the two captured point clouds, the 3D point cloud of the selected clothes is deformed to fit the 3D point cloud of the customer's body. We use Laplace-Beltrami curvature as a descriptor to find the abdominal region in the two point clouds. Then, the abdominal region of the mannequin is deformed in 3D space to fit the abdominal region of the customer. Finally, the head and neck of the customer are attached to the mannequin point cloud. The proposed method has two main advantages over existing methods for virtual clothing. First, there is no need for an expert to design a 3D model of the customer's body and the selected clothes in advanced graphics software such as Unity. Second, there is no restriction on the style or texture of the selected clothes, whereas existing methods have such restrictions. The experimental results demonstrate the ability of the proposed method for virtual clothing.
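A heavily simplified sketch of the fitting step is given below: the abdominal band of the clothing point cloud is radially rescaled so that its girth matches the customer's. The z-band localization and the uniform scaling are placeholder assumptions standing in for the paper's curvature-based localization and full 3D deformation.

```python
import numpy as np

def fit_abdomen(clothes_pts, body_pts, z_band):
    """clothes_pts, body_pts: (N, 3) arrays; z_band: (z_min, z_max) of the abdominal region."""
    def girth(pts):
        band = pts[(pts[:, 2] > z_band[0]) & (pts[:, 2] < z_band[1])]
        centre = band[:, :2].mean(axis=0)                 # horizontal centre of the band
        return np.linalg.norm(band[:, :2] - centre, axis=1).mean(), centre

    r_cloth, c_cloth = girth(clothes_pts)
    r_body, _ = girth(body_pts)
    out = clothes_pts.copy()
    in_band = (out[:, 2] > z_band[0]) & (out[:, 2] < z_band[1])
    # rescale the band radially about its centre to match the customer's girth
    out[in_band, :2] = c_cloth + (out[in_band, :2] - c_cloth) * (r_body / r_cloth)
    return out
```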
Review Article
M. Sepahvand; F. Abdali-Mohammadi
Abstract
The success of handwriting recognition methods based on digitizer-pen signal processing is largely dependent on the defined features. Strong and discriminative feature descriptors play the main role in improving the accuracy of pattern recognition. Moreover, most recognition studies utilize local features or sequences of them, whereas it has been shown that combining global and local features can increase recognition accuracy. This paper addresses both of these topics. First, a new, highly discriminative local feature, called the Rotation Invariant Histogram of Degrees (RIHoD), is proposed for online digitizer-pen handwriting signals. Second, a feature representation layer is proposed that maps local features into global ones in a new space using learning kernels. Different aspects of the proposed local feature and the learned global feature are analyzed, and their efficiency is evaluated in several online handwriting recognition scenarios.
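To make the general idea concrete, the sketch below builds a direction histogram from consecutive pen samples and removes the dependence on absolute orientation by taking the magnitudes of the histogram's DFT (invariant to circular shifts, hence to a global rotation of the handwriting, up to binning effects). This is not the RIHoD definition from the paper; the bin count and the weighting are assumptions.

```python
import numpy as np

def direction_histogram(points, bins=36):
    """points: (N, 2) array of consecutive digitizer-pen samples (x, y)."""
    d = np.diff(points, axis=0)
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360    # stroke directions
    hist, _ = np.histogram(ang, bins=bins, range=(0, 360),
                           weights=np.hypot(d[:, 0], d[:, 1]))  # weight by stroke length
    hist = hist / (hist.sum() + 1e-12)
    # DFT magnitudes are invariant to circular shifts of the histogram,
    # i.e. to a global rotation of the handwriting
    return np.abs(np.fft.rfft(hist))
```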
Original/Review Paper
F. Baratzadeh; Seyed M. H. Hasheminejad
Abstract
With the advancement of technology, the daily use of bank credit cards has been increasing exponentially; therefore, fraudulent use of credit cards by others, as one of the new forms of crime, is also growing fast. For this reason, detecting and preventing these attacks has become an active area of study. This article discusses the challenges of detecting fraudulent banking transactions and presents deep learning-based solutions, which are examined and compared with traditional fraud detection models. According to the results obtained, the best performance is achieved by a combined model of deep convolutional networks and Long Short-Term Memory, trained on the augmented data obtained from a generative adversarial network. This paper produces synthetic data to address the imbalanced class distribution problem, which is far more effective than traditional methods, and combines a deep convolutional network with a Long Short-Term Memory network to exploit the strengths of both approaches and improve performance. Because evaluation criteria such as accuracy are inadequate in this application, the distance score and the equal error rate are used to evaluate the models more transparently and precisely. Traditional methods were compared with the proposed approach to evaluate its efficiency.
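A hypothetical sketch of such a combined model is shown below: a 1D convolution over a customer's transaction sequence followed by an LSTM and a fraud logit. The feature count, layer sizes, and sequence length are assumptions, and the GAN-based data augmentation is assumed to happen upstream of training.

```python
import torch
import torch.nn as nn

class CnnLstmFraudNet(nn.Module):
    def __init__(self, n_features=30, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU())
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2))  # convolve over time: (batch, 64, seq_len)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1])      # fraud logit at the last time step

model = CnnLstmFraudNet()
logits = model(torch.randn(8, 20, 30))    # 8 sequences of 20 transactions each
```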
Original/Review Paper
H.R. Koosha; Z. Ghorbani; R. Nikfetrat
Abstract
In the last decade, online shopping has played a vital role in how customers purchase different products, providing the convenience of shopping along with many benefits for the economy. E-commerce is widely used for digital media products such as movies, images, and software, so recommendation systems, which search for content likely to interest an individual, are of great importance, especially in today's hectic world. In this research, a new two-step recommender system is proposed based on demographic data and user ratings from the public MovieLens datasets. In the first step, the training dataset is clustered based on demographic data, grouping customers into homogeneous clusters. The clustering uses a hybrid Firefly Algorithm (FA) and K-means approach; because the FA can avoid getting trapped in local optima, which resolves K-means' main pitfall, the combination of the two techniques leads to much better performance. In the next step, for each cluster, two recommender systems are proposed based on K-Nearest Neighbor (KNN) and Naïve Bayesian classification. The results are evaluated with several internal and external measures, such as the Davies-Bouldin index, precision, accuracy, recall, and F-measure, and show the effectiveness of the K-means/FA/KNN combination compared with other extant models.
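The two-step pipeline could be sketched as below: cluster users on demographic features, then fit one neighborhood-based predictor per cluster. Plain K-means stands in for the paper's FA/K-means hybrid, and the per-cluster KNN over rating vectors (predicting a like/dislike label) is an illustrative simplification.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def fit_two_step(demographics, ratings, labels, n_clusters=5):
    """demographics: (U, d); ratings: (U, m) rating feature rows; labels: (U,) targets."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(demographics)
    models = {}
    for c in range(n_clusters):
        idx = km.labels_ == c
        knn = KNeighborsClassifier(n_neighbors=min(5, int(idx.sum())))
        models[c] = knn.fit(ratings[idx], labels[idx])    # one recommender per cluster
    return km, models

def predict(km, models, demo_row, rating_row):
    c = km.predict(demo_row.reshape(1, -1))[0]            # assign the user to a cluster
    return models[c].predict(rating_row.reshape(1, -1))[0]
```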
Original/Review Paper
M. Khanzadi; H. Veisi; R. Alinaghizade; Z. Soleymani
Abstract
One of the main problems in children with learning difficulties is weakness in phonological awareness (PA) skills, and PA tests are used to evaluate this skill. Currently, this assessment is paper-based for the Persian language. To accelerate the assessment process and make it engaging for children, we propose a computer-based solution: a comprehensive Persian phonological awareness assessment system implementing expressive and pointing tasks. For the expressive tasks, the solution is powered by recurrent neural network-based speech recognition systems. To this end, various recognition modules are implemented, including a phoneme recognition system for the phoneme segmentation task, a syllable recognition system for the syllable segmentation task, and a sub-word recognition system for three types of phoneme deletion tasks: initial, middle, and final phoneme deletion. The recognition systems use bidirectional long short-term memory neural networks to construct the acoustic models. To implement the recognition systems, we designed and collected the Persian Kid's Speech Corpus, the largest Persian corpus of children's speech. The accuracy rate was 85.5% for phoneme recognition and 89.4% for syllable recognition, and the accuracy rates of initial, middle, and final phoneme deletion were 96.76%, 98.21%, and 95.9%, respectively.
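A minimal sketch of a bidirectional LSTM acoustic model of the kind described above is shown below; the acoustic feature dimension, layer sizes, phoneme inventory size, and the omission of the decoding/alignment stage (e.g. CTC) are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class BiLstmAcousticModel(nn.Module):
    def __init__(self, n_feats=40, hidden=128, n_phonemes=32):
        super().__init__()
        self.blstm = nn.LSTM(n_feats, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_phonemes)

    def forward(self, frames):            # frames: (batch, time, n_feats), e.g. MFCCs
        h, _ = self.blstm(frames)
        return self.out(h)                # per-frame phoneme logits

model = BiLstmAcousticModel()
logits = model(torch.randn(4, 200, 40))   # 4 utterances of 200 frames each
```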
Original/Review Paper
R. Ghotboddini; H. Toossian Shandiz
Abstract
Lighting continuity is one of the preferences of citizens, and managing public lighting from the viewpoint of city residents improves social welfare. The quality of lamps and the time needed to correct lighting defects are important to lighting continuity, and in this regard a reward and penalty mechanism plays an important role in the contract. Selecting labor and lamps has a significant impact on risk reduction during the contract period. This research improves strategies for public lighting asset management. The lamp lifespan announced by manufacturers is used to calculate maintenance cost, making it possible to estimate the actual cost of high-pressure sodium luminaires in a public lighting system. The guarantee period of the lamps and the maximum permissible time for lighting defect detection and correction are used in the reward and penalty mechanism. The results show that the natural life guarantee and the permissible correction time have a considerable effect on maintenance cost and on city residents' satisfaction.
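As a purely hypothetical illustration of a lifespan-based maintenance cost with a penalty term (the paper's actual cost model is not given in the abstract):

```python
# Hypothetical annual maintenance cost: lifespan-driven replacements plus a
# penalty for repairs completed after the permissible correction time.
def annual_maintenance_cost(lamp_price, labour_cost, lifespan_hours,
                            burning_hours_per_year, penalty_per_late_repair,
                            late_repairs_per_year):
    replacements_per_year = burning_hours_per_year / lifespan_hours
    replacement_cost = replacements_per_year * (lamp_price + labour_cost)
    penalty = penalty_per_late_repair * late_repairs_per_year
    return replacement_cost + penalty

# e.g. a 24,000 h lamp burning ~4,100 h/year is replaced about every 5.9 years
print(annual_maintenance_cost(15.0, 25.0, 24000, 4100, 10.0, 2))
```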
Original/Review Paper
A. Lakizadeh; E. Moradizadeh
Abstract
Text sentiment classification at the aspect level is one of the hottest research topics in natural language processing. The purpose of aspect-level sentiment analysis is to determine the polarity of a text with respect to a particular aspect. Recently, various methods have been developed to determine the sentiment polarity of text at the aspect level; however, these studies have not yet been able to adequately model the complementary effects of the context and the aspect in the polarity detection process. Here, we present ACTSC, a method for determining the sentiment polarity of text based on separate embeddings of the aspect and the context. In the first step, ACTSC models the aspect and the context separately to extract new representation vectors. Next, by combining the generated representations of the aspect and the context, it determines the polarity corresponding to each particular aspect using a long short-term memory network and a self-attention mechanism. Experimental results on the SemEval 2014 dataset, in both the restaurant and laptop categories, show that ACTSC improves the accuracy of aspect-based sentiment classification compared to the latest proposed methods.
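A rough sketch of the separate-embedding idea is given below: the context and the aspect are encoded by their own LSTMs and fused by an attention step before polarity classification. The dimensions are assumptions, and attention from the pooled aspect onto the context stands in here for the self-attention mechanism described above; this is not the published ACTSC architecture.

```python
import torch
import torch.nn as nn

class AspectContextClassifier(nn.Module):
    def __init__(self, vocab, emb=100, hidden=64, n_classes=3, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.ctx_lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.asp_lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, context_ids, aspect_ids):
        ctx, _ = self.ctx_lstm(self.embed(context_ids))     # (B, Tc, 2h) context encoding
        asp, _ = self.asp_lstm(self.embed(aspect_ids))      # (B, Ta, 2h) aspect encoding
        asp_vec = asp.mean(dim=1, keepdim=True)             # pooled aspect as the query
        fused, _ = self.attn(query=asp_vec, key=ctx, value=ctx)
        return self.out(fused.squeeze(1))                   # polarity logits

model = AspectContextClassifier(vocab=5000)
logits = model(torch.randint(0, 5000, (8, 30)), torch.randint(0, 5000, (8, 3)))
```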