Original/Review Paper
H. Gholamalinejad; H. Khosravi
Abstract
In recent years, vehicle classification has been one of the most important research topics. However, due to the lack of a proper dataset, this field has not been as well developed as other fields of intelligent traffic management. Therefore, the preparation of large-scale vehicle datasets for each country is of great interest. In this paper, we introduce a new standard dataset of popular Iranian vehicles. This dataset, which consists of images of moving vehicles on urban streets and highways, can be used for vehicle classification and license plate recognition. It contains a large collection of vehicle images in different dimensions, viewing angles, weather, and lighting conditions. It took more than a year to construct this dataset. Images were taken by various types of mounted cameras, with different resolutions and at different altitudes. To estimate the complexity of the dataset, some classic methods alongside popular deep neural networks were trained and evaluated on it. Furthermore, two lightweight CNN structures are also proposed: one with 3 convolutional layers and another with 5. The 5-Conv model with 152K parameters reached a recognition rate of 99.09% and can process 48 frames per second on a CPU, which is suitable for real-time applications.
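The abstract does not give the layer configuration of the 5-Conv model, so the following is only a hypothetical Keras sketch of how a 5-convolution classifier can be kept small enough for real-time CPU inference; the filter counts and input size are assumptions, not the paper's architecture.

```python
# Hypothetical sketch of a lightweight 5-Conv classifier. The actual layer
# sizes of the paper's 152K-parameter model are not stated in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_5conv(input_shape=(64, 64, 3), num_classes=10):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.GlobalAveragePooling2D(),  # avoids a large dense layer
        layers.Dense(num_classes, activation='softmax'),
    ])

model = build_5conv()
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()  # a configuration in this spirit stays well under 200K parameters
```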
Original/Review Paper
T. Askari Javaran; A. Alidadi; S.R. Arab
Abstract
Estimating the blurriness of an image is an important issue in image processing applications such as image deblurring. In this paper, a no-reference blur metric with low computational cost is proposed, based on the difference between the second-order gradients of a sharp image and those of its blurred version. The experiments in this paper were performed on four databases: CSIQ, TID2008, IVC, and LIVE. The experimental results indicate the capability of the proposed blur metric in measuring image blurriness, as well as its low computational cost compared with other existing approaches.
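To make the second-order-gradient idea concrete, here is a minimal re-blur sketch in Python: a sharp image loses much of its second-order gradient energy when blurred again, while an already blurry image loses little. This is only an illustration of the principle; the paper's exact formulation may differ.

```python
# Minimal sketch of a no-reference blur score based on second-order gradients.
import numpy as np
from scipy import ndimage

def blur_score(img, sigma=1.0):
    """Return a blurriness score in [0, 1] for a grayscale image;
    higher means blurrier. Illustrative, not the paper's metric."""
    img = img.astype(np.float64)
    reblurred = ndimage.gaussian_filter(img, sigma)
    # Second-order gradients via the Laplacian operator.
    g_sharp = np.abs(ndimage.laplace(img))
    g_blur = np.abs(ndimage.laplace(reblurred))
    # Energy lost by re-blurring: large for sharp inputs, small for blurry ones.
    loss = np.maximum(g_sharp - g_blur, 0).sum()
    return 1.0 - loss / (g_sharp.sum() + 1e-12)
```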
Original/Review Paper
N. Alibabaie; A.M. Latif
Abstract
Periodic noise reduction is a fundamental problem in image processing; such noise severely affects the visual quality and subsequent use of the data. Most conventional approaches are dedicated to either the frequency or the spatial domain alone. In this research, we propose a dual-domain approach by converting the periodic noise reduction task into an image decomposition problem. We introduce a bio-inspired computational model to separate the original image from the noise pattern without any a priori knowledge about its structure or statistics. Experiments on both synthetic and non-synthetic noisy images were carried out to validate the effectiveness and efficiency of the proposed algorithm. The simulation results demonstrate the effectiveness of the proposed method both qualitatively and quantitatively.
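For context, periodic noise appears as isolated peaks in the Fourier spectrum, which is the frequency-domain view the dual-domain method builds on. The sketch below shows only this classical baseline (a simple notch filter); the paper's bio-inspired decomposition model is considerably more elaborate, and the threshold and guard values here are arbitrary.

```python
# Classical frequency-domain baseline: suppress isolated spectral peaks.
import numpy as np

def notch_periodic_noise(img, threshold=10.0, guard=16):
    """img: 2D grayscale array. Zeroes spectral bins far above the median
    magnitude, sparing a guard window around the low frequencies."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    mag = np.abs(F)
    mask = mag > threshold * np.median(mag)   # candidate periodic-noise peaks
    h, w = img.shape
    cy, cx = h // 2, w // 2
    mask[cy - guard:cy + guard, cx - guard:cx + guard] = False  # keep DC/low freqs
    F[mask] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```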
Technical Paper
S. Asadi Amiri; M. Rajabinasab
Abstract
Face recognition is a challenging problem because of varying illuminations, poses, facial expressions, and occlusions. In this paper, a new robust face recognition method is proposed based on the color and edge orientation difference histogram. Firstly, this histogram is extracted using the color, color difference, edge orientation, and edge orientation difference of the face image. Then, backward feature selection is employed to reduce the number of features. Finally, the Canberra measure is used to assess the similarity between images. The color and edge orientation difference histogram captures uniform color differences and edge orientation differences between neighboring pixels. This histogram is effective for face recognition because different skin colors and different edge orientations of the face lead to different light reflections. The proposed method is evaluated on the Yale and ORL face datasets, which consist of gray-scale face images under different illuminations, poses, facial expressions, and occlusions. The recognition rates achieved over the Yale and ORL datasets are 100% and 98.75%, respectively. Experimental results demonstrate that the proposed method outperforms existing face recognition methods.
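The matching step is straightforward to illustrate. The Canberra measure is a per-bin normalized absolute difference, so small histogram bins still contribute to the distance; the nearest gallery histogram determines the identity. The sketch below assumes the histograms have already been extracted and feature-selected.

```python
# Canberra-distance matching over precomputed feature histograms.
import numpy as np

def canberra(x, y, eps=1e-12):
    # Per-bin normalized absolute difference; eps avoids division by zero.
    return np.sum(np.abs(x - y) / (np.abs(x) + np.abs(y) + eps))

def recognize(probe_hist, gallery_hists, labels):
    """Return the label of the gallery histogram closest to the probe."""
    dists = [canberra(probe_hist, g) for g in gallery_hists]
    return labels[int(np.argmin(dists))]
```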
Original/Review Paper
Z. Shojaee; Seyed A. Shahzadeh Fazeli; E. Abbasi; F. Adibnia
Abstract
Today, feature selection, as a technique to improve the performance of classification methods, has been widely considered by computer scientists. Since the dimensions of a matrix have a huge impact on the performance of any processing applied to it, reducing the number of features by choosing the best subset of all features affects the performance of the algorithms. Finding the best subset by comparing all possible subsets, even when the number of features is small, is an intractable process; hence many researchers turn to heuristic methods to find a near-optimal solution. In this paper, we introduce a novel feature selection technique that selects the most informative features and omits the redundant or irrelevant ones. Our method is embedded in PSO (Particle Swarm Optimization). To omit the redundant or irrelevant features, it is necessary to figure out the relationships between different features, and many correlation functions can reveal these relationships. In our proposed method, we use mutual information for this purpose. We evaluate the performance of our method on three classification benchmarks: Glass, Vowel, and Wine. Comparing the results with four state-of-the-art methods demonstrates its superiority over them.
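One plausible way such a PSO fitness could use mutual information is to reward feature-class relevance and penalize feature-feature redundancy. The paper's exact fitness function is not spelled out in the abstract, so the weighting below (and the use of the Wine benchmark as a stand-in) is an assumption.

```python
# Hedged sketch of a relevance-minus-redundancy score that a PSO particle
# (a boolean feature mask) could be evaluated with.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

X, y = load_wine(return_X_y=True)
relevance = mutual_info_classif(X, y, random_state=0)  # feature-class MI

def subset_score(mask, alpha=0.5):
    """High mean relevance to the class, low mean redundancy among the
    selected features. alpha is an assumed trade-off weight."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -np.inf
    rel = relevance[idx].mean()
    red = 0.0
    if idx.size > 1:
        # Pairwise feature-feature MI approximates redundancy.
        pair_mis = [mutual_info_regression(X[:, [i]], X[:, j], random_state=0)[0]
                    for i in idx for j in idx if i < j]
        red = float(np.mean(pair_mis))
    return rel - alpha * red

print(subset_score(np.array([True] * 5 + [False] * 8)))
```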
Original/Review Paper
A. Hashemi; M. A. Zare Chahooki
Abstract
Social networks are valuable sources for marketers. Marketers can publish campaigns to reach target audiences according to their interests. Although Telegram was primarily designed as an instant messenger, it is used as a social network in Iran due to the censorship of Facebook, Twitter, etc. Telegram provides neither a marketing platform nor the possibility to search among groups. It is difficult for marketers to find target audience groups on Telegram, so we developed a system to fill this gap. Marketers use our system to find target audience groups by keyword search. Our system has to search and rank groups as relevant as possible to the search query. This paper proposes a method called GroupRank to improve the ranking of group searching. GroupRank elicits associative connections among groups based on the membership records they have in common. After detailed analysis, five group quality factors are introduced and used in the ranking. Our proposed method combines TF-IDF scoring with group quality scores and associative connections among groups. Experimental results show improvement across many different queries.
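As a rough illustration of combining the three signals, the sketch below mixes a TF-IDF text score with placeholder quality and associative-connection scores via a weighted sum. The weights, the example groups, and the scalar scores are invented for illustration; the five quality factors themselves are defined in the paper, not here.

```python
# Hypothetical sketch: weighted combination of TF-IDF relevance, group
# quality, and associative-connection scores for ranking Telegram groups.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

groups  = ["digital marketing tips", "mobile phone deals", "marketing jobs"]
quality = np.array([0.9, 0.4, 0.7])  # placeholder per-group quality in [0, 1]
assoc   = np.array([0.2, 0.1, 0.6])  # placeholder score from shared memberships

vec = TfidfVectorizer()
G = vec.fit_transform(groups)

def group_rank(query, w_text=0.6, w_quality=0.25, w_assoc=0.15):
    text = cosine_similarity(vec.transform([query]), G).ravel()
    scores = w_text * text + w_quality * quality + w_assoc * assoc
    return sorted(zip(groups, scores), key=lambda t: -t[1])

print(group_rank("marketing"))
```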
Original/Review Paper
S. Shadravan; H. Naji; V. Khatibi
Abstract
The SailFish Optimizer (SFO) is a metaheuristic algorithm inspired by a group of hunting sailfish that alternate their attacks on a group of prey. The SFO algorithm uses a simple method for providing a dynamic balance between the exploration and exploitation phases, creating swarm diversity, avoiding local optima, and guaranteeing a high convergence speed. Nowadays, multi-agent systems and metaheuristic algorithms can provide high-performance solutions for combinatorial optimization problems, offering a prominent approach to reduce execution time and improve solution quality. In this paper, we elaborate a multi-agent-based and distributed sailfish optimizer (DSFO), which improves the execution time and speedup of the algorithm while keeping the optimization results of high quality. Graphics Processing Units (GPUs) programmed with the Compute Unified Device Architecture (CUDA) handle the massive computational requirements of this approach. We present the implementation details and performance observations of the DSFO algorithm. Also, a comparative study of distributed and sequential SFO is performed on a set of standard benchmark optimization functions. Moreover, the execution time of distributed SFO is compared with that of other parallel algorithms to show the speed of the proposed algorithm for solving unconstrained optimization problems. The final results indicate that the proposed method executes up to about 14 times faster than other parallel algorithms and demonstrate the ability of DSFO to solve non-separable, non-convex, and scalable optimization problems.
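The core opportunity for distribution is that fitness and position updates apply independently to every agent. The sketch below shows that data-parallel structure with vectorized NumPy on a greatly simplified predator-prey loop; it is not the paper's update equations, and the paper maps the equivalent per-agent computation to CUDA kernels on a GPU rather than to NumPy.

```python
# Greatly simplified, vectorized sailfish/sardine loop illustrating the
# data parallelism that DSFO exploits on the GPU.
import numpy as np

def sphere(P):
    """Benchmark fitness evaluated for all candidates at once (rows of P)."""
    return np.sum(P ** 2, axis=1)

rng = np.random.default_rng(0)
sailfish = rng.uniform(-10, 10, size=(50, 30))    # predators (elite solutions)
sardines = rng.uniform(-10, 10, size=(200, 30))   # prey (search agents)

for _ in range(100):
    f_sf, f_sd = sphere(sailfish), sphere(sardines)
    elite = sailfish[np.argmin(f_sf)]
    # Simplified attack step: all sardines drift toward the elite sailfish.
    lam = rng.uniform(0, 1, size=sardines.shape)
    sardines += 0.1 * lam * (elite - sardines)
    # A caught sardine replaces the worst sailfish (simplified replacement).
    best_sd, worst_sf = np.argmin(f_sd), np.argmax(f_sf)
    if f_sd[best_sd] < f_sf[worst_sf]:
        sailfish[worst_sf] = sardines[best_sd]

print("best fitness:", sphere(sailfish).min())
```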
Original/Review Paper
M. Yadollahzadeh Tabari; Z. Mataji
Abstract
The Internet of Things (IoT) is a novel paradigm in computer networks capable of connecting things to the internet via a wide range of technologies. Due to the features of the sensors used in IoT networks and the unsecured nature of the internet, IoT is vulnerable to many internal routing attacks. Using traditional IDSs in these networks has its own challenges due to the resource constraints of the nodes and the characteristics of the IoT network. A sinkhole attacker node in such a network attempts to attract traffic through incorrect information advertisement. In this research, a distributed IDS architecture is proposed to detect the sinkhole routing attack in RPL-based IoT networks, aiming to improve the true detection rate and reduce false alarms. For the latter, we use a post-processing mechanism in which a threshold is defined for separating suspicious alarms for further verification. Also, the implemented IDS modules are distributed across client and border router nodes, which makes the approach energy efficient. The data required to interpret the network's behavior were gathered from scenarios implemented in the Cooja environment, with RapidMiner used to mine the produced patterns. The resulting dataset was optimized using a genetic algorithm to select appropriate features. We investigate three different classification algorithms, of which, in the best case, the decision tree reaches an accuracy rate of 99.35%.
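A minimal sketch of the classification stage with the threshold-based post-processing described above: predictions whose class probability falls below a threshold are set aside as "suspicious" for further verification rather than raised as alarms immediately. The feature names, the synthetic training data, and the 0.8 threshold are placeholders, not the paper's selected features or tuned value.

```python
# Decision-tree sinkhole detector with a confidence threshold for alarms.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 4))          # e.g. rank changes, DIO counts, traffic stats
y_train = rng.integers(0, 2, 500)       # 0 = benign, 1 = sinkhole (synthetic)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

def classify_with_threshold(x, threshold=0.8):
    proba = clf.predict_proba(x.reshape(1, -1))[0]
    label = int(np.argmax(proba))
    if proba[label] < threshold:
        return "suspicious"             # deferred for further verification
    return "sinkhole" if label == 1 else "benign"

print(classify_with_threshold(rng.random(4)))
```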
Original/Review Paper
A. Lakizadeh; Z. Zinaty
Abstract
Aspect-level sentiment classification is an essential issue in sentiment analysis that aims to resolve the sentiment polarity of a specific aspect mentioned in the input text. Recent methods have discovered the role of aspects in sentiment polarity classification and developed various techniques to assess the sentiment polarity of each aspect in the text. However, these studies do not pay enough attention to the need for the vectors to be optimal for the aspect. To address this issue, in the present study, we suggest a Hierarchical Attention-based Method (HAM) for aspect-based polarity classification of text. HAM works in a hierarchical manner: firstly, it extracts an embedding vector for each aspect; next, it employs these information-rich aspect vectors to determine the sentiment of the text. The experimental findings on the SemEval2014 dataset show that HAM can improve accuracy by up to 6.74% compared to state-of-the-art methods in the aspect-based sentiment classification task.
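The second stage can be pictured as aspect-conditioned attention: the aspect embedding queries the word representations so that the polarity decision focuses on aspect-relevant words. The PyTorch sketch below shows that generic mechanism only; the dimensions, the dot-product scoring, and the linear polarity head are illustrative assumptions, not the exact HAM architecture.

```python
# Generic aspect-conditioned attention over encoded words.
import torch
import torch.nn.functional as F

d = 64
words  = torch.randn(1, 12, d)   # encoded words of one sentence (batch, len, dim)
aspect = torch.randn(1, d)       # embedding vector of the target aspect

scores  = torch.bmm(words, aspect.unsqueeze(2)).squeeze(2)   # (1, 12)
weights = F.softmax(scores, dim=1)                           # attention weights
context = torch.bmm(weights.unsqueeze(1), words).squeeze(1)  # aspect-aware vector

polarity_head = torch.nn.Linear(d, 3)   # negative / neutral / positive
logits = polarity_head(context)          # per-class polarity scores
```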
Original/Review Paper
A. Omondi; I. Lukandu; G. Wanyembi
Abstract
Variable environmental conditions and runtime phenomena require developers of complex business information systems to expose configuration parameters to system administrators. This allows system administrators to intervene by tuning the bottleneck configuration parameters in response to current changes, or in anticipation of future changes, in order to maintain the system's performance at an optimum level. However, these manual performance tuning interventions are prone to error and lack standardization due to fatigue, varying levels of expertise, and over-reliance on inaccurate predictions of future states of a business information system. The purpose of this research is therefore to investigate how the capacity of probabilistic reasoning to handle uncertainty can be combined with the capacity of Markov chains to map stochastic environmental phenomena to ideal self-optimization actions. This was done using a comparative experimental research design that involved quantitative data collection through simulations of different algorithm variants. The results compellingly indicate that applying the algorithm in a distributed database system improves the performance of tuning decisions under uncertainty. The improvement was quantitatively measured by a response-time latency that was 27% lower than average and a transaction throughput that was 17% higher than average.
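To give the Markov-chain idea a concrete shape: workload states evolve stochastically under a transition matrix, and the most likely next state can drive the choice of tuning action. The states, transition probabilities, and action map below are invented for illustration and are not the paper's model.

```python
# Toy Markov-chain view of anticipatory self-tuning decisions.
import numpy as np

states = ["low", "medium", "high"]          # workload levels (illustrative)
P = np.array([[0.7, 0.2, 0.1],              # row i: P(next state | state i)
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
action_for_state = {"low":    "shrink buffer pool",
                    "medium": "keep current settings",
                    "high":   "grow buffer pool / add read replicas"}

def recommend(current):
    """Pick the action suited to the most likely next workload state."""
    i = states.index(current)
    likely_next = states[int(np.argmax(P[i]))]
    return action_for_state[likely_next]

print(recommend("medium"))
```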
Original/Review Paper
M. Zarbazoo Siahkali; A.A. Ghaderi; Abdol H. Bahrpeyma; M. Rashki; N. Safaeian Hamzehkolaei
Abstract
Scouring, which occurs when the water flow erodes the bed materials around the bridge pier structure, is a serious safety assessment problem for which there are many equations and models in the literature to estimate the approximate scour depth. This research aims to study how surrogate models estimate the scour depth around circular piers and to compare the results with those of the empirical formulations. To this end, the pier scour depth was estimated in non-cohesive soils under subcritical flow and live-bed conditions using artificial neural networks (ANN), the group method of data handling (GMDH), multivariate adaptive regression splines (MARS), and Gaussian process models (Kriging). A database containing 246 lab data records gathered from various studies was formed, and the data were divided into three random parts, 1) training, 2) validation, and 3) testing, to build the surrogate models. Statistical error criteria such as the coefficient of determination (R2), root mean squared error (RMSE), mean absolute percentage error (MAPE), and absolute maximum percentage error (MPE) of the surrogate models were then found and compared with those of the popular empirical formulations. The results revealed that the surrogate models' test-data estimations were more accurate than those of the empirical equations, with Kriging giving better estimates than the other models. In addition, sensitivity analyses of all surrogate models showed that the dimensionless expression of the pier width (b/y) had the greatest effect on estimating the normalized scour depth (Ds/y).
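Fitting a Kriging surrogate and scoring it on held-out data follows a standard pattern, sketched below with scikit-learn's Gaussian process regressor. The data here are synthetic stand-ins (a single b/y input predicting Ds/y with an assumed power-law trend), not the paper's 246 lab records, and the kernel choice is an assumption.

```python
# Kriging (Gaussian process regression) surrogate on a synthetic stand-in
# for the scour-depth data, with R2 and RMSE on a held-out test split.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 3.0, size=(246, 1))                 # b/y (synthetic)
y = 1.3 * X.ravel() ** 0.65 + rng.normal(0, 0.05, 246)   # Ds/y (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              alpha=1e-3, normalize_y=True).fit(X_tr, y_tr)
pred = gp.predict(X_te)
print("R2:", r2_score(y_te, pred),
      "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```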
Original/Review Paper
A. Fakhari; K. Kiani
Abstract
Image restoration and its different variations are important topics in low-level image processing. One of the main challenges in image restoration is the dependency of current methods on the corruption characteristics. In this paper, we propose an image restoration architecture that enables us to address different types of corruption, regardless of their type, amount, and location. The main intuition behind our approach is restoring original images from abstracted perceptual features. Using an encoder-decoder architecture, image restoration can be defined as an image transformation task. Abstraction of the perceptual features is done in the encoder part of the model and determines the sampling point within the original images' Probability Density Function (PDF). The PDF of the original images is learned in the decoder section by a Generative Adversarial Network (GAN) that receives the sampling point from the encoder part. Concretely, sampling from the learned PDF restores the original image from its corrupted version. A pretrained network extracts the perceptual features, and a Restricted Boltzmann Machine (RBM) abstracts them in the encoder section. By developing a new algorithm for training the RBM, the features of the corrupted images are refined. In the decoder, the generator network restores original images from the abstracted perceptual features while the discriminator determines how good the restoration result is. The proposed approach has been compared with both traditional approaches such as BM3D and modern deep models such as IRCNN and NCSR, considering three different categories of corruption: denoising, inpainting, and deblurring. Experimental results confirm the performance of the model.
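A schematic PyTorch sketch of the encoder-decoder adversarial wiring may help: an encoder abstracts features from the corrupted image, a generator decodes them back to an image, and a discriminator judges the restoration. The layer sizes are illustrative only, and this sketch deliberately omits the paper's pretrained perceptual feature extractor and the RBM abstraction stage.

```python
# Schematic encoder -> generator -> discriminator wiring for restoration.
import torch
import torch.nn as nn

encoder = nn.Sequential(                 # corrupted image -> latent features
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
generator = nn.Sequential(               # latent features -> restored image
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh())
discriminator = nn.Sequential(           # restored image -> real/fake score
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.LazyLinear(1), nn.Sigmoid())

corrupted = torch.randn(1, 3, 64, 64)    # stand-in for a corrupted input
restored = generator(encoder(corrupted))
realism = discriminator(restored)        # adversarial training signal
```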