Original/Review Paper
E. Kalhor; B. Bakhtiari
Abstract
Feature selection is one of the most important steps in designing speech emotion recognition systems. Because there is uncertainty as to which speech feature is related to which emotion, many features must be taken into account, and for this purpose, identifying the most discriminative features is necessary. In the interest of selecting appropriate emotion-related speech features, the current paper focuses on a multi-task approach. To this end, the study considers each speaker as a task and proposes a multi-task objective function to select features. As a result, the proposed method chooses one set of speaker-independent features that are discriminative across all emotion classes. Accordingly, either multi-class classifiers are used directly, or binary classifiers are combined to perform multi-class classification. In addition, the present work employs two well-known datasets, Berlin and Enterface. The experiments applied the openSMILE toolkit to extract more than 6500 features. After the feature selection phase, the results illustrate that the proposed method selects features that are common across different runs. Also, the runtime of the proposed method is the lowest in comparison to other methods. Finally, 7 classifiers are employed, and the best achieved performance is 73.76% for the Berlin dataset and 72.17% for the Enterface dataset when faced with a new speaker. These experimental results show that the proposed method is superior to existing state-of-the-art methods.
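As a concrete illustration of the multi-task idea (each speaker as a task), the sketch below selects features via an l2,1-regularized multi-task least-squares objective, a standard formulation in which features whose weight rows are large across all tasks are kept. This is a minimal sketch of the general technique under that assumption, not the paper's exact objective; the names and toy data are hypothetical.

```python
import numpy as np

def l21_multitask_select(Xs, ys, lam=0.1, lr=1e-3, iters=500, k=20):
    """Select features shared across tasks (speakers) via an
    l2,1-regularized multi-task least-squares objective (illustrative,
    not the paper's exact formulation).
    Xs, ys: lists of per-task feature matrices / label vectors."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        # gradient step on the smooth least-squares part, per task
        for t in range(T):
            W[:, t] -= lr * (Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]))
        # proximal step for the l2,1 penalty (row-wise shrinkage)
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W *= np.maximum(0.0, 1.0 - lr * lam / (norms + 1e-12))
    # features with the largest row norms are shared discriminative ones
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]

# toy usage: 3 "speakers", 100 samples each, 50 features
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((100, 50)) for _ in range(3)]
ys = [x[:, 0] + 0.5 * x[:, 3] + 0.1 * rng.standard_normal(100) for x in Xs]
print(l21_multitask_select(Xs, ys, k=5))
```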
Original/Review Paper
N. Taghvaei; B. Masoumi; M. R. Keyvanpour
Abstract
In general, humans are very complex organisms, and therefore research into their various dimensions and aspects, including personality, has become an attractive subject of study. With the advent of technology, the emergence of a new kind of communication in the context of social networks has also given a new form of social communication to humans, and the recognition and categorization of people in this new space have become a hot research topic that many researchers have tackled. In this paper, considering the Big Five personality characteristics of individuals, a categorization of related work is first proposed; then a hybrid framework based on Fuzzy Neural Networks (FNN) along with Deep Neural Networks (DNN) is proposed that improves the accuracy of personality recognition by combining different FNN classifiers with a DNN classifier in a proposed two-stage decision-fusion scheme. Finally, a simulation of the proposed approach is carried out. The proposed approach uses the structural features of Social Network Analysis (SNA), along with linguistic analysis (LA) features extracted from descriptions of individuals' activities, and is compared with previous similar research. The results clearly illustrate the performance improvement of the proposed framework, which reaches up to 83.2% average accuracy on the myPersonality dataset.
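A minimal sketch of a generic two-stage decision fusion, assuming each classifier outputs class probabilities: stage one averages the FNN ensemble, stage two blends the result with the DNN output. The blending weight `alpha` and the toy data are hypothetical; the paper's actual fusion rules may differ.

```python
import numpy as np

def two_stage_fusion(fnn_probs, dnn_probs, alpha=0.5):
    """Generic two-stage decision fusion (illustrative, not the paper's
    exact scheme): stage 1 averages several FNN classifiers' class
    probabilities; stage 2 blends the result with the DNN's output.
    fnn_probs: array (n_fnn_classifiers, n_samples, n_classes)
    dnn_probs: array (n_samples, n_classes)
    alpha: hypothetical blending weight for the FNN ensemble."""
    stage1 = fnn_probs.mean(axis=0)                    # fuse FNN ensemble
    stage2 = alpha * stage1 + (1 - alpha) * dnn_probs  # fuse with DNN
    return stage2.argmax(axis=1)                       # final class decision

# toy usage: 3 FNN classifiers, 4 samples, 5 personality classes
rng = np.random.default_rng(1)
fnn = rng.dirichlet(np.ones(5), size=(3, 4))
dnn = rng.dirichlet(np.ones(5), size=4)
print(two_stage_fusion(fnn, dnn))
```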
Original/Review Paper
H. Kamali Ardakani; Seyed A. Mousavinia; F. Safaei
Abstract
Stereo machine vision can be used as a space sampling technique, and the camera parameters and configuration can effectively change the number of samples in each volume of space, called the Space Sampling Density (SSD). Using the concept of voxels, this paper presents a method to optimize the geometric configuration of the cameras to maximize the SSD, which means minimizing the voxel volume and reducing the uncertainty in localizing an object in 3D space. Each pixel's field of view (FOV) is considered as a skew pyramid. The uncertainty region is created from the intersection of the two pyramids associated with each of the cameras. Then, the mathematical equation of the uncertainty region is developed based on the correspondence field as a criterion for the localization error, including depth error as well as X- and Y-axis errors. This field is completely dependent on the internal and external parameters of the cameras. Given the mathematical equation of the localization error, the optimization of the camera configuration in a stereo vision system is addressed. Finally, the validity of the proposed method is examined through simulation and empirical results. These results show that the localization error is significantly decreased in the optimized camera configuration.
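For intuition on why camera configuration drives localization uncertainty, the snippet below uses the classical first-order depth-error approximation for a rectified stereo pair, dZ ≈ Z²·Δd / (f·B); this is a textbook simplification for illustration, not the paper's full skew-pyramid voxel model.

```python
def depth_uncertainty(Z, f_px, baseline, disp_err=1.0):
    """First-order depth-error approximation for a rectified stereo
    pair (a classical simplification, not the paper's voxel model):
    depth Z = f*B/d, so dZ ~= Z^2 * dd / (f * B).
    Z: depth (m); f_px: focal length (pixels); baseline: B (m);
    disp_err: disparity uncertainty (pixels)."""
    return (Z ** 2) * disp_err / (f_px * baseline)

# example: widening the baseline from 0.1 m to 0.3 m shrinks the
# depth uncertainty at 5 m by a factor of 3
for B in (0.1, 0.3):
    print(B, depth_uncertainty(Z=5.0, f_px=800, baseline=B))
```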
Original/Review Paper
A. H Safari-Bavil; S. Jabbehdari; M. Ghobaei-Arani
Abstract
Generally, quality assurance is a specific concern in computer networks. The conventional computer networks with hierarchical structures used in organizations are formed from nodes of Ethernet switches within a tree structure. OpenFlow is one of the fundamental protocols of software-defined networks (SDNs) and provides direct access to, and modification of, the forwarding plane of network equipment such as switches and routers, both physical and virtual. The lack of an open interface to the forwarding plane has led to the advent of integrated, closed equipment in current networks. This study proposes a solution to reduce traffic through correct placement of virtual machines while maintaining their security. The proposed solution is based on the moth-flame optimization algorithm and has been evaluated. The obtained results indicate the superiority of the proposed method.
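A minimal sketch of the moth-flame optimization (MFO) loop on a continuous toy objective: moths spiral around the best solutions found (flames), and the flame count shrinks over iterations. The paper's actual VM-placement cost and security constraints would replace the toy sphere function used here; the parameter names are illustrative.

```python
import numpy as np

def mfo_minimize(cost, dim, n_moths=20, iters=100, lb=-10, ub=10, b=1.0):
    """Minimal moth-flame optimization sketch on a continuous
    objective (a simplified variant for illustration)."""
    rng = np.random.default_rng(42)
    moths = rng.uniform(lb, ub, (n_moths, dim))
    for it in range(iters):
        fit = np.apply_along_axis(cost, 1, moths)
        flames = moths[np.argsort(fit)]           # best moths become flames
        n_flames = round(n_moths - it * (n_moths - 1) / iters)
        for i in range(n_moths):
            j = min(i, n_flames - 1)              # flame assigned to moth i
            D = np.abs(flames[j] - moths[i])
            t = rng.uniform(-1, 1, dim)
            # logarithmic-spiral update around the assigned flame
            moths[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j]
        moths = np.clip(moths, lb, ub)
    fit = np.apply_along_axis(cost, 1, moths)
    return moths[fit.argmin()], fit.min()

# toy usage: sphere function as a stand-in placement cost
best, val = mfo_minimize(lambda x: float(np.sum(x ** 2)), dim=5)
print(best.round(3), val)
```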
Original/Review Paper
K. Kiani; R. Hematpour; R. Rastgoo
Abstract
Image colorization is an interesting yet challenging task due to the descriptive nature of obtaining a natural-looking color image from any grayscale image. To tackle this challenge and also have a fully automatic procedure, we propose a Convolutional Neural Network (CNN)-based model to benefit from the impressive ability of CNNs in image processing tasks. To this end, we propose a deep model for automatic grayscale image colorization. Harnessing convolutional pre-trained models, we fuse three pre-trained models, VGG16, ResNet50, and Inception-v2, to improve the model performance. The average of the three model outputs is used to obtain richer features in the model. The fused features are fed to an encoder-decoder network to obtain a color image from a grayscale input image. We perform a step-by-step analysis of different pre-trained models and fusion methodologies to include a more accurate combination of these models in the proposed model. Results on the LFW and ImageNet datasets confirm the effectiveness of our model compared to state-of-the-art alternatives in the field.
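A sketch of the fusion front end in tf.keras, assuming the three backbones' globally pooled features are projected to a common width and averaged before being handed to an encoder-decoder; InceptionV3 is used here as a stand-in for Inception-v2 (which tf.keras does not ship), and the layer sizes are illustrative, not the paper's architecture.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

def build_fused_encoder(size=224, feat_dim=512):
    """Average-fuse three frozen ImageNet backbones over a grayscale
    input (illustrative front end; the decoder would follow)."""
    gray = layers.Input(shape=(size, size, 1))          # grayscale input
    rgb = layers.Concatenate()([gray, gray, gray])      # replicate channels
    feats = []
    for Backbone in (VGG16, ResNet50, InceptionV3):
        bb = Backbone(include_top=False, weights="imagenet", pooling="avg")
        bb.trainable = False                            # frozen extractor
        feats.append(layers.Dense(feat_dim)(bb(rgb)))   # project to common dim
    fused = layers.Average()(feats)                     # average fusion
    return Model(gray, fused)

encoder = build_fused_encoder()
print(encoder.output_shape)   # (None, 512): fused features for the decoder
```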
Original/Review Paper
Z. Shahpar; V. Khatibi; A. Khatibi Bardsiri
Abstract
Software effort estimation plays an important role in software project management, and analogy-based estimation (ABE) is the most common method used for this purpose. ABE estimates the effort required for a new software project based on its similarity to previous projects. The similarity between projects is evaluated based on a set of project features, each of which has a particular effect on the degree of similarity between projects and thus on the effort estimate. The present study examines a hybrid PSO-SA approach for feature weighting in analogy-based software project effort estimation. The proposed approach was implemented and tested on two well-known datasets of software projects. The performance of the proposed model was compared with other optimization algorithms based on the MMRE, MdMRE, and PRED(0.25) measures. The results showed that weighted ABE models provide more accurate effort estimates than unweighted ABE models, and that the PSO-SA hybrid approach leads to better and more accurate results compared with the other weighting approaches on both datasets.
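A minimal sketch of weighted ABE: past projects are retrieved under a weighted Euclidean distance and their efforts averaged, with MMRE as the fitness a PSO-SA search over the weights would minimize. The weights and toy data below are hypothetical, and the optimizer itself is omitted.

```python
import numpy as np

def abe_estimate(X_hist, effort_hist, x_new, w, k=3):
    """Analogy-based estimation with feature weights w: retrieve the
    k most similar past projects under a weighted Euclidean distance
    and average their efforts. (The PSO-SA search over w is not shown;
    any optimizer minimizing MMRE on historical data could supply it.)"""
    d = np.sqrt(((X_hist - x_new) ** 2 * w).sum(axis=1))
    return effort_hist[np.argsort(d)[:k]].mean()

def mmre(actual, predicted):
    """Mean magnitude of relative error, a typical fitness for
    tuning the feature weights."""
    return np.mean(np.abs(actual - predicted) / actual)

# toy usage with hypothetical learned weights
rng = np.random.default_rng(7)
X = rng.random((30, 4))
effort = 10 + 50 * X[:, 0] + 5 * X[:, 2]
w = np.array([0.8, 0.05, 0.1, 0.05])
print(abe_estimate(X, effort, X[0], w, k=3))
```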
Original/Review Paper
A. Hasan-Zadeh; F. Asadi; N. Garbazkar
Abstract
For an economic review of food prices in May 2019, to determine the trend of rising or falling prices compared to previous periods, we considered the prices of food items at that time. The types of items consumed during specific periods in urban areas and in the whole country are selected for our statistical analysis. Among the various methods of statistical modelling and prediction, and in a new approach, we modeled the data using data mining techniques consisting of decision tree methods, association rules, and Bayes' rule. Then, prediction, validation, and standardization of the validation accuracy are performed on them. The results of data validation at the urban and national levels, along with the results of standardizing the validation accuracy at both levels, are presented with the desired accuracy.
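A small illustration of two of the named techniques (a decision tree and a Bayesian classifier) with scikit-learn on hypothetical price-trend data; the features, labels, and thresholds are invented for the sketch, and association-rule mining would require a separate Apriori implementation (e.g., mlxtend) over discretized prices.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# hypothetical setup: predict whether an item's price rose (1) or
# fell (0) from numeric features of earlier periods
rng = np.random.default_rng(3)
X = rng.random((200, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0.6).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (DecisionTreeClassifier(max_depth=4), GaussianNB()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, accuracy_score(y_te, pred))
```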
Original/Review Paper
Oladosu Oladimeji; Olayanju Oladimeji
Abstract
Breast cancer is the second major cause of death and accounts for 16% of all cancer deaths worldwide. Most methods of detecting breast cancer, such as mammography, are very expensive and difficult to interpret. There are also limitations such as cumulative radiation exposure, over-diagnosis, and false positives and negatives in women with dense breasts, which pose certain uncertainties in high-risk populations. The objective of this study is to detect breast cancer through blood analysis data using classification algorithms, which can serve as a complement to these expensive methods. High-ranking features were extracted from the dataset. The KNN, SVM, and J48 algorithms were used as the training platform to classify 116 instances. Furthermore, 10-fold cross-validation and holdout procedures were used, coupled with changing the random seed. The results showed that the KNN algorithm has the highest and best accuracy, 89.99% and 85.21% for the cross-validation and holdout procedures respectively. It is followed by J48 with 84.65% and 75.65% for the two procedures respectively, while SVM achieved 77.58% and 68.69% respectively. Although it was also discovered that blood glucose level is a major determinant in detecting breast cancer, it has to be combined with other attributes to make a decision, because of other health issues such as diabetes. With the results obtained, women are advised to have regular check-ups, including blood analysis, in order to know which blood components need to be worked on to prevent breast cancer, based on the model generated in this study.
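A sketch of the evaluation protocol with scikit-learn, using its built-in breast-cancer dataset as a stand-in for the 116-instance blood-analysis data and DecisionTreeClassifier (CART) as a stand-in for J48 (C4.5); varying `random_state` mirrors the paper's changing of the random seed.

```python
from sklearn.datasets import load_breast_cancer   # stand-in dataset
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier   # CART as a J48 stand-in

X, y = load_breast_cancer(return_X_y=True)
# 10-fold cross-validation; change random_state to vary the seed
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
for clf in (KNeighborsClassifier(), SVC(), DecisionTreeClassifier()):
    pipe = make_pipeline(StandardScaler(), clf)   # scale, then classify
    scores = cross_val_score(pipe, X, y, cv=cv)
    print(type(clf).__name__, scores.mean().round(4))
```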
Original/Review Paper
A. Hadian; M. Bagherian; B. Fathi Vajargah
Abstract
Background: One of the most important concepts in cloud computing is modeling the problem as a multi-layer optimization problem, which leads to cost savings in designing and operating networks. Previous researchers have modeled the two-layer network operating problem as an Integer Linear Programming (ILP) problem and, due to the computational complexity of solving it jointly, suggested a two-stage procedure that considers one layer at each stage. Aim: In this paper, considering the ILP model and using some of its properties, we propose a heuristic algorithm for solving the model jointly, considering unicast, multicast, and anycast flows simultaneously. Method: We first sort demands in decreasing order and use a greedy method to realize the demands in that order. Due to the high computational complexity of the ILP model, the proposed heuristic algorithm is suitable for networks with a large number of nodes; in this regard, various examples are solved with the CPLEX and MATLAB software. Results: Our simulation results show that even for small values of M and N, CPLEX fails to find the optimal solution, while AGA finds a near-optimal solution quickly. Conclusion: The proposed greedy algorithm can solve large-scale networks approximately in polynomial time, and its approximation is reasonable.
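A minimal sketch of the greedy step described under Method: demands are sorted in decreasing order and each is realized on a candidate path only if every link retains enough residual capacity. All names and the toy instance are hypothetical, and the real ILP's unicast/multicast/anycast constraints are richer.

```python
def greedy_realize(demands, paths, capacity):
    """Greedy sketch: sort demands by volume (decreasing) and realize
    each on its candidate path if every link on the path still has
    residual capacity. `paths` maps demand id -> list of links."""
    residual = dict(capacity)
    realized = []
    for name, vol in sorted(demands.items(), key=lambda kv: -kv[1]):
        links = paths[name]
        if all(residual[l] >= vol for l in links):   # feasibility check
            for l in links:
                residual[l] -= vol                   # commit the demand
            realized.append(name)
    return realized, residual

# toy instance: d2 is skipped because link a-b lacks residual capacity
demands = {"d1": 7, "d2": 4, "d3": 6}
paths = {"d1": ["a-b"], "d2": ["a-b", "b-c"], "d3": ["b-c"]}
capacity = {"a-b": 10, "b-c": 8}
print(greedy_realize(demands, paths, capacity))
```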
Original/Review Paper
L. Falahiazar; V. Seydi; M. Mirzarezaee
Abstract
Many real-world problems have multiple conflicting objectives, and optimization among contradictory objectives is very difficult. In recent years, Multi-objective Evolutionary Algorithms (MOEAs) have shown great performance in optimizing such problems, so the development of MOEAs will always lead to the advancement of science. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is considered one of the most widely used evolutionary algorithms, and many MOEAs have emerged to address NSGA-II's problems, such as the Sequential Multi-Objective Algorithm (SEQ-MOGA). SEQ-MOGA presents a new survival selection that arranges individuals systematically, so that the chromosomes can cover the entire Pareto front region. In this study, the Archive Sequential Multi-Objective Algorithm (ASMOGA) is proposed to develop and improve SEQ-MOGA. ASMOGA uses an archive technique to save the history of the search procedure, so that diversity in the decision space is adequately maintained. To demonstrate the performance of ASMOGA, it is compared with several state-of-the-art MOEAs on benchmark functions and the I-beam design problem. The optimization results are evaluated by performance metrics such as hypervolume, generational distance, spacing, and the t-test (a statistical test); based on the results, the superiority of the proposed algorithm is clearly identified.
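To illustrate the archive technique in general form, the sketch below maintains an external archive of non-dominated objective vectors (for minimization); it shows only a generic dominance filter, not ASMOGA's exact maintenance and diversity rules.

```python
import numpy as np

def update_archive(archive, candidates):
    """Keep only non-dominated objective vectors (all objectives
    minimized) from the union of the current archive and candidates;
    a generic sketch of an external-archive update."""
    pool = np.vstack([archive, candidates]) if len(archive) else np.asarray(candidates)
    keep = []
    for i, p in enumerate(pool):
        # p is dominated if some q is <= p in all objectives and < in one
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pool) if j != i
        )
        if not dominated:
            keep.append(i)
    return pool[np.unique(keep)]

# toy usage on 2-objective points: [3, 3] is dominated by [2, 2]
arch = update_archive(np.empty((0, 2)), [[1, 5], [2, 2], [3, 3], [5, 1]])
print(arch)   # the non-dominated front: [[1,5],[2,2],[5,1]]
```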
Review Article
N. Nowrozian; F. Tashtarian
Abstract
The battery power limitation of sensor nodes (SNs) is a major challenge for wireless sensor networks (WSNs) that affects network survival. Thus, optimizing the energy consumption of the SNs, as well as increasing the lifetime of the SNs and thereby extending the lifetime of the WSN, is of crucial importance in these types of networks. Mobile chargers (MCs) and wireless power transfer (WPT) technologies have long played an important role in WSNs, and much research has been done in recent decades on how to use MCs to enhance the performance of WSNs. In this paper, we first review the application of MC and WPT technologies in WSNs. Then, the dispatching issues of the MC in its role as a power transmitter in WSNs are considered and the existing approaches are categorized, with the purposes and limitations of MC dispatching studied. An overview of the existing articles is then presented and, to aid understanding, tables and figures are offered that summarize the existing methods. We examine them along different dimensions, such as their advantages and disadvantages. Finally, the future prospects of MCs are discussed.
Original/Review Paper
F. Rismanian Yazdi; M. Hosseinzadeh; S. Jabbehdari
Abstract
Wireless body area networks (WBANs) are innovative technologies that are anticipated to greatly advance healthcare monitoring systems. A WBAN consists of biomedical sensors that can be worn on or implanted in the body. The sensors monitor vital signs, then process the data and transmit them to a central server. Biomedical sensors are limited in energy resources and need an improved design for managing energy consumption. Therefore, DTEC-MAC (Diverse Traffic with Energy Consumption MAC) is proposed, which is based on the priority of data classification in the cluster nodes and delivers medical data with energy management. The proposed method uses fuzzy logic based on the distance to the sink, the remaining energy, and the data length to select the cluster head. MATLAB software was used to simulate the method, and it was compared with similar methods called iM-SIMPLE, M-ATTEMPT, and ERP. The simulation results indicate that it works better at extending the lifetime, guaranteeing minimum energy consumption and packet delivery rates, and maximizing throughput.
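A simplified illustration of fuzzy cluster-head selection from the three inputs named in the abstract (distance to the sink, remaining energy, data length); the membership functions, rule weights, and normalization below are invented for the sketch and are not DTEC-MAC's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cluster_head_score(dist_to_sink, energy, data_len):
    """Fuzzy chance-of-being-cluster-head score (illustrative rules
    only). Inputs are assumed normalized to [0, 1]."""
    near = tri(dist_to_sink, -0.5, 0.0, 0.6)   # "close to sink"
    high_e = tri(energy, 0.4, 1.0, 1.5)        # "high residual energy"
    light = tri(data_len, -0.5, 0.0, 0.7)      # "short data queue"
    # one illustrative rule per input, aggregated by weighted average
    return 0.4 * high_e + 0.35 * near + 0.25 * light

# toy usage: the node with the best fuzzy score becomes cluster head
nodes = {"n1": (0.2, 0.9, 0.3), "n2": (0.7, 0.5, 0.2), "n3": (0.3, 0.8, 0.6)}
print(max(nodes, key=lambda n: cluster_head_score(*nodes[n])))
```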