A.H. Damia; M. Esnaashari; M.R. Parvizimosaed
Abstract
In structural software testing, test data generation is essential. The problem of generating test data is a search problem, and search algorithms can be used to solve it. The genetic algorithm is one of the most widely used algorithms in this field, and adjusting its parameters helps to increase its effectiveness. In this paper, an Adaptive Genetic Algorithm (AGA) is used to maintain population diversity for test data generation based on the path coverage criterion; it calculates the recombination and mutation rates from the similarity between chromosomes and the chromosome fitness values during each run of the algorithm. Experiments have shown that this method generates test data faster than other versions of the genetic algorithm used by others.
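The rate-adaptation idea can be sketched in a few lines. The similarity measure, base rates, and scaling constants below are illustrative assumptions, not the paper's exact formulas:

```python
def similarity(a, b):
    """Fraction of matching genes between two chromosomes (assumed measure)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def adaptive_rates(parent_a, parent_b, fitness, max_fitness,
                   pc_base=0.9, pm_base=0.1):
    """Lower the crossover rate and raise the mutation rate as parents grow
    similar, mutating low-fitness chromosomes more aggressively.
    The 0.5 and 0.2 scaling factors are illustrative only."""
    sim = similarity(parent_a, parent_b)
    rel_fit = fitness / max_fitness if max_fitness else 0.0
    pc = pc_base * (1.0 - 0.5 * sim)            # similar parents -> less crossover
    pm = pm_base + 0.2 * sim * (1.0 - rel_fit)  # similar + unfit -> more mutation
    return pc, pm

pc, pm = adaptive_rates([1, 0, 1, 1], [1, 0, 1, 0], fitness=2.0, max_fitness=4.0)
print(pc, pm)
```

In this sketch, a very similar pair of fit parents keeps a high crossover rate, while a similar pair of unfit parents is mutated harder to restore diversity.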
H. Gholamalinejad; H. Khosravi
Abstract
Optimizers are vital components of deep neural networks that perform weight updates. This paper introduces a new updating method for optimizers based on gradient descent, called whitened gradient descent (WGD). This method is easy to implement, can be used in every optimizer based on the gradient descent algorithm, and does not significantly increase the training time of the network. It smooths the training curve and improves classification metrics. To evaluate the proposed algorithm, we performed 48 different tests on two datasets, Cifar100 and Animals-10, using three network structures: densenet121, resnet18, and resnet50. The experiments show that using the WGD method in gradient-descent-based optimizers improves the classification results significantly. For example, integrating WGD into the RAdam optimizer increased the accuracy of DenseNet from 87.69% to 90.02% on the Animals-10 dataset.
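The abstract does not give WGD's exact transform; as a rough sketch of where a gradient-whitening step would sit in an update rule, one might standardize the raw gradient before the optimizer consumes it (an assumption, not the paper's method):

```python
import numpy as np

def whiten(grad, eps=1e-8):
    """Standardize a gradient tensor to zero mean and unit variance.
    Illustrative stand-in for the paper's WGD transform."""
    g = grad - grad.mean()
    return g / (g.std() + eps)

def sgd_step(weights, grad, lr=0.01):
    """Vanilla SGD with the whitening step applied to the raw gradient."""
    return weights - lr * whiten(grad)

w = np.array([1.0, 2.0, 3.0])
g = np.array([10.0, 20.0, 30.0])
print(sgd_step(w, g))
```

The same `whiten` call could wrap the gradient inside any gradient-based optimizer (SGD, Adam, RAdam), which is the plug-in property the abstract claims.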
A. Omondi; I.A. Lukandu; G.W. Wanyembi
Abstract
Redundant and irrelevant features in high-dimensional data increase the complexity of the underlying mathematical models. It is necessary to conduct pre-processing steps that search for the most relevant features in order to reduce the dimensionality of the data. This study made use of a meta-heuristic search approach that uses lightweight random simulations to balance between the exploitation of relevant features and the exploration of features that have the potential to be relevant. In doing so, the study evaluated how effective the manipulation of the search component in feature selection is at achieving high accuracy with reduced dimensions. A control group experimental design was used to observe factual evidence. The context of the experiment was the high-dimensional data experienced in performance tuning of complex database systems. The Wilcoxon signed-rank test at the .05 level of significance was used to compare repeated classification accuracy measurements on the independent experiment and control group samples. Encouraging results with a p-value < 0.05 were recorded and provided evidence to reject the null hypothesis in favour of the alternative hypothesis, which states that meta-heuristic search approaches are effective in achieving high accuracy with reduced dimensions, depending on the outcome variable under investigation.
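The paired test used above is available directly in SciPy. The accuracy values below are illustrative, not the study's data:

```python
from scipy.stats import wilcoxon

# Paired accuracy measurements: the same classifier scored on the
# meta-heuristic-selected features (experiment) vs. the full feature
# set (control). Numbers are made up for illustration.
experiment = [0.91, 0.93, 0.90, 0.94, 0.92, 0.95, 0.93, 0.91]
control    = [0.85, 0.88, 0.84, 0.86, 0.87, 0.89, 0.85, 0.86]

stat, p = wilcoxon(experiment, control)
print(f"W={stat}, p={p:.4f}")  # reject H0 at the .05 level when p < 0.05
```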
E. Pejhan; M. Ghasemzadeh
Abstract
This research is related to the development of technology in the field of automatic text-to-image generation. In this regard, two main goals are pursued: first, the generated image should look as real as possible; and second, the generated image should be a meaningful description of the input text. Our proposed method is a Multi Sentences Hierarchical GAN (MSH-GAN) for text-to-image generation. In this research project, we have considered two main strategies: 1) produce a higher-quality image in the first step, and 2) use two additional descriptions to improve the original image in the next steps. Our goal is to focus on using more information to generate images with higher resolution by using more than one sentence of input text. We have proposed different models based on GANs and Memory Networks. We have also used a more challenging dataset called ids-ade. This is the first time this dataset has been used in this area. We have evaluated our models based on the IS, FID, and R-precision evaluation metrics. Experimental results demonstrate that our best model performs favorably against basic state-of-the-art approaches like StackGAN and AttGAN.
H.3. Artificial Intelligence
Saiful Bukhori; Muhammad Almas Bariiqy; Windi Eka Y. R; Januar Adi Putra
Abstract
Breast cancer is a disease of abnormal cell proliferation in the breast tissue organs. One method for diagnosing and screening breast cancer is mammography. However, mammography images have limitations: low contrast, high noise, and non-coherence. This research segmented breast cancer images derived from ultrasonography (USG) photos using a Convolutional Neural Network (CNN) with the U-Net architecture. Testing on the CNN model with the U-Net architecture yields the highest Mean Intersection over Union (Mean IoU) value, 77%, in the data scenario with a 70:30 ratio, 100 epochs, and a learning rate of 5x10^-5, while the lowest Mean IoU, 64.4%, occurs in the data scenario with a 90:10 ratio, 50 epochs, and a learning rate of 1x10^-4.
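The Mean IoU metric used to score the segmentations above is straightforward to compute from predicted and ground-truth masks; a minimal sketch:

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean Intersection over Union over the classes present in a
    segmentation mask pair (classes absent from both are skipped)."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 1], [1, 1]])  # tiny toy masks, 0 = background
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target))
```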
H. Sarabi Sarvarani; F. Abdali-Mohammadi
Abstract
Bone age assessment is a method that is constantly used for investigating growth abnormalities, endocrine gland treatment, and pediatric syndromes. Since the advent of digital imaging, for several decades the bone age assessment has been performed by visually examining the ossification of the left hand, usually using the G&P reference method. However, the subjective nature of manual methods, the large number of ossification centers in the hand, and the large changes across ossification stages lead to some difficulties in the evaluation of bone age. Therefore, many efforts have been made to develop image processing methods that automatically extract the main features of the bone formation stages to assess bone age effectively and more accurately. In this paper, a new fully automatic method is proposed to reduce the errors of subjective methods and improve the automatic methods of age estimation. This model was applied to 1400 radiographs of healthy children from 0 to 18 years of age, gathered from 4 continents. The method starts with the extraction of all regions of the hand, the five fingers and the wrist, and independently calculates the age of each region through examination of the joints and growth regions associated with it using CNN networks; it ends with the final age assessment through an ensemble of CNNs. The results indicated that the proposed method has an average assessment accuracy of 81% and performs better than the commercial system currently in use.
H. Khodadadi; V. Derhami
Abstract
A prominent weakness of dynamic programming methods is that they perform operations throughout the entire set of states in a Markov decision process in every updating phase. This paper proposes a novel chaos-based method to solve the problem. For this purpose, a chaotic system is first initialized, and the resultant numbers are mapped onto the environment states through initial processing. In each traverse of the policy iteration method, policy evaluation is performed only once, and only a few states are updated. These states are proposed by the chaotic system. In this method, the policy evaluation and improvement cycle lasts until an optimal policy is formulated in the environment. The same procedure is performed in the value iteration method: only the values of the few states proposed by the chaotic system are updated in each traverse, whereas the values of other states are left unchanged. Unlike the conventional methods, an optimal solution can be obtained in the proposed method by only updating a limited number of states, which are properly distributed all over the environment by the chaotic system. The test results indicate the improved speed and efficiency of chaotic dynamic programming methods in obtaining the optimal solution in different grid environments.
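The scheme can be sketched as asynchronous value iteration driven by a chaotic state proposer. The logistic map, the toy chain MDP, and the update budget below are illustrative choices, not the paper's exact setup:

```python
def logistic_map(x, r=4.0):
    """Chaotic logistic map used here as the state-proposal source."""
    return r * x * (1 - x)

def chaotic_value_iteration(n_states, reward, transition, gamma=0.9,
                            sweeps=200, updates_per_sweep=5, x0=0.3):
    """Asynchronous value iteration: each traverse updates only a few
    states whose indices come from a chaotic sequence mapped onto the
    state set, instead of sweeping the entire state space."""
    V = [0.0] * n_states
    x = x0
    for _ in range(sweeps):
        for _ in range(updates_per_sweep):
            x = logistic_map(x)
            s = min(int(x * n_states), n_states - 1)  # chaos value -> state index
            V[s] = max(reward(s, a) + gamma * V[transition(s, a)]
                       for a in (0, 1))
    return V

# Toy 5-state chain: action 0 steps left, action 1 steps right,
# reward 1 for any move that lands on the rightmost state.
N = 5
step = lambda s, a: max(s - 1, 0) if a == 0 else min(s + 1, N - 1)
r = lambda s, a: 1.0 if step(s, a) == N - 1 else 0.0
V = chaotic_value_iteration(N, r, step)
print([round(v, 3) for v in V])
```

Because the logistic map visits the whole interval, every state is still updated infinitely often in the limit, which is the standard condition for asynchronous value iteration to converge.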
H.3. Artificial Intelligence
Akram Pasandideh; Mohsen Jahanshahi
Abstract
Link prediction (LP) has become a hot topic in the data mining, machine learning, and deep learning community. This study aims to implement a bibliometric analysis to find the current status of LP studies and investigate them from different perspectives. The present study provides a Scopus-based bibliometric overview of the LP studies landscape since 1987, when LP studies were published for the first time. Various kinds of analysis, including document, subject, and country distribution, are applied. Moreover, author productivity, citation analysis, and keyword analysis are used, and Bradford's law is applied to discover the main journals in this field. Most documents in the field were published at conferences. The majority of LP documents have been published in the computer science and mathematics fields. So far, China has been at the forefront of publishing countries. In addition, the most active sources of LP publications are Lecture Notes in Computer Science (including its subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) and IEEE Access. The keyword analysis demonstrates that while social networks attracted attention in the early period, knowledge graphs have attracted more attention recently. Since the LP problem has recently been approached using machine learning (ML), the current study may inform researchers who concentrate on ML techniques. This is the first bibliometric study of the "link prediction" literature and provides a broad landscape of the field.
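Bradford's law partitions journals, ranked by productivity, into zones that each hold roughly a third of the articles; the small first zone contains the core journals. A minimal sketch with made-up counts (not the study's data):

```python
def bradford_zones(journal_counts, n_zones=3):
    """Rank journals by article count and split them into Bradford zones
    holding roughly equal numbers of articles; the few journals in the
    first zone are the field's core sources."""
    ranked = sorted(journal_counts.items(), key=lambda kv: -kv[1])
    total = sum(journal_counts.values())
    zones, acc, z = [[] for _ in range(n_zones)], 0, 0
    for name, count in ranked:
        zones[z].append(name)
        acc += count
        if acc >= total * (z + 1) / n_zones and z < n_zones - 1:
            z += 1
    return zones

# Illustrative counts: two prolific venues plus a long tail.
counts = {"LNCS": 50, "IEEE Access": 25, "J3": 15, "J4": 10,
          **{f"J{i}": 5 for i in range(5, 15)}}
zones = bradford_zones(counts)
print([len(z) for z in zones])  # the core zone has the fewest journals
```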
Z. Anari; A. Hatamlou; B. Anari; M. Masdari
Abstract
Transactions in web data often consist of quantitative data, suggesting that fuzzy set theory can be used to represent such data. The time spent by users on each web page is one type of web data; it was regarded as a trapezoidal membership function (TMF) and can be used to evaluate user browsing behavior. The quality of mining fuzzy association rules depends on the membership functions, and since the membership functions of each web page differ from those of other web pages, automatically finding the number and positions of TMFs is significant. In this paper, a new reinforcement-based optimization approach called LA-OMF was proposed to find both the number and positions of TMFs for fuzzy association rules. In the proposed algorithm, the centers and spreads of the TMFs were considered as parameters of the search space, and a new representation using learning automata (LA) was proposed to optimize these parameters. The performance of the proposed approach was evaluated, and the results were compared with the results of other algorithms on a real dataset. Experiments on datasets of different sizes confirmed that the proposed LA-OMF improved the efficiency of mining fuzzy association rules by extracting optimized membership functions.
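For reference, a trapezoidal membership function is fully determined by four points: the support [a, d] and the core [b, c]. A minimal sketch, with an illustrative "medium viewing time" region:

```python
def tmf(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on the core
    [b, c], and linear on the two shoulders."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Membership of a 25-second page visit in an illustrative "medium" region:
print(tmf(25, 10, 20, 30, 40))  # inside the core -> 1.0
```

The centers and spreads that LA-OMF searches over correspond to shifting these four points per web page.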
Seyyed A. Hoseini; P. Kabiri
Abstract
When a camera moves in an unfamiliar environment, for many computer vision and robotic applications it is desirable to estimate camera position and orientation. Camera tracking is perhaps the most challenging part of Visual Simultaneous Localization and Mapping (Visual SLAM) and Augmented Reality problems. This paper proposes a feature-based approach for tracking a hand-held camera that moves within an indoor place with a maximum depth of around 4-5 meters. In the first few frames the camera observes a chessboard as a marker to bootstrap the system and construct the initial map. Thereafter, upon arrival of each new frame, the algorithm pursues the camera tracking procedure. This procedure is carried out in a framework that operates using only the extracted visible natural feature points and the initial map. The constructed initial map is extended as the camera explores new areas. In addition, the proposed system employs a hierarchical method based on the Lucas-Kanade registration technique to track FAST features. For each incoming frame, the 6-DOF camera pose parameters are estimated using an Unscented Kalman Filter (UKF). The proposed algorithm is tested on real-world videos, and the performance of the UKF is compared against other camera tracking methods. Two evaluation criteria (relative pose error and absolute trajectory error) are used to assess the performance of the proposed algorithm. Accordingly, the reported experimental results show the accuracy and effectiveness of the presented approach. The conducted experiments also indicate that the type of extracted feature points has no significant effect on the precision of the proposed approach.
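Of the two criteria above, the absolute trajectory error is the easier to sketch: rigidly align the estimated camera positions to the ground truth (Kabsch/Horn alignment), then take the RMSE of the residuals. The trajectories below are toy data:

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """ATE: RMSE of translational differences after rigidly aligning the
    estimated positions to the ground-truth positions (Kabsch alignment,
    rotation + translation only, no scale)."""
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    mu_g, mu_e = gt.mean(0), est.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # rotation mapping est onto gt
    aligned = (R @ (est - mu_e).T).T + mu_g
    return float(np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean()))

gt = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
est = gt @ Rz.T + np.array([5.0, 2.0, 1.0])  # same path, different frame
print(absolute_trajectory_error(gt, est))    # ~0: the frames align exactly
```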
Y. Sharafi; M. Teshnelab; M. Ahmadieh Khanesar
Abstract
A new multi-objective evolutionary optimization algorithm is presented based on the competitive optimization algorithm (COOA) to solve multi-objective optimization problems (MOPs). The competitive optimization algorithm is inspired by the natural competition among animals such as birds, cats, bees, and ants. The main contributions of the present study are as follows: First, a novel method is presented to prune the external archive while keeping the diversity of the Pareto front (PF). Second, a hybrid approach of powerful mechanisms such as opposition-based learning and chaotic maps is used to maintain the diversity in the search space of the initial population. Third, a novel method is provided to transform a multi-objective optimization problem into a single-objective optimization problem. The simulation results of the proposed algorithm were compared with those of some well-known optimization algorithms. The comparisons show that the proposed approach can be a better candidate to solve MOPs.
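Of the diversity mechanisms listed, opposition-based learning is the simplest to sketch: for every random candidate x, also evaluate its opposite low + high - x and keep the fitter of the pair. The objective and bounds below are illustrative:

```python
import random

def opposition_init(pop_size, dim, low, high, fitness, seed=1):
    """Opposition-based learning initialization (minimization assumed):
    for each random candidate, also form its opposite and keep the
    fitter of the two."""
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        x = [rng.uniform(low, high) for _ in range(dim)]
        x_opp = [low + high - xi for xi in x]
        population.append(min((x, x_opp), key=fitness))
    return population

sphere = lambda v: sum(t * t for t in v)     # toy objective to minimize
pop = opposition_init(pop_size=10, dim=3, low=-5.0, high=5.0, fitness=sphere)
```

This guarantees the initial population is never worse than plain random sampling, at the cost of one extra fitness evaluation per candidate.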
F. Jafarinejad
Abstract
In recent years, new word embedding methods have clearly improved the accuracy of NLP tasks. A review of the progress of these methods shows that the complexity of these models and the number of their training parameters grow continually. Therefore, there is a need for methodological innovation in presenting new word embedding methodologies. Most current word embedding methods use a large corpus of unstructured data to train the semantic vectors of words. This paper addresses the basic idea of utilizing the structure of structured data to introduce embedding vectors; the need for high processing power, a large amount of processing memory, and long processing time is thereby reduced by exploiting such structures and the conceptual knowledge that lies in them. For this purpose, a new embedding vector, Word2Node, is proposed. It uses a well-known structured resource, WordNet, as a training corpus, under the hypothesis that the graph structure of WordNet contains valuable linguistic knowledge that should not be ignored, and provides cost-effective, small-sized embedding vectors. The Node2Vec graph embedding method allows us to benefit from this powerful linguistic resource. Evaluation of this idea on two tasks, word similarity and text classification, has shown that this method performs the same as or better than the word embedding method embedded in it (Word2Vec). This result is achieved while the required training data is reduced by about 50,000,000%. These results give a view of the capacity of structured data to improve the quality of existing embedding methods and the resulting vectors.
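The first stage of a Node2Vec-style pipeline is generating node "sentences" by random walks over the graph; those walks are then fed to a skip-gram model exactly like text. A minimal sketch with uniform walks (Node2Vec proper biases them with its p/q parameters) over a tiny stand-in for a WordNet fragment:

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=42):
    """Generate fixed-length random walks from every node of a graph given
    as an adjacency dict; each walk acts as a training 'sentence'."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Tiny illustrative hypernym fragment, not the real WordNet graph.
adj = {
    "dog.n.01": ["canine.n.02"],
    "canine.n.02": ["carnivore.n.01", "dog.n.01"],
    "carnivore.n.01": ["canine.n.02"],
}
for w in random_walks(adj):
    print(" ".join(w))
```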
H.3. Artificial Intelligence
Ali Zahmatkesh Zakariaee; Hossein Sadr; Mohamad Reza Yamaghani
Abstract
Machine learning (ML) is a popular tool in healthcare, as it can help to analyze large amounts of patient data such as medical records, predict diseases, and identify early signs of cancer. Gastric cancer starts in the cells lining the stomach and is known as the 5th most common cancer worldwide. Therefore, predicting the survival of patients, checking their health status, and detecting their risk of gastric cancer in the early stages can be very beneficial. With the help of machine learning methods, this is possible without the need for any invasive procedures, which can be useful for both patients and physicians in making informed decisions. Accordingly, a new hybrid machine-learning-based method for detecting the risk of gastric cancer is proposed in this paper. The proposed model is compared with traditional methods; based on the empirical results, the proposed method not only outperforms existing methods with an accuracy of 98%, but the results also suggest that gastric cancer can be one of the most important consequences of H. pylori infection. Additionally, it can be concluded that lifestyle and dietary factors can heighten the risk of gastric cancer, especially among individuals who frequently consume fried foods and suffer from chronic atrophic gastritis and stomach ulcers. This risk is further exacerbated in individuals with limited fruit and vegetable intake and high salt consumption.
J. Tayyebi; E. Hosseinzadeh
Abstract
The fuzzy c-means clustering algorithm is a useful tool for clustering, but it is convenient only for crisp, complete data. In this article, an enhancement of the algorithm is proposed which is suitable for clustering trapezoidal fuzzy data. A linear ranking function is used to define a distance for trapezoidal fuzzy data. Then, as an application, a method based on the proposed algorithm is presented to cluster incomplete fuzzy data. The method substitutes each missing attribute with a trapezoidal fuzzy number determined using the corresponding attribute of the q nearest neighbors. Comparisons and analysis of the experimental results demonstrate the capability of the proposed method.
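A ranking-based distance for trapezoidal fuzzy numbers can be sketched in a few lines. The specific linear ranking function below (the average of the four defining points) is a common textbook choice, not necessarily the paper's exact one:

```python
def rank(t):
    """A linear ranking function for a trapezoidal fuzzy number
    t = (a, b, c, d): the average of its four defining points."""
    a, b, c, d = t
    return (a + b + c + d) / 4.0

def distance(t1, t2):
    """Ranking-based distance between two trapezoidal fuzzy numbers,
    the quantity a fuzzy c-means variant would minimize."""
    return abs(rank(t1) - rank(t2))

print(distance((1, 2, 3, 4), (2, 3, 4, 5)))  # -> 1.0
```

Any linear ranking function fits this slot; the clustering algorithm only needs the induced scalar distance.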
E. Feli; R. Hosseini; S. Yazdani
Abstract
In Vitro Fertilization (IVF) is one of the scientifically known methods of infertility treatment. This study aimed at improving the performance of predicting the success of IVF using machine learning and its optimization through evolutionary algorithms. A Multilayer Perceptron (MLP) neural network was proposed to classify the infertility dataset, and a genetic algorithm was used to improve the performance of the MLP model. The proposed model was applied to a dataset including 594 eggs from 94 patients undergoing IVF, of which 318 were good-quality embryos and 276 were lower-quality embryos. For performance evaluation of the MLP model, an ROC curve analysis was conducted and 10-fold cross-validation was performed. The results revealed that this intelligent model has high efficiency, with an accuracy of 96% for the Multilayer Perceptron neural network, which is promising compared to counterpart methods.
E. Zarei; N. Barimani; G. Nazari Golpayegani
Abstract
Cardiac arrhythmias are known as one of the most dangerous cardiac diseases. Applying intelligent algorithms in this area reduces the ECG signal processing time required of the physician as well as the probable mistakes caused by fatigue of the specialist. The purpose of this study is to introduce an intelligent algorithm for the separation of three cardiac arrhythmias by using chaos features of the ECG signal and combining three of the most common classifiers in this signal processing area. First, ECG signals related to three cardiac arrhythmias, Atrial Fibrillation, Ventricular Tachycardia, and Post Supra Ventricular Tachycardia, along with the normal cardiac signal, were gathered from the MIT-BIH arrhythmia database. Then, chaos features describing the nonlinear dynamics of the ECG signal were extracted by calculating the Lyapunov exponent values and the signal's fractal dimension. Finally, a compound classifier was built by combining a multilayer perceptron neural network, a support vector machine, and K-Nearest Neighbors. The obtained results were compared to classifying methods based on time-domain and time-frequency-domain features, as a proof of the efficacy of the chaos features of the ECG signal. Likewise, to evaluate the efficacy of the compound classifier, each classifier was used both separately and in combination, and the results were compared. The results showed that, using the chaos features of the ECG signal and the compound classifier, cardiac arrhythmias can be classified with an accuracy of 99.1% ± 0.2, a sensitivity of 99.6% ± 0.1, and a specificity of 99.3% ± 0.1.
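A positive largest Lyapunov exponent is the chaos marker such features are built on. For a signal one would use an embedding-based estimator (e.g. Rosenstein's method); the sketch below only illustrates the quantity itself on the logistic map, where the exponent is known analytically (ln 2 for r = 4):

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=10000, burn=100):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the average log absolute derivative along the orbit."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))  # log |f'(x)|
        x = r * x * (1 - x)
    return s / n

print(lyapunov_logistic())  # theory: ln 2 ~ 0.693; positive => chaos
```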
H.6.3.2. Feature evaluation and selection
Farhad Abedinzadeh Torghabeh; Yeganeh Modaresnia; Seyyed Abed Hosseini
Abstract
It has recently become necessary in various data analysis research to find and select relevant features without class labels using Unsupervised Feature Selection (UFS) approaches. Although several open-source toolboxes provide feature selection techniques to reduce redundant features, data dimensionality, and computation costs, these approaches require programming knowledge, which limits their popularity, and they have not adequately addressed unlabeled real-world data. The Automatic UFS Toolbox (Auto-UFSTool) for MATLAB, proposed in this study, is a user-friendly and fully automatic toolbox that utilizes several UFS approaches from the most recent research. It is a collection of 25 robust UFS approaches, most of which were developed within the last five years. Therefore, a clear and systematic comparison of competing methods is feasible without requiring a single line of code. Even users without any previous programming experience may utilize the actual implementation through the Graphical User Interface (GUI). It also provides the opportunity to evaluate the feature selection results and generate graphs that facilitate the comparison of subsets of varying sizes. It is freely accessible in the MATLAB File Exchange repository and includes scripts and source code for each technique. The link to this toolbox is freely available to the general public at: bit.ly/AutoUFSTool
M. Danesh; S. Danesh
Abstract
This paper presents a new method for regression model prediction in an uncertain environment. In practical engineering problems, in order to develop a regression or ANN model for making predictions, the average of a set of repeated observed values is introduced to the model as an input variable. Therefore, the estimated response of the process is also the average of a set of output values, where the variation around the mean is not determinate. However, to provide unbiased and precise estimations, the predictions are required to be correct on average and the spread of the data must be specified. To address this issue, we propose a method based on a fuzzy inference system and genetic and linear programming algorithms. We consider crisp inputs and a symmetrical triangular fuzzy output. The proposed algorithm is applied to fit the fuzzy regression model. In addition, we apply a simulation example and a practical example in the field of machining processes to assess the performance of the proposed method in dealing with practical problems in which the output variables have the nature of uncertainty and imprecision. Finally, we compare the performance of the suggested method with other methods. Based on the examples, the proposed method is verified for prediction. The results show that the proposed method reduces the error values to a minimum level and is more accurate than the Linear Programming (LP) and fuzzy weights with linear programming (FWLP) methods.
P. Farzi; R. Akbari
Abstract
Web services are a technology for defining self-describing, structure-based, and loosely coupled applications. They are accessible all over the web and provide a flexible platform. Although service registries such as Universal Description, Discovery, and Integration (UDDI) provide facilities for users to search for requirements, retrieving the exact results that satisfy users' needs is still a difficult task, since providers and requesters have various views about descriptions, with different explanations. Consequently, one of the most challenging obstacles in the discovery task is how to understand both sides, which is called knowledge-based understanding. This is of immense value for search engines, information retrieval tasks, and even various NLP-based tasks. The goal is to help recognize matching degrees precisely and retrieve the most relevant services more straightforwardly. In this research, we introduce a conceptual similarity method as a new way to facilitate the discovery procedure with less dependency on the provider and user descriptions, in order to reduce the manual intervention of both sides and be more explicit for machines. We provide a comprehensive knowledge-based approach by applying the Latent Semantic Analysis (LSA) model to the ontology scheme, WordNet, and a domain-specific, in-sense context-based similarity algorithm. The evaluation of our similarity method, done on the OWL-S test collection, shows that a sense-context similarity algorithm can boost the disambiguation procedure of descriptions, which leads to conceptual clarity. The proposed method improves the performance of service discovery in comparison with novel keyword-based and semantic-based methods.
H.3. Artificial Intelligence
Amir Mehrabinezhad; Mohammad Teshnelab; Arash Sharifi
Abstract
Due to the growing number of data-driven approaches, especially in artificial intelligence and machine learning, extracting appropriate information from the gathered data with the best performance is a remarkable challenge. The other important aspect of this issue is storage costs. Principal component analysis (PCA) and autoencoders (AEs) are examples of typical feature extraction methods in data science and machine learning that are widely used in various approaches. The current work integrates the advantages of AEs and PCA to present an online supervised feature extraction and selection method. Accordingly, the desired labels for the final model are involved in the feature extraction procedure and embedded in the PCA method as well. Also, stacking the nonlinear autoencoder layers with the PCA algorithm eliminates the kernel selection of traditional kernel PCA methods. Besides the performance improvement shown by the experimental results, the main advantage of the proposed method is that, in contrast with traditional PCA approaches, the model does not require all samples to be available for feature extraction. Compared with previous works, the proposed method can outperform other state-of-the-art methods in terms of accuracy and authenticity for feature extraction.
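For reference, the classical half of this hybrid, plain batch PCA via SVD, looks as follows; it is this need to center and decompose the full sample matrix at once that the online variant above avoids:

```python
import numpy as np

def pca(X, k):
    """Plain PCA: center the data and project it onto the top-k principal
    directions obtained from the SVD of the centered matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]   # projected data, principal directions

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # toy data: 100 samples, 5 features
Z, components = pca(X, k=2)
print(Z.shape)  # (100, 2)
```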
Kh. Aghajani
Abstract
Emotion recognition has applications in various fields, including human-computer interaction. In recent years, various methods have been proposed to recognize emotion using facial or speech information, while the fusion of the two has received less attention. In this paper, the use of only facial or only speech information for emotion recognition is examined first. For emotion recognition through speech, a pre-trained network called YAMNet is used to extract features. After passing through a convolutional neural network (CNN), the extracted features are fed into a bi-LSTM with an attention mechanism to perform the recognition. For emotion recognition through facial information, a deep CNN-based model is proposed. Finally, after reviewing these two approaches, an emotion recognition framework based on the fusion of the two models is proposed. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), containing videos of 24 actors (12 men and 12 women) in 8 emotion categories, is used to evaluate the proposed model. The implementation results show that combining facial and speech information improves the performance of the emotion recognizer.
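A simple way to picture the fusion step is decision-level fusion: averaging the class probabilities produced by the two single-modality models. The probabilities and weights below are invented for illustration, not the paper's learned fusion:

```python
# Hedged sketch of decision-level fusion of a face model and a speech model.
# The probability vectors and the 0.6/0.4 weights are illustrative assumptions.
import numpy as np

emotions = ["neutral", "happy", "sad", "angry"]
p_face = np.array([0.10, 0.60, 0.20, 0.10])    # hypothetical face-model softmax
p_speech = np.array([0.05, 0.45, 0.40, 0.10])  # hypothetical speech-model softmax

w_face, w_speech = 0.6, 0.4
p_fused = w_face * p_face + w_speech * p_speech
print(emotions[int(np.argmax(p_fused))])  # prints "happy"
```

Even in this toy case, fusion reconciles the two modalities: the speech model is less sure than the face model, but their weighted combination still yields a confident joint decision.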
I. Pasandideh; A. Rajabi; F. Yosefvand; S. Shabanlou
Abstract
Generally, the length of the hydraulic jump is one of the most important parameters in designing stilling basins. In this study, the length of the hydraulic jump on sloping rough beds was predicted using Gene Expression Programming (GEP) for the first time. Monte Carlo simulations were used to examine the ability of the GEP model. In addition, k-fold cross-validation was employed to verify the results of the GEP model. To determine the length of the hydraulic jump, five different GEP models were introduced using different input parameters. By analyzing the results of these models, the superior model was identified. For the superior model, the correlation coefficient (R), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE) were computed as 0.901, 11.517, and 1.664, respectively. According to the sensitivity analysis, the Froude number upstream of the hydraulic jump was identified as the most important parameter for modeling the jump length. Furthermore, a partial derivative sensitivity analysis (PDSA) was performed; the PDSA was positive for all input variables.
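The three reported error measures are standard and easy to compute. The toy observed/predicted values below are illustrative, assuming MAPE is expressed in percent as in the abstract:

```python
# Sketch of the three reported metrics (R, MAPE, RMSE) on invented toy data.
import numpy as np

observed = np.array([4.0, 5.5, 6.1, 7.2, 8.0])   # hypothetical measured jump lengths
predicted = np.array([4.2, 5.1, 6.5, 7.0, 8.3])  # hypothetical GEP predictions

R = np.corrcoef(observed, predicted)[0, 1]                    # correlation coefficient
MAPE = 100 * np.mean(np.abs((observed - predicted) / observed))  # percent error
RMSE = np.sqrt(np.mean((observed - predicted) ** 2))          # root mean square error
print(round(R, 3), round(MAPE, 2), round(RMSE, 3))
```

Note that R measures linear association while RMSE carries the units of the target, which is why papers commonly report both alongside the scale-free MAPE.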
H.3. Artificial Intelligence
Zeinab Poshtiban; Elham Ghanbari; Mohammadreza Jahangir
Abstract
Analyzing the influence of people and nodes in social networks has attracted much attention. Social networks gain meaning through the groups, associations, and people interested in a specific issue or topic, and people demonstrate their theoretical and practical tendencies in such places. Influential nodes are often identified based on information about the social network structure, while less attention is paid to the information spread by social network users. The present study assesses the structural information of the network to identify influential users, in addition to using the information they publish in the social network. To this aim, users' feelings were extracted, an emotional (affective) score was assigned to each user based on an emotional dictionary, and each user's weight in the network was determined using centrality criteria. The Twitter network was used here: after collecting and processing the data, the structure of the social network was defined and its graph was drawn. Then, influential users and nodes were identified by the proposed algorithm. Based on the results, the nodes identified by the proposed algorithm are of high quality, and the simulated spread of information is faster than with other existing algorithms.
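The combination of an affective score with a centrality measure can be sketched on a toy graph. Here degree centrality is multiplied by a dictionary-derived sentiment score; the edge list, scores, and the product rule itself are illustrative assumptions rather than the paper's exact weighting:

```python
# Illustrative ranking of users by (affective score x normalized degree centrality).
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")]
sentiment = {"a": 0.8, "b": 0.9, "c": 0.3, "d": 0.5, "e": 0.2}  # hypothetical scores

# Degree of each node from the undirected edge list
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

n = len(sentiment)
# Combine normalized degree centrality with the affective score (assumed rule)
influence = {u: sentiment[u] * degree[u] / (n - 1) for u in sentiment}
top = max(influence, key=influence.get)
print(top)  # prints "a": high degree and a high affective score
```

The point of the combination is visible even here: node "b" has the highest sentiment but only moderate connectivity, so the structurally central and emotionally active node "a" ranks first.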
B. Z. Mansouri; H.R. Ghaffary; A. Harimi
Abstract
Speech emotion recognition (SER) is a challenging field of research that has attracted attention during the last two decades. Feature extraction has been reported as the most challenging issue in SER systems. Deep neural networks can partially solve this problem, as they have in other applications. To address it, we propose a novel enriched spectrogram calculated by fusing wide-band and narrow-band spectrograms; the proposed spectrogram benefits from both high temporal and high spectral resolution. We then feed the resultant spectrogram images to the pre-trained deep convolutional neural network ResNet152. Instead of the last layer of ResNet152, we add five additional layers to adapt the model to the present task. All experiments are performed on the popular EmoDB dataset using the leave-one-speaker-out technique, which guarantees that evaluation is independent of the speakers seen by the model. The model achieves an accuracy of 88.97%, which shows the efficiency of the proposed approach in contrast to other state-of-the-art methods.
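The wide-band/narrow-band trade-off comes from the analysis window: a long window gives fine frequency resolution (narrow-band spectrogram), a short window gives fine time resolution (wide-band spectrogram). A rough sketch of computing both at the same hop and fusing them by averaging follows; the window sizes, the random test signal, and the averaging rule are illustrative assumptions:

```python
# Rough sketch of an enriched spectrogram from two analysis window lengths.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(size=4096)   # stand-in for a speech waveform
hop, nfft = 128, 256

def magspec(x, win, hop, nfft):
    # Magnitude spectrogram: frame the signal, then FFT each frame
    frames = [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), n=nfft, axis=1))

long_win = magspec(signal, win=256, hop=hop, nfft=nfft)   # narrow-band (fine frequency)
short_win = magspec(signal, win=64, hop=hop, nfft=nfft)   # wide-band (fine time)

# Fuse by averaging over the frames both spectrograms share (assumed rule)
frames_n = min(len(long_win), len(short_win))
fused = 0.5 * (long_win[:frames_n] + short_win[:frames_n])
print(fused.shape)
```

Using the same hop and FFT size keeps the two spectrograms on a common time-frequency grid, so element-wise fusion is well defined.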
Y. Dorfeshan; R. Tavakkoli-Moghaddam; F. Jolai; S.M. Mousavi
Abstract
Multi-criteria decision-making (MCDM) methods have received considerable attention in the last decade for solving problems with a set of alternatives and conflicting criteria. Previously, MCDM methods have primarily relied on the judgment and knowledge of experts for making decisions. This paper introduces a new data- and knowledge-driven MCDM method to reduce dependence on experts' assessments. The weights of the criteria are specified using an extended data-driven DEMATEL method. Then, the ranking of alternatives is determined through knowledge-driven ELECTRE and VIKOR methods. All proposed weighting and ranking methods are developed under grey numbers to cope with uncertainty. Finally, the practicality and applicability of the proposed method are demonstrated by solving an illustrative example.
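To give a feel for the ranking stage, here is a crisp (non-grey) VIKOR sketch. The decision matrix, criteria weights, and compromise parameter v = 0.5 are invented for illustration, and all criteria are assumed to be benefit criteria; the paper's grey-number extension is omitted:

```python
# Crisp VIKOR sketch: rank three alternatives on three benefit criteria.
import numpy as np

X = np.array([[7.0, 8.0, 6.0],    # alternative A1
              [8.0, 6.5, 7.0],    # alternative A2
              [6.5, 7.0, 8.5]])   # alternative A3
w = np.array([0.4, 0.3, 0.3])     # criteria weights (assumed, e.g. from DEMATEL)

f_best, f_worst = X.max(axis=0), X.min(axis=0)
d = w * (f_best - X) / (f_best - f_worst)    # weighted normalized distances to ideal
S, R = d.sum(axis=1), d.max(axis=1)          # group utility and individual regret

v = 0.5                                      # compromise between S and R
Q = v * (S - S.min()) / (S.max() - S.min()) + \
    (1 - v) * (R - R.min()) / (R.max() - R.min())
ranking = np.argsort(Q)                      # lower Q ranks higher
print(ranking)
```

In a full pipeline, the DEMATEL stage would supply `w` from data and ELECTRE would provide an outranking check, with all quantities carried as grey intervals rather than crisp numbers.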