H.6.3.2. Feature evaluation and selection
Farhad Abedinzadeh Torghabeh; Yeganeh Modaresnia; Seyyed Abed Hosseini
Abstract
Much recent data analysis research requires finding and selecting relevant features without class labels using Unsupervised Feature Selection (UFS) approaches. Although several open-source toolboxes provide feature selection techniques that reduce redundant features, data dimensionality, and computational costs, these tools require programming knowledge, which limits their popularity, and they have not adequately addressed unlabeled real-world data. The Automatic UFS Toolbox (Auto-UFSTool) for MATLAB, proposed in this study, is a user-friendly and fully automatic toolbox that draws on several UFS approaches from the most recent research. It is a collection of 25 robust UFS approaches, most of which were developed within the last five years. It therefore makes a clear and systematic comparison of competing methods feasible without requiring a single line of code. Even users without any previous programming experience can run the implementations through the Graphical User Interface (GUI). The toolbox also provides the opportunity to evaluate feature selection results and to generate graphs that facilitate the comparison of subsets of varying sizes. It is freely accessible in the MATLAB File Exchange repository, includes scripts and source code for each technique, and is available to the general public at: bit.ly/AutoUFSTool
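Auto-UFSTool itself is a MATLAB toolbox; as a purely illustrative sketch of what one classic unsupervised criterion does, the snippet below (not from the toolbox) ranks features by variance, with no class labels involved.

```python
# Illustrative UFS criterion: rank features by variance, descending.
# No labels are used anywhere -- this is what makes it "unsupervised".

def variance_ranking(X):
    """Rank feature indices of matrix X (rows = samples) by variance, descending."""
    n = len(X)
    d = len(X[0])
    variances = []
    for j in range(d):
        col = [row[j] for row in X]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        variances.append((var, j))
    return [j for _, j in sorted(variances, reverse=True)]

# Feature 1 varies a lot, feature 0 barely, feature 2 is constant.
X = [[1.0, 10.0, 5.0],
     [1.1, -3.0, 5.0],
     [0.9,  7.0, 5.0]]
print(variance_ranking(X))  # -> [1, 0, 2]
```

Real UFS methods in the toolbox are far more sophisticated (Laplacian score, spectral methods, and so on), but they share this interface: data in, ranked feature subset out.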
M. Danesh; S. Danesh
Abstract
This paper presents a new method for regression model prediction in an uncertain environment. In practical engineering problems, to develop a regression or ANN model for making predictions, the average of a set of repeated observed values is introduced to the model as an input variable. Therefore, the estimated response of the process is also the average of a set of output values, where the variation around the mean is not determinate. However, to provide unbiased and precise estimations, the predictions must be correct on average and the spread of the data must be specified. To address this issue, we propose a method based on a fuzzy inference system together with genetic and linear programming algorithms. We consider crisp inputs and a symmetric triangular fuzzy output. The proposed algorithm is applied to fit the fuzzy regression model. In addition, we use a simulation example and a practical example from the field of machining to assess the performance of the proposed method on practical problems in which the output variables are uncertain and imprecise. Finally, we compare the performance of the suggested method with other methods. Based on the examples, the proposed method is verified for prediction. The results show that the proposed method reduces the error values to a minimum level and is more accurate than the Linear Programming (LP) and Fuzzy Weights with Linear Programming (FWLP) methods.
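A symmetric triangular fuzzy output, as used here, can be written as a center and a spread. The sketch below (generic textbook definition, not the paper's fitted model) shows its membership function.

```python
# A symmetric triangular fuzzy number (center, spread) has membership 1 at the
# center, falling linearly to 0 at center +/- spread.

def triangular_membership(x, center, spread):
    """Membership degree of x in the symmetric triangular fuzzy number (center, spread)."""
    if spread <= 0:
        return 1.0 if x == center else 0.0
    return max(0.0, 1.0 - abs(x - center) / spread)

# A fuzzy prediction "about 12, plus or minus 3":
print(triangular_membership(12.0, 12.0, 3.0))  # -> 1.0
print(triangular_membership(13.5, 12.0, 3.0))  # -> 0.5
print(triangular_membership(16.0, 12.0, 3.0))  # -> 0.0
```

Fitting the fuzzy regression then amounts to choosing a center line and spreads so that observed outputs fall inside the predicted triangles with high membership.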
P. Farzi; R. Akbari
Abstract
Web services are a technology for building self-describing, structured, and loosely coupled applications. They are accessible all over the web and provide a flexible platform. Although service registries such as Universal Description, Discovery, and Integration (UDDI) provide facilities for users to search for requirements, retrieving results that exactly satisfy users' needs is still a difficult task, since providers and requesters have different views of descriptions and explain them differently. Consequently, one of the most challenging obstacles in the discovery task is understanding both sides, which is called knowledge-based understanding. This is of immense value for search engines, information retrieval tasks, and even various NLP-based tasks. The goal is to help recognize matching degrees precisely and to retrieve the most relevant services more straightforwardly. In this research, we introduce a conceptual similarity method that facilitates the discovery procedure with less dependency on provider and user descriptions, reducing the manual intervention of both sides and being more explicit for machines. We provide a comprehensive knowledge-based approach by applying the Latent Semantic Analysis (LSA) model to the ontology scheme (WordNet) and a domain-specific, in-sense, context-based similarity algorithm. The evaluation of our similarity method on the OWL-S test collection shows that a sense-context similarity algorithm can boost the disambiguation of descriptions, which leads to conceptual clarity. The proposed method improves the performance of service discovery in comparison with recent keyword-based and semantic-based methods.
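After an LSA-style mapping, each service description and each query becomes a concept vector, and similarity is typically scored by cosine similarity. The sketch below is illustrative only; the concept weights are made up and the paper's in-sense context algorithm is not reproduced.

```python
# Cosine similarity between two concept-weight vectors, the usual scoring
# step after an LSA projection.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

provider = [0.9, 0.1, 0.4]   # hypothetical concept weights of a service description
request  = [0.8, 0.0, 0.5]   # hypothetical concept weights of a user query
print(round(cosine_similarity(provider, request), 3))  # -> 0.985
```

A high score here says the two texts are about the same concepts even when they share few surface keywords, which is exactly the gap keyword-based discovery leaves open.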
H.3. Artificial Intelligence
Amir Mehrabinezhad; Mohammad Teshnelab; Arash Sharifi
Abstract
Due to the growing number of data-driven approaches, especially in artificial intelligence and machine learning, extracting appropriate information from the gathered data with the best performance is a remarkable challenge. The other important aspect of this issue is storage cost. Principal component analysis (PCA) and autoencoders (AEs) are typical feature extraction methods in data science and machine learning that are widely used in various approaches. The current work integrates the advantages of AEs and PCA to present an online supervised feature extraction method. Accordingly, the desired labels for the final model are involved in the feature extraction procedure and embedded in the PCA method as well. Also, stacking the nonlinear autoencoder layers with the PCA algorithm eliminates the kernel selection required by traditional kernel PCA methods. Besides the performance improvement demonstrated by the experimental results, the main advantage of the proposed method is that, in contrast with traditional PCA approaches, the model does not require all samples to be available for feature extraction. Compared with previous works, the proposed method can outperform other state-of-the-art methods in terms of accuracy and authenticity of feature extraction.
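As a rough sketch of the PCA half of the idea only (not the authors' AE + PCA stack), the snippet below finds the first principal component of centered data by power iteration on the covariance matrix.

```python
# First principal component via power iteration on the sample covariance.
import math

def first_pc(X, iters=100):
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    # covariance matrix of the centered data
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / n
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Points lying almost exactly on the line y = x: the first PC points along it.
X = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.05], [4.0, 3.95]]
pc = first_pc(X)
print([round(c, 2) for c in pc])
```

Classic PCA of this kind needs the whole batch of samples to build the covariance; the paper's point is precisely that its online supervised variant avoids that requirement.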
Kh. Aghajani
Abstract
Emotion recognition has several applications in various fields, including human-computer interaction. In recent years, various methods have been proposed to recognize emotion using facial or speech information, while the fusion of the two has received less attention. In this paper, the use of face-only or speech-only information for emotion recognition is examined first. For emotion recognition from speech, a pre-trained network called YAMNet is used to extract features. After passing through a convolutional neural network (CNN), the extracted features are fed into a bi-LSTM with an attention mechanism to perform the recognition. For emotion recognition from facial information, a deep CNN-based model is proposed. Finally, after reviewing these two approaches, an emotion detection framework based on the fusion of the two models is proposed. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), containing videos of 24 actors (12 men and 12 women) across 8 emotion categories, is used to evaluate the proposed model. The implementation results show that combining face and speech information improves the performance of the emotion recognizer.
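The attention mechanism over the bi-LSTM outputs can be sketched in its common generic form (an assumed form, not the paper's exact layer): score each time step, softmax the scores, and return the weighted sum of the hidden states.

```python
# Generic softmax attention pooling over a sequence of hidden states.
import math

def attention_pool(hidden_states, scores):
    """hidden_states: list of T vectors; scores: list of T raw attention scores."""
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    context = [sum(w * h[j] for w, h in zip(weights, hidden_states))
               for j in range(dim)]
    return context, weights

H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context, weights = attention_pool(H, [0.1, 0.1, 2.0])
print([round(w, 2) for w in weights])  # -> [0.12, 0.12, 0.77]
```

The high-scoring third frame dominates the pooled context vector, which is how attention lets the recognizer focus on the emotionally salient frames of an utterance.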
I Pasandideh; A. Rajabi; F. Yosefvand; S. Shabanlou
Abstract
Generally, the length of the hydraulic jump is one of the most important parameters in designing stilling basins. In this study, the length of the hydraulic jump on sloping rough beds was predicted using Gene Expression Programming (GEP) for the first time. Monte Carlo simulations were used to examine the ability of the GEP model, and k-fold cross-validation was employed to verify its results. To determine the length of the hydraulic jump, five different GEP models were introduced using different input parameters. By analyzing the results of these models, the superior model was identified. For the superior model, the correlation coefficient (R), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE) were computed as 0.901, 11.517, and 1.664, respectively. According to the sensitivity analysis, the Froude number upstream of the hydraulic jump was identified as the most important parameter for modeling the jump length. Furthermore, a partial derivative sensitivity analysis (PDSA) was performed; the PDSA was positive for all input variables.
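The three reported metrics have standard definitions, written out explicitly below (generic formulas on toy numbers, not tied to the paper's data).

```python
# R, MAPE (in percent), and RMSE between observed and predicted values.
import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mape(obs, pred):
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

def correlation(obs, pred):
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

obs, pred = [10.0, 12.0, 14.0], [9.0, 12.5, 13.5]
print(round(rmse(obs, pred), 3),
      round(mape(obs, pred), 3),
      round(correlation(obs, pred), 3))  # -> 0.707 5.913 0.952
```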
H.3. Artificial Intelligence
Zeinab Poshtiban; Elham Ghanbari; Mohammadreza Jahangir
Abstract
Analyzing the influence of people and nodes in social networks has attracted a lot of attention. Social networks gain meaning through the groups, associations, and people interested in a specific issue or topic, and people demonstrate their theoretical and practical tendencies in such places. Influential nodes are often identified based on information about the social network structure, while less attention is paid to the information spread by social network users. The present study assesses the structural information in the network to identify influential users, in addition to using the information they share on the social network. To this end, users' sentiments were extracted, an emotional score was assigned to each user based on an emotional dictionary, and each user's weight in the network was determined using centrality criteria. The Twitter network was used: after collecting and processing the data, the structure of the social network was defined and its graph was drawn. Influential users and nodes were then identified by the proposed algorithm. Based on the results, the nodes identified by the proposed algorithm are of high quality, and the simulated speed of information spread is higher than with other existing algorithms.
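A hedged sketch of the core idea only: the toy lexicon, the equal weighting, and the helper names below are all assumptions for illustration; the paper's actual dictionary and combination rule are not reproduced.

```python
# Combine a dictionary-based emotional score with normalized degree
# centrality to rank users (illustrative weighting, assumed 50/50).

SENTIMENT = {"great": 1.0, "good": 0.5, "bad": -0.5, "awful": -1.0}  # toy lexicon

def sentiment_score(tweet):
    words = tweet.lower().split()
    hits = [SENTIMENT[w] for w in words if w in SENTIMENT]
    return sum(hits) / len(hits) if hits else 0.0

def influence(user_tweets, followers, max_followers):
    centrality = followers / max_followers          # normalized degree centrality
    affect = sum(abs(sentiment_score(t)) for t in user_tweets) / len(user_tweets)
    return 0.5 * centrality + 0.5 * affect          # assumed equal weighting

print(round(influence(["great product", "awful service"], 800, 1000), 2))  # -> 0.9
```

The point of the combination is that a well-connected but emotionally neutral user, or an emotional but isolated one, both rank below a user who is strong on both axes.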
B. Z. Mansouri; H.R. Ghaffary; A. Harimi
Abstract
Speech emotion recognition (SER) is a challenging field of research that has attracted attention during the last two decades. Feature extraction has been reported as the most challenging issue in SER systems. Deep neural networks have partially solved this problem in some other applications. To address it, we propose a novel enriched spectrogram calculated from the fusion of wide-band and narrow-band spectrograms; the proposed spectrogram benefits from both high temporal and high spectral resolution. We then feed the resulting spectrogram images to the pre-trained deep convolutional neural network ResNet152, replacing its last layer with five additional layers to adapt the model to the present task. All experiments were performed on the popular EmoDB dataset using a leave-one-speaker-out scheme, which guarantees that the model is independent of the speaker. The model achieves an accuracy of 88.97%, which shows the efficiency of the proposed approach in contrast to other state-of-the-art methods.
Y. Dorfeshan; R. Tavakkoli-Moghaddam; F. Jolai; S.M. Mousavi
Abstract
Multi-criteria decision-making (MCDM) methods have received considerable attention in the last decade for solving problems with a set of alternatives and conflicting criteria. Previously, MCDM methods relied primarily on the judgment and knowledge of experts for making decisions. This paper introduces a new data- and knowledge-driven MCDM method to reduce dependence on experts' assessments. The weights of the criteria are specified using an extended data-driven DEMATEL method. Then, the ranking of alternatives is determined through knowledge-driven ELECTRE and VIKOR methods. All proposed weighting and ranking methods are developed under grey numbers to cope with uncertainty. Finally, the practicality and applicability of the proposed method are demonstrated by solving an illustrative example.
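Grey numbers can be treated as intervals [lower, upper] carried through the weighting and ranking steps. The snippet below shows only the basic interval operations in their textbook form; the paper's extended DEMATEL/ELECTRE/VIKOR machinery is not reproduced.

```python
# Basic grey-number (interval) arithmetic and whitenization.

def grey_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def grey_scale(k, a):
    """Multiply interval a by a non-negative scalar k."""
    return (k * a[0], k * a[1])

def whiten(a, w=0.5):
    """Whitenization: collapse a grey number to a crisp value."""
    return (1 - w) * a[0] + w * a[1]

g1, g2 = (2.0, 4.0), (1.0, 3.0)
print(grey_add(g1, g2))          # -> (3.0, 7.0)
print(whiten(grey_add(g1, g2)))  # -> 5.0
```

Keeping scores as intervals until the final whitenization step is what lets such a method express how uncertain each criterion weight and alternative score really is.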
H. Rahmani; H. Kamali; H. Shah-Hosseini
Abstract
Nowadays, a significant number of studies are devoted to discovering important nodes in graph data. Social networks, as graph data, have attracted a lot of attention. There are various purposes for discovering the important nodes in social networks, such as finding the leaders in them, i.e. the users who play an important role in promoting advertising. Different criteria have been proposed for discovering important nodes in graph data, but measuring a node's importance by a single criterion may be inefficient due to the variety of graph structures. Recently, combinations of criteria have been used in the discovery of important nodes. In this paper, we propose a system for the Discovery of Important Nodes in social networks using Genetic Algorithms (DINGA). In our proposed system, important nodes are discovered by employing a combination of eight informative criteria and their intelligent weighting. We compare our results, on four real networks, with a manually weighted method that uses random weightings for each criterion. Our method shows an average of 22% improvement in the accuracy of important-node discovery.
H.5. Image Processing and Computer Vision
Mohammad Mahdi Nakhaie; Sasan Karamizadeh; Mohammad Ebrahim Shiri; Kambiz Badie
Abstract
Lung cancer is a highly serious illness, and detecting cancer cells early significantly enhances patients' chances of recovery. Doctors regularly examine a large number of CT scan images, which can lead to fatigue and errors. Therefore, there is a need to create a tool that can automatically detect and classify lung nodules in their early stages. Computer-aided diagnosis systems, often employing image processing and machine learning techniques, assist radiologists in identifying and categorizing these nodules. Previous studies have often used complex models or pre-trained networks that demand significant computational power and a long time to execute. Our goal is to achieve accurate diagnosis without the need for extensive computational resources. We introduce a simple convolutional neural network with only two convolution layers, capable of accurately classifying nodules without requiring advanced computing capabilities. We conducted training and validation on two datasets, LIDC-IDRI and LUNA16, achieving impressive accuracies of 99.7% and 97.52%, respectively. These results demonstrate the superior accuracy of our proposed model compared to state-of-the-art research papers.
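The building block of such a two-layer network can be sketched in isolation (this is the generic operation, not the authors' trained model): a single "valid" 2D convolution, the operation each conv layer applies to its input.

```python
# "Valid" 2D convolution (cross-correlation form, as deep-learning layers use):
# slide the kernel over the image and sum elementwise products.

def conv2d_valid(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
edge = [[1, -1]]  # horizontal gradient kernel
print(conv2d_valid(img, edge))  # -> [[-1, -1], [-1, -1], [-1, -1]]
```

In a CNN the kernels are learned rather than hand-picked; stacking just two such layers keeps the parameter count, and hence the compute budget, small, which is the paper's design point.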
A. Rahati; K. Rahbar
Abstract
Performing sports movements correctly is very important for ensuring body health. In this article, an attempt has been made to correct movements using an approach based on the 2D positions of the joints extracted from the image. The input image shows a person performing in front of the camera with landmarks on his or her joints. The coordinates of the joints are measured in 2D space and matched against 2D skeletons extracted from a sparse reference skeletal model of the corrected movements. The accuracy and precision of this approach are evaluated on the standard Adidas dataset, and its efficiency is also studied under additive Gaussian and impulse noise. The average error of the model in detecting an incorrect exercise in the set of sports movements is reported to be 5.69 pixels.
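One concrete check such a system can make is the angle at a joint, computed from 2D landmark coordinates and compared against the reference pose. The coordinates below are toy values, not from the Adidas dataset.

```python
# Angle at a joint from three 2D landmarks, via the dot-product formula.
import math

def joint_angle(a, joint, b):
    """Angle in degrees at `joint` formed by points a-joint-b (2D coordinates)."""
    v1 = (a[0] - joint[0], a[1] - joint[1])
    v2 = (b[0] - joint[0], b[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hip-knee-ankle angle for one frame (toy pixel coordinates):
hip, knee, ankle = (100, 50), (110, 100), (100, 150)
print(round(joint_angle(hip, knee, ankle), 1))  # -> 157.4
```

Comparing such angles, rather than raw pixel positions, makes the check insensitive to where the person stands in the frame.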
M. Shokohi nia; A. Dideban; F. Yaghmaee
Abstract
Despite the success of ontologies in knowledge representation, reasoning over them is still challenging. The most important challenge in ontology-based reasoning is improving realization, whose problem-solving process has NEXPTime complexity; it can be carried out by solving the subsumption and satisfiability problems. On the other hand, uncertainty and ambiguity in these characteristics are unavoidable, so the use of fuzzy theory is necessary. This study proposes a method that models and improves reasoning and realization in an ontology using Fuzzy Colored Petri Nets (FCPNs), offering a new solution with suitable time behavior. To this end, an algorithm is first presented to improve the realization problem. A unified modeling language (UML) class diagram is then used as a standard description for representing the efficiency characteristics, and the RDFS representation is converted to a UML diagram. Fuzzy concepts are then introduced into the colored Petri nets. In the next step, an algorithm is presented to convert the UML-class-diagram-based ontology description into an executable model based on FCPNs. Using this approach, a simple method is developed to form an executable model and reason over it with FCPNs, which can be employed to obtain the results of interest by applying different queries. Finally, the efficiency of the proposed method is evaluated, with the results indicating improved performance from different aspects.
M. Babazadeh Shareh; H.R. Navidi; H. Haj Seyed Javadi; M. HosseinZadeh
Abstract
In cooperative P2P networks, there are two kinds of illegal users, namely free riders and Sybils. Free riders are those who try to receive services without any sort of cost. Sybil users are rational peers who have multiple fake identities. A number of researchers have previously proposed techniques to detect free riders and Sybil users, such as the tit-for-tat and SybilGuard techniques. Although these techniques are quite successful at detecting free riders and Sybils individually, no technique detects both kinds of illegal users simultaneously. Therefore, the main objective of this research is to propose a single mechanism, based on game theory, to detect both. The basic idea of the proposed solution is to obtain new centrality and bandwidth-contribution formulas within an incentive mechanism. The results of this paper show that as the network ages, free riders are identified, and by detecting Sybil nodes, the number of services offered to them is decreased.
H.5. Image Processing and Computer Vision
S. Asadi Amiri; Z. Mohammadpoory; M. Nasrolahzadeh
Abstract
Content-based image retrieval (CBIR) systems compare a query image with the images in a dataset to find those most similar to it. In this paper, a novel and efficient CBIR system is proposed using color and texture features. The color features are represented by color moments and by color histograms in the RGB and HSV color spaces; the texture features are represented by a localized Discrete Cosine Transform (DCT), a localized gray-level co-occurrence matrix, and local binary patterns (LBP). The DCT coefficients and the gray-level co-occurrence matrix of the blocks are examined to assess block details, and LBP provides rotation-invariant texture information about the image. After feature extraction, a Shannon entropy criterion is used to remove inefficient features. Finally, an improved version of the Canberra distance is employed to compare the similarity of feature vectors. Experimental analysis is carried out using precision and recall on the Corel-5K and Corel-10K datasets. The results demonstrate that the proposed method efficiently improves precision and recall and outperforms most existing methods.
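For reference, the standard Canberra distance the paper builds on is shown below; the paper's improved variant is not reproduced here.

```python
# Canberra distance: sum of |x - y| / (|x| + |y|) over coordinates.
# Each term is bounded by 1, so small-valued features are not drowned out
# by large-valued ones -- useful when feature vectors mix histograms and moments.

def canberra(a, b):
    total = 0.0
    for x, y in zip(a, b):
        denom = abs(x) + abs(y)
        if denom:                 # skip 0/0 terms by convention
            total += abs(x - y) / denom
    return total

print(round(canberra([1.0, 2.0, 0.0], [2.0, 2.0, 1.0]), 3))  # -> 1.333
```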
H.3. Artificial Intelligence
Hamid Ghaffari; Hemmatollah Pirdashti; Mohammad Reza Kangavari; Sjoerd Boersma
Abstract
An intelligent growth chamber was designed in 2021 to model and optimize the growth of rice seedlings. Accordingly, an experiment was conducted at Sari University of Agricultural Sciences and Natural Resources, Iran, in March, April, and May 2021. The model inputs were radiation, temperature, carbon dioxide, and soil acidity; these growth factors were studied at ambient and incremental levels. The model outputs were the seedlings' height, root length, chlorophyll content, CGR, RGR, number of leaves, and shoot dry weight. Rice seedling growth was modeled using LSTM neural networks and optimized by the Bayesian method. It was concluded that the best parameter setting was epoch=100, learning rate=0.001, and iteration number=500, and the best performance during training was obtained at a validation RMSE of 0.2884.
M. Azimi hemat; F. Shamsezat Ezat; M. Kuchaki Rafsanjani
Abstract
In content-based image retrieval (CBIR), the visual features of the database images are extracted, and the visual feature database is searched to find the images closest to the query image. Increasing efficiency while decreasing both the retrieval time and the storage space of indexed images is the priority in developing image retrieval systems. In this research, an efficient image retrieval system is proposed by applying fuzzy techniques, which are advantageous in increasing efficiency and decreasing the feature vector length and storage space. The effect of increasing the number of content features considered is assessed to enhance retrieval efficiency. The fuzzy features consist of color, statistical information about the spatial dependency of pixels on each other, and the positions of image edges. These features are indexed as fuzzy vectors of lengths 16, 3, and 16, respectively. The extracted vectors are compared through fuzzy similarity measures, and the most similar images are retrieved. To evaluate the proposed system's performance, it was implemented alongside three non-fuzzy systems that consider fewer features. These four systems were tested on a database of 1000 images, and the results indicate improvements in retrieval precision and storage space.
A. Nozaripour; H. Soltanizadeh
Abstract
Owing to advantages such as noise robustness and a strong mathematical theory, sparse representation has been recognized as a powerful tool in recent decades. In this paper, using sparse representation, the kernel trick, and a different technique for Region of Interest (ROI) extraction that we presented in previous work, a new method for dorsal hand vein recognition that is robust against rotation is introduced. In this method, the ROI is selected by changing the length and angle of its sides, which largely neutralizes the undesirable effects of hand rotation during image capture; depending on the amount of hand rotation, the ROI in each image therefore differs in size and shape. On the other hand, because dorsal hand vein patterns share the same direction distribution, we apply the kernel trick to sparse representation for classification, so that most samples from different classes but with the same direction distribution are classified properly. Together, these two techniques yield an effective rotation-robust method for dorsal hand vein recognition. An increase of 2.26% in the recognition rate is observed for the proposed method when compared to three conventional SRC-based algorithms and three sparse-coding-based classification methods that use dictionary learning.
S. Javadi; R. Safa; M. Azizi; Seyed A. Mirroshandel
Abstract
Online scientific communities are platforms that publish books, journals, and scientific papers and help promote knowledge. Researchers use search engines to find information such as scientific papers, experts to collaborate with, and publication venues, but in many cases, because the search is by keywords and pays no attention to content, they do not achieve the desired results at the early stages. Online scientific communities can respond to their users more efficiently by offering customized search. In this paper, a dataset including bibliographic information on users' publications, publication venues, and other published papers is used to find experts in a particular context, and experts are recommended to a user according to his or her records and preferences. A user's request to find an expert is expressed with keywords that represent a certain expertise, and the system outputs a ranked list of suggestions for that user, each naming an expert identified as appropriate for collaboration. In an evaluation on the IEEE database, the proposed method reached an accuracy of 71.50 percent, which seems to be an acceptable result.
H.3. Artificial Intelligence
Ali Rebwar Shabrandi; Ali Rajabzadeh Ghatari; Nader Tavakoli; Mohammad Dehghan Nayeri; Sahar Mirzaei
Abstract
To mitigate COVID-19’s overwhelming burden, a rapid and efficient scheme for early, first-line COVID-19 screening is required. Much research has relied on laboratory tests, CT scans, and X-ray data, which are obstacles to agile, real-time screening. In this study, we propose a user-friendly, low-cost COVID-19 detection model based on data that can be self-reported at home. The most informative input features were identified and grouped into demographic, symptom, semi-clinical, and past/present disease categories. We employed grid search to identify the combination of hyperparameter settings that yields the most accurate predictions, and then applied the tuned model to 11 classic state-of-the-art classifiers. The results show that the XGBoost classifier provides the highest accuracy, 73.3%, although statistical analysis shows no significant difference between the accuracy of XGBoost and AdaBoost; it does confirm the superiority of these two methods over the other methods. Furthermore, the most important features obtained using SHapley Additive exPlanations (SHAP) were analyzed: “contact with infected people,” “cough,” “muscle pain,” “fever,” “age,” “cardiovascular comorbidities,” “PO2,” and “respiratory distress” are the most important variables. Among these, the first three have a relatively large positive impact on the target variable, whereas “age,” “PO2,” and “respiratory distress” are highly negatively correlated with it. Finally, we built a clinically operable, visible, and easy-to-interpret decision tree model to predict COVID-19 infection.
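The tuning step can be sketched generically (toy score surface and parameter names, not the paper's classifiers or grid): exhaustive grid search evaluates every hyperparameter combination and keeps the best-scoring one.

```python
# Exhaustive grid search over a dict of hyperparameter candidate lists.
from itertools import product

def grid_search(score_fn, grid):
    """grid: dict name -> list of candidate values. Returns (best_params, best_score)."""
    best_params, best_score = None, float("-inf")
    names = list(grid)
    for combo in product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        s = score_fn(params)          # e.g. cross-validated accuracy
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy score surface peaking at depth=3, lr=0.1:
score = lambda p: -((p["depth"] - 3) ** 2) - abs(p["lr"] - 0.1)
best, _ = grid_search(score, {"depth": [2, 3, 4], "lr": [0.01, 0.1, 1.0]})
print(best)  # -> {'depth': 3, 'lr': 0.1}
```

The cost grows as the product of the candidate-list lengths, which is why the grid is kept small per classifier.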
Mojtaba Nasehi; Mohsen Ashourian; Hosein Emami
Abstract
Vehicle type recognition is widely used in practical applications such as traffic control, unmanned vehicle control, road taxation, and smuggling detection. In this paper, various techniques such as data augmentation and spatial filtering are used to improve and enhance the data. Then, ...
Read More
Vehicle type recognition is widely used in practical applications such as traffic control, unmanned vehicle control, road taxation, and smuggling detection. In this paper, various techniques such as data augmentation and spatial filtering are used to improve and enhance the data. A developed algorithm that integrates the VGG neural network and the YOLO algorithm is then used to detect and identify vehicles. The algorithm was implemented on a Raspberry Pi hardware board and tested in a practical scenario on real image datasets. The results show the good performance of the implemented algorithm in terms of detection accuracy (98%), processing speed, and robustness to environmental conditions, which indicates its capability for low-cost practical applications.
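The data augmentation mentioned in this abstract typically means generating transformed copies of each training image. A minimal sketch, using a toy nested-list "image" rather than real vehicle photos (the actual transforms used by the authors are not specified):

```python
# Toy grayscale "image" as a nested list of pixel intensities (0..255).
img = [[10, 20, 30],
       [40, 50, 60]]

def hflip(image):
    """Horizontal flip: a classic augmentation transform."""
    return [row[::-1] for row in image]

def adjust_brightness(image, delta):
    """Additive brightness shift, clipped to the valid 0..255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in image]

# Each original image yields several variants for training.
augmented = [img, hflip(img), adjust_brightness(img, 25)]
```

In practice these operations would be applied with an image library (e.g. OpenCV or PIL) over the whole training set before fine-tuning the detector.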
Z. Hassani; M. Alambardar Meybodi
Abstract
A major pitfall of the standard version of Particle Swarm Optimization (PSO) is that it can get stuck in local optima. To avoid this issue, a novel hybrid model based on the combination of PSO and Ant Lion Optimization (ALO) is proposed in this study. The proposed method, called H-PSO-ALO, uses ...
Read More
A major pitfall of the standard version of Particle Swarm Optimization (PSO) is that it can get stuck in local optima. To avoid this issue, a novel hybrid model based on the combination of PSO and Ant Lion Optimization (ALO) is proposed in this study. The proposed method, called H-PSO-ALO, uses a local search strategy, employing the Ant Lion algorithm to select a salient, weakly correlated feature subset. The objective is to improve the prediction accuracy and adaptability of the model across various datasets by balancing the exploration and exploitation processes. The performance of the method was evaluated on 30 benchmark classification problems, the CEC 2017 benchmark problems, and several well-known datasets. For comparison, four algorithms, FDR-PSO, CLPSO, HFPSO, and MPSO, were selected as baselines. The experimental results show that the proposed method outperforms the others in many cases, making it a desirable candidate for optimization problems on real-world datasets.
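For context, the standard PSO baseline that H-PSO-ALO extends can be sketched in a few lines. This is the textbook velocity/position update (inertia plus cognitive and social terms) on a toy sphere objective, not the authors' hybrid; the coefficient values are common defaults, not taken from the paper.

```python
import random

def sphere(x):
    """Toy unimodal objective: minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n_particles=10, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best_pos, best_val = pso(sphere)
```

The hybrid in the paper would replace part of this loop with an ALO-driven local search to pull the swarm out of local optima; on the convex sphere function the plain update already converges.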
H.3. Artificial Intelligence
Mahdi Rasouli; Vahid Kiani
Abstract
The identification of emotions in short texts of low-resource languages poses a significant challenge, requiring specialized frameworks and computational intelligence techniques. This paper presents a comprehensive exploration of shallow and deep learning methods for emotion detection in short Persian ...
Read More
The identification of emotions in short texts of low-resource languages poses a significant challenge, requiring specialized frameworks and computational intelligence techniques. This paper presents a comprehensive exploration of shallow and deep learning methods for emotion detection in short Persian texts. The shallow learning methods employ feature extraction and dimension reduction to enhance classification accuracy, while the deep learning methods utilize transfer learning and word embedding, particularly BERT, to achieve high classification accuracy. A Persian dataset called "ShortPersianEmo" is introduced to evaluate the proposed methods, comprising 5472 diverse short Persian texts labeled with five main emotion classes. The evaluation results demonstrate that transfer learning and BERT-based text embedding classify short Persian texts more accurately than the alternative approaches. The dataset of this study, ShortPersianEmo, is publicly available at https://github.com/vkiani/ShortPersianEmo.
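The shallow-learning side of such a pipeline rests on turning each text into a feature vector and comparing vectors. A minimal bag-of-words nearest-neighbor sketch, with hypothetical English stand-in sentences rather than the actual ShortPersianEmo data:

```python
from collections import Counter
import math

# Toy labeled corpus (hypothetical stand-in for ShortPersianEmo).
train = [("i am so happy today", "joy"),
         ("this makes me very happy", "joy"),
         ("i feel sad and alone", "sadness"),
         ("such a sad terrible day", "sadness")]

def vectorize(text):
    """Bag-of-words feature extraction: word -> count."""
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def predict(text):
    """1-nearest-neighbor classification in bag-of-words space."""
    v = vectorize(text)
    best = max(train, key=lambda ex: cosine(v, vectorize(ex[0])))
    return best[1]

pred = predict("i am very happy")  # → 'joy'
```

The BERT-based approach in the paper replaces `vectorize` with dense contextual embeddings from a pretrained transformer, which is what drives the higher accuracy the authors report.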
G.3.7. Database Machines
Abdul Aziz Danaa Abukari; Mohammed Daabo Ibrahim; Alhassan Abdul-Barik
Abstract
Hidden Markov Models (HMMs) are machine learning models that have been applied to a range of real-life applications including intrusion detection, pattern recognition, thermodynamics, and statistical mechanics, among others. A multi-layered HMM for real-time fraud detection and prevention whilst drastically reducing ...
Read More
Hidden Markov Models (HMMs) are machine learning models that have been applied to a range of real-life applications, including intrusion detection, pattern recognition, thermodynamics, and statistical mechanics. In this study, a multi-layered HMM for real-time fraud detection and prevention that drastically reduces the number of false positives and false negatives is proposed and implemented. The study also focuses on reducing the parameter-optimization and detection times of the proposed models using a hybrid algorithm comprising the Baum-Welch, Genetic, and Particle Swarm Optimization algorithms. Simulation results revealed that, in terms of precision, recall, and F1-score, the proposed model performed better than other approaches proposed in the literature.
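The detection step in an HMM-based fraud system boils down to scoring a transaction sequence against a trained model, which the forward algorithm does in linear time. A minimal sketch with a hypothetical two-state model (the states, symbols, and probabilities below are illustrative, not the paper's trained parameters):

```python
def forward(obs, pi, A, B):
    """Forward algorithm: P(observation sequence | HMM)."""
    n = len(pi)
    # Initialization: prior times emission of the first symbol.
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # Induction: sum over predecessor states, then emit.
    for t in range(1, len(obs)):
        alpha = [sum(alpha[s] * A[s][j] for s in range(n)) * B[j][obs[t]]
                 for j in range(n)]
    return sum(alpha)

# Hypothetical model: state 0 = legitimate, state 1 = fraudulent behavior.
pi = [0.9, 0.1]                             # initial state distribution
A  = [[0.95, 0.05], [0.3, 0.7]]             # state transition matrix
B  = [[0.7, 0.25, 0.05], [0.1, 0.3, 0.6]]   # emissions: low/medium/high amounts

p_low  = forward([0, 0, 0, 0], pi, A, B)    # all low-value transactions
p_high = forward([2, 2, 2, 2], pi, A, B)    # all high-value transactions
```

A sequence whose likelihood under the legitimate-dominated model falls below a threshold would be flagged; Baum-Welch (and, in the paper, the genetic/PSO hybrid) is what fits `pi`, `A`, and `B` from data.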
B.3. Communication/Networking and Information Technology
S. Mojtaba Matinkhah; Roya Morshedi; Akbar Mostafavi
Abstract
The Internet of Things (IoT) has emerged as a rapidly growing technology that enables seamless connectivity between a wide variety of devices. However, with this increased connectivity comes an increased risk of cyber-attacks. In recent years, the development of intrusion detection systems (IDS) has ...
Read More
The Internet of Things (IoT) has emerged as a rapidly growing technology that enables seamless connectivity between a wide variety of devices. However, this increased connectivity brings an increased risk of cyber-attacks. In recent years, the development of intrusion detection systems (IDS) has become critical for ensuring the security and privacy of IoT networks. This article presents a study that evaluates the accuracy of an IDS for detecting network attacks in IoT networks. The proposed IDS uses a decision tree classifier and is tested on four benchmark datasets: NSL-KDD, BOT-IoT, CICIDS2017, and MQTT-IoT. The impact of noise in the training and test datasets on classification accuracy is analyzed. The results indicate that clean data yields the highest accuracy, while noisy datasets significantly reduce it; when both the training and test datasets are noisy, accuracy decreases further. These findings demonstrate the importance of using clean data for training and testing an IDS in IoT networks, and they provide valuable insights for the development of a robust and accurate IDS for IoT networks.
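The noise effect this study measures can be reproduced in miniature: inject label noise into synthetic data and watch a fixed classifier's accuracy drop. The sketch below uses a one-dimensional synthetic feature and a simple threshold rule in place of the paper's decision tree and benchmark datasets, purely to illustrate the experimental design.

```python
import random

rng = random.Random(42)

def make_data(n, noise=0.0):
    """Synthetic 1-D 'traffic feature'; attacks tend to have larger values."""
    data = []
    for _ in range(n):
        label = rng.random() < 0.5                    # True = attack
        x = rng.gauss(3.0 if label else 0.0, 1.0)
        if rng.random() < noise:                      # inject label noise
            label = not label
        data.append((x, label))
    return data

def accuracy(data, threshold=1.5):
    """Fixed threshold classifier standing in for the decision tree."""
    correct = sum((x > threshold) == label for x, label in data)
    return correct / len(data)

clean_acc = accuracy(make_data(2000, noise=0.0))
noisy_acc = accuracy(make_data(2000, noise=0.3))
```

With 30% of labels flipped, accuracy falls sharply even though the underlying feature distribution is unchanged, mirroring the paper's finding that noisy training and test sets degrade IDS performance.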