A. Nozaripour; H. Soltanizadeh
Abstract
Owing to advantages such as robustness to noise and a strong mathematical foundation, sparse representation has attracted attention as a powerful tool in recent decades. In this paper, using sparse representation, the kernel trick, and a different technique for Region of Interest (ROI) extraction that we presented in our previous work, a new method for dorsal hand vein recognition that is robust against rotation is introduced. In this method, the ROI is selected by varying the length and angle of its sides, so the undesirable effects of hand rotation during image acquisition are largely neutralized. Consequently, depending on the amount of hand rotation, the ROI differs in size and shape from image to image. On the other hand, because dorsal hand vein patterns share similar direction distributions, we apply the kernel trick to sparse representation for classification. As a result, most samples that belong to different classes but share the same direction distribution are classified properly. Together, these two techniques yield an effective rotation-robust method for dorsal hand vein recognition. An increase of 2.26% in the recognition rate is observed for the proposed method compared to three conventional SRC-based algorithms and three sparse-coding classification methods that use dictionary learning.
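For readers unfamiliar with the sparse-representation classification (SRC) step that the above method builds on, the following minimal Python sketch illustrates the residual-based decision rule on toy data; the dictionary, the l1 solver (scikit-learn's Lasso), and all sizes are illustrative assumptions, and the paper's kernelized variant and rotation-adaptive ROI extraction are not reproduced.

```python
# Minimal SRC sketch (not the paper's kernelized method): classify a test
# sample by the class whose training atoms best reconstruct it under an
# l1-regularized code. Dictionary columns are toy training vectors.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_features, n_per_class, n_classes = 64, 10, 3

# Toy dictionary: columns grouped by class, l2-normalized.
D = rng.normal(size=(n_features, n_per_class * n_classes))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(n_classes), n_per_class)

def src_predict(y, D, labels, alpha=0.01):
    # Sparse code for the test vector y over the whole dictionary.
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)
    x = coder.coef_
    # Class-wise reconstruction residuals; the smallest residual wins.
    residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)]
    return int(np.argmin(residuals))

y = D[:, 3] + 0.05 * rng.normal(size=n_features)  # noisy copy of a class-0 atom
print("predicted class:", src_predict(y, D, labels))
```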
S. Javadi; R. Safa; M. Azizi; Seyed A. Mirroshandel
Abstract
Online scientific communities are platforms that publish books, journals, and scientific papers and help promote knowledge. Researchers use search engines to find information such as scientific papers, experts to collaborate with, and publication venues, but in many cases, because searches rely on keywords and pay no attention to content, they do not reach the desired results in the early stages. Online scientific communities can respond to their users more efficiently by offering a customized search. In this paper, a dataset containing bibliographic information about a user's publications, the publication venues, and other published papers is used to find experts in a particular context, so that experts are recommended to a user according to his or her records and preferences. A user's request to find an expert is expressed with keywords that represent a certain expertise, and the system output is a ranked list of suggestions for that user, each naming an expert identified as appropriate to collaborate with. In an evaluation on an IEEE database, the proposed method reached an accuracy of 71.50 percent, which is an acceptable result.
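As an illustration of the keyword-driven expert ranking workflow described above, the sketch below represents each expert by the text of their publications and ranks experts by TF-IDF cosine similarity to a query; the expert names, documents, and the TF-IDF/cosine choice are assumptions for illustration, not the paper's actual model.

```python
# Hedged sketch of keyword-based expert ranking: each expert is represented by
# the concatenated text of their publications, and experts are ranked by
# TF-IDF cosine similarity to the query keywords.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

expert_docs = {
    "expert_a": "sparse representation dictionary learning image classification",
    "expert_b": "recommender systems collaborative filtering expert finding",
    "expert_c": "intrusion detection deep learning network security",
}

names = list(expert_docs)
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(expert_docs.values())

def recommend_experts(query, top_k=2):
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

print(recommend_experts("expert finding recommender systems"))
```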
H.3. Artificial Intelligence
Ali Rebwar Shabrandi; Ali Rajabzadeh Ghatari; Nader Tavakoli; Mohammad Dehghan Nayeri; Sahar Mirzaei
Abstract
To mitigate COVID-19's overwhelming burden, a rapid and efficient early screening scheme for COVID-19 at the first line of care is required. Much research has relied on laboratory tests, CT scans, and X-ray data, which are obstacles to agile, real-time screening. In this study, we propose a user-friendly, low-cost COVID-19 detection model based on data that can be self-reported at home. The most informative input features were identified and grouped into demographic, symptom, semi-clinical, and past/present disease categories. We employed grid search to identify the combination of hyperparameter settings that yields the most accurate predictions, and applied this tuning procedure to 11 classic state-of-the-art classifiers. The results show that the XGBoost classifier provides the highest accuracy, 73.3%; statistical analysis shows no significant difference between the accuracy of XGBoost and AdaBoost, although it confirms the superiority of these two methods over the others. Furthermore, the most important features obtained using SHapley Additive exPlanations (SHAP) were analyzed. "Contact with infected people," "cough," "muscle pain," "fever," "age," "cardiovascular comorbidities," "PO2," and "respiratory distress" are the most important variables. Among these, the first three have a relatively large positive impact on the target variable, whereas "age," "PO2," and "respiratory distress" are highly negatively correlated with it. Finally, we built a clinically operable, visible, and easy-to-interpret decision tree model to predict COVID-19 infection.
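A minimal sketch of the grid-search step described above, assuming a synthetic feature matrix in place of the self-reported dataset; the parameter grid and the use of scikit-learn's GridSearchCV with the XGBoost classifier are illustrative choices, not the paper's exact configuration.

```python
# Grid search over XGBoost hyperparameters on a synthetic binary screening task.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(XGBClassifier(eval_metric="logloss"),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```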
Mojtaba Nasehi; Mohsen Ashourian; Hosein Emami
Abstract
Vehicle type recognition is widely used in practical applications such as traffic control, unmanned vehicle control, road taxation, and smuggling detection. In this paper, various techniques, such as data augmentation and spatial filtering, are first used to improve and enhance the data. A developed algorithm that integrates a VGG neural network with the YOLO algorithm is then used to detect and identify vehicles. The implementation on a Raspberry Pi hardware board is described and tested in a practical scenario on real image datasets. The results show the good performance of the implemented algorithm in terms of detection rate (98%), processing speed, and robustness to environmental conditions, which indicates its capability for low-cost practical applications.
Z. Hassani; M. Alambardar Meybodi
Abstract
A major pitfall of the standard version of Particle Swarm Optimization (PSO) is that it can get stuck in local optima. To address this issue, a novel hybrid model based on the combination of PSO and AntLion Optimization (ALO) is proposed in this study. The proposed method, called H-PSO-ALO, uses a local search strategy, employing the Ant-Lion algorithm to select a less correlated and salient feature subset. The objective is to improve the prediction accuracy and adaptability of the model on various datasets by balancing the exploration and exploitation processes. The performance of the method is evaluated on 30 benchmark classification problems, the CEC 2017 benchmark problems, and several well-known datasets. To verify the performance, four algorithms, namely FDR-PSO, CLPSO, HFPSO, and MPSO, are selected for comparison with H-PSO-ALO. The experimental results show that the proposed method outperforms the others in many cases, making it a desirable candidate for optimization problems on real-world datasets.
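For context, the sketch below shows a standard (non-hybrid) PSO loop minimizing a toy objective; the objective function, swarm size, and coefficients are illustrative assumptions, and the Ant-Lion-based local search that distinguishes H-PSO-ALO is not included.

```python
# Minimal standard PSO skeleton: velocity update balances exploitation of
# personal/global bests (c1, c2 terms) against exploration (inertia w).
import numpy as np

def sphere(x):                      # toy objective to minimize
    return float(np.sum(x ** 2))

def pso(obj, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([obj(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([obj(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(np.min(pbest_val))

best, val = pso(sphere)
print("best value found:", val)
```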
H.3.2.6. Games and infotainment
Shaqayeq Saffari; Morteza Dorrigiv; Farzin Yaghmaee
Abstract
Procedural Content Generation (PCG) through automated and algorithmic content generation is an active research field in the gaming industry. Recently, Machine Learning (ML) approaches have played a pivotal role in advancing this area. While recent studies have primarily focused on examining one or a few specific approaches in PCG, this paper provides a more comprehensive perspective by exploring a wider range of approaches, their applications, advantages, and disadvantages. Furthermore, the current challenges and potential future trends in this field are discussed. Although this paper does not aim to provide an exhaustive review of all existing research due to the rapid and expansive growth of this domain, it is based on the analysis of selected articles published between 2020 and 2024.
H.3. Artificial Intelligence
Mahdi Rasouli; Vahid Kiani
Abstract
The identification of emotions in short texts of low-resource languages poses a significant challenge, requiring specialized frameworks and computational intelligence techniques. This paper presents a comprehensive exploration of shallow and deep learning methods for emotion detection in short Persian texts. The shallow learning methods employ feature extraction and dimension reduction to enhance classification accuracy. The deep learning methods, on the other hand, utilize transfer learning and word embedding, particularly BERT, to achieve high classification accuracy. A Persian dataset called "ShortPersianEmo" is introduced to evaluate the proposed methods, comprising 5472 diverse short Persian texts labeled in five main emotion classes. The evaluation results demonstrate that transfer learning and BERT-based text embedding classify short Persian texts more accurately than the alternative approaches. The dataset of this study, ShortPersianEmo, will be publicly available online at https://github.com/vkiani/ShortPersianEmo.
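A minimal sketch of the shallow-learning pipeline mentioned above (feature extraction followed by dimension reduction and a linear classifier); the toy Persian snippets, labels, and the TF-IDF/TruncatedSVD/LinearSVC choices are illustrative assumptions and do not reproduce the paper's models or the ShortPersianEmo dataset.

```python
# Shallow baseline sketch: character n-gram TF-IDF features, dimension
# reduction with truncated SVD, and a linear SVM classifier.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

texts = ["این فیلم عالی بود", "از این اتفاق خیلی ناراحت شدم",
         "چقدر ترسناک بود", "از دیدنت خوشحال شدم"]        # toy short Persian texts
labels = ["joy", "sadness", "fear", "joy"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # feature extraction
    TruncatedSVD(n_components=3, random_state=0),             # dimension reduction
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["خیلی خوشحالم"]))
```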
G.3.7. Database Machines
Abdul Aziz Danaa Abukari; Mohammed Daabo Ibrahim; Alhassan Abdul-Barik
Abstract
Hidden Markov Models (HMMs) are machine learning models that have been applied to a range of real-life applications, including intrusion detection, pattern recognition, thermodynamics, and statistical mechanics, among others. In this study, a multi-layered HMM for real-time fraud detection and prevention that drastically reduces the number of false positives and negatives is proposed and implemented. The study also focuses on reducing the parameter optimization and detection times of the proposed models using a hybrid algorithm comprising the Baum-Welch, Genetic, and Particle Swarm Optimization algorithms. Simulation results revealed that, in terms of Precision, Recall, and F1-score, our proposed model performed better than other approaches proposed in the literature.
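To make the HMM building block concrete, the sketch below fits a Gaussian HMM with Baum-Welch (via the hmmlearn package) to toy "normal" transactions and flags low-likelihood sequences; the single-feature data, threshold, and fraud framing are illustrative assumptions, and the multi-layered structure and GA/PSO-assisted optimization from the paper are not shown.

```python
# Single-layer HMM sketch: fit on normal behaviour, then score new sequences
# and flag those whose average log-likelihood falls below a threshold.
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

rng = np.random.default_rng(0)
# Toy "normal" transaction amounts (one feature), as a single training sequence.
normal = rng.normal(loc=50.0, scale=10.0, size=(500, 1))

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50,
                        random_state=0)
model.fit(normal)                       # Baum-Welch parameter estimation

def is_fraudulent(sequence, threshold=-5.0):
    # Average per-observation log-likelihood under the "normal" model.
    avg_loglik = model.score(sequence) / len(sequence)
    return avg_loglik < threshold

suspicious = rng.normal(loc=500.0, scale=50.0, size=(20, 1))
print("normal flagged:", is_fraudulent(normal[:20]))
print("suspicious flagged:", is_fraudulent(suspicious))
```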
B.3. Communication/Networking and Information Technology
S. Mojtaba Matinkhah; Roya Morshedi; Akbar Mostafavi
Abstract
The Internet of Things (IoT) has emerged as a rapidly growing technology that enables seamless connectivity between a wide variety of devices. However, with this increased connectivity comes an increased risk of cyber-attacks. In recent years, the development of intrusion detection systems (IDS) has become critical for ensuring the security and privacy of IoT networks. This article presents a study that evaluates the accuracy of an IDS for detecting network attacks in IoT networks. The proposed IDS uses a decision tree classifier and is tested on four benchmark datasets: NSL-KDD, BOT-IoT, CICIDS2017, and MQTT-IoT. The impact of noise in the training and test datasets on classification accuracy is analyzed. The results indicate that clean data yields the highest accuracy, while noisy datasets significantly reduce it, and that accuracy decreases further when both the training and test datasets are noisy. These findings demonstrate the importance of using clean data for training and testing an IDS in IoT networks to achieve accurate classification. This research provides valuable insights for the development of a robust and accurate IDS for IoT networks.
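A minimal sketch of the noise experiment described above, using scikit-learn's synthetic data as a stand-in for the benchmark datasets: a decision tree is trained and tested under clean and Gaussian-noise conditions and the accuracies are compared; the noise level and dataset are assumptions for illustration.

```python
# Compare decision-tree accuracy under clean and Gaussian-noise conditions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
noisy = lambda A, sigma: A + rng.normal(scale=sigma, size=A.shape)

for name, Xtr, Xte in [
    ("clean train / clean test", X_train, X_test),
    ("clean train / noisy test", X_train, noisy(X_test, 1.0)),
    ("noisy train / noisy test", noisy(X_train, 1.0), noisy(X_test, 1.0)),
]:
    clf = DecisionTreeClassifier(random_state=0).fit(Xtr, y_train)
    print(name, accuracy_score(y_test, clf.predict(Xte)))
```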
H.3. Artificial Intelligence
Seyed Alireza Bashiri Mosavi; Omid Khalaf Beigi
Abstract
A speedy and accurate transient stability assessment (TSA) is achieved by employing efficient machine learning- and statistics-based (MLST) algorithms on the space of transient nonlinear time series. In the MLST world, the feature selection process, which forms a compacted optimal transient feature space (COTFS) from raw high-dimensional transient data, can pave the way for high-performance TSA. Hence, designing a comprehensive feature selection scheme (FSS) that populates COTFS with relevant, discriminative transient features (RDTFs) is an urgent need. This work introduces a twin hybrid FSS (THFSS) to select RDTFs from transient 28-variate time series data. Each fold of THFSS comprises filter and wrapper mechanisms: the conditional relevancy rate (CRR), based on mutual information (MI) and entropy calculations, is used as the filter method, while incremental wrapper subset selection (IWSS) and IWSS with replacement (IWSSr), built on a kernelized support vector machine (SVM) and a twin SVM (TWSVM), are used as the wrapper methods. After applying THFSS to the transient univariate series, the RDTFs are entered into a cross-validation-based train-test procedure to evaluate their efficiency in TSA. The results show that THFSS-based RDTFs yield a prediction accuracy of 98.87% and a processing time of 102.653 milliseconds for TSA.
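The sketch below illustrates a filter-wrapper chain in the spirit of THFSS, ranking features by mutual information and then adding them one at a time whenever cross-validated RBF-SVM accuracy improves; it does not implement the exact CRR, IWSS/IWSSr, or TWSVM formulations, and the synthetic 28-feature dataset is an assumption.

```python
# Filter stage: rank features by mutual information with the label.
# Wrapper stage: greedily keep a feature only if CV accuracy improves.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=28, n_informative=6,
                           random_state=0)

ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]  # filter

selected, best_score = [], 0.0
for f in ranking:                                                      # wrapper
    candidate = selected + [f]
    score = cross_val_score(SVC(kernel="rbf"), X[:, candidate], y, cv=5).mean()
    if score > best_score:
        selected, best_score = candidate, score

print("selected features:", selected, "cv accuracy:", round(best_score, 3))
```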
H.6. Pattern Recognition
Sadegh Rahmani Rahmani-Boldaji; Mehdi Bateni; Mahmood Mortazavi Dehkordi
Abstract
Efficient regular-frequent pattern mining from sensor-produced data has become a challenge. The large volume of data leads to prolonged runtimes, delaying vital predictions and decisions that need an immediate response, so using big data platforms and parallel algorithms is an appropriate solution. Additionally, an incremental technique is more suitable than static methods for mining patterns from big data streams. This study presents an incremental parallel approach and a compact tree structure for extracting regular-frequent patterns from the data of wireless sensor networks. Furthermore, fewer database scans are performed in an effort to reduce the mining runtime. The study was performed on the Intel 5-day and 10-day datasets with clusters of 6, 4, and 2 nodes. The findings show that the runtime was improved in all three cluster modes, by 14, 18, and 34% for the 5-day dataset and by 22, 55, and 85% for the 10-day dataset, respectively.
R. Azizi; A. M. Latif
Abstract
In this work, we show that image reconstruction from a burst of individually demosaicked RAW captures propagates demosaicking artifacts throughout the image processing pipeline. Hence, we propose a joint regularization scheme for burst denoising and demosaicking. We model the burst alignment functions and the color filter array sampling functions as one linear operator, and we formulate the individual burst reconstruction and demosaicking problems as a three-color-channel optimization problem. We introduce a cross-channel prior into the solution of this optimization problem and develop a numerical solver via the alternating direction method of multipliers (ADMM). Moreover, our proposed method avoids the complexity of alignment estimation as a preprocessing step for burst reconstruction: it relies on a phase correlation approach in the Fourier domain to efficiently find the relative translation, rotation, and scale among the burst captures and to warp them accordingly. As a result of these steps, the proposed joint burst denoising and demosaicking solution improves the quality of reconstructed images by a considerable margin compared to existing image model-based methods.
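For the alignment step mentioned above, the sketch below shows translation-only phase correlation with NumPy FFTs: the peak of the inverse FFT of the normalized cross-power spectrum gives the shift between two frames; rotation/scale estimation and the ADMM solver are not reproduced, and the test images are synthetic.

```python
# Translation estimation by phase correlation: the normalized cross-power
# spectrum of two frames has an inverse FFT that peaks at their relative shift.
import numpy as np

def phase_correlation_shift(ref, moved):
    R, M = np.fft.fft2(ref), np.fft.fft2(moved)
    cross_power = M * np.conj(R)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))   # known translation
print(phase_correlation_shift(ref, moved))          # expected (5, -3)
```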
H.3. Artificial Intelligence
Naeimeh Mohammad Karimi; Mehdi Rezaeian
Abstract
In the era of massive data, analyzing bioinformatics data and discovering their functions are very important. The rate of sequence generation by sequencing techniques is increasing rapidly, and researchers are faced with many sequences of unknown function. One of the essential operations in bioinformatics is the classification of sequences to discover unknown proteins. Sequences can be classified in two ways: traditional methods, which use sequence alignment and have a high computational cost, and modern methods, which use feature extraction to classify proteins. In this regard, methods such as DeepFam have been presented. This research improves the DeepFam model, with a special focus on extracting appropriate features to differentiate the sequences of different categories. As the model improved, the features tended to become more generic. The Grad-CAM method has been used to analyze the extracted features and interpret the improved network layers, and the fitting vector from the transformer model was then used to check the performance of Grad-CAM. The COG database, a massive database of protein sequences, was used to assess the accuracy of the presented method. We show that by extracting more efficient features, the conserved regions in the sequences can be discovered more accurately, which helps to classify the proteins better. One of the critical advantages of the presented method is that, as the number of categories increases, the necessary flexibility is maintained, and the classification accuracy in three tests is higher than that of other methods.
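The sketch below shows the Grad-CAM-style weighting idea on a toy 1-D CNN over one-hot protein sequences, highlighting which positions drive a class score; the untrained network, sequence length, and alphabet size are illustrative assumptions and do not reproduce the improved DeepFam model or the COG experiments.

```python
# Grad-CAM-style weighting for a 1-D CNN: average the gradients of the class
# score over sequence positions to weight each feature map, then combine maps
# to get a per-position importance profile.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

seq_len, alphabet = 100, 21                      # toy protein length / amino acids
inputs = layers.Input(shape=(seq_len, alphabet))
conv = layers.Conv1D(32, 9, padding="same", activation="relu",
                     name="last_conv")(inputs)
pooled = layers.GlobalMaxPooling1D()(conv)
outputs = layers.Dense(5, activation="softmax")(pooled)   # 5 toy protein families
model = Model(inputs, outputs)

x = np.random.default_rng(0).random((1, seq_len, alphabet)).astype("float32")
grad_model = Model(model.input, [model.get_layer("last_conv").output, model.output])
with tf.GradientTape() as tape:
    conv_out, preds = grad_model(x)
    class_score = preds[:, int(tf.argmax(preds[0]))]
grads = tape.gradient(class_score, conv_out)               # d(score)/d(feature maps)
weights = tf.reduce_mean(grads, axis=1)                    # average over positions
cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, :], axis=-1))
print("most influential sequence position:", int(tf.argmax(cam[0])))
```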
H.5. Image Processing and Computer Vision
Fateme Namazi; Mehdi Ezoji; Ebadat Ghanbari Parmehr
Abstract
Paddy fields in the north of Iran are highly fragmented, leading to challenges in accurately mapping them using remote sensing techniques. Cloudy weather often degrades image quality or renders images unusable, further complicating monitoring efforts. This paper presents a novel paddy rice mapping method based on phenology, addressing these challenges. The method utilizes time series data from Sentinel-1 and 2 satellites to derive a rice phenology curve. This curve is constructed using the cross ratio (CR) index from Sentinel-1, and the normalized difference vegetation index (NDVI) and land surface water index (LSWI) from Sentinel-2. Unlike existing methods, which often rely on analyzing single-point indices at specific times, this approach examines the entire time series behavior of each pixel. This robust strategy significantly mitigates the impact of cloud cover on classification accuracy. The time series behavior of each pixel is then correlated with this rice phenology curve. The maximum correlation, typically achieved around the 50-day period in the middle of the cultivation season, helps identify potential rice fields. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is then employed, utilizing the maximum correlation values from all three indices to classify pixels as rice paddy or other land cover types. The implementation results validate the accuracy of this method, achieving an overall accuracy of 99%. All processes were carried out on the Google Earth Engine (GEE) platform.
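A minimal sketch of the correlation-with-phenology idea on synthetic curves rather than Sentinel data: each pixel's index time series is correlated with a reference rice phenology curve, and the maximum correlation feeds an RBF-kernel SVM; here a single synthetic index stands in for the CR/NDVI/LSWI triplet.

```python
# Correlate each pixel's time series with a reference phenology curve, then
# classify pixels from the maximum-correlation feature with an RBF SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
t = np.arange(0, 150)                                   # days in the season
phenology = np.exp(-((t - 75) ** 2) / (2 * 25 ** 2))    # reference rice curve

def max_correlation(series, reference):
    # Normalized cross-correlation, maximized over time lags.
    s = (series - series.mean()) / (series.std() + 1e-12)
    r = (reference - reference.mean()) / (reference.std() + 1e-12)
    return np.correlate(s, r, mode="full").max() / len(reference)

# Synthetic pixels: rice-like pixels follow the phenology curve plus noise.
rice = [phenology + 0.1 * rng.normal(size=t.size) for _ in range(50)]
other = [rng.random(t.size) for _ in range(50)]
X = np.array([[max_correlation(p, phenology)] for p in rice + other])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```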
B.3. Communication/Networking and Information Technology
Roya Morshedi; S. Mojtaba Matinkhah
Abstract
The Internet of Things (IoT) is a rapidly growing domain essential for modern smart services. However, resource limitations in IoT nodes create significant security vulnerabilities, making them prone to cyberattacks. Deep learning models have emerged as effective tools for detecting anomalies in IoT traffic, yet Gaussian noise remains a major challenge, impacting detection accuracy. This study proposes an intrusion detection system based on a simple LSTM architecture with 128 memory units, optimized for deployment on edge servers and trained on the CIC-IDS2017 dataset. The model achieves outstanding performance, with a detection rate of 99.90%, accuracy of 99.90%, and an F1 score of 98.93%. A key innovation is integrating the Hurst parameter with the model, improving resilience against Gaussian noise and enhancing detection of attacks like DoS and DDoS. This research highlights the value of advanced statistical features and robust noise-resistant models in securing IoT networks. The system’s precision, rapid response, and innovative approach mark a significant advance in IoT cybersecurity.
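For the statistical feature mentioned above, the sketch below estimates the Hurst parameter of a 1-D traffic series via rescaled-range (R/S) analysis; the window sizes and the white-noise test signal are illustrative assumptions, and the way the paper couples this feature with the LSTM is not reproduced.

```python
# Rescaled-range (R/S) estimate of the Hurst parameter: the slope of
# log(R/S) against log(window size) approximates H.
import numpy as np

def hurst_rs(series, window_sizes=(16, 32, 64, 128, 256)):
    rs_values = []
    for w in window_sizes:
        rs = []
        for start in range(0, len(series) - w + 1, w):
            chunk = series[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()            # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        rs_values.append(np.mean(rs))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(0)
white_noise = rng.normal(size=4096)
print("estimated H (white noise, expected ~0.5):", round(hurst_rs(white_noise), 2))
```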
H.5.9. Scene Analysis
Navid Raisi; Mahdi Rezaei; Behrooz Masoumi
Abstract
Human Activity Recognition (HAR) using computer vision is an expanding field with diverse applications, including healthcare, transportation, and human-computer interaction. While classical approaches such as Support Vector Machines (SVM), Histogram of Oriented Gradients (HOG), and Hidden Markov Models (HMM) rely on manually extracted features and struggle with complex motion patterns, deep learning-based models (e.g., Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Transformer-based models) have improved performance but still face challenges in handling occlusions, noisy environments, and computational efficiency. This paper introduces Attention-HAR, a novel deep neural network model designed to enhance HAR performance through three key innovations: Conv3DTranspose for spatial upsampling, ConvLSTM2D for capturing spatiotemporal patterns, and a custom attention mechanism that prioritizes critical frames within sequences. Unlike conventional attention mechanisms, our approach dynamically assigns weights to key frames, reducing the impact of redundant frames and enhancing interpretability and computational efficiency. Experimental results on the UCF-101 dataset demonstrate that Attention-HAR outperforms state-of-the-art models, achieving an accuracy of 97.61%, a precision of 97.95%, a recall of 97.49%, an F1-score of 97.64, and an AUC of 99.9%. With only 1.26 million parameters, the model is computationally efficient and well-suited for deployment on lightweight platforms. These findings suggest that integrating temporal-spatial feature learning with attention mechanisms can significantly improve HAR in dynamic and complex environments.
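A hedged Keras sketch wiring together the three components named above (Conv3DTranspose upsampling, ConvLSTM2D, and a simple frame-level attention block); the layer sizes, ordering, and input resolution are illustrative assumptions, not the Attention-HAR configuration from the paper.

```python
# Illustrative video-classification model: spatial upsampling (Conv3DTranspose),
# spatio-temporal features (ConvLSTM2D), and softmax attention over frames.
from tensorflow.keras import layers, Model

def build_attention_har(n_frames=16, height=32, width=32, channels=3, n_classes=101):
    inputs = layers.Input(shape=(n_frames, height, width, channels))
    x = layers.Conv3DTranspose(16, kernel_size=3, strides=(1, 2, 2),
                               padding="same", activation="relu")(inputs)  # spatial upsampling
    x = layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                          return_sequences=True)(x)                        # spatio-temporal features
    x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)         # (frames, 32)
    scores = layers.Dense(1)(x)                                            # per-frame score
    weights = layers.Softmax(axis=1)(scores)                               # attention over frames
    weighted = layers.Multiply()([x, weights])                             # re-weighted frames
    context = layers.GlobalAveragePooling1D()(weighted)                    # pooled clip descriptor
    outputs = layers.Dense(n_classes, activation="softmax")(context)
    return Model(inputs, outputs)

model = build_attention_har()
model.summary()
```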