Technical Paper
H.3. Artificial Intelligence
Naeimeh Mohammad Karimi; Mehdi Rezaeian
Abstract
In the era of massive data, analyzing biological sequence data and discovering its functions has become very important. The rate at which new sequences are produced by sequencing techniques is increasing rapidly, leaving researchers with many sequences of unknown function. One of the essential operations in bioinformatics is therefore the classification of sequences to identify unknown proteins. Sequences can be classified in two ways: traditional methods rely on sequence alignment, which has a high computational cost, while modern methods use feature extraction to classify proteins. In this regard, methods such as DeepFam have been presented. This research improves on the DeepFam model, with a particular focus on extracting features that differentiate sequences belonging to different categories. As the model improved, the extracted features tended to become more generic. The Grad-CAM method was used to analyze the extracted features and interpret the layers of the improved network, and the embedding vector from the transformer model was then used to check the performance of Grad-CAM. The COG database, a large database of protein sequences, was used to evaluate the accuracy of the proposed method. We show that by extracting more effective features, the conserved regions in the sequences can be located more accurately, which helps to classify the proteins better. A key advantage of the proposed method is that it remains flexible as the number of categories increases, and its classification accuracy in three tests is higher than that of other methods.
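
To make the interpretability step concrete, the sketch below shows how Grad-CAM can be applied to a 1D convolutional classifier over one-hot-encoded protein sequences, producing a per-position importance profile that can be compared against conserved regions. The model architecture, layer name, sequence length, and alphabet size are illustrative assumptions, not the DeepFam configuration used in the paper.

```python
# Minimal Grad-CAM sketch for a 1D CNN over one-hot protein sequences.
# Architecture, layer names, and sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, ALPHABET, N_CLASSES = 1000, 21, 25   # assumed sizes

inputs = layers.Input(shape=(SEQ_LEN, ALPHABET))
x = layers.Conv1D(128, 8, activation="relu", name="conv_features")(inputs)
x = layers.GlobalMaxPooling1D()(x)
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)
model = Model(inputs, outputs)

def grad_cam_1d(model, seq_onehot, class_idx, conv_layer="conv_features"):
    """Return a per-position importance profile for one sequence."""
    grad_model = Model(model.inputs,
                       [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(seq_onehot[np.newaxis, ...])
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)           # (1, L', filters)
    weights = tf.reduce_mean(grads, axis=1)                # (1, filters)
    cam = tf.reduce_sum(conv_out * weights[:, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                                   # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()      # normalize to [0, 1]

# Example: importance profile for a random one-hot sequence.
dummy = np.eye(ALPHABET)[np.random.randint(0, ALPHABET, SEQ_LEN)].astype("float32")
profile = grad_cam_1d(model, dummy, class_idx=0)
```

Positions with high values in the profile are the ones the network relied on for the chosen class; in a trained model these would be compared against known conserved regions.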
Original/Review Paper
H.5. Image Processing and Computer Vision
Fateme Namazi; Mehdi Ezoji; Ebadat Ghanbari Parmehr
Abstract
Paddy fields in the north of Iran are highly fragmented, which makes it challenging to map them accurately with remote sensing techniques. Cloudy weather often degrades image quality or renders images unusable, further complicating monitoring efforts. This paper presents a novel phenology-based paddy rice mapping method that addresses these challenges. The method uses time series data from the Sentinel-1 and Sentinel-2 satellites to derive a rice phenology curve, constructed from the cross ratio (CR) index of Sentinel-1 and the normalized difference vegetation index (NDVI) and land surface water index (LSWI) of Sentinel-2. Unlike existing methods, which often rely on single-date index values, this approach examines the entire time series behavior of each pixel, a strategy that significantly mitigates the impact of cloud cover on classification accuracy. The time series of each pixel is correlated with the rice phenology curve, and the maximum correlation, typically reached over a roughly 50-day window in the middle of the cultivation season, identifies potential rice fields. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is then employed, using the maximum correlation values of all three indices to classify pixels as rice paddy or other land cover types. The implementation results validate the accuracy of this method, achieving an overall accuracy of 99%. All processing was carried out on the Google Earth Engine (GEE) platform.
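
As an illustration of the two-stage idea, the sketch below correlates each pixel's index time series with a reference phenology curve, keeps the maximum correlation for CR, NDVI, and LSWI, and feeds the three values into an RBF-kernel SVM. The reference curves, window length, and randomly generated data are placeholders, not the paper's Google Earth Engine exports.

```python
# Hedged sketch: per-pixel phenology correlation features + RBF-kernel SVM.
# All data and curve lengths below are illustrative stand-ins.
import numpy as np
from sklearn.svm import SVC

def max_phenology_correlation(series, reference):
    """Slide the reference curve over a pixel's time series and return
    the maximum Pearson correlation across all alignments."""
    n, m = len(series), len(reference)
    best = -1.0
    for start in range(n - m + 1):
        r = np.corrcoef(series[start:start + m], reference)[0, 1]
        if np.isfinite(r):
            best = max(best, r)
    return best

rng = np.random.default_rng(0)
n_pixels, n_dates, ref_len = 200, 30, 10                 # assumed shapes
indices = {k: rng.random((n_pixels, n_dates)) for k in ("CR", "NDVI", "LSWI")}
references = {k: rng.random(ref_len) for k in ("CR", "NDVI", "LSWI")}
labels = rng.integers(0, 2, n_pixels)                     # 1 = rice, 0 = other cover

# One feature per index: the maximum correlation with the phenology curve.
features = np.column_stack([
    [max_phenology_correlation(indices[k][i], references[k])
     for i in range(n_pixels)]
    for k in ("CR", "NDVI", "LSWI")
])

svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(features, labels)
rice_mask = svm.predict(features)                         # per-pixel rice / non-rice map
```

Because the decision is based on the whole seasonal trajectory rather than a single acquisition date, a few cloud-contaminated observations change the maximum correlation only slightly.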
Technical Paper
B.3. Communication/Networking and Information Technology
Roya Morshedi; S. Mojtaba Matinkhah
Abstract
The Internet of Things (IoT) is a rapidly growing domain essential for modern smart services. However, resource limitations in IoT nodes create significant security vulnerabilities, making them prone to cyberattacks. Deep learning models have emerged as effective tools for detecting anomalies in IoT traffic, yet Gaussian noise remains a major challenge, impacting detection accuracy. This study proposes an intrusion detection system based on a simple LSTM architecture with 128 memory units, optimized for deployment on edge servers and trained on the CIC-IDS2017 dataset. The model achieves outstanding performance, with a detection rate of 99.90%, accuracy of 99.90%, and an F1 score of 98.93%. A key innovation is integrating the Hurst parameter with the model, improving resilience against Gaussian noise and enhancing detection of attacks like DoS and DDoS. This research highlights the value of advanced statistical features and robust noise-resistant models in securing IoT networks. The system’s precision, rapid response, and innovative approach mark a significant advance in IoT cybersecurity.
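
The sketch below illustrates the general pipeline described above: a rescaled-range estimate of the Hurst parameter is appended to each flow's feature window before classification by a single 128-unit LSTM. The window length, feature count, per-flow series length, and synthetic data are assumptions for illustration, not the paper's CIC-IDS2017 preprocessing.

```python
# Hedged sketch: Hurst-parameter feature + single 128-unit LSTM classifier.
# Shapes and synthetic data are assumptions, not the paper's preprocessing.
import numpy as np
from tensorflow.keras import layers, models

def hurst_rs(series, min_chunk=8):
    """Rough Hurst exponent estimate via rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    n, size = len(series), min_chunk
    sizes, rs_vals = [], []
    while size <= n // 2:
        rs = []
        for i in range(0, n - size + 1, size):
            c = series[i:i + size]
            dev = np.cumsum(c - c.mean())
            if c.std() > 0:
                rs.append((dev.max() - dev.min()) / c.std())
        if rs:
            sizes.append(size)
            rs_vals.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

TIMESTEPS, N_FEATURES = 10, 20                            # assumed window shape
model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, N_FEATURES + 1)),      # +1 for the Hurst feature
    layers.LSTM(128),                                     # single 128-unit LSTM
    layers.Dense(1, activation="sigmoid"),                # benign vs. attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy usage: estimate Hurst from a longer per-flow series, broadcast it
# as an extra channel of each flow's feature window, then train.
rng = np.random.default_rng(1)
flows = rng.random((256, 256))                            # assumed per-flow packet series
hurst = np.array([hurst_rs(f) for f in flows])
windows = rng.random((256, TIMESTEPS, N_FEATURES))
augmented = np.concatenate(
    [windows, np.repeat(hurst[:, None, None], TIMESTEPS, axis=1)], axis=-1)
labels = rng.integers(0, 2, 256)
model.fit(augmented, labels, epochs=1, batch_size=64, verbose=0)
```

The Hurst parameter summarizes long-range dependence in the traffic, which is why it can help separate bursty DoS/DDoS behavior from noise-perturbed benign flows.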
Applied Article
H.5.9. Scene Analysis
Navid Raisi; Mahdi Rezaei; Behrooz Masoumi
Abstract
Human Activity Recognition (HAR) using computer vision is an expanding field with diverse applications, including healthcare, transportation, and human-computer interaction. While classical approaches such as Support Vector Machines (SVM), Histogram of Oriented Gradients (HOG), and Hidden Markov Models (HMM) rely on manually extracted features and struggle with complex motion patterns, deep learning-based models (e.g., Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Transformer-based models) have improved performance but still face challenges in handling occlusions, noisy environments, and computational efficiency. This paper introduces Attention-HAR, a novel deep neural network model designed to enhance HAR performance through three key innovations: Conv3DTranspose for spatial upsampling, ConvLSTM2D for capturing spatiotemporal patterns, and a custom attention mechanism that prioritizes critical frames within sequences. Unlike conventional attention mechanisms, our approach dynamically assigns weights to key frames, reducing the impact of redundant frames and enhancing interpretability and computational efficiency. Experimental results on the UCF-101 dataset demonstrate that Attention-HAR outperforms state-of-the-art models, achieving an accuracy of 97.61%, a precision of 97.95%, a recall of 97.49%, an F1-score of 97.64%, and an AUC of 99.9%. With only 1.26 million parameters, the model is computationally efficient and well-suited for deployment on lightweight platforms. These findings suggest that integrating spatiotemporal feature learning with attention mechanisms can significantly improve HAR in dynamic and complex environments.
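
For readers who want a starting point, the sketch below assembles an Attention-HAR-style network in Keras: ConvLSTM2D for spatiotemporal features, Conv3DTranspose for spatial upsampling, and a simple learned attention that weights frames before classification. The input resolution, filter counts, and exact layer ordering are assumptions rather than the published architecture.

```python
# Hedged sketch of an Attention-HAR-style network. Resolution, filter
# counts, and layer ordering are assumptions, not the published model.
from tensorflow.keras import layers, models

FRAMES, H, W, C, N_CLASSES = 16, 32, 32, 3, 101          # assumed clip shape (UCF-101 has 101 classes)

inputs = layers.Input(shape=(FRAMES, H, W, C))
x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(inputs)
x = layers.Conv3DTranspose(16, 3, strides=(1, 2, 2), padding="same",
                           activation="relu")(x)          # upsample H and W
x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)  # (frames, 16)

# Frame-level attention: score each frame, softmax over time, weighted sum.
scores = layers.Dense(1)(x)                               # (frames, 1)
weights = layers.Softmax(axis=1)(scores)                  # attention over frames
context = layers.Dot(axes=1)([weights, x])                # weighted sum of frame features
context = layers.Flatten()(context)

outputs = layers.Dense(N_CLASSES, activation="softmax")(context)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The learned frame weights can also be inspected directly, which is one way such a model stays interpretable: frames with near-zero weight contribute little to the final prediction.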