Original/Review Paper
H.3. Artificial Intelligence
Sara Mahmoudi Rashid
Abstract
Teleoperation systems are increasingly deployed in critical applications such as robotic surgery, industrial automation, and hazardous environment exploration. However, these systems are highly susceptible to network-induced delays, cyber-attacks, and system uncertainties, which can degrade performance and compromise safety. This paper proposes a Graph Neural Network (GNN)-based Digital Twin (DT) framework to enhance the cyber-resilience and predictive control of teleoperation systems. The GNN-based anomaly detection mechanism accurately identifies cyber-attacks, such as false data injection (FDI) and denial-of-service (DoS) attacks, with a detection rate of 24.3% and a false alarm rate of only 1.8%, significantly outperforming conventional machine learning methods. Furthermore, the predictive digital twin model, integrated with model predictive control (MPC), effectively compensates for latency and dynamic uncertainties, reducing control errors by 14.12% compared to traditional PID controllers. Simulation results in a robotic teleoperation testbed demonstrate a 24.4% improvement in trajectory tracking accuracy under variable delay conditions, ensuring precise and stable operation.
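A minimal sketch of the kind of per-node anomaly scoring a GNN-based detector could perform on a teleoperation graph is shown below. This is not the paper's implementation: the node features (placeholder telemetry), graph topology, reconstruction-error criterion, and thresholding rule are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' model): a two-layer graph
# convolution that scores per-node anomalies on a teleoperation graph whose
# nodes stand for master/slave joints and network links.
import torch
import torch.nn as nn

def normalized_adjacency(edge_index, num_nodes):
    """Build D^{-1/2} (A + I) D^{-1/2} from an edge list of shape (2, E)."""
    adj = torch.zeros(num_nodes, num_nodes)
    adj[edge_index[0], edge_index[1]] = 1.0
    adj = adj + torch.eye(num_nodes)                      # add self-loops
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

class GNNAnomalyScorer(nn.Module):
    """Encode node states with graph convolutions, then reconstruct them;
    a large reconstruction error on a node is treated as an FDI/DoS symptom."""
    def __init__(self, in_dim, hidden_dim=32):
        super().__init__()
        self.enc1 = nn.Linear(in_dim, hidden_dim)
        self.enc2 = nn.Linear(hidden_dim, hidden_dim)
        self.dec = nn.Linear(hidden_dim, in_dim)

    def forward(self, x, a_hat):
        h = torch.relu(a_hat @ self.enc1(x))              # neighbourhood mixing
        h = torch.relu(a_hat @ self.enc2(h))
        x_rec = self.dec(h)
        return (x - x_rec).pow(2).mean(dim=1)             # per-node anomaly score

# Usage on a toy 5-node graph with 6-dimensional node states
edge_index = torch.tensor([[0, 1, 1, 2, 3], [1, 0, 2, 3, 4]])
a_hat = normalized_adjacency(edge_index, num_nodes=5)
x = torch.randn(5, 6)                                     # placeholder telemetry
scores = GNNAnomalyScorer(in_dim=6)(x, a_hat)
print(scores > scores.mean() + 2 * scores.std())          # flag outlier nodes
```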
Original/Review Paper
H.3.2.2. Computer vision
Fatemeh Asadi-Zeydabadi; Ali Afkari-Fahandari; Elham Shabaninia; Hossein Nezamabadi-pour
Abstract
Farsi optical character recognition remains challenging due to the script’s cursive structure, positional glyph variations, and frequent diacritics. This study conducts a comparative evaluation of five foundational deep learning architectures widely used in OCR—two lightweight CRNN-based models aimed at efficient deployment and three Transformer-based models designed for advanced contextual modeling—to examine their suitability for the distinct characteristics of Farsi script. Performance was benchmarked on four publicly available datasets: Shotor and IDPL-PFOD2 for printed text, and Iranshahr and Sadri for handwritten text, using word-level accuracy, parameter count, and computational cost as evaluation criteria. CRNN-based models achieved high accuracy on word-level datasets—99.42% (Shotor), 97.08% (Iranshahr), 98.86% (Sadri)—while maintaining smaller model sizes and lower computational demands. However, their accuracy dropped to 78.49% on the larger and more diverse line-level IDPL-PFOD2 dataset. Transformer-based models substantially narrowed this performance gap, exhibiting greater robustness to variations in font, style, and layout, with the best model reaching 92.81% on IDPL-PFOD2. To the best of our knowledge, this work is among the first comprehensive comparative studies of lightweight CRNN and Transformer-based architectures for Farsi OCR, encompassing both printed and handwritten scripts, and establishes a solid performance baseline for future research and deployment strategies.
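The sketch below illustrates the kind of lightweight CRNN pipeline and word-level accuracy metric such a comparison rests on. It is not one of the benchmarked models: the layer sizes, image height, and charset size are assumptions chosen only to make the example self-contained.

```python
# Illustrative sketch (assumed architecture): a lightweight CRNN for word-level
# OCR, with the parameter count reported as one of the comparison criteria.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    """Conv feature extractor -> BiLSTM sequence model -> per-step charset
    logits, intended to be trained with CTC loss on word images."""
    def __init__(self, num_chars, img_height=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_dim = 64 * (img_height // 4)                  # channels x pooled height
        self.rnn = nn.LSTM(feat_dim, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, num_chars + 1)          # +1 for the CTC blank

    def forward(self, images):                             # (B, 1, H, W)
        f = self.cnn(images)                               # (B, 64, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # one step per width slice
        logits, _ = self.rnn(seq)
        return self.head(logits)                           # (B, W/4, num_chars + 1)

def word_accuracy(predicted, target):
    """Word-level accuracy: exact string match between prediction and label."""
    return sum(p == t for p, t in zip(predicted, target)) / len(target)

model = TinyCRNN(num_chars=40)                             # hypothetical charset size
print(sum(p.numel() for p in model.parameters()), "parameters")
```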
Original/Review Paper
I.3.7. Engineering
Elahe Moradi
Abstract
Fault prediction in power transformers is pivotal for safeguarding operational reliability and reducing system disruptions. Leveraging dissolved gas analysis (DGA) data, AI-driven techniques have recently been employed to enhance predictive performance. This paper introduces a novel machine-learning framework that integrates Histogram-based Gradient Boosting (HGB) with a metaheuristic Particle Swarm Optimization (PSO) algorithm for hyperparameter tuning, thereby improving classifier robustness. The proposed method underwent a two-stage evaluation: first, Gradient Boosting (GB), Extreme Gradient Boosting (XGBoost), and HGB were benchmarked, revealing HGB as the most effective method; second, PSO was applied to optimize HGB’s hyperparameters, yielding further performance improvements. Experimental results demonstrate that the hybrid HGB-PSO model achieves an accuracy of 97.85%, precision of 98.90%, recall of 97.33%, and an F1-score of 98.99%. All simulations and comparative analyses against state-of-the-art methods were implemented in Python, and confusion-matrix analysis was employed to assess predictive performance comprehensively. These findings demonstrate that the hybrid HGB-PSO method achieves superior accuracy and robustness in transformer fault prediction.
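Since the framework is implemented in Python, a bare-bones sketch of the HGB-plus-PSO idea is given below. This is not the paper's exact setup: the placeholder DGA features, the two tuned hyperparameters, their bounds, and the PSO coefficients are all assumptions.

```python
# Minimal sketch (assumed configuration): tuning two HistGradientBoosting
# hyperparameters with a bare-bones PSO loop, scored by cross-validated accuracy.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 7))                  # placeholder DGA gas-ratio features
y = rng.integers(0, 4, size=300)               # placeholder fault classes

bounds = np.array([[0.01, 0.3],                # learning_rate
                   [8.0, 64.0]])               # max_leaf_nodes

def fitness(pos):
    """Cross-validated accuracy of HGB at a given hyperparameter position."""
    clf = HistGradientBoostingClassifier(learning_rate=float(pos[0]),
                                         max_leaf_nodes=int(round(pos[1])),
                                         random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

n_particles, n_iters = 6, 5
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    # inertia + cognitive + social velocity update, then clip to the search box
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best learning_rate, max_leaf_nodes:", gbest)
```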