Original/Review Paper
H.3.8. Natural Language Processing
Nura Esfandiari; Kourosh Kiani; Razieh Rastgoo
Abstract
Chatbots are computer programs designed to simulate human conversation. Powered by artificial intelligence (AI), chatbots are increasingly used to provide customer service, particularly those built on large language models (LLMs). A process known as fine-tuning is employed to personalize chatbot answers, but it demands substantial high-quality data and computational resources. In this article, to overcome the computational hurdles associated with fine-tuning LLMs, an innovative hybrid approach is proposed. This approach aims to enhance the answers generated by LLMs, specifically for Persian chatbots used in mobile customer services. A transformer-based evaluation model was developed to score generated answers and select the most appropriate one. Additionally, a Persian-language dataset tailored to the domain of mobile sales was collected to support the personalization of the Persian chatbot and the training of the evaluation model. This approach is expected to foster increased customer interaction and boost sales within the Persian mobile phone market. Experiments conducted on four different LLMs demonstrated the effectiveness of the proposed approach in generating more relevant and semantically accurate answers for users.
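The score-and-select step can be sketched as follows. The paper trains a transformer-based evaluator; the simple lexical scorer below is a hypothetical stand-in used only to show the selection loop, and the example question and candidate answers are invented.

```python
import re
from collections import Counter
from math import sqrt

def bow(text: str) -> Counter:
    # Bag-of-words vector over lowercase word tokens.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_best_answer(question: str, candidates: list) -> str:
    # Stand-in scorer: rank each candidate by lexical overlap with the
    # question; the paper's trained transformer evaluator replaces this.
    q = bow(question)
    scores = [cosine(q, bow(c)) for c in candidates]
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]

best = select_best_answer(
    "What is the battery capacity of this phone?",
    ["It has a 5000 mAh battery capacity.",
     "We also sell chargers and cases."],
)
```

Swapping the lexical scorer for a learned evaluator leaves the argmax selection unchanged, which is what makes this design cheap compared to fine-tuning the generator itself.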
Original/Review Paper
I.3.6. Electronics
Samira Mavaddati; Mohammad Razavi
Abstract
Rice is one of the most important staple crops in the world, providing millions of people with a significant source of food and income. Problems related to rice classification and quality detection can significantly impact the profitability and sustainability of rice cultivation, so the importance of solving them cannot be overstated. By improving classification and quality-detection techniques, the safety and quality of rice crops can be ensured and the productivity and profitability of rice cultivation improved. However, such techniques are often limited in their ability to accurately classify rice grains due to factors such as lighting conditions, background, and image quality. To overcome these limitations, a deep learning-based classification algorithm is introduced in this paper that combines the power of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to better represent the structural content of different types of rice grains. This hybrid model, called CNN-LSTM, combines the benefits of both neural networks to enable more effective and accurate classification of rice grains. Three scenarios are demonstrated in this paper: CNN, CNN combined with a transfer learning technique, and the CNN-LSTM deep model. The performance of these scenarios is compared with other deep learning models and dictionary learning-based classifiers. The experimental results demonstrate that the proposed algorithm detects different rice varieties with an impressive accuracy of over 99.85% and identifies quality for varying combinations of rice varieties with an average accuracy of 99.18%.
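The data flow of a CNN-LSTM hybrid can be sketched in NumPy: convolutional features are extracted from the grain image, then the feature rows are scanned as a sequence by an LSTM cell whose final hidden state feeds a classifier head. All shapes, filter sizes, and weights below are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    # Single-channel 'valid' cross-correlation followed by ReLU,
    # standing in for the CNN feature extractor.
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def lstm_forward(seq, params):
    # Minimal LSTM cell scanning feature rows as time steps.
    Wf, Wi, Wo, Wc = params
    h = np.zeros(Wf.shape[0])
    c = np.zeros(Wf.shape[0])
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in seq:
        xh = np.concatenate([x, h])
        f, i, o = sig(Wf @ xh), sig(Wi @ xh), sig(Wo @ xh)
        c = f * c + i * np.tanh(Wc @ xh)
        h = o * np.tanh(c)
    return h  # final hidden state -> classifier head

# Toy 12x12 "grain image": conv features, each row is one time step.
img = rng.random((12, 12))
feat = conv2d_valid(img, rng.standard_normal((3, 3)))   # shape (10, 10)
hidden = 8
params = tuple(rng.standard_normal((hidden, feat.shape[1] + hidden)) * 0.1
               for _ in range(4))
embedding = lstm_forward(feat, params)                  # shape (8,)
```

The point of the hybrid is visible in the shapes: the CNN summarizes local texture into feature rows, and the LSTM aggregates those rows into a fixed-size embedding regardless of image height.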
Applied Article
H.3. Artificial Intelligence
Mohamad Mahdi Yadegar; Hossein Rahmani
Abstract
In recent years, new technologies have brought innovations into the financial and commercial world, giving fraudsters many ways to commit fraud and costing companies dearly. Using advanced technologies, we can build systems that detect fraudulent patterns and prevent future incidents. Machine learning algorithms are increasingly used for fraud detection in financial data, but a common challenge is dataset imbalance, which hinders traditional machine learning methods. Finding the best approach to these imbalanced datasets is a problem many researchers face when applying machine learning methods. In this paper, we propose a method called FinFD-GCN that uses Graph Convolutional Networks (GCNs) for fraud detection in credit card transaction datasets. FinFD-GCN represents transactions as a graph in which each node is a transaction and each edge encodes the similarity between transactions. With this graph representation, FinFD-GCN can capture complex relationships and anomalies that may be overlooked by, or even impossible to detect with, conventional approaches, thus enhancing the accuracy and robustness of fraud detection in financial data. We use common evaluation metrics and confusion matrices to evaluate the proposed method. FinFD-GCN achieves significant improvements in recall and AUC compared to traditional methods such as logistic regression, support vector machines, and random forests, making it a robust solution for credit card fraud detection. On this credit card dataset, FinFD-GCN outperformed the baseline models by 5% and 10% with respect to F1 and AUC, respectively.
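The node-and-edge construction plus one GCN propagation step can be sketched as follows. The similarity threshold, feature dimensions, and single-layer setup are illustrative assumptions; the abstract does not specify FinFD-GCN's exact edge criterion or depth.

```python
import numpy as np

rng = np.random.default_rng(1)

def similarity_graph(X, threshold=0.9):
    # Nodes are transactions; connect pairs whose cosine similarity
    # exceeds a threshold (a plausible stand-in edge criterion).
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    A = (S > threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def gcn_layer(A, H, W):
    # One GCN step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W),
    # mixing each transaction's features with its neighbours'.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

X = rng.random((20, 6))            # 20 transactions, 6 features each
A = similarity_graph(X)
W = rng.standard_normal((6, 4)) * 0.1
H1 = gcn_layer(A, X, W)            # node embeddings, shape (20, 4)
```

Because each embedding aggregates a transaction's neighbourhood, a fraudulent transaction that looks normal in isolation can still stand out through the company it keeps, which is the advantage over row-wise classifiers the abstract describes.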
Original/Review Paper
H.5.7. Segmentation
Mohsen Erfani Haji Pour
Abstract
The segmentation of noisy images remains one of the primary challenges in image processing. Traditional fuzzy clustering algorithms often exhibit poor performance in the presence of high-density noise due to insufficient consideration of spatial features. In this paper, a novel approach is proposed that leverages both local and non-local spatial information, utilizing a Gaussian kernel to counteract high-density noise. This method enhances the algorithm's sensitivity to spatial relationships between pixels, thereby reducing the impact of noise. Additionally, a C+ means initialization approach is introduced to improve performance and reduce sensitivity to initial conditions, along with an automatic smoothing parameter tuning method. The evaluation results, based on the criteria of fuzzy assignment coefficient, fuzzy segmentation entropy, and segmentation accuracy, demonstrate a significant improvement in the performance of the proposed method.
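The core update can be sketched with a standard fuzzy c-means membership step plus a local spatial smoothing of the memberships. The neighbourhood average below is a simplified stand-in for the paper's combined local/non-local Gaussian-kernel weighting, and the toy image, window size, and fixed centers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fcm_memberships(X, centers, m=2.0):
    # Standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def smooth_memberships(U, shape, win=1):
    # Local spatial term: average each pixel's membership over a
    # (2*win+1)^2 neighbourhood, so isolated noisy pixels are pulled
    # toward the label of their surroundings.
    H, Wd = shape
    Ug = U.reshape(H, Wd, -1)
    out = np.empty_like(Ug)
    for i in range(H):
        for j in range(Wd):
            sl = Ug[max(i - win, 0):i + win + 1, max(j - win, 0):j + win + 1]
            out[i, j] = sl.mean(axis=(0, 1))
    out = out.reshape(-1, U.shape[1])
    return out / out.sum(axis=1, keepdims=True)

# Noisy two-region 8x8 toy image (top half dark, bottom half bright).
img = np.concatenate([rng.normal(0.2, 0.05, 32), rng.normal(0.8, 0.05, 32)])
X = img.reshape(-1, 1)
centers = np.array([[0.2], [0.8]])
U = smooth_memberships(fcm_memberships(X, centers), (8, 8))
```

In the full algorithm the centers are re-estimated from the smoothed memberships each iteration; the single update shown here is the part where spatial information suppresses noise.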
Technical Paper
I.3.7. Engineering
Elahe Moradi
Abstract
Thyroid disease is common worldwide, and early diagnosis plays an important role in effective treatment and management. Utilizing machine learning techniques is vital in thyroid disease diagnosis. This research proposes tree-based machine learning algorithms with hyperparameter optimization techniques to predict thyroid disease. The thyroid disease dataset from the UCI Repository is used as a benchmark to evaluate the performance of the proposed algorithms. After data preprocessing and normalization, the data were balanced using the random oversampling (ROS) technique. Two methods, grid search (GS) and random search (RS), were employed to optimize hyperparameters. Finally, using Python, various criteria were applied to evaluate the performance of the proposed algorithms: decision tree, random forest, AdaBoost, and extreme gradient boosting. The simulation results indicate that the Extreme Gradient Boosting (XGB) algorithm with the grid search method outperforms all the other algorithms, obtaining an impressive accuracy, AUC, sensitivity, precision, and MCC of 99.39%, 99.97%, 98.85%, 99.40%, and 98.79%, respectively. These results demonstrate the potential of the proposed method for accurately predicting thyroid disease.
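The two preprocessing-and-tuning steps the abstract names, random oversampling and grid search, can be sketched in plain Python. The toy dataset, parameter grid, and scoring function are invented for illustration; in the paper the score would come from cross-validated model performance.

```python
import random
from collections import Counter
from itertools import product

random.seed(0)

def random_oversample(X, y):
    # ROS: duplicate randomly chosen minority-class samples until
    # both classes are the same size.
    counts = Counter(y)
    minority = min(counts, key=counts.get)
    need = max(counts.values()) - counts[minority]
    idx = [i for i, label in enumerate(y) if label == minority]
    extra = [random.choice(idx) for _ in range(need)]
    return X + [X[i] for i in extra], y + [y[i] for i in extra]

def grid_search(param_grid, score_fn):
    # Exhaustive grid search: evaluate every combination, keep the best.
    keys = list(param_grid)
    return max((dict(zip(keys, vals))
                for vals in product(*param_grid.values())),
               key=score_fn)

X = [[x] for x in range(10)]
y = [0] * 8 + [1] * 2                      # imbalanced: 8 vs 2
X_bal, y_bal = random_oversample(X, y)     # balanced: 8 vs 8

# Hypothetical scorer standing in for cross-validated accuracy.
best = grid_search({"max_depth": [2, 3, 4, 6], "min_leaf": [1, 5]},
                   score_fn=lambda p: -abs(p["max_depth"] - 4) - p["min_leaf"])
```

Random search differs only in sampling a fixed number of random combinations from the same grid instead of enumerating all of them, which is why it scales better when the grid is large.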
Applied Article
A.1. General
Morteza Mohammadi Zanjireh; Farzad Morady
Abstract
This paper predicts the severity of crashes based on the analysis of multiple variables using machine learning methods. For this purpose, data from 2012 to 2024 for the city of Tempe, Arizona, USA were used. Features were selected using a metaheuristic method. Then, decision tree and artificial neural network classifiers were used to classify crash severity. Based on the evaluation metrics, the decision tree, with an overall accuracy of 54%, was the optimal model. Finally, the optimal model was interpreted using the permutation feature importance method. The results show that year-related features (importance 0.22), spatial features (0.11), and the collision manner (0.10) are the most important predictors of crash severity on urban roads.
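Permutation feature importance, the interpretation method used above, can be sketched generically: shuffle one feature column at a time and record how much the model's accuracy drops. The synthetic data and the stand-in "fitted model" below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def permutation_importance(predict, X, y, n_repeats=10):
    # Importance of feature j = mean accuracy drop after shuffling
    # column j, which breaks its relationship with the target.
    baseline = np.mean(predict(X) == y)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(baseline - np.mean(predict(Xp) == y))
        imp[j] = np.mean(drops)
    return imp

# Synthetic data: only feature 0 determines the label; feature 1 is noise.
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(int)
model = lambda X: (X[:, 0] > 0.5).astype(int)   # stand-in fitted model
imp = permutation_importance(model, X, y)
```

Because the method only needs a predict function, it works identically for the decision tree and the neural network, which is what makes it a convenient model-agnostic interpretation tool.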
Original/Review Paper
H.3.2.2. Computer vision
Shiva Zeymaran; Vali Derhami; Mehran Mehrandezh
Abstract
This paper presents an accurate and efficient method for determining the coordinates of welding seams, addressing a significant challenge in the deployment of welding robots for complex tasks. Despite welding robots’ precision in following predetermined paths, they struggle with seam identification due to noisy industrial environments, stringent accuracy requirements, and computational complexity. Unlike existing approaches, which either rely on random sampling or are limited to simple geometries, our method combines splicing techniques with welding map alignment to handle complex shapes with multiple seams. This research employs a weighted method to integrate point clouds captured by RGB-D cameras, producing a low-noise point cloud. By leveraging the welding map drawn for the parts, the method identifies probable regions for weld seams within the point cloud, substantially reducing the search space. This enables the system to find the weld seam in a timely manner. Knowing the approximate shape of the weld from the available welding map, an innovative technique is then used to accurately locate the weld seam within these regions. Experimental results on fence-shaped structures in a simulated environment show a mean average error of 1.30 mm, achieving a 30% improvement in precision and a 77% reduction in computation time compared to state-of-the-art methods. The approach's ability to accurately identify weld seams in complex shapes, coupled with its computational efficiency, suggests strong potential for real-world application. By leveraging welding maps and robust point cloud processing techniques, the method is designed to handle noise and variability, key challenges in industrial environments.
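The weighted point-cloud integration step can be sketched as a confidence-weighted per-point average over registered captures, which attenuates uncorrelated sensor noise. The assumption that the clouds are already point-wise registered, and the specific weights and noise levels, are illustrative; the abstract does not detail the paper's weighting scheme.

```python
import numpy as np

def fuse_point_clouds(clouds, weights):
    # Weighted per-point average of registered RGB-D captures: each
    # camera view contributes in proportion to its confidence weight.
    clouds = np.asarray(clouds, dtype=float)   # (n_views, n_points, 3)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize weights
    return np.tensordot(w, clouds, axes=1)     # (n_points, 3)

# Two noisy captures of the same 4-point patch; the second camera is
# trusted twice as much (weights are hypothetical, not from the paper).
rng = np.random.default_rng(4)
truth = rng.random((4, 3))
captures = [truth + rng.normal(0, 0.01, truth.shape) for _ in range(2)]
fused = fuse_point_clouds(captures, weights=[1.0, 2.0])
```

Averaging independent zero-mean noise across views reduces its standard deviation, which is why the fused cloud is lower-noise than any single capture and a better input for the subsequent seam search.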