I.3.6. Electronics
Samira Mavaddati; Mohammad Razavi
Abstract
Rice is one of the most important staple crops in the world, providing millions of people with a significant source of food and income. Problems related to rice classification and quality detection can significantly impact the profitability and sustainability of rice cultivation, which is why the importance of solving them cannot be overstated. Improving classification and quality detection techniques can ensure the safety and quality of rice crops and raise the productivity and profitability of rice cultivation. However, such techniques are often limited in their ability to accurately classify rice grains due to factors such as lighting conditions, background, and image quality. To overcome these limitations, a deep learning-based classification algorithm is introduced in this paper that combines the power of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to better represent the structural content of different types of rice grains. This hybrid model, called CNN-LSTM, combines the benefits of both networks to enable more effective and accurate classification of rice grains. Three scenarios are demonstrated in this paper: a CNN, a CNN combined with transfer learning, and the CNN-LSTM deep model. The performance of these scenarios is compared with other deep learning models and dictionary learning-based classifiers. The experimental results demonstrate that the proposed algorithm detects different rice varieties with an accuracy of over 99.85% and identifies quality for varying combinations of rice varieties with an average accuracy of 99.18%.
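As an illustration of the kind of hybrid architecture the abstract describes, the following is a minimal sketch of a CNN-LSTM image classifier in Keras. The layer sizes, the 224x224 input, and the five-class output are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a CNN-LSTM image classifier; sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(input_shape=(224, 224, 3), num_classes=5):
    inputs = layers.Input(shape=input_shape)
    # CNN front end: learn local structural features of the grain image.
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    # Treat each row of the feature map as one step of a sequence so the
    # LSTM can model spatial context across the image.
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    x = layers.Reshape((h, w * c))(x)
    x = layers.LSTM(128)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```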
H.3. Artificial Intelligence
Seyed Alireza Bashiri Mosavi; Omid Khalaf Beigi; Arash Mahjoubifard
Abstract
Using intelligent approaches based on machine learning algorithms (MLAs) to diagnose COVID-19 has attracted the attention of both pattern recognition and medical experts as a joint line of work. Before MLAs were applied to data extracted from infectious diseases, techniques such as RAT and RT-qPCR were used to diagnose the contagious disease; their weaknesses include the shortage of test kits, the need for the specialist and the patient to be present at the same place, and low accuracy. This study introduces a three-stage learning framework consisting of a feature extractor based on the visual geometry group 16 (VGG16) model to mitigate the problems caused by the lack of samples, a three-channel convolution layer, and a classifier based on a three-layer neural network. The results show that Covid VGG16 (CoVGG16) achieves accuracies of 96.37% and 100%, precisions of 96.52% and 100%, and recalls of 96.30% and 100% for COVID-19 prediction on the test sets of the two datasets (one of CT-scan-based images and one of X-ray-oriented ones, gathered from Kaggle repositories).
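A hedged sketch of the transfer-learning idea the abstract outlines: a frozen VGG16 feature extractor followed by an extra convolution layer and a three-layer classifier head. The input size, layer widths, and binary sigmoid output are illustrative assumptions, not the published CoVGG16 architecture.

```python
# Sketch of a VGG16-based transfer-learning pipeline; the head is a
# stand-in for CoVGG16, with assumed layer widths.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False  # frozen extractor mitigates small-sample overfitting

inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)  # extra conv
x = layers.GlobalAveragePooling2D()(x)
# Three-layer fully connected classifier head.
x = layers.Dense(256, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # COVID vs. non-COVID

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```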
Kh. Aghajani
Abstract
Emotion recognition has applications in various fields, including human-computer interaction. In recent years, various methods have been proposed to recognize emotion from facial or speech information, while the fusion of the two has received less attention. In this paper, the use of face-only and speech-only information for emotion recognition is first examined. For emotion recognition from speech, a pre-trained network called YAMNet is used to extract features; after passing through a convolutional neural network (CNN), the extracted features are fed into a bi-LSTM with an attention mechanism to perform the recognition. For emotion recognition from facial information, a deep CNN-based model is proposed. Finally, after reviewing these two approaches, an emotion detection framework based on the fusion of the two models is proposed. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), containing videos of 24 actors (12 men and 12 women) across 8 emotion categories, is used to evaluate the proposed model. The implementation results show that combining face and speech information improves the performance of the emotion recognizer.
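The fusion idea can be sketched as follows, assuming YAMNet frame embeddings (1024-dimensional) are precomputed and fusion is a simple average of the two branches' class probabilities; the face CNN here is a small stand-in, not the paper's deep model, and all layer sizes are assumptions.

```python
# Sketch: audio branch (CNN -> bi-LSTM -> attention), face branch, and
# simple decision-level fusion. Dimensions are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 8  # RAVDESS emotion categories

# Speech branch over a sequence of precomputed YAMNet embeddings.
audio_in = layers.Input(shape=(None, 1024))          # (frames, yamnet_dim)
a = layers.Conv1D(128, 3, activation="relu", padding="same")(audio_in)
a = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(a)
a = layers.Attention()([a, a])                       # self-attention over frames
a = layers.GlobalAveragePooling1D()(a)
audio_probs = layers.Dense(NUM_CLASSES, activation="softmax")(a)

# Face branch: small CNN over a grayscale face crop (stand-in model).
face_in = layers.Input(shape=(96, 96, 1))
f = layers.Conv2D(32, 3, activation="relu")(face_in)
f = layers.MaxPooling2D()(f)
f = layers.Conv2D(64, 3, activation="relu")(f)
f = layers.GlobalAveragePooling2D()(f)
face_probs = layers.Dense(NUM_CLASSES, activation="softmax")(f)

# Decision-level fusion: average the two branches' class probabilities.
fused = layers.Average()([audio_probs, face_probs])
model = models.Model([audio_in, face_in], fused)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```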
B. Z. Mansouri; H.R. Ghaffary; A. Harimi
Abstract
Speech emotion recognition (SER) is a challenging field of research that has attracted attention over the last two decades. Feature extraction has been reported as the most challenging issue in SER systems, a problem that deep neural networks have partially solved in other applications. To address it, we propose a novel enriched spectrogram calculated from the fusion of wide-band and narrow-band spectrograms, which benefits from both high temporal and high spectral resolution. The resultant spectrogram images are then applied to the pre-trained deep convolutional neural network ResNet152. In place of the last layer of ResNet152, we add five additional layers to adapt the model to the present task. All experiments are performed on the popular EmoDB dataset using the leave-one-speaker-out technique, which guarantees the model's speaker independence. The model achieves an accuracy of 88.97%, which shows the efficiency of the proposed approach in contrast to other state-of-the-art methods.
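The enriched-spectrogram construction can be illustrated as below: a short-window STFT gives a wide-band spectrogram with fine temporal resolution, a long-window STFT gives a narrow-band one with fine spectral resolution, and the two are fused on a common grid. The window lengths, the averaging fusion, and the 224x224 target size are assumptions, not the paper's exact settings.

```python
# Illustrative fusion of wide-band and narrow-band spectrograms.
import numpy as np
import librosa
import tensorflow as tf

def enriched_spectrogram(path, sr=16000, size=(224, 224)):
    y, sr = librosa.load(path, sr=sr)
    # Short window -> wide-band spectrogram (good temporal resolution).
    wide = np.abs(librosa.stft(y, n_fft=128, hop_length=32))
    # Long window -> narrow-band spectrogram (good spectral resolution).
    narrow = np.abs(librosa.stft(y, n_fft=2048, hop_length=256))
    # Log-magnitude, resize both onto a common grid, then average.
    wide_db = librosa.amplitude_to_db(wide, ref=np.max)
    narrow_db = librosa.amplitude_to_db(narrow, ref=np.max)
    wide_img = tf.image.resize(wide_db[..., None], size).numpy()
    narrow_img = tf.image.resize(narrow_db[..., None], size).numpy()
    fused = (wide_img + narrow_img) / 2.0
    # Replicate to 3 channels to match ResNet152's expected input.
    return np.repeat(fused, 3, axis=-1)
```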
H. Sadr; Mir M. Pedram; M. Teshnehlab
Abstract
With the rapid growth of textual information on the web, sentiment analysis is turning into an essential analytic tool rather than an academic endeavor, and numerous studies have been carried out in recent years to address this issue. With the emergence of deep learning, deep neural networks have attracted a lot of attention and become mainstream in this field. Despite the remarkable success of deep learning models for sentiment analysis of text, they are still in the early steps of development and their potential is yet to be fully explored. The convolutional neural network is one of the deep learning methods that has been widely used for sentiment analysis but is confronted with some limitations: firstly, it requires a large amount of training data; secondly, it assumes that all words in a sentence contribute equally to the polarity of the sentence. To fill these lacunae, a convolutional neural network equipped with an attention mechanism is proposed in this paper, which not only takes advantage of the attention mechanism but also utilizes transfer learning to boost the performance of sentiment analysis. According to the empirical results, our proposed model achieves classification accuracy comparable to or even better than state-of-the-art methods.
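A minimal sketch of a text CNN with an attention layer over the convolutional features, with pretrained word embeddings standing in for the transfer-learning component; the vocabulary size, sequence length, filter width, and random placeholder embedding matrix are illustrative assumptions.

```python
# Text CNN with word-level attention; embeddings stand in for transfer
# learning (in practice, load GloVe/word2vec instead of random values).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN, EMB = 20000, 100, 300
embedding_matrix = np.random.rand(VOCAB, EMB)  # placeholder for pretrained vectors

inputs = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(
    VOCAB, EMB,
    embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
    trainable=False)(inputs)                         # transferred embeddings
x = layers.Conv1D(128, 5, activation="relu", padding="same")(x)
# Attention lets informative words contribute more to the polarity.
scores = layers.Dense(1, activation="tanh")(x)       # (batch, MAXLEN, 1)
weights = layers.Softmax(axis=1)(scores)
x = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])
outputs = layers.Dense(1, activation="sigmoid")(x)   # positive vs. negative

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```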
M. R. Fallahzadeh; F. Farokhi; A. Harimi; R. Sabbaghi-Nadooshan
Abstract
Facial Expression Recognition (FER) is one of the basic ways of interacting with machines and has been receiving more attention in recent years. In this paper, a novel FER system based on a deep convolutional neural network (DCNN) is presented. Motivated by the powerful ability of DCNNs to learn features and classify images, the goal of this research is to design a compatible and discriminative input for the pre-trained AlexNet DCNN. The proposed method consists of four steps. First, three channels of the image are extracted: the original gray-level image plus its horizontal and vertical gradients, stacked like the red, green, and blue channels of an RGB image to form the DCNN input. Second, augmented images (scaling, rotation, width shift, height shift, zoom, and horizontal and vertical flips) are prepared in addition to the original images for training the DCNN. Then, the AlexNet DCNN model is applied to learn high-level features corresponding to the different emotion classes. Finally, transfer learning is applied and the model is fine-tuned on the target datasets. Average recognition accuracies of 92.41% and 93.66% were achieved on the JAFFE and CK+ datasets, respectively. Experimental results on these two benchmark emotional datasets show the promising performance of the proposed model, which can improve the performance of current FER systems.
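The first step's input construction can be sketched directly, assuming Sobel filters as the gradient operator (the abstract does not specify one) and AlexNet's 227x227 input size.

```python
# Build the three-channel input: gray level plus horizontal and vertical
# gradients, stacked like RGB channels.
import cv2
import numpy as np

def three_channel_input(gray_path, size=(227, 227)):
    gray = cv2.imread(gray_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.resize(gray, size).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)   # vertical gradient
    # Normalize each plane to [0, 255] so the channels are comparable.
    def norm(p):
        return 255.0 * (p - p.min()) / (np.ptp(p) + 1e-8)
    return np.stack([norm(gray), norm(gx), norm(gy)], axis=-1)  # (H, W, 3)
```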