Seyedeh H. Erfani
Abstract
Facial expressions are part of human language and are often used to convey emotions. Because people differ widely in how they express emotion across media, facial expression recognition is a challenging problem for machine learning methods. Emotion and sentiment analysis have also become new trends in social media. The Deep Convolutional Neural Network (DCNN) is one of the most recent learning methods; it is loosely modeled on the human brain and achieves high accuracy on large-scale data such as images. In this paper, an automatic facial expression recognition (FER) method using a deep convolutional neural network is proposed. The work provides a way to overcome the overfitting problem in training the deep convolutional neural network for FER and also proposes an effective pre-processing phase that improves the accuracy of facial expression recognition. Results for the recognition of seven emotional states (neutral, happiness, sadness, surprise, anger, fear, disgust) are presented by applying the proposed method to two widely used public datasets (JAFFE and CK+). The results show that the proposed method outperforms traditional FER methods, with accuracies of about 98.59% and 96.89% on the JAFFE and CK+ datasets, respectively.
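The abstract does not specify the network architecture or the exact anti-overfitting measures, so the following is only a minimal sketch of a small DCNN for seven-class FER in which dropout and on-the-fly augmentation stand in as generic ways to curb overfitting on small datasets such as JAFFE and CK+; the input size and layer configuration are assumptions, not the authors' design.

```python
# Minimal sketch (not the authors' exact architecture): a small DCNN for
# 7-class FER with dropout and training-time augmentation as generic
# overfitting countermeasures on small face datasets.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7            # neutral, happiness, sadness, surprise, anger, fear, disgust
INPUT_SHAPE = (48, 48, 1)  # assumed grayscale face crop size; not given in the abstract

def build_fer_dcnn():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        # Augmentation layers are active only during training.
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.05),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # dropout as an overfitting countermeasure
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_fer_dcnn().summary()
```

In practice the pre-processing phase mentioned in the abstract (e.g., face detection and normalization of the crops) would run before images are fed to such a network; its exact form is described in the paper itself, not here.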
M. R. Fallahzadeh; F. Farokhi; A. Harimi; R. Sabbaghi-Nadooshan
Abstract
Facial Expression Recognition (FER) is one of the basic ways of interacting with machines and has received increasing attention in recent years. In this paper, a novel FER system based on a deep convolutional neural network (DCNN) is presented. Motivated by the powerful ability of DCNNs to learn features and classify images, the goal of this research is to design a compatible and discriminative input for the pre-trained AlexNet-DCNN. The proposed method consists of four steps. First, three channels of the image are extracted: the original gray-level image together with its horizontal and vertical gradients, arranged like the red, green, and blue channels of an RGB image to serve as the DCNN input. Second, data augmentation (scaling, rotation, width shift, height shift, zoom, and horizontal and vertical flips) is applied in addition to the original images for training the DCNN. Then, the AlexNet-DCNN model is applied to learn high-level features corresponding to the different emotion classes. Finally, transfer learning is implemented and the model is fine-tuned on the target datasets. Average recognition accuracies of 92.41% and 93.66% were achieved on the JAFFE and CK+ datasets, respectively. Experimental results on these two benchmark emotional datasets show the promising performance of the proposed model, which can improve on current FER systems.
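A minimal sketch of the three-channel input construction and the AlexNet transfer-learning setup described above is given below. The gradient operator (here a simple numerical gradient), the choice of frozen layers, and the optimizer settings are assumptions for illustration; the abstract only states that the gray level plus horizontal and vertical gradients are stacked like RGB channels and that the pre-trained network is fine-tuned on the target datasets.

```python
# Sketch only: three-channel gray/gradient input plus an AlexNet head
# replaced for 7 emotion classes and prepared for fine-tuning.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def to_three_channels(gray: np.ndarray) -> np.ndarray:
    """Stack the gray image with its horizontal and vertical gradients,
    mimicking the R, G, B channels expected by a pre-trained AlexNet."""
    g = gray.astype(np.float32)
    gx = np.gradient(g, axis=1)  # horizontal gradient
    gy = np.gradient(g, axis=0)  # vertical gradient
    return np.stack([g, gx, gy], axis=-1)

# Load ImageNet-pre-trained AlexNet and replace the final classifier
# layer with a 7-way head for the emotion classes.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier[6] = nn.Linear(4096, 7)

# One possible fine-tuning scheme (an assumption): freeze the
# convolutional feature extractor and train only the classifier head.
for p in alexnet.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, alexnet.parameters()), lr=1e-4)
```

Each face image would be passed through `to_three_channels`, resized to AlexNet's expected 227x227 input, augmented as listed in the abstract, and then used to fine-tune the network on JAFFE or CK+.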