Amin Rahmati; Foad Ghaderi
Abstract
Every facial expression involves one or more facial action units appearing on the face. Therefore, action unit recognition is commonly used to enhance facial expression detection performance. It is important to identify subtle changes in the face when particular action units occur. In this paper, we propose an architecture that employs local features extracted from specific regions of the face together with global features taken from the whole face. To this end, we combine the SPPNet and FPN modules into an end-to-end network for facial action unit recognition. First, different predefined regions of the face are detected. Next, the SPPNet module captures deformations in the detected regions. The SPPNet module focuses on each region separately and cannot take into account possible changes in the other areas of the face. In parallel, the FPN module extracts global features related to each of the facial regions. By combining the two modules, the proposed architecture captures both local and global facial features and enhances the performance of the action unit recognition task. Experimental results on the DISFA dataset demonstrate the effectiveness of our method.
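The following is a minimal PyTorch sketch of the local/global fusion idea described above: spatial pyramid pooling summarizes a cropped face-region feature map (local), an FPN-style top-down merge of two backbone stages yields a global feature, and the two are concatenated for per-AU prediction. All module names, channel sizes, and pyramid levels here are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' implementation) of fusing SPP-based
# local features with FPN-style global features for one action unit.
# Channel sizes, pyramid levels, and backbone stages are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPool(nn.Module):
    """Pool a region feature map at several grid sizes and concatenate."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                        # x: (B, C, H, W)
        pooled = [F.adaptive_max_pool2d(x, k).flatten(1) for k in self.levels]
        return torch.cat(pooled, dim=1)          # (B, C * sum(k*k))

class AUHead(nn.Module):
    """One action-unit classifier over local (SPP) + global (FPN) features."""
    def __init__(self, c4_ch=512, c5_ch=1024, out_ch=256, levels=(1, 2, 4)):
        super().__init__()
        self.spp = SpatialPyramidPool(levels)
        self.lat4 = nn.Conv2d(c4_ch, out_ch, 1)  # lateral connections
        self.lat5 = nn.Conv2d(c5_ch, out_ch, 1)
        self.smooth = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        spp_dim = c4_ch * sum(k * k for k in levels)
        self.classifier = nn.Linear(spp_dim + out_ch, 1)

    def forward(self, region_feat, c4, c5):
        # region_feat: backbone features cropped around a predefined face region.
        local = self.spp(region_feat)            # local deformation cues
        # FPN top-down pathway: upsample the deeper map and merge laterally.
        p4 = self.lat4(c4) + F.interpolate(self.lat5(c5),
                                           size=c4.shape[-2:], mode="nearest")
        global_feat = self.smooth(p4).mean(dim=(2, 3))   # pooled global cues
        fused = torch.cat([local, global_feat], dim=1)
        return torch.sigmoid(self.classifier(fused))     # AU presence probability

head = AUHead()
region = torch.randn(2, 512, 7, 7)               # crop around, e.g., the brow
c4 = torch.randn(2, 512, 14, 14)
c5 = torch.randn(2, 1024, 7, 7)
prob = head(region, c4, c5)                      # shape (2, 1)
```

In a full model, one such head per action unit would share the backbone, so the local branch stays blind to the rest of the face while the FPN branch supplies whole-face context.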
M. M. Jaziriyan; F. Ghaderi
Abstract
Most existing neural machine translation (NMT) methods translate sentences without considering the context. It has been shown that exploiting inter- and intra-sentential context can improve NMT models and yield better overall translation quality. However, providing document-level data is costly, so properly exploiting contextual data from monolingual corpora can help translation quality. In this paper, we propose a new method for context-aware neural machine translation (CA-NMT) that combines hierarchical attention networks (HAN) and automatic post-editing (APE) techniques to fix discourse phenomena when context is scarce. HAN is used when only a small amount of document-level data is available, and APE can be trained on vast monolingual document-level data to improve results further. Experimental results show that HAN and APE complement each other in mitigating contextual translation errors and further improve CA-NMT, achieving a reasonable improvement over HAN alone (a BLEU score of 22.91 on the En-De news-commentary dataset).
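As a rough illustration of the HAN component, the sketch below uses word-level attention to summarize each previous source sentence, sentence-level attention over those summaries, and a gate that fuses the resulting context vector into the current decoder state. The dimensions, gating scheme, and use of multi-head attention are assumptions made for illustration, not the paper's exact model; the APE stage would be a separate seq2seq model trained on monolingual document-level data to rewrite the draft translation.

```python
# A minimal sketch of hierarchical attention over previous sentences.
# Word-level attention summarizes each context sentence; sentence-level
# attention summarizes the summaries; a learned gate fuses the context
# into the decoder state. All sizes/choices here are assumptions.
import torch
import torch.nn as nn

class HierarchicalContext(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.word_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.sent_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, query, ctx_states):
        # query:      (B, 1, D)    current decoder state
        # ctx_states: (B, S, T, D) encoded states of S previous sentences
        B, S, T, D = ctx_states.shape
        q = query.repeat_interleave(S, dim=0)             # (B*S, 1, D)
        words = ctx_states.reshape(B * S, T, D)
        sent_summary, _ = self.word_attn(q, words, words) # one vector per sentence
        sent_summary = sent_summary.reshape(B, S, D)
        ctx, _ = self.sent_attn(query, sent_summary, sent_summary)  # (B, 1, D)
        g = torch.sigmoid(self.gate(torch.cat([query, ctx], dim=-1)))
        return g * query + (1 - g) * ctx                  # gated fusion

ctxmod = HierarchicalContext()
dec_state = torch.randn(4, 1, 512)        # one decoding step, batch of 4
ctx = torch.randn(4, 3, 20, 512)          # 3 previous sentences, 20 tokens each
fused = ctxmod(dec_state, ctx)            # (4, 1, 512)
```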
M. Kurmanji; F. Ghaderi
Abstract
Despite considerable advances in recognizing hand gestures from still images, many challenges remain in classifying hand gestures in videos. The latter task comes with more difficulties, including higher computational complexity and the arduous task of representing temporal features. Hand movement dynamics, represented by temporal features, have to be extracted by analyzing all the frames of a video. So far, both 2D and 3D convolutional neural networks have been used to model the temporal dynamics of video frames. 3D CNNs can extract the changes across consecutive frames and tend to be more suitable for the video classification task; however, they usually require more computation. On the other hand, techniques like tiling make it possible to aggregate all the frames into a single matrix while preserving the temporal and spatial features. This way, 2D CNNs, which are inherently simpler than 3D CNNs, can be used to classify the video instances. In this paper, we compare the application of 2D and 3D CNNs for representing temporal features and classifying hand gesture sequences. Additionally, using a two-stage, two-stream architecture, we efficiently combine the color and depth modalities and fuse the 2D and 3D CNN predictions. The effect of different types of augmentation techniques is also investigated. Our results confirm that appropriate use of 2D CNNs outperforms a 3D CNN implementation on this task.
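To make the tiling idea concrete, the sketch below packs a 16-frame clip into a 4x4 image grid and feeds it to an off-the-shelf 2D CNN. The grid layout, frame count, input resolution, and choice of ResNet-18 are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of tiling: pack N video frames into one large 2D image
# grid so a standard 2D CNN sees spatial and coarse temporal structure at
# once. Grid size, frame count, and backbone are assumptions.
import torch
import torchvision.models as models

def tile_frames(frames, grid=(4, 4)):
    # frames: (B, N, C, H, W) with N == grid[0] * grid[1]
    B, N, C, H, W = frames.shape
    rows, cols = grid
    assert N == rows * cols, "frame count must fill the grid"
    x = frames.reshape(B, rows, cols, C, H, W)
    x = x.permute(0, 3, 1, 4, 2, 5)            # (B, C, rows, H, cols, W)
    return x.reshape(B, C, rows * H, cols * W) # one tiled image per clip

# Classify 16-frame RGB clips with an off-the-shelf 2D CNN.
clips = torch.rand(2, 16, 3, 112, 112)         # two clips of 16 frames
tiled = tile_frames(clips)                     # (2, 3, 448, 448)
cnn = models.resnet18(num_classes=25)          # e.g., 25 gesture classes
logits = cnn(tiled)                            # (2, 25)
```

A second stream processing depth maps the same way, plus a fusion stage over the per-stream predictions, would give a two-stage, two-stream setup of the kind the abstract describes.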