Document Type: Original/Review Paper

Authors

Faculty of Computer Engineering and IT, Sadjad University of Technology, Mashhad, Iran.

Abstract

Feature selection is one of the most important steps in designing speech emotion recognition systems. Because it is uncertain which speech features relate to which emotions, many features must be taken into account, and identifying the most discriminative among them is therefore necessary. To select appropriate emotion-related speech features, the current paper adopts a multi-task approach: each speaker is treated as a task, and a multi-task objective function is proposed to select features. As a result, the proposed method chooses a single set of speaker-independent features that are discriminative across all emotion classes. Consequently, multi-class classifiers can be applied directly, or binary classifiers can simply be combined to perform multi-class classification. The present work employs two well-known datasets, Berlin and eNTERFACE, and applies the openSMILE toolkit to extract more than 6500 features. After the feature selection phase, the results show that the proposed method selects features that are common across different runs, and its runtime is the lowest among the compared methods. Finally, seven classifiers are employed, and the best performance achieved when facing a new speaker is 73.76% on the Berlin dataset and 72.17% on the eNTERFACE dataset. These experimental results show that the proposed method is superior to existing state-of-the-art methods.
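As a rough illustration of the "each speaker is a task" idea described above, the sketch below implements one standard multi-task feature-selection formulation: l2,1-regularized least squares solved by iterative reweighting, with one task per speaker, ranking features by their joint importance across all speaker tasks. The function name, the least-squares surrogate for classification, and all parameter values are illustrative assumptions and are not taken from the paper.

import numpy as np

def multitask_select(X_tasks, y_tasks, lam=0.1, n_iter=50, top_k=200):
    """A minimal sketch, assuming an l2,1-regularized multi-task objective.
    X_tasks[t]: (n_t, d) acoustic features of speaker t (e.g., openSMILE outputs).
    y_tasks[t]: (n_t,) numeric emotion labels for speaker t.
    Returns indices of the top_k features ranked jointly over all speakers."""
    d = X_tasks[0].shape[1]          # number of acoustic features
    T = len(X_tasks)                 # one task per speaker
    W = np.zeros((d, T))             # one weight column per speaker task
    D = np.eye(d)                    # reweighting matrix for the l2,1 term
    for _ in range(n_iter):
        for t, (X, y) in enumerate(zip(X_tasks, y_tasks)):
            # Ridge-like closed-form update of task t's weights
            W[:, t] = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
        # Reweight from the current row norms of W (encourages joint sparsity)
        row_norms = np.linalg.norm(W, axis=1) + 1e-8
        D = np.diag(1.0 / (2.0 * row_norms))
    # Features with large row norms are discriminative for every speaker task
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:top_k]

Note that the d-by-d solve above is only practical for modest feature counts; with thousands of features a gradient-based or proximal solver would be the natural choice.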

Keywords
