Abstract
Emotion recognition plays an important role in several applications, such as human-computer interaction and understanding the affective state of users during certain tasks, e.g., within a learning process, in monitoring of the elderly, or in interactive entertainment. It may be based on several modalities, e.g., analysis of facial expressions and/or speech, electroencephalograms, or electrocardiograms. In certain applications, the only available modality is the user's (speaker's) voice. In this paper we aim to analyze speakers' emotions based solely on paralinguistic information, i.e., without relying on the linguistic content of speech. We compare two machine learning approaches, namely a Convolutional Neural Network and a Support Vector Machine. The former is trained on raw speech information, while the latter is trained on a set of extracted low-level features. Aiming at a multilingual approach, the training and testing datasets contain speech from different languages.
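To make the comparison concrete, the following is a minimal sketch of the second branch only: per-utterance low-level features fed to an SVM. The specific feature set (MFCC mean/std statistics computed with librosa), the sampling rate, and the RBF-kernel hyperparameters are illustrative assumptions, not the authors' exact configuration; the CNN branch, by contrast, would consume the raw (or spectrogram-level) speech signal directly.

# Sketch of the SVM branch, under the assumptions stated above.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(path, sr=16000, n_mfcc=13):
    # Load the audio, compute frame-level MFCCs, then aggregate each
    # coefficient over time (mean and std) into one fixed-length vector.
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_emotion_svm(wav_paths, labels):
    # Standardize the feature vectors and fit an RBF-kernel SVM on the
    # labeled utterances (labels = emotional classes).
    X = np.vstack([utterance_features(p) for p in wav_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf

A multilingual evaluation in the spirit of the abstract would then train train_emotion_svm on utterances from one language's corpus and test it on another's.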
Acknowledgements
The work presented in this document is a result of the MaTHiSiS project, which has received funding from the European Union's Horizon 2020 Programme (H2020-ICT-2015) under Grant Agreement No. 687772.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Papakostas, M., Siantikos, G., Giannakopoulos, T., Spyrou, E., Sgouropoulos, D. (2017). Recognizing Emotional States Using Speech Information. In: Vlamos, P. (ed.) GeNeDis 2016. Advances in Experimental Medicine and Biology, vol 989. Springer, Cham. https://doi.org/10.1007/978-3-319-57348-9_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-57347-2
Online ISBN: 978-3-319-57348-9
eBook Packages: Biomedical and Life Sciences (R0)