Document Type
Article
Department or Administrative Unit
Library
Publication Date
8-12-2021
Abstract
Emotional singing can affect vocal performance and the audience's engagement. Chinese universities use traditional training techniques for teaching theoretical and applied knowledge, and self-imagination is the predominant training method for emotional singing. Recently, virtual reality (VR) technologies have been applied for training purposes in several fields. In this empirical comparative study, a VR training task was implemented to elicit emotions from singers and thereby help them improve their emotional singing performance. The VR training method was compared against the traditional self-imagination method in a two-stage experiment that assessed both emotion elicitation and emotional singing performance. In the first stage, electroencephalographic (EEG) data were collected from the subjects; in the second stage, self-rating reports and evaluations from third-party teachers were collected. The EEG data were analyzed using the max-relevance and min-redundancy (mRMR) algorithm for feature selection and a support vector machine (SVM) for emotion recognition. Based on the EEG emotion-classification results and the subjective scales, VR elicited positive, neutral, and negative emotional states from the singers more effectively than self-imagination did. Furthermore, this stronger emotional activation translated into improved singing performance. VR therefore appears to be an effective approach that may improve and complement existing vocal music teaching methods.
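The EEG analysis pipeline named in the abstract (mRMR feature selection followed by SVM classification of three emotional states) can be illustrated with a minimal sketch. Everything below is a hypothetical reconstruction: the synthetic data, the greedy correlation-based redundancy term, the number of selected features, and the SVM parameters are assumptions for illustration, not the study's actual setup.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG band-power features: 120 trials x 20 features,
# three emotion classes (0 = negative, 1 = neutral, 2 = positive).
y = rng.integers(0, 3, size=120)
X = rng.normal(size=(120, 20))
X[:, 0] += y          # make a couple of features class-informative
X[:, 1] += 0.5 * y

def mrmr_select(X, y, k):
    """Greedy mRMR sketch: pick the feature maximizing relevance
    (mutual information with the label) minus mean redundancy
    (absolute correlation with already-selected features)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Select 5 features, then cross-validate an RBF-kernel SVM on them.
features = mrmr_select(X, y, k=5)
clf = SVC(kernel="rbf", C=1.0)
acc = cross_val_score(clf, X[:, features], y, cv=5).mean()
print("selected features:", features)
print("cross-validated accuracy:", round(acc, 2))
```

The greedy relevance-minus-redundancy criterion is one common mRMR variant; the paper itself may use a different redundancy measure (e.g., mutual information between features) and different SVM settings.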
Recommended Citation
Zhang, J., Xu, Z., Zhou, Y., Wang, P., Fu, P., Xu, X., & Zhang, D. (2021). An Empirical Comparative Study on the Two Methods of Eliciting Singers’ Emotions in Singing: Self-Imagination and VR Training. Frontiers in Neuroscience, 15. https://doi.org/10.3389/fnins.2021.693468
Journal
Frontiers in Neuroscience
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Rights
Copyright © 2021 Zhang, Xu, Zhou, Wang, Fu, Xu and Zhang.
Comments
This article was originally published Open Access in Frontiers in Neuroscience. The full-text article is available from the publisher via the DOI in the recommended citation.