The topic of this MSc thesis is the relationship between the psychoacoustic qualities of sound, musical emotions, and their physiological response patterning. The goal is the development of a real-time system for predicting psychoacoustic features, derived from a song's audio signal, from physiological measurements of the listener. Possible uses for this technology include physiologically and affectively aware user interfaces, as well as artistic expression.
The background chapter gives an overview of the autonomic nervous system, affective computing, and music information retrieval. The methods chapter evaluates the available tools and methods for analyzing physiological signals and for extracting psychoacoustic features from music. Because no readily available software is identified for real-time analysis of electrodermal activity, electrocardiography, and respiratory inductance plethysmography, a new software application is developed for this purpose. Audio analysis and regression modeling are approached using existing tools.
An evaluation study is conducted to determine the efficacy of the regression models. In a validated paradigm, a multiple linear regression model and an artificial neural network model are tested against a constant regressor, or dummy model. The results of the evaluation study are mixed: the dummy model outperforms the other models in prediction accuracy, but the artificial neural network model achieves significant correlations between its predictions and the target values. In the conclusion chapter I suggest improvements to the current system and possible future directions for this research.