Multimodal Analysis of Physiological Signals for Wearable-Based Emotion Recognition Using Machine Learning

Feryal Alskafi1, Ahsan Khandoker1, Uichin Lee2, Cheul Park2, Herbert Jelinek1
1Khalifa University, 2Korea Advanced Institute of Science and Technology


Abstract

Aims: Recent advances in wearable technology and machine learning have increased research interest in using peripheral physiological signals to recognize emotions at a fine granularity. Although peripheral physiological signals are acquired non-invasively, they are usually of low quality due to low sampling rates. As a result, emotion recognition based on a single physiological signal shows low performance. This research therefore explores multimodal wearable-based emotion recognition.

Methods: Blood volume pulse (BVP), electrodermal activity (EDA), heart rate (HR), and skin temperature (SKT) signals collected with the Empatica E4 wristband, together with first-person self-reported annotations of arousal and valence from the K-EmoCon dataset, were analyzed both individually and in combination using a battery of machine-learning algorithms, including decision trees, support vector machines, k-nearest neighbors, and ensembles. The algorithms were used to predict arousal and valence states in a two-dimensional space (Figure 1) from the peripheral physiological signals. Performance was evaluated using accuracy, true positive rate, and area under the receiver operating characteristic curve.
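The classifier battery and evaluation protocol described above can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' actual code: the feature matrix is a random placeholder standing in for per-window features extracted from the four signals, the binary high/low labels stand in for the discretized self-reports, and the 5-fold cross-validation setup is an assumption.

    # Minimal sketch of the classifier battery and metrics named above.
    # X and y are placeholders; real inputs would be features extracted
    # from windowed BVP, EDA, HR, and SKT signals aligned to K-EmoCon
    # arousal/valence self-reports.
    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_validate
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))        # placeholder multimodal features
    y = rng.integers(0, 2, size=500)     # placeholder high/low labels

    models = {
        "decision tree": DecisionTreeClassifier(),
        "SVM": SVC(),
        "k-NN": KNeighborsClassifier(),
        # BaggingClassifier's default base estimator is a decision tree,
        # i.e. an "ensemble bagged trees" model.
        "ensemble bagged trees": BaggingClassifier(),
    }

    # Evaluate with the three metrics named in the Methods: accuracy,
    # true positive rate (recall), and area under the ROC curve.
    for name, model in models.items():
        scores = cross_validate(model, X, y, cv=5,
                                scoring=("accuracy", "recall", "roc_auc"))
        print(f"{name}: acc={scores['test_accuracy'].mean():.3f}, "
              f"TPR={scores['test_recall'].mean():.3f}, "
              f"AUC={scores['test_roc_auc'].mean():.3f}")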

Results: None of the algorithms discriminated between classes when the BVP, EDA, HR, or SKT signal was used alone as a single predictor, whereas the combined use of BVP, EDA, HR, and SKT as predictors produced the best results. The ensemble bagged trees approach not only achieved a high accuracy of 83%, compared with 56.1% for emotion recognition based on heart rate alone, but also discriminated well between classes.
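The single-signal versus multimodal comparison reported here could be reproduced along the following lines. The column-to-signal mapping is hypothetical (in practice each signal would contribute several extracted features, not one column), and the placeholder data is for illustration only.

    # Sketch of the single-signal vs. combined-signal comparison, using
    # ensemble bagged trees. Feature extraction is elided; each signal is
    # represented by a placeholder column, an assumed mapping.
    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))        # columns: BVP, EDA, HR, SKT
    y = rng.integers(0, 2, size=500)     # high/low arousal (or valence)

    feature_sets = {
        "BVP only": [0],
        "EDA only": [1],
        "HR only": [2],
        "SKT only": [3],
        "BVP+EDA+HR+SKT": [0, 1, 2, 3],  # multimodal predictor set
    }
    for label, cols in feature_sets.items():
        acc = cross_val_score(BaggingClassifier(), X[:, cols], y,
                              cv=5, scoring="accuracy")
        print(f"{label}: mean accuracy = {acc.mean():.3f}")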