Embracing the Imaginary: Deep Complex-valued Networks for Heart Murmur Detection

Erika Bondareva, Georgios Rizos, Jing Han, Cecilia Mascolo
University of Cambridge


Abstract

Machine learning for detecting abnormal heart sounds has the potential to scale and increase the accessibility of related healthcare services. Data challenges such as PhysioNet 2022 have consequently motivated numerous methodologies for effective heart murmur detection.

Although commonly adopted short-time Fourier transform-based audio representations contain both amplitude and phase information, which are naturally encoded in the complex domain, the vast majority of proposed methods, and deep learning models in general, consider only the real-valued part for modelling. For the first time, we explore the potential of complex-valued neural networks (CVNNs) with segment-wise feature pooling for heart sound classification, leveraging all available input information. We propose a novel methodology for selecting high-quality segments from a single recording. Complex-valued convolutional layers learn complex representations from these segments, which are then pooled and passed through a fully connected prediction block.
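To make this pipeline concrete, the minimal sketch below illustrates one common way such a model can be assembled; it is not the authors' implementation. A complex convolution is realised as two real-valued convolutions following the identity (W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r), segment features are pooled across a recording, and a real-valued head produces the prediction. Layer sizes, the magnitude-based return to the real domain, mean pooling, and the three-class output (the Present/Absent/Unknown murmur labels of PhysioNet 2022) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ComplexConv2d(nn.Module):
    """Complex convolution built from two real-valued nn.Conv2d layers."""

    def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)

    def forward(self, x_re, x_im):
        # (W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r)
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_re(x_im) + self.conv_im(x_re)
        return out_re, out_im


class ComplexSegmentClassifier(nn.Module):
    """Encode complex STFT segments, pool features across segments, classify."""

    def __init__(self, n_classes=3):
        super().__init__()
        self.conv1 = ComplexConv2d(1, 8, 3, padding=1)
        self.conv2 = ComplexConv2d(8, 16, 3, padding=1)
        self.head = nn.Linear(16, n_classes)  # real-valued prediction block

    def encode(self, spec):
        # spec: complex STFT of one segment, shape (batch, 1, freq, time)
        x_re, x_im = spec.real, spec.imag
        x_re, x_im = self.conv1(x_re, x_im)
        x_re, x_im = torch.relu(x_re), torch.relu(x_im)  # split activation
        x_re, x_im = self.conv2(x_re, x_im)
        mag = torch.sqrt(x_re ** 2 + x_im ** 2 + 1e-8)   # back to the real domain
        return mag.mean(dim=(2, 3))                      # (batch, channels)

    def forward(self, segments):
        # segments: list of complex spectrogram tensors from one recording
        feats = torch.stack([self.encode(s) for s in segments], dim=0)
        pooled = feats.mean(dim=0)                       # segment-wise feature pooling
        return self.head(pooled)


if __name__ == "__main__":
    # Three toy segments from one recording: complex STFTs of random audio.
    window = torch.hann_window(256)
    segs = [
        torch.stft(torch.randn(2, 4000), n_fft=256, window=window,
                   return_complex=True).unsqueeze(1)
        for _ in range(3)
    ]
    logits = ComplexSegmentClassifier()(segs)
    print(logits.shape)  # torch.Size([2, 3])
```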

We showcase the effectiveness of complex-valued neural networks for sound analysis by directly comparing them with real-valued versions of our employed neural architectures. On a patient-independent testing set, the CVNN achieves a murmur detection weighted accuracy of 86%, which is 3% higher than that of an equivalent real-valued model. With further hyperparameter tuning, performance could approach that of the top entries of the PhysioNet 2022 challenge.

Our findings demonstrate the potential of CVNNs with segment-wise feature pooling as a promising approach for sound classification, offering a novel and effective way to advance cardiac murmur detection and, more broadly, automated auscultation.