Multi-Task Prediction of Murmur and Outcome from Heart Sound Recordings

Yale Chang1, Luoluo Liu1, Corneliu Antonescu2
1Philips, 2Banner Health


Abstract

Heart sound recordings obtained from cardiac auscultation can capture abnormal heart sounds, which may be functional (not associated with any pathology) or caused by congenital or acquired heart diseases; the latter can lead to severe morbidity in pediatric populations. In this work, we describe our approach to automatically detecting the presence of heart murmurs and predicting abnormal clinical outcomes from heart sound recordings. To train the model, each heart sound recording is first divided into multiple 3-second segments. Each segment is transformed into a time-domain embedding vector by a convolutional neural network (CNN). In parallel, the Mel-frequency cepstral coefficients (MFCC) representation of the segment is transformed into a frequency-domain embedding vector by a second CNN. These embedding vectors and the demographic variables are concatenated and used as input to two separate networks that predict the presence of heart murmurs and the clinical outcomes, respectively. The network parameters are optimized to jointly predict both targets using multi-task learning. At test time, the predicted probabilities from the multiple segments of the same recording are averaged to produce recording-level predictions. A patient is predicted to have a murmur or an abnormal outcome if any recording from that patient is predicted positive for the corresponding target. Our team, prna, achieved a murmur weighted accuracy of 0.626 and an outcome cost of 9920.278 in the official phase of the George B. Moody PhysioNet Challenge 2022.
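The test-time aggregation described above (segment probabilities averaged into a recording-level score, then a patient flagged positive if any recording is positive) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and the 0.5 decision threshold are assumptions.

```python
def recording_probability(segment_probs):
    """Average the predicted probabilities over all 3-second segments
    of one recording to obtain a recording-level probability."""
    return sum(segment_probs) / len(segment_probs)

def patient_prediction(recording_probs, threshold=0.5):
    """A patient is predicted positive (murmur present / abnormal outcome)
    if any of the patient's recordings is predicted positive.
    The 0.5 threshold is an assumed default, not from the paper."""
    return any(p >= threshold for p in recording_probs)

# Example: two recordings from one patient, each with per-segment probabilities
rec1 = recording_probability([0.2, 0.3, 0.1])   # low murmur evidence
rec2 = recording_probability([0.7, 0.6, 0.8])   # high murmur evidence
print(patient_prediction([rec1, rec2]))          # positive via rec2
```

The same aggregation is applied independently to the murmur head and the outcome head of the multi-task network.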