Transfer Learning and Deep Learning for Heart Valvular Disease Detection from Heart Sound Recordings

Saman Parvaneh and Zaniar Ardalan
Edwards Lifesciences


Abstract

Introduction: Heart Valvular Disease (HVD) is associated with substantial mortality. The echocardiogram is a well-accepted diagnostic screening tool for detecting HVD. Still, the phonocardiogram (PCG) allows accessible screening using heart sounds when echocardiography is unavailable (e.g., in resource-constrained environments). In this paper, we evaluated the feasibility of using a deep learning algorithm and transfer learning for the automated detection of HVD from PCG.

Method: Five categories from a public data set were used in this study: normal (N), aortic stenosis (AS), mitral stenosis (MS), mitral regurgitation (MR), and mitral valve prolapse (MVP), with 200 audio files per category. The data were split into training (80%) and test (20%) sets. A pre-trained deep convolutional neural network, YAMNet, was utilized for transfer learning in this study. This model takes an audio waveform as input and makes independent predictions for 521 audio events from Google AudioSet, including heart and respiratory sounds (e.g., breathing, heart murmur, and cough) as well as relevant noise sources (e.g., background noises). For transfer learning of YAMNet in HVD detection, the last three layers (fully connected, softmax, and classification layers) were removed and replaced with a new 5-class classification head.

Results: Our model's accuracy was 100% and 99.5% on the training and test sets, respectively. In the test set, only one MR recording was misclassified as MS.

Conclusion: This study's promising results indicate the potential of deep learning models for automated HVD detection using heart sounds. However, external validation on larger, multi-center data is necessary.
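The transfer-learning setup described above (freeze a pre-trained audio network, discard its original 521-class head, and train a new 5-class head on an 80/20 split) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic 1024-dimensional vectors stand in for embeddings that would in practice come from YAMNet, and the replacement head is reduced to a single softmax layer trained with gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for YAMNet features: the real model maps audio to 1024-dimensional
# embeddings. Here we draw well-separated synthetic clusters, one per class
# (N, AS, MS, MR, MVP), purely to illustrate the pipeline.
n_classes, emb_dim, per_class = 5, 1024, 200
centers = rng.normal(0.0, 1.0, (n_classes, emb_dim))
X = np.vstack([c + 0.3 * rng.normal(0.0, 1.0, (per_class, emb_dim))
               for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

# 80/20 train/test split, as in the abstract.
idx = rng.permutation(len(y))
split = int(0.8 * len(y))
tr, te = idx[:split], idx[split:]

# Replacement classification head: a single softmax layer trained by
# gradient descent on the frozen embeddings (the paper's head also
# includes a fully connected layer before the softmax).
W = np.zeros((emb_dim, n_classes))
b = np.zeros(n_classes)
for _ in range(200):
    logits = X[tr] @ W + b
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(tr)), y[tr]] -= 1.0                  # softmax gradient
    W -= 0.1 * X[tr].T @ p / len(tr)
    b -= 0.1 * p.mean(axis=0)

acc = float((np.argmax(X[te] @ W + b, axis=1) == y[te]).mean())
print(f"test accuracy: {acc:.3f}")
```

In a real pipeline, the synthetic `X` would be replaced by embeddings extracted from the PCG recordings with the frozen YAMNet backbone, and the head would be trained on those instead.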