Exploring a Segmentation-Classification Deep Learning-based Heart Murmurs Detector

Daniel Eneriz1, Antonio Rodriguez-Almeida2, Himar Fabelo3, Samuel Ortega4, Francisco Balea-Fernandez5, Nicolás Medrano1, Belén Calvo1, Gustavo Callico2
1Group of Electronic Design, Aragon Institute of Engineering Research (I3A), University of Zaragoza (UZ), 2Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 3Fundación Canaria Instituto de Investigación Sanitaria de Canarias (FIISC), IUMA, ULPGC, 4Norwegian Institute of Food, Fisheries and Aquaculture Research (Nofima), 5Dept. of Psychology, Sociology and Social Work, IUMA, University of Las Palmas de Gran Canaria (ULPGC)


This work presents the advances made by the UZ-ULPGC team in the Heart Murmur Detection from Phonocardiogram Recordings: The George B. Moody PhysioNet Challenge 2022. Since the 2016 PhysioNet/CinC Challenge demonstrated the success of combining a segmentation algorithm with a classifier, a deep learning-based murmur detector is developed following the segmentation-classification sequence. The model of F. Renna et al. (2019) is used as the segmentation model, extracting each cardiac cycle from the PCG with state-of-the-art accuracy. Three deep models are tested for the classification: the model of C. Potes et al. (2016), based on four independent 1D-convolutional feature extractors; a variation of it that enables the combination of the features; and an autoencoder. Furthermore, to produce a single diagnosis per patient, a decision model that aggregates the information from all of a patient's cardiac cycles is added. All classifiers show limited performance, probably due to the heavy class imbalance of the data at the cardiac cycle level and the minimal preprocessing chosen in the architecture. Note that our models have not been tested on the hidden challenge data. Hence, a 10-fold cross-validation over the public data is used to evaluate their performance, with the best model achieving a weighted accuracy score of 0.54±0.14 in the murmur presence task (which would rank 204th out of 305 entries) and a Challenge cost score of 10735±2208 in the outcome task (which would rank 120th out of 305 entries).
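The overall pipeline described above (segment the PCG into cardiac cycles, classify each cycle, then aggregate cycle-level outputs into a single per-patient decision) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names are hypothetical, the segmenter and classifier are trivial stand-ins for the Renna et al. segmentation network and the convolutional classifiers, and the decision model is reduced to simple probability averaging.

```python
# Hypothetical sketch of the segmentation -> classification -> decision
# pipeline; all names and logic here are illustrative stand-ins.

def segment_cycles(pcg, cycle_len=4):
    """Stand-in for the segmentation model (Renna et al. style):
    splits a PCG signal into individual cardiac cycles. Here, a
    fixed-length chunking replaces the actual learned segmenter."""
    return [pcg[i:i + cycle_len]
            for i in range(0, len(pcg) - cycle_len + 1, cycle_len)]

def classify_cycle(cycle):
    """Stand-in for the CNN cycle classifier: returns a murmur
    probability per cardiac cycle (here, a trivial amplitude rule)."""
    return 1.0 if max(abs(x) for x in cycle) > 0.5 else 0.0

def patient_decision(pcg, threshold=0.5):
    """Decision model: aggregates all of a patient's cycle-level
    outputs into a single per-patient label, as in the pipeline."""
    probs = [classify_cycle(c) for c in segment_cycles(pcg)]
    return "murmur" if sum(probs) / len(probs) >= threshold else "normal"
```

In the actual system the per-cycle classifier would emit learned probabilities and the decision model would be trained, but the aggregation structure (many cycle-level predictions reduced to one patient-level label) is the same.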