Learning Time-Frequency Representations of Phonocardiogram for Murmur Detection

Jae-Man Shin1, Hyun-Seok Kim2, Woo-Young Seo2, Sung-Hoon Kim3
1Department of Anesthesiology and Pain Medicine, Asan Medical Center; 2Biomedical Engineering Research Center, Asan Institute for Life Science, Asan Medical Center; 3Department of Anesthesiology and Pain Medicine, Asan Medical Center, University of Ulsan College of Medicine


Abstract

In the PhysioNet Challenge 2022, the goal is to determine whether a patient has a heart murmur or an abnormal cardiac outcome from phonocardiogram recordings. In this study, we propose new approaches for diagnosing murmur presence and cardiac abnormality. Our deep learning models for murmur detection are based on EEGNet and temporal convolutional networks, which learn frequency- and time-specific representations of the phonocardiogram. To learn patient-specific representations, we also utilized demographic information: age, sex, BMI, and pregnancy status. The demographic features are concatenated with the convolutional feature vector to predict murmur presence. The convolutional networks produce a prediction for each segment of a recorded heart sound. We then exploited the periodic nature of heart murmurs, measuring how often murmur characteristics appeared in the systolic or diastolic intervals of a recording. From a frequentist-inference viewpoint, we extracted statistical features from these segment-level predictions and used them to train a machine learning model, a random forest, that diagnoses the patient's heart-sound condition. In the official phase, our team (amc-sh) achieved a weighted accuracy of 0.689 for murmur detection and a challenge cost of 9203 for outcome detection, based on our highest-scoring submissions.
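As an illustration of the fusion step described above (a minimal sketch, not the authors' released code), the following PyTorch snippet shows a 1-D convolutional encoder summarizing a phonocardiogram segment, with the pooled feature vector concatenated to demographic features (age, sex, BMI, pregnancy status) before the murmur classifier head. Layer sizes, segment length, and the class count are illustrative assumptions.

```python
# Minimal sketch under assumed hyperparameters, not the authors' implementation:
# a 1-D convolutional encoder over a PCG segment whose pooled features are
# concatenated with demographic features before a per-segment murmur classifier.
import torch
import torch.nn as nn


class MurmurSegmentClassifier(nn.Module):
    def __init__(self, demo_dim: int = 4, n_classes: int = 3):
        super().__init__()
        # Convolutional encoder over a single-channel PCG segment.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=4, padding=32),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=16, stride=2, padding=8),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global pooling -> fixed-size feature vector
        )
        # Classifier head over [conv features ++ demographic features].
        self.head = nn.Sequential(
            nn.Linear(32 + demo_dim, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),  # e.g. Present / Unknown / Absent
        )

    def forward(self, pcg: torch.Tensor, demo: torch.Tensor) -> torch.Tensor:
        # pcg: (batch, 1, samples), demo: (batch, demo_dim)
        feat = self.encoder(pcg).squeeze(-1)     # (batch, 32)
        fused = torch.cat([feat, demo], dim=-1)  # append demographic features
        return self.head(fused)                  # per-segment logits


# Example usage with random data.
model = MurmurSegmentClassifier()
segment = torch.randn(8, 1, 4000)      # e.g. 1 s segments at an assumed 4 kHz
demographics = torch.randn(8, 4)       # age, sex, BMI, pregnancy status
logits = model(segment, demographics)  # (8, 3) per-segment class scores
```

In the approach described in the abstract, such per-segment predictions are then summarized into recording-level statistical features (e.g., how frequently murmur-like segments occur across systolic or diastolic intervals), which feed the random forest that makes the patient-level diagnosis.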