Location-wise Heart Murmur Detection using CNN-Transformers

Yingyu Yang and Maxime Sermesant
Inria


Abstract

Introduction: We present a deep learning method to detect heart murmurs from phonocardiogram (PCG) recordings. This method was developed by team 'Epione_Cardio' for the PhysioNet/CinC Challenge 2022.

Methods: Each local PCG recording was first pre-processed (downsampled, filtered and normalised) and decomposed into 4 frequency bands (25-45, 45-80, 80-200 and 200-400 Hz). The 4 decomposed signals, together with the normalised recording, were then randomly clipped or zero-padded to form a fixed temporal array of size 5x8192. We used a succession of residual CNN blocks to extract temporal features and a stack of transformer encoders to learn the attention between class labels and temporal features. The model was trained to classify each single-location PCG recording into the Present/Unknown/Absent classes, regardless of the location it came from. The patient-level Present and Unknown probabilities were set to the maximum of the corresponding class probabilities across all available recording locations, and the patient-level Absent probability was set to 1 minus the sum of the other two.
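The two data-shaping steps above (fixed-length clipping/padding of the 5-channel input, and the max-based patient-level aggregation) can be sketched as follows. This is an illustrative NumPy sketch, not the team's actual code; the function names and argument conventions are assumptions.

```python
import numpy as np

def clip_or_pad(x, length=8192, rng=None):
    """Randomly clip a longer signal, or zero-pad a shorter one, to `length`
    samples along the last axis (e.g. a 5xN band-decomposed PCG -> 5x8192)."""
    rng = rng or np.random.default_rng()
    n = x.shape[-1]
    if n >= length:
        start = rng.integers(0, n - length + 1)  # random clip position
        return x[..., start:start + length]
    pad = length - n
    # zero-pad only the temporal (last) axis
    return np.pad(x, [(0, 0)] * (x.ndim - 1) + [(0, pad)])

def patient_probabilities(location_probs):
    """Aggregate per-location [Present, Unknown, Absent] softmax outputs
    (shape: n_locations x 3) into one patient-level prediction:
    max over locations for Present and Unknown, 1 minus their sum for Absent."""
    probs = np.asarray(location_probs, dtype=float)
    p_present = probs[:, 0].max()
    p_unknown = probs[:, 1].max()
    p_absent = 1.0 - p_present - p_unknown
    return np.array([p_present, p_unknown, p_absent])
```

For example, per-location outputs [[0.1, 0.2, 0.7], [0.6, 0.1, 0.3]] aggregate to a patient-level prediction of [0.6, 0.2, 0.2].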

Results: We achieved a challenge score of 1101, with an accuracy of 0.83 and an F-score of 0.55, on stratified 5-fold cross-validation over the training set. We obtained a score of 1711 on the hidden validation set during the unofficial phase.

Conclusion: Our method detects heart murmurs from PCG recordings and, thanks to the location-wise classification, provides a murmur label for each individual recording location.