Modified Variable Kernel Length ResNets for Heart Murmur Detection using Multi-positional Phonocardiogram Recordings

Vijay Vignesh Venkataramani, Akshit Garg, U. Deva Priyakumar
International Institute of Information Technology, Hyderabad


Abstract

In this work, we describe our end-to-end deep learning architecture for heart murmur detection from phonocardiogram (PCG) recordings, developed for the PhysioNet/Computing in Cardiology Challenge 2022. Our team, “Team_IIITH”, was ranked 76th out of 166 submissions with a challenge score of 1308 on the unofficial leaderboard, and obtained a 10-fold cross-validation score of 900.633 on the provided training data with our local setup.

In our current approach, the PCG signals are first downsampled to 1000 Hz and then passed through Butterworth high-pass and low-pass filters to remove baseline wander and high-frequency noise. The filtered recordings are then broken into 10-second segments and normalized. To extract embeddings more efficiently, we built a custom 1-dimensional ResNet whose blocks use variable kernel lengths, allowing the network to capture dependencies at different time scales. The resulting embedding is fed to a 2-layer feed-forward network for final classification. To counteract the class imbalance in the dataset, a cross-entropy loss with class weights is employed.
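A minimal sketch of this preprocessing is given below, using NumPy and SciPy. The 1000 Hz target rate and 10-second segment length come from the description above; the filter cutoffs (25 Hz and 400 Hz), the filter order, and the z-score normalization are illustrative assumptions, not values stated in the abstract.

import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

TARGET_FS = 1000          # Hz, downsampling rate described above
SEGMENT_SECONDS = 10      # segment length described above

def preprocess(pcg, orig_fs, low_cut=25.0, high_cut=400.0, order=4):
    """Downsample, band-limit, segment, and normalize one PCG recording."""
    # Downsample to 1000 Hz (polyphase resampling is exact for integer rates).
    pcg = resample_poly(pcg, TARGET_FS, int(orig_fs))

    # High-pass removes baseline wander; low-pass removes high-frequency noise.
    sos_hp = butter(order, low_cut, btype="highpass", fs=TARGET_FS, output="sos")
    sos_lp = butter(order, high_cut, btype="lowpass", fs=TARGET_FS, output="sos")
    pcg = sosfiltfilt(sos_lp, sosfiltfilt(sos_hp, pcg))

    # Break into non-overlapping 10-second segments and normalize each one.
    seg_len = TARGET_FS * SEGMENT_SECONDS
    n_segments = len(pcg) // seg_len
    segments = pcg[: n_segments * seg_len].reshape(n_segments, seg_len)
    segments = (segments - segments.mean(axis=1, keepdims=True)) / \
               (segments.std(axis=1, keepdims=True) + 1e-8)
    return segments.astype(np.float32)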
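The following PyTorch sketch illustrates the kind of 1-dimensional ResNet with per-block kernel lengths and a 2-layer feed-forward head described above, trained with class-weighted cross-entropy. The specific kernel sizes, channel widths, pooling factors, and class weights are placeholder assumptions rather than the values used in our submission.

import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.relu = nn.ReLU()
        # 1x1 projection so the skip connection matches the output channels.
        self.skip = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))

class VariableKernelResNet(nn.Module):
    def __init__(self, n_classes=3, kernel_sizes=(31, 15, 7, 3),
                 channels=(32, 64, 128, 256)):
        super().__init__()
        blocks, in_ch = [], 1
        # Each block uses a different kernel length to cover a different time scale.
        for k, c in zip(kernel_sizes, channels):
            blocks += [ResBlock1d(in_ch, c, k), nn.MaxPool1d(4)]
            in_ch = c
        self.encoder = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool1d(1)
        # 2-layer feed-forward classifier head.
        self.head = nn.Sequential(nn.Linear(in_ch, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, x):                      # x: (batch, 1, 10000) segments
        z = self.pool(self.encoder(x)).squeeze(-1)
        return self.head(z)

model = VariableKernelResNet()
# Class weights (Present / Unknown / Absent) are placeholders standing in for
# weights chosen to counteract the class imbalance in the training data.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([3.0, 5.0, 1.0]))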

In the future, we hope to improve performance by exploring recurrent and attention-based architectures, as they may capture the recurrent nature of the signal more effectively. We also plan to investigate time-lagged convolutions, which we expect to capture the variability in the time differences between heartbeats and thereby improve murmur detection. Finally, since unlabeled datasets are much easier to obtain than labeled ones, we would like to explore self-supervised techniques to assess whether unlabeled data can further improve the model’s performance.