Detection of heart murmurs from stethoscope sounds is a key clinical technique for identifying cardiac abnormalities. We describe a cascading deep model architecture for screening for heart murmurs in phonocardiogram recordings taken across multiple auscultation locations. The model was created by the team "Murmur Mia!" for the George B. Moody PhysioNet/Computing in Cardiology Challenge 2022.
The challenge invites participants to classify patients, with recordings from one or more locations on the chest, into three categories: Murmur, Absent, or Unknown. We propose a two-stage, single-location classifier to address this problem. Recordings are first filtered through an isolation forest anomaly detection algorithm, with metrics commonly used to assess audio quality as input features. Outliers are assumed to indicate poor signal quality and are classified as Unknown. For the second stage, we trained a ResNet to classify individual 5 s epochs of recordings, with demographic information and hand-crafted time- and frequency-domain features added in a final fully connected layer. The model output is a single logistic unit; we implement a basic decision stump with upper and lower thresholds to perform trinary classification. To deal flexibly with patients with recordings from multiple locations, we apply a heuristic such that a classification of Murmur from any location is prioritized over Unknown, and likewise any classification of Unknown is prioritized over Absent. The decision stump thresholds are set via grid search on the model output to minimize the challenge scoring metric on a validation set, and account for the potentially high clinical risk associated with false-negative classifications.
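The decision logic described above (isolation forest outlier filtering, the two-threshold stump, and the Murmur-over-Unknown-over-Absent aggregation rule) can be sketched as follows. This is a minimal illustration, not the team's implementation: the quality features, threshold values, contamination rate, and the assumption that mid-range model outputs map to Unknown are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def stage1_outliers(quality_features, contamination=0.1, seed=0):
    """Stage 1: flag recordings whose audio-quality features are outliers.

    quality_features: (n_recordings, n_features) array of hypothetical
    quality metrics (e.g. SNR-like measures); flagged recordings would be
    labeled Unknown rather than passed to the ResNet.
    """
    forest = IsolationForest(contamination=contamination, random_state=seed)
    labels = forest.fit_predict(quality_features)  # -1 = outlier, 1 = inlier
    return labels == -1

def stump_classify(logit_output, lower=0.3, upper=0.7):
    """Stage 2 output mapping: a decision stump with two thresholds.

    Assumes (hypothetically) that confident-high outputs mean Murmur,
    confident-low outputs mean Absent, and the ambiguous middle band
    falls back to Unknown.
    """
    if logit_output >= upper:
        return "Murmur"
    if logit_output <= lower:
        return "Absent"
    return "Unknown"

# Multi-location heuristic: Murmur beats Unknown, Unknown beats Absent.
PRIORITY = {"Absent": 0, "Unknown": 1, "Murmur": 2}

def aggregate_patient(location_labels):
    """Combine per-location labels into one patient-level label."""
    return max(location_labels, key=PRIORITY.__getitem__)
```

As a usage sketch, a patient with per-location outputs of 0.8, 0.5, and 0.1 would yield labels Murmur, Unknown, and Absent, and the aggregation rule would classify the patient as Murmur.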
In our work to date, our classifier achieves a 5-fold cross-validation challenge metric score of 513 (±43) on the training set, with a corresponding unofficial-phase validation set score of 2350. Such models may aid cost-effective screening and diagnosis of heart murmurs from cardiac auscultation.