Introduction: We, the team UKJ_FSU, propose a deep learning system whose intermediate outputs are augmented by manually curated features to form an ensemble-based decision approach for the George B. Moody PhysioNet Challenge 2022. The single-label classification of sets of phonocardiograms according to the presence, absence, or unknown status of murmurs is based mainly on temporal convolutional and recurrent networks.
Methods: The deep learning framework consists of four models of identical topology, one for each auscultation location corresponding to a heart valve. The input to each of these models consists of the time series of the phonocardiogram recorded at the respective location, its spectrogram, and additional patient data such as age. The raw signal is processed by atrous and plain temporal convolutional layers, each followed by max-pooling and batch normalisation. After the last convolution, each intermediate feature has a receptive field that equals the frame length of the spectrogram. Thus, the features can be concatenated with their spectral counterparts and jointly processed by bidirectional LSTMs. In the last step, the additional patient information is appended and the final classification is computed via dense and softmax layers. During inference, the overall probability distribution is estimated as the mean of the individual decisions generated by the location-specific models. Based on this distribution estimate, we calculate a final prediction according to the expected utility of the different labels under the challenge cost function. We plan to incorporate features extracted from the spectral coherence, i.e., the normalised cross-power spectral density, between different recordings of the same patient to enhance classification accuracy.
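The final decision step described above can be sketched as follows: the per-location class distributions are averaged, and the label with the minimum expected misclassification cost is chosen instead of the most probable one. The cost matrix below is purely illustrative (the actual challenge cost function is defined by the organisers and differs from these numbers), and the function name and label ordering are our own assumptions.

```python
import numpy as np

# Illustrative cost matrix (NOT the official challenge costs):
# rows are true labels, columns are predicted labels,
# label order: (murmur present, unknown, absent).
COSTS = np.array([
    [0.0, 10.0, 50.0],   # true: present
    [5.0,  0.0, 20.0],   # true: unknown
    [2.0,  1.0,  0.0],   # true: absent
])

def expected_cost_decision(location_probs, costs=COSTS):
    """Average the per-location class distributions (ensemble mean)
    and return the label with minimal expected misclassification cost."""
    p = np.mean(np.asarray(location_probs), axis=0)  # ensemble mean
    expected = p @ costs                             # expected cost per predicted label
    return int(np.argmin(expected)), p

# Hypothetical outputs of three location-specific models,
# each a distribution over (present, unknown, absent).
probs = [
    [0.2, 0.3, 0.5],
    [0.1, 0.2, 0.7],
    [0.3, 0.3, 0.4],
]
label, p_mean = expected_cost_decision(probs)
```

With these example numbers, the cost-agnostic argmax of the mean distribution would predict "absent", while the cost-sensitive rule predicts "present" because the illustrative matrix penalises missed murmurs heavily; this mirrors the cost reduction reported in the Results.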
Results: On the official test set, the described system reached an average cost of 1133.55 using a cost-agnostic decision rule. By adapting the decision to the misclassification costs, the average cost was reduced to 615.09.