Model Ensembling for Predicting Neurological Recovery after Cardiac Arrest: Top-down or Bottom-up?

Hongliu Yang and Ronald Tetzlaff
TU Dresden


Abstract

Early electroencephalography (EEG) is commonly used for predicting the neurological recovery level of comatose patients after cardiac arrest. It has become popular to leverage deep learning for the automatic processing of multi-day continuous EEGs, which could liberate experienced physicians from the intensive review task and overcome possible subjective bias. However, explaining why and how a model decision was made is imperative for health professionals to fully comprehend, accept, and trust the method in clinical applications. Our aim is to build an explainable model for end-to-end automatic neurological prognosis.

Our team, TUD_EEG, proposes an attention-based multiple instance learning (MIL) network with three main blocks. Firstly, inception blocks with residual connections are used to extract latent representations of EEG segments; channel-wise and temporal-wise self-attention is adopted here to account for signal interdependence. Then, the latent vectors of all segments of a patient are fed into a temporal convolutional network (TCN) block to capture the long-term evolution of the patient's neurological state. Finally, the outputs of the TCN block are aggregated via a self-attention-based MIL pooling layer and augmented with demographic information, e.g. age, gender, and ventricular fibrillation, to predict the recovery level. To understand the decision-making mechanism, attention-based and gradient-based relevance maps will be developed. Points of interest indicated by the relevance maps will be compared with known characteristic EEG patterns associated with poor and good outcomes. A minimal sketch of this pipeline is given below.
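
The following PyTorch sketch illustrates the overall structure described above (segment encoder, TCN over the segment sequence, attention-based MIL pooling, and demographic fusion). Layer sizes, the attention-pooling formulation, and the fusion point are illustrative assumptions rather than the exact configuration used in our model; the channel-wise and temporal-wise self-attention inside the encoder is omitted for brevity.

import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1-D convolutions over an EEG segment with a residual connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, branch_ch, k, padding=k // 2) for k in (1, 3, 5, 7)]
        )
        self.residual = nn.Conv1d(in_ch, 4 * branch_ch, 1)
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (num_segments, channels, time)
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(y + self.residual(x))

class AttentionMILPooling(nn.Module):
    """Self-attention MIL pooling: weights each segment (instance) of a patient (bag)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, h):                        # h: (num_segments, dim)
        a = torch.softmax(self.scorer(h), dim=0) # attention weight per segment
        return (a * h).sum(dim=0), a             # bag embedding and relevance weights

class PrognosisModel(nn.Module):
    def __init__(self, eeg_ch=18, feat_dim=64, demo_dim=3):
        super().__init__()
        self.encoder = InceptionBlock(eeg_ch, feat_dim)
        # TCN stand-in: dilated 1-D convolutions over the sequence of segments.
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.pool = AttentionMILPooling(feat_dim)
        self.head = nn.Linear(feat_dim + demo_dim, 1)  # good vs. poor outcome logit

    def forward(self, segments, demographics):
        # segments: (num_segments, eeg_ch, time); demographics: (demo_dim,)
        z = self.encoder(segments).mean(dim=-1)        # (num_segments, feat_dim)
        z = self.tcn(z.T.unsqueeze(0)).squeeze(0).T    # temporal trend across segments
        bag, attn = self.pool(z)
        logit = self.head(torch.cat([bag, demographics], dim=-1))
        return logit, attn                             # attn doubles as a relevance map

The per-segment attention weights returned by the pooling layer provide one of the relevance maps mentioned above, indicating which EEG segments contributed most to the prediction.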

Our baseline model without the TCN block achieved challenge scores of 0.42, 0.42, 0.49, and 0.55 for the 12h, 24h, 48h and 72h predictions on the validation set, respectively. The notable score for the 12h prediction, ranked 2nd among 90 teams, demonstrates the data efficiency of our proposed architecture. The model will be further optimized, and the corresponding relevance maps will be investigated thoroughly to understand the model decisions.