MemoryInception: Predicting Neurological Recovery from EEG Using Recurrent Inceptions

Bjørn-Jostein Singstad, Jesper Ravn, Arian Ranjbar
Akershus University Hospital


Abstract

Introduction: Cardiac arrest may cause severe brain damage, cognitive impairment, and death. Monitoring neurological recovery is therefore critical for providing suitable treatment. The electroencephalogram (EEG) non-invasively records the electrical activity of the brain and allows changes in brain function to be monitored over time. In this study, we aim to classify neurological recovery after cardiac arrest using machine learning algorithms applied to EEG data.

Method: The study is based on EEGs measured from 0 to 72 hours after return of spontaneous circulation (ROSC). The dataset contains a 5-minute sequence from each hour of the recordings, for 606 patients in total. A one-dimensional convolutional neural network is trained on the 5-minute sequences to classify neurological recovery, using the Cerebral Performance Category (CPC) as ground truth. Specifically, the network consists of two inception blocks, where each block is built from three inception modules joined by a residual connection, and each module in turn contains four parallel convolutional filters. Training uses an ordinal cross-entropy loss, yielding rank-consistent ordinal regression predictions. At inference, the predictions from each available 5-minute segment are passed to a classifier, which at this stage simply averages them to produce the final output.
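
To make the architecture description concrete, the following is a minimal PyTorch sketch of a 1-D inception block with a residual connection and a CORAL-style ordinal head. The kernel widths, channel counts, number of EEG leads, and the specific ordinal head are illustrative assumptions, not the team's exact configuration.

import torch
import torch.nn as nn


class InceptionModule1D(nn.Module):
    """One inception module: four parallel convolutional branches whose
    outputs are concatenated along the channel dimension."""

    def __init__(self, in_channels: int, branch_channels: int):
        super().__init__()
        kernel_sizes = (3, 7, 15, 31)  # assumed kernel widths
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(in_channels, branch_channels, k, padding=k // 2),
                nn.BatchNorm1d(branch_channels),
                nn.ReLU(),
            )
            for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class InceptionBlock1D(nn.Module):
    """Three inception modules with a residual connection around the block."""

    def __init__(self, in_channels: int, branch_channels: int):
        super().__init__()
        out_channels = 4 * branch_channels
        self.modules_ = nn.Sequential(
            InceptionModule1D(in_channels, branch_channels),
            InceptionModule1D(out_channels, branch_channels),
            InceptionModule1D(out_channels, branch_channels),
        )
        # 1x1 convolution so the shortcut matches the block's output width
        self.shortcut = nn.Conv1d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.modules_(x) + self.shortcut(x))


class MemoryInception(nn.Module):
    """Two inception blocks followed by an ordinal head with K-1 thresholds,
    giving rank-consistent (CORAL-style) predictions over the K CPC levels."""

    def __init__(self, n_leads: int = 19, n_classes: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            InceptionBlock1D(n_leads, 16),
            InceptionBlock1D(64, 16),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # Shared weight vector with K-1 learned thresholds (CORAL-style head)
        self.fc = nn.Linear(64, 1, bias=False)
        self.thresholds = nn.Parameter(torch.zeros(n_classes - 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.fc(self.backbone(x))          # (batch, 1)
        return z + self.thresholds             # (batch, K-1) ordinal logits


def ordinal_loss(logits: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    # `levels` holds cumulative binary labels, e.g. CPC 3 of 5 -> [1, 1, 0, 0]
    return nn.functional.binary_cross_entropy_with_logits(logits, levels)


if __name__ == "__main__":
    model = MemoryInception()
    eeg = torch.randn(2, 19, 1000)             # (batch, leads, samples)
    print(model(eeg).shape)                    # torch.Size([2, 4])

Averaging the per-segment ordinal logits across all available 5-minute segments of a patient then gives the final patient-level prediction described above.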

Result: The model developed by our team, EEG-Attackers, was benchmarked first with 5-fold cross-validation on the training dataset, and then on a hidden validation set of EEG recordings held by the challenge organizers. Evaluation was conducted on EEGs collected from 0 up to 12, 24, 48, and 72 hours after ROSC. The performance metric was defined as the sensitivity of the model at a specificity above 0.95; results are shown in Table 1.
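
For reference, the following is a minimal sketch of how this metric can be computed with scikit-learn from per-patient probabilities of poor outcome. The function name and the strict 0.95 specificity cut-off are illustrative assumptions; the organizers' official scoring code remains authoritative.

import numpy as np
from sklearn.metrics import roc_curve


def sensitivity_at_specificity(y_true, y_score, min_specificity=0.95):
    """Largest true positive rate achievable while specificity exceeds the cut-off."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    mask = (1.0 - fpr) > min_specificity       # specificity = 1 - FPR
    return tpr[mask].max() if mask.any() else 0.0


if __name__ == "__main__":
    y_true = np.array([0, 0, 1, 1, 1, 0])      # 1 = poor neurological outcome
    y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.2])
    print(sensitivity_at_specificity(y_true, y_score))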

Discussion: Future work includes attaching a recurrent unit, in particular an LSTM module, to the CNN backbone, as sketched below. This would exploit the temporal information lost by the simple averaging classifier, but requires careful handling of missing values.
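
A minimal sketch of how such a recurrent head could be attached, assuming per-hour embeddings produced by the CNN backbone and a binary mask marking missing hours. All names and sizes here are illustrative assumptions, not a finished design.

import torch
import torch.nn as nn


class RecurrentHead(nn.Module):
    def __init__(self, embed_dim: int = 64, hidden_dim: int = 32, n_thresholds: int = 4):
        super().__init__()
        # The mask (1 = hour present, 0 = missing) is appended as an extra feature
        self.lstm = nn.LSTM(embed_dim + 1, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_thresholds)

    def forward(self, embeddings: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, 72 hours, embed_dim); mask: (batch, 72 hours)
        x = torch.cat([embeddings * mask.unsqueeze(-1), mask.unsqueeze(-1)], dim=-1)
        _, (h_n, _) = self.lstm(x)
        return self.out(h_n[-1])               # ordinal logits from the last hidden state


if __name__ == "__main__":
    head = RecurrentHead()
    emb = torch.randn(2, 72, 64)               # one embedding per recording hour
    mask = (torch.rand(2, 72) > 0.3).float()   # simulate missing hours
    print(head(emb, mask).shape)               # torch.Size([2, 4])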