Transfer Learning for Improved Classification of Drivers in Atrial Fibrillation

Bram Hunt1, Eugene Kwan1, Tolga Tasdizen1, Jake Bergquist1, Matthias Lange2, Ben Orkild1, Rob MacLeod1, Derek Dosdall1, Ravi Ranjan1
1University of Utah, 2University of Utah


Abstract

Background. Rotors and focal ectopies, or "drivers", have been proposed as mechanisms underlying persistent atrial fibrillation (persAF). Machine learning has been used to identify drivers automatically, but the small size of current driver datasets limits classifier performance. Pretraining with unsupervised learning has the potential to improve driver detection performance but remains untested.

Objective. We hypothesized that contrastive pretraining on a dataset of unlabeled endocardial electrograms would improve classifier accuracy on a smaller labeled driver dataset after fine-tuning.

Methods. We performed electrophysiology studies on paced canines (n=13, mongrel hounds, 27-35 kg, 1-2 yrs) at 1, 3, and 6 months after initiation of continual persAF. In these studies, we captured dense bilateral endocardial electrograms using the Orion 64-electrode basket catheter. We divided all electrograms into 2-second, 64-electrode samples and examined a subset for the presence of drivers, yielding labeled (n=502) and unlabeled (n=113,075) datasets. The unlabeled samples were used in the SimCLR framework to pretrain an 18-layer, 3D residual neural network: the model was trained to map each unlabeled sample to a unique feature-space representation that remained invariant to random augmentations (crop, differentiation, blurring). We then fine-tuned models on driver classification and identified optimal training hyperparameters for both pretrained and non-pretrained networks. Finally, we applied gradient-weighted class activation mapping (Grad-CAM) to obtain attention maps of the testing data.
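As a rough illustration of this pipeline, the following is a minimal PyTorch sketch of SimCLR-style contrastive pretraining, assuming each 2-second, 64-electrode sample is arranged as a single-channel (1, T, 8, 8) tensor so that torchvision's 18-layer 3D ResNet (r3d_18) can consume it. The augmentation parameters, projector sizes, learning rate, and the unlabeled_loader are illustrative assumptions, not the authors' implementation.

```python
# Illustrative SimCLR-style pretraining sketch (not the authors' code).
# Assumes batches arrive as (N, 1, T, 8, 8) float tensors, i.e. the 64
# basket electrodes arranged on a hypothetical 8x8 grid over T time steps.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models.video import r3d_18


def augment(x):
    """Random augmentations named in the text: crop, differentiation, blur."""
    n, c, t, h, w = x.shape
    keep = int(0.8 * t)                                  # random temporal crop,
    start = torch.randint(0, t - keep + 1, (1,)).item()  # zero-padded back
    out = torch.zeros_like(x)
    out[:, :, :keep] = x[:, :, start:start + keep]
    if torch.rand(1).item() < 0.5:                       # temporal first
        out = torch.diff(out, dim=2, prepend=out[:, :, :1])  # difference
    if torch.rand(1).item() < 0.5:                       # blur along time
        out = F.avg_pool3d(out, (3, 1, 1), stride=1, padding=(1, 0, 0))
    return out


def nt_xent(z1, z2, tau=0.1):
    """NT-Xent loss: each view's positive is its partner; all else negative."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2N, D)
    sim = z @ z.t() / tau                                # scaled cosine sims
    sim.fill_diagonal_(float('-inf'))                    # drop self-similarity
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)


encoder = r3d_18(weights=None)                           # 18-layer 3D ResNet
encoder.stem[0] = nn.Conv3d(1, 64, kernel_size=(3, 7, 7),
                            stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
encoder.fc = nn.Identity()                               # expose 512-d features
projector = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

for batch in unlabeled_loader:                           # hypothetical loader
    z1 = projector(encoder(augment(batch)))              # two augmented views
    z2 = projector(encoder(augment(batch)))
    loss = nt_xent(z1, z2)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fine-tuning sketch: swap the projector for a classifier head and train on
# the labeled driver samples (binary driver / no-driver assumed here).
classifier = nn.Sequential(encoder, nn.Linear(512, 2))
```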

Results. The weighted testing accuracy of our pretrained model improved significantly over that of a non-pretrained model (78.6% vs. 71.9%, p=0.047). Attention maps revealed that the model's decisions relied primarily on instances of active driver electrical patterns.
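The attention maps referenced above could be produced by a Grad-CAM pass such as the following minimal sketch. It assumes a torchvision-style 3D ResNet whose final residual block is exposed as layer4, as in the pretraining sketch; the function name, target layer, and normalization choice are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative Grad-CAM sketch for a 3D CNN (not the authors' code).
import torch
import torch.nn.functional as F


def grad_cam(model, target_layer, x, target_class):
    """Return a normalized class activation map over the input's
    (T, H, W) extent; x has shape (1, C, T, H, W)."""
    store = {}

    def hook(module, inputs, output):
        store['act'] = output                            # feature maps
        output.register_hook(lambda g: store.update(grad=g))

    handle = target_layer.register_forward_hook(hook)
    logits = model(x)                                    # (1, num_classes)
    model.zero_grad()
    logits[0, target_class].backward()                   # class-score gradient
    handle.remove()

    act, grad = store['act'], store['grad']              # (1, K, t, h, w)
    weights = grad.mean(dim=(2, 3, 4), keepdim=True)     # pooled gradients
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:],           # upsample to input
                        mode='trilinear', align_corners=False)
    return (cam / cam.max().clamp(min=1e-8))[0, 0]       # (T, H, W) in [0, 1]


# Hypothetical usage with the classifier from the sketch above:
# cam = grad_cam(classifier, encoder.layer4, sample, target_class=1)
```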

Conclusion. Our pretrained model was more accurate than a non-pretrained model at driver detection, indicating improved parameter initialization, while also exhibiting interpretable decision making. This lays the groundwork for developing superior driver classifiers through pretraining. The finding also supports broader application of transfer learning to other electrogram-based algorithms.