Combining Hough Transform and Deep Learning Approaches to Reconstruct ECG Signals From Printouts

Felix H Krones, Terry J Lyons, Adam Mahdi, Benjamin Walker
University of Oxford


Abstract

This work presents our team's (SignalSavants) contribution to the 2024 George B. Moody PhysioNet Challenge. The Challenge had two goals: reconstruct ECG signals from printouts and classify them for cardiac diseases. Our focus was on the first task. Although many ECGs are recorded digitally today, paper ECGs remain stored worldwide. Digitising them could help build datasets with diverse data spanning long time periods. However, varying recording standards and poor image quality require a data-centric approach to developing robust models that generalise effectively.

Our approach combines the creation of a diverse training set, the Hough transform to correct image rotation, a U-Net based segmentation model to identify the individual signals, and mask vectorisation to reconstruct the signals. We assessed the performance of our models using the 10-fold stratified cross-validation split of the 21,799 recordings proposed by the PTB-XL dataset.
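As a minimal sketch of the rotation and vectorisation steps (not the exact implementation used in our pipeline), the code below estimates a printout's skew from the dominant near-horizontal line orientation returned by OpenCV's probabilistic Hough transform, rotates the image to correct it, and vectorises a single-lead segmentation mask column by column. The Canny and Hough parameters, the mm_per_pixel scale, and the 10 mm/mV calibration are illustrative assumptions.

import cv2
import numpy as np

def estimate_rotation_angle(image_gray, angle_limit_deg=45):
    """Estimate printout skew from the dominant near-horizontal line
    orientation found by the probabilistic Hough transform."""
    edges = cv2.Canny(image_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return 0.0
    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < angle_limit_deg:  # keep gridlines and signal baselines
            angles.append(angle)
    return float(np.median(angles)) if angles else 0.0

def deskew(image, angle_deg):
    """Rotate the image about its centre to undo a skew of angle_deg degrees."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(image, m, (w, h), flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REPLICATE)

def mask_to_signal(lead_mask, mm_per_pixel, mm_per_mv=10.0):
    """Vectorise a single-lead binary mask (e.g. one segmentation output
    channel) into a 1-D signal: take the mean row of foreground pixels per
    column, then convert the displacement from the trace's mean level to
    millivolts. Assumes one trace per mask and a 10 mm/mV calibration."""
    rows, cols = np.nonzero(lead_mask)
    signal_px = np.full(lead_mask.shape[1], np.nan)
    for c in np.unique(cols):
        signal_px[c] = rows[cols == c].mean()
    baseline_px = np.nanmean(signal_px)
    # Image rows grow downwards, so a smaller row index means a larger voltage.
    return (baseline_px - signal_px) * mm_per_pixel / mm_per_mv

In practice, the rotation correction would be applied before segmentation, and gaps in the per-column trace would need interpolation and resampling to the target sampling rate before scoring.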

On the digitisation task, our model achieved an average signal-to-noise ratio (SNR) of 17.02 on the local cross-validation split and an unofficial Challenge score of 12.15 on the hidden validation set.
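For context, the SNR is reported in decibels; a standard per-lead definition, with $x$ the reference signal and $\hat{x}$ the reconstruction, is

\[ \mathrm{SNR} = 10 \log_{10} \frac{\sum_t x(t)^2}{\sum_t \left( x(t) - \hat{x}(t) \right)^2}, \]

averaged over leads and recordings. The official Challenge scoring may differ in details such as signal alignment and the handling of missing samples.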

Our study highlights the challenges of building robust, generalisable digitisation approaches. Such models require substantial resources (data, time, and computational power) but have great potential to diversify the available data.