A ResNet-BiLSTM Hybrid Approach to Detect Chagas Cardiomyopathy Using 12-Lead ECGs

Sachin Kurup1, Velamala Pavan Krishna2, Ulligadda Shashank2
1Amrita Vishwa Vidyapeetham, Amritapuri, 2Amrita Vishwa Vidyapeetham


Abstract

In our submission to the George B. Moody PhysioNet Challenge 2025, we developed a deep-learning solution for the automated detection of Chagas cardiomyopathy from 12-lead electrocardiograms (ECGs). Our approach uses a ResNet1D architecture, a one-dimensional convolutional neural network suited to learning temporal dependencies and complex waveform patterns directly from raw ECG signals. The model is trained on fixed-length, standardized input sequences with a class-weighted binary cross-entropy loss to address the inherent class imbalance in the dataset.
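The PyTorch sketch below illustrates this training setup; the block structure, channel widths, kernel sizes, and the positive-class weight are illustrative assumptions rather than our exact Challenge configuration.

```python
# Minimal sketch of a 1-D residual CNN trained with class-weighted BCE.
# Layer counts, channel widths, and the class weight are illustrative.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    """Two 1-D convolutions with a skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size=7, stride=stride, padding=3)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size=7, padding=3)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.skip = (nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=stride)
                     if (stride != 1 or in_ch != out_ch) else nn.Identity())

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))

class ResNet1D(nn.Module):
    """Stacked residual blocks, global pooling, and a linear head."""
    def __init__(self, n_leads=12, n_classes=1):
        super().__init__()
        self.body = nn.Sequential(
            ResBlock1D(n_leads, 64, stride=2),
            ResBlock1D(64, 128, stride=2),
            ResBlock1D(128, 256, stride=2),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(256, n_classes)

    def forward(self, x):                  # x: (batch, leads, samples)
        z = self.pool(self.body(x)).squeeze(-1)
        return self.head(z)                # raw logits

# Class-weighted binary cross-entropy: pos_weight up-weights the rare
# Chagas-positive class (assumed negative-to-positive ratio, illustrative).
model = ResNet1D()
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([20.0]))

x = torch.randn(8, 12, 4096)               # fixed-length, standardized ECG batch
y = torch.randint(0, 2, (8, 1)).float()    # binary Chagas labels
loss = criterion(model(x), y)
loss.backward()
```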

The proposed methodology incorporates a robust signal-processing pipeline with minimal preprocessing to establish a strong baseline. Each recording first undergoes notch filtering to remove powerline interference, followed by wavelet denoising to suppress residual noise while preserving critical signal morphology. We then extract several clinically relevant features, including heart rate, RR-interval variability, QRS energy, signal entropy, and zero-crossing rate. Additionally, we discard leads III, aVR, aVL, and aVF, focusing the model on the most informative leads and capturing the essential characteristics of the ECG more precisely. Finally, patient metadata is concatenated with these features before the combined representation is passed through a multi-layer perceptron for final classification.
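A minimal sketch of this preprocessing and feature-extraction stage is shown below, assuming a 400 Hz sampling rate, a 50 Hz powerline frequency, SciPy for the notch filter, PyWavelets for denoising, a simple amplitude-threshold R-peak detector, and the standard lead order (I, II, III, aVR, aVL, aVF, V1-V6); the specific thresholds, wavelet, and decomposition level are illustrative assumptions, and the metadata-plus-MLP classifier is omitted.

```python
# Sketch of the preprocessing and hand-crafted feature stage.
# Assumed: 400 Hz sampling rate, 50 Hz powerline frequency, db6 wavelet,
# and a crude amplitude-threshold R-peak detector (all illustrative).
import numpy as np
import pywt
from scipy.signal import iirnotch, filtfilt, find_peaks

FS = 400  # Hz, assumed sampling rate

def notch_filter(sig, freq=50.0, q=30.0):
    """Remove powerline interference with an IIR notch filter."""
    b, a = iirnotch(freq, q, fs=FS)
    return filtfilt(b, a, sig)

def wavelet_denoise(sig, wavelet="db6", level=4):
    """Soft-threshold detail coefficients while preserving morphology."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(sig)))               # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

def extract_features(lead):
    """Heart rate, RR variability, QRS energy, entropy, zero-crossing rate."""
    peaks, _ = find_peaks(lead, height=np.std(lead), distance=int(0.3 * FS))
    rr = np.diff(peaks) / FS if len(peaks) > 1 else np.array([1.0])
    hist, edges = np.histogram(lead, bins=64, density=True)
    p = hist * np.diff(edges)
    p = p[p > 0]
    return np.array([
        60.0 / rr.mean(),                                        # heart rate (bpm)
        rr.std(),                                                # RR variability (s)
        float(np.sum(lead[peaks] ** 2)) if len(peaks) else 0.0,  # QRS energy proxy
        -np.sum(p * np.log2(p)),                                 # signal entropy (bits)
        float(np.mean(np.diff(np.sign(lead)) != 0)),             # zero-crossing rate
    ])

# Keep the more informative leads: drop III, aVR, aVL, aVF (indices assume
# the order I, II, III, aVR, aVL, aVF, V1-V6).
KEEP = [0, 1] + list(range(6, 12))

def preprocess(ecg):                     # ecg: array of shape (12, samples)
    ecg = ecg[KEEP]
    ecg = np.stack([wavelet_denoise(notch_filter(lead)) for lead in ecg])
    feats = np.concatenate([extract_features(lead) for lead in ecg])
    return ecg, feats                    # filtered leads + per-lead features
```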

Our best-performing model achieved a Challenge score of 0.425, indicating promising but still limited performance from waveform learning alone. We aim to extend the architecture with a Bidirectional Long Short-Term Memory (Bi-LSTM) network to capture richer temporal dependencies across ECG segments. In addition, we plan an encoder-decoder framework in which the ResNet serves as a feature extractor, the Bi-LSTM operates as an encoder, and an LSTM-based decoder translates context-aware embeddings into class predictions. This sequence-to-sequence strategy is expected to improve the model's ability to capture long-range dependencies and to enhance robustness against noise and signal variability.
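A rough sketch of this planned extension is given below; the stand-in convolutional extractor, hidden sizes, segmentation scheme, and decoding from the final time step are illustrative assumptions, not the final design.

```python
# Sketch of the planned ResNet -> Bi-LSTM encoder -> LSTM decoder pipeline.
# The convolutional extractor below is a stand-in for the ResNet1D body;
# hidden sizes and the number of ECG segments are illustrative assumptions.
import torch
import torch.nn as nn

class ResNetBiLSTM(nn.Module):
    def __init__(self, segment_extractor, feat_dim=256, hidden=128, n_classes=1):
        super().__init__()
        self.extractor = segment_extractor          # per-segment feature extractor
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True,
                               bidirectional=True)  # Bi-LSTM encoder
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, segments, leads, samples)
        b, s = x.shape[:2]
        feats = self.extractor(x.flatten(0, 1)).view(b, s, -1)  # per-segment embeddings
        enc, _ = self.encoder(feats)                # context-aware embeddings
        dec, _ = self.decoder(enc)                  # sequence-to-sequence decoding
        return self.head(dec[:, -1])                # class logit from the last step

# Stand-in extractor producing a 256-dim embedding per 12-lead segment.
extractor = nn.Sequential(
    nn.Conv1d(12, 256, kernel_size=7, stride=4, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(1),
)

model = ResNetBiLSTM(extractor)
x = torch.randn(4, 5, 12, 1024)          # 4 recordings split into 5 segments each
print(model(x).shape)                    # torch.Size([4, 1])
```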