We propose an ensemble-based pipeline for analyzing 12-lead electrocardiograms (ECGs) for Chagas disease diagnosis. ECG signals are first preprocessed with a continuous wavelet transform (CWT) using a complex Morlet (cmor) wavelet with bandwidth parameter 1.5, converting the raw signals into a detailed time-frequency representation over the clinically relevant 0.5–150 Hz band. We chose the CWT for its superior time-frequency localization and multi-resolution properties relative to the discrete wavelet transform, and because it avoids the computational cost of the fractional Stockwell transform. Three parallel deep learning models then extract complementary features from the CWT output. This hybrid model captures diverse characteristics of ECG signals by combining the strengths of different neural architectures: the CNN component detects local morphological features, such as subtle variations in QRS complexes; the Transformer uses self-attention to capture long-range dependencies across the signal; and the LSTM models sequential dynamics and rhythm trends over time. Merging these complementary feature sets yields a representation that captures both fine-grained detail and broader contextual information. The combined feature vector is fed into an XGBoost classifier, which is well suited to high-dimensional, low-volume biological data and helps mitigate overfitting. Early experiments on the challenge datasets indicate that the hybrid approach effectively detects the ECG abnormalities associated with Chagas disease. Future work will focus on an efficient GPU implementation of the fractional Stockwell transform, which, combined with deep convolutional models, has achieved 95.31% accuracy and 96.29% specificity in fetal arrhythmia detection. Fine-tuning the Transformer and tuning the XGBoost hyperparameters are also planned next steps.