Addressing Explainability, Transparency and Interpretability Requirements of AI Models for ECG Analysis

Durande K Nguifo1, Gabriel Tozatto Zago2, Rafael Pereira2, Rodrigo Varejao Andreao2, Anthony Fleury3
1IMT Nord Europe, 2Ifes, 3IMT Nord Europe, CERI SN


Abstract

The growing use of artificial intelligence for ECG analysis, particularly based on deep neural networks, has been accompanied by a major concern about the ability to explain the decisions given to clinicians. This issue has drawn increased attention to explainable AI (XAI) models, which are expected to meet requirements of explainability, interpretability, and transparency in order to favor their adoption in clinical settings. This paper presents a review of the most promising XAI approaches for ECG analysis and points out the main challenges. For this review, we selected 60 studies from the major databases. These articles were analyzed according to the type of model used (see Figure 1). We discuss the limitations of current approaches and the perspectives, aiming to push forward the interest in and the adoption of XAI models for ECG analysis.

Results: The analysis revealed that 80% of the reviewed models employed post-hoc XAI techniques such as saliency maps and SHAP values. Deep learning methods were predominant, with probabilistic models and attention mechanisms appearing less frequently. Approximately 65% of the studies reported metrics such as AUC, but fewer than 25% provided comparisons to non-explainable baselines. Human oversight was addressed in 42% of the articles, while only 18% adjusted model outputs based on expert feedback. Only 12% implemented fairness evaluations across ethnic groups, and fewer than 10% incorporated privacy-preserving mechanisms. Potential impacts on healthcare efficiency, including triage optimization and remote diagnostics, were discussed in 28% of the papers.

Conclusion: Despite rapid advances in explainable AI for ECG interpretation, significant gaps remain in clinical validation, personalization, and ethical governance. The findings underscore the necessity of integrating explainability with robust performance, transparency, and clinician-in-the-loop approaches. This review serves as a reference for future work aiming to bridge the gap between experimental research and AI systems that can be integrated into clinical practice, ultimately promoting safer and more accountable deployment of AI in cardiology.
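To illustrate the post-hoc XAI techniques that dominate the reviewed literature, the following minimal PyTorch sketch computes a gradient-based saliency map for a toy 1-D CNN ECG classifier. The model, signal length, and class count are illustrative assumptions for this sketch, not drawn from any of the reviewed studies.

```python
# Minimal sketch: post-hoc gradient saliency for a 1-D CNN ECG classifier.
# TinyECGNet and the 500-sample single-lead input are hypothetical stand-ins.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    """Toy 1-D CNN standing in for an ECG classifier."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).squeeze(-1))

def saliency(model: nn.Module, signal: torch.Tensor, target: int) -> torch.Tensor:
    """Return |d(class score)/d(input)|: which samples most influence the prediction."""
    model.eval()
    x = signal.clone().requires_grad_(True)
    score = model(x)[0, target]   # scalar score for the class of interest
    score.backward()              # gradients flow back to the input signal
    return x.grad.abs().squeeze()

model = TinyECGNet()
ecg = torch.randn(1, 1, 500)           # one lead, 500 samples (illustrative)
sal = saliency(model, ecg, target=1)   # per-sample attribution for class 1
print(sal.shape)                       # torch.Size([500])
```

Such a map highlights the samples (e.g., specific waves of a beat) that most affect the class score, which is the kind of post-hoc explanation the reviewed studies typically overlay on the ECG trace for clinician inspection.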