Wearable ECG recording devices are becoming popular for long-term screening and monitoring of cardiovascular diseases. However, because the captured signals are frequently corrupted by artifacts, assessing their quality is essential to avoid automated misinterpretation of the ECG. For this purpose, pre-trained convolutional neural networks (CNNs) have recently shown promising performance. A factor that strongly affects the effectiveness of transferring previously acquired knowledge to a CNN is layer freezing during the fine-tuning stage. In this process, some layers can be frozen to preserve the generic features learned during pre-training, while the remaining layers adapt to the new task. This work analyzes how freezing a variable number of layers affects the performance of a well-known CNN in ECG quality assessment.
Several versions of a pre-trained 2-D AlexNet architecture, fed with continuous wavelet transform scalograms of ECG intervals, were derived by progressively freezing up to 7 of its 8 layers with learnable parameters. First, the entire CNN was fine-tuned with no frozen layers. Then, layers 1 to 7 were progressively frozen and fine-tuning was repeated. The eight resulting CNN models were validated on two separate databases containing almost 70,000 5-second ECG intervals (57,439 of high quality and 11,168 of poor quality). Experiments were repeated 10 times, and mean values of accuracy (Acc), sensitivity (Se), and specificity (Sp) were obtained. No relevant differences in classification were noticed among the models, but Acc showed an increasing trend as the number of frozen layers grew.
The obtained results recommend freezing all layers except the last one during fine-tuning of AlexNet for ECG quality assessment. In this way, classification performance can be slightly improved, while training time and computational resources are reduced.