In the George B. Moody PhysioNet Challenge 2024, we developed a method for classifying electrocardiograms (ECGs) captured as images or paper printouts. Our prediction model consists of two neural networks. The first network predicts the image's rotation angle so that the rotation can be reversed; the second network then processes the corrected image and predicts 11 possible labels for the ECG. Both networks are based on the modern residual convolutional architecture ConvNeXt. We compared the performance of the image-based prediction model with that of a model for digital signal classification, using a 1D variant of ResNet-50 as the reference model. Evaluating the image-processing model on our training set with 5-fold cross-validation, we obtained a macro F1 score of 0.8496 for multilabel classification. In the competition's official phase, our CeZIS team achieved a Challenge score of 0.645 on the hidden validation set.
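To illustrate the two-stage design described above, the following is a minimal sketch in PyTorch/torchvision. It assumes a ConvNeXt-Tiny backbone from torchvision, a single regression output (in degrees) for the rotation network, and sigmoid-based multilabel outputs for the 11-label classifier; the class name `TwoStageECGImageClassifier` and these specific choices are illustrative assumptions, not the exact configuration used in the Challenge submission.

```python
# A minimal sketch of the two-stage ECG image pipeline, assuming PyTorch/torchvision.
# Backbone sizes, the regression-style rotation head, and all names are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms.functional as TF


class TwoStageECGImageClassifier(nn.Module):
    """Stage 1 estimates the scan's rotation angle; stage 2 classifies the
    de-rotated image into 11 non-exclusive ECG labels."""

    def __init__(self, num_labels: int = 11):
        super().__init__()
        # ConvNeXt backbone with a single regression output: the rotation angle in degrees.
        self.rotation_net = models.convnext_tiny(weights=None, num_classes=1)
        # ConvNeXt backbone with one logit per ECG label for multilabel classification.
        self.label_net = models.convnext_tiny(weights=None, num_classes=num_labels)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Predict the rotation angle of each image in the batch.
        angles = self.rotation_net(images).squeeze(1)
        # Reverse the predicted rotation image by image.
        corrected = torch.stack(
            [TF.rotate(img, -float(angle)) for img, angle in zip(images, angles)]
        )
        # Return multilabel logits; apply a sigmoid and threshold at inference time.
        return self.label_net(corrected)


if __name__ == "__main__":
    model = TwoStageECGImageClassifier()
    batch = torch.randn(2, 3, 224, 224)  # two dummy ECG images
    probs = torch.sigmoid(model(batch))
    print(probs.shape)  # torch.Size([2, 11])
```

In practice the two networks would be trained separately (the rotation regressor on synthetically rotated images, the classifier on de-rotated images); the joint module above only shows how the predicted angle feeds the second stage at inference time.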