Introduction: The electrocardiogram (ECG) is crucial for diagnosing cardiovascular disease, as it records the heart's electrical activity. With the rise of AI-based diagnostic techniques, having ECG data in digital form is essential. However, many regions, particularly underprivileged communities, still rely on paper ECGs. Digitizing these paper records can yield a comprehensive dataset that spans diverse patient populations.

Methods: Our team, PapyrusECG, developed a pipeline that combines rule-based and deep learning techniques to digitize paper-based ECGs. The pipeline first corrects skew using Hough and affine transforms to align the image. A trained YOLOv7 model then localizes each ECG lead. The image and lead locations are passed to a trained U-Net, which binarizes each lead. Finally, a custom rule-based algorithm extracts the signal, converting pixel locations to millivolts (mV). To ensure robustness and generalizability, we generated 7,500 augmented images from the PTB-XL database and evaluated with 5-fold cross-validation. Augmentations included noise, paper wrinkles, rotations, color-temperature shifts, cropping, random resolutions, and varied grid colors. We also modified the generation script to produce binary masks for training the U-Net.

Results: We achieved a 5-fold cross-validation signal-to-noise ratio (SNR) of -0.455±0.145, and an SNR of -5.192 on the validation data for the official phase.

Conclusion: Our pipeline produced promising results. During the official phase, we will make further attempts to improve the performance of the deep learning models.
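As a rough illustration of the de-skew step (the pipeline itself uses OpenCV-based Hough and affine transforms on the full image, not this code), a Hough-style search can score candidate angles by how sharply grid-line pixels cluster when projected perpendicular to each angle. The function name and parameters below are illustrative only:

```python
import numpy as np

def estimate_skew_angle(points, angles_deg=np.arange(-10.0, 10.25, 0.25)):
    """Toy Hough-style skew estimate over (x, y) pixel coordinates of
    grid-line pixels. For each candidate angle, project every point
    perpendicular to a family of lines tilted by that angle; when the
    candidate matches the true grid orientation, the projections collapse
    onto a few values and the histogram becomes sharply peaked."""
    pts = np.asarray(points, dtype=float)
    best_angle, best_score = 0.0, -np.inf
    for a in angles_deg:
        theta = np.deg2rad(a)
        # Signed perpendicular distance of each point to lines at angle `a`.
        rho = pts[:, 1] * np.cos(theta) - pts[:, 0] * np.sin(theta)
        hist, _ = np.histogram(rho, bins=64)
        # Aligned lines -> a few tall bins -> high histogram variance.
        score = hist.var()
        if score > best_score:
            best_score, best_angle = score, a
    return float(best_angle)
```

Rotating the image by the negative of the estimated angle (e.g. with an affine warp) then restores an axis-aligned grid.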
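The final pixel-to-mV conversion relies on standard ECG paper calibration: at 10 mm/mV gain, one small (1 mm) grid square corresponds to 0.1 mV. A minimal sketch, assuming the grid pitch in pixels and each lead's baseline row have already been estimated (the function and argument names are illustrative, not the pipeline's actual API):

```python
import numpy as np

def pixels_to_mv(trace_rows, baseline_row, px_per_small_square,
                 mv_per_small_square=0.1):
    """Convert the vertical pixel positions of a traced ECG lead to mV.

    Image rows increase downward, so a trace above the isoelectric
    baseline (smaller row index) maps to a positive voltage.
    """
    trace_rows = np.asarray(trace_rows, dtype=float)
    # Displacement in small grid squares, measured upward from the baseline.
    squares = (baseline_row - trace_rows) / px_per_small_square
    return squares * mv_per_small_square
```

With a grid pitch of 10 px per small square, a trace point 10 px above the baseline maps to +0.1 mV.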
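The SNR metric compares the digitized signal against the ground-truth waveform. One common definition is sketched below for illustration; the exact scoring formula is set by the challenge organizers and is an assumption here:

```python
import numpy as np

def snr_db(reference, estimate):
    """Signal-to-noise ratio in dB between a ground-truth ECG signal
    and its digitized reconstruction:
        SNR = 10 * log10( sum(x^2) / sum((x - x_hat)^2) )
    Negative values indicate the reconstruction error power exceeds
    the signal power."""
    x = np.asarray(reference, dtype=float)
    x_hat = np.asarray(estimate, dtype=float)
    noise = x - x_hat
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(noise ** 2))
```

Under this definition, an all-zero estimate of a nonzero signal yields 0 dB (error power equals signal power), which gives a sense of scale for the negative SNRs reported above.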