Cardiac auscultation is an effective method for screening hemodynamic abnormalities. The purpose of this paper is to propose an automatic algorithm to identify the presence or absence of murmurs in heart sound recordings from multiple auscultation locations. First, all samples from the PhysioNet/CinC Challenge 2016 and PhysioNet/CinC Challenge 2022 datasets were resampled to 4000 Hz and segmented into 5-second clips. To retain the required frequency range and suppress unwanted noise, a 5th-order Butterworth band-pass filter with cut-off frequencies of 20 and 450 Hz was applied. Then, Mel-frequency cepstral coefficients (MFCCs) were extracted as effective heart sound features. Last, a ResNet-34 model pretrained on ImageNet was trained on these MFCC features to distinguish normal from abnormal phonocardiograms. During training, data augmentation methods such as time-masking and frequency-masking were used to balance the dataset. With stratified five-fold cross-validation, our method achieved an AUROC, AUPRC, accuracy, F-measure, and Challenge score of 72.6%, 55.6%, 78.8%, 45.6%, and 1835.92, respectively. Our team, USST_Med, received a Challenge score of 1595 on the official test set. The proposed method performed well at classifying phonocardiograms and has potential for clinical application.
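The preprocessing steps described above (resampling to 4000 Hz, 5th-order Butterworth band-pass filtering at 20-450 Hz, and segmentation into 5-second clips) can be sketched as follows. This is a minimal illustration using SciPy, not the authors' released code; the function name `preprocess` and the zero-phase forward-backward filtering choice are assumptions for the sketch, and MFCC extraction is omitted.

```python
from math import gcd

import numpy as np
from scipy.signal import butter, resample_poly, sosfiltfilt

# Parameters taken from the paper's description.
TARGET_FS = 4000            # target sampling rate (Hz)
CLIP_SEC = 5                # clip length (seconds)
LOW_HZ, HIGH_HZ = 20, 450   # band-pass cut-off frequencies (Hz)
ORDER = 5                   # Butterworth filter order


def preprocess(signal, fs):
    """Resample to 4 kHz, band-pass filter, and cut into 5 s clips.

    This helper is an assumption for illustration; the paper does not
    publish its exact implementation.
    """
    # Resample with a polyphase filter using the reduced integer ratio.
    g = gcd(int(fs), TARGET_FS)
    x = resample_poly(np.asarray(signal, dtype=float),
                      TARGET_FS // g, int(fs) // g)

    # 5th-order Butterworth band-pass in second-order sections for
    # numerical stability; sosfiltfilt applies it forward and backward,
    # giving zero phase distortion (a common choice, assumed here).
    sos = butter(ORDER, [LOW_HZ, HIGH_HZ], btype="bandpass",
                 fs=TARGET_FS, output="sos")
    x = sosfiltfilt(sos, x)

    # Segment into non-overlapping 5 s clips; any trailing remainder
    # shorter than one clip is dropped.
    n = TARGET_FS * CLIP_SEC
    return [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
```

For example, a 12-second recording sampled at 2000 Hz yields two 5-second clips of 20,000 samples each after resampling to 4000 Hz.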