Our method requires neither signal segmentation nor segment identification, making it simpler than approaches that must identify and segment S1, systole, S2, and diastole. In this competition, we designed our strategy specifically to avoid these steps. During preprocessing, each audio recording is converted to a 128-dimensional feature representation by computing its mel-scaled spectrogram. After fixed-length processing, these features are fed into a residual convolutional neural network (ResNet), while age, height, weight, and other patient characteristics are fed into a multilayer perceptron (MLP). The outputs of the ResNet and the MLP are concatenated, passed to a fully connected layer, and classified as Present, Unknown, or Absent. The loss is computed with binary cross-entropy. We applied our method to the 2022 George B. Moody PhysioNet Challenge, using 80% of the patient data for training and 20% for validation. On the validation set, our model achieved an accuracy of 91.53%, a recall of 83.6%, a precision of 90.29%, and a ROC AUC of 85%. Our method scored 1204 points in the unofficial phase. In the remaining time, we plan to pre-train the ResNet on the 2016 Challenge data to further improve our method.
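The mel-spectrogram preprocessing step can be sketched in plain NumPy: window the signal, take the magnitude STFT, and project the power spectrum onto a bank of 128 triangular mel filters. This is a minimal illustration only; the sampling rate, FFT size, and hop length below are illustrative assumptions, not our actual pipeline settings.

```python
import numpy as np

def hz_to_mel(f):
    # standard HTK-style mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # triangular filters spaced evenly on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):         # falling slope
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

def mel_spectrogram(signal, sr=4000, n_fft=1024, hop=256, n_mels=128):
    # frame the signal, apply a Hann window, take the power STFT
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # project each frame's power spectrum onto the 128 mel filters
    return power @ mel_filterbank(sr, n_fft, n_mels).T   # (n_frames, 128)

# usage: a 2-second 100 Hz tone at an assumed 4 kHz sampling rate
tone = np.sin(2 * np.pi * 100 * np.arange(8000) / 4000)
features = mel_spectrogram(tone)   # one 128-dimensional vector per frame
```

Each row of `features` is a 128-dimensional vector of the kind we feed into the ResNet; in practice a library routine such as `librosa.feature.melspectrogram` performs the same computation.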