Aims: This study aims to automatically detect heart murmurs from phonocardiogram (PCG) recordings, particularly recordings taken at multiple auscultation locations, to support the early diagnosis of congenital and acquired heart diseases in a cost-effective way.

Methods: Unlike currently popular purely deep-learning methods, we exploit acoustic properties of the recordings. Heart sounds are shock waves produced by blood flowing through the heart as it beats. We observe differences in the acoustic properties of heart sounds recorded at different positions, caused by reflections from surrounding structures in the body: the recordings exhibit different reverberation characteristics. Since these disturbances affect the auditory impression of the sounds, PCG recordings from different locations should first be processed into a uniform representation. We therefore design a two-stage heart murmur detection model, ReverbNet. First, a generative adversarial network (GAN) with a U-Net generator de-reverberates the short-time Fourier transform (STFT) images as a preprocessing step. Then an STFT Transformer, a novel application of a Vision Transformer-like model to audio data, classifies the normalized signals. Finally, the classification results for the PCG recordings from the different locations are combined to decide whether the patient has a heart murmur.

Results: The dataset, provided by the George B. Moody PhysioNet Challenge 2022, contains PCG recordings from 1568 patients at five auscultation locations. Due to time constraints, in the unofficial stage we implemented only the de-reverberation model and used a simple random forest classifier. We trained on the publicly released 60% of the data with 5-fold cross-validation. The average accuracy and F-measure were 0.885 and 0.811, respectively. The challenge score was 803, a 15.23% improvement over the same pipeline without de-reverberation. We expect the completed model to perform substantially better.
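
The two ends of the pipeline described above can be sketched in a few lines: computing the STFT "image" that is fed to the de-reverberation and classification stages, and fusing per-location murmur decisions into a patient-level decision. This is a minimal illustrative sketch, not the paper's implementation; the frame size, sampling rate, log compression, and the OR-style fusion rule are all assumptions (the abstract does not specify how the locations are combined).

```python
import numpy as np
from scipy.signal import stft

def pcg_to_stft_image(signal, fs=4000, nperseg=256):
    """Compute a log-magnitude STFT 'image' of a PCG recording.

    The sampling rate and frame length here are illustrative
    assumptions, not the settings used in the paper.
    """
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    # Log compression keeps the large dynamic range of heart
    # sounds manageable for an image-based model.
    return np.log1p(np.abs(Z))

def combine_locations(per_location_preds):
    """Patient-level decision from per-location murmur predictions.

    A simple OR rule (murmur detected at any auscultation site ->
    murmur present) is assumed; the abstract does not state the
    actual fusion rule.
    """
    return int(any(per_location_preds))

# Toy usage: a 3-second synthetic tone standing in for a PCG recording.
fs = 4000
sig = np.sin(2 * np.pi * 40 * np.arange(3 * fs) / fs)
img = pcg_to_stft_image(sig, fs=fs)
print(img.shape)  # (frequency bins, time frames)
print(combine_locations([0, 1, 0, 0, 0]))  # -> 1
```

With `nperseg=256` the STFT yields 129 one-sided frequency bins, so the resulting image has a fixed height regardless of recording length, which is convenient for a Vision Transformer-style patch embedding.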