Congenital heart disease (CHD) is the most common genetic birth defect, affecting approximately 1% of live births and carrying considerable morbidity and mortality. However, many developing countries lack the infrastructure and cardiology specialists needed to diagnose and treat congenital and acquired heart disease in children. The phonocardiograph is an economical option for non-invasive diagnosis and monitoring of heart disorders. It produces a phonocardiogram (PCG), a waveform that records the intensity of heart sounds over time in high fidelity. In this study, we propose a deep learning-based model that automatically identifies heart murmurs from the PCG. We converted the heartbeat sound into 2D time-frequency features using extraction techniques such as the log-mel spectrogram, the short-time Fourier transform (STFT), and the constant-Q transform (CQT). These 2D time-frequency features were modeled with audio classification architectures such as convolutional neural networks (CNNs), BC-ResNet, and ResMax. Our initial model, using the log-mel spectrogram and a five-layer CNN, ranked 30th out of 166 submitted methods (14th out of 76 participating teams) with a score of 689 in the unofficial phase of the George B. Moody PhysioNet Challenge. We believe our deep learning-based heart murmur detection system is a promising approach to automatic heart murmur detection from the PCG.
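As an illustrative sketch only (not the authors' pipeline), the 1D-to-2D time-frequency conversion described above can be performed with SciPy's STFT. The sampling rate, window length, and the synthetic stand-in signal below are assumptions chosen for demonstration:

```python
import numpy as np
from scipy.signal import stft

# Assumed sampling rate (Hz) for PCG audio; real recordings may differ.
fs = 4000
t = np.arange(0, 2.0, 1 / fs)  # 2 seconds of synthetic signal

# Synthetic stand-in for a heart sound: low-frequency bursts plus noise.
x = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)
x += 0.01 * np.random.randn(x.size)

# STFT with a 25 ms window (100 samples at 4 kHz) and 50% overlap,
# yielding a 2D frequency-by-time representation.
f, frames, Z = stft(x, fs=fs, nperseg=100, noverlap=50)

# Log power spectrogram; the small constant avoids log(0).
log_spec = np.log(np.abs(Z) ** 2 + 1e-10)

print(log_spec.shape)  # (frequency bins, time frames)
```

The resulting 2D array is the kind of feature map that can be fed to a CNN-style classifier; a log-mel spectrogram or CQT would replace the STFT step with a mel filter bank or constant-Q filter bank, respectively.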