In recent years, deep-learning-based low-light image enhancement methods have largely been inspired by Retinex theory: the illumination map is first estimated to adjust brightness, and the reflectance is then restored to achieve low-light enhancement. Accordingly, by analyzing the similarity between the reflectance map of a low-light scene and that of its reference, a low-light image enhancement Network guided by a Reflection Prior map (RP-Net) was proposed. First, a similar reflection map was generated by decomposition in the Lab color space, and a Reflection Prior feature Adaptive Extractor (RPAE) was designed to re-encode and filter the guiding features from the similar reflection map at different scales of the backbone network. Then, the guiding information was injected into the backbone network through the designed Reflection Prior feature-Guided attention Block (RPGB). In addition, to overcome the limitations of the traditional pixel-by-pixel L1 loss, a frequency-domain harmonic loss function was designed from the perspective of frequency-domain analysis, so that the enhancement effect is optimized against the global spectral distribution. Experimental results on the LOLv1, LOLv2, and LSRW datasets show that the proposed method outperforms existing mainstream methods in Structural Similarity (SSIM), achieves Peak Signal-to-Noise Ratio (PSNR) values 1.29 dB higher than Retinexformer on LOLv2-syn and 2.08 dB higher than SAFNet (Spatial And Frequency Network) on LSRW, and performs well in balancing color fidelity and enhancement effect.
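The abstract does not give the exact formulation of the frequency-domain harmonic loss; the sketch below shows one plausible form, assuming a pixel-wise L1 term combined with an L1 distance between the amplitude spectra of the 2-D FFTs of the prediction and the reference. The function name `harmonic_frequency_loss` and the weight `alpha` are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

def harmonic_frequency_loss(pred, target, alpha=0.5):
    """Hypothetical frequency-domain harmonic loss (illustrative only).

    Combines a pixel-by-pixel L1 term with an L1 distance between the
    FFT amplitude spectra, so the enhanced image is also constrained
    by the global spectral distribution of the reference.
    """
    # Spatial-domain term: standard pixel-by-pixel L1 distance
    spatial = np.mean(np.abs(pred - target))
    # Frequency-domain term: L1 distance between amplitude spectra
    pred_amp = np.abs(np.fft.fft2(pred))
    target_amp = np.abs(np.fft.fft2(target))
    freq = np.mean(np.abs(pred_amp - target_amp))
    # alpha balances the two terms; its value here is an assumption
    return spatial + alpha * freq
```

In this sketch, using only the amplitude spectrum makes the frequency term insensitive to small spatial shifts while still penalizing a mismatched global frequency content, which is one common way to address the locality of plain L1.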