Dual-branch network guided by local entropy for dynamic scene high dynamic range imaging
Ying HUANG, Changsheng LI, Hui PENG, Su LIU
Journal of Computer Applications    2025, 45 (1): 204-213.   DOI: 10.11772/j.issn.1001-9081.2023121726

To address motion artifacts and exposure distortion in High Dynamic Range (HDR) imaging from multi-exposure image sequences captured with camera shake or subject movement, a dual-branch network guided by local entropy was proposed for dynamic-scene HDR imaging. Firstly, the Discrete Wavelet Transform (DWT) was employed to separate the low-frequency, illumination-related information from the high-frequency, motion-related information in the input images, allowing the network to handle exposure and subject movement separately. Secondly, in the low-frequency branch, an attention module based on image local entropy was designed to guide the network away from extracting features of exposure regions that lack detail; in the high-frequency branch, a lightweight feature alignment module was introduced to align the scenes consistently and thereby suppress the extraction of motion features. Finally, a temporal self-attention module integrating channel attention was constructed to strengthen the interdependence of the exposure image sequence in the time domain and further improve result quality. Evaluations were performed on the public Kalantari, Sen, and Tursun datasets. Experimental results on the Kalantari dataset show that the proposed network ranks first in PSNR-l (42.20 dB) and third in SSIM-l (0.988 9) among the latest compared methods. Combined with the results on the remaining datasets, the proposed network reduces exposure distortion and motion artifacts effectively and generates images with rich detail and good visual quality.
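A minimal sketch of the guidance signal described in the abstract: a local-entropy map computed over each exposure, which is low in flat over- or under-exposed regions and high in well-exposed, textured regions, and can therefore serve as a soft attention prior for the low-frequency branch. The window size, bin count, and normalization below are illustrative assumptions, not the authors' published settings.

import numpy as np

def local_entropy(gray: np.ndarray, window: int = 9, bins: int = 32) -> np.ndarray:
    """Shannon entropy of intensities in a (window x window) neighborhood.

    gray  : 2-D array with values in [0, 1]
    window: odd neighborhood size (assumed value)
    bins  : number of intensity bins for the local histogram (assumed value)
    Returns an entropy map of the same shape as the input.
    """
    pad = window // 2
    padded = np.pad(gray, pad, mode="reflect")
    # Quantize intensities so each window yields a discrete histogram.
    quantized = np.clip((padded * bins).astype(np.int32), 0, bins - 1)
    h, w = gray.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = quantized[i:i + window, j:j + window]
            counts = np.bincount(patch.ravel(), minlength=bins)
            p = counts / counts.sum()
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))  # Shannon entropy in bits
    return out

if __name__ == "__main__":
    # Stand-in for one exposure's luminance channel.
    img = np.random.rand(64, 64)
    ent = local_entropy(img)
    # Normalize to [0, 1] so the map can weight features, down-weighting
    # detail-poor (saturated or dark) areas during fusion.
    attn = ent / (ent.max() + 1e-8)
    print(attn.shape, float(attn.min()), float(attn.max()))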
