Journal of Computer Applications, 184-191. DOI: 10.11772/j.issn.1001-9081.2023121786

• Multimedia computing and computer simulation •

Variation-aware online dynamic illumination estimation method for indoor scenes

Yuwan LIU1, Zhiyi GUO2, Guanyu XING2, Yanli LIU1,2

  1. College of Computer Science, Sichuan University, Chengdu Sichuan 610065, China
    2. National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu Sichuan 610065, China
  • Received: 2023-12-27 Revised: 2024-03-17 Accepted: 2024-03-25 Online: 2025-01-24 Published: 2024-12-31
  • Contact: Guanyu XING

  • About the authors: LIU Yuwan, born in 1999 in Huainan, Anhui, M. S. candidate, CCF member; her research interests include computer graphics and image processing.
    GUO Zhiyi, born in 1994 in Chengdu, Sichuan, M. S.; his research interests include computer vision and image processing.
    XING Guanyu, born in 1985 in Dehui, Jilin, Ph. D., associate professor, CCF member; his research interests include computer graphics and augmented reality.
    LIU Yanli, born in 1981 in Shenqiu, Henan, Ph. D., professor; her research interests include virtual/augmented reality, computer graphics, and computer vision.
  • Supported by:
    National Natural Science Foundation of China (62172290); Key Research and Development Program of Sichuan Province (2023YFS0454)

Abstract:

To enhance the realism of virtual-real fusion in augmented reality scenes, a variation-aware online dynamic illumination estimation method for indoor scenes was proposed. Unlike existing methods that directly compute lighting parameters or generate lightmaps, the proposed method estimates the lighting variation image of the scene between different lighting conditions to update the scene illumination dynamically, so that the dynamic lighting of the scene is obtained more accurately and the detail information of the scene is preserved. The Convolutional Neural Network (CNN) of the proposed method consists of two sub-networks: a Low Dynamic Range (LDR) image feature extraction network and an illumination estimation network. The overall network took a High Dynamic Range (HDR) panoramic lightmap captured with all the main light sources in the scene turned on as the initial lightmap, and this lightmap and a limited-field-of-view LDR image captured after the lighting change were used together as the input. Firstly, a CNN built on AlexNet was used to extract LDR image features, and these features were concatenated with the HDR lightmap features in the shared encoder of the illumination estimation network. Then, a U-Net structure with an attention mechanism was used to estimate the lighting variation image and the light source mask, so as to update the dynamic illumination of the scene. In the numerical evaluation on panoramic lightmaps, the Mean Squared Error (MSE) of the proposed method was reduced by about 79%, 65%, 38%, 17%, and 87% compared with those of Gardner's method, Garon's method, EMLight, Guo's method, and the coupled dual-StyleGAN panorama synthesis network StyleLight, respectively, and the other metrics were also improved. These results demonstrate the effectiveness of the proposed method both qualitatively and quantitatively.
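The abstract describes the two-sub-network pipeline (an AlexNet-based LDR feature extractor and a U-Net-style illumination estimation network with attention) only at a high level. The following is a minimal PyTorch sketch of such a pipeline; every module name, channel count, resolution, and the specific attention/fusion wiring are assumptions made for illustration, not the paper's actual architecture.

```python
# Minimal PyTorch sketch of the two-sub-network design outlined in the abstract.
# All layer widths, resolutions, and the attention/fusion details are assumptions.
import torch
import torch.nn as nn
import torchvision.models as tvm

class LDRFeatureExtractor(nn.Module):
    """AlexNet-based extractor for the limited-field-of-view LDR image."""
    def __init__(self, feat_dim=256):
        super().__init__()
        alexnet = tvm.alexnet(weights=None)        # AlexNet backbone, as named in the abstract
        self.backbone = alexnet.features           # convolutional feature stack
        self.project = nn.Conv2d(256, feat_dim, kernel_size=1)

    def forward(self, ldr):                        # ldr: (B, 3, 224, 224)
        return self.project(self.backbone(ldr))    # (B, feat_dim, 6, 6)

class IlluminationEstimator(nn.Module):
    """U-Net-style estimator with a simple attention gate; takes the initial HDR
    panoramic lightmap plus LDR features and predicts a lighting variation image
    and a light-source mask."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(128 + feat_dim, 128, 1)                    # fuse LDR features in the encoder
        self.attn = nn.Sequential(nn.Conv2d(128, 128, 1), nn.Sigmoid())  # simple attention weighting
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(64 + 64, 4, 4, stride=2, padding=1)  # 3 variation channels + 1 mask

    def forward(self, hdr_pano, ldr_feat):         # hdr_pano: (B, 3, 128, 256)
        e1 = self.enc1(hdr_pano)                   # (B, 64, 64, 128)
        e2 = self.enc2(e1)                         # (B, 128, 32, 64)
        ldr_up = nn.functional.interpolate(ldr_feat, size=e2.shape[-2:],
                                           mode="bilinear", align_corners=False)
        f = self.fuse(torch.cat([e2, ldr_up], dim=1))   # shared-encoder feature concatenation
        f = f * self.attn(f)                            # attention-reweighted features
        d2 = self.dec2(f)                               # (B, 64, 64, 128)
        out = self.dec1(torch.cat([d2, e1], dim=1))     # U-Net skip connection
        variation, mask_logits = out[:, :3], out[:, 3:]
        return variation, torch.sigmoid(mask_logits)

if __name__ == "__main__":
    ldr = torch.rand(1, 3, 224, 224)     # limited-FOV LDR image after the lighting change
    hdr = torch.rand(1, 3, 128, 256)     # initial HDR panoramic lightmap (all main lights on)
    feat = LDRFeatureExtractor()(ldr)
    variation, mask = IlluminationEstimator()(hdr, feat)
    print(variation.shape, mask.shape)   # (1, 3, 128, 256), (1, 1, 128, 256)
```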

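The abstract also states that the predicted variation image and light source mask are used to update the scene's dynamic illumination and that panoramic lightmaps are compared with MSE, but it does not give the update rule. The sketch below assumes a simple masked additive update; the function names and the rule itself are illustrative only.

```python
# Hedged sketch of how the predicted variation image and light-source mask might be
# used to update the initial HDR lightmap, plus the MSE metric used for evaluation.
# The masked additive update is an assumption; the paper's exact formulation may differ.
import numpy as np

def update_lightmap(initial_hdr, variation, light_mask):
    """Apply the estimated lighting change to the initial HDR panorama.

    initial_hdr : (H, W, 3) HDR panorama captured with all main lights on
    variation   : (H, W, 3) predicted lighting variation image
    light_mask  : (H, W, 1) predicted light-source mask in [0, 1]
    """
    updated = initial_hdr + light_mask * variation   # assumed masked additive update
    return np.clip(updated, 0.0, None)               # keep HDR radiance non-negative

def mse(pred_hdr, gt_hdr):
    """Mean squared error between predicted and ground-truth panoramas."""
    return float(np.mean((pred_hdr - gt_hdr) ** 2))

# Example with random stand-in data (real inputs would be captured or rendered panoramas).
H, W = 128, 256
initial = np.random.rand(H, W, 3).astype(np.float32) * 2.0
variation = np.random.randn(H, W, 3).astype(np.float32) * 0.1
mask = np.random.rand(H, W, 1).astype(np.float32)
updated = update_lightmap(initial, variation, mask)
print("MSE vs. initial lightmap:", mse(updated, initial))
```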
Key words: indoor scene, dynamic illumination, deep learning, synthetic dataset, High Dynamic Range (HDR)

