Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (2): 572-579. DOI: 10.11772/j.issn.1001-9081.2023020123
Special topic: Multimedia computing and computer simulation
Qiaoling HUANG 1, Bochuan ZHENG 1,2, Zicheng DING 2, Zedong WU 2
Received: 2023-02-15
Revised: 2023-05-08
Accepted: 2023-05-11
Online: 2023-08-14
Published: 2024-02-10
Contact: Bochuan ZHENG
About author: HUANG Qiaoling, born in 1998, M. S. candidate, native of Mianyang, Sichuan. Her research interests include machine learning, deep learning, and image inpainting.
Supported by:
Abstract:
Image inpainting for irregular missing regions has wide applications but remains challenging. To address the artifacts, distorted structures and blurred textures that existing inpainting methods may produce on high-resolution images, an improved image inpainting network, Gconv_CS, which incorporates a Supervised Attention Module (SAM) and Cross-Stage Feature Fusion (CSFF), was proposed. The SAM and CSFF modules were introduced into the two-stage network model of Gconv. SAM supervises the output features of the previous stage with a ground-truth image supervision signal, ensuring that the feature information passed to the next stage is valid. CSFF fuses the encoder-decoder features of the previous stage and feeds them into the encoder of the next stage, compensating for the feature information lost during previous-stage inpainting. Experimental results show that, with a missing-region ratio of 1% to 10%, compared with the baseline model Gconv, Gconv_CS improves the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) on the CelebA-HQ dataset by 1.5% and 0.5% respectively, and reduces the Fréchet Inception Distance (FID) and L1 loss by 21.8% and 14.8% respectively; on the Places2 dataset, the first two metrics are improved by 26.7% and 0.8%, and the last two are reduced by 7.9% and 37.9% respectively. When Gconv_CS is used to remove occlusions from giant panda faces, good visual inpainting results are obtained.
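For readers who want to see how the two modules plug into the pipeline, below is a minimal PyTorch-style sketch of the coarse-to-fine wiring described above. It is an illustration only: the sub-network interfaces (`coarse_net`, `fine_net`, `sam`, `csff`), argument names and tensor layouts are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class TwoStageInpainting(nn.Module):
    """Sketch of the coarse-to-fine layout in the abstract: stage 1 (coarse)
    -> SAM supervision/gating -> stage 2 (fine), with CSFF injecting fused
    stage-1 encoder/decoder features into the stage-2 encoder.
    All sub-modules are hypothetical placeholders supplied by the caller."""

    def __init__(self, coarse_net, fine_net, sam, csff):
        super().__init__()
        self.coarse_net, self.fine_net = coarse_net, fine_net
        self.sam, self.csff = sam, csff

    def forward(self, masked_img, mask):
        x = torch.cat([masked_img, mask], dim=1)
        # Assumed interface: coarse_net returns its encoder and decoder feature lists.
        enc1_feats, dec1_feats = self.coarse_net(x)
        # SAM: supervised by the ground-truth image, gates features for stage 2.
        gated_feats, coarse_img = self.sam(dec1_feats[-1], masked_img)
        # CSFF: fuse stage-1 encoder/decoder features for the stage-2 encoder.
        bridge = self.csff(enc1_feats, dec1_feats)
        fine_img = self.fine_net(gated_feats, bridge, mask)
        return coarse_img, fine_img
```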
CLC number:
Qiaoling HUANG, Bochuan ZHENG, Zicheng DING, Zedong WU. Improved image inpainting network incorporating supervised attention module and cross-stage feature fusion[J]. Journal of Computer Applications, 2024, 44(2): 572-579.
| Layer | Parameters | Feature size |
| --- | --- | --- |
| Input | — | |
| GConv | k=5; s=1; p=2 | |
| ↓ | k=3; s=2; p=1 | |
| GConv | k=3; s=1; p=1 | |
| ↓ | k=3; s=2; p=1 | |
| GConv ×2 | k=3; s=1; p=1 | |
| D-GConv ×4 | k=3; s=1; p=[ | |
| GConv | k=3; s=1; p=1 | |
| T-GConv | k=3; s=1; p=1 | |
| GConv | k=3; s=1; p=1 | |
| T-GConv | k=3; s=1; p=1 | |
| GConv | k=3; s=1; p=1 | |
| SAM | — | |
Tab. 1  Parameters of encoder-decoder in coarse network
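The SAM layer closing Tab. 1 is described in the abstract as supervising the stage-1 output with the ground-truth image before the features are handed on. Below is a minimal sketch of such a supervised attention block, in the spirit of MPRNet (reference 33); the kernel sizes, channel handling and residual connections are assumptions.

```python
import torch
import torch.nn as nn


class SAM(nn.Module):
    """Sketch of a supervised attention module: stage-1 features are mapped to an
    RGB prediction that is supervised with the ground-truth image (e.g. via L1 loss);
    the prediction is turned into a per-pixel attention map that gates the features
    passed to the fine stage. Channel count and 3x3 kernels are assumptions."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.feat_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.img_conv = nn.Conv2d(channels, 3, 3, padding=1)
        self.attn_conv = nn.Conv2d(3, channels, 3, padding=1)

    def forward(self, feats: torch.Tensor, img_in: torch.Tensor):
        pred_img = self.img_conv(feats) + img_in        # supervised against the ground truth
        attn = torch.sigmoid(self.attn_conv(pred_img))  # attention map from the supervised prediction
        out = self.feat_conv(feats) * attn + feats      # validated features for the next stage
        return out, pred_img
```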
| Layer | Parameters | Feature size |
| --- | --- | --- |
| Input | — | |
| ↓ | k=3; s=2; p=1 | |
| GConv | k=3; s=1; p=1 | |
| ↓ | k=3; s=2; p=1 | |
| GConv ×2 | k=3; s=1; p=1 | |
| D-GConv ×4 | k=3; s=1; p=[ | |
Tab. 2  Parameters of encoding branch structure in fine network
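The stage-2 encoder in Tab. 2 additionally receives cross-stage features. A minimal sketch of one CSFF connection at a single scale, again following the MPRNet-style design cited above; the 1×1 projections and the additive fusion are assumptions.

```python
import torch
import torch.nn as nn


class CSFF(nn.Module):
    """Sketch of cross-stage feature fusion: stage-1 encoder and decoder features
    at one scale are projected by 1x1 convolutions, summed, and added to the
    stage-2 encoder feature of the same scale."""

    def __init__(self, channels: int):
        super().__init__()
        self.enc_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.dec_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, enc1: torch.Tensor, dec1: torch.Tensor, enc2: torch.Tensor) -> torch.Tensor:
        # enc1/dec1: stage-1 encoder/decoder features; enc2: stage-2 encoder feature.
        return enc2 + self.enc_proj(enc1) + self.dec_proj(dec1)
```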
| Layer | Parameters | Feature size |
| --- | --- | --- |
| Input | — | |
| ↓ | k=3; s=2; p=1 | |
| GConv | k=3; s=1; p=1 | |
| ↓ | k=3; s=2; p=1 | |
| GConv ×2 | k=3; s=1; p=1 | |
| Contextual Attention | — | |
| GConv ×2 | k=3; s=1; p=1 | |
Tab. 3  Parameters of contextual attention encoding branch structure in fine network
| Layer | Parameters | Feature size |
| --- | --- | --- |
| GConv ×2 | k=3; s=1; p=1 | |
| T-GConv | k=3; s=1; p=1 | |
| GConv | k=3; s=1; p=1 | |
| T-GConv | k=3; s=1; p=1 | |
| GConv | k=3; s=1; p=1 | |
Tab. 4  Parameters of decoder in fine network
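Tabs. 1-4 are assembled from gated convolutions (GConv), their dilated variant (D-GConv) and an upsampling variant (T-GConv). The sketch below shows the gating idea from free-form inpainting with gated convolution (reference 19); the ELU activation and nearest-neighbour upsampling are common choices but assumptions here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GConv(nn.Module):
    """Gated convolution: one branch produces features, a parallel branch
    produces a soft (sigmoid) gate that learns where valid pixels are."""

    def __init__(self, in_ch, out_ch, k=3, s=1, p=1, dilation=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, k, s, p, dilation=dilation)
        self.gate = nn.Conv2d(in_ch, out_ch, k, s, p, dilation=dilation)

    def forward(self, x):
        return F.elu(self.feature(x)) * torch.sigmoid(self.gate(x))


class TGConv(nn.Module):
    """Upsampling ('transposed') gated convolution: nearest-neighbour upsample
    followed by a gated convolution (one common realisation)."""

    def __init__(self, in_ch, out_ch, k=3, s=1, p=1):
        super().__init__()
        self.gconv = GConv(in_ch, out_ch, k, s, p)

    def forward(self, x):
        return self.gconv(F.interpolate(x, scale_factor=2, mode="nearest"))
```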
| Layer | Parameters | Feature size |
| --- | --- | --- |
| Input | — | |
| Conv | k=7; s=1; p=3 | |
| Conv | k=4; s=2; p=1 | |
| Conv | k=4; s=2; p=1 | |
| Conv | k=4; s=2; p=1 | |
| Conv | k=4; s=2; p=1 | |
| Conv | k=4; s=2; p=1 | |
Tab. 5  Parameters of discriminator
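Tab. 5 lists one 7×7 stride-1 convolution followed by five 4×4 stride-2 convolutions. A minimal sketch of such a patch-level discriminator is given below; the channel widths, spectral normalisation and LeakyReLU slope are assumptions not stated in the table.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


def make_discriminator(in_ch: int = 4, base: int = 64) -> nn.Sequential:
    """Discriminator following the layer list in Tab. 5: one 7x7 stride-1 conv,
    then five 4x4 stride-2 convs. Channel widths are assumed."""
    chs = [base, base * 2, base * 4, base * 4, base * 4, base * 4]
    layers = [spectral_norm(nn.Conv2d(in_ch, chs[0], 7, 1, 3)), nn.LeakyReLU(0.2, inplace=True)]
    for c_in, c_out in zip(chs[:-1], chs[1:]):
        layers += [spectral_norm(nn.Conv2d(c_in, c_out, 4, 2, 1)), nn.LeakyReLU(0.2, inplace=True)]
    return nn.Sequential(*layers)


# A 256x256 input (image + mask channels) is mapped to an 8x8 patch-level score map.
scores = make_discriminator()(torch.randn(1, 4, 256, 256))
```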
| Gconv | CSFF | SAM | PSNR/dB | SSIM | FID | L1 loss |
| --- | --- | --- | --- | --- | --- | --- |
| √ | | | 37.801 | 0.987 3 | 0.876 | 0.002 74 |
| √ | √ | | 38.004 | 0.987 7 | 0.782 | 0.002 60 |
| √ | | √ | 38.021 | 0.987 9 | 0.775 | 0.002 59 |
| √ | √ | √ | 38.364 | 0.992 4 | 0.684 | 0.002 32 |
Tab. 6  Ablation experiment result comparison
| Masks | Model | PSNR/dB | SSIM | FID | L1 loss | Masks | Model | PSNR/dB | SSIM | FID | L1 loss |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [1%,10%) | CA | 32.14 | 0.968 | 2.96 | 0.016 8 | [30%,40%) | CA | 21.58 | 0.843 | 18.62 | 0.045 6 |
| | PEN | 33.21 | 0.973 | 2.41 | 0.012 3 | | PEN | 24.72 | 0.894 | 16.68 | 0.037 1 |
| | PConv | 35.65 | 0.986 | 1.58 | 0.008 6 | | PConv | 24.62 | 0.905 | 12.69 | 0.032 5 |
| | Gconv | 37.80 | 0.987 | 0.87 | 0.002 7 | | Gconv | 25.65 | 0.921 | 6.86 | 0.016 8 |
| | Gconv_CS | 38.36 | 0.992 | 0.68 | 0.002 3 | | Gconv_CS | 26.97 | 0.925 | 6.14 | 0.016 2 |
| [10%,20%) | CA | 27.32 | 0.945 | 5.68 | 0.019 7 | [40%,50%) | CA | 19.98 | 0.796 | 23.45 | 0.067 5 |
| | PEN | 28.05 | 0.955 | 4.19 | 0.013 1 | | PEN | 23.26 | 0.849 | 20.72 | 0.053 2 |
| | PConv | 30.01 | 0.962 | 3.68 | 0.011 2 | | PConv | 23.96 | 0.886 | 18.56 | 0.043 4 |
| | Gconv | 31.22 | 0.968 | 2.05 | 0.005 9 | | Gconv | 24.25 | 0.894 | 9.02 | 0.023 1 |
| | Gconv_CS | 32.68 | 0.970 | 1.92 | 0.005 8 | | Gconv_CS | 25.04 | 0.897 | 8.55 | 0.021 3 |
| [20%,30%) | CA | 23.89 | 0.892 | 9.62 | 0.030 2 | [50%,60%) | CA | 17.85 | 0.711 | 58.35 | 0.086 6 |
| | PEN | 25.74 | 0.913 | 8.38 | 0.024 2 | | PEN | 20.82 | 0.764 | 49.38 | 0.076 4 |
| | PConv | 27.22 | 0.926 | 6.28 | 0.016 8 | | PConv | 20.96 | 0.808 | 38.25 | 0.065 2 |
| | Gconv | 27.58 | 0.945 | 3.93 | 0.007 1 | | Gconv | 21.23 | 0.850 | 19.05 | 0.040 2 |
| | Gconv_CS | 29.41 | 0.947 | 3.73 | 0.006 8 | | Gconv_CS | 21.71 | 0.854 | 17.32 | 0.039 3 |
Tab. 7  Quantitative experiment result comparison on CelebA-HQ dataset
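The PSNR and L1 columns in Tabs. 7-8 follow their standard definitions; a generic sketch for reproducing them from image pairs is shown below. This is not the authors' evaluation script, and SSIM and FID need their usual dedicated implementations, so they are omitted here.

```python
import torch


def psnr_db(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return float(10.0 * torch.log10(max_val ** 2 / mse))


def l1_loss(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Mean absolute error, as reported in the 'L1 loss' columns."""
    return float(torch.mean(torch.abs(pred - target)))
```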
| Masks | Model | PSNR/dB | SSIM | FID | L1 loss | Masks | Model | PSNR/dB | SSIM | FID | L1 loss |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [1%,10%) | CA | 29.06 | 0.961 | 2.08 | 0.008 9 | [30%,40%) | CA | 18.90 | 0.783 | 18.46 | 0.051 9 |
| | PEN | 30.54 | 0.964 | 1.99 | 0.007 4 | | PEN | 20.18 | 0.795 | 17.96 | 0.039 7 |
| | PConv | 31.04 | 0.971 | 1.86 | 0.006 8 | | PConv | 20.44 | 0.820 | 16.33 | 0.034 3 |
| | Gconv | 32.25 | 0.970 | 1.51 | 0.006 6 | | Gconv | 21.56 | 0.796 | 15.62 | 0.040 8 |
| | Gconv_CS | 33.11 | 0.978 | 1.39 | 0.004 1 | | Gconv_CS | 22.36 | 0.861 | 34.26 | 0.031 4 |
| [10%,20%) | CA | 24.82 | 0.906 | 7.33 | 0.020 7 | [40%,50%) | CA | 16.84 | 0.721 | 56.89 | 0.070 7 |
| | PEN | 25.17 | 0.917 | 7.14 | 0.015 8 | | PEN | 18.60 | 0.733 | 34.04 | 0.054 0 |
| | PConv | 26.75 | 0.928 | 6.85 | 0.012 8 | | PConv | 18.86 | 0.762 | 28.63 | 0.046 2 |
| | Gconv | 27.05 | 0.921 | 5.85 | 0.015 5 | | Gconv | 19.65 | 0.727 | 24.66 | 0.056 8 |
| | Gconv_CS | 27.96 | 0.944 | 4.12 | 0.010 2 | | Gconv_CS | 20.94 | 0.819 | 22.35 | 0.044 6 |
| [20%,30%) | CA | 20.93 | 0.844 | 17.36 | 0.035 4 | [50%,60%) | CA | 15.11 | 0.668 | 82.67 | 0.101 1 |
| | PEN | 21.15 | 0.856 | 15.66 | 0.027 2 | | PEN | 16.62 | 0.664 | 52.27 | 0.077 6 |
| | PConv | 22.59 | 0.875 | 13.83 | 0.024 2 | | PConv | 16.85 | 0.667 | 47.09 | 0.077 4 |
| | Gconv | 23.84 | 0.860 | 12.40 | 0.027 3 | | Gconv | 17.96 | 0.626 | 32.90 | 0.080 9 |
| | Gconv_CS | 24.62 | 0.903 | 34.26 | 0.022 3 | | Gconv_CS | 18.98 | 0.766 | 31.35 | 0.076 2 |
Tab. 8  Quantitative experiment result comparison on Places2 dataset
1 | BERTALMIO M, SAPIRO G, CASELLES V, et al. Image inpainting[C]// Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. New York:ACM,2000: 417-424. 10.1145/344779.344972 |
2 | BALLESTER C, BERTALMIO M, CASELLES V, et al. Filling-in by joint interpolation of vector fields and gray levels[J]. IEEE Transactions on Image Processing, 2001, 10(8): 1200-1211. 10.1109/83.935036 |
3 | TSCHUMPERLÉ D, DERICHE R. Vector-valued image regularization with PDEs: a common framework for different applications[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(4): 506-517. 10.1109/tpami.2005.87 |
4 | BARNES C, SHECHTMAN E, FINKELSTEIN A, et al. PatchMatch: a randomized correspondence algorithm for structural image editing[J]. ACM Transactions on Graphics, 2009, 28(3): Article No. 24. 10.1145/1531326.1531330 |
5 | HUANG J-B, KANG S B, AHUJA N, et al. Image completion using planar structure guidance[J]. ACM Transactions on Graphics, 2014, 33(4): No. 129. 10.1145/2601097.2601205 |
6 | DING D, RAM S, RODRÍGUEZ J J. Image inpainting using nonlocal texture matching and nonlinear filtering[J]. IEEE Transactions on Image Processing, 2019, 28(4): 1705-1719. 10.1109/tip.2018.2880681 |
7 | PATHAK D, KRÄHENBÜHL P, DONAHUE J, et al. Context encoders: feature learning by inpainting[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE,2016: 2536-2544. 10.1109/cvpr.2016.278 |
8 | GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks [J]. Communications of the ACM, 2020, 63(11): 139-144. 10.1145/3422622 |
9 | IIZUKA S, SIMO-SERRA E, ISHIKAWA H. Globally and locally consistent image completion [J]. ACM Transactions on Graphics, 2017, 36(4): Article No. 107. 10.1145/3072959.3073659
10 | LIU G, REDA F A, SHIH K J, et al. Image inpainting for irregular holes using partial convolutions[C]// Proceedings of the 15th European Conference on Computer Vision. Cham: Springer,2018: 89-105. 10.1007/978-3-030-01252-6_6 |
11 | XIE C, LIU S, LI C, et al. Image inpainting with learnable bidirectional attention maps[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 8857-8866. 10.1109/iccv.2019.00895 |
12 | ZENG Y, FU J, CHAO H, et al. Learning pyramid-context encoder network for high-quality image inpainting[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 1486-1494. 10.1109/cvpr.2019.00158 |
13 | LIU H, JIANG B, SONG Y, et al. Rethinking image inpainting via a mutual encoder-decoder with feature equalizations[C]// Proceedings of the 16th European Conference on Computer Vision. Cham: Springer, 2020: 725-741. 10.1007/978-3-030-58536-5_43 |
14 | LIAO L, XIAO J, WANG Z, et al. Guidance and evaluation: semantic-aware image inpainting for mixed scenes[C]// Proceedings of the 16th European Conference on Computer Vision. Cham: Springer, 2020: 683-700. 10.1007/978-3-030-58583-9_41
15 | ZHANG R, QUAN W, WU B, et al. Pixel‐wise dense detector for image inpainting[J]. Computer Graphics Forum, 2020, 39(7): 471-482. 10.1111/cgf.14160 |
16 | WANG N, ZHANG Y, ZHANG L. Dynamic selection network for image inpainting[J]. IEEE Transactions on Image Processing, 2021, 30: 1784-1798. 10.1109/tip.2020.3048629 |
17 | ZHU M, HE D, LI X, et al. Image inpainting by end-to-end cascaded refinement with mask awareness[J]. IEEE Transactions on Image Processing, 2021, 30: 4855-4866. 10.1109/tip.2021.3076310 |
18 | YU J, LIN Z, YANG J, et al. Generative image inpainting with contextual attention[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE,2018: 5505-5514. 10.1109/cvpr.2018.00577 |
19 | YU J, LIN Z, YANG J, et al. Free-form image inpainting with gated convolution[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE,2019: 4470-4479. 10.1109/iccv.2019.00457 |
20 | ZHANG H, HU Z, LUO C, et al. Semantic image inpainting with progressive generative networks[C]// Proceedings of the 26th ACM International Conference on Multimedia. New York:ACM, 2018: 1939-1947. 10.1145/3240508.3240625 |
21 | NAZERI K, NG E, JOSEPH T, et al. EdgeConnect: structure guided image inpainting using edge prediction[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshops.Piscataway: IEEE, 2019: 3265-3274. 10.1109/iccvw.2019.00408 |
22 | REN Y, YU X, ZHANG R, et al. StructureFlow: image inpainting via structure-aware appearance flow[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE,2019: 181-190. 10.1109/iccv.2019.00027 |
23 | WU H, ZHOU J, LI Y. Deep generative model for image inpainting with local binary pattern learning and spatial attention[EB/OL].[2023-02-01]. . 10.1109/tmm.2021.3111491 |
24 | YUAN L J, JIANG M, LUO D L, et al. Portrait inpainting based on generative adversarial networks[J]. Journal of Computer Applications, 2020, 40(3): 842-846.
25 | FENG L, ZHANG L, ZHANG X L. Image inpainting based on dilated convolution[J]. Journal of Computer Applications, 2020, 40(3): 825-831. 10.11772/j.issn.1001-9081.2019081471
26 | HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780. 10.1162/neco.1997.9.8.1735 |
27 | GUO Z, CHEN Z, YU T, et al. Progressive image inpainting with full-resolution residual network[C]// Proceedings of the 27th ACM International Conference on Multimedia. New York:ACM, 2019: 2496-2504. 10.1145/3343031.3351022 |
28 | LI J, HE F, ZHANG L, et al. Progressive reconstruction of visual structure for image inpainting[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 5961-5970. 10.1109/iccv.2019.00606 |
29 | LI J, WANG N, ZHANG L, et al. Recurrent feature reasoning for image inpainting[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 7760-7768. 10.1109/cvpr42600.2020.00778 |
30 | ZENG Y, LIN Z, YANG J, et al. High-resolution image inpainting with iterative confidence feedback and guided upsampling[C]// Proceedings of the 16th European Conference on Computer Vision. Cham: Springer, 2020: 1-17. 10.1007/978-3-030-58529-7_1 |
31 | KARRAS T, AILA T, LAINE S, et al. Progressive growing of GANs for improved quality, stability, and variation[EB/OL].[2023-02-01]. . 10.1109/cvpr42600.2020.00813 |
32 | ZHOU B, LAPEDRIZA A, KHOSLA A, et al. Places: A 10 million image database for scene recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(6): 1452-1464. 10.1109/tpami.2017.2723009 |
33 | ZAMIR S W, ARORA A, KHAN S, et al. Multi-stage progressive image restoration[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE,2021: 14816-14826. 10.1109/cvpr46437.2021.01458 |