Journal of Computer Applications: 217-222. DOI: 10.11772/j.issn.1001-9081.2024040492
About the author:
SUN Chao (1998—), male, born in Weihai, Shandong, M.S. candidate. His research interests include image super-resolution and deep learning.
Chao SUN, Qiang WANG, Dawei YANG
Received:
2024-04-23
Revised:
2024-07-08
Accepted:
2024-07-10
Online:
2025-01-24
Published:
2024-12-31
Contact:
Dawei YANG
Abstract:
To address the problem that Transformer-based super-resolution networks cannot fully exploit surrounding information, a Mixed Attention Transformer image super-resolution network based on neighborhood attention (MAT) was proposed. First, a convolutional layer extracts shallow features, and a series of Residual Mixed Attention Groups (RMAG) together with a 3×3 convolutional layer performs deep feature extraction, exploiting the complementary strengths of neighborhood attention and channel attention, i.e., using global statistics while retaining strong local fitting ability. In addition, an overlapping cross-attention block was introduced to strengthen the interaction between features of adjacent windows. Then, a global residual connection was added to fuse shallow and deep features. Finally, the reconstruction module upsamples the fused features by pixel shuffle. Experimental comparisons on multiple datasets between MAT and algorithms such as RCAN-it (an improved-training variant of the Residual Channel Attention Network) show that MAT improves Peak Signal-to-Noise Ratio (PSNR) over state-of-the-art methods by 0.3 to 1.0 dB. Thus, MAT effectively improves restoration quality in image super-resolution tasks.
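The reconstruction step described above (pixel-shuffle upsampling of the fused features) can be sketched in plain NumPy. The channel count and scale factor below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r).

    This is the sub-pixel layout used by pixel-shuffle upsampling:
    each group of r^2 channels fills an r x r block of the
    upscaled spatial grid.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0
    c = c_r2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

# Illustrative use: upscale a 3-channel feature map by 2x.
feat = np.arange(3 * 4 * 8 * 8, dtype=np.float32).reshape(12, 8, 8)
up = pixel_shuffle(feat, 2)
print(up.shape)  # (3, 16, 16)
```

Pixel shuffle trades channels for spatial resolution without interpolation, which is why sub-pixel reconstruction heads of this kind (reference 33) are the standard choice in super-resolution networks.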
Chao SUN, Qiang WANG, Dawei YANG. Mixed attention image super-resolution network based on neighborhood attention[J]. Journal of Computer Applications, 2024: 217-222.
Table 1 PSNR comparison for different neighborhood sizes (dB)

| Neighborhood size | Set5 | Urban100 |
| --- | --- | --- |
| (7,7) | 38.38 | 33.53 |
| (11,11) | 38.39 | 33.64 |
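Table 1 varies the size of the window each pixel attends to. A minimal single-head sketch of neighborhood attention, assuming identity Q/K/V projections and a small random feature map (both simplifications, not the paper's implementation), illustrates the locality being varied:

```python
import numpy as np

def neighborhood_attention(x, k):
    """Single-head neighborhood attention over an (H, W, C) feature map.

    Each pixel attends only to the k x k window centred on it, rather
    than to the whole image. At the borders the window is clamped here
    for simplicity (the NAT paper instead shifts it to stay k x k).
    """
    h, w, c = x.shape
    out = np.zeros_like(x)
    r = k // 2
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            keys = x[i0:i1, j0:j1].reshape(-1, c)   # window features
            q = x[i, j]                             # query pixel
            logits = keys @ q / np.sqrt(c)
            attn = np.exp(logits - logits.max())
            attn /= attn.sum()                      # softmax over window
            out[i, j] = attn @ keys                 # weighted sum of values
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 4))
y = neighborhood_attention(feat, 7)
print(y.shape)  # (8, 8, 4)
```

Because each output is a convex combination of features inside its window, enlarging the neighborhood from (7,7) to (11,11) only changes how much context each pixel aggregates, matching the small PSNR differences in Table 1.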
Table 2 Ablation study results on the Set5 dataset

| CAB | OCAB | PSNR/dB |
| --- | --- | --- |
| × | × | 38.16 |
| √ | × | 38.24 |
| √ | √ | 38.39 |
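Table 2 ablates the channel attention block (CAB) and the overlapping cross-attention block (OCAB). The channel-attention half can be sketched as a squeeze-and-excitation style gate; the weights `w1`/`w2` below are random stand-ins for learned parameters, not values from the paper:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Channel attention in the RCAN/CAB lineage.

    Global average pooling yields one statistic per channel; a small
    two-layer bottleneck turns it into per-channel weights in (0, 1)
    that rescale the feature map.
    """
    s = x.mean(axis=(1, 2))                 # (C,) global statistics
    z = np.maximum(w1 @ s, 0.0)             # bottleneck + ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))     # sigmoid gate, shape (C,)
    return x * g[:, None, None]             # rescale each channel

rng = np.random.default_rng(1)
feat = rng.standard_normal((16, 8, 8))
w1 = rng.standard_normal((4, 16)) * 0.1     # reduction ratio 4 (assumed)
w2 = rng.standard_normal((16, 4)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (16, 8, 8)
```

The global average pool is the "global statistics" the abstract refers to: one scalar summarizes each whole channel, complementing the strictly local window of neighborhood attention.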
Table 3 PSNR and SSIM comparison with state-of-the-art methods

| Scale | Method | Set5 PSNR/dB | Set5 SSIM | Set14 PSNR/dB | Set14 SSIM | BSD100 PSNR/dB | BSD100 SSIM | Urban100 PSNR/dB | Urban100 SSIM | Manga109 PSNR/dB | Manga109 SSIM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ×2 | EDSR | 38.11 | 0.9602 | 33.92 | 0.9195 | 32.32 | 0.9013 | 32.93 | 0.9351 | 39.10 | 0.9773 |
| ×2 | RCAN | 38.27 | 0.9614 | 34.12 | 0.9216 | 32.41 | 0.9027 | 33.34 | 0.9384 | 39.44 | 0.9786 |
| ×2 | SAN | 38.31 | 0.9620 | 34.07 | 0.9213 | 32.42 | 0.9028 | 33.10 | 0.9370 | 39.32 | 0.9792 |
| ×2 | RFANet | 38.26 | 0.9615 | 34.16 | 0.9220 | 32.41 | 0.9026 | 33.33 | 0.9389 | 39.44 | 0.9783 |
| ×2 | HAN | 38.27 | 0.9614 | 34.16 | 0.9217 | 32.41 | 0.9027 | 33.35 | 0.9385 | 39.46 | 0.9785 |
| ×2 | CSNLN | 38.28 | 0.9616 | 34.12 | 0.9223 | 32.40 | 0.9024 | 33.25 | 0.9386 | 39.37 | 0.9785 |
| ×2 | NLSA | 38.34 | 0.9618 | 34.08 | 0.9231 | 32.43 | 0.9027 | 33.42 | 0.9394 | 39.59 | 0.9789 |
| ×2 | ELAN | 38.36 | 0.9620 | 34.20 | 0.9228 | 32.45 | 0.9030 | 33.44 | 0.9391 | 39.62 | 0.9793 |
| ×2 | RCAN-it | | | 34.49 | 0.9250 | 32.48 | 0.9034 | 33.62 | 0.9410 | 39.88 | 0.9799 |
| ×2 | MAT | 38.39 | 0.9620 | | | | | 33.64 | 0.9412 | | |
| ×3 | EDSR | 34.65 | 0.9280 | 30.52 | 0.8462 | 29.25 | 0.8093 | 28.80 | 0.8653 | 34.17 | 0.9476 |
| ×3 | RCAN | 34.74 | 0.9299 | 30.65 | 0.8482 | 29.32 | 0.8111 | 29.09 | 0.8702 | 34.44 | 0.9499 |
| ×3 | SAN | 34.75 | 0.9300 | 30.59 | 0.8476 | 29.33 | 0.8112 | 28.93 | 0.8671 | 34.30 | 0.9494 |
| ×3 | RFANet | 34.79 | 0.9300 | 30.67 | 0.8487 | 29.34 | 0.8115 | 29.15 | 0.8720 | 34.59 | 0.9506 |
| ×3 | HAN | 34.75 | 0.9299 | 30.67 | 0.8483 | 29.32 | 0.8110 | 29.10 | 0.8705 | 34.48 | 0.9500 |
| ×3 | CSNLN | 34.74 | 0.9300 | 30.66 | 0.8482 | 29.33 | 0.8105 | 29.13 | 0.8712 | 34.45 | 0.9502 |
| ×3 | NLSA | 34.85 | 0.9306 | 30.70 | 0.8485 | 29.34 | 0.8117 | 29.25 | 0.8726 | 34.57 | 0.9508 |
| ×3 | ELAN | | | | | 29.38 | 0.8124 | 29.32 | 0.8745 | 34.73 | 0.9517 |
| ×3 | RCAN-it | 34.86 | 0.9308 | 30.76 | 0.8505 | | | | | | |
| ×3 | MAT | 34.91 | 0.9313 | 30.80 | 0.8514 | 29.40 | 0.8131 | 29.52 | 0.8787 | 35.01 | 0.9527 |
| ×4 | EDSR | 32.46 | 0.8968 | 28.80 | 0.7876 | 27.71 | 0.7420 | 26.64 | 0.8033 | 31.02 | 0.9148 |
| ×4 | RCAN | 32.63 | 0.9002 | 28.87 | 0.7889 | 27.77 | 0.7436 | 26.82 | 0.8087 | 31.22 | 0.9173 |
| ×4 | SAN | 32.64 | 0.9003 | 28.92 | 0.7888 | 27.78 | 0.7436 | 26.79 | 0.8068 | 31.18 | 0.9169 |
| ×4 | RFANet | 32.66 | 0.9004 | 28.88 | 0.7894 | 27.79 | 0.7442 | 26.92 | 0.8112 | 31.41 | 0.9180 |
| ×4 | HAN | 32.64 | 0.9002 | 28.90 | 0.7890 | 27.80 | 0.7442 | 26.85 | 0.8094 | 31.42 | 0.9177 |
| ×4 | CSNLN | 32.68 | 0.9004 | 28.95 | 0.7888 | 27.80 | 0.7439 | 27.22 | 0.8168 | 31.43 | 0.9201 |
| ×4 | NLSA | 32.59 | 0.9000 | 28.87 | 0.7891 | 27.78 | 0.7444 | 26.96 | 0.8109 | 31.27 | 0.9184 |
| ×4 | ELAN | 32.75 | 0.9022 | 28.96 | 0.7914 | 27.83 | 0.7459 | 27.13 | 0.8167 | 31.68 | 0.9226 |
| ×4 | RCAN-it | | | | | | | | | | |
| ×4 | MAT | 32.78 | 0.9028 | 29.06 | 0.7935 | 27.88 | 0.7475 | 27.43 | 0.8238 | 32.00 | 0.9246 |
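The PSNR figures reported above follow the standard definition, PSNR = 10·log10(MAX² / MSE). A small self-contained helper, applied here to hypothetical 4×4 inputs, shows the computation:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 1.0          # every pixel off by 1 -> MSE = 1
print(psnr(ref, noisy))    # 10*log10(255^2) ~ 48.13 dB
```

Because the measure is logarithmic, the 0.3 to 1.0 dB gains reported for MAT correspond to a meaningful reduction in mean squared reconstruction error.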
1 | DONG C, LOY C C, HE K, et al. Learning a deep convolutional network for image super-resolution[C]// Proceedings of the 2014 European Conference on Computer Vision, LNCS 8692. Cham: Springer, 2014: 184-199. |
2 | ZHANG Y, LI K, LI K, et al. Image super-resolution using very deep residual channel attention networks[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11211. Cham: Springer, 2018: 294-310. |
3 | DAI T, CAI J, ZHANG Y, et al. Second-order attention network for single image super-resolution[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 11057-11066. |
4 | SUN X, LI X, LI J, et al. Research progress of image super-resolution restoration based on deep learning[J]. Acta Automatica Sinica, 2017, 43(5): 697-709. |
5 | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2017: 6000-6010. |
6 | LI W, LU X, QIAN S, et al. On efficient transformer-based image pre-training for low-level vision[EB/OL]. [2023-03-19]. |
7 | LIM B, SON S, KIM H, et al. Enhanced deep residual networks for single image super-resolution[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2017: 1132-1140. |
8 | CHENG D, CHEN J, KOU Q, et al. Lightweight mine image super-resolution reconstruction method combining hierarchical features and attention mechanism[J]. Chinese Journal of Scientific Instrument, 2022, 43(8): 73-84. |
9 | DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[EB/OL]. [2023-02-22]. |
10 | LIANG J, CAO J, SUN G, et al. SwinIR: image restoration using Swin Transformer[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops. Piscataway: IEEE, 2021: 1833-1844. |
11 | HASSANI A, WALTON S, LI J, et al. Neighborhood attention transformer[C]// Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 6185-6194. |
12 | KIM J, LEE J K, LEE K M. Accurate image super-resolution using very deep convolutional networks[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 1646-1654. |
13 | KIM J, LEE J K, LEE K M. Deeply-recursive convolutional network for image super-resolution[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 1637-1645. |
14 | TAI Y, YANG J, LIU X. Image super-resolution via deep recursive residual network[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 2790-2798. |
15 | TAI Y, YANG J, LIU X, et al. MemNet: a persistent memory network for image restoration[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 4549-4557. |
16 | HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. |
17 | NIU B, WEN W, REN W, et al. Single image super-resolution via a holistic attention network[C]// Proceedings of the 2020 European Conference on Computer Vision, LNCS 12357. Cham: Springer, 2020: 191-207. |
18 | MEI Y, FAN Y, ZHOU Y, et al. Image super-resolution with cross-scale non-local attention and exhaustive self-exemplars mining[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 5689-5698. |
19 | ZHANG Y, LI K, LI K, et al. Residual non-local attention networks for image restoration[EB/OL]. [2023-03-24]. |
20 | CHEN L, ZHANG H, XIAO J, et al. SCA-CNN: spatial and channel-wise attention in convolutional networks for image captioning[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 6298-6306. |
21 | FU J, LIU J, TIAN H, et al. Dual attention network for scene segmentation[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 3141-3149. |
22 | CHEN Q, WU Q, WANG J, et al. MixFormer: mixing features across windows and dimensions[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 5249-5259. |
23 | DOSOVITSKIY A. An image is worth 16×16 words: Transformers for image recognition at scale[EB/OL]. (2020-10-20)[2023-03-24]. |
24 | ZHOU S, ZHANG J, ZUO W, et al. Cross-scale internal graph neural network for image super-resolution[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2020: 3499-3509. |
25 | CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[C]// Proceedings of the 2020 European Conference on Computer Vision, LNCS 12346. Cham: Springer, 2020: 213-229. |
26 | TOUVRON H, CORD M, DOUZE M, et al. Training data-efficient image transformers & distillation through attention[C]// Proceedings of the 38th International Conference on Machine Learning. New York: JMLR.org, 2021: 10347-10357. |
27 | HASSANI A, WALTON S, SHAH N, et al. Escaping the big data paradigm with compact Transformers[EB/OL]. [2023-02-12]. |
28 | CARON M, TOUVRON H, MISRA I, et al. Emerging properties in self-supervised vision Transformers[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9650-9660. |
29 | RAMACHANDRAN P, PARMAR N, VASWANI A, et al. Stand-alone self-attention in vision models[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2019:68-80. |
30 | CAO J, LI Y, ZHANG K, et al. Video super-resolution Transformer[EB/OL]. [2023-09-12]. |
31 | LIU Z, LIN Y, CAO Y, et al. Swin Transformer: hierarchical vision transformer using shifted windows[C]// Proceedings of the 2021 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2021: 10012-10022. |
32 | VASWANI A, RAMACHANDRAN P, SRINIVAS A, et al. Scaling local self-attention for parameter efficient visual backbones[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 12889-12899. |
33 | SHI W, CABALLERO J, HUSZÁR F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 1874-1883. |
34 | HENDRYCKS D, GIMPEL K. Gaussian Error Linear Units (GELUs)[EB/OL]. [2023-02-27]. |
35 | CHEN X, WANG X, ZHOU J, et al. Activating more pixels in image super-resolution transformer[C]// Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 22367-22377. |
36 | TIMOFTE R, AGUSTSSON E, VAN GOOL L, et al. NTIRE 2017 challenge on single image super-resolution: methods and results[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2017: 1110-1121. |
37 | LIU J, ZHANG W, TANG Y, et al. Residual feature aggregation network for image super-resolution[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2356-2365. |
38 | MEI Y, FAN Y, ZHOU Y. Image super-resolution with non-local sparse attention[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 3516-3525. |
39 | ZHANG X, ZENG H, GUO S, et al. Efficient long-range attention network for image super-resolution[C]// Proceedings of the 2022 European Conference on Computer Vision, LNCS 13677. Cham: Springer, 2022: 649-667. |
40 | LIN Z, GARG P, BANERJEE A, et al. Revisiting RCAN: improved training for image super-resolution[EB/OL]. [2023-01-27]. |