Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (6): 1736-1742. DOI: 10.11772/j.issn.1001-9081.2022060852
Special topic: The 37th CCF National Conference of Computer Applications (CCF NCCA 2022)
Monocular depth estimation method based on pyramid split attention network

Wenju LI 1, Mengying LI 1, Liu CUI 1, Wanghui CHU 1, Yi ZHANG 1, Hui GAO 2
Received: 2022-06-14
Revised: 2022-08-05
Accepted: 2022-08-11
Online: 2022-10-08
Published: 2023-06-10
Contact: Hui GAO
About author: LI Wenju, born in 1964, Ph. D., professor, senior member of CCF. His research interests include computer vision, pattern recognition, and intelligent detection.
Abstract: To address the still inaccurate prediction of edges and of the regions with maximum depth in monocular depth estimation, a monocular depth estimation method based on a Pyramid Split attention Network (PS-Net) was proposed. First, building on the Boundary-induced and Scene-aggregated Network (BS-Net), PS-Net introduces the Pyramid Split Attention (PSA) module to process the spatial information of multi-scale features and effectively establish long-range dependencies among the multi-scale channel attentions, thereby extracting both the boundaries where the depth gradient changes sharply and the regions with maximum depth. Then, the Mish function was adopted as the activation function in the decoder to further improve network performance. Finally, the network was trained and evaluated on the NYUD v2 (New York University Depth dataset v2) and iBims-1 (independent Benchmark images and matched scans v1) datasets. Experimental results on the iBims-1 dataset show that, compared with BS-Net, the proposed network reduces the Directed Depth Error (DDE) by 1.42 percentage points, and the proportion of correctly predicted depth pixels reaches 81.69%, demonstrating the high accuracy of the proposed network in depth prediction.
Wenju LI, Mengying LI, Liu CUI, Wanghui CHU, Yi ZHANG, Hui GAO. Monocular depth estimation method based on pyramid split attention network[J]. Journal of Computer Applications, 2023, 43(6): 1736-1742.
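To make the two components named in the abstract concrete, the following PyTorch sketch shows a Mish activation and an EPSANet-style PSA block. It is a minimal sketch reconstructed from the public descriptions of Mish [9] and EPSANet [23], not the authors' released code; the class names, the four kernel sizes (3, 5, 7, 9) and the squeeze-excitation reduction ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Mish(nn.Module):
    """Mish activation, x * tanh(softplus(x)), as defined in [9]."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))


class SEWeight(nn.Module):
    """Squeeze-and-excitation channel attention used inside each PSA branch."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, max(channels // reduction, 1), kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(channels // reduction, 1), channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.fc(self.pool(x))  # (B, C, 1, 1) attention weights


class PSA(nn.Module):
    """Pyramid Split Attention sketch: split the channels into branches,
    extract multi-scale features with different kernel sizes, compute a
    channel attention per branch, then let a softmax across branches
    exchange information between the scales before re-weighting them."""
    def __init__(self, channels, kernels=(3, 5, 7, 9)):
        super().__init__()
        assert channels % len(kernels) == 0
        width = channels // len(kernels)
        self.branches = nn.ModuleList(
            [nn.Conv2d(width, width, k, padding=k // 2) for k in kernels]
        )
        self.se = nn.ModuleList([SEWeight(width) for _ in kernels])

    def forward(self, x):
        b, c, h, w = x.shape
        splits = torch.chunk(x, len(self.branches), dim=1)
        feats = [conv(s) for conv, s in zip(self.branches, splits)]  # multi-scale features
        attns = [se(f) for se, f in zip(self.se, feats)]             # per-branch channel attention
        attn = torch.softmax(torch.stack(attns, dim=1), dim=1)       # interaction across scales
        out = torch.stack(feats, dim=1) * attn                       # recalibrated branches
        return out.reshape(b, c, h, w)


# Example: a channel-preserving PSA block followed by a Mish activation.
x = torch.randn(2, 64, 32, 32)
y = Mish()(PSA(64)(x))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Because the block preserves the channel count, it can in principle be dropped between encoder and decoder stages of a BS-Net-like backbone wherever the feature width divides evenly among the branches, which is consistent with how the abstract describes adding PSA to BS-Net.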
| Method | PE_plan↓/cm | PE_ori↓/(°) | DBE_acc↓/px | DBE_com↓/px | DDE_0↑/% | DDE_m↓/% | DDE_p↓/% |
|---|---|---|---|---|---|---|---|
| Ref. [ ] | 6.97 | 28.56 | 5.07 | 7.83 | 70.10 | 29.46 | 0.43 |
| Ref. [ ] | 6.46 | 19.13 | 6.19 | 9.17 | 81.02 | 17.01 | 1.97 |
| Ref. [ ] | 3.45 | 43.44 | 2.98 | 4.96 | 82.27 | 16.38 | 1.34 |
| Ref. [ ] | 6.67 | 16.52 | 2.15 | | 84.96 | | |
| Ref. [ ] | 4.33 | 27.89 | 2.17 | 5.33 | 80.89 | 17.70 | 1.42 |
| Proposed method | 3.89 | 29.44 | 2.14 | 5.18 | 81.69 | 16.28 | 2.02 |

Tab. 1 Planarity errors (PE), depth boundary errors (DBE) and directed depth errors (DDE) on iBims-1 dataset
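The directed depth errors in Tab. 1 follow the iBims-1 protocol of Koch et al. [25]: a virtual plane at a fixed reference distance (3 m in [25]) splits the scene, DDE_0 counts the pixels whose predicted depth falls on the same side of the plane as the ground truth, and DDE_m and DDE_p count the pixels displaced to the wrong side by under- and over-estimation respectively. As a sketch of the DDE_0 case (consult [25] for the exact formulation):

$$\varepsilon^{0}_{\mathrm{DDE}} = \frac{1}{N}\,\bigl|\{\, i : \operatorname{sign}(\hat d_i - d_{\mathrm{ref}}) = \operatorname{sign}(d_i - d_{\mathrm{ref}}) \,\}\bigr| \times 100\%, \qquad d_{\mathrm{ref}} = 3\,\mathrm{m},$$

where $d_i$ and $\hat d_i$ are the ground-truth and predicted depths over the $N$ valid pixels.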
| Dataset | Method | RMSE | REL | Log10 | δ<1.25 | δ<1.25² | δ<1.25³ |
|---|---|---|---|---|---|---|---|
| iBims-1 | Ref. [ ] | 1.610 | 0.350 | 0.190 | 0.220 | 0.550 | 0.780 |
| iBims-1 | Ref. [ ] | 1.200 | 0.260 | 0.130 | 0.500 | 0.780 | 0.910 |
| iBims-1 | Ref. [ ] | 1.140 | 0.220 | 0.110 | 0.510 | 0.840 | 0.940 |
| iBims-1 | Ref. [ ] | 1.160 | 0.230 | 0.120 | 0.510 | 0.830 | 0.930 |
| iBims-1 | Proposed method | 1.140 | 0.230 | 0.110 | 0.520 | 0.840 | 0.940 |
| NYUD v2 | Ref. [ ] | 0.586 | 0.121 | 0.052 | 0.811 | 0.954 | 0.987 |
| NYUD v2 | Ref. [ ] | 0.624 | 0.156 | | 0.776 | 0.953 | 0.989 |
| NYUD v2 | Ref. [ ] | 0.573 | 0.127 | 0.055 | 0.811 | 0.953 | 0.988 |
| NYUD v2 | Ref. [ ] | 0.572 | 0.139 | | 0.815 | 0.963 | 0.991 |
| NYUD v2 | Ref. [ ] | 0.582 | 0.120 | 0.055 | 0.817 | 0.954 | 0.987 |
| NYUD v2 | Ref. [ ] | 0.559 | 0.126 | 0.055 | 0.843 | 0.965 | 0.991 |
| NYUD v2 | Proposed method | 0.558 | 0.126 | 0.054 | 0.843 | 0.968 | 0.991 |

Tab. 2 Relative depth errors and accuracies on iBims-1 and NYUD v2 datasets
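Written out, with $d_i$ the ground-truth and $\hat d_i$ the predicted depth over the $N$ valid pixels, the Tab. 2 measures are the standard monocular depth metrics:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_i(\hat d_i-d_i)^2},\qquad \mathrm{REL}=\frac{1}{N}\sum_i\frac{|\hat d_i-d_i|}{d_i},\qquad \mathrm{Log10}=\frac{1}{N}\sum_i\bigl|\log_{10}\hat d_i-\log_{10}d_i\bigr|,$$

and the three accuracy columns give the proportion of pixels with $\max(\hat d_i/d_i,\ d_i/\hat d_i) < 1.25^{\,j}$ for $j = 1, 2, 3$ (for the errors lower is better; for the accuracies higher is better).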
| Dataset | Method | m=6 | m=12 | m=24 |
|---|---|---|---|---|
| iBims-1 | Ref. [ ] | 0.193 | 0.201 | 0.213 |
| iBims-1 | Ref. [ ] | 0.169 | 0.192 | 0.210 |
| iBims-1 | Ref. [ ] | 0.170 | 0.190 | 0.203 |
| iBims-1 | Ref. [ ] | 0.181 | 0.200 | 0.211 |
| iBims-1 | Proposed method | 0.176 | 0.193 | 0.200 |
| NYUD v2 | Ref. [ ] | 0.157 | 0.173 | 0.180 |
| NYUD v2 | Ref. [ ] | 0.116 | 0.140 | 0.157 |
| NYUD v2 | Ref. [ ] | 0.110 | 0.124 | 0.138 |
| NYUD v2 | Proposed method | 0.104 | 0.128 | 0.134 |

Tab. 3 Distance errors of the farthest region under different partition rates on iBims-1 and NYUD v2 datasets
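The exact protocol behind Tab. 3 is given in the cited works; since it is not reproduced here, the sketch below shows one plausible reading, stated as an assumption: split the ground-truth depth range into m equal intervals (the "partition rate" m) and average the relative error over the pixels of the farthest interval. The function name and the equal-width partition are illustrative, not the paper's protocol.

```python
import numpy as np

def farthest_region_error(pred: np.ndarray, gt: np.ndarray, m: int) -> float:
    """Hypothetical reading of Tab. 3's metric: partition the ground-truth
    depth range into m equal intervals and average the relative error
    over the pixels that fall in the farthest (deepest) interval."""
    edges = np.linspace(gt.min(), gt.max(), m + 1)
    farthest = gt >= edges[-2]                 # pixels in the deepest interval
    rel = np.abs(pred[farthest] - gt[farthest]) / gt[farthest]
    return float(rel.mean())
```

Under this reading, a larger m isolates a thinner slice of the deepest scene content, which is consistent with the errors in Tab. 3 growing as m increases.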
| Method | Precision (>0.25) | Recall (>0.25) | F-measure (>0.25) | Precision (>0.50) | Recall (>0.50) | F-measure (>0.50) | Precision (>1.00) | Recall (>1.00) | F-measure (>1.00) |
|---|---|---|---|---|---|---|---|---|---|
| Ref. [ ] | 0.516 | 0.400 | 0.436 | 0.600 | 0.366 | 0.439 | 0.794 | 0.407 | 0.525 |
| Ref. [ ] | 0.577 | 0.626 | 0.591 | 0.531 | 0.509 | 0.506 | 0.617 | 0.489 | 0.533 |
| Ref. [ ] | 0.489 | 0.435 | 0.454 | 0.536 | 0.422 | 0.463 | 0.670 | 0.479 | 0.548 |
| Ref. [ ] | 0.639 | 0.502 | 0.556 | 0.663 | 0.504 | 0.565 | 0.756 | 0.537 | 0.620 |
| Proposed method | 0.640 | 0.503 | 0.557 | 0.666 | 0.507 | 0.566 | 0.759 | 0.540 | 0.621 |

Tab. 4 Depth boundary accuracies of different methods on NYUD v2 dataset
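In Tab. 4 (and Tab. 6 below), edge pixels are extracted from the depth maps by thresholding the depth gradients at the listed values, and the predicted edges are matched against the ground-truth edges; this is assumed here to follow the edge-accuracy protocol of [6], which uses the same thresholds of 0.25, 0.5 and 1. Precision $P$ is the fraction of predicted edge pixels that match a ground-truth edge, recall $R$ the fraction of ground-truth edge pixels recovered, and the combined score is the usual F-measure,

$$F = \frac{2PR}{P + R}.$$

Note that when per-image F-measures are averaged over the test set, the reported F need not equal the value recomputed from the averaged $P$ and $R$, which is why the F columns do not recompute exactly from the table.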
| Variant | δ<1.25 | δ<1.25² | δ<1.25³ | REL | RMS | Log10 |
|---|---|---|---|---|---|---|
| Baseline | 0.507 | 0.823 | 0.926 | 0.243 | 1.172 | 0.122 |
| +PSA | 0.512 | 0.826 | 0.930 | 0.239 | 1.170 | 0.120 |
| +DCE+BUBF | 0.518 | 0.831 | 0.931 | 0.235 | 1.162 | 0.120 |
| +PSA+DCE+BUBF | 0.523 | 0.836 | 0.941 | 0.235 | 1.147 | 0.117 |

Tab. 5 Prediction results of ablation experiments on iBims-1 dataset (DCE and BUBF denote the depth correlation encoder and bottom-up boundary fusion modules of BS-Net [8])
| Threshold | Method | Precision | Recall | F-measure |
|---|---|---|---|---|
| >0.25 | +PSA | 0.644 | 0.483 | 0.546 |
| >0.25 | +DCE+BUBF | 0.639 | 0.502 | 0.556 |
| >0.25 | +PSA+DCE+BUBF | 0.640 | 0.503 | 0.557 |
| >0.50 | +PSA | 0.667 | 0.488 | 0.558 |
| >0.50 | +DCE+BUBF | 0.663 | 0.504 | 0.565 |
| >0.50 | +PSA+DCE+BUBF | 0.666 | 0.507 | 0.566 |
| >1.00 | +PSA | 0.764 | 0.525 | 0.614 |
| >1.00 | +DCE+BUBF | 0.756 | 0.537 | 0.620 |
| >1.00 | +PSA+DCE+BUBF | 0.759 | 0.540 | 0.621 |

Tab. 6 Accuracies of predicted boundary pixels in depth maps under different thresholds on NYUD v2 dataset
1. SNAVELY N, SEITZ S M, SZELISKI R. Skeletal graphs for efficient structure from motion[C]// Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2008: 1-8. DOI: 10.1109/cvpr.2008.4587678
2. ZHANG R, TSAI P S, CRYER J E, et al. Shape-from-shading: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, 21(8): 690-706. DOI: 10.1109/34.784284
3. BI T T, LIU Y, WENG D D, et al. Survey on supervised learning based depth estimation from a single image[J]. Journal of Computer-Aided Design and Computer Graphics, 2018, 30(8): 1383-1393 (in Chinese). DOI: 10.3724/sp.j.1089.2018.16882
4. ROY A, TODOROVIC S. Monocular depth estimation using neural regression forest[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 5506-5514. DOI: 10.1109/cvpr.2016.594
5. CHEN X T, CHEN X J, ZHA Z J. Structure-aware residual pyramid network for monocular depth estimation[C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2019: 694-700. DOI: 10.24963/ijcai.2019/98
6. HU J J, OZAY M, ZHANG Y, et al. Revisiting single image depth estimation: toward higher resolution maps with accurate object boundaries[C]// Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision. Piscataway: IEEE, 2019: 1043-1051. DOI: 10.1109/wacv.2019.00116
7. GUIZILINI V, AMBRUŞ R, PILLAI S, et al. 3D packing for self-supervised monocular depth estimation[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2482-2491. DOI: 10.1109/cvpr42600.2020.00256
8. XUE F, CAO J F, ZHOU Y, et al. Boundary-induced and scene-aggregated network for monocular depth prediction[J]. Pattern Recognition, 2021, 115: No.107901. DOI: 10.1016/j.patcog.2021.107901
9. MISRA D. Mish: a self regularized non-monotonic activation function[C]// Proceedings of the 2020 British Machine Vision Conference. Durham: BMVA Press, 2020: No.928.
10. MALIK J, ROSENHOLTZ R. Computing local surface orientation and shape from texture for curved surfaces[J]. International Journal of Computer Vision, 1997, 23(2): 149-168. DOI: 10.1023/a:1007958829620
11. SAXENA A, SCHULTE J, NG A Y. Depth estimation using monocular and stereo cues[C]// Proceedings of the 20th International Joint Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press, 2007: 2197-2203.
12. LIU B Y, GOULD S, KOLLER D. Single image depth estimation from predicted semantic labels[C]// Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2010: 1253-1260. DOI: 10.1109/cvpr.2010.5539823
13. LI Y, CHEN X W, WANG Y, et al. Progress in deep learning based monocular image depth estimation[J]. Laser & Optoelectronics Progress, 2019, 56(19): No.190001 (in Chinese). DOI: 10.3788/lop56.190001
14. EIGEN D, PUHRSCH C, FERGUS R. Depth map prediction from a single image using a multi-scale deep network[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014, 2: 2366-2374.
15. EIGEN D, FERGUS R. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture[C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 2650-2658. DOI: 10.1109/iccv.2015.304
16. LIU F Y, SHEN C H, LIN G S. Deep convolutional neural fields for depth estimation from a single image[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 5162-5170. DOI: 10.1109/cvpr.2015.7299152
17. LIU F Y, SHEN C H, LIN G S, et al. Learning depth from single monocular images using deep convolutional neural fields[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 2024-2039. DOI: 10.1109/tpami.2015.2505283
18. LI B, SHEN C H, DAI Y C, et al. Depth and surface normal estimation from monocular images using regression on deep features and hierarchical CRFs[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1119-1127. DOI: 10.1109/cvpr.2015.7298715
19. ALI U, BAYRAMLI B, ALSARHAN T, et al. A lightweight network for monocular depth estimation with decoupled body and edge supervision[J]. Image and Vision Computing, 2021, 113: No.104261. DOI: 10.1016/j.imavis.2021.104261
20. GARG R, VIJAY KUMAR B G, CARNEIRO G, et al. Unsupervised CNN for single view depth estimation: geometry to the rescue[C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9912. Cham: Springer, 2016: 740-756.
21. GODARD C, MAC AODHA O, BROSTOW G J. Unsupervised monocular depth estimation with left-right consistency[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 6602-6611. DOI: 10.1109/cvpr.2017.699
22. LAINA I, RUPPRECHT C, BELAGIANNIS V, et al. Deeper depth prediction with fully convolutional residual networks[C]// Proceedings of the 4th International Conference on 3D Vision. Piscataway: IEEE, 2016: 239-248. DOI: 10.1109/3dv.2016.32
23. ZHANG H, ZU K K, LU J, et al. EPSANet: an efficient pyramid split attention block on convolutional neural network[C]// Proceedings of the 2022 Asian Conference on Computer Vision, LNCS 13843. Cham: Springer, 2023: 541-557.
24. Shanghai Institute of Technology. A depth estimation method and device based on pyramid split attention: 202210186323.9[P]. 2022-05-31 (in Chinese).
25. KOCH T, LIEBEL L, FRAUNDORFER F, et al. Evaluation of CNN-based single-image depth estimation methods[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11131. Cham: Springer, 2019: 331-348.
26. SWAMI K, BONDADA P V, BAJPAI P K. ACED: accurate and edge-consistent monocular depth estimation[C]// Proceedings of the 2020 IEEE International Conference on Image Processing. Piscataway: IEEE, 2020: 1376-1380. DOI: 10.1109/icip40778.2020.9191113
27. DHARMASIRI T, SPEK A, DRUMMOND T. Joint prediction of depths, normals and surface curvature from RGB images using CNNs[C]// Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2017: 1505-1512. DOI: 10.1109/iros.2017.8205954
28. XU D, RICCI E, OUYANG W L, et al. Multi-scale continuous CRFs as sequential deep networks for monocular depth estimation[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 161-169. DOI: 10.1109/cvpr.2017.25
29. LEE J H, HEO M, KIM K R, et al. Single-image depth estimation based on Fourier domain analysis[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 330-339. DOI: 10.1109/cvpr.2018.00042
30. XU D, OUYANG W L, WANG X G, et al. PAD-Net: multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 675-684. DOI: 10.1109/cvpr.2018.00077