Journal of Computer Applications, 2023, Vol. 43, Issue 6: 1736-1742. DOI: 10.11772/j.issn.1001-9081.2022060852
Special Issue: The 37th CCF National Conference of Computer Applications (CCF NCCA 2022)
Monocular depth estimation method based on pyramid split attention network
Wenju LI1, Mengying LI1, Liu CUI1, Wanghui CHU1, Yi ZHANG1, Hui GAO2
Received: 2022-06-14
Revised: 2022-08-05
Accepted: 2022-08-11
Online: 2022-10-08
Published: 2023-06-10
Contact: Hui GAO
About the author: LI Wenju, born in 1964 in Yingkou, Liaoning, Ph.D., professor, senior member of CCF. His research interests include computer vision, pattern recognition, and intelligent detection.
Wenju LI, Mengying LI, Liu CUI, Wanghui CHU, Yi ZHANG, Hui GAO. Monocular depth estimation method based on pyramid split attention network[J]. Journal of Computer Applications, 2023, 43(6): 1736-1742.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2022060852
| Method | PE_plan↓/cm | PE_ori↓/(°) | DBE_acc↓/px | DBE_com↓/px | DDE_0↑/% | DDE_m↓/% | DDE_p↓/% |
|---|---|---|---|---|---|---|---|
| Ref. [ ] | 6.97 | 28.56 | 5.07 | 7.83 | 70.10 | 29.46 | 0.43 |
| Ref. [ ] | 6.46 | 19.13 | 6.19 | 9.17 | 81.02 | 17.01 | 1.97 |
| Ref. [ ] | 3.45 | 43.44 | 2.98 | 4.96 | 82.27 | 16.38 | 1.34 |
| Ref. [ ] | 6.67 | 16.52 | 2.15 | | 84.96 | | |
| Ref. [ ] | 4.33 | 27.89 | 2.17 | 5.33 | 80.89 | 17.70 | 1.42 |
| Proposed method | 3.89 | 29.44 | 2.14 | 5.18 | 81.69 | 16.28 | 2.02 |
Tab. 1 Planarity errors, depth boundary errors and directed depth errors on iBims-1 dataset
| Dataset | Method | RMSE | REL | Log10 | δ<1.25 | δ<1.25² | δ<1.25³ |
|---|---|---|---|---|---|---|---|
| iBims-1 | Ref. [ ] | 1.610 | 0.350 | 0.190 | 0.220 | 0.550 | 0.780 |
| iBims-1 | Ref. [ ] | 1.200 | 0.260 | 0.130 | 0.500 | 0.780 | 0.910 |
| iBims-1 | Ref. [ ] | 1.140 | 0.220 | 0.110 | 0.510 | 0.840 | 0.940 |
| iBims-1 | Ref. [ ] | 1.160 | 0.230 | 0.120 | 0.510 | 0.830 | 0.930 |
| iBims-1 | Proposed method | 1.140 | 0.230 | 0.110 | 0.520 | 0.840 | 0.940 |
| NYUD v2 | Ref. [ ] | 0.586 | 0.121 | 0.052 | 0.811 | 0.954 | 0.987 |
| NYUD v2 | Ref. [ ] | 0.624 | 0.156 | | 0.776 | 0.953 | 0.989 |
| NYUD v2 | Ref. [ ] | 0.573 | 0.127 | 0.055 | 0.811 | 0.953 | 0.988 |
| NYUD v2 | Ref. [ ] | 0.572 | 0.139 | | 0.815 | 0.963 | 0.991 |
| NYUD v2 | Ref. [ ] | 0.582 | 0.120 | 0.055 | 0.817 | 0.954 | 0.987 |
| NYUD v2 | Ref. [ ] | 0.559 | 0.126 | 0.055 | 0.843 | 0.965 | 0.991 |
| NYUD v2 | Proposed method | 0.558 | 0.126 | 0.054 | 0.843 | 0.968 | 0.991 |
Tab. 2 Relative depth errors and accuracies on iBims-1 and NYUD v2 datasets
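The metrics reported in Tab. 2 (RMSE, REL, Log10, and the threshold accuracies δ<1.25^k) follow the standard definitions used throughout the monocular depth estimation literature. As a hedged illustration (a minimal sketch assuming those standard definitions, not code from the paper), they can be computed as:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth evaluation metrics:
    RMSE, mean relative error (REL), mean log10 error,
    and threshold accuracies delta < 1.25^k for k = 1, 2, 3."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    mask = gt > 0                      # evaluate only valid ground-truth pixels
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rel = np.mean(np.abs(pred - gt) / gt)
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [float(np.mean(ratio < 1.25 ** k)) for k in (1, 2, 3)]
    return rmse, rel, log10, deltas
```

Lower is better for RMSE, REL, and Log10; higher is better for the δ accuracies.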
| Dataset | Method | m=6 | m=12 | m=24 |
|---|---|---|---|---|
| iBims-1 | Ref. [ ] | 0.193 | 0.201 | 0.213 |
| iBims-1 | Ref. [ ] | 0.169 | 0.192 | 0.210 |
| iBims-1 | Ref. [ ] | 0.170 | 0.190 | 0.203 |
| iBims-1 | Ref. [ ] | 0.181 | 0.200 | 0.211 |
| iBims-1 | Proposed method | 0.176 | 0.193 | 0.200 |
| NYUD v2 | Ref. [ ] | 0.157 | 0.173 | 0.180 |
| NYUD v2 | Ref. [ ] | 0.116 | 0.140 | 0.157 |
| NYUD v2 | Ref. [ ] | 0.110 | 0.124 | 0.138 |
| NYUD v2 | Proposed method | 0.104 | 0.128 | 0.134 |
Tab. 3 Distance errors of the farthest region under different partition rates on iBims-1 and NYUD v2 datasets
| Method | Accuracy (>0.25) | Recall (>0.25) | F-score (>0.25) | Accuracy (>0.50) | Recall (>0.50) | F-score (>0.50) | Accuracy (>1.00) | Recall (>1.00) | F-score (>1.00) |
|---|---|---|---|---|---|---|---|---|---|
| Ref. [ ] | 0.516 | 0.400 | 0.436 | 0.600 | 0.366 | 0.439 | 0.794 | 0.407 | 0.525 |
| Ref. [ ] | 0.577 | 0.626 | 0.591 | 0.531 | 0.509 | 0.506 | 0.617 | 0.489 | 0.533 |
| Ref. [ ] | 0.489 | 0.435 | 0.454 | 0.536 | 0.422 | 0.463 | 0.670 | 0.479 | 0.548 |
| Ref. [ ] | 0.639 | 0.502 | 0.556 | 0.663 | 0.504 | 0.565 | 0.756 | 0.537 | 0.620 |
| Proposed method | 0.640 | 0.503 | 0.557 | 0.666 | 0.507 | 0.566 | 0.759 | 0.540 | 0.621 |
Tab. 4 Depth boundary accuracies of different methods on NYUD v2 dataset
| Variant | δ<1.25 | δ<1.25² | δ<1.25³ | REL | RMS | Log10 |
|---|---|---|---|---|---|---|
| Baseline | 0.507 | 0.823 | 0.926 | 0.243 | 1.172 | 0.122 |
| +PSA | 0.512 | 0.826 | 0.930 | 0.239 | 1.170 | 0.120 |
| +DCE+BUBF | 0.518 | 0.831 | 0.931 | 0.235 | 1.162 | 0.120 |
| +PSA+DCE+BUBF | 0.523 | 0.836 | 0.941 | 0.235 | 1.147 | 0.117 |
Tab. 5 Prediction results of ablation experiments on iBims-1 dataset
| Threshold | Method | Accuracy | Recall | F-score |
|---|---|---|---|---|
| >0.25 | +PSA | 0.644 | 0.483 | 0.546 |
| >0.25 | +DCE+BUBF | 0.639 | 0.502 | 0.556 |
| >0.25 | +PSA+DCE+BUBF | 0.640 | 0.503 | 0.557 |
| >0.50 | +PSA | 0.667 | 0.488 | 0.558 |
| >0.50 | +DCE+BUBF | 0.663 | 0.504 | 0.565 |
| >0.50 | +PSA+DCE+BUBF | 0.666 | 0.507 | 0.566 |
| >1.00 | +PSA | 0.764 | 0.525 | 0.614 |
| >1.00 | +DCE+BUBF | 0.756 | 0.537 | 0.620 |
| >1.00 | +PSA+DCE+BUBF | 0.759 | 0.540 | 0.621 |
Tab. 6 Accuracies of predicted boundary pixels in depth maps under different thresholds on NYUD v2 dataset
[1] SNAVELY N, SEITZ S M, SZELISKI R. Skeletal graphs for efficient structure from motion[C]// Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2008: 1-8. DOI: 10.1109/cvpr.2008.4587678
[2] ZHANG R, TSAI P S, CRYER J E, et al. Shape-from-shading: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, 21(8): 690-706. DOI: 10.1109/34.784284
[3] BI T T, LIU Y, WENG D D, et al. Survey on supervised learning based depth estimation from a single image[J]. Journal of Computer-Aided Design and Computer Graphics, 2018, 30(8): 1383-1393. (in Chinese) DOI: 10.3724/sp.j.1089.2018.16882
[4] ROY A, TODOROVIC S. Monocular depth estimation using neural regression forest[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 5506-5514. DOI: 10.1109/cvpr.2016.594
[5] CHEN X T, CHEN X J, ZHA Z J. Structure-aware residual pyramid network for monocular depth estimation[C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2019: 694-700. DOI: 10.24963/ijcai.2019/98
[6] HU J J, OZAY M, ZHANG Y, et al. Revisiting single image depth estimation: toward higher resolution maps with accurate object boundaries[C]// Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision. Piscataway: IEEE, 2019: 1043-1051. DOI: 10.1109/wacv.2019.00116
[7] GUIZILINI V, AMBRUŞ R, PILLAI S, et al. 3D packing for self-supervised monocular depth estimation[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2482-2491. DOI: 10.1109/cvpr42600.2020.00256
[8] XUE F, CAO J F, ZHOU Y, et al. Boundary-induced and scene-aggregated network for monocular depth prediction[J]. Pattern Recognition, 2021, 115: No.107901. DOI: 10.1016/j.patcog.2021.107901
[9] MISRA D. Mish: a self regularized non-monotonic activation function[C]// Proceedings of the 2020 British Machine Vision Conference. Durham: BMVA Press, 2020: No.928.
[10] MALIK J, ROSENHOLTZ R. Computing local surface orientation and shape from texture for curved surfaces[J]. International Journal of Computer Vision, 1997, 23(2): 149-168. DOI: 10.1023/a:1007958829620
[11] SAXENA A, SCHULTE J, NG A Y. Depth estimation using monocular and stereo cues[C]// Proceedings of the 20th International Joint Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press, 2007: 2197-2203.
[12] LIU B Y, GOULD S, KOLLER D. Single image depth estimation from predicted semantic labels[C]// Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2010: 1253-1260. DOI: 10.1109/cvpr.2010.5539823
[13] LI Y, CHEN X W, WANG Y, et al. Progress in deep learning based monocular image depth estimation[J]. Lasers and Optoelectronics Progress, 2019, 56(19): No.190001. (in Chinese) DOI: 10.3788/lop56.190001
[14] EIGEN D, PUHRSCH C, FERGUS R. Depth map prediction from a single image using a multi-scale deep network[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014, 2: 2366-2374.
[15] EIGEN D, FERGUS R. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture[C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 2650-2658. DOI: 10.1109/iccv.2015.304
[16] LIU F Y, SHEN C H, LIN G S. Deep convolutional neural fields for depth estimation from a single image[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 5162-5170. DOI: 10.1109/cvpr.2015.7299152
[17] LIU F Y, SHEN C H, LIN G S, et al. Learning depth from single monocular images using deep convolutional neural fields[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 2024-2039. DOI: 10.1109/tpami.2015.2505283
[18] LI B, SHEN C H, DAI Y C, et al. Depth and surface normal estimation from monocular images using regression on deep features and hierarchical CRFs[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1119-1127. DOI: 10.1109/cvpr.2015.7298715
[19] ALI U, BAYRAMLI B, ALSARHAN T, et al. A lightweight network for monocular depth estimation with decoupled body and edge supervision[J]. Image and Vision Computing, 2021, 113: No.104261. DOI: 10.1016/j.imavis.2021.104261
[20] GARG R, VIJAY KUMAR B G, CARNEIRO G, et al. Unsupervised CNN for single view depth estimation: geometry to the rescue[C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9912. Cham: Springer, 2016: 740-756.
[21] GODARD C, MAC AODHA O, BROSTOW G J. Unsupervised monocular depth estimation with left-right consistency[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 6602-6611. DOI: 10.1109/cvpr.2017.699
[22] LAINA I, RUPPRECHT C, BELAGIANNIS V, et al. Deeper depth prediction with fully convolutional residual networks[C]// Proceedings of the 4th International Conference on 3D Vision. Piscataway: IEEE, 2016: 239-248. DOI: 10.1109/3dv.2016.32
[23] ZHANG H, ZU K K, LU J, et al. EPSANet: an efficient pyramid split attention block on convolutional neural network[C]// Proceedings of the 2022 Asian Conference on Computer Vision, LNCS 13843. Cham: Springer, 2023: 541-557.
[24] Shanghai Institute of Technology. A depth estimation method and device based on pyramid split attention: 202210186323.9[P]. 2022-05-31. (in Chinese)
[25] KOCH T, LIEBEL L, FRAUNDORFER F, et al. Evaluation of CNN-based single-image depth estimation methods[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11131. Cham: Springer, 2019: 331-348.
[26] SWAMI K, BONDADA P V, BAJPAI P K. ACED: accurate and edge-consistent monocular depth estimation[C]// Proceedings of the 2020 IEEE International Conference on Image Processing. Piscataway: IEEE, 2020: 1376-1380. DOI: 10.1109/icip40778.2020.9191113
[27] DHARMASIRI T, SPEK A, DRUMMOND T. Joint prediction of depths, normals and surface curvature from RGB images using CNNs[C]// Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2017: 1505-1512. DOI: 10.1109/iros.2017.8205954
[28] XU D, RICCI E, OUYANG W L, et al. Multi-scale continuous CRFs as sequential deep networks for monocular depth estimation[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 161-169. DOI: 10.1109/cvpr.2017.25
[29] LEE J H, HEO M, KIM K R, et al. Single-image depth estimation based on Fourier domain analysis[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 330-339. DOI: 10.1109/cvpr.2018.00042
[30] XU D, OUYANG W L, WANG X G, et al. PAD-Net: multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 675-684. DOI: 10.1109/cvpr.2018.00077