Journal of Computer Applications, 2025, Vol. 45, Issue 1: 261-274. DOI: 10.11772/j.issn.1001-9081.2023121776
Section: Multimedia computing and computer simulation
Lu WANG¹, Dong LIU¹, Weiguang LIU²
Received: 2023-12-27
Revised: 2024-05-07
Accepted: 2024-05-08
Online: 2024-05-21
Published: 2025-01-10
Contact: Dong LIU
About author: WANG Lu, born in 1972, Ph. D., associate professor. His research interests include machine vision, artificial intelligence, and parallel algorithms.
Lu WANG, Dong LIU, Weiguang LIU. Interpretability study on deformable convolutional network and its application in butterfly species recognition models[J]. Journal of Computer Applications, 2025, 45(1): 261-274.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2023121776
Category | Minimum count | Maximum count | Number of images
---|---|---|---
Total | 68 | 96 | 6 497
Specimen images | 11 | 32 | 1 410
Ecological images | 46 | 75 | 5 087
Tab. 1 Training set image quantity distribution
Network model | Number of deformable convolutions | Deformable convolution position | Accuracy/%
---|---|---|---
ResNet50 | | | 93.67
D_v2-ResNet50@L0 | 1 | Conv1 | 95.79
D_v2-ResNet50@L1 | 1 | layer1[0].conv2 | 95.83
 | | layer1[ | 95.88
 | | layer1[ | 95.86
D_v2-ResNet50@L2 | 1 | Layer2[0].conv2 | 95.78
D_v2-ResNet50@L3 | 1 | Layer3[ | 95.82
D_v2-ResNet50@L4 | 1 | Layer4[0].conv2 | 95.52
D_v2-ResNet50@L1 | 2 | Layer1[0].conv2;Layer1[ | 95.68
 | | Layer1[0].conv2;Layer1[ | 95.84
D_v2-ResNet50@L0~L1 | 2 | Conv1;Layer1[ | 95.89
 | | Conv1;Layer1[ | 95.98
D_v2-ResNet50@L1~L3 | 3 | Layer1[0].conv2;Layer2[ | 95.99
D_v2-ResNet50@L0~L3 | 4 | Conv1;Layer1[ | 95.77
 | | Conv1;Layer1[ | 95.97
D_v2-ResNet50@L0~L2 | 3 | Conv1;Layer1[ | 96.07
 | | Layer1[0].conv2;Layer2[ | 96.01
 | | Conv1;Layer1[ | 96.28
Tab. 2 Results of ablation experiments introducing deformable convolution at different layers
Network model | Accuracy/% | Parameters/10⁶ | Floating-point operations/GFLOPs
---|---|---|---
Inception V3 | 93.37 | 20.93 | 2.85
AlexNet | 89.87 | 54.67 | 0.71
GoogLeNet | 93.94 | 5.41 | 1.51
Xception | 93.88 | 9.22 | 2.41
SqueezeNet V1.1 | 93.49 | 0.73 | 0.28
MobileNet V2 | 92.37 | 2.21 | 2.41
VGG16 | 92.03 | 128.35 | 15.50
ResNet50 | 93.67 | 22.57 | 4.12
DenseNet121 | 94.21 | 6.71 | 2.88
Tab. 3 Butterfly species recognition performance of different network models
Network model | Number of deformable convolutions | Deformable convolution position | Accuracy/% | Parameters/10⁶ | Floating-point operations/GFLOPs
---|---|---|---|---|---
VGG16 | | | 92.03 | 128.35 | 15.50
D_v2-VGG16 | 2 | feature[0] feature[ | 93.79 | 128.36 | 15.73
ResNet50 | | | 93.67 | 22.57 | 4.12
D_v2-ResNet50 | 2 | Conv1 Layer1[ Layer2[ | 96.28 | 22.62 | 4.24
DenseNet121 | | | 94.21 | 6.71 | 2.88
D_v2-DenseNet121 | 3 | conv0 denseblock1.denselayer2.conv2 denseblock1.denselayer5.conv2 | 97.03 | 6.77 | 3.13
Tab. 4 Experimental results of network models before and after improvement
CAM | SR@MoRF_Area↓ | | | SR@LeRF_Area↑ | | |
---|---|---|---|---|---|---
 | Fig. 13(h) | Fig. 14(h) | Fig. 15(h) | Fig. 13(i) | Fig. 14(i) | Fig. 15(i)
GradCAM | 6.13 | 22.78 | 22.69 | 85.95 | 77.20 | 92.95
GradCAM++ | 4.65 | 18.66 | 23.08 | 87.93 | 78.54 | 92.24
ScoreCAM | 7.67 | 25.33 | 21.62 | 86.61 | 75.75 | 92.07
LayerCAM | 4.62 | 18.90 | 22.78 | 87.90 | 78.23 | 92.13
EigenCAM | 7.74 | 32.44 | 23.09 | 85.34 | 66.36 | 90.11
FullGrad | 3.91 | 17.85 | 21.00 | 85.95 | 77.20 | 92.95
Tab. 5 Results of SR@MoRF_Area and SR@LeRF_Area for evaluating CAM
CAM | SR@MoRF_D↑ | | | | SR@LeRF_R↑ | | | |
---|---|---|---|---|---|---|---|---
 | Removal threshold 10% | Removal threshold 30% | Removal threshold 50% | Removal threshold 70% | Removal threshold 10% | Removal threshold 30% | Removal threshold 50% | Removal threshold 70%
GradCAM | 38.6 | 53.2 | 76.3 | 86.8 | 92.4 | 76.3 | 75.8 | 50.1
GradCAM++ | 41.5 | 57.7 | 78.7 | 87.1 | 95.7 | 81.7 | 76.2 | 51.2
ScoreCAM | 38.2 | 56.9 | 79.4 | 87.9 | 95.3 | 82.4 | 74.9 | 49.6
LayerCAM | 39.8 | 59.6 | 70.8 | 87.7 | 96.6 | 80.9 | 76.7 | 51.9
EigenCAM | 30.7 | 47.8 | 76.1 | 88.5 | 95.2 | 81.4 | 77.3 | 49.4
FullGrad | 55.9 | 75.2 | 81.1 | 88.4 | 98.3 | 91.8 | 86.7 | 51.6
Tab. 6 Results of SR@MoRF_D and SR@LeRF_R for evaluating CAM
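The SR@MoRF_D and SR@LeRF_R metrics perturb the input by removing a fixed fraction of pixels in Most-Relevant-First or Least-Relevant-First order of the CAM heatmap, then compare the classifier's score before and after. A NumPy sketch of the masking step (the `remove_pixels` name and zero fill value are illustrative; the paper's exact score-ratio definition is not reproduced here):

```python
import numpy as np

def remove_pixels(image, cam, fraction, order="morf", fill=0.0):
    """Mask out a fraction of pixels ranked by CAM relevance.

    order="morf" removes the Most Relevant First; order="lerf" removes
    the Least Relevant First. image: HxWxC array, cam: HxW relevance map.
    """
    flat = cam.ravel()
    k = int(fraction * flat.size)
    ranked = np.argsort(flat)          # ascending relevance
    if order == "morf":
        ranked = ranked[::-1]          # descending: most relevant first
    removed = np.zeros(flat.size, dtype=bool)
    removed[ranked[:k]] = True
    out = image.copy()
    out[removed.reshape(cam.shape)] = fill  # blank the removed pixels
    return out
```

A faithful CAM should cause a large score drop under MoRF removal and a small one under LeRF removal, which is why the two columns in Tabs. 5 and 6 carry opposite ↓/↑ directions.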