Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (4): 1269-1277.DOI: 10.11772/j.issn.1001-9081.2022030333
Special Issue: Multimedia computing and computer simulation
Zhi CHEN1, Xin LI1, Liyan LIN2, Jing ZHONG3, Peng SHI1
Received: 2022-03-22
Revised: 2022-07-29
Accepted: 2022-08-15
Online: 2023-01-11
Published: 2023-04-10
Contact: Peng SHI
About author:
CHEN Zhi, born in 1998 in Jiujiang, Jiangxi, M.S. candidate. His research interests include deep learning and medical image processing.
Supported by:
CLC Number:
Zhi CHEN, Xin LI, Liyan LIN, Jing ZHONG, Peng SHI. Multi-channel pathological image segmentation with gated axial self-attention[J]. Journal of Computer Applications, 2023, 43(4): 1269-1277.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2022030333
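The gated axial self-attention named in the title follows the idea in MedT [19]: full 2D self-attention is factorized into two 1D attentions along the height and width axes, and learnable gates scale the relative-positional terms so the network can down-weight them when they are unreliable. Below is a minimal single-head PyTorch sketch of attention along one axis; the class name, the single-head simplification, and the exact gate placement are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAxialAttention1D(nn.Module):
    """Single-head self-attention along one axis (here: width) of a
    B x C x H x W feature map, with learnable scalar gates on the
    relative-positional terms (simplified, illustrative sketch)."""

    def __init__(self, channels: int, axis_len: int):
        super().__init__()
        self.to_qkv = nn.Conv1d(channels, channels * 3, kernel_size=1, bias=False)
        # Relative positional biases for queries and keys along the axis.
        self.rel_q = nn.Parameter(torch.randn(axis_len, axis_len) * 0.02)
        self.rel_k = nn.Parameter(torch.randn(axis_len, axis_len) * 0.02)
        # Gates start at zero, so positional terms are only "switched on"
        # when they help -- the core idea of the gated variant.
        self.gate_q = nn.Parameter(torch.zeros(1))
        self.gate_k = nn.Parameter(torch.zeros(1))
        self.scale = channels ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Fold the height axis into the batch so attention runs along width only.
        x_axis = x.permute(0, 2, 1, 3).reshape(b * h, c, w)
        q, k, v = self.to_qkv(x_axis).chunk(3, dim=1)            # each: (b*h, c, w)
        logits = torch.einsum('bci,bcj->bij', q, k) * self.scale  # (b*h, w, w)
        logits = logits + self.gate_q * self.rel_q[:w, :w] + self.gate_k * self.rel_k[:w, :w]
        attn = F.softmax(logits, dim=-1)
        out = torch.einsum('bij,bcj->bci', attn, v)               # (b*h, c, w)
        return out.reshape(b, h, c, w).permute(0, 2, 1, 3)

# A height-axis pass is obtained by transposing H and W before the call.
```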
| Dataset | Total nuclei | Breast | Liver | Kidney | Prostate | Bladder | Colon | Stomach | Lung | Brain |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total | 28 846 | 8 | 6 | 9 | 8 | 4 | 3 | 2 | 2 | 2 |
| Training set | 21 623 | 6 | 6 | 6 | 6 | 2 | 2 | 2 | ― | ― |
| Test set | 7 223 | 2 | ― | 3 | 2 | 2 | 1 | ― | 2 | 2 |

Tab. 1 Details of MoNuSeg2020 dataset (organ columns give the number of images per organ)
| ResBlock | Axial self-attention | DiceBCELoss | Accuracy | Precision | Recall | Specificity | F1 | JS | DC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  |  |  | 91.75 | 78.85 | 81.17 | 94.26 | 79.44 | 66.04 | 79.44 |
| √ |  |  | 91.60 | 76.61 | 85.34 | 93.00 | 80.10 | 66.96 | 80.10 |
| √ | √ |  | 91.95 | 77.83 | 84.53 | 93.64 | 80.63 | 67.67 | 80.63 |
| √ | √ | √ | 92.00 | 78.30 | 85.13 | 93.61 | 80.87 | 68.03 | 80.87 |

Tab. 2 Ablation experiment results of the proposed model (metric values in %)
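Tab. 2 lists DiceBCELoss as one of the ablated components, i.e., a loss that combines binary cross-entropy with a soft Dice term. A minimal sketch of such a combined loss follows; the equal weighting of the two terms and the smoothing constant are assumptions, since the excerpt does not give the exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiceBCELoss(nn.Module):
    """Sum of binary cross-entropy and soft Dice loss for binary masks."""

    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.smooth = smooth  # assumed smoothing constant

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits)
        bce = F.binary_cross_entropy_with_logits(logits, targets)
        # Soft Dice computed over all pixels in the batch.
        intersection = (probs * targets).sum()
        dice = (2.0 * intersection + self.smooth) / (probs.sum() + targets.sum() + self.smooth)
        return bce + (1.0 - dice)
```

Usage would be along the lines of `loss = DiceBCELoss()(model(images), masks)` with `masks` given as float tensors in {0, 1}.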
| Model type | Model | F1 | IoU |
| --- | --- | --- | --- |
| CNN-based | U-Net[9] | 76.45±2.62 | 62.86±3.00 |
|  | UNet++[10] | 77.01±2.01 | 63.04±2.54 |
|  | Attention U-Net[27] | 76.67±1.06 | 63.47±1.16 |
| Transformer-based | MedT[19] | 77.46±2.38 | 63.37±3.11 |
|  | TransUNet[18] | 78.53±1.06 | 65.05±1.28 |
|  | Swin-Unet[21] | 77.69±0.94 | 63.77±1.15 |
|  | UCTransNet[28] | 79.08±0.67 | 65.50±0.91 |
|  | Proposed model | 79.11±0.99 | 65.63±1.33 |

Tab. 3 Comparison of segmentation performance among different models
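All metrics reported in Tab. 2 and Tab. 3 (accuracy, precision, recall, specificity, F1, Jaccard similarity JS/IoU, and Dice coefficient DC) can be derived from the pixel-wise confusion counts of a binarized prediction. A small NumPy sketch follows; the 0.5 binarization threshold is an assumption. For binary masks F1 and DC coincide, which is why those two columns in Tab. 2 are identical.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, thr: float = 0.5) -> dict:
    """Pixel-wise binary segmentation metrics from predicted probabilities
    and a ground-truth mask (both arrays of the same shape)."""
    p = (pred >= thr).astype(bool)
    t = target.astype(bool)
    tp = np.logical_and(p, t).sum()
    tn = np.logical_and(~p, ~t).sum()
    fp = np.logical_and(p, ~t).sum()
    fn = np.logical_and(~p, t).sum()
    eps = 1e-7  # avoids division by zero on empty masks
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)  # sensitivity
    return {
        'accuracy':    (tp + tn) / (tp + tn + fp + fn + eps),
        'precision':   precision,
        'recall':      recall,
        'specificity': tn / (tn + fp + eps),
        'F1':          2 * precision * recall / (precision + recall + eps),
        'JS':          tp / (tp + fp + fn + eps),          # Jaccard similarity / IoU
        'DC':          2 * tp / (2 * tp + fp + fn + eps),  # Dice coefficient
    }
```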
| Model | Parameters /10^6 | Computational cost /GFLOPs |
| --- | --- | --- |
| U-Net[9] | 31.04 | 437.71 |
| UNet++[10] | 36.63 | 1 108.74 |
| Attention U-Net[27] | 34.88 | 533.93 |
| MedT[19] | 14.18 | 1.37 |
| TransUNet[18] | 66.78 | 260.39 |
| Swin-Unet[21] | 27.12 | 61.21 |
| UCTransNet[28] | 66.22 | 343.46 |
| Proposed model | 58.01 | 622.97 |

Tab. 4 Comparison of parameter counts and computational costs among different models
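Tab. 4 reports model size in millions of parameters and computational cost in GFLOPs. Parameter counts can be read directly from a PyTorch model, while FLOP/MAC estimates usually come from a profiling tool. The sketch below assumes the third-party thop package and a 3×512×512 input; neither the tool nor the input resolution is specified in the excerpt.

```python
import torch
from thop import profile  # assumed third-party dependency: pip install thop

def model_cost(model: torch.nn.Module, input_shape=(1, 3, 512, 512)):
    """Return parameter count (in millions) and multiply-accumulate cost
    (in GFLOPs) for one forward pass; the input resolution is an assumption."""
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    dummy = torch.randn(*input_shape)
    macs, _ = profile(model, inputs=(dummy,), verbose=False)
    return params_m, macs / 1e9

# Example (hypothetical): params_m, gflops = model_cost(my_segmentation_model)
```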
1 | AL-HABSI Z, AL-NOUMANI H, HASHMI I AL. Determinants of health-related quality of life among Omanis hospitalized patients with cancer: a cross-sectional study[J]. Quality of Life Research, 2022, 31(7): 2061-2070. 10.1007/s11136-021-03061-3 |
2 | HU W M, LI C, LI X Y, et al. GasHisSDB: a new gastric histopathology image dataset for computer aided diagnosis of gastric cancer[J]. Computers in Biology and Medicine, 2022, 142: No.105207. 10.1016/j.compbiomed.2021.105207 |
3 | JAVED S, MAHMOOD A, DIAS J, et al. Multi-level feature fusion for nucleus detection in histology images using correlation filters[J]. Computers in Biology and Medicine, 2022, 143: No.105281. 10.1016/j.compbiomed.2022.105281 |
4 | KUMAR N, VERMA R, ANAND D, et al. A multi-organ nucleus segmentation challenge[J]. IEEE Transactions on Medical Imaging, 2020, 39(5): 1380-1391. |
5 | KUMAR N, VERMA R, SHARMA S, et al. A dataset and a technique for generalized nuclear segmentation for computational pathology[J]. IEEE Transactions on Medical Imaging, 2017, 36(7): 1550-1560. 10.1109/tmi.2017.2677499 |
6 | WU C S, LIN L, XUE Y J, et al. Hierarchical segmentation of pathological images based on self-supervised learning[J]. Journal of Computer Applications, 2020, 40(6): 1856-1862. 10.11772/j.issn.1001-9081.2019101863 |
7 | LIN T Y, SONG L, GAO Z F, et al. Evaluation of a deep learning-based model for 2-D echocardiography segmentation on small datasets[J]. Journal of Jinan University (Natural Science and Medicine Edition), 2022, 43(2): 191-198. |
8 | LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 3431-3440. 10.1109/cvpr.2015.7298965 |
9 | RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]// Proceedings of the 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 9351. Cham: Springer, 2015: 234-241. |
10 | ZHOU Z W, SIDDIQUEE M M R, TAJBAKHSH N, et al. UNet++: redesigning skip connections to exploit multiscale features in image segmentation[J]. IEEE Transactions on Medical Imaging, 2020, 39(6): 1856-1867. 10.1109/tmi.2019.2959609 |
11 | QIN J, HE Y J, ZHOU Y, et al. REU-Net: region-enhanced nuclei segmentation network[J]. Computers in Biology and Medicine, 2022, 146: No.105546. 10.1016/j.compbiomed.2022.105546 |
12 | DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 4171-4186. 10.18653/v1/n18-2 |
13 | DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[EB/OL]. (2021-06-03) [2022-02-23].. |
14 | WANG H Y, ZHU Y K, GREEN B, et al. Axial-DeepLab: stand-alone axial-attention for panoptic segmentation[C]// Proceedings of the 2020 European Conference on Computer Vision, LNCS 12349. Cham: Springer, 2020: 108-126. 10.1007/978-3-030-58548-8_7 |
15 | HO J, KALCHBRENNER N, WEISSENBORN D, et al. Axial attention in multidimensional transformers[EB/OL]. (2019-12-20) [2022-02-23].. |
16 | ZHENG S X, LU J C, ZHAO H S, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 6877-6886. 10.1109/cvpr46437.2021.00681 |
17 | ZHANG Y L, HIGASHITA R, FU H Z, et al. A multi-branch hybrid transformer network for corneal endothelial cell segmentation[C]// Proceedings of the 2021 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 12901. Cham: Springer, 2021: 99-108. |
18 | CHEN J N, LU Y Y, YU Q H, et al. TransUNet: Transformers make strong encoders for medical image segmentation[EB/OL]. (2021-02-08) [2022-02-23].. |
19 | VALANARASU J M J, OZA P, HACIHALILOGLU I, et al. Medical transformer: gated axial-attention for medical image segmentation[C]// Proceedings of the 2021 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 12901. Cham: Springer, 2021: 36-46. |
20 | LIU Z, LIN Y T, CAO Y, et al. Swin Transformer: hierarchical vision transformer using shifted windows[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9992-10002. 10.1109/iccv48922.2021.00986 |
21 | CAO H, WANG Y Y, CHEN J, et al. Swin-Unet: Unet-like pure transformer for medical image segmentation[EB/OL]. (2021-05-12) [2022-02-23].. 10.1007/978-3-031-25066-8_9 |
22 | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2017:6000-6010. |
23 | ZHANG Y D, LIU H Y, HU Q. TransFuse: fusing Transformers and CNNs for medical image segmentation[C]// Proceedings of the 2021 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 12901. Cham: Springer, 2021: 14-24. |
24 | HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. 10.1109/cvpr.2016.90 |
25 | VEIT A, WILBER M J, BELONGIE S. Residual networks behave like ensembles of relatively shallow networks[C]// Proceedings of the 30th International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2016: 550-558. |
26 | LUO K K, WANG T, YE F F, et al. U-Net segmentation model of brain tumor MR image based on attention mechanism and multi-view fusion[J]. Journal of Image and Graphics, 2021, 26(9): 2208-2218. 10.11834/jig.200584 |
27 | OKTAY O, SCHLEMPER J, LE FOLGOC L, et al. Attention U-Net: learning where to look for the pancreas[EB/OL]. (2018-05-20) [2022-02-23].. |
28 | WANG H N, CAO P, WANG J Q, et al. UCTransNet: rethinking the skip connections in U-Net from a channel-wise perspective with transformer[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2022: 2441-2449. 10.1609/aaai.v36i3.20144 |
29 | WU C S, ZHONG J, LIN L, et al. Segmentation of HE-stained meningioma pathological images based on pseudo-labels[J]. PLoS ONE, 2022, 17(2): No.e0263006. 10.1371/journal.pone.0263006 |