Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (3): 996-1002. DOI: 10.11772/j.issn.1001-9081.2024030359
Zhanjun JIANG, Yang LI, Jing LIAN, Xinfa MIAO
Received: 2024-04-01
Revised: 2024-06-04
Accepted: 2024-06-11
Online: 2024-10-12
Published: 2025-03-10
Contact: Yang LI
About author: JIANG Zhanjun, born in 1975 in Zhongwei, Ningxia, Ph. D., professor. His research interests include digital image processing and future mobile communication.
Abstract: To address the problems that brain tumor image segmentation models pay insufficient attention to tumor regions and tend to lose spatial context information, resulting in poor segmentation of tumor regions, a TransUNet brain tumor segmentation network combining a Coordinate Enhancement Learning mechanism (CEL) with multi-source sampling was proposed. First, a CEL was proposed and combined with ResNetv2 as the shallow feature extraction network of the model, so as to increase the attention paid to brain tumor regions. Second, a deep hybrid sampling feature extractor was designed, in which deformable attention and self-attention mechanisms perform multi-source sampling of the global and local information of brain tumors. Finally, an Interactive Level Fusion (ILF) module was designed between the encoder and the decoder, enabling interaction between deep and shallow feature information while reducing the amount of parameter computation. Experimental results on the BraTS2018 and BraTS2019 datasets show that, compared with the baseline TransUNet, the proposed model improves the mean Dice similarity coefficient (mDice), mean Intersection over Union (mIoU), mean Average Precision (mAP) and mean Recall (mRecall) by 4.84, 7.21, 3.83 and 3.15 percentage points respectively, while reducing the model size by 16.9 MB.
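The abstract names the Coordinate Enhancement Learning mechanism (CEL) without giving implementation details. Purely as a point of reference — assuming, which this page does not confirm, that the CEL builds on the direction-aware pooling idea of coordinate attention — a minimal PyTorch sketch of such a block is:

```python
import torch
import torch.nn as nn

class CoordinateAttentionSketch(nn.Module):
    """Coordinate-attention-style block: the feature map is pooled along the
    height and width axes separately, so the resulting attention weights keep
    positional information along each axis. Illustrative only; not the paper's CEL."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                   # pool over width  -> (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).transpose(2, 3)   # pool over height -> (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.transpose(2, 3)))   # (B, C, 1, W)
        return x * a_h * a_w                                    # re-weight each spatial position
```

In the paper this kind of enhancement is attached to the ResNetv2 shallow feature extractor; how exactly the CEL differs from plain coordinate attention is not described on this page.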
Zhanjun JIANG, Yang LI, Jing LIAN, Xinfa MIAO. Coordinate enhancement and multi-source sampling for brain tumor image segmentation[J]. Journal of Computer Applications, 2025, 45(3): 996-1002.
| Label | NET | ED | ET |
| --- | --- | --- | --- |
| WT | √ | √ | √ |
| TC | √ |  | √ |
| ET |  |  | √ |
Tab. 1 Lesion regions contained in the three types of labels
| Label | Model | mDice | mIoU | mAP | mPrecision | mRecall |
| --- | --- | --- | --- | --- | --- | --- |
| WT | DeepLabV3+ | 63.96 | 55.39 | 65.28 | 54.37 | 62.30 |
|  | U-Net3+ | 69.36 | 62.21 | 70.46 | 62.27 | 67.83 |
|  | AttentionUNet | 78.01 | 74.92 | 73.09 | 79.11 | 71.68 |
|  | SwinUNet | 84.18 | 79.81 | 82.59 | 85.23 | 82.61 |
|  | TransUNet | 86.71 | 83.24 | 87.29 | 89.41 | 87.13 |
|  | Proposed model | 91.51 | 89.47 | 90.25 | 92.63 | 90.41 |
| TC | DeepLabV3+ | 59.31 | 51.14 | 55.27 | 50.34 | 52.78 |
|  | U-Net3+ | 59.91 | 54.74 | 62.06 | 55.72 | 62.28 |
|  | AttentionUNet | 78.92 | 64.38 | 63.56 | 66.36 | 63.47 |
|  | SwinUNet | 75.84 | 74.66 | 73.15 | 75.34 | 76.83 |
|  | TransUNet | 79.42 | 76.98 | 78.09 | 80.43 | 80.16 |
|  | Proposed model | 83.11 | 83.17 | 83.41 | 82.58 | 81.37 |
| ET | DeepLabV3+ | 49.22 | 42.48 | 61.55 | 43.12 | 54.54 |
|  | U-Net3+ | 52.88 | 54.76 | 53.84 | 50.31 | 57.73 |
|  | AttentionUNet | 62.87 | 61.01 | 63.17 | 60.34 | 64.95 |
|  | SwinUNet | 69.63 | 64.03 | 68.31 | 73.13 | 70.64 |
|  | TransUNet | 71.39 | 66.83 | 72.15 | 75.45 | 73.52 |
|  | Proposed model | 77.42 | 76.03 | 75.37 | 77.25 | 78.49 |
Tab. 2 Segmentation results of different models on the BraTS datasets (%)
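Table 2 (and Table 3 below) report class-averaged segmentation metrics. The exact evaluation code is not given on this page; a minimal sketch of the usual per-class Dice and IoU definitions, assuming integer label maps and skipping the background class, is:

```python
import numpy as np

def mean_dice_iou(pred: np.ndarray, target: np.ndarray, num_classes: int):
    """Per-class Dice and IoU averaged over foreground classes.
    pred, target: integer label maps of identical shape."""
    dices, ious = [], []
    for c in range(1, num_classes):            # skip class 0 (background) - an assumption
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        dices.append(2.0 * inter / (p.sum() + t.sum() + 1e-8))
        ious.append(inter / (union + 1e-8))
    return float(np.mean(dices)), float(np.mean(ious))
```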
| Model | mDice | mIoU | mAP | mPrecision | mRecall |
| --- | --- | --- | --- | --- | --- |
| DeepLabV3+ | 72.14 | 69.77 | 73.31 | 74.56 | 73.36 |
| U-Net3+ | 76.12 | 70.91 | 74.34 | 76.26 | 77.67 |
| AttentionUNet | 83.25 | 81.48 | 83.39 | 82.76 | 87.57 |
| SwinUNet | 89.69 | 84.23 | 87.16 | 88.03 | 89.41 |
| TransUNet | 90.08 | 87.47 | 89.20 | 90.45 | 91.82 |
| Proposed model | 93.63 | 90.65 | 93.36 | 93.91 | 92.45 |
Tab. 3 Segmentation results of different models on the Kaggle_3m dataset (%)
| Structure | Model size/MB | mDice/% | mIoU/% | mAP/% | mPrecision/% | mRecall/% |
| --- | --- | --- | --- | --- | --- | --- |
| (a) | 125.714 512 | 78.25 | 73.59 | 76.80 | 77.99 | 72.90 |
| (b) | 138.151 416 | 80.12 | 76.55 | 81.51 | 82.53 | 80.17 |
| (c) | 174.437 261 | 80.98 | 75.45 | 77.90 | 80.23 | 80.07 |
Tab. 4 Comparison of experimental results of different enhancement learning structures
| Sampling method | Sampling count | Model size/MB | mDice/% | mIoU/% | mAP/% | mPrecision/% | mRecall/% |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Self Attention | 12 | 138.151 416 | 80.12 | 76.55 | 81.51 | 82.53 | 80.17 |
| Deformable Attention | 12 | 114.073 432 | 73.16 | 68.35 | 71.74 | 66.20 | 67.82 |
| Proposed method | 11 | 135.285 914 | 80.70 | 78.32 | 81.02 | 82.44 | 80.06 |
|  | 10 | 132.709 619 | 81.43 | 79.06 | 81.29 | 83.20 | 81.24 |
|  | 9 | 129.158 172 | 81.56 | 81.38 | 81.71 | 84.27 | 82.16 |
|  | 8 | 126.265 402 | 81.85 | 81.57 | 81.68 | 84.47 | 82.40 |
|  | 7 | 124.602 914 | 80.03 | 80.84 | 79.85 | 80.51 | 79.21 |
Tab. 5 Comparison of experimental results of different sampling methods
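Table 5 varies how many attention layers use the sampling scheme under comparison. The paper's deformable sampling module is not reproduced on this page; the sketch below is only a simplified, single-scale deformable-attention layer (hypothetical class and parameter names) illustrating the general idea of letting each query sample a few offset locations instead of attending to every position:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDeformableAttention(nn.Module):
    """Single-scale, single-head deformable attention sketch: each query predicts
    K sampling offsets and K attention weights, gathers values at those locations
    by bilinear sampling, and returns their weighted sum."""
    def __init__(self, dim: int, n_points: int = 4):
        super().__init__()
        self.n_points = n_points
        self.offset_proj = nn.Linear(dim, 2 * n_points)  # (dx, dy) per sampling point
        self.weight_proj = nn.Linear(dim, n_points)      # one weight per sampling point
        self.value_proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, query: torch.Tensor, feat: torch.Tensor, ref_points: torch.Tensor):
        # query: (B, N, C) tokens; feat: (B, C, H, W); ref_points: (B, N, 2) in [-1, 1]
        b, n, c = query.shape
        value = self.value_proj(feat)                                    # (B, C, H, W)
        offsets = self.offset_proj(query).view(b, n, self.n_points, 2)   # normalized offsets
        weights = self.weight_proj(query).softmax(dim=-1)                # (B, N, K)
        loc = (ref_points.unsqueeze(2) + offsets).clamp(-1.0, 1.0)       # (B, N, K, 2)
        sampled = F.grid_sample(value, loc, align_corners=False)         # (B, C, N, K)
        out = (sampled * weights.unsqueeze(1)).sum(dim=-1)               # (B, C, N)
        return self.out_proj(out.transpose(1, 2))                        # (B, N, C)
```

Alternating such layers with standard self-attention (e.g. nn.MultiheadAttention) is one plausible way to realize the "multi-source sampling" described in the abstract, but the actual layer arrangement behind Table 5 is not specified here.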
| CEL | DBS | ILF | Model size/MB | mDice/% | mIoU/% | mAP/% | mPrecision/% | mRecall/% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| — | — | — | 112.220 080 | 79.17 | 75.68 | 79.18 | 81.76 | 80.27 |
| √ | — | — | 138.151 416 | 80.12 | 76.55 | 81.51 | 82.53 | 80.17 |
| √ | √ | — | 126.265 402 | 81.85 | 81.57 | 81.68 | 84.47 | 82.40 |
| √ | √ | √ | 95.276 356 | 84.01 | 82.89 | 83.01 | 84.15 | 83.42 |
Tab. 6 Comparison of experimental results of overall improvements