Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (9): 3017-3025. DOI: 10.11772/j.issn.1001-9081.2024081188
• Multimedia Computing and Computer Simulation •
Fang WANG, Jing HU, Rui ZHANG, Wenting FAN

Received: 2024-08-21; Revised: 2024-10-14; Accepted: 2024-10-21; Online: 2024-11-07; Published: 2025-09-10

Contact: Jing HU

About the author: WANG Fang, born in 1989, M. S., lecturer. Her research interests include graphic and image processing, and deep learning.

Abstract:
To address the lack of methods in medical image segmentation that use traditional image segmentation algorithms to guide Convolutional Neural Networks (CNNs), a Content-Guided Multi-Angle Feature Fusion medical image segmentation network (CGMAFF-Net) was proposed. First, a grayscale image and an Otsu-thresholded image were passed through a Transformer-based tiny U-shaped feature extraction module to generate a lesion-region guidance map, which was applied to the original medical image through Adaptive Combined Weighting (ACW) for initial guidance. Then, a residual network (ResNet) was used to extract downsampled features from the weighted medical image, and a Multi-Angle Feature Fusion (MAFF) module was used to fuse the 1/16- and 1/8-scale feature maps. Finally, Reverse Attention (RA) was used to upsample and progressively restore the feature map size, so as to predict the key lesion regions. Experimental results on the CVC-ClinicDB, Kvasir-SEG, and ISIC 2018 datasets show that, compared with MSRAformer, the multi-scale spatial reverse attention network with the best current segmentation performance, CGMAFF-Net improves mean Intersection over Union (mIoU) by 0.97, 0.78, and 0.11 percentage points, respectively; compared with the classic U-Net, CGMAFF-Net improves mIoU by 2.66, 8.94, and 1.69 percentage points, respectively, fully demonstrating the effectiveness and superiority of CGMAFF-Net.
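As background for the Otsu-based guidance step described above, the sketch below implements plain Otsu thresholding in NumPy and a much-simplified content-guided weighting of the input image. The fixed weights `alpha` and `beta` and the function names are illustrative stand-ins only; the paper's ACW module and Transformer-based extraction are not reproduced here.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # all mass on one side; no valid split at this t
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def guided_input(img_rgb, alpha=0.5, beta=0.3):
    """Toy content guidance: boost pixels flagged by grayscale intensity and the
    Otsu mask. alpha/beta are fixed illustrative weights, not the learned ACW."""
    gray = img_rgb.mean(axis=2).astype(np.uint8)
    mask = (gray >= otsu_threshold(gray)).astype(float)
    guide = alpha * (gray / 255.0) + beta * mask
    return img_rgb / 255.0 * (1.0 + guide[..., None])
```

In the actual network the guidance map comes from a learned module rather than a fixed linear blend; this sketch only shows how a classical thresholding result can be injected back into the input.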
Fang WANG, Jing HU, Rui ZHANG, Wenting FAN. Medical image segmentation network with content-guided multi-angle feature fusion [J]. Journal of Computer Applications, 2025, 45(9): 3017-3025.
| Dataset | Image size | Total samples | Training samples | Test samples |
|---|---|---|---|---|
| CVC-ClinicDB | 384×288 | 612 | 551 | 61 |
| Kvasir-SEG | Varied | 1 000 | 900 | 100 |
| ISIC 2018 | Varied | 2 594 | 2 335 | 259 |
Tab. 1 Detailed information of three biomedical datasets
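The train/test counts in Tab. 1 are consistent with a 9:1 split of each dataset; a quick check is sketched below (the 9:1 ratio is inferred from the counts and is not stated in this excerpt).

```python
def split_counts(total, train_ratio=0.9):
    """Round the training share to the nearest integer; the remainder tests."""
    train = round(total * train_ratio)
    return train, total - train

# Totals taken from Tab. 1
for name, total in [("CVC-ClinicDB", 612), ("Kvasir-SEG", 1000), ("ISIC 2018", 2594)]:
    print(name, split_counts(total))
```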
| Dataset | Model | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|---|
| CVC-ClinicDB | backbone+RA | 94.35 | 99.23 | 90.46 | 92.15 | 93.41 | 98.93 | 92.94 |
| CVC-ClinicDB | backbone+Up | 95.26 | 99.09 | 88.07 | 91.00 | 93.18 | 99.01 | 92.22 |
| CVC-ClinicDB | backbone+RA+CGM | 95.01 | 99.37 | 91.89 | 93.25 | 94.25 | 99.16 | 93.97 |
| CVC-ClinicDB | backbone+RA+MAFF | 94.58 | 99.36 | 92.61 | 91.85 | 92.85 | 99.12 | 93.25 |
| CVC-ClinicDB | backbone+RA+CGM+MAFF | 95.18 | 99.54 | 93.84 | 94.20 | 94.71 | 99.29 | 94.35 |
| Kvasir-SEG | backbone+RA | 88.30 | 98.71 | 92.83 | 88.25 | 88.03 | 96.12 | 88.35 |
| Kvasir-SEG | backbone+Up | 92.01 | 97.68 | 89.45 | 87.93 | 89.38 | 95.99 | 88.37 |
| Kvasir-SEG | backbone+RA+CGM | 91.20 | 98.36 | 92.53 | 90.40 | 90.63 | 96.38 | 89.72 |
| Kvasir-SEG | backbone+RA+MAFF | 90.66 | 98.31 | 93.19 | 90.24 | 90.11 | 96.41 | 89.77 |
| Kvasir-SEG | backbone+RA+CGM+MAFF | 92.25 | 98.18 | 92.96 | 91.37 | 91.67 | 96.89 | 90.73 |
| ISIC 2018 | backbone+RA | 90.79 | 96.50 | 91.02 | 89.47 | 89.74 | 96.24 | 88.07 |
| ISIC 2018 | backbone+Up | 94.79 | 95.42 | 86.67 | 89.16 | 91.80 | 96.33 | 87.78 |
| ISIC 2018 | backbone+RA+CGM | 91.81 | 95.97 | 90.70 | 89.78 | 90.46 | 96.45 | 88.36 |
| ISIC 2018 | backbone+RA+MAFF | 91.31 | 96.53 | 91.19 | 89.93 | 90.43 | 96.45 | 88.58 |
| ISIC 2018 | backbone+RA+CGM+MAFF | 91.70 | 96.81 | 90.94 | 89.88 | 90.31 | 96.39 | 88.62 |
Tab. 2 Ablation experiment results of different models on three datasets (unit: %)
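The column metrics in Tab. 2 (and the later comparison tables) can all be computed from a per-image confusion matrix. The sketch below assumes the standard definitions — F2 is the Fβ score with β = 2, and mIoU averages foreground and background IoU — which this excerpt does not spell out, so treat it as an assumption.

```python
import numpy as np

def seg_metrics(pred, gt):
    """Binary segmentation metrics from boolean masks `pred` and `gt`."""
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    recall = tp / (tp + fn)          # sensitivity over lesion pixels
    spe = tn / (tn + fp)             # specificity over background pixels
    pre = tp / (tp + fp)             # precision
    f1 = 2 * pre * recall / (pre + recall)
    f2 = 5 * pre * recall / (4 * pre + recall)   # F-beta with beta = 2
    acc = (tp + tn) / (tp + tn + fp + fn)
    iou_fg = tp / (tp + fp + fn)
    iou_bg = tn / (tn + fp + fn)
    miou = (iou_fg + iou_bg) / 2     # assumed: mean of the two class IoUs
    return dict(recall=recall, spe=spe, pre=pre, f1=f1, f2=f2, acc=acc, miou=miou)
```

A perfect prediction drives every metric to 1.0, which makes the function easy to sanity-check before applying it to real masks.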
| Model | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|
| U-Net | 92.19 | 99.27 | 89.94 | 90.61 | 91.45 | 98.80 | 91.69 | 
| UNet++ | 88.88 | 99.45 | 91.96 | 89.18 | 88.92 | 98.69 | 91.04 | 
| ResUNet++ | 80.94 | 98.94 | 84.53 | 79.39 | 80.04 | 97.96 | 84.41 | 
| MSRAformer | 96.01 | 99.13 | 91.40 | 92.96 | 94.25 | 98.93 | 93.38 | 
| CaraNet | 92.51 | 99.56 | 94.40 | 92.17 | 92.30 | 99.23 | 93.24 | 
| DCSAU-Net | 81.39 | 99.25 | 89.34 | 82.52 | 81.66 | 98.08 | 86.41 | 
| ACC-UNet | 78.54 | 98.56 | 80.08 | 75.98 | 77.20 | 97.15 | 82.04 | 
| CFANet | 82.51 | 98.38 | 83.93 | 79.40 | 80.37 | 97.46 | 83.77 | 
| MISSFormer | 90.89 | 99.31 | 91.85 | 90.43 | 90.57 | 98.88 | 91.40 | 
| DTA-UNet | 91.06 | 99.37 | 92.67 | 90.34 | 90.53 | 98.88 | 91.62 | 
| CGMAFF-Net | 95.18 | 99.54 | 93.84 | 94.20 | 94.71 | 99.29 | 94.35 | 
Tab. 3 Segmentation results of different network models on CVC-ClinicDB dataset (unit: %)
| Model | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|
| U-Net | 80.75 | 97.91 | 86.29 | 79.01 | 79.54 | 93.66 | 81.79 | 
| UNet++ | 82.49 | 98.22 | 88.60 | 81.41 | 81.48 | 94.02 | 82.93 | 
| ResUNet++ | 65.77 | 97.01 | 74.55 | 63.23 | 63.48 | 90.58 | 71.27 | 
| MSRAformer | 92.34 | 98.35 | 89.20 | 89.46 | 90.71 | 97.22 | 89.95 | 
| CaraNet | 90.38 | 98.07 | 90.46 | 87.89 | 88.99 | 96.08 | 88.15 | 
| DCSAU-Net | 67.73 | 97.57 | 80.87 | 68.15 | 67.18 | 91.85 | 74.41 | 
| ACC-UNet | 63.50 | 95.83 | 75.03 | 60.82 | 60.89 | 89.55 | 69.25 | 
| CFANet | 69.05 | 96.19 | 78.10 | 65.47 | 66.11 | 91.21 | 72.32 | 
| MISSFormer | 86.48 | 97.10 | 83.27 | 82.14 | 83.42 | 94.81 | 83.51 | 
| DTA-UNet | 87.45 | 97.62 | 88.37 | 84.84 | 85.48 | 95.13 | 85.54 | 
| CGMAFF-Net | 92.25 | 98.18 | 92.96 | 91.37 | 91.67 | 96.89 | 90.73 | 
Tab. 4 Segmentation results of different network models on Kvasir-SEG dataset (unit: %)
| Model | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|
| U-Net | 89.73 | 97.31 | 90.17 | 88.25 | 88.55 | 95.32 | 86.93 | 
| UNet++ | 89.83 | 97.13 | 89.70 | 87.88 | 88.34 | 95.23 | 86.73 | 
| ResUNet++ | 89.12 | 97.43 | 88.34 | 86.13 | 87.31 | 94.59 | 85.69 | 
| MSRAformer | 92.23 | 97.43 | 90.08 | 89.54 | 90.82 | 96.49 | 88.51 | 
| CaraNet | 90.87 | 96.51 | 91.07 | 89.68 | 89.88 | 96.37 | 88.12 | 
| DCSAU-Net | 87.07 | 97.62 | 91.95 | 87.24 | 86.55 | 94.64 | 86.34 | 
| ACC-UNet | 87.29 | 97.28 | 90.43 | 86.55 | 86.13 | 94.89 | 85.58 | 
| CFANet | 85.65 | 97.43 | 90.73 | 85.07 | 84.97 | 95.43 | 85.14 | 
| MISSFormer | 88.22 | 97.77 | 91.98 | 88.14 | 87.72 | 95.98 | 87.28 | 
| DTA-UNet | 89.05 | 96.61 | 89.74 | 86.71 | 87.36 | 95.37 | 86.23 | 
| CGMAFF-Net | 91.70 | 96.81 | 90.94 | 89.88 | 90.31 | 96.39 | 88.62 | 
Tab. 5 Segmentation results of different network models on ISIC 2018 dataset (unit: %)
| Model | Training time/s | Parameters/10⁶ | Computation/GFLOPs | Frame rate/(frame·s⁻¹) |
|---|---|---|---|---|
| U-Net | 25 | 13.40 | 248.86 | 19.30 |
| UNet++ | 33 | 9.16 | 279.23 | 18.07 |
| ResUNet++ | 95 | 14.48 | 567.95 | 15.61 |
| MSRAformer | 26 | 68.03 | 170.34 | 16.07 |
| CaraNet | 26 | 44.59 | 92.11 | 17.61 |
| DCSAU-Net | 29 | 2.60 | 55.32 | 18.63 |
| ACC-UNet | 209 | 4.26 | 122.69 | 12.84 |
| CFANet | 32 | 25.24 | 234.25 | 15.63 |
| MISSFormer | 37 | 35.45 | 58.01 | 18.10 |
| DTA-UNet | 44 | 19.61 | 275.04 | 15.82 |
| CGMAFF-Net (ours) | 21 | 33.94 | 127.31 | 24.71 |
Tab. 6 Time complexity analysis
| Experiment | Purpose |
|---|---|
| Experiments 1, 2, 7 | Verify that directly introducing the Otsu-thresholded image reduces segmentation accuracy |
| Experiment 3 vs. 4, Experiment 5 vs. 6 | Verify the necessity of a carefully designed feature extraction module |
| Experiment 3 vs. 5, Experiment 4 vs. 6 | Verify the effectiveness of adaptive combined weighting |
| Experiments 1, 5, 7 | Verify the effectiveness of introducing the grayscale image of the original data |
Tab. 7 Purposes of experimental setups
| Dataset | Experiment | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|---|
| CVC-ClinicDB | 1 | 92.36 | 99.61 | 94.11 | 92.11 | 92.23 | 99.31 | 93.64 |
| CVC-ClinicDB | 2 | 85.80 | 98.77 | 86.62 | 83.21 | 84.48 | 97.91 | 86.79 |
| CVC-ClinicDB | 3 | 91.64 | 99.68 | 96.31 | 92.78 | 92.78 | 92.78 | 93.72 |
| CVC-ClinicDB | 4 | 92.10 | 99.56 | 94.68 | 94.68 | 91.92 | 99.28 | 93.31 |
| CVC-ClinicDB | 5 | 93.83 | 99.46 | 92.65 | 93.03 | 93.44 | 99.22 | 93.91 |
| CVC-ClinicDB | 6 | 94.49 | 99.49 | 92.57 | 92.83 | 93.65 | 99.26 | 93.67 |
| CVC-ClinicDB | 7 | 95.18 | 99.54 | 93.84 | 94.20 | 94.71 | 99.29 | 94.35 |
| Kvasir-SEG | 1 | 90.91 | 96.62 | 91.00 | 89.66 | 89.66 | 96.32 | 88.29 |
| Kvasir-SEG | 2 | 82.39 | 97.32 | 87.11 | 80.39 | 81.13 | 94.24 | 82.36 |
| Kvasir-SEG | 3 | 87.59 | 98.29 | 91.33 | 87.35 | 87.17 | 95.52 | 87.45 |
| Kvasir-SEG | 4 | 85.60 | 98.43 | 92.17 | 85.75 | 85.43 | 95.85 | 86.99 |
| Kvasir-SEG | 5 | 88.98 | 98.21 | 98.21 | 87.83 | 88.06 | 95.76 | 88.05 |
| Kvasir-SEG | 6 | 88.06 | 98.03 | 98.03 | 87.24 | 87.38 | 95.76 | 87.63 |
| Kvasir-SEG | 7 | 92.25 | 98.18 | 92.96 | 91.37 | 91.67 | 96.89 | 90.73 |
| ISIC 2018 | 1 | 94.22 | 95.66 | 86.99 | 89.09 | 91.43 | 96.22 | 87.66 |
| ISIC 2018 | 2 | 91.12 | 96.41 | 89.71 | 88.77 | 89.50 | 95.78 | 87.43 |
| ISIC 2018 | 3 | 91.32 | 96.81 | 90.79 | 89.60 | 90.03 | 96.01 | 88.31 |
| ISIC 2018 | 4 | 92.06 | 96.44 | 89.67 | 89.38 | 90.29 | 96.17 | 88.08 |
| ISIC 2018 | 5 | 92.08 | 96.42 | 90.47 | 89.96 | 90.60 | 96.26 | 88.52 |
| ISIC 2018 | 6 | 93.27 | 95.25 | 89.18 | 90.08 | 91.54 | 96.34 | 88.40 |
| ISIC 2018 | 7 | 91.70 | 96.81 | 90.94 | 89.88 | 90.31 | 96.39 | 88.62 |
Tab. 8 Performance evaluation experiment results of internal components on three datasets (unit: %)
References

[1] GITE S, MISHRA A, KOTECHA K. Enhanced lung image segmentation using deep learning [J]. Neural Computing and Applications, 2023, 35(31): 22839-22853.
[2] LIN J Z, YANG W Z, TAN S X, et al. Fusing filter enhancement and reverse attention network for polyp segmentation [J]. Journal of Computer Applications, 2023, 43(1): 265-272.
[3] HU B, ZHOU P, YU H, et al. LeaNet: lightweight U-shaped architecture for high-performance skin cancer image segmentation [J]. Computers in Biology and Medicine, 2024, 169: No.107919.
[4] OTSU N. A threshold selection method from gray-level histograms [J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(1): 62-66.
[5] TIZHOOSH H R. Image thresholding using type Ⅱ fuzzy sets [J]. Pattern Recognition, 2005, 38(12): 2363-2372.
[6] LIU S P, HONG J M, LIANG J P, et al. Medical image segmentation using semi-supervised conditional generative adversarial nets [J]. Journal of Software, 2020, 31(8): 2588-2602.
[7] XU P Q, LIANG Y X, LI Y. Medical image segmentation fusing multi-scale semantic and residual bottleneck attention [J]. Computer Engineering, 2023, 49(10): 162-170.
[8] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
[9] BASAK H, KUNDU R, SARKAR R. MFSNet: a multi focus segmentation network for skin lesion segmentation [J]. Pattern Recognition, 2022, 128: No.108673.
[10] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation [C]// Proceedings of the 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 9351. Cham: Springer, 2015: 234-241.
[11] MILLETARI F, NAVAB N, AHMADI S A. V-Net: fully convolutional neural networks for volumetric medical image segmentation [C]// Proceedings of the 4th International Conference on 3D Vision. Piscataway: IEEE, 2016: 565-571.
[12] ZHOU Z, RAHMAN SIDDIQUEE M M, TAJBAKHSH N, et al. UNet++: a nested U-Net architecture for medical image segmentation [C]// Proceedings of the 2018 International Workshop on Deep Learning in Medical Image Analysis / 2018 International Workshop on Multimodal Learning for Clinical Decision Support, LNCS 11045. Cham: Springer, 2018: 3-11.
[13] WU J, FU R, FANG H, et al. MedSegDiff: medical image segmentation with diffusion probabilistic model [C]// Proceedings of the 2024 International Conference on Medical Imaging with Deep Learning. New York: JMLR.org, 2024: 1623-1639.
[14] LI Z, LI Y, LI Q, et al. LViT: language meets Vision Transformer in medical image segmentation [J]. IEEE Transactions on Medical Imaging, 2023, 43(1): 96-107.
[15] YUAN F, ZHANG Z, FANG Z. An effective CNN and Transformer complementary network for medical image segmentation [J]. Pattern Recognition, 2023, 136: No.109228.
[16] HOORALI F, KHOSRAVI H, MORADI B. IRUNet for medical image segmentation [J]. Expert Systems with Applications, 2022, 191: No.116399.
[17] QI Y, HU C, ZUO L, et al. Cardiac magnetic resonance image segmentation method based on multi-scale feature fusion and sequence relationship learning [J]. Sensors, 2023, 23(2): No.690.
[18] RYU J, REHMAN M U, NIZAMI I F, et al. SegR-Net: a deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation [J]. Computers in Biology and Medicine, 2023, 163: No.107132.
[19] ZHANG J, ZHANG Y, JIN Y, et al. MDU-Net: multi-scale densely connected U-Net for biomedical image segmentation [J]. Health Information Science and Systems, 2023, 11: No.13.
[20] YIN Y, HAN Z, JIAN M, et al. AMSUnet: a neural network using atrous multi-scale convolution for medical image segmentation [J]. Computers in Biology and Medicine, 2023, 162: No.107120.
[21] DING Z, LI H, GUO Y, et al. M4FNet: multimodal medical image fusion network via multi-receptive-field and multi-scale feature integration [J]. Computers in Biology and Medicine, 2023, 159: No.106923.
[22] REIS H C, TURK V. Transfer learning approach and nucleus segmentation with MedCLNet colon cancer database [J]. Journal of Digital Imaging, 2023, 36(1): 306-325.
[23] TOMAR N K, JHA D, RIEGLER M A, et al. FANet: a feedback attention network for improved biomedical image segmentation [J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(11): 9375-9388.
[24] JHA D, SMEDSRUD P H, RIEGLER M A, et al. ResUNet++: an advanced architecture for medical image segmentation [C]// Proceedings of the 2019 IEEE International Symposium on Multimedia. Piscataway: IEEE, 2019: 225-230.
[25] WU C, LONG C, LI S, et al. MSRAformer: multiscale spatial reverse attention network for polyp segmentation [J]. Computers in Biology and Medicine, 2022, 151(Pt A): No.106274.
[26] LOU A, GUAN S, LOEW M. CaraNet: context axial reverse attention network for segmentation of small medical objects [J]. Journal of Medical Imaging, 2023, 10(1): No.014005.
[27] XU Q, MA Z, HE N, et al. DCSAU-Net: a deeper and more compact split-attention U-Net for medical image segmentation [J]. Computers in Biology and Medicine, 2023, 154: No.106626.
[28] IBTEHAZ N, KIHARA D. ACC-UNet: a completely convolutional UNet model for the 2020s [C]// Proceedings of the 2023 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 14222. Cham: Springer, 2023: 692-702.
[29] ZHOU T, ZHOU Y, HE K, et al. Cross-level feature aggregation network for polyp segmentation [J]. Pattern Recognition, 2023, 140: No.109555.
[30] HUANG X, DENG Z, LI D, et al. MISSFormer: an effective Transformer for 2D medical image segmentation [J]. IEEE Transactions on Medical Imaging, 2023, 42(5): 1484-1494.
[31] LI Y, YAN B, HOU J, et al. UNet based on dynamic convolution decomposition and triplet attention [J]. Scientific Reports, 2024, 14: No.271.