Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (9): 3017-3025.DOI: 10.11772/j.issn.1001-9081.2024081188
• Multimedia computing and computer simulation •
Fang WANG, Jing HU, Rui ZHANG, Wenting FAN
Received: 2024-08-21
Revised: 2024-10-14
Accepted: 2024-10-21
Online: 2024-11-07
Published: 2025-09-10
Contact: Jing HU
About author: WANG Fang, born in 1989 in Taiyuan, Shanxi, M.S., lecturer. Her research interests include graphic and image processing and deep learning.
Fang WANG, Jing HU, Rui ZHANG, Wenting FAN. Medical image segmentation network with content-guided multi-angle feature fusion[J]. Journal of Computer Applications, 2025, 45(9): 3017-3025.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024081188
| Dataset | Image size | Total samples | Training samples | Test samples |
|---|---|---|---|---|
| CVC-ClinicDB | 384×288 | 612 | 551 | 61 |
| Kvasir-SEG | Varied | 1 000 | 900 | 100 |
| ISIC 2018 | Varied | 2 594 | 2 335 | 259 |

Tab. 1 Detailed information of three biomedical datasets
| Dataset | Model | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|---|
| CVC-ClinicDB | backbone+RA | 94.35 | 99.23 | 90.46 | 92.15 | 93.41 | 98.93 | 92.94 |
| | backbone+Up | 95.26 | 99.09 | 88.07 | 91.00 | 93.18 | 99.01 | 92.22 |
| | backbone+RA+CGM | 95.01 | 99.37 | 91.89 | 93.25 | 94.25 | 99.16 | 93.97 |
| | backbone+RA+MAFF | 94.58 | 99.36 | 92.61 | 91.85 | 92.85 | 99.12 | 93.25 |
| | backbone+RA+CGM+MAFF | 95.18 | 99.54 | 93.84 | 94.20 | 94.71 | 99.29 | 94.35 |
| Kvasir-SEG | backbone+RA | 88.30 | 98.71 | 92.83 | 88.25 | 88.03 | 96.12 | 88.35 |
| | backbone+Up | 92.01 | 97.68 | 89.45 | 87.93 | 89.38 | 95.99 | 88.37 |
| | backbone+RA+CGM | 91.20 | 98.36 | 92.53 | 90.40 | 90.63 | 96.38 | 89.72 |
| | backbone+RA+MAFF | 90.66 | 98.31 | 93.19 | 90.24 | 90.11 | 96.41 | 89.77 |
| | backbone+RA+CGM+MAFF | 92.25 | 98.18 | 92.96 | 91.37 | 91.67 | 96.89 | 90.73 |
| ISIC 2018 | backbone+RA | 90.79 | 96.50 | 91.02 | 89.47 | 89.74 | 96.24 | 88.07 |
| | backbone+Up | 94.79 | 95.42 | 86.67 | 89.16 | 91.80 | 96.33 | 87.78 |
| | backbone+RA+CGM | 91.81 | 95.97 | 90.70 | 89.78 | 90.46 | 96.45 | 88.36 |
| | backbone+RA+MAFF | 91.31 | 96.53 | 91.19 | 89.93 | 90.43 | 96.45 | 88.58 |
| | backbone+RA+CGM+MAFF | 91.70 | 96.81 | 90.94 | 89.88 | 90.31 | 96.39 | 88.62 |

Tab. 2 Ablation experiment results of different models on three datasets
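The columns reported in Tables 2-5 and 8 (Recall, Spe, Pre, F1, F2, Acc, mIoU) are standard pixel-level segmentation metrics. A minimal sketch of how they follow from binary confusion counts is given below; the paper's exact averaging scheme (per-image vs. global pixel counts) and the class set used for mIoU (foreground plus background is assumed here) are not stated on this page, so treat this as illustrative rather than the authors' implementation:

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Compute common binary-segmentation metrics from pixel-level
    confusion counts (true/false positives and negatives)."""
    recall = tp / (tp + fn)              # sensitivity
    spe = tn / (tn + fp)                 # specificity
    pre = tp / (tp + fp)                 # precision
    f1 = 2 * pre * recall / (pre + recall)
    beta2 = 4.0                          # F2: beta = 2, weights recall over precision
    f2 = (1 + beta2) * pre * recall / (beta2 * pre + recall)
    acc = (tp + tn) / (tp + fp + tn + fn)
    iou_fg = tp / (tp + fp + fn)         # foreground IoU
    iou_bg = tn / (tn + fp + fn)         # background IoU
    miou = (iou_fg + iou_bg) / 2         # mean over the two classes (assumed)
    return {"Recall": recall, "Spe": spe, "Pre": pre,
            "F1": f1, "F2": f2, "Acc": acc, "mIoU": miou}
```

For example, `segmentation_metrics(90, 10, 880, 20)` yields Pre = 0.90 and Acc = 0.97; the tables report these values as percentages.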
| Model | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|
| U-Net | 92.19 | 99.27 | 89.94 | 90.61 | 91.45 | 98.80 | 91.69 |
| UNet++ | 88.88 | 99.45 | 91.96 | 89.18 | 88.92 | 98.69 | 91.04 |
| ResUNet++ | 80.94 | 98.94 | 84.53 | 79.39 | 80.04 | 97.96 | 84.41 |
| MSRAformer | 96.01 | 99.13 | 91.40 | 92.96 | 94.25 | 98.93 | 93.38 |
| CaraNet | 92.51 | 99.56 | 94.40 | 92.17 | 92.30 | 99.23 | 93.24 |
| DCSAU-Net | 81.39 | 99.25 | 89.34 | 82.52 | 81.66 | 98.08 | 86.41 |
| ACC-UNet | 78.54 | 98.56 | 80.08 | 75.98 | 77.20 | 97.15 | 82.04 |
| CFANet | 82.51 | 98.38 | 83.93 | 79.40 | 80.37 | 97.46 | 83.77 |
| MISSFormer | 90.89 | 99.31 | 91.85 | 90.43 | 90.57 | 98.88 | 91.40 |
| DTA-UNet | 91.06 | 99.37 | 92.67 | 90.34 | 90.53 | 98.88 | 91.62 |
| CGMAFF-Net | 95.18 | 99.54 | 93.84 | 94.20 | 94.71 | 99.29 | 94.35 |

Tab. 3 Segmentation results of different network models on CVC-ClinicDB dataset
| Model | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|
| U-Net | 80.75 | 97.91 | 86.29 | 79.01 | 79.54 | 93.66 | 81.79 |
| UNet++ | 82.49 | 98.22 | 88.60 | 81.41 | 81.48 | 94.02 | 82.93 |
| ResUNet++ | 65.77 | 97.01 | 74.55 | 63.23 | 63.48 | 90.58 | 71.27 |
| MSRAformer | 92.34 | 98.35 | 89.20 | 89.46 | 90.71 | 97.22 | 89.95 |
| CaraNet | 90.38 | 98.07 | 90.46 | 87.89 | 88.99 | 96.08 | 88.15 |
| DCSAU-Net | 67.73 | 97.57 | 80.87 | 68.15 | 67.18 | 91.85 | 74.41 |
| ACC-UNet | 63.50 | 95.83 | 75.03 | 60.82 | 60.89 | 89.55 | 69.25 |
| CFANet | 69.05 | 96.19 | 78.10 | 65.47 | 66.11 | 91.21 | 72.32 |
| MISSFormer | 86.48 | 97.10 | 83.27 | 82.14 | 83.42 | 94.81 | 83.51 |
| DTA-UNet | 87.45 | 97.62 | 88.37 | 84.84 | 85.48 | 95.13 | 85.54 |
| CGMAFF-Net | 92.25 | 98.18 | 92.96 | 91.37 | 91.67 | 96.89 | 90.73 |

Tab. 4 Segmentation results of different network models on Kvasir-SEG dataset
| Model | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|
| U-Net | 89.73 | 97.31 | 90.17 | 88.25 | 88.55 | 95.32 | 86.93 |
| UNet++ | 89.83 | 97.13 | 89.70 | 87.88 | 88.34 | 95.23 | 86.73 |
| ResUNet++ | 89.12 | 97.43 | 88.34 | 86.13 | 87.31 | 94.59 | 85.69 |
| MSRAformer | 92.23 | 97.43 | 90.08 | 89.54 | 90.82 | 96.49 | 88.51 |
| CaraNet | 90.87 | 96.51 | 91.07 | 89.68 | 89.88 | 96.37 | 88.12 |
| DCSAU-Net | 87.07 | 97.62 | 91.95 | 87.24 | 86.55 | 94.64 | 86.34 |
| ACC-UNet | 87.29 | 97.28 | 90.43 | 86.55 | 86.13 | 94.89 | 85.58 |
| CFANet | 85.65 | 97.43 | 90.73 | 85.07 | 84.97 | 95.43 | 85.14 |
| MISSFormer | 88.22 | 97.77 | 91.98 | 88.14 | 87.72 | 95.98 | 87.28 |
| DTA-UNet | 89.05 | 96.61 | 89.74 | 86.71 | 87.36 | 95.37 | 86.23 |
| CGMAFF-Net | 91.70 | 96.81 | 90.94 | 89.88 | 90.31 | 96.39 | 88.62 |

Tab. 5 Segmentation results of different network models on ISIC 2018 dataset
| Model | Training time/s | Parameters/10⁶ | Computation/GFLOPs | Frame rate/(frame·s⁻¹) |
|---|---|---|---|---|
| U-Net[10] | 25 | 13.40 | 248.86 | 19.30 |
| UNet++[12] | 33 | 9.16 | 279.23 | 18.07 |
| ResUNet++[24] | 95 | 14.48 | 567.95 | 15.61 |
| MSRAformer[25] | 26 | 68.03 | 170.34 | 16.07 |
| CaraNet[26] | 26 | 44.59 | 92.11 | 17.61 |
| DCSAU-Net[27] | 29 | 2.60 | 55.32 | 18.63 |
| ACC-UNet[28] | 209 | 4.26 | 122.69 | 12.84 |
| CFANet[29] | 32 | 25.24 | 234.25 | 15.63 |
| MISSFormer[30] | 37 | 35.45 | 58.01 | 18.10 |
| DTA-UNet[31] | 44 | 19.61 | 275.04 | 15.82 |
| CGMAFF-Net (proposed) | 21 | 33.94 | 127.31 | 24.71 |

Tab. 6 Time complexity analysis
| Experiments | Purpose |
|---|---|
| Experiments 1, 2, 7 | Verify that directly introducing the Otsu-thresholded image reduces segmentation accuracy |
| Experiment 3 vs. 4, Experiment 5 vs. 6 | Verify the necessity of carefully designed feature extraction modules |
| Experiment 3 vs. 5, Experiment 4 vs. 6 | Verify the effectiveness of adaptive combination weighting |
| Experiments 1, 5, 7 | Verify the effectiveness of introducing grayscale images of the original data |

Tab. 7 Purpose of experimental setups
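Several of the experiments above concern Otsu threshold segmentation, which reference [4] introduced: the threshold is chosen to maximize the between-class variance of the gray-level histogram. As context, here is a minimal pure-Python sketch of that selection rule; the function name and interface are illustrative, not taken from the paper:

```python
def otsu_threshold(gray, levels=256):
    """Otsu's method: return the gray level t that maximizes the
    between-class variance w0*w1*(mu0 - mu1)^2 of the histogram.
    `gray` is an iterable of integer gray levels in [0, levels)."""
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    total = sum(hist)
    p = [h / total for h in hist]          # normalized histogram
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(p[:t])                    # class-0 probability mass
        w1 = 1.0 - w0                      # class-1 probability mass
        if w0 == 0.0 or w1 == 0.0:
            continue                       # one class empty: skip
        mu0 = sum(i * p[i] for i in range(t)) / w0
        mu1 = sum(i * p[i] for i in range(t, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal image (e.g. half the pixels at level 10 and half at level 200) any threshold strictly between the two modes maximizes the criterion, and the sketch returns the first such level.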
| Dataset | Experiment | Recall | Spe | Pre | F1 | F2 | Acc | mIoU |
|---|---|---|---|---|---|---|---|---|
| CVC-ClinicDB | 1 | 92.36 | 99.61 | 94.11 | 92.11 | 92.23 | 99.31 | 93.64 |
| | 2 | 85.80 | 98.77 | 86.62 | 83.21 | 84.48 | 97.91 | 86.79 |
| | 3 | 91.64 | 99.68 | 96.31 | 92.78 | 92.78 | 92.78 | 93.72 |
| | 4 | 92.10 | 99.56 | 94.68 | 94.68 | 91.92 | 99.28 | 93.31 |
| | 5 | 93.83 | 99.46 | 92.65 | 93.03 | 93.44 | 99.22 | 93.91 |
| | 6 | 94.49 | 99.49 | 92.57 | 92.83 | 93.65 | 99.26 | 93.67 |
| | 7 | 95.18 | 99.54 | 93.84 | 94.20 | 94.71 | 99.29 | 94.35 |
| Kvasir-SEG | 1 | 90.91 | 96.62 | 91.00 | 89.66 | 89.66 | 96.32 | 88.29 |
| | 2 | 82.39 | 97.32 | 87.11 | 80.39 | 81.13 | 94.24 | 82.36 |
| | 3 | 87.59 | 98.29 | 91.33 | 87.35 | 87.17 | 95.52 | 87.45 |
| | 4 | 85.60 | 98.43 | 92.17 | 85.75 | 85.43 | 95.85 | 86.99 |
| | 5 | 88.98 | 98.21 | 98.21 | 87.83 | 88.06 | 95.76 | 88.05 |
| | 6 | 88.06 | 98.03 | 98.03 | 87.24 | 87.38 | 95.76 | 87.63 |
| | 7 | 92.25 | 98.18 | 92.96 | 91.37 | 91.67 | 96.89 | 90.73 |
| ISIC 2018 | 1 | 94.22 | 95.66 | 86.99 | 89.09 | 91.43 | 96.22 | 87.66 |
| | 2 | 91.12 | 96.41 | 89.71 | 88.77 | 89.50 | 95.78 | 87.43 |
| | 3 | 91.32 | 96.81 | 90.79 | 89.60 | 90.03 | 96.01 | 88.31 |
| | 4 | 92.06 | 96.44 | 89.67 | 89.38 | 90.29 | 96.17 | 88.08 |
| | 5 | 92.08 | 96.42 | 90.47 | 89.96 | 90.60 | 96.26 | 88.52 |
| | 6 | 93.27 | 95.25 | 89.18 | 90.08 | 91.54 | 96.34 | 88.40 |
| | 7 | 91.70 | 96.81 | 90.94 | 89.88 | 90.31 | 96.39 | 88.62 |

Tab. 8 Performance evaluation experiment results of internal components on three datasets
[1] GITE S, MISHRA A, KOTECHA K. Enhanced lung image segmentation using deep learning [J]. Neural Computing and Applications, 2023, 35(31): 22839-22853.
[2] LIN J Z, YANG W Z, TAN S X, et al. Fusing filter enhancement and reverse attention network for polyp segmentation [J]. Journal of Computer Applications, 2023, 43(1): 265-272.
[3] HU B, ZHOU P, YU H, et al. LeaNet: lightweight U-shaped architecture for high-performance skin cancer image segmentation [J]. Computers in Biology and Medicine, 2024, 169: No.107919.
[4] OTSU N. A threshold selection method from gray-level histograms [J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(1): 62-66.
[5] TIZHOOSH H R. Image thresholding using type II fuzzy sets [J]. Pattern Recognition, 2005, 38(12): 2363-2372.
[6] LIU S P, HONG J M, LIANG J P, et al. Medical image segmentation using semi-supervised conditional generative adversarial nets [J]. Journal of Software, 2020, 31(8): 2588-2602.
[7] XU P Q, LIANG Y X, LI Y. Medical image segmentation fusing multi-scale semantic and residual bottleneck attention [J]. Computer Engineering, 2023, 49(10): 162-170.
[8] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
[9] BASAK H, KUNDU R, SARKAR R. MFSNet: a multi focus segmentation network for skin lesion segmentation [J]. Pattern Recognition, 2022, 128: No.108673.
[10] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation [C]// Proceedings of the 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 9351. Cham: Springer, 2015: 234-241.
[11] MILLETARI F, NAVAB N, AHMADI S A. V-Net: fully convolutional neural networks for volumetric medical image segmentation [C]// Proceedings of the 4th International Conference on 3D Vision. Piscataway: IEEE, 2016: 565-571.
[12] ZHOU Z, RAHMAN SIDDIQUEE M M, TAJBAKHSH N, et al. UNet++: a nested U-Net architecture for medical image segmentation [C]// Proceedings of the 2018 International Workshop on Deep Learning in Medical Image Analysis / 2018 International Workshop on Multimodal Learning for Clinical Decision Support, LNCS 11045. Cham: Springer, 2018: 3-11.
[13] WU J, FU R, FANG H, et al. MedSegDiff: medical image segmentation with diffusion probabilistic model [C]// Proceedings of the 2024 International Conference on Medical Imaging with Deep Learning. New York: JMLR.org, 2024: 1623-1639.
[14] LI Z, LI Y, LI Q, et al. LViT: language meets Vision Transformer in medical image segmentation [J]. IEEE Transactions on Medical Imaging, 2023, 43(1): 96-107.
[15] YUAN F, ZHANG Z, FANG Z. An effective CNN and Transformer complementary network for medical image segmentation [J]. Pattern Recognition, 2023, 136: No.109228.
[16] HOORALI F, KHOSRAVI H, MORADI B. IRUNet for medical image segmentation [J]. Expert Systems with Applications, 2022, 191: No.116399.
[17] QI Y, HU C, ZUO L, et al. Cardiac magnetic resonance image segmentation method based on multi-scale feature fusion and sequence relationship learning [J]. Sensors, 2023, 23(2): No.690.
[18] RYU J, REHMAN M U, NIZAMI I F, et al. SegR-Net: a deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation [J]. Computers in Biology and Medicine, 2023, 163: No.107132.
[19] ZHANG J, ZHANG Y, JIN Y, et al. MDU-Net: multi-scale densely connected U-Net for biomedical image segmentation [J]. Health Information Science and Systems, 2023, 11: No.13.
[20] YIN Y, HAN Z, JIAN M, et al. AMSUnet: a neural network using atrous multi-scale convolution for medical image segmentation [J]. Computers in Biology and Medicine, 2023, 162: No.107120.
[21] DING Z, LI H, GUO Y, et al. M4FNet: multimodal medical image fusion network via multi-receptive-field and multi-scale feature integration [J]. Computers in Biology and Medicine, 2023, 159: No.106923.
[22] REIS H C, TURK V. Transfer learning approach and nucleus segmentation with MedCLNet colon cancer database [J]. Journal of Digital Imaging, 2023, 36(1): 306-325.
[23] TOMAR N K, JHA D, RIEGLER M A, et al. FANet: a feedback attention network for improved biomedical image segmentation [J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(11): 9375-9388.
[24] JHA D, SMEDSRUD P H, RIEGLER M A, et al. ResUNet++: an advanced architecture for medical image segmentation [C]// Proceedings of the 2019 IEEE International Symposium on Multimedia. Piscataway: IEEE, 2019: 225-230.
[25] WU C, LONG C, LI S, et al. MSRAformer: multiscale spatial reverse attention network for polyp segmentation [J]. Computers in Biology and Medicine, 2022, 151(Pt A): No.106274.
[26] LOU A, GUAN S, LOEW M. CaraNet: context axial reverse attention network for segmentation of small medical objects [J]. Journal of Medical Imaging, 2023, 10(1): No.014005.
[27] XU Q, MA Z, HE N, et al. DCSAU-Net: a deeper and more compact split-attention U-Net for medical image segmentation [J]. Computers in Biology and Medicine, 2023, 154: No.106626.
[28] IBTEHAZ N, KIHARA D. ACC-UNet: a completely convolutional UNet model for the 2020s [C]// Proceedings of the 2023 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 14222. Cham: Springer, 2023: 692-702.
[29] ZHOU T, ZHOU Y, HE K, et al. Cross-level feature aggregation network for polyp segmentation [J]. Pattern Recognition, 2023, 140: No.109555.
[30] HUANG X, DENG Z, LI D, et al. MISSFormer: an effective Transformer for 2D medical image segmentation [J]. IEEE Transactions on Medical Imaging, 2023, 42(5): 1484-1494.
[31] LI Y, YAN B, HOU J, et al. UNet based on dynamic convolution decomposition and triplet attention [J]. Scientific Reports, 2024, 14: No.271.