Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (12): 4037-4044. DOI: 10.11772/j.issn.1001-9081.2024111673
• Multimedia computing and computer simulation •
Ming DENG1,2, Jinfan XU2, Hongxiang XIAO2, Xiaolan XIE2
Received: 2024-11-27
Revised: 2025-04-11
Accepted: 2025-04-16
Online: 2025-04-18
Published: 2025-12-10
Contact: Hongxiang XIAO
About author: DENG Ming, born in 1979, M.S., associate professor. His research interests include image processing and intelligent computing.
Ming DENG, Jinfan XU, Hongxiang XIAO, Xiaolan XIE. Medical image segmentation network based on improved TransUNet with efficient channel attention[J]. Journal of Computer Applications, 2025, 45(12): 4037-4044.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024111673
| Operation | Tensor shape | Description |
|---|---|---|
| Input feature map | [24,64,112,112] | Batch size 24, 64 channels, 112×112 spatial size |
| Query matrix Q | [24,8,112,112] | 1×1 convolution, reduced channel count |
| Key matrix K | [24,8,112,112] | 1×1 convolution, reduced channel count |
| Value matrix V | [24,64,112,112] | 1×1 convolution, original channel count kept |
| Reshape Q | [24,112,112×8] | Merge spatial and channel dimensions for matrix multiplication |
| Reshape K | [24,112×8,112] | Prepared for computing spatial attention weights |
| Correlation computation | [24,112,112] | Correlation between spatial positions |
| Softmax normalization → A | [24,112,112] | Softmax normalization yields the attention weights |
| Feature fusion | [24,64,112,112] | Fuse the attention weights A with V |
| Output feature map | [24,64,112,112] | Enhanced output features |

Tab. 1 CCA block dimension transformation
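The shape flow in Tab. 1 can be traced with a short PyTorch sketch. The class name `CCABlock`, the reduction to 8 query/key channels, and the way the [B, H, H] attention map A is fused with V (a row-wise matrix product followed by a residual addition) are assumptions inferred from the table rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CCABlock(nn.Module):
    """Minimal sketch following the shapes in Tab. 1 (fusion details assumed)."""
    def __init__(self, channels=64, qk_channels=8):
        super().__init__()
        self.q = nn.Conv2d(channels, qk_channels, 1)   # 1x1 conv, reduces channels: [B,64,H,W] -> [B,8,H,W]
        self.k = nn.Conv2d(channels, qk_channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)      # 1x1 conv, keeps the original 64 channels

    def forward(self, x):
        b, c, h, w = x.shape                                        # e.g. [24, 64, 112, 112]
        q = self.q(x).permute(0, 2, 3, 1).reshape(b, h, -1)         # [B, 112, 112*8]
        k = self.k(x).permute(0, 2, 3, 1).reshape(b, h, -1)         # [B, 112, 112*8]
        v = self.v(x).permute(0, 2, 3, 1).reshape(b, h, -1)         # [B, 112, 112*64]

        attn = F.softmax(torch.bmm(q, k.transpose(1, 2)), dim=-1)   # A: [B, 112, 112] row-to-row correlation
        out = torch.bmm(attn, v)                                    # fuse A with V: [B, 112, 112*64]
        out = out.reshape(b, h, w, c).permute(0, 3, 1, 2)           # back to [B, 64, 112, 112]
        return out + x                                              # residual add (assumed) -> enhanced features

x = torch.randn(24, 64, 112, 112)
print(CCABlock()(x).shape)   # torch.Size([24, 64, 112, 112])
```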
| Operation | Tensor shape | Description |
|---|---|---|
| Input feature map | [24,196,768] | Batch size 24, sequence length 196, 768 channels |
| Layer Norm | [24,196,768] | Feature normalization, shape unchanged |
| Reshape | [24,768,14,14] | Convert sequence length 196 into 14×14 spatial dimensions for convolution |
| SimAM module | [24,768,14,14] | Parameter-free attention weighting, original channel count kept |
| Restore shape | [24,196,768] | Spatial dimensions restored to sequence length 196 |
| Residual connection | [24,196,768] | Shapes match, addition leaves the shape unchanged |
| Layer Norm | [24,196,768] | Feature normalization, shape unchanged |
| Reshape | [24,768,14,14] | Reshape for the subsequent convolution |
| ECA module | [24,768,14,14] | Replaces the MLP block, output shape unchanged |
| Restore shape | [24,196,768] | Spatial dimensions restored to sequence length 196 |

Tab. 2 ES-Transformer module dimension transformation
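Tab. 2 wires SimAM[18] and ECA[19] around two LayerNorm branches of a Transformer-style block. The sketch below reproduces that shape flow; the ECA kernel size (3), the SimAM coefficient lambda (1e-4) and the residual connection after the ECA branch are assumptions not fixed by the table.

```python
import torch
import torch.nn as nn

def simam(x, lam=1e-4):
    # SimAM [18]: parameter-free attention from the per-neuron energy function
    n = x.shape[2] * x.shape[3] - 1
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    v = d.sum(dim=(2, 3), keepdim=True) / n
    return x * torch.sigmoid(d / (4 * (v + lam)) + 0.5)

class ECA(nn.Module):
    # ECA [19]: global average pooling followed by a 1-D convolution across channels
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        y = x.mean(dim=(2, 3))                          # [B, C] channel descriptor
        y = self.conv(y.unsqueeze(1)).squeeze(1)        # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]   # re-weight the feature map

class ESTransformerBlock(nn.Module):
    """Shape flow of Tab. 2: tokens [B, 196, 768] <-> maps [B, 768, 14, 14]."""
    def __init__(self, dim=768, hw=14):
        super().__init__()
        self.hw = hw
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.eca = ECA(k=3)

    def to_map(self, x):
        b, n, c = x.shape
        return x.transpose(1, 2).reshape(b, c, self.hw, self.hw)

    @staticmethod
    def to_seq(x):
        b, c, h, w = x.shape
        return x.reshape(b, c, h * w).transpose(1, 2)

    def forward(self, x):
        # Branch 1: LayerNorm -> reshape -> SimAM -> reshape back -> residual
        x = x + self.to_seq(simam(self.to_map(self.norm1(x))))
        # Branch 2: LayerNorm -> reshape -> ECA (replacing the MLP) -> reshape back -> residual (assumed)
        x = x + self.to_seq(self.eca(self.to_map(self.norm2(x))))
        return x

tokens = torch.randn(24, 196, 768)
print(ESTransformerBlock()(tokens).shape)   # torch.Size([24, 196, 768])
```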
| Operation | Tensor shape | Description |
|---|---|---|
| Input feature map X | [24,512,28,28] | Batch size 24, 512 channels |
| SimAM | [24,512,28,28] | Compute attention weights, shape unchanged |
| 3×3 group convolution | [24,512,28,28] | Extract the static context representation L |
| Concatenate key matrix L with I | [24,1 024,28,28] | L and I concatenated along the channel dimension |
| First 1×1 convolution (with ReLU) | [24,512,28,28] | Feature transformation with added nonlinearity |
| Second 1×1 convolution (no activation) | [24,512,28,28] | Produces the dynamic attention matrix |
| Softmax normalization → Attention | [24,512,28,28] | Attention weights |
| Element-wise product of J and Attention | [24,512,28,28] | Dynamic context features obtained from the attention weights |
| Output Z = L + dynamic context | [24,512,28,28] | Final output features |

Tab. 3 SCOT block dimension transformation
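The flow in Tab. 3 mirrors the Contextual Transformer (CoT) block[21] with SimAM placed in front of it. In the sketch below, the value projection J (a 1×1 convolution), the interpretation of the SimAM output as I, the group count of the 3×3 convolution and the softmax axis are all assumptions; only the layer order and shapes come from the table.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def simam(x, lam=1e-4):
    # Same parameter-free SimAM attention as in the previous sketch
    n = x.shape[2] * x.shape[3] - 1
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    v = d.sum(dim=(2, 3), keepdim=True) / n
    return x * torch.sigmoid(d / (4 * (v + lam)) + 0.5)

class SCOTBlock(nn.Module):
    """Sketch following Tab. 3; wiring mirrors the CoT block [21] with SimAM inserted."""
    def __init__(self, channels=512, groups=4):
        super().__init__()
        # 3x3 group convolution producing the static context L (group count assumed)
        self.key = nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False)
        # Value projection J (assumed to be a 1x1 convolution)
        self.value = nn.Conv2d(channels, channels, 1, bias=False)
        # Two stacked 1x1 convolutions: the first with ReLU, the second without activation
        self.embed = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),
        )

    def forward(self, x):
        i = simam(x)                                  # [B,512,28,28] SimAM-weighted input (role of I assumed)
        l = self.key(i)                               # static context L
        attn = self.embed(torch.cat([l, i], dim=1))   # [B,1024,28,28] -> dynamic attention matrix [B,512,28,28]
        attn = F.softmax(attn, dim=1)                 # softmax normalisation (axis assumed)
        dynamic = self.value(x) * attn                # J element-wise multiplied with Attention -> dynamic context
        return l + dynamic                            # Z = static context + dynamic context

x = torch.randn(24, 512, 28, 28)
print(SCOTBlock()(x).shape)   # torch.Size([24, 512, 28, 28])
```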
| Network | Avg DSC/% | HD | Aorta | Gallbladder | Left kidney | Right kidney | Liver | Pancreas | Spleen | Stomach |
|---|---|---|---|---|---|---|---|---|---|---|
| VNet[4] | 68.81 | — | 75.34 | 51.87 | 77.10 | 80.75 | 87.84 | 40.05 | 80.56 | 56.98 |
| DETR[24] | 69.77 | — | 74.74 | 53.77 | 72.31 | 73.24 | 94.08 | 54.18 | 89.90 | 45.96 |
| U-Net[2] | 76.85 | 39.70 | 89.07 | 69.72 | 77.77 | 68.60 | 93.43 | 53.98 | 86.67 | 75.58 |
| U-Net++[6] | 76.91 | 36.93 | 88.19 | 68.89 | 81.76 | 75.27 | 93.01 | 58.20 | 83.44 | 70.52 |
| Residual U-Net[25] | 76.95 | 38.44 | 87.06 | 66.05 | 83.43 | 76.83 | 93.99 | 51.86 | 85.25 | 70.13 |
| Att-UNet[5] | 77.77 | 36.02 | 89.55 | 68.88 | 77.98 | 71.11 | 93.57 | 58.04 | 87.30 | 75.75 |
| MultiResUNet[23] | 77.42 | 36.84 | 87.73 | 65.67 | 82.08 | 70.43 | 93.49 | 60.09 | 85.23 | 75.66 |
| TransUNet[10] | 77.48 | 31.69 | 87.23 | 63.13 | 81.87 | 77.02 | 94.08 | 55.86 | 85.08 | 75.62 |
| Swin-Unet[11] | 79.13 | 21.55 | 85.47 | 66.53 | 83.28 | 79.61 | 94.29 | 56.58 | 90.66 | 76.60 |
| CoT-TransUNet-50[16] | 79.56 | 22.97 | 89.99 | 60.56 | 85.66 | 84.80 | 94.46 | 59.25 | 87.81 | 73.99 |
| DouTransUNet[27] | 78.24 | 23.75 | 88.69 | 62.56 | 83.33 | 76.91 | 94.57 | 55.23 | 86.36 | 78.28 |
| ES-TransUNet | 79.85 | 22.00 | 87.88 | 68.58 | 80.62 | 76.34 | 94.64 | 62.03 | 88.79 | 79.88 |

Note: the organ columns report per-organ DSC/%.

Tab. 4 Comparison of segmentation accuracy among different networks on Synapse multi-organ CT dataset
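The Avg DSC and HD columns in Tab. 4 (and Tab. 5) are the Dice similarity coefficient and the Hausdorff distance computed per organ from binary prediction and ground-truth masks. The authors' evaluation script is not shown; a generic NumPy/SciPy sketch of the two metrics looks roughly as follows (Synapse results are often reported with the 95th-percentile variant HD95, which this plain version does not compute).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between the foreground voxels of two masks."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Toy example with two overlapping squares standing in for one organ's masks
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
gt = np.zeros((64, 64), dtype=np.uint8); gt[22:42, 22:42] = 1
print(f"DSC = {100 * dice(pred, gt):.2f}%  HD = {hausdorff(pred, gt):.2f}")
```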
| Network | Avg DSC/% | Right ventricle | Myocardium | Left ventricle |
|---|---|---|---|---|
| VNet[4] | 84.75 | 84.34 | 78.46 | 91.45 |
| DETR[24] | 85.83 | 85.21 | 77.59 | 94.69 |
| U-Net[2] | 87.55 | 87.10 | 80.63 | 94.92 |
| U-Net++[6] | 81.45 | 81.46 | 70.71 | 92.18 |
| Residual U-Net[25] | 87.57 | 86.07 | 81.88 | 94.75 |
| Att-UNet[5] | 86.75 | 87.58 | 79.20 | 93.47 |
| MultiResUNet[23] | 87.32 | 86.95 | 79.76 | 95.25 |
| TransUNet[10] | 89.71 | 88.86 | 84.53 | 95.73 |
| Swin-Unet[11] | 89.88 | 88.58 | 85.36 | 95.70 |
| CoT-TransUNet-50[16] | 89.94 | 88.97 | 85.46 | 95.39 |
| DouTransUNet[27] | 90.30 | 89.02 | 85.66 | 96.24 |
| ES-TransUNet | 91.28 | 90.43 | 86.48 | 96.93 |

Note: the right ventricle, myocardium and left ventricle columns report per-structure DSC/%.

Tab. 5 Comparison of segmentation accuracy among different networks on ACDC dataset
| Network | Avg DSC/% | Parameters/10⁶ | Inference time/ms | Computational cost/GFLOPs |
|---|---|---|---|---|
| U-Net[2] | 76.85 | 31.13 | 223 | 55.84 |
| Swin-Unet[11] | 77.65 | 96.34 | 238 | 42.68 |
| CoT-TransUNet-50[16] | 78.24 | 83.54 | 184 | 29.64 |
| DA-TransUNet[26] | 79.80 | 94.51 | 165 | 25.49 |
| ES-TransUNet | 79.85 | 65.58 | 158 | 16.70 |

Tab. 6 Comparison experimental results on lightweight design
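The parameter counts and inference times in Tab. 6 can be measured for any PyTorch model along the lines below. The tiny convolutional stack is only a placeholder for ES-TransUNet or a baseline, and the GFLOPs column would additionally require an operation-counting profiler (for example thop or fvcore), which is omitted here.

```python
import time
import torch
import torch.nn as nn

# Placeholder network; in practice `model` would be ES-TransUNet or any baseline in Tab. 6.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 9, 1),
).eval()

# Parameter count, reported in units of 10^6 as in Tab. 6
params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"Parameters: {params_m:.2f} x 10^6")

# Average inference time over repeated forward passes after a short warm-up
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    for _ in range(5):                                # warm-up iterations
        model(x)
    t0 = time.perf_counter()
    for _ in range(20):
        model(x)
    ms = (time.perf_counter() - t0) / 20 * 1000
print(f"Average inference time: {ms:.1f} ms per 224x224 image")
```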
| Network | CCA | ES-Transformer | SCOT | Dysample | Avg DSC/% |
|---|---|---|---|---|---|
| TransUNet[10] | | | | | 77.48 |
| A | √ | | | | 77.88 |
| B | | √ | | | 78.85 |
| C | | | √ | | 79.67 |
| D | | | | √ | 78.79 |
| ES-TransUNet | √ | √ | √ | √ | 79.85 |

Tab. 7 Ablation experimental results on ES-TransUNet structure
| Resolution | Avg DSC/% | Aorta | Gallbladder | Left kidney | Right kidney | Liver | Pancreas | Spleen | Stomach |
|---|---|---|---|---|---|---|---|---|---|
| 224×224 | 79.85 | 87.88 | 68.58 | 80.62 | 76.34 | 94.64 | 62.03 | 88.79 | 79.88 |
| 512×512 | 80.77 | 91.22 | 63.78 | 81.90 | 79.18 | 95.65 | 68.18 | 88.95 | 77.37 |

Note: the organ columns report per-organ DSC/%.

Tab. 8 Ablation experimental results on different input image resolutions
References

[1] MA J L, DENG Y Y, MA Z P. Review of deep learning segmentation methods for CT images of liver tumors[J]. Journal of Image and Graphics, 2020, 25(10): 2024-2046. (in Chinese)
[2] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]// Proceedings of the 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 9351. Cham: Springer, 2015: 234-241.
[3] HU S, LI H L, HAO D S. Improved multistage edge-enhanced medical image segmentation network of U-Net[J]. Computer Engineering, 2024, 50(4): 286-293. (in Chinese)
[4] ABDOLLAHI A, PRADHAN B, ALAMRI A. VNet: an end-to-end fully convolutional neural network for road extraction from high-resolution remote sensing data[J]. IEEE Access, 2020, 8: 179424-179436.
[5] OKTAY O, SCHLEMPER J, LE FOLGOC L, et al. Attention U-Net: learning where to look for the pancreas[EB/OL]. [2024-09-20].
[6] ZHOU Z, SIDDIQUEE M M R, TAJBAKHSH N, et al. UNet++: redesigning skip connections to exploit multiscale features in image segmentation[J]. IEEE Transactions on Medical Imaging, 2020, 39(6): 1856-1867.
[7] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2024-09-20].
[8] BADRINARAYANAN V, KENDALL A, CIPOLLA R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495.
[9] CHAURASIA A, CULURCIELLO E. LinkNet: exploiting encoder representations for efficient semantic segmentation[C]// Proceedings of the 2017 IEEE International Conference on Visual Communications and Image Processing. Piscataway: IEEE, 2017: 1-4.
[10] CHEN J, LU Y, YU Q, et al. TransUNet: Transformers make strong encoders for medical image segmentation[EB/OL]. [2024-09-20].
[11] CAO H, WANG Y, CHEN J, et al. Swin-Unet: Unet-like pure Transformer for medical image segmentation[C]// Proceedings of the 2022 European Conference on Computer Vision Workshops, LNCS 13803. Cham: Springer, 2023: 205-218.
[12] LIU Z, LIN Y, CAO Y, et al. Swin Transformer: hierarchical vision Transformer using shifted windows[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9992-10002.
[13] JAMALI A, ROY S K, LI J, et al. TransU-Net++: rethinking attention gated TransU-Net for deforestation mapping[J]. International Journal of Applied Earth Observation and Geoinformation, 2023, 120(25): No.103332.
[14] LIN A, CHEN B, XU J, et al. DS-TransUNet: dual Swin Transformer U-Net for medical image segmentation[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: No.4005615.
[15] YANG Y, MEHRKANOON S. AA-TransUNet: attention augmented TransUNet for nowcasting tasks[C]// Proceedings of the 2022 International Joint Conference on Neural Networks. Piscataway: IEEE, 2022: 1-8.
[16] YANG H, BAI Z Y. CoT-TransUNet: lightweight context Transformer medical image segmentation network[J]. Computer Engineering and Applications, 2023, 59(3): 218-225. (in Chinese)
[17] FENG J Q, QIU W G, ZHANG L C. Kidney CT image segmentation based on multiscale UNet[J]. Computer Applications and Software, 2023, 40(8): 221-227, 243. (in Chinese)
[18] YANG L, ZHANG R Y, LI L, et al. SimAM: a simple, parameter-free attention module for convolutional neural networks[C]// Proceedings of the 38th International Conference on Machine Learning. New York: JMLR.org, 2021: 11863-11874.
[19] WANG Q, WU B, ZHU P, et al. ECA-Net: efficient channel attention for deep convolutional neural networks[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 11531-11539.
[20] WANG W, CHEN W, QIU Q, et al. CrossFormer++: a versatile vision Transformer hinging on cross-scale attention[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(5): 3123-3136.
[21] LI Y, YAO T, PAN Y, et al. Contextual Transformer networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(2): 1489-1500.
[22] LANDMAN B, XU Z, IGLESIAS J E, et al. Segmentation outside the cranial vault challenge[C]// Proceedings of the MICCAI 2015 Multi Atlas Labeling Beyond Cranial Vault Workshop and Challenge. Heidelberg: Springer, 2015: 341-350.
[23] IBTEHAZ N, RAHMAN M S. MultiResUNet: rethinking the U-Net architecture for multimodal biomedical image segmentation[J]. Neural Networks, 2020, 121: 74-87.
[24] CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with Transformers[C]// Proceedings of the 2020 European Conference on Computer Vision, LNCS 12346. Cham: Springer, 2020: 213-229.
[25] ZHANG Z, LIU Q, WANG Y. Road extraction by deep residual U-Net[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(5): 749-753.
[26] SUN G, PAN Y, KONG W, et al. DA-TransUNet: integrating spatial and channel dual attention with Transformer U-Net for medical image segmentation[J]. Frontiers in Bioengineering and Biotechnology, 2024, 12: No.1398237.
[27] YANG L. A two-branch parallel medical image segmentation model based on TransUNet[J]. Computer Science and Application, 2024, 14(10): 74-84. (in Chinese)