Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (9): 2948-2954. DOI: 10.11772/j.issn.1001-9081.2022081242
Received:
2022-08-22
Revised:
2023-01-05
Accepted:
2023-01-06
Online:
2023-09-10
Published:
2023-09-10
Contact:
Zhiwei QIAO
About author:
CHEN Mengmeng, born in 1998 in Jiexiu, Shanxi, M. S. candidate. Her research interests include medical image reconstruction and image processing.
Supported by:
Abstract:
To address the streak artifacts produced by analytical sparse-view reconstruction, a Channel Attention U-shaped Transformer (CA-Uformer) was proposed to achieve high-precision sparse-view Computed Tomography (CT) reconstruction. CA-Uformer fuses channel attention with the spatial attention of the Transformer; this dual attention mechanism makes it easier for the network to learn image details. The model adopts the proven U-shaped architecture to fuse multi-scale image information, and implements the feed-forward network with convolution operations, further coupling the local information modeling ability of Convolutional Neural Networks (CNNs) with the global information capture ability of the Transformer. Experimental results show that, compared with the classical U-Net, CA-Uformer improves Peak Signal-to-Noise Ratio (PSNR) by 3.27 dB and Structural SIMilarity (SSIM) by 3.14%, and reduces Root Mean Square Error (RMSE) by 35.29%, a clear improvement. Therefore, CA-Uformer achieves higher sparse-view reconstruction accuracy and stronger artifact suppression.
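The dual attention mechanism described in the abstract combines channel attention with the Transformer's spatial attention. The paper itself contains no code; as a rough numpy sketch of the channel-attention side only, assuming a squeeze-and-excitation style design (global average pooling, a two-layer bottleneck, a sigmoid gate) with random weights standing in for learned parameters, and with all function and variable names (`channel_attention`, `w1`, `w2`) being illustrative rather than taken from the paper:

```python
import numpy as np

def channel_attention(x, reduction=4, w1=None, w2=None):
    """SE-style channel attention over a feature map x of shape (C, H, W).

    Global average pooling produces one descriptor per channel; a two-layer
    bottleneck (ReLU then sigmoid) turns it into per-channel weights that
    rescale the input feature map.
    """
    c = x.shape[0]
    rng = np.random.default_rng(0)
    # Random weights stand in for learned parameters in this sketch.
    if w1 is None:
        w1 = rng.standard_normal((c // reduction, c)) * 0.1
    if w2 is None:
        w2 = rng.standard_normal((c, c // reduction)) * 0.1
    squeeze = x.mean(axis=(1, 2))                    # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate, (C,)
    return x * weights[:, None, None]                # rescale each channel

x = np.ones((8, 4, 4))
y = channel_attention(x)
```

In a real network the gate weights are learned end to end; the point of the sketch is that the output keeps the input's shape while each channel is scaled by a value in (0, 1), which is what lets the module reweight feature channels before or after the spatial attention of the Transformer block.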
Mengmeng CHEN, Zhiwei QIAO. Sparse reconstruction of CT images based on Uformer with fused channel attention[J]. Journal of Computer Applications, 2023, 43(9): 2948-2954.
Tab. 1 Experimental results of different algorithms on test set

| Algorithm | PSNR/dB | SSIM | RMSE | Parameters/10⁶ | FLOPs/GFLOPs | Reconstruction time/s |
| --- | --- | --- | --- | --- | --- | --- |
| DnCNN | 32.18 | 0.939 | 0.026 | 0.14 | 18.45 | 0.18 |
| RED-CNN | 32.96 | 0.963 | 0.023 | 0.21 | 24.60 | 0.18 |
| U-Net | 36.03 | 0.955 | 0.017 | 7.77 | 27.42 | 0.18 |
| FBPConvNet | 37.69 | 0.962 | 0.014 | 9.16 | 29.60 | 0.19 |
| Uformer | 38.54 | 0.982 | 0.012 | 20.77 | 82.01 | 0.31 |
| CA-Uformer | 39.30 | 0.985 | 0.011 | 76.61 | 311.25 | 0.30 |
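The PSNR and RMSE columns in Tab. 1 are directly related: on a unit data range, PSNR = -20·log10(RMSE). A minimal check of that relation (the helper name `psnr_rmse` is illustrative, and the uniform-error image is a toy input, not the paper's data):

```python
import numpy as np

def psnr_rmse(reference, reconstruction, data_range=1.0):
    """Compute PSNR (dB) and RMSE between a reference image and a reconstruction."""
    mse = np.mean((reference - reconstruction) ** 2)
    rmse = np.sqrt(mse)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    return psnr, rmse

ref = np.zeros((64, 64))
rec = np.full((64, 64), 0.011)  # uniform error equal to CA-Uformer's RMSE in Tab. 1
psnr, rmse = psnr_rmse(ref, rec)
```

An RMSE of 0.011 on a unit data range maps to about 39.2 dB PSNR, consistent with the 39.30 dB reported for CA-Uformer (the small gap reflects that real reconstruction error is not uniform).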
Tab. 2 Experimental results on test set with different numbers of sparse angles

| Number of sparse angles | PSNR/dB | SSIM | RMSE |
| --- | --- | --- | --- |
| 15 | 33.11 | 0.956 | 0.022 |
| 30 | 36.40 | 0.974 | 0.015 |
| 60 | 39.30 | 0.985 | 0.011 |
| 90 | 40.36 | 0.988 | 0.010 |
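The sparse-angle counts in Tab. 2 correspond to keeping an evenly spaced subset of a dense projection scan. Purely for illustration, assuming a dense scan of 360 views over 180° (the page does not state the paper's actual geometry):

```python
import numpy as np

# Hypothetical dense scan: 360 evenly spaced projection angles over 180 degrees.
full_views = np.linspace(0.0, 180.0, 360, endpoint=False)

# Sparse-view acquisition keeps every k-th view.
sparse_60 = full_views[::6]    # every 6th view  -> 60 angles, as in Tab. 2
sparse_15 = full_views[::24]   # every 24th view -> 15 angles
```

The fewer the retained views, the stronger the streak artifacts in analytical reconstruction, which is the degradation the network is trained to remove.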
Tab. 3 Results on test set with different modules

| Model | PSNR/dB | SSIM | RMSE | Parameters/10⁶ | FLOPs/GFLOPs | Reconstruction time/s |
| --- | --- | --- | --- | --- | --- | --- |
| No-CA | 36.44 | 0.975 | 0.015 | 76.56 | 311.22 | 0.30 |
| No-Res | 38.83 | 0.984 | 0.012 | 76.61 | 311.25 | 0.30 |
| CA-Uformer | 39.30 | 0.985 | 0.011 | 76.61 | 311.25 | 0.30 |
Tab. 4 Results of different module positions on test set

| Module position design | PSNR/dB | SSIM | RMSE |
| --- | --- | --- | --- |
| CA-Identity | 38.04 | 0.984 | 0.013 |
| Transformer-CA | 38.57 | 0.983 | 0.012 |
| CA-Uformer | 39.30 | 0.985 | 0.011 |
Tab. 5 Results of different LFE blocks on test set

| LFE block implementation | PSNR/dB | SSIM | RMSE | Parameters/10⁶ | FLOPs/GFLOPs | Reconstruction time/s |
| --- | --- | --- | --- | --- | --- | --- |
| MLP | 38.26 | 0.982 | 0.012 | 19.27 | 78.41 | 0.27 |
| 3×3 Conv+BN+GELU | 38.53 | 0.974 | 0.012 | 20.66 | 84.24 | 0.27 |
| LeFF | 38.62 | 0.973 | 0.012 | 19.43 | 80.13 | 0.31 |
| Conv | 39.30 | 0.985 | 0.011 | 76.61 | 311.25 | 0.30 |
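Tab. 5 compares implementations of the local feature enhancement (LFE) block, i.e. the feed-forward stage of the Transformer block. The advantage of the convolutional variants over a plain MLP is that a token-wise MLP transforms every token independently, while a convolution mixes spatially neighbouring tokens. A rough numpy sketch of that difference, using a fixed 3×3 averaging kernel in place of learned weights and GELU replaced by ReLU for brevity (all names here are illustrative, not the paper's):

```python
import numpy as np

def depthwise_conv3x3(x):
    """3x3 depthwise convolution with zero padding, applied per channel.

    A fixed averaging kernel stands in for learned weights in this sketch.
    """
    c, h, w = x.shape
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    kernel = np.full((3, 3), 1.0 / 9.0)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = np.sum(padded[:, i:i + 3, j:j + 3] * kernel, axis=(1, 2))
    return out

def conv_feed_forward(tokens, hw):
    """Conv-style feed-forward: reshape tokens (N, C) back to a (C, H, W) map,
    apply a depthwise 3x3 conv so spatially neighbouring tokens interact
    (which a token-wise MLP cannot do), then flatten back to tokens."""
    h, w = hw
    n, c = tokens.shape
    fmap = tokens.T.reshape(c, h, w)                  # token sequence -> feature map
    fmap = np.maximum(depthwise_conv3x3(fmap), 0.0)   # conv + ReLU (GELU in practice)
    return fmap.reshape(c, n).T                       # feature map -> token sequence

tokens = np.arange(16.0).reshape(16, 1)  # a 4x4 spatial grid, one channel
out = conv_feed_forward(tokens, (4, 4))
```

Each output token is now a local average of its 3×3 spatial neighbourhood, so the feed-forward stage itself carries local spatial context, which is the coupling of CNN locality and Transformer globality the abstract describes.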
[1] SIDKY E Y, KAO C M, PAN X C. Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT[J]. Journal of X-Ray Science and Technology, 2006, 14(2): 119-139.
[2] QIAO Z W. The total variation constrained data divergence minimization model for image reconstruction and its Chambolle-Pock solving algorithm[J]. Acta Physica Sinica, 2018, 67(19): 338-351. (in Chinese) DOI: 10.7498/aps.67.20180839
[3] BROOKS T, MILDENHALL B, XUE T F, et al. Unprocessing images for learned raw denoising[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 11028-11037. DOI: 10.1109/cvpr.2019.01129
[4] LEE D, YOO J, YE J C. Deep residual learning for compressed sensing MRI[C]// Proceedings of the IEEE 14th International Symposium on Biomedical Imaging. Piscataway: IEEE, 2017: 15-18. DOI: 10.1109/isbi.2017.7950457
[5] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11211. Cham: Springer, 2018: 3-19.
[6] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 2261-2269. DOI: 10.1109/cvpr.2017.243
[7] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. Cambridge: MIT Press, 2014: 2672-2680.
[8] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]// Proceedings of the 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 9351. Cham: Springer, 2015: 234-241.
[9] JIN K H, McCANN M T, FROUSTEY E, et al. Deep convolutional neural network for inverse problems in imaging[J]. IEEE Transactions on Image Processing, 2017, 26(9): 4509-4522. DOI: 10.1109/tip.2017.2713099
[10] PAN X C, SIDKY E Y, VANNIER M. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?[J]. Inverse Problems, 2009, 25(12): No.123009. DOI: 10.1088/0266-5611/25/12/123009
[11] CHEN H, ZHANG Y, KALRA M K, et al. Low-dose CT with a residual encoder-decoder convolutional neural network[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2524-2535. DOI: 10.1109/tmi.2017.2715284
[12] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. DOI: 10.1109/cvpr.2016.90
[13] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7132-7141. DOI: 10.1109/cvpr.2018.00745
[14] ZHANG Z C, LIANG X K, DONG X, et al. A sparse-view CT reconstruction method based on combination of DenseNet and deconvolution[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1407-1417. DOI: 10.1109/tmi.2018.2823338
[15] XU L, REN J S J, LIU C, et al. Deep convolutional neural network for image deconvolution[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 1. Cambridge: MIT Press, 2014: 1790-1798.
[16] WOLTERINK J M, LEINER T, VIERGEVER M A, et al. Generative adversarial networks for noise reduction in low-dose CT[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2536-2545. DOI: 10.1109/tmi.2017.2708987
[17] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2017: 6000-6010.
[18] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[EB/OL]. (2021-06-03) [2022-10-22].
[19] HUANG Z L, WANG X G, HUANG L C, et al. CCNet: criss-cross attention for semantic segmentation[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 603-612. DOI: 10.1109/iccv.2019.00069
[20] LIU Z, LIN Y, CAO Y, et al. Swin Transformer: hierarchical vision transformer using shifted windows[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9992-10002. DOI: 10.1109/iccv48922.2021.00986
[21] WANG X L, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7794-7803. DOI: 10.1109/cvpr.2018.00813
[22] WANG Z D, CUN X D, BAO J M, et al. Uformer: a general U-shaped Transformer for image restoration[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 17662-17672. DOI: 10.1109/cvpr52688.2022.01716
[23] PENG Z L, HUANG W, GU S Z, et al. Conformer: local features coupling global representations for visual recognition[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 357-366. DOI: 10.1109/iccv48922.2021.00042
[24] YUAN K, GUO S P, LIU Z W, et al. Incorporating convolution designs into visual Transformers[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 559-568. DOI: 10.1109/iccv48922.2021.00062
[25] ZHANG K, ZUO W M, CHEN Y J, et al. Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising[J]. IEEE Transactions on Image Processing, 2017, 26(7): 3142-3155. DOI: 10.1109/tip.2017.2662206