Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (9): 2948-2954.DOI: 10.11772/j.issn.1001-9081.2022081242

• Multimedia computing and computer simulation •

Sparse reconstruction of CT images based on Uformer with fused channel attention

Mengmeng CHEN, Zhiwei QIAO

  1. School of Computer and Information Technology,Shanxi University,Taiyuan Shanxi 030006,China
  • Received:2022-08-22 Revised:2023-01-05 Accepted:2023-01-06 Online:2023-09-10 Published:2023-09-10
  • Contact: Zhiwei QIAO
  • About author: CHEN Mengmeng, born in 1998, M. S. candidate. Her research interests include medical image reconstruction and image processing.
  • Supported by:
    National Natural Science Foundation of China(62071281)



To address the streak artifacts produced by analytic methods in sparse reconstruction, a Channel Attention U-shaped Transformer (CA-Uformer) was proposed to achieve high-precision sparse Computed Tomography (CT) reconstruction. In CA-Uformer, channel attention was fused with the spatial attention of the Transformer, and this dual-attention mechanism made image detail information easier for the network to learn; a U-shaped architecture was adopted to fuse multi-scale image information; and the feed-forward network was implemented with convolutional operations, further coupling the local-information modeling ability of the Convolutional Neural Network (CNN) with the global-information capturing ability of the Transformer. Experimental results show that, compared with the classical U-Net, CA-Uformer improves the Peak Signal-to-Noise Ratio (PSNR) by 3.27 dB and the Structural Similarity (SSIM) by 3.14%, and reduces the Root Mean Square Error (RMSE) by 35.29%, a significant improvement. Thus, CA-Uformer achieves higher-precision sparse reconstruction and stronger artifact suppression.
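The abstract does not give the exact block equations of CA-Uformer. As a minimal illustrative sketch only, the dual-attention idea (spatial self-attention fused with a channel gate) can be written as follows, assuming single-head attention with identity Q/K/V projections, a squeeze-and-excitation-style channel gate, and additive fusion with a residual connection — all of which are assumptions, not the paper's published design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(x):
    """Plain single-head self-attention over spatial positions.
    x: (N, C) array of N flattened pixel tokens with C channels.
    Identity Q/K/V projections are used for brevity."""
    scores = x @ x.T / np.sqrt(x.shape[1])   # (N, N) pairwise similarities
    return softmax(scores, axis=-1) @ x      # attention-weighted mix of tokens

def channel_attention(x):
    """Squeeze-and-excitation-style channel gate: global average pool
    over positions, then a sigmoid gate that rescales each channel."""
    pooled = x.mean(axis=0)                  # (C,) per-channel statistic
    gate = 1.0 / (1.0 + np.exp(-pooled))     # sigmoid gate in (0, 1)
    return x * gate                          # broadcast over positions

def dual_attention_block(x):
    """Fuse the two branches additively around a residual connection
    (the fusion rule is an assumption for illustration)."""
    return x + spatial_self_attention(x) + channel_attention(x)

# Toy input: 16 spatial positions, 8 channels
x = np.random.default_rng(0).standard_normal((16, 8))
y = dual_attention_block(x)
print(y.shape)  # (16, 8)
```

In a real reconstruction network the projections would be learned, attention would be windowed (as in Uformer), and the feed-forward sublayer would use convolutions as the abstract describes; this sketch only shows how the two attention branches can operate on the same token grid.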

Key words: Computed Tomography (CT), sparse reconstruction, streak artifact, Transformer, channel attention


