Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (1): 270-279. DOI: 10.11772/j.issn.1001-9081.2024121765
• Multimedia computing and computer simulation •
Lifang WANG1, Wenjing REN1, Xiaodong GUO2, Rongguo ZHANG1, Lihua HU1
Received: 2024-12-16
Revised: 2025-03-17
Accepted: 2025-03-18
Online: 2026-01-10
Published: 2026-01-10
Contact: Xiaodong GUO
About author: WANG Lifang, born in 1975 in Heshun, Shanxi, Ph.D., associate professor, CCF member. Her research interests include image and graphics processing and intelligent optimization.
Lifang WANG, Wenjing REN, Xiaodong GUO, Rongguo ZHANG, Lihua HU. Trident generative adversarial network for low-dose CT image denoising[J]. Journal of Computer Applications, 2026, 46(1): 270-279.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024121765
| Method | PSNR/dB | SSIM |
|---|---|---|
| LDCT | 26.7891±1.7218 | 0.8244±0.0426 |
| BM3D | 27.7412±1.5248 | 0.8345±0.0265 |
| RED-CNN | 27.7832±1.5488 | 0.8442±0.0226 |
| pix2pix | 28.3253±1.3394 | 0.8231±0.0339 |
| HFSGAN | 30.0818±1.1675 | 0.8521±0.0184 |
| CNCL | 29.0658±2.3534 | 0.8609±0.0224 |
| TED-Net | 31.0871±1.7544 | 0.8783±0.0399 |
| DehazeFormer | 31.1281±1.8033 | 0.8805±0.0400 |
| DualED-GAN | 31.4736±1.7491 | 0.8766±0.0412 |
| Trident GAN | 31.5193±1.9214 | 0.8830±0.0422 |
Tab. 1 Average quantitative performance on the Mayo test set
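The PSNR and SSIM values in Tables 1-3 are standard full-reference metrics computed against the NDCT images. The paper's exact evaluation code is not reproduced here; the following is a minimal sketch of how such per-slice metrics are typically computed, assuming slices normalized to [0, 1] and scikit-image as the metric implementation.

```python
# Minimal sketch of per-slice PSNR/SSIM evaluation against the NDCT reference.
# Assumes float slices normalized to [0, 1]; the paper's windowing and
# normalization may differ.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(denoised: np.ndarray, ndct: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) of a denoised slice against its NDCT reference."""
    psnr = peak_signal_noise_ratio(ndct, denoised, data_range=1.0)
    ssim = structural_similarity(ndct, denoised, data_range=1.0)
    return psnr, ssim

def evaluate_set(denoised_slices, ndct_slices):
    """Mean and standard deviation over a test set, matching the mean±std format of the tables."""
    scores = np.array([evaluate_slice(d, n) for d, n in zip(denoised_slices, ndct_slices)])
    return scores.mean(axis=0), scores.std(axis=0)
```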
| Method | Chest LDCT PSNR/dB | Chest LDCT SSIM | Abdominal LDCT PSNR/dB | Abdominal LDCT SSIM |
|---|---|---|---|---|
| LDCT | 15.354 | 0.483 | 21.327 | 0.483 |
| BM3D | 16.001 | 0.499 | 25.792 | 0.621 |
| RED-CNN | 17.338 | 0.600 | 25.994 | 0.616 |
| pix2pix | 18.122 | 0.613 | 23.131 | 0.462 |
| HFSGAN | 18.483 | 0.638 | 25.352 | 0.561 |
| CNCL | 18.425 | 0.634 | 23.620 | 0.550 |
| TED-Net | 18.788 | 0.652 | 25.872 | 0.584 |
| DehazeFormer | 19.006 | 0.667 | 26.024 | 0.616 |
| DualED-GAN | 18.740 | 0.612 | 26.056 | 0.599 |
| Trident GAN | 19.615 | 0.675 | 26.490 | 0.621 |
Tab. 2 Average quantitative performance on representative local ROIs
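Table 2 reports the same metrics restricted to representative local regions of interest (ROIs) rather than whole slices. A sketch of ROI-level evaluation follows; the crop coordinates are hypothetical placeholders, not the ROIs used in the paper.

```python
# Sketch of ROI-level evaluation: crop the same window from the denoised and
# NDCT slices, then score only the crop. Reuses evaluate_slice() defined above.
def evaluate_roi(denoised, ndct, top, left, height, width):
    d_roi = denoised[top:top + height, left:left + width]
    n_roi = ndct[top:top + height, left:left + width]
    return evaluate_slice(d_roi, n_roi)

# Example with placeholder coordinates (a 64x64 patch starting at row 200, column 180):
# chest_psnr, chest_ssim = evaluate_roi(denoised_chest, ndct_chest, 200, 180, 64, 64)
```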
| Method | PSNR/dB | SSIM |
|---|---|---|
| LDCT | 32.4376±3.0322 | 0.9139±0.0354 |
| BM3D | 32.7401±3.0021 | 0.9346±0.0224 |
| RED-CNN | 32.1272±2.8939 | 0.9186±0.0220 |
| pix2pix | 33.0336±2.7047 | 0.9117±0.0334 |
| HFSGAN | 33.2406±2.4040 | 0.9236±0.0312 |
| CNCL | 32.9240±2.3221 | 0.9270±0.0273 |
| TED-Net | 32.8275±2.5457 | 0.9329±0.0250 |
| DehazeFormer | 33.3953±2.8968 | 0.9377±0.0244 |
| DualED-GAN | 33.4718±2.7483 | 0.9363±0.0256 |
| Trident GAN | 33.6331±2.6214 | 0.9478±0.0322 |
Tab. 3 Average quantitative performance on the Piglet test set
| Method | Parameters/10^6 | Test time per image/s |
|---|---|---|
| BM3D | — | 1.207 |
| RED-CNN | 1.84 | 0.104 |
| pix2pix | 57.19 | 0.067 |
| HFSGAN | 108.87 | 0.088 |
| CNCL | 46.59 | 0.109 |
| TED-Net | 1.75 | 0.142 |
| DehazeFormer | 2.25 | 0.067 |
| DualED-GAN | 25.11 | 0.072 |
| Trident GAN | 26.59 | 0.056 |
Tab. 4 Comparison of parameter counts and per-image test time
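Table 4 compares model size and inference speed. A rough PyTorch sketch of how such numbers are commonly obtained is shown below; `model` stands for any of the compared networks, and the warm-up/synchronization protocol is an assumption rather than the paper's exact timing procedure.

```python
# Sketch: count trainable parameters (reported in units of 10^6) and time a
# single forward pass on one test slice. CUDA kernels run asynchronously, so
# the timing brackets must synchronize with the GPU.
import time
import torch

def count_params_millions(model: torch.nn.Module) -> float:
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

@torch.no_grad()
def time_single_image(model: torch.nn.Module, x: torch.Tensor, device: str = "cuda") -> float:
    model = model.to(device).eval()
    x = x.to(device)
    for _ in range(5):               # warm-up passes to exclude startup cost
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    model(x)
    torch.cuda.synchronize()
    return time.time() - start       # seconds per image
```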
| Model | Trident Block | SCAM | MFF | MSPD | PSNR/dB | SSIM |
|---|---|---|---|---|---|---|
| w/o Tri1 | | √ | √ | √ | 29.7128 | 0.8799 |
| w/o Tri2 | | √ | √ | √ | 31.0521 | 0.8670 |
| w/o MFF | √ | √ | | √ | 31.1205 | 0.8772 |
| w/o MSPD | √ | √ | √ | | 30.9754 | 0.8751 |
| Trident GAN | √ | √ | √ | √ | 31.5193 | 0.8830 |
Tab. 5 Average quantitative performance in ablation experiments
| Model | PSNR/dB | SSIM |
|---|---|---|
| 1-FPA | 31.2493 | 0.8761 |
| Trident GAN (2-FPA) | 31.5193 | 0.8830 |
| 3-FPA | 31.3491 | 0.8850 |
Tab. 6 Average quantitative performance with different numbers of FPA modules
| [1] | XIA W, SHAN H, WANG G, et al. Physics-/model-based and data-driven methods for low-dose CT: a survey [J]. IEEE Signal Processing Magazine, 2023, 40(2): 89-100. |
| [2] | LI Y F, LI S T, ZHANG S, et al. Research progress of deep learning in tumor images classification [J]. Chinese Journal of Cancer Prevention and Treatment, 2024, 31(12): 719-724. |
| [3] | RONNEBERGER O, FISCHER P, BROX T. U-net: convolutional networks for biomedical image segmentation [C]// Proceedings of the 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention, LNCS 9351. Cham: Springer, 2015: 234-241. |
| [4] | ZHANG J, NIU Y, SHANGGUAN Z, et al. A novel denoising method for CT images based on U-net and multi-attention [J]. Computers in Biology and Medicine, 2023, 152: No.106387. |
| [5] | GAO Q, LI Z L, ZHANG J, et al. CoreDiff: contextual error-modulated generalized diffusion model for low-dose CT denoising and generalization [J]. IEEE Transactions on Medical Imaging, 2024, 43(2): 745-759. |
| [6] | XIA W, LYU Q, WANG G. Low-dose CT using denoising diffusion probabilistic model for 20× speedup [EB/OL]. [2024-11-29]. . |
| [7] | GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks [J]. Communications of the ACM, 2020, 63(11): 139-144. |
| [8] | WOLTERINK J M, LEINER T, VIERGEVER M A, et al. Generative adversarial networks for noise reduction in low-dose CT [J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2536-2545. |
| [9] | CHI J, WU C, YU X, et al. Single low-dose CT image denoising using a generative adversarial network with modified U-Net generator and multi-level discriminator [J]. IEEE Access, 2020, 8: 133470-133487. |
| [10] | YANG C, SHEN Y, XU Y, et al. Improving GANs with a dynamic discriminator [C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 15093-15104. |
| [11] | YANG L, SHANGGUAN H, ZHANG X, et al. High-frequency sensitive generative adversarial network for low-dose CT image denoising [J]. IEEE Access, 2020, 8: 930-943. |
| [12] | MAO X, LI Q, XIE H, et al. Least squares generative adversarial networks [C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2813-2821. |
| [13] | LIU A Y, ZHAO H C, CAI W L, et al. Adaptive image deblurring generative adversarial network algorithm based on active discrimination mechanism [J]. Journal of Computer Applications, 2023, 43(7): 2288-2294. |
| [14] | DALMAZ O, YURT M, ÇUKUR T. ResViT: residual vision transformers for multimodal medical image synthesis [J]. IEEE Transactions on Medical Imaging, 2022, 41(10): 2598-2614. |
| [15] | ZHAO J, LI D, KASSAM Z, et al. Tripartite-GAN: synthesizing liver contrast-enhanced MRI to improve tumor detection [J]. Medical Image Analysis, 2020, 63: No.101667. |
| [16] | DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: Transformers for image recognition at scale [EB/OL]. [2024-11-29]. . |
| [17] | CHEN Z, NIU C, GAO Q, et al. LIT-Former: linking in-plane and through-plane Transformers for simultaneous CT image denoising and deblurring [J]. IEEE Transactions on Medical Imaging, 2024, 43(5): 1880-1894. |
| [18] | LI H, YANG X, YANG S, et al. Transformer with double enhancement for low-dose CT denoising [J]. IEEE Journal of Biomedical and Health Informatics, 2023, 27(10): 4660-4671. |
| [19] | PENG J, LI X, ZHANG X. MDnT: a multi-scale denoising Transformer beyond real noise image denoising [C]// Proceedings of the 7th International Conference on Electronic Technology and Information Science. Piscataway: IEEE, 2022: 1-5. |
| [20] | WANG J, ZHANG B, WANG Y, et al. CrossU-Net: dual-modality cross-attention U-Net for segmentation of precancerous lesions in gastric cancer [J]. Computerized Medical Imaging and Graphics, 2024, 112: No.102339. |
| [21] | WANG P, ZHU H, ZHANG H, et al. LRB-T: local reasoning back-projection transformer for the removal of bad weather effects in images [J]. Neural Computing and Applications, 2024, 36(2): 773-789. |
| [22] | ZAMIR S W, ARORA A, KHAN S, et al. Restormer: efficient Transformer for high-resolution image restoration [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 5718-5729. |
| [23] | CHEN C F R, FAN Q, PANDA R. CrossViT: cross-attention multi-scale Vision Transformer for image classification [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 347-356. |
| [24] | American Association of Physicists in Medicine. Low dose CT grand challenge [EB/OL]. [2024-04-20]. . |
| [25] | Piglet dataset [DB/OL]. [2024-04-26]. . |
| [26] | LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network [C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 105-114. |
| [27] | DABOV K, FOI A, KATKOVNIK V, et al. Image denoising by sparse 3-D transform-domain collaborative filtering [J]. IEEE Transactions on Image Processing, 2007, 16(8): 2080-2095. |
| [28] | CHEN H, ZHANG Y, KALRA M K, et al. Low-dose CT with a residual encoder-decoder convolutional neural network [J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2524-2535. |
| [29] | ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks [C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 5967-5976. |
| [30] | GENG M, MENG X, YU J, et al. Content-noise complementary learning for medical image denoising [J]. IEEE Transactions on Medical Imaging, 2022, 41(2): 407-419. |
| [31] | WANG D, WU Z, YU H. TED-Net: convolution-free T2T vision transformer-based encoder-decoder dilation network for low-dose CT denoising [C]// Proceedings of the 2021 International Workshop on Machine Learning in Medical Imaging, LNCS 12966. Cham: Springer, 2021: 416-425. |
| [32] | SONG Y, HE Z, QIAN H, et al. Vision Transformers for single image dehazing [J]. IEEE Transactions on Image Processing, 2023, 32: 1927-1941. |
| [33] | SHANGGUAN H, REN H Y, ZHANG X, et al. Low-dose CT denoising model based on dual encoder-decoder generative adversarial network [J]. Journal of Computer Applications, 2025, 45(2): 624-632. |