[1] SHEN R, CHENG I, BASU A. Cross-scale coefficient selection for volumetric medical image fusion[J]. IEEE Transactions on Biomedical Engineering, 2013, 60(4):1069-1079.
[2] TAO L, QIAN Z. An improved medical image fusion algorithm based on wavelet transform[C]//Proceedings of the 7th International Conference on Natural Computation. Piscataway:IEEE, 2011:76-78.
[3] 江泽涛,杨阳,郭川.基于提升小波变换的图像融合改进算法的研究[J].图像与信号处理,2015,4:11-19.(JIANG Z T, YANG Y, GUO C. Study on the improvement of image fusion algorithm based on lifting wavelet transform[J]. Journal of Image and Signal Processing, 2015, 4:11-19.)
[4] 曹义亲,雷章明,黄晓生.基于区域的非下采样形态小波医学图像融合算法[J].计算机应用研究,2012,29(6):2379-2381.(CAO Y Q, LEI Z M, HUANG X S. Region-based algorithm for non-subsampled morphological wavelet medical image fusion[J]. Application Research of Computers, 2012, 29(6):2379-2381.)
[5] LI S, YANG B, HU J. Performance comparison of different multi-resolution transforms for image fusion[J]. Information Fusion, 2011, 12(2):74-84.
[6] WANG J, PENG J, FENG X, et al. Fusion method for infrared and visible images by using non-negative sparse representation[J]. Infrared Physics and Technology, 2014, 67:477-489.
[7] XIANG T, YAN L, GAO R. A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain[J]. Infrared Physics and Technology, 2015, 69:53-61.
[8] MA J, ZHOU Z, WANG B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics and Technology, 2017, 82:8-17.
[9] HE G, XING S, HE X, et al. Image fusion method based on simultaneous sparse representation with non-subsampled contourlet transform[J]. IET Computer Vision, 2019, 13(2):240-248.
[10] DING S, ZHAO X, XU H, et al. NSCT-PCNN image fusion based on image gradient motivation[J]. IET Computer Vision, 2018, 12(4):377-383.
[11] 蔺素珍,韩泽.基于深度堆叠卷积神经网络的图像融合[J].计算机学报,2017,40(11):2506-2518.(LIN S Z, HAN Z. Image fusion based on deep stacked convolutional neural network[J]. Chinese Journal of Computers, 2017, 40(11):2506-2518.)
[12] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Red Hook:Curran Associates Inc., 2012:1097-1105.
[13] RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[EB/OL].[2019-01-20]. https://arxiv.org/pdf/1511.06434.pdf.
[14] PATHAK D, KRÄHENBÜHL P, DONAHUE J, et al. Context encoders:feature learning by inpainting[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2016:2536-2544.
[15] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2017:5967-5976.
[16] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge, MA:MIT Press, 2014:2672-2680.
[17] 黄宣珲.基于深度学习的医学图像处理[J].中国新通信,2019,21(3):103-105.(HUANG X H. Medical image processing based on deep learning[J]. China New Telecommunications, 2019, 21(3):103-105.)
[18] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2016:770-778.
[19] LIU Y, CHEN X, WARD R K, et al. Medical image fusion via convolutional sparsity based morphological component analysis[J]. IEEE Signal Processing Letters, 2019, 26(3):485-489.
[20] XU X, WANG Y, CHEN S. Medical image fusion using discrete fractional wavelet transform[J]. Biomedical Signal Processing and Control, 2016, 27:103-111.
[21] LI H, MANJUNATH B S, MITRA S K. Multisensor image fusion using the wavelet transform[J]. Graphical Models and Image Processing, 1995, 57(3):235-245.
[22] DAS S, KUNDU M K. NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency[J]. Medical and Biological Engineering and Computing, 2012, 50(10):1105-1114.
[23] YANG B, LI S. Multifocus image fusion and restoration with sparse representation[J]. IEEE Transactions on Instrumentation and Measurement, 2010, 59(4):884-892.
[24] ZONG J, QIU T. Medical image fusion based on sparse representation of classified image patches[J]. Biomedical Signal Processing and Control, 2017, 34:195-205.
[25] VEIT A, WILBER M J, BELONGIE S. Residual networks behave like ensembles of relatively shallow networks[C]//Proceedings of the 29th International Conference on Neural Information Processing Systems. Cambridge, MA:MIT Press, 2016:550-558.
[26] HAN D, KIM J, KIM J. Deep pyramidal residual networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2017:6307-6315.
[27] IOFFE S, SZEGEDY C. Batch normalization:accelerating deep network training by reducing internal covariate shift[EB/OL].[2019-01-20]. https://arxiv.org/pdf/1502.03167.pdf.
[28] BA J L, KIROS J R, HINTON G E. Layer normalization[EB/OL].[2019-01-20]. https://arxiv.org/pdf/1607.06450.pdf.
[29] LIM B, SON S, KIM H, et al. Enhanced deep residual networks for single image super-resolution[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2017:1132-1140.