[1] HASSEN R, WANG Z, SALAMA M M A. Objective quality assessment for multiexposure multi-focus image fusion[J]. IEEE Transactions on Image Processing, 2015, 24(9):2712-2724.
[2] MALVIYA A, BHIRUD S G. Objective criterion for performance evaluation of image fusion techniques[J]. International Journal of Computer Applications, 2010, 1(25):57-60.
[3] PETROVIC V. Subjective tests for image fusion evaluation and objective metric validation[J]. Information Fusion, 2007, 8(2):208-216.
[4] 张小利,李雄飞,李军.融合图像质量评价指标的相关性分析及性能评估[J]. 自动化学报,2014,40(2):306-315.(ZHANG X L, LI X F, LI J. Validation and correlation analysis of metrics for evaluating performance of image fusion[J]. Acta Automatica Sinica, 2014, 40(2):306-315.)
[5] TIRUPAL T, MOHAN B C, KUMAR S S. Multimodal medical image fusion based on Yager's intuitionistic fuzzy sets[J]. Iranian Journal of Fuzzy Systems, 2019, 16(1):33-48.
[6] ZHANG X, FENG X, WANG W, et al. Edge strength similarity for image quality assessment[J]. IEEE Signal Processing Letters, 2013, 20(4):319-322.
[7] SHAH P, MERCHANT S N, DESAI U B. Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition[J]. Signal, Image and Video Processing, 2013, 7(1):95-109.
[8] HOSSNY M, NAHAVANDI S, CREIGHTON D. Comments on ‘Information measure for performance of image fusion’[J]. Electronics Letters, 2008, 44(18):1066-1067.
[9] LIU Z, BLASCH E, XUE Z, et al. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision:a comparative study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(1):94-109.
[10] CVEJIC N, CANAGARAJAH C N, BULL D R. Image fusion metric based on mutual information and Tsallis entropy[J]. Electronics Letters, 2006, 42(11):626-627.
[11] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment:from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4):600-612.
[12] PIELLA G, HEIJMANS H. A new quality metric for image fusion[C]//Proceedings of the 2003 International Conference on Image Processing. Piscataway, NJ:IEEE, 2003:173-176.
[13] PISTONESI S, MARTINEZ J, OJEDA S M, et al. Structural similarity metrics for quality image fusion assessment:algorithms[J]. Image Processing on Line, 2018, 8:345-368.
[14] CHEN H, VARSHNEY P K. A human perception inspired quality metric for image fusion based on regional information[J]. Information Fusion, 2007, 8(2):193-207.
[15] LI S, HONG R, WU X, et al. A novel similarity based quality metric for image fusion[C]//Proceedings of the 2008 International Conference on Audio, Language and Image Processing. Piscataway, NJ:IEEE, 2008:167-172.
[16] CHEN H, VARSHNEY P K. A human perception inspired quality metric for image fusion based on regional information[J]. Information Fusion, 2007, 8(2):193-207.
[17] CHEN Y, BLUM R S. A new automated quality assessment algorithm for image fusion[J]. Image and Vision Computing, 2009, 27(10):1421-1432.
[18] 程刚,王春恒.基于结构和纹理特征融合的场景图像分类[J].计算机工程,2011,37(5):227-229.(CHENG G, WANG C H. Scene image categorization based on structure and texture feature fusion[J]. Computer Engineering, 2011, 37(5):227-229.)
[19] FANG Y, CHEN Q, SUN L, et al. Decomposition and extraction:a new framework for visual classification[J]. IEEE Transactions on Image Processing, 2014, 23(8):3412-3427.
[20] WANG Z, WANG W, SU B. Multi-sensor image fusion algorithm based on multiresolution analysis[J]. International Journal of Online Engineering, 2018, 14(6):44-57.
[21] BEHRMANN M, KIMCHI R. What does visual agnosia tell us about perceptual organization and its relationship to object perception?[J]. Journal of Experimental Psychology:Human Perception and Performance, 2003, 29(1):19-42.
[22] 陈浩,王延杰.基于拉普拉斯金字塔变换的图像融合算法研究[J].激光与红外,2009,39(4):439-442.(CHEN H, WANG Y J. Research on image fusion algorithm based on Laplacian pyramid transform[J]. Laser and Infrared, 2009, 39(4):439-442.)
[23] LIU Y, LIU S, WANG Z. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 2015, 24:147-164.
[24] BHATNAGAR G, WU Q M J, LIU Z. Directive contrast based multimodal medical image fusion in NSCT domain[J]. IEEE Transactions on Multimedia, 2013, 15(5):1014-1024.
[25] LI S, KANG X, HU J. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 2013, 22(7):2864-2875.
[26] 屈小波,闫敬文,肖弘智,等.非降采样Contourlet域内空间频率激励的PCNN图像融合算法[J].自动化学报,2008,34(12):1508-1514.(QU X B, YAN J W, XIAO H Z, et al. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled Contourlet transform domain[J]. Acta Automatica Sinica, 2008, 34(12):1508-1514.)
[27] LIU Y, CHEN X, WANG Z, et al. Deep learning for pixel-level image fusion:recent advances and future prospects[J]. Information Fusion, 2018, 42:158-173.