[1] 张鸿, 吴飞, 张晓龙. 基于关系矩阵融合的多媒体数据聚类[J]. 计算机学报, 2011, 34(9): 1705-1711. (ZHANG H, WU F, ZHANG X L. Multimedia data clustering based on correlation matrix fusion[J]. Chinese Journal of Computers, 2011, 34(9): 1705-1711.)
[2] 黄育, 张鸿. 基于潜语义主题加强的跨媒体检索算法[J]. 计算机应用, 2017, 37(4): 1061-1064, 1110. (HUANG Y, ZHANG H. Cross-media retrieval algorithm based on latent semantic topic reinforcement[J]. Journal of Computer Applications, 2017, 37(4): 1061-1064, 1110.)
[3] GONG X L, HUANG L P, WANG F W. Deep semantic correlation learning based hashing for multimedia cross-modal retrieval[C]// Proceedings of the 2018 IEEE International Conference on Data Mining. Piscataway: IEEE, 2018: 117-126.
[4] PENG Y X, HUANG X, ZHAO Y Z. An overview of cross-media retrieval: concepts, methodologies, benchmarks, and challenges[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(9): 2372-2385.
[5] XU X, SHEN F M, YANG Y, et al. Learning discriminative binary codes for large-scale cross-modal retrieval[J]. IEEE Transactions on Image Processing, 2017, 26(5): 2494-2507.
[6] WANG D, GAO X B, WANG X M, et al. Semantic topic multimodal hashing for cross-media retrieval[C]// Proceedings of the 24th International Joint Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2015: 3890-3896.
[7] DING G G, GUO Y C, ZHOU J L. Collective matrix factorization hashing for multimodal data[C]// Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2014: 2083-2090.
[8] JIANG Q Y, LI W J. Deep cross-modal hashing[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 3270-3278.
[9] CRESWELL A, WHITE T, DUMOULIN V, et al. Generative adversarial networks: an overview[J]. IEEE Signal Processing Magazine, 2018, 35(1): 53-65.
[10] ZHANG J, PENG Y X, YUAN M K. SCH-GAN: semi-supervised cross-modal hashing by generative adversarial network[J]. IEEE Transactions on Cybernetics, 2020, 50(2): 489-502.
[11] LI C, DENG C, LI N, et al. Self-supervised adversarial hashing networks for cross-modal retrieval[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4242-4251.
[12] SHI Y F, YOU X G, ZHENG F, et al. Equally-guided discriminative hashing for cross-modal retrieval[C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2019: 4767-4773.
[13] BRONSTEIN M M, BRONSTEIN A M, MICHEL F, et al. Data fusion through cross-modality metric learning using similarity-sensitive hashing[C]// Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2010: 3594-3601.
[14] ZHANG D Q, LI W J. Large-scale supervised multimodal hashing with semantic correlation maximization[C]// Proceedings of the 28th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2014: 2177-2183.
[15] LIN Z J, DING G G, HU M Q, et al. Semantics-preserving hashing for cross-view retrieval[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 3864-3872.
[16] YANG E K, DENG C, LIU W, et al. Pairwise relationship guided deep hashing for cross-modal retrieval[C]// Proceedings of the 31st AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2017: 1618-1625.
[17] HUISKES M J, LEW M S. The MIR Flickr retrieval evaluation[C]// Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval. New York: ACM, 2008: 39-43.
[18] CHUA T S, TANG J H, HONG R C, et al. NUS-WIDE: a real-world web image database from National University of Singapore[C]// Proceedings of the 2009 ACM International Conference on Image and Video Retrieval. New York: ACM, 2009: No. 48.
[19] CHATFIELD K, SIMONYAN K, VEDALDI A, et al. Return of the devil in the details: delving deep into convolutional nets[C]// Proceedings of the 2014 British Machine Vision Conference. Durham: BMVA Press, 2014: No. 054.
[20] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]// Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2009: 248-255.