[1] DONG C, DENG Y, CHEN C L, et al. Compression artifacts reduction by a deep convolutional network[C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2015: 576-584.
 [2] GUO J, CHAO H. Building dual-domain representations for compression artifacts reduction[C]// ECCV 2016: Proceedings of the 2016 European Conference on Computer Vision. Berlin: Springer, 2016: 628-644.
 [3] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J/OL]. arXiv Preprint, 2014, 2014: arXiv:1406.2661[2014-06-10]. https://arxiv.org/abs/1406.2661.
 [4] GUO J, CHAO H. One-to-many network for visually pleasing compression artifacts reduction[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 4867-4876.
 [5] GALTERI L, SEIDENARI L, BERTINI M, et al. Deep generative adversarial compression artifact removal[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2017: 4836-4845.
 [6] 杨丽丽,盛国.一种基于卷积神经网络的矿井视频图像降噪方法[J]. 矿业研究与开发, 2018, 38(2): 106-109. (YANG L L, SHENG G. A mine video image denoising method based on convolutional neural network[J]. Mining Research and Development, 2018, 38(2): 106-109.)
 [7] REN W, PAN J, CAO X, et al. Video deblurring via semantic segmentation and pixel-wise non-linear kernel[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2017: 1086-1094.
 [8] SAJJADI M S M, VEMULAPALLI R, BROWN M. Frame-recurrent video super-resolution[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 6626-6634.
 [9] TAO X, GAO H, LIAO R, et al. Detail-revealing deep video super-resolution[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2017: 4472-4480.
 [10] 李玲慧, 杜军平, 梁美玉, 等. 基于时空特征和神经网络的视频超分辨率算法[J]. 北京邮电大学学报, 2016, 39(4): 1-6. (LI L H, DU J P, LIANG M Y, et al. Video super resolution algorithm based on spatiotemporal features and neural networks[J]. Journal of Beijing University of Posts and Telecommunications, 2016, 39(4): 1-6.)
 [11] WANG T, CHEN M, CHAO H. A novel deep learning-based method of improving coding efficiency from the decoder-end for HEVC[C]// Proceedings of the 2017 Data Compression Conference. Piscataway, NJ: IEEE, 2017: 410-419.
 [12] YANG R, XU M, WANG Z. Decoder-side HEVC quality enhancement with scalable convolutional neural network[C]// Proceedings of the 2017 IEEE International Conference on Multimedia and Expo. Piscataway, NJ: IEEE, 2017: 817-822.
 [13] YANG R, XU M, WANG Z, et al. Enhancing quality for HEVC compressed videos[J/OL]. arXiv Preprint, 2018, 2018: arXiv:1709.06734(2017-09-20)[2018-07-06]. https://arxiv.org/abs/1709.06734.
 [14] YANG R, XU M, LIU T, et al. Multi-frame quality enhancement for compressed video[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 6664-6673.
 [15] DOSOVITSKIY A, FISCHERY P, ILG E, et al. FlowNet: learning optical flow with convolutional networks[C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2015: 2758-2766.
 [16] BAILER C, TAETZ B, STRICKER D. Flow fields: dense correspondence fields for highly accurate large displacement optical flow estimation[C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2015: 4015-4023.
 [17] REVAUD J, WEINZAEPFEL P, HARCHAOUI Z, et al. EpicFlow: edge-preserving interpolation of correspondences for optical flow[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2015: 1164-1172.
 [18] ILG E, MAYER N, SAIKIA T, et al. FlowNet 2.0: evolution of optical flow estimation with deep networks[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 2462-2470.
 [19] MAHAJAN D, HUANG F C, MATUSIK W, et al. Moving gradients: a path-based method for plausible image interpolation[J]. ACM Transactions on Graphics, 2009, 28(3): Article No. 42.
 [20] JADERBERG M, SIMONYAN K, ZISSERMAN A, et al. Spatial transformer networks[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. Cambridge, MA: MIT Press, 2015: 2017-2025.
 [21] NIKLAUS S, MAI L, LIU F. Video frame interpolation via adaptive separable convolution[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2017: 261-270.
 [22] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 770-778.
 [23] HE K, ZHANG X, REN S, et al. Identity mappings in deep residual networks[C]// ECCV 2016: Proceedings of the 2016 European Conference on Computer Vision. Berlin: Springer, 2016: 630-645.
 [24] DROZDZAL M, VORONTSOV E, CHARTRAND G, et al. The importance of skip connections in biomedical image segmentation[M]// Deep Learning and Data Labeling for Medical Applications. Berlin: Springer, 2016: 179-187.
 [25] BOSSEN F. Common test conditions and software reference configurations[S/OL]. [2013-06-20]. http://wftp3.itu.int/av-arch/jctvc-site/2010_07_B_Geneva/JCTVC-B300.doc.
 [26] GLOROT X, BENGIO Y. Understanding the difficulty of training deep feedforward neural networks[C]// Proceedings of the 13th International Conference on Artificial Intelligence and Statistics. Sardinia, Italy: JMLR, 2010: 249-256.
 [27] KINGMA D P, BA J. Adam: a method for stochastic optimization[J/OL]. arXiv Preprint, 2014, 2014: arXiv:1412.6980[2018-03-20]. https://arxiv.org/abs/1412.6980.
 [28] BARRON J T. A more general robust loss function[J/OL]. arXiv Preprint, 2017, 2017: arXiv:1701.03077(2017-01-11)[2017-01-11]. https://arxiv.org/abs/1701.03077.
 [29] LAI W S, HUANG J B, AHUJA N, et al. Deep Laplacian pyramid networks for fast and accurate super-resolution[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 5835-5843.