Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (6): 1957-1964. DOI: 10.11772/j.issn.1001-9081.2021040597
Special topic: Multimedia Computing and Computer Simulation

No-reference image quality assessment algorithm based on saliency deep features

李佳1, 郑元林1,2, 廖开阳1,2, 楼豪杰1, 李世宇1, 陈泽豪1
Jia LI1, Yuanlin ZHENG1,2, Kaiyang LIAO1,2, Haojie LOU1, Shiyu LI1, Zehao CHEN1
Received: 2021-04-16
Revised: 2021-07-02
Accepted: 2021-07-15
Online: 2022-06-22
Published: 2022-06-10
Contact: Yuanlin ZHENG
About author: LI Jia, born in 1997, M.S. candidate. Her research interests include deep learning and image processing.
Abstract:
For general-purpose No-Reference Image Quality Assessment (NR-IQA), an assessment algorithm based on the saliency deep features of pseudo-reference images was proposed. First, a fine-tuned ConSinGAN model was used to generate a corresponding pseudo-reference image from each distorted image as compensatory information, making up for the lack of true reference information in NR-IQA algorithms. Then, the saliency information of the pseudo-reference image was extracted, and the pseudo-reference saliency image and the distorted image were fed into a VGG16 network to extract deep features. Finally, the deep features of the two were fused and mapped into a regression network composed of fully connected layers, producing quality predictions consistent with human vision. To verify the effectiveness of the algorithm, experiments were conducted on four large public image datasets: TID2013, TID2008, CSIQ and LIVE. The results show that the Spearman Rank-Order Correlation Coefficient (SROCC) of the proposed algorithm on the TID2013 dataset is 5 percentage points higher than that of the H-IQA algorithm and 14 percentage points higher than that of the RankIQA algorithm, and the proposed algorithm also performs stably on single distortion types. The experimental results demonstrate that the proposed algorithm outperforms mainstream Full-Reference Image Quality Assessment (FR-IQA) and NR-IQA algorithms overall, and is consistent with human subjective perception.
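As a reading aid, the sketch below illustrates the fusion-and-regression stage described in the abstract in PyTorch. It is a minimal sketch under stated assumptions, not the authors' implementation: the ConSinGAN generator and the saliency detector are reduced to pre-computed inputs, the pretrained-weight handling, feature dimensions and fully connected layer sizes are assumptions, and the class name `SaliencyDeepFeatureIQA` is hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models


class SaliencyDeepFeatureIQA(nn.Module):
    """Sketch of the fusion + regression stage: VGG16 deep features of the
    distorted image and of the pseudo-reference saliency image are fused and
    mapped to a quality score by fully connected layers. Layer sizes are
    illustrative, not the configuration reported in the paper."""

    def __init__(self):
        super().__init__()
        # torchvision >= 0.13; older versions use vgg16(pretrained=...) instead.
        self.backbone_dist = models.vgg16(weights=None).features
        self.backbone_sal = models.vgg16(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.regressor = nn.Sequential(          # fully connected regression head
            nn.Linear(512 * 2, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, 1),
        )

    def forward(self, distorted, pseudo_ref_saliency):
        f_dist = self.pool(self.backbone_dist(distorted)).flatten(1)
        f_sal = self.pool(self.backbone_sal(pseudo_ref_saliency)).flatten(1)
        fused = torch.cat([f_dist, f_sal], dim=1)   # feature fusion
        return self.regressor(fused).squeeze(1)     # predicted quality score


if __name__ == "__main__":
    # Random tensors stand in for a distorted image and the saliency map
    # (replicated to 3 channels) of its generated pseudo-reference.
    model = SaliencyDeepFeatureIQA()
    distorted = torch.rand(2, 3, 224, 224)
    saliency = torch.rand(2, 3, 224, 224)
    print(model(distorted, saliency).shape)  # torch.Size([2])
```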
Jia LI, Yuanlin ZHENG, Kaiyang LIAO, Haojie LOU, Shiyu LI, Zehao CHEN. No-reference image quality assessment algorithm based on saliency deep features[J]. Journal of Computer Applications, 2022, 42(6): 1957-1964.
| Dataset | Metric | SSIM | FSIMc | VSI | GMSD | SPSIM | LLM | Proposed |
|---|---|---|---|---|---|---|---|---|
| TID2013 | SROCC | 0.742 | 0.851 | 0.897 | 0.805 | 0.904 | 0.904 | 0.918 |
| | PLCC | 0.789 | 0.877 | 0.900 | 0.854 | 0.909 | 0.907 | 0.932 |
| | KROCC | 0.559 | 0.667 | 0.718 | 0.633 | 0.725 | 0.721 | 0.758 |
| | RMSE | 0.761 | 0.596 | 0.540 | 0.644 | 0.517 | 0.528 | 0.461 |
| TID2008 | SROCC | 0.775 | 0.884 | 0.898 | 0.891 | 0.910 | 0.908 | 0.927 |
| | PLCC | 0.773 | 0.876 | 0.876 | 0.872 | 0.893 | 0.897 | 0.933 |
| | KROCC | 0.577 | 0.699 | 0.712 | 0.709 | 0.730 | 0.737 | 0.773 |
| | RMSE | 0.851 | 0.647 | 0.647 | 0.657 | 0.605 | 0.598 | 0.436 |
| LIVE | SROCC | 0.948 | 0.965 | 0.952 | 0.960 | 0.962 | 0.961 | 0.982 |
| | PLCC | 0.945 | 0.961 | 0.943 | 0.960 | 0.960 | 0.958 | 0.983 |
| | KROCC | 0.796 | 0.836 | 0.806 | 0.827 | 0.827 | 0.823 | 0.831 |
| | RMSE | 8.946 | 7.530 | 8.682 | 7.621 | 7.629 | 7.768 | 7.343 |
| CSIQ | SROCC | 0.876 | 0.931 | 0.942 | 0.957 | 0.944 | 0.905 | 0.963 |
| | PLCC | 0.861 | 0.919 | 0.928 | 0.954 | 0.934 | 0.900 | 0.944 |
| | KROCC | 0.691 | 0.769 | 0.786 | 0.813 | 0.788 | 0.724 | 0.828 |
| | RMSE | 0.133 | 0.103 | 0.098 | 0.079 | 0.093 | 0.123 | 0.077 |
Tab. 1 Performance comparison of the proposed algorithm and mainstream IQA algorithms on the experimental datasets
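The tables report four standard criteria between objective predictions and subjective scores: SROCC, PLCC, KROCC (Kendall rank-order correlation coefficient) and RMSE. As a reference for how such values are computed, here is a small SciPy/NumPy sketch; the function name and the toy score arrays are illustrative, and the nonlinear logistic mapping that IQA evaluations often fit before computing PLCC and RMSE is omitted.

```python
import numpy as np
from scipy import stats


def iqa_criteria(pred, mos):
    """Return (SROCC, PLCC, KROCC, RMSE) between predicted quality scores and MOS."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    srocc = stats.spearmanr(pred, mos)[0]      # monotonicity of the prediction
    plcc = stats.pearsonr(pred, mos)[0]        # linear consistency
    krocc = stats.kendalltau(pred, mos)[0]     # pairwise ranking agreement
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))  # prediction error
    return srocc, plcc, krocc, rmse


# Toy example; real evaluation uses the MOS/DMOS values released with each dataset.
print(iqa_criteria([0.91, 0.43, 0.78, 0.65, 0.32],
                   [0.88, 0.40, 0.80, 0.60, 0.35]))
```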
| Algorithm | LIVE SROCC | LIVE PLCC | TID2013 SROCC | TID2013 PLCC |
|---|---|---|---|---|
| DIQaM-FR | 0.966 | 0.977 | 0.859 | 0.880 |
| DIIVINE | 0.925 | 0.923 | 0.549 | 0.654 |
| CORNIA | 0.942 | 0.935 | 0.549 | 0.613 |
| BIQI | 0.841 | 0.843 | 0.860 | 0.870 |
| DIQaM-NR | 0.960 | 0.972 | 0.835 | 0.855 |
| hyperIQA | 0.968 | 0.966 | — | — |
| H-IQA | 0.982 | 0.982 | 0.871 | — |
| RankIQA | 0.981 | 0.982 | 0.780 | — |
| Proposed | 0.982 | 0.983 | 0.918 | 0.932 |
Tab. 2 Performance comparison of different algorithms on the LIVE and TID2013 datasets
| Algorithm | TID2013 → CSIQ | TID2013 → LIVE | LIVE → CSIQ | LIVE → TID2013 |
|---|---|---|---|---|
| BRISQUE | 0.639 | 0.681 | 0.577 | 0.367 |
| BLIINDS-II | 0.456 | 0.836 | 0.577 | 0.393 |
| DIIVINE | 0.146 | 0.687 | 0.596 | 0.355 |
| CORNIA | 0.656 | — | 0.663 | 0.429 |
| DIQaM-NR | 0.717 | 0.982 | 0.681 | 0.392 |
| Proposed | 0.761 | 0.953 | 0.693 | 0.482 |
Tab. 3 SROCC in cross-dataset tests (training set → test set)
| Distortion type | BRISQUE | DIIVINE | RankIQA | H-IQA | Proposed |
|---|---|---|---|---|---|
| Mean | 0.551 | 0.506 | 0.691 | 0.681 | 0.810 |
| AGN | 0.706 | 0.855 | 0.667 | 0.923 | 0.875 |
| ANC | 0.523 | 0.712 | 0.620 | 0.880 | 0.853 |
| SCN | 0.776 | 0.463 | 0.821 | 0.945 | 0.800 |
| MN | 0.295 | 0.675 | 0.365 | 0.673 | 0.782 |
| HFN | 0.836 | 0.878 | 0.760 | 0.955 | 0.915 |
| IN6 | 0.802 | 0.806 | 0.736 | 0.810 | 0.735 |
| QN | 0.682 | 0.165 | 0.783 | 0.855 | 0.937 |
| GB | 0.861 | 0.934 | 0.809 | 0.832 | 0.960 |
| DEN | 0.500 | 0.723 | 0.767 | 0.957 | 0.963 |
| JPEG | 0.790 | 0.629 | 0.866 | 0.914 | 0.902 |
| JP2K | 0.779 | 0.853 | 0.878 | 0.624 | 0.949 |
| JGTE | 0.254 | 0.239 | 0.704 | 0.460 | 0.869 |
| J2TE | 0.723 | 0.061 | 0.810 | 0.782 | 0.906 |
| NEPN | 0.213 | 0.060 | 0.512 | 0.664 | 0.874 |
| Block | 0.197 | 0.093 | 0.622 | 0.122 | 0.588 |
| MS | 0.217 | 0.010 | 0.268 | 0.182 | 0.303 |
| CTC | 0.079 | 0.460 | 0.613 | 0.376 | 0.627 |
| CCS | 0.113 | 0.068 | 0.662 | 0.156 | 0.744 |
| MGN | 0.674 | 0.787 | 0.619 | 0.850 | 0.806 |
| CN | 0.198 | 0.116 | 0.644 | 0.614 | 0.661 |
| LCNI | 0.627 | 0.633 | 0.800 | 0.852 | 0.816 |
| ICQD | 0.849 | 0.436 | 0.779 | 0.911 | 0.837 |
| CHA | 0.724 | 0.661 | 0.629 | 0.381 | 0.882 |
| SSR | 0.811 | 0.833 | 0.859 | 0.616 | 0.868 |
Tab. 4 Performance (SROCC) comparison on the TID2013 dataset for single distortion types
1 | KIM J, LEE S. Deep learning of human visual sensitivity in image quality assessment framework[C]// Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 1969-1977. 10.1109/cvpr.2017.213 |
2 | SUN R R. Image quality assessment method based on similarity maps of gray level co-occurrence matrix[J]. Journal of Computer Applications, 2020, 40(S1): 177-179. 10.11772/j.issn.1001-9081.2019060957 |
3 | GOLESTANEH S A, KARAM L J. Reduced-reference quality assessment based on the entropy of DWT coefficients of locally weighted gradient magnitudes[J]. IEEE Transactions on Image Processing, 2016, 25(11): 5293-5303. 10.1109/tip.2016.2601821 |
4 | LIU X L, van de WEIJER J, BAGDANOV A D. RankIQA: learning from rankings for no-reference image quality assessment[C]// Proceedings of 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 1040-1049. 10.1109/iccv.2017.118 |
5 | MIN X K, ZHAI G T, GU K, et al. Blind image quality estimation via distortion aggravation[J]. IEEE Transactions on Broadcasting, 2018, 64(2): 508-517. 10.1109/tbc.2018.2816783 |
6 | SU S L, YAN Q S, ZHU Y, et al. Blindly assess image quality in the wild guided by a self-adaptive hyper network[C]// Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 3664-3673. 10.1109/cvpr42600.2020.00372 |
7 | ZHANG L, ZHANG L, BOVIK A C. A feature-enriched completely blind image quality evaluator[J]. IEEE Transactions on Image Processing, 2015, 24(8): 2579-2591. 10.1109/tip.2015.2426416 |
8 | SAAD M A, BOVIK A C, CHARRIER C. Blind image quality assessment: a natural scene statistics approach in the DCT domain[J]. IEEE Transactions on Image Processing, 2012, 21(8): 3339-3352. 10.1109/tip.2012.2191563 |
9 | BOSSE S, MANIRY D, MÜLLER K R, et al. Deep neural networks for no-reference and full-reference image quality assessment[J]. IEEE Transactions on Image Processing, 2018, 27(1): 206-219. 10.1109/tip.2017.2760518 |
10 | XU P, GUO M, CHEN L, et al. No-reference stereoscopic image quality assessment based on binocular statistical features and machine learning[J]. Complexity, 2021, 2021: No.8834652. 10.1155/2021/8834652 |
11 | MIN X K, MA K D, GU K, et al. Unified blind quality assessment of compressed natural, graphic, and screen content images[J]. IEEE Transactions on Image Processing, 2017, 26(11): 5462-5474. 10.1109/tip.2017.2735192 |
12 | CAO Y D, CAI X B. No-reference image quality assessment algorithm with enhanced adversarial learning[J]. Journal of Computer Applications, 2020, 40(11): 3166-3171. 10.11772/j.issn.1001-9081.2020010012 |
13 | HINZ T, FISHER M, WANG O, et al. Improved techniques for training single-image GANs[C]// Proceedings of 2021 IEEE Winter Conference on Applications of Computer Vision. Piscataway: IEEE, 2021: 1299-1308. 10.1109/wacv48630.2021.00134 |
14 | HU J B, CHAI X L, SHAO F. Deep features similarity for blind quality assessment using pseudo-reference image[J]. Journal of Optoelectronics·Laser, 2019, 30(11): 1184-1193. |
15 | LIN K Y, WANG G X. Hallucinated-IQA: no-reference image quality assessment via adversarial learning[C]// Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 732-741. 10.1109/cvpr.2018.00083 |
16 | GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 2672-2680. |
17 | ARJOVSKY M, CHINTALA S, BOTTOU L. Wasserstein generative adversarial networks[C]// Proceedings of the 34th International Conference on Machine Learning. New York: JMLR.org, 2017: 214-223. |
18 | MIRZA M, OSINDERO S. Conditional generative adversarial nets[EB/OL]. (2014-11-06) [2021-07-07].. |
19 | RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[EB/OL]. (2016-01-07) [2021-07-07].. |
20 | SØNDERBY C K, CABALLERO J, THEIS L, et al. Amortised MAP inference for image super-resolution[EB/OL]. (2017-02-21) [2021-07-07].. |
21 | ZHANG K, van GOOL L, TIMOFTE R. Deep unfolding network for image super-resolution[C]// Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 3214-3223. 10.1109/cvpr42600.2020.00328 |
22 | SHI W Z, CABALLERO J, HUSZÁR F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]// Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 1874-1883. 10.1109/cvpr.2016.207 |
23 | GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2017: 5769-5779. |
24 | KINGMA D P, BA J L. Adam: a method for stochastic optimization[EB/OL]. (2017-01-30) [2021-07-07].. |
25 | SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15: 1929-1958. |
26 | PONOMARENKO N, IEREMEIEV O, LUKIN V, et al. Color image database TID2013: peculiarities and preliminary results[C]// Proceedings of the 2013 European Workshop on Visual Information Processing. Piscataway: IEEE, 2013: 106-111. 10.1109/euvip.2014.7018376 |
27 | SHEIKH H R, SABIR M F, BOVIK A C. A statistical evaluation of recent full reference image quality assessment algorithms[J]. IEEE Transactions on Image Processing, 2006, 15(11): 3440-3451. 10.1109/tip.2006.881959 |
28 | LARSON E C, CHANDLER D M. Most apparent distortion: full-reference image quality assessment and the role of strategy[J]. Journal of Electronic Imaging, 2010, 19(1): No.011006. 10.1117/1.3267105 |
29 | PONOMARENKO N, LUKIN V, ZELENSKY A, et al. TID2008 — a database for evaluation of full-reference visual quality assessment metrics[J]. Advances of Modern Radioelectronics, 2009, 10(4): 30-45. |
30 | WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612. 10.1109/tip.2003.819861 |
31 | ZHANG L, ZHANG L, MOU X Q, et al. FSIM: a feature similarity index for image quality assessment[J]. IEEE Transactions on Image Processing, 2011, 20(8): 2378-2386. 10.1109/tip.2011.2109730 |
32 | ZHANG L, SHEN Y, LI H Y. VSI: a visual saliency-induced index for perceptual image quality assessment[J]. IEEE Transactions on Image Processing, 2014, 23(10): 4270-4281. 10.1109/tip.2014.2346028 |
33 | XUE W F, ZHANG L, MOU X Q, et al. Gradient magnitude similarity deviation: a highly efficient perceptual image quality index[J]. IEEE Transactions on Image Processing, 2014, 23(2): 684-695. 10.1109/tip.2013.2293423 |
34 | SUN W, LIAO Q M, XUE J H, et al. SPSIM: a superpixel-based similarity index for full-reference image quality assessment[J]. IEEE Transactions on Image Processing, 2018, 27(9): 4232-4244. 10.1109/tip.2018.2837341 |
35 | WANG H, FU J, HU S, et al. Image quality assessment based on local linear information and distortion-specific compensation[J]. IEEE Transactions on Image Processing, 2017, 26(2): 915-926. 10.1109/tip.2016.2639451 |
36 | MOORTHY A K, BOVIK A C. Blind image quality assessment: from natural scene statistics to perceptual quality[J]. IEEE Transactions on Image Processing, 2011, 20(12): 3350-3364. 10.1109/tip.2011.2147325 |
37 | YANG L, WANG H, WEI M. Review of no-reference image quality assessment based on machine learning[J]. Computer Engineering and Applications, 2018, 54(19): 34-42. 10.3778/j.issn.1002-8331.1807-0169 |
38 | MOORTHY A K, BOVIK A C. A two-step framework for constructing blind image quality indices[J]. IEEE Signal Processing Letters, 2010, 17(5): 513-516. 10.1109/lsp.2010.2043888 |
39 | ALIZADEH M, SHARIFKHANI M. Subjective video quality prediction based on objective video quality metrics[C]// Proceedings of 4th Iranian Conference on Signal Processing and Intelligent Systems. Piscataway: IEEE, 2018: 7-9. 10.1109/icspis.2018.8700561 |
40 | LI D Q, JIANG T T, LIN W S, et al. Which has better visual quality: the clear blue sky or a blurry animal?[J]. IEEE Transactions on Multimedia, 2019, 21(5): 1221-1234. 10.1109/tmm.2018.2875354 |