Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (11): 3621-3631. DOI: 10.11772/j.issn.1001-9081.2024111583
• Cybersecurity •
Received: 2024-11-07
Revised: 2025-02-16
Accepted: 2025-02-19
Online: 2025-02-21
Published: 2025-11-10
Corresponding author: Xintao DUAN
About the author: BAO Mengru, born in 1999, M.S. candidate. Her research interests include deep learning and model protection.
Xintao DUAN 1,2, Mengru BAO 1, Yinhang WU 1, Chuan QIN 3
Abstract:
Models based on Deep Neural Networks (DNN) are widely used for their superior performance, but training a powerful DNN model demands large datasets, expert knowledge, computing resources, hardware, and time, so illegal theft of a model inflicts heavy losses on its owner. To address the security and intellectual-property concerns of DNN models, an active protection method for DNN models was proposed. In this method, a new comprehensive weight-selection strategy precisely locates the important weights in a model, and, exploiting the structural characteristics of DNN convolutional layers, a four-dimensional Chen chaotic system was introduced for the first time, building on three-dimensional chaotic systems, to encrypt a small number of convolutional-layer weights by position scrambling. Meanwhile, to address the problem that an authorized user may still fail to decrypt the model even while holding the key, a digital-signature scheme for the encrypted model was constructed with Elliptic Curve Cryptography (ECC). After encryption, the weight positions and the initial values of the chaotic sequence are combined into the encryption key, with which an authorized user can decrypt the DNN model correctly, whereas an unauthorized attacker cannot use the model even after intercepting it. Experimental results show that scrambling the positions of a small number of weights in a classification model reduces its classification accuracy dramatically, and that the decrypted model is restored losslessly. Moreover, the method resists fine-tuning and pruning attacks, and the resulting key is highly sensitive and withstands brute-force attack. Experiments also verify that the method protects not only image-classification models but also deep image-steganography and object-detection models, demonstrating its transferability.
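The scrambling pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the particular four-dimensional hyperchaotic Chen variant, its parameters, the integration step, and the rank-based permutation construction are all assumptions here, and the weight vector is a stand-in for the selected convolutional-layer weights.

```python
# Sketch (assumptions, not the paper's code): position-scrambling encryption
# of selected convolutional weights driven by a 4D hyperchaotic Chen system.

def chen4d_sequence(x0, y0, z0, w0, n, dt=0.001, burn_in=1000):
    """Integrate one 4D hyperchaotic Chen variant with RK4; return n x-samples."""
    a, b, c, d, r = 35.0, 3.0, 12.0, 7.0, 0.2  # assumed parameters

    def f(s):
        x, y, z, w = s
        return (a * (y - x) + w,
                d * x - x * z + c * y,
                x * y - b * z,
                y * z + r * w)

    s = (x0, y0, z0, w0)
    out = []
    for i in range(burn_in + n):
        k1 = f(s)
        k2 = f(tuple(s[j] + dt / 2 * k1[j] for j in range(4)))
        k3 = f(tuple(s[j] + dt / 2 * k2[j] for j in range(4)))
        k4 = f(tuple(s[j] + dt / 6 * 0 + s[j] * 0 + dt * k3[j] for j in range(4)))
        s = tuple(s[j] + dt / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j])
                  for j in range(4))
        if i >= burn_in:                  # discard the transient
            out.append(s[0])
    return out

def permutation_from(seq):
    """Rank the chaotic samples to obtain a scrambling permutation."""
    return sorted(range(len(seq)), key=lambda i: seq[i])

def scramble(weights, perm):
    return [weights[p] for p in perm]

def unscramble(scrambled, perm):
    restored = [0.0] * len(perm)
    for dst, src in enumerate(perm):
        restored[src] = scrambled[dst]
    return restored

key = (0.3, 0.4, 0.5, 0.6)                 # chaotic initial values = key part
weights = [0.01 * i for i in range(64)]    # stand-in for selected conv weights
perm = permutation_from(chen4d_sequence(*key, n=len(weights)))
enc = scramble(weights, perm)
dec = unscramble(enc, perm)
assert dec == weights                      # lossless recovery with correct key
```

Because decryption merely inverts a permutation, recovery is bit-exact (consistent with the identical original and recovered accuracies in Tables 1 and 2); without the correct initial values the permutation, and hence the weight layout, cannot be reproduced.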
Xintao DUAN, Mengru BAO, Yinhang WU, Chuan QIN. Active protection method for deep neural network model based on four-dimensional Chen chaotic system[J]. Journal of Computer Applications, 2025, 45(11): 3621-3631.
Tab. 1 Test results on CIFAR-10 and Fashion-MNIST datasets (unit: %)

| Dataset | Classification model | Original accuracy | Encrypted accuracy | Recovered accuracy |
|---|---|---|---|---|
| CIFAR-10 | ResNet18 | 95.09 | 10.13 | 95.09 |
| CIFAR-10 | ResNet50 | 96.44 | 10.00 | 96.44 |
| CIFAR-10 | MobileNetV2 | 95.12 | 12.41 | 95.12 |
| CIFAR-10 | MobileNetV3 | 95.09 | 9.82 | 95.09 |
| CIFAR-10 | ShuffleNet | 96.26 | 8.77 | 96.26 |
| CIFAR-10 | EfficientNet | 94.34 | 9.99 | 94.34 |
| CIFAR-10 | VGG16 | 91.04 | 10.63 | 91.04 |
| Fashion-MNIST | ResNet18 | 92.33 | 12.73 | 92.33 |
| Fashion-MNIST | ResNet50 | 91.56 | 10.00 | 91.56 |
| Fashion-MNIST | MobileNetV2 | 92.66 | 10.00 | 92.66 |
| Fashion-MNIST | MobileNetV3 | 92.91 | 9.90 | 92.91 |
| Fashion-MNIST | ShuffleNet | 90.48 | 11.09 | 90.48 |
| Fashion-MNIST | EfficientNet | 94.57 | 10.00 | 94.57 |
| Fashion-MNIST | VGG16 | 90.92 | 9.89 | 90.92 |
Tab. 2 Test results on CIFAR-100 and ImageNet datasets (unit: %)

| Dataset | Classification model | Original accuracy | Encrypted accuracy | Recovered accuracy |
|---|---|---|---|---|
| CIFAR-100 | ResNet18 | 80.59 | 0.84 | 80.59 |
| CIFAR-100 | ResNet50 | 82.75 | 1.00 | 82.75 |
| CIFAR-100 | MobileNetV2 | 81.37 | 1.06 | 81.37 |
| CIFAR-100 | MobileNetV3 | 81.57 | 1.86 | 81.57 |
| CIFAR-100 | ShuffleNet | 82.89 | 1.18 | 82.89 |
| CIFAR-100 | EfficientNet | 84.19 | 0.86 | 84.19 |
| CIFAR-100 | VGG16 | 80.26 | 1.26 | 80.26 |
| ImageNet | ResNet18 | 84.65 | 0.44 | 84.65 |
| ImageNet | ResNet50 | 89.67 | 0.50 | 89.67 |
| ImageNet | MobileNetV2 | 86.42 | 0.47 | 86.42 |
| ImageNet | MobileNetV3 | 89.45 | 0.37 | 89.45 |
| ImageNet | ShuffleNet | 87.24 | 0.47 | 87.24 |
| ImageNet | EfficientNet | 94.61 | 0.50 | 94.61 |
| ImageNet | VGG16 | 82.68 | 0.50 | 82.68 |
Tab. 3 Verification results of ECC digital signatures

| Dataset | Classification model | Tampered | Verification result | Decryption accuracy/% |
|---|---|---|---|---|
| ImageNet | ShuffleNet | No | True | 87.24 |
| ImageNet | ShuffleNet | Yes | False | 0.56 |
| CIFAR-100 | MobileNetV2 | No | True | 81.37 |
| CIFAR-100 | MobileNetV2 | Yes | False | 1.32 |
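The tamper check behind Table 3 can be reproduced in miniature with a textbook ECDSA over secp256k1. The paper does not specify its curve or signature construction, so everything below is an illustrative assumption: the owner signs a hash of the encrypted model bytes, and an authorized user verifies the signature before attempting decryption.

```python
# Sketch (assumed scheme, not the paper's): ECDSA over secp256k1 applied to a
# hash of the serialized encrypted model, flagging tampering before decryption.
import hashlib, secrets

# secp256k1 domain parameters
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                                   # point at infinity
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P):
    R = None
    while k:                                          # double-and-add
        if k & 1: R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def sign(priv, msg):
    z = int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n
    while True:
        k = secrets.randbelow(n - 1) + 1
        r = scalar_mult(k, G)[0] % n
        if r == 0: continue
        s = pow(k, -1, n) * (z + r * priv) % n
        if s: return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    if not (0 < r < n and 0 < s < n): return False
    z = int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n
    u1, u2 = z * pow(s, -1, n) % n, r * pow(s, -1, n) % n
    P = point_add(scalar_mult(u1, G), scalar_mult(u2, pub))
    return P is not None and P[0] % n == r

priv = secrets.randbelow(n - 1) + 1
pub = scalar_mult(priv, G)
encrypted_model = b'\x00serialized encrypted weights\x01'  # stand-in bytes
sig = sign(priv, encrypted_model)
assert verify(pub, encrypted_model, sig)                   # untampered: True
assert not verify(pub, encrypted_model + b'!', sig)        # tampered: False
```

In practice the (r, s) pair and the public key accompany the encrypted model; a failed verification tells the authorized user the model was tampered with before any decryption is attempted, matching the True/False outcomes in Table 3.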
Tab. 4 Classification accuracy of encrypted models under different pruning rates (unit: %)

| Pruning rate | EfficientNet (CIFAR-10) | VGG16 (CIFAR-100) | ShuffleNet (Fashion-MNIST) | ResNet18 (ImageNet) |
|---|---|---|---|---|
| 0.1 | 9.98 | 1.23 | 19.40 | 0.58 |
| 0.2 | 10.00 | 1.23 | 17.73 | 0.51 |
| 0.3 | 10.00 | 1.33 | 10.65 | 0.53 |
| 0.4 | 9.90 | 1.07 | 12.63 | 0.62 |
| 0.5 | 10.00 | 1.04 | 8.58 | 0.57 |
| 0.6 | 10.00 | 1.07 | 10.00 | 0.53 |
| 0.7 | 11.12 | 0.74 | 10.00 | 0.51 |
| 0.8 | 10.00 | 1.01 | 10.00 | 0.50 |
| 0.9 | 10.00 | 1.09 | 10.00 | 0.50 |
Tab. 5 Classification accuracy of encrypted models after fine-tuning attacks (unit: %)

| Epochs | ShuffleNet (CIFAR-10) | ResNet18 (CIFAR-100) | MobileNetV2 (Fashion-MNIST) | MobileNetV3 (ImageNet) |
|---|---|---|---|---|
| 10 | 1.44 | 5.82 | 9.45 | 0.61 |
| 20 | 4.74 | 14.34 | 14.79 | 1.07 |
| 30 | 15.63 | 14.54 | 14.97 | 2.42 |
| 40 | 17.13 | 14.75 | 15.01 | 3.50 |
| 50 | 23.86 | 14.83 | 14.51 | 4.82 |
| 60 | 28.16 | 14.84 | 15.07 | 5.74 |
| 70 | 29.41 | 14.92 | 14.56 | 6.56 |
| 80 | 33.20 | 14.93 | 15.31 | 7.38 |
| 90 | 35.61 | 14.93 | 15.54 | 7.94 |
| 100 | 46.60 | 15.08 | 15.76 | 8.55 |
Tab. 6 Objective results of key sensitivity

| Dataset | Classification model | Accuracy with original key/% | Accuracy with perturbed key/% |
|---|---|---|---|
| CIFAR-10 | MobileNetV3 | 95.09 | 9.56 |
| CIFAR-100 | ShuffleNet | 82.89 | 1.07 |
| Fashion-MNIST | VGG16 | 90.92 | 10.67 |
| ImageNet | MobileNetV2 | 86.42 | 0.50 |
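The key-sensitivity behaviour behind Table 6 rests on a basic property of chaotic maps: a tiny perturbation of an initial value (i.e. a near-miss key) produces an essentially unrelated scrambling permutation. The sketch below illustrates this with a 1D logistic map standing in for the paper's four-dimensional Chen system, purely to keep the demonstration short; the map, its parameter, and the rank-based permutation are assumptions.

```python
# Sketch of key sensitivity: a key off by 1e-10 yields an almost completely
# different permutation, so brute-forcing nearby keys recovers nothing.
# (Logistic map used as a stand-in for the paper's 4D Chen system.)

def logistic_perm(x0, n, r=3.9999, burn_in=100):
    """Iterate the logistic map and rank the samples into a permutation."""
    x = x0
    for _ in range(burn_in):          # let the trajectories decorrelate
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return sorted(range(n), key=lambda i: seq[i])

n = 200
perm_good = logistic_perm(0.3, n)
perm_bad = logistic_perm(0.3 + 1e-10, n)   # initial value off by 1e-10
mismatch = sum(a != b for a, b in zip(perm_good, perm_bad)) / n
print(f"positions scrambled differently: {mismatch:.0%}")
assert mismatch > 0.9   # perturbed key gives an unrelated permutation
```

Combined with the size of the key (weight positions plus four real-valued initial conditions), this sensitivity is what makes the brute-force attack discussed in the abstract impractical.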
Tab. 7 Comparison of encryption/decryption time and model inference time (unit: s)

| Classification model | Encryption time | Decryption time | Inference per image (original model) | Inference per image (decrypted model) |
|---|---|---|---|---|
| ResNet18 | 0.430 | 0.340 | 0.049 | 0.045 |
| ResNet50 | 1.356 | 1.121 | 0.157 | 0.166 |
| VGG16 | 2.160 | 2.254 | 0.198 | 0.206 |
Tab. 8 Storage and transmission overhead before and after encryption

| Classification model | Storage before/MB | Storage after/MB | Transmission before/MB | Transmission after/MB | Transmission overhead increase/% |
|---|---|---|---|---|---|
| ResNet50 | 90.745 | 90.731 | 90.745 | 91.083 | 0.373 |
| MobileNetV2 | 9.205 | 9.196 | 9.205 | 9.237 | 0.343 |
| EfficientNet | 203.217 | 203.142 | 203.271 | 203.823 | 0.298 |
| VGG16 | 515.301 | 515.297 | 515.301 | 517.465 | 0.420 |
Tab. 9 Objective metrics of secret images extracted by the Baluja and HiDDeN extraction networks after encryption

| DNN model | Variant | PSNR/dB | SSIM | LPIPS |
|---|---|---|---|---|
| Baluja | Original model | 34.6275 | 0.9678 | 0.0310 |
| Baluja | Decrypted model | 34.6275 | 0.9678 | 0.0310 |
| Baluja | Encrypted model | 10.9919 | 0.3260 | 0.8011 |
| HiDDeN | Original model | 31.4625 | 0.9602 | 0.0823 |
| HiDDeN | Decrypted model | 31.4625 | 0.9602 | 0.0823 |
| HiDDeN | Encrypted model | 11.2326 | 0.2701 | 0.7250 |
Tab. 10 Objective test results of the Faster R-CNN and Mask R-CNN models (unit: %)

| DNN model | mIoU (original model) | mIoU (encrypted model) | mIoU (decrypted model) |
|---|---|---|---|
| Faster R-CNN | 81.73 | 3.41 | 81.73 |
| Mask R-CNN | 82.30 | 18.72 | 82.30 |
Tab. 11 Comparison of the proposed method with SOTA methods (unit: %)

| Method | Dataset | Original accuracy | Encrypted accuracy | Retraining required | Hardware support required | Weight selection |
|---|---|---|---|---|---|---|
| Method of Ref. [ ] | CIFAR-10 | 86.96 | 15.00 | √ | × | × |
| Method of Ref. [ ] | Fashion-MNIST | — | — | √ | × | × |
| Method of Ref. [ ] | CIFAR-10 | 89.54 | 9.37 | √ | √ | × |
| Method of Ref. [ ] | Fashion-MNIST | 89.93 | 10.05 | √ | √ | × |
| Method of Ref. [ ] | CIFAR-10 | 92.02 | 10.86 | √ | × | √ |
| Method of Ref. [ ] | Fashion-MNIST | 91.01 | 10.36 | √ | × | √ |
| Method of Ref. [ ] | CIFAR-10 | 91.24 | 10.00 | √ | × | √ |
| Method of Ref. [ ] | Fashion-MNIST | — | — | √ | × | √ |
| Method of Ref. [ ] | CIFAR-10 | 93.91 | 10.48 | √ | × | √ |
| Method of Ref. [ ] | Fashion-MNIST | 93.97 | 10.00 | √ | × | √ |
| Proposed method | CIFAR-10 | 96.26 | 8.77 | × | × | √ |
| Proposed method | Fashion-MNIST | 94.57 | 10.00 | × | × | √ |
[1] HASAN M D A, BALASUBADRA K, VADIVEL G, et al. IoT-driven image recognition for microplastic analysis in water systems using convolutional neural networks[C]// Proceedings of the 2nd International Conference on Computer, Communication and Control. Piscataway: IEEE, 2024: 1-6.
[2] KHAN S, NASEER M, HAYAT M, et al. Transformers in vision: a survey[J]. ACM Computing Surveys, 2022, 54(10s): No.200.
[3] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text Transformer[J]. Journal of Machine Learning Research, 2020, 21: 1-67.
[4] KE X, WU H, GUO W. StegFormer: rebuilding the glory of autoencoder-based steganography[C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 2723-2731.
[5] JIANG W, GONG Z, ZHAN J, et al. A low-cost image encryption method to prevent model stealing of deep neural network[J]. Journal of Circuits, Systems and Computers, 2020, 29(16): No.2050252.
[6] GOMEZ L, WILHELM M, MÁRQUEZ J, et al. Security for distributed deep neural networks towards data confidentiality & intellectual property protection[C]// Proceedings of the 16th International Joint Conference on e-Business and Telecommunications — Volume 2: SECRYPT. Setúbal: SciTePress, 2019: 439-447.
[7] UCHIDA Y, NAGAI Y, SAKAZAWA S, et al. Embedding watermarks into deep neural networks[C]// Proceedings of the 2017 ACM International Conference on Multimedia Retrieval. New York: ACM, 2017: 269-277.
[8] DARVISH ROUHANI B, CHEN H, KOUSHANFAR F. DeepSigns: an end-to-end watermarking framework for ownership protection of deep neural networks[C]// Proceedings of the 24th International Conference on Architectural Support for Programming Languages and Operating Systems. New York: ACM, 2019: 485-497.
[9] LI Z, HU C, ZHANG Y, et al. How to prove your model belongs to you: a blind-watermark based framework to protect intellectual property of DNN[C]// Proceedings of the 35th Annual Computer Security Applications Conference. New York: ACM, 2019: 126-137.
[10] PYONE A, MAUNG M, KIYA H. Training DNN model with secret key for model protection[C]// Proceedings of the IEEE 9th Global Conference on Consumer Electronics. Piscataway: IEEE, 2020: 818-821.
[11] CHAKRABORTY A, MONDAL A, SRIVASTAVA A. Hardware-assisted intellectual property protection of deep learning models[C]// Proceedings of the 57th ACM/EDAC/IEEE Design Automation Conference. New York: ACM, 2020: No.172.
[12] JIANG W, SONG Z, ZHAN J, et al. Layerwise security protection for deep neural networks in industrial cyber physical systems[J]. IEEE Transactions on Industrial Informatics, 2022, 18(12): 8797-8806.
[13] XUE M, WU Z, HE C, et al. Active DNN IP protection: a novel user fingerprint management and DNN authorization control technique[C]// Proceedings of the IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications. Piscataway: IEEE, 2020: 975-982.
[14] XUE M, WU Z, ZHANG Y, et al. AdvParams: an active DNN intellectual property protection technique via adversarial perturbation based parameter encryption[J]. IEEE Transactions on Emerging Topics in Computing, 2023, 11(3): 664-678.
[15] FAN L, NG K W, CHAN C S, et al. DeepIPR: deep neural network ownership verification with passports[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(10): 6122-6139.
[16] TIAN J, ZHOU J, DUAN J. Probabilistic selective encryption of convolutional neural networks for hierarchical services[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 2205-2214.
[17] ZHOU T, LUO Y, REN S, et al. NNSplitter: an active defense solution for DNN model via automated weight obfuscation[C]// Proceedings of the 40th International Conference on Machine Learning. New York: JMLR.org, 2023: 42614-42624.
[18] LIU H F, ZHOU X F, LIANG X L, et al. Image encryption algorithm based on multiple chaotic systems[J]. Journal of Shaanxi University of Science and Technology, 2022, 40(1): 188-195. (in Chinese)
[19] KHALIQUE A, SINGH K, SOOD S. Implementation of elliptic curve digital signature algorithm[J]. International Journal of Computer Applications, 2010, 2(2): 21-27.
[20] KRIZHEVSKY A. Learning multiple layers of features from tiny images[R/OL]. [2024-06-14].
[21] XIAO H, RASUL K, VOLLGRAF R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms[EB/OL]. [2024-06-14].
[22] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]// Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2009: 248-255.
[23] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
[24] SANDLER M, HOWARD A, ZHU M, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4510-4520.
[25] HOWARD A, SANDLER M, CHEN B, et al. Searching for MobileNetV3[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 1314-1324.
[26] ZHANG X, ZHOU X, LIN M, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 6848-6856.
[27] TAN M, LE Q V. EfficientNet: rethinking model scaling for convolutional neural networks[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 6105-6114.
[28] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2024-06-10].
[29] BALUJA S. Hiding images in plain sight: deep steganography[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 2066-2076.
[30] ZHU J, KAPLAN R, JOHNSON J, et al. HiDDeN: hiding data with deep networks[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11219. Cham: Springer, 2018: 682-697.
[31] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[32] HE K, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2980-2988.
[33] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL Visual Object Classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303-338.
[34] SERMANET P, KAVUKCUOGLU K, CHINTALA S, et al. Pedestrian detection with unsupervised multi-stage feature learning[C]// Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2013: 3626-3633.
[35] CHEN T, LIU S, CHANG S, et al. Adversarial robustness: from self-supervised pre-training to fine-tuning[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 696-705.
[36] HUYNH-THU Q, GHANBARI M. Scope of validity of PSNR in image/video quality assessment[J]. Electronics Letters, 2008, 44(13): 800-801.
[37] REHMAN A, WANG Z. Reduced-reference image quality assessment by structural similarity estimation[J]. IEEE Transactions on Image Processing, 2012, 21(8): 3378-3389.
[38] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 586-595.