Journal of Computer Applications (《计算机应用》) ›› 2023, Vol. 43 ›› Issue (12): 3647-3653. DOI: 10.11772/j.issn.1001-9081.2022121881

• Artificial Intelligence •


Gradient descent with momentum algorithm based on differential privacy in convolutional neural network

Yu ZHANG, Ying CAI, Jianyang CUI, Meng ZHANG, Yanfang FAN

  1. Computer School, Beijing Information Science and Technology University, Beijing 100101, China
  • Received: 2022-12-26 Revised: 2023-03-19 Accepted: 2023-03-24 Online: 2023-04-12 Published: 2023-12-10
  • Contact: Ying CAI
  • About author: ZHANG Yu, born in 1997, M. S. candidate. Her research interests include deep learning and differential privacy.
    CUI Jianyang, born in 1996, M. S. candidate. His research interests include vehicular ad hoc networks and privacy protection.
    ZHANG Meng, born in 1996, M. S. candidate. His research interests include image retrieval and privacy protection.
    FAN Yanfang, born in 1979, Ph. D., associate professor. Her research interests include information security, Internet of Vehicles, and edge computing.
  • Supported by:
    Natural Science Foundation of Beijing - Haidian Original Innovation Joint Fund (L192023)


Abstract:

To address the privacy leakage caused by model parameters memorizing some features of the training data during the training of Convolutional Neural Network (CNN) models, a Gradient Descent with Momentum algorithm based on Differential Privacy in CNN (DPGDM) was proposed. Firstly, Gaussian noise satisfying differential privacy was added to the gradients during the backpropagation step of model optimization, and the noised gradients were then used to update the model parameters, thereby providing differential privacy protection for the model as a whole. Secondly, to reduce the impact of the introduced differential privacy noise on the convergence speed of the model, a learning rate decay strategy was designed to improve the gradient descent with momentum algorithm. Finally, to reduce the influence of the noise on model accuracy, the noise scale was adjusted dynamically during model optimization, thereby changing the amount of noise added to the gradients in each iteration. Experimental results show that, compared with the DP-SGD (Differentially Private Stochastic Gradient Descent) algorithm, the proposed algorithm improves model accuracy by about 5 and 4 percentage points at privacy budgets of 0.3 and 0.5, respectively. Therefore, the proposed algorithm improves the usability of the model while achieving privacy protection for it.
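The three ingredients described above — gradient noising in the DP-SGD style, a momentum update with learning rate decay, and a dynamically shrinking noise scale — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' exact DPGDM: the clipping rule follows the standard DP-SGD recipe (per-sample clipping to a norm bound, then calibrated Gaussian noise), and the specific decay schedules (`decayed_lr`, `decayed_noise_scale`) and all hyperparameter values are hypothetical choices made only for this sketch.

```python
import numpy as np

def dp_momentum_step(per_sample_grads, velocity, lr, momentum=0.9,
                     clip_norm=1.0, noise_scale=1.0, rng=None):
    """One differentially private momentum step (sketch).

    Clips each per-sample gradient to at most clip_norm, averages the
    clipped gradients, adds Gaussian noise with standard deviation
    noise_scale * clip_norm / batch_size, and folds the noisy gradient
    into a classical momentum update.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(per_sample_grads)
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_sample_grads]          # scale down oversized grads
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_scale * clip_norm / n, size=avg.shape)
    noisy_grad = avg + noise                       # gradient seen by the optimizer
    velocity = momentum * velocity + noisy_grad    # momentum accumulation
    return velocity, -lr * velocity                # new velocity, parameter delta

def decayed_lr(lr0, t, decay=0.01):
    """Inverse-time learning rate decay (one possible schedule)."""
    return lr0 / (1.0 + decay * t)

def decayed_noise_scale(sigma0, t, decay=0.005, sigma_min=0.5):
    """Shrink the noise scale over iterations, bounded below (illustrative)."""
    return max(sigma_min, sigma0 / (1.0 + decay * t))
```

In this sketch the noise standard deviation is tied to `clip_norm`, since clipping bounds each example's contribution (the sensitivity of the averaged gradient); letting `noise_scale` decay over iterations mirrors the abstract's dynamic adjustment of the per-round noise amount.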

Key words: Convolutional Neural Network (CNN), differential privacy, gradient descent with momentum algorithm, deep learning, privacy protection

CLC number: