Active protection method for deep neural network model based on four-dimensional Chen chaotic system
Xintao DUAN, Mengru BAO, Yinhang WU, Chuan QIN
Journal of Computer Applications, 2025, 45(11): 3621-3631. DOI: 10.11772/j.issn.1001-9081.2024111583
Abstract

Deep Neural Network (DNN)-based models have been widely applied due to their superior performance. However, training a powerful DNN model requires extensive datasets, expertise, computational resources, specialized hardware, and significant time investment, and unauthorized exploitation of such models could cause substantial losses to model owners. To address the security and intellectual property issues of DNN models, an active protection method was proposed. In the method, a new comprehensive weight selection strategy was employed to precisely identify the critical weights within a model. By exploiting the structural characteristics of the convolutional layers in DNN models, the four-dimensional Chen chaotic system was introduced for the first time, on the basis of three-dimensional chaotic systems, to scramble and encrypt a small number of weights in the convolutional layers. Meanwhile, to address the problem that an authorized user cannot decrypt the model with the key alone, an Elliptic Curve Cryptography (ECC)-based digital signature scheme was integrated into the encrypted model. After encryption, the scrambled weight positions and the initial values of the chaotic sequence were combined to form the encryption key. Authorized users can use this key to decrypt the DNN model correctly, whereas unauthorized attackers cannot make functional use of the model even if they intercept it. Experimental results show that scrambling only a minimal fraction of weight positions degrades classification accuracy significantly, and the decrypted model can be restored without any loss. In addition, the method resists fine-tuning and pruning attacks, and the generated key exhibits strong sensitivity and resists brute-force attacks. Furthermore, experiments verify the method's transferability: it is effective not only for image classification models but can also protect deep image steganography models and object detection models.
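As a reading aid, the following Python sketch illustrates the scrambling idea described in the abstract: a few critical convolutional weights are selected, permuted with a sequence generated by a four-dimensional Chen chaotic system, and the selected positions together with the chaotic initial values form the key. Everything here is an illustrative assumption rather than the authors' implementation: the particular hyperchaotic Chen equations and parameters (a, b, c, d, k), the magnitude-based weight selection, the argsort-based permutation, the integration settings, and all function names; the ECC digital-signature step is omitted.

import numpy as np

def chen_4d_sequence(n, x0=(0.3, -0.4, 1.2, 1.0),
                     a=36.0, b=3.0, c=28.0, d=-16.0, k=0.2,
                     dt=0.0005, burn_in=2000, stride=400):
    # Forward-Euler integration of one common formulation of the
    # four-dimensional (hyperchaotic) Chen system; the paper's exact equations,
    # parameters, and initial values may differ. Returns n samples of the
    # x-component, taken every `stride` steps after a discarded transient.
    x, y, z, w = x0
    samples = []
    for i in range(burn_in + n * stride):
        dx = a * (y - x)
        dy = d * x - x * z + c * y - w
        dz = x * y - b * z
        dw = x + k
        x, y, z, w = x + dt * dx, y + dt * dy, z + dt * dz, w + dt * dw
        if i >= burn_in and (i - burn_in) % stride == 0:
            samples.append(x)
    return np.asarray(samples[:n])

def select_critical_positions(weights, fraction=0.01):
    # Toy stand-in for the paper's comprehensive weight selection strategy:
    # here, simply the largest-magnitude weights (an assumption).
    flat = weights.reshape(-1)
    n_sel = max(1, int(fraction * flat.size))
    return np.argsort(-np.abs(flat))[:n_sel]

def encrypt_conv_weights(weights, x0, fraction=0.01):
    # Scramble the selected weights with a chaos-driven permutation.
    # The key is the pair (selected positions, chaotic initial values).
    flat = weights.reshape(-1).copy()
    positions = select_critical_positions(weights, fraction)
    seq = chen_4d_sequence(len(positions), x0=x0)
    perm = np.argsort(seq)                   # chaotic samples -> permutation
    flat[positions] = flat[positions][perm]  # scramble only the selected weights
    return flat.reshape(weights.shape), {"positions": positions, "x0": x0}

def decrypt_conv_weights(enc_weights, key):
    # Authorized decryption: regenerate the same permutation and invert it.
    flat = enc_weights.reshape(-1).copy()
    seq = chen_4d_sequence(len(key["positions"]), x0=key["x0"])
    perm = np.argsort(seq)
    inverse = np.empty_like(perm)
    inverse[perm] = np.arange(len(perm))
    flat[key["positions"]] = flat[key["positions"]][inverse]
    return flat.reshape(enc_weights.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    conv_w = rng.normal(size=(64, 3, 3, 3)).astype(np.float32)  # stand-in conv layer
    enc_w, key = encrypt_conv_weights(conv_w, x0=(0.3, -0.4, 1.2, 1.0))
    dec_w = decrypt_conv_weights(enc_w, key)
    print("weights changed by encryption:", int(np.count_nonzero(enc_w != conv_w)))
    print("lossless recovery:", bool(np.array_equal(dec_w, conv_w)))

In this sketch, only a key containing the exact chaotic initial values regenerates the same permutation, so a slightly perturbed key leaves the model scrambled, which matches the key sensitivity reported in the abstract; in a full scheme the key would additionally be signed, for example with ECDSA over a standard curve, so that only authorized users can validate and use it.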
