In data poisoning attacks, backdoor attackers manipulate the distribution of training data by inserting samples with hidden triggers into the training set, causing test samples to be misclassified and thereby altering model behavior and degrading model performance. A drawback of existing triggers, however, is their sample independence: regardless of the trigger pattern adopted, different poisoned samples contain the same trigger. To address this, a sample-specific attack method combining image steganography and a Deep Convolutional Generative Adversarial Network (DCGAN) was proposed: image texture feature maps were generated from the gray-level co-occurrence matrix, the target label character was embedded into the texture feature maps as a trigger using image steganography, and the trigger-bearing texture feature maps were combined with clean samples to form poisoned samples. A large number of fake images carrying the trigger were then generated with DCGAN. The original poisoned samples and the DCGAN-generated fake images were mixed into the training set, so that the attacker achieves a high attack success rate after injecting only a small number of poisoned samples, while the effectiveness, persistence, and stealthiness of the trigger are preserved. Experimental results show that this method avoids the drawback of sample independence while the model accuracy reaches 93.78%. When the proportion of poisoned samples is 30%, data preprocessing, pruning defense, and AUROR defense have the least influence on the attack success rate, which can still reach about 56%.
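
The trigger-construction step described above (a GLCM-based texture feature map with the target label character hidden inside it, blended into a clean image) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses scikit-image's graycomatrix/graycoprops for the texture statistics, a simple LSB scheme as a stand-in for the steganographic embedding, and an assumed patch size and blending weight; the DCGAN augmentation step is omitted.

```python
# Minimal sketch of trigger construction (illustrative assumptions only):
# 1) build a texture feature map from per-patch GLCM statistics,
# 2) hide the target label character in that map with LSB steganography,
# 3) blend the trigger-bearing map into a clean image to get a poisoned sample.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_texture_map(gray_img, patch=8, levels=256):
    """Per-patch GLCM contrast, tiled back to image size (patch size is an assumption)."""
    h, w = gray_img.shape
    tex = np.zeros_like(gray_img, dtype=np.float64)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = gray_img[i:i + patch, j:j + patch]
            glcm = graycomatrix(block, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            tex[i:i + patch, j:j + patch] = graycoprops(glcm, 'contrast')[0, 0]
    tex = 255 * (tex - tex.min()) / (tex.max() - tex.min() + 1e-8)
    return tex.astype(np.uint8)

def embed_label_lsb(texture_map, label_char):
    """Hide the target label character's bits in the least significant bits."""
    bits = np.unpackbits(np.frombuffer(label_char.encode(), dtype=np.uint8))
    flat = texture_map.flatten()                      # copy, original left intact
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(texture_map.shape)

def make_poisoned_sample(clean_img, label_char, alpha=0.1):
    """Blend the trigger-bearing texture map into a clean image (alpha is assumed)."""
    gray = clean_img if clean_img.ndim == 2 else clean_img.mean(axis=2).astype(np.uint8)
    trigger = embed_label_lsb(glcm_texture_map(gray), label_char)
    if clean_img.ndim == 3:
        trigger = trigger[..., None]                  # broadcast over color channels
    mixed = (1 - alpha) * clean_img + alpha * trigger
    return np.clip(mixed, 0, 255).astype(np.uint8)

# Example: poison one clean sample toward hypothetical target label '7'
clean = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
poisoned = make_poisoned_sample(clean, '7')
```

Because the texture map is derived from each clean image's own statistics, the resulting trigger differs across poisoned samples, which is the property the method relies on to avoid sample-independent triggers.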