Deep Neural Networks (DNNs) are susceptible to adversarial perturbations: attackers can deceive a DNN by adding imperceptible adversarial perturbations to an image. Diffusion-based adversarial purification methods use diffusion models to generate clean samples to defend against such attacks, but the diffusion models themselves are also susceptible to adversarial perturbations. Therefore, an adversarial purification method named StraightDiffusion was proposed, in which the diffusion process of the diffusion model was guided directly by the adversarial samples. Firstly, the key problems and limitations of existing methods that use diffusion models for adversarial purification were analyzed. Secondly, a new sampling method was proposed, in which a two-stage guidance scheme, consisting of head guidance and tail guidance, was applied in the denoising process: guidance was applied only in the early and late stages of the denoising process and omitted in the intermediate stages. Experimental results on the CIFAR-10 and ImageNet datasets with three classifiers, WideResNet-70-16, WideResNet-28-10, and ResNet-50, show that StraightDiffusion outperforms baseline methods in defense performance. Compared with methods such as Diffusion Models for Adversarial Purification (DiffPure) and Guided Diffusion Model for Purification (GDMP), StraightDiffusion achieves the best standard and robust accuracies on both CIFAR-10 and ImageNet. These results verify that the proposed method improves purification performance, thereby enhancing the robust accuracy of classification models against adversarial samples and achieving effective defense under multiple attack scenarios.
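The head/tail guidance schedule described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy denoising update, the function name `purify`, and all parameter names (`head`, `tail`, `guide_scale`) are assumptions; in practice the per-step update would come from a trained diffusion model.

```python
# Hypothetical sketch of two-stage (head/tail) guidance during the
# reverse denoising process: the sample is pulled toward the adversarial
# input only in the earliest `head` and final `tail` denoising steps.
import numpy as np

def purify(x_adv, total_steps=100, head=20, tail=20, guide_scale=0.1, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    # Forward-diffuse the adversarial sample to obtain a noisy starting point.
    x = x_adv + rng.normal(scale=0.5, size=x_adv.shape)
    for t in range(total_steps, 0, -1):
        # Toy denoising update standing in for a real diffusion-model step.
        x = x - 0.01 * x + rng.normal(scale=0.01, size=x.shape)
        # Head guidance: first `head` steps; tail guidance: last `tail` steps.
        in_head = t > total_steps - head
        in_tail = t <= tail
        if in_head or in_tail:
            # Guidance term nudging the sample toward the adversarial input,
            # which the abstract says directly guides the diffusion process.
            x = x + guide_scale * (x_adv - x)
    return x
```

Under this schedule, with `head=20` and `tail=20` out of 100 steps, guidance is applied in 40 of the 100 reverse steps and skipped in the middle 60, matching the described "early and late stages only" behavior.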