To address the issues of image authenticity verification in deepfake detection and model copyright protection, a high-quality and highly robust watermarking method for diffusion model outputs, DeWM (Decoder-driven WaterMarking for diffusion model), was proposed. Firstly, a decoder-driven watermark embedding network was designed to share encoder and decoder features directly, so as to produce watermarks with high robustness and invisibility. Then, a fine-tuning strategy was designed to fine-tune the decoder of the pre-trained diffusion model and embed a specific watermark into all generated images, thereby achieving simple and effective watermark embedding without changing the model architecture or the diffusion process. Experimental results show that compared with the Stable Signature method on the MS-COCO dataset, when the watermark bit length is increased to 64 bits, the proposed method improves the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) of the generated watermarked images by 14.87% and 9.41%, respectively. Moreover, the average bit accuracy of watermark extraction under cropping, brightness adjustment and image reconstruction is enhanced by more than 3%, which demonstrates significantly improved robustness.
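
As a minimal illustration of the decoder fine-tuning idea described above (not the authors' DeWM implementation), the PyTorch sketch below fine-tunes only the latent decoder of a pre-trained latent diffusion model so that every decoded image carries a fixed bit string recoverable by a watermark extractor, while an image-fidelity term keeps the output close to that of the original decoder. The names WatermarkExtractor, finetune_decoder, the network layout and the loss weighting are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WatermarkExtractor(nn.Module):
        # Hypothetical extractor: maps a decoded image to n_bits watermark logits.
        # In practice such an extractor would be pre-trained and kept fixed.
        def __init__(self, n_bits=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, n_bits),
            )

        def forward(self, x):
            return self.net(x)

    def finetune_decoder(decoder, extractor, latents, target_bits,
                         lambda_img=1.0, lr=1e-4, steps=100):
        # Fine-tune only the decoder so that decoded images carry target_bits
        # (a float tensor of 0/1 values), while an MSE term keeps them close
        # to the frozen original decoder's output.
        with torch.no_grad():
            reference = decoder(latents)          # images from the original decoder
        opt = torch.optim.AdamW(decoder.parameters(), lr=lr)
        for _ in range(steps):
            imgs = decoder(latents)               # candidate watermarked images
            logits = extractor(imgs)              # predicted watermark bits
            loss_w = F.binary_cross_entropy_with_logits(
                logits, target_bits.expand(logits.shape[0], -1))
            loss_i = F.mse_loss(imgs, reference)  # fidelity to the original output
            (loss_w + lambda_img * loss_i).backward()
            opt.step()
            opt.zero_grad()
        return decoder

Because only the decoder's parameters are optimized, the diffusion process and the rest of the model architecture remain untouched, which is the property the abstract emphasizes.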