In view of the complex case structure, the many redundant facts, and the wide distribution of case information in judgment documents, existing Large Language Models (LLMs) have difficulty attending to structural information effectively and may generate factual errors, resulting in summaries with missing structural information and factual inconsistencies. To this end, a judgment document summarization method combining LLMs and dynamic prompts, named DPCM (Dynamic Prompt Correction Method), was proposed. Firstly, an LLM was used with one-shot learning to generate a judgment document summary. Secondly, the similarity between the original text and the summary was computed in a high-dimensional embedding space to detect possible missing-structure or factual-inconsistency problems in the summary. If a problem was found, the flawed summary was concatenated with the original text, corrective prompt words were added, one-shot learning was performed again to generate a corrected summary, and the similarity check was repeated; if the problem persisted, the generation and detection process was repeated. Finally, through this iterative procedure, the prompt words were adjusted dynamically to gradually optimize the generated summary. Experimental results on the CAIL2020 public judicial summarization dataset show that, compared with Least-To-Most Prompting, Zero-Shot Reasoners, Self-Consistency CoT, and other methods, the proposed method achieves improvements on the ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, and FactCC (Factual Consistency) metrics.
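The procedure described above can be read as a generate-detect-correct loop. The following Python sketch illustrates that control flow only; the helper names (generate_summary, similarity), the prompt wording, the similarity threshold, and the iteration cap are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a DPCM-style generate-detect-correct loop.
# All names, prompt texts, and thresholds are hypothetical placeholders.

from typing import Callable


def dpcm_summarize(
    document: str,
    example: str,                              # one-shot exemplar (document + summary) as text
    generate_summary: Callable[[str], str],    # wraps an LLM call; placeholder here
    similarity: Callable[[str, str], float],   # e.g. cosine similarity of embeddings
    sim_threshold: float = 0.8,                # assumed acceptance threshold
    max_rounds: int = 5,                       # assumed iteration cap
) -> str:
    # Round 1: one-shot generation from the exemplar and the source document.
    prompt = f"{example}\n\nDocument:\n{document}\nSummary:"
    summary = generate_summary(prompt)

    for _ in range(max_rounds):
        # Detection: compare the summary with the source in embedding space.
        if similarity(document, summary) >= sim_threshold:
            return summary  # no missing structure or inconsistency detected

        # Correction: splice the flawed summary with the original text,
        # add corrective prompt words, and regenerate.
        prompt = (
            f"{example}\n\nDocument:\n{document}\n"
            f"Previous summary (may miss structure or contain factual errors):\n{summary}\n"
            "Rewrite the summary, keeping the case structure and using only facts from the document:"
        )
        summary = generate_summary(prompt)

    return summary  # best effort after max_rounds correction attempts
```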