The prompt paradigm is widely used in zero-shot Natural Language Processing (NLP) tasks. However, existing zero-shot Relation Extraction (RE) models based on the prompt paradigm suffer from the difficulty of constructing answer space mappings and from dependence on manual template selection, which leads to suboptimal performance. To address these issues, a zero-shot RE model based on multi-template fusion in the prompt paradigm was proposed. Firstly, the zero-shot RE task was formulated as a Masked Language Model (MLM) task, and the construction of an answer space mapping was abandoned; instead, the words output at the masked position of each template were compared with the relation description text in the word embedding space to determine the relation class. Then, the part of speech of the relation description text was introduced as a feature, and a weight between this feature and each template was learned. Finally, these weights were used to fuse the outputs of multiple templates, thereby reducing the performance loss caused by manual selection of prompt templates. Experimental results on FewRel (Few-shot Relation extraction dataset) and TACRED (Text Analysis Conference Relation Extraction Dataset) show that the proposed model significantly outperforms the current state-of-the-art model, RelationPrompt, in F1 score under different data resource settings, with improvements of 1.48 to 19.84 percentage points and 15.27 to 15.75 percentage points respectively. These results demonstrate the effectiveness of the proposed model for zero-shot RE tasks.
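To make the two key steps concrete, the following is a minimal sketch of (a) matching the word predicted at the masked position against relation description texts in the embedding space, and (b) fusing per-template scores with learned template weights. It is not the authors' implementation: the embeddings are randomly generated stand-ins for MLM outputs and relation description vectors, and the fixed logit vector stands in for the weights learned from the part-of-speech feature.

```python
import numpy as np

# Hypothetical dimensions: 3 prompt templates, 5 candidate relations,
# 768-dim word embeddings (e.g., the BERT hidden size).
rng = np.random.default_rng(0)
n_templates, n_relations, dim = 3, 5, 768

# Embedding of the word predicted at the [MASK] position by each template.
# In a real system these would come from an MLM such as BERT; random here.
mask_word_embs = rng.normal(size=(n_templates, dim))

# Embeddings of the relation description texts (e.g., averaged word vectors).
relation_desc_embs = rng.normal(size=(n_relations, dim))

def cosine_sim(a, b):
    """Cosine similarity between every row of a and every row of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Per-template similarity between the predicted word and each relation
# description: shape (n_templates, n_relations). This replaces a hand-built
# answer space mapping.
scores = cosine_sim(mask_word_embs, relation_desc_embs)

# Stand-in for the learned interaction between the part-of-speech feature of
# the relation description and each template, normalized with a softmax.
template_logits = np.array([0.2, 1.0, 0.5])
template_weights = np.exp(template_logits) / np.exp(template_logits).sum()

# Fuse the per-template scores with the weights and pick the best relation,
# so no single manually chosen template dominates the prediction.
fused = template_weights @ scores          # shape (n_relations,)
predicted_relation = int(np.argmax(fused))
print("fused scores:", np.round(fused, 3))
print("predicted relation index:", predicted_relation)
```

In this sketch the weighted fusion is a simple convex combination of per-template similarity scores; the abstract indicates the actual weights are learned jointly with the part-of-speech feature rather than fixed as shown here.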