Joint entity-relation extraction methods based on the "pre-training + fine-tuning" paradigm rely on large-scale annotated data. In the few-shot scenarios of ancient Chinese books, where data annotation is difficult and costly, fine-tuning is inefficient and extraction performance is poor; entity nesting and relation overlapping are common in ancient Chinese texts, which limits the effectiveness of joint entity-relation extraction; and pipeline extraction methods suffer from error propagation, which further degrades results. To address these problems, a joint entity-relation extraction method for ancient Chinese books based on prompt learning and a global pointer network was proposed. First, a prompt learning method formulated as span-extraction reading comprehension was used to inject domain knowledge into the Pre-trained Language Model (PLM) and to unify the optimization objectives of pre-training and fine-tuning, and the input sentences were encoded with the prompt-tuned PLM. Then, global pointer networks were used to predict the boundaries of subjects and objects, as well as the relation-specific alignments between subject and object boundaries, and these predictions were jointly decoded and aligned into entity-relation triples, completing the construction of the PTBG (Prompt Tuned BERT with Global pointer) model. In this way, the entity nesting and relation overlapping problems were resolved, and the error propagation of pipeline decoding was avoided. Finally, on the basis of the above work, the influence of different prompt templates on extraction performance was analyzed. Experimental results on the Records of the Grand Historian dataset show that, compared with the OneRel model before and after injecting domain knowledge, the PTBG model improves the F1 score by 1.64 and 1.97 percentage points, respectively. The PTBG model can therefore better perform joint entity-relation extraction on ancient Chinese books, and it provides new research ideas and approaches for low-resource, few-shot deep learning scenarios.
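Since the paper's implementation is not reproduced here, the following is a minimal PyTorch sketch of a GlobalPointer-style scoring head of the kind the decoding step above builds on: it scores every (start, end) token pair for each type, so the same head shape can mark entity spans and relation-specific subject-object head/tail links. All names (GlobalPointerHead, head_size, etc.) are illustrative assumptions, and details of the original GlobalPointer such as rotary position embeddings are omitted; this is a sketch, not the authors' released code.

```python
import torch
import torch.nn as nn


class GlobalPointerHead(nn.Module):
    """Scores every (i, j) token pair for each type, e.g. entity spans or
    relation-specific (subject, object) head/tail alignments."""

    def __init__(self, hidden_size: int, num_types: int, head_size: int = 64):
        super().__init__()
        self.num_types = num_types
        self.head_size = head_size
        # One query/key projection pair per type, packed into a single linear layer.
        self.dense = nn.Linear(hidden_size, num_types * head_size * 2)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: [batch, seq_len, hidden_size] from the prompt-tuned encoder
        b, n, _ = hidden_states.shape
        qk = self.dense(hidden_states).view(b, n, self.num_types, 2, self.head_size)
        q, k = qk[..., 0, :], qk[..., 1, :]  # each [batch, seq_len, num_types, head_size]
        # Pairwise scores: scores[b, t, i, j] = <q_i, k_j> for type t, scaled as in attention.
        scores = torch.einsum("bmth,bnth->btmn", q, k) / self.head_size ** 0.5
        return scores  # [batch, num_types, seq_len, seq_len]


# Usage sketch: pairs with score > 0 are read off as predictions; entity types keep
# only i <= j spans, while relation-specific (subject_head, object_head) and
# (subject_tail, object_tail) matrices are intersected to align subjects and objects
# into (subject, relation, object) triples in one joint decoding pass.
encoder_output = torch.randn(2, 32, 768)          # stand-in for prompt-tuned BERT output
head = GlobalPointerHead(hidden_size=768, num_types=5)
print(head(encoder_output).shape)                 # torch.Size([2, 5, 32, 32])
```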