In the Document-level Relation Extraction (DocRE) task, existing models mainly focus on learning interactions among entities in a document while neglecting the internal structures of entities, and they pay little attention to resolving pronoun references or applying logical rules within the document. As a result, these models are not accurate enough in modeling the relationships among entities in a document. Therefore, an anaphor-aware relation graph was integrated into the Transformer architecture to model both the interactions among entities and the internal structures of entities, so that anaphors could aggregate additional contextual information into their corresponding entities, thereby improving relation extraction accuracy. Moreover, a data-driven approach was used to mine logical rules from relation annotations, enhancing the model's ability to understand and reason about implicit logical relationships in the text. To address the problem of sample imbalance, a weighted long-tail loss function was introduced to improve the accuracy of identifying rare relations. Experiments were conducted on two public datasets: DocRED (Document-level Relation Extraction Dataset) and Re-DocRED (Revisiting Document-level Relation Extraction Dataset). The results show that the proposed model achieves the best performance: when BERT is used as the encoder, its Ign F1 and F1 scores on the DocRED test set are 1.79 and 2.09 percentage points higher, respectively, than those of the baseline model ATLOP (Adaptive Thresholding and Localized cOntext Pooling), validating the strong overall performance of the proposed model.
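The abstract names the weighted long-tail loss without giving its formula. As a minimal sketch, assuming a multi-label binary cross-entropy objective re-weighted by tempered inverse class frequency (a common remedy for long-tailed relation distributions), the idea might be implemented in PyTorch as follows; the function names, the tempering exponent beta, and the exact weighting formula are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def long_tail_weights(class_counts: torch.Tensor, beta: float = 0.5) -> torch.Tensor:
    """Tempered inverse-frequency class weights, normalized so the mean
    weight is 1. Rarer relation types receive larger weights.
    (Hypothetical weighting scheme; the paper's formula may differ.)"""
    freq = class_counts.float() / class_counts.sum()
    weights = freq.clamp_min(1e-12).pow(-beta)
    return weights * (len(weights) / weights.sum())

def weighted_long_tail_loss(logits: torch.Tensor,
                            labels: torch.Tensor,
                            class_counts: torch.Tensor) -> torch.Tensor:
    """Multi-label BCE over relation classes with per-class re-weighting.
    logits, labels: (num_entity_pairs, num_relations); labels in {0, 1}."""
    w = long_tail_weights(class_counts).to(logits.device)
    return F.binary_cross_entropy_with_logits(logits, labels.float(), weight=w)

# Toy usage: 4 entity pairs, 3 relation types with a skewed count distribution.
logits = torch.randn(4, 3)
labels = torch.tensor([[1, 0, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]])
counts = torch.tensor([900, 80, 20])  # long-tailed relation frequencies
print(weighted_long_tail_loss(logits, labels, counts))
```

Under this assumed scheme, errors on the rare relation (20 annotations) contribute more to the loss than errors on the frequent one (900 annotations), which is the intended effect of the weighted long-tail objective described above.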