Abductive reasoning is an important task in Natural Language Inference (NLI) that aims to infer reasonable intermediate events (hypotheses) between a given initial observation event and a final observation event. Earlier studies trained the inference model on each training sample independently; more recently, mainstream studies have exploited the semantic correlation between similar training samples and fitted the reasonableness of a hypothesis to its frequency in the training set, so as to describe the reasonableness of hypotheses in different contexts more accurately. On this basis, while characterizing the reasonableness of hypotheses, difference and relativity constraints between reasonable and unreasonable hypotheses were added, thereby achieving a two-way characterization of both the reasonableness and the unreasonableness of hypotheses, and the overall relativity was modeled through many-to-many training. In addition, considering that words differ in importance when expressing an event, an attention module was constructed over the words in each sample. Finally, an abductive reasoning model based on an attention balance list was formed. Experimental results show that, compared with the L2R2 (Learning to Rank for Reasoning) model, the proposed model improves accuracy and AUC by about 0.46 and 1.36 percentage points respectively on the mainstream abductive inference dataset Abductive Reasoning in narrative Text (ART), which proves the effectiveness of the proposed model.
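The many-to-many relativity constraint sketched above can be illustrated as a pairwise margin ranking objective: every reasonable hypothesis should be scored above every unreasonable one, with each hypothesis score formed as an attention-weighted combination of per-word scores. The following is a minimal plain-Python sketch under assumed inputs; the token scores, attention logits, and margin value are hypothetical illustrations, not details taken from the paper.

```python
import math

def attention_score(token_scores, token_logits):
    """Combine per-token scores using softmax attention weights.

    token_logits stand in for learned word-importance weights
    (hypothetical example values, not the paper's parameters)."""
    exps = [math.exp(x) for x in token_logits]
    total = sum(exps)
    weights = [e / total for e in exps]  # softmax over word importances
    return sum(w * s for w, s in zip(weights, token_scores))

def many_to_many_margin_loss(pos_scores, neg_scores, margin=1.0):
    """Average hinge loss over every (reasonable, unreasonable) pair,
    pushing each reasonable hypothesis above each unreasonable one
    by at least the margin -- the many-to-many relativity idea."""
    losses = [max(0.0, margin - p + n)
              for p in pos_scores for n in neg_scores]
    return sum(losses) / len(losses)

# Hypothetical scores: two reasonable and two unreasonable hypotheses.
pos = [attention_score([2.0, 1.0], [0.5, 0.2]), 1.8]
neg = [attention_score([0.3, -0.5], [0.1, 0.4]), -0.2]
loss = many_to_many_margin_loss(pos, neg, margin=2.0)
```

Because the loss averages over all positive-negative pairs rather than a single pair per sample, the relative ordering among the full set of hypotheses is constrained jointly, which is the intuition behind the many-to-many training described in the abstract.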