Previous data-free class incremental learning methods can generate data of previously learned classes through techniques such as model inversion, but they cannot effectively alleviate the model's plasticity-stability dilemma, and these synthesis techniques tend to overlook the diversity of the generated data. To address these issues, a knowledge distillation-based incremental learning strategy was proposed. Firstly, a local cross-entropy loss was utilized to help the model learn knowledge of the new classes. Secondly, distillation based on output features was combined with it to reduce the forgetting of old-class knowledge. Finally, distillation based on relational features was applied to alleviate the conflict between learning representations of new classes and retaining representations of old classes. Furthermore, to enhance the diversity of the generated data, a regularization term was introduced on top of model inversion to prevent the generated samples from being excessively similar. Experimental results show that, compared to Relation-guided representation learning for Data-Free Class Incremental Learning (R-DFCIL), the proposed model achieves average incremental accuracy improvements of 0.25 and 0.18 percentage points in the 5-task and 10-task settings on the CIFAR-100 dataset, and of 0.21 and 0.07 percentage points respectively on the Tiny-ImageNet dataset. Besides, the proposed model does not require an additional classifier for fine-tuning, and the proposed diversity regularization term offers a direction for further improvement in data-free class incremental learning.
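The three training terms outlined above can be pictured with a short PyTorch-style sketch. The module attributes (`new_class_slice`, `backbone`), the loss weights, and the cosine-based relational distance used here are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def incremental_loss(model, old_model, x_new, y_new, x_syn,
                     lambda_hkd=1.0, lambda_rkd=0.5):
    """Hypothetical combination of the three terms in the abstract:
    local cross-entropy on new classes, output-feature distillation on
    synthesized old-class data, and relational-feature distillation."""
    # 1) Local cross-entropy: logits restricted to the new classes only.
    logits_new = model(x_new)[:, model.new_class_slice]      # assumed attribute
    loss_ce = F.cross_entropy(logits_new, y_new)

    # 2) Output-feature distillation: keep old-class outputs on the
    #    model-inversion samples close to the frozen old model's outputs.
    with torch.no_grad():
        old_out = old_model(x_syn)
    loss_hkd = F.mse_loss(model(x_syn)[:, :old_out.size(1)], old_out)

    # 3) Relational-feature distillation: preserve the pairwise structure
    #    of new-data embeddings under the old and current backbones.
    feat_new = F.normalize(model.backbone(x_new), dim=1)      # assumed attribute
    feat_old = F.normalize(old_model.backbone(x_new), dim=1)
    loss_rkd = F.smooth_l1_loss(feat_new @ feat_new.T, feat_old @ feat_old.T)

    return loss_ce + lambda_hkd * loss_hkd + lambda_rkd * loss_rkd
```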
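Similarly, the diversity regularization on model inversion can be sketched as a penalty that discourages synthesized samples in a batch from collapsing onto near-duplicates; the pairwise cosine-similarity form and the weight below are assumptions for illustration, not the paper's exact term.

```python
import torch
import torch.nn.functional as F

def diversity_regularizer(synth_batch: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise similarity among generated samples so that model
    inversion does not keep producing near-identical images.
    (Hypothetical form; added to the usual inversion objective.)"""
    flat = F.normalize(synth_batch.flatten(start_dim=1), dim=1)
    sim = flat @ flat.T                                  # pairwise cosine similarity
    off_diag = sim - torch.eye(sim.size(0), device=sim.device)
    return off_diag.clamp(min=0).mean()                  # large when samples look alike

# Usage: add the term to a model-inversion step (weight is illustrative).
x_syn = torch.randn(32, 3, 32, 32, requires_grad=True)
loss = 0.1 * diversity_regularizer(x_syn)                # + inversion losses in practice
loss.backward()
```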