As the smallest semantic units, sememes are crucial for the headline generation task. Although the Sememe-Driven Language Model (SDLM) is one of the mainstream models, it has limited encoding capability for long text sequences, does not fully consider positional relationships, and is prone to introducing noisy knowledge that degrades the quality of generated headlines. To address these problems, a Transformer-based headline generation model, namely Tran-A-SDLM (Transformer Adaption based Sememe-Driven Language Model with positional embedding and knowledge reasoning), was proposed, which combines the advantages of adaptive positional embedding and a knowledge reasoning mechanism. Firstly, the Transformer model was introduced to enhance the model's encoding capability for text sequences. Secondly, an adaptive positional embedding mechanism was utilized to strengthen the model's positional awareness, thereby improving its learning of contextual sememe knowledge. In addition, a knowledge reasoning module was introduced to represent sememe knowledge and guide the model toward generating accurate headlines. Finally, to demonstrate the superiority of Tran-A-SDLM, experiments were conducted on the Large-scale Chinese Short Text Summarization (LCSTS) dataset. Experimental results show that Tran-A-SDLM achieves improvements of 0.2, 0.7 and 0.5 percentage points in ROUGE-1, ROUGE-2 and ROUGE-L scores respectively, compared to RNN-context-SDLM. Results of the ablation study further validate the effectiveness of the proposed model.
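To make the adaptive positional embedding idea concrete, the following is a minimal PyTorch-style sketch in which per-position vectors are learned jointly with the model and added to token embeddings, in contrast to fixed sinusoidal encodings. The abstract does not give implementation details, so all names here (AdaptivePositionalEmbedding, max_len, d_model) are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of a learnable ("adaptive") positional embedding layer.
# Assumes a PyTorch-style Transformer encoder; names are illustrative.
import torch
import torch.nn as nn

class AdaptivePositionalEmbedding(nn.Module):
    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        # Learned per-position vectors, updated during training,
        # unlike fixed sinusoidal position encodings.
        self.pos_emb = nn.Embedding(max_len, d_model)

    def forward(self, token_emb: torch.Tensor) -> torch.Tensor:
        # token_emb: (batch, seq_len, d_model)
        seq_len = token_emb.size(1)
        positions = torch.arange(seq_len, device=token_emb.device)
        # Broadcast position vectors across the batch dimension.
        return token_emb + self.pos_emb(positions)
```

Adding the position vectors before the Transformer's self-attention layers is what gives the encoder its positional awareness; because the vectors are trainable parameters, they can adapt to the length distribution of the headline corpus.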
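For reference, the ROUGE-L metric cited above is the longest-common-subsequence (LCS) F-measure between a generated headline and its reference. A minimal token-level sketch in the F1 form follows; evaluations on LCSTS typically use a standard toolkit with character-level tokenization for Chinese, so this is illustrative only.

```python
# Minimal sketch of ROUGE-L as an LCS-based F1 score over tokens.
def lcs_length(a: list, b: list) -> int:
    # Classic dynamic-programming longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: list, reference: list) -> float:
    lcs = lcs_length(candidate, reference)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate)
    recall = lcs / len(reference)
    return 2 * precision * recall / (precision + recall)

# Example (character-level, as is common for Chinese):
# rouge_l_f1(list("生成标题"), list("参考标题"))
```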