Chinese Named Entity Recognition (NER) aims to extract entities from unstructured text and assign them to predefined categories. To address the insufficient semantic learning caused by the lack of contextual information in most Chinese NER methods, an NER framework with hierarchical fusion of multiple kinds of knowledge, named HTLR (Chinese NER method based on Hierarchical Transformer fusing Lexicon and Radical), was proposed; it uses hierarchically fused multi-source knowledge to help the model learn richer and more comprehensive contextual and semantic information. Firstly, the words contained in the corpus were identified and vectorized using a publicly available Chinese lexicon and pretrained word vector table; at the same time, lexical knowledge was learned by modeling the semantic relationships between words and their related characters through optimized position encoding. Secondly, using the radical-based encoding of Chinese characters provided by the Han Dian website, the corpus was converted into coding sequences representing character-form information, and an RFE-CNN (Radical Feature Extraction-Convolutional Neural Network) model was proposed to extract radical features. Finally, a Hierarchical Transformer model was proposed, in which the lower-level modules learn the semantic relationships between characters and words and between characters and radical forms, while the higher-level modules fuse the knowledge about characters, words, and radical forms, helping the model acquire character representations with richer semantics. Experimental results on the public datasets Weibo, Resume, MSRA, and OntoNotes4.0 show that the F1 scores of the proposed method are improved by 9.43, 0.75, 1.76, and 6.45 percentage points, respectively, compared with those of the mainstream method NFLAT (Non-Flat-LAttice Transformer for Chinese named entity recognition), achieving the best performance among the compared methods. These results demonstrate that multi-source semantic knowledge, hierarchical fusion, the RFE-CNN structure, and the Hierarchical Transformer structure are effective for learning rich semantic knowledge and improving model performance.
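
To make the radical-feature step concrete, the following is a minimal sketch of an RFE-CNN-style extractor, not the authors' released code: each character's radical-component sequence is embedded and passed through 1D convolutions of several widths, and the resulting features are max-pooled and concatenated into a per-character radical vector. PyTorch is assumed, and the radical vocabulary size, embedding width, channel count, kernel widths, and the name `RadicalFeatureExtractor` are illustrative placeholders rather than values from the paper.

```python
# Minimal sketch of a radical-feature CNN in the spirit of RFE-CNN (PyTorch assumed).
# All hyperparameters below are illustrative, not the paper's settings.
import torch
import torch.nn as nn

class RadicalFeatureExtractor(nn.Module):
    """Encode each character's radical-component sequence with a 1D CNN."""
    def __init__(self, num_radicals=300, emb_dim=50, out_dim=50, kernel_sizes=(2, 3)):
        super().__init__()
        self.embed = nn.Embedding(num_radicals, emb_dim, padding_idx=0)
        # One convolution per kernel width; each output is max-pooled over the
        # radical sequence, then the pooled features are concatenated.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, out_dim, k, padding=k - 1) for k in kernel_sizes
        )

    def forward(self, radical_ids):
        # radical_ids: (batch, seq_len, max_radicals) integer ids, 0 = padding
        b, s, r = radical_ids.shape
        x = self.embed(radical_ids.view(b * s, r))        # (b*s, r, emb_dim)
        x = x.transpose(1, 2)                             # (b*s, emb_dim, r)
        feats = [conv(x).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1).view(b, s, -1)     # (b, s, out_dim * len(kernel_sizes))

# Usage: 2 sentences, 4 characters each, up to 5 radical components per character.
ids = torch.randint(0, 300, (2, 4, 5))
print(RadicalFeatureExtractor()(ids).shape)  # torch.Size([2, 4, 100])
```

The per-character radical vectors produced this way would then be one of the knowledge sources fused with character and word representations in the Hierarchical Transformer described above.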