HTLR: named entity recognition framework with hierarchical fusion of multi-knowledge
Xueqiang LYU, Tao WANG, Xindong YOU, Ge XU
Journal of Computer Applications    2025, 45 (1): 40-47.   DOI: 10.11772/j.issn.1001-9081.2023111699

Chinese Named Entity Recognition (NER) aims to extract entities from unstructured text and assign them to predefined categories. To address the insufficient semantic learning caused by the lack of contextual information in most Chinese NER methods, an NER framework with hierarchical fusion of multi-knowledge, named HTLR (Chinese NER method based on Hierarchical Transformer fusing Lexicon and Radical), was proposed; it uses hierarchically fused multi-knowledge to help the model learn richer and more comprehensive contextual and semantic information. Firstly, the lexicon words contained in the corpus were identified and vectorized by using a publicly available Chinese lexicon table and word vector table, and lexical knowledge was learned by modeling the semantic relationships between lexicon words and their related characters through optimized position encoding. Secondly, the corpus was converted into the corresponding coding sequences representing character-form information, based on the Chinese character radical codes provided by the Han Dian website, and an RFE-CNN (Radical Feature Extraction-Convolutional Neural Network) model was proposed to extract radical features. Finally, a Hierarchical Transformer model was proposed, in which the semantic relationships between characters and lexicon words, and between characters and radical forms, were learned by the lower-level modules, and the multi-knowledge about characters, lexicon, and radicals was fused by the higher-level modules, helping the model acquire character representations with richer semantics. Experimental results on the public datasets Weibo, Resume, MSRA, and OntoNotes4.0 show that the F1 scores of the proposed method are improved by 9.43, 0.75, 1.76, and 6.45 percentage points, respectively, compared with those of the mainstream method NFLAT (Non-Flat-LAttice Transformer for Chinese named entity recognition), reaching the optimal level. It can be seen that the multi-semantic knowledge, the hierarchical fusion, and the RFE-CNN and Hierarchical Transformer structures are effective for learning rich semantic knowledge and improving model performance.
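A minimal PyTorch sketch of the hierarchical fusion idea described above is given below: lower-level attention relates characters to lexicon words and to radical features, and a higher-level layer fuses the three views. The dimensions, layer counts, the toy RadicalCNN stand-in for RFE-CNN, and the residual-style fusion are illustrative assumptions, not the published implementation.

```python
import torch
import torch.nn as nn

class RadicalCNN(nn.Module):
    """Toy stand-in for RFE-CNN: 1-D convolution over radical-code embeddings."""
    def __init__(self, n_codes=300, dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_codes, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, radical_ids):                      # (batch, seq, max_codes)
        b, s, c = radical_ids.shape
        x = self.emb(radical_ids).view(b * s, c, -1).transpose(1, 2)
        x = torch.relu(self.conv(x)).max(dim=2).values   # pool over radical codes
        return x.view(b, s, -1)                          # (batch, seq, dim)

class HierarchicalFusion(nn.Module):
    """Lower level: char-lexicon and char-radical attention; upper level: fusion."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.char_lex = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.char_rad = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, char_h, lex_h, rad_h):
        cl, _ = self.char_lex(char_h, lex_h, lex_h)      # characters attend to lexicon words
        cr, _ = self.char_rad(char_h, rad_h, rad_h)      # characters attend to radical features
        return self.fuse(char_h + cl + cr)               # higher-level fusion of all views

# Toy usage: 2 sentences, 10 characters, up to 4 radical codes per character, 5 lexicon words.
chars = torch.randn(2, 10, 64)
lex = torch.randn(2, 5, 64)
rads = RadicalCNN()(torch.randint(0, 300, (2, 10, 4)))
out = HierarchicalFusion()(chars, lex, rads)             # (2, 10, 64)
```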

Aspect-based sentiment analysis model fused with multi-window local information
Zhixiong ZHENG, Jianhua LIU, Shuihua SUN, Ge XU, Honghui LIN
Journal of Computer Applications    2023, 43 (6): 1796-1802.   DOI: 10.11772/j.issn.1001-9081.2022060891

To address the issue that current Aspect-Based Sentiment Analysis (ABSA) models rely too heavily on the syntactic dependency tree, whose relationships are relatively sparse, to learn feature representations, which limits their ability to learn local information, an ABSA model fused with multi-window local information, called MWGAT (combining Multi-Window local information and Graph ATtention network), was proposed. Firstly, local contextual features were learned through a multi-window local feature learning mechanism, mining the potential local information contained in the text. Secondly, a Graph ATtention network (GAT), which can better exploit the syntactic dependency tree, was used to learn the syntactic structure information represented by the tree and generate syntax-aware contextual features. Finally, these two types of features, which represent different semantic information, were fused to form a feature representation containing both the syntactic information of the dependency tree and the local information, so that the sentiment polarities of aspect words could be discriminated efficiently by the classifier. Experiments were conducted on three public datasets: Restaurant, Laptop, and Twitter. The results show that, compared with the T-GCN (Type-aware Graph Convolutional Network) model, which also uses the syntactic dependency tree, the proposed model improves the Macro-F1 score by 2.48%, 2.37%, and 0.32%, respectively. It can be seen that the proposed model can effectively mine potential local information and predict the sentiment polarities of aspect words more accurately.
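The following sketch illustrates, under stated assumptions, how multi-window local features and dependency-tree graph attention can be combined. The window sizes, dimensions, simplified ReLU activations, mean-pool over windows, and concatenation-based fusion are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiWindowLocal(nn.Module):
    """Parallel 1-D convolutions with several window sizes capture local context."""
    def __init__(self, dim=64, windows=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=k // 2) for k in windows)

    def forward(self, h):                                # h: (batch, seq, dim)
        x = h.transpose(1, 2)
        feats = [torch.relu(c(x)) for c in self.convs]   # one feature map per window size
        return torch.stack(feats).mean(0).transpose(1, 2)

class DepGAT(nn.Module):
    """One graph-attention layer over the dependency-tree adjacency matrix."""
    def __init__(self, dim=64):
        super().__init__()
        self.w = nn.Linear(dim, dim)
        self.a = nn.Linear(2 * dim, 1)

    def forward(self, h, adj):                           # adj: (batch, seq, seq), 0/1
        z = self.w(h)
        n = z.size(1)
        pair = torch.cat([z.unsqueeze(2).expand(-1, -1, n, -1),
                          z.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        e = torch.relu(self.a(pair)).squeeze(-1)         # raw attention scores
        e = e.masked_fill(adj == 0, float('-inf'))       # attend only along dependency edges
        att = torch.nan_to_num(torch.softmax(e, dim=-1)) # rows with no edges become zeros
        return torch.relu(att @ z)

# Fusion: concatenate local and syntax-aware features before the sentiment classifier.
h = torch.randn(2, 12, 64)
adj = torch.eye(12).unsqueeze(0).expand(2, -1, -1)
fused = torch.cat([MultiWindowLocal()(h), DepGAT()(h, adj)], dim=-1)   # (2, 12, 128)
```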

Improvement of DV-Hop localization based on shuffled frog leaping algorithm
Yu GE, Xue-ping WANG, Jing LIANG
Journal of Computer Applications    2011, 31 (04): 922-924.   DOI: 10.3724/SP.J.1087.2011.00922
In order to reduce the node localization error of the DV-Hop algorithm in Wireless Sensor Networks (WSNs), the calculation of the average distance per hop was adjusted by using the Shuffled Frog Leaping Algorithm (SFLA). The improved DV-Hop algorithm brings the average distance per hop closer to its actual value, thereby reducing the localization error. Simulation results indicate that the improved DV-Hop algorithm reduces the localization error effectively and has good stability without requiring additional hardware; therefore, it is a practical localization solution for WSNs.
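A short illustrative sketch of the quantity being tuned follows: in DV-Hop, anchor nodes estimate an average distance per hop from known anchor-to-anchor distances and hop counts, and a metaheuristic such as SFLA can search for a hop size that better fits those known distances. The function names, the pooled baseline estimate, and the squared-error objective below are assumptions for illustration, not the paper's formulation.

```python
import math

def baseline_hop_size(anchors, hops):
    """Pooled estimate of average distance per hop over all anchor pairs
    (a simplification of DV-Hop's per-anchor hop sizes)."""
    dist_sum, hop_sum = 0.0, 0
    for i in range(len(anchors)):
        for j in range(i + 1, len(anchors)):
            dist_sum += math.dist(anchors[i], anchors[j])
            hop_sum += hops[i][j]
    return dist_sum / hop_sum

def hop_size_error(hop_size, anchors, hops):
    """Objective a search procedure would minimize: squared error between true
    and hop-estimated anchor-to-anchor distances for a candidate hop size."""
    err = 0.0
    for i in range(len(anchors)):
        for j in range(i + 1, len(anchors)):
            err += (math.dist(anchors[i], anchors[j]) - hop_size * hops[i][j]) ** 2
    return err

# Example: three anchors with known coordinates and pairwise hop counts.
anchors = [(0.0, 0.0), (30.0, 0.0), (0.0, 40.0)]
hops = [[0, 3, 4], [3, 0, 5], [4, 5, 0]]
start = baseline_hop_size(anchors, hops)     # initial DV-Hop-style estimate
# an SFLA search would perturb `start` to reduce hop_size_error(hop_size, anchors, hops)
```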