Dung beetle optimizer algorithm with restricted reverse learning and Cauchy-Gauss variation
Zhilong YANG, Dexuan ZOU, Can LI, Yingying SHAO, Lejie MA
Journal of Computer Applications    2025, 45 (7): 2304-2316.   DOI: 10.11772/j.issn.1001-9081.2024060778

To overcome the shortcomings of slow convergence, low accuracy, and a tendency to fall into local optima in the Dung Beetle Optimizer (DBO) algorithm, a Dung Beetle Optimizer algorithm with restricted reverse learning and Cauchy-Gauss variation (SI-DBO) was proposed. Firstly, Circle mapping was used to initialize the population so that the population distribution was more uniform and diverse, which improved the convergence speed and optimization accuracy of the algorithm. Secondly, restricted reverse learning was used to update the locations of dung beetles, thereby improving their search ability. Finally, a Cauchy-Gauss variation strategy was used to help the population escape from local optima and find the global optimum. To verify the performance of SI-DBO, simulation experiments were carried out on benchmark test functions, a Wilcoxon rank-sum test was performed on the experimental results, and the algorithm was applied to the robot gripper design problem. Experimental results show that SI-DBO achieves higher optimization accuracy and faster convergence than the Black Widow-Dung Beetle Optimization (BWDBO) algorithm and the Sparrow Search Algorithm (SSA) on the test functions. Meanwhile, SI-DBO outperforms the Particle Swarm Optimization (PSO) algorithm on the robot gripper problem, indicating better optimization performance and engineering practicability of SI-DBO.
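To make the two strategies concrete, the Python sketch below shows a common form of the Circle chaotic map used for population initialization and a mixed Cauchy-Gauss mutation applied to the current best solution. The map parameters, the time-varying weights, and the multiplicative form of the mutation are illustrative assumptions; the abstract does not give the exact formulas used in SI-DBO.

```python
import numpy as np

def circle_map_init(pop_size, dim, lb, ub, seed=0):
    """Initialize a population with the Circle chaotic map (a common form;
    the paper's exact parameters are not stated in the abstract)."""
    rng = np.random.default_rng(seed)
    x = rng.random(dim)                       # chaotic seed per dimension
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        # Circle map: x <- (x + 0.2 - (0.5 / (2*pi)) * sin(2*pi*x)) mod 1
        x = np.mod(x + 0.2 - (0.5 / (2 * np.pi)) * np.sin(2 * np.pi * x), 1.0)
        pop[i] = lb + x * (ub - lb)           # scale to the search bounds
    return pop

def cauchy_gauss_mutation(best, t, t_max, rng=None):
    """Perturb the current best solution with a mixed Cauchy-Gauss step.
    The time-varying weights below are an illustrative choice, not the paper's."""
    rng = rng or np.random.default_rng()
    w_cauchy = 1.0 - (t / t_max) ** 2         # large Cauchy jumps early on
    w_gauss = (t / t_max) ** 2                # finer Gaussian steps later
    step = w_cauchy * rng.standard_cauchy(best.shape) + \
           w_gauss * rng.standard_normal(best.shape)
    return best * (1.0 + step)
```

In the full algorithm, the mutated solution would typically replace the current best only when its fitness improves (greedy selection), so the mutation can only help the population escape a local optimum, never degrade the best-so-far result.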

Chinese semantic error recognition model based on hierarchical information enhancement
Yuqi ZHANG, Ying SHA
Journal of Computer Applications    2025, 45 (12): 3771-3778.   DOI: 10.11772/j.issn.1001-9081.2024111694

Semantic errors in Chinese differ from simple spelling and grammatical errors in that they are more inconspicuous and complex. Chinese Semantic Error Recognition (CSER) aims to determine whether a Chinese sentence contains semantic errors. As a prerequisite task for semantic review, the performance of the recognition model is crucial for semantic error correction. To address the issue that CSER models ignore the differences between syntactic structure and contextual structure when integrating syntactic information, a Hierarchical Information Enhancement Graph Convolutional Network (HIE-GCN) model was proposed, which embeds the hierarchical information of nodes in the syntactic tree into the context encoder, thereby reducing the gap between syntactic structure and contextual structure. Firstly, a traversal algorithm was used to extract the hierarchical information of nodes in the syntactic tree. Secondly, this hierarchical information was embedded into the BERT (Bidirectional Encoder Representations from Transformers) model to generate character features, which served as the node features of a Graph Convolutional Network (GCN); after graph convolution, the feature vector of the entire sentence was obtained. Finally, a fully connected layer was used for one-class or multi-class semantic error recognition. Results of semantic error recognition and correction experiments on the FCGEC (Fine-grained corpus for Chinese Grammatical Error Correction) and NaCGEC (Native Chinese Grammatical Error Correction) datasets show that, in the recognition task on FCGEC, compared with the baseline models, HIE-GCN improves the accuracy by at least 0.10 percentage points and the F1 score by at least 0.13 percentage points in one-class error recognition, and improves the accuracy by at least 1.05 percentage points and the F1 score by at least 0.53 percentage points in multi-class error recognition. Ablation experiments verify the effectiveness of hierarchical information embedding. Compared with Large Language Models (LLMs) such as GPT and Qwen, the proposed model's overall recognition performance is significantly higher. In the correction experiment, the recognition-correction two-stage pipeline improves the correction precision by 8.01 percentage points over the sequence-to-sequence direct correction model. It is also found that, when correcting with the LLM GLM4, providing the model with hints about the sentence's error type increases the correction precision by 4.62 percentage points.
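The core idea, injecting each token's depth in the syntactic tree into its contextual feature before graph convolution, can be sketched in a few lines of PyTorch. The layer sizes, single GCN layer, and mean pooling below are illustrative assumptions rather than the paper's actual architecture, and the contextual features are assumed to come from a BERT encoder outside this snippet.

```python
import torch
import torch.nn as nn

class HierarchyEnhancedGCN(nn.Module):
    """Minimal sketch of the idea described in the abstract: add an embedding
    of each token's depth in the syntactic tree to its contextual feature,
    then run graph convolution over the dependency graph."""

    def __init__(self, hidden=768, max_depth=32, num_classes=2):
        super().__init__()
        self.depth_emb = nn.Embedding(max_depth, hidden)   # hierarchical information
        self.gcn_weight = nn.Linear(hidden, hidden)         # one GCN layer
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, char_feats, depths, adj):
        # char_feats: (batch, seq, hidden) contextual features, e.g. from BERT
        # depths:     (batch, seq) LongTensor, depth of each token in the tree
        # adj:        (batch, seq, seq) normalized adjacency of the syntactic graph
        h = char_feats + self.depth_emb(depths)             # inject hierarchy
        h = torch.relu(self.gcn_weight(torch.bmm(adj, h)))  # graph convolution
        sent = h.mean(dim=1)                                # sentence-level vector
        return self.classifier(sent)                        # error / no-error logits
```

For multi-class error recognition, the same sketch applies with `num_classes` set to the number of error types instead of 2.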

Detection of denial of service and network probing attacks based on principal component analysis
LI Jie-ying, SHAO Chao
Journal of Computer Applications    2012, 32 (06): 1620-1622.   DOI: 10.3724/SP.J.1087.2012.01620
To detect Denial of Service (DoS) and network probing attacks, a new method based on Principal Component Analysis (PCA) was proposed. PCA was applied to both attack and normal traffic to extract various statistics, and a detection model was then constructed from these statistics. Finally, a threshold on the statistics was used to achieve a fixed false alarm rate. The experimental results show that this approach can detect DoS and network probing attacks effectively, yielding a 99% detection rate; in addition, security administrators can respond in time and thereby reduce the loss caused by ongoing attacks.
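A minimal sketch of one way to build such a detector is shown below, using the squared PCA reconstruction error as the detection statistic and a quantile of the errors on normal traffic as the threshold that fixes the false alarm rate. The abstract does not specify which statistic or thresholding rule the paper actually uses, so both choices here are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_detector(normal_traffic, n_components=5, false_alarm_rate=0.01):
    """Fit PCA on normal traffic statistics and pick a threshold on the
    squared reconstruction error that yields the desired false alarm rate
    on the training data."""
    pca = PCA(n_components=n_components).fit(normal_traffic)
    recon = pca.inverse_transform(pca.transform(normal_traffic))
    errors = np.sum((normal_traffic - recon) ** 2, axis=1)
    threshold = np.quantile(errors, 1.0 - false_alarm_rate)
    return pca, threshold

def detect(pca, threshold, traffic):
    """Flag traffic records whose reconstruction error exceeds the threshold."""
    recon = pca.inverse_transform(pca.transform(traffic))
    errors = np.sum((traffic - recon) ** 2, axis=1)
    return errors > threshold   # True = suspected DoS / probing attack
```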
New user transaction algorithm
Ji-rong GAO, Yan TIAN, Hai-ying SHAO
Journal of Computer Applications   
This study proposed a user transaction identification algorithm with double thresholds. The algorithm first judges whether a user is an accidental visitor according to the number of pages the user visited, and then uses the network topology and a minimum page interest degree to judge whether a page appeals to the user. This method improves the data preprocessing stage by deleting the visit records caused by accidental users, as well as the link pages and the pages that users are not interested in, producing an effective sequence of visited pages, namely the double-threshold user transaction. The validity of the algorithm is demonstrated through an example.
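The double-threshold filtering described above can be sketched as follows. The threshold values and the interest-degree measure (here, viewing time normalized by page size) are illustrative assumptions; the abstract does not give the paper's actual definitions.

```python
# Illustrative sketch of the double-threshold idea described in the abstract.
MIN_PAGES = 3        # threshold 1: sessions with fewer pages count as accidental users
MIN_INTEREST = 0.5   # threshold 2: pages below this interest degree are dropped

def interest_degree(view_time, page_size):
    """A simple interest measure: viewing time normalized by page size."""
    return view_time / max(page_size, 1)

def build_user_transaction(session):
    """session: list of (page, view_time, page_size, is_link_page) records.
    Returns the filtered page sequence, or None for an accidental user."""
    if len(session) < MIN_PAGES:
        return None                              # discard accidental users
    return [page for page, t, size, is_link in session
            if not is_link and interest_degree(t, size) >= MIN_INTEREST]
```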