Point cloud classification and segmentation method based on adaptive dynamic graph convolution and parameter-free attention
Weigang LI, Xinyi LI, Yongqiang WANG, Yuntao ZHAO
Journal of Computer Applications    2025, 45 (6): 1980-1986.   DOI: 10.11772/j.issn.1001-9081.2024060878

To address the difficulty that traditional convolution has in accurately extracting neighborhood features and effectively integrating contextual information in point cloud processing, a point cloud classification and segmentation method based on adaptive dynamic graph convolution and parameter-free attention was proposed. Firstly, an Adaptive Dynamic Graph Convolution (ADGC) module was used to learn the feature information of different neighborhoods, generate adaptive convolution kernels, and update edge features, so that local neighborhood features of the point cloud were extracted accurately. Then, a residual structure was designed to learn the spatial position information of the point cloud, capturing the geometric structure between point pairs and better retaining and extracting detailed features. Finally, to focus on and extract local geometric features more effectively, a Parameter-Free Attention (PFA) module was combined with the convolution operations to strengthen connections among neighboring points and improve the model's context-awareness. Experimental results show that the proposed method has clear advantages over PointNet across tasks: it improves Overall Accuracy (OA) by 4.6 percentage points on classification, mean Intersection over Union (mIoU) by 2.3 percentage points on part segmentation, and mIoU by 24.6 percentage points on semantic segmentation. These results indicate that the method strengthens the understanding and representation of complex geometric structures, yielding more accurate feature extraction and better performance on a variety of tasks.
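As a hedged illustration of the two building blocks the abstract names, the sketch below pairs an EdgeConv-style dynamic graph convolution over a kNN graph with a SimAM-style parameter-free attention. The module names, tensor shapes, and the choice of SimAM as the parameter-free mechanism are assumptions for illustration, not the authors' released code.

```python
# PyTorch sketch: dynamic graph convolution + parameter-free attention.
# Illustrative assumptions only; not the paper's implementation.
import torch
import torch.nn as nn

def knn_graph(x, k):
    # x: (B, C, N) point features; return indices of k nearest neighbors (B, N, k)
    inner = -2 * torch.matmul(x.transpose(1, 2), x)      # (B, N, N)
    sq = (x ** 2).sum(dim=1, keepdim=True)               # (B, 1, N)
    dist = -sq.transpose(1, 2) - inner - sq              # negative squared distance
    return dist.topk(k, dim=-1).indices                  # largest = nearest

class EdgeConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        # x: (B, C, N); build edge features [x_j - x_i, x_i] over a kNN graph
        B, C, N = x.shape
        idx = knn_graph(x, self.k)                                        # (B, N, k)
        idx = idx + torch.arange(B, device=x.device).view(B, 1, 1) * N
        flat = x.transpose(1, 2).reshape(B * N, C)
        neigh = flat[idx.reshape(-1)].view(B, N, self.k, C)
        center = x.transpose(1, 2).unsqueeze(2).expand_as(neigh)
        edge = torch.cat([neigh - center, center], dim=-1)                # (B, N, k, 2C)
        edge = edge.permute(0, 3, 1, 2)                                   # (B, 2C, N, k)
        return self.mlp(edge).max(dim=-1).values                          # (B, out_ch, N)

class SimAMAttention(nn.Module):
    """Parameter-free attention: reweight activations by an energy term."""
    def __init__(self, eps=1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        # x: (B, C, N); distinctive activations get lower energy, higher weight
        n = x.shape[-1] - 1
        d = (x - x.mean(dim=-1, keepdim=True)) ** 2
        v = d.sum(dim=-1, keepdim=True) / n
        e = d / (4 * (v + self.eps)) + 0.5
        return x * torch.sigmoid(e)

# Example: 32 points with 3-D coordinates
pts = torch.randn(2, 3, 32)
feat = SimAMAttention()(EdgeConvBlock(3, 64, k=8)(pts))   # (2, 64, 32)
```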

Efficient fine-tuning method of large language models for test case generation
Peng CAO, Guangqi WEN, Jinzhu YANG, Gang CHEN, Xinyi LIU, Xuechun JI
Journal of Computer Applications    2025, 45 (3): 725-731.   DOI: 10.11772/j.issn.1001-9081.2024111598

Data-driven automated generation of unit test cases suffers from low coverage and poor readability, and struggles to meet the growing demand for testing. Recently, Large Language Models (LLMs) have shown great potential in code generation tasks; however, because code data varies in functionality and coding style, LLMs face catastrophic forgetting and resource constraints. To address these problems, a transfer learning approach that jointly fine-tunes functional and coding styles was proposed, and an efficient fine-tuning training method was developed for LLMs generating unit test cases. Firstly, widely used instruction datasets were adopted to align the LLM with instructions, the instruction sets were divided by task type, and weight increments carrying task-specific features were extracted and stored. Secondly, an adaptive style extraction module was designed to handle diverse coding styles, with noise-resistant learning and coding style backtracking learning in the module. Finally, the functional and coding style increments were trained jointly on the target domain, achieving efficient adaptation and fine-tuning under limited resources. Experimental results of test case generation on the SF110 Corpus of Classes dataset show that the proposed method outperforms the comparison methods: against the mainstream code generation LLMs Codex, Code Llama, and DeepSeek-Coder, it increases the compilation rate by 0.8%, 43.5%, and 33.8%, respectively; branch coverage by 3.1%, 1.0%, and 17.2%, respectively; and line coverage by 4.1%, 6.5%, and 15.5%, respectively, verifying its superiority in code generation tasks.
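To make the stored-weight-increment idea concrete, here is a minimal PyTorch sketch in which a frozen base linear layer carries two separately stored LoRA-style increments, one for functional features and one for coding style, applied jointly on the target domain. The class names, rank, and the LoRA parameterization itself are illustrative assumptions rather than the authors' implementation.

```python
# PyTorch sketch: frozen base weights plus two jointly trained increments.
# A hedged illustration of the paper's idea, not the authors' code.
import torch
import torch.nn as nn

class LoRAIncrement(nn.Module):
    def __init__(self, in_dim, out_dim, rank=8, alpha=16.0):
        super().__init__()
        self.down = nn.Linear(in_dim, rank, bias=False)
        self.up = nn.Linear(rank, out_dim, bias=False)
        nn.init.zeros_(self.up.weight)           # start as a zero increment
        self.scale = alpha / rank

    def forward(self, x):
        return self.up(self.down(x)) * self.scale

class DualIncrementLinear(nn.Module):
    """Frozen base layer plus separately stored task-specific increments."""
    def __init__(self, base: nn.Linear, rank=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)               # base LLM weights stay frozen
        self.func_delta = LoRAIncrement(base.in_features, base.out_features, rank)
        self.style_delta = LoRAIncrement(base.in_features, base.out_features, rank)

    def forward(self, x):
        # Joint application of functional and coding style increments
        return self.base(x) + self.func_delta(x) + self.style_delta(x)

# Usage: wrap a projection of a frozen LLM, then optimize only the increments.
layer = DualIncrementLinear(nn.Linear(4096, 4096))
trainable = [p for p in layer.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-4)
```

Keeping the two increments as separate low-rank factors is what allows the coding style delta to be swapped or retrained without touching the functional delta or the frozen base, which matches the abstract's goal of efficient adaptation under limited resources.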
