Data-driven automated generation of unit test cases suffers from low coverage and poor readability, and struggles to meet the growing demand for testing. Recently, Large Language Models (LLMs) have shown great potential in code generation tasks. However, due to differences in the functional and coding styles of code data, LLMs face the challenges of catastrophic forgetting and resource constraints. To address these problems, a transfer learning idea of fine-tuning coding style and functional style simultaneously was proposed, and an efficient fine-tuning training method was developed for LLMs generating unit test cases. Firstly, widely used instruction datasets were adopted to align the LLM with instructions, the instruction sets were divided by task type, and the weight increments carrying task-specific features were extracted and stored. Secondly, an adaptive style extraction module was designed to handle various coding styles, with noise-resistant learning and coding style backtracking learning built into the module. Finally, the functional and coding style increments were jointly trained on the target domain, thereby realizing efficient adaptation and fine-tuning on target domains with limited resources. Experimental results of test case generation on the SF110 Corpus of Classes dataset indicate that the proposed method outperforms the baseline methods. Compared to the mainstream code generation LLMs Codex, Code Llama, and DeepSeek-Coder, the proposed method improves the compilation rate by 0.8%, 43.5%, and 33.8% respectively, the branch coverage by 3.1%, 1.0%, and 17.2% respectively, and the line coverage by 4.1%, 6.5%, and 15.5% respectively, verifying the superiority of the proposed method in code generation tasks.
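The abstract does not specify the parameter-efficient mechanism used to store and combine the task-specific weight increments; one common realization is LoRA-style low-rank increments added to a frozen base weight. The following Python sketch illustrates that idea under this assumption only; all class names, shapes, and hyperparameters are illustrative, not the paper's design.

```python
# Hypothetical sketch: per-task low-rank weight increments (LoRA-style)
# stored separately and mixed on the target domain. Illustrative only.
import torch
import torch.nn as nn

class LowRankIncrement(nn.Module):
    """One task-specific weight increment dW = B @ A of rank r."""
    def __init__(self, d_in, d_out, r=8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero-init: no drift at start

    def delta(self):
        return self.B @ self.A  # (d_out, d_in), same shape as the base weight

class AdaptedLinear(nn.Module):
    """Frozen base weight plus a weighted sum of stored task increments,
    e.g. one functional-style and one coding-style increment."""
    def __init__(self, base: nn.Linear, increments, mix):
        super().__init__()
        self.base = base.requires_grad_(False)        # frozen backbone
        self.increments = nn.ModuleList(increments)   # task-specific deltas
        self.mix = mix                                # mixing coefficients

    def forward(self, x):
        w = self.base.weight + sum(a * inc.delta()
                                   for a, inc in zip(self.mix, self.increments))
        return nn.functional.linear(x, w, self.base.bias)
```

In such a scheme, joint training on the target domain would update only the increments (and possibly the mixing coefficients), which keeps the resource cost far below full fine-tuning.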
Nitrogen oxide (NOx) is one of the main pollutants in the regenerated flue gas of the Fluid Catalytic Cracking (FCC) unit. Accurate prediction of NOx emission can effectively help refinery enterprises avoid pollution events. Given the non-stationarity, nonlinearity, and long-memory characteristics of pollutant emission data, a new hybrid model incorporating Ensemble Empirical Mode Decomposition (EEMD) and Long Short-Term Memory network (LSTM) was proposed to improve the prediction accuracy of pollutant emission concentration. The NOx emission concentration data was first decomposed into several Intrinsic Mode Functions (IMFs) and a residual by the EEMD model. Based on the correlation analysis between the IMF sub-sequences and the original data, the IMF sub-sequences with low correlation were eliminated, which effectively reduced the noise in the original data. The remaining IMFs were divided into high-frequency and low-frequency sequences, which were trained separately in LSTM networks of different depths. The final NOx concentration prediction was reconstructed from the predicted results of the sub-sequences. Compared with LSTM in the NOx emission prediction of the FCC unit, the Mean Square Error (MSE) and Mean Absolute Error (MAE) of EEMD-LSTM were reduced by 46.7% and 45.9% respectively, and the coefficient of determination (R²) was improved by 43%, showing that the proposed model achieves higher prediction accuracy.
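A minimal sketch of this decompose-then-predict pipeline is given below, assuming the PyEMD package (`pip install EMD-signal`) for EEMD and PyTorch for the sub-sequence LSTMs; the correlation threshold, window length, and network sizes are illustrative assumptions, not the paper's settings.

```python
# Sketch of the EEMD + LSTM hybrid described above (illustrative parameters).
import numpy as np
from PyEMD import EEMD
import torch
import torch.nn as nn

def decompose_and_denoise(signal, corr_threshold=0.1):
    """Decompose a 1-D series into IMFs and drop weakly correlated ones."""
    imfs = EEMD().eemd(signal)                       # ensemble IMFs, (n_imf, T)
    return [imf for imf in imfs
            if abs(np.corrcoef(imf, signal)[0, 1]) >= corr_threshold]

class SubLSTM(nn.Module):
    """One LSTM per IMF group; deeper nets can be used for high-frequency IMFs."""
    def __init__(self, hidden=64, layers=1):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                            # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                 # one-step-ahead forecast

# Final forecast = sum of the per-IMF predictions (signal reconstruction).
```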
The Visual Background extractor (ViBe) model for moving target detection cannot avoid the interference caused by irregular flicker pixel noise in dynamic outdoor scenes. To solve this issue, a flicker pixel noise suppression method based on the ViBe algorithm was proposed. In the background model initialization stage, a fixed standard deviation of the background model samples was used as a threshold to limit the range of the samples and obtain suitable background model samples for each pixel. In the foreground detection stage, an adaptive detection threshold was applied to improve the accuracy of the detection results. In the background model update stage, edge inhibition was performed on background pixels at image edges to prevent erroneous background sample values from being updated into the background model. On this basis, morphological operations were added to repair connected components and obtain more complete foreground images. Finally, the proposed method was compared with the original ViBe algorithm and an improved ViBe with morphological post-processing on multiple video sequences. The experimental results show that the proposed method suppresses flicker pixel noise effectively and obtains more accurate detection results.
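For reference, the core ViBe pixel test plus the morphological repair step can be sketched as follows; the sample count N, radius R, and match threshold below are common ViBe defaults, not the paper's adaptive values, and the adaptive threshold and edge inhibition steps are omitted.

```python
# Sketch of a ViBe-style foreground test with morphological clean-up.
# N samples per pixel, match radius R, minimum matches to call "background".
import numpy as np
import cv2

N, R, MIN_MATCHES = 20, 20, 2

def classify(frame_gray, samples):
    """frame_gray: (H, W) uint8; samples: (N, H, W) background model.
    Returns a binary foreground mask."""
    dist = np.abs(samples.astype(np.int16) - frame_gray.astype(np.int16))
    matches = (dist < R).sum(axis=0)             # per-pixel match count
    fg = (matches < MIN_MATCHES).astype(np.uint8) * 255
    # morphological open/close: remove speckle noise, fill connected components
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)
    return fg
```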
To address the misjudgments of traditional sentiment classification methods caused by emotions pointing to unknown targets and by missing hidden opinions, a text sentiment classification method based on emotional role modeling was proposed. Firstly, the evaluation objects in the text were identified, and a measure based on local semantic analysis was used to tag the emotion of sentences containing potential evaluation objects. Then, the positive and negative polarities of the evaluation objects were distinguished by defining their emotional roles, and the tendency values of the emotional roles were integrated into the feature space to improve the feature weight computation. Finally, a concept named "feature convergence" was proposed to reduce the dimensionality of the model. The experimental results show that, compared with approaches that tend to pick strongly subjective emotional items as features, the proposed method effectively improves text sentiment classification and increases the accuracy by 3.2%.
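The abstract does not give the exact weighting formula; one plausible reading is that a term's emotional-role tendency value scales an otherwise standard term weight. The sketch below illustrates this with a TF-IDF base weight; the tendency lexicon and the blending rule are illustrative assumptions.

```python
# Hypothetical sketch: folding emotional-role tendency values into
# TF-IDF-style feature weights. Lexicon and blending rule are assumed.
import math
from collections import Counter

def weighted_features(docs, tendency):
    """docs: list of token lists; tendency: {token: polarity in [-1, 1]}."""
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    n = len(docs)
    feats = []
    for doc in docs:
        tf = Counter(doc)
        vec = {}
        for t, c in tf.items():
            tfidf = (c / len(doc)) * math.log((n + 1) / (df[t] + 1))
            # boost terms that carry a strong emotional-role tendency
            vec[t] = tfidf * (1.0 + abs(tendency.get(t, 0.0)))
        feats.append(vec)
    return feats
```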
The neighboring relationship between sketch patches and photo patches on the manifold cannot always reflect their intrinsic data structure. To resolve this problem, a Locality-Constrained Neighbor Embedding (LCNE) based face sketch-photo synthesis algorithm was proposed. The Neighbor Embedding (NE) based synthesis method was first applied to estimate initial sketches or photos. Then, the weight coefficients were constrained according to the similarity between the estimated sketch or photo patches and the training sketch or photo patches. Subsequently, alternating optimization was used to determine the weight coefficients, select the K candidate image patches, and update the target synthesis patch. Finally, the synthesized image was generated by merging all the estimated sketch or photo patches. In the comparison experiments, the proposed method outperformed the NE based synthesis method by 0.0503 in terms of Structural SIMilarity (SSIM) index and by 14% in terms of face recognition accuracy. The experimental results illustrate that the proposed method resolves the weak compatibility among neighboring patches in the NE based method and greatly alleviates the noise and deformation in the synthesized image.
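As a point of reference for the weight-solving step, the standard locality-constrained reconstruction used in locality-constrained linear coding has a closed form; the sketch below shows that formulation as a stand-in, since the paper's exact objective and alternating scheme are not given in the abstract.

```python
# Sketch of a locality-constrained reconstruction-weight solve, in the
# spirit of LCNE; follows the standard LLC closed form, not the paper's.
import numpy as np

def lc_weights(patch, candidates, lam=0.1):
    """patch: (d,); candidates: (K, d) training patches.
    Minimizes ||patch - w @ candidates||^2 + lam * sum_i (dist_i * w_i)^2
    subject to sum(w) = 1, so distant candidates are penalized."""
    diff = candidates - patch                # (K, d)
    G = diff @ diff.T                        # local Gram matrix
    dist = np.linalg.norm(diff, axis=1)      # locality adaptor
    G += lam * np.diag(dist ** 2)
    w = np.linalg.solve(G + 1e-8 * np.eye(len(G)), np.ones(len(G)))
    return w / w.sum()                       # enforce sum-to-one constraint
```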
A new image retrieval method based on an enhanced micro-structure descriptor and context-sensitive similarity was proposed to overcome the high dimensionality of combined image features and the difficulty of setting combination weights. Firstly, a new local pattern map was used to create a filter map, and then the enhanced micro-structure descriptor was extracted based on the color co-occurrence relationship; the descriptor combines several features while keeping the same dimensionality as a single color feature. Based on the extracted descriptor, the normalized distances between image pairs were calculated and ranked. The initial ranking was then re-ranked by combining it with the iterative context-sensitive similarity. With the number of iterations set to 50 and the top 24 retrieved images considered, comparative experiments with Multi-Texton Histogram (MTH) and Micro-Structure Descriptor (MSD) show that the retrieval precision of the proposed algorithm is increased by 13.14% and 7.09% respectively on the Corel-5000 image set, and by 11.03% and 6.8% respectively on the Corel-10000 image set. By combining several features and exploiting context information while keeping the feature dimension unchanged, the new method effectively enhances retrieval precision.
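The abstract does not define the iterative update; one common family of context-sensitive re-ranking schemes diffuses similarity scores over a k-nearest-neighbor affinity graph. The sketch below shows such a diffusion-style re-rank purely as an illustration of the idea; the transition construction, `alpha`, and the reuse of the abstract's iteration and neighborhood sizes are assumptions.

```python
# Illustrative diffusion-style context-sensitive re-ranking over an
# affinity matrix; the paper's exact update rule is not specified.
import numpy as np

def context_rerank(W, query_idx, iters=50, k=24, alpha=0.25):
    """W: (n, n) pairwise affinity matrix; returns indices re-ranked
    by context-diffused similarity to the query."""
    P = np.zeros_like(W)
    for i in range(len(W)):
        nn = np.argsort(W[i])[-k:]           # keep k strongest neighbors
        P[i, nn] = W[i, nn]
    P /= P.sum(axis=1, keepdims=True) + 1e-12  # row-stochastic transitions
    f = W[query_idx].copy()                  # initial similarity to query
    for _ in range(iters):                   # diffuse through local context
        f = alpha * P @ f + (1 - alpha) * W[query_idx]
    return np.argsort(-f)                    # descending relevance order
```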
An efficient real-time traffic scheduling algorithm for WLAN (Wireless Local Area Network) was proposed based on the classic WRR (Weighted Round Robin) discipline. The algorithm operates at the link layer and is closely coupled with the DCF (Distributed Coordination Function), which alleviates the HOL (Head-Of-Line) blocking problem. By compensating mobile users that experience bursty channel errors, approximate long-term fairness is achieved. Extensive simulations were performed using NS (Network Simulator). The results show that the algorithm is simple and effectively improves channel utilization and data throughput, while the average packet delay is also decreased.
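To make the compensation idea concrete, the sketch below shows a weighted round-robin scheduler that defers service for flows in a bad channel state and pays the deficit back later; the credit policy, unit packet cost, and class names are illustrative assumptions, since the abstract does not detail the paper's compensation mechanism.

```python
# Hypothetical sketch: WRR with per-flow compensation credit for flows
# that skip service during bursty channel errors. Unit-cost packets assumed.
from collections import deque

class CompensatingWRR:
    def __init__(self, weights):
        self.queues = {f: deque() for f in weights}
        self.weights = dict(weights)           # nominal per-round shares
        self.credit = {f: 0 for f in weights}  # lag accrued in bad-channel rounds

    def enqueue(self, flow, pkt):
        self.queues[flow].append(pkt)

    def round(self, channel_ok):
        """channel_ok: {flow: bool}; returns packets served this round."""
        sent = []
        for f, q in self.queues.items():
            if not channel_ok.get(f, True):
                self.credit[f] += self.weights[f]  # defer, compensate later
                continue
            quantum = self.weights[f] + self.credit[f]
            self.credit[f] = 0
            while quantum > 0 and q:
                sent.append(q.popleft())           # serve without HOL stalls
                quantum -= 1
        return sent
```

Serving each error-free flow up to its own quantum, rather than blocking the whole round on one stalled head-of-line packet, is what approximates long-term fairness here.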