Efficient fine-tuning method of large language models for test case generation
Peng CAO, Guangqi WEN, Jinzhu YANG, Gang CHEN, Xinyi LIU, Xuechun JI
Journal of Computer Applications    2025, 45 (3): 725-731.   DOI: 10.11772/j.issn.1001-9081.2024111598

Data-driven automated generation of unit test cases suffers from low coverage and poor readability, and struggles to meet the growing demand for testing. Recently, Large Language Models (LLMs) have shown great potential in code generation tasks. However, because code data differs in both functional style and coding style, LLMs face the challenges of catastrophic forgetting and resource constraints. To address these problems, a transfer learning scheme that fine-tunes coding style and functional style simultaneously was proposed, yielding an efficient fine-tuning method for LLMs on unit test case generation. Firstly, widely used instruction datasets were adopted to align the LLM with instructions, the instruction sets were divided by task type, and weight increments carrying task-specific features were extracted and stored. Secondly, an adaptive style extraction module was designed to handle diverse coding styles, using noise-resistant learning and coding style backtracking learning. Finally, the functional and coding style increments were jointly trained on the target domain, realizing efficient adaptation and fine-tuning on target domains with limited resources. Experimental results of test case generation on the SF110 Corpus of Classes dataset indicate that the proposed method outperforms the comparison methods. Compared with the mainstream code generation LLMs Codex, Code Llama, and DeepSeek-Coder, the proposed method increases the compilation rate by 0.8%, 43.5%, and 33.8% respectively, branch coverage by 3.1%, 1.0%, and 17.2% respectively, and line coverage by 4.1%, 6.5%, and 15.5% respectively, verifying its superiority in code generation tasks.
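As an illustration of the increment-composition idea sketched in this abstract, below is a minimal sketch assuming the task-specific adaptations are stored as additive low-rank deltas over frozen base weights; all names and shapes are hypothetical, not the paper's implementation:

```python
import torch

def compose_adapted_weight(w_base: torch.Tensor,
                           delta_functional: torch.Tensor,
                           delta_style: torch.Tensor,
                           alpha: float = 1.0,
                           beta: float = 1.0) -> torch.Tensor:
    """Combine a frozen base weight with two task-specific increments.

    w_base           -- pretrained weight matrix (kept frozen)
    delta_functional -- increment from instruction tuning on the task type
    delta_style      -- increment from the adaptive style extraction module
    alpha, beta      -- mixing coefficients tuned on the target domain
    (Hypothetical composition rule for illustration only.)
    """
    return w_base + alpha * delta_functional + beta * delta_style

# Low-rank storage keeps each increment cheap: delta = A @ B with small rank r,
# ~2*d*r parameters instead of d*d.
d_out, d_in, r = 4096, 4096, 8
A = torch.randn(d_out, r) * 0.01
B = torch.randn(r, d_in) * 0.01
delta_style = A @ B
```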

Prediction of NOx emission from fluid catalytic cracking unit based on ensemble empirical mode decomposition and long short-term memory network
Chong CHEN, Zhu YAN, Jixuan ZHAO, Wei HE, Huaqing LIANG
Journal of Computer Applications    2022, 42 (3): 791-796.   DOI: 10.11772/j.issn.1001-9081.2021040787

Nitrogen oxide (NOx) is one of the main pollutants in the regenerated flue gas of a Fluid Catalytic Cracking (FCC) unit, and accurate prediction of NOx emission can effectively prevent pollution events in refinery enterprises. Because pollutant emission data is non-stationary, nonlinear, and long-memory, a new hybrid model incorporating Ensemble Empirical Mode Decomposition (EEMD) and a Long Short-Term Memory network (LSTM) was proposed to improve the prediction accuracy of pollutant emission concentration. The NOx emission concentration data was first decomposed into several Intrinsic Mode Functions (IMFs) and a residual using the EEMD model. Based on correlation analysis between the IMF sub-sequences and the original data, the IMF sub-sequences with low correlation were eliminated, which effectively reduced the noise in the original data. The remaining IMFs were divided into high- and low-frequency sequences, which were trained in LSTM networks of different depths. The final NOx concentration prediction was reconstructed from the predicted results of the sub-sequences. Compared with a single LSTM on NOx emission prediction for the FCC unit, EEMD-LSTM reduces the Mean Square Error (MSE) and Mean Absolute Error (MAE) by 46.7% and 45.9% respectively, and improves the coefficient of determination (R2) by 43%, which means the proposed model achieves higher prediction accuracy.
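A minimal sketch of the decomposition-and-filtering stage, assuming the PyEMD package (`pip install EMD-signal`) and a simple median split of the retained IMFs into high- and low-frequency groups; the per-band LSTM training and the final reconstruction are elided:

```python
import numpy as np
from PyEMD import EEMD   # pip install EMD-signal

def decompose_and_filter(signal: np.ndarray, corr_threshold: float = 0.1):
    """Decompose a NOx concentration series with EEMD and drop IMFs whose
    correlation with the original series is low (threshold is illustrative)."""
    eemd = EEMD()
    imfs = eemd.eemd(signal)                 # shape: (n_imfs, len(signal))
    kept = [imf for imf in imfs
            if abs(np.corrcoef(imf, signal)[0, 1]) >= corr_threshold]
    # Heuristic split: earlier IMFs carry higher frequencies.
    mid = len(kept) // 2
    high_freq, low_freq = kept[:mid], kept[mid:]
    return high_freq, low_freq

# Each group would then be fed to an LSTM of appropriate depth; the final
# prediction is the sum of the per-sub-sequence predictions.
```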

Xgboost algorithm optimization based on gradient distribution harmonized strategy
LI Hao, ZHU Yan
Journal of Computer Applications    2020, 40 (6): 1633-1637.   DOI: 10.11772/j.issn.1001-9081.2019101878
In order to solve the low detection rate of the minority class by the ensemble learning model eXtreme gradient boosting (Xgboost) in binary classification, an improved Xgboost algorithm based on a gradient distribution harmonizing strategy, called Loss Contribution Gradient Harmonized Algorithm (LCGHA)-Xgboost, was proposed. Firstly, Loss Contribution (LC) was defined to model the losses of samples in the Xgboost algorithm. Secondly, Loss Contribution Density (LCD) was defined to measure how difficult a sample is to classify correctly. Finally, a gradient distribution harmonizing algorithm, LCGHA, was proposed to dynamically adjust the first-order gradient distribution of samples according to their LCD. In the algorithm, the losses of hard samples (mostly in the minority class) were indirectly increased, and the losses of easy samples (mostly in the majority class) were indirectly reduced, making the Xgboost algorithm tend to learn the hard samples. The experimental results show that, compared with the three ensemble learning algorithms Xgboost, GBDT (Gradient Boosting Decision Tree), and Random_Forest, LCGHA-Xgboost increases Recall by 5.4%-16.7% and improves the Area Under the Curve (AUC) by 0.94%-7.41% on multiple UCI datasets, and increases Recall by 44.4%-383.3% and AUC by 5.8%-35.6% on the WebSpam-UK2007 and DC2010 datasets. LCGHA-Xgboost can effectively improve the classification and detection ability for the minority class and reduce its classification error rate.
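One way to realize density-based gradient harmonizing is a custom Xgboost objective that rescales each sample's first-order gradient by the inverse of its loss-contribution density. The sketch below is an illustrative reading of LCGHA; the binning scheme and the synthetic data are stand-ins, not the authors' exact formulation:

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
dtrain = xgb.DMatrix(X, label=y)

def lcgha_objective(preds, dtrain, n_bins=10):
    """Rescale first-order gradients by inverse loss-contribution density."""
    labels = dtrain.get_label()
    p = 1.0 / (1.0 + np.exp(-preds))            # sigmoid of the margins
    grad = p - labels                           # standard logistic gradient
    hess = p * (1.0 - p)
    lc = np.abs(grad)                           # loss-contribution proxy in [0, 1]
    # Density: fraction of samples falling in the same LC bin.
    bins = np.minimum((lc * n_bins).astype(int), n_bins - 1)
    density = np.bincount(bins, minlength=n_bins)[bins] / len(lc)
    # Dense (easy) regions are down-weighted; sparse (hard) ones up-weighted.
    # The overall scale can be absorbed into the learning rate.
    grad = grad / (density + 1e-12)
    return grad, hess

booster = xgb.train({"max_depth": 4, "eta": 0.1}, dtrain,
                    num_boost_round=50, obj=lcgha_objective)
```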
Semi-exponential gradient strategy and empirical analysis for online portfolio selection
WU Wanting, ZHU Yan, HUANG Dingjiang
Journal of Computer Applications    2019, 39 (8): 2462-2467.   DOI: 10.11772/j.issn.1001-9081.2018122588
Since the high-frequency asset allocation adjustment of traditional portfolio strategies in every investment period results in high transaction costs and poor final returns, a Semi-Exponential Gradient portfolio (SEG) strategy based on machine learning and online learning was proposed. Firstly, the SEG strategy model was established by adjusting the portfolio only in the initial period of each segment of the investment horizon and not trading in the remaining periods, and an objective function combining return and loss was constructed. Secondly, the closed-form solution of the iterative portfolio update was derived using the factor graph algorithm, and a theorem with proof giving an upper bound on the cumulative loss was presented, theoretically guaranteeing the return performance of the strategy. Experiments were performed on several datasets such as the New York Stock Exchange dataset. Experimental results show that the proposed strategy maintains a high return even in the presence of transaction costs, confirming its insensitivity to transaction costs.
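A minimal sketch of the semi-exponential-gradient idea, assuming the standard multiplicative exponential-gradient update applied only at segment boundaries; the paper's factor-graph closed-form solution and transaction-cost terms are not reproduced:

```python
import numpy as np

def seg_weights(price_relatives: np.ndarray, eta: float = 0.05, segment: int = 5):
    """Exponential-gradient portfolio that only rebalances at the end of each
    `segment`-day window (a sketch of the SEG idea, not the paper's exact update).

    price_relatives -- (T, n) array; entry [t, i] is price_t / price_{t-1} of asset i.
    """
    T, n = price_relatives.shape
    w = np.full(n, 1.0 / n)                 # start from the uniform portfolio
    history = []
    for t in range(T):
        history.append(w.copy())
        if (t + 1) % segment == 0:          # rebalance only at segment ends
            x = price_relatives[t]
            ret = w @ x                     # portfolio return this period
            w = w * np.exp(eta * x / ret)   # multiplicative EG step
            w /= w.sum()                    # project back onto the simplex
    return np.array(history)
```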
Visual sentiment analysis by combining global and local regions of image
CAI Guoyong, HE Xinhao, CHU Yangyang
Journal of Computer Applications    2019, 39 (8): 2181-2185.   DOI: 10.11772/j.issn.1001-9081.2018122452
Most existing visual sentiment analysis methods construct visual sentiment feature representations from the whole image, yet the local regions containing objects often highlight the sentiment better. Concerning the problem that local-region sentiment representation is ignored in visual sentiment analysis, a visual sentiment analysis method combining the global image and its local regions was proposed. Image sentiment representation was mined by combining the whole image with its local regions. Firstly, an object detection model was used to locate the local regions containing objects. Secondly, the sentiment features of these local regions were extracted by a deep neural network. Finally, the deep features extracted from the whole image and the local-region features were used to jointly train the image sentiment classifier and predict the sentiment polarity of the image. Experimental results show that the classification accuracy of the proposed method reaches 75.81% and 78.90% on the real datasets Twitter I and Twitter II respectively, which is higher than the accuracy of sentiment analysis methods based on features extracted from the whole image alone or from the local regions alone.
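A sketch of the global-plus-local feature pipeline, using off-the-shelf torchvision models as stand-ins for the paper's detector and feature networks; the model choices, `top_k`, and the omitted input normalization are all illustrative assumptions:

```python
import torch
from torchvision.models import resnet50
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
backbone = resnet50(weights="DEFAULT")
backbone.fc = torch.nn.Identity()              # expose 2048-d pooled features
backbone.eval()

@torch.no_grad()
def sentiment_features(img: torch.Tensor, top_k: int = 1) -> torch.Tensor:
    """img: float tensor (3, H, W) in [0, 1]. Returns the whole-image feature
    concatenated with features of the top_k most confident object regions."""
    feats = [backbone(img.unsqueeze(0))]       # global branch
    boxes = detector([img])[0]["boxes"]        # detections sorted by score
    for box in boxes[:top_k]:
        x1, y1, x2, y2 = box.int().tolist()
        crop = img[:, y1:y2, x1:x2].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224),
                                               mode="bilinear")
        feats.append(backbone(crop))           # local branch
    return torch.cat(feats, dim=1)             # joint feature for the classifier
```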
Imbalanced image classification approach based on convolution neural network and cost-sensitivity
TAN Jiefan, ZHU Yan, CHEN Tung-shou, CHANG Chin-chen
Journal of Computer Applications    2018, 38 (7): 1862-1865.   DOI: 10.11772/j.issn.1001-9081.2018010152
Focusing on the issues that the recall of the minority class is low, the cost of classification is high, and manual feature selection is expensive in imbalanced image classification, an imbalanced image classification approach based on a Triplet-sampling Convolutional Neural Network (Triplet-sampling CNN) and a Cost-Sensitive Support Vector Machine (CSSVM), called Triplet-CSSVM, was proposed. The method has two parts: feature learning and cost-sensitive classification. Firstly, an encoding that maps images to a Euclidean space end-to-end was learned by the CNN, using the triplet loss as its loss function. Then, the dataset was rescaled by a sampling method to balance the distribution. Finally, the classification result with the minimum cost was obtained by the CSSVM classification algorithm, which assigns different cost factors to different classes. Experiments with the portrait dataset FaceScrub were conducted on the deep learning framework Caffe. The experimental results show that, under a 1:3 imbalance ratio, the precision of the proposed method is increased by 31 percentage points and the recall by 71 percentage points compared with VGGNet-SVM (Visual Geometry Group Net-Support Vector Machine).
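The cost-sensitive classification stage can be sketched with scikit-learn's class-weighted SVM; here synthetic data stands in for the triplet-network embeddings, and the 1:3 cost ratio mirrors the imbalance ratio mentioned above:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for the triplet-network embeddings (minority class ~25%).
X, y = make_classification(n_samples=2000, n_features=64,
                           weights=[0.75, 0.25], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Cost-sensitive SVM: misclassifying the minority class (label 1) is 3x as costly.
csvm = SVC(kernel="rbf", class_weight={0: 1.0, 1: 3.0})
csvm.fit(X_train, y_train)
print("accuracy:", csvm.score(X_test, y_test))
```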
Optimum feature selection based on genetic algorithm under Web spam detection
WANG Jiaqing, ZHU Yan, CHEN Tung-shou, CHANG Chin-chen
Journal of Computer Applications    2018, 38 (1): 295-299.   DOI: 10.11772/j.issn.1001-9081.2017061560
Focusing on the issue that the features used in Web spam detection are typically high-dimensional and redundant, an Improved Feature Selection method Based on Information Gain and Genetic Algorithm (IFS-BIGGA) was proposed. Firstly, features were ranked by Information Gain (IG), and a dynamic threshold was set to discard redundant features. Secondly, the chromosome encoding function was modified and the selection operator was improved in the Genetic Algorithm (GA); the Area Under the receiver operating characteristic Curve (AUC) of a Random Forest (RF) classifier was used as the fitness function to pick out features with high discriminative power. Finally, the Optimal Minimum Feature Set (OMFS) was obtained by increasing the number of experimental iterations to counter the randomness of the algorithm. The experimental results show that, compared with the high-dimensional feature set, OMFS decreases the AUC under RF by only 2% while increasing the True Positive Rate (TPR) by 21%, reducing the feature dimension by 92%, and cutting the average detection time by 83%; moreover, compared with the Traditional GA (TGA) and the Imperialist Competitive Algorithm (ICA), the F1 score under Bayes Net (BN) is increased by 4.2% and 3.5% respectively. These results indicate that IFS-BIGGA can effectively reduce feature dimensionality, and thereby computation cost, improving detection efficiency in practical Web spam detection.
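A compact sketch of the two-stage selection, assuming mutual information as the IG-style ranking, a median dynamic threshold, and a deliberately plain GA; the paper's modified chromosome encoding and improved selection operator are not reproduced:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Stage 1: information-gain-style ranking; drop features below a threshold.
ig = mutual_info_classif(Xtr, ytr, random_state=0)
candidates = np.where(ig >= np.median(ig))[0]    # median as the dynamic threshold

def fitness(mask):
    """RF AUC of the feature subset encoded by a 0/1 chromosome."""
    cols = candidates[mask.astype(bool)]
    if len(cols) == 0:
        return 0.0
    rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr[:, cols], ytr)
    return roc_auc_score(yte, rf.predict_proba(Xte[:, cols])[:, 1])

# Stage 2: a plain GA over chromosomes.
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, len(candidates)))
for _ in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]      # keep the fittest half
    cut = rng.integers(1, pop.shape[1])          # one-point crossover
    children = np.concatenate([parents[:, :cut],
                               np.roll(parents, 1, axis=0)[:, cut:]], axis=1)
    mut = rng.random(children.shape) < 0.05
    children[mut] ^= 1                           # bit-flip mutation
    pop = np.concatenate([parents, children])
best = pop[np.argmax([fitness(ind) for ind in pop])]
```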
Combining topic similarity with link weight for Web spam ranking detection
WEI Sha, ZHU Yan
Journal of Computer Applications    2016, 36 (3): 735-739.   DOI: 10.11772/j.issn.1001-9081.2016.03.735
Focused on the issue that good-to-bad links in the Web degrade the detection performance of ranking algorithms (e.g., Anti-TrustRank), a distrust ranking algorithm, Topic Link Distrust Rank (TLDR), which combines topic similarity with link weight to adjust the propagation, was proposed. Firstly, the topic distribution of all pages was obtained by Latent Dirichlet Allocation (LDA), and the topic similarity of linked pages was computed. Secondly, link weights were computed from the Web graph and combined with topic similarity to obtain the topic-link weight matrix. Then, the Anti-TrustRank and Weighted Anti-TrustRank (WATR) algorithms were improved by propagating distrust scores according to topic and link weight. Finally, all pages were ranked by their distrust scores, and spam pages were detected by applying a threshold. The experimental results on the WEBSPAM-UK2007 dataset show that, compared with Anti-TrustRank and WATR, the SpamFactor of TLDR is raised by 45% and 23.7%, the F1-measure (with threshold 600) is improved by 3.4 and 0.5 percentage points, and the spam ratio (top 3 buckets) is increased by 15 and 10 percentage points, respectively.
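The propagation step can be sketched as a power iteration over a topic-weighted link graph: a page linking to distrusted pages inherits a share of their distrust. This is an illustrative reading of the TLDR propagation, with a generic column normalization rather than the paper's exact weight matrix:

```python
import numpy as np

def tldr_scores(adj: np.ndarray, topic_sim: np.ndarray, seed_spam: np.ndarray,
                alpha: float = 0.85, iters: int = 50) -> np.ndarray:
    """Propagate distrust backwards along links, weighting each link by its
    topic similarity.

    adj[i, j]       -- 1.0 if page i links to page j, else 0.0
    topic_sim[i, j] -- LDA topic similarity of the linked pair (i, j)
    seed_spam       -- normalized seed vector over known spam pages
    """
    W = adj * topic_sim                      # weight each link i->j
    col = W.sum(axis=0)                      # weighted in-degree of each target
    P = np.divide(W, col, out=np.zeros_like(W), where=col > 0)
    d = seed_spam.copy()
    for _ in range(iters):
        # Each page collects distrust from the pages it links to.
        d = alpha * P @ d + (1 - alpha) * seed_spam
    return d                                 # rank pages by descending distrust
```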
Implementation of distributed index in cluster environment
WENG Haixing, GONG Xueqing, ZHU Yanchao, HU Hualiang
Journal of Computer Applications    2016, 36 (1): 1-7.   DOI: 10.11772/j.issn.1001-9081.2016.01.0001
To address the performance issues of accessing data by non-primary keys on a distributed storage system, the key technologies for implementing indexes on such systems were discussed. Based on a thorough analysis of the features of new distributed storage systems, the key points of designing and implementing a distributed index were presented. Combining the characteristics of distributed storage systems with related indexing technologies, the organization and maintenance of indexes, data concurrency, and other issues were described. Then, a distributed indexing mechanism was designed and implemented on the open source version of OceanBase, a distributed database system. Performance tests were run with the benchmarking tool YCSB. The experimental results show that although a distributed secondary index degrades system performance, the degradation can be kept within 5% under different data scales by taking system and storage characteristics into account; in addition, a redundant-column scheme can improve index performance by up to 100%.
Noise-suppression method for flicker pixels in dynamic outdoor scenes based on ViBe
ZHOU Xiao, ZHAO Feng, ZHU Yanlin
Journal of Computer Applications    2015, 35 (6): 1739-1743.   DOI: 10.11772/j.issn.1001-9081.2015.06.1739

The Visual Background extractor (ViBe) model for moving target detection cannot avoid interference from irregular flicker-pixel noise in dynamic outdoor scenes. To solve this issue, a flicker-pixel noise-suppression method based on the ViBe algorithm was proposed. In the background-model initialization stage, a fixed standard deviation of the background model samples was used as a threshold to limit the range of samples, giving each pixel suitable background samples. In the foreground detection stage, an adaptive detection threshold was applied to improve detection accuracy. Edge inhibition was applied to image-edge background pixels to prevent erroneous background sample values from being updated into the model. On this basis, morphological operations were added to repair connected components and obtain more complete foreground images. Finally, the proposed method was compared with the original ViBe algorithm and a ViBe variant with morphological post-processing on multiple video sequences. The experimental results show that the method suppresses flicker-pixel noise effectively and yields more accurate results.
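A minimal sketch of the per-pixel foreground test with an adaptive matching radius, one simple way to realize the adaptive detection threshold described above; the parameters and the std-based widening rule are assumptions, not the paper's exact rule:

```python
import numpy as np

def is_background(samples: np.ndarray, pixel: float,
                  base_radius: float = 20.0, min_matches: int = 2) -> bool:
    """ViBe-style background test with an adaptive matching radius.

    samples -- the N background model samples stored for this pixel.
    The radius widens where the model samples are spread out (dynamic
    scene regions), reducing flicker-pixel false detections.
    """
    radius = max(base_radius, 0.5 * samples.std())
    return int(np.sum(np.abs(samples - pixel) < radius)) >= min_matches

# Model initialization would additionally reject neighbourhood samples whose
# deviation exceeds a fixed standard-deviation bound, limiting the sample
# range as described in the abstract.
```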

Classification method of text sentiment based on emotion role model
HU Yang, DAI Dan, LIU Li, FENG Xupeng, LIU Lijun, HUANG Qingsong
Journal of Computer Applications    2015, 35 (5): 1310-1313.   DOI: 10.11772/j.issn.1001-9081.2015.05.1310

In order to solve the misjudgment caused by sentiment pointing to unknown or missing implicit opinion targets in traditional sentiment classification methods, a text sentiment classification method based on an emotion role model was proposed. The method first identified the evaluation objects in the text and used a measure based on local semantic analysis to tag the sentiment of sentences containing potential evaluation objects. It then distinguished the positive and negative polarity of the evaluation objects by defining their emotion roles, and integrated the tendency values of the emotion roles into the feature space to improve the feature weight computation. Finally, a "feature convergence" concept was proposed to reduce the model dimensionality. The experimental results show that, compared with approaches that select strongly subjective emotional items as features, the proposed method improves the accuracy of text sentiment classification by 3.2%.

Face sketch-photo synthesis based on locality-constrained neighbor embedding
HU Yanting, WANG Nannan, CHEN Jianjun, MURAT Hamit, ABDUGHRNI Kutluk
Journal of Computer Applications    2015, 35 (2): 535-539.   DOI: 10.11772/j.issn.1001-9081.2015.02.0535

The neighboring relationship of sketch patches and photo patches on the manifold does not always reflect their intrinsic data structure. To resolve this problem, a Locality-Constrained Neighbor Embedding (LCNE) based face sketch-photo synthesis algorithm was proposed. The Neighbor Embedding (NE) based synthesis method was first applied to estimate initial sketches or photos. Then, the weight coefficients were constrained according to the similarity between the estimated sketch or photo patches and the training sketch or photo patches. Subsequently, alternating optimization was deployed to determine the weight coefficients, select the K candidate image patches, and update the target synthesis patch. Finally, the synthesized image was generated by merging all the estimated sketch or photo patches. In the comparison experiments, the proposed method outperformed the NE-based synthesis method by 0.0503 in Structural SIMilarity (SSIM) index and by 14% in face recognition accuracy. The experimental results illustrate that the proposed method resolves the weak compatibility among neighboring patches in the NE-based method and greatly alleviates noise and deformation in the synthesized image.
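The locality-constrained weight solve can be sketched as a regularized least-squares problem over the K candidate patches, in the spirit of LCNE; the regularizer and its strength are illustrative, not the authors' exact optimization:

```python
import numpy as np

def locality_constrained_weights(x: np.ndarray, C: np.ndarray,
                                 lam: float = 1e-3) -> np.ndarray:
    """Reconstruction weights for target patch x from K candidate patches C (K x d),
    penalizing candidates that lie far from x.

    Solves min_w ||x - w^T C||^2 + lam * sum_k w_k^2 ||c_k - x||^2, s.t. sum(w) = 1.
    """
    diff = C - x                              # K x d residuals
    G = diff @ diff.T                         # local Gram matrix
    dist = np.sum(diff ** 2, axis=1)          # locality penalty per candidate
    G_reg = G + lam * np.diag(dist) + 1e-8 * np.eye(len(C))
    w = np.linalg.solve(G_reg, np.ones(len(C)))
    return w / w.sum()                        # enforce sum-to-one constraint
```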

Image retrieval based on enhanced micro-structure and context-sensitive similarity
HU Yangbo YUAN Jie WANG Lidong
Journal of Computer Applications    2014, 34 (10): 2938-2943.   DOI: 10.11772/j.issn.1001-9081.2014.10.2938

A new image retrieval method based on an enhanced micro-structure descriptor and context-sensitive similarity was proposed to overcome the high dimensionality of combined image features and the intangibility of combination weights. A new local pattern map was first used to create a filter map, and then the enhanced micro-structure descriptor was extracted based on color co-occurrence relationships; the descriptor combines several features while keeping the same dimension as a single color feature. Based on the extracted descriptor, distances between image pairs were calculated and sorted, and the initial ranking was then refined by iterative context-sensitive similarity re-ranking. With the number of iterations set to 50 and the top 24 retrieved images considered, comparative experiments with the Multi-Texton Histogram (MTH) and the Micro-Structure Descriptor (MSD) show that the retrieval precision of the proposed algorithm is increased by 13.14% and 7.09% respectively on the Corel-5000 image set, and by 11.03% and 6.8% on the Corel-10000 image set. By combining several features and using context information while keeping the dimension unchanged, the new method effectively improves precision.

Automatic brain extraction method based on hybrid level set model
AO Qian ZHU Yanping JIANG Shaofeng
Journal of Computer Applications    2013, 33 (07): 2014-2017.   DOI: 10.11772/j.issn.1001-9081.2013.07.2014
Automatic brain extraction is an important preprocessing step in the analysis of internal brain structures. To improve the extraction result, an automatic brain extraction method based on a modified Brain Extraction Tool (BET) and a hybrid level set model was proposed. The first step was obtaining a rough brain boundary with the improved BET algorithm. Morphological dilation was then applied to the rough boundary to initialize the Region Of Interest (ROI), within which a hybrid active contour model was defined to obtain a new contour. The ROI and the new contour were updated iteratively until an accurate brain boundary was achieved. Seven Magnetic Resonance Imaging (MRI) volumes from the Internet Brain Segmentation Repository (IBSR) were used in the experiment, on which the proposed method achieved a low average total misclassification ratio of 7.89%. The experimental results show that the proposed method is effective and feasible.
Chinese cross document co-reference resolution based on SVM classification and semantics
ZHAO Zhiwei GU Jinghang HU Yanan QIAN Longhua ZHOU Guodong
Journal of Computer Applications    2013, 33 (04): 984-987.   DOI: 10.3724/SP.J.1087.2013.00984
The task of Cross-Document Co-reference Resolution (CDCR) is to merge mentions distributed across different texts that refer to the same entity into co-reference chains. Traditional CDCR research addresses the name disambiguation problem posed in information retrieval using clustering methods. This paper recast CDCR as a classification problem, using a Support Vector Machine (SVM) classifier to resolve both name disambiguation and variant consolidation, both of which are prevalent in information extraction. This method can effectively integrate various features, such as morphological, phonetic, and semantic knowledge collected from the corpus and the Internet. Experiments on a Chinese cross-document co-reference corpus show that the classification method outperforms clustering methods in both precision and recall.
Discrete sliding-mode guidance laws design based on variable rate reaching law
SHU Yanjun TANG Shuo
Journal of Computer Applications    2013, 33 (03): 878-881.   DOI: 10.3724/SP.J.1087.2013.00878
A discrete sliding-mode guidance law based on a new discrete variable-rate reaching law was proposed for the discrete form of the planar missile-target relative motion equation. By adopting this reaching law, steady-state oscillation and system chattering were significantly reduced, and the system state approached zero asymptotically. The target's maneuvering acceleration was treated as an unknown external disturbance, and a disturbance observer was adopted to estimate and compensate for the uncertainty; it requires only the possible variation range of the target acceleration between two adjacent sampling instants, rather than prior knowledge of the target acceleration bounds or matched conditions. The simulation results show that this method has strong robustness, without steady-state oscillation or system chattering.
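For reference, a classical discrete reaching law of the kind such variable-rate laws refine is shown below; the paper's exact variable-rate form is not reproduced, but the idea is that the switching gain shrinks with the state, which is what removes steady-state oscillation:

```latex
% Classical discrete exponential reaching law with sampling period T:
s(k+1) - s(k) = -\,qT\,s(k) \;-\; \varepsilon T \operatorname{sgn}\!\big(s(k)\big),
\qquad q > 0,\quad \varepsilon > 0,\quad 1 - qT > 0.
% A variable-rate variant ties the switching gain to the system state, e.g.
%   \varepsilon(k) = \varepsilon\,\lVert x(k) \rVert,
% so the correction term vanishes as the state nears the origin, suppressing
% steady-state oscillation and chattering.
```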
Fire image features selection and recognition based on rough set
HU Yan WANG Huiqin QIN Weiwei ZOU Ting LIANG Junshan
Journal of Computer Applications    2013, 33 (03): 704-707.   DOI: 10.3724/SP.J.1087.2013.00704
Concerning the contradiction between accuracy and real-time performance in image-based fire detection, a fire image feature selection and recognition algorithm based on rough set was proposed. Firstly, an in-depth study of flame image features showed that the top edge of a flame, driven by combustion energy, is highly irregular and exhibits obvious jitter, whereas the lower edge is comparatively stable; based on this, the jitter-projection ratio of the upper and lower edges can be used to distinguish flames from interference sources with regular edge shapes. Then, six salient flame features were chosen to create the training samples. With the fire classification ability unaffected, the feature classification table obtained by experiment was used to reduce the attributes of the training samples, and the reduced information system attributes were applied to train a support vector machine model, realizing fire detection. Finally, the algorithm was compared with the traditional Support Vector Machine (SVM) fire detection algorithm. The results show that, with rough set reduction as a preprocessing stage before the SVM classifier, the presented algorithm removes redundant attributes, reduces the dimensionality of the fire image feature space, and decreases the amount of training and testing data in the classifier. While ensuring recognition accuracy, the algorithm improves fire detection speed.
3D face reconstruction and recognition based on feature division
LU Le ZHOU Da-ke HU Yang-ming
Journal of Computer Applications    2012, 32 (11): 3189-3192.   DOI: 10.3724/SP.J.1087.2012.03189
The traditional 3D face reconstruction algorithm is inefficient and hard to fit the requirements of practical applications. To address this problem, a feature-slice-based 3D face reconstruction algorithm was proposed, together with a feature-slice-based weighted 3D face recognition method built on the reconstruction algorithm. First, a 2D-template-based alignment algorithm was developed to establish correspondence between faces automatically, and a linear facial model was built. Second, an improved Active Shape Model (ASM) algorithm was proposed to locate the feature points and slices in the 3D and 2D face images. Then, the shape of each facial feature slice was reconstructed by a PCA-based sparse morphable model. Finally, the algorithm was applied to 3D face recognition. The experimental results show that the presented algorithm has higher efficiency and accuracy, and improves the 3D face recognition rate.
Smart rail transportation-an implementation of deeper intelligence
YANG Yan ZHU Yan DAI Qi LI Tian-rui
Journal of Computer Applications    2012, 32 (05): 1205-1207.  
Traveling by rail has become one of the major transportation modes for today's residents. The core of Smart Rail Transportation (SRT) is to change the existing modes of railway transportation in a more intelligent way through modern information technology, aiming to bring more efficient, safe, and comfortable intelligent transportation systems to human activities. This paper discussed four steps toward deeper intelligence in SRT: smart data collection, smart data fusion, smart data mining, and smart decision-making. These four steps form a spiral ascent of intelligent information processing, ultimately achieving deeper intelligence in SRT.
Image fire detection based on independent component analysis and support vector machine
HU Yan WANG Hui-qin MA Zong-fang LIANG Jun-shan
Journal of Computer Applications    2012, 32 (03): 889-892.   DOI: 10.3724/SP.J.1087.2012.00889
Image-based fire detection enables contactless, rapid fire detection in large spaces and is a new research direction in fire detection. Its essential issue is distinguishing flames from disruptors. Ordinary detection methods extract one or a few flame characteristics from the image as the basis for identification; their disadvantages are the need for many empirical thresholds and a lower recognition rate caused by inappropriate feature selection. Considering the overall characteristics of a fire flame, a flame detection method based on Independent Component Analysis (ICA) and Support Vector Machine (SVM) was proposed. Firstly, a series of frames was preprocessed in RGB space, and suspected target areas were extracted based on the flickering feature and fuzzy clustering analysis. Then the flame image features were described with ICA. Finally, an SVM model was used to achieve flame recognition. The experimental results show that the proposed method improves the accuracy and speed of image fire detection in a variety of fire detection environments.
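The ICA-plus-SVM classification stage maps directly onto scikit-learn; here synthetic data stands in for the flattened suspected-region patches, and the component count is an assumption:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for flattened suspected-region patches with flame/non-flame labels.
X, y = make_classification(n_samples=600, n_features=256, n_informative=20,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# ICA describes each region by statistically independent components;
# the SVM then separates flames from disruptors in that feature space.
model = make_pipeline(FastICA(n_components=32, random_state=0),
                      SVC(kernel="rbf"))
model.fit(Xtr, ytr)
print("accuracy:", model.score(Xte, yte))
```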
Hybrid BitTorrent traffic detection
LI Lin-qing YANG Zhe ZHU Yan-qin
Journal of Computer Applications    2011, 31 (12): 3210-3214.  
Peer-to-Peer (P2P) applications generate a large volume of traffic and seriously affect the quality of normal network services. Accurate, real-time identification of P2P traffic is important for network management. A hybrid approach consisting of three sub-methods was proposed to identify BitTorrent (BT) traffic: application signatures were applied to identify unencrypted traffic; for encrypted flows, a message-based method built on the features of the Message Stream Encryption (MSE) protocol was proposed; and a pre-identification method based on signaling analysis was applied to predict BT flows and distinguish them as early as the first packet carrying the SYN flag. Modified Vuze clients were used to label BT traffic in real traffic traces, producing high-accuracy benchmark datasets for evaluating the hybrid approach. The results illustrate its effectiveness, especially for not-yet- or partially-established flows that have no obvious signatures or flow statistics.
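For the unencrypted case, the application signature is the well-known BitTorrent handshake prefix: the byte 19 followed by the ASCII string "BitTorrent protocol". A minimal check, assuming payload access (the MSE-based and pre-identification sub-methods are not sketched here):

```python
# First bytes of a plaintext BitTorrent peer handshake:
# 0x13 (length 19) + "BitTorrent protocol".
BT_SIGNATURE = b"\x13BitTorrent protocol"

def is_plain_bt_handshake(payload: bytes) -> bool:
    """Signature check on the first payload of a candidate flow."""
    return payload.startswith(BT_SIGNATURE)

assert is_plain_bt_handshake(b"\x13BitTorrent protocol" + b"\x00" * 8)
assert not is_plain_bt_handshake(b"GET / HTTP/1.1\r\n")
```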
Analysis of cooperation model for P2P live streaming in game theoretic framework
CHENG Pu CHU Yan-ping DU Ying
Journal of Computer Applications    2011, 31 (05): 1159-1161.   DOI: 10.3724/SP.J.1087.2011.01159
To resolve the problems of "free riding" and the "tragedy of the commons" in peer-to-peer live streaming systems, a cooperation model was proposed within a game-theoretic framework. The proportional-fairness optimal strategy was proved under Nash equilibrium and Pareto optimality, and the corresponding node behavior strategy was then analyzed in the presence of cheating behaviors. The analytical results show that the model can effectively stimulate node cooperation and prevent cheating.
Improved Ribbon Snake algorithm for automatic road generation
HU Yang ZU Ke-Ju LI Guang-Yao
Journal of Computer Applications   
To repair the incomplete road extraction caused by shadows, occlusions, and noise in high-resolution remote sensing images, a Ribbon Snake model with width information was established based on the geometric characteristics of roads. To overcome the Ribbon Snake's strong dependence on internal parameters and its susceptibility to complex backgrounds, a B-spline Ribbon Snake was constructed, in which the smoothness of the snake is implicit in the B-spline formulation and its flexibility is adjusted by the number of control points. The road network segmentation results show that the improved B-spline Ribbon Snake obtains a more accurate and smoother segmentation and is more robust to noise.
Improvement of distributed coordination function in IEEE 802.11 WLANs
ZHANG Liang, SHU Yan-tai
Journal of Computer Applications    2005, 25 (06): 1257-1260.   DOI: 10.3724/SP.J.1087.20051257
In this paper, the standard DCF protocol was modified to achieve efficient channel utilization and improve the performance of WLANs. In the modified DCF, a wireless station sends a small data packet (e.g., a TCP-layer ACK) instead of a CTS in reply to the AP's RTS. In this way, transmitting small packets separately is avoided and the number of CTS packets is reduced, so the channel utilization rate is increased. The simulation results show that the modified DCF consistently achieves higher throughput and maintains better fairness among all mobile stations than the standard DCF.
Efficient real-time traffic scheduling algorithm based on WRR in wireless networks
ZHAO Zeng-hua,SHU Yan-tai
Journal of Computer Applications    2005, 25 (04): 903-905.   DOI: 10.3724/SP.J.1087.2005.0903

An efficient real-time traffic scheduling algorithm for WLANs (Wireless Local Area Networks) was proposed based on the classic WRR (Weighted Round Robin) discipline. The algorithm operates at the link layer and is coupled closely with the DCF (Distributed Coordination Function), which alleviates the HOL (Head Of Line) blocking problem. With compensation for mobile users experiencing bursty channel errors, approximate long-term fairness is achieved. Extensive simulations were performed using NS (Network Simulator). The results show that the algorithm is simple and effectively improves channel utilization and data throughput, while the average packet delay is also decreased.
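For reference, the classic WRR discipline the algorithm builds on can be sketched in a few lines; the paper's link-layer coupling, HOL-blocking relief, and channel-error compensation are noted above but not reproduced:

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Classic WRR: in each round, serve up to weights[i] packets from queue i."""
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if not q:
                    break
                yield q.popleft()

flows = [deque(["a1", "a2", "a3"]), deque(["b1", "b2"])]
print(list(weighted_round_robin(flows, weights=[2, 1])))
# -> ['a1', 'a2', 'b1', 'a3', 'b2']
```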

Application of threshold-crossing in wireless network traffic
LIU Chun-feng,SHU Yan-tai,LIU Jia-kun
Journal of Computer Applications    2005, 25 (04): 878-880.   DOI: 10.3724/SP.J.1087.2005.0878
Compared with wired networks, guaranteed QoS is difficult to achieve in wireless networks because of their limited bandwidth and variable channels, so network traffic prediction is important for network control and resource allocation. This paper presented a method in which the deviation function of the threshold value v is derived by variance analysis of wireless network traffic, providing a basis for threshold selection. Finally, according to level-crossing theory, the efficiency of the solution was validated by calculating the crossing intensity. The chosen threshold value enables better prediction of the traffic.
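The crossing-intensity calculation is classically given by Rice's formula; assuming a stationary Gaussian traffic model with mean m, variance lambda_0, and second spectral moment lambda_2 (the paper's exact traffic model is not reproduced), the expected rate of up-crossings of threshold v is:

```latex
% Rice's formula for the up-crossing intensity of level v by a stationary
% Gaussian process with mean m, variance \lambda_0, second spectral moment \lambda_2:
\nu^{+}(v) \;=\; \frac{1}{2\pi}\sqrt{\frac{\lambda_2}{\lambda_0}}\,
\exp\!\left(-\frac{(v-m)^2}{2\lambda_0}\right)
```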
Adversarial purification method based on directly guided diffusion model
HU Yan, LI Peng, CHENG Shuyan
Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2025030384
Online available: 01 July 2025