Feature selection method based on self-adaptive hybrid particle swarm optimization for software defect prediction
Zhenhua YU, Zhengqi LIU, Ying LIU, Cheng GUO
Journal of Computer Applications    2023, 43 (4): 1206-1213.   DOI: 10.11772/j.issn.1001-9081.2022030444

Feature selection is a key step in data preprocessing for software defect prediction. Aiming at the problems of existing feature selection methods, such as insignificant dimension reduction and low classification accuracy of the selected feature subset, a feature selection method for software defect prediction based on Self-adaptive Hybrid Particle Swarm Optimization (SHPSO) was proposed. Firstly, combined with population partitioning, a self-adaptive weight update strategy based on Q-learning was designed, in which Q-learning was introduced to adaptively adjust the inertia weight according to the states of the particles. Secondly, to balance the global search ability in the early stage of the algorithm and the convergence speed in the later stage, time-varying learning factors based on curve adaptivity were proposed. Finally, a hybrid position update strategy was adopted to help particles escape local optima as early as possible and to increase particle diversity. Experiments were carried out on 12 public software defect datasets. The results show that, compared with the method using all features, commonly used traditional feature selection methods, and mainstream feature selection methods based on intelligent optimization algorithms, the proposed method can effectively improve the classification accuracy of the software defect prediction model and reduce the dimension of the feature space. Compared with the Improved Salp Swarm Algorithm (ISSA), the proposed method increases classification accuracy by about 1.60% on average and reduces the feature subset size by about 63.79% on average. The experimental results show that the proposed method can select a feature subset with high classification accuracy and small size.
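As an illustration of the core idea, the following Python sketch shows how a Q-learning-adjusted inertia weight and time-varying learning factors could be wired into a PSO feature-selection loop. The state buckets, reward, action set, and placeholder fitness function are assumptions made for illustration only, not the authors' implementation.

```python
# Minimal sketch of a PSO feature-selection loop with a Q-learning-style
# adaptive inertia weight. State discretization, reward, and fitness are
# illustrative assumptions, not the SHPSO formulation from the paper.
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Toy fitness: rewards smaller subsets; a real version would use the
    classification accuracy of a model trained on the selected features."""
    if mask.sum() == 0:
        return 0.0
    return 1.0 - 0.5 * mask.sum() / mask.size   # placeholder score

def shpso_sketch(X, y, n_particles=20, n_iter=50):
    n_feat = X.shape[1]
    pos = rng.random((n_particles, n_feat))      # continuous positions in [0, 1]
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p > 0.5, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    # Q-table: rows = particle states (poor / average / good fitness rank),
    # columns = actions (decrease / keep / increase inertia weight).
    Q = np.zeros((3, 3))
    actions = np.array([-0.05, 0.0, 0.05])
    w = np.full(n_particles, 0.7)                # per-particle inertia weight
    alpha, gamma, eps = 0.1, 0.9, 0.1

    for t in range(n_iter):
        # Time-varying learning factors (an assumed curve, not the paper's exact one).
        c1 = 2.5 - 1.5 * t / n_iter
        c2 = 0.5 + 1.5 * t / n_iter
        ranks = pbest_fit.argsort().argsort()    # 0 = worst personal best
        states = (3 * ranks // n_particles).clip(0, 2)

        for i in range(n_particles):
            s = states[i]
            a = rng.integers(3) if rng.random() < eps else Q[s].argmax()
            w[i] = np.clip(w[i] + actions[a], 0.4, 0.9)

            r1, r2 = rng.random(n_feat), rng.random(n_feat)
            vel[i] = (w[i] * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = np.clip(pos[i] + vel[i], 0.0, 1.0)

            f = fitness(pos[i] > 0.5, X, y)
            reward = 1.0 if f > pbest_fit[i] else -1.0
            s_next = s   # simplification: assume the state bucket is unchanged
            Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
            if f > pbest_fit[i]:
                pbest_fit[i], pbest[i] = f, pos[i].copy()
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest > 0.5   # boolean mask of selected features
```

In practice the placeholder fitness would be replaced by the cross-validated accuracy of a defect-prediction classifier trained on the selected feature subset.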

Underwater image enhancement algorithm based on artificial under-exposure fusion and white-balancing technique
Ye TAO, Wenhai XU, Luqiang XU, Fucheng GUO, Haibo PU, Guangtong CHEN
Journal of Computer Applications    2021, 41 (12): 3672-3679.   DOI: 10.11772/j.issn.1001-9081.2021010065

Acquiring clear and accurate underwater images is an important prerequisite for exploring the underwater world. However, compared with regular images, underwater images often suffer from low contrast, detail loss, and color distortion, resulting in poor visual quality. To solve these problems, a new underwater image enhancement algorithm based on Artificial Under-exposure Fusion and White-Balancing technique (AUF+WB) was proposed. Firstly, Gamma correction was applied to the original underwater image to generate five corresponding under-exposed images. Then, contrast, saturation, and well-exposedness were employed as fusion weights, and a multi-scale fusion method was used to generate the fused image. Finally, images compensated on different color channels were each combined with the Gray-World white balance assumption to generate the corresponding white-balanced images, which were evaluated using the Underwater Color Image Quality Evaluation (UCIQE) and the Underwater Image Quality Measure (UIQM). With different types of underwater images selected as experimental samples, the proposed AUF+WB algorithm was compared with existing state-of-the-art underwater image defogging algorithms. The results show that the proposed AUF+WB algorithm outperforms the comparison algorithms in both qualitative and quantitative analyses of image quality. The proposed AUF+WB algorithm can effectively improve the visual quality of underwater images by removing color distortion, enhancing contrast, and recovering details.
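The following Python sketch illustrates two building blocks of the described pipeline: generating artificially under-exposed versions with Gamma correction and applying a Gray-World white balance. The gamma values and the simple per-channel gains are assumptions, and the multi-scale fusion step is omitted; this is not the authors' implementation.

```python
# Illustrative sketch of two AUF+WB building blocks: gamma-based artificial
# under-exposure and Gray-World white balancing. Values are assumptions.
import numpy as np

def underexposed_versions(img, gammas=(1.5, 2.0, 2.5, 3.0, 3.5)):
    """img: float image in [0, 1]; gamma > 1 darkens, mimicking under-exposure."""
    return [np.power(img, g) for g in gammas]

def gray_world_white_balance(img):
    """Scale each channel so its mean matches the overall gray mean."""
    means = img.reshape(-1, 3).mean(axis=0)           # per-channel means
    gains = means.mean() / np.maximum(means, 1e-6)    # Gray-World gains
    return np.clip(img * gains, 0.0, 1.0)

# Usage (a random array stands in for a real underwater image):
img = np.random.default_rng(1).random((240, 320, 3))
exposures = underexposed_versions(img)
balanced = gray_world_white_balance(img)
```

A full implementation would additionally compute the contrast, saturation, and well-exposedness weight maps and blend the exposures with multi-scale (Laplacian pyramid) fusion.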

Security analysis and improvement of IEEE 802.1X
ZHOU Chao, ZHOU Cheng, GUO Liang
Journal of Computer Applications    2011, 31 (05): 1265-1270.   DOI: 10.3724/SP.J.1087.2011.01265
Many studies have shown that the IEEE 802.1X standard contains design flaws. To eliminate Denial of Service (DoS) attacks, replay attacks, session hijacking, Man-In-the-Middle (MIM) attacks, and other security threats, the protocol was analyzed from the perspective of its state machines. It is pointed out that these problems originate from the inequality and incompleteness of the state machines, as well as the lack of integrity protection and source authenticity of messages. Accordingly, an improvement called Dual-way Challenge Handshake and Logoff Authentication was proposed, and a formal analysis of it was performed with an improved BAN logic. The analysis proves that the proposal can effectively resist the security threats mentioned above.
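As a rough illustration of the kind of countermeasure described, the Python sketch below shows a mutual (dual-way) challenge-response and an authenticated logoff message built on a shared key. The message formats, key handling, and use of HMAC-SHA256 are assumptions for illustration; they are not the paper's exact protocol.

```python
# Toy sketch: mutual challenge-response plus an authenticated logoff,
# in the spirit of the improvement above. All details are assumptions.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)   # assumed pre-established supplicant/authenticator key

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Prove possession of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    return hmac.compare_digest(respond(challenge, key), response)

# Dual-way handshake: each side challenges the other with a fresh nonce.
chal_auth, chal_supp = os.urandom(16), os.urandom(16)
assert verify(chal_auth, respond(chal_auth))   # authenticator verifies supplicant
assert verify(chal_supp, respond(chal_supp))   # supplicant verifies authenticator

# Authenticated logoff: the logoff message carries a fresh nonce and a MAC,
# so a forged logoff frame cannot tear down an established session.
nonce = os.urandom(16)
logoff_msg = b"LOGOFF" + nonce
logoff_tag = hmac.new(SHARED_KEY, logoff_msg, hashlib.sha256).digest()
assert hmac.compare_digest(
    hmac.new(SHARED_KEY, logoff_msg, hashlib.sha256).digest(), logoff_tag)
```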
Efficient approach for identifying approximately duplicate Chinese database records
CHENG Guo-da, SU Hang-li
Journal of Computer Applications    2005, 25 (06): 1362-1365.   DOI: 10.3724/SP.J.1087.2005.1362
Eliminating duplicate records can improve data quality. An approach based on the type numbers of field values was proposed to select the sorting fields. In the process of identifying approximately duplicate records, the first sorting field was used to create a two-dimensional linked list storing approximately duplicate records, and the second and third sorting fields were employed to sort the record pairs within that list. To match Chinese character strings efficiently, various error types were studied, including customary abbreviations and input errors caused by characters similar in pronunciation or shape. Input mistakes were resolved by looking up a "Similar Chinese Characters Table", and a similarity function was used to determine whether two records were duplicates. The experimental results show that the approach can efficiently detect approximately duplicate Chinese database records.
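The Python sketch below illustrates the field-matching idea: character-level similarity between two Chinese strings in which characters listed as similar in pronunciation or shape are treated as matching. The tiny lookup table, the threshold, and the length-normalized score are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of duplicate detection with a similar-characters lookup.
SIMILAR_CHARS = {("己", "已"), ("人", "入"), ("未", "末")}   # assumed sample entries

def chars_match(a: str, b: str) -> bool:
    return a == b or (a, b) in SIMILAR_CHARS or (b, a) in SIMILAR_CHARS

def field_similarity(s1: str, s2: str) -> float:
    """Fraction of aligned positions whose characters match exactly or via the table."""
    if not s1 or not s2:
        return 0.0
    matches = sum(chars_match(a, b) for a, b in zip(s1, s2))
    return matches / max(len(s1), len(s2))

def is_duplicate(rec1: list[str], rec2: list[str], threshold: float = 0.8) -> bool:
    """Average the per-field similarities and compare against a threshold."""
    scores = [field_similarity(f1, f2) for f1, f2 in zip(rec1, rec2)]
    return sum(scores) / len(scores) >= threshold

# Usage with assumed sample records ("己" vs "已" matches via the table):
print(is_duplicate(["王刚", "己发货"], ["王刚", "已发货"]))   # True
```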
A self-adaptive approach for information integration
CHENG Guo-da, ZOU Ya-hui, ZHU Jing
Journal of Computer Applications    2005, 25 (03): 666-669.   DOI: 10.3724/SP.J.1087.2005.0666
Detecting records that are approximate duplicates, but not exact duplicates, is one of the key tasks in information integration. Although various algorithms have been presented for detecting duplicate records, string matching is essential to all of them. In the self-adaptive information integration algorithm presented in this paper, a hybrid similarity combining an edit distance and a token-based metric was used to measure the degree of similarity between strings. To avoid mismatches caused by different expressions, the strings in records were partitioned into tokens, which were then sorted by their first character. In the token matching process, misspellings and abbreviations can be tolerated. The experimental results demonstrate that the self-adaptive approach for information integration achieves higher accuracy than approaches using the Smith-Waterman edit distance or the Jaro distance.
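A rough Python sketch of such a hybrid similarity is given below: a character-level score is mixed with a token-alignment score computed after sorting tokens by their first character. The 50/50 weighting and the use of difflib as an edit-distance stand-in are assumptions, not the paper's formulation.

```python
# Sketch of a hybrid string similarity: character-level score mixed with a
# token-alignment score. Weighting and metrics are illustrative assumptions.
from difflib import SequenceMatcher

def edit_similarity(a: str, b: str) -> float:
    """Character-level similarity (difflib ratio as an edit-distance stand-in)."""
    return SequenceMatcher(None, a, b).ratio()

def token_similarity(a: str, b: str) -> float:
    """Sort tokens by first character, align them pairwise, and score each pair
    by character similarity so misspellings are tolerated."""
    ta = sorted(a.lower().split(), key=lambda t: t[0])
    tb = sorted(b.lower().split(), key=lambda t: t[0])
    if not ta or not tb:
        return 0.0
    return sum(edit_similarity(x, y) for x, y in zip(ta, tb)) / max(len(ta), len(tb))

def hybrid_similarity(a: str, b: str, w_edit: float = 0.5) -> float:
    return w_edit * edit_similarity(a, b) + (1 - w_edit) * token_similarity(a, b)

# Usage: tolerant of reordered tokens and small misspellings.
print(hybrid_similarity("International Business Machines Corp",
                        "Business Machines Internatioanl Corp"))
```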