Lightweight deep learning algorithm for weld seam surface quality detection of traction seat
Zijie HUANG, Yang OU, Degang JIANG, Cailing GUO, Bailin LI
Journal of Computer Applications    2024, 44 (3): 983-988.   DOI: 10.11772/j.issn.1001-9081.2023030349

To address the low accuracy and speed of manual and traditional automated inspection of traction seat weld seam surfaces, a lightweight weld seam quality detection algorithm named YOLOv5s-G2CW was proposed. Firstly, the GhostBottleneckV2 module was applied as a replacement for the C3 module in YOLOv5s to reduce the number of model parameters. Then, the CBAM (Convolutional Block Attention Module) was introduced into the Neck of the YOLOv5s model to fuse weld features in two dimensions: channel and space. In addition, the localization loss function of the YOLOv5s model was replaced with Wise-IoU, focusing the predictive regression on ordinary-quality anchor boxes. Finally, the 13 × 13 feature layer used for detecting large objects in the YOLOv5s model was removed to further reduce the number of model parameters. Experimental results show that, compared with the YOLOv5s model, the YOLOv5s-G2CW model is 53.9% smaller, processes 8.0% more frames per second, and improves the mAP (mean Average Precision) by 0.8 percentage points, which shows that the model meets the requirements for real-time and accurate detection of traction seat weld seam surfaces.
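
As a concrete illustration of the attention component, the following is a minimal PyTorch sketch of a CBAM block of the kind added to the Neck, combining channel and spatial attention; the reduction ratio and kernel size are illustrative defaults, not the exact YOLOv5s-G2CW configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """'What' to attend to: reweight channels using pooled descriptors."""
    def __init__(self, channels, reduction=16):   # reduction ratio is an assumed default
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))         # global average-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))          # global max-pooled descriptor
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale

class SpatialAttention(nn.Module):
    """'Where' to attend to: reweight positions using channel-pooled maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in the original CBAM design."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```

In a YOLOv5s-style Neck, such a block would typically be inserted after a feature-fusion layer, e.g. `CBAM(256)` applied to a 256-channel feature map.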

Efficient adaptive robustness optimization algorithm for complex networks
Jie HUANG, Ruizi WU, Junli LI
Journal of Computer Applications    2024, 44 (11): 3530-3539.   DOI: 10.11772/j.issn.1001-9081.2023111659

Enhancing the robustness of complex networks is crucial for them to resist external attacks and cascading failures. Existing evolutionary algorithms have limitations in solving network structure optimization problems, especially in convergence and optimization speed. To address this challenge, a new adaptive complex network robustness optimization algorithm named SU-ANet (SUrrogate-assisted and Adaptive Network optimization algorithm) was proposed. To reduce the large time overhead of robustness computation, a robustness predictor based on an attention mechanism was constructed in SU-ANet as an offline surrogate model to replace the frequent robustness computations in the local search operator. In the evolutionary process, global and local information was considered jointly to avoid falling into local optima while broadening the search space. Through the designed crossover operators, each individual exchanged edges with the global optimum candidate solution and a randomly selected individual to balance the convergence and diversity of the algorithm. Additionally, a parameter self-adaptation mechanism was applied to adjust the operator execution probabilities automatically, thereby alleviating the uncertainty introduced by manual parameter design. Experimental results on both synthetic and real-world networks demonstrate that SU-ANet achieves better search capability and higher evolutionary efficiency.
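
The loop below is a schematic Python sketch of the surrogate-assisted workflow described above, using networkx. The degree-preserving edge swap, the Schneider-style robustness measure, the operator set and the probability-update rule are all illustrative assumptions standing in for the paper's components, and the `surrogate` argument is only a stand-in for the attention-based robustness predictor; this is a sketch of the idea, not the SU-ANet implementation.

```python
import random
import networkx as nx

def robustness(G):
    """Exact (expensive) robustness: mean largest-component fraction
    under targeted removal of highest-degree nodes."""
    H, n, total = G.copy(), G.number_of_nodes(), 0.0
    for _ in range(n - 1):
        v = max(H.degree, key=lambda kv: kv[1])[0]
        H.remove_node(v)
        total += max(len(c) for c in nx.connected_components(H)) / n
    return total / n

def local_search(G, surrogate, tries=10):
    """Propose degree-preserving rewirings; keep the one the surrogate scores best."""
    best, best_score = G, surrogate(G)
    for _ in range(tries):
        H = G.copy()
        nx.double_edge_swap(H, nswap=2, max_tries=100)
        score = surrogate(H)
        if score > best_score:
            best, best_score = H, score
    return best

def optimize(G0, generations=20, pop_size=8, surrogate=None):
    surrogate = surrogate or robustness                 # fall back to the exact measure
    population = [G0.copy() for _ in range(pop_size)]
    probs = {"guided": 0.5, "blind": 0.5}               # self-adaptive operator probabilities
    best, best_r = G0.copy(), robustness(G0)
    for _ in range(generations):
        new_pop = []
        for G in population:
            op = random.choices(list(probs), weights=probs.values())[0]
            if op == "guided":                          # surrogate-guided local rewiring
                child = local_search(G, surrogate)
            else:                                       # blind rewiring keeps diversity
                child = G.copy()
                nx.double_edge_swap(child, nswap=2, max_tries=100)
            r = robustness(child)                       # exact evaluation of offspring
            if r > best_r:
                best, best_r = child.copy(), r
                probs[op] += 0.05                       # reward the successful operator
            new_pop.append(child)
        total = sum(probs.values())
        probs = {k: v / total for k, v in probs.items()}
        population = new_pop
    return best, best_r
```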

Voting instance selection algorithm based on learning to hash
Yajie HUANG, Junhai ZHAI, Xiang ZHOU, Yan LI
Journal of Computer Applications    2022, 42 (2): 389-394.   DOI: 10.11772/j.issn.1001-9081.2021071188

With the massive growth of data, how to store and use data has become a hot issue in academic research and industrial applications. As one of the methods to address this problem, instance selection effectively reduces the difficulty of follow-up work by selecting representative instances from the original data according to established rules. Therefore, a voting instance selection algorithm based on learning to hash was proposed. Firstly, Principal Component Analysis (PCA) was used to map high-dimensional data to a low-dimensional space. Secondly, the k-means algorithm was combined with vector quantization to perform iterative operations, and the hash codes of the cluster centers were used to represent the data. After that, the coded data were randomly sampled in proportion, and the final instances were selected by voting over several independent runs of the algorithm. Compared with the Condensed Nearest Neighbor (CNN) algorithm and LSH-IS-F (Instance Selection algorithm by Hashing with two passes), a linear-complexity instance selection algorithm for big data, the proposed algorithm improves the compression ratio by an average of 19%. The idea of the proposed algorithm is simple and easy to implement, and the compression ratio can be controlled automatically by adjusting the parameters. Experimental results on 7 datasets show that the proposed algorithm has a clear advantage over random hashing in compression ratio and running time while achieving similar test accuracy.
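
A minimal scikit-learn sketch of the pipeline described above (PCA, k-means as the vector quantizer, proportional sampling within each code, and voting across independent runs) follows; the numbers of components, clusters and runs, the sampling ratio and the vote threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from collections import Counter
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def select_instances(X, ratio=0.1, n_runs=5, n_components=8, n_clusters=64, min_votes=3):
    votes = Counter()
    for run in range(n_runs):
        # Step 1: map high-dimensional data to a low-dimensional space.
        Z = PCA(n_components=n_components, random_state=run).fit_transform(X)
        # Step 2: k-means acts as the vector quantizer; each instance is
        # represented by the code (cluster id) of its nearest center.
        codes = KMeans(n_clusters=n_clusters, n_init=10, random_state=run).fit_predict(Z)
        # Step 3: sample a fixed proportion of instances from each code.
        rng = np.random.default_rng(run)
        for c in np.unique(codes):
            members = np.flatnonzero(codes == c)
            k = max(1, int(ratio * len(members)))
            votes.update(rng.choice(members, size=k, replace=False).tolist())
    # Step 4: keep instances selected in at least `min_votes` independent runs.
    return sorted(i for i, v in votes.items() if v >= min_votes)
```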

Three dimensional localization algorithm for wireless sensor networks based on projection and grid scan
TANG Jie, HUANG Hongguang
Journal of Computer Applications    2013, 33 (09): 2470-2473.   DOI: 10.11772/j.issn.1001-9081.2013.09.2470
A method was proposed to address the shortcomings of current Wireless Sensor Network (WSN) three-dimensional localization algorithms in terms of accuracy and complexity. A grid scan was used to resolve the overlapping projection regions of neighboring anchor nodes on two coordinate planes and to obtain the corresponding positions of the unknown nodes on those planes, from which the three-dimensional positions of the unknown nodes were finally estimated. The simulation results show that when 200 sensor nodes were deployed randomly in a 100 m × 100 m × 100 m space, the coverage ratio of unknown nodes reached 99.1% and the relative error decreased to 0.5533. The use of projection effectively reduced the complexity of the algorithm.
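
The short numpy sketch below illustrates the projection-and-grid-scan idea: each neighboring anchor's communication sphere is projected onto the XOY and XOZ planes as a disc, a grid scan finds the cells covered by every disc on each plane, and the centroids of the two overlap regions are combined into a 3D estimate. The grid step, space size and communication radius are illustrative assumptions, not the simulation settings above.

```python
import numpy as np

def plane_estimate(anchors_2d, radius, size=100.0, step=1.0):
    """Centroid of the grid cells lying inside every anchor's projected disc."""
    xs = np.arange(0.0, size, step)
    gx, gy = np.meshgrid(xs, xs, indexing="ij")
    inside = np.ones_like(gx, dtype=bool)
    for ax, ay in anchors_2d:
        inside &= (gx - ax) ** 2 + (gy - ay) ** 2 <= radius ** 2
    if not inside.any():
        return None
    return gx[inside].mean(), gy[inside].mean()

def localize(anchors_3d, radius=30.0):
    anchors_3d = np.asarray(anchors_3d, dtype=float)
    xy = plane_estimate(anchors_3d[:, [0, 1]], radius)   # projection onto XOY
    xz = plane_estimate(anchors_3d[:, [0, 2]], radius)   # projection onto XOZ
    if xy is None or xz is None:
        return None
    x = (xy[0] + xz[0]) / 2.0        # x is shared by both projections; average it
    return np.array([x, xy[1], xz[1]])

# Example: estimate an unknown node that hears three anchors within range
print(localize([[40, 40, 50], [60, 45, 55], [50, 60, 45]]))
```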
Research status and development trend of human computation
YANG Jie, HUANG Xiaopeng, SHENG Yin
Journal of Computer Applications    2013, 33 (07): 1875-1879.   DOI: 10.11772/j.issn.1001-9081.2013.07.1875
Human computation is a technology that combines human abilities with distributed computing theory to solve problems that computers cannot solve alone. The concept of human computation and its properties were introduced, and the distinctions between human computation and several similar concepts were clarified. Based on a literature review, the current research methods and design criteria of human computation were summarized. Finally, research directions and development trends of human computation were discussed.
Application of particle filter algorithm in traveling salesman problem
WU Xin-jie, HUANG Guo-xing
Journal of Computer Applications    2012, 32 (08): 2219-2222.   DOI: 10.3724/SP.J.1087.2012.02219
Existing optimization algorithms for solving the Traveling Salesman Problem (TSP) easily fall into local extrema. To overcome this shortcoming, a new optimization method based on the particle filter, which regards the search for the best TSP route as a dynamic time-varying system, was proposed. The basic ideas of using the particle filter principle to search for the best TSP route were described, and the implementation procedure was given. To reduce the possibility of sinking into local extrema, the crossover and mutation operators of the Genetic Algorithm (GA) were introduced into the new algorithm to enhance the diversity of particles in the sampling process. Finally, simulation experiments were performed to verify the validity of the new method. The new particle filter based optimization method can find better solutions than other optimization algorithms.
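
The following is a compact sketch of this idea: candidate tours are treated as particles, each iteration perturbs them with GA-style operators (order crossover and swap mutation), importance weights favour shorter tours, and resampling concentrates particles on good routes. The population size, iteration count, mutation rate and weighting temperature are illustrative assumptions, not the paper's parameters.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    """Keep a slice of p1 and fill the rest in the order the cities appear in p2."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def swap_mutation(tour, rate=0.2):
    tour = tour[:]
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def particle_filter_tsp(dist, n_particles=100, iterations=200, temperature=10.0):
    n = len(dist)
    particles = [random.sample(range(n), n) for _ in range(n_particles)]
    best = min(particles, key=lambda t: tour_length(t, dist))
    for _ in range(iterations):
        # "State transition": perturb every particle with crossover and mutation.
        particles = [swap_mutation(order_crossover(p, random.choice(particles)))
                     for p in particles]
        best = min(particles + [best], key=lambda t: tour_length(t, dist))
        # Importance weighting and resampling: shorter tours get larger weights.
        lengths = [tour_length(p, dist) for p in particles]
        shortest = min(lengths)
        weights = [math.exp(-(L - shortest) / temperature) for L in lengths]
        particles = random.choices(particles, weights=weights, k=n_particles)
    return best, tour_length(best, dist)
```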
Optimal algorithm for FIR digital filter with canonical signed digit coefficients
TAN Jiajie, HUANG Sanwei, ZOU Changqin
Journal of Computer Applications    2011, 31 (06): 1727-1729.   DOI: 10.3724/SP.J.1087.2011.01727
To save resources of the Finite Impulse Response (FIR) filter and increase its running speed, a Least Mean-Square-Error (LMSE) method was proposed for converting the floating-point filter coefficients to Canonical Signed Digit (CSD) form. The FIR filter was implemented in a cascade structure, in which conjugate pairs of zeros form the basic sections. First, all zeros of the digital filter were calculated and grouped into two cascade sections of the FIR filter. Then the coefficients of the first cascade section were converted to fixed point, after which the coefficients of the second cascade section were quantized to fixed point; to mitigate the finite word-length effects, the LMSE criterion was adopted to compensate the zeros in this step. Finally, all fixed-point coefficients were quantized into CSD form. To verify the effectiveness of the method, an FIR filter was also designed with simply quantized coefficients for comparison. The magnitude responses of the two designs show that the LMSE-based quantization is more effective than simple quantization.
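
For reference, the snippet below sketches the final quantization step: a floating-point coefficient is first quantized to fixed point and then rewritten in canonical signed digit form, i.e. with digits in {-1, 0, +1} and no two adjacent nonzero digits, which is what makes CSD multipliers cheap in hardware. The 12-bit word length is an illustrative assumption, and the LMSE zero compensation is not shown.

```python
def to_csd(coefficient, frac_bits=12):
    """Quantize a coefficient to fixed point and convert it to CSD digits (LSB first)."""
    n = round(coefficient * (1 << frac_bits))     # fixed-point quantization
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)                       # +1 if n mod 4 == 1, -1 if n mod 4 == 3
        else:
            d = 0
        digits.append(d)
        n = (n - d) // 2
    return digits

def from_csd(digits, frac_bits=12):
    """Reconstruct the quantized value from CSD digits."""
    return sum(d * (1 << i) for i, d in enumerate(digits)) / (1 << frac_bits)

# Example: quantize one coefficient and check the quantization error
c = 0.217
csd = to_csd(c)
print(csd, from_csd(csd), abs(c - from_csd(csd)))
```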
Hierarchical error correcting output code algorithm based on KNNModel
Yi-yi XIN, Gong-de GUO, Li-fei CHEN, Jie HUANG
Journal of Computer Applications    2009, 29 (11): 3051-3055.  
Error Correcting Output Codes (ECOC) is an effective algorithm for handling multi-class problems; however, standard ECOC coding operates only at the class level and the ECOC matrix is pre-designed. A novel classification algorithm based on hierarchical ECOC was proposed. In the training phase, the algorithm first used KNNModel to build multiple clusters on a given dataset and chose a few clusters for each class as representatives to construct a hierarchical coding matrix, which was then used to train the individual binary classifiers. In the testing phase, the merits of KNNModel and ECOC were fully exploited through model combination. Experimental results on the UCI datasets show the effectiveness of the proposed method.
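
As a point of reference, the sketch below shows generic ECOC training and Hamming-distance decoding with scikit-learn; the coding matrix here is random rather than the cluster-derived hierarchical matrix built with KNNModel, and the logistic-regression base learner is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ecoc(X, y, code_length=15, seed=0):
    classes = np.unique(y)
    rng = np.random.default_rng(seed)
    # One code word (a row of +/-1) per class; each column defines a binary task.
    matrix = rng.choice([-1, 1], size=(len(classes), code_length))
    for j in range(code_length):                  # every column must contain both labels
        while len(np.unique(matrix[:, j])) < 2:
            matrix[:, j] = rng.choice([-1, 1], size=len(classes))
    idx = {c: i for i, c in enumerate(classes)}
    learners = []
    for j in range(code_length):
        targets = np.array([matrix[idx[c], j] for c in y])
        learners.append(LogisticRegression(max_iter=1000).fit(X, targets))
    return classes, matrix, learners

def predict_ecoc(X, classes, matrix, learners):
    # Concatenate the binary predictions into an output word, then pick the
    # class whose code word is nearest in Hamming distance.
    outputs = np.column_stack([clf.predict(X) for clf in learners])
    distances = np.array([[np.sum(o != row) for row in matrix] for o in outputs])
    return classes[np.argmin(distances, axis=1)]
```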