Dynamic UAV path planning based on modified whale optimization algorithm
Xingwang WANG, Qingyang ZHANG, Shouyong JIANG, Yongquan DONG
Journal of Computer Applications    2025, 45 (3): 928-936.   DOI: 10.11772/j.issn.1001-9081.2024030370

A dynamic Unmanned Aerial Vehicle (UAV) path planning method based on a Modified Whale Optimization Algorithm (MWOA) was proposed for the problem of UAV path planning in environments with complex terrain. Firstly, by analyzing the mountainous terrain, dynamic targets, and threat zones, a three-dimensional dynamic environment and a UAV route model were established. Secondly, an adaptive step-size Gaussian walk strategy was proposed to balance the algorithm's global exploration and local exploitation abilities. Finally, a supplementary correction strategy was proposed to correct the optimal individual in the population; combined with a differential evolution strategy, it keeps the population from falling into local optima while improving the convergence accuracy of the algorithm. To verify the effectiveness of MWOA, MWOA and intelligent algorithms such as the Whale Optimization Algorithm (WOA) and the Artificial Hummingbird Algorithm (AHA) were used to solve the CEC2022 test functions, and were validated in the designed dynamic UAV environment model. Comparative analysis of the simulation results shows that, compared with the traditional WOA, MWOA improves convergence accuracy by 6.1% and reduces the standard deviation by 44.7%. These results demonstrate that the proposed MWOA converges faster and more accurately, and can handle UAV path planning problems effectively.
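
As an illustration of the adaptive step-size Gaussian walk described above, the following minimal Python sketch perturbs a candidate around the current best solution with a step scale that decays over iterations; the function name, decay schedule, and bounds handling are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def gaussian_walk_step(position, best_position, iteration, max_iter, bounds):
    """Perturb a candidate around the current best with a shrinking Gaussian step.

    The step scale decays linearly with the iteration count, so early steps
    explore widely while late steps refine locally (hypothetical schedule).
    """
    lower, upper = bounds
    scale = 0.1 * (1.0 - iteration / max_iter) * (upper - lower)   # decaying step size
    step = np.random.normal(loc=0.0, scale=max(scale, 1e-12), size=position.shape)
    new_position = best_position + step                            # walk around the best
    return np.clip(new_position, lower, upper)                     # stay inside the search box

# Example: one 3-D waypoint bounded to [0, 100]
candidate = np.random.uniform(0, 100, size=3)
best = np.array([50.0, 40.0, 30.0])
print(gaussian_walk_step(candidate, best, iteration=10, max_iter=100, bounds=(0.0, 100.0)))
```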

Bearings fault diagnosis method based on multi-pathed hierarchical mixture-of-experts model
Xinran XU, Shaobing ZHANG, Miao CHENG, Yang ZHANG, Shang ZENG
Journal of Computer Applications    2025, 45 (1): 59-68.   DOI: 10.11772/j.issn.1001-9081.2024010043

In response to the low accuracy in handling complex working conditions in rolling bearing fault diagnosis, a Multi-Task Learning (MTL) model named Multi-pathed Hierarchical Mixture-of-Experts (MHMoE) and the corresponding hierarchical training mode were proposed. In this model, a hierarchical information sharing mode was achieved by combining multi-stage, multi-task joint training. The model's generalization and fault recognition accuracy were further improved over the ordinary MTL mode, enabling the model to perform well on both complex and simple datasets. Meanwhile, by incorporating the bottleneck layer structure of one-dimensional ResNet, sufficient network depth was ensured while avoiding vanishing and exploding gradients, so that the relevant features of the dataset could be extracted fully. Experimental results on the Paderborn University (PU) bearing fault dataset demonstrate that, under varying degrees of working-condition complexity, the proposed model improves accuracy by 5.45 to 9.30 percentage points compared to the OMoE (One-gate Mixture-of-Experts)-ResNet18 model without MTL, and by 3.21 to 16.45 percentage points compared to models such as EEMD-Hilbert (Ensemble Empirical Mode Decomposition Hilbert spectral transform), MMoE (Multi-gate Mixture-of-Experts), and MSTACNN (Multi-Scale multi-Task Attention Convolutional Neural Network).
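
The gating idea behind a mixture-of-experts layer can be sketched as follows; this toy NumPy version (a single softmax gate with random weights) only illustrates how expert outputs are combined, not the MHMoE architecture itself.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x, expert_weights, gate_weights):
    """Combine expert outputs with a learned softmax gate (single-gate MoE sketch).

    x:              (batch, d_in) input features
    expert_weights: list of (d_in, d_out) matrices, one per expert
    gate_weights:   (d_in, n_experts) gating matrix
    """
    expert_outputs = np.stack([x @ w for w in expert_weights], axis=1)  # (batch, n_experts, d_out)
    gates = softmax(x @ gate_weights)                                   # (batch, n_experts)
    return (gates[..., None] * expert_outputs).sum(axis=1)              # weighted sum of experts

# Toy usage: 4 experts mapping 8-d vibration features to 3-d representations
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
experts = [rng.normal(size=(8, 3)) for _ in range(4)]
gate = rng.normal(size=(8, 4))
print(moe_forward(x, experts, gate).shape)  # (5, 3)
```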

Survey of fairness in federated learning
Shufen ZHANG, Hongyang ZHANG, Zhiqiang REN, Xuebin CHEN
Journal of Computer Applications    2025, 45 (1): 1-14.   DOI: 10.11772/j.issn.1001-9081.2023121881

Federated Learning (FL) has developed rapidly due to its advantages in distributed structure and privacy security. However, the fairness issues arising in large-scale FL affect the sustainability of FL systems. In response, recent research on fairness in FL was reviewed systematically and analyzed in depth. Firstly, the workflow and definitions of FL were explained, and the concepts of bias and fairness in FL were summarized. Secondly, the datasets commonly used in FL fairness research were detailed, and the challenges faced by this research were discussed. Finally, the advantages, disadvantages, applicable scenarios, and experimental settings of relevant work were summarized from four aspects: data source selection, model optimization, contribution evaluation, and incentive mechanism, and future research directions and trends in FL fairness were outlined.

Review on security threats and defense measures in federated learning
Xuebin CHEN, Zhiqiang REN, Hongyang ZHANG
Journal of Computer Applications    2024, 44 (6): 1663-1672.   DOI: 10.11772/j.issn.1001-9081.2023060832

Federated learning is a distributed learning approach for solving the data sharing and privacy protection problems in machine learning, in which multiple parties jointly train a machine learning model while protecting the privacy of their data. However, federated learning faces inherent security threats, which pose great challenges to its practical application. Therefore, analyzing the attacks faced by federated learning and the corresponding defense measures is crucial for its development and application. First, the definition, process, and classification of federated learning were introduced, along with the attacker model in federated learning. Then, the possible attacks on both the robustness and the privacy of federated learning systems were presented, together with the corresponding defense measures, and the shortcomings of these defense schemes were pointed out. Finally, a secure federated learning system was envisioned.

Multivariate long-term series forecasting model based on decomposition and frequency domain feature extraction
Yiyang FAN, Yang ZHANG, Shang ZENG, Yu ZENG, Maoli FU
Journal of Computer Applications    2024, 44 (11): 3442-3448.   DOI: 10.11772/j.issn.1001-9081.2023111684

Existing Transformer-based Multivariate Long-Term Series Forecasting (MLTSF) models mainly extract features in the time domain, where it is difficult to find reliable dependencies directly among the dispersed time points of a long series. To address this problem, a new decomposition and frequency-domain feature extraction model was proposed. Firstly, a frequency-domain-based periodic-term and trend-term decomposition method was proposed, which reduces the time complexity of the decomposition process. Then, on the basis of the trend features extracted by this decomposition, a Transformer network performing frequency-domain feature extraction based on the Gabor transform was used to capture periodic dependencies, which enhances the stability and robustness of forecasting. Experimental results on five benchmark datasets show that, compared with the current state-of-the-art methods, the proposed model reduces the Mean Squared Error (MSE) of MLTSF by an average of 7.6%, with a maximum reduction of 18.9%, demonstrating that it improves forecasting accuracy effectively.
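
A minimal sketch of a frequency-domain trend/periodic split, assuming a simple low-frequency cutoff; the actual decomposition and the Gabor-transform feature extraction in the paper are more elaborate.

```python
import numpy as np

def freq_decompose(series, trend_cutoff=3):
    """Split a 1-D series into trend and periodic parts in the frequency domain.

    Frequencies below `trend_cutoff` (a hypothetical threshold) are treated as
    the slowly varying trend; everything else is the periodic component.
    """
    spectrum = np.fft.rfft(series)
    trend_spec = np.zeros_like(spectrum)
    trend_spec[:trend_cutoff] = spectrum[:trend_cutoff]   # keep only low frequencies
    trend = np.fft.irfft(trend_spec, n=len(series))
    periodic = series - trend                             # residual carries the periodic part
    return trend, periodic

t = np.arange(256)
series = 0.05 * t + np.sin(2 * np.pi * t / 24)            # linear trend + daily-like cycle
trend, periodic = freq_decompose(series)
print(trend.shape, periodic.shape)
```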

Time series prediction algorithm based on multi-scale gated dilated convolutional network
Yu ZENG, Yang ZHANG, Shang ZENG, Maoli FU, Qixue HE, Linlong ZENG
Journal of Computer Applications    2024, 44 (11): 3427-3434.   DOI: 10.11772/j.issn.1001-9081.2023111583

Addressing challenges in time series prediction tasks such as high-dimensional features, large-scale data, and the demand for high prediction accuracy, a multi-scale trend-period decomposition model based on a multi-head gated dilated convolutional network was proposed. A multi-scale decomposition approach was employed to decompose the original covariate sequence and the prediction variable sequence into their respective periodic and trend terms, enabling independent prediction. For the periodic terms, a multi-head gated dilated convolutional network encoder was introduced to extract the periodic information of each sequence; in the decoder stage, channel information interaction and fusion were performed through a cross-attention mechanism, and after sampling and aligning the periodic information of the prediction variables, the periodic prediction was performed through time attention and channel information fusion. The trend terms were predicted with an autoregressive approach. Finally, the prediction sequence was obtained by adding the trend prediction results to the periodic prediction results. Compared with multiple mainstream benchmark models such as the Long Short-Term Memory (LSTM) network and Informer on five datasets including ETTm1 and ETTh1, the proposed model reduces the Mean Squared Error (MSE) by 19.2% to 52.8% on average and the Mean Absolute Error (MAE) by 12.1% to 33.8% on average. Ablation experiments confirm that the proposed multi-scale decomposition module, multi-head gated dilated convolution, and time attention module all enhance the accuracy of time series prediction.
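
One plausible reading of a gated dilated convolution block, sketched in PyTorch in the WaveNet style; channel count, kernel size, and dilation here are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class GatedDilatedConv1d(nn.Module):
    """Gated dilated 1-D convolution: tanh(filter) * sigmoid(gate) (WaveNet-style sketch)."""

    def __init__(self, channels, kernel_size=3, dilation=2):
        super().__init__()
        padding = (kernel_size - 1) // 2 * dilation          # keep sequence length unchanged
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size,
                                     dilation=dilation, padding=padding)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size,
                                   dilation=dilation, padding=padding)

    def forward(self, x):                                    # x: (batch, channels, length)
        return torch.tanh(self.filter_conv(x)) * torch.sigmoid(self.gate_conv(x))

block = GatedDilatedConv1d(channels=16, kernel_size=3, dilation=4)
out = block(torch.randn(2, 16, 96))
print(out.shape)  # torch.Size([2, 16, 96])
```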

Image denoising model based on approximate U-shaped network structure
Huazhong JIN, Xiuyang ZHANG, Zhiwei YE, Wenqi ZHANG, Xiaoyu XIA
Journal of Computer Applications    2022, 42 (8): 2571-2577.   DOI: 10.11772/j.issn.1001-9081.2021061126

Aiming at the problems of poor denoising effect and long training period in image denoising, an image denoising model based on an approximate U-shaped network structure was proposed. Firstly, the original linear network structure was modified into an approximate U-shaped structure by using convolutional layers with different strides. Then, image information from different receptive fields was superimposed to preserve the original information of the image as much as possible. Finally, a deconvolutional layer was introduced for image restoration and further noise removal. Experimental results on the Set12 and BSD68 test sets show that, compared with the Denoising Convolutional Neural Network (DnCNN) model, the proposed model achieves an average increase of 0.04 to 0.14 dB in Peak Signal-to-Noise Ratio (PSNR) and an average reduction of 41% in training time, verifying that it has better denoising effect and shorter training time.
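
A toy PyTorch sketch of the approximate U-shape: a strided convolution downsamples, a transposed convolution restores the resolution, and a skip addition superimposes the two receptive fields; the layer sizes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MiniUDenoiser(nn.Module):
    """Toy approximate-U structure: strided conv down, transposed conv up, skip addition."""

    def __init__(self, channels=32):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Sequential(nn.Conv2d(channels, channels, 3, stride=2, padding=1),
                                  nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)     # predict the noise residual

    def forward(self, x):
        feat = self.head(x)
        restored = self.up(self.down(feat)) + feat           # fuse two receptive fields
        return x - self.tail(restored)                       # subtract predicted noise

model = MiniUDenoiser()
print(model(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```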

Malicious code detection method based on attention mechanism and residual network
Yang ZHANG, Jiangbo HAO
Journal of Computer Applications    2022, 42 (6): 1708-1715.   DOI: 10.11772/j.issn.1001-9081.2021061410

As existing malicious code detection methods based on deep learning suffer from insufficient feature extraction and low accuracy, a malicious code detection method based on an attention mechanism and Residual Network (ResNet), called ARMD, was proposed. To support the training of this method, the hash values of 47 580 malicious and benign code samples were obtained from the Kaggle website, and the APIs called by each sample were extracted with the analysis tool VirusTotal. The called APIs were then consolidated into 1 000 non-repeated APIs used as detection features, from which the training samples were constructed. The samples were labeled as benign or malicious according to the VirusTotal analysis results, and the SMOTE (Synthetic Minority Over-sampling Technique) algorithm was used to balance the data. Finally, a ResNet injected with the attention mechanism was built and trained to perform malicious code detection. Experimental results show that the detection accuracy of ARMD is 97.76%, and that, compared with existing detection methods based on Convolutional Neural Network (CNN) and ResNet models, ARMD improves the average precision by at least 2%, verifying its effectiveness.
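
Two of the building blocks mentioned above can be sketched briefly in Python: SMOTE-based balancing (via the imbalanced-learn package) and a squeeze-excitation-style attention that reweights API-call features. The shapes and weight matrices are illustrative assumptions; this is not the ARMD model itself.

```python
import numpy as np
from imblearn.over_sampling import SMOTE   # assumes the imbalanced-learn package is installed

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_reweight(features, w1, w2):
    """Squeeze-excitation-style reweighting of API-call features (illustrative only)."""
    squeeze = features.mean(axis=0)                 # global statistics per feature
    excite = sigmoid(np.tanh(squeeze @ w1) @ w2)    # per-feature importance in (0, 1)
    return features * excite                        # emphasise informative API features

# Balance a toy malicious/benign sample set, then reweight its 1 000 API features
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 1000)).astype(float)
y = np.array([0] * 150 + [1] * 50)                  # imbalanced labels
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_att = attention_reweight(X_bal, rng.normal(size=(1000, 64)), rng.normal(size=(64, 1000)))
print(X_bal.shape, X_att.shape)
```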

Coupling related code smell detection method based on deep learning
Shan SU, Yang ZHANG, Dongwen ZHANG
Journal of Computer Applications    2022, 42 (6): 1702-1707.   DOI: 10.11772/j.issn.1001-9081.2021061403

Heuristic and machine learning based code smell detection methods have been shown to have limitations, and most of them focus on common code smells. To address these problems, a deep learning based method was proposed to detect three relatively rare, coupling-related code smells: Intensive Coupling, Dispersed Coupling, and Shotgun Surgery. First, the metrics of the three code smells were extracted and the obtained data were preprocessed. Second, a deep learning model combining a Convolutional Neural Network (CNN) and an attention mechanism was constructed, in which the attention mechanism assigns weights to the metric features. The datasets were extracted from 21 open source projects, and the detection methods were validated on 10 open source projects and compared with a plain CNN model. Experimental results show that the proposed model achieves better performance for Intensive Coupling and Dispersed Coupling, with precisions of 93.61% and 99.76% respectively, while the CNN model achieves the better result for Shotgun Surgery, with a precision of 98.59%.
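
A minimal sketch of attention-style weighting over code-metric features; the metric names and the attention vector are hypothetical, and the paper's model combines such weighting with a CNN.

```python
import numpy as np

def metric_attention(metrics, attn_vector):
    """Weight code-smell metric features by a softmax attention vector (sketch)."""
    scores = metrics * attn_vector                          # per-metric relevance scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)           # softmax over the metrics
    return metrics * weights                                # attended metric representation

# Toy example: 4 methods described by 6 coupling metrics (names hypothetical)
rng = np.random.default_rng(2)
metrics = rng.random((4, 6))
attn = rng.random(6)
print(metric_attention(metrics, attn).shape)  # (4, 6)
```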

Influence maximization algorithm based on node coverage and structural hole
Jie YANG, Mingyang ZHANG, Xiaobin RUI, Zhixiao WANG
Journal of Computer Applications    2022, 42 (4): 1155-1161.   DOI: 10.11772/j.issn.1001-9081.2021071256

Influence maximization is one of the important problems in social network analysis; it aims to identify a small group of seed nodes such that, when these nodes act as initial spreaders, information spreads to as many of the remaining nodes in the network as possible. Existing heuristic algorithms based on network topology usually consider only a single network centrality and fail to combine node characteristics with network topology comprehensively; their performance is therefore unstable and easily affected by the network structure. To solve this problem, an influence maximization algorithm based on Node Coverage and Structural Hole (NCSH) was proposed. Firstly, the coverages and network constraint coefficients of all nodes were calculated. Then a seed was selected according to the principle of maximum coverage gain; if multiple nodes had the same gain, the seed was selected according to the principle of minimum network constraint coefficient. These steps were repeated until all seeds were selected. NCSH maintains good performance on six real networks under different numbers of seeds and different spreading probabilities: it achieves 3.8% higher node coverage on average than the similar NCA (Node Coverage Algorithm), and 43% lower time consumption than the similar SHDD (maximization algorithm based on Structure Hole and DegreeDiscount). The experimental results show that NCSH can solve the influence maximization problem effectively.
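
A hedged sketch of the greedy selection rule, using networkx's Burt constraint as the structural-hole measure and 1-hop neighbourhoods as coverage; the paper's exact coverage and constraint definitions may differ.

```python
import networkx as nx

def ncsh_seeds(graph, k):
    """Greedy seed selection: maximise new 1-hop coverage, break ties by low Burt constraint."""
    constraint = nx.constraint(graph)                 # structural-hole (network constraint) scores
    covered, seeds = set(), []
    for _ in range(k):
        def gain(node):
            return len(({node} | set(graph[node])) - covered)
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: (gain(n), -constraint.get(n, float("inf"))))
        seeds.append(best)
        covered |= {best} | set(graph[best])
    return seeds

G = nx.karate_club_graph()
print(ncsh_seeds(G, 3))
```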

Efficient failure recovery method for stream data processing system
Yang LIU, Yangyang ZHANG, Haoyi ZHOU
Journal of Computer Applications    2022, 42 (11): 3337-3345.   DOI: 10.11772/j.issn.1001-9081.2021122108

Focusing on the issue that single points of failure cannot be handled efficiently by the stream data processing system Flink, a new fault-tolerant system based on incremental state and backup, Flink+, was proposed. Firstly, backup operators and data paths were established in advance. Secondly, the output data in the dataflow graph was cached, with disks used when necessary. Thirdly, task states were synchronized during system snapshots. Finally, backup tasks and cached data were used to resume computation in case of failure. In the system experiments, Flink+ does not significantly increase the fault-tolerance overhead during fault-free operation; when handling a single point of failure in both single-machine and distributed environments, compared with Flink, the proposed system reduces the failure recovery time by 96.98% with single-machine 8-task parallelism and by 88.75% with distributed 16-task parallelism. Experimental results show that combining incremental state with backup can effectively reduce the single-point failure recovery time of a stream system and enhance its robustness.
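
A toy, heavily simplified Python illustration of the backup idea (periodic state snapshots to a standby operator plus replay of the output cached since the last snapshot); this is conceptual only and is not Flink or Flink+ code.

```python
class Operator:
    """Toy stateful operator: keeps a running sum of the records it has processed."""

    def __init__(self):
        self.state = 0

    def process(self, record):
        self.state += record
        return self.state

class FaultTolerantPipeline:
    """Primary/backup pair: snapshot state periodically, cache upstream output in between."""

    def __init__(self, snapshot_every=3):
        self.primary, self.backup = Operator(), Operator()
        self.snapshot_every, self.cache, self.seen = snapshot_every, [], 0

    def feed(self, record):
        self.cache.append(record)                    # upstream output cached for replay
        out = self.primary.process(record)
        self.seen += 1
        if self.seen % self.snapshot_every == 0:     # incremental snapshot: sync state, clear cache
            self.backup.state = self.primary.state
            self.cache.clear()
        return out

    def recover(self):
        for record in self.cache:                    # replay only the records since last snapshot
            self.backup.process(record)
        self.primary, self.cache = self.backup, []
        return self.primary.state

pipe = FaultTolerantPipeline()
for r in [1, 2, 3, 4, 5]:
    pipe.feed(r)
print(pipe.recover())  # 15: state rebuilt from the last snapshot plus the cached records
```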

Text feature selection method based on Word2Vec word embedding and genetic algorithm for biomarker selection in high-dimensional omics
Yang ZHANG, Xiaoning WANG
Journal of Computer Applications    2021, 41 (11): 3151-3155.   DOI: 10.11772/j.issn.1001-9081.2020122032

Text features are a key part of natural language processing. Concerning the high dimensionality and sparseness of text features, a text feature selection method based on Word2Vec word embedding and GARBO (Genetic AlgoRithm for Biomarker selection in high-dimensional Omics) was proposed to facilitate subsequent text classification tasks. Firstly, the data input form was optimized, and the Word2Vec word embedding method was used to transform the text into word vectors analogous to gene expression data. Then, the gene-expression-like high-dimensional word vectors were evolved iteratively. Finally, a random forest classifier was used to classify the text after feature selection. The method was verified through experiments on a Chinese comment dataset. The experimental results show that the optimized GARBO feature selection method is effective for text feature selection, reducing 300-dimensional features to 50 more valuable dimensions and reaching a classification accuracy of 88%. Compared with other filter-type text feature selection methods, the proposed method reduces the dimensionality of text features and improves the text classification effect.
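
A sketch of a genetic bit-mask search over word-vector dimensions with a random-forest fitness, using scikit-learn; the operators and parameters are illustrative assumptions, not the GARBO implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ga_select(X, y, n_keep=50, pop_size=20, generations=30, seed=0):
    """Evolve boolean masks over feature dimensions; fitness = cross-validated accuracy."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]

    def random_mask():
        mask = np.zeros(dim, dtype=bool)
        mask[rng.choice(dim, size=n_keep, replace=False)] = True
        return mask

    def fitness(mask):
        clf = RandomForestClassifier(n_estimators=50, random_state=seed)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    population = [random_mask() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 2]  # keep fitter half
        children = []
        for p in parents:
            child = p.copy()
            on, off = np.flatnonzero(child), np.flatnonzero(~child)
            child[rng.choice(on)], child[rng.choice(off)] = False, True           # swap mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Usage (X would be 300-dimensional Word2Vec document vectors, y the review labels):
# best_mask = ga_select(X, y); X_reduced = X[:, best_mask]
```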

Quick visual Boolean operation on heavy mesh models
YANG Zhanglong, CHEN Ming
Journal of Computer Applications    2017, 37 (7): 2050-2056.   DOI: 10.11772/j.issn.1001-9081.2017.07.2050
A new algorithm was proposed to meet the instantaneous response requirements of Boolean operations between large-scale mesh models in product design. Discrete sampling was performed on the mesh models to obtain ray-segment point cloud models, so that the three-dimensional Boolean operation between triangular meshes was converted into a one-dimensional operation between ray segments; the intersection points around the overlapped regions of the mesh models could be solved and interpolated accurately, so the Boolean operation was sped up significantly and the design efficiency of products with complex topology was greatly improved. The point cloud model obtained by the proposed algorithm can be rendered with the same effect as the triangular mesh model, and the method can be adopted in engineering applications.
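
Reducing the problem to one dimension means each ray carries a set of inside/outside segments per model, on which Boolean operations become interval operations; a minimal sketch follows, with the interval representation assumed for illustration.

```python
def interval_bool(a, b, op):
    """Boolean operation on sorted, disjoint 1-D intervals along one ray.

    a, b: lists of (start, end) tuples; op: 'union', 'intersection' or 'difference'.
    Sweeps the endpoints and keeps spans where the inside/outside predicate holds.
    """
    events = sorted({x for s, e in a + b for x in (s, e)})
    keep = {"union": lambda p, q: p or q,
            "intersection": lambda p, q: p and q,
            "difference": lambda p, q: p and not q}[op]

    def inside(intervals, x):
        return any(s <= x < e for s, e in intervals)

    result = []
    for lo, hi in zip(events, events[1:]):
        mid = 0.5 * (lo + hi)
        if keep(inside(a, mid), inside(b, mid)):
            if result and result[-1][1] == lo:
                result[-1] = (result[-1][0], hi)      # merge with the previous span
            else:
                result.append((lo, hi))
    return result

# Ray segments inside model A and model B along the same ray:
print(interval_bool([(0.0, 4.0)], [(2.0, 6.0)], "difference"))  # [(0.0, 2.0)]
```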
Efficient and secure deduplication cloud storage scheme based on proof of ownership by Bloom filter
LIU Zhusong, YANG Zhangjie
Journal of Computer Applications    2017, 37 (3): 766-770.   DOI: 10.11772/j.issn.1001-9081.2017.03.766
Convergent encryption is generally used in deduplication cloud storage systems: data is encrypted using its hash value as the encryption key, so that identical data yields identical ciphertext and duplicate data can be removed, while Proof of oWnership (PoW) is used to verify the authenticity of user data and protect data security. Aiming at the problem that the time overhead of PoW is too high, which degrades overall system performance, an efficient and secure method based on a Bloom Filter (BF) was proposed to verify the user's hash values and initialization values. On this basis, a BF scheme supporting fine-grained data deduplication was proposed: PoW is performed only when file-level data is duplicated; otherwise, only partial block-level duplication detection is needed. The simulation results show that the key space overhead of the proposed BF scheme is lower than that of the classical Baseline scheme, its time cost is also lower, and the performance advantage of the BF scheme becomes more obvious as the data size increases.
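
A minimal Bloom filter sketch, used here as a fast membership pre-check before running a full proof of ownership; the hash construction and sizes are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k salted SHA-256 hashes over an m-bit array."""

    def __init__(self, m=8192, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Server-side pre-check: only run the expensive ownership proof if the digest may be stored
stored = BloomFilter()
stored.add("sha256-of-file-A")
print(stored.might_contain("sha256-of-file-A"))   # True
print(stored.might_contain("sha256-of-file-B"))   # False (with high probability)
```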
LI Zuoyong ZHANG Xiaoli WANG Jiayang ZHANG Zhengjian
Journal of Computer Applications    2014, 34 (6): 1641-1644.   DOI: 10.11772/j.issn.1001-9081.2014.06.1641

Aiming at the limitations of the simple Monkey-King Genetic Algorithm (MKGA), which easily falls into local minima and has poor stability, an MKGA Hybridized with Immune Evolution (MKGAIEH) was proposed. MKGAIEH divides the total population into several sub-groups. To make full use of the information of the best individual (monkey king) of the total population, the Immune Evolutionary Algorithm (IEA) was introduced into the iterative calculation. In addition, for the other individuals in the sub-groups, crossover and mutation operations were performed with the monkey kings of the sub-groups and of the total population. After the local searches of all sub-groups were completed, the solutions of the sub-groups were mixed again. As the iteration proceeds, this strategy of combining global information exchange with local search not only avoids premature convergence, but also approximates the global optimal solution with higher accuracy. Comparison experiments on 6 test functions were conducted with MKGAIEH, MKGA, Improved MKGA (IMKGA), Bee Evolutionary Genetic Algorithm (BEGA), the algorithm of Shuffled Frog Leaping based on Immune Evolutionary Particle Swarm Optimization (IEPSOSFLA), and the Common climbing Operator Genetic Algorithm (COGA). The results show that MKGAIEH finds the global optimal solutions of all 6 test functions, and on 5 of them its mean values and standard deviations reach the best accuracy, improving on the comparison algorithms by several orders of magnitude. Therefore, MKGAIEH has better search ability and stability.
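
The immune-evolution refinement of the best individual can be illustrated with a toy clonal-selection step (clone, mutate, keep the fittest); the operators and schedule here are assumptions, not the MKGAIEH settings.

```python
import numpy as np

def immune_refine(best, fitness, n_clones=10, scale=0.1, rounds=20, bounds=(-5.0, 5.0)):
    """Clonal-selection style local search around the current monkey king (illustrative)."""
    rng = np.random.default_rng(0)
    lower, upper = bounds
    for _ in range(rounds):
        clones = best + rng.normal(scale=scale, size=(n_clones, best.size))  # hypermutation
        clones = np.clip(clones, lower, upper)
        candidates = np.vstack([best, clones])
        best = candidates[np.argmin([fitness(c) for c in candidates])]       # keep the fittest
    return best

def sphere(x):                                           # simple test function
    return float(np.sum(x ** 2))

print(immune_refine(np.array([2.0, -3.0]), sphere))      # moves toward the origin
```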

Universal designated verifier signcryption scheme in standard model
MING Yang ZHANG Lin HAN Juan ZHOU Jun
Journal of Computer Applications    2014, 34 (2): 464-468.  
Concerning signature security problems in practice, a universal designated verifier signcryption scheme in the standard model was proposed based on Waters' technique. Signcryption is a cryptographic primitive that performs encryption and signing in a single logical step. A universal designated verifier signature allows a signature holder who possesses a signer's signature to convince a designated verifier of this fact, while the verifier cannot transfer such conviction to anyone else; only the designated verifier can verify the existence of the signature. By combining the universal designated verifier mechanism with signcryption, the scheme eliminates the secure channel that the signer and the signature holder would otherwise require for signature transmission. The scheme was proved secure under the Computational Bilinear Diffie-Hellman (CBDH) assumption. Compared with existing schemes, the proposed scheme has better computational efficiency.
Parallel framework based on aspect-oriented programming and run-time reflection
ZHANG Yang ZHANG Dongwen WANG Yizhuo
Journal of Computer Applications    2014, 34 (11): 3096-3099.   DOI: 10.11772/j.issn.1001-9081.2014.11.3096

JOMP, the OpenMP-like implementation in Java, needs to be optimized, so a parallel framework that separates parallel logic from functional logic was proposed. The framework was implemented as a parallel library named waxberry: the parts that need to be processed in parallel are annotated and then executed using Aspect-Oriented Programming (AOP) and run-time reflection. AOP is used to separate the parallel parts from the core ones and to weave them together, while run-time reflection is used to obtain the related information during parallel execution. The waxberry library was evaluated with the Java Grande Forum (JGF) benchmarks on a quad-core processor. The experimental results show that waxberry achieves good performance.
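
waxberry itself is a Java library; as a language-neutral illustration of the same separation idea, the following Python sketch uses a decorator as a stand-in for the AOP annotation, keeping the parallel logic out of the functional code. All names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

def parallel_map(workers=4):
    """Decorator: run the wrapped per-item function over an iterable in parallel.

    The functional code stays a plain per-item function; the parallel logic lives
    entirely in this wrapper (a Python stand-in for AOP-style weaving).
    """
    def decorate(func):
        @wraps(func)
        def wrapper(items, *args, **kwargs):
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(lambda item: func(item, *args, **kwargs), items))
        return wrapper
    return decorate

@parallel_map(workers=4)
def normalize(value, factor):
    return value / factor                      # pure functional logic, no threading code

print(normalize([10, 20, 30, 40], factor=10.0))  # [1.0, 2.0, 3.0, 4.0]
```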

Implementation and optimization of speaker recognition algorithm based on SOPC
HE Wei XU Yang ZHANG Ling
Journal of Computer Applications    2012, 32 (05): 1463-1466.  
Making use of the flexible programmability of SOPC (System On a Programmable Chip) and the strong parallel processing ability of FPGA (Field Programmable Gate Array), a speaker recognition algorithm was implemented on an FPGA, and the system was optimized in terms of identification speed and accuracy. The principle of the speaker recognition algorithm was studied, and an SOPC was constructed according to its characteristics. Ping-pong operation was used for voice collection and processing, the time-consuming modules of the algorithm were implemented in FPGA hardware to speed up recognition, and a Genetic Algorithm (GA) was used to generate the template codebook to improve identification accuracy. Finally, the system realized identity recognition with high real-time performance and accuracy.
Deep Web query interface identification approach based on label coding
WANG Yan SONG Bao-yan ZHANG Jia-yang ZHANG Hong-mei LI Xiao-guang
Journal of Computer Applications    2011, 31 (05): 1351-1354.   DOI: 10.3724/SP.J.1087.2011.01351
Concerning the computational complexity, maintenance cost, and matching ambiguity of current query interface identification approaches, a Deep Web query interface identification approach based on label coding was proposed after a thorough study of existing approaches. The approach codes and groups labels according to the directivity and the irregularity of arrangement of the query interface. Identification methods for simple and composite attributes, as well as a processing method for isolated texts, were proposed, with each label group treated as an independent unit for identifying feature information. The texts matching the elements are determined by constraints on the label subscripts, which greatly reduces the number of texts considered when matching an element and avoids the matching ambiguity caused by heavy use of heuristic algorithms, and the presentation of nested information is handled effectively and efficiently by two rounds of clustering.
Study on method for mining concurrent sequential pattern
Yang ZHANG Wei-ru CHEN Shan-shan CHEN
Journal of Computer Applications    2009, 29 (11): 3096-3099.  
The definitions of concurrent relation and concurrence threshold were restated, and on this basis the concept of the concurrent sequential pattern was given. A method to mine concurrent sequential patterns, named the supporting-vector-based concurrent sequential pattern mining method, was also proposed. In this method, by finding the supporting vector of each element of the sequential patterns, the two-branch concurrent sequential patterns and their supporting vectors can be obtained. The k-branch concurrent sequential patterns and their supporting vectors can then be acquired from the supporting vector of any (k-1)-branch concurrent sequential pattern and the supporting vector of any sequential pattern, so that all k-branch concurrent sequential patterns can be found. Experiments show that the method is efficient and feasible.