Most Read articles

    Review of multi-modal medical image segmentation based on deep learning
    Meng DOU, Zhebin CHEN, Xin WANG, Jitao ZHOU, Yu YAO
    Journal of Computer Applications    2023, 43 (11): 3385-3395.   DOI: 10.11772/j.issn.1001-9081.2022101636

    Multi-modal medical images can provide clinicians with rich information about target areas (such as tumors, organs or tissues). However, effective fusion and segmentation of multi-modal images remains a challenging problem due to the independence and complementarity of the modalities. Traditional image fusion methods have difficulty addressing this problem, which has led to widespread research on deep learning based multi-modal medical image segmentation algorithms. The multi-modal medical image segmentation task based on deep learning was reviewed in terms of principles, techniques, problems, and prospects. Firstly, the general theory of deep learning and multi-modal medical image segmentation was introduced, including the basic principles and development of deep learning and Convolutional Neural Network (CNN), as well as the importance of the multi-modal medical image segmentation task. Secondly, the key concepts of multi-modal medical image segmentation were described, including data dimension, preprocessing, data augmentation, loss function, and post-processing. Thirdly, multi-modal segmentation networks based on different fusion strategies were summarized and analyzed. Finally, several common problems in medical image segmentation were discussed, and a summary and prospects for future research were given.

    Review of application analysis and research progress of deep learning in weather forecasting
    Runting DONG, Li WU, Xiaoying WANG, Tengfei CAO, Jianqiang HUANG, Qin GUAN, Jiexia WU
    Journal of Computer Applications    2023, 43 (6): 1958-1968.   DOI: 10.11772/j.issn.1001-9081.2022050745

    With the advancement of technologies such as sensor networks and global positioning systems, the volume of meteorological data with both temporal and spatial characteristics has exploded, and research on deep learning models for Spatiotemporal Sequence Forecasting (STSF) has developed rapidly. However, the traditional machine learning methods long applied to weather forecasting perform unsatisfactorily in extracting the temporal correlations and spatial dependencies of the data, while deep learning methods, which extract features automatically through artificial neural networks, can effectively improve the accuracy of weather forecasting and perform especially well in modeling long-term spatial information. At the same time, deep learning models driven by observational data are being combined with Numerical Weather Prediction (NWP) models based on physical theories to build hybrid models with higher prediction accuracy and longer prediction horizons. On this basis, the application analysis and research progress of deep learning in the field of weather forecasting were reviewed. Firstly, the deep learning problems in weather forecasting and the classical deep learning problems were compared from three aspects: data format, problem model and evaluation metrics. Then, the development history and application status of deep learning in weather forecasting were reviewed, and the latest progress in combining deep learning technologies with NWP was summarized and analyzed. Finally, future development directions and research focuses were prospected to provide a reference for future deep learning research in the field of weather forecasting.

    Embedded road crack detection algorithm based on improved YOLOv8
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023050635
    Online available: 01 September 2023

    Multimodal knowledge graph representation learning: a review
    Chunlei WANG, Xiao WANG, Kai LIU
    Journal of Computer Applications    2024, 44 (1): 1-15.   DOI: 10.11772/j.issn.1001-9081.2023050583

    By comprehensively comparing models of traditional knowledge graph representation learning, including their advantages, disadvantages and applicable tasks, the analysis shows that a traditional single-modal knowledge graph cannot represent knowledge well. Therefore, how to use multimodal data such as text, image, video, and audio for knowledge graph representation learning has become an important research direction. At the same time, the commonly used multimodal knowledge graph datasets were analyzed in detail to provide data support for relevant researchers. On this basis, knowledge graph representation learning models under the multimodal fusion of text, image, video, and audio were further discussed, and the various models were summarized and compared. Finally, the effect of multimodal knowledge graph representation on enhancing classical applications in practice, including knowledge graph completion, question answering systems, multimodal generation and recommendation systems, was summarized, and future research work was prospected.


    Technology application prospects and risk challenges of large language model
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023060885
    Online available: 14 September 2023

    Dynamic multi-domain adversarial learning method for cross-subject motor imagery EEG signals
    Xuan CAO, Tianjian LUO
    Journal of Computer Applications    2024, 44 (2): 645-653.   DOI: 10.11772/j.issn.1001-9081.2023030286

    Decoding motor imagery EEG (ElectroEncephaloGraphy) signals is one of the crucial techniques for building Brain Computer Interface (BCI) systems. Because EEG signals are costly to acquire, vary greatly across subjects, and are characterized by strong time variability and low signal-to-noise ratio, constructing cross-subject pattern recognition methods becomes the key problem of such studies. To solve this problem, a cross-subject dynamic multi-domain adversarial learning method was proposed. Firstly, the covariance matrix alignment method was used to align the given EEG samples. Then, a global discriminator was used to align the marginal distributions of different domains, and multiple class-wise local discriminators were used to align the conditional distribution of each class. The self-adaptive adversarial factor of the multi-domain discriminator was learned automatically during training iterations. Based on this dynamic multi-domain adversarial learning strategy, the Dynamic Multi-Domain Adversarial Network (DMDAN) model could learn deep features that generalize across subject domains. Experimental results on the public BCI Competition IV 2A and 2B datasets show that the DMDAN model improves the ability to learn domain-invariant features, achieving 1.80 and 2.52 percentage points higher average classification accuracy on dataset 2A and dataset 2B respectively compared with the existing adversarial learning method Deep Representation Domain Adaptation (DRDA). It can be seen that the DMDAN model improves the decoding performance on cross-subject motor imagery EEG signals and generalizes across datasets.
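
    To make the adversarial objective above concrete, the following is a minimal PyTorch-style sketch, not the paper's code: the discriminator architecture, tensor shapes and the blending factor omega (which the paper learns self-adaptively during training) are illustrative assumptions. A global discriminator aligns marginal distributions, per-class local discriminators weighted by predicted class probabilities align conditional distributions, and omega blends the two losses.

        import torch
        import torch.nn as nn

        class DomainDiscriminator(nn.Module):
            # Two-layer domain classifier: predicts source (0) vs. target (1).
            def __init__(self, feat_dim, hidden=128):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, 2))

            def forward(self, x):
                return self.net(x)

        def dynamic_adversarial_loss(feat_s, feat_t, prob_s, prob_t,
                                     global_d, local_ds, omega):
            # Blend marginal (global) and conditional (per-class local)
            # adversarial losses with the dynamic factor omega in [0, 1].
            ce = nn.CrossEntropyLoss()
            src = torch.zeros(feat_s.size(0), dtype=torch.long)
            tgt = torch.ones(feat_t.size(0), dtype=torch.long)
            loss_g = ce(global_d(feat_s), src) + ce(global_d(feat_t), tgt)
            loss_l = 0.0
            for c, disc in enumerate(local_ds):
                # Weight features by the predicted probability of class c.
                loss_l = loss_l + ce(disc(feat_s * prob_s[:, c:c + 1]), src) \
                                + ce(disc(feat_t * prob_t[:, c:c + 1]), tgt)
            loss_l = loss_l / len(local_ds)
            return (1 - omega) * loss_g + omega * loss_l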

    Small object detection algorithm of YOLOv5 for safety helmet
    Zongzhe LYU, Hui XU, Xiao YANG, Yong WANG, Weijian WANG
    Journal of Computer Applications    2023, 43 (6): 1943-1949.   DOI: 10.11772/j.issn.1001-9081.2022060855

    Safety helmet wearing is a powerful guarantee of workers’ personal safety. Since collected images of safety helmet wearing are characterized by high density, small pixel areas and detection difficulty, a small object detection algorithm based on YOLOv5 (You Only Look Once version 5) for safety helmets was proposed. Firstly, based on the YOLOv5 algorithm, the bounding box regression loss function and the confidence prediction loss function were optimized to improve the learning of dense small-object features during training. Secondly, slicing aided fine-tuning and Slicing Aided Hyper Inference (SAHI) were introduced so that, by slicing the pictures fed into the network, small objects produced larger pixel areas, improving the effect of network inference and fine-tuning. In the experiments, a dataset containing dense small objects of safety helmets in industrial scenes was used for training. The experimental results show that, compared with the original YOLOv5 algorithm, the improved algorithm increases the precision by 0.26 percentage points and the recall by 0.38 percentage points, and the mean Average Precision (mAP) of the proposed algorithm reaches 95.77%, an improvement of 0.46 to 13.27 percentage points over several algorithms including the original YOLOv5. The results verify that introducing slicing aided fine-tuning and SAHI improves the precision and confidence of small object detection and recognition in dense scenes, reduces false and missed detections, and can effectively satisfy the requirements of safety helmet wearing detection.
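
    The slicing idea can be sketched briefly (the tile size, overlap ratio and the detect_fn interface are assumptions for illustration; the published SAHI library offers a richer API): the image is cut into overlapping tiles so that small objects cover more pixels, the detector runs per tile, boxes are shifted back to full-image coordinates, and duplicates on tile borders are merged by non-maximum suppression.

        import numpy as np

        def iou(a, b):
            # Intersection-over-union of two (x1, y1, x2, y2, score, cls) boxes.
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / (area(a) + area(b) - inter + 1e-9)

        def nms(boxes, iou_thr=0.5):
            # Greedy non-maximum suppression, highest score first.
            kept = []
            for b in sorted(boxes, key=lambda b: b[4], reverse=True):
                if all(b[5] != k[5] or iou(b, k) < iou_thr for k in kept):
                    kept.append(b)
            return kept

        def sliced_inference(image, detect_fn, tile=640, overlap=0.2):
            # Detect on overlapping tiles, then map boxes back and merge.
            step = int(tile * (1 - overlap))
            h, w = image.shape[:2]
            boxes = []
            for y in range(0, max(h - tile, 0) + 1, step):
                for x in range(0, max(w - tile, 0) + 1, step):
                    for (x1, y1, x2, y2, s, c) in detect_fn(image[y:y + tile,
                                                                  x:x + tile]):
                        boxes.append((x1 + x, y1 + y, x2 + x, y2 + y, s, c))
            return nms(boxes)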

    Few-shot text classification method based on prompt learning
    Bihui YU, Xingye CAI, Jingxuan WEI
    Journal of Computer Applications    2023, 43 (9): 2735-2740.   DOI: 10.11772/j.issn.1001-9081.2022081295

    Text classification tasks usually rely on sufficient labeled data. Concerning the over-fitting of classification models on small samples in low-resource scenarios, a few-shot text classification method based on prompt learning, called BERT-P-Tuning, was proposed. Firstly, the pre-trained model BERT (Bidirectional Encoder Representations from Transformers) was used to learn the optimal prompt template from the labeled samples. Then, the prompt template with a vacancy was attached to each sample, transforming the text classification task into a cloze task. Finally, the final labels were obtained by predicting the highest-probability word at the vacant position and applying the mapping between words and labels. Experimental results on the short text classification tasks of the public dataset FewCLUE show that the proposed method significantly improves on the evaluation metrics compared with the BERT fine-tuning based method. Specifically, the proposed method increases accuracy and F1 score by 25.2 and 26.7 percentage points respectively on the binary classification task, and by 6.6 and 8.0 percentage points respectively on the multi-class classification task. Compared with the PET (Pattern Exploiting Training) method, which constructs templates manually, the proposed method increases accuracy by 2.9 and 2.8 percentage points respectively on the two tasks, and F1 score by 4.4 and 4.2 percentage points respectively, verifying the effectiveness of applying pre-trained models to few-shot tasks.
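
    The cloze reformulation can be illustrated with a discrete template (a sketch only: the paper learns a continuous template through P-tuning, and the template text and single-character label words below are assumptions):

        import torch
        from transformers import BertForMaskedLM, BertTokenizer

        tok = BertTokenizer.from_pretrained("bert-base-chinese")
        model = BertForMaskedLM.from_pretrained("bert-base-chinese")
        verbalizer = {"positive": "好", "negative": "差"}  # hypothetical label words

        def classify(text):
            # Attach a template with one [MASK] vacancy, then compare the
            # MLM logits of the label words at the masked position.
            prompt = f"这是一条 {tok.mask_token} 评论:{text}"
            inputs = tok(prompt, return_tensors="pt", truncation=True)
            mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
            with torch.no_grad():
                logits = model(**inputs).logits[0, mask_pos]
            scores = {lbl: logits[tok.convert_tokens_to_ids(w)].item()
                      for lbl, w in verbalizer.items()}
            return max(scores, key=scores.get)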

    Review of lifelong learning in computer vision
    Yichi CHEN, Bin CHEN
    Journal of Computer Applications    2023, 43 (6): 1785-1795.   DOI: 10.11772/j.issn.1001-9081.2022050766

    LifeLong Learning (LLL), as an emerging method, breaks the limitations of traditional machine learning and gives models the ability to accumulate, optimize and transfer knowledge during learning, much as human beings do. In recent years, with the wide application of deep learning, more and more studies attempt to solve the catastrophic forgetting problem in deep neural networks, escape the stability-plasticity dilemma, and apply LLL methods to a wide variety of real-world scenarios to promote the development of artificial intelligence from weak to strong. Focusing on the field of computer vision, firstly, LLL methods for image classification tasks were classified into four types: data-driven methods, optimization process based methods, network structure based methods, and knowledge combination based methods. Then, typical applications of LLL methods in other visual tasks and the related evaluation metrics were introduced. Finally, the deficiencies of current LLL methods were discussed, and future development directions were proposed.

    Review of object pose estimation in RGB images based on deep learning
    Yi WANG, Jie XIE, Jia CHENG, Liwei DOU
    Journal of Computer Applications    2023, 43 (8): 2546-2555.   DOI: 10.11772/j.issn.1001-9081.2022071022

    6 Degree of Freedom (DoF) pose estimation is a key technology in computer vision and robotics. By estimating the 6 DoF pose of an object, that is, 3 DoF translation and 3 DoF rotation, from a given input image, it has become a crucial task in fields such as robot manipulation, automatic driving and augmented reality. Firstly, the concept of 6 DoF pose and the problems of traditional methods based on feature point correspondence, template matching, and three-dimensional feature descriptors were introduced. Then, the current mainstream deep learning based 6 DoF pose estimation algorithms were introduced in detail from the angles of feature correspondence-based, pixel voting-based and regression-based methods, as well as methods oriented to multi-object instances, synthetic data, and category level. At the same time, the datasets and evaluation metrics commonly used in pose estimation were summarized, and some algorithms were evaluated experimentally to show their performance. Finally, the challenges and key future research directions of pose estimation were given.

    Gradient descent with momentum algorithm based on differential privacy in convolutional neural network
    Yu ZHANG, Ying CAI, Jianyang CUI, Meng ZHANG, Yanfang FAN
    Journal of Computer Applications    2023, 43 (12): 3647-3653.   DOI: 10.11772/j.issn.1001-9081.2022121881

    To address the privacy leakage caused by model parameters memorizing features of the data during the training of Convolutional Neural Network (CNN) models, a Gradient Descent with Momentum algorithm based on Differential Privacy in CNN (DPGDM) was proposed. Firstly, Gaussian noise satisfying differential privacy was added to the gradients in the backpropagation step of model optimization, and the noised gradient values participated in the parameter update, achieving differential privacy protection for the model as a whole. Secondly, to reduce the impact of the introduced differential privacy noise on the convergence speed of the model, a learning rate decay strategy was designed and the gradient descent with momentum algorithm was improved accordingly. Finally, to reduce the influence of noise on model accuracy, the noise scale was adjusted dynamically during optimization, changing the amount of noise added to the gradient in each iteration. Experimental results show that, compared with the DP-SGD (Differentially Private Stochastic Gradient Descent) algorithm, the proposed algorithm improves model accuracy by about 5 and 4 percentage points under privacy budgets of 0.3 and 0.5 respectively, proving that the proposed algorithm improves model usability while achieving privacy protection of the model.
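
    One update step of such a differentially private momentum scheme might look as follows (a sketch under assumptions: the clipping bound and the inverse-time decay schedules for the noise scale and the learning rate are illustrative, not the paper's exact design):

        import numpy as np

        def dpgdm_step(params, grads, state, t, clip=1.0, sigma0=1.0,
                       lr0=0.1, mu=0.9, decay=0.01):
            sigma = sigma0 / (1 + decay * t)  # dynamically shrinking noise scale
            lr = lr0 / (1 + decay * t)        # learning rate decay strategy
            # Clip the gradient so its sensitivity is bounded by `clip`.
            grads = grads / max(1.0, np.linalg.norm(grads) / clip)
            # Gaussian noise calibrated to the clipping bound.
            noisy = grads + np.random.normal(0.0, sigma * clip, grads.shape)
            # Momentum accumulation on the noised gradient.
            state["v"] = mu * state.get("v", 0.0) + noisy
            return params - lr * state["v"], state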

    Survey of online learning resource recommendation
    Yongfeng DONG, Yacong WANG, Yao DONG, Yahan DENG
    Journal of Computer Applications    2023, 43 (6): 1655-1663.   DOI: 10.11772/j.issn.1001-9081.2022091335

    In recent years, more and more schools have come to use online education widely. However, it is hard for learners to find what they need among the massive learning resources on the Internet. Therefore, researching online learning resource recommendation and performing personalized recommendation for learners is very important, as it helps learners obtain the high-quality learning resources they need quickly. The research status of online learning resource recommendation was analyzed and summarized from the following five aspects. Firstly, the current work of domestic and international online education platforms on learning resource recommendation was summed up. Secondly, four types of algorithms, taking knowledge point exercises, learning paths, learning videos and learning courses respectively as recommendation targets, were analyzed and discussed. Thirdly, from the perspectives of learners and learning resources, and using specific algorithms as examples, three kinds of learning resource recommendation algorithms, based on learners’ portraits, learners’ behaviors and learning resource ontologies respectively, were introduced in detail. Moreover, public online learning resource datasets were listed. Finally, current challenges and future research directions were analyzed.

    Review of YOLO algorithm and its application to object detection in autonomous driving scenes
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023060889
    Online available: 11 September 2023

    Feature selection method for graph neural network based on network architecture design
    Dapeng XU, Xinmin HOU
    Journal of Computer Applications    2024, 44 (3): 663-670.   DOI: 10.11772/j.issn.1001-9081.2023030353

    In recent years, researchers have proposed many improved model architecture designs for Graph Neural Network (GNN), driving performance improvements in various prediction tasks. However, most GNN variants start from the assumption that node features are equally important, which is not the case. To solve this problem, a feature selection method was proposed to improve existing models and select an important feature subset for the dataset. The proposed method consists of two components: a feature selection layer and separate label-feature mapping. In the feature selection layer, a softmax normalizer and a feature “soft selector” were used for feature selection, and the model structure was designed under the idea of separate label-feature mapping to select the related feature subset for each label; the union of these related feature subsets then gave the important feature subset of the dataset. Graph ATtention network (GAT) and GATv2 were selected as benchmark models, and the algorithm was applied to them to obtain new models. Experimental results show that when the proposed models perform node classification on six datasets, their accuracies improve by 0.83% to 8.79% over the baseline models. The new models also select the corresponding important feature subsets for the six datasets, in which the number of features accounts for 3.94% to 12.86% of the total number of features in the respective datasets. After using the important feature subset as the new input of a benchmark model, more than 95% of the accuracy obtained with all features is still achieved; that is, the scale of the model is reduced while the accuracy is preserved. It can be seen that the proposed algorithm improves the accuracy of node classification and effectively selects the corresponding important feature subset for a dataset.
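
    The feature “soft selector” can be sketched as a learnable, softmax-normalized weight per input feature (shapes and the thresholding rule below are illustrative assumptions):

        import torch
        import torch.nn as nn

        class FeatureSelectionLayer(nn.Module):
            # One learnable score per feature, softmax-normalized so the
            # weights are positive and sum to one ("soft selection").
            def __init__(self, num_feats):
                super().__init__()
                self.scores = nn.Parameter(torch.zeros(num_feats))

            def forward(self, x):                      # x: (num_nodes, num_feats)
                w = torch.softmax(self.scores, dim=0)
                return x * w

        # After training, an important subset can be read off by keeping the
        # features whose weight exceeds, e.g., the uniform level 1 / num_feats.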

    UAV cluster cooperative combat decision-making method based on deep reinforcement learning
    Lin ZHAO, Ke LYU, Jing GUO, Chen HONG, Xiancai XIANG, Jian XUE, Yong WANG
    Journal of Computer Applications    2023, 43 (11): 3641-3646.   DOI: 10.11772/j.issn.1001-9081.2022101511

    When an Unmanned Aerial Vehicle (UAV) cluster attacks ground targets, it is divided into two formations: a strike UAV cluster that attacks the targets and an auxiliary UAV cluster that pins down the enemy. When the auxiliary UAVs choose between the action strategies of aggressive attack and conserving strength, the mission scenario resembles a public goods game in which cooperators earn less than defectors. Based on this, a decision-making method for cooperative combat of UAV clusters based on deep reinforcement learning was proposed. First, a public goods game based UAV cluster combat model was built to simulate the conflict of interest between individual and group in the cooperation of intelligent UAV clusters. Then, the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm was used to solve for the most reasonable combat decision of the auxiliary UAV cluster, achieving cluster victory at minimum loss cost. Training and experiments were performed with different numbers of UAVs. The results show that, compared with the training effects of the IDQN (Independent Deep Q-Network) and ID3QN (Imitative Dueling Double Deep Q-Network) algorithms, the proposed algorithm converges best: its winning rate reaches 100% with four auxiliary UAVs, and it also significantly outperforms the comparison algorithms with other UAV numbers.
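
    The underlying conflict of interest is the standard public goods payoff, sketched below (the multiplication factor r and the endowment are illustrative): with r smaller than the group size, each individual maximizes its own payoff by contributing nothing, while the group does best when everyone contributes.

        def public_goods_payoff(contributions, r=1.5, endowment=1.0):
            # The pooled contribution is multiplied by r and shared equally,
            # so a defector free-rides on the cooperators' contributions.
            n = len(contributions)
            share = r * sum(contributions) / n
            return [endowment - c + share for c in contributions]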

    Poisoning attack detection scheme based on generative adversarial network for federated learning
    Qian CHEN, Zheng CHAI, Zilong WANG, Jiawei CHEN
    Journal of Computer Applications    2023, 43 (12): 3790-3798.   DOI: 10.11772/j.issn.1001-9081.2022121831

    Federated Learning (FL) emerges as a novel privacy-preserving Machine Learning (ML) paradigm. However, the distributed training structure of FL is vulnerable to poisoning attack, in which adversaries contaminate the global model by uploading poisoned local models, slowing the convergence and degrading the prediction accuracy of the global model. To solve this problem, a poisoning attack detection scheme based on Generative Adversarial Network (GAN) was proposed. Firstly, benign local models were fed into the GAN to produce testing samples. Then, the testing samples were used to examine the local models uploaded by the clients. Finally, the poisoned models were eliminated according to the testing metrics. Meanwhile, two testing metrics, F1 score loss and accuracy loss, were defined to detect poisoned models, extending the detection scope from a single type of poisoning attack to all types. Besides, a threshold determination method was designed to deal with misjudgment, so that robustness against misjudgment was ensured. Experimental results on the MNIST and Fashion-MNIST datasets show that the proposed scheme can generate high-quality testing samples and then detect and eliminate poisoned models. Compared with the global models trained with a detection scheme that gathers test data directly from clients and one that generates test data but uses only test accuracy as the metric, the global model trained with the proposed scheme shows a significant accuracy improvement of 2.7 to 12.2 percentage points.
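
    A sketch of the elimination step (the benign reference model and the threshold values are assumptions; y_test here stands for the labels of the GAN-generated testing samples):

        from sklearn.metrics import accuracy_score, f1_score

        def filter_poisoned(local_models, benign_model, X_test, y_test,
                            tau_f1, tau_acc):
            # F1 score loss / accuracy loss: the metric drop of an uploaded
            # model relative to a benign reference on the generated samples.
            base_pred = benign_model.predict(X_test)
            base_f1 = f1_score(y_test, base_pred, average="macro")
            base_acc = accuracy_score(y_test, base_pred)
            kept = []
            for m in local_models:
                pred = m.predict(X_test)
                f1_loss = base_f1 - f1_score(y_test, pred, average="macro")
                acc_loss = base_acc - accuracy_score(y_test, pred)
                if f1_loss <= tau_f1 and acc_loss <= tau_acc:
                    kept.append(m)  # only below-threshold models are aggregated
            return kept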

    Survey of data-driven intelligent cloud-edge collaboration
    Pengxin TIAN, Guannan SI, Zhaoliang AN, Jianxin LI, Fengyu ZHOU
    Journal of Computer Applications    2023, 43 (10): 3162-3169.   DOI: 10.11772/j.issn.1001-9081.2022091418

    With the rapid development of the Internet of Things (IoT), the large amount of data generated in edge scenarios such as sensors often needs to be transmitted to cloud nodes for processing, which brings huge transmission cost and processing delay. Cloud-edge collaboration provides a solution to these problems. Firstly, on the basis of a comprehensive investigation and analysis of the development of cloud-edge collaboration, and combined with current research ideas and progress in intelligent cloud-edge collaboration, the data acquisition and analysis, computation offloading, and model-based intelligent optimization technologies in the cloud-edge architecture were analyzed and discussed with emphasis. Secondly, the functions and applications of these technologies in intelligent cloud-edge collaboration were analyzed in depth from both the edge and the cloud sides, and real-world application scenarios of intelligent cloud-edge collaboration technology were discussed. Finally, current challenges and future development directions of intelligent cloud-edge collaboration were pointed out.

    Prompt learning based unsupervised relation extraction model
    Menglin HUANG, Lei DUAN, Yuanhao ZHANG, Peiyan WANG, Renhao LI
    Journal of Computer Applications    2023, 43 (7): 2010-2016.   DOI: 10.11772/j.issn.1001-9081.2022071133

    Unsupervised relation extraction aims to extract the semantic relations between entities from unlabeled natural language text. Currently, unsupervised relation extraction models based on the Variational Auto-Encoder (VAE) architecture provide supervised signals for training through the reconstruction loss, which offers a new idea for completing unsupervised relation extraction tasks. Focusing on the issue that this kind of model cannot understand contextual information effectively and relies on dataset inductive biases, a Prompt-based learning based Unsupervised Relation Extraction (PURE) model was proposed, comprising a relation extraction module and a link prediction module. In the relation extraction module, a context-aware prompt template function was designed to fuse contextual information, and the unsupervised relation extraction task was converted into a mask prediction task, making full use of the knowledge obtained during the pre-training phase to extract relations. In the link prediction module, supervised signals were provided for the relation extraction module by predicting the missing entities in triples, assisting model training. Extensive experiments were carried out on two public real-world relation extraction datasets. The results show that the PURE model uses contextual information effectively, does not rely on dataset inductive biases, and improves the B-cubed F1 evaluation metric by 3.3 percentage points on the NYT dataset compared with the state-of-the-art VAE architecture based model UREVA (Variational Autoencoder-based Unsupervised Relation Extraction model).
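
    As a rough illustration of such a template function (the wording is a hypothetical discrete template; the paper's context-aware template is learned):

        def relation_prompt(sentence, head, tail, mask_token="[MASK]"):
            # Append a cloze so a masked language model can fill in a
            # relation word conditioned on the full sentence context.
            return (f"{sentence} The relation between {head} "
                    f"and {tail} is {mask_token}.")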

    Few-shot object detection algorithm based on Siamese network
    Junjian JIANG, Dawei LIU, Yifan LIU, Yougui REN, Zhibin ZHAO
    Journal of Computer Applications    2023, 43 (8): 2325-2329.   DOI: 10.11772/j.issn.1001-9081.2022121865

    Deep learning based algorithms such as YOLO (You Only Look Once) and Faster Region-Convolutional Neural Network (Faster R-CNN) require a huge amount of training data to ensure model precision, yet in many scenarios data is difficult to obtain and labeling is costly, and the lack of massive training data also limits the detection range. Aiming at these problems, a few-shot object detection algorithm based on Siamese network, namely SiamDet, was proposed, with the purpose of training an object detection model with a degree of generalization ability from a few annotated images. Firstly, a Siamese network based on depthwise separable convolution was proposed, and a feature extraction network ResNet-DW was designed to solve the overfitting problem caused by insufficient samples. Secondly, the object detection algorithm SiamDet was proposed based on the Siamese network, with a Region Proposal Network (RPN) introduced on top of ResNet-DW to locate objects of interest. Thirdly, binary cross-entropy loss was introduced for training, and a contrastive training strategy was used to increase the distinction among categories. Experimental results show that SiamDet has good detection ability for few-shot objects: compared with the suboptimal algorithm DeFRCN (Decoupled Faster R-CNN), SiamDet improves AP50 by 4.1% on MS-COCO 20-way 2-shot and by 2.6% on PASCAL VOC 5-way 5-shot.
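
    The building block that keeps ResNet-DW small can be sketched as follows (channel sizes are illustrative): a depthwise convolution processes each channel separately, then a 1x1 pointwise convolution mixes channels, using far fewer parameters than a standard convolution and thus reducing the risk of few-shot overfitting.

        import torch.nn as nn

        class DepthwiseSeparableConv(nn.Module):
            def __init__(self, cin, cout, k=3, s=1):
                super().__init__()
                # groups=cin makes the first conv per-channel (depthwise).
                self.dw = nn.Conv2d(cin, cin, k, s, k // 2, groups=cin,
                                    bias=False)
                self.pw = nn.Conv2d(cin, cout, 1, bias=False)  # pointwise mix
                self.bn = nn.BatchNorm2d(cout)
                self.act = nn.ReLU(inplace=True)

            def forward(self, x):
                return self.act(self.bn(self.pw(self.dw(x))))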

    Multi-view clustering network with deep fusion
    Ziyi HE, Yan YANG, Yiling ZHANG
    Journal of Computer Applications    2023, 43 (9): 2651-2656.   DOI: 10.11772/j.issn.1001-9081.2022091394

    Current deep multi-view clustering methods have two shortcomings. 1) When features are extracted from a single view, only the attribute information or the structural information of the samples is considered, and the two are not integrated, so the extracted features cannot fully represent the latent structure of the original data. 2) Feature extraction and clustering are treated as two separate processes, with no relationship established between them, so the feature extraction process cannot be optimized by the clustering process. To solve these problems, a Deep Fusion based Multi-view Clustering Network (DFMCN) was proposed. Firstly, the embedding space of each view was obtained by combining an autoencoder and a graph convolution autoencoder to fuse the attribute and structure information of the samples. Then, the embedding space of the fused view was obtained through weighted fusion, and clustering was carried out in this space; during clustering, the feature extraction process was optimized by a two-layer self-supervision mechanism. Experimental results on the FM (Fashion-MNIST), HW (HandWritten numerals) and YTF (YouTube Face) datasets show that the accuracy of DFMCN is higher than those of all comparison methods: on the FM dataset, DFMCN has its accuracy increased by 1.80 percentage points compared with the suboptimal CMSC-DCCA (Cross-Modal Subspace Clustering via Deep Canonical Correlation Analysis) method, and its Normalized Mutual Information (NMI) is 1.26 to 14.84 percentage points higher than that of all methods except CMSC-DCCA and DMSC (Deep Multimodal Subspace Clustering networks). These results verify the effectiveness of the proposed method.
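
    The weighted fusion step can be sketched in a few lines (the learnable weight vector and softmax normalization are assumptions about the exact form):

        import torch

        def fuse_views(view_embeddings, view_weights):
            # Softmax keeps the fusion weights positive and summing to one;
            # the result is the single embedding space used for clustering.
            w = torch.softmax(view_weights, dim=0)
            return sum(w[i] * z for i, z in enumerate(view_embeddings))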

    Survey on combination of computation offloading and blockchain in internet of things
    Rui MEN, Shujia FAN, Axida SHAN, Shaoyu DU, Xiumei FAN
    Journal of Computer Applications    2023, 43 (10): 3008-3016.   DOI: 10.11772/j.issn.1001-9081.2022091466

    With the recent development of mobile communication technology and the popularization of smart devices, the computation-intensive tasks of terminal devices can be offloaded to edge servers to solve the problem of insufficient resources. However, the distributed nature of computation offloading exposes terminal devices and edge servers to security risks, and blockchain technology can provide a safe transaction environment for the computation offloading system. Combining these two technologies can solve the problems of insufficient resources and security in the internet of things. Therefore, the research results on applications combining computation offloading and blockchain technologies in the internet of things were surveyed. Firstly, the application scenarios and system functions of the combination of computation offloading and blockchain were analyzed. Then, the main problems solved by blockchain technology in computation offloading systems and the key techniques it uses were summarized, and the formulation methods, optimization objectives and optimization algorithms of computation offloading strategies in blockchain systems were classified. Finally, the open problems of the combination were presented, and future directions of development in this area were prospected.

    Overview of classification methods for complex data streams with concept drift
    Dongliang MU, Meng HAN, Ang LI, Shujuan LIU, Zhihui GAO
    Journal of Computer Applications    2023, 43 (6): 1664-1675.   DOI: 10.11772/j.issn.1001-9081.2022060881

    It is difficult for traditional classifiers to cope with the challenges of complex data streams with concept drift, and the classification results they obtain are often unsatisfactory. Focusing on the methods for handling concept drift in different types of data streams, classification methods for complex data streams with concept drift were summarized from four aspects: imbalance, concept evolution, multi-label, and noise. Firstly, the classification methods for the four aspects were introduced and analyzed: block-based and online learning approaches for classifying imbalanced concept drift data streams, clustering-based and model-based learning approaches for classifying concept drift data streams with concept evolution, and problem transformation based and algorithm adaptation based learning approaches for classifying multi-label concept drift data streams and noisy concept drift data streams. Then, the experimental results and performance metrics of the discussed classification methods were compared and analyzed in detail. Finally, the shortcomings of the existing methods and the next research directions were given.

    Task offloading algorithm for UAV-assisted mobile edge computing
    Xiaolin LI, Yusang JIANG
    Journal of Computer Applications    2023, 43 (6): 1893-1899.   DOI: 10.11772/j.issn.1001-9081.2022040548

    Unmanned Aerial Vehicles (UAVs) are flexible and easy to deploy, and can assist Mobile Edge Computing (MEC) in helping wireless systems improve coverage and communication quality. However, research on UAV-assisted MEC systems faces challenges such as computational latency requirements and resource management. Aiming at the delay problem of a UAV providing auxiliary computing services to multiple ground terminals, a Twin Delayed Deep Deterministic policy gradient (TD3) based Task Offloading Algorithm for Delay Minimization (TD3-TOADM) was proposed. Firstly, the optimization problem was modeled as minimizing the maximum computational delay under energy constraints. Secondly, TD3-TOADM was used to jointly optimize terminal equipment scheduling, the UAV trajectory and the task offloading ratio to minimize the maximum computational delay. Simulation results show that, compared with task offloading algorithms based on Actor-Critic (AC), Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG), TD3-TOADM reduces the computational delay by more than 8.2%. It can be seen that the TD3-TOADM algorithm has good convergence and robustness, and can obtain a low-delay optimal offloading strategy.
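
    The TD3 machinery behind the algorithm can be sketched by its target computation (a generic TD3 sketch, not the paper's code; network interfaces and hyperparameters are illustrative): clipped noise smooths the target policy, and the minimum over twin critics curbs value overestimation.

        import torch

        def td3_target(reward, next_state, actor_t, critic1_t, critic2_t,
                       gamma=0.99, noise_std=0.2, noise_clip=0.5):
            a = actor_t(next_state)
            eps = (torch.randn_like(a) * noise_std).clamp(-noise_clip,
                                                          noise_clip)
            a = (a + eps).clamp(-1.0, 1.0)           # smoothed target action
            q = torch.min(critic1_t(next_state, a),  # twin critics: take the
                          critic2_t(next_state, a))  # minimum to curb bias
            return reward + gamma * q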

    Lightweight image super-resolution reconstruction network based on Transformer-CNN
    Hao CHEN, Zhenping XIA, Cheng CHENG, Xing LIN-LI, Bowen ZHANG
    Journal of Computer Applications    2024, 44 (1): 292-299.   DOI: 10.11772/j.issn.1001-9081.2023010048

    Aiming at the high computational complexity and large memory consumption of existing super-resolution reconstruction networks, a lightweight image super-resolution reconstruction network based on Transformer-CNN was proposed, making super-resolution reconstruction networks more suitable for deployment on embedded terminals such as mobile platforms. Firstly, a hybrid block based on Transformer-CNN was proposed, which enhanced the network’s ability to capture local-global deep features. Then, a modified inverted residual block with special attention to the characteristics of the high-frequency region was designed, improving feature extraction ability and reducing inference time. Finally, after exploring the options for the activation function, the GELU (Gaussian Error Linear Unit) activation function was adopted to further improve network performance. Experimental results show that the proposed network achieves a good balance between image super-resolution performance and network complexity, reaching an inference speed of 91 frame/s on the benchmark dataset Urban100 with a scale factor of 4, which is 11 times faster than the excellent network SwinIR (Image Restoration using Swin transformer), indicating that the proposed network can efficiently reconstruct the textures and details of an image while greatly reducing inference time.
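
    The local-global idea of the hybrid block can be sketched as two parallel branches (a simplified sketch, not the paper's exact block: a depthwise convolution branch captures local texture, a self-attention branch captures global context, and the two are fused by residual addition; the channel count must be divisible by the head count):

        import torch
        import torch.nn as nn

        class HybridBlock(nn.Module):
            def __init__(self, ch, heads=4):
                super().__init__()
                self.local = nn.Sequential(
                    nn.Conv2d(ch, ch, 3, padding=1, groups=ch),  # local branch
                    nn.Conv2d(ch, ch, 1), nn.GELU())
                self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
                self.norm = nn.LayerNorm(ch)

            def forward(self, x):                      # x: (B, C, H, W)
                b, c, h, w = x.shape
                seq = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
                glob, _ = self.attn(seq, seq, seq)     # global branch
                glob = glob.transpose(1, 2).reshape(b, c, h, w)
                return x + self.local(x) + glob        # residual fusion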

    Dam surface disease detection algorithm based on improved YOLOv5
    Shengwei DUAN, Xinyu CHENG, Haozhou WANG, Fei WANG
    Journal of Computer Applications    2023, 43 (8): 2619-2629.   DOI: 10.11772/j.issn.1001-9081.2022081207

    Since current water conservancy dams mainly rely on manual on-site inspection, which has high operating cost and low efficiency, an improved detection algorithm based on YOLOv5 was proposed. Firstly, a modified multi-scale visual Transformer structure was used to improve the backbone: the multi-scale global information associated by the multi-scale Transformer structure and the local information extracted by the Convolutional Neural Network (CNN) were used to construct aggregated features, making full use of multi-scale semantic and location information and improving the feature extraction capability of the network. Then, a coordinate attention mechanism was added in front of each feature detection layer to encode features along the height and width directions of the image; the encoded features were used to construct long-distance associations among pixels on the feature map, enhancing the network’s target localization ability in complex environments. Next, the sampling algorithm for positive and negative training samples was improved: by constructing the average fit and the difference between the prior boxes and the ground-truth boxes, candidate positive samples were helped to respond to prior boxes of similar shape to themselves, making the network converge faster and better, thus improving the overall performance and generalization of the network. Finally, the network structure was lightened for application requirements and optimized by pruning and structural re-parameterization. Experimental results on the adopted dam disease data show that, compared with the original YOLOv5s algorithm, the improved network has its mAP (mean Average Precision)@0.5 improved by 10.5 percentage points and its mAP@0.5:0.95 improved by 17.3 percentage points; compared with the network before lightening, the lightweight network has its number of parameters and FLOPs (FLoating-point OPerations) reduced by 24% and 13% respectively, and its detection speed improved by 42%, verifying that the network meets the precision and speed requirements of disease detection in the current application scenarios.
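
    The coordinate attention idea, pooling along height and width separately so that attention encodes position along each axis, can be sketched as follows (a generic sketch of the published coordinate attention design; the reduction ratio is an assumption):

        import torch
        import torch.nn as nn

        class CoordinateAttention(nn.Module):
            def __init__(self, ch, red=8):
                super().__init__()
                self.conv1 = nn.Conv2d(ch, ch // red, 1)
                self.act = nn.ReLU(inplace=True)
                self.conv_h = nn.Conv2d(ch // red, ch, 1)
                self.conv_w = nn.Conv2d(ch // red, ch, 1)

            def forward(self, x):                      # x: (B, C, H, W)
                b, c, h, w = x.shape
                ph = x.mean(dim=3, keepdim=True)       # pool along width
                pw = x.mean(dim=2, keepdim=True).transpose(2, 3)  # along height
                y = self.act(self.conv1(torch.cat([ph, pw], dim=2)))
                yh, yw = y.split([h, w], dim=2)
                a_h = torch.sigmoid(self.conv_h(yh))                  # (B,C,H,1)
                a_w = torch.sigmoid(self.conv_w(yw.transpose(2, 3)))  # (B,C,1,W)
                return x * a_h * a_w      # direction-aware attention weights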

    Acceleration and optimization of quantum computing simulator implemented on new Sunway supercomputer
    Xinmin SHI, Yong LIU, Yaojian CHEN, Jiawei SONG, Xin LIU
    Journal of Computer Applications    2023, 43 (8): 2486-2492.   DOI: 10.11772/j.issn.1001-9081.2022091456

    Aiming at the gradual scaling-up of quantum hardware and the insufficient speed of classical simulation, two optimization methods for a quantum computing simulator implemented on the new Sunway supercomputer were proposed. Firstly, the tensor contraction operator library SWTT was reconstructed by improving the tensor transposition and computation strategies, which improved the kernel efficiency of partial tensor contractions and reduced redundant memory accesses. Secondly, a balance between path computational complexity and efficiency was achieved by a contraction path adjustment method based on data locality optimization. Test results show that the operator library improvement raises the simulation efficiency of the “Sycamore” quantum supremacy circuit by 5.4% and the single-step tensor contraction efficiency by up to 49.7 times, and that the path adjustment method improves floating-point efficiency by about 4 times while inflating the path computational complexity by only a factor of 2. Together, the two methods raise the single-precision and mixed-precision floating-point efficiencies for simulating a million-amplitude sampling of Google’s 53-bit, 20-layer quantum chip random circuit from 3.98% and 1.69% to 18.48% and 7.42% respectively, and reduce the theoretically estimated simulation time from 470 s to 226 s in single precision and from 304 s to 134 s in mixed precision, verifying that the two methods significantly improve quantum computing simulation speed.
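
    Why contraction order matters can be seen with NumPy's built-in path search on a toy network (illustrative shapes; the SWTT library and the paper's path adjustment method are far more elaborate):

        import numpy as np

        a = np.random.rand(32, 64)
        b = np.random.rand(64, 128)
        c = np.random.rand(128, 16)

        # einsum_path reports the chosen contraction order and its FLOP cost;
        # a poor order can inflate the cost by orders of magnitude.
        path, info = np.einsum_path("ij,jk,kl->il", a, b, c,
                                    optimize="optimal")
        print(info)
        result = np.einsum("ij,jk,kl->il", a, b, c, optimize=path)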

    Differential privacy clustering algorithm in horizontal federated learning
    Xueran XU, Geng YANG, Yuxian HUANG
    Journal of Computer Applications    2024, 44 (1): 217-222.   DOI: 10.11772/j.issn.1001-9081.2023010019

    Clustering analysis can uncover hidden interconnections in data and segment the data according to multiple indicators, which facilitates personalized and refined operations. However, the data fragmentation and isolation caused by data islands seriously affect the effectiveness of cluster analysis. To solve the data island problem while protecting data privacy, an Equivalent Local differential privacy Federated K-means (ELFedKmeans) algorithm was proposed. A grid-based initial cluster center selection method and a privacy budget allocation scheme were designed for the horizontal federated learning setting. To generate the same random noise at lower communication cost, all organizations jointly negotiated random seeds, protecting local data privacy. The ELFedKmeans algorithm was proved to satisfy differential privacy through theoretical analysis, and it was compared with the Local Differential Privacy distributed K-means (LDPKmeans) and Hybrid Privacy K-means (HPKmeans) algorithms on different datasets. Experimental results show that, as the privacy budget increases, all three algorithms gradually increase in F-measure and decrease in SSE (Sum of Squares due to Error). Overall, the F-measure values of the ELFedKmeans algorithm were 1.794 5% to 57.066 3% and 21.245 2% to 132.048 8% higher than those of the LDPKmeans and HPKmeans algorithms respectively, and its Log(SSE) values were 1.204 2% to 12.894 6% and 5.617 5% to 27.575 2% less than those of the LDPKmeans and HPKmeans algorithms respectively. Under the same privacy budget, the ELFedKmeans algorithm outperforms the comparison algorithms in clustering quality and utility.
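
    The negotiated-seed idea can be sketched as follows (one common construction, presented here as an assumption about the scheme: each pair of organizations derives identical masks locally from a shared seed, so no noise has to be transmitted, and opposite signs cancel when the server aggregates):

        import numpy as np

        def masked_update(update, my_id, peer_ids, seeds):
            masked = update.copy()
            for pid in peer_ids:
                # Both parties regenerate the same mask from the negotiated
                # seed; opposite signs make the masks cancel in the sum.
                rng = np.random.default_rng(seeds[frozenset((my_id, pid))])
                mask = rng.normal(size=update.shape)
                masked += mask if my_id < pid else -mask
            return masked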

    Multi-view ensemble clustering algorithm based on view-wise mutual information weighting
    Jinghuan LAO, Dong HUANG, Changdong WANG, Jianhuang LAI
    Journal of Computer Applications    2023, 43 (6): 1713-1718.   DOI: 10.11772/j.issn.1001-9081.2022060925

    Many existing multi-view clustering algorithms lack the ability to estimate the reliability of different views and weight them accordingly, while the multi-view clustering algorithms that can weight views generally rely on iterative optimization of a specific objective function, whose real-world applicability may be significantly influenced by the practicality of the objective function and the rationality of tuning its sensitive hyperparameters. To address these problems, a Multi-view Ensemble Clustering algorithm based on View-wise Mutual Information Weighting (MEC-VMIW) was proposed, whose overall process consists of two phases: view-wise mutual weighting and multi-view ensemble clustering. In the view-wise mutual weighting phase, multiple random down-samplings were performed on the dataset to reduce the problem size for evaluation and weighting, and a set of down-sampled clusterings of the multiple views was constructed. Then, based on multiple rounds of mutual evaluation among the clustering results of different views, the reliability of each view was estimated and used for view weighting. In the multi-view ensemble clustering phase, an ensemble of base clusterings was constructed for each view, and the multiple weighted base clustering sets were modeled as a bipartite graph structure. By performing efficient bipartite graph partitioning, the final multi-view clustering result was obtained. Experiments on several multi-view datasets confirm the robust clustering performance of the proposed algorithm.
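
    The mutual evaluation step can be sketched with normalized mutual information (using NMI as the agreement measure is an assumption for illustration): each view's reliability is its average agreement with the clusterings of the other views.

        import numpy as np
        from sklearn.metrics import normalized_mutual_info_score as nmi

        def view_weights(view_labels):
            # view_labels: one cluster-label array per view, same sample order.
            v = len(view_labels)
            w = np.zeros(v)
            for i in range(v):
                w[i] = np.mean([nmi(view_labels[i], view_labels[j])
                                for j in range(v) if j != i])
            return w / w.sum()  # normalized view weights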

    Infrared small target tracking method based on state information
    Xin TANG, Bo PENG, Fei TENG
    Journal of Computer Applications    2023, 43 (6): 1938-1942.   DOI: 10.11772/j.issn.1001-9081.2022050762

    Infrared small targets occupy few pixels and lack features such as color, texture and shape, so it is difficult to track them effectively. To solve this problem, an infrared small target tracking method based on state information was proposed. Firstly, the target, background and distractors in the local area around the small target to be detected were encoded to obtain dense local state information between consecutive frames. Secondly, the feature information of the current and previous frames was input into the classifier to obtain the classification score. Thirdly, the state information and the classification score were fused to obtain the final confidence and determine the center position of the small target. Finally, the state information was updated and propagated between consecutive frames, and the propagated state information was used to track the infrared small target through the entire sequence. The proposed method was validated on the open dataset DIRST (Dataset for Infrared detection and tRacking of dim-Small aircrafT). Experimental results show that, for infrared small target tracking, the recall of the proposed method reaches 96.2% and its precision reaches 97.3%, both 3.7% higher than those of KeepTrack, the current best tracking method. This proves that the proposed method can effectively track small infrared targets under complex backgrounds and interference.

    Review of research on aquaculture counting based on machine vision
    Hanyu ZHANG, Zhenbo LI, Weiran LI, Pu YANG
    Journal of Computer Applications    2023, 43 (9): 2970-2982.   DOI: 10.11772/j.issn.1001-9081.2022081261

    Aquaculture counting is an important part of the aquaculture process, and the counting results provide an important basis for feeding, adjusting breeding density, and estimating the economic benefit of aquatic animals. In response to traditional manual counting methods being time-consuming, labor-intensive and prone to large errors, a large number of machine vision based methods and applications have been proposed, greatly promoting the development of non-destructive counting of aquatic products. To understand aquaculture counting based on machine vision in depth, the relevant domestic and international literature of the past 30 years was collated and analyzed. Firstly, aquaculture counting was reviewed from the perspective of data acquisition, and the methods for acquiring the data required for machine vision were summed up. Secondly, aquaculture counting methods were analyzed and summarized in terms of traditional machine vision and deep learning. Thirdly, the practical applications of counting methods in different farming environments were compared and analyzed. Finally, the difficulties in the development of aquaculture counting research were summarized in terms of data, methods and applications, and views on the future trends of aquaculture counting research and equipment applications were presented.

    Collaborative recommendation algorithm based on deep graph neural network
    Runchao PAN, Qishan YU, Hongfei XIONG, Zhihui LIU
    Journal of Computer Applications    2023, 43 (9): 2741-2746.   DOI: 10.11772/j.issn.1001-9081.2022091361

    For the over-smoothing problem of existing recommendation algorithms based on Graph Neural Network (GNN), a collaborative filtering recommendation algorithm based on a deep Graph Convolutional Network (GCN) was proposed, namely Deep NGCF (Deep Neural Graph Collaborative Filtering). In this algorithm, initial residual connection and identity mapping were introduced into the GNN, preventing it from falling into over-smoothing after multiple graph convolution operations. Firstly, the initial embeddings of users and items were obtained from their interaction history. Next, in the aggregation and propagation layers, the collaborative signals of users and items at different stages were obtained with the use of initial residual connection and identity mapping. Finally, score prediction was performed according to the linear representation of all collaborative signals. In addition, to further improve the flexibility and recommendation performance of the model, adjustable weights were set in the initial residual connection and identity mapping. To verify the feasibility and effectiveness of the Deep NGCF algorithm, experiments were conducted on the Gowalla, Yelp-2018 and Amazon-book datasets. The results show that, compared with existing GNN recommendation algorithms such as Graph Convolutional Matrix Completion (GCMC) and Neural Graph Collaborative Filtering (NGCF), the Deep NGCF algorithm achieves the best recall and Normalized Discounted Cumulative Gain (NDCG), verifying its effectiveness.
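
    One propagation layer with the two devices named above can be sketched compactly (a GCNII-style layer shown as an illustration; alpha and beta correspond to the adjustable weights mentioned, and A_hat is the normalized adjacency matrix):

        import torch

        def deep_gcn_layer(h, h0, A_hat, W, alpha=0.1, beta=0.1):
            # Initial residual: mix the initial embeddings h0 back in, so deep
            # layers never drift arbitrarily far from the input signal.
            support = (1 - alpha) * (A_hat @ h) + alpha * h0
            # Identity mapping: shrink the transform toward the identity,
            # which counters over-smoothing as depth grows.
            return (1 - beta) * support + beta * (support @ W)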

    Survey of incomplete multi-view clustering
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023060813
    Online available: 21 August 2023

    Hierarchical storyline generation method for hot news events
    Dong LIU, Chuan LIN, Lina REN, Ruizhang HUANG
    Journal of Computer Applications    2023, 43 (8): 2376-2381.   DOI: 10.11772/j.issn.1001-9081.2022091377

    Hot news events develop in rich ways, and each stage of their development has its own unique narrative; as events develop, the storyline shows a trend of hierarchical evolution. Aiming at the poor interpretability and insufficient hierarchy of storylines in existing storyline generation methods, a Hierarchical Storyline Generation Method (HSGM) for hot news events was proposed. First, an improved hotword algorithm was used to select the main seed events and construct the trunk. Second, the hotwords of branch events were selected to enhance branch interpretability. Third, within each branch, a storyline coherence selection strategy fusing hotword relevance and a dynamic time penalty was used to strengthen the parent-child event connections, building hierarchical hotwords and then a multi-level storyline. In addition, considering the incubation period of hot news events, a hatchery was added to the storyline construction process to avoid neglecting initial events whose hotness is still insufficient. Experimental results on two real self-constructed datasets show that, in event tracking, HSGM increases the F score by 4.51% and 6.41%, and by 20.71% and 13.01%, respectively, compared with the singlePass based and k-means based methods; in storyline construction, HSGM performs well in accuracy, comprehensibility and integrity on both self-constructed datasets compared with Story Forest and Story Graph.
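
    The coherence strategy can be sketched as hotword overlap discounted by a time penalty (Jaccard overlap and exponential decay are illustrative choices, not necessarily the paper's exact formulas):

        import math

        def coherence(parent_hotwords, child_hotwords, dt_days, lam=0.1):
            # Hotword relevance as set overlap, weakened for parent-child
            # event pairs that are far apart in time.
            p, c = set(parent_hotwords), set(child_hotwords)
            relevance = len(p & c) / max(1, len(p | c))
            return relevance * math.exp(-lam * dt_days)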

    Application review of deep models in medical image segmentation: from U-Net to Transformer
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023071059
    Online available: 26 October 2023

    Survey of Parkinson’s disease auxiliary diagnosis methods based on gait analysis
    Jing QIN, Xueqian MA, Fujie GAO, Changqing JI, Zumin WANG
    Journal of Computer Applications    2023, 43 (6): 1687-1695.   DOI: 10.11772/j.issn.1001-9081.2022060926

    Focusing on the existing diagnosis methods for Parkinson's Disease (PD), the auxiliary diagnosis methods for PD based on gait analysis were reviewed. In clinical practice, the common gait assessment method for PD diagnosis is based on scales, which is simple and convenient but highly subjective and requires experienced clinicians. With the development of computer technology, more methods of gait analysis have become available. Firstly, PD and its abnormal gait manifestations were summarized. Then, the common auxiliary diagnosis methods for PD based on gait analysis were reviewed, which can be roughly divided into two types: methods based on wearable devices and methods based on non-wearable devices. Wearable devices are small and diagnose with high accuracy, and with them the gait status of patients can be monitored over long periods. With non-wearable devices, human gait data is captured through video sensors such as Microsoft Kinect, without requiring patients to wear devices or restricting their movements. Finally, the deficiencies of the existing gait analysis methods were pointed out, and possible future development trends were discussed.

    Overview of cryptocurrency regulatory technologies research
    Jiaxin WANG, Jiaqi YAN, Qian’ang MAO
    Journal of Computer Applications    2023, 43 (10): 2983-2995.   DOI: 10.11772/j.issn.1001-9081.2022111694

    With the help of blockchain and other emerging technologies, cryptocurrencies are decentralized, autonomous and cross-border. Research on cryptocurrency regulatory technologies not only helps fight crimes based on cryptocurrencies, but also helps provide feasible supervision schemes for the extension of blockchain technologies into other fields. Firstly, based on the application characteristics of cryptocurrency, the Generation, Exchange and Circulation (GEC) cycle theory of cryptocurrency was defined and elaborated. Then, the frequent international and domestic crimes based on cryptocurrencies were analyzed in detail, and the research status of cryptocurrency security supervision technologies in all three stages was surveyed with emphasis. Finally, the cryptocurrency regulatory platform ecosystems and the current challenges faced by regulatory technologies were summarized, and future research directions for cryptocurrency regulatory technologies were prospected to provide a reference for subsequent research.

    Software Guard Extensions-based secure data processing framework for traffic monitoring of internet of vehicles
    Ruiqi FENG, Leilei WANG, Xiang LIN, Jinbo XIONG
    Journal of Computer Applications    2023, 43 (6): 1870-1877.   DOI: 10.11772/j.issn.1001-9081.2022050734

    Internet of Vehicles (IoV) traffic monitoring requires the transmission, storage and analysis of users' private data, making the security of that private data particularly crucial. However, traditional security solutions often struggle to guarantee real-time computing and data security at the same time. To address this issue, security protocols, including two initialization protocols and a periodic reporting protocol, were designed, and a Software Guard Extensions (SGX)-based IoV traffic monitoring Secure Data Processing Framework (SDPF) was built. In SDPF, trusted hardware enables plaintext computation on private data inside the Road Side Unit (RSU), while the security protocols and a hybrid encryption scheme ensure efficient operation and privacy protection of the framework. Security analysis shows that SDPF is resistant to eavesdropping, tampering, replay, impersonation, rollback and other attacks. Experimental results show that every computational operation of SDPF completes at millisecond level; in particular, the total data processing overhead for a single vehicle is less than 1 millisecond. Compared with PFCF (Privacy-preserving Fog Computing Framework for vehicular crowdsensing networks), based on fog computing, and PPVF (Privacy-preserving Protocol for Vehicle Feedback in cloud-assisted Vehicular Ad hoc NETwork (VANET)), based on homomorphic encryption, SDPF has a more comprehensive security design: the message length of a single session is reduced by more than 90%, and the computational cost is reduced by at least 16.38%.
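    SDPF's concrete protocol messages are not specified in the abstract; the sketch below only illustrates the generic hybrid-encryption pattern it relies on (a fresh symmetric session key protecting the payload, wrapped with the receiver's public key), using the pyca/cryptography library. The key sizes, function names and the RSA-OAEP/AES-GCM choice are assumptions for illustration, not SDPF's actual scheme, and the SGX enclave itself is only stood in for by an ordinary key pair.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# stand-in for the RSU's enclave-protected key pair
rsu_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def vehicle_report(plaintext: bytes, rsu_pub):
    """Vehicle side: encrypt the report with a fresh AES-GCM session key,
    then wrap that key with the RSU's public key."""
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    wrapped_key = rsu_pub.encrypt(session_key, OAEP)
    return wrapped_key, nonce, ciphertext

def rsu_process(wrapped_key, nonce, ciphertext) -> bytes:
    """RSU side (inside the enclave): unwrap the session key and
    recover the plaintext for in-enclave analysis."""
    session_key = rsu_key.decrypt(wrapped_key, OAEP)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

report = b"vehicle_id=V42;speed=61;ts=1718000000"
assert rsu_process(*vehicle_report(report, rsu_key.public_key())) == report
```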

    Source code vulnerability detection based on hybrid code representation
    Kun ZHANG, Fengyu YANG, Fa ZHONG, Guangdong ZENG, Shijian ZHOU
    Journal of Computer Applications    2023, 43 (8): 2517-2526.   DOI: 10.11772/j.issn.1001-9081.2022071135

    Software vulnerabilities pose a great threat to network and information security, and the root of vulnerabilities lies in software source code. Existing traditional static detection tools and deep learning based detection methods do not represent code features fully, and simply use word embedding methods to transform the code representation, so their detection results suffer from low accuracy and a high false positive or false negative rate. Therefore, a source code vulnerability detection method based on hybrid code representation was proposed to solve the problem of incomplete code representation and improve detection performance. Firstly, the source code was compiled into Intermediate Representation (IR), and the program dependency graph was extracted. Then, structural features were obtained through program slicing based on data flow and control flow analysis; at the same time, unstructured features were obtained by embedding node statements with doc2vec. Next, a Graph Neural Network (GNN) was used to learn the hybrid features. Finally, the trained GNN was used for prediction and classification. To verify the effectiveness of the proposed method, experimental evaluation was performed on the Software Assurance Reference Dataset (SARD) and real-world datasets, with the F1 scores of the detection results reaching 95.3% and 89.6% respectively. Experimental results show that the proposed method has good vulnerability detection ability.
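    As a small illustration of the unstructured-feature step, the sketch below embeds a few hypothetical sliced node statements with gensim's Doc2Vec. The statements, tokenization and vector size are illustrative assumptions; the IR compilation, slicing and GNN stages of the pipeline are omitted.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# hypothetical sliced statements, one per program-dependency-graph node
statements = [
    "data = recv ( sock , 1024 )",
    "strcpy ( buf , data )",
    "len = strlen ( data )",
]
docs = [TaggedDocument(words=s.split(), tags=[i])
        for i, s in enumerate(statements)]

# train a small doc2vec model and read one vector per node
model = Doc2Vec(docs, vector_size=64, window=4, min_count=1, epochs=40)
node_features = [model.dv[i] for i in range(len(statements))]
print(len(node_features), node_features[0].shape)  # 3 (64,)
```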

    Parallel algorithm of betweenness centrality for dynamic networks
    Zhenyu LIU, Chaokun WANG, Gaoyang GUO
    Journal of Computer Applications    2023, 43 (7): 1987-1993.   DOI: 10.11772/j.issn.1001-9081.2022071121

    Betweenness centrality is a common metric for evaluating the importance of nodes in a graph. However, the update efficiency of betweenness centrality in large-scale dynamic graphs is too low to meet application requirements. With the development of multi-core technology, algorithm parallelization has become one of the effective ways to solve this problem. Therefore, a Parallel Algorithm of Betweenness centrality for dynamic networks (PAB) was proposed. Firstly, the time cost of redundant point pairs was reduced through operations such as community filtering, equidistant pruning and classification screening. Then, the determinacy of the algorithm was analyzed and handled to realize parallelization. Comparison experiments on real and synthetic datasets show that the update efficiency of PAB when adding edges is on average 4 times that of the latest batch-iCENTRAL algorithm. It can be seen that the proposed algorithm effectively improves the update efficiency of betweenness centrality in dynamic networks.
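    PAB's community filtering, equidistant pruning and classification screening are not reproduced here; the sketch below only shows the underlying idea that Brandes-style betweenness is parallelizable, since the single-source dependency passes are independent and can be farmed out to worker processes. It computes static (not incrementally updated) betweenness on an unweighted adjacency dict, and all names are illustrative.

```python
from collections import deque
from functools import partial
from multiprocessing import Pool

def _dependency(adj, s):
    """One Brandes single-source pass: BFS shortest paths from s,
    then back-propagation of pair dependencies (unweighted graph)."""
    sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
    dist = {v: -1 for v in adj}; dist[s] = 0
    preds = {v: [] for v in adj}
    order, q = [], deque([s])
    while q:
        v = q.popleft()
        order.append(v)
        for w in adj[v]:
            if dist[w] < 0:                     # w discovered for the first time
                dist[w] = dist[v] + 1
                q.append(w)
            if dist[w] == dist[v] + 1:          # v lies on a shortest path to w
                sigma[w] += sigma[v]
                preds[w].append(v)
    delta = {v: 0.0 for v in adj}
    for w in reversed(order):                   # accumulate dependencies
        for v in preds[w]:
            delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
    delta[s] = 0.0
    return delta

def betweenness(adj, workers=4):
    """Sum per-source dependencies; sources are independent, so the
    passes are distributed across a process pool."""
    bc = {v: 0.0 for v in adj}
    with Pool(workers) as pool:
        for delta in pool.map(partial(_dependency, adj), list(adj)):
            for v, d in delta.items():
                bc[v] += d
    return bc

if __name__ == "__main__":
    g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path graph as adjacency dict
    print(betweenness(g))                       # halve values for undirected graphs
```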

    Aspect-based sentiment analysis method with integrating prompt knowledge
    Xinyue ZHANG, Rong LIU, Chiyu WEI, Ke FANG
    Journal of Computer Applications    2023, 43 (9): 2753-2759.   DOI: 10.11772/j.issn.1001-9081.2022091347

    Aspect-based sentiment analysis methods built on pre-trained models generally use end-to-end frameworks, suffer from inconsistency between the upstream and downstream tasks, and have difficulty modeling the relationships between aspect words and context effectively. To address these problems, an aspect-based sentiment analysis method integrating prompt knowledge was proposed. First, to capture the semantic relation between aspect words and context effectively and enhance the model's perception of the sentiment analysis task, a prompt text was constructed based on the Prompt mechanism and spliced with the original sentence and aspect words, and the result was used as the input of the pre-trained model Bidirectional Encoder Representations from Transformers (BERT). Then, a sentiment label vocabulary was built and integrated into the sentiment verbalizer layer, so as to reduce the search space of the model, let the pre-trained model draw on the rich semantic knowledge in the label vocabulary, and improve the learning ability of the model. Experimental results on the Restaurant and Laptop domain datasets of SemEval2014 Task4 as well as the ChnSentiCorp dataset show that the F1-score of the proposed method reaches 77.42%, 75.20% and 94.89% respectively, which is 0.65 to 10.71, 1.02 to 9.58 and 0.83 to 6.40 percentage points higher than mainstream aspect-based sentiment analysis methods such as Glove-TextCNN and P-tuning. These results verify the effectiveness of the proposed method.
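    A minimal sketch of the prompt-plus-verbalizer idea described above, using Hugging Face transformers: the prompt template, the one-word-per-class verbalizer and the absence of any fine-tuning are all simplifying assumptions, not the paper's actual configuration.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# hypothetical verbalizer: one label word per sentiment class
VERBALIZER = {"positive": "good", "negative": "bad", "neutral": "fine"}

def classify(sentence: str, aspect: str) -> str:
    # hypothetical prompt template spliced onto the original sentence
    prompt = f"{sentence} The sentiment of {aspect} is {tok.mask_token}."
    inputs = tok(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # scores over the vocabulary
    # restrict the search space to the label words and take the best class
    ids = {lbl: tok.convert_tokens_to_ids(w) for lbl, w in VERBALIZER.items()}
    return max(ids, key=lambda lbl: logits[ids[lbl]].item())

print(classify("The pasta was superb but the service was slow.", "pasta"))
```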
