Most Downloaded articles


    Published in last 1 year
    Review of application analysis and research progress of deep learning in weather forecasting
    Runting DONG, Li WU, Xiaoying WANG, Tengfei CAO, Jianqiang HUANG, Qin GUAN, Jiexia WU
    Journal of Computer Applications    2023, 43 (6): 1958-1968.   DOI: 10.11772/j.issn.1001-9081.2022050745
    Abstract views: 1235 | HTML views: 93 | PDF (1570KB) downloads: 1443

    With the advancement of technologies such as sensor networks and global positioning systems, the volume of meteorological data with both temporal and spatial characteristics has exploded, and research on deep learning models for Spatiotemporal Sequence Forecasting (STSF) has developed rapidly. However, the traditional machine learning methods long applied to weather forecasting perform unsatisfactorily in extracting the temporal correlations and spatial dependences of data, whereas deep learning methods can extract features automatically through artificial neural networks to improve the accuracy of weather forecasting effectively, and perform very well in modeling and encoding long-term spatial information. At the same time, deep learning models driven by observational data are combined with Numerical Weather Prediction (NWP) models based on physical theories to build hybrid models with higher prediction accuracy and longer prediction time. On this basis, the application analysis and research progress of deep learning in the field of weather forecasting were reviewed. Firstly, the deep learning problems in the field of weather forecasting and classical deep learning problems were compared and studied from three aspects: data format, problem model and evaluation metrics. Then, the development history and application status of deep learning in the field of weather forecasting were reviewed, and the latest progress in combining deep learning technologies with NWP was summarized and analyzed. Finally, future development directions and research focuses were prospected to provide a reference for future deep learning research in the field of weather forecasting.

    Review of multi-modal medical image segmentation based on deep learning
    Meng DOU, Zhebin CHEN, Xin WANG, Jitao ZHOU, Yu YAO
    Journal of Computer Applications    2023, 43 (11): 3385-3395.   DOI: 10.11772/j.issn.1001-9081.2022101636
    Abstract views: 1531 | HTML views: 55 | PDF (3904KB) downloads: 1370

    Multi-modal medical images can provide clinicians with rich information about target areas (such as tumors, organs or tissues). However, effective fusion and segmentation of multi-modal images is still a challenging problem due to the independence and complementarity of multi-modal images. Traditional image fusion methods have difficulty in addressing this problem, which has led to widespread research on deep learning-based multi-modal medical image segmentation algorithms. The multi-modal medical image segmentation task based on deep learning was reviewed in terms of principles, techniques, problems, and prospects. Firstly, the general theory of deep learning and multi-modal medical image segmentation was introduced, including the basic principles and development processes of deep learning and Convolutional Neural Network (CNN), as well as the importance of the multi-modal medical image segmentation task. Secondly, the key concepts of multi-modal medical image segmentation were described, including data dimension, preprocessing, data augmentation, loss function, and post-processing. Thirdly, multi-modal segmentation networks based on different fusion strategies were summarized and analyzed. Finally, several common problems in medical image segmentation were discussed, and a summary and prospects for future research were given.

    Review of online education learner knowledge tracing
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023060852
    Available online: 07 November 2023

    Embedded road crack detection algorithm based on improved YOLOv8
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023050635
    Available online: 01 September 2023

    Multimodal knowledge graph representation learning: a review
    Chunlei WANG, Xiao WANG, Kai LIU
    Journal of Computer Applications    2024, 44 (1): 1-15.   DOI: 10.11772/j.issn.1001-9081.2023050583
    Abstract views: 848 | HTML views: 69 | PDF (3449KB) downloads: 795

    By comprehensively comparing traditional knowledge graph representation learning models, including their advantages, disadvantages and applicable tasks, the analysis shows that a traditional single-modal knowledge graph cannot represent knowledge well. Therefore, how to use multimodal data such as text, image, video, and audio for knowledge graph representation learning has become an important research direction. At the same time, commonly used multimodal knowledge graph datasets were analyzed in detail to provide data support for relevant researchers. On this basis, knowledge graph representation learning models under multimodal fusion of text, image, video, and audio were further discussed, and the various models were summarized and compared. Finally, the effect of multimodal knowledge graph representation on enhancing classical applications in practice, including knowledge graph completion, question answering systems, multimodal generation and recommendation systems, was summarized, and future research work was prospected.

    Review of lifelong learning in computer vision
    Yichi CHEN, Bin CHEN
    Journal of Computer Applications    2023, 43 (6): 1785-1795.   DOI: 10.11772/j.issn.1001-9081.2022050766
    Abstract views: 658 | HTML views: 67 | PDF (2053KB) downloads: 763

    LifeLong Learning (LLL), as an emerging method, breaks the limitations of traditional machine learning and gives models the ability to accumulate, optimize and transfer knowledge during the learning process, as human beings do. In recent years, with the wide application of deep learning, more and more studies attempt to solve the catastrophic forgetting problem in deep neural networks, escape the stability-plasticity dilemma, and apply LLL methods to a wide variety of real-world scenarios to promote the development of artificial intelligence from weak to strong. Focusing on the field of computer vision, firstly, LLL methods were classified into four types for image classification tasks: data-driven methods, optimization process based methods, network structure based methods and knowledge combination based methods. Then, typical applications of LLL methods in other visual tasks and related evaluation metrics were introduced. Finally, the deficiencies of current LLL methods were discussed, and future development directions of LLL methods were proposed.

    Few-shot text classification method based on prompt learning
    Bihui YU, Xingye CAI, Jingxuan WEI
    Journal of Computer Applications    2023, 43 (9): 2735-2740.   DOI: 10.11772/j.issn.1001-9081.2022081295
    Abstract views: 705 | HTML views: 51 | PDF (884KB) downloads: 736

    Text classification tasks usually rely on sufficient labeled data. Concerning the over-fitting problem of classification models trained on small samples in low-resource scenarios, a few-shot text classification method based on prompt learning, called BERT-P-Tuning, was proposed. Firstly, the pre-trained model BERT (Bidirectional Encoder Representations from Transformers) was used to learn the optimal prompt template from the labeled samples. Then, the prompt template with a vacant slot was filled into each sample, transforming the text classification task into a cloze test task. Finally, the final labels were obtained by predicting the word with the highest probability at the vacant position and using the mapping relationship between the predicted words and the labels. Experimental results on the short text classification tasks of the public dataset FewCLUE show that the proposed method significantly improves the evaluation metrics compared with the BERT fine-tuning based method. Specifically, the proposed method improves the accuracy and F1 score by 25.2 and 26.7 percentage points respectively on the binary classification task, and by 6.6 and 8.0 percentage points respectively on the multi-class classification task. Compared with the PET (Pattern Exploiting Training) method that constructs templates manually, the proposed method improves the accuracy by 2.9 and 2.8 percentage points respectively on the two tasks, and the F1 score by 4.4 and 4.2 percentage points respectively, verifying the effectiveness of applying pre-trained models to few-shot tasks.
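    The cloze-style prediction step described above can be illustrated with a minimal sketch using the Hugging Face transformers API; the checkpoint name, prompt template and label-word verbalizer below are illustrative assumptions, not the paper's learned P-tuning template or verbalizer.

```python
# Minimal sketch of prompt-based cloze classification (illustrative only; the
# checkpoint, template and verbalizer are assumptions, not the paper's learned
# P-tuning components).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")

verbalizer = {"好": "positive", "差": "negative"}  # hypothetical label words

def classify(text: str) -> str:
    # Fill the sample into a prompt template that contains one [MASK] slot.
    prompt = f"{text}。总体来说很{tokenizer.mask_token}。"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits                  # [1, seq_len, vocab_size]
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    # The label word with the highest probability at the [MASK] slot decides the class.
    word_ids = {w: tokenizer.convert_tokens_to_ids(w) for w in verbalizer}
    best_word = max(word_ids, key=lambda w: logits[0, mask_pos, word_ids[w]])
    return verbalizer[best_word]

print(classify("这部电影的剧情非常精彩"))
```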

    Feature selection method for graph neural network based on network architecture design
    Dapeng XU, Xinmin HOU
    Journal of Computer Applications    2024, 44 (3): 663-670.   DOI: 10.11772/j.issn.1001-9081.2023030353
    Abstract views: 592 | HTML views: 101 | PDF (1001KB) downloads: 706

    In recent years, researchers have proposed many improved model architecture designs for Graph Neural Network (GNN), driving performance improvements in various prediction tasks. However, most GNN variants start from the assumption that all node features are equally important, which is not the case. To solve this problem, a feature selection method was proposed to improve the existing models and to select an important feature subset for the dataset. The proposed method consists of two components: a feature selection layer and separate label-feature mapping. In the feature selection layer, a softmax normalizer and a feature "soft selector" were used for feature selection, and the model structure was designed under the idea of separate label-feature mapping to select the corresponding subset of related features for each label; the union of these related feature subsets was then taken to obtain the important feature subset of the final dataset. The Graph ATtention network (GAT) and GATv2 models were selected as the benchmark models, and the algorithm was applied to them to obtain new models. Experimental results show that when the proposed models perform node classification tasks on six datasets, their accuracies are improved by 0.83% to 8.79% compared with the baseline models. The new models also select the corresponding important feature subsets for the six datasets, in which the number of features accounts for 3.94% to 12.86% of the total number of features in the respective datasets. After using the important feature subset as the new input of the benchmark model, more than 95% of the accuracy obtained with all features is still achieved; that is, the scale of the model is reduced while the accuracy is maintained. It can be seen that the proposed algorithm can improve the accuracy of node classification and effectively select the corresponding important feature subset for a dataset.
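    As an illustration of the softmax normalizer and feature "soft selector" idea, the following PyTorch sketch gates node features with softmax-normalized learnable scores before they enter a GAT-style layer; the class name and layer sizes are assumptions, not the authors' implementation.

```python
# Sketch of a soft feature selection layer (assumed structure, not the paper's
# code): a learnable score per input feature is softmax-normalized and used to
# gate node features before a downstream GNN layer such as GAT or GATv2.
import torch
import torch.nn as nn

class SoftFeatureSelector(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(num_features))  # one score per feature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.scores, dim=0)   # softmax normalizer
        return x * weights                            # soft selection (feature gating)

    def top_features(self, k: int) -> torch.Tensor:
        # After training, the largest weights indicate the selected feature subset;
        # per-label subsets can be united to form the final important feature subset.
        return torch.topk(torch.softmax(self.scores, dim=0), k).indices

x = torch.randn(32, 128)              # 32 nodes with 128 raw features
selector = SoftFeatureSelector(128)
gated = selector(x)                   # feed `gated` to a GAT/GATv2 layer downstream
print(gated.shape, selector.top_features(5))
```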

    Gradient descent with momentum algorithm based on differential privacy in convolutional neural network
    Yu ZHANG, Ying CAI, Jianyang CUI, Meng ZHANG, Yanfang FAN
    Journal of Computer Applications    2023, 43 (12): 3647-3653.   DOI: 10.11772/j.issn.1001-9081.2022121881
    Abstract views: 632 | HTML views: 110 | PDF (1985KB) downloads: 681

    To address the privacy leakage problem caused by the model parameters memorizing some features of the data during the training of Convolutional Neural Network (CNN) models, a Gradient Descent with Momentum algorithm based on Differential Privacy in CNN (DPGDM) was proposed. Firstly, Gaussian noise satisfying differential privacy was added to the gradients in the backpropagation process of model optimization, and the noise-added gradient values were used in the model parameter update, so as to achieve differential privacy protection for the overall model. Secondly, to reduce the impact of the introduced differential privacy noise on the convergence speed of the model, a learning rate decay strategy was designed and the gradient descent with momentum algorithm was improved accordingly. Finally, to reduce the influence of noise on the accuracy of the model, the noise scale was adjusted dynamically during model optimization, thereby changing the amount of noise that needs to be added to the gradients in each iteration. Experimental results show that, compared with the DP-SGD (Differentially Private Stochastic Gradient Descent) algorithm, the proposed algorithm improves the accuracy of the model by about 5 and 4 percentage points at privacy budgets of 0.3 and 0.5 respectively, which proves that the proposed algorithm improves model usability while achieving privacy protection for the model.
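    A minimal NumPy sketch of one such differentially private momentum update is given below; the clipping bound, noise schedule and learning rate decay are illustrative assumptions rather than the exact DPGDM settings.

```python
# Illustrative sketch of a differentially private momentum update (assumed
# parameters, not the exact DPGDM algorithm): per-sample gradients are clipped,
# Gaussian noise proportional to the current noise scale is added, and the noisy
# average drives a momentum update with a decaying learning rate.
import numpy as np

def dp_momentum_step(params, per_sample_grads, velocity, step, clip_norm=1.0,
                     noise_scale=1.0, lr0=0.1, lr_decay=0.01, momentum=0.9,
                     rng=np.random.default_rng(0)):
    clipped = []
    for g in per_sample_grads:                    # clip each sample's gradient
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    noise = rng.normal(0.0, noise_scale * clip_norm, size=params.shape)
    noisy_grad = (np.sum(clipped, axis=0) + noise) / len(per_sample_grads)
    velocity = momentum * velocity + noisy_grad   # momentum accumulation
    lr = lr0 / (1.0 + lr_decay * step)            # learning rate decay strategy
    return params - lr * velocity, velocity

params, velocity = np.zeros(4), np.zeros(4)
grads = list(np.random.default_rng(1).normal(size=(8, 4)))   # stand-in gradients
for t in range(3):
    shrinking_scale = 1.0 * 0.9 ** t              # dynamically adjusted noise scale
    params, velocity = dp_momentum_step(params, grads, velocity, t,
                                        noise_scale=shrinking_scale)
print(params)
```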

    Few-shot object detection algorithm based on Siamese network
    Junjian JIANG, Dawei LIU, Yifan LIU, Yougui REN, Zhibin ZHAO
    Journal of Computer Applications    2023, 43 (8): 2325-2329.   DOI: 10.11772/j.issn.1001-9081.2022121865
    Abstract views: 522 | HTML views: 40 | PDF (1472KB) downloads: 674

    Deep learning based algorithms such as YOLO (You Only Look Once) and Faster Region-Convolutional Neural Network (Faster R-CNN) require a huge amount of training data to ensure model precision, while in many scenarios data are difficult to obtain and labeling costs are high. Moreover, due to the lack of massive training data, the detection range is limited. Aiming at the above problems, a few-shot object detection algorithm based on Siamese network, namely SiamDet, was proposed with the purpose of training an object detection model with certain generalization ability from a few annotated images. Firstly, a Siamese network based on depthwise separable convolution was proposed, and a feature extraction network ResNet-DW was designed to solve the overfitting problem caused by insufficient samples. Secondly, the object detection algorithm SiamDet was proposed based on the Siamese network, and a Region Proposal Network (RPN) was introduced on top of ResNet-DW to locate the objects of interest. Thirdly, binary cross entropy loss was introduced for training, and a contrastive training strategy was used to increase the distinction among categories. Experimental results show that SiamDet has good detection ability for few-shot objects, and improves AP50 by 4.1% on MS-COCO 20-way 2-shot and by 2.6% on PASCAL VOC 5-way 5-shot compared with the suboptimal algorithm DeFRCN (Decoupled Faster R-CNN).
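    The depthwise separable convolution that the ResNet-DW backbone is described as building on can be sketched as follows; the channel sizes are arbitrary examples and the block is not the authors' network definition.

```python
# Sketch of a depthwise separable convolution block (channel sizes are arbitrary;
# this is not the ResNet-DW definition from the paper).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise step: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                                   groups=in_ch, bias=False)
        # Pointwise step: 1x1 convolution mixes channels and sets their number.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128, stride=2)
print(block(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 128, 28, 28])
```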

    Robotic grasp detection in low-light environment by incorporating visual feature enhancement mechanism
    Gan LI, Mingdi NIU, Lu CHEN, Jing YANG, Tao YAN, Bin CHEN
    Journal of Computer Applications    2023, 43 (8): 2564-2571.   DOI: 10.11772/j.issn.1001-9081.2023050586
    Abstract views: 274 | HTML views: 26 | PDF (2821KB) downloads: 642

    Existing robotic grasping operations are usually performed under well-illuminated conditions with clear object details and high regional contrast. In low-light conditions caused by night or occlusion, however, where the visual features of objects are weak, the detection accuracy of existing robotic grasp detection models decreases dramatically. In order to improve the representation ability of sparse and weak grasp features in low-light scenarios, a grasp detection model incorporating a visual feature enhancement mechanism was proposed, in which a visual enhancement sub-task imposes feature enhancement constraints on grasp detection. In the grasp detection module, a U-Net like encoder-decoder structure was adopted to achieve efficient feature fusion. In the low-light enhancement module, texture and color information was extracted at the local and global level respectively, thereby balancing object details and visual effect in feature enhancement. In addition, two low-light grasp datasets, the low-light Cornell dataset and the low-light Jacquard dataset, were constructed as new benchmarks for low-light grasping and used for comparative experiments. Experimental results show that the accuracies of the proposed low-light grasp detection model are 95.5% and 87.4% on the two benchmark datasets respectively, which are 11.1 and 1.2 percentage points higher on the low-light Cornell dataset and 5.5 and 5.0 percentage points higher on the low-light Jacquard dataset than those of existing grasp detection models, including Generative Grasping Convolutional Neural Network (GG-CNN) and Generative Residual Convolutional Neural Network (GR-ConvNet), indicating that the proposed model has good grasp detection performance.

    Authenticatable privacy-preserving scheme based on signcryption from lattice for vehicular ad hoc network
    Jianyang CUI, Ying CAI, Yu ZHANG, Yanfang FAN
    Journal of Computer Applications    2024, 44 (1): 233-241.   DOI: 10.11772/j.issn.1001-9081.2023010083
    Abstract views: 185 | HTML views: 2 | PDF (2194KB) downloads: 562

    To address the issues of user privacy leakage and message authentication in Vehicular Ad hoc NETworks (VANETs), an authenticatable privacy-preserving scheme based on signcryption from lattice was proposed. Firstly, the public key of the receiver was used to signcrypt the message and generate the ciphertext, and only the receiver with the corresponding private key could decrypt the ciphertext, ensuring that messages are visible only to authorized users. Secondly, after decrypting the message, the receiver computed the hash value of the message with a one-way secure hash function and judged whether the hash value had changed, which realized message authentication. Finally, the Number Theoretic Transform (NTT) algorithm was used to reduce the computational overhead of polynomial multiplication and improve the computational efficiency of the scheme. The proposed scheme was proved to provide INDistinguishability under Chosen Ciphertext Attack (IND-CCA2) and Strong UnForgeability under Chosen Message Attack (SUF-CMA) in the random oracle model. In addition, the security of the proposed scheme is based on lattice hardness problems, so it can resist quantum algorithm attacks. Simulation results show that, compared to similar authenticated privacy-preserving schemes and a lattice-based signature scheme, the proposed scheme reduces communication delay by at least 10.01%, message loss rate by at least 31.79% and communication overhead by at least 31.25%. Therefore, the proposed scheme is more suitable for resource-constrained VANETs.
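    The hash-based message authentication step (the second step above) amounts to the check sketched below; the lattice signcryption and NTT acceleration are not reproduced here, and the message contents are hypothetical.

```python
# Illustration of the post-decryption integrity check only (the lattice
# signcryption and NTT parts are not shown; the messages are hypothetical).
import hashlib
import hmac

def digest(message: bytes) -> bytes:
    # One-way secure hash of the plaintext message.
    return hashlib.sha256(message).digest()

def verify_message(decrypted: bytes, received_digest: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(digest(decrypted), received_digest)

original = b"vehicle 42: braking ahead"
tag = digest(original)
print(verify_message(original, tag))             # True: message unchanged
print(verify_message(b"tampered message", tag))  # False: authentication fails
```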

    Review of YOLO algorithm and its application to object detection in autonomous driving scenes
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023060889
    Available online: 11 September 2023

    Small object detection algorithm of YOLOv5 for safety helmet
    Zongzhe LYU, Hui XU, Xiao YANG, Yong WANG, Weijian WANG
    Journal of Computer Applications    2023, 43 (6): 1943-1949.   DOI: 10.11772/j.issn.1001-9081.2022060855
    Abstract views: 710 | HTML views: 40 | PDF (3099KB) downloads: 527

    Wearing a safety helmet is a powerful guarantee of workers’ personal safety. Since collected images of safety helmet wearing are characterized by high object density, small pixel size and detection difficulty, a small object detection algorithm based on YOLOv5 (You Only Look Once version 5) for safety helmets was proposed. Firstly, based on the YOLOv5 algorithm, the bounding box regression loss function and the confidence prediction loss function were optimized to improve the learning of dense small-object features during training. Secondly, slicing aided fine-tuning and Slicing Aided Hyper Inference (SAHI) were introduced, so that slicing the pictures fed into the network makes small objects occupy a larger pixel area, improving the effect of network inference and fine-tuning. In the experiments, a dataset containing dense small safety-helmet objects in industrial scenes was used for training. The experimental results show that, compared with the original YOLOv5 algorithm, the improved algorithm increases precision by 0.26 percentage points and recall by 0.38 percentage points, and its mean Average Precision (mAP) reaches 95.77%, an improvement of 0.46 to 13.27 percentage points over several algorithms including the original YOLOv5. The results verify that introducing slicing aided fine-tuning and SAHI improves the precision and confidence of small object detection and recognition in dense scenes, reduces false and missed detections, and effectively satisfies the requirements of safety helmet wearing detection.
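    The slicing idea can be sketched as follows: the image is cut into overlapping tiles so that small helmets occupy more pixels relative to each network input, and per-tile detections are shifted back to full-image coordinates. The tile size, overlap and `detect` callable are assumptions, not the SAHI library API.

```python
# Sketch of slicing aided inference (illustrative; not the SAHI library API).
# `detect` is a hypothetical detector returning (x1, y1, x2, y2, score) boxes.
import numpy as np

def _offsets(length: int, tile: int, step: int):
    positions = list(range(0, max(length - tile, 0) + 1, step))
    if positions[-1] + tile < length:        # make the last tile touch the border
        positions.append(max(length - tile, 0))
    return positions

def sliced_inference(image: np.ndarray, detect, tile: int = 640, overlap: float = 0.2):
    h, w = image.shape[:2]
    step = max(1, int(tile * (1 - overlap)))
    boxes = []
    for y0 in _offsets(h, tile, step):
        for x0 in _offsets(w, tile, step):
            patch = image[y0:y0 + tile, x0:x0 + tile]
            for (x1, y1, x2, y2, score) in detect(patch):
                # Shift tile-local boxes back to full-image coordinates.
                boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, score))
    return boxes   # in practice, merge duplicates across tiles with NMS

fake_detect = lambda patch: [(10, 10, 50, 50, 0.9)]        # stand-in detector
print(len(sliced_inference(np.zeros((1280, 1920, 3)), fake_detect)))
```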

    Technology application prospects and risk challenges of large language models
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023060885
    Available online: 14 September 2023

    Attribute-based encryption scheme for blockchain privacy protection
    Haifeng MA, Yuxia LI, Qingshui XUE, Jiahai YANG, Yongfu GAO
    Journal of Computer Applications    2024, 44 (2): 485-489.   DOI: 10.11772/j.issn.1001-9081.2023020173
    Abstract views: 92 | HTML views: 9 | PDF (1621KB) downloads: 507

    To solve the security problems caused by the disclosure of blockchain ledgers, the key lies in hiding private information. An attribute-based encryption scheme with multiple authorities was proposed for the privacy protection of blockchain data. Compared with a single authority, multiple authorities are decentralized and avoid a single point of failure. Firstly, the key component generation algorithm was modified so that each authority used the user identity as a parameter to generate private key components, preventing nodes from colluding to access unauthorized data. Secondly, identity-based signature technology was adapted to establish a connection between user identities and wallet addresses, making the blockchain supervisable and illegal users traceable. Finally, based on the DBDH (Decisional Bilinear Diffie-Hellman) assumption, the security of the proposed scheme was proved in the random oracle model. Experimental results show that, compared with a blockchain privacy protection scheme based on elliptic curve ring signatures and a blockchain privacy protection scheme supporting oblivious keyword search, the proposed scheme takes the least time when generating the same number of blocks and is therefore more feasible.

    Survey of online learning resource recommendation
    Yongfeng DONG, Yacong WANG, Yao DONG, Yahan DENG
    Journal of Computer Applications    2023, 43 (6): 1655-1663.   DOI: 10.11772/j.issn.1001-9081.2022091335
    Abstract views: 626 | HTML views: 59 | PDF (824KB) downloads: 503

    In recent years, more and more schools have tended to adopt online education widely. However, it is hard for learners to find what they need among the massive learning resources on the Internet. Therefore, it is very important to research online learning resource recommendation and perform personalized recommendation for learners, so as to help them obtain the high-quality learning resources they need quickly. The research status of online learning resource recommendation was analyzed and summarized from the following five aspects. Firstly, the current work of domestic and international online education platforms on learning resource recommendation was summed up. Secondly, four types of algorithms were analyzed and discussed, taking knowledge point exercises, learning paths, learning videos and learning courses respectively as the recommendation targets. Thirdly, from the perspectives of learners and learning resources, and using specific algorithms as examples, three kinds of learning resource recommendation algorithms, based on learners’ portraits, learners’ behaviors and learning resource ontologies respectively, were introduced in detail. Moreover, public online learning resource datasets were listed. Finally, current challenges and future research directions were analyzed.

    Survey on privacy-preserving technology for blockchain transaction
    Qingqing XIE, Nianmin YANG, Xia FENG
    Journal of Computer Applications    2023, 43 (10): 2996-3007.   DOI: 10.11772/j.issn.1001-9081.2022101555
    Abstract views: 375 | HTML views: 32 | PDF (2911KB) downloads: 486

    The blockchain ledger is open and transparent, and attackers can obtain sensitive information by analyzing the ledger data, which poses a great threat to the transaction privacy of users. In view of the importance of privacy preservation for blockchain transactions, the causes of transaction privacy leakage were analyzed first, and transaction privacy was divided into two types: the identity privacy of transaction participants and transaction data privacy. Then, from the perspectives of these two types of transaction privacy, the existing privacy-preserving technologies for blockchain transactions were presented. Next, in view of the contradiction between identity privacy preservation and supervision, transaction identity privacy preservation schemes that take supervision into consideration were introduced. Finally, future research directions of privacy-preserving technologies for blockchain transactions were summarized and prospected.

    Overview of cryptocurrency regulatory technologies research
    Jiaxin WANG, Jiaqi YAN, Qian’ang MAO
    Journal of Computer Applications    2023, 43 (10): 2983-2995.   DOI: 10.11772/j.issn.1001-9081.2022111694
    Abstract views: 397 | HTML views: 53 | PDF (911KB) downloads: 464

    With the help of blockchain and other emerging technologies, cryptocurrencies are decentralized, autonomous and cross-border. Research on cryptocurrency regulatory technologies is helpful not only for fighting crimes based on cryptocurrencies, but also for providing feasible supervision schemes for the expansion of blockchain technologies into other fields. Firstly, based on the application characteristics of cryptocurrency, the Generation, Exchange and Circulation (GEC) cycle theory of cryptocurrency was defined and elaborated. Then, frequent international and domestic crimes based on cryptocurrencies were analyzed in detail, and the research status of cryptocurrency security supervision technologies in all three stages was surveyed with emphasis. Finally, the ecosystems of cryptocurrency regulatory platforms and the current challenges faced by regulatory technologies were summarized, and future research directions of cryptocurrency regulatory technologies were prospected to provide a reference for subsequent research.

    Review of object pose estimation in RGB images based on deep learning
    Yi WANG, Jie XIE, Jia CHENG, Liwei DOU
    Journal of Computer Applications    2023, 43 (8): 2546-2555.   DOI: 10.11772/j.issn.1001-9081.2022071022
    Abstract views: 647 | HTML views: 27 | PDF (858KB) downloads: 455

    6 Degree of Freedom (DoF) pose estimation is a key technology in computer vision and robotics, and has become a crucial task in fields such as robot manipulation, automatic driving and augmented reality: it estimates the 6 DoF pose of an object, that is, 3 DoF translation and 3 DoF rotation, from a given input image. Firstly, the concept of 6 DoF pose and the problems of traditional methods based on feature point correspondence, template matching, and three-dimensional feature descriptors were introduced. Then, the current mainstream deep learning-based 6 DoF pose estimation algorithms were introduced in detail from the perspectives of feature correspondence-based, pixel voting-based and regression-based methods, as well as methods oriented to multi-object instances, synthetic data and category-level estimation. At the same time, the datasets and evaluation metrics commonly used in pose estimation were summarized, and some algorithms were evaluated experimentally to show their performance. Finally, the challenges and key future research directions of pose estimation were given.

    Acceleration and optimization of quantum computing simulator implemented on new Sunway supercomputer
    Xinmin SHI, Yong LIU, Yaojian CHEN, Jiawei SONG, Xin LIU
    Journal of Computer Applications    2023, 43 (8): 2486-2492.   DOI: 10.11772/j.issn.1001-9081.2022091456
    Abstract views: 430 | HTML views: 59 | PDF (2000KB) downloads: 439

    Aiming at the problems of the gradual scaling-up of quantum hardware and insufficient classical simulation speed, two optimization methods were proposed for the quantum computing simulator implemented on the new Sunway supercomputer. Firstly, the tensor contraction operator library SWTT was reconstructed by improving the tensor transposition and computation strategies, which improved the computing kernel efficiency of partial tensor contractions and reduced redundant memory access. Secondly, a balance between the complexity and efficiency of path computation was achieved by a contraction path adjustment method based on data locality optimization. Test results show that the operator library improvement raises the simulation efficiency of the "Sycamore" quantum supremacy circuit by 5.4% and the single-step tensor contraction efficiency by up to 49.7 times, while the path adjustment method improves the floating-point efficiency by about 4 times at the cost of inflating the path computational complexity by a factor of 2. With the two optimization methods, the single-precision and mixed-precision floating-point efficiencies for simulating a million-amplitude sampling of Google’s 53-qubit, 20-layer random quantum circuit are improved from 3.98% and 1.69% to 18.48% and 7.42% respectively, and the theoretically estimated simulation time is reduced from 470 s to 226 s for single precision and from 304 s to 134 s for mixed precision, verifying that the two methods significantly improve the speed of quantum computational simulation.
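    The role of the contraction path can be illustrated with NumPy (this is not the SWTT operator library): einsum_path reports the estimated cost of contracting the same small tensor network under different path-search strategies.

```python
# Toy illustration of why the contraction path matters (NumPy, not SWTT):
# einsum_path estimates the floating-point cost of a contraction order.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(2,) * 6)
b = rng.normal(size=(2,) * 6)
c = rng.normal(size=(2,) * 6)
expr = "abcdef,defghi,ghijkl->abcjkl"    # small three-tensor network

for strategy in ("greedy", "optimal"):
    path, report = np.einsum_path(expr, a, b, c, optimize=strategy)
    print(f"--- {strategy} path: {path}")
    print(report)                        # includes naive vs. optimized FLOP counts

result = np.einsum(expr, a, b, c, optimize="optimal")
print(result.shape)                      # (2, 2, 2, 2, 2, 2)
```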

    Current research status and challenges of blockchain in supply chain applications
    Lina GE, Jingya XU, Zhe WANG, Guifen ZHANG, Liang YAN, Zheng HU
    Journal of Computer Applications    2023, 43 (11): 3315-3326.   DOI: 10.11772/j.issn.1001-9081.2022111758
    Abstract views: 360 | PDF (2371KB) downloads: 433

    The supply chain faces many challenges in the development process, including how to ensure the authenticity and reliability of information as well as the security of the traceability system in the process of product traceability, the security of products in the process of logistics, and the trust management in the financing process of small and medium enterprises. With characteristics of decentralization, immutability and traceability, blockchain provides efficient solutions to supply chain management, but there are some technical challenges in the actual implementation process. To study the applications of blockchain technology in the supply chain, some typical applications were discussed and analyzed. Firstly, the concept of supply chain and the current challenges were briefly introduced. Secondly, problems faced by blockchain in three different supply chain fields of information flow, logistics flow and capital flow were described, and a comparative analysis of related solutions was given. Finally, the technical challenges faced by blockchain in the practical applications of supply chain were summarized, and future applications were prospected.

    Quantum K-Means algorithm based on Hamming distance
    Jing ZHONG, Chen LIN, Zhiwei SHENG, Shibin ZHANG
    Journal of Computer Applications    2023, 43 (8): 2493-2498.   DOI: 10.11772/j.issn.1001-9081.2022091469
    Abstract views: 316 | HTML views: 34 | PDF (1623KB) downloads: 422

    K-Means algorithms typically utilize Euclidean distance to calculate the similarity between data points when dealing with large-scale heterogeneous data, which suffers from low efficiency and high computational complexity. Inspired by the significant advantage of Hamming distance in similarity calculation, a Quantum K-Means Hamming (QKMH) algorithm was proposed to calculate similarity. First, the data were prepared and encoded into quantum states, and the quantum Hamming distance was used to calculate the similarity between the points to be clustered and the K cluster centers. Then, Grover’s minimum search algorithm was improved to find the cluster center closest to each point to be clustered. Finally, these steps were repeated until the designated number of iterations was reached or the cluster centers no longer changed. Based on the quantum simulation computing framework Qiskit, the proposed algorithm was validated on the MNIST handwritten digit dataset and compared with various traditional and improved methods. Experimental results show that the F1 score of the QKMH algorithm is improved by 10 percentage points compared with the Manhattan distance-based quantum K-Means algorithm and by 4.6 percentage points compared with the latest optimized Euclidean distance-based quantum K-Means algorithm, and the time complexity of QKMH is lower than those of the compared algorithms.
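    A classical analogue of the clustering loop is sketched below: binary samples are assigned to the nearest center by Hamming distance and centers are updated by bitwise majority vote. The quantum state preparation and Grover minimum search of QKMH are not reproduced.

```python
# Classical analogue of the Hamming-distance clustering loop (the quantum state
# preparation and Grover minimum search are not reproduced here).
import numpy as np

def hamming_kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Hamming distance = number of differing bits between sample and center.
        dists = (X[:, None, :] != centers[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        new_centers = centers.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # Bitwise majority vote plays the role of the mean in K-Means.
                new_centers[j] = (members.mean(axis=0) >= 0.5).astype(X.dtype)
        if np.array_equal(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

X = np.random.default_rng(1).integers(0, 2, size=(200, 16))   # binary samples
labels, centers = hamming_kmeans(X, k=3)
print(np.bincount(labels), centers[0])
```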

    Survey on anomaly detection algorithms for unmanned aerial vehicle flight data
    Chaoshuai QI, Wensi HE, Yi JIAO, Yinghong MA, Wei CAI, Suping REN
    Journal of Computer Applications    2023, 43 (6): 1833-1841.   DOI: 10.11772/j.issn.1001-9081.2022060808
    Abstract views: 360 | HTML views: 23 | PDF (3156KB) downloads: 417

    Focused on the issue of anomaly detection for Unmanned Aerial Vehicle (UAV) flight data in the field of UAV airborne health monitoring, firstly, the characteristics of UAV flight data, the common flight data anomaly types and the corresponding demands on anomaly detection algorithms for UAV flight data were presented. Then, the existing research on UAV flight data anomaly detection algorithms was reviewed, and these algorithms were classified into three categories: prior-knowledge based algorithms for qualitative anomaly detection, model-based algorithms for quantitative anomaly detection, and data-driven anomaly detection algorithms. At the same time, the application scenarios, advantages and disadvantages of the above algorithms were analyzed. Finally, the current problems and challenges of UAV anomaly detection algorithms were summarized, and key development directions of the field of UAV anomaly detection were prospected, thereby providing reference ideas for future research.

    UAV cluster cooperative combat decision-making method based on deep reinforcement learning
    Lin ZHAO, Ke LYU, Jing GUO, Chen HONG, Xiancai XIANG, Jian XUE, Yong WANG
    Journal of Computer Applications    2023, 43 (11): 3641-3646.   DOI: 10.11772/j.issn.1001-9081.2022101511
    Abstract views: 567 | HTML views: 12 | PDF (2944KB) downloads: 406

    When an Unmanned Aerial Vehicle (UAV) cluster attacks ground targets, it is divided into two formations: a strike UAV cluster that attacks the targets and an auxiliary UAV cluster that pins down the enemy. When the auxiliary UAVs choose between the action strategies of attacking aggressively or conserving strength, the mission scenario is similar to a public goods game in which the benefit to a cooperator is less than that to a defector. Based on this, a decision-making method for cooperative combat of UAV clusters based on deep reinforcement learning was proposed. First, a UAV cluster combat model based on the public goods game was built to simulate the conflict between individual and group interests in the cooperation of intelligent UAV clusters. Then, the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm was used to solve for the most reasonable combat decision of the auxiliary UAV cluster, achieving cluster victory at minimum loss cost. Training and experiments were performed with different numbers of UAVs. The results show that, compared with the training effects of the IDQN (Independent Deep Q-Network) and ID3QN (Imitative Dueling Double Deep Q-Network) algorithms, the proposed algorithm has the best convergence, its winning rate reaches 100% with four auxiliary UAVs, and it also significantly outperforms the comparison algorithms with other numbers of UAVs.
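    The public goods payoff structure behind the auxiliary UAVs' dilemma can be sketched as follows; the cost and multiplier are illustrative numbers, and the MADDPG training itself is not shown.

```python
# Sketch of the public goods payoff structure (illustrative parameters; the
# MADDPG training loop is not shown): a lone defector earns more than each
# cooperator, although full cooperation maximises the group's total payoff.
def public_goods_payoffs(actions, cost=1.0, multiplier=1.6):
    """actions: list of booleans, True = cooperate (aggressively pin down the enemy)."""
    pot = multiplier * cost * sum(actions)   # amplified joint contribution
    share = pot / len(actions)               # shared equally by the whole group
    return [share - cost if cooperated else share for cooperated in actions]

print(public_goods_payoffs([True, True, True, True]))    # all cooperate: 0.6 each
print(public_goods_payoffs([True, True, True, False]))   # the lone defector gets 1.2
```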

    Parallel algorithm of betweenness centrality for dynamic networks
    Zhenyu LIU, Chaokun WANG, Gaoyang GUO
    Journal of Computer Applications    2023, 43 (7): 1987-1993.   DOI: 10.11772/j.issn.1001-9081.2022071121
    Abstract views: 392 | HTML views: 57 | PDF (1663KB) downloads: 393

    Betweenness centrality is a common metric for evaluating the importance of nodes in a graph. However, the update efficiency of betweenness centrality in large-scale dynamic graphs is not high enough to meet the application requirements. With the development of multi-core technology, algorithm parallelization has become one of the effective ways to solve this problem. Therefore, a Parallel Algorithm of Betweenness centrality for dynamic networks (PAB) was proposed. Firstly, the time cost of redundant point pairs was reduced through operations such as community filtering, equidistant pruning and classification screening. Then, the determinacy of the algorithm was analyzed and processed to realize parallelization. Comparison experiments were conducted on real datasets and synthetic datasets, and the results show that the update efficiency of PAB is 4 times that of the latest batch-iCENTRAL algorithm on average when adding edges. It can be seen that the proposed algorithm can improve the update efficiency of betweenness centrality in dynamic networks effectively.
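    The parallelization principle can be sketched as follows (this is only the baseline idea, not PAB's community filtering, equidistant pruning or classification screening): Brandes' per-source dependency accumulation is independent across sources, so sources can be distributed over processes and their contributions summed.

```python
# Baseline sketch: exact betweenness centrality parallelized over source vertices
# (Brandes' accumulation per source is independent). PAB's pruning/filtering steps
# are not reproduced; the toy graph is arbitrary.
from collections import deque
from multiprocessing import Pool

ADJ = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}  # undirected toy graph

def source_dependencies(s):
    sigma = {v: 0 for v in ADJ}; sigma[s] = 1      # shortest-path counts
    dist = {v: -1 for v in ADJ}; dist[s] = 0
    preds = {v: [] for v in ADJ}
    order, queue = [], deque([s])
    while queue:                                   # BFS for an unweighted graph
        v = queue.popleft(); order.append(v)
        for w in ADJ[v]:
            if dist[w] < 0:
                dist[w] = dist[v] + 1; queue.append(w)
            if dist[w] == dist[v] + 1:
                sigma[w] += sigma[v]; preds[w].append(v)
    delta = {v: 0.0 for v in ADJ}
    for w in reversed(order):                      # dependency accumulation
        for v in preds[w]:
            delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
    delta[s] = 0.0
    return delta

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        partials = pool.map(source_dependencies, list(ADJ))
    bc = {v: sum(p[v] for p in partials) / 2 for v in ADJ}  # /2 for undirected graphs
    print(bc)
```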

    Survey of data-driven intelligent cloud-edge collaboration
    Pengxin TIAN, Guannan SI, Zhaoliang AN, Jianxin LI, Fengyu ZHOU
    Journal of Computer Applications    2023, 43 (10): 3162-3169.   DOI: 10.11772/j.issn.1001-9081.2022091418
    Abstract views: 555 | HTML views: 29 | PDF (1772KB) downloads: 391

    With the rapid development of the Internet of Things (IoT), large amounts of data generated in edge scenarios such as sensors often need to be transmitted to cloud nodes for processing, which brings huge transmission cost and processing delay. Cloud-edge collaboration provides a solution to these problems. Firstly, on the basis of a comprehensive investigation and analysis of the development process of cloud-edge collaboration, and combined with current research ideas and progress in intelligent cloud-edge collaboration, data acquisition and analysis, computation offloading and model-based intelligent optimization technologies in the cloud-edge architecture were analyzed and discussed with emphasis. Secondly, the functions and applications of various technologies in intelligent cloud-edge collaboration were analyzed in depth from the edge side and the cloud side respectively, and real-world application scenarios of intelligent cloud-edge collaboration technology were discussed. Finally, the current challenges and future development directions of intelligent cloud-edge collaboration were pointed out.

    Application review of deep models in medical image segmentation: from U-Net to Transformer
    Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023071059
    Available online: 26 October 2023

    Hyperparameter optimization for neural network based on improved real coding genetic algorithm
    Wei SHE, Yang LI, Lihong ZHONG, Defeng KONG, Zhao TIAN
    Journal of Computer Applications    2024, 44 (3): 671-676.   DOI: 10.11772/j.issn.1001-9081.2023040441
    Abstract views: 309 | HTML views: 37 | PDF (1532KB) downloads: 381

    To address the problems of poor results, easily falling into suboptimal solutions and low efficiency in neural network hyperparameter optimization, an Improved Real Coding Genetic Algorithm (IRCGA) based hyperparameter optimization algorithm for neural networks, named IRCGA-DNN (IRCGA for Deep Neural Network), was proposed. Firstly, a real-coded form was used to represent the values of hyperparameters, which made the hyperparameter search space more flexible. Then, a hierarchical proportional selection operator was introduced to enhance the diversity of the solution set. Finally, improved single-point crossover and mutation operators were designed to explore the hyperparameter space more thoroughly and to improve the efficiency and quality of the optimization algorithm. Two simulation datasets were used to evaluate IRCGA-DNN in terms of damage effectiveness prediction and convergence efficiency. The experimental results on the two datasets indicate that, compared with GA-DNN (Genetic Algorithm for Deep Neural Network), the proposed algorithm reduces the number of convergence iterations by 8.7% and 13.6% respectively with little difference in MSE (Mean Square Error), and compared with IGA-DNN (Improved Genetic Algorithm for Deep Neural Network), IRCGA-DNN reduces the number of convergence iterations by 22.2% and 13.6% respectively. The results show that the proposed algorithm is better in both convergence speed and prediction performance, and is suitable for hyperparameter optimization of neural networks.
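    A minimal real-coded GA skeleton on a toy objective is sketched below; the plain proportional selection, single-point crossover and Gaussian mutation here stand in for the paper's hierarchical proportional selection and improved operators, and every numeric choice is an assumption.

```python
# Minimal real-coded GA skeleton (illustrative; the paper's hierarchical
# proportional selection and improved operators are not reproduced). Each
# individual is a real-valued vector, e.g. of hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                      # toy objective: maximise the negative sphere
    return -np.sum(x ** 2)

def select(pop, fits):               # fitness-proportional (roulette) selection
    probs = fits - fits.min() + 1e-9
    probs /= probs.sum()
    return pop[rng.choice(len(pop), size=len(pop), p=probs)]

def crossover(a, b):                 # single-point crossover on real-coded vectors
    point = rng.integers(1, len(a))
    return np.concatenate([a[:point], b[point:]])

def mutate(x, rate=0.1, sigma=0.1):  # Gaussian mutation
    mask = rng.random(len(x)) < rate
    return x + mask * rng.normal(0.0, sigma, size=len(x))

pop = rng.uniform(-5, 5, size=(30, 4))
for _ in range(50):
    fits = np.array([fitness(ind) for ind in pop])
    parents = select(pop, fits)
    pop = np.array([mutate(crossover(parents[i], parents[(i + 1) % len(parents)]))
                    for i in range(len(parents))])

print("best individual:", pop[np.argmax([fitness(ind) for ind in pop])])
```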

    Poisoning attack detection scheme based on generative adversarial network for federated learning
    Qian CHEN, Zheng CHAI, Zilong WANG, Jiawei CHEN
    Journal of Computer Applications    2023, 43 (12): 3790-3798.   DOI: 10.11772/j.issn.1001-9081.2022121831
    Abstract views: 566 | HTML views: 26 | PDF (2367KB) downloads: 365

    Federated Learning (FL) has emerged as a novel privacy-preserving Machine Learning (ML) paradigm. However, the distributed training structure of FL is vulnerable to poisoning attacks, in which adversaries contaminate the global model by uploading poisoned models, slowing the convergence and degrading the prediction accuracy of the global model. To solve this problem, a poisoning attack detection scheme based on Generative Adversarial Network (GAN) was proposed. Firstly, benign local models were fed into the GAN to produce testing samples. Then, the testing samples were used to detect the local models uploaded by the clients. Finally, the poisoned models were eliminated according to the testing metrics. Meanwhile, two testing metrics, F1 score loss and accuracy loss, were defined to detect poisoned models and to extend the detection scope from a single type of poisoning attack to all types of poisoning attacks. Besides, a threshold determination method was designed to deal with misjudgment, ensuring robustness against misjudgment. Experimental results on the MNIST and Fashion-MNIST datasets show that the proposed scheme can generate high-quality testing samples and then detect and eliminate poisoned models. Compared with global models trained with a detection scheme that directly gathers test data from clients and with a detection scheme that generates test data and uses test accuracy as the metric, the global model trained with the proposed scheme achieves an accuracy improvement of 2.7 to 12.2 percentage points.
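    The filtering step can be sketched as follows; the GAN that generates testing samples and the federated training loop are not shown, `evaluate` is a hypothetical callable returning accuracy on the generated samples, and the threshold is an arbitrary example.

```python
# Sketch of the accuracy-loss filtering step only (the GAN and the FL loop are
# not shown; `evaluate` is a hypothetical callable and the threshold is arbitrary).
def filter_poisoned(client_models, global_model, evaluate, threshold=0.1):
    baseline = evaluate(global_model)
    kept = []
    for client_id, model in client_models.items():
        accuracy_loss = baseline - evaluate(model)   # the "accuracy loss" metric
        if accuracy_loss <= threshold:
            kept.append(client_id)                   # only these enter aggregation
    return kept

# Toy usage with stand-in "models" that are just accuracy numbers.
uploads = {"client_1": 0.91, "client_2": 0.88, "client_3": 0.42}  # client_3 looks poisoned
print(filter_poisoned(uploads, global_model=0.90, evaluate=lambda m: m))
```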
