
Table of Contents

    10 April 2024, Volume 44 Issue 4
    The 9th National Conference on Intelligent Information Processing (NCIIP 2023)
    Device-to-device content sharing mechanism based on knowledge graph
    Xiaoyan ZHAO, Yan KUANG, Menghan WANG, Peiyan YUAN
    2024, 44(4):  995-1001.  DOI: 10.11772/j.issn.1001-9081.2023040500
    Abstract | HTML | PDF (3288KB)

    Device-to-Device (D2D) communication leverages the local computing and caching capabilities of the edge network to meet future mobile users' demand for low-latency, energy-efficient content sharing. Content sharing efficiency in edge networks depends not only on user social relationships but also heavily on the characteristics of end devices, such as computing, storage, and residual energy resources. Therefore, a D2D content sharing mechanism was proposed to maximize energy efficiency using multi-dimensional user-device-content association features, taking device heterogeneity, user sociality, and interest differences into account. Firstly, the multi-objective constrained problem of maximizing user cost-benefit was transformed into an optimal node selection and power control problem, and the multi-dimensional knowledge association features and graph model for user-device-content were constructed by structurally processing device-related multi-dimensional features such as computing and storage resources. Then, methods for measuring user willingness with respect to device attributes and social attributes were studied, and a sharing willingness measurement method based on user sociality and device graphs was proposed. Finally, according to user sharing willingness, a D2D collaboration cluster oriented to content sharing was constructed, and a power control algorithm based on sharing willingness was designed to maximize network sharing performance. Experimental results on a real user device dataset and the Infocom06 dataset show that, compared with the nearest-selection algorithm and a selection algorithm that ignores device willingness, the proposed power control algorithm based on sharing willingness improves the system sum rate by about 97.2% and 11.1%, increases user satisfaction by about 72.7% and 4.3%, and improves energy efficiency by about 57.8% and 9.7%, respectively. This verifies the effectiveness of the proposed algorithm in terms of transmission rate, energy efficiency, and user satisfaction.
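    To make the energy-efficiency objective concrete, here is a minimal sketch (not the authors' formulation): a link's rate follows the Shannon formula, energy efficiency is rate per watt including a fixed circuit power, and the best discrete transmit power level is selected. All names and numbers (gain, noise, bandwidth, circuit power) are hypothetical.

```python
import math

def link_rate(power_w, gain, noise_w=1e-9, bandwidth_hz=1e6):
    """Shannon rate (bit/s) of a single D2D link."""
    return bandwidth_hz * math.log2(1.0 + power_w * gain / noise_w)

def best_power(levels_w, gain, circuit_w=0.05):
    """Pick the discrete transmit power maximizing energy efficiency,
    i.e. rate per watt including a fixed circuit power."""
    return max(levels_w, key=lambda p: link_rate(p, gain) / (p + circuit_w))

levels = [0.01, 0.05, 0.1, 0.2]
p_star = best_power(levels, gain=1e-6)
```

    Note that the energy-efficient choice is typically not the highest power: rate grows only logarithmically with power, so beyond a point extra watts cost more than they earn.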

    Domain generalization method of phase-frequency fusion from independent perspective
    Bin XIAO, Mo YANG, Min WANG, Guangyuan QIN, Huan LI
    2024, 44(4):  1002-1009.  DOI: 10.11772/j.issn.1001-9081.2023050623
    Abstract | HTML | PDF (2055KB)

    Existing Domain Generalization (DG) methods process domain features poorly and generalize weakly, so a method based on feature independence in the frequency domain was proposed to address the domain generalization problem. Firstly, a frequency-domain decomposition algorithm was designed to obtain domain-independent features from phase information by applying the Fast Fourier Transform (FFT) to the depth features of an image, improving the recognition of domain-independent features. Secondly, from the independence perspective, the correlation among attributes of the frequency-domain features was further eliminated by weighting the sample features, and the most effective domain-independent features were extracted, solving the poor generalization caused by correlation between sample features. Finally, an amplitude fusion strategy was proposed to narrow the distance between the source domain and the target domain, further improving the generalization ability of the model to unknown domains. Experimental results on the popular image domain generalization datasets PACS and VLCS show that the average accuracy of the proposed method is 0.44 and 0.59 percentage points higher, respectively, than that of StableNet, and that the proposed method achieves excellent performance on all datasets.
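    The phase/amplitude split behind the method can be illustrated with a toy 1-D sketch (assuming a naive DFT in place of the FFT used in the paper): the phase of the source signal is kept as the domain-independent part, while the amplitude spectra of source and target are mixed.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real parts."""
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n).real
            for t in range(n)]

def fuse_amplitude(x_src, x_tgt, lam=0.5):
    """Keep the phase of x_src (treated as the domain-independent part)
    and mix its amplitude spectrum with that of x_tgt."""
    fused = [cmath.rect((1 - lam) * abs(s) + lam * abs(t), cmath.phase(s))
             for s, t in zip(dft(x_src), dft(x_tgt))]
    return idft(fused)

src = [1.0, 2.0, 3.0, 4.0]
tgt = [4.0, 3.0, 2.0, 1.0]
mixed = fuse_amplitude(src, tgt, lam=0.5)
```

    With `lam=0` the source signal is reconstructed exactly, which shows that amplitude and phase together carry all the information; varying `lam` shifts only the amplitude (style) component.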

    Node coverage optimization of wireless sensor network based on multi-strategy improved butterfly optimization algorithm
    Xiuxi WEI, Maosong PENG, Huajuan HUANG
    2024, 44(4):  1009-1017.  DOI: 10.11772/j.issn.1001-9081.2023040501
    Abstract | HTML | PDF (1855KB)

    Aiming at the problems of low coverage rate and uneven node distribution in Wireless Sensor Networks (WSNs), a node coverage optimization strategy based on a Multi-strategy Improved Butterfly Optimization Algorithm (MIBOA) was proposed. Firstly, the basic Butterfly Optimization Algorithm (BOA) was combined with the Sparrow Search Algorithm (SSA) to improve the search process. Secondly, an adaptive weight coefficient was introduced to improve optimization accuracy and convergence speed. Finally, the current best individual was perturbed by Cauchy mutation to improve the robustness of the algorithm. Optimization experiments on benchmark functions show that MIBOA finds the optimal value of each test function within about 3 seconds, and its average convergence accuracy is 97.96% higher than that of BOA. MIBOA was then applied to the WSN node coverage optimization problem. Compared with the optimization results of BOA and SSA, the node coverage rate was improved by at least 3.63 percentage points. Compared with the Improved Grey Wolf Optimization algorithm (IGWO), the deployment time was shortened by 145.82 seconds; compared with the Improved Whale Optimization Algorithm (IWOA), the node coverage rate was increased by 0.20 percentage points and the time was shortened by 1 112.61 seconds. In conclusion, MIBOA improves the node coverage rate, reduces the redundant coverage rate, and effectively prolongs the lifetime of WSNs.
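    As an illustrative sketch of the Cauchy-mutation step only (the scale and the greedy acceptance rule below are assumptions, not the paper's exact settings): standard Cauchy noise is generated by inverse-transform sampling, and the perturbed best individual is kept only when it improves fitness.

```python
import math
import random

def cauchy_mutation(best, fitness, scale=0.1, rng=None):
    """Perturb the current best solution with standard Cauchy noise
    (inverse-transform sampling) and keep the candidate only if it
    improves (lowers) the fitness value."""
    rng = rng or random.Random(0)          # fixed seed for reproducibility
    candidate = [x + scale * math.tan(math.pi * (rng.random() - 0.5))
                 for x in best]
    return candidate if fitness(candidate) < fitness(best) else best

sphere = lambda v: sum(x * x for x in v)   # toy benchmark function
new = cauchy_mutation([0.5, -0.3], sphere)
```

    The heavy tails of the Cauchy distribution occasionally produce large jumps, which is what lets the mutation pull the search out of local optima.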

    Network security risk assessment method for CTCS based on α-cut triangular fuzzy number and attack tree
    Honglei YAO, Jiqiang LIU, Endong TONG, Wenjia NIU
    2024, 44(4):  1018-1026.  DOI: 10.11772/j.issn.1001-9081.2023050584
    Abstract | HTML | PDF (2359KB)

    To solve the problems of uncertain influencing factors and difficult indicator quantification in risk assessment of industrial control networks, a method based on fuzzy theory and attack trees was proposed and verified on the Chinese Train Control System (CTCS). First, an attack tree model for CTCS was constructed based on network security threats and system vulnerabilities, and α-cut Triangular Fuzzy Numbers (TFNs) were used to calculate the interval probabilities of leaf nodes and attack paths. Then, the Analytic Hierarchy Process (AHP) was adopted to establish a mathematical model of security event losses and obtain the final risk assessment result. Experimental results demonstrate that the proposed method performs system risk assessment effectively, predicts attack paths successfully, and reduces the influence of subjective factors. With the proposed method, the risk assessment result is more realistic and provides a reference and basis for selecting security protection strategies.
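    A minimal sketch of how α-cut triangular fuzzy numbers can propagate interval probabilities through attack tree gates (the leaf TFNs below are hypothetical values, not taken from the paper):

```python
def alpha_cut(tfn, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

def and_gate(intervals):
    """AND node of the attack tree: every child step must succeed."""
    lo = hi = 1.0
    for l, h in intervals:
        lo *= l; hi *= h
    return (lo, hi)

def or_gate(intervals):
    """OR node: the attack succeeds if at least one child succeeds."""
    lo = hi = 1.0
    for l, h in intervals:
        lo *= 1 - l; hi *= 1 - h
    return (1 - lo, 1 - hi)

leaf1 = alpha_cut((0.1, 0.3, 0.5), 0.5)   # interval (0.2, 0.4)
leaf2 = alpha_cut((0.2, 0.4, 0.6), 0.5)   # interval (0.3, 0.5)
path = and_gate([leaf1, leaf2])
```

    Raising α narrows the intervals toward the most plausible value b, so the analyst can trade off confidence against precision in the assessed attack probabilities.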

    Network abnormal traffic detection based on port attention and convolutional block attention module
    Bin XIAO, Yun GAN, Min WANG, Xingpeng ZHANG, Zhaoxing WANG
    2024, 44(4):  1027-1034.  DOI: 10.11772/j.issn.1001-9081.2023050649
    Abstract | HTML | PDF (1692KB)

    Network abnormal traffic detection is an important part of network security protection. Current deep-learning-based abnormal traffic detection methods treat the port number attribute the same as other traffic attributes, ignoring its importance. Following the idea of attention, a novel abnormal traffic detection model based on a Convolutional Neural Network (CNN) combining a Port Attention Module (PAM) and a Convolutional Block Attention Module (CBAM) was proposed to improve detection performance. Firstly, the original network traffic was taken as the input of PAM; the port number attribute was separated and fed to a fully connected layer, the learned port attention weight was obtained, and the traffic data after port attention was output by dot-multiplying the weight with the other traffic attributes. Then, the traffic data was converted into a grayscale map, and CNN with CBAM was used to extract the channel and spatial information of the feature map more fully. Finally, the focal loss function was used to address data imbalance. The proposed PAM has the advantages of few parameters, plug-and-play use, and universal applicability. On the CICIDS2017 dataset, the accuracy of the proposed model is 99.18% for the binary classification task of abnormal traffic detection and 99.07% for the multi-class classification task, and it also achieves a high recognition rate for classes with only a few training samples.
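    The focal loss used to counter class imbalance can be sketched for a single binary prediction as follows (the α and γ values are common defaults, assumed rather than taken from the paper):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss for one binary prediction: the (1 - pt)^gamma factor
    down-weights easy, well-classified samples so that rare-class
    samples dominate the training signal."""
    pt = p if y == 1 else 1 - p
    a = alpha if y == 1 else 1 - alpha
    return -a * (1 - pt) ** gamma * math.log(pt)

easy = focal_loss(0.95, 1)   # confident and correct: near-zero loss
hard = focal_loss(0.30, 1)   # badly misclassified positive: large loss
```

    With γ = 2, a sample predicted at 0.95 contributes thousands of times less loss than one predicted at 0.30, which is exactly the re-balancing effect the abstract relies on.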

    Artificial intelligence
    Survey of extractive text summarization based on unsupervised learning and supervised learning
    Xiawuji, Heming HUANG, Gengzangcuomao, Yutao FAN
    2024, 44(4):  1035-1048.  DOI: 10.11772/j.issn.1001-9081.2023040537
    Abstract | HTML | PDF (1575KB)

    Compared with abstractive (generative) summarization methods, extractive summarization methods are easier to implement, more readable, and more widely used. At present, the literature on extractive summarization mostly analyzes and reviews specific methods or fields, and there is no multi-faceted, multi-lingual systematic review. Therefore, the meaning of text summarization was discussed, related literature was systematically reviewed, and extractive text summarization methods based on unsupervised and supervised learning were analyzed comprehensively from multiple dimensions. First, the development of text summarization techniques was reviewed, and different extractive summarization methods were analyzed, including methods based on rules, Term Frequency-Inverse Document Frequency (TF-IDF), centrality, latent semantics, deep learning, graph ranking, feature engineering, and pre-training, with comparisons of the advantages and disadvantages of the different algorithms. Secondly, text summarization datasets in different languages and popular evaluation metrics were introduced in detail. Finally, problems and challenges in extractive text summarization research were discussed, and solutions and research trends were presented.

    Aspect sentiment triplet extraction based on aspect-aware attention enhancement
    Longtao GAO, Nana LI
    2024, 44(4):  1049-1057.  DOI: 10.11772/j.issn.1001-9081.2023040411
    Abstract | HTML | PDF (2126KB)

    For fine-grained sentiment analysis in Natural Language Processing (NLP), in order to explore the influence of Pre-trained Language Models (PLMs) with structural biases on the end-to-end sentiment triplet extraction task, and to solve the low fault tolerance of aspect semantic feature dependence common in previous studies, an Aspect-aware attention Enhanced Graph Convolutional Network (AE-GCN) model combining an aspect-aware attention mechanism and a Graph Convolutional Network (GCN) was proposed for aspect sentiment triplet extraction. Firstly, multiple types of relations were introduced for the aspect sentiment triplet extraction task. Then, these relations were embedded into adjacency tensors between words in the sentence using a biaffine attention mechanism, while the aspect-aware attention mechanism was introduced to obtain the sentence attention score matrix and further mine aspect-related semantic features. Next, the sentence was converted into a multi-channel graph for graph convolution, treating words as nodes and relation adjacency tensors as edges, to learn relation-aware node representations. Finally, an effective word-pair representation refinement strategy was used to determine whether word pairs matched, considering the implicit results of aspect and opinion extraction. Experimental results show that, on the ASTE-D1 benchmark dataset, the F1 values of the proposed model on the 14res, 14lap, 15res and 16res sub-datasets are improved by 0.20, 0.21, 1.25 and 0.26 percentage points, respectively, compared with the Enhanced Multi-Channel Graph Convolutional Network (EMC-GCN) model; on the ASTE-D2 benchmark dataset, the F1 values on the 14lap, 15res and 16res sub-datasets are increased by 0.42, 0.31 and 2.01 percentage points. Thus, the proposed model achieves considerable improvements in precision and effectiveness over EMC-GCN.

    Aspect-level sentiment analysis model based on alternating-attention mechanism and graph convolutional network
    Xianfeng YANG, Yilei TANG, Ziqiang LI
    2024, 44(4):  1058-1064.  DOI: 10.11772/j.issn.1001-9081.2023040497
    Abstract | HTML | PDF (943KB)

    Aspect-level sentiment analysis aims to predict the sentiment polarity of a specific target in a given text. Aiming at the problems of ignoring the syntactic relationship between aspect words and context and of the reduced attention differences caused by average pooling, an aspect-level sentiment analysis model based on an Alternating-Attention (AA) mechanism and Graph Convolutional Network (AA-GCN) was proposed. Firstly, a Bidirectional Long Short-Term Memory (Bi-LSTM) network was used to semantically model the context and aspect words. Secondly, a GCN based on the syntactic dependency tree was used to learn location information and dependencies, and the AA mechanism was used for multi-level interactive learning to adaptively adjust attention to the target words. Finally, the final classification basis was obtained by concatenating the corrected aspect features and context features. Compared with the Target-Dependent Graph Attention Network (TD-GAT), the accuracy of the proposed model on four public datasets increased by 1.13%-2.67%, and the F1 values on five public datasets increased by 0.98%-4.89%, demonstrating the effectiveness of using syntactic relationships and increasing keyword attention.
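    A single graph-convolution step over a syntactic dependency graph can be sketched as mean aggregation with self-loops (a simplification of the model's actual layers; all matrices below are toy values):

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution step H' = D^-1 (A + I) H W: each node
    averages its own and its neighbours' features, then applies a
    linear map."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    out = []
    for i in range(n):
        deg = sum(a_hat[i])
        agg = [sum(a_hat[i][j] * feats[j][c] for j in range(n)) / deg
               for c in range(len(feats[0]))]
        out.append([sum(agg[c] * weight[c][m] for c in range(len(agg)))
                    for m in range(len(weight[0]))])
    return out

# two syntactically linked words with one-hot features, identity weights
h = gcn_layer([[0, 1], [1, 0]], [[1.0, 0.0], [0.0, 1.0]],
              [[1.0, 0.0], [0.0, 1.0]])
```

    After one step each word's representation already mixes in its dependency neighbour, which is how syntactic relationships between aspect words and context enter the model.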

    Advanced computing
    Offensive speech detection with irony mechanism
    Haihan WANG, Yan ZHU
    2024, 44(4):  1065-1071.  DOI: 10.11772/j.issn.1001-9081.2023040533
    Abstract | HTML | PDF (2696KB)

    Offensive speech on the internet seriously disrupts normal network order and damages the environment for healthy communication. Existing detection technologies focus on distinctive features in the text and struggle to discover more implicit forms of attack. To address these problems, an offensive speech detection model incorporating an irony mechanism, BSWD (Bidirectional Encoder Representation from Transformers-based Sarcasm and Word Detection), was proposed. First, an irony-mechanism-based model, Sarcasm-BERT, was proposed to detect semantic conflicts in speech. Secondly, a fine-grained word-level offensive feature extraction model, WordsDetect, was proposed to detect offensive words in speech. Finally, BSWD was obtained by fusing the two models. Experimental results show that the accuracy, precision, recall, and F1 score of the proposed model are improved by about 2% overall compared with the BERT (Bidirectional Encoder Representation from Transformers) and HateBERT methods. BSWD significantly improves detection performance and better detects implicit offensive speech. Compared with the SKS (Sentiment Knowledge Sharing) and BiCHAT (Bi-LSTM with deep CNN and Hierarchical ATtention) methods, BSWD has stronger generalization ability and robustness. These results verify that BSWD can effectively detect implicit offensive speech.

    Artificial intelligence
    Technology term recognition with comprehensive constituency parsing
    Junjie ZHU, Li YU, Shengwen LI, Changzheng ZHOU
    2024, 44(4):  1072-1079.  DOI: 10.11772/j.issn.1001-9081.2023040532
    Abstract | HTML | PDF (1342KB)

    Technology terms are used to communicate information accurately in the field of science and technology. Automatically recognizing technology terms from text can help experts and the public discover, recognize, and apply new technologies, which is of great value, but unsupervised technology term recognition methods still have limitations such as complex rules and poor adaptability. To enhance the ability to recognize technology terms from text, an unsupervised technology term recognition method was proposed. Firstly, a syntactic structure tree was constructed through constituency parsing. Then, candidate technology terms were extracted from both top-down and bottom-up perspectives. Finally, statistical frequency and semantic information were combined to determine the most appropriate technology terms. In addition, a technology term dataset was constructed to validate the effectiveness of the proposed method. Experimental results on this dataset show that the proposed method with top-down extraction improves the F1 score by 4.55 percentage points compared with the dependency-based method. Meanwhile, a case study in the field of 3D printing shows that the technology terms recognized by the proposed method are in line with the development of the field and can be used to trace the development process of a technology and depict its evolution path, providing references for understanding, discovering, and exploring future technologies of the field.

    Twice attention mechanism distantly supervised relation extraction based on BERT
    Quan YUAN, Changping CHEN, Ze CHEN, Linfeng ZHAN
    2024, 44(4):  1080-1085.  DOI: 10.11772/j.issn.1001-9081.2023040490
    Abstract | HTML | PDF (737KB)

    Aiming at the incomplete semantic information of word vectors and the word polysemy problem faced by text feature extraction, a BERT (Bidirectional Encoder Representation from Transformers) word-vector-based Twice Attention mechanism weighting algorithm for Relation Extraction (TARE) was proposed. Firstly, in the word embedding stage, a self-attention dynamic encoding algorithm was used to capture the semantic information before and after the current word by constructing the Q, K, and V matrices. Then, after the model output sentence-level feature vectors, a locator was used to extract the corresponding parameters of the fully connected layer to construct the relation attention matrix. Finally, a sentence-level attention mechanism was used to assign different attention scores to sentence-level feature vectors, improving the noise immunity of sentence-level features. Experimental results show that, compared with the Contrastive Instance Learning (CIL) algorithm for relation extraction, the F1 value is increased by 4.0 percentage points and the average of Precision@100, Precision@200, and Precision@300 (P@M) is increased by 11.3 percentage points on the NYT-10m dataset. Compared with the Piecewise Convolutional Neural Network algorithm based on the ATTention mechanism (PCNN-ATT), the AUC (Area Under precision-recall Curve) value is increased by 4.8 percentage points and the P@M value by 2.1 percentage points on the NYT-10d dataset. In various mainstream Distantly Supervised Relation Extraction (DSRE) tasks, TARE effectively improves the model's ability to learn data features.
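    The Q/K/V self-attention encoding mentioned above follows the standard scaled dot-product form, sketched here in plain Python (the projection matrices are hypothetical toy values, not the model's learned parameters):

```python
import math

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(r[i] * b[i][c] for i in range(len(b)))
             for c in range(len(b[0]))] for r in a]

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V."""
    q, k, v = matmul(x, wq), matmul(x, wk), matmul(x, wv)
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(qi[t] * kj[t] for t in range(d)) / math.sqrt(d)
                  for kj in k]
        mx = max(scores)                      # stabilize the softmax
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        out.append([sum(e / z * vj[c] for e, vj in zip(exps, v))
                    for c in range(len(v[0]))])
    return out

eye = [[1.0, 0.0], [0.0, 1.0]]
out = self_attention([[1.0, 0.0], [0.0, 1.0]], eye, eye, eye)
```

    Each output row is a convex combination of the value vectors, weighted by how strongly the token attends to its neighbours before and after it.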

    Point cloud semantic segmentation based on attention mechanism and global feature optimization
    Pengfei ZHANG, Litao HAN, Hengjian FENG, Hongmei LI
    2024, 44(4):  1086-1092.  DOI: 10.11772/j.issn.1001-9081.2023050588
    Abstract | HTML | PDF (1971KB)

    In deep-learning-based 3D point cloud semantic segmentation, to enhance the ability to extract fine-grained local features and to learn long-range dependencies between different local neighborhoods, a neural network based on an attention mechanism and global feature optimization was proposed. First, a Single-Channel Attention (SCA) module and a Point Attention (PA) module were designed in the form of additive attention. The former strengthens the discrimination of local features by adaptively adjusting the features of each point within a single channel, while the latter adjusts the importance of single-point feature vectors to suppress useless features and reduce feature redundancy. Second, a Global Feature Aggregation (GFA) module was added to aggregate local neighborhood features and capture global context information, thereby improving semantic segmentation accuracy. Experimental results show that the proposed network improves the mean Intersection-over-Union (mIoU) by 1.8 percentage points compared with RandLA-Net (Random sampling and an effective Local feature Aggregator Network) on the point cloud dataset S3DIS, showing good segmentation performance and adaptability.

    Location control method for generated objects by diffusion model with exciting and pooling attention
    Jinsong XU, Ming ZHU, Zhiqiang LI, Shijie GUO
    2024, 44(4):  1093-1098.  DOI: 10.11772/j.issn.1001-9081.2023050634
    Abstract | HTML | PDF (2886KB)

    Due to the ambiguity of text and the lack of location information in training data, current state-of-the-art diffusion models cannot accurately control the locations of generated objects in an image under text prompts. To address this issue, a spatial condition specifying the object's location range was introduced, and an attention-guided method was proposed, based on the strong correlation between the cross-attention maps in U-Net and the spatial layout of the image, to control the generation of the attention maps and thus the locations of the generated objects. Specifically, based on the Stable Diffusion (SD) model, in the early stage of cross-attention map generation in the U-Net layers, a loss was introduced to stimulate high attention values inside the given location range and reduce the average attention value outside it. The noise vector in the latent space was optimized step by step in each denoising step to control the generation of the attention maps. Experimental results show that the proposed method can effectively control the locations of one or more objects in the generated image, and when generating multiple objects, it can reduce object omission, redundant object generation, and object fusion.

    Re-weighted adversarial variational autoencoder and its application in industrial causal effect estimation
    Zongyu LI, Siwei QIANG, Xiaobo GUO, Zhenfeng ZHU
    2024, 44(4):  1099-1106.  DOI: 10.11772/j.issn.1001-9081.2023050557
    Abstract | HTML | PDF (2192KB)

    Counterfactual prediction and selection bias are major challenges in causal effect estimation. To effectively represent the complex mixed distribution of potential covariates and enhance the generalization ability of counterfactual prediction, a Re-weighted adversarial Variational AutoEncoder Network (RVAENet) model was proposed for industrial causal effect estimation. To address the bias problem in the mixed distribution, the idea of domain adaptation was adopted, and an adversarial learning mechanism was used to balance the distributions of the latent-variable representations learned by the Variational AutoEncoder (VAE). Furthermore, sample propensity weights were learned to re-weight the samples, reducing the distribution difference between the treatment group and the control group. Experimental results show that, in two scenarios on real-world industrial datasets, the Areas Under the Uplift Curve (AUUC) of the proposed model are improved by 15.02% and 16.02% compared with TEDVAE (Treatment Effect with Disentangled VAE). On public datasets, the proposed model generally achieves the best results for Average Treatment Effect (ATE) and Precision in Estimation of Heterogeneous Effect (PEHE).
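    The propensity re-weighting idea can be sketched with classic inverse propensity weighting on a toy dataset (in the model the weights are learned; the propensity scores below are assumed given, and the data is fabricated for illustration only):

```python
def ipw_ate(samples):
    """Estimate the Average Treatment Effect with inverse propensity
    weighting: each sample (outcome y, treatment t, propensity e) is
    re-weighted so treatment and control groups become comparable."""
    t_num = t_den = c_num = c_den = 0.0
    for y, t, e in samples:
        if t == 1:
            w = 1.0 / e                      # over-weight rare treated units
            t_num += w * y; t_den += w
        else:
            w = 1.0 / (1.0 - e)              # over-weight rare control units
            c_num += w * y; c_den += w
    return t_num / t_den - c_num / c_den

data = [(3.0, 1, 0.5), (1.0, 0, 0.5), (4.0, 1, 0.8), (2.0, 0, 0.2)]
ate = ipw_ate(data)
```

    Units that were unlikely to land in their observed group get larger weights, which is the same mechanism RVAENet uses to shrink the treatment-control distribution gap.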

    Image classification algorithm based on overall topological structure of point cloud
    Jie WANG, Hua MENG
    2024, 44(4):  1107-1113.  DOI: 10.11772/j.issn.1001-9081.2023050563
    Abstract | HTML | PDF (2456KB)

    Convolutional Neural Networks (CNNs) are sensitive to the local features of data because of their complex classification boundaries and large number of parameters. As a result, the accuracy of a CNN model decreases significantly under adversarial attacks. In contrast, Topological Data Analysis (TDA) methods pay more attention to the macroscopic features of data and naturally resist noise and gradient attacks. Therefore, an image classification algorithm named MCN (Mapper-Combined neural Network), combining topological data analysis and CNN, was proposed. Firstly, the Mapper algorithm was used to obtain a Mapper map describing the macroscopic features of the dataset; each sample point was represented by a new feature, a binary vector, using a multi-view Mapper map. Then, the hidden-layer features were enhanced by combining the new features with those extracted by the CNN. Finally, the feature-enhanced sample data was used to train a fully connected classification network to complete the image classification task. Comparing MCN with a pure convolutional network and a single-Mapper feature classification algorithm on the MNIST and FashionMNIST datasets, the initial classification accuracy of MCN with PCA (Principal Component Analysis) dimensionality reduction is improved by 4.65% and 8.05%, and that of MCN with LDA (Linear Discriminant Analysis) dimensionality reduction is improved by 8.21% and 5.70%. Experimental results show that MCN has higher classification accuracy and stronger anti-attack capability.

    Bird recognition algorithm based on attention mechanism
    Tianhua CHEN, Jiaxuan ZHU, Jie YIN
    2024, 44(4):  1114-1120.  DOI: 10.11772/j.issn.1001-9081.2023081042
    Abstract | HTML | PDF (2874KB)

    Aiming at the low accuracy of existing algorithms on fine-grained bird recognition tasks, a bird target detection algorithm, YOLOv5-Bird, was proposed. Firstly, a mixed-domain Coordinate Attention (CA) mechanism was introduced into the backbone of YOLOv5 to increase the weights of valuable channels and distinguish target features from redundant background features. Secondly, Bi-level Routing Attention (BRA) modules were used to replace some of the C3 modules in the original backbone, filtering out weakly correlated key-value pair information to obtain efficient long-distance dependencies. Finally, the WIoU (Wise-Intersection over Union) function was used as the loss function to enhance the localization ability of the algorithm. Experimental results show that on a self-constructed dataset the detection precision of YOLOv5-Bird reaches 82.8% and the recall reaches 77.0%, which are 4.3 and 7.6 percentage points higher than those of the YOLOv5 algorithm, respectively. YOLOv5-Bird also has performance advantages over the algorithms with other attention mechanisms, verifying its better performance in bird target detection scenarios.
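    The IoU quantity underlying the WIoU loss can be sketched as follows (plain IoU only; the "wise" gradient weighting of WIoU is omitted, and the boxes are hypothetical):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

    A localization loss is then typically 1 minus this value, so perfectly overlapping predicted and ground-truth boxes incur zero loss.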

    Data science and technology
    Recommendation method based on knowledge-awareness and cross-level contrastive learning
    Jie GUO, Jiayu LIN, Zuhong LIANG, Xiaobo LUO, Haitao SUN
    2024, 44(4):  1121-1127.  DOI: 10.11772/j.issn.1001-9081.2023050613
    Abstract | HTML | PDF (968KB)

    As a kind of side information, a Knowledge Graph (KG) can effectively improve the recommendation quality of recommendation models, but existing knowledge-awareness recommendation methods based on Graph Neural Networks (GNNs) suffer from unbalanced utilization of node information. To address this problem, a new recommendation method based on Knowledge-awareness and Cross-level Contrastive Learning (KCCL) was proposed. To alleviate the unbalanced utilization of node information caused by sparse interaction data and a noisy knowledge graph, which deviate from the true inter-node dependencies during information aggregation, a contrastive learning paradigm was introduced into the GNN-based knowledge-awareness recommendation model. Firstly, the user-item interaction graph and the item knowledge graph were integrated into a heterogeneous graph, and the node representations of users and items were obtained by a GNN based on the graph attention mechanism. Secondly, consistent noise was added to the information propagation and aggregation layers for data augmentation to obtain node representations of different levels, and the outermost node representations were compared with the innermost ones for cross-level contrastive learning. Finally, the supervised recommendation task and the contrastive learning auxiliary task were jointly optimized to obtain the final representation of each node. Experimental results on the DBbook2014 and MovieLens-1m datasets show that, compared with the second-best contrastive method, KCCL improves Recall@10 by 3.66% and 0.66%, and NDCG@10 by 3.57% and 3.29%, respectively, verifying the effectiveness of KCCL.
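    Cross-level contrastive learning of this kind typically optimizes an InfoNCE-style objective; a minimal sketch follows (the temperature and the toy vectors are hypothetical, not the paper's settings):

```python
import math

def info_nce(anchor, positive, negatives, tau=0.2):
    """InfoNCE loss: pull the anchor close to its cross-level positive
    view and push it away from other nodes' representations."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    pos = math.exp(cos(anchor, positive) / tau)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# an aligned cross-level pair yields a much smaller loss than a mismatch
l_close = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
l_far = info_nce([1.0, 0.0], [0.0, 1.0], [[0.0, 1.0]])
```

    Minimizing this loss makes a node's outermost and innermost representations agree while staying distinguishable from other nodes, which is the self-supervised signal KCCL adds to the recommendation task.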

    Fuzzy clustering algorithm based on belief subcluster cutting
    Yu DING, Hanlin ZHANG, Rong LUO, Hua MENG
    2024, 44(4):  1128-1138.  DOI: 10.11772/j.issn.1001-9081.2023050610
    Abstract | HTML | PDF (4644KB)

    The Belief Peaks Clustering (BPC) algorithm is a variant of the Density Peaks Clustering (DPC) algorithm from a fuzzy perspective, using fuzzy mathematics to describe the distribution characteristics and correlation of data. However, BPC relies mainly on the information of local data points when calculating belief values, without investigating the distribution and structure of the whole dataset, and the robustness of its original allocation strategy is weak. To solve these problems, a fuzzy Clustering algorithm based on Belief Subcluster Cutting (BSCC) was proposed, combining belief peaks with a spectral method. Firstly, the dataset was divided into many high-purity subclusters using local belief information. Then, each subcluster was treated as a new sample, and the spectral method was used for graph-cut clustering based on the similarity relationships between subclusters, thereby coupling local and global information. Finally, the points in each subcluster were assigned to the cluster containing that subcluster to complete the final clustering. Compared with BPC, BSCC has obvious advantages on datasets with multiple subclusters, achieving ACCuracy (ACC) improvements of 16.38 and 21.35 percentage points on the americanflag and Car datasets, respectively. Clustering experiments on synthetic and real datasets show that BSCC outperforms BPC and seven other clustering algorithms on the three evaluation indicators of Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), and ACC.

    Cyber security
    Blockchain consensus improvement algorithm based on BDLS
    Lipeng ZHAO, Bing GUO
    2024, 44(4):  1139-1147.  DOI: 10.11772/j.issn.1001-9081.2023050581

    To solve the problem of low consensus efficiency of the Blockchain version of DLS (BDLS) consensus algorithm in hierarchical systems with a large number of nodes, a blockchain consensus improvement algorithm based on BDLS, HBDLS (Hierarchical Blockchain version of DLS), was proposed. Firstly, nodes were divided into two levels according to their attributes in practical applications, with each high-level node managing one low-level node cluster. Then, consensus was carried out within each cluster of low-level nodes, and the results were reported to the corresponding high-level nodes. Finally, the lower-level consensus results reported by all high-level nodes were agreed upon again, and the data that passed the high-level consensus was written into the blockchain. Theoretical analysis and simulation results show that with 36 nodes and 4 500 transactions per block, the throughput of HBDLS is about 21% higher than that of BDLS; with 44 nodes and 3 000 transactions per block, the throughput of HBDLS is about 52% higher than that of BDLS; with 44 nodes and 1 transaction per block, the consensus latency of HBDLS is about 26% lower than that of BDLS. Experimental results show that HBDLS can significantly improve consensus efficiency for systems with a large number of nodes and a large transaction volume.
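The two-level agreement flow can be illustrated with a toy quorum-vote sketch. This is purely illustrative: BDLS itself is a multi-round Byzantine fault-tolerant protocol, not a single vote, and the 2/3 quorum threshold here is an assumption.

```python
from collections import Counter

def agree(votes, threshold=2 / 3):
    """Return the value backed by at least `threshold` of the votes, else None."""
    if not votes:
        return None
    val, n = Counter(votes).most_common(1)[0]
    return val if n >= threshold * len(votes) else None

def hierarchical_consensus(clusters):
    """HBDLS-style flow in miniature: each low-level cluster agrees first;
    high-level nodes then agree again on the reported cluster results."""
    cluster_results = [agree(v) for v in clusters]
    return agree([r for r in cluster_results if r is not None])
```

Splitting consensus into per-cluster rounds keeps each round small, which is the source of the throughput gain reported above.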

    Data classified and graded access control model based on master-slave multi-chain
    Meihong CHEN, Lingyun YUAN, Tong XIA
    2024, 44(4):  1148-1157.  DOI: 10.11772/j.issn.1001-9081.2023040529

    In order to solve the problems of slow accurate search caused by mixed data storage and difficult security governance caused by unclassified and ungraded data management, a data classified and graded access control model based on master-slave multi-chain was built to achieve classified and graded protection of data as well as dynamic secure access. Firstly, a hybrid on-chain and off-chain trusted storage model was constructed to mitigate the storage bottleneck faced by blockchain. Secondly, a master-slave multi-chain architecture was proposed, and smart contracts were designed to automatically store data of different privacy levels in the corresponding slave chains. Finally, based on Role-Based Access Control, a Multi-Chain and Level Policy-Role Based Access Control (MCLP-RBAC) mechanism was constructed, and its specific access control process was designed. Under the graded access control policy, the throughput of the proposed model stabilizes at around 360 TPS (Transactions Per Second). Compared with the BC-BLPM scheme, it achieves a certain superiority in throughput, with the ratio of sending rate to throughput reaching 1∶1. Compared with having no access strategy, memory consumption is reduced by about 35.29%; compared with the traditional single-chain structure, average memory consumption is reduced by 52.03%; and compared with the scheme that puts all data on the chain, average storage space is reduced by 36.32%. The experimental results show that the proposed model can effectively reduce the storage burden, achieve graded secure access, and is suitable for managing multiple classes of data with high scalability.

    Domain transfer intrusion detection method for unknown attacks on industrial control systems
    Haoran WANG, Dan YU, Yuli YANG, Yao MA, Yongle CHEN
    2024, 44(4):  1158-1165.  DOI: 10.11772/j.issn.1001-9081.2023050566

    Aiming at the problems of scarce Industrial Control System (ICS) data and poor detection of unknown attacks by industrial control intrusion detection systems, an unknown-attack intrusion detection method for industrial control systems based on a Generative Adversarial Transfer Learning network (GATL) was proposed. Firstly, causal inference and cross-domain feature mapping relations were introduced to reconstruct the data and improve its understandability and reliability. Secondly, to address the data imbalance between the source and target domains, a domain confusion-based conditional Generative Adversarial Network (GAN) was used to increase the size and diversity of the target domain dataset. Finally, the differences and commonalities of the data were fused through domain-adversarial transfer learning to improve the detection and generalization capabilities of the intrusion detection model for unknown attacks in the target domain. The experimental results show that, on standard industrial control network datasets, GATL achieves an average F1-score of 81.59% in detecting unknown attacks in the target domain while maintaining a high detection rate for known attacks, which is 63.21 and 64.04 percentage points higher than the average F1-scores of the Dynamic Adversarial Adaptation Network (DAAN) and the Information-enhanced Adversarial Domain Adaptation (IADA) method, respectively.

    Security analysis of PFP algorithm under quantum computing model
    Yanjun LI, Xiaoyu JING, Huiqin XIE, Yong XIANG
    2024, 44(4):  1166-1171.  DOI: 10.11772/j.issn.1001-9081.2023050576

    The rapid development of quantum technology and the continuous improvement of quantum computing efficiency, especially the emergence of the Shor and Grover algorithms, greatly threaten the security of traditional public key ciphers and symmetric ciphers. The block cipher PFP algorithm, designed based on the Feistel structure, was analyzed. First, the linear transformation P of the round function was fused into the periodic functions of the Feistel structure, yielding four 5-round periodic functions of PFP, two rounds more than the periodic functions of a general Feistel structure, which was verified through experiments. Furthermore, using the quantum Grover and Simon algorithms with a 5-round periodic function as the distinguisher, the security of 9- and 10-round PFP was evaluated by analyzing the characteristics of the PFP key schedule. The time complexity required for key recovery is 2^26 and 2^38.5, the quantum resources required are 193 and 212 qubits, and 58 and 77 bits of key can be recovered, respectively, which is superior to existing impossible differential analysis results.

    Advanced computing
    DFS-Cache: memory-efficient and persistent client cache for distributed file systems
    Ruixuan NI, Miao CAI, Baoliu YE
    2024, 44(4):  1172-1180.  DOI: 10.11772/j.issn.1001-9081.2023050590

    To effectively reduce cache defragmentation overhead and improve the cache hit ratio in data-intensive workflows, a persistent client cache for distributed file systems, DFS-Cache (Distributed File System Cache), was designed and implemented. Based on Non-Volatile Memory (NVM), it ensures data persistence and crash consistency while significantly reducing cold start time. DFS-Cache consists of a cache defragmentation mechanism based on virtual memory remapping and a cache space management strategy based on Time-To-Live (TTL). The former exploits the fact that NVM can be directly addressed by the memory controller: by dynamically modifying the mapping between virtual and physical addresses, zero-copy memory defragmentation is achieved. The latter is a cold-hot separated grouping management strategy that enhances cache space management efficiency with the support of the remapping-based defragmentation mechanism. Experiments were conducted using real Intel Optane persistent memory devices. With standard benchmarks such as Fio and Filebench, the proposed client cache increases system throughput by up to 5.73 times and 1.89 times compared with the commercial distributed file systems MooseFS and GlusterFS, respectively.

    Potential barrier estimation criterion based on quantum dynamics framework of optimization algorithm
    Yaqin CHEN, Peng WANG
    2024, 44(4):  1180-1186.  DOI: 10.11772/j.issn.1001-9081.2023040553

    Quantum Dynamics Framework (QDF) is a basic iterative process of optimization algorithms with representative and universal significance, obtained under the quantum dynamics model of optimization algorithms. Accepting inferior solutions is an important mechanism for preventing an optimization algorithm from falling into local optima and for solving the premature convergence problem. To introduce an inferior-solution acceptance mechanism into QDF, based on the quantum dynamics model, an inferior solution was regarded as a potential barrier encountered during particle motion, and the probability of particles penetrating the barrier was calculated using the transmission coefficient of the quantum tunneling effect. Thus, the inferior-solution acceptance criterion of the quantum dynamics model, the Potential Barrier Estimation Criterion (PBEC), was obtained. PBEC is related to the height and width of the potential barrier and the mass of the particle. Compared with the classical Metropolis acceptance criterion, PBEC can more comprehensively estimate the behavior of an optimization algorithm when it encounters an inferior solution during sampling. The experimental results show that the QDF algorithm based on PBEC has a stronger ability to jump out of local optima and higher search efficiency than the QDF algorithm based on the Metropolis acceptance criterion, and that PBEC is a feasible and effective inferior-solution acceptance mechanism in quantum optimization algorithms.
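The barrier-penetration idea can be sketched with the WKB transmission coefficient for a rectangular barrier, where the fitness worsening plays the role of barrier height. The constants and the mapping of algorithm quantities onto (mass, width) are illustrative assumptions, not the paper's exact formula.

```python
import math

def barrier_transmission(delta_f, width, mass=1.0, hbar=1.0):
    """Probability that a 'particle' (candidate solution) tunnels through a
    rectangular barrier of height delta_f (fitness worsening) and given
    width, under the WKB approximation: T = exp(-2 * kappa * width) with
    kappa = sqrt(2 * m * delta_f) / hbar."""
    if delta_f <= 0:            # not an inferior solution: always accept
        return 1.0
    kappa = math.sqrt(2.0 * mass * delta_f) / hbar
    return math.exp(-2.0 * kappa * width)
```

Unlike the Metropolis criterion, which depends only on the fitness difference, the acceptance probability here also decreases with barrier width and particle mass.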

    Hybrid NSGA-Ⅱ for vehicle routing problem with multi-trip pickup and delivery
    Jianqiang LI, Zhou HE
    2024, 44(4):  1187-1194.  DOI: 10.11772/j.issn.1001-9081.2023101512

    Concerning the trade-off between convergence and diversity in solving the multi-trip pickup and delivery Vehicle Routing Problem (VRP), a hybrid Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ) combining the Adaptive Large Neighborhood Search (ALNS) algorithm and Adaptive Neighborhood Selection (ANS), called NSGA-Ⅱ-ALNS-ANS, was proposed. Firstly, considering the influence of the initial population on the convergence speed of the algorithm, an improved regret insertion method was employed to obtain a high-quality initial population. Secondly, to improve the global and local search capabilities of the algorithm, various destroy-repair operators and neighborhood structures were designed according to the characteristics of the pickup and delivery problem. Finally, a Best Fit Decreasing (BFD) algorithm based on random sampling and an efficient feasible-solution evaluation criterion were proposed to generate vehicle routing schemes. Simulation experiments were conducted on public benchmark instances of different scales; in comparison experiments with the Memetic Algorithm (MA), the optimal solution quality of the proposed algorithm increased by 27%. The experimental results show that the proposed algorithm can rapidly generate high-quality vehicle routing schemes that satisfy multiple constraints, and outperforms existing algorithms in terms of both convergence and diversity.
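The deterministic core of Best Fit Decreasing can be sketched as follows; the paper's version additionally randomizes the insertion order via sampling, which is omitted here.

```python
def best_fit_decreasing(demands, capacity):
    """Best Fit Decreasing: sort demands in decreasing order, then place
    each demand into the open vehicle whose residual capacity fits it most
    tightly, opening a new vehicle only when nothing fits."""
    bins = []    # residual capacity of each opened vehicle
    routes = []  # demands assigned to each vehicle
    for d in sorted(demands, reverse=True):
        fits = [i for i, r in enumerate(bins) if r >= d]
        if fits:
            i = min(fits, key=lambda i: bins[i])  # tightest remaining fit
            bins[i] -= d
            routes[i].append(d)
        else:
            bins.append(capacity - d)
            routes.append([d])
    return routes
```

Sorting large demands first tends to reduce the number of vehicles, which is why BFD is a common seed heuristic for routing schemes.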

    Network and communications
    Robust resource allocation optimization in cognitive wireless network integrating information communication and over-the-air computation
    Hualiang LUO, Quanzhong LI, Qi ZHANG
    2024, 44(4):  1195-1202.  DOI: 10.11772/j.issn.1001-9081.2023050573

    To address the limited power resources of wireless sensors in over-the-air computation networks and their spectrum competition with existing wireless information communication networks, a cognitive wireless network integrating information communication and over-the-air computation was studied, in which the primary network focuses on wireless information communication, while the secondary network supports over-the-air computation, with sensors harvesting energy from signals sent by the base station of the primary network. Considering the constraints on the Mean Square Error (MSE) of over-the-air computation and on the transmit power of each node, and accounting for random channel uncertainty, a robust resource optimization problem was formulated with the objective of maximizing the sum rate of wireless information communication users. To solve the robust optimization problem effectively, an Alternating Optimization (AO)-Improved Constrained Stochastic Successive Convex Approximation (ICSSCA) algorithm, called AO-ICSSCA, was proposed, by which the original robust optimization problem was transformed into deterministic optimization sub-problems, and the downlink beamforming vector of the primary-network base station, the power factors of the sensors, and the fusion beamforming vector of the fusion center in the secondary network were alternately optimized. Simulation results demonstrate that the AO-ICSSCA algorithm achieves superior performance with less computing time than the Constrained Stochastic Successive Convex Approximation (CSSCA) algorithm before improvement.

    Energy-spectrum efficiency trade-off for multi-cognitive relay network with decode-and-forward full-duplex maximum energy harvesting
    Zhipeng MAO, Runhe QIU
    2024, 44(4):  1202-1208.  DOI: 10.11772/j.issn.1001-9081.2023040534

    In a full-duplex multi-cognitive relay network supported by Simultaneous Wireless Information and Power Transfer (SWIPT), in order to maximize energy-spectrum efficiency, the relay with the maximum energy harvesting was selected for decoding and forwarding, forming an energy-spectrum efficiency trade-off optimization problem. The problem was transformed into a convex optimization problem through variable transformation and the concave-convex procedure. When the trade-off factor is 0, the optimization problem is equivalent to maximizing the Spectrum Efficiency (SE); when the trade-off factor is 1, it is equivalent to minimizing the energy consumed by the system. To solve this optimization problem, an improved algorithm that directly obtains the trade-off factor maximizing Energy Efficiency (EE) was proposed, which jointly optimizes the source node transmit power and the power split factor. The algorithm proceeds in two steps. First, the power split factor is fixed, and the source node transmit power and trade-off factor that optimize EE are obtained. Then, the optimal source node transmit power is fixed, and the optimal power split factor is obtained using the relationship between energy-spectrum efficiency and the power split factor. Simulation results show that the relay network with maximum energy harvesting outperforms networks composed of other relays in both EE and SE. Compared with optimizing only the transmit power, the proposed algorithm increases EE by more than 63% and SE by more than 30%; its EE and SE are almost the same as those of the exhaustive method, while converging faster.

    Energy efficiency optimization mechanism for UAV-assisted and non-orthogonal multiple access-enabled data collection system
    Rui TANG, Shibo YUE, Ruizhi ZHANG, Chuan LIU, Chuanlin PANG
    2024, 44(4):  1209-1218.  DOI: 10.11772/j.issn.1001-9081.2023040482

    In the Unmanned Aerial Vehicle (UAV)-assisted and Non-Orthogonal Multiple Access (NOMA)-enabled data collection system, the total energy efficiency of all sensors was maximized by jointly optimizing the three-dimensional placement of the UAVs and the power allocation of the sensors under the ground-air probabilistic channel model and quality-of-service requirements. To solve the original mixed-integer non-convex programming problem, an energy efficiency optimization mechanism was proposed based on convex optimization theory, deep learning theory and the Harris Hawk Optimization (HHO) algorithm. For any given three-dimensional placement of the UAVs, the power allocation sub-problem was first equivalently transformed into a convex optimization problem. Then, based on the optimal power allocation strategy, a Deep Neural Network (DNN) was applied to construct the mapping from the positions of the sensors to the three-dimensional placement of the UAVs, and the HHO algorithm was further utilized to train the model parameters corresponding to the optimal mapping offline. The trained mechanism involves only a few algebraic operations and the solution of a single convex optimization problem. Simulation results show that, compared with a traversal search mechanism based on the particle swarm optimization algorithm, the proposed mechanism reduces the average running time by 5 orders of magnitude while sacrificing only about 4.73% of the total energy efficiency in the case of 12 sensors.

    Improved DV-Hop localization model based on multi-scenario
    Han SHEN, Zhongsheng WANG, Zhou ZHOU, Changyuan WANG
    2024, 44(4):  1219-1227.  DOI: 10.11772/j.issn.1001-9081.2023040486

    Considering the low positioning accuracy and the strong scene dependence of optimization strategies in the Distance Vector Hop (DV-Hop) localization model, an improved DV-Hop model, Function correction Distance Vector Hop (FuncDV-Hop), based on function analysis and coefficients determined by simulation, was presented. First, the average hop distance, distance estimation, and least square error in the DV-Hop model were analyzed, and the following ideas were introduced: optimization of undetermined coefficients, step-function segmentation experiments, a weight function approach using equivalent points, and modified maximum likelihood estimation. Then, to design controlled trials, multi-scenario comparison experiments were designed by varying the number of nodes, the proportion of beacon nodes, the communication radius, the number of beacon nodes, and the number of unknown nodes using the control variable technique. Finally, the experiment was split into two phases: determining coefficients by simulation and integrated optimization testing. Compared with the original DV-Hop model, the positioning accuracy of the final improved strategy is improved by 23.70%-75.76%, with an average optimization rate of 57.23%. The experimental results show that the optimization rate of the FuncDV-Hop model reaches up to 50.73%, and compared with DV-Hop models improved by genetic algorithms and neurodynamics, the positioning accuracy of the FuncDV-Hop model is increased by 0.55%-18.77%. The proposed model introduces no additional parameters, does not increase the protocol overhead of Wireless Sensor Networks (WSNs), and effectively improves positioning accuracy.

    MAC layer scheduling strategy of roadside units based on MEC server priority service
    Xin LI, Liyong BAO, Hongwei DING, Zheng GUAN
    2024, 44(4):  1227-1235.  DOI: 10.11772/j.issn.1001-9081.2023050556

    Aiming at the high-reliability, low-latency and large-data-volume transmission requirements of Multi-access Edge Computing (MEC) servers, a Media Access Control (MAC) scheduling strategy based on conflict-free access, a priority architecture and elastic service technology was proposed for the vehicle edge computing scenario. In the proposed strategy, the Road Side Unit (RSU) of the Internet of Vehicles (IoV) centrally coordinates channel access rights, prioritizing the link transmission quality between On Board Units (OBUs) and the MEC server so that Vehicle-to-Network (V2N) service data can be transmitted in a timely manner. At the same time, an elastic service approach was adopted for services between local OBUs to enhance the reliability of emergency message transmission under dense vehicle access. First, a queuing analysis model was constructed for the scheduling strategy. Then, embedded Markov chains were established according to the memoryless property of the system state variables at each moment, and the system was analyzed theoretically with probability generating functions to obtain exact analytical expressions for key indicators such as the average queue length, the average waiting latency of MEC server communication units and OBUs, and the RSU query period. Computer simulation results show that the statistical analysis results are consistent with the theoretical calculations, and the proposed scheduling strategy can improve the stability and flexibility of the IoV under high load.

    Secondary signal detection algorithm for high-speed mobile environments
    Huahua WANG, Xu ZHANG, Feng LI
    2024, 44(4):  1236-1241.  DOI: 10.11772/j.issn.1001-9081.2023050580

    Orthogonal Time Sequency Multiplexing (OTSM) achieves transmission performance similar to Orthogonal Time Frequency Space (OTFS) modulation with lower complexity, providing a promising solution for future high-speed mobile communication systems that require low-complexity transceivers. To address the insufficient efficiency of existing time-domain Gauss-Seidel (GS) iterative equalization, a secondary signal detection algorithm was proposed. First, low-complexity Linear Minimum Mean Square Error (LMMSE) detection was performed in the time domain, and then the Successive Over-Relaxation (SOR) iterative algorithm was used to further eliminate residual symbol interference. To further optimize convergence efficiency and detection performance, the SOR algorithm was linearly optimized into an Improved SOR (ISOR) algorithm. The simulation results show that, compared with the SOR algorithm, the ISOR algorithm improves detection performance and accelerates convergence at the cost of only a small increase in complexity. Compared with the GS iterative algorithm, the ISOR algorithm achieves a gain of 1.61 dB at a bit error rate of 10^-4 with 16 QAM modulation.
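The SOR stage can be sketched on a generic linear system Ax = b. This is illustrative only: the detector applies it to the OTSM time-domain channel matrices after LMMSE filtering, and ISOR's linear optimization of the relaxation factor is not shown.

```python
import numpy as np

def sor_solve(A, b, omega=1.2, iters=100, tol=1e-10):
    """Successive Over-Relaxation for A x = b.  With omega = 1 this reduces
    to Gauss-Seidel; 1 < omega < 2 over-relaxes each update to speed up
    convergence on suitable (e.g. symmetric positive definite) systems."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]          # uses freshest values
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x
```

Because each sweep reuses already-updated components, SOR typically needs fewer iterations than plain GS to reach the same residual.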

    Resource allocation algorithm for low earth orbit satellites oriented to user demand
    Fatang CHEN, Miao HUANG, Yufeng JIN
    2024, 44(4):  1242-1247.  DOI: 10.11772/j.issn.1001-9081.2023050561

    In the Low Earth Orbit (LEO) satellite multi-beam communication scenario, traditional fixed resource allocation algorithms cannot accommodate the differences in channel capacity requirements of different users. To meet user requirements, an optimization model minimizing the supply-demand difference was established, combining channel allocation, bandwidth allocation and power allocation, and Pattern Division Multiple Access (PDMA) technology was introduced to improve the utilization of channel resources. In view of the non-convexity of the model, the optimal resource allocation strategy learned by the Q-learning algorithm was used to allocate a suitable channel capacity to each user, and a reward threshold was introduced to further improve the algorithm, speeding up convergence and minimizing the supply-demand difference at convergence. The simulation results show that the convergence speed of the improved algorithm is about 3.33 times that of the algorithm before improvement; the improved algorithm can satisfy larger user demand, about 14% higher than the Q-learning algorithm before improvement and about 2.14 times that of the traditional fixed algorithm.
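The tabular Q-learning core can be sketched with a toy supply-demand matching task, where the reward is the negative supply-demand gap. This is illustrative: the paper's state/action spaces and its reward-threshold refinement are not reproduced here.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # one tabular Q-learning step: move Q[s, a] toward the bootstrapped target
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

rng = np.random.default_rng(1)
Q = np.zeros((3, 3))            # 3 demand levels (states) x 3 capacity levels (actions)
for _ in range(2000):
    s = int(rng.integers(3))    # observe a user's demand level
    a = int(rng.integers(3)) if rng.random() < 0.1 else int(Q[s].argmax())
    r = -abs(a - s)             # negative supply-demand difference as reward
    q_update(Q, s, a, r, s)     # stateless toy: the demand level persists
policy = [int(Q[s].argmax()) for s in range(3)]
```

After training, the greedy policy allocates exactly the demanded capacity level, driving the supply-demand difference to zero.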

    Computer software technology
    Survey of code similarity detection technology
    Xiangjie SUN, Qiang WEI, Yisen WANG, Jiang DU
    2024, 44(4):  1248-1258.  DOI: 10.11772/j.issn.1001-9081.2023040551

    Code reuse not only brings convenience to software development, but also introduces security risks, such as accelerating vulnerability propagation and enabling malicious code plagiarism. Code similarity detection technology computes the similarity between code fragments by analyzing their lexical, syntactic, semantic and other information. It is one of the most effective techniques for judging code reuse, and a program security analysis technology that has developed rapidly in recent years. First, the latest technical progress in code similarity detection was systematically reviewed, and current techniques were classified: according to whether the target code is open source, into source code similarity detection and binary code similarity detection, and further subdivided by programming language and instruction set. Then, the ideas and research results of each technique were summarized, successful applications of machine learning in code similarity detection were analyzed, and the advantages and disadvantages of existing techniques were discussed. Finally, development trends of code similarity detection technology were given to provide reference for relevant researchers.

    Code clone detection based on dependency enhanced hierarchical abstract syntax tree
    Zexuan WAN, Chunli XIE, Quanrun LYU, Yao LIANG
    2024, 44(4):  1259-1268.  DOI: 10.11772/j.issn.1001-9081.2023040485

    In the field of software engineering, code clone detection methods based on semantic similarity can reduce software maintenance costs and prevent system vulnerabilities. As a typical form of abstract code representation, the Abstract Syntax Tree (AST) has achieved success in code clone detection tasks for many programming languages. However, existing work mainly uses the original AST to extract code semantics and does not mine the deep semantic and structural information in the AST. To solve this problem, a code clone detection method based on a Dependency Enhanced Hierarchical Abstract Syntax Tree (DEHAST) was proposed. Firstly, the AST was layered into different semantic levels. Secondly, corresponding dependency enhancement edges were added to the different levels of the AST to construct the DEHAST, transforming a plain AST into a heterogeneous graph with richer program semantics. Finally, a Graph Matching Network (GMN) model was used to detect the similarity of the heterogeneous graphs for code clone detection. Experimental results on the BigCloneBench and Google Code Jam datasets show that DEHAST detects 100% of Type-1 and Type-2 code clones, 99% of Type-3 code clones, and 97% of Type-4 code clones; compared with the tree-based method ASTNN (AST-based Neural Network), the F1 values all increase by 4 percentage points. Therefore, DEHAST can effectively perform semantic code clone detection.

    Multimedia computing and computer simulation
    Image aesthetic quality evaluation method based on self-supervised vision Transformer
    Rong HUANG, Junjie SONG, Shubo ZHOU, Hao LIU
    2024, 44(4):  1269-1276.  DOI: 10.11772/j.issn.1001-9081.2023040540

    Existing image aesthetic quality evaluation methods widely use Convolutional Neural Networks (CNNs) to extract image features. Limited by the local receptive field mechanism, it is difficult for a CNN to extract global features from a given image, resulting in the absence of aesthetic attributes such as global composition relations and global color matching. To solve this problem, an image aesthetic quality evaluation method based on the SSViT (Self-Supervised Vision Transformer) model was proposed. The self-attention mechanism was utilized to establish long-distance dependencies among local patches of the image, adaptively learn their correlations, and extract global features characterizing the aesthetic attributes. Meanwhile, three aesthetic-quality perception tasks, namely classifying image degradations, ranking image aesthetic quality, and reconstructing image semantics, were designed to pre-train the vision Transformer in a self-supervised manner on unlabeled image data, so as to enhance the representation of global features. Experimental results on the AVA (Aesthetic Visual Assessment) dataset show that the SSViT model achieves 83.28%, 0.763 4 and 0.746 2 on evaluation accuracy, Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank-order Correlation Coefficient (SRCC), respectively, demonstrating that the SSViT model achieves higher accuracy in image aesthetic quality evaluation.

    Video super-resolution reconstruction network based on frame straddling optical flow
    Yang LIU, Rong LIU, Ke FANG, Xinyue ZHANG, Guangxu WANG
    2024, 44(4):  1277-1284.  DOI: 10.11772/j.issn.1001-9081.2023040523

    Current Video Super-Resolution (VSR) algorithms cannot fully utilize inter-frame information at different distances when processing complex scenes with large motion amplitudes, making it difficult to accurately recover occlusions, boundaries and multi-detail regions. A VSR model based on frame-straddling optical flow was proposed to solve these problems. Firstly, shallow features of Low-Resolution frames (LR) were extracted through Residual Dense Blocks (RDBs). Then, motion estimation and compensation were performed on video frames using a Spatial Pyramid Network (SPyNet) with straddling optical flows of different time spans, and deep feature extraction and correction were performed on inter-frame information through multiple layers of connected RDBs. Finally, the shallow and deep features were fused, and High-Resolution frames (HR) were obtained through up-sampling. Experimental results on the REDS4 public dataset show that, compared with the deep Video Super-Resolution network using Dynamic Upsampling Filters without explicit motion compensation (DUF-VSR), the proposed model improves the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) by 1.07 dB and 0.06, respectively, demonstrating that the proposed model can effectively improve the quality of video frame reconstruction.

    Interstitial lung disease segmentation algorithm based on multi-task learning
    Wei LI, Ling CHEN, Xiuyuan XU, Min ZHU, Jixiang GUO, Kai ZHOU, Hao NIU, Yuchen ZHANG, Shanye YI, Yi ZHANG, Fengming LUO
    2024, 44(4):  1285-1293.  DOI: 10.11772/j.issn.1001-9081.2023040517

    Interstitial Lung Disease (ILD) segmentation labels are costly to obtain, so existing datasets have small sample sizes and models trained on them perform poorly. To address this issue, an ILD segmentation algorithm based on multi-task learning was proposed. Firstly, a multi-task segmentation model was constructed based on U-Net. Then, generated lung segmentation labels were used as auxiliary-task labels for multi-task learning. Finally, the multi-task loss functions were dynamically weighted to balance the losses of the primary and auxiliary tasks. Experimental results on a self-built ILD dataset show that the Dice Similarity Coefficient (DSC) of the multi-task segmentation model reaches 82.61%, which is 2.26 percentage points higher than that of U-Net, demonstrating that the proposed algorithm can improve ILD segmentation performance and assist clinicians in ILD diagnosis.
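
    The abstract does not specify the exact dynamic weighting rule, but homoscedastic-uncertainty weighting (Kendall et al., 2018) is a common way to balance a primary and an auxiliary loss; the sketch below is an illustrative stand-in, not the paper's scheme. The `log_var_*` scalars would be learnable in practice.

```python
import math

def weighted_multitask_loss(loss_main, loss_aux, log_var_main, log_var_aux):
    """Uncertainty-based dynamic weighting of two task losses.

    Each loss is scaled by exp(-log_var); the +log_var terms regularize the
    learned log-variances. A task with a larger log-variance (i.e. higher
    estimated noise) is automatically down-weighted.
    """
    return (math.exp(-log_var_main) * loss_main + log_var_main
            + math.exp(-log_var_aux) * loss_aux + log_var_aux)

# With log_var_aux = 1.0, the auxiliary loss is scaled by exp(-1) < 1:
total = weighted_multitask_loss(0.8, 0.5, 0.0, 1.0)
print(total)
```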

    3D-GA-Unet: MRI image segmentation algorithm for glioma based on 3D-Ghost CNN
    Lijun XU, Hui LI, Zuyang LIU, Kansong CHEN, Weixuan MA
    2024, 44(4):  1294-1302.  DOI: 10.11772/j.issn.1001-9081.2023050606

    Gliomas, the most common primary intracranial tumors, arise from cancerous changes in the glia of the brain and spinal cord; the proportion of malignant gliomas is high and their mortality rate is significant. Quantitative segmentation and grading of gliomas based on Magnetic Resonance Imaging (MRI) is the main method for their diagnosis and treatment. To improve the accuracy and speed of glioma segmentation, a 3D-Ghost Convolutional Neural Network (CNN)-based MRI image segmentation algorithm for glioma, called 3D-GA-Unet, was proposed. 3D-GA-Unet was built on 3D U-Net (3D U-shaped Network): a 3D-Ghost CNN block was designed to increase the useful output and reduce the redundant features of traditional CNNs by using linear operations, and a Coordinate Attention (CA) block was added to obtain more image information favorable to segmentation accuracy. The model was trained and validated on the public glioma dataset BraTS2018. The experimental results show that 3D-GA-Unet achieves average Dice Similarity Coefficients (DSCs) of 0.863 2, 0.847 3 and 0.803 6 and average sensitivities of 0.867 6, 0.949 2 and 0.831 5 for the Whole Tumor (WT), Tumor Core (TC) and Enhancing Tumor (ET), respectively, verifying that 3D-GA-Unet can accurately segment glioma images while improving segmentation efficiency, which is of positive significance for the clinical diagnosis of gliomas.
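
    The Ghost idea of generating extra feature maps by cheap linear operations can be sketched as follows. This is a generic illustration of the Ghost-module pattern, not the paper's 3D convolutional blocks; all shapes and operators are assumptions.

```python
import numpy as np

def ghost_features(x, primary_w, cheap_ops):
    """Toy Ghost module: a small set of 'intrinsic' features is produced by
    an ordinary (here: dense linear) transform, then extra 'ghost' features
    are derived from them by cheap linear operations, and both sets are
    concatenated. This keeps useful output high while avoiding computing
    every feature map with a full-cost transform.
    """
    intrinsic = primary_w @ x                      # expensive, few outputs
    ghosts = [op @ intrinsic for op in cheap_ops]  # cheap per-map linear ops
    return np.concatenate([intrinsic] + ghosts)

rng = np.random.default_rng(2)
x = rng.standard_normal(8)
primary = rng.standard_normal((4, 8))                    # 4 intrinsic features
cheap = [rng.standard_normal((4, 4))]                    # 1 set of ghost maps
y = ghost_features(x, primary, cheap)
print(y.shape)  # (8,) -- 4 intrinsic + 4 ghost features
```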

    Two-channel progressive feature filtering network for tampered image detection and localization
    Shunwang FU, Qian CHEN, Zhi LI, Guomei WANG, Yu LU
    2024, 44(4):  1303-1309.  DOI: 10.11772/j.issn.1001-9081.2023040493

    Existing deep-learning-based image tamper detection networks often suffer from low detection accuracy and weak transferability. To address these issues, a two-channel progressive feature filtering network was proposed. Two channels were used to extract two-domain features of the image in parallel: one extracted the shallow and deep features of the image spatial domain, and the other extracted the feature distribution of the image noise domain. At the same time, a progressive subtle-feature screening mechanism was used to filter redundant features and gradually locate the tampered regions. To extract the tamper mask more accurately, a two-channel subtle-feature extraction module was proposed, which combined the subtle features of the spatial domain and the noise domain to generate a more accurate tamper mask. During decoding, the network's ability to localize tampered regions was improved by fusing filtered features of different scales with the contextual information of the network. The experimental results show that, in terms of detection and localization, compared with the existing advanced tamper detection networks ObjectFormer, Multi-View multi-Scale Supervision Network (MVSS-Net) and Progressive Spatio-Channel Correlation Network (PSCC-Net), the F1 score of the proposed network is increased by 10.4, 5.9 and 12.9 percentage points, respectively, on the CASIA V2.0 dataset; under Gaussian low-pass filtering, Gaussian noise and JPEG compression attacks, compared with Manipulation Tracing Network (ManTra-Net) and Spatial Pyramid Attention Network (SPAN), the Area Under Curve (AUC) of the proposed network is increased by at least 10.0 and 5.4 percentage points, respectively. It is verified that the proposed network can effectively address the low detection accuracy and poor transferability of existing tamper detection algorithms.
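
    Noise-domain streams in tamper detection commonly start from high-pass residuals (e.g. SRM-style filters) that suppress image content and expose local noise inconsistencies left by splicing or retouching. The abstract does not specify the filters used, so the sketch below uses a single generic 3x3 high-pass kernel purely for illustration.

```python
import numpy as np

# Generic 3x3 high-pass kernel (kernel weights sum to zero, so smooth
# regions map to near-zero residuals). Assumed, not the paper's filters.
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float) / 8.0

def noise_residual(img):
    """High-pass residual of a 2D grayscale image ('valid' convolution).
    Content is suppressed; disturbances of the local noise pattern caused
    by tampering stand out in the residual.
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * HIGH_PASS)
    return out

flat = np.full((6, 6), 5.0)                    # perfectly smooth patch
print(np.allclose(noise_residual(flat), 0.0))  # True
```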

    Segmentation network for day and night ground-based cloud images based on improved Res-UNet
    Boyue WANG, Yingxiang LI, Jiandan ZHONG
    2024, 44(4):  1310-1316.  DOI: 10.11772/j.issn.1001-9081.2023040453

    Aiming at the loss of detail information and low segmentation accuracy in the segmentation of day and night ground-based cloud images, a segmentation network called CloudRes-UNet (CloudResNet-UNetwork) based on improved Res-UNet (Residual network-UNetwork) was proposed, which adopted an overall encoder-decoder structure. Firstly, ResNet50 was used in the encoder to enhance the feature extraction ability. Secondly, a Multi-Stage feature extraction module was designed, which combined group convolution, dilated convolution and channel shuffle to obtain high-intensity semantic information. Thirdly, an Efficient Channel Attention Network (ECA-Net) module was added to focus on the important information in the channel dimension, strengthen attention to the cloud regions in ground-based cloud images, and improve segmentation accuracy. Finally, bilinear interpolation was used in the decoder to upsample the features, which improved the clarity of the segmented images and reduced the loss of object and position information. The experimental results show that, compared with the state-of-the-art deep-learning-based ground-based cloud image segmentation network Cloud-UNet (Cloud-UNetwork), CloudRes-UNet improves segmentation accuracy by 1.5 percentage points and Mean Intersection over Union (MIoU) by 1.4 percentage points on a day and night ground-based cloud image segmentation dataset, indicating that CloudRes-UNet obtains cloud information more accurately, which is of positive significance for weather forecasting, climate research, photovoltaic power generation and so on.
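
    The channel-attention step can be illustrated with a toy version of the ECA mechanism: global-average-pool each channel, run a small 1D convolution across the channel descriptor (local cross-channel interaction, no dimensionality reduction), squash with a sigmoid, and rescale the feature maps. Kernel weights below are uniform for illustration; in ECA-Net they are learned.

```python
import numpy as np

def eca_attention(feat, k=3):
    """Toy Efficient Channel Attention over a (C, H, W) feature tensor."""
    c, h, w = feat.shape
    desc = feat.mean(axis=(1, 2))            # (C,) global average per channel
    padded = np.pad(desc, k // 2)
    # 1D "convolution" across channels with a uniform kernel of size k.
    conv = np.array([padded[i:i + k].mean() for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))       # sigmoid gates in (0, 1)
    return feat * gate[:, None, None]        # rescale each channel

rng = np.random.default_rng(3)
x = rng.standard_normal((8, 4, 4))           # 8 channels, 4x4 maps (assumed)
y = eca_attention(x)
print(y.shape)  # (8, 4, 4)
```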

    Monaural speech enhancement based on gated dilated convolutional recurrent network
    Xinyuan YOU, Heng WANG
    2024, 44(4):  1317-1324.  DOI: 10.11772/j.issn.1001-9081.2023040452

    The use of contextual information plays an important role in speech enhancement tasks. To address the insufficient utilization of global contextual information in speech, a Gated Dilated Convolutional Recurrent Network (GDCRN) for complex spectral mapping was proposed. GDCRN was composed of an encoder, a Gated Temporal Convolution Module (GTCM) and a decoder; the encoder and decoder had an asymmetric network structure. Firstly, features were processed by the encoder using a Gated Dilated Convolution Module (GDCM), which expanded the receptive field. Secondly, longer contextual information was captured and selectively passed on through the GTCM. Finally, deconvolution combined with a Gated Linear Unit (GLU) was used in the decoder, which was connected to the corresponding convolution layers in the encoder via skip connections. Additionally, a Channel Time-Frequency Attention (CTFA) mechanism was introduced. Experimental results show that the proposed network has fewer parameters and shorter training time than networks such as the Temporal Convolutional Neural Network (TCNN) and the Gated Convolutional Recurrent Network (GCRN), and improves PESQ (Perceptual Evaluation of Speech Quality) and STOI (Short-Time Objective Intelligibility) by up to 0.258 9 and 4.67 percentage points, respectively, demonstrating better enhancement effect and stronger generalization ability.
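
    The gating used throughout such networks follows the GLU pattern: one linear branch provides content and a second, sigmoid-squashed branch gates it elementwise, letting the network selectively pass or suppress contextual information. A minimal sketch on vectors, with illustrative (not the paper's) weights:

```python
import numpy as np

def gated_linear_unit(x, w_lin, w_gate):
    """GLU: elementwise product of a linear branch and a sigmoid gate.
    Gate values lie in (0, 1), so each output component is the linear
    branch's output selectively attenuated.
    """
    gate = 1.0 / (1.0 + np.exp(-(w_gate @ x)))  # sigmoid gate in (0, 1)
    return (w_lin @ x) * gate

rng = np.random.default_rng(4)
x = rng.standard_normal(6)
w_lin = rng.standard_normal((6, 6))
w_gate = rng.standard_normal((6, 6))
y = gated_linear_unit(x, w_lin, w_gate)
print(y.shape)  # (6,)
```

    In a convolutional layer the two branches would be parallel (de)convolutions over the spectrogram rather than dense matrices, but the gating mechanism is the same.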

Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn