
Table of Contents

    10 July 2023, Volume 43 Issue 7
    The 39th CCF National Database Conference (NDBC 2022)
    Parallel algorithm of betweenness centrality for dynamic networks
    Zhenyu LIU, Chaokun WANG, Gaoyang GUO
    2023, 43(7):  1987-1993.  DOI: 10.11772/j.issn.1001-9081.2022071121

    Betweenness centrality is a common metric for evaluating the importance of nodes in a graph. However, the efficiency of updating betweenness centrality in large-scale dynamic graphs is not high enough to meet application requirements. With the development of multi-core technology, algorithm parallelization has become one of the effective ways to address this problem. Therefore, a Parallel Algorithm of Betweenness centrality for dynamic networks (PAB) was proposed. Firstly, the time cost of processing redundant point pairs was reduced through operations such as community filtering, equidistant pruning and classification screening. Then, the determinism of the algorithm was analyzed and handled to enable parallelization. Comparison experiments were conducted on real and synthetic datasets, and the results show that the update efficiency of PAB is 4 times that of the latest batch-iCENTRAL algorithm on average when edges are added. It can be seen that the proposed algorithm effectively improves the update efficiency of betweenness centrality in dynamic networks.
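
    For readers unfamiliar with the underlying computation, the static betweenness centrality that dynamic algorithms such as PAB update incrementally is classically computed with Brandes' dependency accumulation. Below is a minimal single-threaded sketch of that baseline (not the paper's parallel algorithm); the adjacency-dict graph representation is an assumption.

```python
from collections import deque

def brandes_betweenness(adj):
    """Exact betweenness centrality (Brandes, 2001) on an unweighted graph.

    adj: dict mapping each node to an iterable of neighbors.
    For undirected graphs, halve the resulting scores."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Single-source BFS, recording shortest-path counts and predecessors.
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Back-propagate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```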

    Dictionary partition vector space model for ciphertext ranked search in cloud environment
    Jiaxing LU, Hua DAI, Yuanlong LIU, Qian ZHOU, Geng YANG
    2023, 43(7):  1994-2000.  DOI: 10.11772/j.issn.1001-9081.2022071111

    Aiming at the problems that the vectors generated by the Traditional Vector Space Model (TVSM) are high-dimensional and that the vector dot-product operation for calculating the relevance between documents and queried keywords is time-consuming, a Dictionary Partition Vector Space Model (DPVSM) for ciphertext ranked search in the cloud environment was proposed. Firstly, the formal definition of DPVSM was given, and it was proved that the relevance score between queried keywords and documents in DPVSM is exactly the same as that in TVSM. Then, by adopting an equal-length dictionary partition method, an encrypted vector generation algorithm and a relevance score calculation algorithm between documents and queried keywords were proposed. Experimental results show that the space occupied by document vectors in DPVSM is much lower than that in TVSM, and the more documents there are, the greater the reduction. In addition, the space occupied by query vectors and the time consumed by relevance score calculation are also much lower than those of TVSM. Clearly, DPVSM is superior to TVSM in both the space efficiency of the generated vectors and the efficiency of relevance score calculation.
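
    As background, the TVSM relevance score that DPVSM provably preserves is an inner product of weighted keyword vectors, and partitioning the dictionary into equal-length blocks decomposes that inner product into per-partition sums. The sketch below illustrates only this plaintext property; the TF-IDF weighting and function names are assumptions, and the paper's encryption layer is omitted.

```python
import math

def tfidf_vectors(docs, dictionary):
    """Plain TVSM: one TF-IDF weight per dictionary keyword per document.
    docs is a list of token lists; dictionary is the ordered keyword list."""
    n = len(docs)
    df = {w: sum(w in d for d in docs) for w in dictionary}
    return [[(d.count(w) / len(d)) * math.log((n + 1) / (df[w] + 1))
             for w in dictionary] for d in docs]

def score(doc_vec, query_vec):
    return sum(x * y for x, y in zip(doc_vec, query_vec))

def partitioned_score(doc_vec, query_vec, k):
    """Equal-length dictionary partition: the sum of per-partition inner
    products equals the full TVSM score, the property DPVSM exploits to
    shorten the vectors actually touched by a query."""
    return sum(score(doc_vec[i:i + k], query_vec[i:i + k])
               for i in range(0, len(doc_vec), k))
```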

    Explainable recommendation mechanism by fusion collaborative knowledge graph and counterfactual inference
    Zifang XIA, Yaxin YU, Ziteng WANG, Jiaqi QIAO
    2023, 43(7):  2001-2009.  DOI: 10.11772/j.issn.1001-9081.2022071113

    In order to construct a transparent and trustworthy recommendation mechanism, relevant research works mainly provide reasonable explanations for personalized recommendation through explainable recommendation mechanisms. However, existing explainable recommendation mechanisms have three major limitations: 1) using correlations only can provide rational explanations rather than causal explanations, and using paths to provide explanations brings privacy leakage; 2) the problem of sparse user feedback is ignored, so it is difficult to guarantee the fidelity of explanations; 3) the granularities of explanations are relatively coarse, and users' personalized preferences are not considered. To solve the above problems, an explainable recommendation mechanism ERCKCI based on Collaborative Knowledge Graph (CKG) and counterfactual inference was proposed. Firstly, based on the user's own behavior sequence, counterfactual inference was used to achieve high-sparsity causal decorrelation by using causal relations, and the counterfactual explanations were derived iteratively. Secondly, in order to improve the fidelity of explanations, not only were the CKG and the neighborhood propagation mechanism of the Graph Neural Network (GNN) used to learn user and item representations on a single time slice, but the user's long- and short-term preferences were also captured through a self-attention mechanism over multiple time slices to enhance the user preference representation. Finally, via a higher-order connected subgraph of the counterfactual set, the multi-granularity personalized preferences of users were captured to enhance the counterfactual explanations. To verify the effectiveness of the ERCKCI mechanism, comparison experiments were performed on the public datasets MovieLens(100k), Book-Crossing and MovieLens(1M). The results show that compared with the Explainable recommendation based on Counterfactual Inference (ECI) algorithm under the Relational Collaborative Filtering (RCF) recommendation model on the first two datasets, the proposed mechanism improves the explanation fidelity by 4.89 and 3.38 percentage points respectively, reduces the size of the CounterFactual (CF) set by 63.26% and 66.24% respectively, and improves the sparsity index by 1.10 and 1.66 percentage points respectively; hence the proposed mechanism effectively improves explainability.
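
    To make the notion of a counterfactual explanation concrete: it is a minimal subset of the user's own interactions whose removal would change the recommendation. A brute-force sketch of that definition follows; `recommend` is a hypothetical black-box recommender, and the paper instead derives such sets iteratively over a collaborative knowledge graph.

```python
from itertools import combinations

def counterfactual_explanation(history, recommend, target_item, max_size=3):
    """Smallest subset of the user's interactions whose removal flips the
    recommendation away from target_item. recommend(history) is any
    black-box recommender returning an item id (assumed interface)."""
    for size in range(1, max_size + 1):
        for subset in combinations(range(len(history)), size):
            reduced = [h for i, h in enumerate(history) if i not in subset]
            if recommend(reduced) != target_item:
                return [history[i] for i in subset]   # explanation found
    return None   # no counterfactual of bounded size exists
```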

    Prompt learning based unsupervised relation extraction model
    Menglin HUANG, Lei DUAN, Yuanhao ZHANG, Peiyan WANG, Renhao LI
    2023, 43(7):  2010-2016.  DOI: 10.11772/j.issn.1001-9081.2022071133

    Unsupervised relation extraction aims to extract the semantic relations between entities from unlabeled natural language text. Currently, unsupervised relation extraction models based on the Variational Auto-Encoder (VAE) architecture provide supervision signals for model training through the reconstruction loss, which offers a new idea for completing unsupervised relation extraction tasks. Focusing on the issue that this kind of model cannot understand contextual information effectively and relies on dataset inductive biases, a Prompt-based learning based Unsupervised Relation Extraction (PURE) model was proposed, including a relation extraction module and a link prediction module. In the relation extraction module, a context-aware prompt template function was designed to fuse contextual information, and the unsupervised relation extraction task was converted into a mask prediction task, so as to make full use of the knowledge obtained during the pre-training phase to extract relations. In the link prediction module, supervision signals were provided for the relation extraction module by predicting the missing entities in triples to assist model training. Extensive experiments were carried out on two public real-world relation extraction datasets. The results show that the PURE model uses contextual information effectively, does not rely on dataset inductive biases, and improves the B-cubed F1 evaluation index by 3.3 percentage points on the NYT dataset compared with the state-of-the-art VAE-architecture-based model UREVA (Variational Autoencoder-based Unsupervised Relation Extraction model).
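
    The core conversion PURE relies on, recasting relation extraction as mask prediction, can be illustrated with an off-the-shelf masked language model. The template and model choice below are assumptions for demonstration, not the context-aware template function designed in the paper.

```python
from transformers import pipeline

# Mask prediction as relation extraction: the prompt wraps the sentence and
# entity pair, and the MLM's guesses for [MASK] act as relation signals.
fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "Steve Jobs co-founded Apple in Cupertino."
head, tail = "Steve Jobs", "Apple"
prompt = f"{sentence} The relation between {head} and {tail} is [MASK]."

for cand in fill(prompt, top_k=3):
    print(cand["token_str"], round(cand["score"], 4))
```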

    OmegaDB: concurrent computing framework of relational operators for heterogeneous architecture
    Jinhui LAI, Zichen XU, Yicheng TU, Guolong TAN
    2023, 43(7):  2017-2025.  DOI: 10.11772/j.issn.1001-9081.2022071131

    In database systems, different queries may have overlapping data access paths and shared computation, and processing the queries in a workload as a batch is known as the Multiple-Query-at-a-Time model. Several multi-query processing frameworks have been developed and proven effective, but all of them lack a general framework for building complete query processing and optimization methods. On the basis of a query-time operator merging optimization framework constructed on equivalent transformations, a concurrent computing framework of relational operators for heterogeneous architectures, called OmegaDB, was proposed. In this framework, by studying a GPU-oriented flow-batch computing model for relational operators and constructing a relational data query pipeline, a flow-batch computing method aggregating multiple queries was implemented on the CPU-GPU heterogeneous architecture. Through a prototype implementation, the advantages of OmegaDB were verified by theoretical analysis and experimental results in comparison with a Relational Database Management System (RDBMS), and the potential of OmegaDB in exploiting new hardware was shown. Based on a theoretical study of query optimization frameworks for the Multiple-Query-at-a-Time model under traditional relational algebraic rules, several optimization methods were proposed and future research directions were outlined. Using the TPC-H business intelligence benchmark, the results show that OmegaDB achieves up to 24 times end-to-end speedup while consuming less disk I/O and CPU time than the modern commercial database system SQL Server.

    Dynamic aggregate nearest neighbor query algorithm in weighted road network space
    Fangshu CHEN, Wei ZHANG, Xiaoming HU, Yufei ZHANG, Xiankai MENG, Linxiang SHI
    2023, 43(7):  2026-2033.  DOI: 10.11772/j.issn.1001-9081.2022091371

    As a classical problem in spatial databases, Aggregate Nearest Neighbor (ANN) query is of great importance in the optimization of network link structures, the location selection of logistics distribution points and car-sharing services, and can effectively contribute to the development of fields such as logistics, the mobile Internet industry and operations research. The existing research has some shortcomings: there is a lack of efficient index structures for large-scale dynamic road network data, and the query efficiency of existing algorithms is low when data point locations move in real time and network weights are updated dynamically. To address these problems, an ANN query algorithm for dynamic scenarios was proposed. Firstly, with G-tree adopted as the road network index, a pruning algorithm combining spatial index structures such as quadtrees and k-d trees with the Incremental Euclidean Restriction (IER) algorithm was proposed to solve ANN queries in static scenarios. Then, for the frequent updates of data point locations in dynamic scenarios, a time window and a safe zone update strategy were added to reduce the number of iterations of the algorithm; experimental results show that the efficiency can be improved by 8% to 85% in this way. Finally, for ANN query problems with changing road network weights, two correction-based continuous query algorithms were proposed to obtain the current query results from historical query results according to the increments of the weight changes. In certain scenarios, these algorithms can reduce errors by approximately 50%. The theoretical research and experimental results show that the proposed algorithms can solve ANN query problems in dynamic scenarios efficiently and accurately.
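
    The problem definition itself is compact: an ANN query returns the candidate that minimizes an aggregate (for example, sum or max) of its distances to all query points. The brute-force Euclidean sketch below fixes that definition; the paper's contribution is answering it on road networks with G-tree indexing, pruning and update strategies instead of a linear scan.

```python
import math

def aggregate_nn(candidates, query_points, agg=sum):
    """Brute-force Aggregate Nearest Neighbor in Euclidean space: return the
    candidate minimizing the aggregate of its distances to all query points.
    Road-network ANN replaces these distances with shortest-path costs."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(candidates, key=lambda p: agg(dist(p, q) for q in query_points))

# Meeting point for three users; pass agg=max for the min-max variant.
print(aggregate_nn([(0, 0), (5, 5), (9, 1)], [(1, 1), (2, 4), (6, 2)]))
```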

    Approximate query processing approach based on deep autoregressive model
    Libin CEN, Jingdong LI, Chunbo LIN, Xiaoling WANG
    2023, 43(7):  2034-2039.  DOI: 10.11772/j.issn.1001-9081.2022071128

    Recently, Approximate Query Processing (AQP) of aggregate functions has become a research hotspot in the database field. Existing approximate query techniques suffer from high query response time, high storage overhead, and lack of support for multi-predicate queries. Thus, a deep autoregressive model-based AQP approach named DeepAQP (Deep Approximate Query Processing) was proposed. DeepAQP leverages a deep autoregressive model to learn the joint probability distribution of the multi-column data in a table in order to estimate the selectivity and the target column's probability distribution for a given query, enhancing the ability to handle approximate aggregate queries with multiple predicates over a single table. Experiments were conducted on the TPC-H and TPC-DS datasets. The results show that compared with VerdictDB, a sample-based method, DeepAQP reduces the query response time by 2 to 3 orders of magnitude and the storage space by 3 orders of magnitude; compared with DBEst++, a machine learning-based method, DeepAQP reduces the query response time by 1 order of magnitude and reduces the model training time significantly. Besides, DeepAQP can handle multi-predicate query requests, which DBEst++ does not support. It can be seen that DeepAQP achieves good accuracy and speed at the same time and significantly reduces the training and storage overhead of the algorithm.
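
    The arithmetic behind model-based AQP is that once a learned model supplies the selectivity P(pred) and the conditional distribution of the target column, COUNT, AVG and SUM follow without scanning the table. In the sketch below the empirical distribution of a toy DataFrame stands in for the deep autoregressive estimator; all names are illustrative.

```python
import pandas as pd

def approx_aggregates(df, predicate, target):
    """AQP skeleton: with P(pred) and E[target | pred] supplied by a model,
    COUNT/AVG/SUM need no table scan. Here the empirical distribution
    stands in for the learned deep autoregressive estimator."""
    n = len(df)
    mask = predicate(df)
    sel = mask.mean()                    # model-estimated selectivity P(pred)
    avg = df.loc[mask, target].mean()    # model-estimated E[target | pred]
    return {"COUNT": sel * n, "AVG": avg, "SUM": sel * n * avg}

df = pd.DataFrame({"price": [10, 20, 30, 40], "qty": [1, 2, 3, 4]})
print(approx_aggregates(df, lambda d: d["price"] > 15, "qty"))
```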

    GTC: geo-distributed triangle counting algorithm in graph streams
    Chunze CAO, Delong MA, Ye YUAN
    2023, 43(7):  2040-2048.  DOI: 10.11772/j.issn.1001-9081.2022071130

    Existing distributed triangle counting algorithms assume that all computing nodes are in the same location, while in reality the nodes may be located in multiple data centers across continents. Geo-distributed data centers, which are connected by wide area networks, are characterized by heterogeneous network bandwidth, high communication cost and uneven data distribution, so existing distributed algorithms cannot be applied to the geo-distributed environment. At the same time, most existing research ignores the temporal locality of triangle formation and adopts strategies such as random sampling and edge elimination. Therefore, the triangle counting problem for real graph streams in the geo-distributed environment was studied, and a Geo-distributed Triangle Counting (GTC) algorithm was proposed. Firstly, aiming at the excessive data transmission caused by existing edge distribution strategies, a geo-distributed edge distribution strategy was proposed to build a benefit formula combining the time benefit and data benefit of communication and to replace edge broadcasting with point-to-point communication. Then, for the repeated counting of triangles caused by point-to-point communication in the geo-distributed environment, a final edge calculation rule was proposed to ensure that no triangle is counted twice. Finally, based on a time-weighted sampling algorithm, a time-weighted triangle counting algorithm was proposed to exploit the temporal locality of triangles in sampling. GTC was compared with Conditional Counting and Sampling (CoCoS) and Tri-Fly on five real graph streams. The results show that GTC decreases the communication data size by 17% compared to CoCoS and by 44% compared to Tri-Fly; its error rate is 53% lower than that of Tri-Fly and slightly lower than that of CoCoS; and its running time is 34% shorter than that of Tri-Fly and slightly longer than that of CoCoS. It can be seen that GTC can effectively reduce the communication data size while ensuring high accuracy and a short running time.
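
    For intuition about sampling-based streaming triangle counting, the sketch below shows a single-node reservoir estimator in the TRIEST style: each arriving edge is counted against the current sample and kept with probability M/t, with a scale factor compensating for missed triangles. GTC's time-weighted sampling and geo-distributed edge routing are not reproduced here.

```python
import random
from collections import defaultdict

class StreamingTriangles:
    """Reservoir-based streaming triangle estimator (TRIEST-style sketch)."""
    def __init__(self, reservoir_size):
        self.m = reservoir_size
        self.t = 0                      # edges seen so far
        self.estimate = 0.0
        self.adj = defaultdict(set)
        self.edges = []

    def add_edge(self, u, v):
        self.t += 1
        # Scale factor compensates for triangles missed by sampling.
        eta = max(1.0, (self.t - 1) * (self.t - 2) / (self.m * (self.m - 1)))
        self.estimate += eta * len(self.adj[u] & self.adj[v])
        if self.t <= self.m:
            self._insert(u, v)
        elif random.random() < self.m / self.t:
            self._evict()
            self._insert(u, v)

    def _insert(self, u, v):
        self.edges.append((u, v)); self.adj[u].add(v); self.adj[v].add(u)

    def _evict(self):
        x, y = self.edges.pop(random.randrange(len(self.edges)))
        self.adj[x].discard(y); self.adj[y].discard(x)
```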

    Closed high utility quantitative itemset mining algorithm on incremental data
    Zhihui SHAN, Meng HAN, Qiang HAN
    2023, 43(7):  2049-2056.  DOI: 10.11772/j.issn.1001-9081.2022091333

    High Utility Itemset (HUI) mining can provide information about combinations of highly profitable items in a dataset, which is useful for developing effective marketing strategies in real-world applications. However, HUIs only provide the itemsets and their total utilities, not the purchased quantities of individual items, while the quantities of items in real scenarios provide more precise information. Therefore, High Utility Quantitative Itemset (HUQI) mining algorithms have been proposed by researchers. Focusing on the issues that current HUQI mining algorithms can only process static data and produce redundant result sets, an incrementally updated quantitative utility list structure was proposed for storing and updating the utility information of items in the dataset, and based on this structure, an algorithm for mining Closed High Utility Quantitative Itemsets (CHUQIs) was proposed. The time and memory consumption of the proposed algorithm were compared with those of the Faster High Utility Quantitative Itemset Miner (FHUQI-Miner) algorithm in terms of the number of result sets, the minimum utility threshold, the number of batches, and scalability. Experimental results show that the proposed algorithm can process incremental data effectively and mine more interesting itemsets.

    PrivSPM: frequent sequential pattern mining algorithm under local differential privacy
    Shuo HUANG, Yanhui LI, Jianqiu CAO
    2023, 43(7):  2057-2064.  DOI: 10.11772/j.issn.1001-9081.2022091365

    Sequential data may contain a lot of sensitive information, so directly mining frequent patterns from sequential data carries a significant risk to the privacy of individuals. By resisting attackers with arbitrary background knowledge, Local Differential Privacy (LDP) can provide more comprehensive protection for sensitive information. Due to the inherent sequentiality and high dimensionality of sequential data, it is challenging to mine frequent sequential patterns under LDP. To tackle this problem, a top-k frequent sequential pattern mining algorithm satisfying ε-LDP, called PrivSPM, was proposed. In this algorithm, padding and sampling techniques, an adaptive frequency estimation algorithm and a frequent item prediction technique were integrated to construct the candidate set. Based on the new domain, a strategy based on the exponential mechanism was employed to perturb the user data, and the final frequent sequential patterns were identified by combining the frequency estimation algorithm. Theoretical analysis proves that the proposed algorithm satisfies ε-LDP. Experimental results on three real datasets demonstrate that the PrivSPM algorithm outperforms the comparison algorithm on True Positive Rate (TPR) and Normalized Cumulative Rank (NCR), and can effectively improve the accuracy of the mined results.
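
    The perturbation primitive named in the abstract, the exponential mechanism, can be stated in a few lines: a candidate is sampled with probability proportional to exp(ε·u/(2Δu)). The sketch below is the generic mechanism, assuming a caller-supplied utility function; PrivSPM applies this kind of selection over its reduced candidate domain.

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0):
    """Standard exponential mechanism: sample a candidate with probability
    proportional to exp(eps * u(c) / (2 * sensitivity))."""
    scores = [math.exp(epsilon * utility(c) / (2 * sensitivity))
              for c in candidates]
    return random.choices(candidates, weights=scores, k=1)[0]

# Toy use: privately pick the "most frequent" item from noisy counts.
counts = {"a": 40, "b": 38, "c": 5}
print(exponential_mechanism(list(counts), lambda c: counts[c], epsilon=1.0))
```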

    Differential privacy generative adversarial network algorithm with dynamic gradient threshold clipping
    Shaoquan CHEN, Jianping CAI, Lan SUN
    2023, 43(7):  2065-2072.  DOI: 10.11772/j.issn.1001-9081.2022071114

    Most existing methods combining Generative Adversarial Network (GAN) and differential privacy use gradient perturbation to achieve privacy protection; that is, during optimization, gradient clipping is used to constrain the sensitivity of the optimizer to individual data items, and random noise is added to the clipped gradients to protect the model. However, most methods treat the clipping threshold as a fixed parameter during training, and model performance suffers whether the threshold is too large or too small. To solve this problem, DGC_DPGAN (Dynamic Gradient Clipping Differential Privacy Generative Adversarial Network), with dynamic gradient threshold clipping, was proposed to take privacy protection and model performance into account at the same time. Combined with pre-training, this algorithm takes the mean gradient F-norm of each batch of private data as the dynamic gradient clipping threshold during optimization and then perturbs the gradients. Considering different clipping orders, CLIP_DGC_DPGAN (Clip Dynamic Gradient Clipping Differential Privacy Generative Adversarial Network), which clips first and then adds noise, and DGC_DPGAN, which adds noise first and then clips, were proposed, and the Rényi accountant was used to calculate the privacy loss. Experimental results show that under the same privacy budget, the two proposed dynamic gradient clipping algorithms are better than fixed-threshold clipping. On the MNIST dataset, the two proposed algorithms improve the Inception Score (IS), Structural SIMilarity (SSIM) and Convolutional Neural Network (CNN) classification accuracy by 0.32 to 3.92, 0.03 to 0.27, and 7% to 44% respectively; on the Fashion-MNIST dataset, they improve IS, SSIM and CNN classification accuracy by 0.40 to 4.32, 0.01 to 0.44 and 20% to 51% respectively. At the same time, the images generated by the GAN model have better usability.
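
    A minimal sketch of the clip-then-noise variant follows, assuming per-sample gradients are already available: the batch's mean gradient norm serves as the dynamic threshold C, gradients are clipped to C, and Gaussian noise scaled by C is added. The exact normalization and the Rényi accounting in the paper are omitted.

```python
import torch

def dp_clip_and_noise(per_sample_grads, sigma):
    """Dynamic-threshold DP step (clip first, then add noise): use the mean
    per-sample gradient norm of the current batch as the clipping
    threshold C, clip each gradient to C, add Gaussian noise calibrated
    to C, and return the averaged noisy gradient."""
    norms = torch.stack([g.flatten().norm() for g in per_sample_grads])
    c = norms.mean()                             # dynamic clipping threshold
    clipped = [g * min(1.0, (c / n).item())
               for g, n in zip(per_sample_grads, norms)]
    noisy_sum = sum(clipped) + torch.randn_like(per_sample_grads[0]) * sigma * c
    return noisy_sum / len(per_sample_grads)
```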

    Self-regularization optimization methods for Non-IID data in federated learning
    Mengjie LAN, Jianping CAI, Lan SUN
    2023, 43(7):  2073-2081.  DOI: 10.11772/j.issn.1001-9081.2022071122

    Federated Learning (FL) is a new distributed machine learning paradigm that breaks down data barriers while protecting data privacy, thereby enabling clients to collaboratively train a machine learning model without sharing local data. However, how to deal with Non-Independent and Identically Distributed (Non-IID) data from different clients remains a huge challenge for FL. Existing solutions to this problem fail to exploit the implicit relationship between the local and global models to solve the problem simply and efficiently. To address the Non-IID issue of different clients in FL, novel FL optimization algorithms, Federated Self-Regularization (FedSR) and Dynamic Federated Self-Regularization (Dyn-FedSR), were proposed. In FedSR, a self-regularization penalty term was introduced in each training round to modify the local loss function dynamically; by building a relationship between the local model and the global model that aggregates rich knowledge, the local model was kept closer to the global model, thereby alleviating the client drift caused by Non-IID data. In Dyn-FedSR, the self-regularization coefficient was determined dynamically by the similarity between the local and global models. Extensive experiments on different tasks demonstrate that FedSR and Dyn-FedSR significantly outperform state-of-the-art FL algorithms such as the Federated Averaging (FedAvg) algorithm, the Federated Proximal (FedProx) optimization algorithm and the Stochastic Controlled Averaging algorithm (SCAFFOLD) in various scenarios, achieving efficient communication, high accuracy, and robustness to imbalanced data and uncertain local updates.
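
    The self-regularization idea can be summarized as a proximal penalty that pulls the local model toward the last global model during local training. The sketch below shows such a penalized local loss (a FedProx-style term); in Dyn-FedSR the coefficient mu would be derived from local/global model similarity rather than fixed, and the authors' exact term may differ.

```python
import torch

def self_regularized_loss(local_model, global_model, task_loss, mu):
    """Local objective with a self-regularization penalty that keeps the
    local model close to the received global model while fitting local
    (possibly Non-IID) data."""
    prox = 0.0
    for w_local, w_global in zip(local_model.parameters(),
                                 global_model.parameters()):
        prox = prox + (w_local - w_global.detach()).pow(2).sum()
    return task_loss + 0.5 * mu * prox
```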

    Sparse reward exploration mechanism fusing curiosity and policy distillation
    Ziteng WANG, Yaxin YU, Zifang XIA, Jiaqi QIAO
    2023, 43(7):  2082-2090.  DOI: 10.11772/j.issn.1001-9081.2022071116

    In sparse-reward environments, it is difficult for deep reinforcement learning algorithms to learn the optimal policy through interaction with the environment, so intrinsic rewards need to be constructed to guide algorithm updates. However, this approach still has several problems: 1) statistical inaccuracy in state classification misjudges reward values, causing the agent to learn wrong behaviors; 2) because of the prediction network's strong ability to identify state information, the state freshness reflected by the intrinsic reward decreases, which degrades learning of the optimal policy; 3) because of random state transitions, the information of teacher policies is not used effectively, which weakens the agent's ability to explore the environment. To solve these problems, a reward construction mechanism combining the prediction error of a randomly generated network with hash-discretization statistics, namely RGNP-HCE (Randomly Generated Network Prediction and Hash Count Exploration), was proposed, and the knowledge of multiple teacher policies was transferred to the student policy through distillation. The RGNP-HCE mechanism constructs a fused reward following the idea of curiosity classification: a global curiosity reward is constructed from the randomly generated network's prediction error across episodes, and a local curiosity reward is constructed from hash-discretization statistics within one episode, which guarantees the rationality of the intrinsic reward and the correctness of policy gradient updates. In addition, multi-teacher policy distillation gives the student policy multiple reference directions for exploration, effectively improving its ability to explore the environment. Finally, in the Montezuma's Revenge and Breakout test environments, the proposed mechanism was compared with four mainstream deep reinforcement learning algorithms, and policy distillation was performed. The results show that the average performance of the RGNP-HCE mechanism in both test environments is higher than that of the compared high-performance deep reinforcement learning algorithms, and the distilled student policy improves average performance further, indicating that RGNP-HCE and policy distillation are effective in improving the exploration ability of an agent.
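
    The two curiosity signals can be sketched concretely: a global novelty bonus from the prediction error against a fixed randomly generated network, plus a per-episode count bonus over SimHash-discretized states. The code below is an illustrative combination under assumed dimensions and a simple linear predictor, not the paper's exact networks.

```python
import numpy as np

class CuriosityReward:
    """Global curiosity (prediction error of a fixed random embedding) plus
    local curiosity (per-episode SimHash visit counts); beta balances them."""
    def __init__(self, obs_dim, feat_dim=32, hash_bits=16, beta=0.5):
        rng = np.random.default_rng(0)
        self.target = rng.normal(size=(obs_dim, feat_dim))   # fixed, random
        self.pred = np.zeros((obs_dim, feat_dim))            # trained online
        self.hash_proj = rng.normal(size=(obs_dim, hash_bits))
        self.counts = {}
        self.beta = beta

    def reward(self, obs, lr=0.01):
        # Global curiosity: error in predicting the random embedding.
        diff = obs @ self.pred - obs @ self.target
        err = (diff ** 2).mean()
        self.pred -= lr * np.outer(obs, diff)   # one SGD step on predictor
        # Local curiosity: count-based bonus over SimHash buckets.
        key = tuple((obs @ self.hash_proj > 0).astype(int))
        self.counts[key] = self.counts.get(key, 0) + 1
        return err + self.beta / np.sqrt(self.counts[key])

    def end_episode(self):
        self.counts.clear()   # hash counts are local to one episode
```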

    Online incentive mechanism based on quality perception in spatio-temporal crowdsourcing
    Yanan PAN, Qingxian PAN, Zhaoyi YU, Jiajing CHU, Song YU
    2023, 43(7):  2091-2099.  DOI: 10.11772/j.issn.1001-9081.2022071095

    In real-time and complex network environments, how to motivate workers to participate in tasks and obtain high-quality sensing data is the focus of spatio-temporal crowdsourcing research. To this end, an online incentive mechanism based on quality perception for spatio-temporal crowdsourcing was proposed. Firstly, to adapt to the real-time characteristics of spatio-temporal crowdsourcing, a Phased Online selection of workers Algorithm (POA) was proposed; in this algorithm, the entire crowdsourcing cycle is divided into multiple stages under a budget constraint, and workers are selected online in each stage. Secondly, to improve the accuracy and efficiency of quality prediction, an Improved Expectation Maximization (IEM) algorithm was proposed; in this algorithm, the task results submitted by workers with high reliability are given priority during iteration. Finally, the effectiveness of the proposed incentive mechanism in improving platform utility was verified by comparison experiments on real datasets. Experimental results show that, in terms of efficiency, compared with auction algorithms such as the Improved Two-stage Auction (ITA) algorithm, the Multi-attribute ITA (M-ITA) algorithm and Lyapunov-based Vickrey-Clarke-Groves (L-VCG) auctions, POA improves efficiency by 11.11% on average, and the additional rewards for workers increase by 12.12% on average, which can encourage workers to move to remote and unpopular areas. In terms of quality estimation, the IEM algorithm achieves an average improvement of 5.06% in accuracy and 14.2% in efficiency compared with other quality estimation algorithms.
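
    The quality-estimation step follows the familiar EM pattern for truth discovery: alternate between inferring task truths from reliability-weighted votes and re-estimating worker reliabilities from agreement. The toy binary-task sketch below shows that generic loop only; IEM's prioritization of high-reliability workers is noted in the abstract but not reproduced.

```python
import numpy as np

def em_quality(answers, iters=20):
    """Toy EM for truth discovery on binary tasks.

    answers: array of shape (workers, tasks) with entries in {0, 1}.
    Returns estimated task truths and worker reliabilities."""
    rel = np.full(answers.shape[0], 0.8)        # initial reliabilities
    for _ in range(iters):
        # E-step: weight each worker's vote by its log-odds reliability.
        weight = np.log(rel / (1 - rel))
        truth = (weight @ (2 * answers - 1) > 0).astype(int)
        # M-step: reliability = agreement rate with current truth estimates.
        rel = np.clip((answers == truth).mean(axis=1), 0.05, 0.95)
    return truth, rel
```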

    Prediction of taxi demands between urban regions by fusing origin-destination spatial-temporal correlation
    Yuan WEI, Yan LIN, Shengnan GUO, Youfang LIN, Huaiyu WAN
    2023, 43(7):  2100-2106.  DOI: 10.11772/j.issn.1001-9081.2022091364

    Accurate prediction of taxi demands between urban regions can provide decision support for taxi guidance and scheduling as well as passenger travel recommendation, so as to optimize the relation between taxi supply and demand. However, most existing models only focus on modeling and predicting taxi demand within a region, do not sufficiently consider the spatial-temporal correlation between regions, and pay little attention to finer-grained demand prediction between regions. To solve these problems, a prediction model for taxi demands between urban regions, Origin-Destination fusion with Spatial-Temporal Network (ODSTN), was proposed. In this model, complex spatial-temporal correlations between regions were captured from two spatial dimensions (regions and region pairs) and three temporal dimensions (recent, daily and weekly periods) by graph convolution and attention mechanisms, and a new path-aware fusion mechanism was designed to combine the multi-angle features and realize the taxi demand prediction between urban regions. Experiments were carried out on two real taxi order datasets from Chengdu and Manhattan. The results show that the Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE) of ODSTN are 0.897 1, 3.527 4 and 50.655 6% on Chengdu and 0.589 6, 1.163 8 and 61.079 4% on Manhattan respectively, indicating that ODSTN has high accuracy in taxi demand prediction tasks.

    Artificial intelligence
    Joint triple extraction model combining pointer network and relational embedding
    Yuxin TUO, Tao XUE
    2023, 43(7):  2116-2124.  DOI: 10.11772/j.issn.1001-9081.2022060846

    Aiming at the problems of complex entity overlaps and the difficulty of extracting multiple relational triples from natural language text, a joint triple extraction model combining a pointer network and relational embedding was proposed. Firstly, the BERT (Bidirectional Encoder Representations from Transformers) pre-trained model was used to encode the input sentence. Secondly, head and tail pointer labeling was used to extract all subjects in the sentence, and an attention mechanism guided by subjects and relations was used to distinguish the importance of different relation labels to each word, so that relation label information was added to the sentence embedding. Finally, for the subjects and each relation, the corresponding objects were extracted by pointer labeling and a cascade structure, generating the relational triples. Extensive experiments were conducted on two datasets, New York Times (NYT) and Web Natural Language Generation (WebNLG), and the results show that the proposed model outperforms the best current model, the Cascade Binary Tagging Framework (CasRel), by 1.9 and 0.7 percentage points respectively; compared with the Extract-Then-Label method with Span-based scheme (ETL-Span) model, the performance improvements of the proposed model are more than 6.0% and more than 3.7% in the comparison experiments with 1 to 5 triples respectively. Especially on complex sentences with more than 5 triples, the proposed model improves the F1 score by 8.5 and 1.3 percentage points respectively, and maintains stable extraction ability while capturing more entity pairs, which further verifies its effectiveness on the triple overlap problem.
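
    Head/tail pointer labeling itself is easy to picture: two per-token probability sequences mark span starts and ends, and each head above threshold is paired with the nearest following tail. A decoding sketch under that assumption follows; the model producing the probabilities is omitted.

```python
import numpy as np

def decode_spans(head_probs, tail_probs, threshold=0.5):
    """Head/tail pointer decoding: pair every head position above the
    threshold with the nearest tail position at or after it. The same
    scheme is applied per relation to extract objects in cascade tagging."""
    heads = np.where(head_probs > threshold)[0]
    tails = np.where(tail_probs > threshold)[0]
    spans = []
    for h in heads:
        later = tails[tails >= h]
        if len(later):
            spans.append((int(h), int(later[0])))
    return spans

# Tokens 1-2 and token 5 form entity spans in this toy example.
print(decode_spans(np.array([.1, .9, .2, .1, .1, .8, .1]),
                   np.array([.1, .1, .8, .1, .1, .9, .1])))
```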

    Long non-coding RNA-disease association prediction model based on semantic and global dual attention mechanism
    Yi ZHANG, Gangsheng CAI, Zhenmei WANG
    2023, 43(7):  2125-2132.  DOI: 10.11772/j.issn.1001-9081.2022060872

    Aiming at the limitations of existing long non-coding RNA (lncRNA)-disease association prediction models in comprehensively utilizing the interaction and semantic information of heterogeneous biological networks, an lncRNA-Disease Association prediction model based on a Semantic and Global dual Attention mechanism (SGALDA) was proposed. Firstly, an lncRNA-disease-microRNA (miRNA) heterogeneous network was constructed based on similarities and known associations, and a feature extraction module was designed based on message-passing types to extract and fuse the neighborhood features of homogeneous and heterogeneous nodes on the network, so as to capture multi-level interactive relations on the heterogeneous network. Secondly, the heterogeneous network was decomposed into multiple semantic sub-networks based on meta-paths, and a Graph Convolutional Network (GCN) was applied on each sub-network to extract semantic features of nodes, so as to capture higher-order interactive relations on the heterogeneous network. Thirdly, a semantic and global dual attention mechanism was used to fuse the semantic and neighborhood features of the nodes to obtain more representative node features. Finally, lncRNA-disease associations were reconstructed by using the inner products of lncRNA and disease node features. The 5-fold cross-validation results show that the Area Under the Receiver Operating Characteristic curve (AUROC) of SGALDA is 0.994 5±0.000 2, and its Area Under the Precision-Recall curve (AUPR) is 0.916 7±0.001 1, both the highest among all the compared models, proving SGALDA's good prediction performance. Case studies on breast cancer and stomach cancer further prove the ability of SGALDA to identify potential lncRNA-disease associations, indicating that SGALDA has the potential to be a reliable lncRNA-disease association prediction model.

    Recruitment recommendation model based on field fusion and time weight
    Kunpei YE, Xi XIONG, Zhe DING
    2023, 43(7):  2133-2139.  DOI: 10.11772/j.issn.1001-9081.2022060802

    Recommendation systems that learn user representations with the Embedding layer & Multi-Layer Perceptron (Embedding&MLP) paradigm suffer from overfitting of strong features and underfitting of weak features, and learning user interests with a Gated Recurrent Unit (GRU) fails to consider that the influence of current behaviors on a user's final interests diminishes over time. To address these problems, a Recruitment Recommendation Model based on Field Fusion and Time Weight (RecRec) was proposed. In RecRec, firstly, a new field fusion layer was adopted to replace the traditional concatenation layer; the field fusion layer performs significantly better on multi-field features. Then, time weight was incorporated into the GRU in the interest evolution layer, and a TimeStamp Gated Recurrent Unit (TSGRU) was proposed, by which user interests were learned more accurately. Finally, personalized recommendation was achieved by predicting the dial-up rate of users. Experimental results show that the Area Under Curve (AUC) of RecRec improves by 0.03 to 0.36 percentage points compared to YouTube Deep Neural Network (DNN), Wide&Deep, Auxiliary LSTM-Attention Matrix Factorization (ALAMF) and Long-term & Short-term Sequential Self-Attention Network (LSSSAN), indicating that RecRec can effectively learn user representations and user interests.

    Reliability evaluation method of high-reliability products based on improved evidence fusion
    Sirui WANG, Shijuan CHENG, Feimeng YUAN
    2023, 43(7):  2140-2146.  DOI: 10.11772/j.issn.1001-9081.2022060867

    In the reliability evaluation of many high-reliability, high-value products, product reliability often cannot be evaluated accurately due to the lack of objective test data. Aiming at this problem, a reliability evaluation method for high-reliability products based on improved evidence fusion was proposed in order to make full use of reliability information from different sources. Firstly, combining the characteristics of reliability engineering, the modified weight of each piece of evidence was determined by the consistency of the evidence at the credal level and the pignistic level as well as the uncertainty of the evidence itself. Secondly, the optimal comprehensive weight was obtained by a game-theoretic linear combination of the weight vectors. Finally, Dempster's combination rule was used to fuse the modified evidence, and the probability distribution of the product reliability index was obtained through the pignistic probability transformation to complete the product reliability evaluation. The reliability evaluation results of an electronic device show that compared with the results of Jiang's combination method and Yang's combination method, which also consider multi-dimensional weight modification, the proposed method reduces the credibility of the conflict interval by 69.6% and 54.6% respectively, and reduces the credibility of the whole frame of discernment by 5.6% and 3.7% respectively. Therefore, in reliability engineering applications, the proposed method resolves evidence conflict and reduces the uncertainty of fusion results better than the comparison methods, and can fuse multi-source reliability information effectively and improve the credibility of product reliability evaluation results.
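
    Dempster's combination rule, the fusion step of the method, is mechanical once mass functions are given: intersecting focal elements multiply their masses, and mass lost to empty intersections (the conflict K) is renormalized away. The sketch below implements the two-source rule with dict-based mass functions; the example masses are invented.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over a frame of discernment.
    Masses map frozenset hypotheses to belief mass; mass on empty
    intersections (the conflict K) is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Invented example masses over the frame {ok, fail}.
m1 = {frozenset({"ok"}): 0.7, frozenset({"ok", "fail"}): 0.3}
m2 = {frozenset({"ok"}): 0.6, frozenset({"fail"}): 0.1,
      frozenset({"ok", "fail"}): 0.3}
print(dempster_combine(m1, m2))
```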

    High-precision object detection algorithm based on improved VarifocalNet
    Zhangjian JI, Ming ZHANG, Zilong WANG
    2023, 43(7):  2147-2154.  DOI: 10.11772/j.issn.1001-9081.2022060823

    To address the low recognition precision and recognition difficulty of existing one-stage anchor-free detectors in generic object detection scenarios, a high-precision object detection algorithm based on an improved VarifocalNet (VFNet) was proposed. Firstly, the ResNet backbone used for feature extraction in VFNet was replaced by the Recurrent Layer Aggregation Network (RLANet); the recurrent residual connection imports the features of the previous layer into subsequent network layers to improve the representation ability of the features. Next, the original feature fusion network was replaced by a Feature Pyramid Network (FPN) with a feature-alignment convolution operation, effectively utilizing deformable convolution to align the features of upper and lower FPN layers during fusion and optimize feature quality. Finally, the Focal-Global Distillation (FGD) algorithm was used to further improve the detection performance of the small-scale model. The evaluation results on the COCO (Common Objects in Context) 2017 dataset show that under the same training conditions, the improved algorithm with RLANet-50 as the backbone achieves a mean Average Precision (mAP) of 45.9%, which is 4.3 percentage points higher than that of the VFNet algorithm, with 36.67×10^6 parameters, only 4×10^6 more than VFNet. The improved VFNet algorithm only slightly increases the number of parameters while improving detection accuracy, indicating that it can meet the requirements of lightweight, high-precision object detection.

    Weakly perceived object detection method based on point cloud completion and multi-resolution Transformer
    Jing ZHOU, Yiyu HU, Chengyu HU, Tianjiang WANG
    2023, 43(7):  2155-2165.  DOI: 10.11772/j.issn.1001-9081.2022060908

    To solve the low detection precision for weakly perceived objects with missing shapes in distant or occluded scenes, a Weakly Perceived object detection method based on point cloud Completion and Multi-resolution Transformer (WP-CMT) was proposed. Firstly, since some key information is lost in the down-sampling convolution operations of detection networks, the Part-Aware and Aggregation (Part-A2) method with a deconvolution up-sampling structure was chosen as the base network to generate the initial proposals. Then, to enhance the shape and position features of the weakly perceived objects in the initial proposals, a point cloud completion module was applied to reconstruct dense point sets on the surfaces of the weakly perceived objects, and a novel multi-resolution Transformer feature encoding module was constructed to aggregate the completed shape features with the original spatial location information of the weakly perceived objects; the enhanced local features were then captured by encoding the contextual semantic correlations of the aggregated features on local coordinate point sets at different resolutions. Finally, the refined bounding boxes were generated. Experimental results show that WP-CMT achieves gains of 2.51 percentage points in average precision and 1.59 percentage points in mean average precision over the baseline Part-A2 method for weakly perceived objects at the hard difficulty level on the KITTI and Waymo datasets, which proves the effectiveness of the proposed method for weakly perceived object detection. Meanwhile, ablation results show that the point cloud completion and multi-resolution Transformer feature encoding modules in WP-CMT can effectively improve the detection performance of weakly perceived objects for different Region Proposal Network (RPN) structures.

    Camouflage object segmentation method based on channel attention and edge fusion
    Chunlan ZHAN, Anzhi WANG, Minghui WANG
    2023, 43(7):  2166-2172.  DOI: 10.11772/j.issn.1001-9081.2022060933

    The goal of Camouflage Object Segmentation (COS) is to detect hidden objects in a background. In recent years, Camouflage Object Detection (COD) based on Convolutional Neural Network (CNN) has developed rapidly, but complete objects still cannot be detected accurately in scenes where the foreground and background are highly similar. For this problem, a COS method based on Channel Attention (CA) and edge fusion, called CANet (Network based on Channel Attention and edge fusion), was proposed to obtain complete segmentation results with clearer edge details of camouflaged objects. Firstly, SE (Squeeze-and-Excitation) attention was introduced to extract richer high-level semantic features. Secondly, an edge fusion module was proposed to suppress interference in low-level features and make full use of the edge detail information of the image. Finally, a channel attention module based on depthwise separable convolution was designed to gradually integrate cross-level multi-scale features in a top-down manner, further improving detection accuracy and efficiency. Experimental results on multiple public COD datasets show that compared to eight mainstream methods such as SINet (Search Identification Net), TINet (Texture-aware Interactive guidance Network) and C2FNet (Context-aware Cross-level Fusion Network), CANet performs better and can obtain rich internal and edge detail information of camouflaged objects. Among them, CANet improves the structure-measure index by 2.6 percentage points over SINet on the challenging COD10K dataset. CANet has superior performance and is suitable for medical detection of lesion areas similar to surrounding tissue, military detection of hidden targets, and other related fields.
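
    The SE attention named above is a small, self-contained block: squeeze each channel by global average pooling, excite through a two-layer bottleneck, and rescale the input channels. A standard PyTorch sketch follows; the reduction ratio and placement inside CANet are assumptions.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention: global average pooling
    squeezes each channel to a scalar, a two-layer bottleneck produces
    per-channel weights, and the input is rescaled channel-wise."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                       # squeeze: (N, C)
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)   # excitation: (N, C, 1, 1)
        return x * w                                 # channel-wise rescaling
```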

    Lightweight gesture recognition algorithm for basketball referee
    Zhongyu LI, Haodong SUN, Jiao LI
    2023, 43(7):  2173-2181.  DOI: 10.11772/j.issn.1001-9081.2022060810

    Aiming at the difficulty of balancing the number of parameters, the computational cost and the accuracy of general gesture recognition algorithms, a lightweight gesture recognition algorithm for basketball referees was proposed. The proposed algorithm was reconstructed on the basis of the YOLOV5s (You Only Look Once Version 5s) algorithm. Firstly, the Involution operator was used to replace the CSP1_1 (Cross Stage Partial 1_1) convolution operator to expand the range of captured context information and reduce kernel redundancy. Secondly, the Coordinate Attention (CA) mechanism was added after the C3 module to obtain stronger gesture feature extraction ability. Thirdly, a lightweight content-aware upsampling operator was used to improve the original upsampling module, concentrating the sampling points in the object area and ignoring the background. Finally, lightweight pruning was performed with GhostNet using SiLU (Sigmoid Weighted Linear Unit) as the activation function. Experimental results on a self-made basketball referee gesture dataset show that the computational cost, number of parameters and model size of the proposed algorithm are 3.3 GFLOPs, 4.0×10^6 and 8.5 MB respectively, only 79%, 44% and 40% of those of the YOLOV5s algorithm; its mAP@0.5 is 91.7%; and its detection frame rate on game video with a resolution of 1 920×1 280 reaches 89.3 frame/s, verifying that the proposed algorithm meets the requirements of low error, high detection rate and lightweight deployment.

    Person re-identification method based on multi-modal graph convolutional neural network
    Jiaming HE, Jucheng YANG, Chao WU, Xiaoning YAN, Nenghua XU
    2023, 43(7):  2182-2189.  DOI: 10.11772/j.issn.1001-9081.2022060827

    Aiming at the problems that person textual attribute information is not fully utilized and the semantic relationships among textual attributes are not mined in person re-identification, a person re-identification method based on a multi-modal Graph Convolutional neural Network (GCN) was proposed. Firstly, a Deep Convolutional Neural Network (DCNN) was used to learn person textual attributes and person image features. Then, exploiting the relationship mining ability of the GCN, the textual attribute features and image features were used as the input of the GCN, and the semantic information of the textual attribute nodes was propagated through graph convolution operations, so as to learn the implicit semantic relationships among textual attributes and incorporate this semantic information into the image features. Finally, robust person features were output by the GCN. The proposed multi-modal method achieves a mean Average Precision (mAP) of 87.6% and a Rank-1 accuracy of 95.1% on the Market-1501 dataset, and an mAP of 77.3% and a Rank-1 accuracy of 88.4% on the DukeMTMC-reID dataset, verifying its effectiveness.
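
    The graph convolution operation that propagates attribute semantics can be written in one line of linear algebra, H' = ReLU(D^{-1/2}(A+I)D^{-1/2}HW). A NumPy sketch of this standard layer follows; how the paper wires attribute and image nodes into the adjacency matrix A is not reproduced here.

```python
import numpy as np

def gcn_layer(a, h, w):
    """One graph-convolution step: each node's features become a normalized
    average over its neighbors (attribute and image nodes alike), which is
    how attribute semantics flow into the person image features.

    a: (n, n) adjacency matrix; h: (n, d_in) features; w: (d_in, d_out)."""
    a_hat = a + np.eye(a.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ h @ w, 0.0)  # ReLU
```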

    Data science and technology
    Community search method based on motif connectivity
    Ming DU, Wanli GU, Junfeng ZHOU, Zhijun WANG
    2023, 43(7):  2190-2199.  DOI: 10.11772/j.issn.1001-9081.2022060941

    The goal of community search is to obtain compact subgraphs containing given query vertices from a data graph; community search is widely used in sociology, biology and other fields. Since the basic connectivity structures of existing community models based on subgraph connectivity are all complete graphs, they cannot meet users' needs for diverse community structures in practical applications. Therefore, a community search method based on motif connectivity was proposed, including the Motif-Connective Community (MCC) model based on motif connectivity and two corresponding community search algorithms: the MPCS (Motif-Processed Community Search) algorithm and an MP-index based community search algorithm. The MCC model allows users to freely specify the basic connectivity structure of a community, and the MPCS algorithm solves the search problem for MCCs. Furthermore, two pruning optimization techniques were proposed for the motif instance search process and the community membership judgment process. Finally, the MP-index forest was designed to avoid redundant traversal operations during community search. Experimental results on multiple real datasets show that the pruning optimizations reduce the running time of the MPCS algorithm by 60% to 85%, and the community search algorithm based on the MP-index forest is two to three orders of magnitude more efficient than the MPCS algorithm with pruning optimizations. It can be seen that the proposed method has practical application value in commodity recommendation, social networks and other areas.

    Cross-level high utility itemsets mining algorithm based on data index structure
    Hua JIANG, Xing LI, Huijiao WANG, Jinghai WEI
    2023, 43(7):  2200-2208.  DOI: 10.11772/j.issn.1001-9081.2022060907

    Existing cross-level High Utility Itemset Mining (HUIM) algorithms consume a lot of time and occupy large amounts of memory. To address these problems, a Data Index Structure Cross-level High utility itemsets mining (DISCH) algorithm was proposed. Firstly, the utility list was extended with taxonomy information and index information into the Data Index Structure (DIS), so that all itemsets in the search space can be stored efficiently and retrieved quickly. Then, to improve memory utilization, the memory occupied by utility lists that do not meet the conditions was reclaimed and reallocated. Finally, during the construction of utility lists, an early termination strategy was used to reduce the number of utility lists generated. Experimental results on real retail datasets and synthetic datasets show that compared with the CLH-Miner (Cross-Level High utility itemsets Miner) algorithm, DISCH reduces the running time by 77.6% and the memory consumption by 73.3% on average. Therefore, the proposed algorithm can search cross-level high utility itemsets efficiently while reducing memory consumption.

    Cyber security
    Privacy-preserving federated learning algorithm based on blockchain in edge computing
    Wanzhen CHEN, En ZHANG, Leiyong QIN, Shuangxi HONG
    2023, 43(7):  2209-2216.  DOI: 10.11772/j.issn.1001-9081.2022060909

    Aiming at the problems in federated learning under edge computing that model parameters may be leaked, that an untrusted server may return wrong aggregation results, and that participating users may upload wrong or low-quality model parameters, a privacy-preserving federated learning algorithm based on blockchain in edge computing was proposed. In the training process, firstly, the global model was trained by each user on the user's local dataset, and the resulting model parameters were uploaded to neighboring edge nodes through secret sharing, thereby protecting the users' local model parameters. Secondly, the Euclidean distances between the shares of the model parameters received by the edge nodes were computed, and the results were uploaded to the blockchain. Finally, the Euclidean distances between model parameters were reconstructed by the blockchain, and the global model parameters were aggregated after the poisoned updates were removed. The security analysis proves the security of the proposed algorithm: even if some edge nodes collude, the users' local model parameter information is not leaked. At the same time, experimental results show the high accuracy of the algorithm: when the proportion of poisoned samples is 30%, its accuracy is 94.2%, close to the accuracy of the Federated Averaging (FedAvg) algorithm without poisoned samples (97.8%), whereas the accuracy of FedAvg drops to 68.7% with 30% poisoned samples.
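
    The secret-sharing step can be illustrated with plain additive sharing over a modulus: a parameter vector is split into random shares that individually reveal nothing and sum back to the original. The sketch below shows sharing and reconstruction for quantized integer parameters; the paper's share-wise distance computation and blockchain logic are omitted.

```python
import numpy as np

MODULUS = 2**31 - 1

def share(vec, n_nodes):
    """Additive secret sharing of a quantized parameter vector: n-1 random
    shares plus one balancing share; any subset short of all n nodes
    learns nothing about vec."""
    rng = np.random.default_rng()
    shares = [rng.integers(0, MODULUS, size=vec.shape)
              for _ in range(n_nodes - 1)]
    shares.append((vec - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

params = np.array([12345, 67890, 424242])
assert np.array_equal(reconstruct(share(params, 3)), params)
```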

    Malicious code classification method based on improved MobileNetV2
    Bona XUAN, Jin LI, Yafei SONG, Zexuan MA
    2023, 43(7):  2217-2225.  DOI: 10.11772/j.issn.1001-9081.2022060931

    Aiming at the insufficient accuracy, high prediction time cost and weak anti-obfuscation ability of traditional malicious code classification methods, a malicious code classification method based on an improved MobileNetV2 was proposed. Firstly, to counter malicious code encryption and obfuscation, the Coordinate Attention (CA) method was used to introduce a wider range of spatial locations to enhance the malicious code image features. Then, to reduce the high cost of training from scratch, Transfer Learning (TL) was used to improve the learning method of MobileNetV2 and strengthen the anti-obfuscation ability. Finally, to address the large computational load and slow convergence of traditional deep learning networks, the lightweight MobileNetV2 convolutional network was combined with the Ranger21 optimizer to improve training and promote rapid convergence. Experimental results show that the proposed method achieves accuracies of 99.26% and 96.98% on the Malimg and DataCon datasets respectively. On the Malimg dataset, the method improves the accuracy by 1.49% and the detection efficiency by 45.31% compared with the AlexNet method, and on the DataCon dataset it improves the accuracy by 1.14% compared with the ensemble learning method. It can be seen that the improved MobileNetV2-based malicious code classification method can improve the generalization ability, anti-obfuscation ability and classification efficiency of the model.
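
    The transfer-learning setup can be sketched in a few torchvision lines: load ImageNet weights, freeze the feature extractor, and retrain a new classifier head on malware images. The CA insertion points and the Ranger21 optimizer from the paper are omitted; the 25-class head below matches the Malimg family count and is otherwise an assumption.

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and freeze the pre-trained feature extractor,
# so only the new classification head is trained on the malware images.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False
# Replace the 1000-way ImageNet head with a head for the malware families.
model.classifier[1] = nn.Linear(model.last_channel, 25)
```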

    Advanced computing
    Dynamic multi-objective optimization algorithm based on weight vector clustering
    Erchao LI, Yanli CHENG
    2023, 43(7):  2226-2236.  DOI: 10.11772/j.issn.1001-9081.2022060843

    Many Dynamic Multiobjective Optimization Problems (DMOPs) exist in real life. For such problems, when the environment changes, a Dynamic Multi-Objective Evolutionary Algorithm (DMOEA) is required to track the Pareto Front (PF) or Pareto Set (PS) quickly and accurately in the new environment. Aiming at the poor performance of existing algorithms in population prediction, a dynamic multi-objective optimization algorithm based on Weight Vector Clustering Prediction (WVCP) was proposed. Firstly, uniform weight vectors were generated in the objective space and the individuals in the population were clustered; the distribution of the population was analyzed according to the clustering results. Secondly, a time series was established for the center points of the clustered individuals. For each weight vector, corresponding strategies were adopted to supplement individuals according to the clustering situation: if cluster centers existed at all adjacent time steps, a difference model was used to predict individuals in the new environment; if no cluster center existed at some time step, the centroid of the cluster centers of adjacent weight vectors was used as the cluster center at that time step, and the difference model was then applied. In this way, the problem of poor population distribution was effectively alleviated and prediction accuracy was improved at the same time. Finally, the individual supplement strategy made full use of historical information. To verify the performance of the proposed algorithm, simulation comparisons with four representative algorithms were carried out. Experimental results show that the proposed algorithm can solve DMOPs well.
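
    The difference model used for prediction is first-order: a cluster's individuals are translated by the movement of their center, C_{t+1} = C_t + (C_t - C_{t-1}). A NumPy sketch of just this prediction step follows; WVCP's handling of weight vectors with empty clusters is described in the abstract but not reproduced.

```python
import numpy as np

def predict_cluster(prev_center, curr_center, population):
    """First-order difference prediction for a dynamic environment:
    translate each individual of the cluster by the movement of its
    center, i.e. by (C_t - C_{t-1})."""
    step = curr_center - prev_center
    return np.asarray(population) + step

cluster = np.array([[1.0, 2.0], [1.2, 1.8]])
print(predict_cluster(np.array([0.0, 0.0]), np.array([0.5, 0.1]), cluster))
```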

    Symbiotic organisms search algorithm for information transfer multi-task optimization
    Meiying CHENG, Qian QIAN, Weiqing XIONG
    2023, 43(7):  2237-2247.  DOI: 10.11772/j.issn.1001-9081.2022060896
    Abstract ( )   HTML ( )   PDF (3862KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    Aiming at the problems that the Symbiotic Organisms Search (SOS) algorithm can only solve single tasks and that negative information transfer degrades Multi-Task Optimization (MTO) performance, an Information Transfer Multi-Task SOS (ITMTSOS) algorithm was proposed. Firstly, based on the multi-population MTO evolution framework, multiple populations were set up according to the number of tasks. Secondly, each population ran the basic SOS algorithm independently; when a population stagnated for several consecutive generations, a knowledge module composed of an individual's own best experience and the best individuals in its neighborhood was formed and injected into the individual evolution process. Finally, the time and space complexity of ITMTSOS was analyzed. Simulation results show that ITMTSOS converges rapidly to the global optimum 0 when solving a batch of high-dimensional functions with different shapes, with the average running time reduced by about 25.25% compared with single-task SOS. When solving multi-dimensional 0/1 knapsack problems and teacher-student matching problems concurrently, the optimal fitnesses on the weing1 and weing7 test sets are increased by 22 767 and 22 602 respectively compared with the best published results, the absolute values of the optimal and average matching differences of the teacher-student matching problem are decreased by 26 and 33 respectively, and the average running time is reduced by about 7.69%.
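    A stagnation-triggered transfer of this kind can be sketched as below; the linear blend rule and the stagnation threshold are illustrative assumptions, not the paper's exact update:

```python
import random

def transfer_knowledge(individual, self_best, neighbor_best, rng=random.random):
    """Blend an individual's own best experience with the neighborhood
    best individual: the 'knowledge module' injected on stagnation."""
    return [x + rng() * (p - x) + rng() * (n - x)
            for x, p, n in zip(individual, self_best, neighbor_best)]

def evolve_step(individual, self_best, neighbor_best, stagnant_gens,
                stagnation_limit=10):
    # Trigger the transfer only after the population has stagnated for
    # `stagnation_limit` consecutive generations (threshold assumed);
    # otherwise the basic SOS update proceeds unchanged.
    if stagnant_gens >= stagnation_limit:
        return transfer_knowledge(individual, self_best, neighbor_best)
    return individual
```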

    Multimodal differential evolution algorithm for solving capacitated vehicle routing problem
    Jian LIN, Jingxuan YE, Wenwen LIU, Xiaowen SHAO
    2023, 43(7):  2248-2254.  DOI: 10.11772/j.issn.1001-9081.2022060812
    Abstract ( )   HTML ( )   PDF (1138KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    In the Capacitated Vehicle Routing Problem (CVRP), uncertain factors such as traffic congestion, resource supply and customer demand can easily make a single optimal solution infeasible or no longer optimal. To address this problem, a Multimodal Differential Evolution (MDE) algorithm was proposed to obtain multiple alternative vehicle routing schemes with similar objective values. Firstly, combined with the characteristics of CVRP, an efficient encoding and decoding strategy for solution individuals was constructed, and solution quality was improved by a repair mechanism. Secondly, within the framework of the Differential Evolution (DE) algorithm, a dynamic-radius niche generation method was introduced from the perspective of multimodal optimization, and the Jaccard coefficient was used to measure the similarity between solution individuals, thereby realizing the distance calculation between them. Finally, the neighborhood search strategy was modified, and a multimodal optimal solution set was obtained through an elite archiving and updating strategy. Simulation results on typical datasets show that the average number of optimal solutions obtained by the proposed MDE algorithm reaches 1.743 4, and the deviation between its average optimal solution and the best known solution is 0.03%, better than the 0.848 6 and 0.63% obtained by the DE algorithm. It can be seen that the proposed algorithm is effective and stable in solving CVRP, and can obtain multiple optimal solutions simultaneously.
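    One natural way to apply the Jaccard coefficient to routing solutions, sketched below under the assumption that a solution is compared via the set of undirected arcs it uses (the edge-set representation is this sketch's choice, not necessarily the paper's):

```python
def edge_set(routes):
    """Represent a CVRP solution as the set of undirected arcs it uses.
    `routes` is a list of non-empty customer-index lists; node 0 is the
    depot, and every route implicitly starts and ends there."""
    edges = set()
    for route in routes:
        path = [0] + route + [0]
        for a, b in zip(path, path[1:]):
            edges.add(frozenset((a, b)))
    return edges

def jaccard_distance(sol_a, sol_b):
    """1 - |intersection| / |union| of the two arc sets."""
    ea, eb = edge_set(sol_a), edge_set(sol_b)
    union = ea | eb
    if not union:
        return 0.0
    return 1.0 - len(ea & eb) / len(union)

# e.g. jaccard_distance([[1, 2, 3], [4, 5]], [[1, 2], [3, 4, 5]])
```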

    Network and communications
    Time synchronization method based on precision time protocol in industrial wireless sensor networks
    Feiqiao SHAN, Zhaowei WANG, Yue SHEN
    2023, 43(7):  2255-2260.  DOI: 10.11772/j.issn.1001-9081.2022060825
    Asbtract ( )   HTML ( )   PDF (1545KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    Concerning the dynamically changing link delay, clock timing interference and uncertain timestamp acquisition caused by complex link environments and temperature fluctuations in Industrial Wireless Sensor Networks (IWSNs), a time synchronization method based on the Precision Time Protocol (PTP) was proposed. Firstly, a clock state-space model and an observation model were constructed by integrating the clock timing interference and the asymmetric link delay noise in the PTP bidirectional time synchronization process. Secondly, a reverse adaptive Kalman filter algorithm was constructed to remove the noise interference. Thirdly, the rationality of the noise statistical model was evaluated using the ratio of the normalized innovations of the reverse and forward clock-state estimates. Finally, after setting a detection threshold, the process noise of the clock state was adjusted dynamically, thereby achieving precise estimation of the clock parameters. Simulation results show that, compared with the classical Kalman filter algorithm and the PTP protocol, the proposed algorithm yields clock offset and skew estimates with smaller and more stable error standard deviations under different clock timing precisions. The reverse adaptive Kalman filter can effectively solve the divergence of the Kalman filter caused by factors such as noise uncertainty, and improve the reliability of time synchronization.
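    The forward filter at the core of such a scheme is standard and can be sketched as one predict/update step over a two-state clock model; the noise covariances q and r below are assumed placeholders, and the paper's adaptive step (comparing backward and forward normalized innovations to retune the process noise) is only noted in comments:

```python
import numpy as np

def kalman_clock_step(x, P, z, T, q=1e-6, r=1e-4):
    """One predict/update step for the clock state x = [offset, skew].
    z is the PTP-measured offset and T the sync interval."""
    F = np.array([[1.0, T], [0.0, 1.0]])                      # offset drifts by skew*T
    Q = q * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])   # assumed process noise
    H = np.array([[1.0, 0.0]])                                # only offset is observed
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + r
    K = P @ H.T / S
    innov = z - (H @ x)[0]
    x = x + (K * innov).ravel()
    P = (np.eye(2) - K @ H) @ P
    # The paper's adaptive variant would additionally compare the
    # normalized innovations of a reverse (backward) pass against this
    # forward pass and adjust q when the ratio crosses a threshold.
    return x, P, innov
```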

    Computer software technology
    Software testing resource allocation algorithm for dynamic changes in architecture
    Lei LI, Guofu ZHANG, Zhaopin SU, Feng YUE
    2023, 43(7):  2261-2270.  DOI: 10.11772/j.issn.1001-9081.2022060824
    Abstract ( )   HTML ( )   PDF (1050KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    Testing resource allocation is a core problem in software testing. Most existing studies assume that the software architecture is static, and rarely consider cost constraints. To address this, a software testing resource allocation algorithm for dynamic changes in architecture was proposed. Firstly, a multi-stage, multi-objective, multi-constraint testing resource allocation model with a dynamically changing architecture was constructed. Then, based on parameter re-estimation and generalized differential evolution, a population re-initialization step was added to the algorithm, which reduced the search space and improved performance. Finally, a new repair mechanism was added to the algorithm to effectively eliminate the invalid solutions it generates. Compared with the solution sets obtained by the Multi-Objective Differential Evolution based on Weighted Normalized Sum (WNS-MODE) algorithm and the Dynamic Testing Resource Allocation based on Generalized Differential Evolution 3 (DTRA-GDE3) algorithm, the solution set obtained by the proposed algorithm improves the capacity value by about 11.81 times and 0.39 times respectively. In terms of coverage, the proposed algorithm completely covers the WNS-MODE algorithm and improves on the DTRA-GDE3 algorithm by 81 percentage points. In terms of hypervolume, it improves by nearly 6 times and 9 times respectively. Experimental results show that the proposed algorithm adapts better to dynamic changes in software architecture, provides more and better testing resource allocation schemes for dynamic testing of software products, and meets dynamically changing user requirements.
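    The repair idea can be illustrated with a deliberately simple proportional scheme; this is a sketch of one plausible repair under a single budget constraint, not the paper's mechanism:

```python
def repair_allocation(alloc, budget):
    """Repair an infeasible testing-resource allocation by scaling it
    down proportionally until the total stays within the budget."""
    total = sum(alloc)
    if total <= budget:
        return list(alloc)          # already feasible, keep as-is
    scale = budget / total
    return [a * scale for a in alloc]

# e.g. repair_allocation([40.0, 30.0, 50.0], budget=100.0) -> [33.3.., 25.0, 41.6..]
```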

    Optimal supervisory control algorithm of discrete-event systems
    Yuhong HU, Deguang WANG, Jiahan HE, Zhiheng ZHANG
    2023, 43(7):  2271-2279.  DOI: 10.11772/j.issn.1001-9081.2022060884
    Abstract ( )   HTML ( )   PDF (3280KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    A supervisor of a discrete-event system can disable controllable events to enforce the safety and liveness specifications of the system. However, the supervisor does not actively choose which controllable events are allowed to occur, so several controllable events may occur simultaneously. In practice, for example in traffic scheduling and robot path planning, the system is required to allow at most one controllable event to occur in each state. In response to this problem, an optimality mechanism was introduced to quantify control cost, and an optimal supervisory control algorithm for discrete-event systems was proposed, which not only guarantees the safety and liveness of the system, but also minimizes the cumulative cost of event execution. Firstly, the automata models of the controlled system and its behavioral constraints were given, and a nonblocking, maximally permissive supervisor was computed based on the supervisory control theory of Ramadge and Wonham. Secondly, a cost function was defined to assign a cost to the execution of each event in the supervisor. Finally, an optimal directed supervisor was computed iteratively via dynamic programming, so that at most one controllable event occurs in each state and the cumulative cost of event execution is minimized. To verify the effectiveness and correctness of the proposed algorithm, a one-way train guideway example and a multi-track train control example were used. For these two examples, the cumulative event-execution cost required by the directed supervisor solved by the proposed algorithm to reach the target state is 26.0 and 14.0 respectively, lower than the 27.5 and 16.0 of a greedy algorithm, and no higher than the 26.5 and 14.0 of Q-learning.
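    Ignoring the controllable/uncontrollable distinction for brevity, the dynamic-programming core can be sketched as a Bellman-style relaxation over the supervisor automaton; the data-structure shapes and the single-target assumption are this sketch's, not the paper's:

```python
import math

def directed_supervisor(states, target, trans, cost):
    """Compute a min-cumulative-cost cost-to-go V over the supervisor
    automaton and, per state, the single event to direct.
    trans: {state: {event: next_state}} restricted to events the
    maximally permissive supervisor allows; cost: {event: float >= 0}."""
    V = {s: (0.0 if s == target else math.inf) for s in states}
    choice = {s: None for s in states}
    for _ in range(len(states)):        # at most |S|-1 useful relaxations
        changed = False
        for s in states:
            for e, s2 in trans.get(s, {}).items():
                c = cost[e] + V[s2]
                if c < V[s]:
                    V[s], choice[s] = c, e
                    changed = True
        if not changed:
            break
    return V, choice
```

    Directing exactly the event stored in `choice[s]` realizes the "at most one controllable event per state" requirement while following the minimum-cost path to the target.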

    Multimedia computing and computer simulation
    Image super-resolution reconstruction method based on iterative feedback and attention mechanism
    Min LIANG, Jiayi LIU, Jie LI
    2023, 43(7):  2280-2287.  DOI: 10.11772/j.issn.1001-9081.2022060877
    Abstract ( )   HTML ( )   PDF (3427KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    To address the difficulty of reconstructing high-frequency information in image super-resolution, caused by the lack of dependency between low-resolution and high-resolution images and the lack of ordering when reconstructing feature maps, a single-image super-resolution reconstruction method based on iterative feedback and an attention mechanism was proposed. Firstly, high- and low-frequency information in the image was extracted separately by a frequency decomposition block and processed independently, so that the network focused on the extracted high-frequency details, increasing the method's ability to restore image detail. Secondly, through a channel-wise attention mechanism, the reconstruction focus was put on the feature channels carrying effective features, improving the network's ability to exploit feature-map information. Thirdly, the idea of iterative feedback was adopted to raise the quality of the restored image through repeated comparison and reconstruction. Finally, the output image was generated by the reconstruction block. The proposed method outperforms mainstream super-resolution methods in the 2×, 4× and 8× experiments on the Set5, Set14, BSD100, Urban100 and Manga109 benchmark datasets. In the 8× experiments on the Manga109 dataset, it improves the Peak Signal-to-Noise Ratio (PSNR) by an average of about 3.01 dB over traditional interpolation and 2.32 dB over the Super-Resolution Convolutional Neural Network (SRCNN). Experimental results show that the proposed method reduces errors in the reconstruction process and effectively reconstructs finer high-resolution images.
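    For readers unfamiliar with channel-wise attention, a squeeze-and-excitation style block is the standard form of the idea; the sketch below is a generic stand-in, not the paper's exact block, and the reduction ratio is assumed:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel-wise attention: pool each
    channel to a scalar, learn per-channel weights, reweight channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))             # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                       # emphasize informative channels
```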

    Adaptive image deblurring generative adversarial network algorithm based on active discrimination mechanism
    Anyang LIU, Huaici ZHAO, Wenlong CAI, Zechao XU, Ruideng XIE
    2023, 43(7):  2288-2294.  DOI: 10.11772/j.issn.1001-9081.2022060840
    Abstract ( )   HTML ( )   PDF (2675KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    Aiming at the problems that existing image deblurring algorithms suffer from diffusion and artifacts when dealing with edge loss, and that full-frame deblurring in video processing does not meet real-time requirements, an Adaptive DeBlurring Generative Adversarial Network (ADBGAN) algorithm based on an active discrimination mechanism was proposed. Firstly, an adaptive blur discrimination mechanism was proposed, and an adaptive blur processing network module was developed to make a prior judgment of the blurriness of the input image: while collecting the input, the blurring degree of each frame was judged in advance, and frames that were already sharp enough were excluded, improving the running efficiency of the algorithm. Then, an excitation step of the attention mechanism was introduced into fine feature extraction, so that weight normalization was carried out in the forward flow of feature extraction, improving the network's ability to recover fine-grained features. Finally, the feature pyramid fine-feature recovery structure in the generator architecture was improved, adopting a more lightweight feature fusion process to increase running efficiency. To verify the effectiveness of the algorithm, detailed comparison experiments were conducted on the open-source GoPro and Kohler datasets. Experimental results on the GoPro dataset show that the visual fidelity of ADBGAN is 2.1 times that of the Scale-Recurrent Network (SRN) algorithm, its Peak Signal-to-Noise Ratio (PSNR) is improved by 0.762 dB over SRN, and ADBGAN has good image information recovery ability; in terms of video processing time, the actual processing time is reduced by 85.9% compared with SRN. The proposed algorithm can efficiently generate deblurred images with higher information quality.
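    The paper's blur prior is a learned network module; to convey the idea of pre-judging blurriness and skipping sharp frames, here is a deliberately cheap heuristic stand-in (variance of the Laplacian), with the threshold an assumption to be tuned per camera:

```python
import cv2

def is_sharp(frame_bgr, threshold=100.0):
    """Cheap blurriness prior: a frame whose Laplacian variance exceeds
    the threshold is treated as sharp and skipped, instead of being
    sent through the deblurring generator."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

# In a video loop: if is_sharp(frame): emit frame unchanged
#                  else:               emit generator(frame)
```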

    Dangerous goods detection method in elevator scene based on improved attention mechanism
    Yiyu GUO, Luoyu ZHOU, Xinyu LIU, Yao LI
    2023, 43(7):  2295-2302.  DOI: 10.11772/j.issn.1001-9081.2022060857
    Abstract ( )   HTML ( )   PDF (5447KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    Aiming at the hidden fire danger caused by electric bicycles and gas tanks taken into elevators, an improved attention mechanism was proposed, and a dangerous goods detection method for elevator scenes based on this mechanism was developed. With YOLOX-s as the baseline model, firstly, depthwise separable convolution was introduced into the enhanced feature extraction network to replace standard convolution, improving the inference speed of the model. Secondly, a mixed-domain Efficient Convolutional Block Attention Module (ECBAM) was proposed and embedded into the backbone feature extraction network. In the channel attention part of ECBAM, the two fully connected layers were replaced by a one-dimensional convolution, which not only reduced the complexity of the Convolutional Block Attention Module (CBAM) but also improved detection precision. Finally, a multi-frame collaboration algorithm was proposed to reduce false alarms of dangerous goods entering the elevator by combining the detection results of multiple images. Experimental results show that compared with YOLOX-s, the improved model increases the mean Average Precision (mAP) by 1.05 percentage points while reducing the floating-point computational cost by 34.1% and the model size by 42.8%. The improved model reduces false alarms in practical applications and meets the precision and speed requirements of dangerous goods detection in elevator scenes.
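    A minimal sketch of multi-frame collaboration, assuming a simple sliding-window vote (the window size and hit count are illustrative values, not the paper's settings):

```python
from collections import deque

class MultiFrameFilter:
    """Raise an alarm only if dangerous goods are detected in at least
    `min_hits` of the last `window` frames, suppressing one-frame
    false positives."""
    def __init__(self, window=5, min_hits=3):
        self.history = deque(maxlen=window)
        self.min_hits = min_hits

    def update(self, detected_this_frame: bool) -> bool:
        self.history.append(detected_this_frame)
        return sum(self.history) >= self.min_hits

# f = MultiFrameFilter(); alarm = f.update(detector_found_danger(frame))
```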

    3D liver image segmentation method based on multi-scale feature fusion and grid attention mechanism
    Shuai ZHENG, Xiaolong ZHANG, He DENG, Hongwei REN
    2023, 43(7):  2303-2310.  DOI: 10.11772/j.issn.1001-9081.2022060803
    Abstract ( )   HTML ( )   PDF (2868KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    Due to the high similarity of gray values between the liver and adjacent organs in Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images, a 3D liver image segmentation method based on multi-scale feature fusion and a grid attention mechanism, namely MAGNet (Multi-scale feature fusion And Grid attention mechanism Network), was proposed to segment the liver automatically and accurately. Firstly, high-level and low-level features were connected by an attention-guided concatenation module to extract important context information, and a grid attention mechanism was introduced into this module to focus on the region of interest for segmentation. Then, a multi-scale feature fusion module was formed by layered connections within a single feature map according to the number of channels, and this module was used in place of the basic convolutional block to obtain multi-scale semantic information. Finally, a deep supervision mechanism was utilized to alleviate vanishing gradients, exploding gradients and slow convergence. Experimental results show that on the 3DIRCADb dataset, compared with the U3-Net+DC method, MAGNet improves the Dice Similarity Coefficient (DSC) by 0.10 percentage points and reduces the Relative Volume Difference (RVD) by 1.97 percentage points; on the Sliver07 dataset, compared with the CANet method, MAGNet improves the DSC by 0.30 percentage points, reduces the Volumetric Overlap Error (VOE) by 0.68 percentage points, and reduces the Average Symmetric Surface Distance (ASD) and Root Mean Square Symmetric Surface Distance (RMSD) by 0.03 mm and 0.22 mm respectively; on the liver MRI dataset of a hospital, MAGNet also performs well on all metrics. Moreover, applied to a mixed dataset of 3DIRCADb and the hospital liver MRI dataset, MAGNet also achieves competitive segmentation results.
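    An attention-guided concatenation of a gating signal with skip features is commonly realized as an additive attention gate in the spirit of Attention U-Net; the 3D sketch below assumes the two inputs already share spatial size after upsampling, and all layer shapes are this sketch's choices:

```python
import torch
import torch.nn as nn

class GridAttentionGate3D(nn.Module):
    """Gate low-level skip features x with a high-level signal g, then
    concatenate: one plausible form of attention-guided concatenation."""
    def __init__(self, in_ch, gate_ch, mid_ch):
        super().__init__()
        self.theta = nn.Conv3d(in_ch, mid_ch, kernel_size=1)    # project x
        self.phi = nn.Conv3d(gate_ch, mid_ch, kernel_size=1)    # project g
        self.psi = nn.Conv3d(mid_ch, 1, kernel_size=1)          # to one attention map

    def forward(self, x, g):
        a = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return torch.cat([x * a, g], dim=1)  # gated skip + high-level features
```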

    Pulmonary nodule detection algorithm based on attention feature pyramid networks
    Yuanyuan QIN, Hong ZHANG
    2023, 43(7):  2311-2318.  DOI: 10.11772/j.issn.1001-9081.2022060924
    Abstract ( )   HTML ( )   PDF (2687KB) ( )  
    Figures and Tables | References | Related Articles | Metrics

    Aiming at the low sensitivity and high false positive rate of Computer-Aided Detection (CAD) systems for pulmonary nodules, caused by the varied shapes of nodules and the difficulty of detecting them, a pulmonary nodule detection algorithm based on attention feature pyramid networks was proposed. In the first stage, a more compact Dual Path Network (DPN) was used as the backbone, combined with a Feature Pyramid Network (FPN) for multi-scale prediction to obtain feature information at different levels; at the same time, a Global Attention Mechanism (GAM) was embedded to refine the semantic features emphasized in learning and improve the sensitivity of the algorithm. In the second stage, a false positive reduction network was proposed to obtain the final classification predictions. During training, the focal loss function and various data augmentation techniques were used to deal with data imbalance. Experimental results on the public LUNA16 (LUng Nodule Analysis 2016) dataset show that the Competitive Performance Metric (CPM) of the first stage alone reaches 0.908, and with the false positive reduction network added, the CPM reaches 0.933, which is 1.1 percentage points higher than that of the classic Convolutional Neural Network (CNN) based on Maximum Intensity Projection (MIP). Ablation results further show that the dual path network, FPN and GAM are all effective in improving detection sensitivity. These results demonstrate that the proposed two-stage detection algorithm can capture multi-scale nodule information, improve the sensitivity of pulmonary nodule detection, and reduce the false positive rate.
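    The focal loss mentioned here is the standard formulation of Lin et al. (2017); a binary sketch follows, where alpha and gamma are the commonly used defaults rather than necessarily the paper's settings:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weight easy examples so training focuses
    on hard nodule/non-nodule cases under class imbalance."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)          # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```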
