
Table of Contents

    10 February 2022, Volume 42 Issue 2
    Artificial intelligence
    Survey of communication overhead of federated learning
    Xinyuan QIU, Zecong YE, Xiaolong CUI, Zhiqiang GAO
    2022, 42(2):  333-342.  DOI: 10.11772/j.issn.1001-9081.2021020232

    To resolve the contradiction between data sharing demands and privacy protection requirements, federated learning was proposed. As a form of distributed machine learning, federated learning requires a large number of model parameters to be exchanged between the participants and the central server, resulting in high communication overhead. At the same time, federated learning is increasingly deployed on mobile devices with limited communication bandwidth and limited power, and the limited network bandwidth together with the sharply rising number of clients will make the communication bottleneck worse. For the communication bottleneck problem of federated learning, the basic workflow of federated learning was analyzed at first; then, from the perspective of methodology, three mainstream types of methods, based respectively on reducing the frequency of model updating, model compression and client selection, as well as special methods such as model partition, were introduced, and a deep comparative analysis of specific optimization schemes was carried out. Finally, the development trends of research on federated learning communication overhead technology were summarized and prospected.
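    To make the model-compression line of work concrete, the following minimal sketch (not taken from the surveyed papers) shows top-k sparsification of client updates, one common way to cut the volume of parameters exchanged per round; the function names and the 1% keep ratio are illustrative assumptions.

```python
import numpy as np

def topk_sparsify(update: np.ndarray, ratio: float = 0.01):
    """Keep only the largest-magnitude fraction of a model update.

    Returns the indices and values actually transmitted; everything
    else is treated as zero by the server.
    """
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # top-k entries by magnitude
    return idx, flat[idx]

def server_aggregate(shape, client_payloads):
    """Average the sparse payloads received from clients (FedAvg-style)."""
    acc = np.zeros(np.prod(shape))
    for idx, vals in client_payloads:
        acc[idx] += vals
    return (acc / len(client_payloads)).reshape(shape)

# toy example: three clients, each sending roughly 1% of a 10 000-parameter update
rng = np.random.default_rng(0)
payloads = [topk_sparsify(rng.normal(size=(100, 100)), 0.01) for _ in range(3)]
global_update = server_aggregate((100, 100), payloads)
print(global_update.shape)
```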

    Feature construction and preliminary analysis of uncertainty for meta-learning
    Yan LI, Jie GUO, Bin FAN
    2022, 42(2):  343-348.  DOI: 10.11772/j.issn.1001-9081.2021071198

    Meta-learning applies machine learning methods (meta-algorithms) to seek the mapping between the features of a problem (meta-features) and the relative performance measures of algorithms, thereby forming the learning process of meta-knowledge. How to construct and extract meta-features is an important research topic. Concerning the problem that most meta-features used in existing related research are statistical features of the data, uncertainty modeling was proposed and the impact of uncertainty on the learning system was studied. Based on the inconsistency of data, the complexity of the boundary, the uncertainty of model output, the linear separability, the degree of attribute overlap, and the uncertainty of the feature space, six kinds of uncertainty meta-features were established for data or models. At the same time, the uncertainty of the learning problem itself was measured from different perspectives, and specific definitions were given. The correlations between these meta-features were analyzed on artificial datasets and real datasets of a large number of classification problems, and multiple classification algorithms such as K-Nearest Neighbor (KNN) were used to conduct a preliminary analysis of the correlation between meta-features and test accuracy. Results show that the average degree of correlation is about 0.8, indicating that these meta-features have a significant impact on learning performance.

    Centered kernel alignment based multiple kernel one-class support vector machine
    Xiangzhou QI, Hongjie XING
    2022, 42(2):  349-356.  DOI: 10.11772/j.issn.1001-9081.2021071230

    In comparison with single kernel learning, Multiple Kernel Learning (MKL) methods obtain better performance in classification and regression tasks. However, traditional MKL methods are all used for tackling two-class or multi-class classification problems. To make MKL methods fit for dealing with One-Class Classification (OCC) problems, a Centered Kernel Alignment (CKA) based multiple kernel One-Class Support Vector Machine (OCSVM) was proposed. Firstly, CKA was utilized to calculate the weight of each kernel matrix, and the obtained weights were used as the linear combination coefficients to combine different types of kernel functions into a composite kernel function, which was introduced into the traditional OCSVM to replace the single kernel function. The proposed method can not only avoid the selection of a kernel function, but also improve generalization ability and noise resistance. In comparison with five related methods, including OCSVM, Localized Multiple Kernel OCSVM (LMKOCSVM) and Kernel-Target Alignment based Multiple Kernel OCSVM (KTA-MKOCSVM), on 20 UCI benchmark datasets, the geometric mean (g-mean) values of the proposed algorithm were higher than those of the compared methods on 13 datasets. At the same time, the traditional single kernel OCSVM obtained better results on 2 datasets, while LMKOCSVM and KTA-MKOCSVM achieved better classification effects on 5 datasets. Therefore, the effectiveness of the proposed method was sufficiently verified by the experimental comparisons.
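    For reference, here is a minimal sketch of the centered kernel alignment computation and of plugging a weighted kernel combination into an OCSVM with a precomputed kernel. The weighting rule used here (alignment of each base kernel with the uniform combination) is an illustrative assumption and may differ from the paper's exact scheme; to score new points, the same weighted kernel would have to be computed between test and training samples.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

def centered_alignment(K1, K2):
    """Centered kernel alignment between two kernel matrices."""
    n = K1.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K1c, K2c = H @ K1 @ H, H @ K2 @ H
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

X = np.random.default_rng(1).normal(size=(200, 5))   # one-class training data
kernels = [linear_kernel(X), polynomial_kernel(X, degree=3), rbf_kernel(X, gamma=0.5)]

# Hypothetical weighting: align each base kernel with the uniform combination.
K_ref = sum(kernels) / len(kernels)
w = np.array([centered_alignment(K, K_ref) for K in kernels])
w = w / w.sum()

K_comb = sum(wi * Ki for wi, Ki in zip(w, kernels))   # combined kernel matrix
clf = OneClassSVM(kernel="precomputed", nu=0.1).fit(K_comb)
print("weights:", np.round(w, 3), "support vectors:", len(clf.support_))
```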

    Topology optimization based graph convolutional network combining with global structural information
    Kun FU, Jinhui GAO, Xiaomeng ZHAO, Jianing LI
    2022, 42(2):  357-364.  DOI: 10.11772/j.issn.1001-9081.2021030380

    As a kind of Graph Convolutional Neural Network (GCNN), the Topology Optimization based Graph Convolutional Network (TOGCN) model uses auxiliary information in the network to optimize its topological structure, thereby helping to reflect the relational degrees between nodes. However, the TOGCN model only focuses on the association between local nodes and pays insufficient attention to the potential global structural information. Fusing global feature information helps improve the model's performance as well as its robustness when dealing with incomplete information. A Global structure information Enhanced-TOGCN (GE-TOGCN) model was proposed, in which the attributes of neighboring nodes were utilized to optimize the topological graph, and the class information was regarded as global structural information to maintain intra-class aggregation and inter-class separation. Firstly, the center vector of each class was calculated from the labeled nodes; then some unlabeled nodes were selected to update these class center vectors. Finally, all the nodes were assigned to the corresponding class according to their similarity to the class center vectors, and a semi-supervised loss function was adopted to optimize the class center vectors and the final representation vectors of the nodes. On the Cora and Citeseer datasets, a node classification task and a node visualization task were performed by using the obtained node representation vectors under label information loss. Experimental results show that, compared with Graph Convolutional Network (GCN), Graph Learning-Convolutional Network (GLCN) and other models, GE-TOGCN has the classification accuracy increased by 1.2-12.0 percentage points on the Cora dataset and by 0.9-9.9 percentage points on the Citeseer dataset. In the node visualization task, the proposed model shows a higher degree of intra-class node aggregation and more obvious boundaries between class clusters. In summary, fusing the global class information can reduce the negative influence of label information loss on the learning effect of the model, and the node representations obtained by the proposed model perform better in downstream tasks.
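    The class-center step described above can be illustrated with a small sketch: centers are computed from labeled node embeddings and every node is then assigned to its most similar center. The cosine similarity and the toy data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def class_centers(H, labels, mask):
    """Mean embedding of the labeled nodes of each class (H: node embeddings)."""
    classes = np.unique(labels[mask])
    return {c: H[mask & (labels == c)].mean(axis=0) for c in classes}

def assign_by_similarity(H, centers):
    """Assign every node to its most similar class center (cosine similarity)."""
    C = np.stack(list(centers.values()))
    keys = np.array(list(centers.keys()))
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    Cn = C / np.linalg.norm(C, axis=1, keepdims=True)
    return keys[np.argmax(Hn @ Cn.T, axis=1)]

# toy usage: 6 nodes, 2 labeled nodes per class, -1 marks unlabeled nodes
H = np.random.default_rng(0).normal(size=(6, 4))
labels = np.array([0, 0, 1, 1, -1, -1])
mask = labels >= 0
centers = class_centers(H, labels, mask)
print(assign_by_similarity(H, centers))
```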

    Derivative-free few-shot learning based performance optimization method of pre-trained models with convolution structure
    Yaming LI, Kai XING, Hongwu DENG, Zhiyong WANG, Xuan HU
    2022, 42(2):  365-374.  DOI: 10.11772/j.issn.1001-9081.2021020230

    Deep learning models with convolution structures have poor generalization performance in few-shot learning scenarios. Therefore, with AlexNet and ResNet as examples, a derivative-free few-shot learning based performance optimization method for convolution structured pre-trained models was proposed. Firstly, based on causal intervention, the sample data were modulated to generate series data from the non-series data, and the pre-trained model was pruned directly based on the co-integration test from the perspective of data distribution stability. Then, based on the Capital Asset Pricing Model (CAPM) and optimal transport theory, forward learning without gradient propagation was carried out in the intermediate output process of the pre-trained model, and a new structure was constructed, thereby generating representation vectors with clear inter-class distinguishability in the distribution space. Finally, the generated effective features were adaptively weighted based on the self-attention mechanism and aggregated in the fully connected layer to generate embedding vectors with weak correlation. Experimental results indicate that the proposed method can increase the Top-1 accuracies of the AlexNet and ResNet convolution structured pre-trained models on 100 classes of images in the ImageNet 2012 dataset from 58.82% and 78.51% to 68.50% and 85.72% respectively. Therefore, the proposed method can effectively improve the performance of convolution structured pre-trained models with few-shot training data.

    Iterative intuitionistic fuzzy K-modes algorithm
    Yudan CHEN, Cuifang GAO, Wanqiang SHEN, Ping YIN
    2022, 42(2):  375-381.  DOI: 10.11772/j.issn.1001-9081.2021030383

    The Intuitionistic Fuzzy K-Modes (IFKM) algorithm adopts the simple 0-1 matching similarity measure in the clustering process, which cannot effectively describe the similarity of data objects within a class and fails to reflect the contribution of different attributes to clustering. In addition, the IFKM algorithm directly determines the classes of data objects according to the intuitionistic fuzzy membership matrix in each iteration of clustering, and does not give full play to the idea of intuitionistic fuzziness. In order to solve these two problems, an Iterative IFKM (IIFKM) algorithm was proposed. Firstly, a weighted similarity measure of intuitionistic fuzzy membership degree was defined based on Intuitionistic Fuzzy Entropy (IFE) and Intuitionistic Fuzzy Set (IFS). Secondly, the intuitionistic fuzzy membership matrix was used as iterative information in the whole clustering process, so that the idea of intuitionistic fuzziness was fully reflected in the algorithm. Experimental results on 5 datasets from the UCI database show that, compared with the IFKM algorithm, the proposed IIFKM algorithm improves the accuracy and recall by 7%-11%, and also improves the precision to some degree.

    Neighborhood decision tree construction algorithm based on variable-precision neighborhood equivalent granules
    Xin XIE, Xianyong ZHANG, Xuanye WANG, Pengfei TANG
    2022, 42(2):  382-388.  DOI: 10.11772/j.issn.1001-9081.2021071168

    Aiming at shortcomings of existing decision tree algorithms on continuous data classification, such as information loss and poor classification performance, a Neighborhood Decision Tree (NDT) construction algorithm was proposed. Firstly, the variable-precision neighborhood equivalent granules on the neighborhood decision information system were mined, and the related properties were discussed. Secondly, a neighborhood Gini index measure was constructed based on the variable-precision neighborhood equivalent granules to measure the uncertainty of the neighborhood decision information system. Finally, the neighborhood Gini index measure was used to induce the tree node selection conditions, and the variable-precision neighborhood equivalent granules were used as the tree splitting rules to construct the NDT. Experimental results on UCI datasets show that the accuracy of the NDT algorithm is generally improved by about 20 percentage points compared with those of the Iterative Dichotomiser 3 (ID3) algorithm, the Classification And Regression Tree (CART) algorithm, the C4.5 algorithm and the combined Information Gain and Gini Index (IGGI) algorithm, indicating that the proposed NDT algorithm is effective.
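    As a rough illustration of a neighborhood-based Gini measure, the sketch below averages the Gini impurity of each sample's δ-neighborhood in normalized attribute space; this is a simplified stand-in and does not reproduce the paper's variable-precision neighborhood equivalent granules.

```python
import numpy as np

def neighborhood_gini(X, y, delta=0.15):
    """Average Gini impurity over each sample's delta-neighborhood granule."""
    X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)   # normalise so delta is comparable
    ginis = []
    for i in range(len(X)):
        nbr = np.linalg.norm(X - X[i], axis=1) <= delta   # delta-neighborhood of sample i
        _, counts = np.unique(y[nbr], return_counts=True)
        p = counts / counts.sum()
        ginis.append(1.0 - np.sum(p ** 2))
    return float(np.mean(ginis))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)
print(neighborhood_gini(X, y, delta=0.2))   # lower values mean purer granules
```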

    Voting instance selection algorithm based on learning to hash
    Yajie HUANG, Junhai ZHAI, Xiang ZHOU, Yan LI
    2022, 42(2):  389-394.  DOI: 10.11772/j.issn.1001-9081.2021071188

    With the massive growth of data, how to store and use data has become a hot issue in academic research and industrial applications. As one of the methods to address these problems, instance selection effectively reduces the difficulty of follow-up work by selecting representative instances from the original data according to established rules. Therefore, a voting instance selection algorithm based on learning to hash was proposed. Firstly, the Principal Component Analysis (PCA) method was used to map high-dimensional data to a low-dimensional space. Secondly, the k-means algorithm was combined with the vector quantization method to perform iterative operations, and the hash codes of the cluster centers were used to represent the data. After that, the coded data were randomly selected in proportion, and the final instances were selected by voting after several independent runs of the algorithm. Compared with the Compressed Nearest Neighbor (CNN) algorithm and LSH-IS-F (Instance Selection algorithm by Hashing with two passes), an instance selection algorithm of linear complexity for big data, the proposed algorithm improves the compression ratio by an average of 19%. The idea of the proposed algorithm is simple and easy to implement, and the algorithm can control the compression ratio automatically by adjusting the parameters. Experimental results on 7 datasets show that the proposed algorithm has a great advantage over random hashing in terms of compression ratio and running time with similar test accuracy.
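    The pipeline described above (PCA projection, k-means coding, proportional sampling per cluster, voting across independent runs) can be sketched as follows; the cluster count, keep ratio and vote threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from collections import Counter
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def one_pass_selection(X, n_clusters=50, keep_ratio=0.1, seed=0):
    """One run: PCA projection, k-means coding, proportional sampling per cluster."""
    rng = np.random.default_rng(seed)
    Z = PCA(n_components=min(8, X.shape[1])).fit_transform(X)
    codes = KMeans(n_clusters=n_clusters, n_init=5, random_state=seed).fit_predict(Z)
    picked = []
    for c in np.unique(codes):
        members = np.flatnonzero(codes == c)
        k = max(1, int(keep_ratio * members.size))
        picked.extend(rng.choice(members, size=k, replace=False))
    return picked

def voting_selection(X, runs=5, min_votes=3, **kw):
    """Keep instances selected in at least `min_votes` of the independent runs."""
    votes = Counter()
    for r in range(runs):
        votes.update(one_pass_selection(X, seed=r, **kw))
    return sorted(i for i, v in votes.items() if v >= min_votes)

X = np.random.default_rng(0).normal(size=(2000, 20))
selected = voting_selection(X, runs=5, min_votes=3, n_clusters=40, keep_ratio=0.05)
print(len(selected), "instances kept out of", len(X))
```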

    Parameter asynchronous updating algorithm based on multi-column convolutional neural network
    Xinyu CHEN, Mingzhe LIU, Jun REN, Ying TANG
    2022, 42(2):  395-403.  DOI: 10.11772/j.issn.1001-9081.2021020367

    To address the problem that existing algorithms use synchronous manual optimization of deep learning networks and ignore the negative information of network learning, which leads to a large number of redundant parameters or even overfitting and thereby affects counting accuracy, a parameter asynchronous updating algorithm based on the Multi-column Convolutional Neural Network (MCNN) was proposed. Firstly, a single frame image was input to the network, and after the three columns of convolutions extracted features of different scales respectively, the correlation between every two columns of feature maps was learned through the mutual information between columns. Then, the parameters of each column were updated asynchronously according to the optimized mutual information and the updated loss function until the algorithm converged. Finally, dynamic Kalman filtering was used to deeply fuse the density maps output by the columns, and all pixels in the fused density map were summed up to obtain the total number of people in the image. Experimental results show that on the UCSD (University of California San Diego) dataset, the Mean Absolute Error (MAE) of the proposed algorithm is 1.1% less than that of ic-CNN+McML (iterative crowd counting Convolution Neural Network Multi-column Multi-task Learning), which has the best MAE on this dataset, and the Mean Square Error (MSE) of the proposed algorithm is 4.3% less than that of the Contextual Pyramid Convolution Neural Network (CP-CNN), which has the best MSE on this dataset; on the ShanghaiTech Part_A dataset, the MAE of the proposed algorithm is reduced by 1.7% compared to that of ic-CNN+McML, which has the best MAE on this dataset, and the MSE of the proposed algorithm is reduced by 3.2% compared to that of ACSCP (Adversarial Cross-Scale Consistency Pursuit), which has the best MSE on this dataset; on the ShanghaiTech Part_B dataset, the proposed algorithm has the MAE and MSE reduced by 18.3% and 35.2% respectively compared to ic-CNN+McML, which has the best MAE and MSE on this dataset; on the UCF_CC_50 (University of Central Florida Crowd Counting) dataset, the proposed algorithm has the MAE and MSE reduced by 1.9% and 9.8% respectively compared to ic-CNN+McML, which has the best MAE and MSE on this dataset. These results show that the algorithm can effectively improve the accuracy and robustness of crowd counting, allows the input image to have any size or resolution, and can adapt to large-scale transformations of the detected targets.
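    As a rough illustration of the multi-column idea (three convolution columns with different receptive fields whose feature maps are fused into a density map that is summed for the count), here is a minimal PyTorch sketch; the layer widths and kernel sizes are illustrative, and the paper's mutual-information-driven asynchronous updates and Kalman fusion are not reproduced.

```python
import torch
import torch.nn as nn

def column(channels, k):
    """One column: convolutions with a fixed kernel size to catch one head scale."""
    pad = k // 2
    return nn.Sequential(
        nn.Conv2d(3, channels, k, padding=pad), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.Conv2d(channels, channels * 2, k, padding=pad), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.Conv2d(channels * 2, channels, k, padding=pad), nn.ReLU(inplace=True),
    )

class MultiColumnCounter(nn.Module):
    """Three columns with different receptive fields, fused into one density map."""
    def __init__(self):
        super().__init__()
        self.cols = nn.ModuleList([column(8, 9), column(10, 7), column(14, 5)])
        self.fuse = nn.Conv2d(8 + 10 + 14, 1, kernel_size=1)   # 1x1 fusion layer

    def forward(self, x):
        density = self.fuse(torch.cat([c(x) for c in self.cols], dim=1))
        count = density.sum(dim=(1, 2, 3))                     # total people per image
        return density, count

model = MultiColumnCounter()
img = torch.randn(1, 3, 240, 320)        # arbitrary input resolution is allowed
density, count = model(img)
print(density.shape, count.item())
```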

    Recommendation model for user attribute preference modeling based on convolutional neural network interaction
    Renzhi PAN, Fulan QIAN, Shu ZHAO, Yanping ZHANG
    2022, 42(2):  404-411.  DOI: 10.11772/j.issn.1001-9081.2021041070

    Latent Factor Models (LFM) have been widely used in the recommendation field due to their excellent performance. In addition to interactive data, auxiliary information is also introduced to solve the problem of data sparsity, thereby improving the performance of recommendations. However, most LFMs still have some problems. First, when modeling users, LFMs ignore how users make decisions on items based on their feature preferences. Second, feature interaction using the inner product assumes that the feature dimensions are independent of each other, without considering the correlation between feature dimensions. In order to solve the above problems, a recommendation model for User Attribute preference Modeling based on Convolutional Neural Network (CNN) interaction (UAMC) was proposed. In this model, the general preferences of users, user attributes and item embeddings were first obtained, and then the user attributes and item embeddings were interacted to explore the preferences of different user attributes for different items. After that, the interacted user preference attributes were sent to the CNN layer to explore the correlation between different dimensions of different preference attributes and thus obtain the users’ attribute preference vectors. Next, the attention mechanism was used to combine the general preferences of the users with the attribute preferences obtained from the CNN layer to obtain the vector representations of the users. Finally, the dot product was used to calculate the users’ ratings of the items. Experiments were conducted on three real datasets: Movielens-100K, Movielens-1M and Book-crossing. The results show that the proposed model decreases the Root Mean Square Error (RMSE) by 1.75%, 2.78% and 0.25% respectively compared with the Neural Factorization Machine for sparse predictive analytics (NFM) model, which verifies the effectiveness of the UAMC model in improving the accuracy of rating prediction recommendation for LFMs.

    Genetic algorithm for approximate concept generation and its recommendation application
    Zhonghui LIU, Ziyou WANG, Fan MIN
    2022, 42(2):  412-418.  DOI: 10.11772/j.issn.1001-9081.2021041155

    Some researchers suggest replacing concept lattices with concept sets in the recommendation field due to the high time complexity of concept lattice construction. However, current studies on concept sets do not consider the role of approximate concepts. Therefore, approximate concepts were introduced into recommendation applications, and a genetic algorithm based Approximate Concept Generation Algorithm (ACGA) together with the corresponding recommendation scheme was proposed. Firstly, the initial concept set was generated through a heuristic method. Secondly, the crossover operator was used to obtain approximate concepts by calculating the intersection of the extents of any two concepts in the initial concept set. Thirdly, the selection operator was used to select the approximate concepts meeting the conditions, according to the similarity of extents and the relevant threshold, to update the concept set, and the mutation operator was adopted to adjust the approximate concepts that did not meet the conditions, according to user similarity, so that they did. Finally, recommendations were made to the target users according to the preferences of neighboring users based on the new concept set. Experimental results show that, on four datasets commonly used by recommender systems, the approximate concepts generated by the ACGA algorithm can improve the recommendation effect; especially on two movie rating datasets, compared with the Probabilistic Matrix Factorization (PMF) algorithm, the ACGA algorithm has the F1-score, recall and precision increased by nearly 78%, 104% and 57% respectively, and compared with the K-Nearest Neighbor (KNN) algorithm, the ACGA algorithm has the precision increased by nearly 12%.

    Multi-modal deep fusion for false information detection
    Jie MENG, Li WANG, Yanjie YANG, Biao LIAN
    2022, 42(2):  419-425.  DOI: 10.11772/j.issn.1001-9081.2021071184

    Concerning the problems of insufficient image feature extraction and the neglect of intra-modal relations and of the interactions between single-modal and multi-modal representations, a text and image information based Multi-Modal Deep Fusion (MMDF) model was proposed. Firstly, the Bi-Gated Recurrent Unit (Bi-GRU) was used to extract the rich semantic features of the text, and the multi-branch Convolutional-Recurrent Neural Network (CNN-RNN) was used to extract the multi-level features of the image. Then inter-modal and intra-modal attention mechanisms were established to capture the high-level interaction between the fields of language and vision, and the multi-modal joint representation was obtained. Finally, the original representation of each modality and the fused multi-modal joint representation were re-fused according to their attention weights to strengthen the role of the original information. Compared with the Multimodal Variational AutoEncoder (MVAE) model, the proposed model has the accuracy improved by 1.9 percentage points and 2.4 percentage points on the China Computer Federation (CCF) competition dataset and the Weibo dataset respectively. Experimental results show that the proposed model can fully fuse multi-modal information and effectively improve the accuracy of false information detection.

    News recommendation model with deep feature fusion injecting attention mechanism
    Yuxi LIU, Yuqi LIU, Zonglin ZHANG, Zhihua WEI, Ran MIAO
    2022, 42(2):  426-432.  DOI: 10.11772/j.issn.1001-9081.2021050907

    When mining news features and user features, existing news recommendation models often lack comprehensiveness, since they often fail to consider the relationship between the browsed news items, the change of time series, and the different importance of news items to users. At the same time, existing models also fall short in mining more fine-grained content features. Therefore, a news recommendation model with deep feature fusion injecting an attention mechanism was constructed, which can comprehensively and non-redundantly characterize users and extract the features of more fine-grained news fragments. Firstly, a deep learning based method was used to deeply extract the feature matrix of news text through a Convolutional Neural Network (CNN) injecting an attention mechanism. By adding time series prediction to the news that users had browsed and injecting a multi-head self-attention mechanism, the interest characteristics of users were extracted. Finally, experiments were carried out on a real Chinese dataset and an English dataset with convergence time, Mean Reciprocal Rank (MRR) and normalized Discounted Cumulative Gain (nDCG) as indicators. Compared with Neural news Recommendation with Multi-head Self-attention (NRMS) and other models, on the Chinese dataset the proposed model has average improvement rates of -0.22% to 4.91% on nDCG and -0.82% to 3.48% on MRR; compared with the only model against which the improvement rate is negative, the proposed model has the convergence time reduced by 7.63%. On the English dataset, the proposed model achieves improvement rates of 0.07% to 1.75% on nDCG and 0.03% to 1.30% on MRR, while always converging quickly. Results of ablation experiments show that adding the attention mechanism and the time series prediction module is effective.

    Named entity recognition method of elementary mathematical text based on BERT
    Yi ZHANG, Shuangsheng WANG, Bin HE, Peiming YE, Keqiang LI
    2022, 42(2):  433-439.  DOI: 10.11772/j.issn.1001-9081.2021020334

    In Named Entity Recognition (NER) for elementary mathematics, aiming at the problems that the word embeddings of traditional NER methods cannot represent the polysemy of a word and that some local features are ignored in the feature extraction process, a Bidirectional Encoder Representation from Transformers (BERT) based NER method for elementary mathematical text named BERT-BiLSTM-IDCNN-CRF (BERT-Bidirectional Long Short-Term Memory-Iterated Dilated Convolutional Neural Network-Conditional Random Field) was proposed. Firstly, BERT was used for pre-training. Then, the word vectors obtained by training were input into the BiLSTM and the IDCNN to extract features, after which the output features of the two neural networks were merged. Finally, the output was obtained through the correction of the CRF. Experimental results show that the F1 score of BERT-BiLSTM-IDCNN-CRF is 93.91% on a dataset of elementary mathematics test questions, which is 4.29 percentage points higher than that of the BiLSTM-CRF benchmark model and 1.23 percentage points higher than that of the BERT-BiLSTM-CRF model. The F1 scores of the proposed method for line, angle, plane, sequence and other entities are all higher than 91%, which verifies the effectiveness of the proposed method on elementary mathematical entity recognition. In addition, after adding an attention mechanism to the proposed model, the recall of the model decreases by 0.67 percentage points, but the accuracy of the model increases by 0.75 percentage points, which means the introduction of the attention mechanism has little effect on the recognition performance of the proposed method.
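    A compact sketch of the parallel BiLSTM and IDCNN feature extractors and their merge, assuming BERT token embeddings are already available; the hidden sizes, tag count and dilation pattern are illustrative, and the CRF decoding layer is omitted.

```python
import torch
import torch.nn as nn

class BiLSTMIDCNNEncoder(nn.Module):
    """Parallel BiLSTM and iterated-dilated-convolution features, merged into tag scores.

    Input: BERT token embeddings of shape (batch, seq_len, 768); CRF decoding is omitted.
    """
    def __init__(self, d_in=768, d_hidden=128, n_tags=9):
        super().__init__()
        self.bilstm = nn.LSTM(d_in, d_hidden, batch_first=True, bidirectional=True)
        self.idcnn = nn.Sequential(                    # dilations 1, 1, 2 as one IDCNN block
            nn.Conv1d(d_in, 2 * d_hidden, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(2 * d_hidden, 2 * d_hidden, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(2 * d_hidden, 2 * d_hidden, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.emissions = nn.Linear(4 * d_hidden, n_tags)   # merged features -> tag scores

    def forward(self, bert_out):
        lstm_feat, _ = self.bilstm(bert_out)                               # (B, L, 2h)
        conv_feat = self.idcnn(bert_out.transpose(1, 2)).transpose(1, 2)   # (B, L, 2h)
        merged = torch.cat([lstm_feat, conv_feat], dim=-1)
        return self.emissions(merged)   # emission scores, to be fed into a CRF layer

x = torch.randn(2, 32, 768)             # stand-in for BERT output
print(BiLSTMIDCNNEncoder()(x).shape)    # torch.Size([2, 32, 9])
```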

    Data science and technology
    Query performance evaluation of distributed resource description framework data management systems
    Jun FENG, Bingfa WANG, Jiamin LU
    2022, 42(2):  440-448.  DOI: 10.11772/j.issn.1001-9081.2021020255

    With the continuous development of knowledge graph technology, knowledge information management driven by knowledge graphs has been widely applied in multiple domains, so the efficiency of distributed SPARQL (Simple Protocol and Resource description framework Query Language) queries over knowledge graphs is particularly important. Firstly, a detailed investigation of the existing Spark-based and Random Access Memory (RAM)-based distributed RDF systems was conducted. Secondly, a query performance evaluation of eight representative systems selected from the above systems was performed, comparing the query performance differences between Spark-based and RAM-based systems under different query types, query diameters and datasets. Thirdly, the query performance of Spark-based and RAM-based systems was assessed by analyzing the experimental results comprehensively. Finally, aiming at the problems of existing distributed SPARQL querying, such as poor query scalability, high query join complexity and long query compilation time, future research directions of distributed SPARQL query optimization oriented to vertical application domains were pointed out.

    Efficient attribute reduction algorithm based on local conditional discernibility
    Meng KANG, Zuqiang MENG
    2022, 42(2):  449-456.  DOI: 10.11772/j.issn.1001-9081.2021071170

    The traditional attribute reduction method based on the discernibility matrix is intuitive and easy to understand. However, its time and space complexities are high, so when dealing with large-scale data or many conditional attributes, it cannot obtain the reduction result quickly. In order to solve this problem, the conditional discernibility was constructed based on the discernibility relation for attribute selection, and an attribute reduction algorithm based on conditional discernibility was proposed. In order to further accelerate the calculation of attribute importance and improve the efficiency of attribute reduction, according to the stability of frequency in the law of large numbers, the conditional discernibility was extended to local conditional discernibility by sampling, and an attribute reduction algorithm based on local conditional discernibility was proposed. It was theoretically proved that the conditional discernibility is stricter than the positive region in attribute selection. The proposed algorithm was compared with the efficient Forward Attribute Reduction algorithm from the Discernibility View (FAR-DV), the attribute reduction algorithm based on k-Nearest Neighbor attribute importance and Correlation Coefficient (K2NCRS) and the Fast Positive Region reduction Algorithm based on the positive region sort ascending decision table (FPRA). Experimental results show that the proposed algorithm is similar to FAR-DV in attribute selection order, reduction rate and classification accuracy, while improving the reduction efficiency by more than ten times compared with the above three algorithms. As the data scale or the number of conditional attributes increases, the efficiency improvement of this algorithm grows. It can be seen that the proposed algorithm has lower time and space complexities and is suitable for attribute reduction on massive data.

    Interpretable ordered clustering method and its application analysis
    Su GAO, Junzhong BAO, Xin WANG, Lidong WANG
    2022, 42(2):  457-462.  DOI: 10.11772/j.issn.1001-9081.2021050871

    For solving grade analysis problems in the field of management decisions, an ordered clustering method with semantic interpretability was proposed. Firstly, based on the dominance degrees of the samples, the fuzzy description and the K-modes clustering method were combined to establish an ordered clustering method for Chinese seafarers’ vocational happiness indexes. Secondly, the corresponding semantic interpretation was assigned to the ordered clustering results under the framework of Axiomatic Fuzzy Set (AFS), thereby forming a decision-making aid method that transforms quantitative information into qualitative description. Finally, taking 9 175 valid questionnaires on Chinese seafarers’ vocational happiness indexes as the research samples, the constructed ordered clustering method was applied to obtain the grading results of the seafarers’ vocational happiness indexes as well as their semantic interpretation, and the factors influencing the seafarers’ vocational happiness indexes were analyzed. The proposed method can produce ordered clustering results that satisfy user-specified constraints, and the results are interpretable, understandable, and valuable for decision aiding.

    Incremental attribute reduction method for set-valued decision information system with variable attribute sets
    Chao LIU, Lei WANG, Wen YANG, Qiangqiang ZHONG, Min LI
    2022, 42(2):  463-468.  DOI: 10.11772/j.issn.1001-9081.2021051024

    In order to solve the problem that static attribute reduction cannot update the reduction efficiently when the number of attributes in a set-valued decision information system changes continuously, an incremental attribute reduction method with knowledge granularity as heuristic information was proposed. Firstly, the related concepts of the set-valued decision information system were introduced; then the definition of knowledge granularity was introduced, and its matrix representation method was extended to this system. Secondly, the update mechanism of incremental reduction was analyzed, and an incremental attribute reduction method was designed on the basis of knowledge granularity. Finally, three different datasets were selected for the experiments. When the number of attributes of the three datasets increased from 20% to 100%, the reduction time of the traditional non-incremental method was 54.84 s, 108.01 s and 565.93 s respectively, while the reduction time of the incremental method was 7.57 s, 4.85 s and 50.39 s respectively. Experimental results demonstrate that the proposed incremental method is much faster than the non-incremental method without affecting the accuracy of attribute reduction.
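    For orientation, the knowledge granularity of an attribute subset can be computed from the sizes of the classes it induces, GK(A) = Σ|X_i|² / |U|². The sketch below uses a plain indiscernibility partition for brevity, whereas the paper extends a matrix representation to set-valued systems and updates it incrementally as attributes arrive.

```python
from collections import defaultdict

def knowledge_granularity(table, attrs):
    """GK(A) = sum(|X_i|^2) / |U|^2 over the classes induced by the attribute subset.

    `table` is a list of dicts; rows with identical values on `attrs` fall into
    the same class (set-valued attribute values can be stored as frozensets).
    """
    classes = defaultdict(int)
    for row in table:
        classes[tuple(row[a] for a in attrs)] += 1
    n = len(table)
    return sum(c * c for c in classes.values()) / (n * n)

U = [
    {"a": frozenset({"x"}),      "b": 1, "d": "yes"},
    {"a": frozenset({"x"}),      "b": 0, "d": "no"},
    {"a": frozenset({"x", "y"}), "b": 0, "d": "no"},
    {"a": frozenset({"y"}),      "b": 0, "d": "no"},
]
print(knowledge_granularity(U, ["a"]))        # coarser partition -> larger granularity
print(knowledge_granularity(U, ["a", "b"]))   # adding an attribute can only refine it
```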

    Heuristic attribute value reduction model based on certainty factor
    Shunkun YU, Hongxu YAN
    2022, 42(2):  469-474.  DOI: 10.11772/j.issn.1001-9081.2021071344

    The existing attribute value reduction models are complex to implement, and the key information extracted by these models is often too concise, which affects the representation ability of the decision system. To resolve the above problems, a heuristic attribute value reduction model based on the certainty factor was proposed. Firstly, several attribute set tools with different properties were constructed, and the relevant theorems and proofs were given; at the same time, a reduced information function was developed to assign values to the reduced attributes. Secondly, with the certainty factor as heuristic information and a bottom-up hierarchical search strategy, a heuristic attribute value reduction model was constructed, and the layout path and operation process of the model were visually displayed in the form of program pseudo-code. Finally, the model was applied to and verified on simulation data from existing research, and the advantages, applicability and scalability of the model were summarized and discussed. The results show that the new model is feasible, effective and easy to implement by programming; it has low requirements on data characteristics and is suitable for general expert systems; moreover, the value information extracted by the new model is diverse yet concise with strong generalization, and does not lose the key information of the decision system.

    Feature selection algorithm for imbalanced data based on pseudo-label consistency
    Yiheng LI, Chenxi DU, Yanyan YANG, Xiangyu LI
    2022, 42(2):  475-484.  DOI: 10.11772/j.issn.1001-9081.2021050957

    Aiming at the problem that most granular computing algorithms ignore the class imbalance of data, a feature selection algorithm integrating a pseudo-label strategy was proposed to deal with class-imbalanced data. Firstly, to investigate feature selection from class-imbalanced data conveniently, the sample consistency and dataset consistency were re-defined, and the corresponding greedy forward search algorithm for feature selection was designed. Then, the pseudo-label strategy was introduced to balance the class distribution of the data. By integrating the learned pseudo-label of a sample into the consistency measure, the pseudo-label consistency was defined to evaluate the features of a class-imbalanced dataset. Finally, an algorithm for Pseudo-Label Consistency based Feature Selection (PLCFS) for class-imbalanced data was developed based on preserving the pseudo-label consistency measure of the class-imbalanced dataset. Experimental results indicate that the proposed PLCFS performs only slightly worse than the max-Relevancy and Min-Redundancy (mRMR) algorithm, while outperforming the Relief algorithm and the algorithm for Consistency-based Feature Selection (CFS).

    Hyperspectral band selection algorithm based on neighborhood entropy
    Dongchang ZHAI, Hongmei CHEN
    2022, 42(2):  485-492.  DOI: 10.11772/j.issn.1001-9081.2021020332

    In order to reduce the redundant information of hyperspectral image data, optimize computational efficiency and improve the effectiveness of subsequent applications of the image data, a hyperspectral band selection algorithm based on Neighborhood Entropy (NE) was proposed. Firstly, in order to efficiently calculate the neighborhood subsets of samples, Locality Sensitive Hashing (LSH) was used as the nearest neighbor search strategy. Then, the NE theory was introduced to measure the Mutual Information (MI) between bands and classes, and minimizing the conditional entropy between feature sets and class variables was used as the criterion for selecting effective bands. Finally, classification experiments were carried out on two datasets with Support Vector Machine (SVM) and Random Forest (RF). Experimental results show that, compared with four MI based feature selection algorithms, in terms of overall accuracy and Kappa coefficient, the proposed algorithm can select an effective band subset within 30 bands more quickly and achieve a local optimum. Some results of the proposed algorithm reach the global optimum of 92.99% in overall accuracy and 0.860 8 in Kappa coefficient, verifying that the proposed algorithm can effectively deal with the hyperspectral band selection problem.
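    As a rough stand-in for the neighborhood-entropy criterion, the sketch below ranks bands greedily by mutual information with the class labels while penalizing redundancy with already-selected bands; it is an mRMR-style proxy rather than the paper's algorithm, and the bin count and band budget are illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score
from sklearn.preprocessing import KBinsDiscretizer

def greedy_band_selection(X, y, n_bands=10):
    """Greedy band subset search: maximise MI(band; class) minus mean redundancy
    with the already-selected bands."""
    relevance = mutual_info_classif(X, y, random_state=0)
    Xd = KBinsDiscretizer(n_bins=16, encode="ordinal", strategy="quantile").fit_transform(X)
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_bands:
        best, best_score = None, -np.inf
        for b in range(X.shape[1]):
            if b in selected:
                continue
            redundancy = np.mean([mutual_info_score(Xd[:, b], Xd[:, s]) for s in selected])
            score = relevance[b] - redundancy
            if score > best_score:
                best, best_score = b, score
        selected.append(best)
    return selected

# toy hyperspectral cube flattened to (pixels, bands)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))
y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)   # classes depend on bands 3 and 17
print(greedy_band_selection(X, y, n_bands=5))
```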

    Cyber security
    Node failure ripple effect analysis model for aircraft ad hoc network
    Lixia XIE, Liping YAN, Hongyu YANG
    2022, 42(2):  493-501.  DOI: 10.11772/j.issn.1001-9081.2021020348

    To effectively analyze the ripple effect of node failure in an Aircraft Ad Hoc Network (AANET) on the whole network and improve the stability of the network after security incidents occur, a node failure ripple effect analysis model for AANET was proposed. Firstly, a directed weighted business network was established according to the main business of AANET, an undirected weighted physical network was established with various aircraft as the nodes based on the real-time AANET, and the interdependent network model was built through the business-physical network mapping relationship. Secondly, a failure propagation model for AANET was proposed, and the node states and the transformation modes between them were analyzed. Finally, the failure traffic redistribution algorithm was improved on the basis of link survivability and applied to the established interdependent network model to obtain the set of failed nodes and business degradation nodes caused by the ripple effect of node failure; this set was then used to analyze the ripple effect of the network at every moment. Experimental results show that the proposed model can effectively analyze the ripple effect of node failure in AANET.

    Cascading failure model in aviation network considering overload condition and failure probability
    Cheng FAN, Buhong WANG, Jiwei TIAN
    2022, 42(2):  502-509.  DOI: 10.11772/j.issn.1001-9081.2021020319

    In order to improve the credibility of evaluating the damage to an aviation network caused by cascading failures after an emergency, and considering the redundancy ability of airport nodes for the load, which means that if an overload occurs within a certain range the node will not fail immediately but has a certain overload handling ability, an aviation network cascading failure model was proposed based on the overload condition and failure probability. Firstly, the overload coefficient, weight coefficient, distribution coefficient and capacity coefficient were introduced into the traditional "load-capacity" Motter-Lai cascading failure model. Then, the redundant capacity characteristics of network nodes were described by the overload condition and failure probability, and different load redistribution strategies were applied to the failed and overloaded nodes to make the model more consistent with the reality of aviation networks. Theoretical analysis and simulation results show that increasing the overload coefficient within a certain range helps to reduce the impact of cascading failures, but the improvement is no longer obvious beyond a certain degree; with the optimal intervals for the model parameters, the aviation network can maintain better robustness at a smaller construction cost, and the optimized allocation of aviation network resources can improve the network’s resistance to cascading failures.
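    A highly simplified sketch of the load-capacity idea with an overload band: initial load is taken as betweenness centrality, capacity is proportional to the initial load, and a node only fails once its recomputed load exceeds the overload band. The paper's weight, distribution and capacity coefficients, failure probability and explicit redistribution strategies are not modeled here.

```python
import networkx as nx

def cascade(G, alpha=0.3, beta=0.5, attacked=(0,)):
    """Motter-Lai style cascade with an overload band.

    Initial load = betweenness; capacity = (1 + alpha) * load; a node whose recomputed
    load stays below (1 + beta) * capacity is merely 'overloaded' and survives; anything
    above that fails, and load is redistributed implicitly by recomputing betweenness
    on the surviving subgraph.
    """
    load = nx.betweenness_centrality(G, normalized=False)
    capacity = {v: (1 + alpha) * load[v] for v in G}
    alive = set(G) - set(attacked)
    changed = True
    while changed:
        changed = False
        H = G.subgraph(alive)
        new_load = nx.betweenness_centrality(H, normalized=False)
        for v in list(alive):
            if new_load[v] > (1 + beta) * capacity[v]:   # beyond the overload band
                alive.remove(v)
                changed = True
    return alive

G = nx.barabasi_albert_graph(60, 2, seed=1)
survivors = cascade(G, alpha=0.2, beta=0.4, attacked=(0,))
print(f"{len(survivors)}/{G.number_of_nodes()} nodes survive the cascade")
```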

    Adversarial attack algorithm for deep learning interpretability
    Quan CHEN, Li LI, Yongle CHEN, Yuexing DUAN
    2022, 42(2):  510-518.  DOI: 10.11772/j.issn.1001-9081.2021020360

    Aiming at the problem of model information leakage caused by interpretability in Deep Neural Networks (DNN), the feasibility of using the Gradient-weighted Class Activation Mapping (Grad-CAM) interpretation method to generate adversarial samples in a white-box environment was proved, and an untargeted black-box attack algorithm named the dynamic genetic algorithm was proposed. In the algorithm, the fitness function was first improved according to the changing relationship between the interpretation area and the positions of the disturbed pixels. Then, through multiple rounds of the genetic algorithm, the disturbance value was continuously reduced while the number of disturbed pixels was increased, and the set of result coordinates of each round was maintained and used in the next round of iteration, until the perturbed pixel set caused the predicted label to be flipped without exceeding the perturbation boundary. In the experiments, the average attack success rate of the proposed algorithm under the AlexNet, VGG-19, ResNet-50 and SqueezeNet models was 92.88%, an increase of 16.53 percentage points compared with the One pixel algorithm, although the running time increased by 8% compared with that algorithm. In addition, in a shorter running time, the proposed algorithm had a success rate 3.18 percentage points higher than the Adaptive Fast Gradient Sign Method (Ada-FGSM) algorithm and 8.63 percentage points higher than the Projection & Probability-driven Black-box Attack (PPBA) algorithm, and not much different from the Boundary-attack algorithm. The results show that the dynamic genetic algorithm based on the interpretation method can effectively execute adversarial attacks.

    Cross-chain mechanism based on Spark blockchain
    Jiagui XIE, Zhiping LI, Jian JIN
    2022, 42(2):  519-527.  DOI: 10.11772/j.issn.1001-9081.2021020353

    Considering that different blockchains are isolated and that data interaction and sharing are difficult in the current rapid development of blockchain technology, a cross-chain mechanism based on the Spark blockchain was proposed. Firstly, common cross-chain technologies and current mainstream cross-chain projects were analyzed, the implementation principles of different technologies and projects were studied, and their differences, advantages and disadvantages were summarized. Then, using the blockchain architecture named the main-sub blockchain mode, the key core components, such as the smart contract component, transaction verification component and transaction timeout component, were designed, and the four stages of the cross-chain process, including transaction initiation, transaction routing, transaction verification and transaction confirmation, were elaborated in detail. Finally, feasibility experiments were designed for performance testing and security testing, and the security was analyzed. Experimental results show that the Spark blockchain has significant advantages over other blockchains in terms of transaction delay, throughput and spike testing. Besides, when the proportion of malicious nodes is low, the success rate of cross-chain transactions is 100%, and different sub-chains can conduct cross-chain transactions safely and stably. This mechanism solves the problem of data interaction and sharing between blockchains, and provides a technical reference for the design of Spark blockchain application scenarios in the next step.

    Software defined network flow rule conflict detection system based on OpenFlow
    Liqun ZHANG, Haitao LIN, Wenming HUAN, Wenting BI
    2022, 42(2):  528-533.  DOI: 10.11772/j.issn.1001-9081.2021020362

    In Software Defined Network (SDN), the independent development of various network applications and multi-user network management may cause conflicts in the flow rules issued to switching equipment. Due to the separation of the control plane and the forwarding plane, the switching equipment lacks policy analysis capability and cannot independently detect internal flow rule conflicts. Aiming at this problem, a flow rule conflict detection system and a detection algorithm were proposed. Firstly, by monitoring and capturing OpenFlow messages between the control plane and the forwarding plane, the information about the flow rules to be issued was obtained. Then, the conflict detection algorithm was used to determine the conflict type of the flow rules. The corresponding rule set was selected by the algorithm according to the matching protocol of the flow rules, thereby reducing the detection scale. In the detection, the features of Non-Conflict (NC) rules were detected first, so that the detection efficiency of NC rules was higher than those of other types of conflict rules. Finally, the flow rule conflicts were resolved according to the conflict types. Experimental results show that the detection accuracy of the proposed algorithm can reach 100%; compared with the dynamic conflict detection model, the proposed algorithm shortens the detection time by about 47% on a rule set of the same scale, and the detection time is further shortened as the proportion of NC rules increases.

    Advanced computing
    Several novel intelligent optimization algorithms for solving constrained engineering problems and their prospects
    Mengjian ZHANG, Deguang WANG, Min WANG, Jing YANG
    2022, 42(2):  534-541.  DOI: 10.11772/j.issn.1001-9081.2021020265

    To study the performance and application prospects of novel intelligent optimization algorithms, six bionic intelligent optimization algorithms proposed in the past few years were analyzed, including the Harris Hawks Optimization (HHO) algorithm, Equilibrium Optimizer (EO), Marine Predators Algorithm (MPA), Political Optimizer (PO), Slime Mould Algorithm (SMA), and Heap-Based Optimizer (HBO). Their performance and applications in different constrained engineering optimization problems were compared and analyzed. Firstly, the basic principles of the six optimization algorithms were introduced. Secondly, optimization tests were performed on ten standard benchmark functions for the six algorithms. Thirdly, the six algorithms were applied to solve three constrained engineering optimization problems. Experimental results show that the convergence accuracy of PO is the best on unimodal and multimodal test functions and reaches the theoretical optimum of zero many times, while EO and MPA are better for solving constrained engineering problems, with fast optimization speed, high stability and standard deviations of small orders of magnitude. Finally, the improvement methods and development potential of the six optimization algorithms were analyzed.

    Adaptive reference vector based constrained multi-objective evolutionary algorithm
    Feifan SHI, Xuhua SHI
    2022, 42(2):  542-549.  DOI: 10.11772/j.issn.1001-9081.2021020337

    Current research on Multi-Objective Evolutionary Algorithms (MOEA) for Constrained Multi-objective Optimization Problems (CMOPs) mainly addresses a single type of constraint; when dealing with different kinds of complex constraints, the algorithms are difficult to converge or yield a poorly distributed population. To solve this problem, based on the framework of MOEA based on Decomposition (MOEA/D), an Adaptive Reference Vector based Constrained Multi-Objective Evolutionary Algorithm (ARVCMOEA) was proposed. Firstly, the reference vectors were divided into two parts: the main reference vectors and the auxiliary reference vectors. Then, in the initial phase of the algorithm, the unconstrained auxiliary reference vectors were used to guide the population to quickly cross infeasible regions. Finally, the distribution and search ability of the algorithm were improved by adaptively adjusting the positions of the auxiliary reference vectors and relaxing the distribution requirements. Experiments were carried out on 30 test functions with different kinds of complex constraints. The results show that the proposed algorithm converges well under different kinds of constraints, outperforms Non-dominated Sorting Genetic Algorithm II (NSGA-II), Constraint-MOEA/D (C-MOEA/D) and MOEA/D with Detect-And-Escape strategy (MOEA/D-DAE) in overall performance, and obtains better results on some test functions than the current excellent Coevolutionary Constrained Multi-objective Optimization framework (CCMO), verifying that the proposed algorithm has excellent performance in the face of different kinds of CMOPs.

    Containerized network embedding algorithm based on time-varying resources
    Weijian DENG, Xi CHEN
    2022, 42(2):  550-556.  DOI: 10.11772/j.issn.1001-9081.2021020297

    In order to construct a large-scale containerized network and thus build a high-fidelity, easy-to-program virtual network environment, a virtual network embedding algorithm based on time-varying resources was proposed to divide the OVS (Open vSwitch) and Docker based containerized network into segments and map them to several physical hosts constrained in computing, network and storage resources. In the algorithm, firstly, the virtual network elements with close link relationships were aggregated hierarchically based on the topology of the virtual network to reduce the problem scale. Secondly, the importance scores of the aggregated virtual network nodes were obtained, and the virtual network was segmented by a breadth-first search algorithm with a greedy strategy and mapped onto the physical hosts with suitable resources. Finally, the resource evaluation model in the algorithm was dynamically adjusted at runtime through periodic feedback on the resource consumption of the virtual network elements, so that the physical resources were effectively utilized. Experimental results show that the proposed algorithm can accommodate a virtual network with more than 1 300 network elements on multiple low-end X86 hosts, and can keep network jitter at 0.1 ms or less.

    Computer software technology
    Developer recommendation method based on E-CARGO model
    Wei LI, Qunqun WU, Yiwen ZHANG
    2022, 42(2):  557-564.  DOI: 10.11772/j.issn.1001-9081.2021020273

    Because traditional developer recommendation methods focus on analyzing developers’ professional abilities and their interaction information with tasks, without considering the collaboration between developers, a developer recommendation method based on the Environment-Class, Agent, Role, Group, and Object (E-CARGO) model was proposed. Firstly, the collaborative development process was described as a role-based collaboration problem and modeled with the E-CARGO model by combining the characteristics of collaborative development. Then, a fuzzy judgment matrix was established by the Fuzzy Analytic Hierarchy Process (FAHP) method to obtain the weights of the developer ability indexes, and their weighted sum was computed to obtain the set of historical comprehensive ability evaluations of the developers. Finally, in view of the uncertainty and dynamic characteristics of the developers’ comprehensive ability evaluation, cloud model theory was used to analyze the set of historical comprehensive ability evaluations to obtain each developer’s competence for each task, and the CPLEX optimization package was used to solve the developer recommendation problem. Experimental results show that the proposed method can obtain the best developer recommendation results within an acceptable time range, which verifies its effectiveness.
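    In the simplest case (one task per developer and a known competence matrix), the final assignment step can be illustrated with the Hungarian algorithm instead of CPLEX; the competence values below are made up, and the paper's E-CARGO formulation is more general than this sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical qualification matrix Q[i][j]: competence of developer i for task (role) j,
# e.g. produced by FAHP-weighted ability scores plus a cloud-model analysis.
Q = np.array([
    [0.82, 0.35, 0.60],
    [0.40, 0.91, 0.55],
    [0.75, 0.50, 0.30],
    [0.20, 0.65, 0.88],
])

# Group role assignment: maximise total qualification under one-task-per-developer,
# solved here with the Hungarian algorithm (the solver minimises cost, so negate Q).
devs, tasks = linear_sum_assignment(-Q)
for d, t in zip(devs, tasks):
    print(f"developer {d} -> task {t} (competence {Q[d, t]:.2f})")
print("total group qualification:", Q[devs, tasks].sum())
```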

    Synthesis of loop bound functions for loop programs
    Wang TAN, Yi LI
    2022, 42(2):  565-573.  DOI: 10.11772/j.issn.1001-9081.2021020221

    As the mainstream approach to loop program termination analysis, most existing ranking function methods are limited to the synthesis of linear or polynomial ranking functions. Since such methods cannot prove termination when a loop program has no corresponding linear or polynomial ranking function, a new method was proposed to synthesize a loop bound function for a given loop program; the existence of a loop bound function implies the termination of the program. Firstly, the problem of solving loop bound functions was transformed into a linear binary classification problem. Once the function template was selected, the mapping relationship was established according to the template to construct the training set. After that, the obtained training set was used to find a classification hyperplane through a Support Vector Machine (SVM), from which the template coefficients were recovered, thereby obtaining a candidate loop bound function. Finally, the existing symbolic verification tool Redlog was used to verify the candidate loop bound function. Experimental results show that, compared with existing ranking function methods, the proposed method not only applies to more loop programs, but also yields loop bound functions that are simpler in form than ranking functions. Specifically, for some loops without linear ranking functions, the corresponding linear loop bound functions can be solved by the proposed method; at the same time, for some loops with only multiphase linear ranking functions, global linear loop bound functions can be solved by the proposed method.
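    One plausible way to realize the classification view described above, sketched for the concrete loop while x > 0: x -= 2. Points (initial state, step index) are labeled by whether the loop is still running, a linear SVM separates them, and the hyperplane coefficients yield a candidate linear bound that would still need symbolic verification (the paper uses Redlog). The training-set construction here is an illustrative assumption, not necessarily the paper's.

```python
import numpy as np
from sklearn.svm import LinearSVC

def iterations(x):
    """Concrete loop under analysis: while x > 0: x -= 2."""
    n = 0
    while x > 0:
        x -= 2
        n += 1
    return n

# Build a binary classification set: (initial state x, candidate step k) is positive
# while the loop is still running at step k, negative once it has terminated.
rng = np.random.default_rng(0)
samples, labels = [], []
for x0 in rng.integers(1, 60, size=80):
    n = iterations(int(x0))
    for k in range(0, n + 5):
        samples.append([x0, k])
        labels.append(1 if k < n else -1)

clf = LinearSVC(C=10.0, max_iter=20000).fit(samples, labels)
(c_x, c_k), b = clf.coef_[0], clf.intercept_[0]

# The hyperplane c_x*x + c_k*k + b = 0 separates "still iterating" from "finished";
# solving for k gives a candidate linear loop bound f(x) = -(c_x*x + b)/c_k.
print(f"candidate bound: f(x) = {-c_x / c_k:.3f} * x + {-b / c_k:.3f}")
```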

    Multimedia computing and computer simulation
    Image instance segmentation model based on fractional-order network and reinforcement learning
    Xueming LI, Guohao WU, Shangbo ZHOU, Xiaoran LIN, Hongbin XIE
    2022, 42(2):  574-583.  DOI: 10.11772/j.issn.1001-9081.2021020324

    Aiming at the low segmentation precision caused by the limited image feature extraction ability of existing fractional-order nonlinear models, an instance segmentation model based on fractional-order network and Reinforcement Learning (RL) was proposed to generate high-quality contour curves of target instances in the image. The model consists of two layers of modules: 1) the first layer was a two-dimensional fractional-order nonlinear network in which the chaotic synchronization method was utilized to obtain the basic characteristics of the pixels in the image, and a preliminary segmentation result was acquired through coupling and connection according to the similarity among pixels; 2) in the second layer, instance segmentation was formulated as a Markov Decision Process (MDP) based on the idea of RL, and the action-state pairs, reward functions and strategies were designed to extract the region structure and category information of the image. Finally, the pixel features and preliminary segmentation result obtained from the first layer were combined with the region structure and category information obtained from the second layer for instance segmentation. Experimental results on the Pascal VOC2007 and Pascal VOC2012 datasets show that compared with the existing fractional-order nonlinear models, the proposed model has the Average Precision (AP) improved by at least 15 percentage points, verifying that the sequential-decision-based instance segmentation model not only can obtain the class information of the target objects in the image, but also further enhances the ability to extract contour details and fine-grained information of the image.
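
    The following toy example is not the authors' model; it only illustrates, with assumed shapes and masks, what casting segmentation refinement as an MDP can look like in the spirit of the second layer: the state is a coarse binary mask, an action toggles the label of one candidate region, and the reward is the IoU gain against a reference mask.

```python
# Toy MDP framing of segmentation refinement; masks, regions and the greedy
# policy below are illustrative assumptions, not the paper's design.
import numpy as np

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True          # reference mask
state = np.zeros((8, 8), bool); state[2:6, 2:4] = True    # coarse first-layer result
regions = [(slice(2, 6), slice(4, 6)), (slice(0, 2), slice(0, 2))]  # candidate regions

for t, region in enumerate(regions):                      # greedy policy for brevity
    before = iou(state, gt)
    candidate = state.copy()
    candidate[region] = ~candidate[region]                # action: toggle the region
    reward = iou(candidate, gt) - before                  # reward: IoU improvement
    if reward > 0:                                        # keep actions with positive reward
        state = candidate
    print(f"step {t}: reward = {reward:+.3f}, IoU = {iou(state, gt):.3f}")
```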

    CT three-dimensional reconstruction algorithm based on super-resolution network
    Junbo LI, Pinle QIN, Jianchao ZENG, Meng LI
    2022, 42(2):  584-591.  DOI: 10.11772/j.issn.1001-9081.2021020219

    Computed Tomography (CT) three-dimensional reconstruction improves the quality of the three-dimensional model by upsampling volume data, reducing jagged edges, streak artifacts and discontinuous surfaces in the model, so as to improve the accuracy of disease diagnosis in clinical medicine. A CT three-dimensional reconstruction algorithm based on a super-resolution network was proposed to solve the problem that the models produced by previous CT three-dimensional reconstruction methods are not clear enough. The network model is a Double Loss Refinement Network (DLRNET), and three-dimensional reconstruction of abdominal CT was performed by uniaxial super-resolution. An optimization learning module was introduced at the end of the network, and besides the loss between the baseline image and the super-resolution image, the loss between the roughly reconstructed image inside the network and the baseline image was also calculated. In this way, driven by optimization learning and the double loss, the network produced results closer to the baseline image. Then, spatial pyramid pooling and a channel attention mechanism were introduced into the feature extraction module to learn the features of vascular tissues with different thicknesses and scales. Finally, the upsampling module dynamically generated the convolution kernel set, so that a single network model could complete upsampling tasks with different scaling factors. Experimental results show that compared with Residual Channel Attention Network (RCAN), the proposed network model improves the Peak Signal-to-Noise Ratio (PSNR) by 0.789 dB on average under scaling factors of 2, 3 and 4, showing that the network model effectively improves the quality of the CT three-dimensional model, recovers the continuous detail features of vascular tissues to some extent, and is practical.
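
    Below is a minimal PyTorch sketch of the double-loss idea only (not DLRNET itself): both the rough reconstruction and the final super-resolved output are supervised against the baseline slice. The layer sizes, the L1 criterion and the loss weight are assumptions for illustration.

```python
# Minimal double-loss sketch: supervise the coarse and the refined output.
import torch
import torch.nn as nn

class TwoStageSR(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 1, 3, padding=1))
        self.refine = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, lr_up):
        coarse = self.coarse(lr_up)      # roughly reconstructed image
        sr = self.refine(coarse)         # refined super-resolution image
        return coarse, sr

model, l1 = TwoStageSR(), nn.L1Loss()
lr_up = torch.rand(2, 1, 64, 64)         # pre-upsampled low-resolution slices (toy)
baseline = torch.rand(2, 1, 64, 64)      # baseline (high-resolution) slices (toy)
coarse, sr = model(lr_up)
loss = l1(sr, baseline) + 0.5 * l1(coarse, baseline)   # double loss, weight assumed
loss.backward()
print(float(loss))
```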

    Variable convolutional autoencoder method based on teaching-learning-based optimization for medical image classification
    Wei LI, Yaochi FAN, Qiaoyong JIANG, Lei WANG, Qingzheng XU
    2022, 42(2):  592-598.  DOI: 10.11772/j.issn.1001-9081.2021061109

    In order to solve the problems of traditional manual parameter optimization for Convolutional Neural Networks (CNN), such as high time cost, inaccuracy and the influence of parameter settings on algorithm performance, a variable Convolutional AutoEncoder (CAE) method based on Teaching-Learning-Based Optimization (TLBO) was proposed. In the algorithm, a variable-length individual encoding strategy was designed to quickly construct CAE structures and stack the CAEs into a CNN. In addition, the structure information of excellent individuals was fully utilized to guide the search toward more promising regions, thereby improving algorithm performance. Experimental results show that the classification accuracy of the proposed algorithm reaches 89.84% on medical image classification problems, which is higher than those of traditional CNN and similar neural networks. The proposed algorithm solves medical image classification problems by optimizing the CAE structure and stacking the CAEs into a CNN, and effectively improves classification accuracy.
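
    As an illustration of the optimizer itself, the sketch below implements a plain TLBO loop (teacher phase plus learner phase) over fixed-length real-valued individuals and a generic fitness function; in the paper the individuals are variable-length CAE encodings and the fitness would be the validation performance of the decoded network, so everything numeric here is an assumption.

```python
# Plain TLBO sketch on a stand-in fitness; not the paper's variable-length CAE search.
import numpy as np

rng = np.random.default_rng(0)
def fitness(x):                       # stand-in for evaluating a decoded CAE/CNN
    return -np.sum((x - 0.3) ** 2)    # higher is better

pop = rng.uniform(0, 1, size=(10, 4)) # 10 learners, 4 encoded hyperparameters (assumed)
for it in range(50):
    scores = np.array([fitness(x) for x in pop])
    teacher = pop[scores.argmax()]
    mean = pop.mean(axis=0)
    # Teacher phase: move learners toward the teacher, away from the population mean.
    tf = rng.integers(1, 3)                     # teaching factor in {1, 2}
    new = pop + rng.random(pop.shape) * (teacher - tf * mean)
    improve = np.array([fitness(x) for x in new]) > scores
    pop[improve] = new[improve]
    # Learner phase: each learner learns from a random peer.
    scores = np.array([fitness(x) for x in pop])
    for i in range(len(pop)):
        j = rng.integers(len(pop))
        step = (pop[i] - pop[j]) if scores[i] > scores[j] else (pop[j] - pop[i])
        cand = pop[i] + rng.random(pop.shape[1]) * step
        if fitness(cand) > scores[i]:
            pop[i], scores[i] = cand, fitness(cand)

best = pop[np.array([fitness(x) for x in pop]).argmax()]
print("best encoded structure (toy):", np.round(best, 3))
```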

    Frontier and comprehensive applications
    Optimization method of automatic train operation speed curve based on genetic algorithm and particle swarm optimization
    Jing ZHANG, Aihong ZHU
    2022, 42(2):  599-605.  DOI: 10.11772/j.issn.1001-9081.2021020292

    Aiming at the problems of precise parking, punctuality, comfort and energy consumption in the process of Automatic Train Operation (ATO), an optimization method of the ATO speed curve based on the GAPSO (Genetic Algorithm and Particle Swarm Optimization) algorithm was proposed. Firstly, a multi-objective optimization model of train ATO operation was established, the train passing through the neutral zone with power cut off and coasting was included in the control strategy, and the operation control strategy was analyzed. Secondly, the Particle Swarm Optimization (PSO) algorithm was improved by adopting a nonlinear dynamic inertia weight and improved acceleration coefficients and by integrating genetic operators into it, forming the GAPSO algorithm, whose superiority in global and local search ability as well as convergence speed was verified. Finally, the GAPSO algorithm was used to optimize the operating mode changing points, and a set of mode changing point speeds satisfying the multi-objective optimization was obtained, thereby obtaining the optimal target speed curve. Simulation results show that under the premise that the overall running time meets the punctuality requirement, the optimization method reduces energy consumption by 13.29%, increases comfort by 26.62%, and reduces the parking error by 21.62%. Therefore, the optimized train target speed curve can meet the multi-objective requirements, and the method provides a feasible solution for train ATO multi-objective optimization.
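
    The sketch below shows only the flavour of such a hybrid update: a PSO loop with a nonlinearly decreasing inertia weight, time-varying acceleration coefficients and a GA-style mutation operator, applied to a placeholder fitness function. The particular weight schedule, coefficients and fitness are assumptions, not the paper's ATO model.

```python
# Hybrid PSO/GA sketch with an assumed nonlinear inertia-weight schedule.
import numpy as np

rng = np.random.default_rng(1)
def fitness(x):                       # placeholder for the weighted ATO objectives
    return -np.sum((x - 2.0) ** 2)

n, dim, iters = 20, 3, 100
x = rng.uniform(0, 5, (n, dim)); v = np.zeros((n, dim))
pbest = x.copy(); pbest_f = np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmax()].copy()

for t in range(iters):
    w = 0.9 - 0.5 * (t / iters) ** 2          # nonlinear dynamic inertia weight (assumed form)
    c1 = 2.5 - 1.5 * t / iters                # time-varying acceleration coefficients (assumed)
    c2 = 0.5 + 1.5 * t / iters
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    # Genetic operator: mutate a few particles to keep diversity.
    mask = rng.random(n) < 0.1
    x[mask] += rng.normal(0, 0.2, (mask.sum(), dim))
    f = np.array([fitness(p) for p in x])
    better = f > pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print("best mode-changing-point speeds (toy):", np.round(gbest, 3))
```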

    Optimization of airport arrival procedures based on hybrid simulated annealing algorithm
    Sheng CHEN, Jun ZHOU, Xiaobing HU, Ji MA
    2022, 42(2):  606-615.  DOI: 10.11772/j.issn.1001-9081.2021040586

    Concerning the problem that the manual design of airport arrival procedures is time-consuming and makes it difficult to optimize path length quantitatively, a three-dimensional automatic optimization design method for multiple arrival procedures was proposed. Firstly, based on the specifications of RNAV (aRea NAVigation), the geometric configuration and the merging structure of the arrival procedures were modeled. Then, considering the airport layout and aircraft operation constraints such as obstacle avoidance and route separation, a complete mathematical model was established with the goal of minimizing the total length of the arrival procedures. Finally, a hybrid algorithm based on simulated annealing and an improved A* algorithm was developed to automatically optimize the merging structure of the arrival procedures. Simulation results show that, in the experiment based on Arlanda Airport in Sweden, compared with an existing integer programming method, the hybrid simulated annealing algorithm shortens the total path length by 3% and reduces the computing time by 87%. In the experiment based on Shanghai Pudong Airport, compared with the actual arrival procedures, the length of the routes designed by the proposed algorithm is reduced by 6.6%. These results indicate that the proposed algorithm can effectively design multiple three-dimensional arrival procedures and provide preliminary decision support for procedure designers.
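
    The skeleton below illustrates only the simulated-annealing part of the hybrid algorithm, with a toy cost function standing in for the total route length (which in the paper is evaluated with the improved A* search under obstacle and separation constraints). The neighbourhood move, cooling schedule and coordinates are assumptions.

```python
# Generic simulated-annealing skeleton; the cost is a toy stand-in for route length.
import math
import random

random.seed(0)

def route_length(merge_points):       # toy cost: total distance of a merge chain
    pts = [(0.0, 0.0)] + merge_points + [(10.0, 10.0)]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def neighbor(merge_points):           # neighbourhood move: perturb one merge point
    cand = [list(p) for p in merge_points]
    i = random.randrange(len(cand))
    cand[i][0] += random.uniform(-1, 1)
    cand[i][1] += random.uniform(-1, 1)
    return [tuple(p) for p in cand]

state = [(3.0, 1.0), (6.0, 8.0)]      # assumed initial merge-point layout
cost, temp = route_length(state), 10.0
while temp > 1e-3:
    cand = neighbor(state)
    c = route_length(cand)
    if c < cost or random.random() < math.exp((cost - c) / temp):
        state, cost = cand, c
    temp *= 0.99                      # geometric cooling schedule

print("merge points:", [tuple(round(v, 2) for v in p) for p in state], "length:", round(cost, 2))
```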

    Air combat maneuver decision method based on three-way decision
    Kaiqiang YUE, Bo LI, Panlong FAN
    2022, 42(2):  616-621.  DOI: 10.11772/j.issn.1001-9081.2021050855

    In order to improve the maneuver decision ability of fighters under the condition of insufficient information, an air combat maneuver decision method based on three-way decision was proposed. Firstly, a three-way decision intention recognition model was used to recognize the target intention. Secondly, after introducing the combat intention factor of the target into threat assessment, a dynamic adjustment method of the maneuver decision weight factors based on three-way decision was proposed in combination with the target threat degree. Finally, the evaluation function of the maneuver decision factors was constructed by using fuzzy logic, and the optimal maneuver mode of the aircraft at each stage was obtained by using the dynamic weight adjustment strategy and the maneuver decision evaluation function, thus forming an effective and feasible flight route. Simulation results show that the proposed air combat maneuver decision method based on three-way decision is feasible and effective.
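
    As a minimal illustration of the three-way idea used above, the sketch below splits an estimated intention probability into acceptance, rejection and deferment regions with a pair of thresholds; the threshold values, probabilities and textual actions are purely illustrative assumptions, not the paper's parameters.

```python
# Three-way decision sketch with assumed (alpha, beta) thresholds.
ALPHA, BETA = 0.75, 0.35   # acceptance / rejection thresholds (alpha > beta)

def three_way_decide(p_attack):
    if p_attack >= ALPHA:
        return "accept: treat as attack intention, raise threat weight"
    if p_attack <= BETA:
        return "reject: treat as non-attack intention, lower threat weight"
    return "defer: gather more information, keep current maneuver weights"

for p in (0.9, 0.5, 0.2):
    print(f"P(attack) = {p:.1f} -> {three_way_decide(p)}")
```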

    Fall detection algorithm based on joint point features
    Jianrong CAO, Yaqin ZHU, Yuting ZHANG, Junjie LYU, Hongjuan YANG
    2022, 42(2):  622-630.  DOI: 10.11772/j.issn.1001-9081.2021040618

    In order to solve the problems of large network computation and the difficulty of distinguishing falling-like behaviors in fall detection algorithms, a fall detection algorithm based on joint point features was proposed. Firstly, based on the advanced CenterNet algorithm, a Depthwise Separable Convolution-CenterNet (DSC-CenterNet) joint point detection algorithm was proposed to accurately detect human joint points and obtain joint point coordinates while reducing the computation of the backbone network. Then, based on the joint point coordinates and prior knowledge of the human body, spatial and temporal features expressing the fall behavior were extracted as joint point features. Finally, the joint point feature vector was input into a fully connected layer and processed by a Sigmoid classifier to output one of two categories, fall or non-fall, thereby achieving fall detection of human targets. Experimental results on the UR Fall Detection dataset show that the proposed algorithm achieves an average fall detection accuracy of 98.00% under different state changes, an accuracy of 98.22% in distinguishing falling-like behaviors, and a fall detection speed of 18.6 frame/s. Compared with the algorithm combining the original CenterNet with joint point features, the algorithm combining DSC-CenterNet with joint point features has the average detection accuracy increased by 22.37%. The improved speed can effectively meet the real-time requirement of human fall detection tasks under surveillance video. This algorithm can effectively increase fall detection speed and accurately detect the fall state of the human body, which further verifies the feasibility and efficiency of joint-point-feature-based fall detection in video fall behavior analysis.
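
    The sketch below illustrates, with made-up keypoints and hand-set weights, how simple spatial features (bounding-box aspect ratio, torso angle) and a temporal feature (vertical hip velocity) can be derived from joint coordinates and passed through a sigmoid; the feature definitions and weights are assumptions, not the features or classifier learned in the paper.

```python
# Toy joint-point features and a hand-weighted sigmoid score (illustrative only).
import numpy as np

def frame_features(joints):
    """joints: dict of (x, y) pixel coordinates for a few named keypoints."""
    xs = np.array([p[0] for p in joints.values()])
    ys = np.array([p[1] for p in joints.values()])
    aspect = (xs.max() - xs.min()) / max(ys.max() - ys.min(), 1e-6)   # width / height
    torso = np.subtract(joints["hip"], joints["neck"])                # neck -> hip vector
    torso_angle = np.degrees(np.arctan2(abs(torso[0]), torso[1]))     # 0 = upright, 90 = horizontal
    return aspect, torso_angle

def fall_score(prev, curr, dt=1 / 18.6):
    aspect, angle = frame_features(curr)
    hip_drop = (curr["hip"][1] - prev["hip"][1]) / dt                 # vertical hip velocity
    z = 2.0 * aspect + 0.05 * angle + 0.01 * hip_drop - 4.0           # assumed weights
    return 1.0 / (1.0 + np.exp(-z))                                   # sigmoid output

prev = {"neck": (100, 80), "hip": (100, 140), "ankle": (100, 200)}    # standing (toy)
curr = {"neck": (150, 180), "hip": (110, 190), "ankle": (60, 200)}    # lying (toy)
print(f"fall probability (toy): {fall_score(prev, curr):.2f}")
```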

    Insulator detection algorithm based on improved Faster-RCNN
    Yaoming MA, Yu ZHANG
    2022, 42(2):  631-637.  DOI: 10.11772/j.issn.1001-9081.2021020342

    In order to increase the inspection efficiency of high-voltage transmission lines, an insulator detection algorithm based on improved Faster Region-based Convolutional Neural Network (Faster-RCNN) was proposed. Firstly, the Selective Kernel Network (SKNet) with attention mechanism was added to the feature extraction network to make the network focus on the channels related to insulator features. Secondly, the Filter Response Normalization (FRN) layer was used to replace the original Batch Normalization (BN) layer to avoid the model falling into the gradient saturation region. Finally, the Distance Intersection over Union (DIoU) was used to replace the original Intersection over Union (IoU) to express the positions of the candidate region boxes more accurately. The open source aerial insulator dataset was enhanced by operations such as translation, rotation, Cutout and CutMix, expanding it to 3 000 images, of which 2 500 randomly selected images formed the training set and the remaining 500 the test set. Compared with the original Faster-RCNN algorithm, the average precision of the proposed algorithm is improved by 3.46 percentage points, and the average recall is improved by 2.76 percentage points. Experimental results show that the proposed algorithm has high detection accuracy and stability, and can meet the requirements of power line insulator detection scenarios.
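
    For reference, a minimal implementation of the DIoU measure mentioned above, DIoU = IoU - d^2/c^2, where d is the distance between the box centers and c is the diagonal of the smallest enclosing box; the two example boxes are arbitrary.

```python
# Minimal DIoU for boxes given as (x1, y1, x2, y2).
def diou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)

    # squared distance between the two box centers
    d2 = ((a[0] + a[2]) / 2 - (b[0] + b[2]) / 2) ** 2 + \
         ((a[1] + a[3]) / 2 - (b[1] + b[3]) / 2) ** 2
    # squared diagonal of the smallest box enclosing both
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    return iou - d2 / c2

print(round(diou((0, 0, 4, 4), (1, 1, 5, 5)), 3))
```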

    Coin surface defect detection algorithm based on deformable convolution and adaptive spatial feature fusion
    Pinxue WANG, Shaobing ZHANG, Miao CHENG, Lian HE, Xiaoshan QIN
    2022, 42(2):  638-645.  DOI: 10.11772/j.issn.1001-9081.2021020227

    Concerning the problem that surface defects of coins are small, variable in shape, easily confused with the background and difficult to detect, an improved coin surface defect detection algorithm named DCA-YOLO (Deformable Convolution and Adaptive space feature fusion-YOLO) was proposed. First of all, because the defects have different shapes, three network structures with deformable convolution modules added at different positions in the backbone network were designed, and the ability to extract defects was improved by learning the convolution offsets and adjusting the parameters. Then, an adaptive spatial feature fusion network was used to learn weight parameters that adjust the contribution of each pixel in the feature maps of different scales, so as to better adapt to targets of different scales. Finally, the anchor ratios and category weights were dynamically adjusted and the performance of the compared networks was optimized; on this basis, a network that adds deformable convolution before upsampling to perform multi-scale fusion of the backbone output features was proposed. Experimental results show that on the coin defect dataset, the detection mAP (mean Average Precision) of the DCA-YOLO algorithm reaches 92.8%, which is close to that of Faster-RCNN (Faster Region-based Convolutional Neural Network); compared with YOLOv3, the proposed algorithm has basically the same detection speed with a 3.3 percentage point improvement in detection mAP and a 3.2 percentage point increase in F1-score.
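
    The block below is a rough PyTorch sketch of adaptive spatial feature fusion only: per-pixel weights are predicted for each scale, softmax-normalized, and used to blend the resized feature maps. Channel counts, map sizes and the 1x1 weight convolutions are illustrative assumptions, not the exact DCA-YOLO design.

```python
# Adaptive spatial feature fusion sketch (assumed shapes and weight layers).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFFBlock(nn.Module):
    def __init__(self, channels=64, n_scales=3):
        super().__init__()
        # 1x1 convs producing one spatial weight map per scale
        self.weight_convs = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(n_scales))

    def forward(self, feats):
        # feats: list of feature maps at different resolutions, same channel count
        target = feats[0].shape[-2:]
        resized = [F.interpolate(f, size=target, mode="nearest") for f in feats]
        logits = torch.cat([conv(f) for conv, f in zip(self.weight_convs, resized)], dim=1)
        weights = torch.softmax(logits, dim=1)               # (N, n_scales, H, W)
        fused = sum(weights[:, i:i + 1] * resized[i] for i in range(len(resized)))
        return fused

feats = [torch.rand(1, 64, 52, 52), torch.rand(1, 64, 26, 26), torch.rand(1, 64, 13, 13)]
print(ASFFBlock()(feats).shape)      # torch.Size([1, 64, 52, 52])
```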

    Multi-label active learning algorithm for shale gas reservoir prediction
    Min WANG, Tingting FENG, Fan MIN, Hongming TANG, Jianping YAN, Jijia LIAO
    2022, 42(2):  646-654.  DOI: 10.11772/j.issn.1001-9081.2021041023

    Concerning the difficulty of obtaining shale gas reservoir data, the limited availability of labels, and the high labeling cost, a Multi-standard Active query Multi-label Learning (MAML) algorithm was proposed. First of all, considering the informativeness and representativeness of the samples, preliminary processing was performed on the samples. Secondly, sample richness constraints including attribute differences and label richness were added, and on this basis the valuable samples were selected and their labels were queried. Finally, a multi-label learning algorithm was used to predict the labels of the remaining samples. Through experiments on eleven Yahoo datasets, the MAML algorithm was compared with popular multi-label learning algorithms and active learning algorithms, and its superiority was demonstrated. The experiments were then extended to four real shale gas well logging datasets. In these experiments, compared with the multi-label learning algorithms Multi-Label K-Nearest Neighbor (ML-KNN), BackPropagation for Multi-Label Learning (BP-MLL) and multi-label learning with GLObal and loCAL label correlation (GLOCAL), and the active learning algorithm QUerying Informative and Representative Examples (QUIRE), the MAML algorithm improved the average prediction accuracy of the comprehensive quality of shale gas reservoirs by 45 percentage points, 68 percentage points, 68 percentage points, and 51 percentage points, respectively. These experimental results fully prove the practicability and superiority of the MAML algorithm in the prediction of shale gas reservoir sweet spots.
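
    A hedged sketch of one multi-criterion query step follows: unlabeled samples are scored by an uncertainty term, a density-based representativeness term and a diversity term with respect to already-queried samples, and the top-scoring samples are queried. The scoring functions, weights and synthetic data are assumptions, not the exact MAML criteria.

```python
# Multi-criterion active query sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_unlabeled = rng.random((200, 5))                 # toy well-logging features
proba = rng.random(200)                            # toy predicted label probabilities

informativeness = 1 - np.abs(proba - 0.5) * 2      # highest near the decision boundary
nn = NearestNeighbors(n_neighbors=10).fit(X_unlabeled)
dist, _ = nn.kneighbors(X_unlabeled)
representativeness = 1 / (1 + dist.mean(axis=1))   # denser regions score higher

queried, budget = [], 5
for _ in range(budget):
    if queried:                                    # diversity w.r.t. queried samples
        d_to_queried = np.min(
            np.linalg.norm(X_unlabeled[:, None] - X_unlabeled[queried][None], axis=2), axis=1)
    else:
        d_to_queried = np.ones(len(X_unlabeled))
    score = 0.5 * informativeness + 0.3 * representativeness + 0.2 * d_to_queried
    score[queried] = -np.inf                       # never re-query a sample
    queried.append(int(score.argmax()))

print("indices selected for labeling:", queried)
```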

    First-arrival automatic picking algorithm based on clustering and local linear regression
    Lei GAO, Guanfeng LUO, Dang LIU, Fan MIN
    2022, 42(2):  655-662.  DOI: 10.11772/j.issn.1001-9081.2021041046

    First-arrival picking is an essential step in seismic data processing and directly affects the accuracy of normal moveout correction, static correction and velocity analysis. At present, affected by background noise and complex near-surface conditions, the picking accuracies of existing methods are reduced. To address this, a First-arrival automatic Picking algorithm based on Clustering and Local linear regression (FPCL) was proposed. The algorithm is implemented in two stages: pre-picking and fine-tuning. In the pre-picking stage, the k-means technique was first used to find the first-arrival cluster, and then the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) technique was used to pick first arrivals from the cluster. In the fine-tuning stage, the local linear regression technique was used to fill in missing values, and the energy ratio minimization technique was used to adjust erroneous values. On two seismic datasets, compared with the Improved Modified Energy Ratio (IMER) method, FPCL increased the accuracy by 4.00 percentage points and 3.50 percentage points respectively; compared with the Cross Correlation Technique (CCT), by 38.00 percentage points and 10.25 percentage points respectively; compared with the Automatic time Picking for microseismic data based on a Fuzzy C-means clustering algorithm (APF), by 34.50 percentage points and 3.50 percentage points respectively; and compared with the First-arrival automatic Picking algorithm based on Two-stage Optimization (FPTO), by 5.50 percentage points and 16.25 percentage points respectively. These experimental results show that FPCL is more accurate.
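
    The toy pipeline below mirrors the two stages on synthetic picks: k-means isolates the first-arrival cluster among candidate events, DBSCAN drops outliers inside it, and a local linear regression over neighbouring traces fills the gaps (the energy-ratio fine-tuning step is omitted). The synthetic moveout, noise levels and clustering parameters are assumptions.

```python
# Toy FPCL-style pipeline on synthetic candidate picks (illustrative only).
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
traces = np.arange(60)
first = 200 + 1.5 * traces + rng.normal(0, 2, 60)      # true first-arrival band (toy)
later = 600 + 1.5 * traces + rng.normal(0, 2, 60)      # a later event (toy)
first[[10, 25, 40]] += 120                             # three noisy mispicks

cand_trace = np.concatenate([traces, traces])          # candidate picks: (trace, time)
cand_time = np.concatenate([first, later])

# Pre-picking: k-means separates the first-arrival cluster from the later event.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cand_time.reshape(-1, 1))
first_label = np.argmin([cand_time[km.labels_ == k].mean() for k in range(2)])
t_sel = cand_trace[km.labels_ == first_label]
p_sel = cand_time[km.labels_ == first_label]

# DBSCAN on (trace, time) points drops the mispicks inside that cluster.
db = DBSCAN(eps=10.0, min_samples=3).fit(np.column_stack([t_sel, p_sel]))
t_keep, p_keep = t_sel[db.labels_ != -1], p_sel[db.labels_ != -1]

# Fine-tuning: local linear regression over neighbouring traces fills the gaps.
final = np.interp(traces, t_keep, p_keep)
for i in np.setdiff1d(traces, t_keep):
    mask = np.abs(t_keep - i) <= 5                     # local window of traces
    if mask.sum() >= 2:
        a, b = np.polyfit(t_keep[mask], p_keep[mask], 1)
        final[i] = a * i + b

print("max picking error (samples):", np.abs(final - (200 + 1.5 * traces)).max().round(2))
```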
