Table of Contents

    10 October 2021, Volume 41 Issue 10
    Artificial intelligence
    Ordinal decision tree algorithm based on fuzzy advantage complementary mutual information
    WANG Yahui, QIAN Yuhua, LIU Guoqing
    2021, 41(10):  2785-2792.  DOI: 10.11772/j.issn.1001-9081.2020122006
    When the traditional decision tree algorithm is applied to ordinal classification tasks, there are two problems: the traditional decision tree algorithm does not introduce the order relation, so it cannot learn or extract the order structure of the dataset; and in real life there is a lot of fuzzy rather than exact knowledge, yet the traditional decision tree algorithm cannot deal with data with fuzzy attribute values. To solve these problems, an ordinal decision tree algorithm based on fuzzy advantage complementary mutual information was proposed. Firstly, the dominant set was used to represent the order relations in the data, and the fuzzy set was introduced into the calculation of the dominant set to form a fuzzy dominant set. The fuzzy dominant set was able not only to reflect the order information in the data, but also to obtain inexact knowledge automatically. Then, the complementary mutual information was generalized on the basis of the fuzzy dominant set, and the fuzzy advantage complementary mutual information was proposed. Finally, the fuzzy advantage complementary mutual information was used as a heuristic, and a decision tree algorithm based on fuzzy advantage complementary mutual information was designed. Experimental results on 5 synthetic datasets and 9 real datasets show that the proposed algorithm makes fewer classification errors than classical decision tree algorithms on ordinal classification tasks.
    Causal inference method based on confounder hidden compact representation model
    CAI Ruichu, BAI Yiming, QIAO Jie, HAO Zhifeng
    2021, 41(10):  2793-2798.  DOI: 10.11772/j.issn.1001-9081.2020122066
    Causal inference methods can be used to discover causal relationships from observational data. When making causal inferences on data whose causal structure contains a confounder, wrong causal relationships may be obtained under the influence of the confounder. To solve this problem, a causal inference method based on the Confounder Hidden Compact Representation (CHCR) model was proposed. Firstly, candidate models with an intermediate hidden variable that compactly represented the cause variable were constructed based on the CHCR model. Secondly, the Bayesian Information Criterion (BIC) was used to score the candidate models and obtain the best model with the highest score. Finally, the real causal relationship between the variables was judged according to the quality of compaction in the best model. Theoretical analysis shows that the proposed method can identify causal structures with confounders that cannot be correctly identified by classical constraint-based methods. In some cases, such as small sample sizes, BIC scoring can also improve the performance of the proposed method. Experimental results show that when the number of samples changes, the proposed method achieves a significant improvement in accuracy compared with classical methods such as the Really Fast Causal Inference (RFCI) algorithm, and the proposed method is suitable for situations with different numbers of possible variable values. When different types of causal structures are mixed, the accuracy of the proposed method is higher than those of classical methods such as the Max-Min Hill-Climbing (MMHC) algorithm. Moreover, the proposed method can obtain the correct causal relationships on the Abalone dataset.
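    The BIC scoring step above is concrete enough to illustrate. Below is a minimal sketch, assuming the score form of BIC (log-likelihood penalized by model complexity, higher is better); the candidate models, likelihoods and sample count are hypothetical, not taken from the paper.

```python
import numpy as np

def bic_score(log_likelihood: float, num_params: int, num_samples: int) -> float:
    """BIC in score form: higher is better."""
    return log_likelihood - 0.5 * num_params * np.log(num_samples)

# Hypothetical candidate models, each summarised by its fitted
# log-likelihood and parameter count on n = 500 samples.
candidates = {
    "X->Y (no confounder)": (-1423.7, 6),
    "X<-L->Y (hidden compact confounder)": (-1398.2, 9),
}
n = 500
best = max(candidates, key=lambda m: bic_score(candidates[m][0], candidates[m][1], n))
print("selected:", best)
```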
    Relationship reasoning method combining multi-hop relationship path information
    DONG Yongfeng, LIU Chao, WANG Liqin, LI Yingshuang
    2021, 41(10):  2799-2805.  DOI: 10.11772/j.issn.1001-9081.2020121905
    Concerning the lack of a large number of relationships in current Knowledge Graphs (KG), and the insufficient consideration of the hidden information in the multi-hop path between two entities when performing relationship reasoning, a relationship reasoning method combining multi-hop relationship path information was proposed. Firstly, for the given candidate relationships and two entities, the convolution operation was used to encode the multi-hop relationship path connecting the two entities into a low-dimensional space and extract the information. Secondly, a Bidirectional Long Short-Term Memory (BiLSTM) network was used for modeling to generate the relationship path representation vector, and the attention mechanism was used to combine it with the candidate relationship representation vector. Finally, a multi-step reasoning method was used to find the relationship with the highest matching degree as the reasoning result and judge its precision. Compared with the currently popular Path Ranking Algorithm (PRA), the neural network model Path-RNN and the reinforcement learning model MINERVA, the proposed method had the Mean Average Precision (MAP) increased by 1.96, 8.6 and 1.6 percentage points respectively in experiments on the large-scale knowledge graph dataset NELL995, and by 21.3, 13 and 12.1 percentage points respectively in experiments on the small-scale knowledge graph dataset Kinship. The experimental results show that the proposed method can infer the relationship links between entities more accurately.
    Social recommendation based on dynamic integration of social information
    REN Kezhou, PENG Furong, GUO Xin, WANG Zhe, ZHANG Xiaojing
    2021, 41(10):  2806-2812.  DOI: 10.11772/j.issn.1001-9081.2020111892
    Aiming at the problem of data sparseness in recommendation algorithms, social data are usually introduced as auxiliary information for social recommendation. Traditional social recommendation algorithms ignore users' interest transfer, so the model cannot describe the dynamic characteristics of user interests; they also ignore the dynamic characteristics of social influence, so the model treats long-past social behaviors and recent social behaviors equally. Aiming at these two problems, a social recommendation model named SLSRec with dynamic integration of social information was proposed. First, a self-attention mechanism was used to construct a sequence model of user interaction items to describe user interests dynamically. Then, an attention mechanism with forgetting over time was designed to model short-term social interests, and an attention mechanism with collaborative characteristics was designed to model long-term social interests. Finally, the long-term and short-term social interests and the user's short-term interests were combined to obtain the user's final interests and generate the next recommendation. Normalized Discounted Cumulative Gain (NDCG) and Hit Rate (HR) indicators were used to compare the proposed model with a sequence recommendation model (the Self-Attentive Sequential Recommendation (SASRec) model) and a social recommendation model (the neural influence Diffusion Network for social recommendation (DiffNet) model) on the sparse dataset Brightkite and the dense dataset Last.FM. Experimental results show that compared with the DiffNet model, the SLSRec model has the HR index increased by 8.5% on the sparse dataset; compared with the SASRec model, the SLSRec model has the NDCG index increased by 2.1% on the dense dataset, indicating that considering the dynamic characteristics of social information makes the recommendation results more accurate.
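    The "forgetting over time" attention can be sketched in a few lines. The following is one plausible reading, not the paper's published formula: relevance logits are penalized in proportion to the age of each social behavior before the softmax, so older behaviors receive smaller weights.

```python
import numpy as np

def decayed_attention(sim, ages, lam=0.1):
    """Softmax attention where older interactions are exponentially down-weighted.
    sim: relevance scores; ages: time elapsed since each social behavior."""
    logits = sim - lam * np.asarray(ages)        # forgetting with time
    w = np.exp(logits - logits.max())
    return w / w.sum()

sim = np.array([2.0, 1.5, 1.8])     # similarity of 3 friends' interests to the target
ages = np.array([30.0, 2.0, 10.0])  # days since each social behavior
print(decayed_attention(sim, ages))
```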
    Video recommendation algorithm based on danmaku sentiment analysis and topic model
    ZHU Simiao, Wei Shiwei, WEI Siheng, YU Dunhui
    2021, 41(10):  2813-2819.  DOI: 10.11772/j.issn.1001-9081.2020121997
    A large number of self-made videos on the Internet lack user ratings, so their recommendation accuracy is low. To solve this problem, a Video Recommendation algorithm based on Danmaku Sentiment Analysis and topic model (VRDSA) was proposed. Firstly, sentiment analysis was performed on the videos' danmaku comments to obtain the sentiment vectors of the videos, which were used to calculate the emotional similarities between the videos. At the same time, based on the tags of the videos, a topic model was built to obtain the topic distribution of the video tags, which was used to calculate the topic similarities between the videos. Secondly, the emotional similarities and topic similarities were merged to calculate the comprehensive similarities between the videos. Thirdly, the comprehensive similarities between the videos were combined with the user's history records to obtain the user's preference for videos. At the same time, the public recognition of each video was quantified by user interaction metrics such as the numbers of likes, danmakus and collections, and the comprehensive recognitions of the videos were calculated by combining the user's history records. Finally, based on the user preference and video comprehensive recognitions, the user's recognitions of videos were predicted, and a personalized recommendation list was generated to complete the video recommendation. Experimental results show that, compared with the Danmaku video Recommendation algorithm combining Collaborative Filtering and Topic model (DRCFT) and Unifying LDA (Latent Dirichlet Allocation) and Ratings Collaborative Filtering (ULR-itemCF), the proposed algorithm has the precision increased by 17.1% on average, the recall increased by 22.9% on average, and the F1 increased by 22.2% on average. The proposed algorithm completes video recommendation by analyzing the sentiment of danmakus and integrating the topic model, and fully exploits the emotionality of danmaku data to make the recommendation results more accurate.
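    The fusion of emotional and topic similarities lends itself to a short sketch. The weighted-sum fusion and the balance parameter alpha below are assumptions for illustration; the paper's actual merging formula may differ.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def combined_similarity(sent_a, sent_b, topic_a, topic_b, alpha=0.5):
    """Weighted fusion of danmaku-sentiment similarity and tag-topic similarity.
    alpha balances the two sources; its value here is illustrative."""
    return alpha * cosine(sent_a, sent_b) + (1 - alpha) * cosine(topic_a, topic_b)

# Toy vectors: 3-dim sentiment (neg/neutral/pos) and 4-topic distributions.
print(combined_similarity(
    np.array([0.1, 0.3, 0.6]), np.array([0.2, 0.2, 0.6]),
    np.array([0.7, 0.1, 0.1, 0.1]), np.array([0.6, 0.2, 0.1, 0.1])))
```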
    Chinese implicit sentiment classification model based on sequence and contextual features
    YUAN Jingling, DING Yuanyuan, PAN Donghang, LI Lin
    2021, 41(10):  2820-2828.  DOI: 10.11772/j.issn.1001-9081.2020111760
    Sentiment analysis of massive text information on social networks can better mine the behavior rules of Internet users, helping decision-making institutions understand public opinion tendencies and helping businesses improve the quality of service. The task of Chinese implicit sentiment classification is more difficult than those of other languages due to the absence of key emotional features and the differences in expression forms and cultural customs. The existing Chinese implicit sentiment classification methods are mainly based on the Convolutional Neural Network (CNN), and have some defects, such as the inability to capture the sequence of words and the failure to use contextual emotional features reasonably in implicit emotion discrimination. A Chinese implicit sentiment classification model combining sequence and contextual features named GGBA (GCNN-GRU-BiGRU-Attention) was proposed to solve the above problems. In the model, a Gated Convolutional Neural Network (GCNN) was used to extract the local important information of sentences with implicit sentiments, and a Gated Recurrent Unit (GRU) network was used to enhance the temporal information of features. In the contextual feature processing of sentences with implicit sentiments, the combination of a Bidirectional Gated Recurrent Unit (BiGRU) and attention was used to extract the important emotional features. After obtaining the two types of features, the contextual important features were integrated into the implicit emotion discrimination through the fusion layer. Experimental results on the implicit sentiment analysis evaluation dataset show that the macro average precision of the GGBA model is 3.72% higher than that of the normal text CNN (TextCNN), 2.57% higher than that of GRU, and 1.90% higher than that of the Disconnected Recurrent Neural Network (DRNN). Therefore, the GGBA model achieves better classification performance than the basic models in implicit sentiment analysis tasks.
    Text sentiment analysis based on sentiment lexicon and context language model
    YANG Shuxin, ZHANG Nan
    2021, 41(10):  2829-2834.  DOI: 10.11772/j.issn.1001-9081.2020121900
    Word embedding technology plays an important role in text sentiment analysis, but traditional word embedding technologies such as Word2Vec and GloVe (Global Vectors for word representation) suffer from the problem of single, context-independent semantics. Aiming at this problem, a text sentiment analysis model named Sentiment Lexicon Parallel-Embedding from Language Model (SLP-ELMo) based on a sentiment lexicon and the context language model ELMo (Embedding from Language Model) was proposed. Firstly, the sentiment lexicon was used to filter the words in the sentence. Secondly, the filtered words were input into a character-level Convolutional Neural Network (char-CNN) to generate the character vector of each word. Then, the character vectors were input into the ELMo model for training. In addition, an attention mechanism was added to the last layer of the ELMo vectors to train the word vectors better. Finally, the word vectors and ELMo vectors were combined in parallel and input into the classifier for text sentiment classification. Compared with existing models, the proposed model achieves higher accuracy on the IMDB and SST-2 datasets, validating its effectiveness.
    Visual-textual sentiment analysis method based on multi-level spatial attention
    GUO Kexin, ZHANG Yuxiang
    2021, 41(10):  2835-2841.  DOI: 10.11772/j.issn.1001-9081.2020101676
    With the continuous popularization of social networks, compared with traditional text descriptions, people are inclined to post reviews with both images and text to express their feelings and opinions. The existing visual-textual sentiment analysis methods only consider the high-level semantic relation between images and texts, but pay less attention to the correlation between the low-level visual features and middle-level aesthetic features of images and the sentiment of texts. Thus, a visual-textual sentiment analysis method based on Multi-Level Spatial Attention (MLSA) was proposed. In the proposed method, driven by text content, MLSA was used to design the feature fusion method between images and texts. This feature fusion method not only focuses on the image entity features related to the text, but also makes full use of the middle-level aesthetic features and low-level visual features of images, so as to mine the sentiment co-occurrence between images and texts from multiple perspectives. Compared with the best method among the comparison methods, the proposed model improves accuracy by 0.96 and 1.06 percentage points, and F1 score by 0.96 and 0.62 percentage points, on two public multimodal sentiment datasets (MVSA_Single and MVSA_Multi) respectively. Experimental results show that comprehensively analyzing the hierarchical relationship between text features and image features can effectively enhance the neural network's ability to capture the emotional semantics of texts and images, so as to predict the overall sentiment of image-text pairs more accurately.
    Chinese text sentiment analysis model based on gated mechanism and convolutional neural network
    YANG Lu, HE Mingxiang
    2021, 41(10):  2842-2848.  DOI: 10.11772/j.issn.1001-9081.2020122043
    The particularity of Chinese data leads to noise information being generated in the discrimination process, and the traditional Convolutional Neural Network (CNN) cannot deeply mine emotional feature information. To solve these problems, a Dual Channel Gated Convolutional Neural Network model with Sentiment Lexicon (DC-GCNN-SL) was proposed. Firstly, the word sentiment scores of a sentiment lexicon were used to mark the words in the sentences, so that prior emotional knowledge was obtained by the network and the noise information of the input sentence was effectively removed during training. Secondly, a gated mechanism based on the GTRU (Gated Tanh-ReLU Unit) was proposed to capture the deep sentiment features of the sentences, and the text convolution operations of the two input channels were used to fuse the two kinds of features, control the information transmission, and effectively obtain richer hidden information. Finally, the text sentiment polarity was output through the softmax function. Experiments were carried out on a hotel review dataset, a takeaway review dataset and a commodity review dataset. Experimental results show that, compared with other text sentiment analysis models, the proposed model achieves better accuracy, precision, recall and F1-score, and can effectively capture the emotional features of sentences.
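    The GTRU gate itself is well defined in the literature: a tanh content branch multiplied elementwise by a ReLU gate branch. A minimal single-channel PyTorch sketch follows; the dual-channel wiring and lexicon marking of DC-GCNN-SL are omitted.

```python
import torch
import torch.nn as nn

class GTRU(nn.Module):
    """Gated Tanh-ReLU Unit: one conv branch passes content through tanh,
    the other gates it through ReLU, and the two are multiplied elementwise."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.content = nn.Conv1d(in_ch, out_ch, k, padding=k // 2)
        self.gate = nn.Conv1d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):                    # x: (batch, channels, seq_len)
        return torch.tanh(self.content(x)) * torch.relu(self.gate(x))

x = torch.randn(2, 128, 50)                  # toy batch of embedded sentences
print(GTRU(128, 64)(x).shape)                # torch.Size([2, 64, 50])
```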
    Multi-label feature selection based on label-specific feature with missing labels
    ZHANG Zhihao, LIN Yaojin, LU Shun, GUO Chen, WANG Chenxi
    2021, 41(10):  2849-2857.  DOI: 10.11772/j.issn.1001-9081.2020111893
    Multi-label feature selection has been widely used in many domains, such as image classification and disease diagnosis. However, there usually exist missing labels in the label space of practical data, which destroys the structure of and correlation between labels, making it difficult for learning algorithms to select important features exactly. To address this problem, a Multi-label Feature Selection based on Label-specific feature with Missing Labels (MFSLML) algorithm was proposed. Firstly, the label-specific feature for each class label was obtained via a sparse learning method. At the same time, the mapping relations between labels and label-specific features were constructed based on a linear regression model, and were used to recover the missing labels. Finally, experiments were performed on 7 datasets using 4 evaluation metrics. Experimental results show that, compared with some state-of-the-art multi-label feature selection algorithms, such as the multi-label feature selection algorithm based on Max-Dependency and Min-Redundancy (MDMR) and the Multi-label Feature selection with Missing Labels via considering feature interaction (MFML), MFSLML increases the average precision by 4.61 to 5.5 percentage points, achieving better classification performance.
    Annotation method for joint extraction of domain-oriented entities and relations
    WU Saisai, LIANG Xiaohe, XIE Nengfu, ZHOU Ailian, HAO Xinning
    2021, 41(10):  2858-2863.  DOI: 10.11772/j.issn.1001-9081.2020101678
    In view of the problems of low efficiency, error propagation and entity redundancy in traditional entity and relation annotation methods, and given that corpora in some domains exhibit the characteristic of an overlapping relationship between one entity (the main entity) and multiple other entities at the same time, a new annotation method for the joint extraction of domain entities and relations was proposed. First, the main entity was marked with a fixed label, and each other entity in the text that had a relation with the main entity was marked with the type of the relation between the two. Labeling entities and relations simultaneously in this way saved at least half of the annotation cost. Then, the triples were modeled directly instead of modeling entities and relations separately, and the triple data were obtained through label matching and mapping, which alleviated the problems of overlapping relation extraction, entity redundancy and error propagation. Finally, the field of crop diseases and pests was taken as an example, and a Bidirectional Encoder Representations from Transformers (BERT)-Bidirectional Long Short-Term Memory (BiLSTM)+Conditional Random Field (CRF) end-to-end model was evaluated on a dataset of 1 619 crop disease and pest articles. Experimental results show that this model has an F1 value 47.83 percentage points higher than the pipeline method based on the traditional annotation method+BERT model; compared with the joint learning methods based on the new annotation method+BiLSTM+CRF model, Convolutional Neural Network (CNN)+BiLSTM+CRF and other classic models, its F1 value increased by 9.55 and 10.22 percentage points respectively, which verifies the effectiveness of the proposed annotation method and model.
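    The labeling scheme can be made concrete with a toy example. The sentence, tag names (MAIN, R-Harm, R-Spread) and relation inventory below are hypothetical; they only illustrate how triples fall out of label matching when the main entity carries a fixed tag and every other entity carries a relation-type tag.

```python
# Hypothetical sentence: "Aphids damage wheat and spread BYDV."
tokens = ["Aphids", "damage", "wheat", "and", "spread", "BYDV"]
labels = ["MAIN",   "O",      "R-Harm", "O",  "O",      "R-Spread"]

# The main entity carries a fixed tag; each other entity is tagged with the
# relation type linking it to the main entity, so triples are recovered by
# label matching alone, with no separate relation classification step.
main = tokens[labels.index("MAIN")]
triples = [(main, lab.split("-", 1)[1], tokens[i])
           for i, lab in enumerate(labels) if lab.startswith("R-")]
print(triples)  # [('Aphids', 'Harm', 'wheat'), ('Aphids', 'Spread', 'BYDV')]
```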
    Scientific paper summarization model using macro discourse structure
    FU Ying, WANG Hongling, WANG Zhongqing
    2021, 41(10):  2864-2870.  DOI: 10.11772/j.issn.1001-9081.2020121945
    The traditional neural network model cannot reflect the macro discourse structure information between different sections of a scientific paper, which leads to incomplete structure and incoherent content in the generated summarization. To solve this problem, a scientific paper summarization model using macro discourse structure was proposed. Firstly, a hierarchical encoder based on macro discourse structure was built, in which a graph convolutional neural network was used to encode the macro discourse structure information between sections, so as to construct the hierarchical semantic representations of sections. Then, an information fusion module was proposed to effectively fuse the macro discourse structure information and word-level information, so as to assist the decoder in generating the summarization. Finally, an attention mechanism optimization unit was used to update and optimize the context vector. Experimental results show that the proposed model is 3.53, 1.15 and 4.29 percentage points higher than the baseline model on ROUGE (Recall-Oriented Understudy for Gisting Evaluation)-1, ROUGE-2 and ROUGE-L respectively. Analysis and comparison of the generated summarization content further prove that the proposed model can effectively improve the quality of the generated summarization.
    Knowledge extraction method for follow-up data based on multi-term distillation network
    WEI Chunwu, ZHAO Juanjuan, TANG Xiaoxian, QIANG Yan
    2021, 41(10):  2871-2878.  DOI: 10.11772/j.issn.1001-9081.2020122059
    As medical follow-up work receives more and more attention, the task of obtaining information related to follow-up guidance through medical image analysis has become increasingly important. However, most deep learning-based methods are not suitable for this task. To solve this problem, a Multi-term Knowledge Distillation (MKD) model was proposed. Firstly, exploiting the advantage of knowledge distillation in model transfer, the classification task with long-term follow-up information was converted into a model transfer task based on domain knowledge. Then, the follow-up knowledge contained in long-term medical images was fully utilized to realize the long-term classification of lung nodules. At the same time, facing the problem that the data collected during the follow-up process were relatively unbalanced across years, a meta-learning based normalization method was proposed, effectively improving the training accuracy of the model in the semi-supervised mode. Experimental results on the NLST dataset show that the proposed MKD model has better classification accuracy in the task of long-term lung nodule classification than deep learning classification models such as GoogLeNet. When the amount of unbalanced long-term data reaches 800 cases, the MKD model enhanced by the meta-learning method can improve the accuracy by up to 7 percentage points compared with the existing state-of-the-art models.
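    MKD's exact multi-term formulation is not given here, but the knowledge distillation building block it rests on is standard. A sketch of the classic softened-softmax distillation loss (Hinton et al.) follows; the temperature, weighting and two-class nodule setup are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """KL divergence between the teacher's and student's tempered
    distributions, plus a hard-label cross-entropy term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 2)                 # student: benign vs malignant nodule
t = torch.randn(8, 2)                 # teacher trained on an earlier follow-up term
y = torch.randint(0, 2, (8,))
print(distillation_loss(s, t, y).item())
```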
    Chinese-Vietnamese news topic discovery method based on cross-language neural topic model
    YANG Weiya, YU Zhengtao, GAO Shengxiang, SONG Ran
    2021, 41(10):  2879-2884.  DOI: 10.11772/j.issn.1001-9081.2020122054
    In the Chinese-Vietnamese cross-language news topic discovery task, Chinese-Vietnamese parallel corpora are scarce, making it difficult to train high-quality bilingual word embeddings, and news texts are generally long, so bilingual word embedding methods struggle to represent them well. To solve these problems, a Chinese-Vietnamese news topic discovery method based on a Cross-Language Neural Topic Model (CL-NTM) was proposed. In the method, news topic information was used to represent news texts, converting bilingual semantic alignment into a bilingual topic alignment task. Firstly, neural topic models based on the variational autoencoder were trained in Chinese and Vietnamese respectively to obtain monolingual abstract representations of the topics. Then, a small-scale parallel corpus was used to map the bilingual topics into the same semantic space. Finally, the K-means method was used to cluster the bilingual topic representations to find the topics of news event clusters. Experimental results show that, compared with the Improved Chinese-English Latent Dirichlet Allocation model (ICE-LDA), the proposed method increases the Macro-F1 value and topic coherence by 4 and 7 percentage points respectively, showing that it can effectively improve the clustering effect and topic interpretability of news topics.
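    The final alignment-and-clustering step can be sketched directly. The random vectors and the linear mapping W below are stand-ins for the learned topic representations and the mapping trained on the small parallel corpus.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
zh_topics = rng.random((200, 32))      # stand-ins for Chinese news topic vectors
W = rng.random((32, 32))               # mapping learned from a small parallel corpus
vi_topics = rng.random((200, 32)) @ W  # Vietnamese topics mapped into the shared space

# Cluster the pooled bilingual topic representations into news event clusters.
events = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(
    np.vstack([zh_topics, vi_topics]))
print(np.bincount(events))             # cluster sizes
```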
    Cyber security
    Secure storage and sharing scheme of internet of vehicles data based on hybrid architecture of blockchain and cloud-edge computing
    WU Guangfu, WANG Yingjun
    2021, 41(10):  2885-2892.  DOI: 10.11772/j.issn.1001-9081.2020121938
    To solve the problems of cloud computing in the Internet of Vehicles (IoV), such as high time delay, data leakage and data tampering by malicious vehicle nodes, a secure storage and sharing scheme of IoV data based on a hybrid architecture of blockchain and cloud-edge computing was proposed. Firstly, a dual-chain decentralized storage structure of consortium blockchain and private blockchain was adopted to ensure the security of communication data. Then, an identity-based digital signcryption algorithm and a ring signature scheme based on the discrete central binomial distribution were used to solve the security problems in the communication process. Finally, the Dynamic-layering and Reputation-evaluation Practical Byzantine Fault Tolerant mechanism (DRPBFT) was proposed, and edge computing technology was combined with cloud computing technology to solve the high time delay problem. Security analysis shows that the proposed scheme can guarantee the security and integrity of data during the information sharing process. Experimental simulation and performance evaluation results show that DRPBFT keeps the time delay within 6 s and effectively improves the throughput of the system. The proposed IoV scheme effectively improves the enthusiasm of vehicles for data sharing, makes the IoV system operate more efficiently and stably, and achieves real-time and efficient IoV services.
    Visual image encryption algorithm based on Hopfield chaotic neural network and compressive sensing
    SHEN Ziyi, WANG Weiya, JIANG Donghua, RONG Xianwei
    2021, 41(10):  2893-2899.  DOI: 10.11772/j.issn.1001-9081.2020121942
    At present, most image encryption algorithms directly encrypt the plaintext image into a ciphertext image without visual meaning, which is easily noticed by attackers during transmission and therefore subjected to various attacks. To solve this problem, a visually meaningful image encryption algorithm combining a Hopfield chaotic neural network and compressive sensing technology was proposed. Firstly, the two-dimensional discrete wavelet transform was used to sparsify the plaintext image. Secondly, the sparse matrix after threshold processing was encrypted and measured by compressive sensing. Thirdly, the quantized intermediate ciphertext image was filled with random numbers, and Hilbert scrambling and diffusion operations were performed on the image. Finally, the generated noise-like ciphertext image was embedded into the Alpha channel of the carrier image through Least Significant Bit (LSB) replacement to obtain the visually meaningful steganographic image. Compared with the existing visual image encryption algorithms, the proposed algorithm demonstrates very good visual security, decryption quality and robustness, showing that it has wide application scenarios.
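    The LSB embedding step is simple enough to show concretely. A minimal sketch of 1-bit LSB replacement in an alpha channel follows; real schemes often embed several bits per carrier byte to fit a whole ciphertext image, so treat this as illustrative only.

```python
import numpy as np

def lsb_embed(alpha_channel: np.ndarray, secret_bits: np.ndarray) -> np.ndarray:
    """Replace the least significant bit of each alpha byte with one secret bit."""
    flat = alpha_channel.flatten().astype(np.uint8)   # flatten() copies
    assert secret_bits.size <= flat.size, "carrier too small"
    flat[:secret_bits.size] = (flat[:secret_bits.size] & 0xFE) | secret_bits
    return flat.reshape(alpha_channel.shape)

rng = np.random.default_rng(0)
alpha = rng.integers(0, 256, (8, 8), dtype=np.uint8)  # toy alpha channel
bits = rng.integers(0, 2, 16, dtype=np.uint8)         # ciphertext bits to hide
stego = lsb_embed(alpha, bits)
print(np.array_equal(stego.flatten()[:16] & 1, bits))  # True: bits recoverable
```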
    Protocol identification approach based on semi-supervised subspace clustering
    ZHU Yuna, ZHANG Yutao, YAN Shaoge, FAN Yudan, CHEN Hantuo
    2021, 41(10):  2900-2904.  DOI: 10.11772/j.issn.1001-9081.2020122002
    Existing statistical feature-based identification methods do not consider the differences between protocols when selecting identification features. To solve this problem, a Semi-supervised Subspace-clustering Protocol Identification Approach (SSPIA) was proposed by combining semi-supervised learning and the Fuzzy Subspace Clustering (FSC) method. Firstly, prior constraint conditions were obtained by transforming labeled sample flows into pairwise constraint information. Secondly, the Semi-supervised Fuzzy Subspace Clustering (SFSC) algorithm was proposed on this basis, using the constraint conditions to guide the subspace clustering process. Then, the mapping between clusters and protocol types was established to obtain the weight coefficient of each protocol feature, and an individualized cryptographic protocol feature library was constructed for subsequent protocol identification. Finally, clustering and identification experiments were carried out on five typical cryptographic protocols. Experimental results show that, compared with the traditional K-means method and the FSC method, the proposed SSPIA achieves a better clustering effect, and the protocol identification classifier built on it is more accurate, with a higher protocol identification rate and a lower error identification rate. SSPIA thus improves identification based on statistical features.
    Advanced computing
    Improved grey wolf optimizer for location selection problem of railway logistics distribution center
    HAO Pengfei, CHI Rui, QU Zhijian, TU Hongbin, CHI Xuexin, ZHANG Diyou
    2021, 41(10):  2905-2911.  DOI: 10.11772/j.issn.1001-9081.2020121994
    The Grey Wolf Optimizer (GWO) based on a single mechanism is easy to fall into local optima and has slow convergence speed. To solve these problems, an Improved Grey Wolf Optimizer (IGWO) was proposed to solve the practical location selection problem of railway logistics distribution centers. Firstly, based on the basic GWO, the theory of the good point set was introduced to initialize the population, which improved the diversity of the initial population. Then, the D-value Elimination Strategy (DES) was used to increase the global optimization ability, so as to achieve an efficient optimization mode. Simulation results show that, compared with the standard GWO, IGWO has the fitness value increased by 3% and the accuracy of the optimal value increased by up to 7 units on 10 test functions. Compared with the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm and the Genetic Algorithm (GA), IGWO has the location selection speed increased by 39.6%, 46.5% and 65.9% respectively, a significant improvement. The proposed algorithm can be used for railway logistics center location selection.
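    Good-point-set initialization admits several constructions; a common one takes fractional parts of multiples of square roots of primes. The sketch below follows that construction under the assumption that it matches the paper's intent; it spreads initial wolves more evenly than uniform random sampling.

```python
import numpy as np

def good_point_set(pop_size: int, dim: int, lower, upper) -> np.ndarray:
    """Good-point-set initialization: fractional parts of i * sqrt(p_j)
    for the first `dim` primes p_j, scaled into the search bounds."""
    primes, candidate = [], 2
    while len(primes) < dim:                 # trial division by found primes
        if all(candidate % q for q in primes):
            primes.append(candidate)
        candidate += 1
    i = np.arange(1, pop_size + 1).reshape(-1, 1)
    r = np.sqrt(np.asarray(primes, dtype=float))
    points = np.mod(i * r, 1.0)              # good points in [0, 1)^dim
    return np.asarray(lower) + points * (np.asarray(upper) - np.asarray(lower))

print(good_point_set(5, 3, lower=[-10, -10, -10], upper=[10, 10, 10]))
```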
    Optimal path convergence method based on artificial potential field method and informed sampling
    LI Wei, JIN Shijun
    2021, 41(10):  2912-2918.  DOI: 10.11772/j.issn.1001-9081.2020122021
    The Rapidly exploring Random Tree star (RRT*) algorithm guarantees probabilistic completeness and asymptotic optimality in path planning, but still suffers from problems such as slow convergence and a large, dense sampling space. To speed up the convergence of the algorithm, a method for quickly obtaining the optimal path based on the artificial potential field method and informed-set sampling was proposed. First, the artificial potential field method was used to construct an initial path from the starting point to the target point. Then, the positions of and distance between the starting point and the target point, together with the cost of the initial path, were used as parameters to construct the initial informed sampling set. Finally, sampling was restricted to the informed set, and the range of the informed sampling set was adjusted while the algorithm ran to accelerate path convergence. Simulation experiments show that the Potential Informed-RRT* (PI-RRT*) algorithm, which combines the artificial potential field with informed sampling, reduces the number of sampling points by about 67% and shortens the running time by about 74.5% on average compared with the RRT* algorithm, and reduces the number of sampling points by about 40%-50% and shortens the running time by about 62.5% on average compared with the Informed RRT* (Informed-RRT*) algorithm. The proposed method greatly reduces redundant sampling and running time, is more efficient, and converges to the optimal path faster.
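    Informed sampling has a standard closed form: sample uniformly in a unit ball, then stretch and rotate it into the ellipse whose foci are the start and goal and whose major axis equals the current best path cost. A 2D sketch of that standard step follows; PI-RRT*'s potential-field initialization is not shown.

```python
import numpy as np

def sample_informed(start, goal, c_best, rng=np.random.default_rng()):
    """Uniform sample inside the 2D ellipse whose foci are start and goal
    and whose major axis equals the current best path cost (Informed RRT*)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    c_min = np.linalg.norm(goal - start)
    centre = (start + goal) / 2.0
    # Rotation aligning the x-axis with the start->goal direction.
    theta = np.arctan2(goal[1] - start[1], goal[0] - start[0])
    C = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    L = np.diag([c_best / 2.0, np.sqrt(c_best**2 - c_min**2) / 2.0])
    # Uniform sample in the unit disc, then stretch and rotate.
    r, phi = np.sqrt(rng.uniform()), rng.uniform(0, 2 * np.pi)
    x_ball = np.array([r * np.cos(phi), r * np.sin(phi)])
    return C @ L @ x_ball + centre

print(sample_informed([0, 0], [10, 0], c_best=12.0))
```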
    Fuzzy granulation prediction of traffic flow based on improved whale optimization support vector machine
    TONG Lin, GUAN Zheng
    2021, 41(10):  2919-2927.  DOI: 10.11772/j.issn.1001-9081.2020122048
    Focusing on the volatility and low prediction accuracy of the Support Vector Machine (SVM) in traffic flow prediction, an SVM model using Fuzzy Information Granulation (FIG) and an Improved Whale Optimization Algorithm (IWOA) was proposed to predict traffic flow trends and dynamic ranges. Firstly, the FIG method was applied to the data to obtain the Upper bound (Up), Lower bound (Low) and Trend value (R) of the traffic flow change interval. Secondly, in the population initialization of the Whale Optimization Algorithm (WOA), dynamic opposition-based learning was used to increase population diversity, and a nonlinear convergence factor and adaptive weights were introduced to enhance the global search and local optimization capabilities of the algorithm. After that, the IWOA model was established and its complexity was analyzed. Finally, with the Mean Square Error (MSE) of the predicted traffic flow as the objective function, the hyperparameters of the SVM were optimized continuously during the iterations of IWOA, and a traffic flow interval prediction model based on FIG-IWOA-SVM was established. Tests were carried out on domestic and foreign traffic flow datasets. The results show that, in the prediction of foreign traffic flow, compared with the Support Vector Machine based on Genetic Algorithm optimization (GA-SVM), the Support Vector Machine based on Particle Swarm Optimization (PSO-SVM) and the Support Vector Machine based on the Whale Optimization Algorithm (WOA-SVM), the proposed IWOA-SVM model has the Mean Absolute Error (MAE) reduced by 89.5%, 81.5% and 1.5% respectively. Compared with the FIG-GA-SVM, FIG-PSO-SVM and FIG-WOA-SVM models, the FIG-IWOA-SVM model has higher prediction accuracy and a more stable prediction range for the traffic flow dynamic interval and trend. Experimental results show that, without increasing the complexity of the algorithm, the proposed FIG-IWOA-SVM model can reasonably predict the change trend and change interval of traffic flow, providing a basis for subsequent traffic planning and flow control.
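    The granulation step turns each window of the flow series into a (Low, R, Up) triple. The min/median/max choice below is a deliberate simplification of Pedrycz-style fuzzy granulation, used only to show the data flow into the three SVM regressors.

```python
import numpy as np

def granulate(series, window=12):
    """Split a traffic-flow series into windows and describe each one by a
    triangular fuzzy granule (Low, R, Up); min/median/max is a simplification."""
    granules = []
    for start in range(0, len(series) - window + 1, window):
        w = np.asarray(series[start:start + window], float)
        granules.append((w.min(), float(np.median(w)), w.max()))
    return granules  # one SVM then regresses each of Low / R / Up

flow = np.random.default_rng(1).integers(80, 200, 48)  # toy 48-step flow counts
for low, r, up in granulate(flow):
    print(f"Low={low:.0f}  R={r:.0f}  Up={up:.0f}")
```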
    Task allocation strategy in unmanned aerial vehicle-assisted mobile edge computing
    WANG Daiwei, XU Gaochao, LI Long
    2021, 41(10):  2928-2936.  DOI: 10.11772/j.issn.1001-9081.2020121917
    In the scenario where an Unmanned Aerial Vehicle (UAV) is used as the data collector for computation offloading to provide Mobile Edge Computing (MEC) services to User Equipment (UE), a wireless communication strategy achieving efficient UE coverage by the UAV was designed. Firstly, for a given UE distribution, an optimization method based on Successive Convex Approximation (SCA) was used to obtain an approximately optimal solution for the UAV flight trajectory and communication strategy that minimized the global energy. In addition, for scenarios with a large-scale distribution of UEs or a large number of tasks, an adaptive clustering algorithm was proposed to divide the UEs on the ground into as few clusters as possible while ensuring that the offloading data of all UEs in each cluster could be collected in one flight. Finally, the computation offloading data collection tasks of the UEs in each cluster were allocated to one flight, achieving the goal of reducing the number of dispatches required by a single UAV, or the number of UAVs required, to complete the task. Simulation results show that the proposed method generates fewer clusters than the K-Means algorithm, converges quickly, and is suitable for UAV-assisted computation offloading scenarios with widely distributed UEs.
    Multimedia computing and computer simulation
    Lightweight real-time semantic segmentation algorithm based on separable pyramid
    GAO Shiwei, ZHANG Changzhu, WANG Zhuping
    2021, 41(10):  2937-2944.  DOI: 10.11772/j.issn.1001-9081.2020121939
    The existing semantic segmentation algorithms have too many parameters and huge memory usage, making it difficult to meet the requirements of real-world applications such as autonomous driving. To solve this problem, a novel, effective and lightweight real-time semantic segmentation algorithm based on the Separable Pyramid Module (SPM) was proposed. Firstly, factorized convolution and dilated convolution were adopted in the form of a feature pyramid to construct the bottleneck structure, providing a simple but effective way to extract local and contextual information. Then, a Context Channel Attention (CCA) module based on visual attention was proposed to modify the channel weights of shallow feature maps by utilizing deep semantic features, thereby optimizing the segmentation results. Experimental results show that without pre-training or any additional processing, the proposed algorithm achieves a mean Intersection-over-Union (mIoU) of 71.86% on the Cityscapes test set at 91 Frames Per Second (FPS). Compared with the Efficient Residual Factorized ConvNet (ERFNet), the proposed algorithm has an mIoU 3.86 percentage points higher and 2.2 times the processing speed. Compared with the latest Light-weighted Network with Efficient Reduced Non-local operation for real-time semantic segmentation (LRNNet), its mIoU is slightly lower, by 0.34 percentage points, but its processing speed is 20 FPS higher. These results show that the proposed algorithm is valuable for tasks such as the efficient and accurate street scene image segmentation required in autonomous driving.
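    The factorized-plus-dilated bottleneck can be sketched compactly. The branch layout, dilation rates and residual fusion below are assumptions about SPM's general shape, not a reproduction of its exact design.

```python
import torch
import torch.nn as nn

class FactorizedDilatedBranch(nn.Module):
    """One pyramid branch: a 3x3 convolution factorized into 3x1 and 1x3
    kernels, with a dilation rate controlling its receptive field."""
    def __init__(self, ch, dilation):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, (3, 1), padding=(dilation, 0), dilation=(dilation, 1)),
            nn.Conv2d(ch, ch, (1, 3), padding=(0, dilation), dilation=(1, dilation)),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.conv(x)

x = torch.randn(1, 64, 32, 64)
branches = [FactorizedDilatedBranch(64, d) for d in (1, 2, 4)]  # pyramid of rates
out = x + sum(b(x) for b in branches)       # residual fusion of the pyramid
print(out.shape)                            # torch.Size([1, 64, 32, 64])
```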
    Semantic SLAM algorithm based on deep learning in dynamic environment
    ZHENG Sicheng, KONG Linghua, YOU Tongfei, YI Dingrong
    2021, 41(10):  2945-2951.  DOI: 10.11772/j.issn.1001-9081.2020111885
    Concerning the problem that moving objects in application scenes reduce the positioning accuracy and robustness of visual Simultaneous Localization And Mapping (SLAM) systems, a semantic information based visual SLAM algorithm for dynamic environments was proposed. Firstly, the traditional visual SLAM front end was combined with the YOLOv4 object detection algorithm: during the extraction of ORB (Oriented FAST and Rotated BRIEF) features from the input image, the image was semantically segmented. Then, the object types were judged to obtain the areas of dynamic objects in the image, and the feature points distributed on dynamic objects were eliminated. Finally, the camera pose was solved by inter-frame matching between the processed feature points and the adjacent frames. Test results on the TUM dataset show that the pose estimation accuracy of this algorithm is 96.78% higher than that of ORB-SLAM2 in highly dynamic environments, and the average time consumption per frame of the tracking thread is 0.065 5 s, the shortest among the compared SLAM algorithms for dynamic environments. These results show that the proposed algorithm can realize real-time, precise positioning and mapping in dynamic environments.
    Semantic segmentation method of power line on mobile terminals based on encoder-decoder structure
    HUANG Juting, GAO Hongli, DAI Zhikun
    2021, 41(10):  2952-2958.  DOI: 10.11772/j.issn.1001-9081.2020122037
    Traditional vision algorithms have low accuracy and are greatly affected by environmental factors when detecting long, slender power lines in complex scenes, and the existing deep learning based power line detection algorithms are inefficient. To solve these problems, an end-to-end fully convolutional neural network model suitable for power line detection on mobile terminals was proposed. Firstly, a symmetrical encoder-decoder structure was adopted: in the encoder, max-pooling layers were used for down-sampling to extract multi-scale features; in the decoder, non-linear up-sampling based on max-pooling indices was used to fuse multi-scale features layer by layer to restore image details. Then, a weighted loss function was adopted to train the model, solving the imbalance between power line pixels and background pixels. Finally, a power line dataset with complex backgrounds and pixel-level labels was constructed to train and evaluate the model, and a public power line dataset was relabeled as a different-source test set. Compared with Dilated ConvNet, a model for power line semantic segmentation on mobile devices, the proposed model predicts 512×512 images on the mobile GPU NVIDIA Jetson TX2 at twice the speed, reaching 8.2 frame/s; it achieves a mean Intersection over Union (mIoU) of 0.857 3, an F1 score of 0.844 7 and an Average Precision (AP) of 0.927 9 on the same-source test set, increases of 0.011, 0.014 and 0.008 respectively; and it achieves an mIoU of 0.724 4, an F1 score of 0.634 1 and an AP of 0.664 4 on the public test set, increases of 0.004, 0.007 and 0.032 respectively. Experimental results show that the proposed model has better real-time power line segmentation performance on mobile terminals.
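    The weighted loss for the line/background imbalance is the most transferable piece. A sketch using a positive-class weight in binary cross-entropy follows; the weight value 30.0 and the exact weighting scheme are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# Power-line pixels are vastly outnumbered by background, so the positive
# class gets a large weight; 30.0 is an illustrative value, not the paper's.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([30.0]))

logits = torch.randn(2, 1, 64, 64)                   # raw decoder output
target = (torch.rand(2, 1, 64, 64) < 0.03).float()   # ~3% line pixels
print(criterion(logits, target).item())
```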
    Reconstruction method for uncertain spatial information based on improved variational auto-encoder
    TU Hongyan, ZHANG Ting, XIA Pengfei, DU Yi
    2021, 41(10):  2959-2963.  DOI: 10.11772/j.issn.1001-9081.2020081338
    Uncertain spatial information is widely used in many scientific fields. However, current methods for reconstructing uncertain spatial information need to scan the Training Image (TI) many times and then obtain simulation results through complex probability calculations, which leads to low efficiency and a complex simulation process. To address this issue, a method jointly applying Fisher information and a Variational Auto-Encoder (VAE) to the reconstruction of uncertain spatial information was proposed. Firstly, the structural features of the spatial information were learned through the encoder neural network, and the mean and variance of the spatial information were obtained by training. Then, random sampling was carried out to reconstruct intermediate results according to the mean and variance of the sampling results and the spatial information, and the encoder neural network was optimized by combining the network's optimization function with the Fisher information. Finally, the intermediate results were input into the decoder neural network to decode and reconstruct the spatial information, and the decoder's optimization function was combined with the Fisher information to optimize the reconstruction results. Comparison of the reconstruction results of different methods and the training data on multiple-point connectivity curves, variograms, pore distribution and porosity shows that the reconstruction quality of the proposed method is better than those of other methods. Specifically, the average porosity of the reconstruction results of the proposed method is 0.171 5, closer to the training data's porosity of 0.170 5 than those of other methods. Compared with the traditional method, this method has the average CPU utilization reduced from 90% to 25% and the average memory consumption reduced by 50%, indicating higher reconstruction efficiency. The comparisons of reconstruction quality and efficiency illustrate the effectiveness of this method.
    High-precision sparse reconstruction of CT images based on multiply residual UNet
    ZHANG Yanjiao, QIAO Zhiwei
    2021, 41(10):  2964-2969.  DOI: 10.11772/j.issn.1001-9081.2020121985
    Aiming at the streak artifacts produced during the sparse analytic reconstruction of Computed Tomography (CT), and in order to better suppress them, a Multiply residual UNet (Mr-UNet) network architecture was proposed based on the classical UNet architecture. Firstly, sparse-view images with streak artifacts were reconstructed by the traditional Filtered Back Projection (FBP) analytic reconstruction algorithm. Then, the reconstructed images were used as the input of the network, and the corresponding high-precision images were used as the labels for training, so that the network learned to suppress streak artifacts. Finally, the original four-layer down-sampling of the classical residual UNet was deepened to five layers, and the residual learning mechanism was introduced into the proposed model, with each convolution unit constructed as a residual structure to improve the training performance of the network. In the experiments, 2 000 image pairs with the size of 256×256, each consisting of an image with streak artifacts and the corresponding high-precision image, were used as the dataset, among which 1 900 pairs were used as the training set, 50 pairs as the verification set, and the rest as the test set to train the network and verify and evaluate its performance. Experimental results show that, compared with the traditional Total Variation (TV) minimization algorithm and the classical deep learning method UNet, the proposed model reduces the Root Mean Square Error (RMSE) by about 0.002 5 on average, improves the Structural SIMilarity (SSIM) by about 0.003 on average, and better retains the texture and detail information of the images.
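    Rebuilding each convolution unit into residual form is the core architectural change. A sketch of one such unit follows; the channel counts and normalization choices are assumptions.

```python
import torch
import torch.nn as nn

class ResidualConvUnit(nn.Module):
    """Convolution unit with an identity shortcut, as used when every block
    of a UNet is rebuilt into residual form."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch))
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1))  # match channel counts
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

print(ResidualConvUnit(1, 64)(torch.randn(1, 1, 256, 256)).shape)
```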
    Salient object detection in weak light images based on ant colony optimization algorithm
    WANG Hongyu, ZHANG Yu, YANG Heng, MU Nan
    2021, 41(10):  2970-2978.  DOI: 10.11772/j.issn.1001-9081.2020111814
    Having received substantial attention from industry and academia over the last decade, salient object detection has become an important fundamental research topic in computer vision, and its solution will help make breakthroughs in various visual tasks. Although many works have achieved remarkable success on saliency detection in visible light scenes, extracting salient objects with clear boundaries and accurate internal structure from weak light images with low signal-to-noise ratios and limited effective information remains a challenging issue. Since fuzzy boundaries and incomplete internal structures cause low accuracy of salient object detection in weak light scenes, a saliency detection framework based on the Ant Colony Optimization (ACO) algorithm was proposed. Firstly, the input image was transformed into an undirected graph with different nodes by multi-scale superpixel segmentation. Secondly, an optimal feature selection strategy was adopted to capture the useful information contained in the salient object and eliminate the redundant noise information of the low-contrast weak light image. Then, a spatial contrast strategy was introduced to explore the global saliency cues with relatively high contrast in the weak light image. To acquire more accurate saliency estimation at low signal-to-noise ratios, the ACO algorithm was used to optimize the saliency map. Experiments on three public datasets (MSRA, CSSD and PASCAL-S) and the Nighttime Image (NI) dataset show that the Area Under the Curve (AUC) value of the proposed model reached 87.47%, 84.27% and 81.58% on the three public datasets respectively, and its AUC value on the NI dataset was 2.17 percentage points higher than that of the second-ranked Low Rank matrix recovery (LR) model. The results demonstrate that, compared with 11 mainstream saliency detection models, the proposed model produces detections with more accurate structure and clearer boundaries, and effectively suppresses the interference of weak light scenes on salient object detection performance.
    Robust 3D object detection method based on localization uncertainty
    PEI Yiyao, GUO Huiming, ZHANG Danpu, CHEN Wenbo
    2021, 41(10):  2979-2984.  DOI: 10.11772/j.issn.1001-9081.2020122055
    To solve the problem of inaccurate model localization caused by inaccurate manual labeling in 3D point cloud training data, a robust 3D object detection method based on localization uncertainty was proposed. Firstly, with the 3D voxel grid-based Sparsely Embedded CONvolutional Detection (SECOND) network as the basic network, the prediction of localization uncertainty was added to the Region Proposal Network (RPN). Then, during training, the localization uncertainty was modeled using Gaussian and Laplace distribution models, and the localization loss function was redefined. Finally, during prediction, threshold filtering and Non-Maximum Suppression (NMS) were performed to filter candidate objects based on an object confidence that consisted of the localization uncertainty and the classification confidence. Experimental results on the KITTI 3D object detection dataset show that, compared with the SECOND network, the proposed algorithm improves detection accuracy by 0.5 percentage points on the car category at the moderate difficulty level, and by 3.1 percentage points in the best case when disturbance simulation noise is added to the training data. The proposed algorithm improves the accuracy of 3D object detection, reducing false detections, producing more accurate 3D bounding boxes, and showing more robustness to noisy data.
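    For the Gaussian case, redefining the localization loss typically means a negative log-likelihood with a predicted variance term. The sketch below shows that standard form; the 7-dimensional box encoding is the usual KITTI parameterization, and the paper's exact loss may differ in detail.

```python
import torch

def gaussian_loc_loss(pred, target, log_var):
    """Negative log-likelihood of a Gaussian with predicted variance:
    boxes the network deems uncertain contribute less to the loss."""
    return (0.5 * torch.exp(-log_var) * (pred - target) ** 2
            + 0.5 * log_var).mean()

pred = torch.randn(4, 7, requires_grad=True)     # (x, y, z, w, l, h, yaw) per box
target = torch.randn(4, 7)                       # (noisy) ground-truth boxes
log_var = torch.zeros(4, 7, requires_grad=True)  # one uncertainty per regression term
loss = gaussian_loc_loss(pred, target, log_var)
loss.backward()
print(loss.item())
```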
    Deepfake image detection method based on autoencoder
    ZHANG Ya, JIN Xin, JIANG Qian, LEE Shin-jye, DONG Yunyun, YAO Shaowen
    2021, 41(10):  2985-2990.  DOI: 10.11772/j.issn.1001-9081.2020122046
    Image forgery methods based on deep learning can generate images that are difficult to distinguish by the human eye. Once this technology is abused to produce fake images and videos, it can have serious negative impacts on a country's politics, economy and culture, as well as on social life and personal privacy. To address this problem, a Deepfake detection method based on an autoencoder was proposed. Firstly, Gaussian filtering was used to preprocess the image, and the high-frequency information was extracted as the input of the model. Secondly, an autoencoder was used to extract features from the image; to obtain better classification performance, an attention mechanism module was added to the encoder. Finally, ablation experiments showed that both the proposed preprocessing method and the added attention mechanism module were helpful for Deepfake image detection. Experimental results show that, compared with ResNet50, Xception and InceptionV3, the proposed method can effectively detect images forged by multiple generation methods when the dataset has a small sample size and contains multiple scenes, with an average accuracy of up to 97.10% and significantly better generalization performance than the comparison methods.
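    The preprocessing step is easy to make concrete: subtract a Gaussian-blurred copy of the image so that only high-frequency residuals remain. The sigma value below is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Subtract a Gaussian-blurred copy so only high-frequency residuals
    (where many generation artifacts live) remain as model input."""
    return image - gaussian_filter(image, sigma=sigma)

img = np.random.default_rng(0).random((128, 128)).astype(np.float32)  # toy image
print(high_frequency(img).std())
```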
    Pedestrian re-identification method based on multi-scale feature fusion
    HAN Jiandong, LI Xiaoyu
    2021, 41(10):  2991-2996.  DOI: 10.11772/j.issn.1001-9081.2020121908
    Pedestrian re-identification methods often fail to consider pedestrian feature scale variation during feature extraction, so they are easily affected by the environment and achieve low re-identification accuracy. To solve this problem, a pedestrian re-identification method based on multi-scale feature fusion was proposed. Firstly, in the shallow layers of the network, multi-scale pedestrian features were extracted through mixed pooling, improving the feature extraction capability of the network. Then, strip pooling was added to the residual block to extract remote context information in the horizontal and vertical directions respectively, avoiding interference from irrelevant regions. Finally, after the residual network, dilated convolutions with different rates were used to further preserve multi-scale features, helping the model analyze the scene structure flexibly and effectively. Experimental results show that on the Market-1501 dataset the proposed method achieves a Rank1 of 95.9% and a mean Average Precision (mAP) of 88.5%, and on the DukeMTMC-reID dataset a Rank1 of 90.1% and an mAP of 80.3%, indicating that it retains pedestrian feature information better and thereby improves re-identification accuracy.
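    Strip pooling pools one full spatial dimension at a time. The sketch below follows the general recipe from the strip-pooling literature (attention-style reweighting from horizontal and vertical strips); the exact placement inside the residual block in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripPooling(nn.Module):
    """Pool along one full spatial dimension at a time, capturing long-range
    horizontal and vertical context for pedestrian features."""
    def __init__(self, ch):
        super().__init__()
        self.conv_h = nn.Conv2d(ch, ch, (1, 3), padding=(0, 1))
        self.conv_v = nn.Conv2d(ch, ch, (3, 1), padding=(1, 0))

    def forward(self, x):
        n, c, h, w = x.shape
        horiz = self.conv_h(F.adaptive_avg_pool2d(x, (1, w)))  # 1 x W strip
        vert = self.conv_v(F.adaptive_avg_pool2d(x, (h, 1)))   # H x 1 strip
        return x * torch.sigmoid(horiz.expand_as(x) + vert.expand_as(x))

print(StripPooling(256)(torch.randn(2, 256, 24, 8)).shape)
```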
    Video abnormal behavior detection based on dual prediction model of appearance and motion features
    LI Ziqiang, WANG Zhengyong, CHEN Honggang, LI Linyi, HE Xiaohai
    2021, 41(10):  2997-3003.  DOI: 10.11772/j.issn.1001-9081.2020121906
    Abstract ( )   PDF (1399KB) ( )  
    References | Related Articles | Metrics
    To make full use of appearance and motion information in video abnormal behavior detection, a Siamese network model that captures appearance and motion information at the same time was proposed. The two branches of the network shared the same autoencoder structure: several consecutive RGB frames were used as the input of the appearance sub-network to predict the next frame, while RGB frame-difference images were used as the input of the motion sub-network to predict the future frame difference. In addition, two factors limit the detection performance of prediction-based methods: the diversity of normal samples, and the strong "generation" ability of autoencoder networks, which often predict even some abnormal samples well. Therefore, a memory enhancement module that learns and stores the "prototype" features of normal samples was added between the encoder and the decoder, so that abnormal samples would yield larger prediction errors. Extensive experiments were conducted on three public anomaly detection datasets, Avenue, UCSD-ped2 and ShanghaiTech. Experimental results show that, compared with other video abnormal behavior detection methods based on reconstruction or prediction, the proposed method achieves better performance, with average Area Under Curve (AUC) values of 88.2%, 97.5% and 73.0% on Avenue, UCSD-ped2 and ShanghaiTech respectively.
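    The memory enhancement module is described only at a high level; the sketch below shows one common realization (softmax addressing over learned prototype slots, as in memory-augmented autoencoders), with all names hypothetical:

```python
import numpy as np

def memory_read(z, memory):
    # z: (D,) encoder output; memory: (N, D) learned "prototype"
    # features of normal samples. The decoder input is rebuilt as a
    # softmax-weighted mix of prototypes, so inputs far from every
    # prototype (abnormal frames) are reconstructed poorly and
    # receive a large prediction error.
    sim = memory @ z / (np.linalg.norm(memory, axis=1)
                        * np.linalg.norm(z) + 1e-8)   # cosine similarity
    w = np.exp(sim - sim.max())
    w /= w.sum()                                      # soft addressing weights
    return w @ memory

memory = np.random.randn(10, 128)   # 10 prototype slots
z = np.random.randn(128)
z_hat = memory_read(z, memory)      # passed to the decoder
```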
    Unmanned aerial vehicle image positioning algorithm based on scene graph division
    ZHANG Chi, LI Zhuhong, LIU Zhou, SHEN Weiming
    2021, 41(10):  3004-3009.  DOI: 10.11772/j.issn.1001-9081.2020111795
    Abstract ( )   PDF (1581KB) ( )  
    References | Related Articles | Metrics
    To address the slow speed and error drift in the positioning of large-scale, long-sequence Unmanned Aerial Vehicle (UAV) images, a UAV image positioning algorithm based on scene graph division was proposed according to the characteristics of UAV images. Firstly, Global Positioning System (GPS) auxiliary information was used to narrow the spatial search scope of feature matching, accelerating the extraction of corresponding points. After that, visual consistency and spatial consistency were combined to construct the scene graphs, and Normalized Cut (Ncut) was used to divide them. Then, incremental reconstruction was performed on each group of scene graphs. Finally, all scene graphs were fused to establish a 3D scene model by Bundle Adjustment (BA), with GPS spatial constraint information added to the cost function in the BA stage. In experiments on four UAV image datasets, compared with COLMAP and other Structure From Motion (SFM) algorithms, the proposed algorithm increased positioning speed by 50%, decreased the reprojection error by 41%, and kept the positioning error within 0.5 m. Experimental comparison with and without GPS assistance shows that BA with relative and absolute GPS constraints solves the error drift problem, avoids ambiguous results and greatly reduces positioning error.
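    The GPS-constrained BA cost can be sketched in scalar form as below; the weight `lam` and the simple quadratic prior are assumptions, since the abstract only states that GPS constraints were added to the cost function:

```python
import numpy as np

def ba_cost(reproj_residuals, cam_centers, gps_positions, lam=1.0):
    # Total cost = standard reprojection term plus a GPS prior tying each
    # recovered camera center to its GPS fix; the prior anchors the fused
    # scene graphs in a common frame and suppresses error drift.
    reproj = np.sum(np.asarray(reproj_residuals) ** 2)
    gps = np.sum((np.asarray(cam_centers) - np.asarray(gps_positions)) ** 2)
    return reproj + lam * gps

residuals = np.random.randn(1000, 2) * 0.5   # toy reprojection errors (px)
centers = np.random.rand(20, 3) * 100        # recovered camera centers (m)
gps = centers + np.random.randn(20, 3) * 2   # noisy GPS fixes
print(ba_cost(residuals, centers, gps, lam=0.1))
```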
    Shipping monitoring image recognition model based on attention mechanism network
    ZHANG Kaiyue, ZHANG Hong
    2021, 41(10):  3010-3016.  DOI: 10.11772/j.issn.1001-9081.2020121899
    Abstract ( )   PDF (1343KB) ( )  
    References | Related Articles | Metrics
    The existing shipping monitoring image recognition model Convolutional 3D (C3D) has limited intermediate representation learning ability, its extraction of effective features is easily disturbed by noise, and it ignores the relationship between global and local features during feature extraction. To solve these problems, a shipping monitoring image recognition model based on an attention mechanism network was proposed, built on the Convolutional Neural Network (CNN) framework. Firstly, the shallow features of the image were extracted by a feature extractor. Then, attention information was generated and local discriminative features were extracted based on the CNN's different response strengths to features in different regions. Finally, a multi-branch CNN structure was used to fuse the local discriminative features with the global texture features of the image, exploiting the interaction between them to improve the CNN's ability to learn intermediate representations. Experimental results show that the recognition accuracy of the proposed model on the shipping image dataset is 91.8%, an improvement of 7.2 percentage points over the C3D model and 0.6 percentage points over the Discriminant Filter within a Convolutional Neural Network (DFL-CNN) model. It can be seen that the proposed model can accurately judge the state of a ship and can be effectively applied to shipping monitoring projects.
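    One plausible reading of the attention step, generating a map from response strengths and cropping the most active region, is sketched below; the threshold and cropping rule are assumptions, not the paper's design:

```python
import numpy as np

def attention_crop(feat, thresh=0.5):
    # feat: (C, H, W). The channel-averaged response strength serves as
    # an attention map; the bounding box of its strongest region indexes
    # a local discriminative part to be fused with the global features.
    amap = feat.mean(axis=0)
    amap = amap / (amap.max() + 1e-8)
    ys, xs = np.where(amap >= thresh)
    return feat[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]

feat = np.random.rand(128, 14, 14)   # toy CNN feature map
part = attention_crop(feat)          # local discriminative region
```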
    Extremely dim target search algorithm based on detection and tracking mutual iteration
    XIAO Qi, YIN Zengshan, GAO Shuang
    2021, 41(10):  3017-3024.  DOI: 10.11772/j.issn.1001-9081.2020122000
    Abstract ( )   PDF (1788KB) ( )  
    References | Related Articles | Metrics
    Under extremely Low Signal-to-Noise Ratio (LSNR) conditions, the intensity of dim moving targets is difficult to distinguish from background noise. To solve this problem, an extremely dim target search algorithm based on detection and tracking mutual iteration was proposed, with a new strategy for combining and iterating temporal-domain detection and spatial-domain tracking. Firstly, during detection, the difference between the signal segment in the detection window and the extracted background-estimated feature was calculated. Then, a dynamic programming algorithm was adopted to retain the trajectories with the largest accumulated trajectory energy during tracking. Finally, the threshold parameters of the detector along each retained trajectory were adaptively adjusted in the next detection pass, so that the pixels on the trajectory were carried into the next detection and tracking stage under a more tolerant strategy. Experimental results show that dim moving targets with SNR as low as 0 dB can be detected by the proposed algorithm, with a false alarm rate of 1%-2% and a detection rate of about 70%. It can be seen that the proposed algorithm effectively improves the detection ability for dim targets with extremely LSNR.
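    The trajectory-energy accumulation step is a classic dynamic-programming track-before-detect recursion; a minimal sketch (the velocity window `v` and the max-over-neighborhood transition are assumptions, not taken from the paper) is:

```python
import numpy as np

def dp_accumulate(frames, v=1):
    # frames: (T, H, W) intensity stack. Each pixel's score is its
    # intensity plus the best score among pixels reachable in the
    # previous frame within a +/-v window, so a real target's energy
    # accumulates along its trajectory while uncorrelated noise does not.
    T, H, W = frames.shape
    score = frames[0].astype(np.float64).copy()
    for t in range(1, T):
        best = np.zeros((H, W))
        for dy in range(-v, v + 1):
            for dx in range(-v, v + 1):
                shifted = np.roll(np.roll(score, dy, axis=0), dx, axis=1)
                best = np.maximum(best, shifted)
        score = frames[t] + best
    return score

frames = np.random.rand(10, 32, 32)   # toy LSNR image stack
energy = dp_accumulate(frames, v=1)   # threshold to declare candidate tracks
```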
    High-precision classification method for breast cancer fusing spatial features and channel features
    XU Xuebin, ZHANG Jiada, LIU Wei, LU Longbin, ZHAO Yuqing
    2021, 41(10):  3025-3032.  DOI: 10.11772/j.issn.1001-9081.2020111891
    Abstract ( )   PDF (1343KB) ( )  
    References | Related Articles | Metrics
    Histopathological images are the gold standard for identifying breast cancer, so automatic and accurate classification of breast cancer histopathological images is of great clinical value. To improve the classification accuracy of breast cancer histopathological images and thus meet the needs of clinical applications, a high-precision breast cancer classification method fusing spatial and channel features was proposed. In this method, the histopathological images were processed with color normalization, the dataset was expanded with data augmentation, and the spatial and channel feature information of the histopathological images was fused based on the Convolutional Neural Network (CNN) models DenseNet and Squeeze-and-Excitation Network (SENet). Three different BCSCNet (Breast Classification fusing Spatial and Channel features Network) models, BCSCNetⅠ, BCSCNetⅡ and BCSCNetⅢ, were designed according to the insertion position and number of Squeeze-and-Excitation (SE) modules. Experiments were carried out on the breast cancer histopathology image dataset BreaKHis. Experimental comparison first verified that color normalization and data augmentation improved breast cancer classification accuracy, and then showed that, among the three designed models, BCSCNetⅢ had the highest precision. Experimental results showed that BCSCNetⅢ achieved a binary classification accuracy ranging from 99.05% to 99.89%, an improvement of 0.42 percentage points over Breast cancer Histopathology image Classification Network (BHCNet), and a multi-class classification accuracy ranging from 93.06% to 95.72%, an improvement of 2.41 percentage points over BHCNet. This proves that BCSCNet can accurately classify breast cancer histopathological images and provide reliable theoretical support for computer-aided breast cancer diagnosis.
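    The SE module itself is standard; a NumPy sketch of squeeze, excitation and channel-wise rescaling, as inserted into DenseNet blocks, is shown below (the weights are random stand-ins for trained parameters):

```python
import numpy as np

def se_block(feat, w1, w2):
    # Squeeze-and-Excitation on feat (C, H, W): global average pooling
    # (squeeze), two fully connected layers (excitation), then a
    # channel-wise rescale that injects channel features alongside the
    # spatial ones the convolutions already capture.
    s = feat.mean(axis=(1, 2))                 # squeeze: (C,)
    e = np.maximum(w1 @ s, 0.0)                # FC + ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ e)))        # FC + sigmoid: (C,)
    return feat * g[:, None, None]

c, r = 64, 16                                  # channels, reduction ratio
feat = np.random.rand(c, 8, 8)
w1 = np.random.randn(c // r, c) * 0.1
w2 = np.random.randn(c, c // r) * 0.1
out = se_block(feat, w1, w2)
```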
    Rapid calculation method of orthopedic plate fit based on improved iterative closest point algorithm
    ZHU Xincheng, HE Kunjin, NI Na, HAO Bo
    2021, 41(10):  3033-3039.  DOI: 10.11772/j.issn.1001-9081.2020122012
    Abstract ( )   PDF (2201KB) ( )  
    References | Related Articles | Metrics
    To quickly calculate the optimal fitting position of an orthopedic plate on the surface of a broken bone and reduce the number of repeated plate adjustments during surgery, a rapid calculation method of orthopedic plate fit based on an improved Iterative Closest Point (ICP) algorithm was proposed. Firstly, under the guidance of the doctor, the fitting area was selected on the surface of the broken bone, and the point cloud of the inner surface of the orthopedic plate was extracted using the angle between the normal vectors of the plate's surface points. Then, the two groups of point cloud models were smoothed and simplified by grid sampling, after which the characteristic relationship between the point clouds was used for initial registration. Finally, the boundary and internal feature key points of the inner-surface point cloud of the orthopedic plate were extracted, and a K-Dimensional Tree (KD-Tree) was used to search for adjacent points, so that the feature key points of the orthopedic plate and the selected area of the broken bone surface were accurately registered by ICP. Experiments were carried out with the tibia as an example, and the results show that, compared with other registration algorithms proposed in recent years, the proposed method improves registration efficiency while maintaining relatively high registration accuracy. The proposed algorithm achieves rapid registration between tibias with different damage types and the orthopedic plate, and generalizes to other damaged bones.
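    One ICP iteration with KD-Tree correspondence search and the closed-form (Kabsch/SVD) rigid transform is sketched below; the point counts and data are toy stand-ins, not the paper's models:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst_tree, dst):
    # KD-Tree nearest neighbours give the correspondences, then the best
    # rigid transform is recovered in closed form via SVD (Kabsch).
    _, idx = dst_tree.query(src)
    matched = dst[idx]
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

dst = np.random.rand(500, 3)       # broken-bone surface points
src = dst[:100] + 0.01             # plate feature key points, offset
tree = cKDTree(dst)
for _ in range(10):
    src = icp_step(src, tree, dst)
```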
    Frontier and comprehensive applications
    Inventory routing optimization model with heterogeneous vehicles based on horizontal collaboration strategy
    YANG Hualong, WANG Meiyu, XIN Yuchen
    2021, 41(10):  3040-3048.  DOI: 10.11772/j.issn.1001-9081.2020101577
    Abstract ( )   PDF (750KB) ( )  
    References | Related Articles | Metrics
    To minimize the expected logistics cost of a supplier alliance, the Inventory Routing Problem (IRP) of multiple suppliers and multiple products under randomly fluctuating demand was studied. Based on a horizontal collaboration strategy, a reasonable method for sharing vehicle distribution costs among the members of the supplier alliance was designed. Considering the retailers' soft and hard distribution time windows and inventory service level requirements, a heterogeneous-vehicle inventory routing mixed-integer stochastic programming model for multiple suppliers and multiple products was established, and the inverse of the cumulative demand distribution function was employed to transform it into a deterministic programming model. An improved genetic algorithm was then designed to solve the model. Example analysis shows that, compared with using homogeneous heavy-duty and light-duty vehicles, distribution with heterogeneous vehicles reduces the total cost of the supplier alliance by 8.3% and 11.92% respectively and increases the loading rate of distribution vehicles by 24% and 17% respectively. Sensitivity analysis indicates that, no matter how the proportion of each supplier's supply in the alliance total and the variation coefficient of retailers' commodity demand change, using heterogeneous vehicles for distribution effectively reduces the total cost of the supplier alliance; and the greater the demand variation coefficient, the more obvious the advantage of heterogeneous-vehicle distribution is.
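    For a normally distributed demand (an assumption; the paper only says the inverse cumulative distribution function is used), the chance-constrained service level reduces to a quantile-based stock level, as in this tiny illustration:

```python
from scipy.stats import norm

def service_level_stock(mu, sigma, alpha=0.95):
    # Deterministic equivalent of the chance constraint
    # P(demand <= S) >= alpha: the required stock S is simply the
    # alpha-quantile of the demand distribution (its inverse CDF).
    return mu + sigma * norm.ppf(alpha)

print(service_level_stock(mu=100, sigma=20, alpha=0.95))  # about 132.9
```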
    Optimization algorithm of ship dispatching in container terminals with two-way channel
    ZHENG Hongxing, ZHU Xutao, LI Zhenfei
    2021, 41(10):  3049-3055.  DOI: 10.11772/j.issn.1001-9081.2020121973
    Abstract ( )   PDF (636KB) ( )  
    References | Related Articles | Metrics
    For the encountering and overtaking problems that arise as ships enter and leave container terminals with a two-way channel, a ship dispatching optimization algorithm focusing on the service rules was proposed. Firstly, the realistic constraints of the two-way channel and the safety regulations for night sailing in the port were considered simultaneously. Then, a mixed integer programming model minimizing the total waiting time of ships at the terminal was constructed to obtain the optimal sequence of ships entering and leaving the port. Finally, a branch-and-cut algorithm with an embedded aggregation strategy was designed to solve the model. Numerical experiments show that the average relative deviation between the result of this branch-and-cut algorithm and the lower bound is 2.59%. Meanwhile, compared with the objective function values obtained by the simulated annealing algorithm and the quantum differential evolution algorithm, those obtained by the proposed branch-and-cut algorithm are reduced by 23.56% and 17.17% respectively, verifying the effectiveness of the proposed algorithm. In the sensitivity analysis of the resulting schedules, the influences of different safe time intervals between ship arrivals and of different ship type proportions were compared, providing decision support for ship dispatching optimization in container terminals with a two-way channel.
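    A minimal feasibility check for the two service rules (no meeting between opposite directions, a safe gap for same-direction overtaking) might look as follows; the tuple layout and `safe_gap` are assumptions for illustration, not the paper's model:

```python
def conflicting(ship_a, ship_b, safe_gap=0.5):
    # Each ship is (channel_entry_time, channel_exit_time, direction).
    # Opposite directions conflict when channel occupancies overlap
    # (an encounter); same direction conflicts when entries are closer
    # than the safe interval (an overtaking risk).
    (ea, xa, da), (eb, xb, db) = ship_a, ship_b
    if da != db:
        return ea < xb and eb < xa
    return abs(ea - eb) < safe_gap

print(conflicting((0.0, 1.0, "in"), (0.5, 1.5, "out")))  # True: encounter
```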
    Task allocation optimization for automated guided vehicles based on variable neighborhood simulated annealing algorithm
    YANG Wei, LI Ran, ZHANG Kun
    2021, 41(10):  3056-3062.  DOI: 10.11772/j.issn.1001-9081.2020121919
    Abstract ( )   PDF (785KB) ( )  
    References | Related Articles | Metrics
    To solve the task allocation problem of multi-Automated Guided Vehicle (AGV) storage systems, a Variable Neighborhood Simulated Annealing (VN_SA) algorithm was proposed. Firstly, according to the operation process and operating characteristics of AGVs, a multi-objective task allocation optimization model closer to practice was built for the multi-AGV storage system, taking the path cost, time cost and task-balancing cost of AGVs during task execution as objectives and adding the power consumption of AGVs driving loaded and unloaded to the constraints. Then, a VN_SA algorithm was designed for the characteristics of the problem: the neighborhood perturbation operation expands the search range of simulated annealing, and, combined with probabilistic mutation, the algorithm escapes local optima and achieves better global exploration. Simulation experiments were carried out on instances with 20, 50 and 100 tasks. Experimental results show that the optimized total cost of the proposed algorithm is reduced by 6.4, 7.5 and 13.2 percentage points respectively compared with the Genetic Algorithm (GA), verifying the effectiveness of the proposed algorithm under different task sizes. It can be seen that the proposed algorithm has better convergence and search efficiency.
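    The variable-neighborhood annealing loop can be sketched generically as below; the single `swap` move, the cooling schedule and the toy cost are assumptions standing in for the paper's neighborhoods and objective:

```python
import numpy as np

def swap(x, rng):
    # Neighborhood move: exchange the assignments of two tasks.
    x = x.copy()
    i, j = rng.choice(len(x), 2, replace=False)
    x[i], x[j] = x[j], x[i]
    return x

def vn_sa(cost, x0, neighborhoods, t0=100.0, cool=0.95, iters=2000):
    # Simulated annealing whose perturbation cycles through a list of
    # neighborhood structures: an accepted move restarts at the first
    # (smallest) neighborhood, a rejected one escalates to the next,
    # widening the search and helping escape local optima.
    rng = np.random.default_rng(0)
    x, fx = x0, cost(x0)
    best, fbest, t, k = x, fx, t0, 0
    for _ in range(iters):
        y = neighborhoods[k](x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < np.exp((fx - fy) / t):
            x, fx, k = y, fy, 0
        else:
            k = (k + 1) % len(neighborhoods)
        if fx < fbest:
            best, fbest = x, fx
        t *= cool
    return best, fbest

assign = list(np.random.default_rng(1).permutation(20))  # toy assignment
cost = lambda p: sum(abs(v - i) for i, v in enumerate(p))
print(vn_sa(cost, assign, [swap])[1])
```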
    Multi-objective robust optimization design of blood supply chain network based on improved whale optimization algorithm
    DONG Hai, WU Yao, QI Xinna
    2021, 41(10):  3063-3069.  DOI: 10.11772/j.issn.1001-9081.2020111729
    Abstract ( )   PDF (615KB) ( )  
    References | Related Articles | Metrics
    To deal with uncertainty in blood supply chain network design, a multi-objective robust optimization design model of the blood supply chain network was established. Firstly, for the blood supply chain network with five nodes, an optimization function considering safety stock, minimum cost and shortest storage time was established, and the ε-constraint method, Pareto optimization and robust optimization were used to transform the multi-objective problem into a single-objective robust problem. Secondly, the original Whale Optimization Algorithm (WOA) was improved by introducing the crossover and mutation operations of the differential evolution algorithm, enhancing the search ability and overcoming the algorithm's limitations; the resulting Differential WOA (DWOA) was used to solve the processed model. Finally, a numerical example verified that, on the same test problems, the shortage of the robust model is 76% less than that of the deterministic model, so the robust optimization model has clear advantages in dealing with demand shortage. Compared with WOA, Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), DWOA achieves shorter interruption time and lower cost.
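    The differential ingredient of DWOA presumably resembles DE/rand/1 mutation with binomial crossover; a sketch under that assumption (the exact operators used in the paper may differ):

```python
import numpy as np

def de_perturb(pop, i, F=0.5, cr=0.9, rng=None):
    # DE/rand/1 mutation plus binomial crossover applied to whale i's
    # position: three random whales build a mutant vector, and a random
    # subset of its components replaces whale i's, injecting diversity
    # and counteracting stagnation of the plain WOA search.
    rng = rng if rng is not None else np.random.default_rng()
    a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
    mutant = a + F * (b - c)
    mask = rng.random(pop.shape[1]) < cr
    mask[rng.integers(pop.shape[1])] = True   # keep at least one mutant gene
    return np.where(mask, mutant, pop[i])

pop = np.random.rand(30, 5)                   # 30 whales, 5 decision variables
trial = de_perturb(pop, 0)                    # compare costs, keep the better
```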
    Prediction of organic reaction based on gated graph convolutional neural network
    LAI Zicheng, ZHANG Yuping, MA Yan
    2021, 41(10):  3070-3074.  DOI: 10.11772/j.issn.1001-9081.2020111752
    Abstract ( )   PDF (1291KB) ( )  
    References | Related Articles | Metrics
    With the development of modern pharmaceutical and computer technologies, using artificial intelligence to accelerate drug development has become a research hotspot, and efficient prediction of organic reaction products is a key issue in drug retrosynthesis path planning. Concerning the uneven distribution of chemical reaction types in the sample dataset, an Active Sampling-training Gated Graph Convolutional Neural-network (ASGGCN) model was proposed. Firstly, the SMILES (Simplified Molecular Input Line Entry Specification) codes of the chemical reactants were input into the model, and the location of the reaction center was predicted through a Gated Graph Convolutional Neural-network (GGCN) and an attention mechanism. Then, according to chemical constraints and the candidate reaction centers, the possible chemical bond combinations were enumerated to generate candidate reaction products. After that, a gated graph convolutional difference network was used to rank the candidate products and obtain the final reaction product. Compared with a traditional graph convolutional network, the gated graph convolutional network has three weight parameter matrices and fuses information through gating, so it can obtain richer hidden atom features. At the same time, the gated graph convolutional network is trained by active sampling, which takes into account the analysis of both under-represented and ordinary samples. Experimental results show that the Top-1 prediction accuracy of the proposed model on reaction products reaches 87.2%, 1.6 percentage points higher than that of the WLDN (Weisfeiler-Lehman Difference Network) model, illustrating that the proposed model predicts organic reaction products more accurately.
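    A gated graph convolution layer with the three weight matrices mentioned above can be sketched GRU-style; the exact gating used by GGCN may differ, and all shapes here are toy values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggcn_layer(h, adj, Wz, Wr, Wh):
    # h: (n_atoms, d) atom states; adj: (n, n) molecular adjacency.
    # Neighbour features are aggregated into a message m, then mixed
    # into h with GRU-style update (z) and reset (r) gates, using the
    # three weight matrices the abstract refers to.
    m = adj @ h
    z = sigmoid(np.concatenate([h, m], axis=1) @ Wz)
    r = sigmoid(np.concatenate([h, m], axis=1) @ Wr)
    h_tilde = np.tanh(np.concatenate([r * h, m], axis=1) @ Wh)
    return (1 - z) * h + z * h_tilde

n, d = 6, 8                                   # 6 atoms, 8-dim features
h = np.random.randn(n, d)
adj = np.eye(n, k=1) + np.eye(n, k=-1)        # toy chain molecule
Wz = np.random.randn(2 * d, d) * 0.1
Wr = np.random.randn(2 * d, d) * 0.1
Wh = np.random.randn(2 * d, d) * 0.1
h_next = ggcn_layer(h, adj, Wz, Wr, Wh)
```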
    Element component content dynamic monitoring system based on time sequence characteristics of solution images
    LU Rongxiu, CHEN Mingming, YANG Hui, ZHU Jianyong
    2021, 41(10):  3075-3081.  DOI: 10.11772/j.issn.1001-9081.2020101682
    Abstract ( )   PDF (687KB) ( )  
    References | Related Articles | Metrics
    In view of the difficulty of real-time monitoring of component contents in the rare earth extraction process, and the high time and memory consumption of existing component content detection methods, a dynamic monitoring system for element component content based on the time-sequence characteristics of solution images was designed. Firstly, an image acquisition device was used to obtain time-sequence images of the extraction tank solution. Considering the color characteristics of the extracted liquid and the incompleteness of any single color space, the time-sequence characteristics of the images were extracted in a fused HSI (Hue, Saturation, Intensity) and YUV (Luminance-Bandwidth-Chrominance) color space using Principal Component Analysis (PCA), and, combined with the production index, a Least Squares Support Vector Machine (LSSVM) classifier tuned by the Whale Optimization Algorithm (WOA) was constructed to judge the working condition status. Secondly, when the working condition was not optimal, the color histogram and color moment features of the image were extracted in HSV (Hue, Saturation, Value) color space, and an image retrieval system was developed with the linearly weighted mixed-feature difference between solution images as the similarity measure to obtain the component content value. Finally, tests on the mixed solution of a praseodymium/neodymium extraction tank show that the system realizes dynamic monitoring of element component content.
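    The HSI/YUV fusion is described only at a high level; a rough sketch of per-pixel HSI- and YUV-style channels followed by PCA (the exact channel set, fusion and reduction are assumptions) is:

```python
import numpy as np

def hsi_yuv_features(rgb):
    # Build simple HSI- and YUV-style channels from an RGB solution
    # image and flatten them into one per-pixel feature matrix.
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                            # HSI intensity
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + 1e-8)
    y = 0.299 * r + 0.587 * g + 0.114 * b            # YUV luma
    u, v = 0.492 * (b - y), 0.877 * (r - y)
    return np.stack([i, s, y, u, v], axis=-1).reshape(-1, 5)

def pca_reduce(X, k=2):
    # Leading principal components form the time-sequence feature.
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
feats = pca_reduce(hsi_yuv_features(frame))
```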
    Early identification and prediction of abnormal carotid arteries based on variational autoencoder
    HUANG Xiaoxiang, HU Yongmei, WU Dan, REN Lijie
    2021, 41(10):  3082-3088.  DOI: 10.11772/j.issn.1001-9081.2020101695
    Abstract ( )   PDF (662KB) ( )  
    References | Related Articles | Metrics
    Carotid artery stenosis, increased Carotid Intima-Media Thickness (CIMT) or carotid plaque may lead to stroke. For large-scale preliminary stroke screening, an improved Variational AutoEncoder (VAE) based on medical data was proposed to predict and identify abnormal carotid arteries. Firstly, for the missing values in the medical data, K-Nearest Neighbor (KNN), a Mixture of mean, mode and KNN (MKNN) method, and the improved VAE were respectively used to impute the missing values and obtain a complete dataset, improving the applicability of the data. Secondly, the feature attributes were analyzed and the features were ranked by importance. Thirdly, four supervised algorithms, Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF) and eXtreme Gradient Boosting Tree (XGBT), were combined with a Genetic Algorithm (GA) to build the abnormal carotid artery identification models. Finally, based on the improved VAE, a semi-supervised abnormal carotid artery prediction model was built. Compared with the baseline models, the semi-supervised model based on the improved VAE performs significantly better, with a sensitivity of 0.893 8, a specificity of 0.927 2, an F1-measure of 0.910 5 and a classification accuracy of 0.910 5. Experimental results show that this semi-supervised model can identify abnormal carotid arteries and thus serve as a tool for recognizing groups at high risk of stroke, helping prevent and reduce its occurrence.
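    The improved VAE itself is not specified; the sketch below shows only the generic iterative imputation loop such a model could drive, with a rank-1 PCA stub standing in for the trained VAE's encode-decode pass (both the loop and the stub are assumptions):

```python
import numpy as np

def vae_impute(x, mask, reconstruct, n_iter=10):
    # Start from column means, then repeatedly reconstruct the table and
    # copy the reconstruction back into the missing slots only, keeping
    # observed values fixed. `reconstruct` stands for the model's
    # encode-decode pass.
    x = x.copy()
    col_mean = np.nanmean(x, axis=0)
    x[mask] = np.take(col_mean, np.where(mask)[1])
    for _ in range(n_iter):
        x[mask] = reconstruct(x)[mask]
    return x

def stub_reconstruct(x):
    # Rank-1 PCA reconstruction, just to make the sketch runnable.
    xc = x - x.mean(0)
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    return x.mean(0) + np.outer(u[:, 0] * s[0], vt[0])

data = np.random.rand(100, 5)
miss = np.random.rand(100, 5) < 0.1
data[miss] = np.nan
filled = vae_impute(data, miss, stub_reconstruct)
```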
    Automatic segmentation method of microwave ablation region based on Nakagami parameters images of ultrasonic harmonic envelope
    ZHUO Yuxin, HAN Suya, ZHANG Yufeng, LI Zhiyao, DONG Yifeng
    2021, 41(10):  3089-3096.  DOI: 10.11772/j.issn.1001-9081.2020121948
    Abstract ( )   PDF (4320KB) ( )  
    References | Related Articles | Metrics
    Existing Nakagami parametric imaging of ultrasonic harmonic envelope signals enables non-invasive monitoring of the ablation process but cannot estimate the ablation area accurately. To solve this problem, a Gaussian Approximation adaptive Threshold Segmentation (GATS) method based on Nakagami parameter images of the ultrasonic harmonic envelope was proposed to monitor microwave ablation areas accurately and effectively. Firstly, a high-pass filter was used to obtain the harmonic components of the ultrasonic echo Radio Frequency (RF) signal. Then, the Nakagami shape parameters of the harmonic signal envelope were estimated, and a Nakagami parameter image was generated by composite-window imaging. Finally, Gaussian approximation of the Nakagami parameter image was applied to present the ablation area, anisotropic smoothing was performed on the approximated image as preprocessing, and threshold segmentation of the smoothed image was used to estimate the ablation area accurately. Microwave ablation experiments show that the long-axis and short-axis errors between the segmented ablation area after Perona-Malik (P-M) anisotropic smoothing and the actual ablation area are reduced by 3.15 and 2.21 percentage points respectively compared with the errors obtained with the Catte algorithm, and by 7.87 and 5.74 percentage points compared with the Median algorithm. It can be seen that GATS with P-M smoothing on ultrasonic harmonic envelope Nakagami parameter images estimates the ablation area more accurately and provides effective monitoring for clinical ablation surgery.
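    The Nakagami shape parameter is commonly estimated by the method of moments, and the composite-window imaging step is a sliding-window application of that estimator; a minimal sketch (window size and the crude final threshold are assumptions, not the GATS procedure) is:

```python
import numpy as np

def nakagami_m(envelope):
    # Moment-based Nakagami shape parameter of an envelope patch:
    # m = E[x^2]^2 / var(x^2). Coagulated (ablated) tissue changes the
    # scatterer statistics, shifting m relative to normal tissue.
    p = envelope.astype(np.float64) ** 2
    return p.mean() ** 2 / (p.var() + 1e-12)

def parametric_image(env, win=8):
    # Sliding-window (composite-window) Nakagami parametric image.
    H, W = env.shape
    out = np.zeros((H - win, W - win))
    for i in range(H - win):
        for j in range(W - win):
            out[i, j] = nakagami_m(env[i:i + win, j:j + win])
    return out

env = np.abs(np.random.randn(64, 64))        # toy harmonic envelope data
img = parametric_image(env)
mask = img > img.mean() + img.std()          # crude threshold stand-in
```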