
Table of Contents

    10 July 2021, Volume 41 Issue 7
    Artificial intelligence
    Difference detection method of adversarial samples oriented to deep learning
    WANG Shuyan, HOU Zeyu, SUN Jiaze
    2021, 41(7):  1849-1856.  DOI: 10.11772/j.issn.1001-9081.2020081282
    Deep Neural Networks (DNNs) have been proved vulnerable to adversarial sample attacks in many key deep learning systems such as face recognition and intelligent driving, and existing detection of the various types of adversarial samples suffers from insufficient coverage and low efficiency. Therefore, a difference detection method of adversarial samples oriented to deep learning models was proposed. Firstly, a residual neural network commonly used in industrial production was constructed as the model of the adversarial sample generation and detection system. Then, multiple kinds of adversarial attacks were used to attack this model and generate groups of adversarial samples. Finally, a sample difference detection system was constructed, containing a total of seven adversarial sample difference detection methods across sample confidence detection, perception detection and anti-interference degree detection. Empirical research was carried out with the constructed method on the MNIST and Cifar-10 datasets. The results show that adversarial samples produced by different adversarial attacks differ obviously in the detected confidence, perception and anti-interference degrees; for example, in the detection of confidence and anti-interference, the adversarial samples with excellent perception indicators show significant deficiencies compared to other types of adversarial samples. At the same time, these differences are shown to be consistent across the two datasets. By using this detection method, the comprehensiveness and diversity of the model's detection of adversarial samples can be effectively improved.
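    The abstract does not name the specific attacks used to generate the adversarial sample groups. As a hedged illustration, a one-step Fast Gradient Sign Method (FGSM) perturbation, applied here to a toy logistic-regression scorer rather than the paper's residual network, might be sketched as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, epsilon=0.1):
    """One-step FGSM attack on a toy logistic-regression 'model'.

    The gradient of the cross-entropy loss with respect to the input
    is (p - y) * w, so the input is nudged along the sign of that
    gradient and clipped back to the valid pixel range [0, 1].
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)
```

A detection system like the one described above would then compare properties such as the model's confidence on the clean input versus the perturbed sample.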
    Dynamic network representation learning model based on graph convolutional network and long short-term memory network
    ZHANG Yuanjun, ZHANG Xihuang
    2021, 41(7):  1857-1864.  DOI: 10.11772/j.issn.1001-9081.2020081304
    Concerning the low accuracy and long running time of link prediction between dynamic network nodes, a dynamic network representation learning model named dynGAELSTM was proposed, which takes the denoising AutoEncoder (dAE) as its framework and combines the Graph Convolutional Network (GCN) with the Long Short-Term Memory (LSTM) network. Firstly, the GCN was used at the front-end of the model to capture the feature information of the high-order graph neighborhoods of the dynamic network nodes. Secondly, the extracted information was input into the coding layer of the dAE to obtain low-dimensional feature vectors, and the spatio-temporal dependency features of the dynamic network were obtained by the LSTM network. Finally, a loss function was constructed by comparing the prediction graph reconstructed through the decoding layer of the dAE with the real graph, so as to optimize the model and complete the link prediction. Theoretical analysis and simulation experiments show that, compared with the model with the second-best prediction performance, the dynGAELSTM model improves the prediction performance by 0.79, 1.19 and 3.13 percentage points respectively, and reduces the running time by 0.92% and 1.73% respectively. In summary, the dynGAELSTM model has higher accuracy and lower complexity in link prediction tasks than the existing models.
    Knowledge graph driven recommendation model of graph neural network
    LIU Huan, LI Xiaoge, HU Likun, HU Feixiong, WANG Penghua
    2021, 41(7):  1865-1870.  DOI: 10.11772/j.issn.1001-9081.2020081254
    The abundant structure and association information contained in a Knowledge Graph (KG) can not only alleviate data sparseness and cold-start problems in recommender systems, but also make personalized recommendation more accurate. Therefore, a knowledge graph driven end-to-end graph neural network recommendation model, named KGLN, was proposed. First, a single-layer neural network framework was used to fuse the features of individual nodes in the graph, and the aggregation weights of different neighbor entities were adjusted by adding influence factors. Second, the single layer was extended to multiple layers by iteration, so that the entities were able to obtain abundant multi-order associated entity information. Finally, the obtained entity and user features were integrated to generate the prediction score for recommendation. The effects of different aggregation methods and influence factors on the recommendation results were analyzed. Experimental results show that on the MovieLen-1M and Book-Crossing datasets, compared with benchmark methods such as Factorization Machine Library (LibFM), Deep Factorization Machine (DeepFM), Wide&Deep and RippleNet, KGLN obtains an AUC (Area Under the Receiver Operating Characteristic (ROC) curve) improvement of 0.3%-5.9% and 1.1%-8.2%, respectively.
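    The abstract does not give the exact aggregation formula. One plausible single-layer reading, with softmax-normalized influence factors weighting the neighbor embeddings (an illustrative assumption, not the paper's definition), is:

```python
import numpy as np

def aggregate_neighbors(entity_vec, neighbor_vecs, influence):
    """Single-layer neighbor aggregation with influence factors.

    Neighbor embeddings are combined with softmax-normalized influence
    factors and added to the entity's own embedding. This is one
    plausible reading of the abstract, not the paper's exact formula.
    """
    w = np.exp(influence - influence.max())   # numerically stable softmax weights
    w = w / w.sum()
    return entity_vec + (w[:, None] * neighbor_vecs).sum(axis=0)
```

Iterating this step layer by layer would let an entity absorb multi-order neighborhood information, as the abstract describes.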
    User recommendation method of cross-platform based on knowledge graph and restart random walk
    YU Dunhui, ZHANG Luyi, ZHANG Xiaoxiao, MAO Liang
    2021, 41(7):  1871-1877.  DOI: 10.11772/j.issn.1001-9081.2020111745
    Aiming at the problems that single social network platforms produce homogeneous similar-user recommendations and have insufficient understanding of user interests and behavior information, a User Recommendation method of Cross-Platform based on Knowledge graph and Restart random walk (URCP-KR) was proposed. First, in the similar subgraphs obtained by segmenting and matching the target platform graph and the auxiliary platform graph, an improved multi-layer Recurrent Neural Network (RNN) was used to predict the candidate user entities, and the similar users were selected by comprehensively using the similarity of topological structure features and the user portrait similarity. Then, the relationship information of similar users in the auxiliary platform graph was used to complete the target platform graph. Finally, the probabilities of the users in the target platform graph walking to each user in the community were calculated, so that the interest similarity between users was obtained to realize the user recommendation. Experimental results show that the proposed method achieves higher recommendation precision and diversity than the Collaborative Filtering (CF) algorithm, the User Recommendation algorithm based on Cross-Platform online social networks (URCP) and the User Recommendation algorithm based on Multi-developer Communities (UR-MC), with recommendation precision up to 95.31% and recommendation coverage up to 88.42%.
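    The final step, scoring every user by walk probabilities from a seed user, is a standard random walk with restart. A minimal sketch over an adjacency matrix (the graph construction and RNN components above are omitted; the restart probability `alpha` is an illustrative choice) is:

```python
import numpy as np

def random_walk_with_restart(A, seed, alpha=0.15, tol=1e-9, max_iter=1000):
    """Score every node's proximity to a seed user by a random walk
    with restart: at each step the walker follows an outgoing edge with
    probability 1 - alpha or jumps back to the seed with probability
    alpha; the stationary vector gives the walk probabilities."""
    n = A.shape[0]
    col_sums = A.sum(axis=0).astype(float)
    col_sums[col_sums == 0] = 1.0             # avoid division by zero for isolated nodes
    P = A / col_sums                          # column-stochastic transition matrix
    e = np.zeros(n)
    e[seed] = 1.0
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - alpha) * (P @ r) + alpha * e
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```

The resulting vector can be read as interest similarity between the seed user and every other user in the (completed) target platform graph.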
    Deep attention video popularity prediction model fusing content features and temporal information
    WU Wei, LI Zeping, YANG Huawei, LIN Chuan, WANG Zhongde
    2021, 41(7):  1878-1884.  DOI: 10.11772/j.issn.1001-9081.2020101619
    Aiming at the problem that it is difficult to capture the temporal information during the dynamic change of video popularity, a Deep Attention video popularity prediction model Fusing Content and Temporal information (DAFCT) was proposed. Firstly, according to the users' feedback information, an Attention mechanism based Long Short-Term Memory network (Attention-LSTM) model was constructed to capture the popularity trend and mine the temporal information. Secondly, a Neural Factorization Machine (NFM) was used to process the multi-modal content features, and embedding techniques were adopted to reduce the computational complexity of the model by reducing the dimensionality of sparse high-dimensional features. Finally, the concatenate method was employed to fuse the temporal information and content features, and a Deep Attention Video Popularity Prediction (DAVPP) algorithm was designed to solve the proposed DAFCT. Experimental results show that compared with the Attention-LSTM model and the NFM model, the recall of DAFCT is improved by 10.82 and 3.31 percentage points, and the F1 score is improved by 9.80 and 3.07 percentage points, respectively.
    Fuzzy prototype network based on fuzzy reasoning
    DU Yan, LYU Liangfu, JIAO Yichen
    2021, 41(7):  1885-1890.  DOI: 10.11772/j.issn.1001-9081.2020091482
    In order to solve the problem that the fuzziness and uncertainty of real data may seriously affect the classification results of few-shot learning, a Fuzzy Prototype Network (FPN) based on fuzzy reasoning was proposed by improving and optimizing the traditional few-shot learning prototype network. Firstly, image feature information was obtained from a Convolutional Neural Network (CNN) and a fuzzy neural network, respectively. Then, linear knowledge fusion was performed on the two parts of information to obtain the final image features. Finally, the Euclidean distance between each category prototype and the query set was measured to obtain the final classification. A series of classification experiments were carried out on the mainstream few-shot learning datasets Omniglot and miniImageNet. On the miniImageNet dataset, the model achieves an accuracy of 49.38% under the 5-way 1-shot setting, 67.84% under the 5-way 5-shot setting, and 51.40% under the 30-way 1-shot setting; compared with the traditional prototype network, the model also greatly improves the accuracy on the Omniglot dataset.
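    The final classification step of a prototypical network, measuring Euclidean distance between query embeddings and per-class prototypes, can be sketched as follows (the CNN and fuzzy-neural-network feature extractors above are replaced by pre-computed embeddings):

```python
import numpy as np

def classify_queries(support, support_labels, queries):
    """Prototypical-network classification step: each class prototype is
    the mean of its support embeddings, and a query is assigned to the
    class whose prototype is nearest in Euclidean distance."""
    classes = np.unique(support_labels)
    prototypes = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    # pairwise squared Euclidean distances: (num_queries, num_classes)
    d = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]
```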
    Chinese emergency event extraction method based on named entity recognition task feedback enhancement
    WU Guoliang, XU Jining
    2021, 41(7):  1891-1896.  DOI: 10.11772/j.issn.1001-9081.2020091492
    Aiming at the problem that the Bidirectional Long Short-Term Memory network-Conditional Random Field (BiLSTM-CRF) based event extraction model can only obtain semantic information of character granularity, and that the low dimensionality of its learnable features caps the model's performance, a Chinese emergency event extraction method based on named entity recognition task feedback enhancement, namely FB-Lattice-BiLSTM-CRF (FeedBack-Lattice-Bidirectional Long Short-Term Memory network-Conditional Random Field), was proposed, taking Chinese public emergency event data in the open field as the research object. Firstly, the Lattice mechanism was integrated with the Bidirectional Long Short-Term Memory network (BiLSTM) as the sharing layer of the model to obtain the semantic features of words in sentences. Secondly, a named entity recognition auxiliary task was added to jointly learn and mine entity semantic information; at the same time, the output of the named entity recognition task was fed back to the input end, and the word segmentation results corresponding to the entities were extracted as the external input of the Lattice mechanism, so as to reduce the computing overhead brought by the mechanism's large number of self-formed words and to further enhance the extraction of entity semantic features. Finally, the total loss of the model was calculated by maximum Gaussian likelihood estimation based on homoscedastic uncertainty, so as to solve the loss imbalance caused by multi-task joint learning. Experimental results show that FB-Lattice-BiLSTM-CRF achieves an accuracy of 81.25%, a recall of 76.50%, and an F1 value of 78.80% on the test set, which are 7.63, 4.41 and 5.95 percentage points higher than those of the benchmark model respectively, verifying the effectiveness of the improvements over the benchmark model.
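    A multi-task loss based on maximum Gaussian likelihood and homoscedastic uncertainty is commonly implemented in the style of Kendall et al., with one learnable log-variance per task. A minimal sketch, assuming that formulation (the abstract does not spell out the exact formula), is:

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with homoscedastic-uncertainty weights:
    each loss L_i is scaled by exp(-s_i), where s_i = log(sigma_i^2) is
    a learnable log-variance, and s_i itself is added as a regularizer
    so the optimizer cannot shrink every task weight to zero."""
    total = 0.0
    for L, s in zip(task_losses, log_vars):
        total += np.exp(-s) * L + s
    return total
```

In training, the `log_vars` would be optimized jointly with the network parameters, letting each task's effective weight adapt automatically.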
    Authorship identification of text based on attention mechanism
    ZHANG Yang, JIANG Minghu
    2021, 41(7):  1897-1901.  DOI: 10.11772/j.issn.1001-9081.2020101528
    The accuracy of authorship identification based on deep neural networks decreases significantly when faced with a large number of candidate authors. In order to improve the accuracy of authorship identification, a neural network consisting of fast text classification (fastText) and an attention layer was proposed, and it was combined with continuous Part-Of-Speech (POS) n-gram features for authorship identification of Chinese novels. Experimental results show that the proposed model obtains higher classification accuracy than Text Convolutional Neural Network (TextCNN), Text Recurrent Neural Network (TextRNN), the Long Short-Term Memory (LSTM) network and fastText. Compared with the fastText model, the introduction of the attention mechanism increases the accuracy corresponding to different POS n-gram features by 2.14 percentage points on average; meanwhile, the model retains the speed and efficiency of fastText, and the text features used by it can be applied to other languages.
    Unsupervised parallel hash image retrieval based on correlation distance
    YANG Su, OUYANG Zhi, DU Nisuo
    2021, 41(7):  1902-1907.  DOI: 10.11772/j.issn.1001-9081.2020091472
    To address the problems that traditional unsupervised hash image retrieval models learn the semantic information between image data insufficiently and must be retrained every time the hash code length is changed, an unsupervised search framework for large-scale image dataset retrieval, the unsupervised parallel hash image retrieval model based on correlation distance, was proposed. First, a Convolutional Neural Network (CNN) was used to learn high-dimensional continuous feature variables of the image. Second, a pseudo-label matrix was constructed by using correlation distance to measure the feature variables, and the hash function was combined with deep learning. Finally, a parallel method was used to gradually approximate the original visual characteristics during hash code generation, so that hash codes of multiple lengths are generated in one training. Experimental results show that the mean Average Precisions (mAPs) of the proposed model for 16-bit, 32-bit, 48-bit and 64-bit hash codes on the FLICKR25K dataset are 0.726, 0.736, 0.738 and 0.738 respectively, which are 9.4, 8.2, 6.2 and 7.3 percentage points higher than those of the Semantic Structure-based Unsupervised Deep Hashing (SSDH) model respectively; compared with the SSDH model, the training time of the proposed model is also reduced by 6.6 hours. It can be seen that the proposed model can effectively shorten the training time and improve the retrieval accuracy in large-scale image retrieval.
    Video summarization generation model based on improved bi-directional long short-term memory network
    WU Guangli, LI Leiting, GUO Zhenzhou, WANG Chengxiang
    2021, 41(7):  1908-1914.  DOI: 10.11772/j.issn.1001-9081.2020091512
    In order to solve the problems that traditional video summarization methods often do not consider temporal information and that the extracted video features are too complex and prone to overfitting, a video summarization generation model based on an improved Bi-directional Long Short-Term Memory (BiLSTM) network was proposed. Firstly, the deep features of the video frames were extracted by a Convolutional Neural Network (CNN), and, in order to make the generated video summarization more diverse, the BiLSTM was adopted to convert the deep feature recognition task into a sequence feature annotation task over the video frames, so that the model was able to obtain more context information. Secondly, considering that the generated video summarization should be representative, max pooling was fused in to reduce the feature dimensionality and highlight the key information while weakening the redundant information, so that the model was able to learn representative features; the reduced feature dimensionality also reduces the parameters required in the fully connected layer and avoids overfitting. Finally, the importance scores of the video frames were predicted and converted into shot scores, which were used to select the key shots and generate the video summarization. Experimental results show that the improved model raises the accuracy of video summarization generation on the two standard datasets TvSum and SumMe: compared with DPPLSTM (Determinantal Point Process Long Short-Term Memory), an existing Long Short-Term Memory (LSTM) network based video summarization model, its F1-score values are improved by 1.4 and 0.3 percentage points respectively.
    Human skeleton-based action recognition algorithm based on spatiotemporal attention graph convolutional network model
    LI Yangzhi, YUAN Jiazheng, LIU Hongzhe
    2021, 41(7):  1915-1921.  DOI: 10.11772/j.issn.1001-9081.2020091515
    Aiming at the problem that existing human skeleton-based action recognition algorithms cannot fully explore the temporal and spatial characteristics of motion, a human skeleton-based action recognition algorithm based on the Spatiotemporal Attention Graph Convolutional Network (STA-GCN) model was proposed, which consists of a spatial attention mechanism and a temporal attention mechanism. The spatial attention mechanism, on the one hand, used the instantaneous motion information of optical flow features to locate the spatial regions with significant motion, and on the other hand introduced global average pooling and an auxiliary classification loss during training to enable the model to focus on non-motion regions with discriminative ability. Meanwhile, the temporal attention mechanism automatically extracted discriminative time-domain segments from long and complex videos. Both attention mechanisms were integrated into a unified Graph Convolutional Network (GCN) framework to enable end-to-end training. Experimental results on the Kinetics and NTU RGB+D datasets show that the proposed STA-GCN based algorithm has strong robustness and stability: compared with the benchmark algorithm based on the Spatial Temporal Graph Convolutional Network (ST-GCN) model, its Top-1 and Top-5 on Kinetics are improved by 5.0 and 4.5 percentage points respectively, and its Top-1 on the CS and CV benchmarks of NTU RGB+D is improved by 6.2 and 6.7 percentage points respectively; it also outperforms current State-Of-the-Art (SOA) action recognition methods such as Res-TCN (Residue Temporal Convolutional Network), STA-LSTM and AS-GCN (Actional-Structural Graph Convolutional Network). The results indicate that the proposed algorithm can better meet the practical application requirements of human action recognition.
    Generative adversarial network synthesized face detection based on deep alignment network
    TANG Guihua, SUN Lei, MAO Xiuqing, DAI Leyu, HU Yongjin
    2021, 41(7):  1922-1927.  DOI: 10.11772/j.issn.1001-9081.2020081214
    Existing Generative Adversarial Network (GAN) synthesized face detection methods misjudge real faces with angles and occlusion, so a GAN-synthesized face detection method based on the Deep Alignment Network (DAN) was proposed. Firstly, a facial landmark extraction network was designed based on DAN to extract the locations of facial landmarks of genuine and synthesized faces. Then, in order to reduce the redundant information and the feature dimensionality, each group of landmarks was mapped to three-dimensional space by the Principal Component Analysis (PCA) method. Finally, the features were classified with 5-fold cross-validation of a Support Vector Machine (SVM) and the accuracy was calculated. Experimental results show that by improving the accuracy of facial landmark location, the proposed method alleviates the facial dissonance caused by location errors and thereby reduces the misjudgment rate on real faces. Compared with the VGG19, XceptionNet and Dlib-SVM methods, it has the Area Under the Receiver Operating Characteristic curve (AUC) increased by 4.48 to 32.96 percentage points and the Average Precision (AP) increased by 4.26 to 33.12 percentage points on frontal faces, and has the AUC increased by 10.56 to 30.75 percentage points and the AP increased by 7.42 to 42.45 percentage points on faces with angles and occlusion.
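    The dimensionality-reduction step, mapping each group of landmarks to three dimensions with PCA, can be sketched via SVD of the centered data matrix (a generic PCA projection, not the authors' exact pipeline):

```python
import numpy as np

def pca_project(X, k=3):
    """Project feature vectors onto their top-k principal components:
    center the data, take the SVD, and keep the first k right-singular
    directions as the new basis."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```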
    RefineDet based on subsection weighted loss function
    XIAO Zhenyuan, WANG Yihan, LUO Jianqiao, XIONG Ying, LI Bailin
    2021, 41(7):  1928-1932.  DOI: 10.11772/j.issn.1001-9081.2020101615
    Concerning the poor performance of the object detection network RefineDet (Single-Shot Refinement Neural Network for Object Detection) when detecting small-sample classes in inter-class imbalanced datasets, a Subsection Weighted Loss (SWLoss) function was proposed. Firstly, the inverse of the number of samples of each class in a training batch was used as a heuristic inter-class sample balance factor to weight the classes in the classification loss, thus strengthening attention to small-sample class learning. After that, a multi-task balance factor was introduced to weight the classification loss and the regression loss, reducing the difference between the learning rates of the two tasks. At last, experiments were conducted on the Pascal VOC2007 dataset and a dot-matrix character dataset with large differences in the number of samples per target class. The results demonstrate that compared with the original RefineDet, the SWLoss-based RefineDet clearly improves the detection precision of small-sample classes, with the mean Average Precision (mAP) on the two datasets increased by 1.01 and 9.86 percentage points respectively; compared with RefineDet based on a loss balance function and on weighted pairwise loss, it has the mAP on the two datasets increased by 0.68, 4.73 and 0.49, 1.48 percentage points, respectively.
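    The heuristic inter-class balance factor, the inverse of each class's sample count in a training batch, might be computed as below. The normalization so that the weights sum to the number of classes present in the batch is an illustrative assumption, not stated in the abstract:

```python
import numpy as np

def class_balance_factors(labels, num_classes):
    """Per-class weights for the classification loss: each class present
    in the batch gets a weight proportional to the inverse of its sample
    count (absent classes get 0), then the weights are rescaled to sum
    to the number of present classes (a normalization assumption)."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    weights = np.zeros(num_classes)
    present = counts > 0
    weights[present] = 1.0 / counts[present]
    weights *= present.sum() / weights.sum()
    return weights
```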
    Bamboo strip surface defect detection method based on improved CenterNet
    GAO Qinquan, HUANG Bingcheng, LIU Wenzhe, TONG Tong
    2021, 41(7):  1933-1938.  DOI: 10.11772/j.issn.1001-9081.2020081167
    In bamboo strip surface defect detection, the defects have varied shapes and the imaging environment is messy, and existing Convolutional Neural Network (CNN) based object detection models cannot exploit their advantages on such specific data; moreover, the sources of bamboo strips are complicated and collection is otherwise constrained, so it is impossible to gather all types of data, leaving an amount of bamboo strip defect data too small for a CNN to learn from fully. To address these problems, a detection network specialized for bamboo strip defects was proposed, with CenterNet as its basic framework. In order to improve the detection performance of CenterNet on the limited defect data, an auxiliary detection module trained from scratch was designed: when the network started training, the CenterNet part that uses the pre-training model was frozen, and the auxiliary detection module was trained from scratch according to the defect characteristics of the bamboo strips; when the loss of the auxiliary detection module stabilized, the module was integrated with the pre-trained main part through an attention-mechanism connection. The proposed network was trained and tested on the same training sets as CenterNet and YOLO v3, which is currently commonly used in industrial detection. Experimental results show that on the bamboo strip defect detection dataset, the mean Average Precision (mAP) of the proposed method is 16.45 and 9.96 percentage points higher than those of YOLO v3 and CenterNet, respectively. The proposed method can effectively detect the differently shaped defects of bamboo strips without adding much time consumption, and works well in actual industrial applications.
    Tire defect detection method based on improved Faster R-CNN
    WU Zeju, JIAO Cuijuan, CHEN Liang
    2021, 41(7):  1939-1946.  DOI: 10.11772/j.issn.1001-9081.2020091488
    Defects such as sidewall foreign matter, crown foreign body, air bubbles, crown splits and sidewall root opening that appear during tire production will affect the use of tires after they leave the factory, so nondestructive testing must be carried out on every tire before shipment. In order to achieve automatic detection of tire defects in industry, an automatic tire defect detection method based on an improved Faster Region-Convolutional Neural Network (Faster R-CNN) was proposed. Firstly, at the preprocessing stage, the gray levels of the tire images were stretched by histogram equalization to enhance the contrast of the dataset, producing a significant difference between the gray values of the image target and the background. Secondly, to improve the accuracy of tire defect localization and identification, the Faster R-CNN structure was improved: the convolutional features of the third layer and of the fifth layer in the ZF (Zeiler and Fergus) convolutional neural network were combined and output together as the input of the region proposal network layer. Thirdly, the Online Hard Example Mining (OHEM) algorithm was introduced after the RoI (Region-of-Interest) pooling layer to further improve the accuracy of defect detection. Experimental results show that tire X-ray image defects can be classified and located accurately by the improved Faster R-CNN defect detection method, with an average test recognition rate of 95.7%. In addition, new detection models can be obtained by fine-tuning the network to detect other types of defects.
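    The histogram-equalization preprocessing step can be sketched in NumPy (a generic 8-bit grayscale implementation, not the authors' exact code):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image: remap gray
    levels through the normalized cumulative histogram so the output
    spreads over the full 0-255 range, stretching low-contrast images."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    denom = cdf[-1] - cdf_min
    if denom == 0:
        return img.copy()                     # constant image: nothing to stretch
    # classic CDF remapping applied through a lookup table
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255.0), 0, 255).astype(np.uint8)
    return lut[img]
```

Libraries such as OpenCV provide the same operation (`cv2.equalizeHist`); the NumPy version above just makes the remapping explicit.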
    Data science and technology
    Ensemble classification model for distributed drifted data streams
    YIN Chunyong, ZHANG Guojie
    2021, 41(7):  1947-1955.  DOI: 10.11772/j.issn.1001-9081.2020081277
    Aiming at the problem of low classification accuracy in the big data environment, an ensemble classification model for distributed data streams was proposed. Firstly, the microcluster mode was used to reduce the amount of data transmitted from local nodes to the central nodes, so as to reduce the communication cost. Secondly, the training samples of the global classifier were generated by using a sample reconstruction algorithm. Finally, an ensemble classification model for drifted data streams was proposed, which adopted a weighted combination strategy of dynamic classifiers and steady classifiers, and a mixed labeling strategy was used to label the most representative instances to update the ensemble model. Experiments on two virtual datasets and two real datasets show that the model suffers less fluctuation from concept drift than the two distributed mining models DS-means and BDS-ensemble, and has higher accuracy than the Online Active Learning Ensemble model (OALEnsemble), with the accuracy on the four datasets improved by 1.58, 0.97, 0.77 and 1.91 percentage points respectively. Although the memory consumption of this model is slightly higher than those of the BDS-ensemble and DS-means models, it improves the classification performance at a low memory cost. Therefore, the model is suitable for classifying big data with distributed and mobile characteristics, such as network monitoring and banking business systems.
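    The weighted combination of dynamic and steady base classifiers can be sketched as a weighted vote over class-probability vectors; the weighting scheme itself is an illustrative assumption, since the abstract does not define it:

```python
import numpy as np

def weighted_ensemble_predict(predictions, weights):
    """Weighted-vote combination of base classifiers: each classifier's
    class-probability vector is scaled by its ensemble weight, and the
    class with the highest combined score is returned."""
    combined = sum(w * p for w, p in zip(weights, predictions))
    return int(np.argmax(combined))
```

In the model described above, the weights of dynamic classifiers would be updated as drift is detected, while steady classifiers anchor the ensemble.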
    Deep network embedding method based on community optimization
    LI Yafang, LIANG Ye, FENG Weiwei, ZU Baokai, KANG Yujian
    2021, 41(7):  1956-1963.  DOI: 10.11772/j.issn.1001-9081.2020081193
    With the rapid development of technologies such as modern network communication and social media, networked big data is difficult to apply due to the lack of efficient and usable node representations. Network representation learning, which transforms high-dimensional sparse network data into low-dimensional, compact and easy-to-apply node representations, has therefore attracted wide attention. However, existing network embedding methods obtain low-dimensional feature vectors of nodes and then use them as inputs for other applications (such as node classification, community discovery, link prediction and visualization) for further analysis, without building models for the specific applications, which makes it difficult to achieve satisfactory results. For the specific application of network community discovery, a deep auto-encoder clustering model that combines community structure optimization for the low-dimensional feature representation of nodes, namely Community-Aware Deep Network Embedding (CADNE), was proposed. Firstly, based on the deep auto-encoder model, the low-dimensional node representation was learned by maintaining the topological characteristics of the local and global links of the network, and then it was further optimized by using the network clustering structure. In this method, the low-dimensional representations of the nodes and the indicator vectors of the communities that the nodes belong to are learned at the same time, so that the low-dimensional node representation maintains not only the topological characteristics of the original network structure but also the clustering characteristics of the nodes. Compared with existing classical network embedding methods, CADNE achieves the best clustering results on the Citeseer and Cora datasets, and improves the accuracy by up to 0.525 on 20NewsGroup. In the classification task, CADNE performs best on the Blogcatalog and Citeseer datasets, and the performance on Blogcatalog is improved by up to 0.512 with 20% training samples. In the visualization comparison, the CADNE model obtains low-dimensional node representations with clearer class boundaries, which verifies that the proposed method has better low-dimensional node representation ability.
    Influence maximization algorithm based on user interactive representation
    ZHANG Meng, LI Weihua
    2021, 41(7):  1964-1969.  DOI: 10.11772/j.issn.1001-9081.2020081225
    The problem of influence maximization is to select a group of effective seed users in a social network through which information can reach the largest scope of spread. Traditional research on influence maximization relies on specific network structures and diffusion models; however, manually processed simplified networks and assumption-based diffusion models greatly limit the assessment of users' real influence. To solve this problem, an Influence Maximization algorithm based on User Interactive Representation (IMUIR) was proposed. First, context pairs were constructed through random sampling according to users' interaction traces, and the vector representations of the users were obtained by SkipGram model training. Then, a greedy strategy was used to select the best seed set according to the activity degrees of the source users and their interaction degrees with other users. To verify the effectiveness of IMUIR, experiments were conducted to compare it with the Random, Average Cascade (AC), Kcore and Imfector algorithms on two social networks with real interaction information. The results show that IMUIR selects seed sets of higher quality, produces a wider scope of influence spread, and performs stably on the two datasets.
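    Building (center, context) training pairs from a user's interaction trace before SkipGram training can be sketched as follows; the window size and the subsampling scheme are illustrative assumptions, not the paper's exact procedure:

```python
import random

def context_pairs(trace, window=2, n_samples=None, rng=None):
    """Build (center, context) training pairs from one user interaction
    trace, SkipGram style: every user within `window` positions of the
    center is a positive context; optionally subsample n_samples pairs."""
    pairs = []
    for i, center in enumerate(trace):
        lo, hi = max(0, i - window), min(len(trace), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, trace[j]))
    if n_samples is not None:
        rng = rng or random.Random(0)
        pairs = rng.sample(pairs, min(n_samples, len(pairs)))
    return pairs
```

The resulting pairs would then be fed to a standard SkipGram trainer to produce the user vector representations used for seed selection.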
    Cyber security
    Improvement and analysis of certificate-based wired local area network security association scheme
    XIAO Yuelei, DENG Xiaofan
    2021, 41(7):  1970-1976.  DOI: 10.11772/j.issn.1001-9081.2020081155
    In the Tri-element Peer Authentication (TePA)-based wired Local Area Network (LAN) media access control Security (TLSec), the certificate-based wired LAN security association scheme wastes communication in the exchange key establishment processes and is not suitable for a trusted computing environment. To solve these two problems, an improved certificate-based wired LAN security association scheme was first proposed. In this scheme, the exchange key establishment process between a newly added switch and each non-adjacent switch was simplified, thus improving the communication performance of the exchange key establishment processes. Then, a certificate-based wired LAN security association scheme for the trusted computing environment was proposed based on the above scheme. In this scheme, platform authentication of newly added terminal devices was added to the certificate-based authentication process, so as to realize trusted network access of the newly added terminal devices and effectively prevent them from bringing worms, viruses and malicious software into the wired LAN. Finally, both schemes were proved secure by using the Strand Space Model (SSM). In addition, qualitative and quantitative comparative analysis shows that the two schemes outperform those proposed in the related literature.
    Blockchain storage expansion model based on Chinese remainder theorem
    QING Xinyi, CHEN Yuling, ZHOU Zhengqiang, TU Yuanchao, LI Tao
    2021, 41(7):  1977-1982.  DOI: 10.11772/j.issn.1001-9081.2020081256
    Blockchain stores transaction data in the form of a distributed ledger, and its nodes hold copies of the current data by storing the hash chain. Due to the particularity of the blockchain structure, the number of blocks increases over time, and the storage pressure on nodes grows as blocks accumulate, so storage scalability has become one of the bottlenecks in blockchain development. To address this problem, a blockchain storage expansion model based on the Chinese Remainder Theorem (CRT) was proposed. In the model, the blockchain was divided into high-security blocks and low-security blocks, which were stored under different storage strategies. Among them, the low-security blocks were stored in the form of network-wide preservation (all nodes preserve the data), while the high-security blocks were sliced by a CRT-based partitioning algorithm and then stored in a distributed manner. In addition, the error detection and correction of the Redundant Residue Number System (RRNS) was used to restore data and resist malicious node attacks, so as to improve the stability and integrity of the data. Experimental results and security analysis show that the proposed model is not only secure and fault-tolerant but also ensures data integrity, and it effectively reduces the storage consumption of nodes and increases the storage scalability of the blockchain system.
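The CRT partitioning idea can be illustrated with a minimal split/restore sketch; the moduli and block content below are arbitrary examples, and the real model additionally carries redundant RRNS moduli so that lost or corrupted shares can be detected and corrected:

```python
from math import gcd, prod

def crt_split(value, moduli):
    """Split an integer-encoded block into residues under pairwise-coprime moduli."""
    assert all(gcd(a, b) == 1 for i, a in enumerate(moduli) for b in moduli[i + 1:])
    assert 0 <= value < prod(moduli)
    return [value % m for m in moduli]

def crt_restore(residues, moduli):
    """Recombine residues into the original value via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

block = int.from_bytes(b"tx-data", "big")       # a block encoded as a big integer
moduli = [2**31 - 1, 2**31 - 19, 2**31 - 61]    # pairwise coprime (checked above)
shares = crt_split(block, moduli)               # each share goes to a different node
assert crt_restore(shares, moduli) == block
```

Each node stores only one residue, roughly a third of the block here; adding redundant moduli beyond the minimum needed for reconstruction is what gives RRNS its error detection and correction capability.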
    Blockchain digital signature scheme with improved SM2 signature method
    YANG Longhai, WANG Xueyuan, JIANG Hesong
    2021, 41(7):  1983-1988.  DOI: 10.11772/j.issn.1001-9081.2020081220
    In order to improve the storage security and signature efficiency of digital signature keys in the consensus process of the consortium blockchain Practical Byzantine Fault Tolerance (PBFT) algorithm, and considering the actual application environment of the consortium blockchain PBFT consensus algorithm, a trusted third-party proof signature scheme based on key splitting and the Chinese SM2 cryptographic algorithm was proposed. In this scheme, a trusted third party generated and split the key, and distributed the sub-split private keys to the consensus nodes. In each consensus round, a node first had to prove its identity to the trusted third party, after which the verifying party obtained the other half of the sub-split private key to perform identity verification. In this signature scheme, the splitting and preservation of the private key was realized by combining the characteristics of the consortium chain, and the modular inversion process in the traditional SM2 algorithm was eliminated by using the consensus feature and hash digest. Theoretical analysis proved that the proposed scheme resists data tampering and signature forgery, and the signature process in consensus was simulated by using the Java Development Kit (JDK1.8) and the TIO network framework. Experimental results show that compared with the traditional SM2 algorithm, the proposed scheme is more efficient, and the more consensus nodes there are, the more obvious the efficiency gap: when the number of nodes reaches 30, the efficiency of the scheme is improved by 27.56%, showing that this scheme can satisfy the current application environment of the consortium blockchain PBFT consensus.
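The key-splitting idea, independently of SM2 internals, can be sketched as an additive split of a private key over the curve group order, so that neither holder alone learns the key; whether the paper splits additively is an assumption, and the constant n below is the SM2 recommended-curve order published in GB/T 32918 (the arithmetic holds for any group order):

```python
import secrets

# Order n of the SM2 recommended curve group (GB/T 32918).
n = 0xFFFFFFFE_FFFFFFFF_FFFFFFFF_FFFFFFFF_7203DF6B_21C6052B_53BBF409_39D54123

def split_key(d):
    """Additively split private key d into two sub-keys modulo n; each
    sub-key alone is uniformly random and reveals nothing about d."""
    d1 = secrets.randbelow(n)
    d2 = (d - d1) % n
    return d1, d2

d = secrets.randbelow(n - 1) + 1      # a private key in [1, n-1]
d1, d2 = split_key(d)
assert (d1 + d2) % n == d             # both sub-keys are needed to recover d
```

In the described scheme one sub-key would sit with the consensus node and the other with the trusted third party, which matches the "prove identity first, then obtain the other half" flow.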
    E-forensics model for internet of vehicles based on blockchain
    CHEN Weiwei, CAO Li, GU Xiang
    2021, 41(7):  1989-1995.  DOI: 10.11772/j.issn.1001-9081.2020081205
    To resolve the difficulties of forensics and determination of responsibility for traffic accidents, a blockchain-based e-forensics scheme under the Internet Of Vehicles (IOV) communication architecture was proposed. In this scheme, remote storage of digital evidence was implemented by using the decentralized storage mechanism of blockchain, and fast retrieval of digital evidence and effective tracing of the related evidence chain were realized by using smart contracts. Access control of the data was performed by using a token mechanism to protect the privacy of vehicle identities. Meanwhile, a new consensus mechanism was proposed to meet the real-time forensics requirements of IOV. Simulation results show that the new consensus algorithm in the proposed scheme is more efficient than the traditional Delegated Proof Of Stake (DPOS) consensus algorithm and that the speed of forensics meets the requirements of the IOV environment, while ensuring that the electronic evidence is tamper-proof, non-repudiable and permanently preserved, so as to realize the application of blockchain technology in judicial forensics.
    Intrusion detection based on improved triplet network and K-nearest neighbor algorithm
    WANG Yue, JIANG Yiming, LAN Julong
    2021, 41(7):  1996-2002.  DOI: 10.11772/j.issn.1001-9081.2020081217
    Intrusion detection is one of the important means of ensuring network security. To address the difficulty of balancing detection accuracy and computational efficiency in network intrusion detection, a network intrusion detection model combining an improved Triplet Network (imTN) and K-Nearest Neighbor (KNN), namely imTN-KNN, was proposed based on the idea of deep metric learning. Firstly, a triplet network structure suitable for intrusion detection problems was designed to obtain distance features that are more conducive to the subsequent classification. Secondly, since removing the Batch Normalization (BN) layer from the traditional model causes an overfitting problem that degrades detection precision, a Dropout layer and a Sigmoid activation layer were introduced to replace the BN layer, thus improving the model performance. Finally, the loss function of the traditional triplet network model was replaced with the multi-similarity loss function. In addition, the distance feature output of the imTN was used as the input of the KNN algorithm for retraining. Comparison experiments on the benchmark dataset IDS2018 show that compared with the Deep Neural Network based Intrusion Detection System (IDS-DNN) and the Convolutional Neural Network and Long Short Term Memory (CNN-LSTM) based detection model, the detection accuracy of imTN-KNN is improved by 2.76% and 4.68% on Sub_DS3, and the computational efficiency is improved by 69.56% and 74.31%, respectively.
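The final retraining step, running KNN on the features produced by the metric network, can be sketched as follows; the synthetic embeddings, class labels and k value are assumptions standing in for imTN's learned distance features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in embeddings: in imTN-KNN these would be the distance features
# produced by the trained triplet network (values here are synthetic).
train_emb = np.vstack([rng.normal(0, 0.3, (20, 8)),      # class 0: normal traffic
                       rng.normal(3, 0.3, (20, 8))])     # class 1: intrusion
train_lab = np.array([0] * 20 + [1] * 20)

def knn_predict(x, k=5):
    """Classify an embedding by majority vote among its k nearest
    training embeddings under Euclidean distance."""
    d = np.linalg.norm(train_emb - x, axis=1)
    votes = train_lab[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

pred_normal = knn_predict(np.full(8, 0.1))    # lands near the class-0 cluster
pred_attack = knn_predict(np.full(8, 2.9))    # lands near the class-1 cluster
```

Because the triplet network is trained to pull same-class samples together, even this simple vote becomes effective once the embedding space is well separated.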
    Advanced computing
    Construction method of cloud manufacturing virtual workshop for manufacturing tasks
    ZHAO Qiuyun, WEI Le, SHU Hongping
    2021, 41(7):  2003-2011.  DOI: 10.11772/j.issn.1001-9081.2020081245
    To quickly select and organize relevant manufacturing resources and guarantee the execution of manufacturing tasks under the cloud manufacturing mode, a construction method of cloud manufacturing virtual workshops oriented to manufacturing tasks was proposed. In this method, manufacturing processes were abstracted into manufacturing task execution chains, in which the nodes corresponded to manufacturing equipment cloud services or inspection cloud services and the directed edges corresponded to logistics cloud services. At the same time, the cloud services were organized and managed through the industry domain, location domain and type domain to construct smaller candidate sets of cloud services, reducing the computation of function matching, performance matching, price matching and time matching, and thus enabling rapid construction of cloud manufacturing virtual workshops. Numerical example analysis shows that compared with other methods, the proposed method can select cloud services in a shorter time and ensures that the Quality of Service (QoS) of the selected cloud services is better within the relevant domains.
    Constrained multi-objective optimization algorithm based on coevolution
    ZHANG Xiangfei, LU Yuming, ZHANG Pingsheng
    2021, 41(7):  2012-2018.  DOI: 10.11772/j.issn.1001-9081.2020081344
    In view of the difficulty that constrained multi-objective optimization algorithms have in effectively balancing convergence and diversity, a new constrained multi-objective optimization algorithm based on coevolution was proposed. Firstly, a population with a certain number of feasible solutions was obtained by using a feasible solution search method based on steady-state evolution. Then, this population was divided into two sub-populations, and both convergence and diversity were achieved through the coevolution of the two sub-populations. Finally, the standard constrained multi-objective optimization problems CF1~CF7 and DOC1~DOC7 as well as practical engineering problems were used in simulation experiments to test the solution performance of the proposed algorithm. Experimental results show that compared with the Nondominated Sorting Genetic Algorithm Ⅱ based on Constrained Dominance Principle (NSGA-Ⅱ-CDP), the Two-Phase algorithm (ToP), the Push and Pull Search algorithm (PPS) and the Two-Archive Evolutionary Algorithm for Constrained multiobjective optimization (C-TAEA), the proposed algorithm achieves good results in both Inverted Generational Distance (IGD) and HyperVolume (HV), indicating that it can effectively balance convergence and diversity.
    Network and communications
    Relay selection and performance analysis of multi-relay cooperative spatial modulation
    LI Tong, QIU Runhe
    2021, 41(7):  2019-2025.  DOI: 10.11772/j.issn.1001-9081.2020081238
    A relay selection scheme based on the locations of relay nodes was proposed for the selection problem in a multi-relay cooperative Spatial Modulation (SM) system, and the Bit Error Rate (BER) performance of the system was analyzed. The system used the SM technique at the source node, where only one transmitting antenna was activated in each time slot, and based on the location information of the relay nodes, the Amplify-and-Forward (AF) relay closest to the midpoint between the source node and the destination node was selected among all relays in each time slot for forwarding. A moment generating function approach was used to derive the pairwise error probability of the system under the Rayleigh fading channel, and thereby the theoretical BER of the system was given. Simulation results show that this relay selection method achieves better system BER performance than the random relay selection and cyclic forwarding methods.
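The per-slot selection rule itself is simple enough to sketch directly; the node names and coordinates below are invented for illustration:

```python
import math

def select_relay(source, dest, relays):
    """Pick the AF relay whose coordinates lie closest to the midpoint
    of the source-destination link."""
    mid = ((source[0] + dest[0]) / 2, (source[1] + dest[1]) / 2)
    return min(relays, key=lambda r: math.dist(r[1], mid))

# Hypothetical relay positions on a plane.
relays = [("R1", (1.0, 2.0)), ("R2", (4.8, 0.3)), ("R3", (5.2, -1.0))]
best = select_relay((0.0, 0.0), (10.0, 0.0), relays)   # midpoint is (5, 0)
```

Favoring the mid-link relay balances the qualities of the source-relay and relay-destination hops, which is what drives the BER gain over random or cyclic selection.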
    Power allocation algorithm for CR-NOMA system based on tabu search and Q-learning
    ZHOU Shuo, QIU Runhe, TANG Minjun
    2021, 41(7):  2026-2032.  DOI: 10.11772/j.issn.1001-9081.2020081249
    To meet the demands for high speed and massive connectivity in next-generation mobile communication, improving the total transmission rate of secondary users through optimized power allocation in a Cognitive Radio-Non-Orthogonal Multiple Access (CR-NOMA) hybrid system was studied, and an algorithm of Power Allocation based on Tabu Search and Q-learning (PATSQ) was proposed. Firstly, the cognitive base station observed and learned the users' power allocation in the system environment, and the secondary users used NOMA to access the authorized channel. Then, the power allocation, channel state and total transmission rate in the power allocation problem were expressed as the action, state and reward in a Markov decision process, which was solved by combining tabu search and Q-learning to obtain an optimal tabu Q-table. Finally, under the constraints of the primary and secondary users' Quality of Service (QoS) and the maximum transmitting power, the cognitive base station obtained the optimal power allocation factors by looking up the tabu Q-table, so as to maximize the total transmission rate of the secondary users in the system. Simulation results show that under the same total power, the proposed algorithm is superior to the Cognitive Mobile Radio Network (CMRN) algorithm, the Secondary user First Decode Mode (SFDM) algorithm and the traditional equal power allocation algorithm in terms of the total transmission rate of secondary users and the number of users accommodated in the system.
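The tabu-assisted Q-learning loop can be sketched on a one-state toy problem; the discretized actions, the quadratic stand-in reward, and all hyperparameters are assumptions, with the real PATSQ reward coming from the CR-NOMA rate expressions:

```python
import random
random.seed(0)

# One channel state (0) and four discretized power-allocation factors; the
# reward 10a - 8a^2 stands in for the secondary-user sum rate (it peaks at
# a = 0.625, so 0.6 is the best discrete action).
actions = [0.2, 0.4, 0.6, 0.8]
def reward(state, a):
    return 10 * a - 8 * a * a

Q = {(0, a): 0.0 for a in actions}
tabu, tabu_len = [], 2            # recently tried actions are skipped
alpha, epsilon = 0.5, 0.3

for _ in range(500):
    candidates = [a for a in actions if a not in tabu] or actions
    if random.random() < epsilon:                       # explore
        a = random.choice(candidates)
    else:                                               # exploit non-tabu best
        a = max(candidates, key=lambda x: Q[(0, x)])
    Q[(0, a)] += alpha * (reward(0, a) - Q[(0, a)])     # single-state Q update
    tabu.append(a)
    if len(tabu) > tabu_len:
        tabu.pop(0)

best_action = max(actions, key=lambda a: Q[(0, a)])
```

The tabu list forces the learner to keep revisiting alternatives instead of locking onto an early favorite, which is the role it plays alongside Q-learning in PATSQ.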
    Two-phase resource allocation technology for network slices in smart grid
    SHANG Fangjian, LI Xin, Di ZHAI, LU Yang, ZHANG Donglei, QIAN Yuwen
    2021, 41(7):  2033-2038.  DOI: 10.11772/j.issn.1001-9081.2020081343
    To satisfy the diverse demands of network slicing in the smart grid, a slicing resource allocation model based on cloud-edge collaboration in the smart grid was proposed, and a two-phase cooperative slice allocation model was developed to optimize the allocation of network slices. In the first phase, an optimization model for resource allocation in the local edge network was established to optimize the user experience, and the optimization problem was solved with the Lagrange multiplier method. In the second phase, the system was modeled as a Markov decision process, and deep reinforcement learning was adopted to adaptively allocate resources to the slices of the core cloud. Experimental results show that the proposed two-phase slice resource allocation model can effectively reduce network delay and improve user satisfaction.
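A classic instance of the first-phase Lagrangian step is water-filling: maximize the sum of log(1 + g_i * p_i) subject to a total budget P, with the multiplier found by bisection. The gains, budget and log-rate utility below are illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

# Toy per-slice channel gains and a total resource budget. At the optimum,
# p_i = max(0, 1/lam - 1/g_i), where the multiplier lam enforces sum(p) = P.
g = np.array([2.0, 1.0, 0.5, 0.1])
P = 4.0

lo, hi = 1e-6, 1e6
for _ in range(100):                       # bisection on the multiplier
    lam = (lo + hi) / 2
    p = np.maximum(0.0, 1.0 / lam - 1.0 / g)
    if p.sum() > P:                        # water level too high: raise lam
        lo = lam
    else:
        hi = lam
```

The weakest slice (gain 0.1) ends up with no allocation at this budget; slices above the water level share the budget so that marginal utility is equalized.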
    Multimedia computing and computer simulation
    Image smoothing method based on gradient surface area and sparsity constraints
    LI Hui, WU Chuansheng, LIU Jun, LIU Wen
    2021, 41(7):  2039-2047.  DOI: 10.11772/j.issn.1001-9081.2020081325
    Concerning the problems of easy loss of low-contrast edges and incomplete suppression of texture details during texture image smoothing, an image smoothing method based on gradient surface area and sparsity constraints was proposed. Firstly, the image was regarded as a two-dimensional surface embedded in three-dimensional space. On this basis, the geometric characteristics of the image were analyzed and a regularization term based on the gradient surface area constraint was proposed, which improves texture suppression performance. Secondly, based on the statistical characteristics of the image, a hybrid regularization-constrained image smoothing model with L0 gradient sparsity and adaptive gradient surface area constraints was established. Finally, the alternating direction method of multipliers was used to solve the non-convex, non-smooth optimization model efficiently. Experimental results on texture suppression, edge detection, texture enhancement and image fusion show that the proposed algorithm overcomes the defects of the L0 gradient minimization smoothing method, such as the staircase effect and insufficient filtering, and is able to maintain and sharpen the significant edge contours of the image while removing a large amount of texture information.
    Image colorization algorithm based on foreground semantic information
    WU Lidan, XUE Yuyang, TONG Tong, DU Min, GAO Qinquan
    2021, 41(7):  2048-2053.  DOI: 10.11772/j.issn.1001-9081.2020081184
    An image can be divided into a foreground part and a background part, and the foreground is often the visual center. Because the foreground covers many object categories and complex situations, it is difficult to colorize, so the foreground part of an image may suffer from poor colorization and detail loss. To solve these problems, an image colorization algorithm based on foreground semantic information was proposed to improve the colorization effect and achieve natural overall image color and rich content color. First, a foreground network was used to extract the low-level and high-level features of the foreground part. Then these features were integrated into the foreground subnetwork to eliminate the influence of background color information and emphasize the foreground color information. Finally, the network was continuously optimized by the generation loss and the pixel-level color loss, so as to guide the generation of high-quality images. Experimental results show that after introducing the foreground semantic information, the proposed algorithm improves the Peak Signal-to-Noise Ratio (PSNR) and Learned Perceptual Image Patch Similarity (LPIPS), effectively solving the problems of dull color, detail loss and low contrast in the colorization of the central visual regions; compared with other algorithms, the proposed algorithm achieves a more natural colorization effect on the overall image and a significant improvement in the content part.
    Panoptic segmentation algorithm based on grouped convolution for feature fusion
    FENG Xingjie, ZHANG Tianze
    2021, 41(7):  2054-2061.  DOI: 10.11772/j.issn.1001-9081.2020091523
    Aiming at the problem that existing network structures cannot compute the image panoptic segmentation task fast enough for practical applications, a panoptic segmentation algorithm based on grouped convolution for feature fusion was proposed. Firstly, following a bottom-up approach, the classic Residual Network structure (ResNet) was selected for feature extraction, and multi-scale feature fusion of semantic segmentation and instance segmentation was performed on the extracted features by using Atrous Spatial Pyramid Pooling (ASPP) with different dilation rates. Secondly, a single-channel grouped convolution upsampling method was proposed to integrate the semantic and instance features and upsample the fused features to a specified size. Finally, a more refined panoptic segmentation output was obtained by applying loss functions to the semantic branch, the instance branch and the instance center points respectively. The model was compared with the Attention-guided Unified Network for panoptic segmentation (AUNet), Panoptic Feature Pyramid Network (Panoptic FPN), Single-shot instance Segmentation with Affinity Pyramid (SSAP), Unified Panoptic Segmentation Network (UPSNet), Panoptic-DeepLab and other methods on the CityScapes dataset. Compared with Panoptic-DeepLab, the best-performing model among the comparison models, the proposed model significantly reduces the decoding network parameters while achieving a Panoptic Quality (PQ) of 0.565, a slight decrease of 0.003; the segmentation quality of objects such as buildings, trains and bicycles is improved by 0.3-5.5, the Average Precision (AP) and the Average Precision with target IoU (Intersection over Union) threshold over 50% (AP50) are improved by 0.002 and 0.014 respectively, and the mean IoU (mIoU) value is increased by 0.06. It can be seen that the proposed method improves the speed of image panoptic segmentation, has good accuracy on the three indexes of PQ, AP and mIoU, and can effectively complete panoptic segmentation tasks.
    Adaptive binary simplification method for 3D feature descriptor
    LIU Shuangyuan, ZHENG Wangli, LIN Yunhan
    2021, 41(7):  2062-2069.  DOI: 10.11772/j.issn.1001-9081.2020091501
    In the study of 3-Dimensional (3D) local feature descriptors, it is difficult to strike a balance among accuracy, matching time and memory consumption. To solve this problem, an adaptive binary simplification method for 3D feature descriptors was proposed based on the standard deviation principle in statistical theory. First, different binary feature descriptors were generated by varying the binarization unit length and the number of standard deviations in the simplification model, and these were applied to the widely used Signature of Histograms of OrienTations (SHOT) descriptor; the optimal combination of binarization unit length and number of standard deviations was then determined by experiments. Finally, the simplified descriptor under the optimal combination was named the Standard Deviation feature descriptor for Signature of Histograms of OrienTations (SD-SHOT). Experimental results show that compared with the unsimplified SHOT descriptor, SD-SHOT reduces the key point matching time to 1/15 and the memory occupancy to 1/32 of those of SHOT; compared with existing mainstream simplification methods such as the Binary feature descriptor for Signature of Histograms of OrienTations (B-SHOT), SD-SHOT has the best comprehensive performance. In addition, the validity of the proposed method was verified in an actual robot sorting scene consisting of five different categories of objects.
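A standard-deviation-based binarization of this kind can be sketched as follows; the unit length, the threshold rule (unit mean plus a multiple of the unit standard deviation) and the random stand-in descriptor are illustrative assumptions rather than the paper's exact simplification model:

```python
import numpy as np

def sd_binarize(desc, unit_len=8, n_std=1.0):
    """Binarize a float descriptor unit by unit: a bit is set when a value
    exceeds its unit's mean by n_std standard deviations."""
    bits = []
    for i in range(0, len(desc), unit_len):
        u = desc[i:i + unit_len]
        thr = u.mean() + n_std * u.std()
        bits.extend((u > thr).astype(np.uint8))
    return np.array(bits)

rng = np.random.default_rng(2)
shot = rng.random(352)          # SHOT descriptors are 352-dimensional
b = sd_binarize(shot)           # 352 bits instead of 352 floats
```

Replacing 352 floats with 352 bits is what yields the memory reduction, and bit vectors can be matched with fast Hamming-distance operations instead of floating-point norms.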
    Video similarity detection method based on perceptual hashing and dicing
    WU Yue, LUO Jiangtao, LIU Rui, HU Zhongyin
    2021, 41(7):  2070-2075.  DOI: 10.11772/j.issn.1001-9081.2020081177
    Video copyright infringement problems have long emerged one after another, and video similarity detection is an important means of identifying video copyright infringement. Concerning the difficulty of correlating multiple features and the high time complexity of existing video similarity detection methods, a fast comparison method based on perceptual hashing and dicing was proposed. First, the key image frames of the video were used to generate a digital fingerprint set. Then, based on the dicing method, corresponding inverted indexes were generated to speed up the comparison between digital fingerprints. Finally, similarity was judged according to the Hamming distance between the digital fingerprints. Experimental results show that compared to traditional perceptual hashing comparison methods, the proposed method can reduce the detection time by an average of 93% while maintaining detection accuracy; in comparison with three common methods, namely Multi-Feature Hashing (MFH), Self-Taught Hashing (STH) and SPectral Hashing (SPH), the mean Average Precision (mAP) of the proposed method is increased by 1.4%, 2% and 2.3% respectively, and the detection time is shortened by 25%, 32% and 16% respectively, which verifies the feasibility of the proposed method.
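The dicing-plus-inverted-index idea can be sketched as follows; the 64-bit fingerprints, the 4x16-bit band layout and the distance threshold are illustrative assumptions (only fingerprints sharing at least one band get a full Hamming comparison):

```python
def bands(fp, n_bands=4, width=16):
    """Dice a 64-bit fingerprint into (band_index, band_value) keys."""
    return [(i, (fp >> (i * width)) & ((1 << width) - 1)) for i in range(n_bands)]

def build_index(fps):
    """Inverted index: band key -> set of video ids containing that band."""
    index = {}
    for vid, fp in fps.items():
        for key in bands(fp):
            index.setdefault(key, set()).add(vid)
    return index

def hamming(a, b):
    return bin(a ^ b).count("1")

db = {"vidA": 0xDEADBEEFCAFEF00D, "vidB": 0xDEADBEEFCAFEF00F,
      "vidC": 0x0123456789ABCDEF}
index = build_index(db)

query = 0xDEADBEEFCAFEF00D
candidates = set().union(*(index.get(k, set()) for k in bands(query)))
matches = [v for v in candidates if hamming(query, db[v]) <= 3]
```

Only vidA and vidB share a band with the query, so vidC is never compared at all; that pruning is where the reported time savings come from.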
    Shadow detection method based on hybrid attention model
    TAN Daoqiang, ZENG Cheng, QIAO Jinxia, ZHANG Jun
    2021, 41(7):  2076-2081.  DOI: 10.11772/j.issn.1001-9081.2020081308
    The shadow regions in an image may lead to uncertainty about the image content, which is not conducive to other computer vision tasks, so shadow detection is often considered a pre-processing step for computer vision algorithms. However, most existing shadow detection algorithms use a multi-level network structure, which makes model training difficult, and although some algorithms adopting a single-layer network structure have been proposed, they only focus on local shadows and ignore the relations between shadows. To solve this problem, a shadow detection algorithm based on a hybrid attention model was proposed to improve the accuracy and robustness of shadow detection. Firstly, the pre-trained deep network ResNext101 was used as the front-end feature extraction network to extract the basic features of the image. Secondly, a bidirectional pyramid structure was used for feature fusion from shallow to deep and from deep to shallow, and an information compensation mechanism was proposed to reduce the loss of deep semantic information. Thirdly, a hybrid attention model combining spatial attention and channel attention was proposed for feature fusion, so as to capture the differences between shaded and non-shaded regions. Finally, the prediction results of the two directions were merged to obtain the final shadow detection result. Comparison experiments were conducted on the public datasets SBU and UCF. The results show that compared with the DSC (Direction-aware Spatial Context) algorithm, the Balance Error Rate (BER) of the proposed algorithm is reduced by 30% and 11% respectively, proving that the proposed method can better suppress shadow false detection and enhance shadow details.
    Medical image fusion with intuitionistic fuzzy set and intensity enhancement
    ZHANG Linfa, ZHANG Yufeng, WANG Kun, LI Zhiyao
    2021, 41(7):  2082-2091.  DOI: 10.11772/j.issn.1001-9081.2020101539
    Image fusion technology plays an important role in computer-aided diagnosis. Detail extraction and energy preservation are two key issues in image fusion; traditional fusion methods address them simultaneously through the design of the fusion method, which tends to cause information loss or insufficient energy preservation. In view of this, a fusion method was proposed to solve the problems of detail extraction and energy preservation separately. The first part of the method aimed at detail extraction. Firstly, the Non-Subsampled Shearlet Transform (NSST) was used to divide the source image into low-frequency and high-frequency subbands. Then, an improved energy-based fusion rule was used to fuse the low-frequency subbands, and a strategy based on intuitionistic fuzzy set theory was proposed for the fusion of the high-frequency subbands. Finally, the inverse NSST was employed to reconstruct the image. In the second part, an intensity enhancement method was proposed for energy preservation. The proposed method was verified on 43 groups of images and compared with eight other fusion methods such as Principal Component Analysis (PCA) and Local Laplacian Filtering (LLF). The fusion results on two different categories of medical image fusion (Magnetic Resonance Imaging (MRI) with Positron Emission computed Tomography (PET), and MRI with Single-Photon Emission Computed Tomography (SPECT)) show that the proposed method achieves more competitive performance in both visual quality and objective evaluation indicators, including Mutual Information (MI), Spatial Frequency (SF), Q value, Average Gradient (AG), Entropy of Information (EI) and Standard Deviation (SD), and can improve the quality of medical image fusion.
    Medical image fusion based on edge-preserving decomposition and improved sparse representation
    PEI Chunyang, FAN Kuangang, MA Zheng
    2021, 41(7):  2092-2099.  DOI: 10.11772/j.issn.1001-9081.2020081303
    Aiming at the problems of artifacts and loss of details in multimodal medical image fusion, a two-scale multimodal medical image fusion framework using multiscale edge-preserving decomposition and sparse representation was proposed. Firstly, the source image was decomposed at multiple scales by utilizing an edge-preserving filter to obtain its smoothing and detail layers. Then, an improved sparse representation fusion algorithm was employed to fuse the smoothing layers; on this basis, an image-block-selection strategy was proposed to construct the dataset for the over-complete dictionary, the dictionary learning algorithm was used to train the joint dictionary, and a novel multi-norm-based activity level measurement method was introduced to select the sparse coefficients. The detail layers were merged by an adaptive weighted local regional energy fusion rule. Finally, the fused smoothing and detail layers were reconstructed across scales to obtain the fused image. Comparison experiments were conducted on medical images from three different imaging modalities. The results demonstrate that the proposed method preserves more salient edge features while improving contrast, and has advantages in both visual effect and objective evaluation compared with other multi-scale transform and sparse representation methods.
    Frontier and comprehensive applications
    Channel structure choice of closed-loop supply chain under uncertain demand and recovery
    ZHANG Meng, GUO Jianquan
    2021, 41(7):  2100-2107.  DOI: 10.11772/j.issn.1001-9081.2020101617
    Aiming at the optimal choice of sales channel structure in the closed-loop supply chain, and considering the uncertainty of market demand and the quality level of recycled products, four average gross profit models for closed-loop supply chain systems with four sales channel structures under government differentially weighted subsidy were constructed with the objective of maximizing gross profit. Firstly, the Fuzzy Chance Constrained Programming (FCCP) method was used to equivalently transform the fuzzy constraints into clear corresponding expressions. Then, the Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA) were used comparatively to solve numerical examples of the models. Finally, sensitivity analysis was performed on the parameters. The results show that the maximum difference ratio between the two algorithms is 0.018%, indicating that neither algorithm falls into a local optimum, which verifies the validity of the algorithms and the confidence of the models. Enterprises can formulate optimal recycling, production and sales strategies according to different confidence levels of the potential demands, choose the optimal channel structure and gradually increase gross profit.
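The PSO half of the comparison can be sketched with a textbook particle swarm on a stand-in objective; the inertia weight 0.7 and acceleration coefficients 1.5 are standard settings, and a simple sphere function replaces the paper's fuzzy chance-constrained profit objective:

```python
import random
random.seed(3)

def pso_minimize(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0):
    """Minimal particle swarm: each particle is pulled toward its own best
    position (cognitive term) and the swarm's best position (social term)."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso_minimize(sphere)        # converges near the minimum at the origin
```

The paper runs this kind of search and a GA on the same profit models and uses the tiny gap between their answers (0.018%) as evidence that neither is stuck in a local optimum.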
    Signal timing optimization model of dual-ring phase under condition of setting waiting area
    YANG Zhen, MA Jianxiao, WANG Baojie
    2021, 41(7):  2108-2112.  DOI: 10.11772/j.issn.1001-9081.2020081332
    Abstract ( )   PDF (909KB) ( )  
    References | Related Articles | Metrics
    In order to improve the driving efficiency of intersections with waiting areas, the effect of setting a waiting area was first equated to an increase in the lane green ratio. Then a signal timing optimization model for the intersection was developed based on the National Electrical Manufacturers Association (NEMA) standard dual-ring phase scheme, with the objective of minimizing the average vehicular delay. Next, a genetic algorithm for solving the model was designed that accounts for the ring-barrier constraint in the phase structure. Finally, the model and algorithm were applied to an example intersection. The results show that, compared to the signal timing scheme obtained by the Synchro software, the model obtains a scheme with a shorter cycle and lower average vehicular delay. The delay reduction of the proposed model ranges from 12.9% to 17.4% when only left-turn waiting areas are provided at the intersection, and from 17.5% to 25.5% when both left-turn and through-movement waiting areas are provided. Besides, the model is not sensitive to the value of the queue clearance rate, and obtains almost the same signal timing scheme at the minimum and maximum vehicular speeds.
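The core idea of equating a waiting area to extra effective green can be illustrated with the classical Webster uniform-delay term. This is an assumption for illustration: the paper's delay model and its dual-ring optimization are not reproduced here, and the traffic numbers below are invented.

```python
def webster_delay(cycle, green, flow, sat_flow):
    """Webster's uniform-delay term (s/veh) for one lane group.
    cycle, green in seconds; flow, sat_flow in veh/h."""
    lam = green / cycle                    # green ratio
    x = flow / (lam * sat_flow)            # degree of saturation
    return cycle * (1 - lam) ** 2 / (2 * (1 - lam * x))

# Treating the waiting area as a 10% gain in effective green (illustrative values):
base = webster_delay(cycle=90, green=30, flow=400, sat_flow=1800)
with_wa = webster_delay(cycle=90, green=33, flow=400, sat_flow=1800)
```

Raising effective green lowers the uniform delay, which is the mechanism the optimization model exploits when waiting areas are present.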
    Raw sugar demand forecasting model for sugar manufacturing enterprise based on modified Elman neural network
    LI Yangying, CHEN Zhijun, ZHANG Zihao, YOU Lan
    2021, 41(7):  2113-2120.  DOI: 10.11772/j.issn.1001-9081.2020061000
    Abstract ( )   PDF (1406KB) ( )  
    References | Related Articles | Metrics
    Sugar manufacturing enterprises use traditional algorithms to forecast raw sugar demand, which ignore the influence of time factors and industry characteristics, resulting in low accuracy. To address this problem, combined with the periodic characteristics of the supply and demand of raw materials for refining sugar, a temporal feature-correlated raw sugar demand forecasting model based on an Elman Neural Network improved by Modified Cuckoo Search (MCS) optimization was proposed, namely TMCS-ENN. Firstly, an adaptive learning rate formula was proposed to optimize the Elman Neural Network (ENN). Secondly, adaptive parasitic failure probability and adaptive step-length control variable formulas were introduced to obtain the MCS algorithm, which was used to optimize the weights and thresholds of the ENN, effectively improving the local search ability of the model and avoiding local optima. Finally, combining the time correlation and hysteresis of the raw material purchases of sugar manufacturing enterprises, data slices were designed on week granularity, and the ENN was trained with festivals and holidays as important features to obtain TMCS-ENN. Experimental results show that, with week as the time granularity, the forecasting accuracy of the proposed TMCS-ENN model reaches 93.89%. It can be seen that TMCS-ENN can meet the forecasting accuracy demand of sugar manufacturing enterprises and effectively improve their production efficiency.
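The base network in this model, the Elman network, differs from a plain feed-forward net in feeding the hidden state back as a context input. A minimal forward-pass sketch is given below; the MCS weight optimization and the adaptive learning rate from the paper are omitted, and the weight ranges are arbitrary.

```python
import math
import random

class Elman:
    """Minimal Elman network: the previous hidden state acts as context input."""

    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = random.Random(seed)
        rand_mat = lambda rows, cols: [[rng.uniform(-0.5, 0.5)
                                        for _ in range(cols)] for _ in range(rows)]
        self.Wxh = rand_mat(n_hid, n_in)   # input -> hidden
        self.Whh = rand_mat(n_hid, n_hid)  # context (previous hidden) -> hidden
        self.Why = rand_mat(n_out, n_hid)  # hidden -> output
        self.h = [0.0] * n_hid             # context starts at zero

    def step(self, x):
        new_h = []
        for i in range(len(self.h)):
            s = sum(w * v for w, v in zip(self.Wxh[i], x))
            s += sum(w * v for w, v in zip(self.Whh[i], self.h))  # recurrence
            new_h.append(math.tanh(s))
        self.h = new_h
        return [sum(w * v for w, v in zip(row, self.h)) for row in self.Why]
```

Feeding a weekly demand series one step at a time lets the context carry the periodic information the paper's week-granularity slicing is designed around.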
    Two-dimensional mapping of swarm robot based on random walk
    LU Guoqing, SUN Hao
    2021, 41(7):  2121-2127.  DOI: 10.11772/j.issn.1001-9081.2020081239
    Abstract ( )   PDF (1249KB) ( )  
    References | Related Articles | Metrics
    Robots need to obtain environmental map information quickly and accurately when autonomously exploring unknown environments. For the problems of efficient exploration and map construction in unknown environments, the random walk algorithm was applied to the exploration of swarm robots, which simulate Brownian motion and build maps of the searched area. Then, the Brownian motion algorithm was improved by setting a maximum rotation angle for the robot's random walk, preventing the robot from repeatedly searching the same region, so that the robot could explore more area in the same time and its search efficiency was improved. Finally, simulation experiments were carried out with a group of mobile robots equipped with lidar, and the influences of the maximum rotation angle increment, the number of robots and the number of movement steps on the searched area were analyzed.
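The improvement described above, capping the per-step heading change, can be sketched as a bounded-turn random walk. The cap value and step length below are arbitrary choices for illustration; collision avoidance and lidar mapping are out of scope.

```python
import math
import random

def bounded_random_walk(steps, max_turn=math.pi / 6, step_len=1.0, seed=42):
    """Random walk whose heading change per step is capped at max_turn,
    which discourages immediately revisiting the region just searched."""
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        heading += rng.uniform(-max_turn, max_turn)  # bounded rotation
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path
```

With a small max_turn the walk is locally persistent and spreads outward faster than unconstrained Brownian motion, covering more new area per step.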
    Path planning method of unmanned aerial vehicle based on chaos sparrow search algorithm
    TANG Andi, HAN Tong, XU Dengwu, XIE Lei
    2021, 41(7):  2128-2136.  DOI: 10.11772/j.issn.1001-9081.2020091513
    Abstract ( )   PDF (1479KB) ( )  
    References | Related Articles | Metrics
    Focusing on the issues of the large calculation amount and difficult convergence of Unmanned Aerial Vehicle (UAV) path planning, a path planning method based on the Chaos Sparrow Search Algorithm (CSSA) was proposed. Firstly, a two-dimensional task space model and a path cost model were established, and the path planning problem was transformed into a multi-dimensional function optimization problem. Secondly, the cubic chaotic map was used to initialize the population, and the Opposition-Based Learning (OBL) strategy was used to introduce elite particles, so as to enhance the diversity of the population and expand the search area. Then, the Sine Cosine Algorithm (SCA) was introduced, with a linearly decreasing strategy adopted to balance the exploitation and exploration abilities of the algorithm; when the algorithm stagnated, a Gaussian walk strategy was adopted to make it jump out of the local optimum. Finally, the performance of the improved algorithm was verified on 15 benchmark test functions and applied to the path planning problem. Simulation results show that CSSA has better optimization performance than the Particle Swarm Optimization (PSO) algorithm, Beetle Swarm Optimization (BSO) algorithm, Whale Optimization Algorithm (WOA), Grey Wolf Optimizer (GWO) and Sparrow Search Algorithm (SSA), and can quickly obtain a safe, feasible path with optimal cost that satisfies the constraints, which proves the effectiveness of the proposed method.
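The chaotic initialization plus OBL step can be sketched as below. Note the assumptions: the cubic map is taken in the Chebyshev form x' = 4x³ - 3x on [-1, 1] (one common choice; the paper's exact map may differ), and the full OBL scheme would keep only the fitter half of the combined population, which requires a fitness function and is omitted here.

```python
import random

def cubic_map_population(n, dim, lo, hi, seed=1):
    """Chaotic initialization (cubic map x' = 4x^3 - 3x on [-1, 1]) followed by
    opposition-based learning: each candidate is paired with its mirror image."""
    rng = random.Random(seed)
    pop = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        ind = []
        for _ in range(dim):
            x = 4 * x ** 3 - 3 * x             # cubic chaotic iteration
            ind.append(lo + (x + 1) / 2 * (hi - lo))  # map [-1, 1] -> [lo, hi]
        pop.append(ind)
    # OBL: opposite candidate of v is lo + hi - v in each dimension
    opposite = [[lo + hi - v for v in ind] for ind in pop]
    return pop + opposite  # in practice, keep the n fittest of these 2n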
    Attention-based object detection with millimeter wave radar-lidar fusion
    LI Chao, LAN Hai, WEI Xian
    2021, 41(7):  2137-2144.  DOI: 10.11772/j.issn.1001-9081.2020081334
    Abstract ( )   PDF (1710KB) ( )  
    References | Related Articles | Metrics
    To address the problem that occluded objects, distant objects and objects in extreme weather are missed when lidar alone is used for object detection in autonomous driving, an attention-based object detection method with millimeter wave radar-lidar feature fusion was proposed. Firstly, the scan frame data of the millimeter wave radar and the lidar were aggregated into their respective labeled frames, and the points of the two sensors were spatially aligned; then PointPillar was employed to encode both the millimeter wave radar and lidar data into pseudo images. Finally, the features of both sensors were extracted by intermediate convolution layers, their feature maps were fused by an attention mechanism, and the fused feature map was passed through a single-stage detector to obtain the detection results. Experimental results on the nuScenes dataset show that, compared to the basic PointPillar network, the proposed attention fusion algorithm achieves higher mean Average Precision (mAP), and performs better than concatenation, multiplication and addition fusion. The visualization results show that the proposed method is effective and improves the robustness of the network in detecting occluded objects, distant objects and objects in rain and fog.
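The attention fusion step, as opposed to plain concatenation or addition, can be sketched as a channel-attention weighting between the two sensor feature maps. This is a simplified stand-in for the paper's attention module, whose exact architecture is not specified here.

```python
import numpy as np

def attention_fuse(radar_feat, lidar_feat):
    """Channel-attention style fusion of two (C, H, W) feature maps:
    per-channel weights from global average pooling, softmax over the sensors."""
    stacked = np.stack([radar_feat, lidar_feat])      # (2, C, H, W)
    scores = stacked.mean(axis=(2, 3))                # (2, C): global average pool
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)        # softmax across the 2 sensors
    # Weighted sum: channels where one sensor responds strongly dominate the output
    return (weights[:, :, None, None] * stacked).sum(axis=0)
```

Unlike fixed add or multiply fusion, the weights adapt per channel, so a sensor that carries no signal for some channel (e.g. lidar in heavy fog) is automatically down-weighted there.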
    Simultaneous measurement of range and speed based on pulse position and amplitude modulation
    HUANG Shaowei, HUANG Wanlin, LEI Runlong, MAO Xuesong
    2021, 41(7):  2145-2149.  DOI: 10.11772/j.issn.1001-9081.2020081666
    Abstract ( )   PDF (1113KB) ( )  
    References | Related Articles | Metrics
    To deal with the problem that a Doppler laser radar employing traditional waveform modulation methods cannot obtain high resolution for both parameters when used in simultaneous range and speed measurement, a new measurement signal waveform modulated in both position and amplitude was proposed, which resolves the contradiction between the measurement precisions of range and speed and makes the two parameters independent in the measurement process. In addition, the feasibility of applying the method to range and speed measurement for intelligent driving vehicles in road environments was analyzed. Firstly, the difficulties of classical modulation methods in simultaneous range and speed measurement were discussed, based on which a scheme for simultaneous modulation of the transmit signal waveform in position and amplitude was designed, and the physical realizability of the proposed method was shown by combining the amplifying properties of the in-line optical fiber amplifier. Then, the frequency calculation method for the output heterodyne signal of a laser radar employing position and amplitude modulation, and the data accumulation method for the echo signal output by the laser radar receiver, were discussed, so as to measure range and speed independently. Finally, within the range of Doppler frequencies that can be generated by moving targets in road environments, simulations were performed to verify the feasibility of the proposed method and the independence of the two measured parameters, and the measurement precision was analyzed. Simulation results show that the simultaneous position and amplitude modulation scheme can effectively measure the range and speed of targets even when the Signal-to-Noise Ratio (SNR) of the laser radar receiver output signal is below 0 dB, and that the measurement processes of the two parameters are completely independent.
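The data accumulation idea, recovering a Doppler peak from a sub-0 dB signal by summing power spectra over repeated echoes, can be sketched as follows. The sampling rate, signal length and accumulation count are arbitrary illustrative values, not the paper's parameters.

```python
import numpy as np

def estimate_doppler(fs, f_dop, n=4096, snr_db=-5.0, accumulations=32, seed=0):
    """Estimate the Doppler frequency of a noisy heterodyne signal by
    non-coherently accumulating the power spectra of repeated echoes."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    spec = np.zeros(n // 2)
    noise_amp = 10 ** (-snr_db / 20)          # noise std for the requested SNR
    for _ in range(accumulations):
        sig = np.sin(2 * np.pi * f_dop * t) + noise_amp * rng.standard_normal(n)
        spec += np.abs(np.fft.rfft(sig))[: n // 2] ** 2   # accumulate power spectrum
    # Peak bin (skipping DC) converted back to frequency
    return (np.argmax(spec[1:]) + 1) * fs / n
```

Each accumulation adds the signal bin coherently in power while the noise bins average out, so the peak survives a per-sample SNR well below 0 dB.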
    Synthetic aperture radar ship detection method based on self-adaptive and optimal features
    HOU Xiaohan, JIN Guodong, TAN Lining, XUE Yuanliang
    2021, 41(7):  2150-2155.  DOI: 10.11772/j.issn.1001-9081.2020081187
    Abstract ( )   PDF (1428KB) ( )  
    References | Related Articles | Metrics
    In order to solve the problem of poor small target detection in Synthetic Aperture Radar (SAR) ship detection, a self-adaptive anchor single-stage ship detection method was proposed. Firstly, based on the Feature Selective Anchor-Free (FSAF) algorithm, the optimal feature fusion method was obtained by Neural Architecture Search (NAS) to make full use of image feature information. Secondly, a new loss function was proposed to address the imbalance of positive and negative samples while enabling the network to regress positions more accurately. Finally, the detection results were obtained by using Soft-NMS, which is better suited to ship detection, to filter the detection boxes. Several groups of comparison experiments were conducted on an open SAR ship detection dataset. Experimental results show that, compared with the original detection algorithm, the proposed method significantly reduces missed detections and false positives on small targets, and improves detection performance for inshore ships to a certain extent.
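The Soft-NMS filtering step is standard and can be sketched directly. The Gaussian-decay variant is shown; sigma and the score threshold are conventional defaults rather than the paper's settings.

```python
import math

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of overlapping boxes instead of
    discarding them outright. boxes are (x1, y1, x2, y2) tuples."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    idxs = sorted(range(len(boxes)), key=lambda i: -scores[i])
    scores = list(scores)
    keep = []
    while idxs:
        best = idxs.pop(0)          # highest-scoring remaining box
        keep.append(best)
        for i in idxs:
            # Overlapping boxes are down-weighted, not removed
            scores[i] *= math.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
        idxs = [i for i in idxs if scores[i] > score_thresh]
        idxs.sort(key=lambda i: -scores[i])
    return keep
```

For densely packed inshore ships this matters: a neighbor that heavily overlaps the top detection keeps a reduced score instead of being suppressed, which is why Soft-NMS loses fewer adjacent ships than hard NMS.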
    Multiple ring scan chains using the same test pin in round robin manner
    ZHANG Ling, KUANG Jishun
    2021, 41(7):  2156-2160.  DOI: 10.11772/j.issn.1001-9081.2020081665
    Abstract ( )   PDF (869KB) ( )  
    References | Related Articles | Metrics
    Test architecture design is a basic and key issue of Integrated Circuit (IC) testing, and designing an effective test architecture that meets the needs of ICs is of great importance for reducing chip cost, improving product quality and increasing product competitiveness. Therefore, a test architecture in which several ring scan chains use the same test pin in a round robin manner, namely RRR Scan, was proposed. In RRR Scan, the scan flip-flops are organized into multiple ring scan chains, which can work in stealth scan mode, ring shift scan mode and linear scan mode. The ring shift scan mode enables the reuse of test data, thus reducing the size of the test set; the stealth scan mode shortens the test data shifting path, thus significantly reducing the test shifting power consumption, so the architecture is a general test architecture with the characteristics of data reuse and low power consumption. In addition, in this architecture, physically adjacent scan cells can be placed into the same ring scan chain at little wiring cost. With the stealth scan mode, both the shifting length and the delay of test data can be reduced. Experimental results show that the shifting power consumption is reduced greatly by RRR Scan; for the S13207 circuit, the shifting power consumption is only 0.42% of that of linear scan.
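The data-reuse idea of the ring shift scan mode can be illustrated in a few lines: once a seed pattern is loaded into a ring chain, rotating the ring yields further patterns without shifting a new vector in from the test pin. This is only a behavioral sketch of the reuse principle, not RRR Scan's hardware design.

```python
def ring_shift_patterns(seed, rotations):
    """Generate test patterns from one loaded seed by rotating a ring scan
    chain (right rotation by one position per rotation)."""
    n = len(seed)
    patterns = []
    for r in range(rotations):
        k = r % n
        patterns.append(seed[n - k:] + seed[: n - k])  # rotate right by k bits
    return patterns
```

Every rotation produces a usable pattern at the cost of a single clock per bit of rotation instead of a full scan-in, which is where the test-set and shifting-power reductions come from.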
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn