
Table of Contents

    10 May 2020, Volume 40 Issue 5
    Artificial intelligence
    Review on deep learning-based pedestrian re-identification
    YANG Feng, XU Yu, YIN Mengxiao, FU Jiacheng, HUANG Bing, LIANG Fangxuan
    2020, 40(5):  1243-1252.  DOI: 10.11772/j.issn.1001-9081.2019091703
    Pedestrian Re-IDentification (Re-ID) is a hot issue in the field of computer vision and mainly concerns how to match a specific person captured by different cameras at different physical locations. Traditional Re-ID methods were mainly based on the extraction of low-level features, such as local descriptors, color histograms and human poses. In recent years, in view of the problems of traditional methods such as pedestrian occlusion and posture misalignment, pedestrian Re-ID methods based on deep learning, such as those using regions, attention mechanisms, posture and Generative Adversarial Network (GAN), were proposed, and the experimental results became significantly better than before. Therefore, the research on deep learning for pedestrian Re-ID was summarized and classified, and unlike previous reviews, the pedestrian Re-ID methods were divided into four categories for discussion. Firstly, the pedestrian Re-ID methods based on deep learning were summarized according to four types of methods: region, attention, posture and GAN. Then the performances of these methods on the mainstream datasets, measured by the mAP (mean Average Precision) and Rank-1 indicators, were analyzed. The results show that the deep learning-based methods can reduce model overfitting by enhancing the connection between local features and narrowing domain gaps. Finally, the development direction of pedestrian Re-ID research was forecast.
    Gradient-based deep network pruning algorithm
    WANG Zhongfeng, XU Zhiyuan, SONG Chunhe, ZHANG Hongyu, CAI Yingkai
    2020, 40(5):  1253-1259.  DOI: 10.11772/j.issn.1001-9081.2019081374

    Deep neural network models usually have a large number of redundant weight parameters. Running a deep network model requires a large amount of computing resources and storage space, which makes deep network models difficult to deploy on some edge devices and embedded devices. To resolve this issue, a Gradient-based Deep network Pruning (GDP) algorithm was proposed. The core idea of the GDP algorithm was to use the gradient as the basis for judging the importance of each weight: an adaptive method was used to find the threshold, and the weights whose corresponding gradients were smaller than the threshold were eliminated. The deep network model was retrained after pruning to restore the network performance. The experimental results show that the GDP algorithm reduces the computational cost by 35.3 percentage points with a precision loss of only 0.14 percentage points on the CIFAR-10 dataset. Compared with the state-of-the-art PFEC (Pruning Filters for Efficient ConvNets) algorithm, the GDP algorithm increases the network model accuracy by 0.13 percentage points and reduces the computational cost by 1.1 percentage points, indicating that the proposed algorithm has superior performance in terms of both compression and acceleration of deep networks.
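    As a rough illustration of gradient-magnitude pruning with an adaptively chosen threshold, the sketch below zeroes out the weights whose gradient magnitudes fall below a per-layer quantile. The quantile-based threshold search, the layer names and the target sparsity are illustrative assumptions rather than the exact GDP procedure.

```python
# Minimal sketch of gradient-magnitude pruning with an adaptive threshold.
import numpy as np

def prune_by_gradient(weights, grads, target_sparsity=0.5):
    """Zero out weights whose gradient magnitude falls below an
    adaptively chosen per-layer threshold."""
    masks = {}
    for name, w in weights.items():
        g = np.abs(grads[name])
        # Adaptive threshold: the value that removes `target_sparsity`
        # of the weights in this layer.
        threshold = np.quantile(g, target_sparsity)
        mask = (g >= threshold).astype(w.dtype)
        weights[name] = w * mask          # pruned weights
        masks[name] = mask                # kept for retraining
    return weights, masks

# Toy example with two hypothetical "layers".
rng = np.random.default_rng(0)
weights = {"conv1": rng.normal(size=(8, 3, 3, 3)), "fc": rng.normal(size=(10, 128))}
grads = {k: rng.normal(size=v.shape) for k, v in weights.items()}
pruned, masks = prune_by_gradient(weights, grads, target_sparsity=0.6)
print({k: float(1 - m.mean()) for k, m in masks.items()})  # achieved sparsity per layer
```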

    CNN model compression based on activation-entropy based layer-wise iterative pruning strategy
    CHEN Chengjun, MAO Yingchi, WANG Yichao
    2020, 40(5):  1260-1265.  DOI: 10.11772/j.issn.1001-9081.2019111977

    Since the existing pruning strategies for the Convolutional Neural Network (CNN) model differ from one another and have only moderate effects, an Activation-Entropy based Layer-wise Iterative Pruning (AE-LIP) strategy was proposed to reduce the parameter amount of the model while keeping the accuracy of the model within a controllable range. Firstly, combining the neuronal activation value and information entropy, a weight evaluation criterion based on activation-entropy was constructed, and the weight importance score was calculated. Secondly, the pruning was performed layer by layer: the weights were sorted according to the importance score and, together with the pruning number of each layer, used to filter out the weights to be pruned and set them to zero. Finally, the model was fine-tuned, and the above process was repeated until the iteration ended. The experimental results show that the activation-entropy based layer-wise iterative pruning strategy compresses the AlexNet model by 87.5% with the corresponding accuracy reduced by 2.12 percentage points, which is 1.54 percentage points higher than that of the magnitude-based weight pruning strategy and 0.91 percentage points higher than that of the correlation-based weight pruning strategy; the strategy compresses the VGG-16 model by 84.1% with the corresponding accuracy reduced by 2.62 percentage points, which is 0.62 and 0.27 percentage points higher than those of the two above strategies. It can be seen that the proposed strategy reduces the size of the CNN model effectively while ensuring the accuracy of the model, and is helpful for the deployment of CNN models on mobile devices with limited storage.
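    A minimal sketch of the layer-wise scoring and pruning step is given below. The concrete scoring formula (the weight magnitude weighted by the output neuron's mean activation and the entropy of its activation histogram) is an illustrative assumption; the paper's exact activation-entropy criterion may differ.

```python
# Minimal sketch of an activation-entropy importance score for layer-wise pruning.
import numpy as np

def activation_entropy(act, bins=16):
    """Shannon entropy of a neuron's activation histogram."""
    hist, _ = np.histogram(act, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def prune_layer(W, activations, prune_num):
    """W: (out, in) weight matrix; activations: (samples, out).
    Zero the `prune_num` weights with the lowest importance score."""
    mean_act = np.abs(activations).mean(axis=0)                                   # (out,)
    ent = np.array([activation_entropy(activations[:, j]) for j in range(W.shape[0])])
    score = np.abs(W) * (mean_act * ent)[:, None]                                 # (out, in)
    idx = np.argsort(score, axis=None)[:prune_num]                                # smallest scores
    W_pruned = W.copy()
    W_pruned.flat[idx] = 0.0
    return W_pruned

rng = np.random.default_rng(1)
W = rng.normal(size=(32, 64))
acts = np.maximum(rng.normal(size=(200, 32)), 0)        # ReLU-like activations
W_p = prune_layer(W, acts, prune_num=1024)
print("sparsity:", float((W_p == 0).mean()))
```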

    Feature selection algorithm based on new forest optimization algorithm
    XIE Qi, XU Xu, CHENG Gengguo, CHEN Heping
    2020, 40(5):  1266-1271.  DOI: 10.11772/j.issn.1001-9081.2019091614

    A new feature selection algorithm using the forest optimization algorithm was proposed, aiming at the problems of the traditional feature selection using forest optimization algorithm in the initialization, candidate forest generation and updating stages. In the algorithm, the Pearson correlation coefficient and the L1 regularization method were used to replace the random initialization strategy in the initialization stage; in the candidate forest generation stage, the good and bad trees were separated and the shortfall in their numbers was filled, so as to solve the problem that the sets of good and bad trees were incomplete; and in the updating stage, trees with the same accuracy as the optimal tree but different dimensions were added to the forest. In the experiments, with the same experimental data and experimental parameters, the proposed algorithm and the traditional feature selection using forest optimization algorithm were used to test small, medium and large dimension data respectively. The experimental results show that the proposed algorithm is better than the traditional feature selection using forest optimization algorithm in classification performance and dimension reduction ability on two medium and two large dimension datasets, which proves the effectiveness of the proposed algorithm in solving feature selection problems.

    Medical insurance fraud detection algorithm based on graph convolutional neural network
    YI Dongyi, DENG Genqiang, DONG Chaoxiong, ZHU Miaomiao, LYU Zhouping, ZHU Suisong
    2020, 40(5):  1272-1277.  DOI: 10.11772/j.issn.1001-9081.2019101766

    Aiming at the problems of insufficient fraud samples, expensive data labeling and the low accuracy of traditional Euclidean-space models, a new One-Class medical insurance fraud detection model based on Graph convolution and Variational Auto-Encoder (OCGVAE) was proposed. Firstly, a social network was established from patient visit records and the weight relationships between patients and doctors were calculated, and a 2-layer Graph Convolutional neural Network (GCN) taking the social network data as input was designed to reduce the data dimension of the social network. Secondly, a Variational Auto-Encoder (VAE) was designed to implement model training with only one class of fraud sample labels. Finally, a Logistic Regression (LR) model was designed to discriminate the data category. The experimental results show that the detection accuracy of the OCGVAE model reaches 87.26%, which is 16.1%, 70.2%, 31.7%, 36.5% and 27.6% higher than those of the One-Class Adversarial Net (OCAN), One-Class Gaussian Process (OCGP), One-Class Nearest Neighbor (OCNN), One-Class Support Vector Machine (OCSVM) and Semi-supervised GCN (Semi-GCN) algorithms, demonstrating that the proposed model effectively improves the accuracy of medical insurance fraud screening.

    Multi-scale quantum free particle optimization algorithm for solving travelling salesman problem
    YANG Yunting, WANG Peng
    2020, 40(5):  1278-1283.  DOI: 10.11772/j.issn.1001-9081.2019112019

    Aiming at the slow speed of current meta-heuristic algorithms when solving the Travelling Salesman Problem (TSP), a classic combinatorial optimization problem, a multi-scale adaptive quantum free particle optimization algorithm was proposed, inspired by the wave function in quantum theory. Firstly, the particles representing city sequences were randomly initialized in the feasible region as the initial search centers. Then, a new solution was obtained by taking each particle as the center, sampling positions with a uniform distribution and exchanging the city numbers at the sampled positions. Finally, according to the comparison of the new solution with the optimal solution of the previous iteration, the search scale was adaptively adjusted, and the iterative search was carried out at different scales until the termination condition of the algorithm was satisfied. The algorithm was compared with the Hybrid Particle Swarm Optimization (HPSO) algorithm, Simulated Annealing (SA), Genetic Algorithm (GA) and Ant Colony Optimization (ACO) algorithm on TSP. The experimental results show that the multi-scale quantum free particle optimization algorithm is suitable for solving combinatorial optimization problems, and increases the solving speed by over 50% on average compared with the currently better algorithms on the TSP datasets.
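    A minimal sketch of swap-based sampling around the current best tour with an adaptive search scale is shown below; the scale-update rule (refine on improvement, widen otherwise) and all numeric settings are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch of swap sampling around a tour with an adaptive search scale.
import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def sample_neighbor(tour, scale, rng):
    """Swap `scale` uniformly sampled pairs of city positions."""
    new = tour.copy()
    for _ in range(scale):
        i, j = rng.integers(0, len(tour), size=2)
        new[i], new[j] = new[j], new[i]
    return new

def optimize(dist, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    best = rng.permutation(n)
    best_len = tour_length(best, dist)
    scale = max(1, n // 4)                        # start with a coarse scale
    for _ in range(iters):
        cand = sample_neighbor(best, scale, rng)
        cand_len = tour_length(cand, dist)
        if cand_len < best_len:
            best, best_len = cand, cand_len
            scale = max(1, scale - 1)             # refine the search
        else:
            scale = min(n // 2, scale + 1)        # widen the search
    return best, best_len

rng = np.random.default_rng(42)
pts = rng.random((30, 2))                         # random 30-city instance
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = optimize(dist)
print("tour length:", round(float(length), 3))
```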

    Multi-task Logistic survival prediction method for time-dependent time-to-event data
    RUAN Canhua, LIN Jiaxiang
    2020, 40(5):  1284-1290.  DOI: 10.11772/j.issn.1001-9081.2019091673

    Time-to-event data are ubiquitous in clinical medicine research and include a large number of time-dependent risk factor variables. To effectively analyze time-dependent time-to-event data and to overcome the limitation of the parametric assumptions of survival models, a multi-task Logistic survival learning and prediction method was proposed. The survival prediction was transformed into a series of multi-task binary survival classification problems at various time points, and all observations of the time-dependent risk factors were used to estimate the cumulative risk. By learning all data of event samples and censored samples, the Logistic regression parameters were regularized. The time-dependent relationships between risk factors and time-to-event were evaluated, and the time-to-event was estimated according to the survival probability. Comparative experiments on multiple real clinical datasets demonstrate that the proposed multi-task prediction method is applicable to time-dependent data and can guarantee the accuracy and reliability of the prediction results.

    Greedy binary lion swarm optimization algorithm for solving multidimensional knapsack problem
    YANG Yan, LIU Shengjian, ZHOU Yongquan
    2020, 40(5):  1291-1294.  DOI: 10.11772/j.issn.1001-9081.2019091638

    The Multidimensional Knapsack Problem (MKP) is a typical multi-constraint combinatorial optimization problem. In order to solve this problem, a Greedy Binary Lion Swarm Optimization (GBLSO) algorithm was proposed. Firstly, with the help of a binary code transform formula, the positions of lion individuals were discretized to obtain the binary lion swarm algorithm. Secondly, an inverse moving operator was introduced to update the position of the lion king and redefine the positions of the lionesses and lion cubs. Thirdly, the greedy algorithm was fully utilized to repair infeasible solutions, so as to enhance the local search ability and speed up the convergence. Finally, simulations on 10 typical MKP instances were carried out to compare the GBLSO algorithm with the Discrete binary Particle Swarm Optimization (DPSO) algorithm and the Binary Bat Algorithm (BBA). The experimental results show that GBLSO is an effective new method for solving MKP, with good convergence efficiency, high optimization accuracy and good robustness.
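    A minimal sketch of the greedy repair step that makes a binary candidate feasible for MKP follows; the profit-density ranking used here is a common heuristic and an assumption, not necessarily the paper's exact greedy rule.

```python
# Minimal sketch of greedy repair for the multidimensional knapsack problem.
import numpy as np

def greedy_repair(x, profits, weights, capacities):
    """x: binary vector; weights: (m, n) resource matrix; capacities: (m,)."""
    x = x.copy().astype(int)
    density = profits / (weights / capacities[:, None]).sum(axis=0)
    # Drop low-density items until every resource constraint is satisfied.
    for j in np.argsort(density):
        if np.all(weights @ x <= capacities):
            break
        if x[j]:
            x[j] = 0
    # Greedily add high-density items back while feasibility is preserved.
    for j in np.argsort(density)[::-1]:
        if not x[j] and np.all(weights @ x + weights[:, j] <= capacities):
            x[j] = 1
    return x

rng = np.random.default_rng(3)
n, m = 20, 5
profits = rng.integers(10, 100, size=n)
weights = rng.integers(1, 20, size=(m, n))
capacities = weights.sum(axis=1) // 3
x0 = rng.integers(0, 2, size=n)              # possibly infeasible candidate
x = greedy_repair(x0, profits, weights, capacities)
print("feasible:", bool(np.all(weights @ x <= capacities)), "profit:", int(profits @ x))
```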

    Multi-branch neural network model based weakly supervised fine-grained image classification method
    BIAN Xiaoyong, JIANG Peiling, ZHAO Min, DING Sheng, ZHANG Xiaolong
    2020, 40(5):  1295-1300.  DOI: 10.11772/j.issn.1001-9081.2019111883

    Concerning the problem that traditional attention-based neural networks cannot jointly attend to local features and rotation-invariant features, a weakly supervised fine-grained image classification method based on a multi-branch neural network model was proposed. Firstly, the lightweight Class Activation Map (CAM) network was utilized to localize local regions with potential semantic information, and a residual network ResNet-50 with deformable convolution and an Oriented Response Network (ORN) with rotation-invariant coding were designed. Secondly, a pre-trained model was employed to initialize the feature networks respectively, and the original image and the above regions were input to fine-tune the model. Finally, the three intra-branch losses and the between-branch losses were combined to optimize the entire network, and classification and prediction were performed on the test set. The proposed method achieves classification accuracies of 87.7% and 90.8% on the CUB-200-2011 dataset and FGVC_Aircraft dataset respectively, which are 1.2 percentage points and 0.9 percentage points higher than those of the Multi-Attention Convolutional Neural Network (MA-CNN) method. On the Aircraft_2 dataset, the proposed method reaches 91.8% classification accuracy, which is 4.1 percentage points higher than that of ResNet-50. The experimental results show that the proposed method effectively improves the accuracy of weakly supervised fine-grained image classification.

    Microscopic image identification for small-sample Chinese medicinal materials powder based on deep learning
    WANG Yiding, HAO Chenyu, LI Yaoli, CAI Shaoqing, YUAN Yuan
    2020, 40(5):  1301-1308.  DOI: 10.11772/j.issn.1001-9081.2019091646

    Aiming at the problems that the wide variety of Chinese medicinal materials only provides small samples and that their vessels are difficult to classify, an improved convolutional neural network method based on multi-channel color spaces and an attention mechanism model was proposed. Firstly, multi-channel color spaces were used to merge the RGB color space with another color space into 6 channels as the network input, so that the network was able to learn characteristic information such as brightness, hue and saturation to make up for the insufficient samples. Secondly, an attention mechanism model was added to the network, in which the two pooling layers were tightly connected by the channel attention model, and multi-scale dilated convolutions were combined by the spatial attention model, so that the network focused on the key feature information in the small samples. On 8 774 vessel images of 34 Chinese medicinal material samples, the experimental results show that, compared with the original ResNet network, the multi-channel color space method and the attention mechanism model increase the accuracy by 1.8 percentage points and 3.1 percentage points respectively, and their combination increases the accuracy by 4.1 percentage points. It can be seen that the proposed method greatly improves the accuracy of small-sample classification.
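    A minimal sketch of building such a 6-channel input by stacking RGB with a second color space is shown below; HSV is used as the extra space purely for illustration, and the demo image is synthetic.

```python
# Minimal sketch of a 6-channel (RGB + HSV) network input.
import cv2
import numpy as np

def six_channel_input(bgr):
    """Stack RGB and HSV representations of an 8-bit BGR image into 6 channels."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)      # note: H is stored in [0, 180) for 8-bit images
    stacked = np.concatenate([rgb, hsv], axis=-1)   # H x W x 6
    return stacked.astype(np.float32) / 255.0       # rough normalization for the network input

# Synthetic stand-in for a microscopic vessel image.
demo = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
x = six_channel_input(demo)
print(x.shape)  # (64, 64, 6)
```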

    3D model recognition based on capsule network
    CAO Xiaowei, QU Zhijian, XU Lingling, LIU Xiaohong
    2020, 40(5):  1309-1314.  DOI: 10.11772/j.issn.1001-9081.2019101750

    In order to solve the problem of feature information loss caused by the introduction of a large number of pooling layers in traditional convolutional neural networks, based on the characteristic of the Capsule Network (CapsNet) that vector neurons are used to preserve spatial feature information, a network model named 3DSPNCapsNet (3D Small Pooling No dense Capsule Network) was proposed for recognizing 3D models. With the new network structure, more representative features were extracted while the model complexity was reduced, and based on the Dynamic Routing (DR) algorithm, a Dynamic Routing algorithm with Length information (DRL) was proposed to optimize the iterative calculation of the capsule weights. Experimental results on ModelNet10 show that, compared with 3DCapsNet (3D Capsule Network) and VoxNet, the proposed network achieves better recognition results, with an average recognition accuracy of 95% on the original test set. The recognition ability of the network for rotated 3D models was also verified: after the rotation training set is appropriately extended, the average recognition rate of the proposed network for models rotated by different angles reaches 81%. The experimental results show that 3DSPNCapsNet has a good ability to recognize 3D models and their rotated versions.

    Tampered image recognition based on improved three-stream Faster R-CNN
    XU Dai, YUE Zhang, YANG Wenxia, REN Xiao
    2020, 40(5):  1315-1321.  DOI: 10.11772/j.issn.1001-9081.2019081515

    A tampered image recognition system with better universality, based on a three-stream feature extraction convolutional neural network, was proposed to improve the recognition accuracy for three main tampering methods: stitching, scaling and rotating, and copying and pasting. Firstly, by comparing the similarity of feature sub-blocks according to local color-invariant image features, comparing the noise correlation coefficients of tampered region edges based on noise correlation, and calculating the standard deviation contrast of sub-blocks based on image resampling traces, the features of the RGB stream, noise stream and signal stream of the image were extracted separately. Then, through multilinear pooling, combined with an improved piecewise AdaGrad gradient algorithm, feature dimension reduction and adaptive parameter updating were realized. Finally, through network training and classification, the three main image tampering methods were identified and the corresponding tampered areas were located. In order to measure the performance of the model, experiments were carried out on the VOC2007 and CIFAR-10 datasets. The experimental results on about 9 000 images show that the proposed model can accurately identify and locate the three tampering methods, with recognition rates of 0.962, 0.956 and 0.935 respectively. Compared with the two-stream feature extraction method in the latest literature, the model has the recognition rates increased by 1.050%, 2.137% and 2.860% respectively. The proposed three-stream model enriches the image features extracted by the convolutional neural network and improves the training performance and recognition accuracy of the network. Meanwhile, controlling the decay of the parameter learning rate piecewise with the improved gradient algorithm reduces over-fitting and convergence oscillation and increases the convergence speed, realizing the optimized design of the algorithm.

    Data science and technology
    Outlier detection algorithm based on graph random walk
    DU Xusheng, YU Jiong, YE Lele, CHEN Jiaying
    2020, 40(5):  1322-1328.  DOI: 10.11772/j.issn.1001-9081.2019101708

    Outlier detection algorithms are widely used in various fields such as network intrusion detection and medical aided diagnosis. Local Distance-Based Outlier Factor (LDOF), Cohesiveness-Based Outlier Factor (CBOF) and Local Outlier Factor (LOF) are classic outlier detection algorithms, but they suffer from long execution time and low detection rate on large-scale and high-dimensional datasets. Aiming at these problems, an outlier detection algorithm Based on Graph Random Walk (BGRW) was proposed. Firstly, the number of iterations, the damping factor and the outlier degree of every object in the dataset were initialized. Then, the transition probability of the walker between objects was derived from the Euclidean distance between the objects, and the outlier degree of every object in the dataset was calculated iteratively. Finally, the objects with the highest outlier degree were output as outliers. On UCI (University of California, Irvine) real datasets and synthetic datasets with complex distributions, comparisons between BGRW and the LDOF, CBOF and LOF algorithms in terms of detection rate, execution time and false positive rate were carried out. The experimental results show that BGRW is able to decrease the execution time and false positive rate, and has a higher detection rate.
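    A minimal sketch of a damped random walk over the object graph is given below. The transition probabilities proportional to 1/(1 + Euclidean distance) and the outlier degree defined as the inverse of the stationary visit probability are illustrative assumptions about how such a walk can score outliers, not the exact BGRW formulation.

```python
# Minimal sketch of a damped random walk for outlier scoring.
import numpy as np

def random_walk_outlier(X, damping=0.85, iters=100):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    sim = 1.0 / (1.0 + d)
    np.fill_diagonal(sim, 0.0)                      # no self-transitions
    P = sim / sim.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
    n = len(X)
    pi = np.full(n, 1.0 / n)                        # initial visit probabilities
    for _ in range(iters):
        pi = (1 - damping) / n + damping * (pi @ P)
    return 1.0 / pi                                 # rarely visited => high outlier degree

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),     # dense cluster
               np.array([[8.0, 8.0]])])             # one obvious outlier
scores = random_walk_outlier(X)
print("top outlier index:", int(np.argmax(scores)))  # expected: 100
```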

    Time series anomaly detection method based on autoencoder and HMM
    HUO Weigang, WANG Huifang
    2020, 40(5):  1329-1334.  DOI: 10.11772/j.issn.1001-9081.2019091631

    To solve the issue that the existing symbolization methods in anomaly detection based on the Hidden Markov Model (HMM) cannot represent the original time series well, an Autoencoder and HMM-based Anomaly Detection (AHMM-AD) method was proposed. Firstly, the time series samples were segmented by a sliding window, several segmented sample sets were formed according to the segment positions, and an autoencoder for each position was trained on the corresponding segmented sample set of the normal time series. Then, the low-dimensional feature representation of each segmented time series sample was obtained by using the autoencoder, and the time series sample sets were symbolized through K-means clustering of the low-dimensional feature representation vectors. Finally, the HMM was generated from the symbol sequence set of the normal time series, and anomaly detection was carried out according to the output probability values of the test samples on the established HMM. The experimental results on multiple common benchmark datasets show that AHMM-AD improves the accuracy, recall and F1 value by 0.172, 0.477 and 0.313 respectively compared with the HMM-based time series anomaly detection model, and by 0.108, 0.450 and 0.319 respectively compared with the autoencoder-based time series anomaly detection model. The experimental results illustrate that the AHMM-AD method can extract the nonlinear features in time series, solve the problem that time series cannot be well represented during the symbolization process of existing HMM-based time series modeling, and improve the performance of time series anomaly detection.
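    A minimal sketch of the symbolization step (sliding-window segmentation, low-dimensional embedding, K-means symbols) is shown below; PCA stands in for the trained autoencoder, and the window length and number of symbols are illustrative choices.

```python
# Minimal sketch of turning a time series into a symbol sequence for an HMM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def symbolize(series, window=32, step=16, dim=4, n_symbols=8, seed=0):
    segments = np.array([series[i:i + window]
                         for i in range(0, len(series) - window + 1, step)])
    embedded = PCA(n_components=dim, random_state=seed).fit_transform(segments)
    symbols = KMeans(n_clusters=n_symbols, n_init=10, random_state=seed).fit_predict(embedded)
    return symbols                                  # one symbol per segment

t = np.linspace(0, 20 * np.pi, 2000)
series = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(symbolize(series)[:20])                       # symbol sequence that would feed the HMM
```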

    Stream data anomaly detection method based on long short-term memory network and sliding window
    QIU Yuan, CHANG Xiangmao, QIU Qian, PENG Cheng, SU Shanting
    2020, 40(5):  1335-1339.  DOI: 10.11772/j.issn.1001-9081.2019111970

    Aiming at the characteristics of large volume, rapid generation and concept drift of current stream data, a stream data anomaly detection method based on a Long Short-Term Memory (LSTM) network and sliding windows was proposed. Firstly, the LSTM network was used for data prediction, and the difference between the predicted value and the actual value was calculated. For each datum, an appropriate sliding window was selected, distribution modeling was performed on all the differences in the sliding window interval, and the anomaly probability of the datum was calculated according to the probability density of its difference in the current distribution. The LSTM network was not only able to predict data, but also able to learn while predicting, updating and adjusting the network in real time to ensure the validity of the model. The use of sliding windows made the assignment of anomaly scores more reasonable. Finally, simulation data constructed on the basis of real data were used for the experiments. The experimental results verify that the average Area Under Curve (AUC) value of the proposed method in a low-noise environment is 0.187 and 0.05 higher than those of direct difference detection and the Abnormal data Distribution Modeling (ADM) method, respectively.
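    A minimal sketch of the sliding-window scoring step is given below: the recent prediction errors are modelled with a Gaussian, and points whose error density is low receive high anomaly scores. The LSTM predictor is stubbed out with a trivial one-step-behind prediction, and the Gaussian model and the score definition are illustrative assumptions.

```python
# Minimal sketch of sliding-window anomaly scoring on prediction errors.
import numpy as np

def anomaly_scores(actual, predicted, window=50):
    errors = actual - predicted
    scores = np.zeros_like(errors)
    for t in range(len(errors)):
        w = errors[max(0, t - window):t + 1]        # errors inside the current window
        mu, sigma = w.mean(), w.std() + 1e-8
        density = np.exp(-0.5 * ((errors[t] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        scores[t] = 1.0 / (1.0 + density)           # low density => high anomaly score
    return scores

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 12 * np.pi, 600)) + 0.05 * rng.normal(size=600)
x[400] += 1.5                                       # injected anomaly
pred = np.roll(x, 1)                                # stand-in for LSTM predictions
print("most anomalous index:", int(np.argmax(anomaly_scores(x, pred))))  # near 400
```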

    Cyber security
    System wide information management emergency response mechanism based on subscribe/publish service
    WU Zhijun, WANG Hang
    2020, 40(5):  1340-1347.  DOI: 10.11772/j.issn.1001-9081.2019091699

    System Wide Information Management (SWIM) is a distributed, large-scale network system that provides uninterrupted aviation information sharing and transmission services to air traffic management departments, airports and airlines in real time. In order to guarantee the continuity of SWIM services, an emergency response mechanism for SWIM based on subscribe/publish service was studied. Firstly, by monitoring various performance indicators of the SWIM network in real time, a network survivability evaluation method based on an improved fuzzy analytic hierarchy process was proposed. Secondly, when the network survivability index fell below the boundary value of the parameter, the corresponding information was published to the subscribers, and each subscriber determined whether to perform service migration. Finally, an Emergency Response Model based on Subscribe/Publish service (ERMSP) for SWIM was proposed for natural disasters and Distributed Denial of Service (DDoS) attacks; the model is based on subscribe, publish and trust management mechanisms. Simulation results show that, with real-time monitoring of network performance indicators and deployment of ERMSP, the resistance is improved by 8.9% and the business continuity is improved by 18.2%, which can realize the emergency response of SWIM.

    PS-MIFGSM: focus image adversarial attack algorithm
    WU Liren, LIU Zhenghao, ZHANG Hao, CEN Yueliang, ZHOU Wei
    2020, 40(5):  1348-1353.  DOI: 10.11772/j.issn.1001-9081.2019081392

    Aiming at the problem that current mainstream adversarial attack algorithms reduce the invisibility of the attack by perturbing the global features of the image, an untargeted attack algorithm named PS-MIFGSM (Perceptual-Sensitive Momentum Iterative Fast Gradient Sign Method) was proposed. Firstly, the areas of the image that the Convolutional Neural Network (CNN) focuses on in the classification task were captured by using the Grad-CAM algorithm. Then, MI-FGSM (Momentum Iterative Fast Gradient Sign Method) was used to attack the classification network to generate the adversarial disturbance, and the disturbance was applied only to the focus areas of the image while the non-focus areas remained unchanged, thereby generating a new adversarial sample. In the experiments, based on three image classification models Inception_v1, Resnet_v1 and Vgg_16, the effects of PS-MIFGSM and MI-FGSM on single-model and ensemble-model attacks were compared. The results show that PS-MIFGSM can effectively reduce the difference between the real sample and the adversarial sample while keeping the attack success rate unchanged.
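    A minimal sketch of restricting the perturbation to the focus region is shown below: pixels where the (thresholded) Grad-CAM map is active receive the clipped perturbation and all other pixels are left untouched. Both the CAM map and the MI-FGSM perturbation are stubbed with random data here.

```python
# Minimal sketch of applying an adversarial perturbation only inside a focus mask.
import numpy as np

def masked_adversarial(image, perturbation, cam, cam_threshold=0.5, eps=8 / 255):
    """image, perturbation: HxWxC in [0,1]; cam: HxW attention map in [0,1]."""
    mask = (cam >= cam_threshold).astype(image.dtype)[..., None]   # focus region only
    adv = image + np.clip(perturbation, -eps, eps) * mask
    return np.clip(adv, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3)).astype(np.float32)
pert = (8 / 255) * np.sign(rng.normal(size=img.shape)).astype(np.float32)  # MI-FGSM stand-in
cam = rng.random((224, 224)).astype(np.float32)                            # Grad-CAM stand-in
adv = masked_adversarial(img, pert, cam)
changed = np.any(adv != img, axis=-1).mean()
print(f"fraction of pixels perturbed: {changed:.2f}")   # roughly the CAM coverage
```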

    Lossy compression algorithm for encrypted binary images using Markov random field
    LI Tianzheng, WANG Chuntao
    2020, 40(5):  1354-1363.  DOI: 10.11772/j.issn.1001-9081.2019101740

    Although there are many compression methods for binary images, they cannot be directly applied to compress encrypted binary images. In scenarios such as cloud computing and distributed computing, how to perform lossy compression efficiently on encrypted binary images remains a challenge, and there are few researches focusing on it. Aiming at this problem, a lossy compression algorithm for encrypted binary images using Markov Random Field (MRF) was proposed. The MRF was used to characterize the spatial statistics of the binary image, and the MRF together with the decompressed pixels was used to infer the pixels discarded during compression of the encrypted binary image. In the proposed algorithm, a stream cipher was used by the sender to encrypt the binary image, a subsampling method with uniform blocks and random sampling within each block together with Low-Density Parity-Check (LDPC)-based encoding was employed by the cloud server to compress the encrypted binary image, and a joint factor graph including decoding, decryption and MRF-based reconstruction was constructed by the receiver to realize the lossy reconstruction of the binary image. The experimental results show that the proposed algorithm achieves desirable compression efficiency, with the Bit Error Rate (BER) of the lossy reconstructed binary image smaller than 5% when the compression rate is 0.2 to 0.4 bpp (bit per pixel). Compared with the international compression standard JBIG2 (Joint Bi-level Image experts Group version 2) applied to the original unencrypted binary images, the proposed algorithm obtains comparable compression efficiency. These results fully demonstrate the feasibility and effectiveness of the proposed algorithm.

    Location based service location privacy protection method based on location security in augmented reality
    YANG Yang, WANG Ruchuan
    2020, 40(5):  1364-1368.  DOI: 10.11772/j.issn.1001-9081.2019111982

    The rapid development of Location Based Service (LBS) and Augmented Reality (AR) technology leads to the hidden danger of user location privacy leakage. After analyzing the advantages and disadvantages of existing location privacy protection methods, a location privacy protection method based on location security was proposed. The zone security degree and the camouflage region were introduced into the method, and zone security was defined as a metric indicating whether a zone needs protection. The zone security degree of insecure zones (zones that need to be protected) was set to 1, while that of secure zones (zones that do not need to be protected) was set to 0. The location security degree was then calculated from the zone security degree and the recognition levels. Experimental results show that, compared with the method without location security, this method can reduce the average location error and enhance the average security, thereby effectively protecting user location privacy and increasing the service quality of LBS.

    Advanced computing
    Design and implementation of Lite register model
    PAN Guoteng, OU Guodong, CHAO Zhanghu, LI Mengjun
    2020, 40(5):  1369-1373.  DOI: 10.11772/j.issn.1001-9081.2019091674

    Aiming at the problem that the growing scale of integrated circuits and the increasing number of on-chip registers make verification more difficult, a lightweight register model was proposed. Firstly, a concise underlying structure was designed and combined with parameterized settings to reduce the memory consumption of the register model at runtime. Then the register verification requirements at different levels, such as module level and system level, were analyzed, and the SystemVerilog language was used to implement the various functions required for verification. Finally, built-in test cases and an automatic register model generation tool were developed to reduce the setup time of the verification environment in which the register model is used. The experimental results show that the proposed register model consumes only 21.65% of the runtime memory of the Universal Verification Methodology (UVM) register model; in terms of function, the proposed register model can be applied to both traditional UVM verification environments and non-UVM verification environments, and the read-write property, reset value, backdoor access path and other functions of 25 types of registers are checked. This lightweight register model has good universality and flexibility in engineering practice, meets the needs of register verification, and can effectively improve the efficiency of register verification.

    Fuzzy membership degree based virtual machine placement algorithm in cloud environment
    GUO Shujie, LI Zhihua, LIN Kaiqing
    2020, 40(5):  1374-1381.  DOI: 10.11772/j.issn.1001-9081.2019081408

    Virtual machine placement is one of the core problems of resource scheduling in cloud data centers. It has an important impact on the performance, resource utilization and energy consumption of the data center. In order to optimize data center energy consumption, improve resource utilization and ensure Quality of Service (QoS), a fuzzy membership degree based virtual machine placement algorithm was proposed. Firstly, by combining the overload probability of physical hosts with the placement fitness relationship between virtual machines and physical hosts, a new distance measurement method was proposed. Then, according to the fuzzy membership function, the fitness fuzzy membership matrix between virtual machines and physical hosts was calculated. Finally, with an energy-aware mechanism, a local search was performed in the fuzzy membership matrix to obtain the optimal placement scheme for the virtual machines to be migrated. Simulation results show that the proposed algorithm can reduce the energy consumption of the cloud data center, improve resource utilization and ensure QoS.

    Adaptive distribution based quantum-behaved particle swarm optimization algorithm for engineering constrained optimization problem
    SHI Xiaoqian, CHEN Qidong, SUN Jun, MAO Zhongjie
    2020, 40(5):  1382-1388.  DOI: 10.11772/j.issn.1001-9081.2019091577

    Aiming at nonlinear design optimization problems with multiple constraints in the field of engineering shape design, an Adaptive Gaussian Quantum-behaved Particle Swarm Optimization (AG-QPSO) algorithm was proposed. By adjusting the Gaussian distribution adaptively, the AG-QPSO algorithm was able to have a strong global search ability at the initial stage of the search process and an increasingly strong local search ability as the search continued, so as to meet the demands of different stages of the search process. In order to verify the effectiveness of the algorithm, 50 rounds of independent experiments were carried out on two engineering constrained optimization problems: pressure vessel design and tension spring design. The experimental results show that the AG-QPSO algorithm achieves an average result of 5 890.931 5 and an optimal result of 5 885.332 8 on the pressure vessel design problem, and an average result of 0.010 96 and an optimal result of 0.010 96 on the tension spring design problem, which are better than the results of existing algorithms such as the standard Particle Swarm Optimization (PSO) algorithm, Quantum-behaved Particle Swarm Optimization (QPSO) algorithm and Gaussian Quantum-behaved Particle Swarm Optimization (G-QPSO) algorithm. At the same time, the small variance of the results obtained by the AG-QPSO algorithm indicates that the algorithm is very robust.

    Network and communications
    Content distribution acceleration strategy in mobile edge computing
    LIU Xing, YANG Zhen, WANG Xinjun, ZHU Heng
    2020, 40(5):  1389-1391.  DOI: 10.11772/j.issn.1001-9081.2019091679

    Focusing on the content distribution acceleration problem in Mobile Edge Computing (MEC), and considering the influence of the MEC server storage space limitation on content caching, an Interest-based Content Distribution Acceleration Strategy (ICDAS) was proposed, with the object acquisition delay of mobile users as the optimization goal. Considering the MEC server storage space, the interests of mobile user groups in different objects and the file sizes of the objects, objects were selectively cached on MEC servers, and the cached objects were updated in time to satisfy the content requirements of mobile user groups as far as possible. The experimental results show that the proposed strategy has good convergence performance, and its cache hit ratio is relatively stable and significantly better than those of existing strategies. When the system runs stably, this strategy can reduce the object data acquisition delay for users by 20% compared with existing strategies.
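    A minimal sketch of one plausible cache-selection step is shown below: candidate objects are ranked by interest per unit size and cached greedily until the storage budget is exhausted. The interest/size ranking criterion and all numbers are illustrative assumptions, not the paper's exact utility.

```python
# Minimal sketch of greedy, interest-driven cache selection under a storage budget.
import numpy as np

def select_cache(interest, size, capacity):
    """interest, size: arrays over candidate objects; returns indices to cache."""
    order = np.argsort(interest / size)[::-1]       # highest value density first
    cached, used = [], 0.0
    for j in order:
        if used + size[j] <= capacity:
            cached.append(int(j))
            used += size[j]
    return cached

rng = np.random.default_rng(5)
interest = rng.random(50)                           # popularity of each object
size = rng.uniform(1, 20, size=50)                  # object sizes (MB)
cached = select_cache(interest, size, capacity=100) # 100 MB of MEC storage
print("cached objects:", cached[:10], "...")
```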

    User association mechanism based on evolutionary game
    WANG Yueping, XU Tao
    2020, 40(5):  1392-1396.  DOI: 10.11772/j.issn.1001-9081.2019112024

    User association is the problem of a wireless terminal choosing which serving base station to access. It can be seen as the first step in wireless resource management, has an important impact on network performance, and plays a very important role in achieving load balancing, interference control, and the improvement of spectrum and energy efficiency. Aiming at the characteristics of multi-layer heterogeneous networks consisting of macro base stations and full-duplex small base stations, a decoupled multiple-access mechanism was considered, which allows a terminal to access different (and multiple) base stations in the uplink and downlink so as to improve performance. On this basis, the user association problem with decoupled uplink and downlink multiple access in heterogeneous networks was modeled as an evolutionary game, in which the users act as players competing for resources, the base station access choices are the strategies, and every user seeks to maximize its own utility through its strategy choice. Besides, a low-complexity self-organized user association algorithm was designed based on evolutionary game theory and reinforcement learning. In the algorithm, each user was able to adjust its strategy according to the payoff of the current strategy choice and eventually reach an equilibrium state, realizing user fairness. Finally, extensive simulations were performed to verify the effectiveness of the proposed method.

    Opportunistic network message forwarding algorithm based on time-effectiveness of encounter probability and repeated diffusion perception
    GE Yu, LIANG Jing
    2020, 40(5):  1397-1402.  DOI: 10.11772/j.issn.1001-9081.2019081495

    In order to select more reasonable relay nodes for message transmission and improve the efficiency of message delivery in opportunistic networks, a message forwarding utility was designed and a corresponding message copy forwarding algorithm was proposed. Firstly, based on the historical encounter information of nodes, the indirect encounter probability of nodes and its time-effectiveness were analyzed, and a time-effectiveness indicator was proposed to evaluate the value of the encounter information. Secondly, combined with the similarity of node motion, the problem of repeated message diffusion was analyzed, and a node movement deviation indicator was proposed to evaluate the possibility of repeated message diffusion. Simulation results show that, compared with the Epidemic, PRoPHET, MaxProp and SAW (Spray And Wait) algorithms, the proposed algorithm has better performance in delivery success rate, overhead and delay.

    Virtual reality and multimedia computing
    Virtual-real registration method of natural features based on binary robust invariant scalable keypoints and speeded up robust features
    ZHOU Xiang, TANG Liyu, LIN Ding
    2020, 40(5):  1403-1408.  DOI: 10.11772/j.issn.1001-9081.2019091621

    Concerning the problem that the accuracy and real-time performance of vision-based virtual-real registration in Augmented Reality (AR) are greatly affected by changes of illumination, occlusion and perspective, which easily leads to registration failure, a virtual-real registration method of natural features based on the Binary Robust Invariant Scalable Keypoints-Speeded Up Robust Features (BRISK-SURF) algorithm was proposed. Firstly, the Speeded Up Robust Features (SURF) feature extractor was used to detect the feature points. Then, the Binary Robust Invariant Scalable Keypoints (BRISK) descriptor was used to describe the feature points in binary form, and the feature points were matched accurately and efficiently using the Hamming distance. Finally, the virtual-real registration was realized according to the homography relationship between images. Experiments were performed from the aspects of image feature matching and virtual-real registration. The results show that the average precision of the BRISK-SURF algorithm is basically the same as that of the SURF algorithm and about 25% higher than that of the BRISK algorithm, and the average recall of BRISK-SURF is about 10% higher than that of the BRISK algorithm; the result of the virtual-real registration method based on BRISK-SURF is close to the reference standard data, with high precision and good real-time performance. The experimental results illustrate that the proposed method has high recognition accuracy, registration precision and real-time performance for images with different illuminations, occlusions and perspectives. Besides, an interactive AR-based tourist resource presentation and experience system is realized using the proposed method.

    Interactive water flow heating simulation based on smoothed particle hydrodynamics method
    WANG Jiangkun, HE Kunjin, CAO Hongfei, WANG Jinqiang, ZHANG Yan
    2020, 40(5):  1409-1414.  DOI: 10.11772/j.issn.1001-9081.2019101734

    To solve the problems of difficult interaction and low efficiency in traditional water flow heating simulation, a thermal motion simulation method based on Smoothed Particle Hydrodynamics (SPH) was proposed to interactively control the water flow heating process. Firstly, the continuous water flow was transformed into particles based on the SPH method, the particle group was used to simulate the movement of the water flow, and the particle motion was confined to the container by collision detection. Then, the water particles were heated by a heat conduction model with Dirichlet boundary conditions, and the motion state of the particles was updated according to their temperature in order to simulate the thermal motion of the water flow during heating. Finally, editable system parameters and constraint relationships were defined, and the heating and motion processes of water flow under multiple conditions were simulated through human-computer interaction. Taking the heating simulation of a solar water heater as an example, the interactivity and efficiency of the SPH method in solving the heat conduction problem were verified by modifying a few parameters to control the heating of the water heater, which facilitates the application of interactive water flow heating in other virtual scenes.

    Stereo matching algorithm based on image segmentation
    ZHANG Yifei, LI Xinfu, TIAN Xuedong
    2020, 40(5):  1415-1420.  DOI: 10.11772/j.issn.1001-9081.2019101771

    Aiming at the inaccurate matching of weak-texture and pure-color regions in stereo matching and the long running time of image segmentation algorithms, a stereo matching algorithm fused with image segmentation was proposed. Firstly, the initial image was smoothed with a Gaussian filter and processed with the Sobel operator to obtain the edge feature map of the image. Secondly, the red, green and blue channel values of the original image were binarized using the Otsu method and then re-fused to obtain the segmentation template map. Finally, the obtained grayscale map, edge feature map and segmentation template map were applied in the disparity calculation and disparity optimization processes to compute the disparity map. Compared with the Sum of Absolute Differences (SAD) algorithm, the proposed algorithm improves the accuracy by 14.23 percentage points on average, with the time cost per 10 000 pixels increased by 7.16 ms. The experimental results show that the proposed algorithm can obtain smoother matching results in pure-color, weak-texture and disparity-discontinuity regions, and it can automatically calculate the threshold and segment the image faster.
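    A minimal sketch of the underlying window-based SAD disparity computation on rectified grayscale images is given below; the segmentation template and edge maps described above would normally be folded into the matching cost but are omitted here, and the synthetic image pair is only for illustration.

```python
# Minimal sketch of window-based SAD disparity computation.
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch_l - right[y - half:y + half + 1,
                                            x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))       # disparity with the smallest SAD cost
    return disp

rng = np.random.default_rng(0)
right = rng.random((40, 80)).astype(np.float32)
left = np.roll(right, 5, axis=1)                     # synthetic 5-pixel horizontal shift
d = sad_disparity(left, right, max_disp=16)
print("median disparity:", int(np.median(d[3:-3, 25:-3])))   # expected: 5
```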

    Illumination calculation for large-scale scene based on improved voxel cone tracking
    YU Guangji, LUO Weitai, WU Xiaoyun
    2020, 40(5):  1421-1424.  DOI: 10.11772/j.issn.1001-9081.2019091574

    Aiming at the large amount of global illumination calculation required for large-scale scene rendering, a global illumination calculation algorithm based on improved voxel cone tracking was proposed. Firstly, a cascading texture structure was used to improve the simple voxel structure, realizing fast and efficient storage of anisotropically filtered scene illumination information, so as to reduce the memory cost. Then, the direct illumination attenuation was calculated based on normal-weighted accumulation in order to alleviate the normal consistency problem. Finally, the cone filtering was improved by the cascading texture structure, which realized dynamic voxel lookup, avoided traversal of the entire storage structure, and achieved effective calculation of the global illumination. The experimental results show that the improved algorithm can reduce the memory footprint and improve the rendering speed while maintaining a rendering effect similar to that of the classic voxel cone tracking algorithm.

    Hyperspectral band selection based on multi-kernelized fuzzy rough set and grasshopper optimization algorithm
    ZHANG Wu, CHEN Hongmei
    2020, 40(5):  1425-1430.  DOI: 10.11772/j.issn.1001-9081.2019101769

    Band selection can effectively reduce the redundancy of hyperspectral data and provide effective support for subsequent classification. The multi-kernel fuzzy rough set model is able to analyze numerical data with uncertainty and approximate descriptions, and the grasshopper optimization algorithm can solve optimization problems with strong exploration and exploitation capabilities. Therefore, the multi-kernel fuzzy rough set model was introduced into hyperspectral uncertainty analysis modeling, the grasshopper optimization algorithm was used to select the band subset, and a hyperspectral band selection algorithm based on the multi-kernel fuzzy rough set and the grasshopper optimization algorithm was thus proposed. Firstly, a multi-kernel operator was used to measure similarity in order to improve the adaptability of the model to the data distribution. The correlation measure of bands based on the kernel fuzzy rough set was determined, and the correlation between bands was measured by the lower approximation of the distribution of ground objects at different pixels in the fuzzy rough set. Then, the band dependence, band information entropy and band correlation were considered comprehensively to define the fitness function of the band subset. Finally, with J48 and K-Nearest Neighbor (KNN) adopted as the classifiers, the proposed algorithm was compared with the Band Correlation Analysis (BCA) and Normalized Mutual Information (NMI) algorithms in terms of classification performance on the commonly used Indian Pines agricultural hyperspectral dataset. The experimental results show that the proposed algorithm achieves an overall average classification accuracy 2.46 and 1.54 percentage points higher respectively when fewer bands are selected.

    Image generation based on semantic labels and noise prior
    ZHANG Susu, NI Jiancheng, ZHOU Zili, HOU Jie
    2020, 40(5):  1431-1439.  DOI: 10.11772/j.issn.1001-9081.2019101757

    Existing generative models have difficulty in directly generating high-resolution images from complex semantic labels. Thus, a Generative Adversarial Network based on Semantic Labels and Noise Prior (SLNP-GAN) was proposed. Firstly, the semantic labels (including information on shape, position and category) were directly used as input, a global generator was used to encode them, the coarse-grained global attributes were learned by combining the noise prior, and low-resolution images were generated. Then, with an attention mechanism, a local refinement generator was used to query the high-resolution sub-labels corresponding to the sub-regions of the low-resolution images, fine-grained information was obtained, and complex images with clear textures were thus generated. Finally, the improved Adam with Momentum (AMM) algorithm was introduced to optimize the adversarial training. The experimental results show that, compared with the existing method text2img, the proposed method has the Pixel Accuracy (PA) increased by 23.73% and 11.09% on the COCO_Stuff and ADE20K datasets respectively; in comparison with the Adam algorithm, the AMM algorithm doubles the convergence speed with a much smaller loss amplitude. These results prove that SLNP-GAN can efficiently obtain global features as well as local textures and generate fine-grained, high-quality images.

    Bionic image enhancement algorithm based on top-bottom hat transformation
    YU Tianhe, LI Yuzuo, LAN Chaofeng
    2020, 40(5):  1440-1445.  DOI: 10.11772/j.issn.1001-9081.2019081496

    In view of the low contrast, poor detail and low color saturation of low-illumination images, a bionic image enhancement algorithm combining top-hat transformation and bottom-hat transformation was proposed by analyzing the non-linear relationship between the subjective brightness sensation of the human eye and the transmission characteristics of the receptive fields of retinal ganglion cells. Firstly, the RGB (Red, Green, Blue) color space of the low-illumination image was converted into the HSV (Hue, Saturation, Value) space, and a global logarithmic brightness transformation was performed on the brightness component. Secondly, the tri-Gaussian model of the retinal neuron receptive field was used to adjust the contrast of local edges in the image. Finally, top-hat transformation and bottom-hat transformation were used to assist the extraction of the high-brightness background. The experimental results show that the low-illumination images enhanced by the proposed algorithm have clear details and high contrast, without the problems of uneven illumination and depth-of-field effects present in the images captured by the device. The enhanced images have high color saturation and a strong visual effect.
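    A minimal sketch of the colour-space and morphology steps is shown below: the V channel in HSV gets a global log transform, and top-hat and bottom-hat (black-hat) transforms lift bright detail and suppress dark background. The tri-Gaussian receptive-field step is omitted, and the V + tophat - blackhat combination is a standard recipe used here only as an illustrative assumption; the input image is synthetic.

```python
# Minimal sketch of log brightness transform plus top-hat / bottom-hat enhancement.
import cv2
import numpy as np

def enhance(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)

    v = 255.0 * np.log1p(v) / np.log1p(255.0)                    # global log brightness transform
    v8 = v.astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(v8, cv2.MORPH_TOPHAT, kernel)      # bright local detail
    blackhat = cv2.morphologyEx(v8, cv2.MORPH_BLACKHAT, kernel)  # dark background structure
    v_enh = np.clip(v8.astype(np.float32) + tophat - blackhat, 0, 255)

    out = cv2.merge([h, s, v_enh]).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)

# Synthetic low-illumination stand-in image.
dark = (np.random.default_rng(0).random((64, 64, 3)) * 60).astype(np.uint8)
bright = enhance(dark)
print(dark.mean(), bright.mean())   # mean brightness should increase
```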

    Light-weight automatic residual scaling network for image super-resolution reconstruction
    DAI Qiang, CHENG Xi, WANG Yongmei, NIU Ziwei, LIU Fei
    2020, 40(5):  1446-1452.  DOI: 10.11772/j.issn.1001-9081.2019112014

    Recently, deep learning has been a hot research topic in the field of image super-resolution due to the excellent performance of deep convolutional neural networks. Many large-scale models with very deep structures have been proposed. However, in practical applications, the hardware of ordinary personal computers or smart terminals is obviously not suitable for large-scale deep neural network models. A light-weight Network with Automatic Residual Scaling (ARSN) for single-image super-resolution was proposed, which has fewer layers and parameters compared with many other deep-learning-based methods. In addition, specially designed residual blocks and skip connections in this network were utilized for residual scaling as well as global and local residual learning. The results on test datasets show that this model achieves state-of-the-art performance in terms of both reconstruction quality and running speed. The proposed network achieves good results in terms of performance, speed and hardware consumption, and has high practical value.

    Microscopic image segmentation method of C.elegans based on deep learning
    ZENG Zhaoxin, LIU Jun
    2020, 40(5):  1453-1459.  DOI: 10.11772/j.issn.1001-9081.2019091683

    To analyze the morphological parameters of Caenorhabditis elegans (C.elegans) automatically and accurately by computer, the critical step is the segmentation of the nematode body shape from the microscopic image. However, designing a robust C.elegans segmentation algorithm still faces challenges because of the heavy noise in microscopic images, the similarity between nematode edge pixels and the surrounding environment, and the flagella and other attachments of the nematode body that need to be separated. Aiming at these problems, a deep learning based method for nematode segmentation was proposed, in which the morphological features of nematodes were learned by training a Mask Region-based Convolutional Neural Network (Mask R-CNN) to realize automatic segmentation. Firstly, the high-level semantic features were combined with the low-level edge features by improving the multi-level feature pooling, and the Large-Margin Softmax Loss (LMSL) algorithm was incorporated to improve the loss calculation. Then, the non-maximum suppression was improved. Finally, methods such as a fully connected fusion branch were added to further optimize the segmentation results. Experimental results show that, compared with the original Mask R-CNN, the proposed method has the Average Precision (AP) increased by 4.3 percentage points and the mean Intersection Over Union (mIOU) increased by 4 percentage points, which means that the proposed deep learning segmentation method can effectively improve the segmentation accuracy and segment the nematodes from microscopic images more precisely.

    Mass and calcification classification method in mammogram based on multi-view transfer learning
    XIAO He, LIU Zhiqin, WANG Qingfeng, HUANG Jun, ZHOU Ying, LIU Qiyu, XU Weiyun
    2020, 40(5):  1460-1464.  DOI: 10.11772/j.issn.1001-9081.2019101744

    In order to solve the problem of insufficient available training data in the classification task of breast masses and calcifications, a multi-view model based on secondary transfer learning was proposed in combination with the imaging characteristics of mammograms. Firstly, CBIS-DDSM (Curated Breast Imaging Subset of Digital Database for Screening Mammography) was used to construct a dataset of local breast tissue sections for the pre-training of the backbone network and to complete the domain adaptation learning of the backbone network, so that the backbone network acquired the essential ability to capture pathological features. Then, the backbone network was transferred a second time to the multi-view model and fine-tuned on the dataset of Mianyang Central Hospital. At the same time, the number of positive samples in the training was increased with CBIS-DDSM to improve the generalization ability of the network. The experimental results show that the domain adaptation learning and data augmentation strategy improves the performance criteria by 17% on average and achieves AUC (Area Under Curve) values of 94% and 90% for masses and calcifications respectively.

    Smoke recognition method based on dense convolutional neural network
    CHENG Guangtao, GONG Jiachang, LI Jian
    2020, 40(5):  1465-1469.  DOI: 10.11772/j.issn.1001-9081.2019091583
    Abstract ( )   PDF (847KB) ( )
    References | Related Articles | Metrics

    To address the poor robustness of the image features extracted by traditional smoke detection methods, a smoke recognition method based on Dense convolutional neural Network (DenseNet) was proposed. Firstly, dense network blocks were constructed by applying convolution operations and feature map fusion, and a dense connection mechanism was designed between the convolution layers to promote information flow and feature reuse within the dense block structure. Secondly, the DenseNet for smoke recognition was built by stacking the designed dense network blocks, which saves computing resources and enhances the expression ability of smoke image features. Finally, to cope with the small size of available smoke image datasets, data augmentation was adopted to further improve the recognition ability of the trained model. Experiments were carried out on public smoke datasets. The experimental results show that the proposed method achieves accuracies of 96.20% and 96.81% on the two test sets respectively, with a model size of only 0.44 MB.
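
    A dense block of the kind described above can be sketched as follows: every layer takes the concatenation of all earlier feature maps as input, which is what enables feature reuse; the growth rate and depth are illustrative values, not the paper's configuration.

        import torch
        import torch.nn as nn

        class DenseBlock(nn.Module):
            def __init__(self, in_channels=16, growth_rate=12, num_layers=4):
                super().__init__()
                self.layers = nn.ModuleList()
                channels = in_channels
                for _ in range(num_layers):
                    self.layers.append(nn.Sequential(
                        nn.BatchNorm2d(channels),
                        nn.ReLU(inplace=True),
                        nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                    ))
                    channels += growth_rate

            def forward(self, x):
                features = [x]
                for layer in self.layers:
                    # Each layer sees the concatenation of all earlier feature maps.
                    features.append(layer(torch.cat(features, dim=1)))
                return torch.cat(features, dim=1)

        x = torch.randn(1, 16, 32, 32)
        print(DenseBlock()(x).shape)  # torch.Size([1, 64, 32, 32])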

    Faster R-CNN based color-guided flame detection
    HUANG Jie, CHAOXIA Chenyu, DONG Xiangyu, GAO Yun, ZHU Jun, YANG Bo, ZHANG Fei, SHANG Weiwei
    2020, 40(5):  1470-1475.  DOI: 10.11772/j.issn.1001-9081.2019101737
    Abstract ( )   PDF (947KB) ( )
    References | Related Articles | Metrics

    Aiming at the low detection rate of the depth-feature-based object detection method Faster R-CNN (Faster Region-based Convolutional Neural Network) in flame detection tasks, a color-guided anchoring strategy was proposed. In this strategy, a flame color model was designed to limit the generation of anchors, that is, the flame color was used to restrict the locations at which anchors are generated, thereby reducing the number of initial anchors and improving the computational efficiency. To further improve the computational efficiency of the network, masked convolution was used to replace the original convolution layer in the region proposal network. Experiments were conducted on the BoWFire and Corsician datasets to verify the detection performance of the proposed method. The experimental results show that the proposed method improves the detection speed by 10.1% compared with the original Faster R-CNN, achieves an F-measure of 0.87 for flame detection on BoWFire, and reaches an accuracy of 99.33% on Corsician. The proposed method can improve the efficiency of flame detection and accurately detect flames in images.
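
    The color-guided anchoring strategy can be illustrated with the following sketch, in which a simple RGB rule (strong red channel with R > G > B, a common heuristic used here only as a stand-in for the paper's flame color model) produces a mask, and anchors are generated only at feature-map positions whose image location falls inside the mask.

        import numpy as np

        def flame_color_mask(img_rgb, r_thresh=150):
            # Heuristic flame-color rule: strong red channel with R > G > B.
            r = img_rgb[..., 0].astype(int)
            g = img_rgb[..., 1].astype(int)
            b = img_rgb[..., 2].astype(int)
            return (r > r_thresh) & (r > g) & (g > b)

        def keep_anchor_centers(mask, stride=16):
            """Keep only feature-map positions whose image location is flame-colored."""
            centers = []
            for fy in range(mask.shape[0] // stride):
                for fx in range(mask.shape[1] // stride):
                    cy, cx = fy * stride + stride // 2, fx * stride + stride // 2
                    if mask[cy, cx]:
                        centers.append((fy, fx))
            return centers

        img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
        kept = keep_anchor_centers(flame_color_mask(img))
        print(len(kept), "anchor positions kept out of", (480 // 16) * (640 // 16))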

    Speech enhancement using multi-microphone state space model under industrial noise environment
    WU Qinghe, WU Haifeng, SHEN Yong, ZENG Yu
    2020, 40(5):  1476-1482.  DOI: 10.11772/j.issn.1001-9081.2019081514
    Abstract ( )   PDF (1567KB) ( )
    References | Related Articles | Metrics

    When speech communication takes place in an industrial environment with collaborative operations, the speech is often submerged in industrial noise, which degrades the effectiveness of the communication. For speech environments with industrial noise, a multi-microphone Kalman speech enhancement algorithm was proposed. In the algorithm, the difference equation in the State Space Model (SSM) was simplified to reduce the complexity, and the denoised signal was obtained at each sampling point to improve the real-time performance. In addition, to further reduce the complexity, the least squares method was used to enhance the speech. In the experiments, speech signals and factory noise signals from a public database were used to simulate noisy speech in a multi-microphone environment, and the proposed algorithm was compared with the traditional algorithm. The experimental results show that the output speech-to-noise ratio (the ratio of enhanced speech to residual noise) of the proposed algorithm is about 2 dB higher than that of the traditional algorithm, while its running time is less than 2% of that of the traditional algorithm. At the same time, the delay of the algorithm is only a few milliseconds.
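
    A per-sample Kalman update on a simplified scalar state-space model, with the observations of several microphones fused at each sampling point, might look like the sketch below; the random-walk state model and the noise variances q and r are assumptions, since the paper's simplified difference equation is not reproduced here.

        import numpy as np

        def kalman_denoise(mics, q=1e-4, r=1e-2):
            """mics: (num_mics, num_samples) noisy observations of the same speech."""
            num_mics, n = mics.shape
            x_hat, p = 0.0, 1.0              # state estimate and its variance
            out = np.empty(n)
            for t in range(n):
                p += q                        # predict step of a random-walk state model
                z = mics[:, t].mean()         # fuse the microphones at this sample
                k = p / (p + r / num_mics)    # effective noise shrinks with more mics
                x_hat += k * (z - x_hat)
                p *= 1.0 - k
                out[t] = x_hat
            return out

        clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 8000))
        noisy = clean + 0.3 * np.random.randn(4, 8000)   # 4 microphones
        print(kalman_denoise(noisy).shape)  # (8000,)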

    Frontier & interdisciplinary applications
    Scheme and platform of trusted fund-raising and donation based on smart contract
    TAN Wenan, WANG Hui
    2020, 40(5):  1483-1487.  DOI: 10.11772/j.issn.1001-9081.2019111999
    Abstract ( )   PDF (1060KB) ( )
    References | Related Articles | Metrics

    The centralized management of traditional donation platforms can hardly meet the need for a highly trusted mechanism: the authenticity of fund-raising information is difficult to verify, and the flow of funds is not transparent. Blockchain technology has the characteristics of decentralization, tamper resistance, traceability and peer-to-peer transactions, which lays a foundation for building a trusted donation platform. Therefore, a donation scheme based on Ethereum smart contracts was proposed on top of blockchain technology. Firstly, fund-raising information and donation transaction events were stored on the Ethereum blockchain, and a margin mechanism was used to ensure the authenticity and traceability of the data; meanwhile, the architecture model of the scheme was described. A smart contract algorithm named Donate was proposed to replace manual operations, in order to prevent the misappropriation of funds and long-term non-payment. Finally, the feasibility of the scheme was validated on a trusted fund-raising and donation platform based on smart contracts. Compared with traditional fund-raising platforms, the proposed platform is shown to prevent false fund-raising and fund misappropriation safely and effectively.
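
    The donation flow can be modelled off-chain with the following Python sketch: a fund-raiser locks a margin, donations are recorded in an append-only log, and funds can only be released to the registered beneficiary. The real scheme runs as an Ethereum smart contract, so the Campaign class and its methods are only hypothetical illustrations of the Donate logic, not Solidity code.

        from dataclasses import dataclass, field

        @dataclass
        class Campaign:
            raiser: str
            beneficiary: str
            margin: float                  # deposit locked by the fund-raiser
            donations: list = field(default_factory=list)   # append-only event log

            def donate(self, donor, amount):
                if amount <= 0:
                    raise ValueError("donation must be positive")
                self.donations.append({"donor": donor, "amount": amount})

            def release_funds(self, to):
                # Funds may only flow to the registered beneficiary, which is the
                # property the on-chain Donate contract enforces automatically.
                if to != self.beneficiary:
                    raise PermissionError("funds can only go to the beneficiary")
                return sum(d["amount"] for d in self.donations)

        c = Campaign(raiser="0xRaiser", beneficiary="0xHospital", margin=1.0)
        c.donate("0xAlice", 0.5)
        c.donate("0xBob", 0.8)
        print(c.release_funds("0xHospital"))  # 1.3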

    Urban road short-term traffic flow prediction based on spatio-temporal node selection and deep learning
    CAO Yu, WANG Cheng, WANG Xin, GAO Yueer
    2020, 40(5):  1488-1493.  DOI: 10.11772/j.issn.1001-9081.2019091568
    Abstract ( )   PDF (712KB) ( )
    References | Related Articles | Metrics

    In order to solve the problems of insufficient consideration of traffic flow characteristics and low prediction accuracy, a short-term prediction method of urban road traffic flow based on spatio-temporal node selection and deep learning was proposed. Firstly, the characteristics of traffic flow were analyzed theoretically and through data representation to obtain its spatial characteristics, temporal characteristics and the candidate set of spatio-temporal nodes. Secondly, the set of candidate spatio-temporal nodes was determined according to the reachable range of traffic flow, and the fitness was calculated by taking the reciprocal of the sum of squared errors as the objective function; on the historical training set, a genetic algorithm and a Back Propagation Neural Network (BPNN) were used to select spatio-temporal nodes, yielding the final spatio-temporal nodes and the BPNN structure. Finally, the measured values of the selected spatio-temporal nodes were taken as the input of the BPNN on the working set to obtain the predicted values. The experimental results show that, compared with using only the data of adjacent spatio-temporal nodes, using other time node ranges, Support Vector Machine (SVM) and Gradient Boosting Decision Tree (GBDT), the proposed model achieves slightly lower Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) of 10.631 6 and 14.275 8% respectively, which are 0.257 3 and 0.999 1 percentage points lower than those obtained by using adjacent spatio-temporal nodes.
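
    The node-selection loop can be sketched as follows: a genetic algorithm evolves binary masks over the candidate nodes, and the fitness of a mask is the reciprocal of the sum of squared errors of a small BP-style network trained on the selected nodes. The population size, mutation rate and the scikit-learn MLPRegressor used as the network are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 12))                    # flows at 12 candidate nodes
        y = X[:, [0, 3, 7]].sum(axis=1) + 0.1 * rng.normal(size=300)
        X_tr, X_va, y_tr, y_va = X[:200], X[200:], y[:200], y[200:]

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=300, random_state=0)
            net.fit(X_tr[:, mask == 1], y_tr)
            sse = np.sum((net.predict(X_va[:, mask == 1]) - y_va) ** 2)
            return 1.0 / sse                              # objective: reciprocal of SSE

        pop = rng.integers(0, 2, size=(8, X.shape[1]))    # binary node-selection masks
        for _ in range(3):                                # a few GA generations
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-4:]]        # keep the fittest masks
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(4)], parents[rng.integers(4)]
                cut = rng.integers(1, X.shape[1])         # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(X.shape[1]) < 0.05      # mutation
                children.append(np.where(flip, 1 - child, child))
            pop = np.array(children)

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected spatio-temporal nodes:", np.flatnonzero(best))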

    Multi-objective closed-loop logistics network model of fresh foods based on improved genetic algorithm
    HUO Qingqing, GUO Jianquan
    2020, 40(5):  1494-1500.  DOI: 10.11772/j.issn.1001-9081.2019091682
    Abstract ( )   PDF (702KB) ( )
    References | Related Articles | Metrics

    In order to address the high economic costs, large carbon emissions and insufficient attention to social benefits in closed-loop logistics networks for fresh foods, a multi-objective closed-loop logistics network model for fresh foods under uncertainty was established by considering the uncertainty of the return quantity and taking the minimum economic cost, the minimum carbon emissions and the maximum social benefits as objectives. Firstly, an improved Genetic Algorithm (GA) was used to solve the model. Then, the feasibility of the model was verified with the operation and management data of a fresh food enterprise in Shanghai. Finally, the results of the improved GA were compared with those of the Particle Swarm Optimization (PSO) algorithm to verify the effectiveness of the algorithm and to highlight the superiority of the improved GA in solving multi-objective problems with complex constraints. The example results show that the satisfaction degree of the multi-objective optimization is 0.92, which is higher than that of single-objective optimization, demonstrating the effectiveness of the proposed model.
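
    One common way to score solutions of such a multi-objective GA is to aggregate the objectives into a single satisfaction degree through membership functions, as in the sketch below; the linear memberships, the min-aggregation and the bounds dictionary are assumptions for illustration, not the paper's exact formulation.

        import numpy as np

        def satisfaction(value, best, worst, maximize=False):
            """Linear membership in [0, 1]: 1 at the best value, 0 at the worst."""
            if maximize:
                return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))
            return float(np.clip((worst - value) / (worst - best), 0.0, 1.0))

        def overall_satisfaction(cost, carbon, benefit, bounds):
            s_cost = satisfaction(cost, *bounds["cost"])
            s_carbon = satisfaction(carbon, *bounds["carbon"])
            s_benefit = satisfaction(benefit, *bounds["benefit"], maximize=True)
            return min(s_cost, s_carbon, s_benefit)   # the worst objective dominates

        bounds = {"cost": (8e5, 1.2e6), "carbon": (50.0, 90.0), "benefit": (1.0, 0.2)}
        print(overall_satisfaction(9e5, 60.0, 0.9, bounds))  # 0.75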

    Stock closing price prediction algorithm using adaptive whale optimization algorithm and Elman neural network
    ZHU Changsheng, KANG Lianghe, FENG Wenfang
    2020, 40(5):  1501-1509.  DOI: 10.11772/j.issn.1001-9081.2019091678
    Abstract ( )   PDF (1434KB) ( )
    References | Related Articles | Metrics

    Focused on the slow convergence and low prediction accuracy of the Elman neural network in closing price prediction based on the network public opinion of the stock market, a prediction model combining an Improved Whale Optimization Algorithm (IWOA) with the Elman neural network was proposed on the basis of the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) algorithm. Firstly, text mining technology was used to mine and quantify the network public opinion of Shanghai Stock Exchange (SSE) 180 shares, and the Boruta algorithm was used to select the important attributes in order to reduce the complexity of the attribute set. Then, the CEEMDAN algorithm was used to add a certain number of white noise series with specific variances, realizing the decomposition and noise reduction of the attribute sequences. At the same time, an adaptive weight was introduced into the Whale Optimization Algorithm (WOA) to enhance its global search and local exploitation capabilities. Finally, the initial weights and thresholds of the Elman neural network were optimized by the improved WOA in the iterative process. The results show that, compared with the Elman neural network, the proposed model reduces the Mean Absolute Error (MAE) from 358.812 0 to 113.055 3; compared with the original dataset without the CEEMDAN algorithm, it reduces the Mean Absolute Percentage Error (MAPE) from 4.942 3% to 1.445 31%, demonstrating that the model effectively improves the prediction accuracy and provides an effective experimental method for closing price prediction based on the network public opinion of the stock market.
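
    The adaptive-weight whale update can be sketched as below, where the standard WOA encircling, random-search and spiral moves are kept and an inertia weight w that decays over iterations is applied to the current best position; the linear decay schedule is an assumed choice, not necessarily the paper's.

        import numpy as np

        def iwoa_step(positions, best, t, t_max, rng, b=1.0):
            a = 2.0 - 2.0 * t / t_max              # standard WOA control parameter
            w = 1.0 - 0.5 * t / t_max              # adaptive weight (assumed linear decay)
            new_pos = np.empty_like(positions)
            for i, x in enumerate(positions):
                r1, r2 = rng.random(), rng.random()
                A, C = 2 * a * r1 - a, 2 * r2
                if rng.random() < 0.5:
                    if abs(A) < 1:                 # encircle the weighted best position
                        new_pos[i] = w * best - A * np.abs(C * best - x)
                    else:                          # explore around a random whale
                        rand = positions[rng.integers(len(positions))]
                        new_pos[i] = rand - A * np.abs(C * rand - x)
                else:                              # spiral (bubble-net) update
                    l = rng.uniform(-1, 1)
                    new_pos[i] = np.abs(best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + w * best
            return new_pos

        rng = np.random.default_rng(1)
        pos = rng.normal(size=(6, 4))              # 6 whales encode 4 Elman weights each
        print(iwoa_step(pos, pos[0], t=3, t_max=50, rng=rng).shape)  # (6, 4)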

    Protein complex identification algorithm based on XGBoost and topological structural information
    XU Zhoubo, YANG Jian, LIU Huadong, HUANG Wenwen
    2020, 40(5):  1510-1514.  DOI: 10.11772/j.issn.1001-9081.2019111992
    Abstract ( )   PDF (643KB) ( )
    References | Related Articles | Metrics

    The large amount of uncertainty in Protein-Protein Interaction (PPI) networks and the incompleteness of known protein complex data reduce the accuracy of methods that only search with topological structural information or only perform supervised learning on known complex data. To solve this problem, a search method called XGBoost model for Predicting protein complexes (XGBP) was proposed. Firstly, features were extracted based on the topological structural information of complexes. Then, the extracted features were used to train an XGBoost model. Finally, a mapping relationship between features and protein complexes was constructed by combining topological structural information with supervised learning, so as to improve the accuracy of protein complex prediction. Comparisons were performed with eight popular unsupervised algorithms: Markov CLustering (MCL), Clustering based on Maximal Clique (CMC), Core-Attachment based method (COACH), Fast Hierarchical clustering algorithm for functional modules discovery in Protein Interaction Network (HC-PIN), Cluster with Overlapping Neighborhood Expansion (ClusterONE), Molecular COmplex DEtection (MCODE), Detecting Complex based on Uncertain graph model (DCU) and Weighted COACH (WCOACH), as well as three supervised methods: Bayesian Network (BN), Support Vector Machine (SVM) and Regression Model (RM). The results show that the proposed algorithm has good performance in terms of precision, sensitivity and F-measure.
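
    The feature-then-classifier idea can be sketched as follows: each candidate subgraph is described by a few topological features and an XGBoost classifier is trained to score it. The synthetic subgraphs, the feature set in subgraph_features and the labels are illustrative, not the paper's data or exact features.

        import numpy as np
        from xgboost import XGBClassifier

        def subgraph_features(adj):
            # Simple topological description of a candidate subgraph.
            n = adj.shape[0]
            edges = adj.sum() / 2
            density = 2 * edges / (n * (n - 1))
            return np.array([n, edges, density, adj.sum(axis=1).mean()])

        rng = np.random.default_rng(0)
        X, y = [], []
        for _ in range(200):                       # synthetic candidate subgraphs
            n = rng.integers(3, 10)
            p = rng.choice([0.3, 0.8])             # sparse vs. dense candidates
            adj = np.triu((rng.random((n, n)) < p).astype(int), 1)
            adj += adj.T
            X.append(subgraph_features(adj))
            y.append(int(p > 0.5))                 # pretend dense subgraphs are complexes
        X, y = np.array(X), np.array(y)

        clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
        clf.fit(X[:150], y[:150])
        print("held-out accuracy:", (clf.predict(X[150:]) == y[150:]).mean())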

    Multi-unmanned aerial vehicle adaptive formation cooperative trajectory planning
    XU Yang, QIN Xiaolin, LIU Jia, ZHANG Lige
    2020, 40(5):  1515-1521.  DOI: 10.11772/j.issn.1001-9081.2019112047
    Abstract ( )   PDF (2198KB) ( )
    References | Related Articles | Metrics

    Aiming at the problem that some narrow passages are neglected due to formation constraints in multi-UAV (Unmanned Aerial Vehicle) cooperative trajectory planning, a Fast Particle Swarm Optimization method based on Adaptive Distributed Model Predictive Control (ADMPC-FPSO) was proposed. In this method, a formation strategy combining the leader-follower method and the virtual structure method was used to construct adaptive virtual formation guidance points and complete the cooperative formation control task. Following the idea of model predictive control and combined with the distributed control method, the cooperative trajectory planning was transformed into a rolling online optimization problem, with the minimum distance and other performance indicators used as cost functions. By designing an evaluation function criterion, the variable-weight fast particle swarm optimization algorithm was used to solve the problem. The simulation results show that the proposed algorithm can effectively realize multi-UAV cooperative trajectory planning, can quickly complete adaptive formation transformation according to environmental changes, and has lower cost than the traditional formation strategy.
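
    One receding-horizon step of such a scheme might look like the sketch below: a particle swarm with a variable inertia weight searches a short control sequence that tracks the UAV's virtual formation guidance point while penalizing violations of a safe separation distance, and only the first control is applied. Horizon length, swarm size and cost weights are illustrative assumptions.

        import numpy as np

        def rolling_pso_step(pos, guide, neighbors, horizon=5, swarm=30, iters=40,
                             d_safe=1.0, rng=np.random.default_rng(0)):
            def cost(controls):                    # controls: (horizon, 2) velocity steps
                p, c = pos.copy(), 0.0
                for u in controls:
                    p = p + u
                    c += np.linalg.norm(p - guide)                 # track guidance point
                    for q in neighbors:                            # keep safe separation
                        c += 10.0 * max(0.0, d_safe - np.linalg.norm(p - q))
                return c

            x = rng.uniform(-1, 1, size=(swarm, horizon, 2))
            v = np.zeros_like(x)
            pbest, pbest_cost = x.copy(), np.array([cost(c) for c in x])
            gbest = pbest[pbest_cost.argmin()]
            for t in range(iters):
                w = 0.9 - 0.5 * t / iters                          # variable inertia weight
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
                x = np.clip(x + v, -1, 1)
                costs = np.array([cost(c) for c in x])
                improved = costs < pbest_cost
                pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
                gbest = pbest[pbest_cost.argmin()]
            return gbest[0]                                        # apply only the first control

        u = rolling_pso_step(pos=np.array([0.0, 0.0]), guide=np.array([5.0, 3.0]),
                             neighbors=[np.array([0.5, 0.5])])
        print("next velocity command:", np.round(u, 2))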

    Remote real-time fault diagnosis for automatic flight control system based on residual decision
    SUN Shuguang, ZHOU Qi
    2020, 40(5):  1522-1528.  DOI: 10.11772/j.issn.1001-9081.2019091530
    Abstract ( )   PDF (1212KB) ( )
    References | Related Articles | Metrics

    The automatic flight control system has a complex structure with many related components, resulting in long fault diagnosis time, which affects the operating efficiency of the aircraft. Aiming at these problems, a remote real-time fault diagnosis scheme based on the Aircraft Communication Addressing & Reporting System (ACARS) was proposed. Firstly, the fault characteristics of the automatic flight control system were analyzed, and a detection filter was designed and built. Then, the key information of the automatic flight control system transmitted in real time over the ACARS data-link was used to compute the residuals of the relevant components, and fault diagnosis and location were carried out according to a residual decision algorithm. Finally, because the residuals of different faulty components differ greatly and the decision thresholds are inconsistent, an improved residual decision algorithm based on quadratic difference was proposed. This algorithm weakens the overall trend of the monitored signal, reduces the influence of random noise and interference, and avoids a transient fault being mistaken for a system fault. The simulation results show that the proposed algorithm avoids the complexity of multiple decision thresholds. With a sampling time of 0.1 s, its fault detection time is about 2 s, which greatly shortens the fault detection time, and the effective fault detection rate is over 90%.
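
    A simplified residual decision of this kind is sketched below: the residual is differenced twice to suppress its slow trend, and a fault is declared only when the threshold is exceeded for several consecutive samples, so an isolated transient does not trigger a system fault. The threshold, persistence window and simulated fault are assumptions for illustration.

        import numpy as np

        def residual_decision(residual, threshold=0.3, consecutive=10):
            d2 = np.diff(residual, n=2)            # second-order difference removes the trend
            run = 0
            for k, exceeded in enumerate(np.abs(d2) > threshold):
                run = run + 1 if exceeded else 0
                if run >= consecutive:             # persistence rules out transient faults
                    return k + 2                   # index in the original residual sequence
            return -1                              # no fault detected

        rng = np.random.default_rng(0)
        t = np.arange(0, 30, 0.1)                  # 0.1 s sampling time
        residual = 0.02 * t + 0.05 * rng.standard_normal(t.size)      # slow trend + noise
        residual[200:] += 2.0 * rng.standard_normal(t.size - 200)     # fault from t = 20 s
        idx = residual_decision(residual)
        print("fault detected at t = %.1f s" % t[idx] if idx >= 0 else "no fault detected")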

    Target detection of carrier-based aircraft based on deep convolutional neural network
    ZHU Xingdong, TIAN Shaobing, HUANG Kui, FAN Jiali, WANG Zheng, CHENG Huacheng
    2020, 40(5):  1529-1533.  DOI: 10.11772/j.issn.1001-9081.2019091694
    Abstract ( )   PDF (823KB) ( )
    References | Related Articles | Metrics

    Carrier-based aircraft on the carrier deck are densely parked and occlude each other, which makes the targets difficult to detect, and the detection effect is easily affected by lighting conditions and target size. Therefore, an improved Faster R-CNN (Faster Region-based Convolutional Neural Network) method for carrier-based aircraft target detection was proposed. In this method, a loss function with a repulsion loss strategy was designed and combined with multi-scale training, and pictures collected under laboratory conditions were used to train and test the deep convolutional neural network. Test experiments show that compared with the original Faster R-CNN detection model, the improved model has a better detection effect on occluded aircraft targets, with the recall increased by 7 percentage points and the precision increased by 6 percentage points. The experimental results show that the proposed method can automatically and comprehensively extract the characteristics of carrier-based aircraft targets, solves the detection problem of occluded carrier-based aircraft targets, achieves detection accuracy and speed that meet practical needs, and has strong adaptability and high robustness under different lighting conditions and target sizes.
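
    The repulsion idea can be sketched as follows: in addition to the usual attraction toward its assigned ground truth, a predicted box is penalized for overlapping other ground-truth boxes. The original repulsion loss is defined on IoG with a smooth-ln penalty, which is simplified here to an IoU-based term, so the function repulsion_term is an illustrative approximation.

        import numpy as np

        def iou(a, b):
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / (area(a) + area(b) - inter + 1e-9)

        def repulsion_term(pred, target_idx, gts, sigma=0.5):
            """Penalty grows as the predicted box overlaps non-target ground truths."""
            loss = 0.0
            for j, gt in enumerate(gts):
                if j == target_idx:
                    continue
                o = iou(pred, gt)
                loss += -np.log(1.0 - o + 1e-9) if o > sigma else o
            return loss

        gts = [np.array([0, 0, 10, 10]), np.array([8, 0, 18, 10])]   # two occluding aircraft
        pred = np.array([6.0, 0.0, 16.0, 10.0])    # prediction drifting toward the wrong target
        print(round(repulsion_term(pred, target_idx=0, gts=gts), 3))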

    Continuous respiratory volume monitoring system during sleep based on radio frequency identification tag array
    XU Xiaoxiang, CHANG Xiangmao, CHEN Fangjin
    2020, 40(5):  1534-1538.  DOI: 10.11772/j.issn.1001-9081.2019111971
    Abstract ( )   PDF (769KB) ( )
    References | Related Articles | Metrics

    Continuous and accurate monitoring of respiratory volume during sleep helps to infer the user's sleep stage and provides clues about some chronic diseases. Existing work mainly focuses on the detection and monitoring of the respiratory rate, and lacks means for continuously monitoring the respiratory volume. Therefore, a system named RF-SLEEP, which uses commercial Radio Frequency IDentification (RFID) tags to wirelessly sense the respiratory volume during sleep, was proposed. The phase values and timestamp data returned by the tag array attached to the chest surface were collected continuously by RF-SLEEP through the reader, the displacements of different points on the chest caused by breathing were calculated, and then the relationship between these displacements and the respiratory volume was modeled by a General Regression Neural Network (GRNN), so as to estimate the user's respiratory volume during sleep. The errors in chest displacement calculation caused by the rollover of the user's body during sleep were eliminated by RF-SLEEP through attaching double reference tags to the user's shoulders. The experimental results show that the average accuracy of RF-SLEEP for continuous monitoring of respiratory volume during sleep is 92.49% across different users.
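
    The phase-to-volume chain can be sketched as follows: the unwrapped phase change of a chest tag is converted to displacement (the backscatter path is a round trip, hence the 4*pi factor), and a small GRNN, implemented as Gaussian-kernel Nadaraya-Watson regression, maps the displacements of several tags to a volume. The wavelength, kernel spread and training pairs are illustrative assumptions.

        import numpy as np

        WAVELENGTH = 0.326      # metres, roughly a 920 MHz UHF RFID carrier

        def phase_to_displacement(phase):
            unwrapped = np.unwrap(phase)                     # remove 2*pi jumps
            # Backscatter phase covers reader->tag->reader, hence 4*pi per wavelength.
            return (unwrapped - unwrapped[0]) * WAVELENGTH / (4 * np.pi)

        def grnn_predict(x_train, y_train, x_query, spread=0.002):
            # GRNN = Gaussian-kernel weighted average of the training targets.
            d2 = np.sum((x_train - x_query) ** 2, axis=1)
            w = np.exp(-d2 / (2 * spread ** 2))
            return np.sum(w * y_train) / (np.sum(w) + 1e-12)

        # Toy training data: displacements (m) of 3 chest tags -> respiratory volume (L).
        x_train = np.array([[0.002, 0.003, 0.001],
                            [0.004, 0.005, 0.003],
                            [0.006, 0.008, 0.005]])
        y_train = np.array([0.25, 0.45, 0.70])

        rng = np.random.default_rng(0)
        phase = np.linspace(0, 0.2, 50) + 0.02 * rng.standard_normal(50)
        disp = phase_to_displacement(phase)
        query = np.array([disp[-1], 0.004, 0.002])
        print("estimated respiratory volume (L): %.2f" % grnn_predict(x_train, y_train, query))
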
    Heart rate variability analysis based sleep music recommendation system
    PENG Cheng, CHANG Xiangmao, QIU Yuan
    2020, 40(5):  1539-1544.  DOI: 10.11772/j.issn.1001-9081.2019111969
    Abstract ( )   PDF (1052KB) ( )
    References | Related Articles | Metrics

    Existing sleep monitoring research mainly focuses on non-intrusive monitoring of sleep quality and lacks methods for actively adjusting sleep quality. Research on mental state and sleep staging based on Heart Rate Variability (HRV) analysis concentrates on acquiring these two kinds of information, which requires people to wear professional medical equipment, and lacks the application of the information for adjustment. Music can be used as a non-pharmaceutical means of relieving sleep problems, but existing music recommendation methods do not consider the differences in individual sleep and mental states. To solve the above problems, a music recommendation system based on mental stress and sleep state using mobile devices was proposed. Firstly, photoplethysmography signals were collected by a smartwatch to extract features and calculate the heart rate. Then, the collected signals were transmitted to the mobile phone via Bluetooth, and the phone used them to evaluate the user's mental stress and sleep state and play adjusted music accordingly. Finally, the music was recommended according to the individual's nightly sleep time. The experimental results show that after using the proposed sleep music recommendation system, the total sleep time of users increases by 11.0%.
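
    Mental stress estimation from HRV can be sketched as follows: two standard features, SDNN and RMSSD, are computed from RR intervals and mapped to a coarse stress level. The cut-off values are illustrative assumptions, not clinically validated thresholds or those used by the system.

        import numpy as np

        def hrv_features(rr_ms):
            sdnn = np.std(rr_ms, ddof=1)                    # overall variability
            rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # short-term variability
            return sdnn, rmssd

        def stress_level(rr_ms):
            sdnn, rmssd = hrv_features(rr_ms)
            if rmssd < 20 or sdnn < 30:    # low variability -> higher stress (assumed cut-offs)
                return "high"
            if rmssd < 40:
                return "medium"
            return "low"

        rng = np.random.default_rng(0)
        rr = 800 + 45 * rng.standard_normal(300)            # simulated RR intervals in ms
        print(hrv_features(rr), stress_level(rr))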

    Surface defect detection method of light-guide plate based on improved coherence enhancing diffusion and texture energy measure-Gaussian mixture model
    ZHANG Yazhou, LU Xianling
    2020, 40(5):  1545-1552.  DOI: 10.11772/j.issn.1001-9081.2019091519
    Abstract ( )   PDF (7279KB) ( )
    References | Related Articles | Metrics

    Existing surface defect detection methods for Liquid Crystal Display (LCD) light-guide plates have high miss rates and false detection rates, as well as low adaptability to the complex, gradually changing texture structure of the product surface. Therefore, an LCD light-guide plate surface defect detection method based on Improved Coherence Enhancing Diffusion (ICED) and Texture Energy Measure-Gaussian Mixture Model (TEM-GMM) was proposed. Firstly, an ICED model was established in which a Mean Curvature Flow (MCF) filter was introduced on the basis of the structure tensor, so that the Coherence Enhancing Diffusion (CED) model better preserved the edges of thin-line defects, and a filtered image with enhanced defect texture and suppressed background texture was obtained by using coherence. Then, the texture features of the image were extracted with the Laws Texture Energy Measure (TEM), and with the background texture features as training data, the parameters of the Gaussian Mixture Model (GMM) were estimated by the Expectation Maximization (EM) algorithm. Finally, at the online detection stage, the posterior probability of each pixel in the target image was calculated and used to judge whether the pixel is defective. The experimental results show that, compared with other methods, the proposed method achieves miss rates and false detection rates of 3.27% and 4.32% on the test dataset with randomly distributed light-guide particles, and 3.59% and 4.87% on the dataset with regularly distributed particles, respectively. The proposed detection method has a wide application scope and can effectively detect defects such as scratches, foreign objects, dirt and crushing on the surface of LCD light-guide plates.
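
    The TEM-GMM stage can be sketched as follows: Laws texture energy features are computed per pixel, a GMM is fitted on features from a defect-free background image, and pixels whose features the background model explains poorly (low log-likelihood) are flagged as defect candidates. The kernel subset, window size and percentile threshold are illustrative choices.

        import numpy as np
        from scipy.ndimage import convolve, uniform_filter
        from sklearn.mixture import GaussianMixture

        L5 = np.array([1, 4, 6, 4, 1], float)
        E5 = np.array([-1, -2, 0, 2, 1], float)
        S5 = np.array([-1, 0, 2, 0, -1], float)
        KERNELS = [np.outer(a, b) for a in (L5, E5, S5) for b in (L5, E5, S5)][1:]  # drop L5L5

        def laws_features(img, win=15):
            """Per-pixel texture energy: local mean of |img * kernel| for each kernel."""
            img = img - uniform_filter(img, win)             # remove slow illumination
            maps = [uniform_filter(np.abs(convolve(img, k)), win) for k in KERNELS]
            return np.stack(maps, axis=-1)

        rng = np.random.default_rng(0)
        background = rng.normal(0.5, 0.05, (128, 128))       # defect-free training image
        test = rng.normal(0.5, 0.05, (128, 128))
        test[60:64, 30:100] += 0.4                           # thin-line defect

        gmm = GaussianMixture(n_components=3, random_state=0)
        gmm.fit(laws_features(background).reshape(-1, len(KERNELS)))

        loglik = gmm.score_samples(laws_features(test).reshape(-1, len(KERNELS)))
        defect_mask = loglik.reshape(test.shape) < np.percentile(loglik, 1)
        print("defect pixels flagged:", int(defect_mask.sum()))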
