Table of Contents

    10 September 2018, Volume 38 Issue 9
    CNN quantization and compression strategy for edge computing applications
    CAI Ruichu, ZHONG Chunrong, YU Yang, CHEN Bingfeng, LU Ye, CHEN Yao
    2018, 38(9):  2449-2454.  DOI: 10.11772/j.issn.1001-9081.2018020477
    Focused on the problem that the memory- and computation-intensive nature of Convolutional Neural Network (CNN) limits its adoption on embedded devices such as edge computing platforms, a convolutional neural network compression method combining network weight pruning with data quantization tailored to embedded hardware data types was proposed. Firstly, according to the weight distribution of each layer of the original CNN, a threshold-based pruning method was used to eliminate the weights that have little impact on network accuracy, removing redundant information from the network model while preserving the important connections. Secondly, the required bit-widths of the weights and activations were analyzed based on the computational characteristics of the embedded platform, and dynamic fixed-point quantization was employed to reduce the bit-width of the network model. Finally, the network was fine-tuned to further compress the model size and reduce the computational cost while maintaining inference accuracy. The experimental results show that this method reduces the storage space of VGG-19 by more than 22 times while reducing accuracy by only 0.3%, achieving almost lossless compression. Evaluations on multiple models show that the method can reduce model storage by up to 25 times with an average accuracy loss of at most 1.46%, which proves the effectiveness of the proposed compression method.
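The two compression steps described in this abstract can be sketched in a few lines of pure Python. This is an illustrative toy, not the paper's implementation: the function names, the 0.02 threshold, and the 4-fractional-bit setting are invented for the example.

```python
def prune_weights(weights, threshold):
    """Zero out weights whose magnitude falls below the pruning threshold."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize_fixed_point(weights, frac_bits):
    """Round each weight to a fixed-point grid with `frac_bits` fractional bits
    (a simplified stand-in for dynamic fixed-point quantization)."""
    scale = 1 << frac_bits
    return [round(w * scale) / scale for w in weights]

layer = [0.012, -0.5, 0.003, 0.87, -0.04]
pruned = prune_weights(layer, threshold=0.02)
quantized = quantize_fixed_point(pruned, frac_bits=4)
print(pruned)     # [0.0, -0.5, 0.0, 0.87, -0.04]
print(quantized)  # [0.0, -0.5, 0.0, 0.875, -0.0625]
```

In the paper the threshold is chosen per layer from the weight distribution and the bit-width per layer from the platform's data types; here both are fixed constants for brevity.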
    Fast iterative learning control for regular system in sense of Lebesgue-p norm
    CAO Wei, LI Yandong, WANG Yanwei
    2018, 38(9):  2455-2458.  DOI: 10.11772/j.issn.1001-9081.2018020439
    Focused on the slow convergence of traditional iterative learning control algorithms applied to linear regular systems, a fast iterative learning control algorithm was designed for a class of linear regular systems. Compared with the traditional P-type iterative learning control algorithm, the proposed algorithm adds the tracking errors of two adjacent iterations, generated from the previous and current difference signals, to the learning law. The convergence of the algorithm was proven by using Young's inequality for convolutions in the sense of the Lebesgue-p norm, and the convergence condition was given. The results show that the tracking error of the system converges to zero as the number of iterations tends to infinity. Compared with P-type iterative learning control, the proposed algorithm accelerates convergence and avoids the shortcomings of measuring the tracking error with the λ norm. Simulation results further verify the validity and effectiveness of the algorithm.
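The baseline that this paper accelerates, classical P-type iterative learning control, is easy to demonstrate on a toy scalar plant. The sketch below is only the P-type law u_{k+1} = u_k + γ·e_k on an invented static plant y = b·u, not the regular-system class or the fast two-error variant analyzed in the paper; it just shows the iteration-domain error contraction that the paper improves on.

```python
def ilc_run(b, u, yd):
    """Apply input u to the toy scalar plant y = b*u; return tracking error yd - y."""
    return [yd_t - b * u_t for u_t, yd_t in zip(u, yd)]

def p_type_update(u, e, gamma):
    """Classical P-type learning law: u_{k+1}(t) = u_k(t) + gamma * e_k(t)."""
    return [u_t + gamma * e_t for u_t, e_t in zip(u, e)]

b, gamma = 2.0, 0.3            # |1 - gamma*b| = 0.4 < 1 gives convergence
yd = [1.0, 2.0, 3.0]           # desired output over three time steps
u = [0.0, 0.0, 0.0]
errors = []
for _ in range(20):            # 20 learning iterations
    e = ilc_run(b, u, yd)
    errors.append(max(abs(x) for x in e))
    u = p_type_update(u, e, gamma)
print(errors[0], errors[-1])   # error shrinks by factor 0.4 each iteration
```

The fast algorithm of the paper additionally feeds the previous iteration's error into the update, which tightens this contraction.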
    Walking stability control method based on deep Q-network for biped robot on uneven ground
    ZHAO Yuting, HAN Baoling, LUO Qingsheng
    2018, 38(9):  2459-2463.  DOI: 10.11772/j.issn.1001-9081.2018030714
    Aiming at the problem that biped robots easily lose motion stability when walking on uneven ground, a gait control method based on Deep Q-Network (DQN), a value-based deep reinforcement learning algorithm, was proposed as an intelligent method of posture adjustment. Firstly, an off-line gait for a flat-ground environment was obtained through gait planning of the robot. Secondly, instead of building a complex dynamic model as in traditional control methods, the biped robot was regarded as an agent, and the robot's environment space, state space, action space and Reward-Punishment (RP) mechanism were established. Finally, through multiple rounds of training, the biped robot learned to adjust its posture on uneven ground and ensure walking stability. The performance and effectiveness of the proposed algorithm were validated in a V-Rep simulation environment. The results demonstrate that the biped robot's lateral tilt angle is less than 3° after applying the proposed method and walking stability is improved obviously, which realizes posture-adjustment behavior learning for the robot and proves the effectiveness of the method.
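The value update underlying DQN can be illustrated with a tabular Q-learning step. This is a deliberate simplification: DQN replaces the table below with a neural network and adds experience replay and a target network, and the two posture states, two corrective actions, and reward values here are invented for illustration.

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy state/action spaces: two postures and two corrective actions.
# The reward-punishment mechanism gives +1 for regaining stability
# and would give -1 for a large lateral tilt.
Q = {"tilted": {"lean_left": 0.0, "lean_right": 0.0},
     "stable": {"lean_left": 0.0, "lean_right": 0.0}}
q_update(Q, "tilted", "lean_right", 1.0, "stable")
print(Q["tilted"]["lean_right"])  # 0.5
```

Over many such updates the agent's value estimates come to favor the posture adjustments that keep the robot upright, which is the learning effect the paper obtains with a deep network.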
    Joint model of microblog emotion recognition of emoticons and emotion cause detection based on neural network
    ZHANG Chen, QIAN Tao, JI Donghong
    2018, 38(9):  2464-2468.  DOI: 10.11772/j.issn.1001-9081.2018020481
    As a form of deep text emotion understanding, emotion cause detection has become a hot issue in the field of text emotion analysis, but current research usually regards emotion cause detection and emotion recognition as two independent tasks, which easily leads to error propagation. Considering that emotion cause detection and emotion recognition interact with each other, and that the emoticons in microblog text usually express the emotion of the text, a joint model of emotion cause detection and emoticon-based emotion recognition built on the Bi-directional Long Short-Term Memory Conditional Random Field (Bi-LSTM-CRF) model was proposed. The model formalizes emotion cause detection and emotion recognition as a unified sequence labeling problem, making full use of the interaction between emotion causes and emotions while processing the two tasks simultaneously. The experimental results show that the model achieves an F score of 82.70% in emotion cause detection and 74.74% in emoticon-based emotion recognition; compared with the serial model, the F scores are improved by 5.82% and 17.12% respectively, which means the joint model can effectively reduce error propagation and improve the F scores of both tasks.
    Spam messages recognizing method based on word embedding and convolutional neural network
    LAI Wenhui, QIAO Yupeng
    2018, 38(9):  2469-2476.  DOI: 10.11772/j.issn.1001-9081.2018030643
    Filtering and recognizing spam messages is of great social value and practical significance. Traditional manually designed feature selection methods may lead to data sparseness, insufficient co-occurrence of feature information and difficulty in feature extraction. To solve these problems, a spam message recognition method based on word embeddings and a convolutional neural network was proposed. Firstly, the skip-gram model of word2vec was used to train, on the Wiki Chinese corpus, the embedding of each word in the short message dataset, and the two-dimensional feature matrix representing a short message was composed of the embeddings of its words. Then, the feature matrix was used as the input to the convolutional neural network: multi-scale short message features were extracted by convolution kernels of different sizes in the convolution layer, and the 1-max pooling strategy was used to obtain the local optimal features. Finally, the fusion feature vector composed of the local optimal features was fed into a softmax classifier to obtain the classification results. Experiments were performed on 100000 short messages. The experimental results show that the recognition accuracy of the convolutional neural network model reaches 99.5%, which is 2.4% to 5.1% higher than that of traditional machine learning models using the same feature extraction method, while the recognition accuracy of each model remains above 94%.
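The pipeline of "embedding matrix → convolution over word windows → 1-max pooling" can be sketched without any deep learning library. In this toy version the learned convolution filter is replaced by a plain sum over the window, and the three-dimensional embeddings and vocabulary are invented; only the data flow matches the method described above.

```python
def conv_1max(matrix, kernel_size):
    """Slide a window of `kernel_size` rows (words) over the embedding matrix,
    score each window by the sum of its entries (a stand-in for a learned
    filter response), and keep only the maximum score (1-max pooling)."""
    scores = []
    for i in range(len(matrix) - kernel_size + 1):
        window = matrix[i:i + kernel_size]
        scores.append(sum(sum(row) for row in window))
    return max(scores)

# Toy word embeddings: one row per word, embedding dimension 3.
emb = {"win": [0.9, 0.1, 0.8], "free": [0.7, 0.9, 0.6], "hello": [0.1, 0.0, 0.2]}
message = ["win", "free", "hello"]
matrix = [emb[w] for w in message]     # 2-D feature matrix of the message
feature = conv_1max(matrix, kernel_size=2)
print(round(feature, 2))               # 4.0 (strongest 2-word window)
```

The paper applies several kernel sizes in parallel and concatenates the pooled values into the fusion feature vector passed to softmax.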
    Maritime cooperative search planning based on memory bank particle swarm optimization
    LYU Jinfeng, ZHAO Huaici
    2018, 38(9):  2477-2482.  DOI: 10.11772/j.issn.1001-9081.2018030554
    Maritime search tasks are usually completed by multiple facilities. For the maritime cooperative search planning problem, a Memory Bank Particle Swarm Optimization (MBPSO) algorithm was proposed, which employs both a combinatorial optimization strategy and a continuous optimization strategy. The candidate solutions and memory bank for every single facility were constructed first, and new candidate solutions were generated based on memory consideration and random selection. Then the memory bank was updated based on a lattice method, in which at most one candidate solution per lattice is stored in the memory bank; on this basis, the diversity of the solutions in the memory bank is ensured and an effective global search is performed. Finally, initial cooperative search plans were generated by randomly combining candidate solutions in the memory bank, and, based on the Particle Swarm Optimization (PSO) strategy, an effective local search was performed around the high-quality solutions. Experimental results show that the time consumed by the proposed algorithm is short, the lowest variance is acquired, and the success probability can be increased by 1% to 5%. The proposed algorithm can be applied to make maritime cooperative search plans effectively.
    Deep automatic sleep staging model using synthetic minority technique
    JIN Huanhuan, YIN Haibo, HE Lingna
    2018, 38(9):  2483-2488.  DOI: 10.11772/j.issn.1001-9081.2018020440
    Since currently available sleep electroencephalogram datasets for sleep staging are all class-imbalanced small datasets, it is hard to achieve ideal staging results by directly transferring deep learning models to them. A deep automatic sleep staging model for class-imbalanced small datasets was proposed, addressing both data oversampling and model training optimization. Firstly, the Modified Synthetic Minority Oversampling TEchnique (MSMOTE) was improved from the perspective of reducing the decision region, and the improved technique was applied to generate minority-class samples for the original datasets. Then, the reconstructed class-balanced datasets were used to pre-activate the sleep staging model. A 15-fold cross-validation experiment showed an overall classification accuracy of 86.73% and a macro-averaged F1-score of 81.70%. The F1 value of the smallest class increased from 45.16% to 53.64% when the model was pre-activated with the datasets reconstructed by the improved MSMOTE. In conclusion, the model realizes end-to-end learning from raw sleep electroencephalogram signals, achieves higher classification efficiency than recent advanced research, and is suitable for portable sleep monitors working in conjunction with remote servers.
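The core interpolation step shared by SMOTE and its MSMOTE variant is simple to sketch: a synthetic minority sample is placed on the line segment between a minority point and one of its minority-class neighbors. The sketch below shows only that step with invented 2-D points; MSMOTE (and the paper's improvement that shrinks the decision region) additionally filters which neighbors are eligible.

```python
import random

def smote_sample(x, neighbor, rng):
    """Synthesize a minority-class sample between x and a minority neighbor:
    x + gap * (neighbor - x), with gap drawn uniformly from [0, 1)."""
    gap = rng.random()
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]

rng = random.Random(0)                 # fixed seed for reproducibility
x, neighbor = [1.0, 1.0], [3.0, 2.0]
synthetic = smote_sample(x, neighbor, rng)
# The synthetic point always lies inside the box spanned by the two originals.
assert all(min(a, b) <= s <= max(a, b) for a, b, s in zip(x, neighbor, synthetic))
print(synthetic)
```

Repeating this for many minority points rebalances the class distribution before the staging model is pre-activated.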
    Image classification based on multi-layer non-negativity and locality Laplacian sparse coding
    WAN Yuan, ZHANG Jinghui, WU Kefeng, MENG Xiaojing
    2018, 38(9):  2489-2494.  DOI: 10.11772/j.issn.1001-9081.2018020501
    Focused on the limitation of single-layer structures on image feature learning ability, a deep architecture based on sparse representation of image blocks was proposed, namely the Multi-layer Laplacian Sparse Coding method incorporating Locality and non-negativity (MLLSC). Each image was divided uniformly into blocks and SIFT (Scale-Invariant Feature Transform) features were extracted from each image block. In the sparse coding stage, locality and non-negativity constraints were added to the Laplacian sparse coding optimization function, and dictionary learning and sparse coding were conducted at the first and second levels, respectively. To remove redundant features, Principal Component Analysis (PCA) dimensionality reduction was performed before the second layer of sparse coding. Finally, a multi-class linear SVM (Support Vector Machine) was adopted for image classification. The experimental results on four standard datasets show that MLLSC has efficient feature expression ability and can capture deeper feature information of images. Compared with single-layer algorithms, the accuracy of the proposed algorithm is improved by 3% to 13%; compared with multi-layer sparse coding algorithms, the accuracy is improved by 1% to 2.3%. The effects of different parameters were also illustrated, which fully demonstrates the effectiveness of the proposed algorithm in image classification.
    End-to-end Chinese speech recognition system using bidirectional long short-term memory networks and weighted finite-state transducers
    YAO Yu, RYAD Chellali
    2018, 38(9):  2495-2499.  DOI: 10.11772/j.issn.1001-9081.2018020402
    To address the unreasonable conditional-independence assumptions made by the Hidden Markov Model (HMM) in speech recognition, the sequence modeling ability of recurrent neural networks was further studied and an acoustic model based on Bidirectional Long Short-Term Memory (BLSTM) neural networks was proposed. The training criterion based on Connectionist Temporal Classification (CTC) was successfully applied to acoustic model training, and an end-to-end Chinese speech recognition system that does not rely on HMM was built. Meanwhile, a speech decoding method based on the Weighted Finite-State Transducer (WFST) was designed to effectively solve the problem that the lexicon and language model are difficult to integrate into the decoding process. The experimental results show that, compared with the traditional GMM-HMM system and the hybrid DNN-HMM system, the end-to-end system not only significantly reduces the recognition error rate but also significantly improves the decoding speed, indicating that the proposed acoustic model can effectively enhance model discrimination and optimize the system structure.
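The many-to-one mapping at the heart of CTC, which lets the BLSTM output frame-level paths without frame-level alignments, is small enough to show directly: merge repeated symbols, then drop the blank symbol. The blank character "-" and the example path below are illustrative.

```python
def ctc_collapse(path, blank="-"):
    """CTC's many-to-one mapping B: merge consecutive repeated symbols in a
    frame-level path, then remove blanks, yielding the label sequence."""
    out = []
    prev = None
    for sym in path:
        if sym != prev and sym != blank:
            out.append(sym)
        prev = sym
    return "".join(out)

print(ctc_collapse("--hh-e--ll-llo--"))  # hello
print(ctc_collapse("aa--a"))             # aa  (blank separates repeated labels)
```

CTC training sums the probabilities of all frame paths that collapse to the transcript, which is what removes the need for an HMM alignment stage.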
    Data governance collaborative method based on blockchain
    SONG Jundian, DAI Bingrong, JIANG Liwen, ZHAO Yao, LI Chao, WANG Xiaoqiang
    2018, 38(9):  2500-2506.  DOI: 10.11772/j.issn.1001-9081.2018030594
    To solve the problems of inconsistent data standards, uneven data quality, and compromised data security and privacy in the current data governance process, a new blockchain-based data governance collaboration method, which integrates the multi-party cooperation, security and reliability characteristics of blockchain, was proposed and applied to the construction of data standards, the protection of data security, and the control of data sharing processes. Based on data governance requirements and blockchain characteristics, a collaborative data governance model was formed, and a multi-party collaborative data standard process, an efficient data standard construction and update mechanism, and secure data sharing and access control were then developed, so that the blockchain-based data governance collaboration method could improve the efficiency and security of data standardization work. The experimental and analysis results show that the proposed method significantly improves the efficiency of standard-term application compared with the traditional data standard construction method; in the big data environment in particular, the application of smart contracts improves time efficiency. The distributed storage of the blockchain provides a powerful basis and guarantee for system security, user behavior traceability and auditing. The method provides a good application demonstration for data governance and a reference for the industry's metadata management, data standard sharing and application.
    Big data correlation mining algorithm based on factorial design
    TANG Xiaochuan, LUO Liang
    2018, 38(9):  2507-2510.  DOI: 10.11772/j.issn.1001-9081.2018020460
    Focused on the issue of dimensionality reduction in high-dimensional big data, a feature selection algorithm based on statistical factorial design, named Full Factorial Design (FFD), was proposed. Firstly, the factor effect of the factorial design was used to measure the correlation between features and the target variable; secondly, a divide-and-conquer algorithm for finding the optimal factorial design for a given dataset was proposed; thirdly, in order to solve the problem that traditional experimental design requires manual execution of experiments, a data-driven approach was proposed to automatically search the response values for the factorial design from the input dataset; finally, the factor effects were calculated based on the design matrix and the average response values, and the features and interactions were sorted by their factor effects so that the significant features and interactions could be obtained. The experimental results show that the average classification error rate of FFD is 2.95, 3.33 and 6.62 percentage points lower than that of Mutual Information Maximisation (MIM), Joint Mutual Information Maximisation (JMIM) and ReliefF, respectively. Therefore, FFD can effectively identify significant features and interactions that are highly correlated with the target variable in real-world datasets.
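The factor effect that FFD uses to rank features is the standard two-level factorial main effect: the mean response at the factor's high level minus the mean at its low level. The sketch below computes it for an invented 2^2 design; the response values are made up, and FFD's divide-and-conquer design search and data-driven response lookup are not shown.

```python
def main_effect(design_column, responses):
    """Main effect of one factor in a two-level design: mean response at the
    high level (+1) minus mean response at the low level (-1)."""
    high = [y for lvl, y in zip(design_column, responses) if lvl == 1]
    low = [y for lvl, y in zip(design_column, responses) if lvl == -1]
    return sum(high) / len(high) - sum(low) / len(low)

# 2^2 full factorial design matrix for factors A and B, with response y.
A = [-1, 1, -1, 1]
B = [-1, -1, 1, 1]
y = [10.0, 20.0, 12.0, 26.0]
print(main_effect(A, y), main_effect(B, y))  # 12.0 4.0 -> A matters more
```

Interaction effects are computed the same way using the element-wise product of two design columns, which is how FFD also scores feature interactions.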
    Parameter-free clustering algorithm based on Laplace centrality and density peaks
    QIU Baozhi, CHENG Luan
    2018, 38(9):  2511-2514.  DOI: 10.11772/j.issn.1001-9081.2018010177
    In order to solve the problem of manually selecting cluster centers in clustering algorithms, a Parameter-free Clustering Algorithm based on Laplace centrality and density peaks (ALPC) was proposed. Laplace centrality was used to measure the centrality of objects, and a normal-distribution probability statistical method was used to determine the clustering centers, solving the problem that clustering algorithms rely on empirical parameters and manually determined cluster centers. Each object was then assigned to the corresponding cluster center according to its distance from the center. The experimental results on synthetic and UCI datasets show that the new algorithm can not only automatically determine cluster centers without any prior parameters, but also achieves better results with higher accuracy than the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, the clustering by fast search and find of Density Peaks (DPC) algorithm, and the Laplace centrality Peaks Clustering (LPC) algorithm.
    Preference feature extraction based on Nyström method
    YANG Meijiao, LIU Jinglei
    2018, 38(9):  2515-2522.  DOI: 10.11772/j.issn.1001-9081.2018020296
    To solve the problem of low feature extraction efficiency in movie rating data, a Nyström method combined with QR decomposition was proposed. Firstly, landmark points were sampled using an adaptive method; then QR decomposition of the internal matrix was performed, and the decomposed matrix was recombined with the internal matrix for eigendecomposition. The approximation quality of the Nyström method depends closely on the number and choice of the selected landmark points, so the landmark points were selected to preserve similarity after sampling: the adaptive sampling method ensures the accuracy of the approximation, while QR decomposition ensures the numerical stability of the matrix and improves the accuracy of preference feature extraction. The higher the accuracy of preference feature extraction, the more stable the recommendation system and the more accurate its recommendations. Finally, a feature extraction experiment was performed on a real dataset of audience movie ratings containing 480189 users and 17770 movies. The experimental results show that, when extracting the same number of landmark points, the accuracy and efficiency of the improved Nyström method are improved to a certain degree, and the time complexity is reduced from the original O(n^3) to O(nc^2) (c<<n) compared with pre-sampling; compared with the standard Nyström method, the error is controlled below 25%.
    Regularized matrix decomposition recommendation model integrating social networks and interest correlation
    WEN Kai, ZHU Chuanliang
    2018, 38(9):  2523-2528.  DOI: 10.11772/j.issn.1001-9081.2018030683
    In view of the fact that users' preference and social interaction data are very sparse, and that users tend to prefer products recommended by friends over those recommended by distrusted users, a regularized matrix factorization recommendation algorithm integrating social networks and interest preference similarity was proposed. First of all, to address the sparsity of social relation data, the global and local topological characteristics of the network were used to extract trust and distrust matrices between users, respectively. Secondly, a method for calculating the interest preference similarity between users was defined. Finally, in the process of matrix factorization, the trust matrix, the distrust matrix and the interest correlation were taken into consideration together to make recommendations for users. Experiments show that this method is superior to other regularized recommendation methods: compared with the basic matrix factorization model and the SocialMF, SoRec, TrustMF, CTRPMF and RecSSN algorithms, the proposed algorithm reduces the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) by 1.1% to 9.5% and 2% to 10.1% respectively, effectively improving the recommendations.
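The shape of a socially regularized matrix factorization update can be sketched with one stochastic gradient step. This toy uses invented 2-D latent vectors, a single rating, and only a trust (pull-toward-friend) term; the paper's distrust term and interest-similarity weighting are omitted, and the learning rate and regularization constants are arbitrary.

```python
def sgd_step(p_u, q_i, p_friend, r_ui, lr=0.1, lam=0.1, beta=0.1):
    """One SGD step on the per-rating loss
    (r_ui - p_u.q_i)^2 + lam*(|p_u|^2 + |q_i|^2) + beta*|p_u - p_friend|^2,
    where the last term pulls the user toward a trusted friend's factors."""
    err = r_ui - sum(a * b for a, b in zip(p_u, q_i))
    new_p = [pu + lr * (err * qi - lam * pu - beta * (pu - pf))
             for pu, qi, pf in zip(p_u, q_i, p_friend)]
    new_q = [qi + lr * (err * pu - lam * qi) for pu, qi in zip(p_u, q_i)]
    return new_p, new_q

p_u, q_i, p_friend = [0.1, 0.2], [0.3, 0.1], [0.5, 0.4]
p_u2, q_i2 = sgd_step(p_u, q_i, p_friend, r_ui=4.0)
err_before = abs(4.0 - sum(a * b for a, b in zip(p_u, q_i)))
err_after = abs(4.0 - sum(a * b for a, b in zip(p_u2, q_i2)))
assert err_after < err_before  # the step reduces the rating error
```

A distrust term would enter with the opposite sign, pushing the user's factors away from those of distrusted users.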
    Cross-media retrieval algorithm based on semantic correlation and topological relationship
    DAI Gang, ZHANG Hong
    2018, 38(9):  2529-2534.  DOI: 10.11772/j.issn.1001-9081.2018030553
    Focused on how to mine the intrinsic correlation between feature data with the same semantics in different modalities, a novel cross-media retrieval algorithm based on Semantic Correlation and Topological Relationship (SCTR) was proposed. On one hand, the potential correlation between multimedia data with the same semantics was exploited to construct a multimedia semantic correlation hypergraph; on the other hand, the topological relationship of multimedia data was mined to build a multimedia nearest-neighbor relationship hypergraph. The main idea is to learn an optimal projection matrix for each media type by combining the semantic correlation and topological relationship of multimedia data, and then to project the feature vectors of the multimedia data into a common space to achieve cross-media retrieval. On the XMedia dataset, the average precision of the proposed algorithm over multiple retrieval tasks is 51.73%, which is 22.73, 15.23, 11.7 and 9.11 percentage points higher than that of the heterogeneous metric learning with Joint Graph Regularization (JGRHML), Cross Modality Correlation Propagation (CMCP), Heterogeneous Similarity measure with Nearest Neighbors (HSNN) and Joint Representation Learning (JRL) algorithms, respectively. The experimental results prove from many aspects that the proposed algorithm effectively improves the average precision of cross-media retrieval.
    Quality evaluation model of network operation and maintenance based on correlation analysis
    WU Muyang, LIU Zheng, WANG Yang, LI Yun, LI Tao
    2018, 38(9):  2535-2542.  DOI: 10.11772/j.issn.1001-9081.2018020412
    Traditional network operation and maintenance evaluation methods have two problems: first, they depend too heavily on domain experts' experience in indicator selection and weight assignment, making it difficult to obtain accurate and comprehensive assessment results; second, network operation and maintenance quality involves data from multiple manufacturers or devices in different formats and types, and a surge of users brings huge amounts of data. To solve these problems, an indicator selection method based on correlation was proposed. The method focuses on the indicator selection step of the evaluation process: by comparing the strength of correlation between the data series of indicators, the original indicators are classified into different clusters, and the key indicator in each cluster is then selected to construct a key indicator system. Data processing and weight determination methods requiring no human participation were also incorporated into the network operation and maintenance quality evaluation model. In the experiments, the indicators selected by the proposed method cover 72.2% of the manually selected indicators, with an information overlap rate 31% lower than that of the manual indicators. The proposed method can effectively reduce human involvement and achieves higher accuracy in alarm prediction.
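Comparing the correlation strength between indicator data series, as described above, typically reduces to a pairwise correlation coefficient. The sketch below uses Pearson correlation on invented monitoring series; the paper's clustering of indicators and choice of representative per cluster are not shown, only the correlation test that would group two near-duplicate indicators together.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two indicator data series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

cpu = [10, 20, 30, 40]
load = [11, 19, 31, 41]   # nearly duplicates cpu -> same cluster, keep one
errs = [5, 1, 6, 2]       # weakly related -> separate cluster
print(round(pearson(cpu, load), 3))  # close to 1
print(round(pearson(cpu, errs), 3))  # close to 0
```

Keeping one indicator per highly correlated cluster is what lowers the information overlap rate reported in the experiments.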
    Homomorphic MACs for arithmetic circuits on cloud environment
    BAI Ping, ZHANG Wei, WANG Xu'an
    2018, 38(9):  2543-2548.  DOI: 10.11772/j.issn.1001-9081.2018020454
    Focused on the low efficiency of verifying data on cloud servers, and to ensure the correct execution of users' commands and high-efficiency validation, a homomorphic MAC scheme for arithmetic circuits in the cloud environment was provided. Precise verification of search results was achieved in the following way: firstly, a label generation algorithm was used to represent a validation label with a polynomial; secondly, a transformation algorithm was used to transform the validation label into a homomorphic form, while homomorphic decryption was used to reduce the dimensionality of the label; finally, a verification algorithm was used to verify the search result. Moreover, the scheme supports an unlimited number of multiplicative homomorphic operations without increasing the size of the verification labels, and is efficient. The drawback of the scheme is that its computational complexity increases with the number of input bits of the circuit.
    Stealth download hijacking vulnerability of Android application package
    ZHU Zhu, FU Xiao, WANG Zhijian
    2018, 38(9):  2549-2553.  DOI: 10.11772/j.issn.1001-9081.2018020449
    During the distribution and downloading of Android application packages, download hijacking attacks are common. Sites can use traffic analysis to detect whether they are under regular download hijacking attacks, but stealth download hijacking attacks cannot be discovered by such methods. Based on the discovery and analysis of an actual case, a stealth download hijacking vulnerability of Android application packages was presented. Attackers exploit this vulnerability to implement stealth download hijacking by deploying bypass devices between downloaders and publishers, and victim sites can hardly notice it using current methods. The cause, influence and mechanism of the vulnerability were discussed, and a solution combining distributed detection, centralized analysis and active prevention was put forward.
    Ciphertext retrieval ranking method based on counting Bloom filter
    LI Yong, XIANG Zhongqi
    2018, 38(9):  2554-2559.  DOI: 10.11772/j.issn.1001-9081.2018020429
    Ciphertext retrieval is difficult in cloud computing: existing searchable encryption schemes have low time efficiency, their file retrieval indexes do not support updates, and their retrieval results cannot be ranked accurately. To solve these problems, firstly, a file retrieval index was constructed based on the counting Bloom filter by hash-mapping file keywords to the counting Bloom filter index vector, realizing ciphertext retrieval by keywords while supporting updates of the ciphertext retrieval index. Secondly, because the counting Bloom filter has no semantic function and thus cannot rank retrieval results by the relevance scores of keywords, the relevance scores were computed by using a keyword frequency matrix and the Term Frequency-Inverse Document Frequency (TF-IDF) model, achieving ranking of the retrieval results by relevance score. Finally, theoretical and experimental performance analyses show that the proposed method is secure, updatable, sortable and efficient.
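The property that makes a counting Bloom filter suitable for an updatable index is that each hash position holds a counter rather than a bit, so deletions decrement instead of clearing. A minimal sketch, with illustrative size and hash-count parameters (the paper's TF-IDF ranking layer is not shown):

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter: k hash functions each increment a counter on
    insert and decrement it on delete, so the index supports updates."""
    def __init__(self, size=64, k=3):
        self.counters = [0] * size
        self.size, self.k = size, k

    def _positions(self, word):
        # Derive k positions by salting one cryptographic hash; any k
        # independent hash functions would do.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{word}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, word):
        for pos in self._positions(word):
            self.counters[pos] += 1

    def remove(self, word):
        for pos in self._positions(word):
            self.counters[pos] -= 1

    def query(self, word):
        return all(self.counters[pos] > 0 for pos in self._positions(word))

cbf = CountingBloomFilter()
cbf.add("cloud")
assert cbf.query("cloud")
cbf.remove("cloud")          # index update a plain Bloom filter cannot do
assert not cbf.query("cloud")
```

As with any Bloom filter, queries may return false positives but never false negatives, which is why the scheme layers TF-IDF scores on top to rank the candidate results.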
    Dynamic task dispatching strategy for stream processing based on flow network
    LI Ziyang, YU Jiong, BIAN Chen, LU Liang, PU Yonglin
    2018, 38(9):  2560-2567.  DOI: 10.11772/j.issn.1001-9081.2017122910
    Concerning the problem that a sharp increase of the data input rate leads to rising computing latency, which influences the real-time performance of big data stream processing platforms, a dynamic dispatching strategy based on flow networks was proposed and applied to the data stream processing platform Apache Flink. Firstly, a Directed Acyclic Graph (DAG) was transformed into a flow network by defining the capacity and flow of every edge, and a capacity detection algorithm was used to ascertain the capacity value of every edge. Secondly, a maximum flow algorithm was used to acquire the improved network and the optimization path in order to improve the throughput of the cluster when the data input rate increases; meanwhile, the feasibility of the algorithm was proved by evaluating its time and space complexity. Finally, the influence of an important parameter on the algorithm execution was discussed, and recommended parameter values for different types of jobs were obtained by experiments. The experimental results show that the throughput improvement rate of the strategy is higher than 16.12% during increasing phases of the data input rate in different types of benchmarks, compared with the original dispatching strategy of Apache Flink, so the dynamic dispatching strategy efficiently improves the throughput of the cluster under the premise of the task latency constraint.
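The maximum flow computation at the center of this strategy can be illustrated with the textbook Edmonds-Karp algorithm on a tiny invented four-node DAG, where edge capacities stand in for operator throughput limits. This is a generic max-flow sketch, not the paper's capacity detection or Flink integration.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly find a shortest augmenting path by BFS and
    push the bottleneck capacity along it until no path remains."""
    n = len(capacity)
    flow = 0
    residual = [row[:] for row in capacity]
    while True:
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return flow
        # Bottleneck along the found path, then update residual capacities.
        bottleneck = float("inf")
        v = sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Node 0 = data source, node 3 = sink; capacities model per-edge throughput.
cap = [[0, 3, 2, 0],
       [0, 0, 0, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 4
```

The value of the maximum flow bounds the throughput the operator graph can sustain, so augmenting paths indicate where extra task instances should be dispatched.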
    Dynamic multi-subgroup collaborative barebones particle swarm optimization based on kernel fuzzy clustering
    YANG Guofeng, DAI Jiacai, LIU Xiangjun, WU Xiaolong, TIAN Yanni
    2018, 38(9):  2568-2574.  DOI: 10.11772/j.issn.1001-9081.2018030638
    To solve problems such as easily getting trapped in local optima and slow convergence in the BareBones Particle Swarm Optimization (BBPSO) algorithm, a dynamic Multi-Subgroup collaborative Barebones Particle Swarm Optimization based on Kernel Fuzzy Clustering (KFC-MSBPSO) was proposed. Based on the standard BBPSO algorithm, firstly, the kernel fuzzy clustering method was used to divide the main group into several subgroups that optimize collaboratively to improve search efficiency. Then, a nonlinear dynamic mutation factor was introduced to control subgroup mutation probabilities according to the number of particles and the convergence conditions; the main group was reconstructed by means of particle mutation, improving exploration ability. A main-group particle absorption strategy and a subgroup merge strategy were proposed to strengthen the information exchange between the main group and the subgroups and enhance the stability of the algorithm. Finally, a subgroup reconstruction strategy combining the optimal solutions was used to adjust the iterations of subgroup reconstruction. The results of experiments on six benchmark functions, such as Sphere, show that the accuracy of the KFC-MSBPSO algorithm is improved by at least 11.1% compared with the classical BBPSO algorithm, the Opposition-Based Barebones Particle Swarm Optimization (OBBPSO) algorithm and other improved algorithms; it achieves the best mean value in 83.33% of the high-dimensional cases and has a faster convergence rate. This indicates that the KFC-MSBPSO algorithm has good search performance and robustness, and can be applied to the optimization of high-dimensional complex functions.
    Two-stage hardware acceleration resource deployment mechanism for virtual network function
    FAN Hongwei, HU Yuxiang, LAN Julong
    2018, 38(9):  2575-2580.  DOI: 10.11772/j.issn.1001-9081.2018020488
    It is a hot research topic to address the low performance of Virtual Network Functions (VNF) in the SDN/NFV (Software Defined Networking/Network Function Virtualization) architecture by designing hardware acceleration mechanisms. Once hardware acceleration resources are introduced for VNFs, how to control and deploy them becomes an urgent problem. To solve these problems, a uniform hardware acceleration management architecture based on the accelerator cards of servers and OpenFlow switches was proposed. Based on this architecture, a model of acceleration resource deployment was built, and evaluation indicators for the resource deployment mechanism were proposed by analyzing the impact of acceleration resources on service chain mapping. Finally, a two-stage acceleration resource deployment algorithm was designed. The experimental results show that, compared with the Single-attribute Acceleration Resource Deployment algorithm (SARD) and the Uniform Acceleration Resource Deployment algorithm (UARD), the proposed mechanism optimizes the deployment of acceleration resources, improving the total traffic handled by acceleration resources and the utilization of acceleration resources by 41.4% and 14.5% respectively.
    3D-coverage algorithm based on adjustable radius in wireless sensor network
    DANG Xiaochao, SHAO Chenguang, HAO Zhanjun
    2018, 38(9):  2581-2586.  DOI: 10.11772/j.issn.1001-9081.2018020357
    For the coverage problem in 3D Wireless Sensor Networks (WSN), a Three-Dimensional Coverage Algorithm based on Adjustable Radius (3D-CAAR) was proposed. Virtual force was used to achieve a uniform distribution of nodes in the WSN; at the same time, the distances between sensor nodes and the target points in the covered area were determined by the radius-adjustable coverage mechanism of the sensor nodes. An energy consumption threshold was introduced to enable nodes to adjust their radii according to their own situations, thus reducing overall network energy consumption and improving node utilization. Finally, experimental comparison with the traditional ECA3D (Exact Covering Algorithm in Three-Dimensional space) and APFA3D (Artificial Potential Field Algorithm in Three-Dimensional space) shows that 3D-CAAR can effectively solve the problem of target node coverage in sensor networks.
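    The core coverage test in such radius-adjustable schemes reduces to checking target points against per-node sensing radii. A minimal sketch, assuming simple Euclidean sensing in 3D (the virtual-force distribution and energy threshold of 3D-CAAR are not modeled here):

```python
import math

def covered(targets, sensors):
    """Coverage ratio: the fraction of 3D target points lying within the
    sensing radius of at least one node.  Each sensor is ((x, y, z), radius),
    so per-node (adjustable) radii are supported directly."""
    def hit(t):
        return any(math.dist(t, centre) <= r for (centre, r) in sensors)
    return sum(1 for t in targets if hit(t)) / len(targets)
```

    A deployment algorithm would call such a predicate repeatedly while shrinking radii toward the smallest values that keep the required coverage ratio.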
    Routing policy based on virtual currency in mobile wireless sensor networks
    WANG Guoling, YANG Wenzhong, ZHANG Zhenyu, XIA Yangbo, YIN Yabo, YANG Huiting
    2018, 38(9):  2587-2592.  DOI: 10.11772/j.issn.1001-9081.2018020446
    For the routing problem of nodes in mobile wireless sensor networks under a random-movement model, a low-energy-consumption routing strategy named DTVC (Data Transmission based on Virtual Currency) was proposed. When two nodes met, the buyer and the seller determined the price of a data message and selected the relay node according to node attributes and data message attributes. To improve network performance, the number of replicas of each data message was controlled according to node type, and the data messages in the queue were sorted by delay tolerance. For each data message, the nodes in the network were divided into source nodes and relay nodes, and only the source node could copy the message; the smaller the delay tolerance, the higher the priority. To reduce energy consumption in the network, data messages in the storage queue that had already been transmitted successfully were deleted according to the message broadcast by the sink node. Simulation results on Matlab show that the data delivery rate of DTVC is increased by at least 2.5%, and the average number of replicas is reduced by at least 25%, compared with those of FAD (the message Fault tolerance-based Adaptive data Delivery scheme), FLDEAR (Fuzzy-Logic based Distance and Energy Aware Routing protocol) and a routing algorithm based on an energy consumption optional evolution mechanism.
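    The queue-ordering rule described above (smaller delay tolerance means higher transmission priority) can be sketched as follows; the message fields are hypothetical stand-ins for those used in DTVC:

```python
def order_queue(messages):
    """Sort buffered messages so that the message with the smallest delay
    tolerance (the most urgent one) is transmitted first, matching the
    priority rule described in the abstract.  Each message is a dict with
    an illustrative 'delay_tolerance' field in seconds."""
    return sorted(messages, key=lambda m: m["delay_tolerance"])
```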
    Design and implementation of embedded multi-gateway system based on 6LoWPAN
    QU Qiji, ZHENG Lin
    2018, 38(9):  2593-2597.  DOI: 10.11772/j.issn.1001-9081.2018020470
    6LoWPAN (IPv6 over Low power Wireless Personal Area Network) is a technology that realizes IP connectivity for wireless sensor networks based on the IEEE 802.15.4 standard. Under the existing single-DODAG (Destination Oriented Directed Acyclic Graph) protocol, network congestion and high energy consumption arise around the single border router. An embedded 6LoWPAN multi-gateway protocol and system was therefore designed. The embedded gateway node has a dual-mode communication function, which realizes the physical connection between the WSN (Wireless Sensor Network) and the fixed IPv6 network. The dual-mode gateway implements uplink and downlink routing by establishing an IP tunnel between itself and the 6LoWPAN root border router. By supplementing and optimizing the existing 6LoWPAN protocol standard, the dual-mode node gains intra-network and inter-network routing capabilities, achieving a multi-gateway architecture and multi-path routing. An optimized multi-point interworking topology and a traffic sharing algorithm were used to achieve effective load balance between uplink and downlink links and to reduce the energy consumption of multi-hop routing. Experiments were carried out on multi-gateway platforms and single-gateway systems. The results show that the proposed scheme can achieve 6LoWPAN multi-gateway Ethernet access, reduce node transmission delay and packet loss rate, and improve overall network throughput.
    Indoor localization algorithm based on geomagnetic field/WiFi/PDR of smartphone
    RUAN Kun, WANG Mei, LUO Liyan, XIONG Luqi, SONG Xiyu
    2018, 38(9):  2598-2602.  DOI: 10.11772/j.issn.1001-9081.2018020368
    Focusing on the repetitiveness of geomagnetic fingerprints and the accumulated error of Pedestrian Dead Reckoning (PDR), a fusion indoor localization method using multiple smartphone sensors was proposed. Firstly, WiFi and the RANdom SAmple Consensus (RANSAC) algorithm were used to find the user's initial position. Then the step length was calculated using the smartphone's accelerometer, and turns were detected by the gyroscope. Finally, the PDR estimate was corrected with the geomagnetic field using a map-constrained adaptive Particle Filter (PF), yielding a high-precision indoor localization method. The simulation results show that the proposed method can effectively overcome the accumulated error of PDR and the repetitiveness of geomagnetic values, improve positioning accuracy and reduce energy consumption.
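    The dead-reckoning update at the heart of PDR can be sketched as below. The step-length model shown is the common Weinberg approximation, given here as an illustrative assumption; the paper's own step estimator may differ.

```python
import math

def weinberg_step(a_max, a_min, k=0.5):
    """Weinberg step-length model: step ~= K * (a_max - a_min)^(1/4), where
    a_max/a_min are the per-step acceleration extrema and K is a per-user
    calibration constant (0.5 here is purely illustrative)."""
    return k * (a_max - a_min) ** 0.25

def pdr_step(x, y, step_length, heading_rad):
    """One dead-reckoning update: advance the previous (x, y) position by
    the estimated step length along the gyroscope-derived heading,
    measured clockwise from the +y (north) axis."""
    return (x + step_length * math.sin(heading_rad),
            y + step_length * math.cos(heading_rad))
```

    The accumulated error the abstract targets comes precisely from this recursion: each step's heading and length error propagates into all later positions, which is why a map-constrained particle filter is used to pull the track back onto the geomagnetic map.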
    Ensemble smartphone indoor localization algorithm based on wireless signal and image analysis
    HOU Songlin, YANG Fan, ZHONG Yong
    2018, 38(9):  2603-2609.  DOI: 10.11772/j.issn.1001-9081.2018030557
    Since the performance of smartphone-based personal indoor localization is still far from satisfactory in terms of accuracy, cost, etc., a two-step filtering indoor localization algorithm fusing Wi-Fi fingerprints and images, implemented on devices such as smartphones, was proposed. The algorithm consists of an offline stage and an online stage. In the offline stage, Wi-Fi fingerprints were collected, a fingerprint library sampled at different positions in the coordinate system was constructed, and photos were taken to extract image features. In the online stage, the first filtering step determined the possible area where the user currently is by using Wi-Fi information captured in real time; then a distance compensation algorithm was proposed to extract features of the real-time image taken by the user and determine the exact position. Experimental results show that this algorithm can effectively improve localization precision compared with traditional Wi-Fi and image based localization methods in environments with few APs (Access Points) and similar layouts, and is thus suitable for general localization or LBS (Location-Based Service) applications.
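    A common baseline for the Wi-Fi filtering step is k-nearest-neighbour fingerprint lookup, sketched below under the assumption of Euclidean distance between RSSI vectors; the paper's two-step filter would then refine such a candidate area with image features.

```python
import math

def knn_locate(fingerprints, observed, k=3):
    """k-nearest-neighbour fingerprint lookup: average the coordinates of
    the k reference points whose stored RSSI vectors are closest to the
    observed scan.  Each fingerprint is {'pos': (x, y), 'rssi': [dBm, ...]}
    (field names are illustrative)."""
    def dist(vec):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(vec, observed)))
    nearest = sorted(fingerprints, key=lambda fp: dist(fp["rssi"]))[:k]
    n = len(nearest)
    return (sum(fp["pos"][0] for fp in nearest) / n,
            sum(fp["pos"][1] for fp in nearest) / n)
```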
    Joint power controlled resource allocation scheme for device-to-device communication in heterogeneous cellular networks
    LI Zhongjie, XIE Dongpeng
    2018, 38(9):  2610-2615.  DOI: 10.11772/j.issn.1001-9081.2018020351
    To solve the interference caused by Device-to-Device (D2D) users and small-cell users reusing macro-cell user resources in heterogeneous cellular networks, a joint power-controlled resource allocation scheme was proposed. Firstly, the optimal transmit power of each D2D user and small-cell user reusing macro-cell user channel resources was derived from the system interference model, subject to user Signal to Interference and Noise Ratio (SINR) and transmit power constraints. Secondly, the users' channel selection was formulated as a two-sided matching problem between users and channels, and a stable matching was obtained using the Gale-Shapley algorithm. Finally, taking this matching as the initial condition, the allocation was further optimized by an exchange search algorithm. The simulation results show that the total system capacity and energy efficiency of the proposed scheme reach 93.62% and 92.14% of the optimal solution. Compared with the stochastic resource allocation scheme, the allocation scheme without power control and exchange search, and the allocation scheme with power control but without exchange search, the system capacity increases by 48.29%, 15.97% and 4.8% on average, respectively, and the system energy efficiency increases by 62.72%, 44.48% and 4.45% on average, respectively. The proposed scheme achieves an approximately optimal total system capacity and effectively improves frequency utilization and energy efficiency.
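    The Gale-Shapley deferred-acceptance procedure used for the user-channel matching can be sketched generically as follows; the preference lists here are illustrative placeholders, whereas the paper derives them from SINR and power constraints.

```python
def gale_shapley(user_prefs, channel_prefs):
    """Deferred-acceptance matching between users and channels.
    user_prefs:    {user: [channels, best first]}
    channel_prefs: {channel: [users, best first]}
    Returns a stable one-to-one assignment {user: channel}."""
    rank = {c: {u: i for i, u in enumerate(p)} for c, p in channel_prefs.items()}
    free = list(user_prefs)            # users still proposing
    next_pick = {u: 0 for u in user_prefs}
    holder = {}                        # channel -> user it currently holds
    while free:
        u = free.pop()
        c = user_prefs[u][next_pick[u]]
        next_pick[u] += 1
        if c not in holder:
            holder[c] = u              # channel was free: accept
        elif rank[c][u] < rank[c][holder[c]]:
            free.append(holder[c])     # channel prefers u: bump old user
            holder[c] = u
        else:
            free.append(u)             # rejected: u proposes again later
    return {u: c for c, u in holder.items()}
```

    The resulting matching is stable (no user-channel pair would both prefer each other to their assignment), which is exactly the property the exchange search then improves upon.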
    Cognitive medium access control protocol optimization design for distributed multi-hop network using single transceiver
    GAO Shijuan, TAN Tongde, ZHU Qingchao
    2018, 38(9):  2616-2620.  DOI: 10.11772/j.issn.1001-9081.2018030676
    A novel optimized model of a distributed multi-hop Cognitive Medium Access Control (C-MAC) protocol with a single transceiver was proposed to handle two restrictions of the MAC protocol in Mobile Ad Hoc NETworks (MANET): the restriction of multiple transceivers and payload imbalance across channels, and the restriction of multi-hop operation and control overhead. Firstly, channel sensing and data transmission in the new C-MAC protocol were realized in the ATIM (Announcement Traffic Indication Message) window and the DATA window respectively, based on the Power Saving Mode (PSM), separating them in the time domain. Secondly, power values were nonuniformly quantized and Gray encoded to reduce the control overhead caused by node mobility. Then, a distributed cooperation mechanism was introduced for multi-hop networking, based on which the channel switching rule was redefined to guarantee fair and balanced channel usage. Finally, metrics such as Channel Vacate Time (CVT), Channel Opening Time (COT), throughput and channel payload time were evaluated by simulation. The results show that the novel protocol can operate without multiple transceivers; the differences in CVT, COT and payload time among channels are reduced by 13 ms, 20 ms and 100 s respectively, and throughput rises by about 1.5%, which together optimize protocol performance with respect to the transceiver restriction, payload balance, throughput and overhead control.
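    Gray encoding, used above after nonuniform quantization of the power value, maps integers so that consecutive codes differ in exactly one bit, which limits the bits that change when the quantized power fluctuates by one level. A minimal sketch of the standard binary-reflected code:

```python
def to_gray(n):
    """Binary-reflected Gray code: adjacent integers map to codewords
    that differ in exactly one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse Gray decoding: XOR-fold the code down to plain binary."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```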
    Random structure based design method for multiplierless IIR digital filters
    FENG Shuaidong, CHEN Lijia, LIU Mingguo
    2018, 38(9):  2621-2625.  DOI: 10.11772/j.issn.1001-9081.2018030572
    Focused on the issue that traditional multiplierless Infinite Impulse Response (IIR) digital filters have fixed structures and poor performance, a random-structure-based design method for multiplierless IIR digital filters was proposed. Stable 2-order subsystems with shifters were used directly to design the multiplierless filter structure. Firstly, a set of encoded multiplierless digital filter structures was created randomly. Then, Differential Evolution with a Successful-Parent-Selecting framework (SPS-DE) was used to optimize the multiplierless filter structure. The proposed method realizes diversified structure design, and SPS-DE effectively balances exploration and exploitation thanks to the Successful-Parent-Selecting framework, achieving good results in the optimization of the multiplierless filter structure. Compared with state-of-the-art design methods, the passband ripple of the designed multiplierless IIR filter is reduced by 43% and the maximum stopband attenuation is decreased by 40.4%. Simulation results show that the multiplierless IIR filter designed by the proposed method meets the structural requirements and performs well.
    Research on factors affecting quality of mobile application crowdsourced testing
    CHENG Jing, GE Luqi, ZHANG Tao, LIU Ying, ZHANG Yifei
    2018, 38(9):  2626-2630.  DOI: 10.11772/j.issn.1001-9081.2018030575
    To solve the problem that the factors influencing crowdsourced testing are complex and diverse, making test quality difficult to assess, a method for analyzing quality-influencing factors based on the Spearman correlation coefficient was proposed. Firstly, potential quality-influencing factors were obtained through the analysis of test platforms, tasks and testers. Secondly, the Spearman correlation coefficient was used to calculate the degree of correlation between each potential factor and test quality, and to screen out the key factors. Finally, multiple stepwise regression was used to establish a linear evaluation relationship between the key factors and test quality. The experimental results show that, compared with the traditional expert evaluation method, the proposed method keeps the evaluation error fluctuation smaller when facing a large number of test tasks. The method can therefore accurately screen out the key factors influencing the quality of mobile application crowdsourced testing.
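    Spearman's coefficient, used above to screen the key factors, is simply the Pearson correlation computed on ranks. A small self-contained sketch with average ranks for ties:

```python
def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the rank vectors,
    with tied values receiving their average rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1                      # extend the run of ties
            avg = (i + j) / 2 + 1           # average rank (1-based)
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

    Because it works on ranks, the coefficient captures any monotone relation between a factor and test quality, not just a linear one, which suits heterogeneous crowdsourcing metrics.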
    Reliability evaluation model for cloud storage systems with proactive fault tolerance
    LI Jing, LIU Dongshi
    2018, 38(9):  2631-2636.  DOI: 10.11772/j.issn.1001-9081.2018020502
    In addition to traditional reactive fault-tolerant technologies, proactive fault tolerance can be used to improve storage system reliability significantly. However, research on the reliability of proactive cloud storage systems is scarce and usually assumes exponentially distributed drive failures. Two reliability state-transfer models were developed for proactive RAID-5 and RAID-6 (Redundant Array of Independent Disks) systems respectively. Based on the models, Monte Carlo simulations were designed to estimate the expected number of data-loss events in proactive RAID-5 and RAID-6 systems within a given time period. The Weibull distribution was used to model time-varying (decreasing, constant, or increasing) disk failure rates, and to express the impact of proactive fault tolerance, operational failures, failure restoration, latent block defects and drive scrubbing on system reliability. The proposed method can help system designers evaluate the impact of different fault-tolerance mechanisms and system parameters on the reliability of cloud storage systems, and thus helps to create highly reliable storage systems.
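    The Monte Carlo idea can be illustrated with a deliberately simplified sketch: draw a Weibull failure time per drive (shape < 1, = 1, > 1 giving decreasing, constant or increasing failure rates) and count trials in which enough drives fail to lose data. This toy version ignores repair, latent defects and scrubbing, all of which the paper's models include.

```python
import random

def raid5_loss_rate(n_drives, shape, scale_hours, horizon_hours,
                    trials, seed=1):
    """Toy Monte Carlo estimate: fraction of trials in which two or more
    drives of an n-drive RAID-5 group fail within the horizon -- a crude
    proxy for a data-loss event when repair is ignored."""
    rng = random.Random(seed)
    loss = 0
    for _ in range(trials):
        fails = sum(
            1 for _ in range(n_drives)
            if rng.weibullvariate(scale_hours, shape) <= horizon_hours
        )
        if fails >= 2:                 # RAID-5 survives only one failure
            loss += 1
    return loss / trials
```

    A full model would instead simulate repair and proactive-replacement events on the state-transfer diagram; the sketch only shows where the Weibull sampling enters.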
    Software defect number prediction method based on data oversampling and ensemble learning
    JIAN Yiheng, YU Xiao
    2018, 38(9):  2637-2643.  DOI: 10.11772/j.issn.1001-9081.2018020507
    Predicting the number of defects in software modules helps testers pay more attention to the modules with more defects, thus allocating limited testing resources reasonably. Focusing on the issue that software defect datasets are imbalanced, a method based on oversampling and ensemble learning (abbreviated as SMOTENDEL) for predicting the number of defects was proposed. Firstly, n balanced datasets were obtained by oversampling the original software defect dataset n times. Then, n individual models for predicting the number of defects were trained on the n balanced datasets using regression algorithms. Finally, the n individual models were combined into an ensemble prediction model, which was used to predict the number of defects in a new software module. The experimental results show that SMOTENDEL performs better than the original prediction methods: when using Decision Tree Regression (DTR), Bayesian Ridge Regression (BRR) and Linear Regression (LR) as the individual prediction model, the improvement is 7.68%, 3.31% and 3.38%, respectively.
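    The oversampling step can be illustrated with a minimal SMOTE-style sketch: each synthetic sample is interpolated between a real sample and one of its k nearest neighbours. This is a generic illustration of the idea, not the exact variant used in the paper.

```python
import random

def smote(samples, n_new, k=3, seed=0):
    """Minimal SMOTE-style oversampling for numeric feature tuples: each
    synthetic point lies on the segment between a randomly chosen sample
    and one of its k nearest neighbours (Euclidean)."""
    rng = random.Random(seed)
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for _ in range(n_new):
        base = rng.choice(samples)
        nbrs = sorted((s for s in samples if s is not base),
                      key=lambda s: dist2(base, s))[:k]
        nb = rng.choice(nbrs)
        t = rng.random()               # interpolation fraction in [0, 1)
        out.append(tuple(x + t * (y - x) for x, y in zip(base, nb)))
    return out
```

    Running this n times with different seeds yields the n balanced training sets on which the individual regressors are trained before averaging.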
    Dynamic measurement of Android kernel based on ARM virtualization extension
    LU Zicong, XU Kaiyong, GUO Song, XIAO Jingxu
    2018, 38(9):  2644-2649.  DOI: 10.11772/j.issn.1001-9081.2018010224
    Aiming at the integrity threats to Android systems posed by kernel-level attacks, a method for dynamic measurement of the Android kernel, namely DIMDroid (Dynamic Integrity Measurement of Android), was proposed. Hardware-assisted virtualization was used to isolate the measurement module from the measured Android system. First, the static and dynamic measurement objects were obtained by analyzing the kernel elements that affect kernel integrity while the Android system is running. Secondly, these measurement objects were semantically reconstructed at the measurement layer. Finally, an integrity analysis was performed to determine whether the Android kernel is under attack. At the same time, boot protection based on a hardware trust chain and runtime protection based on memory isolation were applied to ensure the security of DIMDroid itself. The experimental results show that DIMDroid can detect rootkits that break Android kernel integrity in time, and that the performance loss of the method is within an acceptable range.
    Fault injection strategy for network of integrated modular avionics platform
    SUN Yigang, XU Chang, LIU Zhexu
    2018, 38(9):  2650-2654.  DOI: 10.11772/j.issn.1001-9081.2018020401
    The network of an Integrated Modular Avionics (IMA) platform has a complex communication structure. During fault injection testing, it is difficult to select an appropriate test path, and many fault injections are equivalent or invalid. According to the characteristics of the network communication structure of the IMA platform, a new fault injection strategy was proposed. Firstly, according to the real-time and determinism requirements of the IMA platform network, a test path optimization algorithm based on communication links was proposed; optimal test paths were generated to achieve orderly coverage of the IMA platform network test tasks. Secondly, after the test path was determined, a test case auto-generation model was constructed using Colored Petri Net (CPN) modeling; equivalent and invalid faults were eliminated, and the test cases required for each test task in the path were streamlined. The simulation results show that the proposed method requires fewer test runs and less test time than the traditional fault injection strategy, thereby overcoming the disorder and blindness of the traditional strategy and reducing the time cost of testing.
    Binocular camera multi-pose calibration method based on radial alignment constraint algorithm
    YANG Shangkun, WANG Yansong, GUO Hui, WANG Xiaolan, LIU Ningning
    2018, 38(9):  2655-2659.  DOI: 10.11772/j.issn.1001-9081.2018020503
    In binocular stereo vision, the camera needs to be calibrated to obtain its internal and external parameters for 3D measurement or precise positioning of an object. Through a study of the camera model with first-order radial distortion, linear formulas for solving the internal and external parameters were constructed based on the Radial Alignment Constraint (RAC) calibration method. The inclination angle, rotation angle, pitch angle and main distortion elements of the lens were taken into account, correcting the defects of the traditional RAC calibration method, which considers only radial distortion and requires prior values for some parameters. A 3D reconstruction experiment with a multi-pose binocular camera was carried out using the obtained internal and external parameters. The experimental results show that the reprojection error of this calibration method is distributed in [-0.3, 0.3], and that the similarity between the measured trajectory and the actual trajectory is 96%, which helps to reduce the error rate of binocular stereo vision 3D measurement.
    Design of augmented reality navigation simulation system for pelvic minimally invasive surgery based on stereoscopic vision
    GAO Qinquan, HUANG Weiping, DU Min, WEI Mengyu, KE Dongzhong
    2018, 38(9):  2660-2665.  DOI: 10.11772/j.issn.1001-9081.2018020335
    Minimally invasive endoscopic surgery remains challenging due to the complexity of the anatomical location and the limitations of endoscopic vision. An Augmented Reality (AR) navigation system was designed for the simulation of pelvic minimally invasive surgery. Firstly, a 3D model of the pelvis, segmented and reconstructed from preoperative CT (Computed Tomography), was texture-mapped with real pelvic surgical video to simulate a surgical video with ground-truth pose. The blank model was initially registered with the intraoperative video by a 2D/3D registration based on the color consistency of visible surface points. After that, the intraoperative endoscope was accurately tracked using a stereoscopic tracking algorithm. According to the multi-DOF (Degree Of Freedom) transformation matrix of the endoscope, the preoperative 3D model could then be fused into the intraoperative view to achieve AR navigation. The experimental results show that the root mean square error of the estimated trajectory relative to the ground truth is 2.3933 mm, which indicates that the system can achieve a good AR display for visual navigation.
    New 3D scene modeling language and environment based on BNF paradigm
    XU Xiaodan, LI Bingjie, LI Bosen, LYU Shun
    2018, 38(9):  2666-2672.  DOI: 10.11772/j.issn.1001-9081.2018030552
    Due to the high degree of business coupling and the insufficient ability to describe object attributes and the characteristics of complex scenes in existing Three-Dimensional (3D) scene modeling models, a new scene modeling language and environment based on BNF (Backus-Naur Form) was proposed to solve the problem of 3D virtual sacrifice scene modeling. Firstly, the concepts of scene object, scene object template and scene object template attribute were introduced to analyze the constitutional features of the 3D virtual sacrifice scene in detail. Secondly, a 3D scene modeling language with loose coupling, strong attribute description capability and flexible generality was proposed. Then, operations on the scene modeling language were designed, so that the language could be edited through Application Programming Interface (API) calls and supported interface modeling. Finally, a set of Extensible Markup Language (XML) mapping methods was defined for the language, so that scene modeling results are stored in XML text format, improving the reusability of modeling results; the application of the modeling was also demonstrated. The application results show that the method enhances support for new data type features, improves the description of sequence and structure attribute types, and improves the descriptive capability, versatility and flexibility for complex scenes. The proposed method outperforms the method of SHU et al. (SHU B, QIU X J, WANG Z Q. Survey of shape from image. Journal of Computer Research and Development, 2010, 47(3): 549-560) and solves the problem of 3D virtual sacrifice scene modeling. It is also suitable for modeling 3D scenes with low granularity, multiple attribute components and a high degree of coupling, and can improve modeling efficiency.
    Application of binocular stereo vision technology in key dimension detection of CRH body
    GAO Jingang, LIU Zhiyong, ZHANG Shuang, HOU Daishuang, LIU Xiaofeng
    2018, 38(9):  2673-2677.  DOI: 10.11772/j.issn.1001-9081.2018020479
    On-line measurement of the China Railway High-speed (CRH) train body is difficult because of its large dimension range, the complexity of the testing items and the variety of vehicle types. A measurement scheme of key dimensions for a large-scale bullet train was therefore proposed: binocular Charge Coupled Device (CCD) stereo vision was used to set up a measuring sub-station for each key dimension, and a laser tracker together with a coordinate transformation algorithm was used to complete the global calibration of each CCD camera measuring sub-station. In each measuring sub-station, stereo spatial ball detection technology was used to measure local key dimensions. At the same time, a neural network temperature error compensation model based on wavelet analysis was constructed, and the precision of the spatial distance compensation reached 0.05 mm. Comparison between the proposed method and a three-coordinate measuring machine shows that the proposed method is simple to operate, highly flexible and highly precise, and can effectively solve the key dimension detection problem of the CRH body.
    Rapid mismatching elimination algorithm based on motion smoothing constraints
    LI Wei, LI Weixiang, ZHANG Fan, JIE Wei
    2018, 38(9):  2678-2682.  DOI: 10.11772/j.issn.1001-9081.2018030621
    To solve the problem of heavy computation and low matching accuracy in the iterative calculation of the RANdom SAmple Consensus (RANSAC) algorithm for image splicing, a mismatching elimination algorithm based on motion smoothness constraints was proposed. Firstly, feature points were extracted with the ORB (Oriented FAST and Rotated BRIEF) algorithm, and initial matching of feature points was performed based on the Hamming distance. Secondly, statistical neighboring-support estimators based on motion smoothness constraints were used for coarse mismatching elimination, and spatial geometric constraints were then applied to refine it. Finally, grouping and sorting were used to solve the model parameters, and weighted averaging was used to realize image fusion. The experimental results show that the mismatching elimination rate is improved by 75.6% compared with the algorithm that reduces the total number of sampling points, and by 24% compared with the adaptive threshold algorithm. The proposed method can effectively eliminate mismatches and realize accurate image mosaicking.
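    The initial Hamming-distance matching of ORB descriptors can be sketched as a brute-force nearest-neighbour search over binary descriptors; here descriptors are modeled as Python ints and the distance threshold is illustrative.

```python
def hamming(d1, d2):
    """Bit-level Hamming distance between two binary descriptors (ints)."""
    return bin(d1 ^ d2).count("1")

def match(desc_a, desc_b, max_dist=40):
    """Brute-force nearest-neighbour matching of ORB-style binary
    descriptors; candidate pairs above the distance threshold are
    discarded.  Returns (index_a, index_b, distance) triples."""
    pairs = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            pairs.append((i, j, d))
    return pairs
```

    The motion-smoothness step of the paper then prunes these raw pairs further by requiring that neighbouring matches move coherently, which is cheaper than iterating RANSAC on a match set full of outliers.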
    Medical image registration by integrating modified brain storm optimization algorithm and Powell algorithm
    LIANG Zhigang, GU Junhua
    2018, 38(9):  2683-2688.  DOI: 10.11772/j.issn.1001-9081.2018020353
    Aiming at the poor accuracy, tendency to fall into local extrema and slow convergence of existing medical image registration methods, a hybrid algorithm combining Modified Brain Storm Optimization (MBSO) and the Powell algorithm, based on multi-resolution analysis, was proposed. In the MBSO algorithm, the proportion of individuals participating in local and global search is adjusted by changing the way individuals are generated, and a variable step size is adopted to enhance the search ability, thereby accelerating convergence and escaping local optima. Firstly, the MBSO algorithm was used to search globally in the low-resolution layer. Then the result was used as the starting point of the Powell algorithm to search in the high-resolution layer. Finally, the Powell algorithm was used to locate the global optimum in the original image layer. Compared with the Particle Swarm Optimization (PSO) algorithm, the Ant Colony Optimization (ACO) algorithm and the Genetic Algorithm (GA), each combined with the Powell algorithm, the average root mean square error of the proposed algorithm decreased by 20.89%, 30.46% and 18.54%, and the average registration time was reduced by 17.86%, 27.05% and 26.60%, with a success rate of 100%. The experimental results show that the proposed algorithm is robust and can accomplish medical image registration tasks quickly and accurately.
    Medical image super-resolution algorithm based on deep residual generative adversarial network
    GAO Yuan, LIU Zhi, QIN Pinle, WANG Lifang
    2018, 38(9):  2689-2695.  DOI: 10.11772/j.issn.1001-9081.2018030574
    Aiming at the blurring caused by the loss of details in the super-resolution reconstruction of medical images, a medical image super-resolution algorithm based on a deep residual Generative Adversarial Network (GAN) was proposed. Firstly, a generative network and a discriminative network were designed: high-resolution images were produced by the generative network, and their authenticity was judged by the discriminative network. Secondly, a resize-convolution was used to eliminate checkerboard artifacts in the upsampling layer of the generative network, and the batch-normalization layer of the standard residual block was removed to optimize the network; the number of feature maps in the discriminative network was further increased and the network was deepened to improve performance. Finally, the network was continuously optimized according to the generative loss and the discriminative loss to guide the generation of high-quality images. The experimental results show that, compared with bilinear interpolation, nearest-neighbor interpolation, bicubic interpolation, the deeply-recursive convolutional network for image super-resolution and Super-Resolution using a Generative Adversarial Network (SRGAN), the improved algorithm reconstructs images with richer texture and more realistic appearance. Compared with SRGAN, the proposed algorithm improves Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) by 0.21 dB and 0.32% respectively. It provides a deep residual generative adversarial network method for theoretical research on medical image super-resolution, and is reliable and effective in practical applications.
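    PSNR, the fidelity metric reported above (and in several of the following abstracts), is a direct function of the mean squared error. A minimal sketch for images given as flat lists of pixel intensities:

```python
import math

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images:
    10 * log10(peak^2 / MSE).  Identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)
```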
    Nonlocal self-similarity based low-rank sparse image denoising
    ZHANG Wenwen, HAN Yusheng
    2018, 38(9):  2696-2700.  DOI: 10.11772/j.issn.1001-9081.2018020310
    Abstract ( )   PDF (1002KB) ( )
    References | Related Articles | Metrics
    Focusing on the issue that many image denoising methods easily lose detailed information when removing noise, a nonlocal self-similarity based low-rank sparse image denoising method was proposed. Firstly, patches from external natural clean images were grouped by block matching based on Mahalanobis Distance (MD), and a patch-group based Gaussian Mixture Model (GMM) was then trained to learn the nonlocal self-similarity prior. Secondly, based on the Stable Principal Component Pursuit (SPCP) method, the noisy image matrix was decomposed into low-rank, sparse and noise parts, where the sparse matrix contains the useful information. Finally, the global objective function was minimized to achieve denoising. The experimental results show that, compared with previous denoising methods such as EPLL (Expected Patch Log Likelihood), NCSR (Non-locally Centralized Sparse Representation) and PCLR (external Patch prior guided internal CLusteRing), the proposed method achieves better Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), runs faster, and removes noise while better retaining details.
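    The block-matching step groups patches by Mahalanobis distance rather than plain Euclidean distance. A small sketch, assuming a diagonal covariance model for simplicity (the paper's covariance estimate is not specified in the abstract):

```python
import math

def mahalanobis(x, y, inv_var):
    """Mahalanobis distance under a diagonal covariance model:
    sqrt(sum over i of (x_i - y_i)^2 * inv_var_i)."""
    return math.sqrt(sum((a - b) ** 2 * w for a, b, w in zip(x, y, inv_var)))

def match_patches(ref, patches, inv_var, k):
    """Form a patch group: the k patches closest to the reference patch."""
    return sorted(patches, key=lambda p: mahalanobis(ref, p, inv_var))[:k]

# toy 2-pixel "patches" with unit inverse variances
group = match_patches([0.0, 0.0],
                      [[1.0, 0.0], [5.0, 5.0], [0.0, 2.0]],
                      [1.0, 1.0], k=2)
```

With a full (non-diagonal) covariance the weighted sum would become (x-y)^T Sigma^{-1} (x-y); the grouping logic is unchanged.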
    Synthetic aperture radar image enhancement method based on combination of non-subsampled shearlet transform and fuzzy contrast
    GUO Qingrong, JIA Zhenhong, YANG Jie, Nikola KASABOV
    2018, 38(9):  2701-2705.  DOI: 10.11772/j.issn.1001-9081.2018030527
    Abstract ( )   PDF (819KB) ( )
    References | Related Articles | Metrics
    Aiming at the noise and artifacts introduced into Synthetic Aperture Radar (SAR) images during imaging and transmission, which reduce definition and obscure details, an SAR image enhancement method based on the combination of Non-Subsampled Shearlet Transform (NSST) and fuzzy contrast was proposed. Firstly, the original image was decomposed by NSST into a low-frequency component and several high-frequency components. Then, the low-frequency component was linearly stretched to improve the overall contrast, and a threshold method was applied to the high-frequency components to remove noise. The reconstructed image was then obtained by applying the inverse NSST to the processed low-frequency and high-frequency components. Finally, the fuzzy contrast method was used to enhance the detail information and layering of the reconstructed image and obtain the final image. The experimental results on 40 images show that, compared with Histogram Equalization (HE), the Multi-Scale Retinex (MSR) enhancement algorithm, a remote sensing image enhancement algorithm based on shearlet transform and multi-scale Retinex, and a medical image enhancement method based on improved Gamma correction in the shearlet domain, the Peak Signal-to-Noise Ratio (PSNR) of the proposed method is improved by at least 22.9% and the Root Mean Square Error (RMSE) is reduced by at least 36.2%. The proposed method noticeably improves image definition and yields clearer texture information.
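    The per-band treatment described above can be sketched in a few lines: a linear stretch raises the global contrast of the low-frequency band, while hard thresholding suppresses small high-frequency coefficients assumed to be noise (the threshold rule here is a generic stand-in for the paper's):

```python
def linear_stretch(band, lo=0.0, hi=1.0):
    """Linearly stretch low-frequency coefficients to [lo, hi]."""
    mn, mx = min(band), max(band)
    return [lo + (hi - lo) * (c - mn) / (mx - mn) for c in band]

def hard_threshold(band, t):
    """Zero out high-frequency coefficients with magnitude below t (noise)."""
    return [c if abs(c) >= t else 0.0 for c in band]

low = linear_stretch([2.0, 4.0, 6.0])            # low-frequency band
high = hard_threshold([0.1, -0.5, 2.0], t=0.4)   # one high-frequency band
```

In the full method these operations run on the NSST subbands, and the processed bands are passed through the inverse NSST before the fuzzy-contrast step.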
    Hybrid fruit fly optimization algorithm for field service scheduling problem
    WU Bin, WANG Chao, DONG Min
    2018, 38(9):  2706-2711.  DOI: 10.11772/j.issn.1001-9081.2018010159
    Abstract ( )   PDF (947KB) ( )
    References | Related Articles | Metrics
    The skill level of employees has a great influence on the execution efficiency of the Field Service Scheduling Problem (FSSP), but employee skill factors are not considered in existing research. To solve this problem, firstly, taking the travel time, service time and waiting time of staff as optimization goals, an FSSP model considering the skill level of staff was established. Then, a Hybrid Fruit fly Optimization Algorithm (HFOA) was proposed to optimize the model. Based on the features of the problem and the merits of the algorithm, a matrix-based encoding method was designed. Two matrix operators were defined based on swarm intelligence theory, three search operators were proposed, and the smell-based and vision-based search strategies of the Fruit fly Optimization Algorithm (FOA) were redesigned. At the same time, in order to improve the algorithm's performance, an initialization operator based on the nearest insertion heuristic was constructed. Finally, simulation experiments were carried out on typical instances, and the proposed algorithm was compared with a Genetic Algorithm (GA) and a Greedy Randomized Adaptive Search Procedure (GRASP) algorithm. The experimental data show that HFOA performs better than the other two algorithms in terms of mean value and optimal value. The results show that, with the improved initialization method and search strategies, HFOA outperforms the other algorithms in optimization accuracy and stability.
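    The initialization operator is based on the nearest insertion heuristic. A generic pure-Python version for a symmetric distance matrix (a standard formulation; the paper's encoding-specific details are omitted):

```python
import math

def nearest_insertion(dist):
    """Nearest insertion heuristic: start from the closest pair of nodes,
    then repeatedly take the unvisited node nearest to the tour and insert
    it at the position that increases tour length least."""
    n = len(dist)
    a, b = min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda p: dist[p[0]][p[1]])
    tour, rest = [a, b], set(range(n)) - {a, b}
    while rest:
        c = min(rest, key=lambda r: min(dist[r][t] for t in tour))
        m = len(tour)
        pos = min(range(m),
                  key=lambda i: dist[tour[i]][c] + dist[c][tour[(i + 1) % m]]
                                - dist[tour[i]][tour[(i + 1) % m]])
        tour.insert(pos + 1, c)
        rest.remove(c)
    return tour

# four service sites on a unit square (hypothetical coordinates)
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dist = [[math.hypot(px - qx, py - qy) for (qx, qy) in pts] for (px, py) in pts]
tour = nearest_insertion(dist)
```

On this instance the heuristic recovers the optimal square tour of length 4; in HFOA the resulting routes would seed the initial fruit fly population.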
    Satellite scheduling method for intensive tasks based on improved fireworks algorithm
    ZHANG Ming, WANG Jindong, WEI Bo
    2018, 38(9):  2712-2719.  DOI: 10.11772/j.issn.1001-9081.2018030547
    Abstract ( )   PDF (1302KB) ( )
    References | Related Articles | Metrics
    Traditional satellite scheduling models are generally simple; when the problem is large and the tasks are concentrated, mutual exclusion between tasks and low task revenue often occur. To solve this problem, an intensive-task imaging satellite scheduling method based on an Improved FireWorks Algorithm (IFWA) was proposed. On the basis of analyzing the characteristics of intensive task processing and imaging satellite observation, synthetic constraint analysis of the tasks was first carried out, and a multi-satellite intensive task scheduling Constraint Satisfaction Problem (CSP) model based on task synthesis was then established by comprehensively considering constraints such as the observable time of the imaging satellite, the attitude adjustment time between tasks, and the energy and storage capacity of the imaging satellite. Finally, an improved fireworks algorithm was used to solve the model; an elitist selection strategy was used to preserve population diversity and accelerate the convergence of the algorithm, thus obtaining a better satellite scheduling scheme. The simulation results show that, compared with a scheduling model that does not consider task synthesis, the proposed model increases the average revenue by 30% to 35% and improves the time efficiency by 32% to 45%, which validates its feasibility and effectiveness.
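    The elitist selection strategy can be sketched as follows: the best spark of the current generation is kept deterministically, and the remaining slots of the next generation are filled from the other candidates (here by uniform random sampling, a common simplification of the fireworks algorithm's distance-based selection):

```python
import random

def elitist_select(cands, cost, n, seed=0):
    """Elitist selection sketch: always keep the lowest-cost candidate,
    then fill the remaining n-1 slots by sampling the other candidates
    without replacement."""
    rng = random.Random(seed)
    best = min(cands, key=cost)
    rest = [c for c in cands if c is not best]
    return [best] + rng.sample(rest, n - 1)

# toy scalar "solutions" scored by an identity cost function
sel = elitist_select([3.0, 1.0, 2.5, 0.5, 4.0], lambda x: x, 3)
```

Keeping the elite guarantees the best schedule found so far survives every generation, while the sampled remainder keeps the population diverse.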
    Reconfiguration strategy of distribution network based on improved particle swarm optimization
    WANG Qingrong, WANG Ruifeng
    2018, 38(9):  2720-2724.  DOI: 10.11772/j.issn.1001-9081.2018030524
    Abstract ( )   PDF (763KB) ( )
    References | Related Articles | Metrics
    Existing optimization methods for distribution network reconfiguration have low precision and slow speed. In order to improve the safety and reliability of a distribution network with Distributed Generation (DG), a simplified particle swarm optimization with adaptive inertia weight and full information, based on leap-frog grouping, was proposed. Firstly, from the viewpoints of reducing the active power loss of the network, increasing the voltage stability and balancing the load of the feeder, a multi-objective mathematical model for the distribution network was established. Secondly, through the Pareto dominance principle, the multiple objectives were converted into several single objectives with the same dimension, the same attributes and the same order of magnitude according to the standardized satisfaction of a fuzzy membership function, to overcome the subjectivity and inconsistent dimensions of the weighting method. Finally, in order to avoid random initialization producing a large number of infeasible solutions, a multi-objective reconfiguration strategy for the distribution network with DG was designed, combining an Ant Colony Optimization (ACO) algorithm with random spanning trees and the improved particle swarm optimization. Simulation on the IEEE 33-node distribution system shows that the proposed reconfiguration strategy improves search efficiency by 41.0% compared with the Particle Swarm Optimization (PSO) algorithm. Compared to before reconfiguration, the active power loss of the network is decreased by 41.47%, the voltage stability is improved by 57.0%, and the load balance of the feeder is improved by 31.25%. The reconfiguration strategy effectively improves the optimization accuracy and speed, and therefore improves the safety and reliability of distribution network operation.
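    The fuzzy-membership standardization maps each raw objective value onto a unitless satisfaction degree in [0, 1], so objectives with different dimensions (losses in kW, voltage indices, load factors) become comparable. A minimal sketch with a descending linear membership function (the exact membership shape used in the paper is not given in the abstract):

```python
def membership(value, worst, best):
    """Descending linear fuzzy membership for a minimization objective:
    satisfaction 1 at the best (lowest) value, 0 at the worst, linear in
    between, clamped to [0, 1]."""
    if best == worst:
        return 1.0
    m = (worst - value) / (worst - best)
    return max(0.0, min(1.0, m))

# three raw values of one objective mapped to satisfactions
sats = [membership(v, worst=10.0, best=0.0) for v in (0.0, 5.0, 12.0)]
```

After this mapping, every objective lives on the same [0, 1] scale, which is what allows the Pareto-based conversion to single objectives without hand-tuned weights.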
    Train interval optimization of rail transit based on artificial bee colony algorithm
    FANG Chunlin, LIU Xiaojuan, XIN Yingying, LUO Huan
    2018, 38(9):  2725-2729.  DOI: 10.11772/j.issn.1001-9081.2018020493
    Abstract ( )   PDF (878KB) ( )
    References | Related Articles | Metrics
    As the core of the operation and management of a rail transit enterprise, the rail transit operation organization plays a very important role in reducing the operation cost of the enterprise and improving the service level and the travel efficiency of passengers. A strategy based on the Artificial Bee Colony (ABC) optimization algorithm was proposed to optimize the train departure interval. Considering the respective interests of operators and passengers, the train departure interval was taken as the decision variable to establish a bi-objective nonlinear programming model for the lowest average passenger waiting time and the largest train waiting time, and the ABC algorithm was used to optimize the model. The simulation results on Beijing-Tianjin inter-city passenger flow at different times of a day demonstrate the effectiveness of the proposed algorithm and model.
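    As a sketch of how the ABC algorithm optimizes a scalar decision variable such as the departure interval, the following minimal pure-Python version minimizes a hypothetical stand-in cost function (the real bi-objective model and its parameters are not reproduced here):

```python
import random

def abc_minimize(f, lo, hi, n_food=10, limit=20, iters=200, seed=0):
    """Minimal 1-D Artificial Bee Colony sketch. Each food source is
    perturbed toward a randomly chosen neighbour source (employed-bee
    phase, greedily accepted); a source that fails to improve `limit`
    times is abandoned and replaced by a scout's random source."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_food)]
    trials = [0] * n_food
    best = min(xs, key=f)
    for _ in range(iters):
        for i in range(n_food):
            k = rng.randrange(n_food)
            cand = xs[i] + rng.uniform(-1.0, 1.0) * (xs[i] - xs[k])
            cand = max(lo, min(hi, cand))
            if f(cand) < f(xs[i]):
                xs[i], trials[i] = cand, 0
            else:
                trials[i] += 1
                if trials[i] > limit:           # scout phase
                    xs[i], trials[i] = rng.uniform(lo, hi), 0
        gen_best = min(xs, key=f)
        if f(gen_best) < f(best):
            best = gen_best
    return best

# hypothetical stand-in for the combined cost of a departure interval (min)
cost = lambda h: (h - 6.0) ** 2 + 1.0   # assumed optimum at h = 6 minutes
h_opt = abc_minimize(cost, 2.0, 15.0)
```

In the actual model, `cost` would be the scalarized combination of average passenger waiting time and the operator-side objective, evaluated from the passenger-flow data.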
    Fault detection strategy based on local neighbor standardization and dynamic principal component analysis
    ZHANG Cheng, GUO Qingxiu, FENG Liwei, LI Yuan
    2018, 38(9):  2730-2734.  DOI: 10.11772/j.issn.1001-9081.2018010071
    Abstract ( )   PDF (785KB) ( )
    References | Related Articles | Metrics
    Aiming at processes with dynamic and multimode characteristics, a fault detection strategy based on Local Neighbor Standardization (LNS) and Dynamic Principal Component Analysis (DPCA) was proposed. First, the k nearest neighbors of each sample in the training data set were found, and the mean and standard deviation of each variable over these neighbors were calculated. Next, this mean and standard deviation were used to standardize the current sample. At last, traditional DPCA was applied to the new data set to determine the control limits of the T2 and SPE statistics for fault detection. LNS can eliminate the multimode characteristic of a process and make the new data set follow a multivariate Gaussian distribution, while an outlier's deviation from the normal trajectory is preserved. LNS-DPCA can reduce the impact of multimode structure and improve fault detectability in processes with dynamic properties. The proposed strategy was evaluated on a simulated case and the penicillin fermentation process. The experimental results indicate that the proposed method outperforms Principal Component Analysis (PCA), DPCA and Fault Detection based on K Nearest Neighbors (FD-KNN).
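    The LNS step itself is simple: each sample is standardized with the mean and standard deviation computed over its own k nearest neighbors in the training set, rather than over the whole (multimode) data set. A pure-Python sketch:

```python
import math

def lns(sample, data, k):
    """Local Neighbor Standardization: standardize each variable of
    `sample` using the mean and standard deviation of its k nearest
    neighbours (Euclidean) in the training set `data`."""
    idx = sorted(range(len(data)),
                 key=lambda i: sum((a - b) ** 2
                                   for a, b in zip(sample, data[i])))[:k]
    out = []
    for j in range(len(sample)):
        vals = [data[i][j] for i in idx]
        mu = sum(vals) / k
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / k) or 1.0
        out.append((sample[j] - mu) / sd)
    return out

# a sample at the centre of its neighbourhood standardizes to zero
z = lns([1.0, 1.0], [[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]], k=4)
```

Because each mode supplies its own local mean and spread, samples from different operating modes all land on a comparable scale before DPCA builds the T2 and SPE control limits.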
    Multi-sensor fault diagnosis method for quad-rotor aircraft based on adaptive observer
    WANG Rijun, BAI Yue, ZENG Zhiqiang, DUAN Nengquan, DANG Changying, DU Wenhua, WANG Junyuan
    2018, 38(9):  2735-2741.  DOI: 10.11772/j.issn.1001-9081.2018030561
    Abstract ( )   PDF (1003KB) ( )
    References | Related Articles | Metrics
    In order to detect and diagnose multi-sensor faults of a quad-rotor aircraft, a multi-sensor fault diagnosis method based on an adaptive observer was proposed. Firstly, after the establishment of the aircraft dynamics model and the sensor model, the sensor fault was treated as a virtual actuator fault, and the multi-sensor fault detection and diagnosis system of the quad-rotor aircraft was constructed. Secondly, a nonlinear fault observer was designed to realize multi-sensor fault detection and isolation, and a nonlinear adaptive observer was designed based on the Lyapunov method to estimate the multiple fault biases. Finally, the stability and parameter convergence of the adaptive laws were proved in the presence of sensor measurement noise. The experimental results show that the method can detect and isolate the faults of multiple sensors effectively, and can estimate and track multiple sensor fault biases simultaneously.
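    The core idea, estimating an unknown sensor bias from the output residual with an adaptive law, can be illustrated on a toy scalar plant (a hedged illustration only; the paper's nonlinear quad-rotor observer is substantially more involved):

```python
def simulate_adaptive_observer(a=-1.0, bias=0.5, L=2.0, gamma=2.0,
                               dt=0.01, steps=1000):
    """Toy scalar illustration: the sensor reads y = x + b with an unknown
    constant bias b; the observer estimates both the state x and the bias b
    from the output residual r = y - x_hat - b_hat (Euler integration)."""
    x, x_hat, b_hat = 1.0, 0.0, 0.0
    for _ in range(steps):
        y = x + bias                        # biased measurement
        r = y - x_hat - b_hat               # output residual
        x += dt * (a * x)                   # true plant (stable, no input)
        x_hat += dt * (a * x_hat + L * r)   # state observer
        b_hat += dt * (gamma * r)           # adaptive bias-estimation law
    return b_hat

b_est = simulate_adaptive_observer()   # approaches the true bias 0.5
```

For this stable plant the joint error dynamics of state and bias estimates are Hurwitz for positive gains L and gamma, so the residual drives both errors to zero; the paper establishes the analogous property for the nonlinear case via Lyapunov analysis.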
    Trajectory tracking control for quadrotor UAV based on extended state observer and backstepping sliding mode
    ZHANG Jianyang, YU Chunmei, YE Jianxiao
    2018, 38(9):  2742-2746.  DOI: 10.11772/j.issn.1001-9081.2018010026
    Abstract ( )   PDF (698KB) ( )
    References | Related Articles | Metrics
    To handle the external disturbances and model-parameter uncertainties that an underactuated quadrotor Unmanned Aerial Vehicle (UAV) encounters in actual flight, a flight control scheme based on Extended State Observer (ESO) and integral backstepping sliding mode was designed. Firstly, according to the semi-coupling characteristics and the strict feedback architecture of the system, backstepping control was adopted to design the attitude inner-loop and position outer-loop controllers. Then, a sliding mode algorithm with strong anti-disturbance ability and integral control were incorporated to enhance system robustness and reduce steady-state error respectively. Finally, ESO was used to estimate the total internal and external disturbance and compensate for it in the control law online. The closed-loop control system was proven to be globally asymptotically stable by Lyapunov stability analysis, and the effectiveness and robustness of the proposed flight control scheme were verified through simulation.
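    The role of the ESO is to lump all internal and external disturbances into one extended state and estimate it online so the control law can cancel it. A toy linear ESO for a first-order plant (illustrative only; the UAV case applies the same idea per channel of a nonlinear model):

```python
def simulate_eso(f=0.8, omega=10.0, dt=0.001, steps=5000):
    """Toy linear ESO for a first-order plant x' = f + u, where f lumps all
    unknown internal and external disturbances. The extended state z2
    estimates f so the control law can cancel it online (u = u0 - z2)."""
    b1, b2 = 2.0 * omega, omega ** 2     # gains from one observer bandwidth
    x, z1, z2, u = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        e = x - z1                       # output estimation error (y = x)
        x += dt * (f + u)                # true plant
        z1 += dt * (z2 + u + b1 * e)     # state estimate
        z2 += dt * (b2 * e)              # total-disturbance estimate
    return z2

d_est = simulate_eso()   # approaches the true disturbance 0.8
```

Placing both observer poles at -omega (the bandwidth parameterization used here) makes the estimation error decay at a single tunable rate, which is why the disturbance estimate settles quickly enough to be subtracted in the control law.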
    Weak signal detection based on combination of power and exponential function model in tri-stable stochastic resonance
    ZHANG Gang, GAO Junpeng
    2018, 38(9):  2747-2752.  DOI: 10.11772/j.issn.1001-9081.2018010192
    Abstract ( )   PDF (902KB) ( )
    References | Related Articles | Metrics
    Under a background of strong noise, it is difficult to detect and extract weak signals. To solve this problem, a new tri-stable system model combining a power function and an exponential function was proposed based on the classic bistable system model and the Gaussian potential model. First of all, the tri-stable system model was constructed by combining a power function and an exponential function, and stochastic resonance was generated by adjusting the related parameters, which was validated by numerical simulation. Secondly, using the average output Signal-to-Noise Ratio (SNR) as the measurement index, the artificial fish swarm algorithm was used to optimize the corresponding parameters, so that the tri-stable system combining the power function and the exponential function achieves the maximum output SNR and the stochastic resonance phenomenon is generated. Finally, the system was applied to the diagnosis of bearing faults. Under the same input condition with an SNR of -25.8 dB, the output SNR of the bistable system and of the proposed tri-stable system is -13.1 dB and -8.59 dB respectively. Simulation results demonstrate that the performance of the proposed system is better than that of the bistable system, and that it is effective in weak signal detection and extraction.
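    The abstract does not give the exact potential, but the construction can be illustrated: adding a narrow exponential (Gaussian-type) well at the origin to a quartic double well yields three stable states. A sketch with hypothetical parameters:

```python
import math

def U(x, a=0.5, b=0.25, depth=0.3, width=0.2):
    """Hypothetical tri-stable potential: quartic double well b*x^4 - a*x^2
    (power-function part) plus a narrow exponential well of the given depth
    and width at the origin (exponential part). Illustrative only; not the
    paper's exact model."""
    return b * x ** 4 - a * x ** 2 - depth * math.exp(-x ** 2 / (2 * width ** 2))

def local_minima(f, lo=-2.0, hi=2.0, n=4001):
    """Grid search for strict local minima of f on [lo, hi]."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    return [xs[i] for i in range(1, n - 1) if ys[i - 1] > ys[i] < ys[i + 1]]

wells = local_minima(U)   # three stable states near -1, 0 and +1
```

With these parameters the exponential term turns the barrier of the double well into a third, central well; in stochastic resonance the well depths and barrier heights are exactly the parameters the fish swarm algorithm would tune to maximize the output SNR.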
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn