Table of Contents

    10 December 2016, Volume 36 Issue 12
    Domain partition and controller placement for large scale software defined network
    LIU Bangzhou, WANG Binqiang, WANG Wenbo, WU Di
    2016, 36(12):  3239-3243.  DOI: 10.11772/j.issn.1001-9081.2016.12.3239
    Abstract | PDF (961KB)
    Concerning the high complexity of the multiple-controller placement models in existing work, several metrics for improving network service quality were defined, and an approach to partitioning the network into domains and placing controllers for large-scale Software Defined Networks (SDN) was proposed. The network was partitioned into several domains by the Label Propagation Algorithm (LPA), and the controllers were then deployed separately within the resulting small domains, which makes the model complexity linear in the network size while taking average control path latency, reliability and load balance into account. Simulation results show that the proposed strategy improves load balance dramatically compared with the original LPA, and decreases model complexity and enhances network service quality compared with CCP. On Internet2, the average control path latency decreases by 9% and the reliability increases by up to 10%.
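    The domain partition step above rests on label propagation. As a rough illustration (a sketch under our own assumptions, not the paper's implementation; the graph and tie-breaking rule are ours), each node repeatedly adopts the most frequent label among its neighbours until the labels stabilise, and each resulting label group becomes one controller domain:

```python
def label_propagation(adj):
    # Deterministic LPA sketch: every node adopts the most frequent label
    # among its neighbours (ties broken toward the larger label) until
    # no node changes its label any more.
    labels = {v: v for v in adj}
    changed = True
    while changed:
        changed = False
        for v in sorted(adj):
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts, key=lambda l: (counts[l], l))
            if counts[best] > counts.get(labels[v], 0):
                labels[v] = best
                changed = True
    return labels

# Two 4-cliques joined by the single edge 3-4: LPA separates them into
# two domains, and one controller would then be placed in each domain.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
domains = label_propagation(adj)
```

    On this toy topology the two cliques keep distinct labels because each bridge endpoint has more internal neighbours than external ones.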
    Coverage algorithm based on differential evolution and mixed virtual force for directional sensor networks
    GUAN Zhiyan, FENG Xiufang
    2016, 36(12):  3244-3250.  DOI: 10.11772/j.issn.1001-9081.2016.12.3244
    Abstract | PDF (1073KB)
    For Directional Sensor Networks (DSN) consisting of sensors with adjustable orientations, a coverage algorithm based on differential evolution and mixed virtual force was put forward to reduce coverage holes and overlapping areas as far as possible and thereby improve effective coverage. Firstly, a directional sensing model was established; the mixed virtual forces between nodes, between nodes and obstacles, and between nodes and the boundary were analyzed; and an adjustment formula relating node rotation angle to force was established. Secondly, a differential evolution model was used to weaken the local suboptimal solutions caused by mixed virtual force: the virtual force was taken as a factor of the evolutionary update, and the best fitness value among nodes was sought through mutation, crossover and selection operations to optimize the effective coverage. Coverage simulation experiments show that, in a 100 m×100 m detection area over 100 random deployments, the proposed algorithm increases the average effective coverage rate of the network by 19.68%, while the mixed virtual force algorithm and the differential evolution algorithm alone increase it by 10.32% and 11.35% respectively. The network under the proposed algorithm becomes stable after 80 iterations, whereas the mixed virtual force algorithm and the differential evolution algorithm require 130 and 140 iterations respectively. Compared with these two algorithms, the differential evolution-mixed virtual force coverage algorithm converges faster and improves the effective coverage rate more markedly.
    Link prediction algorithm based on node importance in complex networks
    CHEN Jiaying, YU Jiong, YANG Xingyao, BIAN Chen
    2016, 36(12):  3251-3255.  DOI: 10.11772/j.issn.1001-9081.2016.12.3251
    Abstract | PDF (902KB)
    Enhancing the accuracy of link prediction is one of the fundamental problems in complex network research. Existing node-similarity-based prediction indexes do not make full use of the importance of nodes in the network. To solve this problem, a link prediction algorithm based on node importance was proposed. Degree centrality, closeness centrality and betweenness centrality were incorporated into the local similarity indexes Common Neighbor (CN), Adamic-Adar (AA) and Resource Allocation (RA), and importance-aware versions of the CN, AA and RA indexes were proposed to calculate node similarity. Simulation experiments were conducted on four real-world networks, with the Area Under the receiver operating characteristic Curve (AUC) adopted as the standard measure of link prediction accuracy. The experimental results show that the prediction accuracy of the proposed algorithm on the four data sets is higher than that of comparison algorithms such as CN; the proposed algorithm outperforms traditional link prediction algorithms and produces more accurate predictions on complex networks.
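    The three baseline indexes that the paper weights by node importance are standard and compact. As a minimal sketch (our own toy graph; the importance weighting itself is the paper's contribution and is not reproduced here):

```python
import math

def cn(adj, x, y):
    # Common Neighbours index: |N(x) ∩ N(y)|
    return len(adj[x] & adj[y])

def aa(adj, x, y):
    # Adamic-Adar index: common neighbours weighted by 1/log(degree)
    return sum(1.0 / math.log(len(adj[z])) for z in adj[x] & adj[y])

def ra(adj, x, y):
    # Resource Allocation index: common neighbours weighted by 1/degree
    return sum(1.0 / len(adj[z]) for z in adj[x] & adj[y])

# Toy graph as adjacency sets; nodes 2 and 4 share neighbours 1 and 3.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
```

    The paper's variants would further multiply each common neighbour's contribution by a centrality-based importance factor.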
    Non-orthogonal network coding for complex field distributed detection based on super node MAC relay
    HUANG Chengbing, TANG Gang, WANG Bo
    2016, 36(12):  3256-3261.  DOI: 10.11772/j.issn.1001-9081.2016.12.3256
    Abstract | PDF (991KB)
    To solve the waiting problem of data transmission in orthogonal communication, a non-orthogonal network coding strategy for complex-field distributed detection based on a super-node Multiple Access Channel (MAC) relay was proposed. Firstly, classical orthogonal-channel distributed detection was reviewed and, to address its problems, relay MAC and complex-field network coding were applied in wireless sensor networks, achieving cooperative diversity that reduces the adverse effects of channel fading. Secondly, on top of the relay-MAC complex-field network coding scheme, a Maximum Likelihood (ML) optimal sensor label selection algorithm based on network symbol error probability was proposed to reduce the error probability, taking the false alarm rate and detection probability of the sensors into account. Meanwhile, a fair allocation of relay power and total transmit power was obtained through the super-node approximation. The simulation results show that in non-orthogonal network coding detection the proposed algorithm achieves a detection rate of 91.3% with an error rate of only 25.1%, effectively improving the detection performance of non-orthogonal network coding in practical applications.
    System dynamics relevancy analysis model based on stochastic function Petri-net
    HUANG Guangqiu, HE Tong, LU Qiuqin
    2016, 36(12):  3262-3268.  DOI: 10.11772/j.issn.1001-9081.2016.12.3262
    Abstract | PDF (1046KB)
    The System Dynamics (SD) model cannot express random delays or conditional transitions between different states, and the Stochastic Petri Net (SPN) itself suffers from insufficient computing ability. To solve these problems, the SPN was first extended into a Stochastic Function Petri Net (SFPN) model; then SFPN was combined with SD to obtain the SFPN-SD model. The transitions in SFPN accurately describe random delays, which solves the first problem of the SD model; the condition arcs in SFPN express conditional transfers among places, which solves the second. Finally, state variables and state transition equations were attached to the places and transitions of the SPN; these are reinterpretations of the level, auxiliary and rate variables as well as the level and rate equations of the SD model. The state transition equations can realize complicated computations, thereby solving the problem of insufficient computing ability in SPN. The SFPN-SD model inherits all the features of the SD model while incorporating all the features of SPN. Compared with the SD model, the proposed SFPN-SD model makes system states, the meaning of their types and the process of state evolution clearer; moreover, its dynamic changes are driven by events, so it can describe the autonomous stochastic evolution of complex systems more realistically. Case studies show that, compared with the SD model, the proposed SFPN-SD model has stronger and more comprehensive abilities in relevancy analysis, description and simulation of complex systems.
    Analysis of evolutionary game on client's evaluation strategy selection in e-commerce
    LIU Dafu, SU Yang, XIE Hong'an, YANG Kai
    2016, 36(12):  3269-3273.  DOI: 10.11772/j.issn.1001-9081.2016.12.3269
    Abstract | PDF (746KB)
    It is difficult for incentive mechanisms to effectively induce customers to give truthful evaluations in e-commerce. According to the characteristics of evaluation behavior under incomplete information and bounded rationality, an evolutionary game model was established to analyze and improve the incentive mechanism in trust models. Firstly, the replicator dynamics mechanism was used to simulate the evolution of customers' strategy selection. Then, the existence of an Evolutionarily Stable Strategy (ESS) under common incentive mechanisms was discussed. Finally, compensation benefits were introduced to improve the incentive system based on system dynamics. In comparison simulations with an Interest Group based Trust model (IGTrust) conducted on the NetLogo system dynamics modeler, the results show that the modified incentive mechanism achieves an ESS and successfully predicts and controls the customer evaluation strategy. The proposed model is robust and maintains an evolutionarily stable state under a 7% variation of the evaluation strategy.
    GPU parallel particle swarm optimization algorithm based on adaptive warp
    ZHANG Shuo, HE Fazhi, ZHOU Yi, YAN Xiaohu
    2016, 36(12):  3274-3279.  DOI: 10.11772/j.issn.1001-9081.2016.12.3274
    Abstract | PDF (883KB)
    A parallel Particle Swarm Optimization (PSO) algorithm was improved on the Graphics Processing Unit (GPU) based on the Compute Unified Device Architecture (CUDA). From the structural characteristics of the CUDA hardware it follows that blocks are executed serially and that the warp is the basic scheduling and execution unit of a Streaming Multiprocessor (SM). A GPU parallel PSO algorithm based on adaptive warps was therefore developed to make full use of thread parallelism within a block: the dimensions of a particle were mapped to threads, each particle was adaptively mapped to one or more warps according to its own dimensionality so as to exploit the warp-level parallelism of the GPU, and each block held one or more particles. Compared with the existing coarse-grained parallel approach (one particle per thread) and fine-grained parallel approach (one particle per block), the experimental results show that the proposed approach achieves a speed-up ratio of 40 over the CPU, higher than those of the two approaches mentioned above.
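    The particle-to-warp mapping described above can be sketched on the host side. This is our own illustrative reconstruction (the packing policy and block size are assumptions, not the paper's exact scheme): each particle claims whole warps for its dimensions, and particles are packed into blocks up to the block's warp budget.

```python
def warps_per_particle(dim, warp_size=32):
    # A particle with `dim` dimensions occupies ceil(dim / 32) whole warps,
    # so the threads of one warp never straddle two particles.
    return (dim + warp_size - 1) // warp_size

def block_layout(dims, threads_per_block=256, warp_size=32):
    # Greedily pack particles into blocks: each block has a fixed warp
    # budget, and a particle is placed in the current block if it fits.
    budget = threads_per_block // warp_size
    blocks, current, used = [], [], 0
    for i, d in enumerate(dims):
        need = warps_per_particle(d, warp_size)
        if used + need > budget and current:
            blocks.append(current)
            current, used = [], 0
        current.append(i)
        used += need
    if current:
        blocks.append(current)
    return blocks
```

    For example, with 128 threads per block (a 4-warp budget), particles of dimension 30 and 33 share a block, while a 100-dimensional particle fills a block alone.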
    Data analysis method for parallel DHP based on Hadoop
    YANG Yanxia, FENG Lin
    2016, 36(12):  3280-3284.  DOI: 10.11772/j.issn.1001-9081.2016.12.3280
    Abstract | PDF (830KB)
    A bottleneck of the Apriori algorithm for mining association rules is generating the frequent 2-itemset L2 from the candidate set C2. In the Direct Hashing and Pruning (DHP) algorithm, a Hash table H2 is generated and used to delete useless candidate itemsets from C2, improving the efficiency of generating L2. However, the traditional DHP is a serial algorithm and cannot effectively handle large-scale data. To solve this problem, a parallel DHP algorithm, termed H_DHP, was proposed. First, the feasibility of the parallel strategy in DHP was analyzed and proved theoretically. Then, the Hash table H2 and the frequent itemsets L1 and L3-Lk were generated in parallel based on Hadoop, and the association rules were produced with the HBase database. The simulation results show that, compared with DHP, the H_DHP algorithm performs better in data processing efficiency, supported data set size, speedup and scalability.
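    The hash-based pruning that DHP adds to Apriori can be shown in a few lines. A minimal serial sketch, assuming integer items and a deliberately tiny bucket table (the bucket count and data are ours; the parallel Hadoop version distributes exactly this counting):

```python
from itertools import combinations

def dhp_prune(transactions, min_support, n_buckets=7):
    # One pass over the data: count single items and hash every 2-itemset
    # of each transaction into a small bucket table (the H2 of DHP).
    item_count, buckets = {}, [0] * n_buckets
    for t in transactions:
        for i in t:
            item_count[i] = item_count.get(i, 0) + 1
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    L1 = {i for i, c in item_count.items() if c >= min_support}
    # DHP pruning: a candidate pair survives only if both items are
    # frequent AND its bucket count reaches min_support. Bucket counts
    # over-estimate true counts, so no frequent pair is ever lost.
    C2 = [p for p in combinations(sorted(L1), 2)
          if buckets[hash(p) % n_buckets] >= min_support]
    return L1, C2

transactions = [{1, 2, 3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}, {4}]
L1, C2 = dhp_prune(transactions, min_support=3)
```

    Because a bucket may collect several distinct pairs, C2 may still contain a few false candidates, but it is never missing a truly frequent one.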
    Local similarity detection algorithm for time series based on distributed architecture
    LIN Yang, JIANG Yu'e, LIN Jie
    2016, 36(12):  3285-3291.  DOI: 10.11772/j.issn.1001-9081.2016.12.3285
    Abstract | PDF (1125KB)
    The CrossMatch algorithm, based on the idea of Dynamic Time Warping (DTW), can detect local similarity between time series; however, its high time and space complexity demands large amounts of computing resources, making it almost impossible to apply to long sequences. To solve these problems, a local similarity detection algorithm on a distributed platform was proposed as a distributed solution to CrossMatch, overcoming the shortage of computing resources in both time and space. Firstly, the series was split and distributed over several nodes; secondly, each node computed the local similarity of its own subsequences; finally, the results were merged to obtain the local similarity of the whole series. The experimental results show that the accuracy of the proposed algorithm is similar to that of CrossMatch while taking less time. The distributed algorithm can not only handle long time series that a single machine cannot process, but also improve the running speed by increasing the number of parallel computing nodes.
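    The DTW measure that CrossMatch builds on is a simple dynamic program. A minimal single-machine sketch (our own, for orientation only; CrossMatch itself and the paper's distributed split-and-merge are considerably more involved):

```python
def dtw(a, b):
    # Classic O(len(a) * len(b)) dynamic-time-warping distance between
    # two numeric sequences, using absolute difference as the local cost.
    INF = float('inf')
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # A warping path may advance in either series or in both.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

    The quadratic table is exactly the space cost that makes long sequences infeasible on one machine, motivating the distributed partitioning above.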
    Highly efficient Chinese text classification algorithm of KNN based on Spark framework
    YU Pingping, NI Jiancheng, YAO Binxiu, LI Linlin, CAO Bo
    2016, 36(12):  3292-3297.  DOI: 10.11772/j.issn.1001-9081.2016.12.3292
    Abstract | PDF (936KB)
    The time complexity of the K-Nearest Neighbor (KNN) classification algorithm is proportional to the number of training samples, requiring a large amount of computation, and traditional architectures process it slowly in the big data setting. To solve these problems, a highly efficient KNN algorithm based on the Spark framework and clustering was proposed. Firstly, the training set was cut twice by a K-medoids algorithm optimized through the introduction of a constriction factor. Then K was iterated continually during classification to obtain the classification result, and the data was partitioned and iterated for parallelization within the Spark framework. The experimental results show that the classification time of the traditional KNN algorithm and of the K-medoids-based KNN algorithm is 3.92-31.90 times that of the proposed algorithm on different datasets. The proposed algorithm has high computational efficiency and a better speedup ratio than KNN on the Hadoop platform, and can effectively classify big data.
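    The KNN core being accelerated above is brief. A plain sketch with a toy dataset of our own (the paper's gains come from the K-medoids pruning of the training set and the Spark parallelization, neither of which is reproduced here):

```python
from collections import Counter

def knn_predict(train, query, k):
    # Plain KNN: sort training points by squared Euclidean distance to
    # the query and return the majority label among the k nearest.
    nearest = sorted(
        train,
        key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Two well-separated clusters labelled 'A' and 'B'.
train = [((0.0, 0.0), 'A'), ((0.1, 0.2), 'A'), ((0.2, 0.1), 'A'),
         ((1.0, 1.0), 'B'), ((0.9, 1.1), 'B'), ((1.1, 0.9), 'B')]
```

    The full sort makes the cost proportional to the training set size, which is precisely the bottleneck the clustering-based cut addresses.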
    Particle swarm optimization algorithm with firefly behavior and Levy flight
    FU Qiang, GE Hongwei, SU Shuzhi
    2016, 36(12):  3298-3302.  DOI: 10.11772/j.issn.1001-9081.2016.12.3298
    Abstract | PDF (848KB)
    Particle Swarm Optimization (PSO) easily falls into local minima and has poor global search ability, and many improved algorithms cannot fully optimize PSO performance with a single search strategy. To solve this problem, a novel PSO with Firefly Behavior and Levy Flight (FBLFPSO) was proposed. The local search ability of PSO was improved by an improved self-regulating-step firefly search strategy to avoid falling into local optima; then Levy flight was adopted to enhance population diversity and improve the global search ability of PSO, helping the swarm escape from local optimal solutions. The simulation results show that, compared with existing related algorithms, the global search ability and search accuracy of FBLFPSO are greatly improved.
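    Levy flight steps are usually drawn with Mantegna's algorithm, which is a plausible reading of the mechanism above (a generic sketch, not necessarily the paper's exact sampler; the stability index beta=1.5 is a common default, not taken from the paper):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    # Mantegna's algorithm for a heavy-tailed Levy step length:
    # s = u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1).
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
steps = [levy_step(rng=rng) for _ in range(1000)]
```

    The resulting step distribution mixes many short moves (local refinement) with occasional very long jumps, which is what lets a particle escape a local optimum.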
    Application of symbiotic system-based artificial fish school algorithm in feed formulation optimization
    LIU Qing, LI Ying, QING Maiyu, ODAKA Tomohiro
    2016, 36(12):  3303-3310.  DOI: 10.11772/j.issn.1001-9081.2016.12.3303
    Abstract | PDF (1134KB)
    Considering the broad applicability of intelligent algorithms to various feed formulation optimization models, the Artificial Fish Swarm Algorithm (AFSA) was applied to feed formulation optimization for the first time. To meet the precision required by feed formulation optimization, a symbiotic-system-based AFSA was employed, which significantly improves convergence accuracy and speed compared with the original AFSA. During optimization, the positions of Artificial Fish (AF) individuals in the solution space were directly encoded as solution vectors via the feed ratio, and a penalty-based objective function was employed to evaluate the AF individuals' fitness. AF individuals explored the solution space with several behavior operators according to a predefined behavioral strategy. The validity of the proposed algorithm was verified on three practical instances. The verification results show that the proposed algorithm works out optimal feed formulations that not only remarkably reduce fodder cost but also satisfy the various nutrition constraints, and that its optimization performance is superior to that of the other existing algorithms.
    Mechanism of parked domain recognition based on authoritative domain name servers
    LIU Mei, ZHANG Yongbin, RAN Chongshan, SUN Lianshan
    2016, 36(12):  3311-3316.  DOI: 10.11772/j.issn.1001-9081.2016.12.3311
    Abstract | PDF (897KB)
    Massive numbers of parked domains exist on the Internet and seriously degrade users' browsing experience and the Internet environment. To recognize parked domains, a new recognition method based on authoritative Domain Name Servers (DNS) was proposed. The set of authoritative DNS servers potentially providing domain parking service was extracted from the typosquatting domains commonly used by parking services, and this set was then clustered with a semi-supervised clustering method to identify the authoritative DNS servers associated with domain parking. At detection time, a domain was recognized as parked by judging whether its authoritative DNS server is used for domain parking and whether its mapped IP addresses fall within the set of IP addresses of parking Web servers. The accuracy of the proposed method was analyzed against an existing detection method based on webpage features. The experimental results show that the proposed method achieves an accuracy of 92.8% while avoiding crawling webpage information, giving good performance for real-time parked domain detection.
    Novel secure network coding scheme against global wiretapping
    HE Keyan, ZHAO Hongyu
    2016, 36(12):  3317-3321.  DOI: 10.11772/j.issn.1001-9081.2016.12.3317
    Abstract | PDF (879KB)
    Existing network coding schemes against global wiretapping attacks incur bandwidth overhead and high computational complexity. To reduce the bandwidth overhead and improve the actual coding efficiency, a novel secure network coding scheme against global wiretapping was proposed. For network coding over a field of size q, two permutation sequences of length q were generated from the key, and the source message was mixed and substituted using these permutation sequences so as to resist global wiretapping attacks. The source message is encrypted only at the source node and remains unchanged at the intermediate nodes. The proposed scheme has a simple encryption algorithm and low coding complexity, and needs no pre-coding, so it brings no bandwidth overhead and achieves high actual coding efficiency. The analysis results show that the proposed scheme efficiently resists not only ciphertext-only attacks but also known-plaintext attacks.
    Privacy preserving interest matching scheme for social network
    LUO Xiaoshuang, YANG Xiaoyuan, WANG Xu'an
    2016, 36(12):  3322-3327.  DOI: 10.11772/j.issn.1001-9081.2016.12.3322
    Abstract | PDF (889KB)
    Concerning the sensitive information leakage caused by making friends through interest matching in social networks, a privacy-preserving interest matching scheme based on private attributes was proposed. Bloom filters were used to obtain the intersection of both sides' interest sets, and the interest matching level was determined; both sides could then add each other as friends at their own will once the matching requirements were met. Under the semi-honest model, cryptographic protocols were adopted to protect data security, preventing malicious users from illegally obtaining sensitive information and thus avoiding information abuse and leakage. Theoretical analysis and calculation results show that the proposed scheme has linear time complexity, supports large-scale data sets, can be applied in Internet environments with diverse information and massive data content, and meets users' demands for real-time performance and efficiency.
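    The Bloom-filter intersection step can be sketched briefly. This is only the set-membership part under our own assumptions (filter size, hash construction and the toy interest sets are ours); the paper's scheme additionally wraps this exchange in cryptographic protocols:

```python
import hashlib

class BloomFilter:
    # Minimal Bloom filter: k hash positions per item in an m-bit array.
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k deterministic bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # No false negatives; rare false positives are possible.
        return all(self.bits >> p & 1 for p in self._positions(item))

def match_interests(my_interests, other_filter):
    # Each side publishes only its filter; candidate shared interests are
    # those of mine that the other side's filter may contain.
    return {i for i in my_interests if other_filter.might_contain(i)}

alice = {"music", "hiking", "chess"}
bob = {"chess", "movies", "hiking"}
bf = BloomFilter()
for interest in bob:
    bf.add(interest)
shared = match_interests(alice, bf)
```

    The size of `shared` relative to the interest sets would then determine the matching level.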
    Improved differential fault attack on scalar multiplication algorithm in elliptic curve cryptosystem
    XU Shengwei, CHEN Cheng, WANG Rongrong
    2016, 36(12):  3328-3332.  DOI: 10.11772/j.issn.1001-9081.2016.12.3328
    Abstract | PDF (785KB)
    Concerning the failure of fault attacks on elliptic curve scalar multiplication algorithms, an improved differential fault attack algorithm was proposed. The nonzero assumption was eliminated, and an authentication mechanism was introduced against the failure threat of "fault detection". Using the elliptic curve provided by the SM2 algorithm, the binary scalar multiplication algorithm, the binary Non-Adjacent Form (NAF) scalar multiplication algorithm and the Montgomery scalar multiplication algorithm were successfully attacked in software simulation, and the 256-bit private key was recovered within three hours. The attack process on the binary NAF scalar multiplication algorithm was optimized, reducing the attack time to one fifth of the original. The experimental results show that the proposed algorithm improves the effectiveness of the attack.
    Object recognition algorithm based on deep convolution neural networks
    HUANG Bin, LU Jinjin, WANG Jianhua, WU Xingming, CHEN Weihai
    2016, 36(12):  3333-3340.  DOI: 10.11772/j.issn.1001-9081.2016.12.3333
    Abstract | PDF (1436KB)
    Focusing on the problem that the artificially designed features of traditional object recognition algorithms are susceptible to diversity in object shape, illumination and background, a deep convolutional neural network algorithm for object recognition was proposed. Firstly, the algorithm was trained on the NYU Depth V2 dataset, and the single-channel depth information was transformed into three channels. Then the color images and the transformed depth images in the training set were used to fine-tune two deep convolutional neural networks respectively. Next, color and depth image features were extracted from the first fully connected layers of the two trained models, and the combined features from the resampled training set were used to train a Linear Support Vector Machine (LinSVM) classifier. Finally, the proposed object recognition algorithm was used to extract super-pixel features in a scene understanding task. The proposed method achieves a classification accuracy of 91.4% on the test set, which is 4.1 percentage points higher than that of SAE-RNN (Sparse Auto-Encoder with Recursive Neural Networks). The experimental results show that the proposed method is effective in extracting color and depth image features and can effectively improve classification accuracy.
    Computing global unbalanced degree of signed networks based on culture algorithm
    ZHAO Xiaohui, LIU Fang'ai
    2016, 36(12):  3341-3346.  DOI: 10.11772/j.issn.1001-9081.2016.12.3341
    Abstract | PDF (864KB)
    Many existing approaches for computing the structural balance degree of signed networks focus only on balance information of the local network, without considering balance at larger scales or from a global viewpoint, and therefore cannot discover unbalanced links. To solve this problem, a method for computing the global unbalanced degree of signed networks based on a culture algorithm was proposed. The computation of the unbalanced degree was converted into an optimization problem by describing the global state of the signed network with the Ising spin glass model. A new culture algorithm with a double evolution structure, named Culture Algorithm for Signed Network Balance (CA-SNB), was presented to solve the optimization problem. Firstly, a genetic algorithm was used to optimize the population space; secondly, the better individuals were recorded in the belief space, and situation knowledge was summarized with a greedy strategy; finally, the situation knowledge was used to guide the evolution of the population space. The convergence rate of CA-SNB was improved while preserving population diversity. The experimental results show that CA-SNB converges to the optimal solution faster and is more robust than the genetic algorithm and the matrix transformation algorithm, and that it can compute the global unbalanced degree and discover unbalanced links at the same time.
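    The underlying notion of structural balance is easy to state concretely. A brute-force baseline of our own (the paper instead optimizes an Ising spin glass energy with a culture algorithm, which scales far beyond triangle enumeration): a triangle is balanced iff the product of its three edge signs is positive.

```python
from itertools import combinations

def unbalanced_degree(nodes, sign):
    # Count triangles whose edge-sign product is negative (unbalanced)
    # and return their fraction together with the offending triangles.
    bad, total = [], 0
    for a, b, c in combinations(sorted(nodes), 3):
        if (a, b) in sign and (a, c) in sign and (b, c) in sign:
            total += 1
            if sign[(a, b)] * sign[(a, c)] * sign[(b, c)] < 0:
                bad.append((a, b, c))
    return len(bad) / total, bad

# Edges keyed as (smaller, larger); +1 = positive tie, -1 = negative tie.
sign = {(1, 2): 1, (1, 3): 1, (2, 3): 1,
        (1, 4): 1, (2, 4): -1, (3, 4): -1}
ratio, bad = unbalanced_degree({1, 2, 3, 4}, sign)
```

    Here node 4 is a positive friend of 1 but a negative tie of 2 and 3, so two of the four triangles are unbalanced, and the edges in those triangles are the candidate unbalanced links.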
    Application of weighted Fast Newman modularization algorithm in human brain structural network
    XIA Yidan, WANG Bin, DONG Yingzhao, LIU Hui, XIONG Xin
    2016, 36(12):  3347-3352.  DOI: 10.11772/j.issn.1001-9081.2016.12.3347
    Abstract | PDF (1026KB)
    Binary brain network modularization is not sufficient to describe the physiological features of the human brain. To solve this problem, a modularization algorithm for weighted brain networks based on the binary Fast Newman algorithm was presented. Taking the hierarchical clustering idea of agglomerating nodes as its basis, a weighted modularity indicator was built mainly on the weight of a single node and the weight of the entire network; the modularity increment was then used as the test index to decide which two nodes to merge in the weighted brain network and so realize module partition. The proposed method was applied to detect the modular structure of group-averaged data from 60 healthy subjects. The experimental results show that, compared with the modular structure of the binary brain network, the brain network modularity of the proposed method is increased by 28% and a more significant difference between the inside and outside of modules is revealed; moreover, the modular structure found by the proposed method is more consistent with the physiological characteristics of the human brain. Compared with two other existing weighted modularization algorithms, the proposed method also slightly improves the modularity and guarantees a reasonable identification of the modular structure of the human brain.
    Multiple classifier fusion model for activity recognition based on high reliability weighted
    WANG Zhongmin, WANG Ke, HE Yan
    2016, 36(12):  3353-3357.  DOI: 10.11772/j.issn.1001-9081.2016.12.3353
    Abstract | PDF (781KB)
    To improve the accuracy of human activity recognition based on smart mobile devices, a Multiple Classifier Fusion Model for activity recognition (MCFM) based on high-reliability weighting was proposed. From the triaxial acceleration information obtained by different smart devices with built-in acceleration sensors, features highly correlated with human daily activities were extracted from the raw acceleration as the input of MCFM. Then the three base classifiers, a decision tree, a Support Vector Machine (SVM) and a Back Propagation (BP) neural network, were trained and fused into a new classifier by the High Reliability Weighted Voting (HRWV) algorithm. The experimental results show that the proposed classifier fusion model effectively improves the accuracy of human activity recognition; its average recognition accuracy on five daily activities (staying still, walking, running, going upstairs, going downstairs) reaches 94.88%.
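    The fusion step can be illustrated with a tiny reliability-weighted vote. A sketch under our own assumptions (the weight values and classifier names are hypothetical; HRWV as described in the paper learns its weights from classifier reliability):

```python
def weighted_vote(predictions, weights):
    # Fuse base classifiers: each classifier's vote counts in proportion
    # to its reliability weight; the label with the highest total wins.
    scores = {}
    for clf, label in predictions.items():
        scores[label] = scores.get(label, 0.0) + weights[clf]
    return max(scores, key=lambda l: scores[l])

# Hypothetical reliability weights for the three base classifiers.
weights = {'tree': 0.9, 'svm': 0.6, 'bp': 0.5}
```

    Note that a single highly reliable classifier can outvote two less reliable ones, which is the point of weighting by reliability rather than counting heads.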
    Multisensor information fusion algorithm based on intelligent particle filtering
    CHEN Weiqiang, CHEN Jun, ZHANG Chuang, SONG Liguo, TAN Zhuoli
    2016, 36(12):  3358-3362.  DOI: 10.11772/j.issn.1001-9081.2016.12.3358
    Abstract | PDF (733KB)
    To solve the low quality and degeneracy of particles in particle filtering, a multisensor information fusion algorithm based on intelligent particle filtering was proposed. The algorithm proceeds in two steps. Firstly, the multisensor data was sent to the appropriate particle filtering module, and the proposal distribution density was updated to optimize the particle distribution. Then, an integrated likelihood function model was constructed from the multisensor data in the intelligent particle filtering module, and small-weight particles were transformed into large-weight ones by the designed genetic operators. The posterior distribution was thus approximated more adequately and large-weight particles were retained during resampling, which avoids particle impoverishment, maintains particle diversity, improves filtering precision and yields the optimal accurate estimate. The proposed algorithm was applied to a GPS/SINS/LOG integrated navigation system using prototype test data, and its effectiveness was verified by simulation. The simulation results show that the proposed algorithm obtains accurate position, speed and heading information and effectively improves the filtering performance, thereby improving the computational precision of the integrated navigation system and meeting the requirements of high-precision navigation and positioning of ships.
    Trust network random walk model based on user preferences
    ZHANG Meng, NAN Zhihong
    2016, 36(12):  3363-3368.  DOI: 10.11772/j.issn.1001-9081.2016.12.3363
    Abstract | PDF (927KB)
    To improve the accuracy of rating prediction and alleviate the cold-start problem in recommender systems, a random walk model based on user preferences, named PtTrustWalker, was proposed on the basis of the TrustWalker model. Firstly, the similarities of users and items were calculated in the social network by matrix factorization. Then the items were clustered, and users' preferences for items as well as the user similarity in different categories were calculated from user ratings. Finally, by making use of authority scores and user preferences, trust was refined into users' trust in different categories, and during the walk the rating was predicted from the rating that the trusted user gave, in his or her most preferred category, to the item most similar to the target item. The proposed model reduces the influence of noisy data and improves the stability of recommendation. The experimental results show that the PtTrustWalker model improves both the quality and the speed of recommendation compared with existing random walk models.
    Estimation algorithm of switching speech power spectrum for automatic speech recognition system
    LIU Jingang, ZHOU Yi, MA Yongbao, LIU Hongqing
    2016, 36(12):  3369-3373.  DOI: 10.11772/j.issn.1001-9081.2016.12.3369
    To address the poor robustness of Automatic Speech Recognition (ASR) systems in noisy environments, a new switching speech power spectrum estimation algorithm was proposed. First, under the assumption that the speech spectral amplitude is better modeled by a Chi distribution, a modified speech power spectrum estimation algorithm based on the Minimum Mean Square Error (MMSE) criterion was derived. Incorporating the Speech Presence Probability (SPP), a new SPP-based MMSE estimator was then obtained. Next, this estimator was combined with the conventional Wiener filter into a switching scheme: in heavy noise, the modified MMSE estimator was used to estimate the clean speech power spectrum; otherwise, the Wiener filter was employed to reduce the computational load. This yields the final switching speech power spectrum estimation algorithm for ASR systems. The experimental results show that, compared with the traditional MMSE estimator with a Rayleigh prior, the recognition accuracy of the proposed algorithm is improved by 8 percentage points on average in various noise environments. The proposed algorithm improves the robustness of the ASR system by removing noise while reducing the computational cost.
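The switching structure described above can be sketched as follows. This is only an illustration of the switch: the paper's Chi-prior, SPP-based MMSE estimator is replaced here by a simple square-root-Wiener placeholder, and the SNR-based switching rule and threshold are assumptions for the sketch.

```python
import numpy as np

def estimate_clean_power(noisy_power, noise_power, switch_snr_db=0.0):
    """Per-frame switch between two spectral gains: a placeholder
    'robust' estimator in heavy noise, the cheap Wiener gain otherwise."""
    snr_post = noisy_power / np.maximum(noise_power, 1e-12)
    xi = np.maximum(snr_post - 1.0, 1e-6)          # a-priori SNR (ML estimate)
    frame_snr_db = 10.0 * np.log10(xi.mean())
    if frame_snr_db < switch_snr_db:               # heavy noise branch
        gain = np.sqrt(xi / (1.0 + xi))            # placeholder estimator
    else:                                          # light noise: Wiener gain
        gain = xi / (1.0 + xi)
    return gain**2 * noisy_power                   # estimated clean power
```

For a frame with posterior SNR of 10, the Wiener branch applies a gain of 0.9 per amplitude bin.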
    Unvoiced/voiced mode codebook design algorithm based on cellular evenness
    XU Jingyun, ZHAO Xiaoqun, CAI Zhiduan, WANG Peiliang
    2016, 36(12):  3374-3377.  DOI: 10.11772/j.issn.1001-9081.2016.12.3374
    The parameter distributions of unvoiced and voiced Line Spectrum Frequency (LSF) parameters differ. To improve the quantization performance of LSF parameters in vocoders, an unvoiced/voiced mode codebook design algorithm based on Cell Evenness (CE) was presented, exploiting this distribution difference together with CE. First, the optimal ratio of unvoiced to voiced LSF parameters participating in codebook training was derived from CE. Then the specified number of atypical LSF parameters was eliminated from the unvoiced speech, and the final codebook was retrained. The experimental results show that, compared with a shared codebook at the same bit rate, the proposed algorithm reduces the average spectral distortion by 2.5%, increases the mean opinion score by 2.3% and reduces codebook storage by 21.1%. The proposed algorithm is also applicable to vocoders that do not transmit an unvoiced/voiced flag.
    Hybrid intelligent model for fashion sales forecasting based on discrete grey forecasting model and artificial neural network
    LIU Weixiao
    2016, 36(12):  3378-3384.  DOI: 10.11772/j.issn.1001-9081.2016.12.3378
    Fashion sales forecasting is very important for the retail industry, and accurate forecasting can greatly improve final fashion sales profits. However, fashion sales data are limited and volatile, which makes accurate forecasting difficult. To solve these problems, a new hybrid intelligent forecasting algorithm combining an Artificial Neural Network (ANN) with the Discrete Grey forecasting Model (DGM(1,1)) was proposed. Correlation Analysis (CA) was used to select the influential variables with large correlation, and DGM(1,1)+ANN was used to forecast the sales data. Then, following the idea of secondary residuals, the residual between the real sales data and the DGM(1,1)+ANN forecast was added to the influential variables, and the second residual was forecast with an ANN. Finally, experiments on real fashion sales data sets were conducted to evaluate the feasibility and accuracy of the proposed hybrid algorithm. The experimental results show that the Mean Absolute Percent Error (MAPE) of the proposed algorithm in forecasting fashion sales is about 25%; compared with the AutoRegressive Integrated Moving Average (ARIMA) model, the Extended Extreme Learning Machine (EELM), DGM(1,1) and DGM(1,1)+ANN, the average forecasting accuracy is improved by about 8 percentage points. The proposed hybrid intelligent algorithm can be used for real-time sales forecasting and can improve sales greatly.
    Scale adaptive tracker based on kernelized correlation filtering
    LI Qiji, LI Leimin, HUANG Yuqing
    2016, 36(12):  3385-3388.  DOI: 10.11772/j.issn.1001-9081.2016.12.3385
    To remove the fixed-target-size limitation of the Kernelized Correlation Filter (KCF) tracker, a scale-adaptive tracking method was proposed. First, the Lucas-Kanade optical flow method was used to track the motion of keypoints between neighboring frames, and reliable points were obtained by introducing a forward-backward check. Second, the reliable points were used to estimate the change of target scale. Third, the scale estimate was applied to an adjustable Gaussian window. Finally, forward-backward tracking was used to determine whether the target was occluded, and the template updating strategy was revised accordingly. The fixed-size limitation of KCF was thus removed, making the tracker more accurate and robust. The algorithm was tested on object tracking datasets. The experimental results show that the proposed method outperforms the original KCF, Tracking-Learning-Detection (TLD) and Structured output tracking with kernels (Struck) algorithms in both the success plot and the precision plot. Compared with the original method, the proposed tracker is better suited to target tracking under scale variation and occlusion.
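The forward-backward reliability check used above can be sketched generically: track points one frame forward, track the results back, and keep only the points whose round trip returns near the start. The `track_fwd`/`track_back` callables are hypothetical stand-ins for Lucas-Kanade steps on the forward and reversed frame pairs.

```python
import numpy as np

def forward_backward_reliable(pts, track_fwd, track_back, max_fb_error=1.0):
    """Forward-backward check: points whose forward-then-backward track
    ends far from where it started are rejected as unreliable."""
    fwd = track_fwd(pts)                    # e.g. one Lucas-Kanade step
    back = track_back(fwd)                  # LK on the reversed frame pair
    fb_error = np.linalg.norm(back - pts, axis=1)
    mask = fb_error <= max_fb_error
    return pts[mask], fwd[mask]             # reliable starts and their tracks
```

With synthetic trackers where one point drifts on the way back, only the consistent point survives.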
    Improved pairwise rotation invariant co-occurrence local binary pattern algorithm used for texture feature extraction
    YU Yafeng, LIU Guangshuai, MA Ziheng, GAO Pan
    2016, 36(12):  3389-3393.  DOI: 10.11772/j.issn.1001-9081.2016.12.3389
    The Pairwise Rotation Invariant Co-occurrence Local Binary Pattern (PRICoLBP) texture feature extraction algorithm suffers from high feature dimensionality, poor rotation invariance and sensitivity to illumination change. To solve these issues, an improved PRICoLBP algorithm was proposed. First, the coordinates of two neighboring pixels were obtained by respectively maximizing and minimizing the binary sequence of each image pixel; the coordinates of the co-occurring pixels were then computed from the coordinates of the center pixel and the two neighbors. Second, the texture information of every pixel was extracted with the Completed Local Binary Pattern (CLBP) algorithm. Compared with PRICoLBP under the same classifier, the recognition rate of the proposed method was improved by 0.17, 0.24, 2.65, 2.39 and 2.04 percentage points on the Brodatz, Outex(TC10, TC12), Outex(TC14), CUReT and KTH_TIPS image libraries respectively. The experimental results show that the proposed algorithm achieves better recognition for images with texture rotation and illumination change.
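Both PRICoLBP and CLBP build on the basic Local Binary Pattern code, which can be sketched minimally as follows; the co-occurrence pairing and the completed sign/magnitude components of CLBP are not reproduced here.

```python
import numpy as np

def lbp_codes(img):
    """Plain 8-neighbour LBP over the interior pixels: each neighbour
    that is >= the centre contributes one bit to an 8-bit code."""
    c = img[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

A centre pixel darker than all eight neighbours yields code 255; brighter than all of them yields 0.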
    Surface reconstruction for scattered point clouds with adaptive α-shape
    HE Hua, LI Zongchun, LI Guojun, RUAN Huanli, LONG Changyu
    2016, 36(12):  3394-3397.  DOI: 10.11772/j.issn.1001-9081.2016.12.3394
    The α-shape algorithm is not suitable for surface reconstruction from scattered, non-uniformly sampled points. To solve this problem, an improved adaptive α-shape surface reconstruction algorithm based on the Local Feature Size (LFS) of the point cloud was proposed. First, the Medial Axis (MA) of the surface was approximated by the negative poles computed from the k-nearest neighbors of the sampled points. Second, the LFS of the sampled points was calculated from the approximated MA, and the original point cloud was non-uniformly simplified based on LFS. Finally, the surface was reconstructed adaptively according to the circumscribed-ball radius of each triangle and the corresponding α value. In comparison experiments with the α-shape algorithm, the proposed algorithm reduced the number of points effectively and reasonably, achieving a simplification rate of about 70%, while the reconstruction contained fewer redundant triangles and few holes. The experimental results show that the proposed algorithm can adaptively reconstruct the surface of non-uniformly sampled point clouds.
    Backtracking regularized stage-wised orthogonal matching pursuit algorithm
    LI Yan, WANG Yaoli
    2016, 36(12):  3398-3401.  DOI: 10.11772/j.issn.1001-9081.2016.12.3398
    The signal reconstruction quality of the Stage-wise Orthogonal Matching Pursuit (StOMP) algorithm is unsatisfactory. To solve this problem, a new algorithm named Backtracking Regularized Stage-wise Orthogonal Matching Pursuit (BR-StOMP) was proposed. First, the regularization method was used to select the atoms with larger energy, reducing the number of atoms in the candidate set of the thresholding stage. Then the atoms were checked by backtracking: the atoms in the support set of the solution were filtered again, and atoms contributing little to the result were deleted to increase the reconstruction rate. Finally, the sensing matrix was normalized to simplify the algorithm. The simulation results show that, compared with the Orthogonal Matching Pursuit (OMP) algorithm, the Peak Signal-to-Noise Ratio (PSNR) of the BR-StOMP algorithm is improved by 8% to 10% and its running time is reduced by 70% to 80%; compared with the StOMP algorithm, the PSNR is increased by 19% to 35%. The BR-StOMP algorithm reconstructs the original signals accurately, and its reconstruction quality outperforms both OMP and StOMP.
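For reference, the OMP baseline that BR-StOMP is compared against can be written in a few lines: greedily pick the dictionary column most correlated with the residual, then re-fit all selected columns by least squares. The staged thresholding, regularization and backtracking of BR-StOMP are not shown.

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit with k iterations."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # best atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

With an orthonormal dictionary and a k-sparse signal, k iterations recover the signal exactly.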
    Improved accurate image registration algorithm based on FREAK descriptor
    FANG Yiguang, LIU Wu, GAO Mengzhu, TAN Shoubiao, ZHANG Ji
    2016, 36(12):  3402-3405.  DOI: 10.11772/j.issn.1001-9081.2016.12.3402
    The Fast REtinA Keypoint (FREAK) descriptor achieves rotation invariance by computing a model orientation, but its matching performance under large changes of rotation and scale is unsatisfactory and its matching error rate is high. To solve this problem, an improved image registration algorithm based on the FREAK descriptor was proposed. First, long-distance point pairs, selected with a given distance threshold, were added to the original FREAK, and only the long-distance pairs in the keypoint sampling pattern were used to generate orientation information. Then the Hamming distance was weighted: to generate the descriptor point pairs for every keypoint, the mean of each column of the training-data descriptors was computed, and the closer a column's mean was to 0.5, the larger its weight. This refines the coarse original Hamming distance and makes the distance computation more accurate. Finally, nearest-neighbor matching combined with the ratio of the nearest to the second-nearest neighbor, followed by RANdom SAmple Consensus (RANSAC), was used for fast matching and optimization. The experimental results show that the improved algorithm is better suited to applications with large rotation and scale variation and high matching performance requirements.
    Single image fast dehazing method based on dark channel prior
    WANG Yating, FENG Ziliang
    2016, 36(12):  3406-3410.  DOI: 10.11772/j.issn.1001-9081.2016.12.3406
    To counter the degradation of images captured in fog, such as reduced definition and color deviation, a fast single-image dehazing method based on the dark channel prior was proposed. First, the minimum filter was replaced by a gray-scale opening operation to obtain a rough dark channel image, and regions with sudden changes in the foggy image were marked using the variance so that a small window could correct the dark channel values in those areas. Next, the rough transmission map was obtained and refined with a guided filter. Then the transmission of the sky and other bright regions was corrected dynamically by a self-adaptive tolerance mechanism. Finally, the haze-free image was restored from the atmospheric scattering model. The experimental results demonstrate that, compared with other representative dehazing algorithms, the proposed method achieves faster processing and restores more detail with good color.
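The dark channel that the method starts from is the per-pixel channel minimum followed by a local minimum filter. A naive sliding-window sketch is shown below; the paper replaces this minimum filtering with a gray-scale opening and window correction, which are not reproduced here.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an RGB image (H, W, 3): channel-wise minimum,
    then a local minimum over a patch x patch window (edge-padded)."""
    m = img.min(axis=2)
    r = patch // 2
    padded = np.pad(m, r, mode='edge')
    out = np.empty_like(m)
    h, w = m.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

A single dark pixel lowers the dark channel of its whole neighborhood, which is why haze-free patches have near-zero dark channels.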
    Adaptive denoising method of hyperspectral remote sensing image based on PCA and dictionary learning
    WANG Haoran, XIA Kewen, REN Miaomiao, LI Chuo
    2016, 36(12):  3411-3417.  DOI: 10.11772/j.issn.1001-9081.2016.12.3411
    The noise distributed among the different bands of a hyperspectral remote sensing image is complex, so traditional denoising methods can hardly achieve the desired effect. To solve this problem, a novel denoising method for hyperspectral data was proposed, combining Principal Component Analysis (PCA) with noise estimation and dictionary learning. First, a group of principal component images was obtained from the original hyperspectral data by the PCA transform and divided, according to energy, into a clear image group and a noisy image group. Then, for any band image of the noisy hyperspectral data, the noise standard deviation was estimated via a Singular Value Decomposition (SVD) based noise estimation method. Combining this noise estimation with K-SVD dictionary learning yielded a dictionary learning denoising method with adaptive noise estimation, which was applied to the low-energy images of the noisy group where the noise mainly resides. Finally, the denoised image was obtained by weighted fusion according to the energy of each principal component image. The experimental results on simulated and real hyperspectral remote sensing data show that, compared with PCA, PCA-Bish and PCA-Contourlet, the Peak Signal-to-Noise Ratio (PSNR) of the image denoised by the proposed algorithm is improved by 1-3 dB, and the denoised image retains more detail with better visual effect.
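The PCA step above, treating pixels as observations and bands as variables so that each principal component is itself an image, can be sketched via the SVD. The clear/noisy split by energy and the K-SVD denoising stage are not reproduced here.

```python
import numpy as np

def principal_component_images(cube):
    """PCA transform of a (rows, cols, bands) hyperspectral cube.
    Returns the component images and the energy (explained variance
    fraction) of each component, which drives the clear/noisy split."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X = X - X.mean(axis=0)                    # centre each band
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    components = (X @ Vt.T).reshape(h, w, b)  # one image per component
    energy = S**2 / np.sum(S**2)
    return components, energy
```

For a cube whose bands are scaled copies of one pattern, essentially all energy lands in the first component.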
    Text segmentation based on superpixel fusion
    ZHANG Kuang, ZHU Yuanping
    2016, 36(12):  3418-3422.  DOI: 10.11772/j.issn.1001-9081.2016.12.3418
    Improving text segmentation is an important problem in text recognition, where segmentation is disturbed by complex backgrounds and noise in text images. To solve this problem, a text segmentation method based on superpixel fusion was proposed. First, the text image was binarized initially and the text stroke width was estimated. Then, superpixel segmentation and superpixel fusion were performed on the image. Finally, the local consistency of the fused superpixels was used to check the original binary image. The experimental results show that, compared with Maximally Stable Extremal Region (MSER) and Stroke based Superpixel Grouping (SSG), the segmentation precision of the proposed method is improved by 8.00 and 7.00 percentage points on the KAIST database, and its text recognition rate is improved by 5.33 and 4.88 percentage points on the ICDAR2003 database. The proposed method has strong denoising ability.
    Characterized dictionary-based low-rank representation for face recognition
    CHENG Xiaoya, WANG Chunhong
    2016, 36(12):  3423-3428.  DOI: 10.11772/j.issn.1001-9081.2016.12.3423
    Existing low-rank representation methods for face recognition fuse the local and global feature information of facial images inadequately. To solve this problem, a face recognition method called Characterized Dictionary-based Low-Rank Representation (LRR-CD) was proposed. Every face image was represented as a set of characterized patches, and the low-rank reconstruction coefficients over the training samples, together with the corresponding intra-class characteristic variance, were minimized. To obtain an efficient and highly discriminative reconstruction coefficient matrix for the face image patches, a new mathematical formulation was presented. By minimizing the low-rank constraint over the corresponding patches of the training samples and the correlated intra-class variance dictionary, it preserves both global and local features of the original high-dimensional face images, especially the local intra-class variance features. Moreover, owing to the thorough mining of patch features, the proposed method is robust to common noise such as facial occlusion and illumination variance. Experiments were carried out on the AR, CMU-PIE and Extended Yale B face databases. The experimental results show that LRR-CD outperforms the compared Sparse Representation Classification (SRC), Collaborative Representation Classification (CRC), LRR with Normalized CUT (LRR-NCUT) and LRR with Recursive Least Square (LRR-RLS) algorithms, with a recognition rate higher by 2.58-17.24 percentage points. The proposed method effectively fuses the global and local information of facial features and achieves a good recognition rate.
    Cone-beam CT functional imaging method by using area integral model
    QIAN Ying, LIAO Tingting
    2016, 36(12):  3429-3435.  DOI: 10.11772/j.issn.1001-9081.2016.12.3429
    Cone-Beam Computed Tomography (CBCT) cannot obtain the Time-Density Curve (TDC) required for functional imaging by direct scanning. To solve this problem, a mathematical model of CBCT imaging was proposed to derive the perfusion parameters. First, an area integral model of the simulated projection data was established, and CBCT projection data were simulated from Dynamic Contrast-Enhanced CT (DCE-CT) images of New Zealand white rabbits. A function model was built from the projection data and its parameters were solved by numerical optimization, yielding an approximate TDC for each voxel; the correctness of the function model was verified by comparing the approximate TDC with the actually measured TDC. The perfusion parameters were then solved using a deconvolution model and the optimized parameters. The similarity between the simulated TDC from the area integral model and the actual TDC reached 82.91%, and the simulated TDC could be used to calculate the perfusion parameters and produce pseudo-color images. CBCT projection data and the function model can thus provide approximate voxel TDCs and tissue perfusion parameters, achieving the purpose of CBCT functional imaging.
    SAR image scene classification with fully convolutional network and modified conditional random field-recurrent neural network
    TANG Hao, HE Chu
    2016, 36(12):  3436-3441.  DOI: 10.11772/j.issn.1001-9081.2016.12.3436
    Traditional Synthetic Aperture Radar (SAR) image classification extracts features from coarsely segmented pixel blocks and classifies them with a Support Vector Machine (SVM) combined with a Markov Random Field (MRF) or Conditional Random Field (CRF). This approach mixes pixels of different classes inside the same block, and it considers only adjacent areas without using global and structural information. A Fully Convolutional Network (FCN) was introduced to solve the block deviation problem: pixel-level convolutional layers were constructed and trained on ESAR image samples to obtain the initial per-pixel classification probabilities. Then a CRF-Recurrent Neural Network (CRF-RNN) was appended as a post-processing layer to combine the FCN probabilities with full-image information transfer and the structural information produced by the CRF, and RNN iterations further refined the results. By exploiting global and structural information at the pixel level, the proposed method overcomes several drawbacks of traditional classification; its classification accuracy is improved by 6.5 percentage points on average compared with SVM or CRF. However, the distance weight of CRF-RNN is fitted by a Gaussian kernel and cannot be changed or learned from the training data, which leaves some deviation, so a convolutional network with a trainable full-image distance weight was proposed to improve CRF-RNN. The experimental results show that the classification accuracy of the improved CRF-RNN is further improved by 1.04 percentage points.
    Rapid displacement compensation method for liquid impurity detection images
    RUAN Feng, ZHANG Hui, LI Xuanlun
    2016, 36(12):  3442-3447.  DOI: 10.11772/j.issn.1001-9081.2016.12.3442
    When an intelligent inspection machine detects impurities in infusion liquid, image displacement deviation interferes with the frame-difference method and causes misjudgments. To solve this problem, a binary-descriptor block matching method based on Features from Accelerated Segment Test (FAST) was proposed. First, feature points were detected by the accelerated segment test at different image scales, and the best feature points were selected using non-maximum suppression and the entropy difference. Then an improved template was used to sample around each feature point, forming a new binary descriptor that is robust to scale changes, noise and illumination changes and whose dimensionality was further reduced. Finally, block matching with a threshold was used to match the two frames quickly and accurately, and the displacement deviation was solved and compensated. The experimental results show that, when processing a 1.92-megapixel image, the overall processing time of the proposed method is within 190 ms, of which descriptor generation accounts for only 96 ms. The matching accuracy exceeds 99%, successfully suppressing mismatches with large spatial offsets. The displacement error computed by the proposed method is much smaller than that of the existing Scale Invariant Feature Transform (SIFT) and ORiented Binary robust independent elementary features (ORB) algorithms with high matching precision, and with displacement compensation accurate to the sub-pixel level, the method can rapidly compensate the displacement deviation of the bottle in the image.
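Binary descriptors of the FREAK/ORB family are compared with the Hamming distance; a weighted variant, where a bit counts more the closer its training-set mean is to 0.5 (maximum discriminability), can be sketched as follows. The exact weighting scheme of this paper is not reproduced; the linear peak-at-0.5 weight below is an assumption for illustration.

```python
import numpy as np

def column_weights(train):
    """Per-bit weight from the training descriptors: 1 at mean 0.5,
    falling linearly to 0 at means 0 and 1 (constant, uninformative bits)."""
    m = train.mean(axis=0)
    return 1.0 - 2.0 * np.abs(m - 0.5)

def weighted_hamming(d1, d2, weights):
    """Hamming distance where each disagreeing bit contributes its
    learned column weight instead of a flat 1."""
    return float(np.sum(weights * (d1 != d2)))
```

A bit that is identical across all training descriptors gets weight 0 and no longer influences matching.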
    Source code comments quality assessment method based on aggregation of classification algorithms
    YU Hai, LI Bin, WANG Peixia, JIA Di, WANG Yongji
    2016, 36(12):  3448-3453.  DOI: 10.11772/j.issn.1001-9081.2016.12.3448
    Source code comments are an important part of software, and researchers generate them manually or automatically. In the past, the quality of such comments was assessed manually, which is inefficient and subjective. To solve this problem, an assessment criterion covering four aspects of comments (format, language form, content and code-relatedness) was built. Then a code comment quality assessment method based on an aggregation of classification algorithms was proposed, introducing machine learning and natural language processing into comment quality assessment: classification algorithms grade comments into four levels (unqualified, qualified, good and excellent), and the evaluation results are improved by aggregating the basic classifiers. The precision and F1 measure of the aggregated classifier were improved by about 20 percentage points compared with a single classification algorithm, and all indexes except the macro-average F1 measure reached more than 70%. The experimental results show that this method can effectively assess the quality of comments.
    Software testing data generation technology based on software hierarchical model
    XU Weishan, YU Lei, FENG Junchi, HOU Shaofan
    2016, 36(12):  3454-3460.  DOI: 10.11772/j.issn.1001-9081.2016.12.3454
    Software testing based on the Markov chain model does not consider the structural information of software and has limited path coverage and fault detection ability. Therefore, a new software testing model, the software hierarchical testing model, was proposed by combining statistical testing with Markov chain based testing. The model captures the interaction between the software and its external environment and also describes the internal structural information of the software. An algorithm for generating the test data set was also put forward: first, test sequences conforming to the actual usage of the software were generated; then input data covering the internal structure of the software were generated for the test sequences. Comparison experiments with Markov chain based software testing show that the new model satisfies testing sufficiency and improves the path coverage and fault detection ability of the test data set.
    Clone genealogy extraction method based on software code evolution information
    CHEN Zhuo, ZHANG Liping, WANG Chunhui
    2016, 36(12):  3461-3467.  DOI: 10.11772/j.issn.1001-9081.2016.12.3461
    Current clone evolution pattern classifications are unclear, and clone genealogy extraction tools are few and inefficient. To solve these problems, a clone genealogy extraction method based on code clone mapping relationships and evolution information was proposed. First, clone groups and clone fragments from different stages were mapped using word frequency vectors, code line distance and clone attributes. Then evolution patterns were attached to the clone groups and fragments according to the mapping results. Finally, the clone genealogy was constructed by combining the clone mappings and evolution patterns across all versions. Four open-source software systems were tested and manually verified in the experiments. The experimental results show that the clone genealogy extraction tool, Extract Clone Genealogy (ECG), is valid and efficient. In addition, the extraction results reveal that about 42% of clone code did not change during evolution, while about 3.48% changed inconsistently; such clones may introduce potential bugs and deserve attention. The proposed method provides reference and data support for clone code quality assessment and management.
    Solution for classification imbalance in harmfulness prediction of clone code
    WANG Huan, ZHANG Liping, YAN Sheng
    2016, 36(12):  3468-3475.  DOI: 10.11772/j.issn.1001-9081.2016.12.3468
    To address the imbalance between harmful and harmless samples in predicting the harmfulness of clone code, a K-Balance algorithm based on Random Under-Sampling (RUS) was proposed, which adjusts the class imbalance automatically. First, a sample data set was constructed by extracting static and evolution features of clone code. Then new data sets with different imbalance ratios were selected, and harmfulness prediction was carried out on each of them. Finally, the most suitable imbalance ratio was chosen automatically by observing the resulting classifier performance. The harmfulness prediction model was evaluated on seven different open-source C software systems comprising 170 versions. Compared with other solutions to class imbalance, the experimental results show that the proposed method improves the prediction performance (Area Under the ROC (Receiver Operating Characteristic) Curve (AUC)) for harmful and harmless clones by 2.62 to 36.7 percentage points. The proposed method effectively mitigates class imbalance in prediction.
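The random under-sampling that K-Balance builds on can be sketched as follows: keep every minority (harmful) sample and draw majority samples down to a chosen ratio. The K-Balance search over ratios is not reproduced; the `ratio` parameter here is the quantity such a search would tune.

```python
import numpy as np

def random_under_sample(X, y, ratio=1.0, seed=0):
    """Keep all minority-class samples; randomly keep majority samples
    so that majority count = ratio * minority count (at most)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    keep = np.flatnonzero(y == minority)
    maj_idx = np.flatnonzero(y != minority)
    n_maj = min(len(maj_idx), int(round(ratio * len(keep))))
    keep = np.concatenate([keep, rng.choice(maj_idx, n_maj, replace=False)])
    return X[keep], y[keep]
```

With ratio 1.0, a 10:2 data set is reduced to a balanced 2:2 one.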
    Simulation generating algorithm of Web log based on user interest migration
    PENG Xingxiong, XIAO Ruliang
    2016, 36(12):  3476-3480.  DOI: 10.11772/j.issn.1001-9081.2016.12.3476
    Web logs generated by existing simulation algorithms from static model distributions differ substantially from real data. To solve this problem, a new Web Log Simulation Generation algorithm based on user interest migration (WLSG) was proposed. First, the relationship between the Web log and time was modeled. Second, the migration of user interest as users access files at different times was simulated. Finally, users adaptively accessing the file they are currently most interested in was also simulated. Compared with the distributions of existing static models, the proposed algorithm improves self-similarity by about 2.86% on average. The experimental results show that the proposed algorithm simulates Web logs well by letting user interest migration change the user access sequence, and it can be effectively applied to Web log simulation generation.
    Software reliability prediction model based on grey Elman neural network
    CAO Weidong, ZHU Yuanzhi, ZHAI Panpan, WANG Jing
    2016, 36(12):  3481-3485.  DOI: 10.11772/j.issn.1001-9081.2016.12.3481
    Current software reliability prediction models fluctuate widely in prediction accuracy and adapt poorly to field reliability data with strong randomness and dynamics. To solve these problems, a software reliability prediction model based on a grey Elman neural network was proposed. First, the grey GM(1,1) model was used to predict the failure data and weaken their randomness. Then an Elman neural network was used to model and predict the residuals of GM(1,1) and capture their dynamic change rules. Finally, the GM(1,1) predictions and the Elman residual predictions were combined into the final prediction. Simulation experiments were conducted on a field failure data set from a flight inquiry system. The grey Elman neural network model was compared with the Back-Propagation (BP) neural network model and the Elman neural network model; the Mean Squared Error (MSE) of the three models was 105.1, 270.9 and 207.5 and the Mean Relative Error (MRE) was 0.0011, 0.0021 and 0.0016 respectively, so the errors of the grey Elman model were the smallest. The experimental results show that the proposed grey Elman neural network prediction model has higher prediction accuracy.
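The GM(1,1) step above can be sketched as the standard grey model: accumulate the series, fit the development coefficient a and grey input b by least squares against the background values, then predict with the exponential solution and difference back. The Elman residual network is not reproduced here.

```python
import numpy as np

def gm11_fit_predict(x, n_ahead=1):
    """Grey GM(1,1): fit a, b on the accumulated series x1, predict
    x1_hat(k) = (x(0) - b/a) * exp(-a k) + b/a, then difference back."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)
    z = 0.5 * (x1[:-1] + x1[1:])                 # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + n_ahead)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x[0]], np.diff(x1_hat)])
```

On a nearly exponential series the fit and the one-step-ahead forecast track the data closely, which is why GM(1,1) is used to strip the smooth trend before the residuals are modeled.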
    Operation control method for industrial robots based on hand gesture recognition
    JIANG Suifeng, LI Yanchun, XIAO Nanfeng
    2016, 36(12):  3486-3491.  DOI: 10.11772/j.issn.1001-9081.2016.12.3486
    Abstract ( )   PDF (1166KB) ( )  
    References | Related Articles | Metrics
    The human-computer interaction modes between operators and industrial robots are currently rather rigid. To solve this problem, a hand gesture control method using a Kinect sensor as the gesture acquisition device was proposed for controlling industrial robots. Firstly, a method combining a depth-threshold algorithm with hand skeleton points was used to extract hand gesture images accurately from the data obtained by the Kinect infrared camera; during extraction the operator did not need to wear any equipment, and there were no constraints on operator location or background environment. Then, a method combining a deep autoencoder network with a Softmax classifier was used for gesture image recognition, which comprised pretraining and fine-tuning: in pretraining, a greedy layerwise approach was leveraged to train each layer of the network in turn; in fine-tuning, all layers of the neural network were treated as a whole to adjust the parameters of the entire network. The gesture recognition accuracy reached 99.846%. Finally, experiments were conducted on a self-developed industrial robot simulation platform, and good results were achieved for both one-hand and two-hand gestures. The experimental results show that the proposed method of controlling an industrial robot by hand gestures is feasible and practical.
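The depth-threshold step of the extraction stage can be sketched as below. This is a minimal illustration under assumptions not stated in the abstract: the hand is taken to be the object nearest the camera, a zero depth value means "no reading", and the `band` width is arbitrary.

```python
import numpy as np

def segment_hand(depth, band=150):
    """Depth-threshold hand extraction: keep pixels within `band`
    depth units of the nearest valid point, assuming the hand is
    the object closest to the camera."""
    valid = depth > 0                 # 0 = no depth reading from the sensor
    near = depth[valid].min()         # nearest measured surface
    mask = valid & (depth <= near + band)
    return mask
```

In the paper's pipeline the resulting mask would be refined with the hand skeleton points before being passed to the autoencoder.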
    Battery SOC estimation based on unscented Kalman filtering
    SHI Gang, ZHAO Wei, LIU Shanshan
    2016, 36(12):  3492-3498.  DOI: 10.11772/j.issn.1001-9081.2016.12.3492
    Abstract ( )   PDF (922KB) ( )  
    References | Related Articles | Metrics
    To estimate the State-Of-Charge (SOC) of an automobile power lithium-ion battery online, an Unscented Kalman Filtering (UKF) algorithm combined with a neural network was proposed. First of all, a Thevenin circuit was taken as the equivalent circuit, the state-space representation of the battery model was established, and the least squares method was applied to identify the model parameters. On this basis, a neural network was used to fit the functional relationships between the battery SOC and the model parameters; after many experiments, the convergence curve of the neural network algorithm was determined, and the proposed method was more accurate than traditional curve fitting. In addition, the Extended Kalman Filtering (EKF) principle and the UKF principle were introduced separately, and tests were designed including a validation experiment for the battery equivalent circuit model, an SOC test experiment and a convergence experiment for the algorithms. The experimental results show that the proposed method, which can be used for online SOC estimation, has higher estimation precision and stronger environmental adaptability than the plain EKF algorithm under different conditions, with a maximum error of less than 4%. Moreover, the proposed algorithm combining UKF and a neural network has better convergence and robustness, and can effectively solve the problems of inaccurate initial-value estimation and cumulative error.
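The core of the UKF, as opposed to the EKF's linearization, is the unscented transform. The sketch below uses the standard scaled sigma-point scheme; the parameters `alpha`, `beta`, `kappa` and the function names come from the general UKF literature, not from the paper's battery model.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using the 2n+1 scaled sigma points at the heart of the UKF."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)       # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    y = np.array([f(s) for s in sigma])           # push points through f
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d               # weighted outer products
    return y_mean, y_cov
```

For a linear function the transform reproduces the exact propagated mean and covariance, which is a convenient sanity check; its advantage over the EKF appears when `f` (here, the Thevenin battery dynamics) is nonlinear.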
    Optimal line-shape parameter estimation algorithm of orbit plane based on inertial angle measurement
    LI Xiaowen, YUAN Xianghui, ZHOU Chunxiang
    2016, 36(12):  3499-3504.  DOI: 10.11772/j.issn.1001-9081.2016.12.3499
    Abstract ( )   PDF (891KB) ( )  
    References | Related Articles | Metrics
    The kernels of re-surveying an existing railway are orbit plane line-shape segmentation and line-shape parameter optimization. Based on statistics of inertial angle measurements, an algorithm for line-shape segmentation of the orbit plane and optimal line-shape parameter estimation was proposed. According to the variation laws of the orbit line-shape, a combined iterative method was utilized to calculate the optimal line-shape parameters of the orbit, and the selection of the orbit plane line-shape was modeled as an optimization problem. Firstly, the orbit was roughly segmented by the least-squares-fitted slope changes of fixed-curvature curves. Then, the orbit line-shape was fitted from the measured data. Finally, the combined iterative method was applied to achieve precise segmentation and determine the optimal line-shape parameters. Simulation examples indicate that the proposed algorithm surpasses the existing manual estimation algorithm, which obtains line-shape parameter fitting results from two sets of different segmentation points, and differs little from the results of the exhaustive method: the Root Mean Square Error (RMSE) of the proposed algorithm is only 4.93% higher than that of the exhaustive method, while its computational cost is just 0.02% of that of the exhaustive method. Actual measurements on Xi'an metro line No.3 also confirm the effectiveness of the proposed algorithm.
    Prediction for intermittent faults of ground air conditioning based on improved Apriori algorithm
    CHEN Weixing, QU Rui, SUN Yigang
    2016, 36(12):  3505-3510.  DOI: 10.11772/j.issn.1001-9081.2016.12.3505
    Abstract ( )   PDF (937KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems caused by intermittent faults of ground air conditioning, such as low usage efficiency and maintenance lag, a prediction method for intermittent faults combining re-association Array Summation (AS)-Apriori with K-means clustering was proposed, and delayed-maintenance forecasting was realized on this basis. The AS-Apriori algorithm solves the low efficiency caused by repeatedly scanning the transaction database in Apriori by constructing intermittent fault arrays and summing the corresponding items on them in real time. The goal of delayed-maintenance forecasting is to estimate the critical region of a permanent fault so that reasonable maintenance can be arranged; it is realized by using a Gaussian distribution to solve the maintenance wave and delay probability of different intermittent fault variables, and then accumulating them in order. The results show that the operational efficiency is improved, the support degree of the re-association rules is raised by 20.656 percentage points, and intermittent faults are predicted more accurately. Moreover, the analysis of the data shows that the forecast maintenance wave and delay probability follow a linear distribution, which means that highly predictable intermittent faults are more convenient to maintain and manage in advance, and the formation of permanent faults is reduced.
    Multilane traffic flow detection algorithm based on adaptive virtual loop
    GAN Ling, LI Rui
    2016, 36(12):  3511-3514.  DOI: 10.11772/j.issn.1001-9081.2016.12.3511
    Abstract ( )   PDF (614KB) ( )  
    References | Related Articles | Metrics
    Aiming at interferences such as false detection and missed detection that existing virtual loop detection algorithms cannot overcome in multilane traffic flow detection, a novel traffic flow detection algorithm based on an adaptive virtual loop was put forward. According to the image binarization principle, quadratic estimation was adopted in the foreground detection part of the Visual Background extractor (ViBe) algorithm and the background updating mechanism was changed, yielding an improved ViBe algorithm that rapidly eliminates ghosts and completes foreground object extraction. Then, a fixed detection area was set on the road, and mobile virtual loops were established or cancelled according to the moving-target trajectories in the fixed detection area; the virtual-loop traffic flow algorithm was further used to obtain traffic flow statistics. Three different scenarios were chosen for the experiments: 4 lanes without lane changes, 2 lanes with lane changes, and 3 lanes with lane changes and sudden environmental change; the traffic flow detection accuracy of the proposed algorithm was 8.9, 25 and 16.6 percentage points higher than that of the traditional virtual loop detection algorithm respectively. The experimental results show that the proposed algorithm is more suitable for multilane traffic flow detection.
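The counting step of a virtual-loop detector can be sketched as follows: a vehicle is counted when its tracked centroid first enters the loop region. This is a minimal sketch of the general technique; the trajectory format and rectangle representation are assumptions, and the ViBe-based foreground extraction that produces the trajectories is omitted.

```python
def count_loop_crossings(trajectories, loop):
    """Count vehicles whose centroid trajectory enters the virtual loop.

    trajectories: list of [(x, y), ...] centroid tracks, one per target
    loop: (x_min, y_min, x_max, y_max) detection rectangle
    """
    x0, y0, x1, y1 = loop
    inside = lambda p: x0 <= p[0] <= x1 and y0 <= p[1] <= y1
    count = 0
    for track in trajectories:
        prev = False
        for p in track:
            cur = inside(p)
            if cur and not prev:   # rising edge: the track entered the loop
                count += 1
                break              # count each vehicle at most once
            prev = cur
    return count
```

An adaptive scheme, as in the paper, would additionally create or cancel such loops per target rather than keep them fixed per lane.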
    Mobile terminal positioning method driven by road test data
    YUAN Guangjie, LI Xiaodong, JIANG Zhaoyi, YUAN Peng, GUO Zhiwei
    2016, 36(12):  3515-3520.  DOI: 10.11772/j.issn.1001-9081.2016.12.3515
    Abstract ( )   PDF (979KB) ( )  
    References | Related Articles | Metrics
    Current wireless positioning technology cannot adapt to complex environments and has low positioning accuracy. To solve these problems, a mobile terminal positioning method driven by road test data was proposed. Firstly, based on a base station location algorithm and a base station signal coverage description algorithm, a location-coverage model base of base stations was established; by matching the initial parameters of the mobile terminal against the model base, the initial range of the mobile terminal was obtained. Secondly, a road classification database was established based on a road feature extraction algorithm, and a wireless signal feature matching algorithm was used to match the road information of the mobile terminal. Thirdly, a model base mapping longitude-latitude to signal intensity was established, and the precise position of the mobile terminal was determined by a terminal signal comparison algorithm. The theoretical analysis and experimental results show that the probability of 2 m localization accuracy of the base station reaches 60% and that of 3 m reaches 77%, which are about 39% and 12% higher respectively than before whitening, and the base station signal coverage description algorithm also describes the coverage of the base station signal more accurately. The accuracy improvements of these two parts improve the final positioning accuracy.
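The final comparison step, matching a measured signal vector against a longitude-latitude/intensity model base, resembles fingerprint matching and can be sketched as below. The database layout and the Euclidean distance metric are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def match_fingerprint(sample, database):
    """Nearest-neighbour match of a measured signal-intensity vector
    against a {(lat, lon): intensity_vector} fingerprint database."""
    best, best_d = None, float('inf')
    for pos, fp in database.items():
        d = np.linalg.norm(np.asarray(sample, dtype=float)
                           - np.asarray(fp, dtype=float))
        if d < best_d:
            best, best_d = pos, d
    return best
```

A production system would typically interpolate among the k nearest fingerprints rather than return a single grid point.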