Table of Contents

    10 September 2017, Volume 37 Issue 9
    Overview on feature selection in high-dimensional and small-sample-size classification
    WANG Xiang, HU Xuegang
    2017, 37(9):  2433-2438.  DOI: 10.11772/j.issn.1001-9081.2017.09.2433
    Abstract | PDF (1146KB)

    With the development of bioinformatics, gene expression microarray analysis and image recognition, classification on high-dimensional and small-sample-size data has become a challenging task in data mining, machine learning and pattern recognition. High-dimensional and small-sample-size data may cause the problems of the "curse of dimensionality" and overfitting. Feature selection can effectively prevent the "curse of dimensionality" and improve the generalization ability of classification models, and has thus become a hot research topic. Accordingly, recent worldwide research on feature selection for high-dimensional and small-sample-size classification was briefly reviewed. Firstly, the nature of feature selection for high-dimensional and small-sample-size data was analyzed. Secondly, according to their essential differences, feature selection algorithms for high-dimensional and small-sample-size classification were divided into four categories and compared to summarize their advantages and disadvantages. Finally, challenges and prospects for future trends of feature selection on high-dimensional and small-sample-size data were discussed.

    Frequent subtree mining method based on coverage patterns
    XIA Ying, LI Hongxu
    2017, 37(9):  2439-2442.  DOI: 10.11772/j.issn.1001-9081.2017.09.2439
    Abstract | PDF (800KB)
    Unordered trees are widely used for semi-structured data modeling, and mining frequent subtrees on them helps to discover hidden knowledge. Traditional methods of mining frequent subtrees often output large-scale frequent subtrees with redundant information, which reduces the efficiency of subsequent operations. In view of these shortcomings, the Mining CoveRage Pattern (MCRP) algorithm was proposed for mining coverage patterns. Firstly, a tree coding rule based on tree width and the number of children was presented. Then, all candidate subtrees were generated by edge extension based on the maximum prefix coding sequence. Finally, a set of coverage patterns was output on the basis of frequent subtrees and the δ'-coverage concept. Compared with traditional algorithms for mining frequent closed tree patterns and maximal frequent tree patterns, the proposed algorithm outputs fewer frequent subtrees while preserving all the frequent subtrees, and the processing efficiency is increased by 15% to 25%. The experimental results show that the algorithm can effectively reduce the size and redundant information of the output frequent subtrees, and it has high feasibility in practical operation.
    Visualization of time series data based on spiral graph
    YANG Huanhuan, LI Tianrui, CHEN Xindi
    2017, 37(9):  2443-2448.  DOI: 10.11772/j.issn.1001-9081.2017.09.2443
    Abstract | PDF (914KB)
    Phased time series data is common in daily life. It describes an event that contains a number of state transitions, where each state has a time attribute and multiple paths may exist between state transitions. Aiming at the problem that existing visualization techniques are insufficient for visualizing the transition of each phase or the time variation of paths between states, a novel visualization model based on the spiral graph was proposed. In the proposed model, each state was represented by a circle, the states of an event were represented by a set of concentric circles, and the reachable paths between neighboring states were represented by spirals. The start point of each spiral depended on its start time and start state, and the end point depended on its end time and end state. To solve the overlapping problem caused by a large number of paths, a transparency adjustment algorithm based on a long-tailed function was applied to the paths: the transparency of each path was assigned according to the number of intersections between this path and other paths. Flexible interactive facilities such as path filtering, highlighting, pop-up boxes and zooming were provided to support efficient data exploration. The proposed model was implemented on China railway data. The experimental results show that the model can effectively display trains of any running duration in limited space, reduce the visual clutter caused by path overlapping when confronted with a large number of trains, keep the information of trains, and provide decision support for user route choice, which validates the effectiveness of the proposed model in visualizing phased time series data.
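The long-tailed transparency assignment described above can be illustrated by a simple decay function of a path's intersection count; the particular functional form, the opacity bounds, and the decay constant `k` here are illustrative assumptions, not the paper's exact formula.

```python
def path_alpha(n_crossings, alpha_max=0.9, alpha_min=0.1, k=0.2):
    """Long-tailed transparency assignment sketch: a path's opacity decays
    smoothly with the number of intersections it has with other paths,
    approaching alpha_min for heavily overlapped paths."""
    return alpha_min + (alpha_max - alpha_min) / (1.0 + k * n_crossings)
```

A path with no crossings stays nearly opaque, while heavily crossed paths fade toward the floor value instead of vanishing, which is the point of using a long-tailed rather than linear decay.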
    Task allocation mechanism for crowdsourcing system based on reliability of users
    SHI Zhan, XIN Yu, SUN Yu'e, HUANG He
    2017, 37(9):  2449-2453.  DOI: 10.11772/j.issn.1001-9081.2017.09.2449
    Abstract | PDF (789KB)
    Considering the shortcomings of existing research on user reliability in crowdsourcing systems, it was assumed that each user has a different reliability for different types of tasks, and on this basis a task allocation mechanism for crowdsourcing systems based on the reliability of users was designed. Firstly, an efficient task allocation mechanism was designed using a greedy strategy to maximize the profit of task publishers, choosing the task allocation scheme with the maximum benefit each time. Secondly, a user reliability updating mechanism based on historical information was designed, determined by the user's historical reliability and the quality of the current task, and the final payment to the user was linked to the user's reliability, so as to motivate the user to keep finishing tasks with high quality. Finally, the effectiveness of the designed mechanisms was analyzed in three ways: the total profit of task publishers, the task completion rate and the user reliability. The simulation results show that compared with ProMoT (Profit Maximizing Truthful auction mechanism), the proposed method is more effective and feasible, and the total benefit of task publishers is 16% higher. At the same time, it can solve the problem of user unreliability in existing methods, and increase the reliability of crowdsourcing systems and the total revenue of task publishers.
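The two mechanisms above, greedy assignment and history-based reliability updating, can be sketched roughly as follows. The function names, the reliability-weighted profit score, and the linear blending factor `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
def allocate_tasks(tasks, users, reliability):
    """Greedy allocation sketch: each task goes to the user whose
    reliability on that task's type yields the highest expected profit."""
    assignment = {}
    for task_id, (task_type, profit) in tasks.items():
        best = max(users, key=lambda u: reliability[u].get(task_type, 0.0) * profit)
        assignment[task_id] = best
    return assignment

def update_reliability(old_rel, quality, alpha=0.7):
    """Blend historical reliability with the quality of the current task,
    so consistent high-quality work raises future payments."""
    return alpha * old_rel + (1 - alpha) * quality
```

For example, a labeling task would go to the user with the higher reliability on labeling tasks, and completing it poorly would pull that user's reliability down for future rounds.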
    Conditional strong matching preclusion for k-ary n-cubes
    FENG Kai
    2017, 37(9):  2454-2456.  DOI: 10.11772/j.issn.1001-9081.2017.09.2454
    Abstract | PDF (623KB)
    To measure the ability of the k-ary n-cube to retain the property of having a perfect matching or an almost perfect matching in the presence of failures, the minimum number of failures that make the k-ary n-cube unmatchable under the conditional fault model was studied, by analyzing the constructions of the fault sets that leave the k-ary n-cube with neither a perfect matching nor an almost perfect matching under this model. When k ≥ 4 is even and n ≥ 2, the exact value of this fault-tolerance parameter for the k-ary n-cube was obtained and all the corresponding minimum fault sets were characterized. When k ≥ 3 is odd and n ≥ 2, a sharp lower bound and a sharp upper bound on the fault-tolerance parameter were given. The results indicate that a parallel computer system taking the k-ary n-cube with odd k as its underlying interconnection topology retains the property of having a perfect matching or an almost perfect matching well under the conditional fault model: the system is still matchable if the number of failures does not exceed 2n, while at most 4n-3 failures could make it unmatchable.
    Hierarchical routing algorithm for wireless sensor networks based on small world model
    WEI Shihong, TANG Qichao
    2017, 37(9):  2457-2462.  DOI: 10.11772/j.issn.1001-9081.2017.09.2457
    Abstract | PDF (932KB)
    Hierarchical routing algorithms are a hotspot in the field of wireless sensor networks. Aiming at the problem that the energy of sensor nodes is limited, a Hierarchical routing Algorithm for wireless sensor networks based on a Small World Network Model (HASWNM) was proposed. The Wireless Sensor Network (WSN) was made to exhibit small-world characteristics by adding high-performance nodes as well as shortcuts among cluster heads. As the energy consumption is mainly concentrated in the data transmission phase, the energy of the cluster head was taken into account when choosing the relay node between clusters, and a different adaptive search area was determined according to the distance between the cluster head and the base station. The experimental results show that the network exhibits small-world characteristics when the number of high-performance nodes reaches 100; compared with CSWN (topology Control based on Small World Network), TSWN (Tree-based Small World Network) and DASM (Directed Angulation towards the Sink node Model), the round at which the first node dies is delayed by 6%, 6% and 29% respectively, and the average energy consumption of the network per round is reduced by 5%, 12% and 17% respectively. Thus the wireless sensor network constructed by the proposed algorithm has small-world characteristics and low energy consumption.
    Distributed deployment algorithm for connected on-demand coverage in mobile sensor network
    MAO Lingchu, ZHAO Haitao
    2017, 37(9):  2463-2469.  DOI: 10.11772/j.issn.1001-9081.2017.09.2463
    Abstract | PDF (1106KB)
    Aiming at the problems that different numbers of sensors are needed for different targets in the monitoring area of a mobile sensor network and that no paths are formed between targets, an on-demand coverage method for different targets based on the virtual force method was proposed. According to the coverage requirements of different targets, an attractive force between targets and sensor nodes modeled on gravitational attraction, a repulsive force between nodes modeled on the Coulomb force, and gravitational lines between targets were defined. Nodes covered the targets or formed paths under the guidance of the resultant force. The simulation results show that the proposed method has a shorter convergence time than existing representative algorithms, the movement fairness index is as high as 99%, the influence of GPS error can be controlled below 1%, and the method can be deployed in a distributed manner under sparse or dense initial conditions.
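A minimal sketch of the resultant-force rule described above, assuming an inverse-square law for both the gravitation-like attraction and the Coulomb-like repulsion; the coefficients `k_att`, `k_rep`, the repulsion cutoff `d_min`, and the per-target `demand` weight are illustrative assumptions.

```python
import math

def virtual_force(node, targets, neighbors, k_att=1.0, k_rep=1.0, d_min=1.0):
    """Resultant virtual force on a node: gravitation-like attraction toward
    each target (scaled by its coverage demand) plus Coulomb-like repulsion
    from neighbor nodes that are closer than d_min."""
    fx = fy = 0.0
    for tx, ty, demand in targets:
        dx, dy = tx - node[0], ty - node[1]
        d = math.hypot(dx, dy) or 1e-9
        f = k_att * demand / d ** 2          # inverse-square attraction
        fx += f * dx / d
        fy += f * dy / d
    for nx_, ny_ in neighbors:
        dx, dy = node[0] - nx_, node[1] - ny_
        d = math.hypot(dx, dy) or 1e-9
        if d < d_min:                        # repel only when too close
            f = k_rep / d ** 2
            fx += f * dx / d
            fy += f * dy / d
    return fx, fy
```

A node between an uncovered target and a too-close neighbor is pushed along the net of the two forces, which is how nodes spread out while still drifting toward demand.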
    Self-adaptive fuzzy controller based power control for wireless sensor networks
    HU Huangshui, SHEN Weina, WANG Chuhang, ZHANG Bangcheng
    2017, 37(9):  2470-2473.  DOI: 10.11772/j.issn.1001-9081.2017.09.2470
    Abstract | PDF (741KB)
    To solve the problem of premature node death in existing power control methods for Wireless Sensor Networks (WSN), a new method called Self-Adaptive Fuzzy Power Control (SAFPC) was proposed. Firstly, a two-level fuzzy controller model with an "input-output-feedback" mechanism was designed, in which the main controller was responsible for adjusting node transmission power and the auxiliary controller was responsible for adjusting the desired node degree, so that transmission power was adjusted adaptively according to the residual energy of the node. Secondly, the fuzzification, fuzzy rules and defuzzification processes were described in detail. Finally, SAFPC was simulated and analyzed in terms of network convergence time, average energy consumption and network life cycle. The experimental results show that compared with FCTP (Fuzzy Control Transmission Power method), SAFPC increases the convergence rate by 12.5%, reduces the average energy consumption of the nodes by 3.68% and prolongs the network life cycle by 7.9%. It can be seen that SAFPC can effectively prolong the network life cycle, as well as significantly improve network dynamic adaptability and link robustness.
    Multi-cell channel estimation based on compressive sensing in massive MIMO systems
    LIU Ziyan, TANG Hu, LIU Shimei
    2017, 37(9):  2474-2478.  DOI: 10.11772/j.issn.1001-9081.2017.09.2474
    Abstract | PDF (919KB)
    Focusing on the issue that the channel estimation accuracy of a multi-cell multi-user massive Multi-Input Multi-Output (massive MIMO) system is poor at low Signal-to-Noise Ratio (SNR), a compressive sensing algorithm named Fruit Fly Stagewise Orthogonal Matching Pursuit (FF-StOMP) based on swarm-intelligent search was proposed. Based on the Stagewise Orthogonal Matching Pursuit (StOMP) solutions for the channel matrix parameters and the normalized minimum mean square errors under different thresholds, the fruit fly optimization algorithm was used to search for the minimum normalized mean square error and its corresponding threshold, so as to achieve adaptive parameter setting. The simulation results show that compared with the StOMP algorithm, the channel estimation performance of the FF-StOMP algorithm is improved by 0.5 to 1 dB when the SNR is 0 to 10 dB, and by 0.2 to 0.3 dB when the SNR is 11 to 20 dB. When the number of cell users changes, the proposed algorithm can realize adaptive channel estimation, which effectively improves the channel estimation accuracy of massive MIMO systems at low SNR.
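The fruit-fly search over the StOMP threshold can be illustrated with a generic one-dimensional Fruit Fly Optimization sketch. Here the objective `f` stands in for the normalized mean square error as a function of the threshold, and the swarm size, scatter radius, and iteration count are illustrative assumptions.

```python
import random

def fruit_fly_search(f, lo, hi, flies=10, iters=50, seed=0):
    """Fruit Fly Optimization sketch for a 1-D parameter (e.g. the StOMP
    threshold): flies scatter randomly around the best position found so
    far, and the swarm centre moves to the smell-best fly each generation."""
    rng = random.Random(seed)
    center = rng.uniform(lo, hi)
    best_x, best_y = center, f(center)
    step = (hi - lo) * 0.1
    for _ in range(iters):
        for _ in range(flies):
            x = min(hi, max(lo, center + rng.uniform(-step, step)))
            y = f(x)
            if y < best_y:
                best_x, best_y = x, y
        center = best_x
    return best_x, best_y

# Toy objective standing in for the NMSE-vs-threshold curve
best_t, best_err = fruit_fly_search(lambda t: (t - 2.5) ** 2, 0.0, 5.0)
```

In the algorithm itself, each smell evaluation would correspond to running StOMP with the candidate threshold and measuring the resulting normalized mean square error.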
    Enhanced interference alignment algorithm in cognitive MIMO network
    MA Dongya, LI Zhaoyu, YE Zonggang
    2017, 37(9):  2479-2483.  DOI: 10.11772/j.issn.1001-9081.2017.09.2479
    Abstract | PDF (748KB)
    Aiming at the problems that the traditional interference alignment algorithm based on maximum Signal to Interference and Noise Ratio (SINR) in a Multiple-Input Multiple-Output (MIMO) cognitive network is hard to converge when sending multiple data streams and that the interference between streams is prominent, an interference alignment algorithm considering data stream interference and an iteration limit was proposed. Firstly, the secondary users eliminated the interference between primary and secondary users through coding design. Then, the Generalized Rayleigh Entropy (GRE) was used to calculate the precoding and interference suppression matrices based on the maximum SINR algorithm according to channel reciprocity, and in the iterative process each iteration first made the precoding and interference suppression matrices satisfy the condition that the interference power in the expected signal space was minimal. Finally, combining the MIMO interference channels between the secondary users, the interference channels between primary and secondary users, and the necessity of interference alignment in the secondary user network, the reachable upper bound on the degrees of freedom of the secondary users was deduced. The experimental results show that compared with the traditional maximum SINR algorithm, the proposed algorithm brings no significant improvement in the total capacity of the secondary users at low SNR, but its advantages become more and more obvious as the SNR increases. When convergence is reached, the number of iterations of the proposed algorithm is reduced by 40% compared with the conventional maximum SINR algorithm. Therefore, the proposed algorithm can improve system capacity and accelerate convergence.
    Throughput analysis of multi-hop network and design of real-time estimation on neighbor nodes
    ZHU Qingchao
    2017, 37(9):  2484-2490.  DOI: 10.11772/j.issn.1001-9081.2017.09.2484
    Abstract | PDF (911KB)
    Aiming at the problems of single-hop scope and static assumptions in the theoretical analysis of Media Access Control (MAC) protocols, a multi-hop analysis model for the Mobile Ad Hoc NETwork (MANET) was proposed, and a real-time estimation algorithm for neighbor nodes was designed. Firstly, a general multi-hop throughput analysis model was established by defining a distance parameter, the ratio of Euclidean distance to real statistical distance (ERR), based on a 2-D Discrete-Time Markov Chain (DTMC) model with nodes distributed in a Poisson Network (PN). Secondly, one of the reasons for the deviation between theory and simulation, the dynamic nature of neighbor nodes, was analyzed qualitatively: ERR does not take mobility into consideration. Thirdly, a real-time estimation method for the number of neighbor nodes in a PN under the Random Walk (RW) mobility model was presented based on the Kalman filter algorithm, by redefining the state update rule as well as the measurement rule. Finally, the performance of the multi-hop throughput analysis model was compared and analyzed. The experimental results show that although a delay of 0.13 s is introduced, the throughput accuracy is improved by 8%. Therefore, the model realizes both the extension to multi-hop communication and the consideration of mobility.
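The Kalman-filter estimation of the neighbor-node count can be illustrated with a scalar filter whose prediction step follows a random-walk state model, matching the RW mobility assumption above; the process noise `q` and measurement noise `r` values are placeholders.

```python
def kalman_step(x_est, p_est, z, q=0.1, r=1.0):
    """One scalar Kalman-filter step. Predict: under a random-walk model the
    predicted state is the previous estimate and uncertainty grows by q.
    Update: correct with measurement z, whose noise variance is r."""
    # Predict
    x_pred = x_est
    p_pred = p_est + q
    # Update
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

Fed a stream of noisy neighbor counts, the filter output tracks the true count while smoothing measurement noise, at the cost of the small estimation lag reported above.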
    Indoor positioning method of warehouse mobile robot based on monocular vision
    ZHANG Tao, MA Lei, MEI Lingyu
    2017, 37(9):  2491-2495.  DOI: 10.11772/j.issn.1001-9081.2017.09.2491
    Abstract | PDF (767KB)
    Aiming at the autonomous positioning of wheeled warehouse robots, an indoor positioning method based on the fusion of visual landmarks and odometer data was proposed. Firstly, by establishing a camera model, the rotation and translation relationship between the beacon and the camera was solved to obtain positioning information. Then, based on an analysis of the characteristics of the angle difference between the gyroscope and the odometer, an angle fusion method based on variance weighting was proposed to deal with the problems of low update frequency and discontinuous positioning information. Finally, to compensate for the positioning defects of a single sensor, an odometer error model was designed and a Kalman filter was used to fuse odometer and visual positioning information. Experiments were carried out on a differential-drive wheeled mobile robot. The results show that the proposed method obviously reduces the angle error and positioning error and effectively improves the positioning accuracy: the repeated positioning error is less than 4 cm and the angle error is less than 2 degrees. The method is easy to implement and has strong practicability.
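The variance-weighted angle fusion step can be sketched as standard inverse-variance weighting, where the lower-variance heading source gets the larger weight; the paper's exact weight definition may differ, so treat this as an assumed form.

```python
def fuse_angles(theta_gyro, var_gyro, theta_odo, var_odo):
    """Inverse-variance weighted fusion of two heading estimates:
    each source is weighted by the other source's variance, so the
    more certain (lower-variance) estimate dominates."""
    total = var_gyro + var_odo
    w_gyro = var_odo / total
    w_odo = var_gyro / total
    return w_gyro * theta_gyro + w_odo * theta_odo
```

With equal variances the result is the plain average; as one sensor's variance grows, the fused heading slides toward the other sensor's reading. (A full implementation would also wrap angles into a common range before averaging.)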
    Ultra wideband localization system based on improved two-way ranging and time difference of arrival positioning algorithm
    BIAN Jiaxing, ZHU Rong, CHEN Xuan
    2017, 37(9):  2496-2500.  DOI: 10.11772/j.issn.1001-9081.2017.09.2496
    Abstract | PDF (885KB)
    Aiming at the poor positioning accuracy of traditional wireless indoor positioning technologies, an indoor positioning system based on Ultra Wide Band (UWB) technology was designed and implemented. Firstly, to solve the problem of autonomous localization and navigation, a system architecture with real-time interaction between the location server and a mobile terminal APP was proposed. Secondly, a radio message was added to the Two-Way Ranging (TWR) algorithm to reduce the ranging error caused by clock drift, improving the performance of the algorithm. Finally, the hyperbolic equation set produced by the Time Difference Of Arrival (TDOA) positioning algorithm was linearized and then solved by the Jacobi iteration method, avoiding the cases that the standard TDOA localization algorithm finds difficult to solve directly. The test results show that the system works stably and keeps the error within 30 cm in corridor and room scenes. Compared with positioning systems based on WiFi and Bluetooth, the positioning accuracy of this system is improved by about 10 times, which can meet the requirement of precise mobile positioning in complex indoor environments.
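The Jacobi iteration used on the linearized TDOA equations follows this generic scheme; in the system, `A` and `b` would come from linearizing the hyperbolic equation set, and convergence requires a suitable matrix (e.g. strictly diagonally dominant), so the example below uses such a matrix.

```python
def jacobi(A, b, iters=100, x0=None):
    """Jacobi iteration for Ax = b: every component is updated from the
    previous iterate only, x_i <- (b_i - sum_{j != i} a_ij * x_j) / a_ii.
    Converges when A is, e.g., strictly diagonally dominant."""
    n = len(A)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

Unlike Gauss-Seidel, each sweep uses only the previous iterate, which keeps the update simple and cheap on an embedded location server.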
    PTS algorithm with low complexity for reducing PAPR of FBMC-OQAM
    LI Ruomeng, TANG Qingqing
    2017, 37(9):  2501-2506.  DOI: 10.11772/j.issn.1001-9081.2017.09.2501
    Abstract | PDF (949KB)
    Aiming at the high Peak-to-Average Power Ratio (PAPR) of the Filter Bank MultiCarrier/Offset Quadrature Amplitude Modulation (FBMC-OQAM) system and the high complexity of traditional suppression methods, a new PAPR suppression method for the FBMC-OQAM system was proposed. Firstly, based on the traditional Partial Transmit Sequence (PTS) method and the characteristics of the system, the Iterative PTS (IPTS) algorithm was derived, whose complexity is significantly lower than that of the traditional PTS algorithm. Secondly, the IPTS algorithm was combined with the clipping algorithm into a new IPTS-Clipping joint algorithm for the FBMC-OQAM system: the FBMC signal was first processed by the IPTS algorithm and then clipped to further suppress the PAPR of the system. The results of theoretical analysis and simulation show that compared with the traditional PTS algorithm, the proposed algorithm reduces the number of calculations by about 70%. When the Complementary Cumulative Distribution Function (CCDF) reaches 10^-3, the PAPR value of the proposed algorithm is 48.5% lower than that of the original signal, and 33% lower than that of PTS; the suppression effect is obviously better than that of other methods. The proposed algorithm not only significantly suppresses the PAPR of the FBMC system, but also has much lower complexity than the original algorithms.
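The clipping stage and the PAPR metric it targets can be sketched as follows; the clipping ratio (clip level as a multiple of the RMS amplitude) is an illustrative parameter, and the IPTS stage itself is omitted.

```python
import math

def papr_db(signal):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip(signal, ratio=1.5):
    """Amplitude clipping: cap each sample at ratio * RMS amplitude while
    preserving its phase, lowering the peak power (at the cost of some
    in-band distortion, which is why it follows the IPTS stage)."""
    rms = math.sqrt(sum(abs(s) ** 2 for s in signal) / len(signal))
    limit = ratio * rms
    return [s if abs(s) <= limit else s / abs(s) * limit for s in signal]
```

Clipping only touches the rare high-amplitude samples, so the average power barely moves while the peak drops, which is exactly the PAPR reduction measured by the CCDF curves.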
    Spoofing detection method based on statistical property analysis of the carrier tracking loop
    LIU Dinghao, LYU Jing, SUO Longlong, HU Xiangyu
    2017, 37(9):  2507-2511.  DOI: 10.11772/j.issn.1001-9081.2017.09.2507
    Abstract | PDF (774KB)
    Since most spoofing detection methods working in the tracking phase of a satellite navigation receiver can only detect one spoofing signal transmitted from a single source, a novel spoofing detection method based on the statistical property analysis of the carrier tracking loop was proposed. Firstly, the shortcomings of the existing spoofing detection methods were analyzed. Secondly, the received signal model under a spoofing attack environment was established and the statistical character of the composite signal formed by the real and spoofing signals was analyzed. Theoretical analysis shows that when the frequencies of the spoofing signal and the real signal are different, the proposed algorithm can detect the spoofing signal because the amplitude of the I-branch signal is not stable. The simulation results show that the proposed method can achieve 100% detection probability under 2% false alarm probability when the carrier-to-noise ratio of the received signal is in the normal range (28 dB·Hz to 50 dB·Hz). With the proposed algorithm, spoofing signals transmitted from different sources can be detected, and the performance is greatly improved compared with existing methods (by about 1 dB under the same carrier-to-noise ratio, and about 4 dB under the same interference-to-signal ratio).
    Network architecture design of smart substation based on software defined network
    HUANG Xin, LI Qin, YANG Gui, ZHU Zhihan, LI Wenmeng, SHI Yuxiang
    2017, 37(9):  2512-2517.  DOI: 10.11772/j.issn.1001-9081.2017.09.2512
    Abstract | PDF (967KB)
    With the improvement of the standardization and intelligence level of secondary equipment, a more efficient and smarter communication network is needed in smart substations to meet substation operation and maintenance requirements and to achieve equipment plug-and-play, intelligent monitoring, secure subnet isolation and element interchangeability. For the application needs of unified substation network management, security isolation between subnets, and equipment compatibility and interchangeability, a Software Defined Network (SDN)-based substation network architecture was proposed. The IEC 61850 and OpenFlow protocols were used for the network architecture design, and an OpenFlow controller was used to control and isolate the individual subnets to implement network device management and secure subnet isolation. The experimental results show that precise traffic control based on service types and secure data isolation can be implemented with the proposed SDN-based substation network architecture, which has important application value for promoting the operation and maintenance level of smart substations.
    Hybrid parallel genetic algorithm based on Sunway many-core processors
    ZHAO Ruixiang, ZHENG Kai, LIU Yao, WANG Su, LIU Yan, SHENG Huanxue, ZHOU Qianhao
    2017, 37(9):  2518-2523.  DOI: 10.11772/j.issn.1001-9081.2017.09.2518
    Abstract | PDF (891KB)
    When the traditional genetic algorithm is used to solve computation-intensive tasks, the execution time of the fitness function increases rapidly, and the convergence rate becomes very low as the population size or the number of generations increases. A "coarse-grained combined with master-slave" HyBrid Parallel Genetic Algorithm (HBPGA) was designed and implemented on the Sunway "TaihuLight" supercomputer, which is ranked first in the latest TOP500 list. A two-level parallel architecture was used, combining two different programming models, MPI and Athread. Compared with the traditional genetic algorithm implemented with a single-level parallel architecture on single-core or multi-core clusters, the algorithm implemented with the two-level parallel architecture on Sunway many-core processors achieves better performance and a higher speedup. In the experiment, when 16×64 CPEs (Computing Processing Elements) were used, the maximum speedup reached 544, and the CPE speedup was more than 31.
    Yac: yet another distributed consensus algorithm
    ZHANG Jian, WANG Yang, LIU Dandan
    2017, 37(9):  2524-2530.  DOI: 10.11772/j.issn.1001-9081.2017.09.2524
    Abstract | PDF (1104KB)
    Traditional leader-based distributed consensus algorithms with a static topology suffer from serious load imbalance and a single-point performance bottleneck, and cannot work properly when the number of failed nodes exceeds 50% of the cluster size. To solve these problems, a distributed consensus algorithm (Yac) based on dynamic topology and limited voting was proposed. The algorithm dynamically generated the subset of members and the Leader node participating in each consensus vote and varied them over time, achieving statistical load balance. By removing the strong constraint that a majority of all members must participate in every vote, the algorithm achieves a higher degree of failure tolerance. The security constraints of the algorithm were re-established by a log chain mechanism, and the correctness of the algorithm was proved. The experimental results show that the single-point load concentration effect of the improved algorithm is significantly lower than that of Zookeeper, a mainstream leader-based distributed consensus implementation with static topology. The improved algorithm has better fault tolerance than Zookeeper in most cases and the same in the worst case, and under the same cluster size it has a higher throughput upper limit than Zookeeper.
    Tensor factorization recommendation algorithm based on context similarity of mobile user
    YU Keqin, WU Yingbo, LI Shun, JIANG Jiacheng, XIANG De, WANG Tianhui
    2017, 37(9):  2531-2535.  DOI: 10.11772/j.issn.1001-9081.2017.09.2531
    Abstract | PDF (822KB)
    To solve the problems of complex context and data sparsity, a new tensor decomposition algorithm based on the context similarity of mobile users was proposed, namely UCS-TF (User-Context-Service Tensor Factorization recommendation). A multi-dimensional context similarity model was established by combining user context similarity and similarity confidence. Then, the K-neighbor information of the target user was applied to the three-dimensional tensor decomposition composed of users, contexts and mobile services, so the predicted value for the target user was obtained and the mobile recommendation was generated. Compared with the cosine similarity method, the Pearson correlation coefficient method and the improved Cosine1 model, the Mean Absolute Error (MAE) of the proposed UCS-TF algorithm was reduced by 11.1%, 10.1% and 3.2% respectively, and its P@N index was also significantly improved, outperforming the above methods. In addition, compared with the Cosine1, CARS2 and TF algorithms, the UCS-TF algorithm had the smallest prediction error at 5%, 20%, 50% and 80% data density. The experimental results indicate that the proposed UCS-TF algorithm has better performance, and that combining user context similarity with the tensor decomposition model can effectively alleviate the impact of rating sparsity.
    Particle swarm and differential evolution fusion algorithm based on fuzzy Gauss learning strategy
    ZHOU Wei, LUO Jianjun, JIN Kai, WANG Kai
    2017, 37(9):  2536-2540.  DOI: 10.11772/j.issn.1001-9081.2017.09.2536
    Abstract | PDF (943KB)
    Due to weak exploitation ability, Particle Swarm Optimization (PSO) algorithms suffer from low precision and slow convergence, while the comparatively weak exploration ability of the Differential Evolution (DE) algorithm may lead to being trapped in local extrema. A particle swarm-differential evolution fusion algorithm based on a fuzzy Gaussian learning strategy was proposed. On the basis of the standard particle swarm algorithm, an elite particle population was selected, and a fusion mechanism of elite particle swarm and evolution was constructed by using mutation, crossover and selection evolution operators to improve particle diversity and convergence. A fuzzy Gaussian learning strategy conforming to human thinking characteristics was introduced to improve particle optimization ability, yielding an elite particle swarm and differential evolution fusion algorithm based on the fuzzy Gaussian learning strategy. Nine benchmark functions were calculated and analyzed. The results show that the mean values of the functions Schwefel.1.2, Sphere, Ackley, Griewank and Quadric Noise are 1.5E-39, 8.5E-82, 9.2E-13, 5.2E-17 and 1.2E-18 respectively, close to the minimum values achievable by the algorithm. The convergence of the Rosenbrock, Rastrigin, Schwefel and Salomon functions is 1 to 3 orders of magnitude higher than that of four compared particle swarm optimization algorithms, and the convergence speed of the proposed algorithm is 5% to 30% higher than that of the compared algorithms. The proposed algorithm significantly improves convergence speed and precision, and has strong capabilities of escaping from local extrema and global searching.
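The fusion idea above, PSO velocity/position updates plus DE mutation, crossover and selection applied to an elite subset, can be sketched as below. The inertia and learning coefficients, elite fraction, and DE parameters are illustrative, and the fuzzy Gaussian learning strategy is omitted.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def pso_de(f, dim=5, swarm=20, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimal PSO with a DE (mutation/crossover/selection) step applied to
    an elite subset each iteration -- a sketch of the fusion mechanism only."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                        # personal bests
    g = min(P, key=f)[:]                         # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):                 # standard PSO update
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
        # DE mutation/crossover/selection on the elite third of personal bests
        elite = sorted(range(swarm), key=lambda i: f(P[i]))[:swarm // 3]
        for i in elite:
            a, b, c = rng.sample(elite, 3)
            trial = [P[a][d] + 0.5 * (P[b][d] - P[c][d]) if rng.random() < 0.9
                     else P[i][d] for d in range(dim)]
            trial = [min(hi, max(lo, t)) for t in trial]
            if f(trial) < f(P[i]):               # greedy DE selection
                P[i] = trial
        g = min(P, key=f)[:]
    return g, f(g)

best, val = pso_de(sphere)
```

The DE step perturbs only the elite personal bests, so the extra diversity is injected where exploitation matters most while the rest of the swarm keeps the usual PSO dynamics.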
    Improved particle swarm optimization algorithm based on twice search
    ZHAO Yanlong, HUA Nan, YU Zhenhua
    2017, 37(9):  2541-2546.  DOI: 10.11772/j.issn.1001-9081.2017.09.2541
    Aiming at the premature convergence problem of standard Particle Swarm Optimization (PSO) in solving complex optimization problems, a twice-search PSO algorithm based on the gradient descent method was proposed. Firstly, when the global extremum stays unchanged beyond a preset maximum number of iterations, it was judged to be in an extremum trap. Then, the gradient descent method was used to perform a second search, and a tabu area was constructed with the optimal extremum point as its center and a specific length as its radius, to prevent particles from repeatedly searching the same area. Finally, new particles were generated based on the population diversity criterion to replace the particles to be eliminated. The twice-search algorithm and four other improved algorithms were applied to the optimization of four typical test functions. The simulation results show that the convergence accuracy of the twice-search particle swarm algorithm is higher by up to 10 orders of magnitude, its convergence speed is faster, and it finds the global optimal solution more easily.
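    The second-search step can be illustrated with a small sketch. All names and parameters here are illustrative assumptions; a numerical gradient stands in for an analytic one, and the surrounding PSO loop is omitted.

    ```python
    import numpy as np

    def numeric_grad(f, x, eps=1e-6):
        """Central-difference gradient estimate of f at x."""
        g = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (f(x + d) - f(x - d)) / (2 * eps)
        return g

    def second_search(f, gbest, lr=0.1, steps=200):
        """Refine a stagnant global best by gradient descent (the 'twice search')."""
        x = gbest.copy()
        for _ in range(steps):
            x -= lr * numeric_grad(f, x)
        return x

    def make_tabu(center, radius):
        """Tabu region: a ball around the refined optimum that particles must avoid."""
        def in_tabu(p):
            return np.linalg.norm(p - center) < radius
        return in_tabu

    f = lambda x: float(np.sum((x - 1.0) ** 2))   # toy objective with minimum at (1, 1)
    refined = second_search(f, np.array([3.0, -2.0]))
    in_tabu = make_tabu(refined, radius=0.5)
    ```

    After refinement, particles falling inside `in_tabu` would be replaced by new diverse particles, which is the replacement step described in the abstract.
    
    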
    Dynamic adjusting threshold algorithm for virtual machine migration
    ZHAO Chun, YAN Lianshan, CUI Yunhe, XING Huanlai, FENG Bin
    2017, 37(9):  2547-2550.  DOI: 10.11772/j.issn.1001-9081.2017.09.2547
    Aiming at optimizing the energy consumption of servers in data centers and choosing a reasonable time for Virtual Machine (VM) migration, a VM migration algorithm based on Dynamic Adjusting Threshold (DAT) was proposed. Firstly, the migration threshold was dynamically adjusted by analyzing historical load data acquired from Physical Machines (PMs); then the time for migrating VMs was decided by a delay trigger mechanism and PM load trend prediction. The algorithm was tested on a laboratory data center platform. Experimental results indicate that, compared with the static threshold method, the proposed algorithm shuts down more PMs and lowers the energy consumption of the data center. The VM migration algorithm based on DAT can migrate VMs dynamically according to the variation of PM load, thus improving resource utilization and VM migration efficiency while reducing the energy consumption of the data center.
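    The combination of a history-driven threshold and a delay trigger can be sketched as below. The window/standard-deviation threshold rule and all parameter names are assumptions for illustration, not the paper's exact formulas.

    ```python
    from collections import deque

    class DynamicThresholdTrigger:
        """Migration trigger with a dynamically adjusted threshold (sketch).

        The threshold follows the PM's recent load history (mean + k * std),
        and migration fires only after the load stays above it for `delay`
        consecutive samples, to avoid reacting to transient spikes.
        """
        def __init__(self, window=10, k=1.0, delay=3, floor=0.5):
            self.history = deque(maxlen=window)
            self.k, self.delay, self.floor = k, delay, floor
            self.over = 0

        def threshold(self):
            if not self.history:
                return self.floor
            n = len(self.history)
            mean = sum(self.history) / n
            var = sum((x - mean) ** 2 for x in self.history) / n
            return max(self.floor, mean + self.k * var ** 0.5)

        def observe(self, load):
            t = self.threshold()
            self.history.append(load)
            self.over = self.over + 1 if load > t else 0
            return self.over >= self.delay       # True => migrate a VM now

    trigger = DynamicThresholdTrigger()
    migrate_now = trigger.observe(0.85)          # one sample alone never triggers
    ```
    
    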
    Hierarchical representation model of APT attack
    TAN Ren, YIN Xiaochuan, LIAN Zhe, CHEN Yuxin
    2017, 37(9):  2551-2556.  DOI: 10.11772/j.issn.1001-9081.2017.09.2551
    Aiming at the problem that the attack chain model's granularity over attack phases is too coarse to indicate concrete attack means, an Advanced Persistent Threat (APT) Hierarchical Attack Representation Model (APT-HARM) was proposed. By summarizing the analysis of a large number of published APT event reports and referring to the APT attack chain model and HARM, the APT attack was divided into two formally defined layers: an upper-layer attack chain and a lower-layer attack tree. Firstly, the APT attack was divided into four stages, namely reconnaissance, infiltration, operation and exfiltration, and the characteristics of each stage were studied. Then, the attack methods in each stage were studied and composed into an attack tree according to their logical relationships. An APT attack proceeds stage by stage according to the attack chain, and the attack in each stage is performed in accordance with its attack tree. The case study shows that, compared with the attack chain model, the proposed model has reasonably fine granularity and describes attacks better. APT-HARM formally defines APT attacks, providing an idea for their prediction and prevention.
    Attack-defense game model for advanced persistent threats with asymmetric information
    SUN Wenjun, SU Yang, CAO Zhen
    2017, 37(9):  2557-2562.  DOI: 10.11772/j.issn.1001-9081.2017.09.2557
    To address the lack of modeling and analysis of Advanced Persistent Threat (APT) attacks, an attack-defense game model with asymmetric information based on FlipIt was proposed. Firstly, assets such as targeted hosts in the network system were abstracted as target resource nodes, and attack-defense scenarios were described as alternating control of the target nodes. Then, considering the asymmetry of the feedback information observed by the two sides and the incompleteness of the defensive effect, the payoff model and the optimality conditions for the attacker's and defender's strategies were derived for the case of a renewal defense strategy. Besides, theorems on simultaneous and sequential equilibria were proposed and proved. Finally, numerical illustrations were given to analyze the factors influencing the equilibrium strategies and the defense payoff, and to compare the simultaneous and sequential equilibria. The experimental results show that a periodic strategy is the defender's best strategy, and that by announcing the defense strategy in advance the defender can reach a sequential equilibrium while obtaining a higher payoff than under the simultaneous equilibrium. These conclusions show that the proposed model can theoretically guide defense against stealthy APT attacks.
    Privacy-preserving equal-interval approximate query algorithm in two-tiered sensor networks
    WANG Taochun, CUI Zhuangzhuang, LIU Ying
    2017, 37(9):  2563-2566.  DOI: 10.11772/j.issn.1001-9081.2017.09.2563
    Privacy preservation, a key factor in expanding the application of Wireless Sensor Networks (WSNs), is a current research hotspot. In view of the privacy of sensory data in WSNs, a Privacy-preserving Equal-Interval Approximate Query (PEIAQ) algorithm for two-tiered sensor networks based on data aggregation was proposed. Firstly, sensor node IDs and sensory data were concealed in a random vector, and linear equations were then worked out by the base station based on the random vector. As a result, a histogram containing global statistics was formed, from which the approximate query results were obtained. In addition, sensory data were encrypted through a perturbation technique and keys shared between the sensor nodes and the base station, which ensures the privacy of the sensory data. Simulation experiments show that PEIAQ reduces traffic in the query phase by approximately 60% compared with PGAQ (Privacy-preserving Generic Approximate Query). Therefore, PEIAQ is efficient and has a low energy cost.
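    The idea of hiding per-node histograms behind shared-key masks, so that the storage tier only ever sees perturbed sums while the base station can recover the global equal-interval histogram, can be sketched as follows. The mask construction, bucket layout and epoch handling are illustrative assumptions, not the PEIAQ protocol itself.

    ```python
    import random

    BUCKETS, LOW, HIGH = 8, 0, 80                    # equal-interval histogram layout
    WIDTH = (HIGH - LOW) / BUCKETS

    def bucket_vector(readings):
        v = [0] * BUCKETS
        for r in readings:
            v[min(int((r - LOW) / WIDTH), BUCKETS - 1)] += 1
        return v

    def mask(node_key, epoch):
        """Pseudorandom perturbation derived from the key shared with the base station."""
        rng = random.Random(node_key * 1_000_003 + epoch)
        return [rng.randrange(1 << 16) for _ in range(BUCKETS)]

    def node_report(readings, node_key, epoch):
        """Sensor node: reports perturbed bucket counts; raw data stays hidden."""
        return [c + m for c, m in zip(bucket_vector(readings), mask(node_key, epoch))]

    def storage_aggregate(reports):
        """Storage node: sums masked vectors without learning any true count."""
        return [sum(col) for col in zip(*reports)]

    def base_station_recover(agg, node_keys, epoch):
        """Base station: strips every node's mask to obtain the global histogram."""
        for key in node_keys:
            agg = [a - m for a, m in zip(agg, mask(key, epoch))]
        return agg

    keys = [101, 202, 303]
    data = [[5, 12, 33], [14, 14, 71], [40, 41, 42]]
    agg = storage_aggregate([node_report(d, k, epoch=7) for d, k in zip(data, keys)])
    hist = base_station_recover(agg, keys, epoch=7)
    approx_in_10_50 = sum(hist[1:5])                 # approximate range query [10, 50)
    ```

    Because aggregation is a plain sum, the base station can subtract each node's pseudorandom mask and recover exact bucket counts, while intermediate nodes only handle masked values.
    
    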
    PTDC: privacy-aware trajectory data collection technology under road network constraint
    HUO Zheng, WANG Weihong, CAO Yuhui
    2017, 37(9):  2567-2571.  DOI: 10.11772/j.issn.1001-9081.2017.09.2567
    Since trajectory privacy violations and homogeneous semantic location attacks against moving objects are serious problems in road network environments, a Privacy-aware Trajectory Data Collection (PTDC) algorithm was proposed. Firstly, the sensitivity of each Point Of Interest (POI) was computed from the entropy of its visits; secondly, based on a mixture distance combining sensitivity and Euclidean distance, θ-weight was defined and a weighted model of vertices and edges in the road network was established to reach k-θ-D anonymity, which can resist semantic location homogeneity attacks; finally, based on breadth-first traversal of an undirected graph, an anonymization algorithm satisfying the semantic difference of POIs was proposed, so that a user's sensitive sampling location is replaced by an anonymous region. The data utility of the PTDC algorithm was evaluated theoretically. A set of experiments was implemented to test the PTDC algorithm and compare it with YCWA (You Can Walk Alone), a privacy-preserving algorithm for free space. In theory, the privacy level of YCWA is lower than that of PTDC. The experimental results show that the PTDC algorithm has an average information loss of about 15% and an average range count query error rate of about 12%, performing slightly worse than YCWA, while its running time is less than 5 seconds, far better than that of YCWA. The PTDC algorithm meets the needs of real-time online data collection.
    New security analysis of several kinds of high-level cryptographical S-boxes
    ZHAO Ying, YE Tao, WEI Yongzhuang
    2017, 37(9):  2572-2575.  DOI: 10.11772/j.issn.1001-9081.2017.09.2572
    Focusing on whether several kinds of high-level cryptographic S-boxes have new security flaws, an algorithm for solving the nonlinear invariant functions of S-boxes was proposed, based mainly on the algebraic relationship between the input and output of cryptographic S-boxes. Using the proposed algorithm, several kinds of S-boxes were tested, and several of them were found to share the same nonlinear invariant function. In addition, if these S-boxes were used in the nonlinear parts of the block cipher Midori-64, a new variant algorithm would be obtained. Security analysis was carried out with the nonlinear invariant attack. The analytical results show that the Midori-64 variant faces a serious security vulnerability: there exist 2^64 weak keys under the nonlinear invariant attack, while the data, time and storage complexity of the attack are negligible. Consequently, some high-level cryptographic S-boxes have security flaws.
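    The search for nonlinear invariant functions can be illustrated for 4-bit S-boxes, where the full space of 2^16 Boolean functions is small enough to enumerate directly. This brute-force sketch is one possible way to organize such a search, not the authors' algorithm.

    ```python
    from itertools import product

    def nonlinear_invariants(sbox, nbits=4):
        """Enumerate truth tables g with g(S(x)) ^ g(x) constant and g non-affine.

        A nonlinear invariant of S is a non-affine Boolean function g such that
        g(S(x)) + g(x) equals the same bit c for every input x.
        """
        size = 1 << nbits
        # Truth tables of all affine functions a0 + a1*x1 + ... + an*xn over GF(2).
        affine = set()
        for coeffs in product((0, 1), repeat=nbits + 1):
            tt = 0
            for x in range(size):
                bit = coeffs[0]
                for i in range(nbits):
                    bit ^= coeffs[i + 1] & (x >> i)   # coefficient AND bit i of x
                tt |= (bit & 1) << x
            affine.add(tt)
        found = []
        for tt in range(1 << size):                   # every Boolean function on nbits bits
            g = lambda x: (tt >> x) & 1
            c = g(sbox[0]) ^ g(0)
            if tt not in affine and all(g(sbox[x]) ^ g(x) == c for x in range(size)):
                found.append(tt)
        return found

    # Sanity check on the identity S-box: every non-affine g is trivially invariant.
    inv_id = nonlinear_invariants(list(range(16)))
    ```

    For the identity mapping all 2^16 truth tables satisfy the constant-difference condition with c = 0, so the result is the whole space minus the 32 affine functions.
    
    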
    Information hiding algorithm based on spherical segmentation of 3D model
    REN Shuai, ZHANG Tao, YANG Tao, SUO Li, MU Dejun
    2017, 37(9):  2576-2580.  DOI: 10.11772/j.issn.1001-9081.2017.09.2576
    Aiming at the weak robustness against geometric attacks of information hiding algorithms for 3D models, an information hiding algorithm based on spherical segmentation of the 3D model was proposed. Firstly, the 3D model was preprocessed by principal component analysis, spherical coordinate transformation, spherical segmentation and partition sorting. Then, the points with larger normal vectors in each stereo partition were taken as feature points, and wavelet transform was applied to the feature points according to the amount of secret information to be embedded. Finally, the scrambled secret information was embedded into the preprocessed carrier to generate the stego 3D model. The experimental results show that the algorithm is invisible and has good robustness against random noise, remeshing and other common attacks.
    Information hiding algorithm based on compression sensing and GHM multiwavelet transform
    ZHANG Tao, KANG Yuan, REN Shuai, LIU Yunong
    2017, 37(9):  2581-2584.  DOI: 10.11772/j.issn.1001-9081.2017.09.2581
    To solve the problems of low invisibility and weak anti-attack ability in traditional information hiding algorithms, an information hiding algorithm based on compressed sensing was proposed. Firstly, first-order GHM (Geronimo-Hardin-Massopust) multiwavelet transform was applied to the carrier image, the obtained medium-energy region was processed by first-order GHM transform again to obtain the HH component, and the HH component was decomposed by Singular Value Decomposition (SVD). Secondly, wavelet transform was applied to the secret image, and the obtained wavelet coefficients were processed by compressed sensing to obtain the measurement matrix; the elements of this matrix were then decomposed by SVD. Finally, the singular values of the carrier image were replaced by the singular values of the secret image to complete the embedding of the secret information. The experiments show that, compared with two existing information hiding algorithms, the invisibility is improved by 5.99% and 22.11% respectively, and the robustness against common attacks such as low-pass filtering, salt-and-pepper noise, Gaussian noise and JPEG compression is improved by 4.11% and 11.53% on average.
    Deep belief networks based on sparse denoising auto encoders
    ZENG An, ZHANG Yinan, PAN Dan, Xiao-wei SONG
    2017, 37(9):  2585-2589.  DOI: 10.11772/j.issn.1001-9081.2017.09.2585
    The conventional Deep Belief Network (DBN) usually initializes the network by randomly initializing the weights and biases of its Restricted Boltzmann Machines (RBMs). Although this can overcome local optimality and long training time to some extent, it is still difficult to achieve higher accuracy and better learning efficiency, owing to the large difference between the reconstruction and the original input caused by random initialization. In view of this problem, a DBN model based on the Sparse Denoising AutoEncoder (SDAE) was proposed, whose advantage lies in the feature extraction performed by the SDAE. Firstly, the SDAE was trained; then the obtained weights and biases were used to initialize the DBN; finally, the DBN was trained. Experiments were performed on the Poker Hand card game data set and the MNIST and USPS handwriting data sets to verify the performance of the proposed model. On the Poker Hand data set, compared with the conventional DBN, the error rate of the proposed model is lowered by 46.4%, and the accuracy and recall are improved by 15.56% and 14.12% respectively. The results show that the proposed method is superior to existing methods in recognition performance.
    Sparse signal reconstruction optimization algorithm based on recurrent neural network
    WANG Xingxing, LI Guocheng
    2017, 37(9):  2590-2594.  DOI: 10.11772/j.issn.1001-9081.2017.09.2590
    Aiming at the problem of sparse signal reconstruction, an optimization algorithm based on a Recurrent Neural Network (RNN) was proposed. Firstly, the sparse representation of the signal was formulated, and the mathematical model was transformed into an optimization problem. Then, given that the l0-norm is a non-convex and non-differentiable function and the optimization problem is NP-hard, an equivalent optimization problem was proposed under the premise that the measurement matrix A satisfies the Restricted Isometry Property (RIP). Finally, the corresponding Hopfield RNN model was established to solve the equivalent optimization problem and thereby reconstruct sparse signals. The experimental results show that, for different numbers of observations m, the relative error of the RNN algorithm is smaller than that of three other algorithms and it requires fewer observations; the RNN algorithm can reconstruct sparse signals efficiently.
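    The sparsity-constrained reconstruction can be illustrated with iterative hard thresholding, a simple discrete-time iteration that is often viewed as a recurrent network; it stands in here for the paper's Hopfield RNN dynamics, which are not reproduced, and all problem sizes are illustrative.

    ```python
    import numpy as np

    def iht(A, b, k, mu=0.2, iters=500):
        """Iterative hard thresholding: x <- H_k(x + mu * A^T (b - A x)).

        H_k keeps the k largest-magnitude entries; under RIP-type conditions on A
        the iteration converges toward the k-sparse solution of b = A x.
        """
        x = np.zeros(A.shape[1])
        support = np.arange(k)
        for _ in range(iters):
            x = x + mu * A.T @ (b - A @ x)
            support = np.argsort(np.abs(x))[-k:]    # H_k: keep k largest magnitudes
            pruned = np.zeros_like(x)
            pruned[support] = x[support]
            x = pruned
        # Debias: least squares restricted to the detected support.
        x_hat = np.zeros_like(x)
        x_hat[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
        return x_hat

    rng = np.random.default_rng(0)
    m, n, k = 60, 80, 3
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # RIP-friendly random measurements
    x_true = np.zeros(n)
    x_true[[5, 17, 42]] = [3.0, -2.0, 1.5]
    x_rec = iht(A, A @ x_true, k)
    ```

    Once the correct support is identified, the final least-squares step recovers the nonzero coefficients essentially exactly.
    
    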
    Semi-supervised classification algorithm based on C-means clustering and graph transduction
    WANG Na, WANG Xiaofeng, GENG Guohua, SONG Qiannan
    2017, 37(9):  2595-2599.  DOI: 10.11772/j.issn.1001-9081.2017.09.2595
    Aiming at the problem that the traditional Graph Transduction (GT) algorithm is computationally intensive and inaccurate, a semi-supervised classification algorithm based on C-means clustering and graph transduction was proposed. Firstly, the Fuzzy C-Means (FCM) clustering algorithm was used to pre-select unlabeled samples and reduce the scope of the GT algorithm. Then, a k-nearest-neighbor sparse graph was constructed to reduce false connections in the similarity matrix, thereby reducing composition time, and the label information of the pre-selected unlabeled samples was obtained by label propagation. Finally, combined with the semi-supervised manifold hypothesis model, the extended labeled data set and the remaining unlabeled data set were used to train the classifier and obtain the final classification result. On the Weizmann Horse data set, the accuracy of the proposed algorithm is more than 96%, at least 10 percentage points higher than that of traditional methods that use GT alone to address the dependence on the initial label set. Applied directly to terracotta warrior and horse images, the algorithm achieves a classification accuracy of more than 95%, clearly higher than that of the traditional graph transduction algorithm. The experimental results show that the proposed semi-supervised classification algorithm achieves a better classification effect in image classification and is of great significance for accurate image classification.
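    The graph transduction step on a k-nearest-neighbor sparse graph can be sketched with plain label propagation. The FCM pre-selection stage is omitted, and all parameter values and the toy two-cluster data are illustrative assumptions.

    ```python
    import numpy as np

    def knn_graph(X, k=5, sigma=1.0):
        """Sparse kNN affinity matrix (reduces false connections vs. a full graph)."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.zeros_like(d2)
        for i in range(len(X)):
            nbrs = np.argsort(d2[i])[1:k + 1]       # skip self at index 0
            W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
        return np.maximum(W, W.T)                   # symmetrize

    def label_propagation(X, y, iters=200, k=5):
        """y holds class ids for labeled points and -1 for unlabeled points."""
        W = knn_graph(X, k)
        P = W / W.sum(1, keepdims=True)             # row-normalized transition matrix
        classes = np.unique(y[y >= 0])
        F = np.zeros((len(X), len(classes)))
        F[y >= 0, :] = np.eye(len(classes))[y[y >= 0]]
        clamp = F.copy()
        for _ in range(iters):
            F = P @ F
            F[y >= 0] = clamp[y >= 0]               # clamp the labeled seeds
        return classes[F.argmax(1)]

    rng = np.random.default_rng(1)
    A = rng.normal(0, 0.3, (20, 2))                 # cluster around the origin
    B = rng.normal(3, 0.3, (20, 2))                 # cluster around (3, 3)
    X = np.vstack([A, B])
    y = -np.ones(40, dtype=int)
    y[0], y[20] = 0, 1                              # one labeled seed per cluster
    pred = label_propagation(X, y)
    ```

    With only one labeled point per cluster, the labels diffuse along the sparse graph and cover both clusters correctly.
    
    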
    Kernel fuzzy C-means clustering based on improved artificial bee colony algorithm
    LIANG Bing, XU Hua
    2017, 37(9):  2600-2604.  DOI: 10.11772/j.issn.1001-9081.2017.09.2600
    Aiming at the problems that the Kernel-based Fuzzy C-Means (KFCM) algorithm is sensitive to the initial clustering centers and easily falls into local optima, and exploiting the simplicity and high global convergence speed of the Artificial Bee Colony (ABC) algorithm, a new clustering algorithm alternating the Improved Artificial Bee Colony (IABC) algorithm with KFCM iteration was proposed. Firstly, the optimal solution obtained by IABC was used as the initial clustering center of the KFCM algorithm; IABC improves the search behavior of the employed bees using the change rate of the difference from the current dimension's optimal solution during iteration, balancing the global search and local exploitation abilities of the artificial bee colony algorithm. Secondly, the fitness function of the KFCM algorithm was constructed based on within-class and between-class distances, and the cluster centers were optimized by the KFCM algorithm. Finally, the IABC and KFCM algorithms were executed alternately to achieve optimal clustering. Simulation experiments were carried out on three benchmark test functions and six data sets from the UCI standard repository. The experimental results show that IABC-KFCM improves the clustering effectiveness index by 1 to 4 percentage points compared with IABC-GFCM (Generalized Fuzzy Clustering algorithm based on Improved ABC), and has the advantages of strong robustness and high clustering precision.
    Key frame extraction of motion video based on spatial-temporal feature locally preserving
    SHI Nianfeng, HOU Xiaojing, ZHANG Ping
    2017, 37(9):  2605-2609.  DOI: 10.11772/j.issn.1001-9081.2017.09.2605
    To improve the motion expressiveness and compression rate of motion video key frames, a dynamic key frame extraction technique based on flexible pose estimation and spatial-temporal feature embedding was proposed. Firstly, a Spatial-Temporal feature embedded Flexible Mixture-of-Parts articulated human model (ST-FMP) was designed by preserving the spatial-temporal features of body parts, and based on ST-FMP the N-best algorithm with spatial-temporal local preservation of uncertain body parts was adopted to estimate the body configuration in a single frame. Then, the relative positions and motion directions of body parts were used to describe the characteristics of human motion, and the Laplacian scoring algorithm was used for dimensionality reduction to obtain a discriminative human motion feature vector with local topological structure. Finally, the ISODATA (Iterative Self-Organizing Data Analysis Technique) algorithm was used to dynamically determine the key frames. In key frame extraction experiments on aerobics video, the accuracy of ST-FMP on uncertain body parts reached 81%, about 15 percentage points higher than that of the articulated human model with Flexible Mixture-of-Parts (FMP), and also higher than that of Key Frame Extraction based on prior knowledge (KFE) and key frame extraction based on motion blocks. The experimental results show that the proposed approach is sensitive to motion feature selection and human pose configuration, and can be used for sports video annotation.
    Video monitoring method of escalator entrance area based on Adaboost and codebook model
    DU Qiliang, LI Haozheng, TIAN Lianfang
    2017, 37(9):  2610-2616.  DOI: 10.11772/j.issn.1001-9081.2017.09.2610
    Aiming at the problem that traditional video monitoring methods cannot segment dense foreground objects accurately, a multi-target video monitoring method based on Adaboost and the codebook model was proposed. Firstly, an Adaboost human head classifier was obtained by training, and a background model was established by the codebook algorithm for the escalator image captured from a vertical view. The foreground image was extracted, and heads were detected and tracked. Then, pedestrian targets were removed to obtain object targets, and the object targets were tracked. Finally, the movement of pedestrians and objects was monitored. Experimental results on 12 entrance area videos show that the method can track pedestrians and objects accurately and stably, and can accomplish monitoring tasks such as retrograde detection, passenger counting, pedestrian congestion and object retention. With a processing speed of 36 frames per second, the tracking accuracy is above 94% and the monitoring accuracy is 95.8%. The proposed algorithm meets the robustness, real-time and accuracy requirements of an intelligent video monitoring system.
    Speaker authentication method based on quantum tunneling effect
    HUANG Liang, PAN Ping, ZHOU Chao
    2017, 37(9):  2617-2620.  DOI: 10.11772/j.issn.1001-9081.2017.09.2617
    Aiming at the unstructured characteristics of speech signals, a speaker authentication method based on the quantum tunneling effect was proposed. Based on the quantum tunneling effect, the quantum properties of framed speech signals were analyzed, each speech frame was regarded as a quantum state, and the quantization of the algorithm was realized. Potential barriers were then used to separate the energy characteristics: a barrier group was constructed to extract the energy spectrum characteristics of the signal, which were used as the characteristic parameters. Finally, the speech signal was modeled by a Gaussian Mixture Model (GMM) to complete speaker authentication. The simulation results show that, compared with the traditional method, using quantum tunneling theory for speaker identification can effectively reduce algorithm complexity and improve discrimination, providing a new direction for speaker authentication and quantum information theory.
    Optimizing multi-slice real-time interactive visualization for out-of-core seismic volume
    JI Lianen, ZHANG Xiaolin, LIANG Shiyi, WANG Bin
    2017, 37(9):  2621-2625.  DOI: 10.11772/j.issn.1001-9081.2017.09.2621
    During multi-slice interactive visualization of out-of-core seismic volumes on common computing platforms, traditional cache scheduling approaches, which take no account of the spatial relationship between blocks and slices, lead to a low cache hit rate during interaction; it is also difficult to achieve high rendering quality with common multi-resolution rendering methods. In view of these problems, a cache scheduling strategy named Maximum Distance First Out (MDFO) was designed. Firstly, according to the spatial position of the interactive slice, the scheduling priority of blocks in the cache was adjusted so that candidate blocks have a higher hit rate when the slice moves continuously. Then, a two-stage slice interaction method was proposed: fixed-resolution blocks guarantee real-time interaction, the final display quality is improved by step-by-step refinement, and the information entropy of the block data is further combined to enhance the resolution of the user's region of interest. The experimental results show that the proposed method effectively improves the overall block hit rate, reaching a proportion of more than 60%. Meanwhile, the two-stage strategy achieves higher-quality images for application-oriented requirements and resolves the contradiction between interaction efficiency and rendering quality in out-of-core seismic data visualization.
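    The MDFO eviction rule can be sketched as a small cache keyed by block grid position. The interface is an assumption for illustration; distances here are plain squared Euclidean distances between block positions and the slice position.

    ```python
    class MDFOCache:
        """Maximum Distance First Out: evict the cached block farthest from the slice.

        Blocks are keyed by their (i, j, k) grid position; on a miss with a full
        cache, the block with the greatest distance to the current slice position
        is evicted, keeping blocks near the slice ready for continued interaction.
        """
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = {}                      # position -> block data

        @staticmethod
        def _dist2(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q))

        def access(self, pos, slice_pos, load):
            if pos in self.blocks:
                return self.blocks[pos]           # cache hit
            if len(self.blocks) >= self.capacity:
                victim = max(self.blocks, key=lambda p: self._dist2(p, slice_pos))
                del self.blocks[victim]           # evict the farthest block
            self.blocks[pos] = load(pos)
            return self.blocks[pos]

    cache = MDFOCache(2)
    cache.access((0, 0, 0), (0, 0, 0), lambda p: "b0")
    cache.access((9, 0, 0), (0, 0, 0), lambda p: "b9")
    cache.access((1, 0, 0), (0, 0, 0), lambda p: "b1")   # evicts the farthest block
    ```

    Unlike LRU, the victim choice depends only on geometry, so blocks adjacent to the slice survive even if they were loaded long ago.
    
    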
    Improved small feature culling for large scale process plant model based on octree
    DU Zhenlin, TANG Weiqing, QIN Li, LI Shicai
    2017, 37(9):  2626-2630.  DOI: 10.11772/j.issn.1001-9081.2017.09.2626
    To eliminate the drawbacks of traditional small feature culling algorithms, whose processing granularity is the triangle and which cannot efficiently cope, within a limited time, with models whose vertices and triangles number in the hundreds of millions, an improved small feature culling algorithm for large-scale process plants based on an octree was proposed. Based on the component primitive characteristics and spatial characteristics of the process plant model, a screen value quantifying the size of a component was proposed, and the established octree was combined with the screen value to estimate an upper bound on the number of pixels, so as to quickly determine whether a component should be culled. The experimental results show that the proposed algorithm is simple and effective: compared with currently popular review software, after loading a factory model with 10000 pipelines, the frame rate is increased by at least 50%, which greatly improves the platform's fluency. This is of positive significance for raising the overall design level of the process plant industry and graphics platforms.
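    The screen-value test can be sketched by projecting each component's bounding sphere to an approximate on-screen pixel size and culling anything below a threshold. The projection formula, parameter names and values are illustrative assumptions, and the octree traversal is omitted.

    ```python
    import math

    def screen_pixels(radius, distance, fov_deg=60.0, viewport_h=1080):
        """Approximate on-screen pixel diameter of a component's bounding sphere."""
        if distance <= 0:
            return float("inf")                    # viewer inside the sphere: never cull
        scale = viewport_h / (2 * math.tan(math.radians(fov_deg) / 2))
        return (2 * radius / distance) * scale

    def cull_small(components, camera, threshold_px=4.0):
        """Keep only components whose projected size exceeds the pixel threshold."""
        kept = []
        for center, radius in components:
            d = math.dist(center, camera)
            if screen_pixels(radius, d) >= threshold_px:
                kept.append((center, radius))
        return kept

    parts = [((0, 0, 10), 1.0),    # large pipe segment, clearly visible
             ((0, 0, 10), 0.001)]  # tiny bolt, far smaller than one pixel
    visible = cull_small(parts, camera=(0, 0, 0))
    ```

    Applying the same upper-bound test to octree cells first lets whole subtrees of sub-pixel components be culled without visiting individual parts.
    
    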
    Double ring mapping projection for panoramic video
    LIN Chang, LI Guoping, ZHAO Haiwu, WANG Guozhong, GU Xiao
    2017, 37(9):  2631-2635.  DOI: 10.11772/j.issn.1001-9081.2017.09.2631
    To solve the problems of excessive deformation in some areas and a large volume of redundant data in panoramic video mapping, a Double-Ring mapping Projection (DRP) algorithm was proposed. Firstly, according to the geometric characteristics of spherical video and the visual characteristics of the Human Visual System (HVS), the spherical video was divided into 14 equal-sized regions by two mutually orthogonal ring regions. Then, according to the spatial-domain sampling theorem, the spherical video content of the 14 regions was mapped to 14 rectangular videos of the same size by the Lanczos interpolation method. Finally, according to the characteristics of the latest video coding standard, the 14 rectangular videos were rearranged into a compact panoramic video conforming to coding standards. The experimental results show that the DRP algorithm achieves higher compression efficiency than the EquiRectangular mapping Projection (ERP), OctaHedral mapping Projection (OHP) and IcoSahedral mapping Projection (ISP) algorithms. Specifically, compared with the most popular ERP algorithm, the proposed method reduces the bit rate by 8.61%, obviously improving coding efficiency.
    Enhanced self-learning super-resolution approach for single image
    HUANG Feng, WANG Xiaoming
    2017, 37(9):  2636-2642.  DOI: 10.11772/j.issn.1001-9081.2017.09.2636
    Aiming at the main problem with the Sparse Representation (SR) coefficients of image blocks in image super-resolution methods, an enhanced self-learning super-resolution approach for single images was proposed using a weighting idea. Firstly, a pyramid of high- and low-resolution images was established by self-learning. Then, the image block features of the low-resolution images and the central pixels of the corresponding high-resolution image blocks were extracted. The role of the center pixel in constructing the sparse coefficients of an image block was emphasized by assigning different weights to different pixels in the block. Finally, SR theory was combined with the Support Vector Regression (SVR) technique to build the super-resolution image reconstruction model. The experimental results show that, compared with the Self-Learning Super-Resolution method for a single image (SLSR), the proposed method increases the Peak Signal-to-Noise Ratio (PSNR) by an average of 0.39 dB and reduces the no-reference BRISQUE (Blind/Reference-less Image Spatial Quality Evaluator) score by an average of 9.7. Both subjective inspection and objective metrics show that the proposed super-resolution method is more effective.
    Fast inter-frame coding algorithm for screen content based on temporal-spatial correlation
    HU Qingqing, PENG Zongju, CHEN Fen
    2017, 37(9):  2643-2647.  DOI: 10.11772/j.issn.1001-9081.2017.09.2643
    Aiming at the high complexity of inter-frame coding for screen content video, a fast inter-frame coding algorithm based on temporal-spatial correlation was proposed. Firstly, frames to be encoded were classified as static or motion frames by a motion-static detection algorithm. Then, different encoding strategies were applied to the two frame types. For static frames, the optimal partition depth and prediction mode of each Coding Unit (CU) were determined from the CU partition depth and prediction mode of the temporally corresponding CU. For static Largest CUs (LCUs) in motion frames, CU partitioning was terminated early using temporal correlation, and mode selection considered only large-size modes; for motion LCUs in motion frames, the motion-static characteristics of adjacent LCUs were utilized to determine the current CU's partition depth and prediction mode. The experimental results show that, compared with the original coding platform, the proposed method reduces the average coding time by 46.40% while BDBR (Bjøntegaard Delta Bit Rate) increases by only 3.65%. The proposed method significantly reduces the complexity of screen content inter-frame encoding with negligible BDBR performance loss, and is therefore beneficial to real-time screen content video applications.
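    The motion-static detection that drives the frame classification can be sketched as a per-frame changed-pixel ratio test. The pixel tolerance and change-ratio thresholds are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def classify_frames(frames, pixel_tol=2, change_ratio=0.01):
        """Label each frame 'static' or 'motion' against its predecessor.

        A pixel counts as changed when it differs by more than pixel_tol; a frame
        is 'motion' when the changed fraction exceeds change_ratio, which suits
        screen content where most frames are identical between user actions.
        """
        labels = ["static"]                        # first frame has no predecessor
        for prev, cur in zip(frames, frames[1:]):
            diff = np.abs(cur.astype(int) - prev.astype(int)) > pixel_tol
            labels.append("motion" if diff.mean() > change_ratio else "static")
        return labels

    base = np.zeros((64, 64), dtype=np.uint8)
    moved = base.copy()
    moved[10:20, 10:20] = 255                      # a window dragged into view
    labels = classify_frames([base, base, moved, moved])
    ```

    Frames labeled static can then reuse the co-located CU depths and modes, which is where the bulk of the coding-time saving comes from.
    
    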
    Text image restoration algorithm based on sparse coding and ridge regression
    WANG Zhiyi, BI Duyan, XIONG Lei, FAN Zunlin, ZHANG Xiaoyu
    2017, 37(9):  2648-2651.  DOI: 10.11772/j.issn.1001-9081.2017.09.2648
    To overcome the shortcomings of sparse coding in text image restoration, namely the limited expressiveness of dictionary atoms and high computational complexity, a text image restoration algorithm based on sparse coding and ridge regression was proposed. Firstly, at the training stage, image patches were used to train a dictionary for sparse representation, and the sampled image patches were clustered according to their Euclidean distances to the dictionary atoms. Then, ridge regressors between low-quality and clear text image patches were constructed in the local manifold space, achieving a local multi-linear extension of the dictionary atoms and fast computation. Finally, at the testing stage, clear text image patches were computed directly by searching for the dictionary atoms most similar to the low-quality text image patches, without computing the sparse coding of the low-quality patches. The experimental results show that, compared with existing sparse coding algorithms, the proposed algorithm improves the Peak Signal-to-Noise Ratio (PSNR) by 0.3 to 1.1 dB and reduces computing time by one to two orders of magnitude. The method therefore provides a good and fast solution for text image restoration.
    Saliency detection based on guided Boosting method
    YE Zitong, ZOU Lian, YAN Jia, FAN Ci'en
    2017, 37(9):  2652-2658.  DOI: 10.11772/j.issn.1001-9081.2017.09.2652
    Abstract ( )   PDF (1249KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of impure training samples and overly simple feature extraction in existing saliency detection models based on guided learning, an improved Boosting-based saliency detection algorithm was proposed, which improves the purity of the training sample set and the feature extraction scheme to enhance the learning effect. Firstly, a coarse sample map was generated by a bottom-up saliency detection model and then quickly and effectively optimized by cellular automata to establish reliable Boosting samples, and the training samples were used to label the original images. Then, color and texture features were extracted from the training set. Finally, Support Vector Machine (SVM) weak classifiers with different features and different kernels were combined by Boosting into a strong classifier, each pixel of the image was classified as foreground or background, and a saliency map was obtained. The experimental results on the ASD and SED1 databases show that the proposed algorithm produces complete and clear saliency maps for both complex and simple images, with good AUC (Area Under Curve) values for the precision-recall curve. Owing to its accuracy, the proposed algorithm can be applied in the pre-processing stage of computer vision tasks.
    Weak mutation test case set generation based on dynamic set evolutionary algorithm
    GUO Houqian, WANG Weiwei, SHANG Ying, ZHAO Ruilian
    2017, 37(9):  2659-2664.  DOI: 10.11772/j.issn.1001-9081.2017.09.2659
    Abstract ( )   PDF (1113KB) ( )  
    References | Related Articles | Metrics
    To solve the problems of fixed individual size and high execution cost in weak mutation test case set generation based on the Set Evolutionary Algorithm (SEA), a generation method for weak mutation test case sets based on the Dynamic Set Evolutionary Algorithm (DSEA) was proposed, in which test case sets serve as individuals and weak mutation test case sets covering all mutant branches are generated. In the evolutionary process, according to the minimum subset of the optimal individual and the number of uncovered mutant branches, the minimum size of the required test case set was calculated by a set compact operator, and the size of all individuals in the population was adjusted accordingly, so that a weak mutation test case set of minimal size was generated. At the same time, a fitness function for evaluating a test case set as an individual was designed. The experimental results show that when DSEA is used to guide the generation of weak mutation test cases, the size of the test case set is up to 50.15% smaller than the initial individual size, and the execution time is up to 74.58% lower than that of SEA. Thus, DSEA provides a solution for generating weak mutation test case sets of minimal size while speeding up the algorithm.
    Recommendation method based on k nearest neighbors using data dimensionality reduction and exact Euclidean locality-sensitive hashing
    GUO Yudong, GUO Zhigang, CHEN Gang, WEI Han
    2017, 37(9):  2665-2670.  DOI: 10.11772/j.issn.1001-9081.2017.09.2665
    Abstract ( )   PDF (1114KB) ( )  
    References | Related Articles | Metrics
    The recommendation method based on k nearest neighbors suffers from several problems, such as the high dimensionality of rating features, the slow speed of searching nearest neighbors, and the cold start problem of ratings. To solve these problems, a recommendation method based on k nearest neighbors using data dimensionality reduction and Exact Euclidean Locality-Sensitive Hashing (E2LSH) was proposed. Firstly, the rating data, user attribute data and item category data were integrated as the input to train a Stacked Denoising Auto-encoder (SDA) neural network, whose last hidden layer values were used as the feature codes of the input data to complete dimensionality reduction. Then, an index of the reduced-dimension data was built by E2LSH, and the similar nearest neighbors of the target users or target items were retrieved. Finally, the similarities between the target and its neighbors were calculated, and the target user's similarity-weighted prediction rating for the target item was obtained. The experimental results on standard data sets show that the mean squared error of the proposed method is reduced by about 7.2% on average compared with the recommendation method based on Locality-Sensitive Hashing (LSH-ICF), while the average run time is on par with LSH-ICF. This shows that the proposed method alleviates the rating cold start problem on the premise of keeping the efficiency of LSH-ICF.
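The E2LSH indexing step relies on the p-stable hash family h(v) = ⌊(a·v + b)/w⌋ with Gaussian a and uniform offset b. A minimal sketch of such a table (class name and parameter values are assumptions, not the paper's implementation):

```python
import numpy as np

class E2LSHTable:
    """Minimal E2LSH-style hash table: h(v) = floor((a.v + b) / w),
    with k such functions concatenated into one bucket key."""
    def __init__(self, dim, k=4, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((k, dim))   # Gaussian projections
        self.b = rng.uniform(0, w, size=k)       # random offsets in [0, w)
        self.w = w
        self.buckets = {}

    def _key(self, v):
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

    def insert(self, idx, v):
        self.buckets.setdefault(self._key(v), []).append(idx)

    def query(self, v):
        """Return indices hashed to the same bucket (neighbor candidates)."""
        return self.buckets.get(self._key(v), [])

rng = np.random.default_rng(1)
data = rng.standard_normal((100, 32))            # reduced-dimension features
table = E2LSHTable(dim=32)
for i, v in enumerate(data):
    table.insert(i, v)
print(0 in table.query(data[0]))                 # → True (same bucket)
```

Querying returns only the candidates in one bucket, which is what makes the nearest-neighbor search sublinear compared with a full scan.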
    E-government recommendation algorithm combining community and association sequence mining
    HUANG Yakun, WANG Yang, WANG Mingxing
    2017, 37(9):  2671-2677.  DOI: 10.11772/j.issn.1001-9081.2017.09.2671
    Abstract ( )   PDF (1147KB) ( )  
    References | Related Articles | Metrics
    Personalized recommendation, as an effective means of information filtering, has been successfully applied in e-commerce, music, film and other fields. Most studies have focused on recommendation accuracy, paid insufficient attention to the diversity of recommendation results, and neglected the process characteristics of recommended items in application areas such as "Internet of Things plus E-government". Aiming at these problems, an e-government recommendation algorithm Combining User Community and Associated Sequence mining (CAS-UC) was proposed to recommend the items most associated with users. Firstly, the static basic attributes and dynamic behavior attributes of users and items were modeled separately. Secondly, user communities were discovered based on users' historical records and attribute similarity, and the user set most similar to the target user was pre-filtered to improve the diversity of the recommendation results and reduce the computation of the core recommendation process. Finally, the associated sequence mining of items took full account of the business characteristics of e-government, and item sequence mining with a time dimension was added to further improve the accuracy of the recommendation results. Simulation experiments were carried out on the Spark platform of ewoho.com in Wuhu with desensitized user information. The experimental results show that CAS-UC is suitable for recommending items with sequence or process characteristics, and achieves higher recommendation accuracy than traditional recommendation algorithms such as collaborative filtering, matrix factorization and semantic-similarity-based recommendation; the multi-community attribution factor of users also increases the diversity of the recommendation results.
    Interval-value attribute reduction algorithm for meteorological observation data based on genetic algorithm
    ZHENG Zhongren, CHENG Yong, WANG Jun, ZHONG Shuiming, XU Liya
    2017, 37(9):  2678-2683.  DOI: 10.11772/j.issn.1001-9081.2017.09.2678
    Abstract ( )   PDF (1007KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems that meteorological observation data are acquired with weak pertinence and high redundancy, and that the large number of single values in observation intervals leads to low precision of equivalence partitioning, an attribute reduction algorithm for Meteorological Observation Interval-valued data based on Genetic Algorithm (MOIvGA) was proposed. Firstly, by improving the interval-value similarity degree, the proposed algorithm was made suitable for both single-value equivalence relation judgment and interval-value similarity analysis. Secondly, the convergence of the algorithm was improved by an improved adaptive genetic algorithm. Finally, the simulation experiments show that the number of iterations of the proposed algorithm in solving the optimal value is 22 fewer than that of the AGAv (Adaptive Genetic Attribute reduction) algorithm. In 1-hour precipitation classification, the average classification accuracy of the MOIvGA algorithm is 6.3% higher than that of the RIvD (λ-Reduction in Interval-valued decision table based on Dependence) algorithm, and the accuracy of no-rain forecasting is increased by 7.13%; at the same time, the attribute subset obtained by the MOIvGA algorithm can significantly improve the classification accuracy. Therefore, the MOIvGA algorithm can increase both the convergence rate and the classification accuracy in the analysis of interval-valued meteorological observation data.
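The genetic search over attribute subsets can be sketched with a bitstring GA whose fitness rewards label-pure equivalence classes and penalizes subset size (an illustrative rough-set-style sketch on a toy single-valued table; the paper's interval-value similarity and adaptive operators are not reproduced, and all parameter values are assumptions):

```python
from collections import defaultdict
import numpy as np

def dependency(data, labels, mask):
    """Fraction of objects whose equivalence class under the selected
    attributes is pure in the decision label."""
    if not mask.any():
        return 0.0
    groups = defaultdict(list)
    for row, y in zip(data[:, mask], labels):
        groups[tuple(row)].append(y)
    pure = sum(len(g) for g in groups.values() if len(set(g)) == 1)
    return pure / len(labels)

def ga_reduce(data, labels, pop=20, gens=40, pm=0.1, seed=0):
    """Simple GA over attribute masks: keep dependency high, subset small."""
    rng = np.random.default_rng(seed)
    n_attr = data.shape[1]
    fitness = lambda m: dependency(data, labels, m) - 0.01 * m.sum()
    popu = rng.random((pop, n_attr)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m) for m in popu])
        parents = popu[np.argsort(scores)[-pop // 2:]]   # truncation selection
        children = []
        while len(children) < pop:
            i, j = rng.integers(len(parents), size=2)
            cut = rng.integers(1, n_attr)                # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            child ^= rng.random(n_attr) < pm             # bit-flip mutation
            children.append(child)
        popu = np.array(children)
    return max(popu, key=fitness)

# Toy table: the label depends only on attribute 0; attrs 1-3 are noise
rng = np.random.default_rng(2)
data = rng.integers(0, 3, size=(60, 4))
labels = data[:, 0]
best = ga_reduce(data, labels)
print(dependency(data, labels, best))  # 1.0 once attribute 0 is selected
```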
    Non-temporal attribute correlation model to generate table data realistically
    ZHANG Rui, XIAO Ruliang, NI Youcong, DU Xin
    2017, 37(9):  2684-2688.  DOI: 10.11772/j.issn.1001-9081.2017.09.2684
    Abstract ( )   PDF (795KB) ( )  
    References | Related Articles | Metrics
    To address the difficulty of preserving attribute correlation when simulating table data, an H model was proposed for describing non-temporal attribute correlation in table data. Firstly, the key attributes of the evaluation subject and the evaluated subject were extracted from the data set, and four relationships of the key attributes were obtained by twofold frequency statistics. Then, the Maximum Information Coefficient (MIC) of each relationship was calculated to evaluate its correlation, and each relationship was fitted by a Stretched Exponential (SE) distribution. Finally, the data scales of the evaluation subject and the evaluated subject were set; according to the fitting results, the activity of the evaluation subject and the popularity of the evaluated subject were calculated, and the H model was obtained through the association established by the equal sums of activity and popularity. The experimental results show that the H model can effectively describe the correlation characteristics of non-temporal attributes in real data sets.
    Prediction of rainfall based on improved Adaboost-BP model
    WANG Jun, FEI Kai, CHENG Yong
    2017, 37(9):  2689-2693.  DOI: 10.11772/j.issn.1001-9081.2017.09.2689
    Abstract ( )   PDF (833KB) ( )  
    References | Related Articles | Metrics
    Aiming at the low generalization ability and insufficient precision of current classification algorithms, a combined classification model integrating the Adaboost algorithm and Back-Propagation (BP) neural networks was proposed. Multiple neural network weak classifiers were constructed, weighted, and linearly combined into a strong classifier. The improved Adaboost algorithm optimizes the normalization factor: the sample weight update strategy was adjusted during the boosting process to minimize the normalization factor, ensuring that the upper bound of the error estimate is reduced as the number of weak classifiers increases, thereby improving the generalization ability and classification accuracy of the final integrated strong classifier. Daily precipitation data of 6 sites in Jiangsu province were selected as the experimental data, and 7 precipitation grade models were established. Among the many factors influencing rainfall, 12 attributes strongly correlated with precipitation were selected as forecasting factors. The results show that the improved Adaboost-BP combined model has better performance, especially for site 58259, where the overall classification accuracy reaches 81%. Among the 7 grades, the prediction accuracy for class-0 rainfall is the best, and the accuracy for the other rainfall grades is also improved. Both the theoretical derivation and the experimental results show that the improvement raises the prediction accuracy.
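The normalization factor targeted by the improvement is the standard AdaBoost quantity Z_t = Σᵢ wᵢ exp(−α yᵢ h(xᵢ)), whose product over rounds upper-bounds the training error. A sketch of one classical (unimproved) boosting round, shown to make the role of Z_t concrete:

```python
import numpy as np

def adaboost_round(w, y, pred):
    """One AdaBoost round: compute alpha from the weighted error and
    return (alpha, updated weights, normalization factor Z_t).
    The training-error bound is the product of the Z_t over rounds."""
    eps = np.sum(w[pred != y])                 # weighted error
    alpha = 0.5 * np.log((1 - eps) / eps)      # weak classifier weight
    unnorm = w * np.exp(-alpha * y * pred)     # y, pred in {-1, +1}
    Z = unnorm.sum()                           # normalization factor Z_t
    return alpha, unnorm / Z, Z

y = np.array([1, 1, -1, -1])
pred = np.array([1, -1, -1, -1])               # one mistake, eps = 0.25
w = np.full(4, 0.25)
alpha, w2, Z = adaboost_round(w, y, pred)
print(round(Z, 4))  # → 0.866, i.e. Z_t = 2*sqrt(eps*(1-eps))
```

Minimizing Z_t at each round (here fixed at 2√(ε(1−ε)) by the classical α) is exactly the lever the improved weight update strategy pulls on.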
    Lip motion recognition of speaker based on SIFT
    MA Xinjun, WU Chenchen, ZHONG Qianyuan, LI Yuanyuan
    2017, 37(9):  2694-2699.  DOI: 10.11772/j.issn.1001-9081.2017.09.2694
    Abstract ( )   PDF (914KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems that lip features are of excessively high dimension and sensitive to scale, a speaker authentication technique based on the Scale-Invariant Feature Transform (SIFT) algorithm was proposed. Firstly, a simple video frame alignment algorithm was proposed to adjust lip videos to the same length and extract representative lip motion images. Then, a new algorithm based on SIFT key points was proposed to extract texture and motion features, which, after fusion by Principal Component Analysis (PCA), yielded typical lip motion features for authentication. Finally, a simple classification algorithm was presented for the obtained features. The experimental results show that, compared with the common Local Binary Pattern (LBP) feature and the Histogram of Oriented Gradients (HOG) feature, the proposed feature extraction algorithm achieves better False Acceptance Rate (FAR) and False Rejection Rate (FRR), which proves that the whole speaker lip motion recognition algorithm is effective and can achieve the desired results.
    Dynamic gesture recognition method based on EMG and ACC signal
    XIE Xiaoyu, LIU Zhejie
    2017, 37(9):  2700-2704.  DOI: 10.11772/j.issn.1001-9081.2017.09.2700
    Abstract ( )   PDF (823KB) ( )  
    References | Related Articles | Metrics
    To enhance the diversity and simplicity of hand gesture recognition, an approach based on ElectroMyoGraphy (EMG) and ACCeleration (ACC) signals was proposed to recognize dynamic gestures. Firstly, gesture-related information was collected by MYO sensors. Then, the dimensionality of the ACC signal was reduced and the EMG signal was preprocessed. Finally, to reduce the number of training samples, postures based on the ACC signal were recognized by using Collaborative Sparse Representation (CSR), while gestures based on the EMG signal were classified by using the Dynamic Time Warping (DTW) algorithm and the K-Nearest Neighbor (KNN) classifier. When identifying the ACC signal with CSR, the optimal number of samples and the reduced dimensionality were studied to lower the complexity of gesture recognition. The experimental results show that the average recognition accuracy reaches 99.17% for hand gestures based on the EMG signal and 96.88% for four postures based on the ACC signal, and the recognition accuracy for the 12 dynamic gestures reaches 96.11%. The proposed method thus achieves high recognition accuracy and fast computation for dynamic gestures.
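The DTW + KNN classification step can be sketched on toy 1-D sequences (a minimal sketch: real EMG features are multi-channel, and the template sequences and labels below are assumed examples):

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw(query, templates, labels, k=1):
    """Classify a sequence by majority vote of its k nearest
    templates under DTW distance."""
    ranked = sorted(zip((dtw(query, t) for t in templates), labels))
    votes = [lab for _, lab in ranked[:k]]
    return max(set(votes), key=votes.count)

templates = [[0, 1, 2, 3], [3, 2, 1, 0]]
labels = ["rise", "fall"]
print(knn_dtw([0, 0, 1, 2, 2, 3], templates, labels))  # → rise
```

DTW aligns sequences of different lengths, which is why the stretched query still matches the shorter "rise" template with zero cost.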
    Tattoo image detection algorithm based on three-channel convolutional neural network
    XU Qingyong, JIANG Shunliang, XU Shaoping, GE Yun, TANG Yiling
    2017, 37(9):  2705-2711.  DOI: 10.11772/j.issn.1001-9081.2017.09.2705
    Abstract ( )   PDF (1176KB) ( )  
    References | Related Articles | Metrics
    According to the characteristics of tattoo images and the insufficient feature extraction ability of the fully connected layer of the Convolutional Neural Network (CNN), a tattoo image detection algorithm based on a three-channel CNN was proposed, with improvements in three aspects. Firstly, the image preprocessing scheme was improved for the characteristics of tattoo images. Secondly, a CNN with a three-channel fully connected layer was designed to extract and index the features, which effectively enhanced the extraction of spatial information at different scales and realized efficient detection of tattoo images. Finally, the generalization ability of the algorithm was verified on two data sets. The experimental results on the NIST data set show that the proposed preprocessing scheme improves the total correct rate by 0.17 percentage points and the correct rate for tattoo images by 0.29 percentage points compared with the Alex scheme. Under the proposed preprocessing scheme, the algorithm shows obvious advantages on the standard NIST tattoo image set: its correct rate reaches 99.1%, higher than both 96.3%, the best value published by NIST, and 98.8%, obtained by a traditional CNN algorithm. A performance improvement is also observed on the Flickr data set.
    Air target threat assessment based on improved ACPSO algorithm and LSSVM
    XU Lingkai, YANG Rennong, ZUO Jialiang
    2017, 37(9):  2712-2716.  DOI: 10.11772/j.issn.1001-9081.2017.09.2712
    Abstract ( )   PDF (903KB) ( )  
    References | Related Articles | Metrics
    A key link in an air defense command and control system is to evaluate the threat degree of air targets according to target situation information, and the accuracy of this assessment has a significant impact on air defense operations. Aiming at the shortcomings of traditional evaluation methods, such as poor real-time performance, heavy workload, low evaluation accuracy, and inability to evaluate multiple targets simultaneously, an air target threat assessment method based on Adaptive Crossbreeding Particle Swarm Optimization (ACPSO) and the Least Squares Support Vector Machine (LSSVM) was proposed. Firstly, according to the air target situation information, the framework of the threat assessment system was constructed. Then, the ACPSO algorithm was used to optimize the regularization parameter and kernel function parameter of the LSSVM; to overcome the disadvantages of the traditional crossbreeding mechanism, an improved crossbreeding mechanism with adaptively adjusted crossbreeding probability was proposed. Finally, the training and evaluation results of the systems were compared and analyzed, and real-time dynamic threat assessment of multiple targets was realized by the optimized system. Simulation results show that the proposed method offers high accuracy and short evaluation time, and can evaluate multiple targets simultaneously, providing an effective solution for air target threat assessment.
    Optimal control of chilled water system in central air-conditioning based on artificial immune and particle swarm optimization algorithm
    CHEN Dapeng, ZHANG Jiugen, LIANG Xing
    2017, 37(9):  2717-2721.  DOI: 10.11772/j.issn.1001-9081.2017.09.2717
    Abstract ( )   PDF (775KB) ( )  
    References | Related Articles | Metrics
    To reduce the running energy consumption of central air conditioners and to stabilize and control the return temperature of chilled water effectively, an optimal control method for return water temperature was proposed, in which the actual load demand is judged from the deviation between the measured return water temperature and its set value. Firstly, the inertia weight of the Particle Swarm Optimization (PSO) algorithm was made to decline exponentially, so that the particle update speed matched each stage of the optimization process. Then, to cope with uncertain disturbances of the model parameters, ideas from the Artificial Immune (AI) algorithm were introduced into PSO to form the AI-PSO algorithm, which expands particle diversity and enhances the particles' ability to escape local optima. Finally, the three parameters of a Proportional-Integral-Derivative (PID) controller were optimized with the AI-PSO algorithm, and the frequency of the chilled water pump was adjusted through this controller to keep the return water temperature stable near the set value. The experimental results show that the proposed strategy can reduce the operating frequency of the chilled water pump more effectively while meeting the indoor load demand, with better energy saving and control quality.
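The exponentially declining inertia weight can be sketched as follows, with a toy sphere fitness standing in for the PID control cost (the decay constant, acceleration coefficients and velocity clamp are assumptions, and the immune operators of AI-PSO are not reproduced):

```python
import numpy as np

def pso(fitness, dim, n=20, iters=150, w0=0.9, w_end=0.4, seed=0):
    """Particle swarm optimization with an exponentially declining
    inertia weight: w(t) = w_end + (w0 - w_end) * exp(-t / tau)."""
    rng = np.random.default_rng(seed)
    tau = iters / 3.0
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([fitness(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        w = w_end + (w0 - w_end) * np.exp(-t / tau)  # decaying inertia
        r1, r2 = rng.random((2, n, dim))
        v = np.clip(w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x),
                    -1, 1)                           # clamped velocities
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy sphere fitness; in the paper the fitness would be the control
# cost of a candidate (Kp, Ki, Kd) triple on the chilled-water loop.
g, best = pso(lambda p: float(np.sum(p ** 2)), dim=3)
print(best)
```

Early on w is near w0 and particles explore; as w decays toward w_end the swarm shifts to fine local search, matching the stage-dependent update speed described above.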
    Estimation method for RFID tags based on rough and fine double estimation
    DING Jianli, HAN Yuchao, WANG Jialiang
    2017, 37(9):  2722-2727.  DOI: 10.11772/j.issn.1001-9081.2017.09.2722
    Abstract ( )   PDF (1041KB) ( )  
    References | Related Articles | Metrics
    To resolve the contradiction between estimation accuracy and computational cost in RFID tag estimation, as well as the performance instability caused by the randomness of the tag reading process in aviation logistics IoT information gathering, a method for estimating the number of RFID tags based on rough and fine double estimation was proposed, following the idea of complementary advantages. By modeling and analyzing the tag reading process of the framed ALOHA algorithm, a mathematical model between the average number of tags in a collision slot and the proportion of collision slots was established. A rough estimate of the tag number was made based on this model, and its reliability was then evaluated according to the estimated value. A Maximum A Posteriori (MAP) estimation algorithm using the rough estimate as prior knowledge was then applied to improve the estimation accuracy; compared with the original MAP estimation algorithm, the search range can be reduced by up to 90%. The simulation results show that the average error of the proposed rough-and-fine RFID tag number estimation is 3.8%, the stability of the estimation is significantly improved, and the computational complexity is greatly reduced. The proposed algorithm can be effectively applied to the information collection process of aviation logistics networking.
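The rough-estimation model can be sketched from the framed-ALOHA slot probabilities: for n tags and frame size L, a slot is empty with probability (1−1/L)ⁿ and a singleton with probability (n/L)(1−1/L)ⁿ⁻¹, so the expected number of collision slots follows, and the rough estimate inverts this relation (frame size and observed collision count below are assumed example values, not figures from the paper):

```python
def collision_prob(n, L):
    """Probability that a slot in a frame of size L is a collision slot
    when each of n tags picks a slot uniformly at random."""
    p_empty = (1 - 1 / L) ** n
    p_single = (n / L) * (1 - 1 / L) ** (n - 1)
    return 1 - p_empty - p_single

def rough_estimate(c_obs, L, n_max=1000):
    """Rough tag-count estimate: the n whose expected collision-slot
    count L * collision_prob(n, L) is closest to the observed count."""
    return min(range(1, n_max + 1),
               key=lambda n: abs(L * collision_prob(n, L) - c_obs))

L_frame = 128      # frame size (example value)
c_observed = 40    # observed collision slots (example value)
n_hat = rough_estimate(c_observed, L_frame)
print(n_hat)
```

A fine MAP search would then only scan a narrow window around n_hat instead of the full range, which is where the reported reduction of the search space comes from.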
    Target detection method based on beamforming output DC response of sub-covariance matrix
    GUO Jian
    2017, 37(9):  2728-2734.  DOI: 10.11772/j.issn.1001-9081.2017.09.2728
    Abstract ( )   PDF (1027KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problem that strong and weak targets cannot be detected simultaneously in the same single frequency band, a target detection method based on weighting the beamforming output DC responses of sub-covariance matrices was proposed, exploiting the differences among these responses. Firstly, eigen-analysis was used to decompose the covariance matrix of the data received by a linear array. Secondly, the sub-covariance matrix corresponding to each eigenvector was obtained by conjugate multiplication, each sub-matrix was beamformed, and the differences among the beamforming output DC responses of the sub-matrices were used to form weighting factors. Finally, the weighting factors were applied to the beamforming outputs of the sub-matrices to obtain the final result, raising the proportion of the weak-target sub-matrix beamforming output in the final result so that unknown targets could be effectively detected in the same single frequency band. Theoretical analysis, numerical simulation and measured data processing show that, under the simulation conditions, the proposed method increases the proportion of the weak-target sub-matrix beamforming output in the final result from 0.09% to 45.36% compared with conventional beamforming and the conventional subspace reconstruction method, which reduces the influence of background noise and strong targets on unknown target detection, reduces the difference in output DC response between targets, and improves the detection performance of unknown targets in the same single frequency band.
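The sub-covariance decomposition can be sketched for a uniform linear array: the covariance matrix is exactly the sum of its rank-one eigen sub-matrices λₖuₖuₖᴴ, and beamforming a weak-target sub-matrix exposes a target that conventional beamforming on the full matrix misses (a simplified simulation with assumed array and target parameters; the paper's DC-response weighting rule is not reproduced):

```python
import numpy as np

M = 16
angles = np.arange(-90, 91)

def steer(theta_deg):
    """Unit-norm steering vector of a half-wavelength uniform linear array."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M)) / np.sqrt(M)

def beam_spectrum(Rx):
    """Conventional beamformer output power over the angle grid."""
    return np.array([np.real(steer(a).conj() @ Rx @ steer(a)) for a in angles])

# Strong target at -20 deg (0 dB) and weak target at +30 deg (-30 dB)
rng = np.random.default_rng(0)
T = 200
s = (steer(-20)[:, None] * rng.standard_normal(T)
     + 10 ** (-30 / 20) * steer(30)[:, None] * rng.standard_normal(T)
     + 1e-3 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))))
R = s @ s.conj().T / T

lam, U = np.linalg.eigh(R)            # eigenvalues in ascending order
sub = lambda k: lam[k] * np.outer(U[:, k], U[:, k].conj())
print(np.allclose(sum(sub(k) for k in range(M)), R))   # → True
print(angles[np.argmax(beam_spectrum(R))])             # strong target angle
print(angles[np.argmax(beam_spectrum(sub(M - 2)))])    # weak target angle
```

The full-matrix spectrum peaks at the strong target, while the sub-matrix of the second-largest eigenvalue peaks near the weak one, which is the separation the DC-response weighting exploits.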
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn