Table of Contents

    10 July 2017, Volume 37 Issue 7
    Minimum access guaranteed bandwidth allocation mechanism in data center network
    CAI Yueping, ZHANG Wenpeng, LUO Sen
    2017, 37(7):  1825-1829.  DOI: 10.11772/j.issn.1001-9081.2017.07.1825
    In Data Center Network (DCN), multiple tenants may interfere with each other, which leads to unpredictable application performance. Reserving bandwidth resources can hardly guarantee high network utilization, which may result in revenue loss for cloud providers. To address the above problems, a Minimum Access Guaranteed Bandwidth Allocation (MAGBA) mechanism for data center networks was proposed. To provide the minimum bandwidth guarantee and make full use of idle bandwidth for tenants, MAGBA scheduled the traffic of each Virtual Machine (VM) through Weighted Fair Queuing at the sending side and adjusted the receiving window of TCP flows at the receiving side. In simulations on the NS2 (Network Simulator version 2) platform, compared with the static resource reservation method, the MAGBA mechanism was more flexible in bandwidth allocation and improved network throughput by 25%. When some tenants sent many TCP flows, the other tenants under MAGBA obtained higher bandwidth than under the existing bandwidth allocation mechanism based on TCP flows. The simulation results show that the MAGBA mechanism can provide minimum access guaranteed bandwidth for VMs and avoid interference from other tenants.
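The "minimum guarantee plus work-conserving sharing" idea described above can be sketched as a simple allocation model (an illustrative sketch only, not the paper's MAGBA implementation; the `allocate_bandwidth` function and its parameters are assumptions):

```python
def allocate_bandwidth(capacity, demands, guarantees, weights):
    """Give each VM min(demand, guarantee), then share leftover link
    capacity among still-unsatisfied VMs in proportion to their weights
    (water-filling), so idle guaranteed bandwidth is not wasted."""
    alloc = [min(d, g) for d, g in zip(demands, guarantees)]
    spare = capacity - sum(alloc)
    active = [i for i in range(len(demands)) if demands[i] > alloc[i]]
    while spare > 1e-9 and active:
        w_sum = sum(weights[i] for i in active)
        share = {i: spare * weights[i] / w_sum for i in active}
        spare = 0.0
        nxt = []
        for i in active:
            room = demands[i] - alloc[i]
            take = min(room, share[i])
            alloc[i] += take
            spare += share[i] - take      # capacity a satisfied VM returns
            if demands[i] > alloc[i] + 1e-9:
                nxt.append(i)
        active = nxt
    return alloc
```

With capacity 10, demands [8, 2, 1] and guarantees of 3 each, the light tenants keep their full demand while the heavy tenant absorbs the spare capacity.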
    Joint optimization of admission control and power beamforming algorithm in cognitive radio network
    ZHU Jiang, DU Qingmin, BA Shaowei
    2017, 37(7):  1830-1836.  DOI: 10.11772/j.issn.1001-9081.2017.07.1830
    In cognitive radio networks, for the robust joint optimization problem of multiuser admission control and power beamforming, a joint optimization scheme based on smooth approximation by an entropy function was proposed. Firstly, the two optimization problems of admission control and transmit power beamforming were converted into a joint optimization problem by L0-norm minimization. Secondly, smooth approximation based on the entropy function was used to handle the non-convexity and discontinuity of the L0-norm. Finally, since the objective function was a smooth, differentiable and unimodal function, the problem was transformed into a Lagrange function, and the Armijo gradient descent method was used to obtain the optimal solution. The numerical results show that with the proposed algorithm, the number of admitted users is not significantly increased when the Signal-to-Interference-plus-Noise Ratio (SINR) is relatively low, but the transmission power consumption is decreased and the number of admitted users is increased when the SINR is relatively high. The uncertainty of the Channel State Information (CSI) in the model is analyzed, which makes the network better adapt to changes in the environment and improves the reliability of the network. The proposed algorithm can effectively realize the optimal allocation of network resources and improve network performance.
    Blind period estimation of PN sequence for multipath tamed spread spectrum signal
    YANG Qiang, ZHANG Tianqi, ZHAO Liang
    2017, 37(7):  1837-1842.  DOI: 10.11772/j.issn.1001-9081.2017.07.1837
    To estimate the pseudo code period of a multipath tamed spread spectrum signal, a blind method based on power spectrum reprocessing was proposed. Firstly, the general single-path tamed spread spectrum signal model was extended to a multipath model. Then, the primary power spectrum of the signal was calculated on the basis of the tamed spread spectrum signal model in a multipath environment. Next, the obtained primary power spectrum was used as the input signal to calculate the secondary power spectrum, and theoretical analyses showed that the peak lines of the secondary power spectrum appear at integer multiples of the pseudo code period. Finally, the pseudo code period of the tamed spread spectrum signal could be estimated by detecting the spacing between the peak spectrum lines. In comparison experiments with the time domain correlation method, the required Signal-to-Noise Ratio (SNR) of the proposed method was improved by about 1 dB and 2 dB for a 100% correct rate of period estimation when the pseudo code sequence length was 127 bits and 255 bits respectively, and the average number of accumulations of the proposed method was smaller under the same conditions. The experimental results show that the proposed method not only has lower computational complexity, but also improves the estimation correct rate.
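The secondary-power-spectrum idea can be illustrated on a synthetic periodic sequence (a toy sketch assuming an ideal noiseless single-path signal, not the paper's multipath model; the sequence length and sample counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
period, reps = 64, 16                      # PN period in samples
pn = rng.choice([-1.0, 1.0], size=period)  # stand-in for a PN chip sequence
x = np.tile(pn, reps)                      # periodic "spread" signal
N = x.size

# Primary power spectrum: a line comb spaced N/period bins apart.
s1 = np.abs(np.fft.fft(x)) ** 2
# Secondary power spectrum of that comb: peaks at integer multiples
# of the period, so peak spacing reveals the pseudo code period.
s2 = np.abs(np.fft.fft(s1 - s1.mean())) ** 2
est = int(np.argmax(s2[8:N // 2])) + 8     # skip near-DC bins
```

`est` lands on a multiple of the true period; the fundamental spacing between successive peak lines gives the period itself.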
    Non-cooperative indoor human motion detection based on channel state information
    SHI Bai, ZHUANG Jie, PANG Hong
    2017, 37(7):  1843-1848.  DOI: 10.11772/j.issn.1001-9081.2017.07.1843
    Concerning the shortcomings of using cameras and sensors to detect human motion, such as difficult deployment, expensive devices and blind zones, a method of human motion detection using Wireless Fidelity (WiFi) signals was proposed. Firstly, a wireless network card was used to receive the Channel State Information (CSI) of the WiFi signal in the monitored environment. Secondly, the Local Outlier Factor (LOF) algorithm and the Hampel filter were used to remove abnormal CSI data. After the frequency shift caused by the rough synchronization of the network card clock was removed by a linear regression algorithm, Principal Component Analysis (PCA) was used to reduce the dimensionality and the Naive Bayes algorithm was used to classify the CSI data in different cases, which generated a model for judging human movement states. Finally, the model was used to judge the state of human motion. In the experiments, the proposed method could quickly determine the state of human motion and achieved a correct rate of 95.62%. The experimental results show that the proposed method can effectively detect and identify human motion.
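The Hampel-filter cleaning step in the pipeline above can be sketched as follows (a generic sliding-window Hampel filter; window half-width and threshold are assumptions, and the LOF stage is omitted):

```python
import numpy as np

def hampel_filter(x, k=3, t=3.0):
    """Replace samples farther than t scaled-MADs from the local median
    with that median (sliding window of half-width k), a standard way to
    strip impulsive outliers from a CSI amplitude series."""
    x = np.asarray(x, dtype=float).copy()
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        window = x[lo:hi]
        med = np.median(window)
        mad = 1.4826 * np.median(np.abs(window - med))  # robust sigma
        if mad > 0 and abs(x[i] - med) > t * mad:
            x[i] = med
    return x
```

A single spike in an otherwise smooth series is snapped back to the local median while ordinary samples pass through unchanged.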
    Node coverage optimization algorithm in directional heterogeneous wireless sensor network
    XU Zhongming, TAN Li, YANG Chaoyu, TANG Xiaojiang
    2017, 37(7):  1849-1854.  DOI: 10.11772/j.issn.1001-9081.2017.07.1849
    Concerning coverage holes and uneven local deployment, a Directional and Heterogeneous Precision Self-deployment Algorithm (DHPSA) was proposed. The autonomous deployment process was divided into two stages. Firstly, a node moved towards the target path by choosing the optimal route in real time under the virtual forces of its neighbor nodes and the specified path. Then, through autonomous rotation and movement, the location of the node was finely tuned under the joint virtual force of its neighbor nodes, finally realizing accurate coverage of the target path. The contrast experiments show that, compared with the VFPSA (Virtual Force-based Precision Self-deployment Algorithm), the coverage rate of the proposed algorithm is increased by about 4.4 percent, the overlap rate is decreased by about 3.4 percent, the moving distance is reduced by about 2.1 percent and the deployment time is reduced by about 4.3 percent. The simulation results show that the proposed deployment algorithm can effectively increase the coverage rate, decrease the overlap rate and reduce energy consumption.
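The virtual-force idea driving the first stage can be sketched as a resultant force on one node (an illustrative attraction/repulsion model with a single distance threshold; the weights, threshold and force law are assumptions, not DHPSA's exact formulation):

```python
import math

def virtual_force(p, neighbors, d_th, w_a=1.0, w_r=1.0):
    """Resultant virtual force on node p: neighbors farther than d_th
    attract it, closer ones repel it, with magnitude growing with the
    distance imbalance. The node would move a small step along (fx, fy)."""
    fx = fy = 0.0
    for q in neighbors:
        dx, dy = q[0] - p[0], q[1] - p[1]
        d = math.hypot(dx, dy)
        if d == 0:
            continue
        if d > d_th:                 # attraction toward a distant neighbor
            mag = w_a * (d - d_th)
        else:                        # repulsion from a crowded neighbor
            mag = -w_r * (d_th - d)
        fx += mag * dx / d
        fy += mag * dy / d
    return fx, fy
```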
    Energy-balanced routing algorithm for inter-community in mobile sensor network
    GAO Qiutian, YANG Wenzhong, ZHANG Zhenyu, SHI Yan, LI Shuangshuang
    2017, 37(7):  1855-1860.  DOI: 10.11772/j.issn.1001-9081.2017.07.1855
    Energy-efficient routing is a challenging problem in resource-constrained Mobile Wireless Sensor Networks (MWSN). Focusing on the issue that the inter-community routing in mobile sensor networks consumes energy too quickly, an Energy-balanced Routing Algorithm for Inter-community (ERAI) was proposed. In ERAI, a new routing metric, FC (Forwarding Capacity), based on the residual energy of nodes and the probability of encounter, was designed. Then, the FC metric and the directional information of encountered nodes were used to select a relay node to forward messages. The experimental data show that the death time of the first node under ERAI was 12.6%-15.6% and 4.5%-8.3% later than that under Epidemic and PROPHET respectively, and the mean square deviation of residual energy under ERAI was less than that under Epidemic and PROPHET. The experimental results show that ERAI can balance the energy consumption of the nodes to a certain extent, and thus prolongs the network lifetime.
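The relay-selection step can be sketched with a placeholder FC metric (the paper does not give its exact formula here, so the weighted blend below and all field names are hypothetical):

```python
def forwarding_capacity(residual_energy, encounter_prob, alpha=0.5):
    """Hypothetical FC metric: a weighted blend of normalized residual
    energy and encounter probability. The paper's exact form may differ."""
    return alpha * residual_energy + (1 - alpha) * encounter_prob

def pick_relay(candidates):
    """Choose the neighbor with the highest FC as the relay node."""
    return max(candidates,
               key=lambda c: forwarding_capacity(c["energy"], c["p_enc"]))
```

A node with moderate energy but a high encounter probability can out-rank a fully charged node that rarely meets the destination community.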
    Cascading failure model in interdependent network considering global distribution of loads
    DONG Chongjie, CHEN Yuqiang
    2017, 37(7):  1861-1865.  DOI: 10.11772/j.issn.1001-9081.2017.07.1861
    Concerning interdependent networks coupled from different networks, a new model for cascading failures was proposed, which considered the combined effects of traffic load and interdependent edges. In the new model, the roles of interdependent edges and connected edges were considered separately, and a variant-load global distribution principle based on the shortest path length was adopted for load allocation: the additional load assigned to a normal node was inversely proportional to its distance from the failed node. Finally, cascading failures were simulated on interdependent networks coupling the IEEE118 standard grid network, a small-world network and a random network. The simulation results show that when the tolerance coefficient is small, a weaker global load distribution effect gives stronger resistance to failures and a smaller contribution of traffic load to cascading failures, while the IEEE118 coupling network and the small-world coupling network exhibit more failure steps. Meanwhile, when the global load distribution effect is strong, the network is unable to maintain its integrity, and the number of failure steps increases approximately monotonically with the tolerance coefficient.
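The inverse-distance load redistribution rule can be sketched directly (an illustrative version of the allocation principle; the normalization is an assumption):

```python
def redistribute_load(failed_load, distances):
    """Share a failed node's load over the normal nodes with weight
    proportional to 1/d, so closer nodes absorb more of the load.
    distances: {node: shortest-path distance to the failed node}."""
    inv = {n: 1.0 / d for n, d in distances.items()}
    total = sum(inv.values())
    return {n: failed_load * w / total for n, w in inv.items()}
```

The shares always sum to the failed load, so no traffic is created or lost during redistribution.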
    Energy consumption-reducing algorithm based on WiMAX networks
    LIU Wenzhi, LIU Zhaobin
    2017, 37(7):  1866-1872.  DOI: 10.11772/j.issn.1001-9081.2017.07.1866
    To overcome the shortcoming of energy-saving algorithms in WiMAX networks that energy is unnecessarily wasted by idle mobile stations because of channel quality differences, according to energy-saving class Ⅰ of the IEEE 802.16e standard, a formalization of the Average Energy Consumption (AEC) for Mobile Station (MS) Quality of Service (QoS) was given, and an algorithm of Idle-state Avoidance and Virtual Burst (IAVB) based on channel quality balancing was proposed. In this algorithm, a strategy combining an idle-state threshold with choosing the main mobile terminal based on channel quality balancing was adopted, and the termination condition of the virtual burst was refined to avoid energy waste among idle-state nodes when the virtual burst did not end properly. The simulation results show that the energy-saving performance of the IAVB algorithm is 15% higher than that of the Longest Virtual Burst First (LVBF) algorithm. The experimental results show that the proposed algorithm can control the energy consumption of the idle state and improve the resource utilization efficiency of the WiMAX network.
    Efficient and load balanced open shortest path first protocol in electric power communication network
    LI Zhuhong, ZHAO Canming, ZHOU Fang, ZHANG Xinming
    2017, 37(7):  1873-1876.  DOI: 10.11772/j.issn.1001-9081.2017.07.1873
    To solve the traffic load-imbalance problem in electric power communication networks based on the Open Shortest Path First (OSPF) protocol, an efficient Two-Step Optimized OSPF Protocol (TSO-OSPF) algorithm was proposed to balance the intra-area and inter-area traffic of OSPF respectively. Bandwidth utilization and delay were adopted as link weights, both the inward and outward traffic of a router was considered, and overloaded branches were decomposed across multiple routers to minimize the maximum traffic flow, thus solving the traffic-imbalance problem of internal and boundary routers in electric power communication networks. The simulation results show that the TSO-OSPF algorithm can effectively balance the traffic in the network and reduce the packet loss rate by about 10% compared with the OSPF algorithm.
    Frequency offset estimation algorithm of time domain synchronous orthogonal frequency division multiplexing based on correlations of cyclic pseudo-random noise
    LI Yangguang, BAO Jianrong, JIANG Bin, LIU Chao
    2017, 37(7):  1877-1882.  DOI: 10.11772/j.issn.1001-9081.2017.07.1877
    Concerning the high complexity of traditional frequency offset estimation algorithms, a new low-complexity frequency offset estimation algorithm of Time Domain Synchronous Orthogonal Frequency Division Multiplexing (TDS-OFDM) for power line communication was proposed. Firstly, the characteristics of the power line network were analyzed, and a frame head was constructed with three equal-length cyclic Pseudo-random Noise (PN) sequences. Secondly, the frame head and frame body were modulated with Binary Phase Shift Keying (BPSK) and Quadrature Amplitude Modulation (QAM) respectively. Finally, compared with the traditional algorithms based on the Cyclic Prefix (CP) or a general PN sequence, only part of the PN sequences entered the correlation calculation, so the number of autocorrelation operations was reduced while better performance could still be guaranteed. The simulation results show that, at a Bit Error Rate (BER) of 10^-4, the improved algorithm has about 5 dB and 1 dB gains compared with the algorithms based on CP and general PN respectively. Moreover, compared with the algorithm with general PN, when the lengths of the inserted sequence and cyclic sequence were 420 and 165, the number of correlations per frame was reduced by 1186. The theoretical analysis and simulation results show that the proposed algorithm can effectively reduce the computational complexity and processing cost while improving the communication rate.
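The core principle of PN-correlation frequency offset estimation can be sketched on a noiseless two-repeat frame head (a textbook illustration, not the paper's three-sequence structure or its reduced-correlation scheme; sample rate and offset values are arbitrary):

```python
import numpy as np

L = 128                                    # length of one PN segment
fs = 1.0e4                                 # sample rate (Hz), assumed
f_off = 37.0                               # true carrier frequency offset (Hz)
rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=L)
base = np.concatenate([pn, pn])            # two identical PN repeats in the head
n = np.arange(2 * L)
r = base * np.exp(2j * np.pi * f_off * n / fs)  # received head with CFO

# A CFO rotates the second repeat by a constant phase relative to the
# first, so the angle of their correlation reveals the offset.
corr = np.vdot(r[:L], r[L:])               # sum over conj(r1) * r2
f_est = np.angle(corr) * fs / (2 * np.pi * L)
```

The estimator is unambiguous only for |f_off| < fs / (2L); larger offsets wrap around, which is one reason practical schemes combine several sequences.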
    Analysis of factors affecting efficiency of data distributed parallel application in cloud environment
    MA Shengjun, CHEN Wanghu, YU Maoyi, LI Jinrong, JIA Wenbo
    2017, 37(7):  1883-1887.  DOI: 10.11772/j.issn.1001-9081.2017.07.1883
    Data distributed parallel applications like MapReduce are widely used. Focusing on issues such as the low execution efficiency and high cost of such applications, a case analysis of Hadoop was given. Firstly, based on analyses of the execution processes of such applications, it was found that the data volume and the numbers of nodes and tasks were the main factors affecting their execution efficiency. Secondly, the impacts of these factors on the execution efficiency of an application were explored. Finally, based on a set of experiments, two important novel rules were derived. First, given a specific volume of data, the execution efficiency of a data distributed parallel application cannot be improved remarkably only by increasing the number of nodes, while the execution cost rises instead. Second, when the number of tasks is nearly equal to the number of nodes, higher efficiency and lower cost can be achieved for such an application. The conclusions are useful for users to optimize their data distributed parallel applications and to estimate the necessary computing resources to be rented in a cloud environment.
    Task scheduling algorithm for cloud computing based on multi-scale quantum harmonic oscillator algorithm
    HAN Hu, WANG Peng, CHENG Kun, LI Bo
    2017, 37(7):  1888-1892.  DOI: 10.11772/j.issn.1001-9081.2017.07.1888
    Reasonable virtual machine allocation and efficient task scheduling are key problems in cloud computing. In order to make better use of virtual machines and let the system meet service requests efficiently, a task scheduling algorithm based on the Multi-scale Quantum Harmonic Oscillator Algorithm (MQHOA) was proposed. Firstly, each scheduling scheme was regarded as a sampling position, and the randomness of Gaussian sampling was used to search for the local optimal solution at the current scale. Then, whether the energy level was stable was judged; if so, the algorithm entered the descent process and the worst scheduling scheme was replaced. Finally, when the algorithm entered the scale reduction process, it transitioned from global search to local search, eventually terminating and delivering the optimal result after several iterations. The simulation results on the CloudSim platform show that the makespan of task scheduling of MQHOA decreased by more than 10% and the degree of imbalance fell by more than 0.4 in comparison with the First Come First Served (FCFS) algorithm and the Particle Swarm Optimization (PSO) algorithm. The experimental results show that the proposed algorithm has a fast convergence rate and good global convergence and adaptability. The task scheduling algorithm based on MQHOA can reduce the makespan of task scheduling and maintain the load balance of virtual machines in cloud computing.
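The global-to-local search behaviour can be sketched with a toy scheduler whose mutation "scale" shrinks over time (a simplified MQHOA-flavoured search, not the paper's algorithm: it omits energy-level tests and multi-sample populations, and all parameters are assumptions):

```python
import random

def makespan(assign, task_len, vm_speed):
    """Completion time of the most loaded VM under a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, v in enumerate(assign):
        load[v] += task_len[t] / vm_speed[v]
    return max(load)

def mqhoa_schedule(task_len, vm_speed, iters=300, seed=0):
    """Random reassignment search whose mutation rate (the 'scale')
    shrinks each iteration, moving from global to local search."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    best = [rng.randrange(m) for _ in range(n)]
    best_ms = makespan(best, task_len, vm_speed)
    scale = 1.0
    for _ in range(iters):
        cand = [v if rng.random() > scale else rng.randrange(m) for v in best]
        ms = makespan(cand, task_len, vm_speed)
        if ms < best_ms:                 # keep the better scheme
            best, best_ms = cand, ms
        scale *= 0.99                    # scale reduction step
    return best, best_ms
```

On four equal tasks and two equal VMs the search settles on the balanced 2+2 split with makespan 8.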
    High efficient construction of location fingerprint database based on matrix completion improved by backtracking search optimization
    LI Lina, LI Wenhao, YOU Hongxiang, WANG Yue
    2017, 37(7):  1893-1899.  DOI: 10.11772/j.issn.1001-9081.2017.07.1893
    To solve the problems existing in the off-line construction method of location fingerprint database for location fingerprint positioning based on Received Signal Strength Indication (RSSI), including large workload of collecting all the fingerprint information in the location, low construction efficiency of the location fingerprint database, and the limited precision of interpolation, a high efficient off-line construction method of the location fingerprint database based on the Singular Value Thresholding (SVT) Matrix Completion (MC) algorithm improved by the Backtracking Search optimization Algorithm (BSA) was proposed. Firstly, using the collected location fingerprint data of some reference nodes, a low-rank matrix completion model was established. Then the model was solved by the low rank MC algorithm based on the SVT. Finally, the complete location fingerprint database could be reconstructed in the location area. At the same time, the BSA was introduced to improve the optimization process of MC algorithm with the minimum kernel norm as the fitness function to solve the problem of the fuzzy optimal solution and the poor smoothness of the traditional MC theory, which could further improve the accuracy of the solution. The experimental results show that the average error between the location fingerprint database constructed by the proposed method and the actual collected location fingerprint database is only 2.7054 dB, and the average positioning error is only 0.0863 m, but nearly 50% of the off-line collection workload can be saved. The above results show that the proposed off-line construction method of the location fingerprint database can effectively reduce the workload of off-line collection stage while ensuring the accuracy, significantly improve the construction efficiency of location fingerprint database, and improve the practicability of fingerprint positioning method to a certain extent.
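The SVT building block of the completion step can be sketched as follows (a plain soft-impute-style loop for illustration; it does not include the paper's BSA improvement, and the threshold and iteration count are assumptions):

```python
import numpy as np

def svt(X, tau):
    """Singular Value Thresholding: shrink every singular value by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def complete(M_obs, mask, tau=0.05, iters=200):
    """Soft-impute style matrix completion: refill the missing entries
    with the current low-rank estimate, then re-apply SVT."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        X = svt(np.where(mask, M_obs, X), tau)
    return X
```

The SVT operator itself is exact and easy to check: thresholding a diagonal matrix shrinks its singular values by exactly tau.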
    Performance optimization of ItemBased recommendation algorithm based on Spark
    LIAO Bin, ZHANG Tao, GUO Binglei, YU Jiong, ZHANG Xuguang, LIU Yan
    2017, 37(7):  1900-1905.  DOI: 10.11772/j.issn.1001-9081.2017.07.1900
    Under MapReduce computing scenarios, complex data mining algorithms typically require the collaboration of multiple MapReduce jobs to complete a task. However, serious redundant disk reads and writes and repeated resource requests among multiple MapReduce jobs seriously degrade the performance of such algorithms under MapReduce. To improve the computational efficiency of the ItemBased recommendation algorithm, firstly, the performance issues of the ItemBased collaborative filtering algorithm under the MapReduce platform were analyzed. Secondly, the execution efficiency of the algorithm was improved by taking advantage of Spark's performance superiority in iterative and in-memory computing, and the ItemBased collaborative filtering algorithm was implemented under the Spark platform. The experimental results show that, with cluster sizes of 10 and 20 nodes, the running time of the algorithm on Spark is only 25.6% and 30.8% of that on MapReduce respectively; the overall computing efficiency on the Spark platform is more than 3 times that on the MapReduce platform.
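The core similarity computation that the Spark version distributes can be sketched in plain Python (a single-machine illustration of item-based cosine similarity; the Spark implementation would spread the item vectors across RDD partitions):

```python
import math

def item_cosine(ratings):
    """ratings: {user: {item: score}}. Returns {(i, j): cosine similarity}
    for every item pair -- the heart of ItemBased collaborative filtering."""
    items = {}                                  # invert to item -> {user: score}
    for u, row in ratings.items():
        for i, s in row.items():
            items.setdefault(i, {})[u] = s
    sims = {}
    names = sorted(items)
    for a in range(len(names)):
        for b in range(a + 1, len(names)):
            i, j = names[a], names[b]
            common = set(items[i]) & set(items[j])
            num = sum(items[i][u] * items[j][u] for u in common)
            den = (math.sqrt(sum(v * v for v in items[i].values())) *
                   math.sqrt(sum(v * v for v in items[j].values())))
            sims[(i, j)] = num / den if den else 0.0
    return sims
```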
    Improved algorithm of artificial bee colony based on Spark
    ZHAI Guangming, LI Guohe, WU Weijiang, HONG Yunfeng, ZHOU Xiaoming, WANG Jing
    2017, 37(7):  1906-1910.  DOI: 10.11772/j.issn.1001-9081.2017.07.1906
    To combat the low efficiency of the Artificial Bee Colony (ABC) algorithm in solving combinatorial problems, a parallel ABC optimization algorithm based on Spark was presented. Firstly, the bee colony was divided into subgroups, among which broadcast variables were used to transmit data, and each subgroup was constructed as a resilient distributed dataset. Secondly, a series of transformation operators was used to parallelize the solution search. Finally, gravitational mass calculation was used to replace roulette probability selection and reduce the time complexity. The simulation results on the Traveling Salesman Problem (TSP) prove the feasibility of the proposed parallel algorithm. The experimental results show that the proposed algorithm provides a 3.24x speedup over the standard ABC algorithm and its convergence speed is about 10% higher than that of the unimproved parallel ABC algorithm. It has significant advantages in solving high-dimensional problems.
    Parallel algorithm for hillshading under multi-core computing environment
    HAN Litao, LIU Hailong, KONG Qiaoli, YANG Fanlin
    2017, 37(7):  1911-1915.  DOI: 10.11772/j.issn.1001-9081.2017.07.1911
    Most of the existing hillshading algorithms are implemented with a single-core, single-thread programming model, which gives them low computational efficiency. To solve this problem, an improved algorithm that parallelizes the existing hillshading algorithms based on a multi-core programming model was proposed. Firstly, the original Digital Elevation Model (DEM) data were divided into several data blocks by grid segmentation. Secondly, these data blocks were shaded in parallel using the Parallel class in the .NET environment to generate a shaded image for each block. Finally, the shaded images were spliced into a complete hillshading image. The experimental results show that the calculation efficiency of the improved parallelized algorithm is obviously higher than that of the existing shading algorithms based on single-core, single-thread programming, and there is an approximately linear relation between the number of cores involved and the shading efficiency. Additionally, it was also found that the three-dimensional, realistic effect of the hillshading image strongly depends on the parameter setting of the light source.
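The per-block shading computation being parallelized can be sketched with the standard hillshade formula (a generic textbook implementation for illustration, shown in Python rather than the paper's .NET code; each DEM block could be shaded independently, exactly as the Parallel version splits the grid):

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth=315.0, altitude=45.0):
    """Classic slope/aspect hillshade of a DEM grid for a light source
    at the given azimuth (degrees clockwise from north) and altitude."""
    az = np.radians(360.0 - azimuth + 90.0)
    zen = np.radians(90.0 - altitude)
    gy, gx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    return (np.cos(zen) * np.cos(slope) +
            np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
```

A perfectly flat DEM shades to the uniform value cos(zenith), which makes the formula easy to sanity-check.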
    Task partitioning algorithm based on parallelism maximization with multi-objective optimization
    YUAN Kaijian, ZHANG Xingming, GAO Yanzhao
    2017, 37(7):  1916-1920.  DOI: 10.11772/j.issn.1001-9081.2017.07.1916
    Concerning the parallelism maximization of hardware task partitioning in reconfigurable systems, a task partitioning algorithm based on parallelism maximization with multi-objective optimization was proposed. Firstly, the operation nodes to be partitioned were discovered by breadth-first search under the constraints of hardware area resources and reasonable dependency relations. Then, considering the effect of execution delay on system completion time, the parallelism of intra-block operations was maximized. Finally, new nodes were accepted under the principle of reducing the fragment area without increasing the number of connections between blocks; otherwise, the current block was closed. The experimental results show that the proposed algorithm achieves maximum intra-block parallelism and reduces the number of blocks and connecting edges compared with the existing Level Based Partitioning (LBP) and Cluster Based Partitioning (CBP) algorithms.
    Research progress in Internet of vehicles trajectory privacy protection
    ZHANG Chunhua, ZANG Haijuan, XUE Xiaoping, ZHANG Fang, CHEN Kangqiang, FENG Lijuan
    2017, 37(7):  1921-1925.  DOI: 10.11772/j.issn.1001-9081.2017.07.1921
    Trajectory privacy protection is critical to the development of the Internet of Vehicles (IoV), which makes it important to summarize and analyze existing research methods. Existing IoV trajectory privacy protection methods can be divided into three categories: trajectory obfuscation, pseudonym change and trajectory encryption. Trajectory obfuscation can be achieved based on users' real trajectories or dummy trajectories. Pseudonym change can be achieved based on mix-zones or path confusion. Trajectory encryption can be achieved based on the Private Information Retrieval (PIR) protocol or spatial transformation. Firstly, the research background and common attacks were introduced and summarized. Secondly, existing IoV trajectory privacy protection methods were surveyed from the aspects of methodology, scientific problems and method evolution, and the problems that need further study were elaborated. Furthermore, the performance of representative schemes was summarized, such as privacy protection, attack resistance and complexity. Finally, future research directions of IoV trajectory privacy protection were discussed.
    Network security situation assessment method based on cuckoo search optimized back propagation neural network
    XIE Lixia, WANG Zhihua
    2017, 37(7):  1926-1930.  DOI: 10.11772/j.issn.1001-9081.2017.07.1926
    Aiming at the low efficiency of existing network security situation assessment methods based on neural networks, a network security situation assessment method based on a Cuckoo Search (CS) optimized Back Propagation (BP) Neural Network (CSBPNN) was proposed. Firstly, the numbers of input and output nodes of the BP Neural Network (BPNN) were determined according to the number of input indexes and the output value; the number of hidden layer nodes was calculated according to an empirical formula and the trial-and-error method. Secondly, the connection weights and thresholds were randomly initialized and encoded into cuckoos using floating point coding. Finally, the weights and thresholds were optimized by the CS algorithm, yielding the CSBPNN model for situation assessment, which was then trained. The situation data was input into the CSBPNN model to obtain the situation value. The experimental results show that the number of iterations of CSBPNN is reduced by 943 and 47 respectively, and its prediction accuracy is 8.06 percentage points and 3.89 percentage points higher, compared with BPNN and the Genetic Algorithm (GA) optimized BP neural network. The proposed algorithm has a faster convergence speed and higher prediction accuracy.
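The CS optimization loop can be sketched on a toy objective standing in for the network's training error (a minimal cuckoo-search sketch with Gaussian random walks instead of true Levy flights; population size, step size and abandonment fraction are assumptions):

```python
import random

def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, seed=0):
    """Minimal cuckoo search: random-walk a nest to propose a new
    solution, keep it if better, and abandon the worst pa fraction of
    nests each round (rebuilding them at random)."""
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    for _ in range(iters):
        i = rng.randrange(n_nests)
        cand = [x + rng.gauss(0, 0.5) for x in nests[i]]
        if f(cand) < fit[i]:
            nests[i], fit[i] = cand, f(cand)
        worst_first = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for j in worst_first[: int(pa * n_nests)]:      # abandon worst nests
            nests[j] = [rng.uniform(-5, 5) for _ in range(dim)]
            fit[j] = f(nests[j])
    best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]

sphere = lambda v: sum(x * x for x in v)   # stand-in for the training error
```

In CSBPNN the vector being searched would encode the BPNN's weights and thresholds, and `f` would be the network's error on the situation data.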
    Dynamic provable data possession scheme based on B+ tree
    LI Haoyu, ZHANG Longjun, LI Qingpeng
    2017, 37(7):  1931-1935.  DOI: 10.11772/j.issn.1001-9081.2017.07.1931
    Concerning the problem that existing provable data possession schemes are inefficient and cannot support fully dynamic updates, a novel dynamic provable data possession scheme based on a B+ tree was proposed. Bilinear pairing techniques and a data version table were introduced to support fine-grained dynamic operations at the block level and to protect users' data privacy. By optimizing the system model and designing the retrieval value of data nodes, the third-party auditor could identify wrong data and locate it accurately. In comparison with the scheme based on the Merkle Hash Tree (MHT), theoretical analysis and experimental results show that the proposed scheme can significantly reduce the cost of constructing the authentication data structure, simplify the dynamic update process, and improve the verification efficiency of the third-party auditor.
    Dual watermarking algorithm based on human visual characteristics and SIFT
    CHEN Shuqin, LI Zhi, CHENG Xinyu, GAO Qi
    2017, 37(7):  1936-1942.  DOI: 10.11772/j.issn.1001-9081.2017.07.1936
    Focusing on the vulnerability of video watermarking information to geometric attacks and the balance between the robustness and adaptability of watermarking algorithms, a dual watermarking scheme based on human visual characteristics and the Scale Invariant Feature Transform (SIFT) was proposed. Firstly, the human visual threshold in the video sequence was taken as the maximum embedding strength of the watermark. Secondly, the video frame was processed by the Discrete Wavelet Transform (DWT): for the medium-high frequency subband coefficients, an adaptive watermarking algorithm based on video motion information was proposed; for the low-frequency coefficients, an anti-geometric-attack video watermarking scheme based on the statistical properties of wavelet coefficients was proposed. Finally, SIFT was used as the trigger to judge whether a video frame had been subjected to geometric attack. Frames under geometric attack were corrected by using the scale and orientation invariance of SIFT, and the watermark signal was extracted after correction; for frames under non-geometric attack, the medium-high frequency extraction scheme was used directly. In comparison with the real-time robust video watermarking algorithm VW-HDWT (Video Watermarking based on Histogram in DWT domain), the Peak Signal-to-Noise Ratio (PSNR) value was improved by 7.5%; compared with the watermarking algorithm based on feature areas, the watermark embedding capacity could be increased by about 10 times. The experimental results show that the proposed scheme is robust to common geometric attacks while maintaining good watermark transparency.
    False data injection attacks based on robust principal component analysis in smart grid
    TIAN Jiwei, WANG Buhong, SHANG Fute
    2017, 37(7):  1943-1947.  DOI: 10.11772/j.issn.1001-9081.2017.07.1943
    Abstract ( )   PDF (969KB) ( )
    The blind attack strategy based on Principal Component Analysis (PCA) is only effective for measurement data with Gaussian noise; in the presence of outliers, the attack will be detected by the traditional bad data detection module. Aiming at the problem of outliers, a blind attack strategy based on Robust PCA (RPCA) was proposed. Firstly, the attacker collected the measurement data containing outliers. Secondly, the outliers and the real measurement data were separated from the collected data by a sparse optimization technique based on the Alternating Direction Method (ADM). Thirdly, PCA was carried out on the real measurement data to obtain the relevant information of the system. Finally, the acquired system information was used to construct the attack vector, and the false data was injected according to it. The experimental results show that the traditional PCA-based attack method is detected by the bad data detection module in the presence of outliers, while the proposed robust-PCA-based method evades such detection. This strategy makes it possible to successfully implement False Data Injection Attacks (FDIA) in the presence of outliers.
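    The outlier separation step described above follows the standard low-rank plus sparse decomposition. Below is a minimal NumPy sketch of principal component pursuit solved by an ADM-style iteration; the function names, the parameter schedule (inexact-ALM style `mu` growth) and the stopping rule are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def shrink(X, tau):
    # element-wise soft-thresholding (used for the sparse part and singular values)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    # singular value thresholding for the low-rank part
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca_adm(M, lam=None, n_iter=300, tol=1e-7):
    """Split M into low-rank L (clean measurements) and sparse S (outliers)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))             # standard PCP weight
    mu = 1.25 / np.linalg.svd(M, compute_uv=False)[0]  # assumed initial penalty
    mu_bar = mu * 1e7
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                                  # feasibility residual
        Y += mu * R
        mu = min(mu * 1.5, mu_bar)
        if np.linalg.norm(R) / np.linalg.norm(M) < tol:
            break
    return L, S
```

    After the attacker obtains `L`, ordinary PCA on `L` yields the subspace information used to construct the unobservable attack vector.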
    Regenerating codes construction method based on sparse random matrix
    XU Zhiqiang, YUAN Dezhai, CHEN Liang
    2017, 37(7):  1948-1952.  DOI: 10.11772/j.issn.1001-9081.2017.07.1948
    Abstract ( )   PDF (937KB) ( )
    Concerning the problem that existing regenerating code schemes perform their calculations over GF(q), which entails high computational complexity and low efficiency, a regenerating code construction method based on a sparse random matrix over GF(2) and the product matrix framework was proposed. Firstly, the file data was arranged in a matrix and row XOR operations were performed according to the encoding matrix. Secondly, local data was encoded by the helper nodes according to the failed node's encoding vector and sent to the repair node. Finally, the failed node's data was decoded by the repair node from the received data. The experimental results show that the repair bandwidth of the proposed method is at most one-tenth of that of traditional erasure codes, and that, compared with regenerating codes based on the conventional Vandermonde matrix, the encoding rate increases by 70% and the decoding recovery rate increases by 50%, which facilitates the application of regenerating codes in massive storage systems.
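    The appeal of GF(2) is that encoding and decoding reduce to XOR of rows. A minimal sketch of sparse random binary encoding with Gaussian-elimination decoding follows; the density value, the full-rank retry loop and the helper functions are illustrative assumptions, not the paper's product-matrix construction:

```python
import numpy as np

def gf2_rank(A):
    # Gaussian elimination over GF(2); XOR plays the role of subtraction
    A = A.copy().astype(np.uint8)
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
    return r

def sparse_gf2_matrix(rows, cols, density=0.3, seed=1):
    # sparse random binary generator; retry until full rank so decoding works
    rng = np.random.default_rng(seed)
    while True:
        G = (rng.random((rows, cols)) < density).astype(np.uint8)
        if gf2_rank(G) == min(rows, cols):
            return G

def encode(G, data):
    # each coded block is the XOR (row operation) of the data blocks
    # selected by one row of G
    return (G @ data) % 2

def gf2_solve(A, B):
    # solve A X = B over GF(2) for a square invertible A (Gauss-Jordan)
    n = A.shape[0]
    M = np.concatenate([A, B], axis=1).astype(np.uint8)
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i, c])
        M[[c, piv]] = M[[piv, c]]
        for i in range(n):
            if i != c and M[i, c]:
                M[i] ^= M[c]
    return M[:, n:]
```

    Repairing a failed node then amounts to XORing the helper nodes' local combinations, so all arithmetic stays within word-level XOR instructions instead of GF(q) multiplications.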
    Algebraic fault attack on lightweight block ciphers SIMON
    MA Yunfei, WANG Tao, CHEN Hao, HUANG Changyang
    2017, 37(7):  1953-1959.  DOI: 10.11772/j.issn.1001-9081.2017.07.1953
    Abstract ( )   PDF (966KB) ( )
    To solve the problems of small fault depth and complex manual deduction in previous fault attacks on SIMON, an Algebraic Fault Attack (AFA) method was proposed. Firstly, correct equations of full-round SIMON encryption were established based on the algebraic representation of SIMON's core operation '&'. Then faults were injected into the internal states, and two models were provided for fault representation depending on whether attackers knew the exact fault information or not. Finally, the CryptoMinisat-2.9.6 solver was used to recover the round keys. The simulation results show that the fault-known and fault-unknown models need 5 and 6 faults respectively to recover the entire key set with single-bit faults injected in the 26th round of SIMON32/64. As for SIMON128/128, both models need only 2 faults to recover the entire key set with n-bit faults injected in the 65th round. Moreover, as the number of faults grows, the dominant factor in the average solving time changes from the fault information to the computation.
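    For concreteness, a small sketch of the SIMON32/64 round function and a single-bit fault injection (the word values used below are arbitrary; the actual attack converts these operations into CNF equations for the SAT solver, which is not reproduced here):

```python
MASK16 = 0xFFFF  # SIMON32/64 operates on 16-bit words

def rotl16(x, r):
    # left rotation of a 16-bit word
    return ((x << r) | (x >> (16 - r))) & MASK16

def simon_f(x):
    # core nonlinear operation: AND of two rotations, XORed with a third
    return (rotl16(x, 1) & rotl16(x, 8)) ^ rotl16(x, 2)

def simon_round(x, y, k):
    # one Feistel-like round: (x, y) -> (y ^ f(x) ^ k, x)
    return y ^ simon_f(x) ^ k, x

def inject_bit_fault(x, bit):
    # flip a single bit of an internal state word
    return x ^ (1 << bit)
```

    Because `f` XORs in a rotation by 2 and ANDs rotations by 1 and 8, a single-bit fault in `x` perturbs at most three known output bit positions, which keeps the resulting fault equations compact enough for the solver.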
    Multi-view feature projection and synthesis-analysis dictionary learning for image classification
    FENG Hui, JING Xiaoyuan, ZHU Xiaoke
    2017, 37(7):  1960-1966.  DOI: 10.11772/j.issn.1001-9081.2017.07.1960
    Abstract ( )   PDF (1171KB) ( )
    Concerning the problem that existing synthesis-analysis dictionary learning methods cannot effectively eliminate the differences between samples of the same class and ignore the different effects of different features on classification, an image classification method based on Multi-view Feature Projection and Synthesis-analysis Dictionary Learning (MFPSDL) was put forward. Firstly, different feature projection matrices were learned for different features during synthesis-analysis dictionary learning, reducing the influence of within-class differences on recognition. Secondly, a discriminant constraint was added to the synthesis-analysis dictionary, so that similar sparse representation coefficients were obtained for samples of the same class. Finally, by learning different weights for different features, multiple features could be fully integrated. Experiments were carried out on the Labeled Faces in the Wild (LFW) and Modified National Institute of Standards and Technology (MNIST) databases; the training time of the MFPSDL method on the LFW and MNIST databases was 61.236 s and 52.281 s respectively. Compared with Fisher Discrimination Dictionary Learning (FDDL), Label Consistent K Singular Value Decomposition (LC-KSVD) and Dictionary Pair Learning (DPL), the recognition rate of the MFPSDL method on LFW and MNIST was increased by at least 2.15 and 2.08 percentage points respectively. The experimental results show that the MFPSDL method obtains a higher recognition rate while keeping low time complexity, and is suitable for image classification.
    Path planning for restaurant service robot based on improved genetic algorithm
    XU Lin, FAN Xinwei
    2017, 37(7):  1967-1971.  DOI: 10.11772/j.issn.1001-9081.2017.07.1967
    Abstract ( )   PDF (808KB) ( )
    Since the Genetic Algorithm (GA) easily falls into premature convergence and has a slow convergence rate, an improved GA based on the Traditional GA (TGA), called HLGA (Halton-Levenshtein Genetic Algorithm), was proposed for path planning of real restaurant service robots. Firstly, the initial population generated from a quasi-random Halton sequence was optimized with a similarity method based on edit distance. Secondly, improved adaptive adjustment formulas of crossover probability and mutation probability were adopted to cross and mutate the selected individuals. Finally, the individual fitness values of the safety-evaluation-factor functions were calculated, and the global optimal solution was obtained by comparison and iteration. Theoretical analysis and Matlab simulation show that the running time of HLGA was decreased by 6.92 seconds and 1.79 seconds compared with TGA and the Adaptive Genetic Algorithm based on Improved independent Similarity (ISAGA), and the planned path was more secure and smooth. The simulation results show that HLGA can effectively improve the quality of path planning in practical applications while reducing the search space and the planning time.
    Encoding of genetic algorithm for a class of fitness functions
    ZHU Chunmei, MO Hongqiang
    2017, 37(7):  1972-1976.  DOI: 10.11772/j.issn.1001-9081.2017.07.1972
    Abstract ( )   PDF (935KB) ( )
    In investigating the relationship between the periodicity of the fitness function and the encoding cardinality, evaluating encoding performance by the number of order-1 building blocks is not necessarily valid. Focusing on this issue, a method of evaluating the encoding performance of the Genetic Algorithm (GA) using the Accumulated Escape Probability (AEP) was proposed, and encodings were studied for a class of fitness functions that are linear combinations of sinusoidal functions whose frequencies are exponential in a positive integer m. Firstly, the general form of the fitness function was given, and the concept of base-m encoding was explained. Secondly, the definition of the AEP was introduced, and a method to compute it was proposed. Then the AEPs of the above fitness functions under encodings with different bases were compared, and the results showed that, for such a fitness function, a base-m encoding achieves a higher AEP than encodings with bases other than m. The simulation results show that the optimization performance and the rise time of the average fitness of the population under a base-m encoding are significantly better than those under the other encodings.
    Classification of symbolic sequences with multi-order Markov model
    CHENG Lingfang, GUO Gongde, CHEN Lifei
    2017, 37(7):  1977-1982.  DOI: 10.11772/j.issn.1001-9081.2017.07.1977
    Abstract ( )   PDF (956KB) ( )
    To solve the problem that existing methods based on fixed-order Markov models cannot make full use of the structural features contained in subsequences of different orders, a new Bayesian method based on the multi-order Markov model was proposed for symbolic sequence classification. First, a Conditional Probability Distribution (CPD) model was built based on the multi-order Markov model. Second, a suffix tree for n-order subsequences with efficient suffix tables and an efficient construction algorithm were proposed, with which the multi-order CPD models could be learned by scanning the sequence set once. Finally, a Bayesian classifier was proposed for the classification task: its training algorithm learns the order-weights of the models of different orders by the Maximum Likelihood (ML) method, while its classification algorithm carries out Bayesian prediction using the weighted conditional probabilities of each order. A series of experiments were conducted on real-world sequence sets from three domains, and the results demonstrate that the new classifier is insensitive to changes of the predefined model order. Compared with existing methods such as the support vector machine using a fixed-order model, the proposed method achieves an improvement of more than 40% in classification accuracy on both gene and speech sequences, yielding reference values for the optimal order of a Markov model on symbolic sequences.
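    The order-weighted Bayesian prediction can be illustrated with a small dictionary-based sketch; the uniform order-weights, Laplace smoothing and plain hash-map counting are illustrative assumptions (the paper learns the weights by maximum likelihood and stores counts in a suffix tree):

```python
from collections import defaultdict
import math

class MultiOrderMarkovClassifier:
    def __init__(self, max_order=2, weights=None):
        self.max_order = max_order
        # uniform order-weights here; the paper learns them by ML training
        self.weights = weights or [1.0 / (max_order + 1)] * (max_order + 1)
        self.counts = {}      # class -> [order -> context -> symbol -> count]
        self.alphabet = set()

    def fit(self, sequences, labels):
        for seq, y in zip(sequences, labels):
            tab = self.counts.setdefault(
                y, [defaultdict(lambda: defaultdict(int))
                    for _ in range(self.max_order + 1)])
            self.alphabet.update(seq)
            for t, s in enumerate(seq):
                for k in range(self.max_order + 1):
                    if t >= k:                      # k-symbol context exists
                        tab[k][seq[t - k:t]][s] += 1

    def _logp(self, tab, k, ctx, s):
        row = tab[k].get(ctx, {})
        total = sum(row.values())
        return math.log((row.get(s, 0) + 1) / (total + len(self.alphabet)))

    def predict(self, seq):
        best, best_lp = None, -math.inf
        for y, tab in self.counts.items():
            lp = sum(w * self._logp(tab, k, seq[t - k:t], s)
                     for t, s in enumerate(seq)
                     for k, w in enumerate(self.weights) if t >= k)
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

    Each order contributes a weighted log conditional probability, so subsequence structure of every length up to `max_order` influences the decision.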
    Online service evaluation based on social choice theory
    LI Wei, FU Xiaodong, LIU Li, LIU Lijun
    2017, 37(7):  1983-1988.  DOI: 10.11772/j.issn.1001-9081.2017.07.1983
    Abstract ( )   PDF (976KB) ( )
    The inconsistency of user evaluation standards and preferences leads to unfair incomparability between online services in cyberspace, so users can hardly choose satisfactory online services. A ranking method for online service quality based on social choice theory was proposed. First, the group preference matrix was built according to the user-service evaluation matrix given by users; second, a 0-1 integer programming model was built based on the group preference matrix and the Kemeny social choice function; finally, the optimal service ranking was obtained by solving this model. In the proposed method, individual preferences were aggregated into a group preference, so that the decision was consistent with the majority preference of the group and maximally consistent with the individual preferences. The rationality and effectiveness of the proposed method were verified by theoretical analysis and experiments. The experimental results show that the proposed method can resolve the incomparability between online services, realize online service quality ranking, and effectively resist recommendation attacks, exhibiting strong resistance to manipulation.
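    The objective that the 0-1 integer program optimizes can be illustrated by a brute-force sketch: enumerate all rankings and pick the one minimizing the total number of pairwise disagreements with the individual preferences. This is only practical for a handful of services (the paper's integer-programming formulation scales further); the function names are illustrative:

```python
from itertools import permutations

def pairwise_disagreements(ranking, pref):
    # Kendall tau distance between a candidate ranking and one user's
    # preference order over the same set of services
    pos = {s: i for i, s in enumerate(ranking)}
    return sum(1
               for i in range(len(pref))
               for j in range(i + 1, len(pref))
               if pos[pref[i]] > pos[pref[j]])

def kemeny_ranking(preferences):
    # exhaustive Kemeny aggregation: minimize total pairwise disagreement
    services = sorted(preferences[0])
    best, best_cost = None, float("inf")
    for cand in permutations(services):
        cost = sum(pairwise_disagreements(cand, p) for p in preferences)
        if cost < best_cost:
            best, best_cost = cand, cost
    return list(best)
```

    The returned order agrees with the majority on every pair whenever a consistent majority order exists, which is exactly the consensus property the Kemeny social choice function provides.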
    Fast fire flame recognition algorithm based on multi-feature logarithmic regression
    XI Tingyu, QIU Xuanbing, SUN Dongyuan, LI Ning, LI Chuanliang, WANG Gao, YAN Yu
    2017, 37(7):  1989-1993.  DOI: 10.11772/j.issn.1001-9081.2017.07.1989
    Abstract ( )   PDF (819KB) ( )
    To improve the recognition rate and reduce the false-recognition rate in real-time detection of flame in video surveillance, a fast flame recognition algorithm based on a multi-feature logarithmic regression model was proposed. Firstly, the image was segmented according to the chromaticity of the flame, and the Candidate Fire Region (CFR) was obtained by subtracting the reference image from the moving-target image. Secondly, features of the CFR such as area change rate, circularity, number of sharp corners and centroid displacement were extracted to establish the logarithmic regression model. Then, a total of 300 flame and non-flame images, drawn from the National Institute of Standards and Technology (NIST), the Computer Vision laboratory of Inha University (ICV), Fire detection based on computer Vision (VisiFire) and an experimental library of candle and paper combustion, were used for parameter learning. Finally, 8 video clips comprising 11071 frames were used to validate the proposed algorithm. The experimental results show that the True Positive Rate (TPR) and True Negative Rate (TNR) of the proposed algorithm are 93% and 98% respectively, and the average identification time is 0.058 s per frame. Owing to its fast identification and high recognition rate, the proposed algorithm can be applied to embedded real-time flame image recognition.
    Heart disease classification based on active imbalance multi-class AdaBoost algorithm
    WANG Lili, FU Zhongliang, TAO Pan, HU Xin
    2017, 37(7):  1994-1998.  DOI: 10.11772/j.issn.1001-9081.2017.07.1994
    Abstract ( )   PDF (792KB) ( )
    An imbalanced multi-class AdaBoost algorithm with active learning was proposed to improve the recognition accuracy of minority classes in imbalanced classification. Firstly, active learning was adopted to select the most informative samples for the classifiers through multiple iterations of sampling. Secondly, a new sample selection strategy based on the uncertainty of a dynamic margin was proposed to tackle the problem of data imbalance in the multi-class case. Finally, a cost-sensitive method was adopted to improve the multi-class AdaBoost algorithm: different classes were assigned different misclassification costs, the sample weight update speed was adjusted, and weak learners were forced to "concern" the minority classes. The experimental results on a clinical TransThoracic Echocardiography (TTE) data set illustrate that, compared with the multi-class Support Vector Machine (SVM), the total recognition accuracy of heart disease increases by 5.9%, the G-mean improves by 18.2%, the recognition accuracy of Valvular Heart Disease (VHD) improves by 0.8%, that of Infective Endocarditis (IE, a minority class) by 12.7%, and that of Coronary Artery Disease (CAD, a minority class) by 79.73%; compared with SMOTE-Boost, the total recognition accuracy of heart disease increases by 6.11%, the G-mean improves by 0.64%, the recognition accuracy of VHD improves by 11.07%, and that of Congenital Heart Disease (CHD) by 3.67%. The experimental results on the TTE data and 4 UCI data sets illustrate that, when used in imbalanced multi-class classification, the proposed algorithm can effectively improve the recognition accuracy of minority classes and upgrade the overall classifier performance while not dramatically decreasing the recognition accuracy of the other classes.
    Facial age estimation method based on hybrid model of classification and regression
    ZHAO Yiding, TIAN Senping
    2017, 37(7):  1999-2002.  DOI: 10.11772/j.issn.1001-9081.2017.07.1999
    Abstract ( )   PDF (813KB) ( )
    Focusing on the small size and uneven distribution of current facial age databases, an approach based on a hybrid model combining a classifier and regressors was proposed for facial age estimation. The approach consists of two aspects: feature learning and estimation method. For feature learning, based on an existing Convolutional Neural Network (CNN), an age group classifier and two age estimators were pretrained on a coarse dataset and then fine-tuned on an accurate database. For the estimation method, a coarse-to-fine strategy was adopted: facial images were first classified into teenaged, middle-aged, elderly and two overlapping groups; then the teenaged and elderly groups were estimated by the classifier model, the middle-aged group by the regressor model, and the two overlapping groups by both models. The proposed approach achieves a Mean Absolute Error (MAE) of 2.56 on the test set. The experimental results show that the proposed approach reaches a low error across different races and genders.
    Pedestrian detection based on deformable part model with branch and bound
    CHAI Enhui, ZHI Min
    2017, 37(7):  2003-2007.  DOI: 10.11772/j.issn.1001-9081.2017.07.2003
    Abstract ( )   PDF (1055KB) ( )
    The Deformable Part Model (DPM) algorithm achieves high detection accuracy in the field of pedestrian detection; however, the computation in the two steps of feature extraction and pedestrian location is too heavy, which leads to slow detection speed and prevents the DPM algorithm from being used in real-time pedestrian detection. To solve these problems, a deformable part model with the Branch and Bound (BB) algorithm and the Cascaded Detection (CD) algorithm, called BBCDPM, was proposed. First, the Histogram of Oriented Gradients (HOG) feature was selected to describe the human target and generate the feature pyramid. Then, the deformable part model was built and the Latent Support Vector Machine (LSVM) was used to train it; to increase the accuracy of pedestrian detection, the number of part models in the traditional DPM algorithm was increased from 5 to 8. Finally, the cascade detection algorithm was used to simplify the detection model, the maximum was found by combining it with the branch and bound algorithm, and a large number of impossible object hypotheses were pruned, completing pedestrian target location and detection. The experimental results on the INRIA dataset show that, compared with the traditional DPM algorithm, the proposed algorithm improves the accuracy rate by 12 percentage points and significantly accelerates pedestrian detection and recognition.
    Field scene recognition method for low-small-slow unmanned aerial vehicle landing
    YE Lihua, WANG Lei, ZHAO Liping
    2017, 37(7):  2008-2013.  DOI: 10.11772/j.issn.1001-9081.2017.07.2008
    Abstract ( )   PDF (1005KB) ( )
    Since complex autonomous landing scenes are difficult to recognize in wild flight environments for low-small-slow Unmanned Aerial Vehicles (UAV), a novel field scene recognition algorithm based on the combination of a local pyramid feature and Convolutional Neural Network (CNN) learned features was proposed. Firstly, the scene was divided into small scenes of 4×4 and 8×8 blocks, the Histogram of Oriented Gradient (HOG) algorithm was used to extract the features of all blocks, and all features were concatenated end to end to obtain a feature vector with spatial-pyramid characteristics. Secondly, a deep CNN for scene classification was designed and fine-tuned to obtain the CNN model and extract deep network learned features. Finally, the two features were concatenated to obtain the final scene feature, and a Support Vector Machine (SVM) classifier was used for classification. Compared with traditional hand-crafted feature methods, the proposed algorithm improves the recognition accuracy by more than 4 percentage points on data sets such as Sports-8, Scene-15, Indoor-67 and a self-built one. The experimental results show that the proposed algorithm can effectively improve the recognition accuracy of landing scenes.
    Character recognition of license plate based on convolution neural network
    DONG Junfei, ZHENG Bochuan, YANG Zejing
    2017, 37(7):  2014-2018.  DOI: 10.11772/j.issn.1001-9081.2017.07.2014
    Abstract ( )   PDF (792KB) ( )
    Character recognition of license plates is an important component of an intelligent license plate recognition system; both the number of character categories and the complexity of the background affect the correct recognition rate. A character recognition method for license plates based on a Convolution Neural Network (CNN) was proposed to improve the correct recognition rate. Firstly, simple shape structures of license plate characters were obtained through image preprocessing, which included image size normalization, denoising, binarization, thinning, and character centering. Secondly, the preprocessed character images were trained and recognized by the proposed CNN model. The experimental results show that the correct recognition rate of the proposed method reaches 99.96%, better than the three compared methods, demonstrating that the proposed CNN method has good recognition performance for license plate characters and can meet practical application requirements.
    Design and implementation of process management system supporting multi-tool collaboration
    YANG Tao, SHI Lin, SONG Mengdie, LI Shoubin, WANG Qing
    2017, 37(7):  2019-2026.  DOI: 10.11772/j.issn.1001-9081.2017.07.2019
    Abstract ( )   PDF (1269KB) ( )
    The software development process increasingly depends on various Computer-Aided Software (CAS) tools, but using these tools simultaneously brings problems, including non-customizable development processes, inconsistent process data and inefficient process management. To deal with these problems, a software development process management system that supports multi-tool collaboration was proposed. A hierarchically architected system was developed on top of workflow design by analyzing the software development process and studying software engineering development models that support fast iteration and lean toward process management. The system was rigorously tested under 576 test cases with a pass rate of 85%, which meets the majority of tool collaboration needs, including definable development processes, consistent interaction data and available process management. The system has been used by seven development teams with about 200 developers. Feedback from managers, developers and testers shows that the system saves weekly meeting time, facilitates the management of development tasks, and significantly improves development efficiency.
    UML profile for modeling ERP cloud service platform based on SaaS
    WANG Zhen, JIANG Zheyuan
    2017, 37(7):  2027-2033.  DOI: 10.11772/j.issn.1001-9081.2017.07.2027
    Abstract ( )   PDF (1038KB) ( )
    Since the traditional Enterprise Resource Planning (ERP) system has low openness, low expansibility and high cost in the current business environment, an ERP system modeling method based on the Software-as-a-Service (SaaS) model was proposed. Firstly, a new primitive set, a Unified Modeling Language (UML) profile, was obtained by extending the primitives of UML. Secondly, an equivalent meta-model was established and semantic unambiguity was ensured by the Object Constraint Language (OCL). Finally, the cloud ERP was described by a model framework composed of an application diagram, an operation dictionary, a physical diagram and a topological diagram, transforming the cloud ERP system into documents. The proposed method focuses on modular design, and all stages adopt a unified visual meta-model. According to the modeling requirements, the SaaS-based ERP model was successfully established on the Enterprise Architect (EA) platform with the proposed method, and its effectiveness was verified. The theoretical analysis and modeling results show that the proposed method can ensure the interoperability and consistency between models and improve the scalability of the ERP system.
    Multi-Agent-based real-time self-adaptive discrimination method
    WANG Jing, WANG Chunmei, ZHI Jia, YANG Jiasen, CHEN Tuo
    2017, 37(7):  2034-2038.  DOI: 10.11772/j.issn.1001-9081.2017.07.2034
    Abstract ( )   PDF (988KB) ( )
    Concerning the problem that existing data discrimination methods cannot adapt to a changeable test environment or realize a continuous real-time discrimination process with a low error rate when applied to the ground integrated test of payloads, a Multi-Agent-based Real-time self-Adaptive Discrimination (MARAD) method was proposed. Firstly, following the design principle of "sensing-decision-execution", four Agents with their own tasks that also interact and cooperate with each other were adopted to adapt to the changeable test situation. Secondly, an activity-oriented model was constructed, and the C Language Integrated Production System (CLIPS) was used as the inference engine to make the discrimination rules independent of test sequences and assure the continuity of discrimination. Finally, a fault-tolerant mechanism was introduced into the discrimination rules to decrease the false-positive rate without compromising correctness. On the same test data, compared with the state modeling method (averaging the results of three discrimination runs), the MARAD method has the same parameter missing rate of 0% but decreases the activity false-positive rate by 10.54 percentage points; compared with the manual method, MARAD decreases the parameter missing rate by 5.97 percentage points and the activity false-positive rate by 3.02 percentage points, with no human participation in the discrimination. The proposed method can effectively improve the environment self-adaptability, real-time discrimination continuity and correctness of the system.
    Reputation model of crowdsourcing workers based on active degree
    YAN Jun, KU Shaoping, YU Chu
    2017, 37(7):  2039-2043.  DOI: 10.11772/j.issn.1001-9081.2017.07.2039
    Abstract ( )   PDF (844KB) ( )
    Aiming at the problem that existing crowdsourcing systems cannot effectively stimulate workers' activity or control the quality of task completion in the crowdsourcing interaction process, a worker reputation model based on active degree was proposed to realize quality control on crowdsourcing platforms. The model improves the average reputation model by introducing the concepts of an active factor and a historical factor from the perspectives of workers' active degree and historical reputation value. First, the active factor of a worker was calculated according to the days on which the worker participated in crowdsourcing during the last 30 days; then the historical reputation value of the worker was calculated according to the historical factor. Finally, the active-degree-based reputation value of the worker was computed from the active factor and the historical reputation value, and used to measure the worker's ability. Theoretical analysis and test results showed that, compared with the average reputation model, the task completion quality of workers selected by the proposed model was increased by 4.95% and the completion time was decreased by 25.33%; compared with the trust model based on evidence theory, the task completion quality was increased by 6.63% and the completion time was decreased by 25.11%. The experimental results show that the active-degree-based worker reputation model can effectively improve the quality of crowdsourcing tasks and reduce the completion time.
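    A toy sketch of how the two factors could be combined; the 30-day activity ratio, the exponential-decay historical factor and the mixing weight `alpha` are all hypothetical choices for illustration and are not the paper's actual formulas:

```python
def active_factor(active_days_last_30):
    # hypothetical: fraction of the last 30 days on which the worker
    # participated in crowdsourcing tasks
    return min(active_days_last_30, 30) / 30.0

def historical_reputation(scores, decay=0.9):
    # hypothetical historical factor: exponentially decayed average of past
    # task-quality scores, with the most recent score weighted highest
    num = den = 0.0
    for age, s in enumerate(reversed(scores)):
        w = decay ** age
        num += w * s
        den += w
    return num / den if den else 0.0

def reputation(active_days_last_30, scores, alpha=0.4):
    # blend activity and history; alpha is an assumed mixing weight
    return (alpha * active_factor(active_days_last_30)
            + (1 - alpha) * historical_reputation(scores))
```

    Under any such blend, a worker who is both recently active and historically reliable outranks one who is reliable but dormant, which is the selection behavior the model aims for.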
    Challenges and recent progress in big data visualization
    CUI Di, GUO Xiaoyan, CHEN Wei
    2017, 37(7):  2044-2049.  DOI: 10.11772/j.issn.1001-9081.2017.07.2044
    Abstract ( )   PDF (1184KB) ( )
    The advent of the big data era highlights the importance of visualization. As an important data analysis method, visual analytics exploits the cognitive abilities of human beings, integrates the abilities of humans and computers, and gains insight into big data through human-computer interaction. In view of the characteristics of large volume, high dimensionality, multiple sources and multiple forms, the visualization of large-scale data was discussed first: 1) the divide-and-conquer principle was used to split a big problem into a number of smaller tasks, with parallel processing to improve speed; 2) aggregation, sampling and multi-resolution representation were used to reduce the data; 3) multiple views were used to present high-dimensional data. Then, the visualization process for the two types of streaming data, monitoring and superposition, was discussed. Finally, the visualization of unstructured and heterogeneous data was described. In short, visualization can compensate for the disadvantages and shortcomings of automatic computer analysis, integrate the computer's analysis ability with human perception of information, and effectively reveal the information and wisdom behind big data. However, the research results of this theory are still limited, facing the challenges of large scale, dynamic change, high dimensionality and multi-source heterogeneity, which are becoming the hot spots and directions of future big data visualization research.
    Quick visual Boolean operation on heavy mesh models
    YANG Zhanglong, CHEN Ming
    2017, 37(7):  2050-2056.  DOI: 10.11772/j.issn.1001-9081.2017.07.2050
    Abstract ( )   PDF (1174KB) ( )
    A new algorithm was proposed to meet the instantaneous-response requirement of Boolean operations between large-scale mesh models in product design. Discrete sampling was performed on the mesh models to obtain a ray-segment point cloud model, and the three-dimensional Boolean operation between triangular meshes was thereby converted into a one-dimensional one between ray segments; the intersection points around the overlapped regions of the mesh models could be accurately solved and interpolated, so the Boolean operation was significantly sped up and the design efficiency for products of complex topology was greatly improved. The point cloud model obtained by the proposed algorithm can be rendered with the same effect as the triangular mesh model, and the method can be adopted in engineering applications.
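    The reduction from 3D mesh Booleans to 1D segment Booleans can be illustrated on a single ray: each solid contributes a set of (enter, exit) parameter intervals along the ray, and union, intersection and difference become interval operations. A minimal sketch (the interval representation and function names are illustrative assumptions):

```python
def _normalize(segs):
    # sort and merge overlapping (enter, exit) intervals along one ray
    segs = sorted(segs)
    out = []
    for s, e in segs:
        if out and s <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], e))
        else:
            out.append((s, e))
    return out

def interval_union(a, b):
    return _normalize(a + b)

def interval_intersect(a, b):
    out = []
    for s1, e1 in a:
        for s2, e2 in b:
            s, e = max(s1, s2), min(e1, e2)
            if s < e:
                out.append((s, e))
    return _normalize(out)

def interval_subtract(a, b):
    # a minus b: clip every interval of a against each interval of b
    for s2, e2 in _normalize(b):
        clipped = []
        for s1, e1 in a:
            if e1 <= s2 or s1 >= e2:
                clipped.append((s1, e1))
            else:
                if s1 < s2:
                    clipped.append((s1, s2))
                if e1 > e2:
                    clipped.append((e2, e1))
        a = clipped
    return _normalize(a)
```

    Running these operations per sampled ray yields the point cloud of the Boolean result without ever intersecting triangles pairwise, which is where the speed-up comes from.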
    Photogrammetric method for accurate tracking of 3D scanning probe
    LIU Hong, WANG Wenxiang, LI Weishi
    2017, 37(7):  2057-2061.  DOI: 10.11772/j.issn.1001-9081.2017.07.2057
    Abstract ( )   PDF (825KB) ( )
    For traditional 3D robot scanners, the measuring precision depends on the positioning precision of the robot, making high measuring precision difficult to achieve. A photogrammetric method was proposed to track and position the 3D scanning probe accurately. Firstly, a probe tracking system consisting of multiple industrial cameras was set up, and coded markers were pasted on the probe. Secondly, the cameras were calibrated with high precision to obtain their interior and exterior parameters. Thirdly, all cameras were synchronized, the markers in the images were matched according to the coding principle, and the projection matrices were obtained. Finally, the 3D coordinates of the markers in space were computed to track and position the probe. The experimental results show that the mean position error of the markers is 0.293 mm and the average angle error is 0.136°, within a reasonable accuracy range. The photogrammetric method can improve the positioning precision of the probe, so as to achieve high-precision measurement.
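    The final step, computing a marker's 3D coordinates from its synchronized 2D observations and the projection matrices, can be sketched with linear (DLT) triangulation from two calibrated views; extending to more cameras just appends two rows per view. The intrinsics and poses used in any example are hypothetical calibration results:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # linear (DLT) triangulation of one marker from two calibrated views:
    # each observation (u, v) contributes two homogeneous linear constraints
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the 3D point is the null vector of A (smallest singular vector)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize
```

    With exact, noise-free projections the recovered point matches the ground truth; with real detections, the SVD gives the least-squares solution over all views.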
    Joint calibration method of camera and LiDAR based on trapezoidal checkerboard
    JIA Ziyong, REN Guoquan, LI Dongwei, CHENG Ziyang
    2017, 37(7):  2062-2066.  DOI: 10.11772/j.issn.1001-9081.2017.07.2062
    Abstract ( )   PDF (906KB) ( )
    References | Related Articles | Metrics
    Aiming at the problem of information fusion between Light Detection And Ranging (LiDAR) data and camera images while an Unmanned Ground Vehicle (UGV) follows a target vehicle, a joint calibration method of LiDAR and camera based on a trapezoidal checkerboard was proposed. Firstly, the installation angle and installation height of the LiDAR were obtained from its scan line on the trapezoidal calibration plate. Secondly, the external parameters of the camera relative to the vehicle body were calibrated through the black-and-white checkerboard on the trapezoidal calibration plate. Then, the two sensors were jointly calibrated by combining the correspondence between LiDAR data points and image pixel coordinates. Finally, integrating the LiDAR and camera calibration results, pixel-level fusion of the LiDAR data and the camera image was carried out. As long as the trapezoidal calibration plate was placed in front of the vehicle body, the image and the LiDAR data needed to be collected only once in the entire calibration process of the two sensors. The experimental results show that the proposed method has high calibration accuracy, with an average position deviation of 3.5691 pixels (13 μm), and a good fusion effect between LiDAR data and the visual image. It can effectively complete the spatial alignment of the LiDAR and the camera, and is strongly robust to moving objects.
    Variable exponent variational model for image interpolation
    ZHAN Yi, LI Meng
    2017, 37(7):  2067-2070.  DOI: 10.11772/j.issn.1001-9081.2017.07.2067
    Abstract ( )   PDF (702KB) ( )
    References | Related Articles | Metrics
    To eliminate the zigzag and blocky effects in homogeneous areas of an interpolated image, a variable exponent variational method for image interpolation was proposed. By analyzing the diffusion characteristics of the variable exponent variational model, an exponent function suited to the diffusion behavior of image interpolation was introduced. Two parameters in the exponent function act on the interpolation: one controls the intensity of diffusion, which narrows image edges, while the other controls the intensity of smoothing, which retains the fine textures in the image. The new variable exponent variational model makes the Total Variation (TV) term diffuse along image contours and applies heat diffusion in smooth areas. The numerical experiments on real images show that an image interpolated by the proposed method has better interpolated edges, especially for fine textures. Compared to the method proposed by Chen et al. (CHEN Y M, LEVINE S, RAO M. Variable exponent, linear growth functionals in image restoration. SIAM Journal on Applied Mathematics, 2006, 66(4): 1383-1406) and the robust soft-decision interpolation method, the visual improvement in retaining fine textures is prominent, and the Mean Structural SIMilarity (MSSIM) is increased by 0.03 on average. The proposed model is helpful for further study of variable exponent variational models for specific image processing tasks and is suitable for practical applications such as image network communication and printing.
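    For reference, the cited Chen et al. functional is representative of this variable exponent family (the paper's own exponent function differs, using a two-parameter reciprocal-type form):

```latex
\min_{u}\int_{\Omega}|\nabla u|^{p(x)}\,dx,
\qquad
p(x) \;=\; 1 + \frac{1}{1 + k\,|\nabla G_{\sigma} * u_{0}|^{2}},
```

so that p(x) → 1 near edges (TV-like diffusion along contours) and p(x) → 2 in smooth areas (heat diffusion), matching the behavior described above.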
    Salient object detection and extraction method based on reciprocal function and spectral residual
    CHEN Wenbing, JU Hu, CHEN Yunjie
    2017, 37(7):  2071-2077.  DOI: 10.11772/j.issn.1001-9081.2017.07.2071
    Abstract ( )   PDF (1167KB) ( )
    References | Related Articles | Metrics
    To solve the problems of the "center-surround" salient object detection and extraction method, such as incompletely detected or extracted objects, unsmooth boundaries, and the redundancy caused by the down-sampled 9-level pyramid, a salient object detection method based on Reciprocal Function and Spectral Residual (RFSR) was proposed. Firstly, the difference between the intensity image and its Gaussian low-pass counterpart was used to replace the normalization of the intensity image in the "center-surround" model, and the Gaussian pyramid was reduced to 6 levels to avoid redundancy. Secondly, a reciprocal function filter was used instead of a Gabor filter to extract local orientation information. Thirdly, the spectral residual algorithm was used to extract spectral features. Finally, the three extracted features were properly combined to generate the final saliency map. The experimental results on two widely used benchmark datasets show that compared with the "center-surround" and spectral residual models, the proposed method significantly improves precision, recall and F-measure, and lays a foundation for subsequent image analysis, object recognition, visual-attention-based image retrieval, and so on.
    Non-local means denoising algorithm based on image segmentation
    XU Su, ZHOU Yingyue
    2017, 37(7):  2078-2083.  DOI: 10.11772/j.issn.1001-9081.2017.07.2078
    Abstract ( )   PDF (1066KB) ( )
    References | Related Articles | Metrics
    Focusing on the problems of non-adaptive filtering parameters and edge blur in the Non-Local Means (NLM) algorithm, an improved NLM denoising algorithm based on image segmentation was proposed. The algorithm consists of two phases. In the first phase, the filtering parameter was determined according to the noise level and image structure, and the traditional NLM algorithm was used to remove the noise and produce a rough clean image. In the second phase, the estimated clean image was divided into a detail region and a background region based on pixel variance, and the image patches belonging to different regions were denoised separately. To remove the noise effectively, back projection was utilized to make full use of the residual structure contained in the method noise of the first phase. The experimental results show that compared with the traditional NLM algorithm and three improved NLM algorithms, the proposed algorithm achieves higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) while preserving more structural details and edges, yielding a clear denoised image that retains the real image information.
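    For context, the core of plain NLM at a single pixel can be sketched as follows; the fixed filtering parameter `h` is exactly what the paper's first phase adapts to noise level and image structure (all parameter values here are illustrative):

```python
import numpy as np

def nlm_pixel(img, i, j, patch=3, search=7, h=0.5):
    """Denoise pixel (i, j) by non-local means: a weighted average of the
    pixels in a search window, weighted by Gaussian-kernel similarity of
    the small patches around them. h is the filtering parameter."""
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode='reflect')      # handle borders by reflection
    ci, cj = i + r + s, j + r + s                  # centre in padded coordinates
    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
    num = den = 0.0
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            pi, pj = ci + di, cj + dj
            cand = pad[pi - r:pi + r + 1, pj - r:pj + r + 1]
            w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
            num += w * pad[pi, pj]
            den += w
    return num / den
```

In a flat noisy region the weighted average strongly suppresses the noise; the paper's two-phase design additionally treats detail and background regions with separate parameters.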
    Adaptive threshold denoising of regularized super-resolution reconstruction procedure
    PENG Zheng, CHEN Dongfang, WANG Xiaofeng
    2017, 37(7):  2084-2088.  DOI: 10.11772/j.issn.1001-9081.2017.07.2084
    Abstract ( )   PDF (993KB) ( )
    References | Related Articles | Metrics
    In order to enhance the ability of regularized super-resolution techniques to reconstruct noisy images, an adaptive threshold denoising method was proposed based on extended research on General Total Variation (GTV) regularized super-resolution reconstruction. Firstly, iterative reconstruction was carried out according to GTV regularized super-resolution reconstruction. Then, the derived adaptive threshold matrix was used to partition the GTV cost matrix at each iteration: pixels whose costs were below the threshold continued to be iterated, while pixels whose costs exceeded the threshold were excluded from the current iteration and re-interpolated. Finally, the reconstruction result was output when the program met the convergence requirement. The experimental results show that, compared with the plain GTV regularized reconstruction method and the adaptive parameter method, the proposed adaptive threshold denoising method accelerates convergence and improves the quality of the reconstructed image, making regularized super-resolution reconstruction perform better on noisy images.
    Continuous ultrasound image set segmentation method based on support vector machine
    LIU Jun, LI Pengfei
    2017, 37(7):  2089-2094.  DOI: 10.11772/j.issn.1001-9081.2017.07.2089
    Abstract ( )   PDF (1199KB) ( )
    References | Related Articles | Metrics
    A novel Support Vector Machine (SVM)-based unified segmentation model was proposed for segmenting a continuous ultrasound image set, because the traditional SVM-based segmentation method needs to extract sample points for each image to create a separate segmentation model. Firstly, a gray feature representing the continuity of images within the set was extracted from the gray histogram of each image. Secondly, some images were selected as samples and the gray feature of each pixel was extracted. Finally, the gray feature of each pixel was combined with the sequence-continuity feature of the image containing it, and an SVM was used to train a unified segmentation model to segment the whole image set. The experimental results show that compared with the traditional SVM-based segmentation method, the new model greatly reduces the workload of manually selecting sample points when segmenting a large, continuously varying image set, while guaranteeing segmentation accuracy.
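    A minimal stand-in for the trained model is a linear SVM on per-pixel features, fitted by hinge-loss subgradient descent (a sketch only — the paper does not specify the kernel or solver, and the feature layout here is an assumption):

```python
import numpy as np

def train_linear_svm(X, y, lam=1e-3, lr=0.05, epochs=300):
    """Linear SVM fitted by subgradient descent on the regularized hinge loss.
    X: (n, d) per-pixel features (e.g. gray value plus a normalized
    image-sequence index, echoing the unified model); y: labels in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated: hinge subgradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                               # only the regularizer acts
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    """Classify pixels: +1 = object, -1 = background."""
    return np.sign(X @ w + b)
```

Training once on samples drawn from a few images, then predicting on every pixel of the whole set, is what removes the per-image labeling workload described above.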
    Space target sphere grid index based on orbit restraint and region query application
    LYU Liang, SHI Qunshan, LAN Chaozhen, CHEN Yu, LIU Yiping, LIANG Jing
    2017, 37(7):  2095-2099.  DOI: 10.11772/j.issn.1001-9081.2017.07.2095
    Abstract ( )   PDF (768KB) ( )
    References | Related Articles | Metrics
    Since the efficiency of retrieval and query of massive high-speed space targets remains low, a construction method of a sphere grid index for space targets based on orbit constraints was proposed. The method exploits the fact that the orbit of a space target is relatively stable in the Earth-centered inertial coordinate system to achieve a stable index of high-speed moving objects, by maintaining, for each sphere subdivision grid cell, the list of space targets that pass through it. On this basis, a region query application scheme was proposed. Firstly, the query time period was discretized according to a particular step value. Secondly, the boundary coordinates of the query region in inertial space were calculated and the intersected grid cells were determined. Then, the space targets in those cells were extracted, and the spatial relationship between the targets and the region was calculated and evaluated. Finally, the whole time period was queried recursively to accomplish space target transit query analysis. In the simulation experiment, the time consumed by the traditional one-by-one calculation method is linearly and positively correlated with the number of targets but independent of the region size, costing 0.09 ms per target on average. By contrast, the time of the proposed method decreases linearly as the region size shrinks; when the number of region grid cells is less than 2750, its time efficiency is higher than that of the comparison method, while maintaining good accuracy. The experimental results show that the proposed method can effectively improve query efficiency in practical region applications.
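    The index structure itself can be sketched as an equal-angle sphere grid mapping cells to the targets whose orbits pass through them (cell size, key scheme and class names are illustrative assumptions; date-line wrap-around is not handled in this sketch):

```python
from collections import defaultdict

class SphereGridIndex:
    """Equal-angle sphere grid: each cell keeps the set of space targets
    whose (stable, inertial-frame) orbits pass through it."""

    def __init__(self, cell_deg=10):
        self.cell = cell_deg
        self.grid = defaultdict(set)

    def _key(self, lat, lon):
        # map latitude [-90, 90] and longitude [0, 360) to integer cell indices
        return (int((lat + 90) // self.cell), int(lon % 360 // self.cell))

    def add_pass(self, target, lat, lon):
        """Record that `target`'s orbit crosses the cell containing (lat, lon)."""
        self.grid[self._key(lat, lon)].add(target)

    def query_region(self, lat0, lat1, lon0, lon1):
        """Candidate targets for a rectangular lat/lon region (lat0<=lat1, lon0<=lon1)."""
        hits = set()
        for la in range(int((lat0 + 90) // self.cell), int((lat1 + 90) // self.cell) + 1):
            for lo in range(int(lon0 // self.cell), int(lon1 // self.cell) + 1):
                hits |= self.grid.get((la, lo), set())
        return hits
```

Only targets in the intersected cells need the exact transit computation, which is why query time scales with the region size rather than the total target count.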
    Single document automatic summarization algorithm based on word-sentence co-ranking
    ZHANG Lu, CAO Jie, PU Chaoyi, WU Zhi'ang
    2017, 37(7):  2100-2105.  DOI: 10.11772/j.issn.1001-9081.2017.07.2100
    Abstract ( )   PDF (948KB) ( )
    References | Related Articles | Metrics
    Focusing on the issue that extractive summarization needs to automatically produce a short summary of a document by concatenating sentences taken verbatim from the original material, a single-document automatic summarization algorithm based on word-sentence co-ranking, named WSRank, was proposed, which integrates the word-sentence relationship into the graph-based sentence ranking model. The framework of co-ranking in WSRank was given, then converted into a concise matrix-operation form, and its convergence was theoretically proved. Moreover, a redundancy elimination technique was presented as a supplement to WSRank, so that the quality of automatic summarization could be further enhanced. The experimental results on real datasets show that WSRank improves summarization performance by 13% to 30% on multiple ROUGE metrics, demonstrating the effectiveness of the proposed method.
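    The mutual-reinforcement idea — good sentences contain good words, good words appear in good sentences — can be sketched as a damped power iteration on the word-sentence occurrence matrix (a simplified stand-in for WSRank's actual update, with illustrative damping):

```python
import numpy as np

def ws_rank(W, d=0.85, iters=100):
    """Co-rank words and sentences on the occurrence matrix W
    (W[i, j] = 1 if word i appears in sentence j).
    Returns (word_scores, sentence_scores), each normalized to sum to 1."""
    n_w, n_s = W.shape
    w = np.ones(n_w) / n_w
    s = np.ones(n_s) / n_s
    for _ in range(iters):
        s = (1 - d) / n_s + d * (W.T @ w)   # sentences inherit word scores
        s /= s.sum()
        w = (1 - d) / n_w + d * (W @ s)     # words inherit sentence scores
        w /= w.sum()
    return w, s
```

The top-scoring sentences form the summary; a redundancy check like the paper's supplement would then drop sentences too similar to ones already chosen.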
    Optimal path planning method based on taxi trajectory data
    QI Xin, LIANG Weitao, MA Yong
    2017, 37(7):  2106-2113.  DOI: 10.11772/j.issn.1001-9081.2017.07.2106
    Abstract ( )   PDF (1326KB) ( )
    References | Related Articles | Metrics
    Focusing on the issue that the path calculated by traditional path planning algorithms is not necessarily the optimal path in reality, a path planning algorithm that incorporates taxi driving experience and takes travel time as the measure was proposed. The algorithm transforms computation-centered path planning into data-centered, data-driven mining. Firstly, real passenger-carrying trajectories were extracted from a large number of taxi trajectories and matched to the road network. Secondly, the access frequency of each road segment was calculated from the map-matching results, and the Top-k segments were selected as hot sections. Thirdly, the similarity of trajectories between hot sections was calculated, and the trajectories were clustered to build a hot-section road map of k sections on top of the road network. Finally, an improved A* algorithm was used to calculate the optimal path. The experimental results show that compared with the traditional shortest path planning algorithm and the path planning algorithm based on a hierarchical road network, the path planning method based on the hot-section map shortens both the length of the planned path and the travel time, and improves the time efficiency of path planning.
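    The final planning step is a standard A* search over the road graph, with edge weights taken as expected travel times mined from the trajectories. A sketch (the graph encoding and heuristic are illustrative assumptions, not the paper's exact improvement):

```python
import heapq

def a_star(graph, start, goal, h):
    """A* over a road graph whose edge weights are expected travel times.
    graph : {node: [(neighbor, travel_time), ...]}
    h     : admissible heuristic lower-bounding remaining travel time.
    Returns (total_time, node_path)."""
    open_set = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nbr, t in graph.get(node, []):
            ng = g + t
            if ng < best.get(nbr, float('inf')):
                best[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return float('inf'), []
```

With time-based weights, the "longer but faster" road favored by experienced drivers beats the geometrically shortest one, which is the behavior the hot-section map is built to capture.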
    Motion planning algorithm for unmanned surface vehicle based on Dubins path
    LIU Lezhu, XIAO Changshi, WEN Yuanqiao
    2017, 37(7):  2114-2117.  DOI: 10.11772/j.issn.1001-9081.2017.07.2114
    Abstract ( )   PDF (723KB) ( )
    References | Related Articles | Metrics
    Aiming at the problem of motion planning for unmanned surface vehicles, a new method of calculating the Dubins path by a purely geometric approach was proposed based on theoretical analysis of the Dubins path; no equation solving is involved. Firstly, the steering circles were calculated according to the motion state of the unmanned surface vehicle. Then, the geometric method was used to compute the common tangents of the steering circles. Finally, the Dubins path was obtained by connecting the common tangents. The effectiveness of the proposed method was verified through five groups of simulation experiments. The first four covered all cases arising in Dubins path calculation and verified that the proposed algorithm applies to a variety of Dubins path computations; the last was used for path planning and motion state adjustment of the unmanned surface vehicle. The simulation results show that the motion planning algorithm based on the Dubins path is feasible.
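    As an example of the pure-geometry idea, the RSR variant of the Dubins path (right turn, straight segment, right turn) follows directly from the two right-turn circles and their outer common tangent — the paper covers all tangent and connection cases; this sketch shows only one:

```python
import math

def rsr_length(p0, h0, p1, h1, r):
    """Length of the RSR Dubins path from pose (p0, heading h0) to
    (p1, heading h1) with turn radius r, computed purely geometrically.
    Headings are radians, measured from the +x axis."""
    # centres of the clockwise (right-turn) circles at start and goal
    c0 = (p0[0] + r * math.sin(h0), p0[1] - r * math.cos(h0))
    c1 = (p1[0] + r * math.sin(h1), p1[1] - r * math.cos(h1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.hypot(dx, dy)                  # outer tangent = straight segment length
    theta = math.atan2(dy, dx)              # heading along the outer tangent
    arc0 = (h0 - theta) % (2 * math.pi)     # clockwise arc swept on the first circle
    arc1 = (theta - h1) % (2 * math.pi)     # clockwise arc swept on the second circle
    return r * (arc0 + arc1) + d
```

For equal-radius right-turn circles the outer tangent is parallel to the line of centres, so the segment length equals the centre distance and no equations need solving — the same observation underlies the method above.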
    Indoor displacement calculation method based on smart phone sensors
    ZHONG Yong, GUAN Jichang, YANG Fan
    2017, 37(7):  2118-2123.  DOI: 10.11772/j.issn.1001-9081.2017.07.2118
    Abstract ( )   PDF (932KB) ( )
    References | Related Articles | Metrics
    Structural construction is the foundation of indoor map building, and indoor distance measurement is one of its core problems. In order to solve the problem of high cost or low accuracy in existing methods, a distance measuring method based on counting steps and estimating strides from multi-sensor data was proposed. In the step counting stage, the optimal threshold was calculated following the idea of Support Vector Machine (SVM), which gives the model excellent generalization ability. In the step validity detection stage, the variance of the orientation sensor data was used to filter out the effective displacement steps. Finally, the stride estimation model was used to estimate the stride, and the effective displacement was then calculated. In practical application, the proposed method proves effective, with an overall error of about 4%, which meets the accuracy required for building indoor maps. It is a low-cost and reliable displacement calculation method for indoor map construction.
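    The counting-and-stride pipeline can be sketched as threshold-crossing detection on the acceleration magnitude followed by a stride multiplication (the SVM-style threshold selection and the paper's stride model are replaced here by fixed illustrative values):

```python
import math

def count_steps(acc, threshold):
    """Count steps as upward crossings of the acceleration-magnitude
    threshold (one crossing per gait cycle)."""
    steps, above = 0, False
    for a in acc:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a < threshold:
            above = False
    return steps

def stride_distance(steps, stride):
    """Displacement = number of valid steps x estimated stride length (m)."""
    return steps * stride

# five simulated gait cycles around gravity (9.8 m/s^2), 20 samples per cycle
acc = [9.8 + 3.0 * math.sin(2 * math.pi * t / 20) for t in range(100)]
```

The paper additionally rejects steps whose orientation-variance is too high before multiplying by the stride, which is what turns raw step counts into effective displacement.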
    Airline predicting algorithm based on improved Markov chain
    WANG Zhongqiang, CHEN Jide, PENG Jian, HUANG Feihu, TONG Bo
    2017, 37(7):  2124-2128.  DOI: 10.11772/j.issn.1001-9081.2017.07.2124
    Abstract ( )   PDF (756KB) ( )
    References | Related Articles | Metrics
    In the transportation field, analyzing passengers' travel destinations brings considerable commercial value; however, such research is difficult because of the uncertainty of travel behavior. Existing studies use entropy to measure the uncertainty of human mobility and describe individual travel features, taking into account the spatiotemporal correlation of individual trajectories, but they cannot achieve the desired accuracy. Therefore, an airline prediction algorithm based on an improved Markov chain was proposed to predict passengers' travel destinations. Firstly, the distance distribution, site distribution and temporal regularity in passengers' historical travel records were analyzed. Then, the dependence of human mobility on historical behavior and current location was analyzed. Finally, the characteristics of passengers' permanent residence and the probability of exploring new airlines were incorporated into the calculation of the transition matrix, and the improved Markov chain algorithm was implemented to predict passengers' next travels. The experimental results show that the average prediction accuracy of the proposed model reaches 66.4%. Applied to customer travel analysis, the research can help airline companies better predict passenger travel and provide personalized travel services.
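    A toy version of the improved transition computation: first-order Markov counts from a passenger's history, mixed with a pull toward the permanent residence (the mixing weight and scoring form are illustrative stand-ins for the paper's residence and exploration terms):

```python
from collections import defaultdict

def build_transition(history):
    """First-order Markov transition counts from a passenger's past trips,
    given as an ordered list of airport codes."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(history, history[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, current, home=None, home_w=0.3):
    """Predict the next destination from `current`. home_w mixes in a pull
    toward the passenger's permanent residence, an illustrative analogue
    of the residence characteristic added to the transition matrix."""
    row = counts.get(current, {})
    total = sum(row.values())
    scores = {d: (1 - home_w) * c / total for d, c in row.items()} if total else {}
    if home is not None:
        scores[home] = scores.get(home, 0.0) + home_w
    return max(scores, key=scores.get) if scores else home
```

Without the residence term the prediction is the plain maximum-likelihood Markov choice; raising `home_w` shifts it toward trips back to the passenger's base, mimicking the paper's improvement.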
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn