
Table of Contents

    10 March 2015, Volume 35 Issue 3
    Reliable wave division multiplexing/time division multiplexing passive optical network with cost-efficient hybrid protection
    XIONG Yu, TANG Xiaofei, JIANG Jing
    2015, 35(3):  601-605.  DOI: 10.11772/j.issn.1001-9081.2015.03.601
    Abstract | PDF (888KB)

    To limit the number of components involved in protection switching and thereby reduce the protection cost, a new reliable Wave Division Multiplexing/Time Division Multiplexing Passive Optical Network (WDM/TDM-PON) architecture with cost-efficient hybrid protection was proposed. Firstly, a logic decision unit, a protection path control unit and a backup transceiver unit were designed in the Optical Line Terminal (OLT) so that only failed components in the Wave Division Multiplexing (WDM) segment are switched to their backups. Secondly, by employing a cross-bus structure in the Time Division Multiplexing (TDM) segment, fast protection switching was achieved in a distributed manner. According to the analytical results for the hybrid protection, the proposed architecture provides fast and full protection, with a recovery time of 1.5 to 2.4 ms, against Feed Fiber (FF), Distribution Fiber (DF) and Last Mile Fiber (LMF) failures. Moreover, the proposed architecture significantly reduces the protection overhead and achieves good scalability.

    Dual-cluster-head routing algorithm based on location information
    LIN Qizhong, ZHANG Dongmei, WANG Cong, XU Kui
    2015, 35(3):  606-609.  DOI: 10.11772/j.issn.1001-9081.2015.03.606
    Abstract | PDF (796KB)

    To deal with the energy-efficient route selection problem in Wireless Sensor Networks (WSN), an Energy-Efficient routing algorithm with Location information and Double cluster heads based on Hybrid Energy-Efficient Distributed clustering (HEED-EELD) was proposed. Assuming that all network nodes have location awareness, the network was divided into hierarchies according to the best single-hop distance, so each node determined its hierarchy from its own location. Double cluster heads were selected to share the work of a single cluster head and to balance the energy consumption. In the inter-cluster multi-hop routing, the cluster head selected the optimal route based on location, distance, and a cost function of residual energy. Matlab simulation results show that, compared with the Low-Energy Adaptive Clustering Hierarchy (LEACH) and HEED algorithms, HEED-EELD has obvious advantages in network lifetime, energy efficiency and energy balance.

    Multi-path routing protocol based on three-dimensional space and regional co-evolution in wireless sensor network
    REN Xiuli, WANG Chong
    2015, 35(3):  610-614.  DOI: 10.11772/j.issn.1001-9081.2015.03.610
    Abstract | PDF (739KB)

    To solve the problem of unbalanced energy consumption in Wireless Sensor Networks (WSN), a Multi-path Routing Protocol based on Three-dimensional Space and Regional Co-evolution (MRPTSRC) was proposed. A zoning model was designed to divide the one-hop neighborhood nodes into a set of subspaces. MRPTSRC selected the local optimum node from every subspace and decided the next-hop node by a Regional Co-evolution Algorithm (RCA). A weighting strategy for the forward local optimum node was proposed to escape from local optima and accelerate convergence toward the Sink node. Simulations were conducted on the NS-2 platform: compared with DEgree COnstrained Routing (DECOR) and the Forward-Aware Factor for Energy Balance Routing Protocol (FAF-EBRP), the time to the first node death under MRPTSRC increased by 6% and 3% of the total time, respectively. Compared with FAF-EBRP, the ratio of dead nodes and the relay time declined by up to 38% and 30% respectively, and the standard deviation of the residual energy decreased by 16.7%. The network lifetime of MRPTSRC increased by 30% compared with DECOR. The simulation results show that MRPTSRC can effectively improve network performance.

    Distributed dynamic bandwidth allocation algorithm based on proportional-integral controller
    ZHAO Haijun, LI Min, LI Mingdong, PU Bin
    2015, 35(3):  615-619.  DOI: 10.11772/j.issn.1001-9081.2015.03.615
    Abstract | PDF (763KB)

    Aiming at fair and efficient bandwidth allocation for geographically distributed control systems, a distributed dynamic bandwidth allocation algorithm was proposed. Firstly, the bandwidth allocation problem was formulated as a convex optimization problem, namely maximizing the sum of the utilities of all control systems. Further, following the idea of distributed bandwidth allocation, each control system varied its sampling period based on congestion information fed back from the network, and obtained the maximum sampling or transmission rate it could use. Then the interaction between the control systems and the links was modelled as a time-delay dynamical system, and a Proportional-Integral (PI) controller was used as the link queue controller to realize the algorithm. The simulation results show that the proposed algorithm makes the transmission rates of all plants converge within 10 seconds to the value at which the bandwidth is shared equally; meanwhile, the PI controller stabilizes the queue around the desired set point of 50 packets and accurately and steadily tracks the input signal, maximizing the performance of all control systems.
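
    A minimal sketch of the queue-regulation idea above, assuming a discrete-time PI controller driving a link queue toward the 50-packet set point; the gains, the update interval and the toy queue dynamics are illustrative, not taken from the paper:

```python
class PIQueueController:
    """Discrete PI controller that regulates a link queue to a set point."""

    def __init__(self, setpoint=50.0, kp=0.5, ki=0.05):
        self.setpoint, self.kp, self.ki = setpoint, kp, ki
        self.integral = 0.0

    def update(self, queue_len, dt=1.0):
        error = self.setpoint - queue_len   # packets below (+) or above (-) target
        self.integral += error * dt         # integral term removes steady-state error
        return self.kp * error + self.ki * self.integral  # rate correction for sources

controller = PIQueueController()
queue = 120.0                               # initial backlog in packets
for _ in range(100):
    queue = max(0.0, queue + controller.update(queue))  # toy queue dynamics
print(round(queue, 1))                      # settles near the 50-packet set point
```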

    Coverage hole detection algorithm based on Voronoi diagram in wireless sensor network
    DAI Guoyong, CHEN Luyi, ZHOU Binbin, XU Ping
    2015, 35(3):  620-623.  DOI: 10.11772/j.issn.1001-9081.2015.03.620
    Abstract | PDF (609KB)

    To address the Coverage Hole (CH) problem in Wireless Sensor Networks (WSN) caused by random node deployment or by nodes running out of energy, a novel coverage hole detection algorithm was proposed. The location information of the sensor nodes was used to build a Voronoi diagram of the monitored area. Then the distances between each sensor node and the vertices or edges of its Voronoi cell were calculated to decide whether a coverage hole exists and to identify the border nodes. Simulations were conducted to evaluate the performance of the proposed algorithm under different sensing ranges and node densities. The comparison with the Path Density (PD) algorithm shows that the proposed algorithm achieves about a 10% improvement in both average detection time and average energy consumption, which is important for prolonging the network lifetime.
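
    The detection criterion lends itself to a compact sketch. The snippet below assumes scipy's Voronoi implementation and the common rule that a node whose Voronoi cell reaches beyond the sensing range (or is unbounded) borders a coverage hole; the node positions and sensing range are made up for the example:

```python
import numpy as np
from scipy.spatial import Voronoi

def find_border_nodes(points, sensing_range):
    """Flag nodes whose Voronoi cell reaches outside their sensing range,
    i.e. nodes that border a potential coverage hole (sketch)."""
    vor = Voronoi(points)
    border = []
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or not region:      # unbounded cell: treat as border node
            border.append(i)
            continue
        verts = vor.vertices[region]        # vertices of node i's cell
        if np.linalg.norm(verts - points[i], axis=1).max() > sensing_range:
            border.append(i)                # farthest cell vertex is uncovered
    return border

nodes = np.random.rand(50, 2) * 100         # 50 sensors in a 100 x 100 field
print(find_border_nodes(nodes, sensing_range=12.0))
```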

    Influence of slave clock frequency drift on hardware-in-the-loop network testing system
    SUN Leigang, ZHANG Mingqing, KONG Hongshan, LIU Xiaohu
    2015, 35(3):  624-628.  DOI: 10.11772/j.issn.1001-9081.2015.03.624
    Abstract | PDF (851KB)

    Considering the difficulty of achieving accurate clock synchronization with the Precision Time Protocol (PTP) in hardware-in-the-loop network simulation testing, the performance of the PTP simulation system under slave clock frequency drift was studied. The system model and clock model of PTP in a hardware-in-the-loop network environment were established, and the delay estimation error of slave n's estimate of the master clock on a unidirectional transmission line was analytically derived. It turned out that each drifting slave introduces an additive error of identical structure; these error terms are added both in the drifting slave and in its successor, and then percolate down the line together with the previously accumulated error. On this basis, various network simulation scenarios were designed for verification, analysis and testing. The simulation results show that if only one slave clock drifts, the error at the end of the line is slight; but when all the slave clocks drift, the synchronization error is more than 10 times larger, and the same relationship holds for the master versus all slaves drifting, which seriously impacts the system synchronization accuracy. The research provides an important reference for the clock deployment strategy in hardware-in-the-loop network simulation.

    Cyclic redundancy check algorithm based on reliability
    HU Fangjia, ZHOU Shuang'e, ZENG Jun
    2015, 35(3):  629-632.  DOI: 10.11772/j.issn.1001-9081.2015.03.629
    Abstract | PDF (596KB)

    Using the Cyclic Redundancy Check (CRC) criterion in decoding may cause large iteration counts and many errors when the channel condition worsens. Thus, an iterative stopping algorithm based on reliability and a retransmission algorithm were proposed. First, the reliability of the intermediate result was calculated after each iteration, and iteration was stopped early once the reliability reached a threshold. Second, the intermediate result with the maximum reliability was saved and used as the final decoding result. Finally, after each decoding, the maximum reliability was compared against a retransmission threshold to decide whether to retransmit, and the best decoding result was computed from the results of at most three transmissions. Simulations show that, when the signal-to-noise ratio is below 1.2 dB, in comparison with the CRC criterion, the stopping algorithm reduces bit errors by one or two without increasing iterations, and the retransmission algorithm further reduces bit errors by at least two. The reliability-based algorithm achieves fewer bit errors and fewer iterations.
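
    As a rough illustration of the stopping rule only (not of the paper's decoder), the sketch below takes the mean absolute log-likelihood ratio (LLR) as the reliability of an intermediate result, stops early once a threshold is reached, and keeps the most reliable result seen; `llr_update`, the threshold and the final hard decision are all assumptions:

```python
import numpy as np

def decode_with_early_stop(llr_update, llr0, max_iters=50, threshold=4.0):
    """Reliability-based early stopping around an iterative decoder step."""
    llr = best_llr = llr0
    best_rel = np.mean(np.abs(llr0))
    for _ in range(max_iters):
        llr = llr_update(llr)               # one decoding iteration
        rel = np.mean(np.abs(llr))          # reliability of this intermediate result
        if rel > best_rel:                  # keep the most reliable result so far
            best_rel, best_llr = rel, llr
        if rel >= threshold:                # reliable enough: stop iterating early
            break
    return (best_llr > 0).astype(int), best_rel  # hard decision + its reliability

rng = np.random.default_rng(1)
llr0 = rng.normal(0.5, 1.0, size=64)        # noisy channel LLRs
bits, rel = decode_with_early_stop(lambda l: 1.3 * l, llr0)  # toy "decoder" step
print(rel)
```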

    Community detection algorithm based on structural similarity affinity propagation
    SUN Guibin, ZHOU Yong
    2015, 35(3):  633-637.  DOI: 10.11772/j.issn.1001-9081.2015.03.633
    Abstract | PDF (738KB)

    Community structure exists widely in complex networks, so community detection has important theoretical significance and practical value. To improve the performance of community detection in complex networks, a community detection algorithm based on structural similarity affinity propagation was proposed. Firstly, the algorithm adopted structural similarity as the similarity measure between nodes and applied an optimized method to calculate the similarity matrix of the network. Secondly, the algorithm took the similarity matrix as input and clustered with a Fast Affinity Propagation (FAP) algorithm. Finally, the algorithm output the resulting community structure. The experimental results show that on LFR (Lancichinetti-Fortunato-Radicchi) simulated networks, the average Normalized Mutual Information (NMI) of the proposed algorithm is 65.1%, higher than the 45.3% of the Label Propagation Algorithm (LPA) and the 49.8% of the CNM (Clauset-Newman-Moore) algorithm. On real networks, the average modularity of the proposed algorithm is 53.1%, also higher than the 39.9% of LPA and the 47.8% of CNM. The proposed algorithm not only has better community detection ability, but also finds higher-quality community structures.
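
    A minimal sketch of the pipeline, assuming the common cosine-style structural similarity over closed neighborhoods and scikit-learn's affinity propagation with a precomputed similarity matrix; the paper's optimized similarity computation and its FAP variant are not reproduced:

```python
import numpy as np
import networkx as nx
from sklearn.cluster import AffinityPropagation

def structural_similarity_matrix(G):
    """s(u,v) = |N(u) & N(v)| / sqrt(|N(u)| * |N(v)|) over closed neighborhoods."""
    nodes = list(G.nodes())
    nbr = {v: set(G.neighbors(v)) | {v} for v in nodes}
    S = np.zeros((len(nodes), len(nodes)))
    for i, u in enumerate(nodes):
        for j, v in enumerate(nodes):
            S[i, j] = len(nbr[u] & nbr[v]) / np.sqrt(len(nbr[u]) * len(nbr[v]))
    return S, nodes

G = nx.karate_club_graph()                      # classic benchmark network
S, nodes = structural_similarity_matrix(G)
labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(S)
print(dict(zip(nodes, labels)))                 # node -> community id
```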

    Social network model based on micro-blog transmission
    CHEN Xiao, HUANG Shuguang, QIN Li
    2015, 35(3):  638-642.  DOI: 10.11772/j.issn.1001-9081.2015.03.638
    Abstract | PDF (706KB)

    Studying the construction mechanism of the micro-blog transmission network helps to understand the information spreading process on micro-blog platforms deeply, and thereby to obtain effective strategies and suggestions. To this end, a directed and weighted network model was proposed. In the model building process, triad formation was introduced according to the phenomenon that micro-blogs can be retransmitted more than once. Different link directions were used to represent the distinct characteristics of active and famous users. Besides, the dynamic evolution of link weights was considered. Theoretical analysis and simulation results indicate that the strength distribution, the degree distribution and the strength-degree correlation obey power-law distributions with exponents between 1 and 3. The model is also characterized by a high clustering coefficient and a short average path length: the average clustering coefficient is 0.7 and the average path length is less than 6. Actual micro-blog transmission data were also collected to verify the model's correctness.

    Load balancing cloud storage algorithm based on Kademlia
    ZHENG Kai, ZHU Lin, CHEN Youguang
    2015, 35(3):  643-647.  DOI: 10.11772/j.issn.1001-9081.2015.03.643
    Abstract | PDF (938KB)

    Prevailing cloud storage systems normally use a master/slave structure, which may cause performance bottlenecks and scalability problems in some extreme cases, so fully distributed cloud storage based on Distributed Hash Table (DHT) technology is becoming a new choice. How to balance the load across nodes is the key to making this technology applicable. The Kademlia algorithm was used to locate storage targets in a cloud storage system and its load balancing performance was investigated. Since the load balancing performance of the algorithm decreases significantly in a heterogeneous environment, an improved algorithm was proposed which takes the heterogeneous storage capacities of the nodes into account and distributes load according to the capacity of each node. The simulation results show that the proposed algorithm can effectively improve the load balance of the system. Compared with the original algorithm, after a long run (more than 1500 simulated hours), the number of overloaded nodes dropped by an average of 7.0% (light load) to 33.7% (heavy load), the file saving success rate increased by an average of 27.2% (light load) to 35.1% (heavy load), and the communication overhead remained acceptable.
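
    One plausible reading of the capacity-aware improvement is sketched below: among the k nodes whose IDs are XOR-closest to the file key (the standard Kademlia lookup), the storage node is picked with probability proportional to its remaining capacity. The SHA-1 IDs, node names and capacities are illustrative:

```python
import hashlib
import random

def node_id(name):
    """160-bit Kademlia-style identifier."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

def choose_storage_node(file_key, capacities, k=3):
    """Pick among the k XOR-closest nodes, weighted by free capacity."""
    closest = sorted(capacities, key=lambda n: node_id(n) ^ file_key)[:k]
    weights = [capacities[n] for n in closest]  # heterogeneous capacities
    return random.choices(closest, weights=weights, k=1)[0]

capacities = {"n1": 500, "n2": 80, "n3": 950, "n4": 300}  # free space in GB
key = node_id("backup/2015-03/file.bin")
print(choose_storage_node(key, capacities))
```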

    Dual fault-tolerant scheduling algorithm of periodic and aperiodic hybrid real-time tasks in cloud environment
    CAO Jie, ZENG Guosun
    2015, 35(3):  648-653.  DOI: 10.11772/j.issn.1001-9081.2015.03.648
    Abstract | PDF (1153KB)

    Processor failures cannot be ignored in the cloud environment, so fault tolerance becomes a key requirement in the design and development of cloud computing systems. Aiming at the low scheduling efficiency and single task type of most fault-tolerant scheduling algorithms, a fault-tolerant scheduling method based on grouping the processors and the primary/backup copies of hybrid tasks was proposed. A method to determine whether two backup copies can overlap was presented, and calculation formulas were given for the worst-case response time of periodic tasks and the completion time of preemptively executed aperiodic tasks. The simulation results show that the proposed algorithm markedly reduces the number of processors needed and the scheduling computation time compared with the Hybrid real-time task Fault-Tolerant Scheduling (HFTS) algorithm. It is of great significance for improving the reliability of cloud systems, the schedulability of real-time task sets, and processor efficiency.

    Generalized AVL tree with low adjusting ratio and its unified rebalancing method
    JIANG Shunliang, HU Shihong, TANG Yiling, GE Yun, YE Famao, XU Shaoping
    2015, 35(3):  654-658.  DOI: 10.11772/j.issn.1001-9081.2015.03.654
    Abstract | PDF (761KB)

    Traditional AVL (Adelson-Velskii and Landis) tree programming suffers from excessive code, a complex process and a high adjusting ratio. To solve these problems, a unified rebalancing method was developed and a generalized AVL (AVL-N) tree was defined. The unified rebalancing method automatically classifies the type of an unbalanced node in the AVL tree and adjusts the tree shape in a new way without using the standard rotations. The relaxed balance of the AVL-N tree allows the height difference between the right and left sub-trees of a node to be up to N (N ≥ 1). When insertions and deletions performed on an AVL-N tree push the height difference of some node beyond N, the unified rebalancing is applied to rearrange that node's descendants. The simulation results indicate that the adjusting ratio of the AVL-N tree falls significantly as N increases; it is less than 4% for N=5 and less than 0.1% for N=13. The adjusting ratio of the AVL-N tree is far below that of other classic data structures such as the red-black tree, and the structure allows a greater degree of concurrency than the original proposal.
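
    A minimal sketch of the relaxed balance criterion, assuming a plain binary-tree node with a cached height; the unified rebalancing itself (type classification and shape adjustment without standard rotations) is not reproduced here:

```python
def height(node):
    """Height of a possibly empty subtree; an empty tree has height -1."""
    return -1 if node is None else node.height

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.height = 1 + max(height(left), height(right))

def needs_rebalancing(node, N=5):
    """AVL-N rule: rearrange descendants only when the sub-tree height
    difference exceeds N (N >= 1); N = 1 gives the classic AVL criterion."""
    return abs(height(node.left) - height(node.right)) > N

root = Node(8, Node(4, Node(2, Node(1))), Node(12))
print(needs_rebalancing(root, N=1), needs_rebalancing(root, N=5))  # True False
```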

    Multi-label classification algorithm based on joint probability
    HE Peng, ZHOU Lijuan
    2015, 35(3):  659-662.  DOI: 10.11772/j.issn.1001-9081.2015.03.659
    Abstract | PDF (673KB)

    Since the Multi-Label k Nearest Neighbor (ML-kNN) algorithm ignores the correlation between labels, a multi-label classification algorithm based on joint probability was proposed. Firstly, the prior probabilities were calculated while traversing the sample space. Secondly, the conditional probability that a label appears m times among the k nearest neighbors, given that the label takes value 1 or 0, was computed. Then the label joint probability distribution, also computed while traversing the sample space, was used as the multi-label classification model. Finally, the coRrelation Multi-Label kNN (RML-kNN) classification model was derived by maximizing the posterior probability. Theoretical analysis and comparison experiments on several datasets show that RML-kNN raises Subset Accuracy to 0.9612 in the best case, a 2.25% improvement over ML-kNN; it significantly reduces Hamming Loss, reaching a minimum of 0.0022; and its Micro-FMeasure reaches up to 0.9767, a 2.88% improvement over ML-kNN in the best case. The experimental results show that RML-kNN outperforms ML-kNN since it integrates the correlation between labels during classification.

    Novel quantum differential evolutionary algorithm for blocking flowshop scheduling
    QI Xuemei, WANG Hongtao, CHEN Fulong, TANG Qimei, SUN Yunxiang
    2015, 35(3):  663-667.  DOI: 10.11772/j.issn.1001-9081.2015.03.663
    Abstract | PDF (746KB)

    A Novel Quantum Differential Evolutionary (NQDE) algorithm was proposed for the Blocking Flowshop Scheduling Problem (BFSP) to minimize the makespan. The NQDE algorithm combined the Quantum Evolutionary Algorithm (QEA) with Differential Evolution (DE), and a novel quantum rotating gate was designed to control the evolutionary trend and increase population diversity. An effective Quantum-inspired Evolutionary Algorithm-Variable Neighborhood Search (QEA-VNS) co-evolutionary strategy was also developed to enhance the global search ability of the algorithm and further improve solution quality. The proposed algorithm was tested on Taillard's benchmark instances. The results show that NQDE obtains evidently more optimal solutions than the current best heuristic, the Improved Nawaz-Enscore-Ham heuristic (INEH); specifically, NQDE improves the best-known solutions on 64 of the 110 instances. Moreover, NQDE outperforms the valid meta-heuristics New Modified Shuffled Frog Leaping Algorithm (NMSFLA) and Hybrid Quantum DE (HQDE), with an Average Relative Percentage Deviation (ARPD) about 6% lower than theirs. NQDE is therefore shown to be suitable for large-scale BFSP.

    Application of restricted velocity particle swarm optimization and self-adaptive velocity particle swarm optimization to unconstrained optimization problem
    XU Jun, LU Haiyan, SHI Guijuan
    2015, 35(3):  668-674.  DOI: 10.11772/j.issn.1001-9081.2015.03.668
    Abstract | PDF (1151KB)

    Restricted Velocity Particle Swarm Optimization (RVPSO) and Self-Adaptive Velocity Particle Swarm Optimization (SAVPSO) are two recently proposed Particle Swarm Optimization (PSO) algorithms specialized for the Constrained Optimization Problem (COP), but to our knowledge no research has been done on applying them to the Unconstrained Optimization Problem (UOP). To this end, the effectiveness and performance characteristics of the two algorithms on UOP were investigated. Moreover, in view of their relatively strong conservativeness, the algorithms were improved by incorporating a chaos factor and a random strategy, respectively, into the search mechanism to enhance their global exploration ability. The effects of different parameter settings on the performance of all these algorithms were also studied. The performance was evaluated on 5 typical benchmark functions. Experimental and comparison results show that the improved RVPSO is more robust and explores better globally than RVPSO, but may easily get trapped in local optima on high-dimensional multi-modal functions; the improved SAVPSO has stronger exploration ability and faster convergence than the improved RVPSO, and achieves more accurate solutions on high-dimensional multi-modal functions. The improved SAVPSO therefore has competitive global optimization ability and is an effective algorithm for solving unconstrained optimization problems.

    Improved particle swarm optimization algorithm based on centroid and self-adaptive exponential inertia weight
    CHEN Shouwen
    2015, 35(3):  675-679.  DOI: 10.11772/j.issn.1001-9081.2015.03.675
    Abstract | PDF (723KB)

    The Particle Swarm Optimization (PSO) algorithm is easily trapped in local optima and has low convergence accuracy. To improve its optimization capability, an improved algorithm, Centroid-based PSO with self-adaptive Exponential inertia weight (CEPSO), was proposed. Firstly, weighting coefficients were calculated from the fitness of each particle. Secondly, two centroids, the population centroid and the best-individual centroid, were constructed as weighted combinations of each particle's current position and its personal best position so far. Finally, the velocity updating formula was adjusted using the two centroids together with a self-adaptive exponential inertia weight designed from the swarm diversity, according to the different working stages of the swarm. The experimental results show that CEPSO can enhance the search ability and has strong stability.
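
    One plausible form of a diversity-driven exponential inertia weight is sketched below; the paper's exact formula and constants may differ, and the normalization scale `d0` is an assumption:

```python
import numpy as np

def swarm_diversity(positions):
    """Mean distance of the particles to the population centroid."""
    centroid = positions.mean(axis=0)
    return np.linalg.norm(positions - centroid, axis=1).mean()

def exp_inertia_weight(diversity, d0=1.0, w_min=0.4, w_max=0.9):
    """Close to w_max while the swarm is spread out (exploration), decaying
    exponentially toward w_min as the swarm contracts (exploitation)."""
    return w_min + (w_max - w_min) * (1.0 - np.exp(-diversity / d0))

positions = np.random.rand(30, 10)   # 30 particles in 10 dimensions
print(exp_inertia_weight(swarm_diversity(positions)))
```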

    Quantum-behaved particle swarm optimization algorithm with crossover operator for multi-dimensional problems
    XI Maolong, SHENG Xinyi, SUN Jun
    2015, 35(3):  680-684.  DOI: 10.11772/j.issn.1001-9081.2015.03.680
    Abstract | PDF (713KB)

    Since good dimension information of particles may be lost when the Quantum-behaved Particle Swarm Optimization (QPSO) algorithm solves multi-dimensional problems, a crossover-operator strategy was introduced to improve solution quality and algorithm performance. Firstly, analysis of the algorithm's whole-solution update and evaluation strategy showed that good dimension information is lost because of mutual interference between dimensions. Secondly, it was noted that executing the evolution dimension by dimension would increase the algorithm complexity exponentially. Finally, a multi-crossover method was employed to increase the probability of retaining excellent dimension information. Comparison and analysis results of the proposed method against linearly and non-linearly decreased coefficient control methods on 12 CEC2005 benchmark functions were given. The simulation results show that the modified algorithm greatly improves on the basic QPSO on 10 functions and also outperforms the other two QPSO variants on 7 functions. Therefore, the proposed method can improve the performance of QPSO effectively.

    Improved multi-objective firefly algorithm-based fuzzy clustering
    ZHU Shuwei, ZHOU Zhiping, ZHANG Daowen
    2015, 35(3):  685-690.  DOI: 10.11772/j.issn.1001-9081.2015.03.685
    Abstract | PDF (942KB)

    Most traditional fuzzy clustering algorithms optimize only a single objective function, so a comprehensive and accurate clustering result cannot be achieved. To solve this problem, a new fuzzy clustering technique based on an improved multi-objective Firefly Algorithm (FA) was proposed. Firstly, a mutation mechanism with dynamically decreasing probability, similar to the mutation operator in the Differential Evolution (DE) algorithm, was introduced into FA to obtain more uniformly distributed non-dominated solutions, and the scaling factor was adaptively adjusted to enhance the efficiency of the mutation. When the archive was full, some of its solutions were selected to join the current population for the next generation, improving the efficiency of the algorithm. Finally, the algorithm was applied to fuzzy clustering, simultaneously optimizing two fuzzy clustering indices, and one solution was selected from the final archive as the clustering result. The experimental results on five groups of data show that the proposed algorithm raises the clustering validity index by 2 to 8 percentage points over traditional single-objective clustering algorithms, so it achieves higher clustering accuracy and better overall performance.

    Multi-group firefly algorithm based on simulated annealing mechanism
    WANG Mingbo, FU Qiang, TONG Nan, LIU Zheng, ZHAO Yiming
    2015, 35(3):  691-695.  DOI: 10.11772/j.issn.1001-9081.2015.03.691
    Abstract | PDF (727KB)

    To address premature convergence and local optima in the Firefly Algorithm (FA), a multi-group firefly algorithm based on a simulated annealing mechanism (MFA_SA) was proposed, which evenly divides the firefly population into several sub-populations with different parameters. To keep the algorithm from falling into local optima, a simulated annealing mechanism was adopted that accepts good solutions with high probability and keeps bad solutions with small probability. Meanwhile, a variable distance weight was introduced into the population optimization process to dynamically adjust the "vision" of each firefly. Experiments were conducted on 5 benchmark functions against three comparison algorithms. The results show that MFA_SA finds the global optimal solution on 4 test functions and achieves much better optima, averages and variances than the comparison algorithms, which demonstrates the effectiveness of the new algorithm.
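
    The acceptance mechanism reads as the classic Metropolis rule; a minimal sketch for a minimization problem (the cooling schedule and the multi-group bookkeeping are omitted):

```python
import math
import random

def sa_accept(f_new, f_old, temperature):
    """Accept improvements always; accept a worse solution with the small
    probability exp(-delta / T), which shrinks as the temperature cools."""
    if f_new <= f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / temperature)

# Example: a move that worsens the objective by 2.0 at temperature 5.0
print(sa_accept(12.0, 10.0, temperature=5.0))
```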

    Application of chaotic electromagnetism mechanism algorithm based on limited memory Broyden-Fletcher-Goldfarb-Shanno in path planning
    QIAO Xianwei, QIAO Lei
    2015, 35(3):  696-699.  DOI: 10.11772/j.issn.1001-9081.2015.03.696
    Abstract | PDF (592KB)

    The Electromagnetism Mechanism (EM) algorithm easily falls into local optima and has poor search capability, so the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method and a chaotic model were combined into EM. The main idea was to use the high-precision L-BFGS in the later stage of the algorithm, and to use the chaotic model throughout the whole algorithm to keep the population diverse. Tests suggest that the combined algorithm can jump out of local optima, finds better solutions, and converges faster than EM, Particle Swarm Optimization (PSO) and PSO with Time-Varying Accelerator Coefficients (TVAC). Tests also show that it can be used in path planning with better results than both PSO and Ant Colony Optimization (ACO), so the algorithm can be applied to discrete-domain problems.

    Meet-in-the-middle attack on 11-round reduced 3D block cipher
    LI Lingchen, WEI Yongzhuang, ZHU Jialiang
    2015, 35(3):  700-703.  DOI: 10.11772/j.issn.1001-9081.2015.03.700
    Abstract | PDF (556KB)

    Focusing on the security analysis of the 3D block cipher, a new meet-in-the-middle attack on this algorithm was proposed. Based on the structure of the 3D algorithm and the differential properties of its S-box, the number of bytes required to construct the multisets in the attack was reduced and a new 6-round meet-in-the-middle distinguisher was constructed. By extending the distinguisher 2 rounds forward and 3 rounds backward, an 11-round meet-in-the-middle attack on the 3D algorithm was finally achieved. The results show that the distinguisher requires 42 bytes, and the attack requires a data complexity of about 2^497 chosen plaintexts, a time complexity of about 2^325.3 11-round 3D encryptions, and a memory complexity of about 2^342 bytes. The new attack shows that 11-round 3D is not immune to the meet-in-the-middle attack.

    Semi-fragile net-flow fingerprint coding scheme based on adaptive net-flow characteristic
    LEI Cheng, ZHANG Hongqi, SUN Yi, DU Xuehui
    2015, 35(3):  704-711.  DOI: 10.11772/j.issn.1001-9081.2015.03.704
    Abstract | PDF (1455KB)

    Aiming at the unavailability and unreliability of net-flow fingerprints caused by net-flow transformation and network jitter, a semi-fragile net-flow fingerprint coding scheme based on adaptive net-flow characteristics (ACSF) was proposed. Firstly, ACSF generated the Hash Message Authentication Code (HMAC) encryption key, determined the HMAC scrambling method and chose the initial phase of the Pseudo-Noise (PN) code in accordance with net-flow characteristic parameters. The key space was enlarged to O((k+1)·(S·O(KEN))), increasing the computational complexity of compromising the scheme, and the net-flow fingerprint was made self-adaptive, which decreased the computational complexity of the decoder to O(k^2·l·n_f) and enhanced decoding efficiency. Secondly, to make the fingerprint semi-fragile, Direct Sequence Spread Spectrum (DSSS) was used to filter out non-malicious processing, reaching more than 90% correctness under a 66.7% multi-flow disturbance rate, and HMAC was used to locate malicious tampering with at least 98.3% accuracy. Finally, the security, tamper-localization accuracy and disturbance resistance of ACSF were analyzed and verified by experiments.

    Provable identity-based signcryption scheme
    ZUO Liming, CHEN Renqun, GUO Hongli
    2015, 35(3):  712-716.  DOI: 10.11772/j.issn.1001-9081.2015.03.712
    Abstract | PDF (770KB)

    Cryptanalysis of a signcryption scheme without bilinear pairing recently proposed by Gao et al. (GAO J, WU X, QIN Y. Secure certificateless signcryption scheme without bilinear pairing. Application Research of Computers, 2014, 31(4): 1195-1198) showed that the scheme cannot resist public-key substitution attacks. A new ID-based signcryption scheme without bilinear pairing was then proposed and proved secure in the random oracle model against the first type of attacker. Finally, the efficiency of the new scheme was compared with other schemes. The new scheme uses only 3 hash operations and 7 point multiplications, so it has higher computational efficiency than the other signcryption schemes.

    Text steganographic method with hierarchical security
    XIANG Lingyun, WANG Xinhui
    2015, 35(3):  717-721.  DOI: 10.11772/j.issn.1001-9081.2015.03.717
    Abstract | PDF (816KB)

    To address the low security and limited capacity of steganographic methods based on a single data type, a new text steganographic method with hierarchical security was proposed. First, multiple types of data in a cover document were regarded as optional steganographic covers, and a hierarchically secure steganographic model was built upon steganographic security levels defined by taking the characteristics of the different data types and steganalysis as evaluation criteria. Then, a security level was adaptively determined by the length of the secret message, and the message was embedded into the selected independent data types in the cover document with the help of the model. Theoretical analysis and experimental results show that, compared with steganography based on a single data type, the proposed method expands the steganographic capacity and reduces the modifications of the statistical characteristics of any single data type in the cover document when the same secret message is embedded, thereby improving the security of the secret message.

    Adaptive audio steganography algorithm based on wavelet packet decomposition and matrix code
    ZHANG Yao, PAN Feng, SHEN Junwei
    2015, 35(3):  722-725.  DOI: 10.11772/j.issn.1001-9081.2015.03.722
    Abstract | PDF (575KB)

    Aiming at the low carrier utilization, poor imperceptibility and small embedding capacity of audio steganography, an adaptive audio steganography based on wavelet packet decomposition and matrix code was proposed. By comparing the wavelet-packet coefficients before and after the audio's MP3 compression, the algorithm took the positions of the coefficient bits left unchanged as embedding carriers, which effectively increased the embedding capacity. It also improved the matrix code by using a chaotic model to generate random triple-groups, which improved security and efficiency. In capacity, the proposed algorithm gains about 30% over the algorithm that directly uses the medium-frequency sub-bands as carriers; in Signal-to-Noise Ratio (SNR), it gains about 9% over the matrix steganography with fixed triple-groups. The experimental results show that the algorithm is correct and can basically satisfy large-capacity, secure communication.
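
    The matrix-code part can be illustrated with the classic (1, 3, 2) scheme, which hides 2 message bits in each group of 3 carrier bits while flipping at most one of them; the chaotic scrambling of group positions described above is omitted:

```python
def matrix_embed(cover, msg):
    """Embed 2 message bits per group of 3 cover bits (at most 1 flip)."""
    c = list(cover)
    for g in range(len(msg) // 2):
        a, b = msg[2 * g], msg[2 * g + 1]
        x1, x2, x3 = c[3 * g: 3 * g + 3]
        s1, s2 = a ^ x1 ^ x3, b ^ x2 ^ x3      # syndrome of the group
        if (s1, s2) == (1, 0):
            c[3 * g] ^= 1                      # fix first message bit only
        elif (s1, s2) == (0, 1):
            c[3 * g + 1] ^= 1                  # fix second message bit only
        elif (s1, s2) == (1, 1):
            c[3 * g + 2] ^= 1                  # one flip fixes both bits
    return c

def matrix_extract(stego, nbits):
    out = []
    for g in range(nbits // 2):
        x1, x2, x3 = stego[3 * g: 3 * g + 3]
        out += [x1 ^ x3, x2 ^ x3]
    return out

cover, msg = [0, 1, 1, 0, 0, 1], [1, 0, 1, 1]
stego = matrix_embed(cover, msg)
assert matrix_extract(stego, len(msg)) == msg
assert sum(a != b for a, b in zip(cover, stego)) <= len(msg) // 2
```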

    Arnold digital image encryption algorithm based on sparse matrix
    JIANG Fan, WU Xiaotian, SUN Wei
    2015, 35(3):  726-731.  DOI: 10.11772/j.issn.1001-9081.2015.03.726
    Abstract | PDF (1210KB)

    To address the small key space of existing Arnold digital image encryption algorithms, a new digital image encryption algorithm based on a sparse matrix and the Arnold transformation, SMA (Sparse Matrix Arnold), was proposed; to further improve its security, an improved algorithm, 3SMA (3-round SMA), using multi-layered decomposition and a three-tier encryption structure was also proposed. The SMA algorithm used the Arnold transform to spread the plaintext image into a large sparse matrix and then removed the invalid sparse matrix elements to obtain the ciphertext. For decryption, SMA took the ciphertext image as input and moved its pixels back to their original positions according to a previously computed swapping table. The 3SMA algorithm comprised three different round keys; in each round, the improved algorithm processed two color components of the plaintext image to achieve encryption. The experimental results show that the proposed algorithm and its improvement obtain higher security than the Arnold encryption algorithms analyzed.
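
    For reference, the underlying Arnold scrambling of an N x N image is the cat map (x, y) -> (x + y, x + 2y) mod N; the sketch below shows plain Arnold rounds only, while SMA's spreading into a large sparse matrix and 3SMA's three-tier structure are not reproduced:

```python
import numpy as np

def arnold(img, rounds=1):
    """Apply `rounds` iterations of the Arnold cat map to a square image."""
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]  # bijective map
        out = scrambled
    return out

img = np.arange(16).reshape(4, 4)
print(arnold(img, rounds=2))
```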

    Security mechanism based on path sequence detection in wireless sensor network
    CHEN Zhuo, TAN Zhihuan
    2015, 35(3):  732-735.  DOI: 10.11772/j.issn.1001-9081.2015.03.732
    Abstract | PDF (803KB)

    To address the vulnerability of Wireless Sensor Networks (WSN) to attacks, a new security mechanism based on path sequence detection was proposed. By constructing reasonable path sequences and performing sequence verification, the mechanism can check the routing of packets and authenticate the previous hop, thereby guaranteeing the correctness of the routing rule and the authenticity of the data. Performance analysis and simulation results show that the probability of detecting attacks on the data transmission path does not decrease with the network scale, demonstrating that the proposed mechanism can effectively detect tampering with the data transmission path and improve the security of the wireless sensor network.

    Agent-based multi-layered security detection method in wireless sensor network
    DANG Xin, WANG Yan, WAN Qiang
    2015, 35(3):  736-740.  DOI: 10.11772/j.issn.1001-9081.2015.03.736
    Abstract | PDF (772KB)

    To improve the communication security of Wireless Sensor Networks (WSN), an Agent-based multi-layered security detection method was proposed. Nodes were divided into three layers by function, and each layer performed an appropriate detection method to maintain network security. To reduce energy consumption, mobile Agent technology was applied to collect data efficiently, and the Agent nodes carried out the underlying security detection tasks to save energy and prolong the life of the head nodes. The experimental results show that, compared with the methods of Su and eHIDS, the proposed algorithm increases the detection rate by up to 35% and decreases the false alarm rate by up to 15%, while also performing better in energy consumption, making it an effective method for detecting attacks in WSN.

    Novel anonymous authentication scheme without cryptography in vehicular Ad Hoc network
    ZHANG Gang, SHI Runhua, ZHONG Hong, WANG Yimin
    2015, 35(3):  741-745.  DOI: 10.11772/j.issn.1001-9081.2015.03.741
    Abstract | PDF (768KB)

    Concerning the problem of preserving users' identity privacy during authentication in Vehicular Ad Hoc NETworks (VANET), a novel anonymous authentication scheme without cryptography, based on the theory of linear equations, was proposed. The scheme constructed an anonymous authentication model on the basic theory of solving linear equations without employing any symmetric or asymmetric cryptosystem, which avoids disclosure of identity information during anonymous authentication. Besides, the proposed scheme not only satisfies the requirement of anonymous authentication between the On-Board Unit (OBU) and the Road Side Unit (RSU), but also ensures anonymous authentication among OBUs. The security and complexity analysis shows that the proposed scheme is secure and efficient, with low computation and communication costs.

    Searching algorithm of trust path by filtering
    CONG Liping, TONG Xiangrong, JIANG Xianxu
    2015, 35(3):  746-750.  DOI: 10.11772/j.issn.1001-9081.2015.03.746
    Abstract | PDF (934KB)

    Existing trust models have two shortcomings in trust path search: first, the factors affecting the trust value are not fully considered in the search, or are all treated the same; second, many algorithms ignore the importance of the number of interactions. In view of these problems, a trust path search algorithm based on graph theory was proposed. The concept of probability of honesty was put forward to further weigh the credibility of a node; used as the search priority basis, it makes the priority search more reasonable. Meanwhile, the algorithm searches by filtering and uses the probabilities of the multiple factors that affect node credibility. Analysis shows that the complexity of the proposed algorithm is of order (n-m)^2, much lower than the order n^2 of the original fine-grained algorithm. The experimental results show that the proposed algorithm can better filter out malicious nodes, improve the accuracy of trust path search, and resist the attacks of malicious nodes.

    User-friendly privacy monitoring and management mechanism on Android
    HUANG Jie, TAN Bo, TAN Chengxiang
    2015, 35(3):  751-755.  DOI: 10.11772/j.issn.1001-9081.2015.03.751
    Abstract | PDF (793KB)

    To solve the excessive authorization problem of Android, a User-Friendly privacy Monitoring and management mechanism on anDroid, named UFMDroid, was proposed. Proxy redirection was used to implant a privacy-related behavior monitoring module and a fine-grained resource constraint module into the Android control flow. UFMDroid analyzed existing applications on the Android market and constructed a preset permission profile by hierarchical clustering with the Euclidean distance metric to filter suspicious permissions; a static threat value was obtained by calculating the distance between the preset and the current permission configuration. Privacy-related behaviors of an application were classified, and both individual and combination threats were considered in calculating the dynamic runtime threat value. In addition, a fake-data mechanism was introduced to prevent applications from crashing when a permission is withdrawn. The experimental results show that UFMDroid can monitor the usage of 21 different resources and intercept privacy-leaking behaviors according to the user's configuration, enhancing the security of Android to some extent.

    Heuristic detection system of Trojan based on trajectory analysis
    ZHONG Mingquan, FAN Yu, LI Huanzhou, TANG Zhangguo, ZHANG Jian
    2015, 35(3):  756-760.  DOI: 10.11772/j.issn.1001-9081.2015.03.756
    Abstract | PDF (771KB)

    Concerning the low accuracy of active defense technology, a heuristic Trojan detection system based on trajectory analysis was proposed. Two kinds of typical Trojan trajectories were presented, and the danger level of a suspicious file was assessed from the behavioral data along its trajectory using the decision rules and algorithm. The experimental results show that the system detects unknown Trojans better than the traditional method, and that some special Trojans can also be detected.

    Spectral embedded clustering algorithm based on kernel function
    WANG Weidong, LIU Bing, GUAN Hongjie, ZHOU Yong, XIA Shixiong
    2015, 35(3):  761-765.  DOI: 10.11772/j.issn.1001-9081.2015.03.761
    Abstract | PDF (846KB)

    The Spectral Embedded Clustering (SEC) algorithm requires samples to meet the manifold assumption, under which the class labels of samples can always be embedded in a linear space; this provides a new idea for spectral clustering of linearly separable data, but the linear mapping function used by SEC cannot handle nonlinear high-dimensional data. To solve this problem, the linear mapping function was kernelized, building a Spectral Embedded Clustering based on Kernel function (KSEC) model. This model overcomes the linear mapping function's inability to deal with nonlinear data, and simultaneously achieves dimension reduction in the kernel space. The experimental results on real data sets show that the improved algorithm raises the clustering accuracy by 13.11% on average, and by up to 31.62%; for high-dimensional data in particular, the clustering accuracy increases by 16.53% on average. Parameter sensitivity experiments show the stability of the improved algorithm. Compared with traditional spectral clustering algorithms, higher accuracy and better clustering performance are obtained, and the method can be applied to complex image processing fields such as remote sensing imagery.

    Support vector machine combined model forecast based on ensemble empirical mode decomposition-principal component analysis
    SANG Xiuli, XIAO Qingtai, WANG Hua, HAN Jiguang
    2015, 35(3):  766-769.  DOI: 10.11772/j.issn.1001-9081.2015.03.766
    Abstract | PDF (792KB)

    To solve the feature extraction and state prediction problem of intermittent non-stationary time series in the industrial field, a new prediction approach based on Ensemble Empirical Mode Decomposition (EEMD), Principal Component Analysis (PCA) and Support Vector Machine (SVM) was proposed. Firstly, the intermittent non-stationary time series was analyzed at multiple time scales and decomposed by the EEMD algorithm into a set of IMF components of different scales. Then the noise energy was estimated, on the basis of the 3-sigma principle, to determine the cumulative contribution rate adaptively; the PCA algorithm was used to reduce the feature dimension and redundancy and to remove the noise in the IMFs. Finally, with the key SVM parameters determined, the principal components were used as input variables for prediction. Test results on an instance show that the Mean Absolute Error (MAE), Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE) and Mean Squared Percentage Error (MSPE) were 514.774, 78.216, 12.03% and 1.862%, respectively. It is concluded that SVM prediction of the wind farm output power time series is more accurate with PCA than without, because the EEMD and PCA algorithms inhibit mode mixing, reduce non-stationarity and further eliminate the noise.
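
    A rough sketch of the PCA-to-SVM stage using scikit-learn, with random placeholders standing in for the EEMD-derived IMF components; the adaptive 3-sigma choice of the cumulative contribution rate is simplified here to a fixed 95% of retained variance:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
imfs = rng.normal(size=(200, 8))            # placeholder for real IMF components
target = imfs[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=0.95),   # keep 95% cumulative variance
                      SVR(kernel="rbf", C=10.0))
model.fit(imfs[:-20], target[:-20])
print(model.score(imfs[-20:], target[-20:]))    # R^2 on the held-out tail
```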

    Robust soft subspace clustering algorithm with feature weight self-adjustment mechanism
    ZHI Xiaobin, XU Zhaohui
    2015, 35(3):  770-774.  DOI: 10.11772/j.issn.1001-9081.2015.03.770
    Abstract | PDF (736KB)

    Since the Soft subspace Clustering algorithm with a Feature Weight Self-Adjustment mechanism (SC-FWSA) is sensitive to noise, a robust variant based on a non-Euclidean distance, RSC-FWSA, was proposed. RSC-FWSA adaptively generates a weighting function for the data during iteration and computes the cluster centers as the weighted averages of each class. This weighted averaging makes the estimation of the cluster centers relatively insensitive to noise, and improves the clustering accuracy on noisy data with complex structure. The effectiveness of RSC-FWSA was demonstrated by comparative experiments on synthetic and real data. In particular, the results on a noisy synthetic data set and on 3 real data sets, Wine, Zoo and Breastcancer, show that RSC-FWSA significantly improves the clustering accuracy over the original algorithm. Its strong robustness makes RSC-FWSA suitable for clustering high-dimensional, noisy data with complex structure.

    FP-MFIA: improved algorithm for mining maximum frequent itemsets based on frequent-pattern tree
    YANG Pengkun, PENG Hui, ZHOU Xiaofeng, SUN Yuqing
    2015, 35(3):  775-778.  DOI: 10.11772/j.issn.1001-9081.2015.03.775
    Abstract | PDF (591KB)

    Focusing on the drawback that the Discovering Maximum Frequent Itemsets Algorithm (DMFIA) generates many maximum frequent candidate itemsets in each dimension when the dataset has many candidate items but the maximum frequent itemsets are short, an improved algorithm for mining Maximum Frequent Itemsets based on the Frequent-Pattern tree (FP-MFIA) was proposed. Based on the Htable of the FP-tree, the algorithm mines maximum frequent itemsets with a bottom-up search, which accelerates the counting of candidates. Producing low-dimensional infrequent itemsets from the conditional pattern base of every layer during mining, and pruning and reducing the dimensions of candidate itemsets, largely reduces the number of candidate itemsets; taking full advantage of the properties of maximum frequent itemsets further reduces the search space. Timing comparisons under different supports show that FP-MFIA is at least twice as fast as DMFIA and BDRFI (an algorithm for mining frequent itemsets based on dimensionality reduction of frequent itemsets), giving FP-MFIA a clear advantage when the candidate itemsets are high-dimensional.

    Face recognition algorithm based on low-rank matrix recovery and collaborative representation
    HE Linzhi, ZHAO Jianmin, ZHU Xinzhong, WU Jianbin, YANG Fan, ZHENG Zhonglong
    2015, 35(3):  779-782.  DOI: 10.11772/j.issn.1001-9081.2015.03.779
    Abstract | PDF (744KB)

    Since face images may not form an over-complete dictionary and may be corrupted by noise under different viewpoints or lighting conditions, an efficient and effective method for Face Recognition (FR) was proposed, namely Robust Principal Component Analysis with Collaborative Representation based Classification (RPCA_CRC). Firstly, the face training dictionary D0 was decomposed into a low-rank matrix D and a sparse error matrix E. Secondly, the test image was collaboratively represented over the low-rank matrix D. Finally, the test image was classified by the reconstruction error. Compared with Sparse Representation based Classification (SRC), RPCA_CRC is on average 25 times faster; meanwhile, with fewer training images, its recognition rate increases by 30%. The experimental results show the proposed method is fast, effective and accurate.
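
    The collaborative-representation step has a closed ridge-regression form; a minimal sketch follows, where D stands for the low-rank dictionary recovered by the RPCA step, and the regularization weight and demo data are illustrative:

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Code y over D with a ridge solution, then assign the class whose
    columns give the smallest reconstruction residual."""
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 40))                 # 40 training faces, 64-dim features
labels = np.repeat(np.arange(4), 10)          # 4 subjects, 10 images each
y = D[:, 5] + 0.05 * rng.normal(size=64)      # noisy copy of a subject-0 image
print(crc_classify(D, labels, y))             # expected: 0
```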

    Facial expression recognition based on feature fusion of active shape model differential texture and local directional pattern
    XIA Haiying, XU Luhui
    2015, 35(3):  783-786.  DOI: 10.11772/j.issn.1001-9081.2015.03.783
    Abstract | PDF (767KB)

    To handle complex backgrounds and improve robustness in facial expression recognition, a novel method was proposed which combines Active Shape Model (ASM) differential texture features and Local Directional Pattern (LDP) features at the decision level by Dempster-Shafer (DS) evidence theory. ASM differential texture features can effectively mask the differences between individuals while retaining expression information as far as possible. LDP is a robust feature descriptor that computes the edge response values in different directions and uses them to encode the image texture, so LDP features have strong noise resistance and capture the subtle changes caused by facial expressions. Considering the different recognition rates of the two features, different weight coefficients were used to compute the probability assignments during DS evidence fusion. In experiments on the JAFFE and Cohn-Kanade databases, the average recognition rate reaches 97.08%, 1% higher than the method using the LDP feature alone. The experimental results show that both the recognition rate and the robustness of facial expression recognition are improved.

    Gait learning and control of humanoid robot based on Kinect
    ZHOU Hao, PU Jiantao, LIANG Lanzhen, FANG Jianjun, GUO Hao
    2015, 35(3):  787-791.  DOI: 10.11772/j.issn.1001-9081.2015.03.787
    Abstract | PDF (867KB)

    To avoid the complex planning methods, numerous manually specified parameters and heavy computation of existing gait dynamic models, a gait generation approach in which a humanoid robot learns human gait from data collected by Kinect was proposed. Firstly, skeleton information was collected by the Kinect device and the local coordinate systems of the human joints were built by least-squares fitting. Next, a dynamic model mapping the human body to the robot was built, and the robot's joint angle trajectories were generated according to the mapping between the main joints, realizing learning of human walking posture. Then the robot's ankle joint was optimized and controlled by gradient descent on the basis of the Zero-Moment Point (ZMP) stability principle. Finally, for gait stability analysis, a safety factor was proposed to evaluate the stability of the robot's walk. The experimental results show that the safety factor stays between 0 and 0.85 with an expectation of 0.4825 and the ZMP stays close to the center of the stable region; the robot walks stably while imitating human posture, which proves the validity of the method.

    Short question classification based on semantic extensions
    YE Zhonglin, YANG Yan, JIA Zhen, YIN Hongfeng
    2015, 35(3):  792-796.  DOI: 10.11772/j.issn.1001-9081.2015.03.792
    Abstract | PDF (789KB)

    Question classification is one of the tasks in a question answering system. Since questions often contain rare words and colloquial expressions, especially in voice interaction applications, traditional text classifiers perform poorly on short questions. Thus a short question classification algorithm based on semantic extension was proposed, which uses a search engine to extend the knowledge of a short question and obtains the question's category by selecting features with a topic model and calculating word similarities. The experimental results show that the proposed method achieves an F-measure of 0.713 on a set of 1365 real questions, higher than the Support Vector Machine (SVM), K-Nearest Neighbor (KNN) and maximum entropy algorithms. The method can therefore improve the accuracy of question classification in question answering systems.

    Context feature extraction method of terrorism behavior based on dependence maximization
    XUE Anrong, JIA Xiaoyan, GE Qinglong, YANG Xiaoqin
    2015, 35(3):  797-801.  DOI: 10.11772/j.issn.1001-9081.2015.03.797
    Abstract | PDF (835KB)

    To combat the missing value problem in terrorism behavior data sets, a Compressed Context Space (CCS) method was proposed, based on the idea of maximizing the dependence between the context vectors and the actions. CCS relies on the Hilbert-Schmidt independence criterion, which evaluates the relationship between two variables through their Hilbert-Schmidt norm; theory has proven that this norm can detect dependence. To capture the relevance well, CCS maximizes the Hilbert-Schmidt norm between the linearly mapped low-dimensional features and the actions, which reduces the effect of missing values. Combining CCS with an SVM classifier (CCS+SVM) produces effective classification. Experiments on MAROB show that the proposed CCS+SVM improves over SVM, PCA+SVM, CCA+SVM and CONVEX by at least 1.5% in recall and 1.0% in F-measure, and is competitive with the best results in precision and Area Under the ROC Curve (AUC). The results show that CCS+SVM handles the missing value problem well.
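
    The dependence measure itself is compact; below is a sketch of the empirical HSIC estimate between two Gram matrices (linear kernels in the toy check), which is the quantity CCS seeks to maximize:

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC estimate tr(KHLH) / (n - 1)^2 for Gram matrices K, L."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

x = np.random.randn(100, 1)
y = x + 0.1 * np.random.randn(100, 1)         # strongly dependent on x
print(hsic(x @ x.T, y @ y.T))                 # clearly positive
```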

    Search data clustering based on wavelet and its application in variable selection
    YUAN Ming
    2015, 35(3):  802-806.  DOI: 10.11772/j.issn.1001-9081.2015.03.802
    Abstract ( )   PDF (766KB) ( )  
    References | Related Articles | Metrics

    A clustering method for online shopping search data based on the Continuous Wavelet Transform (CWT) and its inverse was proposed for variable selection in predictive models. Taking full account of the special characteristics of search data, the method decomposed the original series into different periodic components and reconstructed these components into input vectors. Clustering was implemented with a weighted Fuzzy C-Means (FCM) algorithm, and the variables (keywords) were selected according to their membership values in each group. The effectiveness of the variable selection was then evaluated through a prediction model for the Chinese monthly Consumer Price Index (CPI). The experimental results indicate that the search volume series contain different periodic components and that keywords within the same group are highly consistent in commodity type. Compared with other variable selection methods, the prediction model based on wavelet clustering achieves better accuracy, with one-step and three-step relative prediction errors of 0.3891% and 0.5437% respectively, and the selected variables have clear economic meaning. The proposed method is particularly suitable for the variable selection problem of high-dimensional predictive models in the big data era.
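    The decomposition step lends itself to a short sketch: a continuous wavelet transform separates a synthetic "search volume" series into periodic components whose dominant scales can be read off the per-scale energy. The series, scales and wavelet below are illustrative assumptions; the coefficient rows at the dominant scales would feed the weighted FCM step.

```python
# CWT decomposition of a synthetic monthly search-volume series.
import numpy as np
import pywt

t = np.arange(120)                                 # 120 "months"
series = (np.sin(2 * np.pi * t / 12)               # annual component
          + 0.5 * np.sin(2 * np.pi * t / 36)       # 3-year component
          + 0.1 * np.random.default_rng(1).normal(size=t.size))

scales = np.arange(1, 64)
coef, freqs = pywt.cwt(series, scales, 'morl')     # coef: (n_scales, n_samples)

energy = (coef ** 2).sum(axis=1)                   # energy per scale
top = np.sort(scales[np.argsort(energy)[-2:]])
print("dominant scales:", top)
```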

    Adaptive handwritten character recognition based on affinity propagation clustering
    YANG Yi, WANG Jiangqing, ZHU Zongxiao
    2015, 35(3):  807-810.  DOI: 10.11772/j.issn.1001-9081.2015.03.807
    Abstract ( )   PDF (668KB) ( )  
    References | Related Articles | Metrics

    To handle the many similar characters and the irregular writing of the same characters in handwritten character recognition, a modified Affinity Propagation (AP) clustering algorithm was proposed and added to the recognition process. The Silhouette clustering validity function was combined with the original AP algorithm: the number of classes was updated by adaptively changing the preference parameter during the iterations of the AP algorithm, and the optimal clustering result was obtained by assessing the clustering quality of every iteration. Experiments on handwritten Chinese character recognition indicate that the recognition rate of the process with the original AP algorithm is 1.52% higher than that of the traditional recognition process, and the recognition rate with the modified AP algorithm is a further 1.28% higher than with the original AP algorithm. The experimental results verify that adding a clustering algorithm to the handwritten character recognition process is effective, and that the modified AP algorithm also improves convergence and clustering quality compared with the original AP algorithm.
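    A hedged sketch of steering the class number with a clustering validity index: below, the affinity propagation preference is swept and the clustering with the best Silhouette score is kept. The sweep grid is an illustrative simplification of the paper's adaptive, iteration-by-iteration preference update.

```python
# Preference selection for affinity propagation via the silhouette index.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)

best_s, best_pref = -1.0, None
for pref in np.linspace(-200, -10, 8):
    labels = AffinityPropagation(preference=pref, random_state=0).fit_predict(X)
    if len(set(labels)) > 1:                       # silhouette needs >= 2 clusters
        s = silhouette_score(X, labels)
        if s > best_s:
            best_s, best_pref = s, pref
print("best preference:", best_pref, "silhouette:", round(best_s, 3))
```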

    Skeleton-driven mesh deformation technology based on subdivision
    ZHANG Xiangyu, LI Ming, MA Xiqing
    2015, 35(3):  811-815.  DOI: 10.11772/j.issn.1001-9081.2015.03.811
    Abstract ( )   PDF (988KB) ( )  
    References | Related Articles | Metrics

    To preserve the detailed features of a model in traditional skeleton-driven deformation, a subdivision-based skeleton-driven mesh deformation method was proposed. Firstly, after a skeleton and a control mesh were generated for the region to be deformed, the relationships between the skeleton and the control mesh, and between the subdivision surface of the control mesh and the deformed region, were established. Secondly, when the skeleton was modified according to the desired deformation, the change of the corresponding subdivision surface was transformed into an alteration of the mesh gradient field for Poisson-based reconstruction. Examples show that the method achieves good editing effects on different mesh models and effectively preserves detailed features after deformation. Compared with the traditional skeleton-driven deformation method, it is easy to operate and preserves detailed features effectively, making it suitable for editing models with complex and rich geometric details.

    Quality assessment method of color stereoscopic images
    ZHANG Jing, SANG Qingbing
    2015, 35(3):  816-820.  DOI: 10.11772/j.issn.1001-9081.2015.03.816
    Abstract ( )   PDF (782KB) ( )  
    References | Related Articles | Metrics

    Most existing stereoscopic image quality assessment methods convert color images to grayscale, which loses color information and hinders correct assessment of color stereopairs. To solve this problem, a quality assessment method for color stereopairs was proposed. Firstly, Principal Component Analysis (PCA) image fusion was applied to the reference and distorted image pairs to generate 2D color images. Secondly, low-frequency coefficients were extracted from the 2D images by color wavelet transform and expressed in quaternion form: the local mean of the hue component of the low-frequency coefficients was taken as the real part of the quaternion, and the three primary color components as its imaginary parts. Finally, singular value feature vectors were obtained by quaternion singular value decomposition, and the cosine angle, Bhattacharyya distance and chi-square distance based on these vectors were each taken as image quality indexes. The method was tested on the LIVE 3D Image Quality Database published by the University of Texas, which includes both symmetrically and asymmetrically distorted 3D images. The linear correlation coefficient and Spearman Rank-Order Correlation Coefficient (SROCC) reach 0.919 and 0.923 on the symmetric database. The results accord highly with subjective evaluation and reach the expected values.

    Fast detection and recovery method for copy-move forgery in time domain of homologous videos based on geometric mean decomposition and structural similarity
    LIAO Shengyang, HUANG Tianqiang
    2015, 35(3):  821-825.  DOI: 10.11772/j.issn.1001-9081.2015.03.821
    Abstract ( )   PDF (1016KB) ( )  
    References | Related Articles | Metrics

    Aiming at the low efficiency of tampering detection and the low accuracy of localization, a copy-move tampering detection and recovery method for homologous videos based on Geometric Mean Decomposition (GMD) and Structural SIMilarity (SSIM) was proposed. Firstly, the videos were converted into grayscale image sequences. Then, the geometric mean decomposition was adopted as a feature and a block-based search strategy was put forward to locate the starting frame of the duplicated sequence. In addition, SSIM was extended for the first time to measure the similarity between two frames of a video, and the starting frame of the duplicated sequence was rechecked using structural similarity. Since the similarity between duplicated frames is higher than that between normal inter-frames, a coarse-to-fine method based on SSIM was put forward to locate the tail frame. Finally, the video was recovered. Comparison with other classical algorithms shows that the proposed method not only detects copy-move forgery but also accurately localizes the duplicated clips in different kinds of videos, with large improvements in precision, recall and computation time.
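    The SSIM cue itself is easy to illustrate: inter-frame similarity between duplicated frames is far higher than normal inter-frame similarity, so scanning frame pairs for near-1 SSIM flags candidate copy-move pairs. The synthetic frames and the 0.99 threshold below are illustrative assumptions, and the sketch omits the GMD feature and coarse-to-fine search of the paper.

```python
# Flag near-identical frame pairs via SSIM (synthetic stand-in frames).
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(10)]
frames[7] = frames[2].copy()                       # simulated copy-move

for lag in range(1, len(frames)):
    for i in range(len(frames) - lag):
        s = ssim(frames[i], frames[i + lag], data_range=1.0)
        if s > 0.99:                               # near-identical pair
            print(f"frames {i} and {i + lag} look duplicated (SSIM={s:.3f})")
```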

    Quaternion-based edge detection by improved smallest univalue segment assimilating nucleus
    SONG Jianfei, GAO Li
    2015, 35(3):  826-829.  DOI: 10.11772/j.issn.1001-9081.2015.03.826
    Abstract ( )   PDF (783KB) ( )  
    References | Related Articles | Metrics

    Color image edge detection based on separate luminance and chrominance ignores the correlation between them, so some edges cannot be effectively detected. An edge detection method based on quaternions and an improved Smallest Univalue Segment Assimilating Nucleus (SUSAN) operator was proposed. Firstly, spatial dimension reduction was achieved by turning the 3D information of the HSI color space into 2D information based on the vector rotation principle of quaternions, and a scalar V was introduced to comprehensively express the three-channel relationship among H, S and I. Secondly, the kernel function of the algorithm was represented by the scalar V. Finally, edge detection was completed by the improved SUSAN operator. The experimental results show that for color images with the same chrominance but different saturation, or the same saturation but deviating chrominance, the presented algorithm reduces the localization error rate by 1.5%. The proposed method extracts better target information from images in real applications and provides better prior knowledge for segmentation and recognition in subsequent research.

    Multi-constrained segmentation of 3D human point-cloud
    ZHANG Xiangyu, TIAN Qingguo, GE Baozhen
    2015, 35(3):  830-834.  DOI: 10.11772/j.issn.1001-9081.2015.03.830
    Abstract ( )   PDF (895KB) ( )  
    References | Related Articles | Metrics

    Segmenting a human point-cloud model into body parts is an important research topic in action recognition and virtual reconstruction. For this purpose, a multi-constrained segmentation algorithm based on a classified skeleton, geodesic distance, feature points and posture analysis was proposed. By generating the classified skeleton and the geodesic distances of the point cloud, roughly segmented point sets for each body part were obtained. Feature points were located by an algorithm based on geodesic paths and optimized by curve fitting. According to these feature points and some anatomical features of the human body, multiple constraints were constructed and the roughly segmented point sets were segmented once again. The experimental results demonstrate that the segmentation of human point-cloud models with different actions, sizes and precisions in a standing posture is consistent with human visual understanding. The body-part point clouds obtained by this algorithm can be used for posture analysis and other applications.

    Image restoration algorithm of adaptive weighted encoding and L1/2 regularization
    ZHA Zhiyuan, LIU Hui, SHANG Zhenhong, LI Runxin
    2015, 35(3):  835-839.  DOI: 10.11772/j.issn.1001-9081.2015.03.835
    Abstract ( )   PDF (965KB) ( )  
    References | Related Articles | Metrics

    Aiming at the denoising problem in image restoration, an adaptive weighted encoding and L1/2 regularization method was proposed. Firstly, since many real images contain not only Gaussian noise but also Laplace noise, an Improved L1-L2 Hybrid Error Model (IHEM) was proposed that combines the advantages of the L1 and L2 norms. Secondly, considering that the noise distribution changes during iteration, an adaptive membership-degree method was proposed to reduce the number of iterations and the computational cost, and an adaptive weighted encoding scheme was applied that handles the heavy-tailed noise distribution well. In addition, L1/2 regularization was adopted to obtain a much sparser solution. The experimental results demonstrate that the proposed algorithm improves the Peak Signal-to-Noise Ratio (PSNR) by about 3.5 dB and the Structural SIMilarity (SSIM) by about 0.02 on average over the IHEM method, and achieves good results for different kinds of noise.

    Sequence images super-resolution reconstruction based on L1 and L2 mixed norm
    LI Yinhui, LYU Xiaoqi, YU Hefeng
    2015, 35(3):  840-843.  DOI: 10.11772/j.issn.1001-9081.2015.03.840
    Abstract ( )   PDF (706KB) ( )  
    References | Related Articles | Metrics

    To filter out Gaussian noise and impulse noise at the same time and obtain high-resolution images in super-resolution reconstruction, a method combining an L1 and L2 mixed norm with Bilateral Total Variation (BTV) regularization was proposed for sequence-image super-resolution. Firstly, a multi-resolution optical flow model was used to register the low-resolution image sequence with sub-pixel precision, and the complementary information was used to raise the image resolution. Secondly, taking advantage of the L1 and L2 mixed norm, the BTV regularization algorithm was used to solve the ill-posed problem. Lastly, the proposed algorithm was applied to sequence-image super-resolution. Experimental results show that the method decreases the mean square error and increases the Peak Signal-to-Noise Ratio (PSNR) by 1.2 dB to 5.2 dB. The algorithm smooths Gaussian and impulse noise, protects image edge information and improves image identifiability, providing a good technical basis for license plate recognition, face recognition, video surveillance, and other applications.
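    The BTV prior used here has a compact closed form, BTV(X) = sum over (l, m) of alpha^(|l|+|m|) * ||X - S_x^l S_y^m X||_1, summing L1 differences between the image and its shifted copies. The sketch below evaluates it with np.roll as the shift operator; the window size and decay alpha are illustrative choices.

```python
# Bilateral total variation: sum_{l,m} alpha^(|l|+|m|) * ||X - shift(X,l,m)||_1
import numpy as np

def btv(img, p=2, alpha=0.7):
    val = 0.0
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(img, l, axis=0), m, axis=1)
            val += alpha ** (abs(l) + abs(m)) * np.abs(img - shifted).sum()
    return val

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0    # clean square
noisy = img + 0.2 * np.random.default_rng(0).normal(size=img.shape)
print(btv(img) < btv(noisy))                       # smoother image scores lower
```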

    Hyperspectral unmixing algorithm based on spectral information divergence and spectral angle mapping
    LIU Wanjun, YANG Xiuhong, QU Haicheng, MENG Yu
    2015, 35(3):  844-848.  DOI: 10.11772/j.issn.1001-9081.2015.03.844
    Abstract ( )   PDF (739KB) ( )  
    References | Related Articles | Metrics

    When the Linear Deconvolution (LD) algorithm is used for endmember selection, the selected endmember subset may contain similar endmembers, which degrades the accuracy of spectral unmixing. To address this, a hyperspectral unmixing optimization algorithm based on per-pixel optimal endmember selection using Spectral Information Divergence (SID) and Spectral Angle Mapping (SAM) was proposed. In the second selection stage, the method adopted the mixed Spectral Information Divergence-Spectral Angle (SID-SA) rule as the criterion for identifying the most similar endmembers, removed them, and thus reduced their effect on unmixing accuracy. The experimental results show that the proposed algorithm reduces the Root Mean Square Error (RMSE) of the reconstructed images to 0.0104, improves the accuracy of endmember selection compared with the traditional method, lowers the abundance estimation error, and distributes the error more evenly.
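    Both measures combined in the SID-SA rule have standard closed forms: the spectral angle SAM(x, y) = arccos(x.y / (|x||y|)) and the spectral information divergence SID(x, y), the symmetric relative entropy of the two spectra normalized to probability vectors. One common mixed form, SID(x, y) * tan(SAM(x, y)), is sketched below on synthetic spectra; treat the exact combination used in the paper as unspecified here.

```python
# SAM, SID and a mixed SID-SA measure on synthetic spectra.
import numpy as np

def sam(x, y):
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sid(x, y):
    p, q = x / x.sum(), y / y.sum()                # spectra as probabilities
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

def sid_sa(x, y):
    return sid(x, y) * np.tan(sam(x, y))           # one common mixed form

a = np.array([0.2, 0.4, 0.6, 0.8])
b = np.array([0.21, 0.39, 0.62, 0.78])             # near-duplicate endmember
c = np.array([0.8, 0.6, 0.4, 0.2])                 # dissimilar endmember
print(sid_sa(a, b), sid_sa(a, c))                  # small vs large
```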

    Summary statistics method for complex scenes of high-resolution remote sensing image
    GU Xiuying, ZHAO Ziyi, FANG Tao, HUO Hong
    2015, 35(3):  849-853.  DOI: 10.11772/j.issn.1001-9081.2015.03.849
    Abstract ( )   PDF (868KB) ( )  
    References | Related Articles | Metrics

    For the classification of high-resolution remote sensing images, inspired by the human vision system, which extracts summary statistics for scene perception, a feature extraction method based on summary statistics was proposed. Average orientation information, summarized with Gabor filters, and visual clutter, measured by visual crowding, were extracted and combined into a summary-statistics representation. The experimental results on a 21-class remote sensing image set reveal that, when the numbers of training and testing images are both 50, the classification accuracy of the proposed method is 6.5% higher than Gist and 3.22% higher than Bag-Of-Words (BOW), with a lower computational burden. Compared with Gist, the proposed method also needs no human intervention.
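    The average-orientation summary can be sketched directly: responses of a small Gabor filter bank are pooled into one statistic per orientation band. The filter parameters and test pattern are illustrative assumptions, and the paper's clutter measure is omitted.

```python
# Orientation-summary signature from a Gabor filter bank.
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel

x = np.arange(64)
image = np.tile(np.sin(2 * np.pi * 0.25 * x), (64, 1))   # vertical stripes

feats = []
for theta in np.arange(0, np.pi, np.pi / 6):       # 6 orientations
    kern = np.real(gabor_kernel(frequency=0.25, theta=theta))
    resp = convolve(image, kern, mode='wrap')
    feats.append(np.abs(resp).mean())              # summary statistic per band
print(np.round(feats, 3))                          # peaks at theta = 0
```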

    Automated surface frost detection based on manifold learning
    ZHU Lei, CAO Zhiguo, XIAO Yang, LI Xiaoxia, MA Shuqing
    2015, 35(3):  854-857.  DOI: 10.11772/j.issn.1001-9081.2015.03.854
    Abstract ( )   PDF (819KB) ( )  
    References | Related Articles | Metrics

    As an important component of surface meteorological observation, the daily observation of surface frost still relies on manual labor. Therefore, a new frost detection method based on computer vision was proposed. First, a k-nearest-neighbor graph model was constructed by combining manually labeled frost image samples with test samples acquired during real-time detection. Second, candidate frost regions were extracted by ranking the test samples with a graph-based manifold learning procedure that took the labeled frost samples as query nodes. Finally, the candidate frost regions were identified by an online-trained classifier based on Support Vector Machine (SVM). Experiments were conducted at a standardized weather station with manual observation as the baseline. The experimental results demonstrate that the proposed method achieves an accuracy of 87% in frost detection and is potentially applicable in operational surface observation.

    Salient points extraction method of furnace flame image based on hierarchical adaptive algorithm
    ZHANG Xiaolin, CUI Ningning, YANG Tao, LI Jie
    2015, 35(3):  858-862.  DOI: 10.11772/j.issn.1001-9081.2015.03.858
    Abstract ( )   PDF (710KB) ( )  
    References | Related Articles | Metrics

    For feature extraction from furnace flame images produced in boiler and industrial production, a hierarchical adaptive method for extracting salient points was proposed. First, the Block Difference of Inverse Probabilities (BDIP) model was used to convert the original image into a BDIP image. On this basis, the BDIP image was processed with a Haar wavelet transform, the saliency of the two-dimensional image was calculated by an improved weighting method, and an unbalanced quadtree was then built through the proposed adaptive method. The root of the quadtree represented the saliency of the whole image, and the number of salient points in each subtree was determined by the ratio of its saliency to that of its parent node. The proposed algorithm was compared with salient point extraction algorithms based on BDIP and on the Haar wavelet transform. The experimental results show that the edge accuracy and comprehensive feature retrieval accuracy increase by at least 10% and 3.5% respectively. The proposed method overcomes the shortcoming of traditional methods that extract too many points, some of which are not salient, and at the same time avoids local clustering of salient points.
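    The BDIP feature itself has a compact definition worth spelling out: for an M x M block, BDIP = M^2 - sum(I)/max(I), which grows where local intensity variation is strong. The block size and test image below are illustrative.

```python
# Block difference of inverse probabilities: BDIP = M^2 - sum(I)/max(I).
import numpy as np

def bdip(image, M=4):
    h, w = (np.array(image.shape) // M) * M        # crop to whole blocks
    out = np.zeros((h // M, w // M))
    for r in range(0, h, M):
        for c in range(0, w, M):
            block = image[r:r + M, c:c + M].astype(float) + 1e-9
            out[r // M, c // M] = M * M - block.sum() / block.max()
    return out

img = np.zeros((16, 16)); img[:, 6:] = 255.0       # edge inside a block column
print(bdip(img))                                   # peaks on the edge blocks
```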

    Fabric defect detection algorithm based on Radon-wavelet low resolution
    ZHU Zhongyang, XIAO Zhiyun, SUN Guangmin, QI Yongsheng
    2015, 35(3):  863-867.  DOI: 10.11772/j.issn.1001-9081.2015.03.863
    Abstract ( )   PDF (795KB) ( )  
    References | Related Articles | Metrics

    To address defect detection problems in the textile process, a novel fabric defect segmentation method, the quartering method, and a fabric defect feature extraction method, the Radon Wavelet Low Resolution Characteristic (RWLRC), were presented for fabric defect detection and classification respectively. The fabric image was preprocessed with a Gabor filter and then divided into four parts, and the threshold for segmenting the defect was determined from the maximum and minimum values of the four parts. A Radon transform was then applied to the binary image to obtain a characteristic curve, and the Mallat pyramid decomposition algorithm was used for feature dimension reduction. Finally, a neural network was used for state recognition and feature classification. The experimental results show that the quartering method needs no comparison with other normal fabric images and has good adaptability. RWLRC has only three eigenvalues, featuring low dimensionality and an accurate description of defect shape. The proposed method can efficiently detect and recognize four common fabric defects: weft lacking, warp lacking, oil stains and holes.

    Speech enhancement based on bionic wavelet transform of subband spectrum entropy
    LIU Yan, NI Wanshun
    2015, 35(3):  868-871.  DOI: 10.11772/j.issn.1001-9081.2015.03.868
    Abstract ( )   PDF (534KB) ( )  
    References | Related Articles | Metrics

    Front-end noise processing has a direct impact on the accuracy and stability of speech recognition. Since the signal separated by a wavelet denoising algorithm is not its optimal estimate, a novel Bionic Wavelet Transform (BWT) denoising algorithm based on subband spectral entropy was proposed. To achieve speech enhancement, the subband spectral entropy, which offers accurate endpoint detection, was fully exploited to distinguish speech from noise, to update the BWT threshold in real time, and to precisely determine the wavelet coefficients of the noise signal. The experimental results indicate that the Signal-to-Noise Ratio (SNR) of the proposed algorithm is 8% higher than that of the Wiener filter algorithm, and the method significantly enhances speech signals in noisy environments.
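    The subband spectral entropy that drives the speech/noise decision is simple to sketch: band energies of a frame's spectrum are normalized into a probability mass, whose entropy is low for peaky (voiced) spectra and high for flat (noise-like) spectra. Frame length, sampling rate and band count below are illustrative assumptions.

```python
# Subband spectral entropy of a frame: low for tonal speech, high for noise.
import numpy as np

def subband_spectral_entropy(frame, n_bands=16):
    spec = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spec, n_bands)
    e = np.array([b.sum() for b in bands]) + 1e-12
    p = e / e.sum()                                # band probability mass
    return -np.sum(p * np.log(p))

t = np.arange(256) / 8000.0                        # 32 ms frame at 8 kHz
voiced = np.sin(2 * np.pi * 200 * t)               # tonal: low entropy
noise = np.random.default_rng(0).normal(size=256)  # broadband: high entropy
print(subband_spectral_entropy(voiced) < subband_spectral_entropy(noise))
```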

    Hierarchical modeling method based on extensible port technology in real-time field
    WANG Bin, CUI Xiaojie, HE Bi, LIU Hui, XU Shenglei, WANG Xiaojun
    2015, 35(3):  872-877.  DOI: 10.11772/j.issn.1001-9081.2015.03.872
    Abstract ( )   PDF (1063KB) ( )  
    References | Related Articles | Metrics

    When the Model-Driven Development (MDD) method is used in the real-time field, it is difficult to describe a whole control system completely and clearly in a single layer. A real-time multi-layer modeling method based on hierarchy theory was presented in this study. Extensible input and output ports were adopted to augment the existing meta-model technique in the real-time field, the eXtensible Markup Language (XML) was used to describe the ports, and a channel-based message transfer mechanism was applied to realize communication between models in multiple layers. The modeling results for a real-time control system show that, compared with single-layer modeling, the hierarchical modeling method effectively supports the description of parallel interactions between multiple tasks in model-driven development for the real-time field, thereby enhancing the visibility and reusability of models of complex real-time systems.

    Design of telemetry and command message-oriented middleware system with publish/subscribe model
    WANG Chongnan, WANG Zongtao, BAO Zhonggui, XING Hongwei
    2015, 35(3):  878-881.  DOI: 10.11772/j.issn.1001-9081.2015.03.878
    Abstract ( )   PDF (573KB) ( )  
    References | Related Articles | Metrics

    Aiming at the tight coupling and limited extensibility of TelemeTry and Command (TT&C) Message-Oriented Middleware (MOM) built on traditional models such as message queues and shared memory, and considering the characteristics of current TT&C computer systems, a function-distributed TT&C MOM system with a Publish/Subscribe (Pub/Sub) model was put forward. The centralized publish/subscribe server was eliminated, with its function embedded into the distributed processing units, and the working processes of global theme registration, global subscription broadcasting and local event matching were designed. Transmission reliability was achieved through a reliable multicast protocol, and node reliability through software duplex with a virtual-IP mechanism and accelerated push-pull heartbeat detection. Experiments show that the average response time of a Pub/Sub message is kept within 100 ms, the packet loss rate of the multicast protocol is around 0.86×10⁻⁷, and the duplex switchover delay is at most 56 ms. The TT&C MOM system with the Pub/Sub model satisfies the high real-time and reliability requirements of TT&C applications.

    Software fault localization approach by statistical analysis of failure context
    WANG Kechao, WANG Tiantian, REN Xiangmin, JIA Zongfu
    2015, 35(3):  882-885.  DOI: 10.11772/j.issn.1001-9081.2015.03.882
    Abstract ( )   PDF (749KB) ( )  
    References | Related Articles | Metrics

    The program slicing approach does not describe the suspiciousness of statements, while coverage-based fault localization does not analyze the relationships between statements. To solve these problems, a software fault localization approach based on statistical analysis of the failure context was proposed. Firstly, the source code was transformed into an abstract syntax tree and program dependence graphs. Then, instrumentation was performed based on the abstract syntax tree to collect execution information. Next, starting from the failure point, demand-driven dynamic program slicing was conducted to obtain the failure context. Finally, the suspiciousness of the nodes in the reverse dynamic program slice was computed, and a dynamic slice ranked by suspiciousness was output. The proposed approach not only describes the failure context but also gives the suspiciousness of the statements. The experimental results show average expense decreases of 1.3% and 5.6% compared with the coverage-based analysis approach and the slicing approach respectively, facilitating the localization and fixing of bugs.
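    Since the paper's suspiciousness formula is not given in the abstract, a standard coverage-based metric (Tarantula) is used below as an illustrative stand-in to show how statements inside a dynamic slice can be ranked; the execution counts are hypothetical.

```python
# Rank slice statements by a Tarantula-style suspiciousness (stand-in metric).
def tarantula(failed_cov, passed_cov, n_failed, n_passed, stmt):
    f = failed_cov.get(stmt, 0) / n_failed
    p = passed_cov.get(stmt, 0) / max(n_passed, 1)
    return f / (f + p) if f + p > 0 else 0.0

# Hypothetical execution counts for statements inside a dynamic slice.
failed_cov = {"s1": 3, "s2": 3, "s3": 1}           # failing runs hitting stmt
passed_cov = {"s1": 5, "s2": 0, "s3": 4}           # passing runs hitting stmt
slice_stmts = ["s1", "s2", "s3"]

ranked = sorted(slice_stmts,
                key=lambda s: tarantula(failed_cov, passed_cov, 3, 5, s),
                reverse=True)
print(ranked)                                      # 's2' ranks most suspicious
```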

    High performance simulation of Win32 environment in Android system
    HU Jiajie, JIANG Letian
    2015, 35(3):  886-890.  DOI: 10.11772/j.issn.1001-9081.2015.03.886
    Abstract ( )   PDF (776KB) ( )  
    References | Related Articles | Metrics

    To solve the problem that Win32 applications cannot run directly on Android, a high-performance solution for simulating the Win32 environment was proposed. The instruction compatibility issue was solved by dynamically translating the x86 program, organized as Translation Blocks (TB), into ARM (Advanced RISC Machine) instructions before execution. At the same time, with the help of the Wine compatibility layer, Win32 API invocations were ultimately converted into Linux system calls, so emulation of a whole operating system was avoided. Moreover, to adapt the X window system to the Android graphics stack, an X display server with a virtual framebuffer backend was adopted, and images were shown on the physical screen through the Virtual Network Computing (VNC) protocol. The system finishes initialization within 30 seconds and occupies less than 150 MB of memory, while its performance in GUI rendering, file I/O and floating-point computation is generally more than 3 times better than solutions based on full-system emulation in the experiments. The results show that the proposed framework features fast startup and low resource consumption, and provides a high-performance simulation scheme for the Win32 environment on Android.

    Comprehensive coordination optimization of train break-up and make-up scheme based on hard time windows at railway technical station
    ZHU Haiyang, CUI Bingmou, HU Zhiyao
    2015, 35(3):  891-895.  DOI: 10.11772/j.issn.1001-9081.2015.03.891
    Abstract ( )   PDF (753KB) ( )  
    References | Related Articles | Metrics

    Since existing train break-up and make-up schemes cannot effectively support stage-plan wagon-flow allocation at railway technical stations, a dynamic wagon-flow allocation model was established based on different hard time-window constraints on traction weight and converted train length, together with constraints on car-flow connection and formation direction. The model was solved with an ant colony algorithm by constructing a breakable-train set, improving the state transition rule and updating the pheromone, with the objective of maximizing the number of departed vehicles and fully loaded departing trains. To realize the comprehensive coordinated optimization of the break-up and make-up scheme, a decision support system based on the ant colony algorithm was designed on the basis of adjustment rules for the break-up and make-up sequence, by defining a solvable set and improving the state transition rule and pheromone update strategy of the algorithm. Numerical examples demonstrate that the decision support system can reduce the scale of the wagon-flow allocation problem, effectively help decision makers choose a satisfactory wagon-flow allocation scheme, and obtain sorting results that account for marshalling under changes in the break-up and make-up sequences, thus providing theoretical support for the comprehensive coordinated optimization of dispatching systems at railway technical stations.

    Weak signal detection in chaotic clutter based on effective K-means and effective extreme learning machine
    SHANG Qingjian, ZHANG Jinming, WANG Tingzhang
    2015, 35(3):  896-900.  DOI: 10.11772/j.issn.1001-9081.2015.03.896
    Abstract ( )   PDF (747KB) ( )  
    References | Related Articles | Metrics

    Aiming at the rapid and accurate extraction of useful signals from a complex chaotic-noise background, a method based on phase-space reconstruction theory for complex nonlinear systems was proposed, in which an improved Extreme Learning Machine (ELM) predicts the single-step error to detect the weak signal. An improved K-means clustering algorithm was used to select the optimal cluster as the training set, and the improved ELM chose its weights and offsets so as to improve detection accuracy and speed. A one-step prediction model of a chaotic noise sequence was established with the Lorenz system, and weak target signals (both periodic and transient) buried in the chaotic noise were detected; the IPIX radar data of McMaster University, Canada, were then used to extract a floater signal from sea clutter noise in experimental research. The results show that the method can effectively detect very weak signals in a chaotic background while restraining the influence of noise on the chaotic background signal. Compared with traditional algorithms such as the Radial Basis Function (RBF) network, the prediction accuracy is increased by 25%, the detection threshold is reduced by 5 dB, and the training time is reduced by 77.1 s, giving it obvious advantages in practical applications.
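    The phase-space reconstruction that precedes the one-step prediction is a plain delay embedding, sketched below: each sample becomes a vector of lagged values, and the pairs (vector, next value) would train the improved ELM, whose large prediction residuals then flag a hidden weak signal. The delay, dimension and test series are illustrative choices.

```python
# Delay embedding x(t) -> [x(t), x(t+tau), ..., x(t+(dim-1)*tau)].
import numpy as np

def delay_embed(x, dim=3, tau=2):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

x = np.sin(0.3 * np.arange(200))                   # stand-in for chaotic clutter
X = delay_embed(x)                                 # rows: [x(t), x(t+2), x(t+4)]
y = x[(3 - 1) * 2 + 1:]                            # one-step-ahead targets x(t+5)
X = X[:len(y)]                                     # align vectors with targets
print(X.shape, y.shape)                            # (195, 3) (195,)
```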

    Bridge crack measurement system based on binocular stereo vision technology
    WANG Lin, ZHAO Jiankang, XIA Xuan, LONG Haihui
    2015, 35(3):  901-904.  DOI: 10.11772/j.issn.1001-9081.2015.03.901
    Abstract ( )   PDF (624KB) ( )  
    References | Related Articles | Metrics

    Considering the low efficiency, high cost and low precision of bridge crack measurement at home and abroad, a bridge crack measurement system based on binocular stereo vision was proposed. The system calculates the width and length of bridge cracks through binocular stereo vision methods such as camera calibration, image matching and three-dimensional coordinate reconstruction. Measurements by the binocular system and by a monocular vision system under the same conditions were compared: the binocular measurement system kept the relative error of width within 10% and of length within 1% steadily, while monocular results varied widely with viewing angle, with a maximum width relative error of 19.41% and a maximum length relative error of 54.35%. The bridge crack measurement system based on binocular stereo vision is practical, with stronger robustness and higher precision.
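    The triangulation behind such a measurement is compact enough to sketch: with focal length f, baseline B and disparity d, depth is Z = f*B/d, and a crack width follows from the metric distance between two reconstructed edge points. All numbers below are illustrative assumptions.

```python
# Stereo triangulation: Z = f * B / d, then metric width between edge points.
import numpy as np

f, B = 1200.0, 0.15                                # focal length (px), baseline (m)

def reconstruct(u, v, d, cx=640.0, cy=360.0):
    Z = f * B / d                                  # depth from disparity
    return np.array([(u - cx) * Z / f, (v - cy) * Z / f, Z])

p1 = reconstruct(u=700.0, v=400.0, d=48.0)         # left edge of the crack
p2 = reconstruct(u=702.0, v=400.0, d=48.0)         # right edge, 2 px away
print(f"crack width ~ {np.linalg.norm(p1 - p2) * 1000:.2f} mm")   # ~6.25 mm
```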
