Table of Contents

    01 June 2014, Volume 34 Issue 6
    Network and communications
    Green resource allocation algorithm in orthogonal frequency division multiplexing cellular system over time-varying channel
    LONG Ken GUO Bingjin
    2014, 34(6):  1533-1536.  DOI: 10.11772/j.issn.1001-9081.2014.06.1533

    The traditional green resource allocation algorithms are based on terminal energy saving, but most of them neglect the impact of the channel's time-selectivity on energy consumption and system performance. An efficient green resource allocation algorithm incorporating multi-user diversity was proposed. The algorithm guaranteed fairness among users and dynamically adjusted the size of the diversity sub-module to match the time-varying characteristics of the channel. Because multiple frequency bands were available in each diversity sub-module, the receiving energy of user equipment could be minimized by scheduling users' resources into fewer time slots. The optimal solution could be reached efficiently by searching along the boundary. The simulation results illustrate that the proposed algorithm can increase system throughput by about 13% with improved stability, lower computational complexity and faster convergence, while maintaining a good terminal power-saving gain.

    Energy-aware virtual network embedding algorithm based on topology aggregation
    WANG Bo CHEN Shuqiao WANG Zhiming WANG Wengao
    2014, 34(6):  1537-1540.  DOI: 10.11772/j.issn.1001-9081.2014.06.1537

    The key issue in network virtualization is Virtual Network Embedding (VNE), and the rapid growth of energy cost makes infrastructure providers increasingly concerned about energy conservation. An energy-aware VNE algorithm that exploits network topology aggregation for saving energy was presented. The importance of the nodes was characterized by the concept of closeness centrality together with the capabilities of the nodes, and the working nodes were preferentially reused for resource integration to reduce energy consumption and computation cost, while ensuring that the substrate links would not become too long. The simulation results show that the proposed algorithm improves the revenue-energy ratio by more than 20% while the acceptance ratio reaches 70% and the revenue-cost ratio reaches 75%, and it has advantages over previous algorithms.
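    The exact node-importance formula is not given in the abstract; as an illustrative sketch, the ranking below combines closeness centrality with free node capacity multiplicatively (the combination rule and the free_cpu mapping are assumptions, not the paper's definition).

    ```python
    import networkx as nx

    def rank_substrate_nodes(G, free_cpu):
        """Rank substrate nodes for embedding: more central (closeness)
        and more capable (free CPU) nodes come first."""
        closeness = nx.closeness_centrality(G)
        # assumed combination: importance = closeness * free capacity
        return sorted(G.nodes,
                      key=lambda v: closeness[v] * free_cpu[v],
                      reverse=True)

    # toy usage
    G = nx.erdos_renyi_graph(10, 0.4, seed=1)
    cpu = {v: 100.0 for v in G}
    print(rank_substrate_nodes(G, cpu)[:3])
    ```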

    Redundancy traffic elimination algorithm based on packet feature
    ZHENG Hong XING Ling MA Qiang
    2014, 34(6):  1541-1545.  DOI: 10.11772/j.issn.1001-9081.2014.06.1541

    Concerning the low efficiency of network transmission caused by redundant traffic, an algorithm named Packet Feature based Redundancy Traffic Elimination (PFRTE) was proposed based on the protocol-independent traffic redundancy elimination technique. Grouping packets by size, PFRTE dynamically analyzed the statistical bimodal characteristics and packet features of network traffic, and took the size of the packet with the greatest redundancy elimination capability as the threshold. It determined boundary points using a sliding-window method and calculated the fingerprint of the data block between two boundary points. PFRTE encoded the redundant blocks in a simple way and transferred the encoded data instead of the redundant data. The experimental results show that, compared with redundant traffic elimination algorithms based on maximum selection and on static lookup table selection, PFRTE has the advantage of analyzing the redundancy statistics of network traffic dynamically, and it reduces CPU consumption at both server and client. Meanwhile, the algorithm achieves a redundancy elimination byte saving rate of 8%-40%.
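    The boundary-point and fingerprint steps follow the usual protocol-independent redundancy elimination pattern; a minimal sketch of that pattern is given below (the toy rolling hash, window size and token format are illustrative assumptions, not the paper's exact choices).

    ```python
    import hashlib

    def chunk_boundaries(data: bytes, window=16, mask=0x3F):
        """Slide a window over the payload and declare a boundary wherever a
        cheap rolling hash matches a fixed bit pattern (content-defined)."""
        bounds, h = [0], 0
        for i, b in enumerate(data):
            h = ((h << 1) ^ b) & 0xFFFFFFFF        # toy rolling hash
            if i >= window and (h & mask) == 0:    # expected chunk ~ mask+1 bytes
                bounds.append(i + 1)
        if bounds[-1] != len(data):
            bounds.append(len(data))
        return bounds

    def eliminate_redundancy(data: bytes, cache: dict):
        """Replace already-seen blocks (keyed by fingerprint) with short tokens."""
        out = []
        bounds = chunk_boundaries(data)
        for lo, hi in zip(bounds, bounds[1:]):
            block = data[lo:hi]
            fp = hashlib.sha1(block).digest()[:8]  # 64-bit fingerprint
            if fp in cache:
                out.append(("REF", fp))            # send a reference, not bytes
            else:
                cache[fp] = block
                out.append(("RAW", block))
        return out
    ```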

    Distributed particle filter algorithm with low complexity for cooperative blind equalization
    WU Di CAO Haifeng GE Lindong PENG Hua
    2014, 34(6):  1546-1549.  DOI: 10.11772/j.issn.1001-9081.2014.06.1546

    The traditional blind equalization with a single receiver is significantly influenced by channel fading and has a high Bit Error Ratio (BER). In order to improve the BER performance, a Distributed Particle Filter (DPF) algorithm with low complexity for cooperative blind equalization was proposed for cooperative receiver networks. In the proposed algorithm, multiple receivers formed a distributed network without a fusion center and cooperatively estimated the transmitted sequences using a distributed particle filter. In order to reduce the complexity of particle sampling, the prior probability was employed as the importance function. The minimum consensus algorithm was then used to approximate the global likelihood function across the receiver network, so that all nodes obtained the same set of particles and weights. The theoretical analysis and simulation results show that the proposed algorithm does not centralize data at a fusion center and reduces the computational complexity. The fully distributed cooperative scheme achieves spatial diversity gain and improves the BER performance.
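    The min-consensus step described above is simple to state: each node repeatedly replaces its value with the minimum over its closed neighborhood, and after as many rounds as the network diameter every node holds the global minimum. A sketch with an assumed adjacency-dict representation:

    ```python
    def min_consensus(values, neighbors, n_rounds):
        """values: node -> local scalar; neighbors: node -> list of neighbors.
        Each round, every node keeps the min over itself and its neighbors."""
        v = dict(values)
        for _ in range(n_rounds):
            v = {i: min([v[i]] + [v[j] for j in neighbors[i]]) for i in v}
        return v

    # toy usage: a 4-node line graph, diameter 3
    vals = {0: 5.0, 1: 2.0, 2: 7.0, 3: 4.0}
    nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(min_consensus(vals, nbrs, 3))   # every node converges to 2.0
    ```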

    Modified algorithm for dual-threshold cooperative spectrum sensing based on two-step fusion
    WU Ruoyu HUI Xiaowei NAN Jingchang XU Guangxian
    2014, 34(6):  1550-1553.  DOI: 10.11772/j.issn.1001-9081.2014.06.1550

    Concerning the shortcomings of conventional dual-threshold cooperative spectrum sensing in improving sensing ability and reducing the amount of data transmission under communication environments with uncertain noise, an improved dual-threshold cooperative spectrum sensing algorithm based on two-step fusion at the Fusion Center (FC) was introduced. Firstly, the algorithm eliminated the negative influence of drop-out users by filtering all cognitive users. Then the dual thresholds were set adaptively according to the noise uncertainty, strengthening the sensing adaptability of the system under uncertain noise. Finally, by adopting a two-step fusion strategy at the FC, the algorithm struck a compromise between high detection ability and a low amount of data transmission. The theoretical analysis and simulation indicate that, compared with the conventional dual-threshold spectrum sensing algorithm, the proposed algorithm can not only avoid cognitive failure and enhance cognitive performance under low data transmission, but also shows an obvious improvement under high noise uncertainty.

    MAC protocol applied to bridge monitoring in wireless sensor networks
    REN Xiuli XI Yuanhao
    2014, 34(6):  1554-1557.  DOI: 10.11772/j.issn.1001-9081.2014.06.1554

    In order to improve the performance of real-time bridge monitoring, a Multi-Priority and Multi-Channel MAC (MPMC-MAC) protocol for Wireless Sensor Network (WSN) was proposed. MPMC-MAC combined data types with the transmission frequency to prioritize nodes, and allocated channels according to node priority and channel state, which ensured that high-priority nodes sent data first. When channel collision or node communication interference caused retransmission, the channels were reallocated; the reallocation took node priority, remaining energy and retransmission count into consideration to achieve fairness. In addition, MPMC-MAC dynamically adjusted the active and sleep time of nodes to save energy and reduce latency. The simulation results indicate that MPMC-MAC outperforms Hybrid MAC (HyMAC), Zebra MAC (ZMAC) and IEEE 802.15.4 MAC in network throughput, average transmission delay and energy consumption.

    Design of relay link deployment algorithms for unmanned aerial vehicles
    FANG Bin CHEN Tefang
    2014, 34(6):  1558-1562.  DOI: 10.11772/j.issn.1001-9081.2014.06.1558

    To obtain a reasonable deployment and a communication relay link model for Unmanned Aerial Vehicles (UAV), and to extend the data transmission distance, the Improved Bellman-Ford (IBF) algorithm and the Improved Dijkstra Algorithm (IDA) were proposed, considering communication blind areas and a limited number of available UAVs. The UAV deployment problem was modeled as an All Hops Optimal Path (AHOP) problem, in which the IBF algorithm was used to generate a set of reachable records and the solutions were obtained by traversing the records in reverse; the IDA algorithm then changed the connection weights of edges in each iteration and found the path that decreased the hop count of the relay link, yielding a feasible solution of the UAV relay deployment problem. The simulation analysis illustrates that IBF and IDA provide effective relay link deployments, and the time performance of the proposed algorithms is superior to that of the Bellman-Ford (BF) algorithm.
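    The AHOP formulation admits a standard hop-indexed Bellman-Ford recurrence, sketched below: dist[h][v] is the least cost of reaching v from the source using at most h edges, so the best relay chain for every hop budget can be read from one table (a generic sketch, not the paper's IBF itself).

    ```python
    def all_hops_shortest_paths(n, edges, src, max_hops):
        """dist[h][v]: least cost of a src->v path using at most h edges.
        edges: list of (u, v, w) directed links; nodes are 0..n-1."""
        INF = float("inf")
        dist = [[INF] * n for _ in range(max_hops + 1)]
        dist[0][src] = 0.0
        for h in range(1, max_hops + 1):
            dist[h] = dist[h - 1][:]          # paths with fewer hops stay valid
            for u, v, w in edges:
                if dist[h - 1][u] + w < dist[h][v]:
                    dist[h][v] = dist[h - 1][u] + w
        return dist
    ```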

    Fingerprinting location method for WLAN using physical neighbor points information
    ZHOU Mu ZHANG Qiao QIU Feng
    2014, 34(6):  1563-1566.  DOI: 10.11772/j.issn.1001-9081.2014.06.1563

    In order to make full use of Adjacent Reference Point (ARP) information in the radio map, a new method that establishes both a location fingerprint database based on Received Signal Strength (RSS) and a physical neighbor information database for each Reference Point (RP) in the off-line phase was proposed to improve the accuracy of fingerprinting-based probabilistic localization. In the on-line phase, based on the probability distribution of RSS, the system first used Bayesian inference to calculate the most adjacent points for each test point. Then, using the physical neighbor information database, the system found the physical adjacent points with respect to the most adjacent points. From the set of most adjacent and physical adjacent points, the system selected feature points for a second Bayesian inference. Finally, the system estimated the position of each test point as the center of the group of feature points with the Maximum A Posteriori (MAP) probability. The simulation results show that, compared with the traditional method without a physical neighbor information database, the proposed method improves localization accuracy by nearly 10%, which enhances the reliability of location determination.
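    A minimal sketch of the first Bayesian inference step, assuming a uniform prior over reference points and independent Gaussian RSS likelihoods per access point (the paper's actual probability model may differ):

    ```python
    import numpy as np

    def most_adjacent_points(rss, radio_map, k=3, sigma=4.0):
        """radio_map: RP id -> mean RSS vector over the access points.
        Returns the k reference points with the highest posterior (uniform
        prior, independent Gaussian likelihood per access point)."""
        scores = {
            rp: -np.sum((np.asarray(rss) - np.asarray(mu)) ** 2) / (2 * sigma**2)
            for rp, mu in radio_map.items()
        }
        return sorted(scores, key=scores.get, reverse=True)[:k]
    ```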

    False data filtering scheme based on trust management mechanism in wireless sensor networks
    CAO Yanhua ZHANG Zhiming YU Min
    2014, 34(6):  1567-1572.  DOI: 10.11772/j.issn.1001-9081.2014.06.1567

    In traditional false data filtering schemes for Wireless Sensor Networks (WSN), only the false data reports are filtered, while the compromised nodes can still continuously inject false data into the WSN and waste network resources. In order to cut off the source of false data, a new false data filtering scheme based on a trust management mechanism was proposed. Collusively forged false data generated by multiple captured nodes were confined within a cluster by clustering, and the trust management mechanism was then used to detect whether a node was compromised, so that compromised nodes could be isolated. The analysis results show that the proposed scheme can not only filter false data effectively, but also isolate compromised nodes, and it has a strong ability to tolerate compromised nodes.

    New signal system design for satellite navigation system
    XUE Rui XU Xichao WEI Qiang
    2014, 34(6):  1573-1577.  DOI: 10.11772/j.issn.1001-9081.2014.06.1573

    In order to further improve navigation signal precision, band efficiency and reliability for navigation systems, a new signal system adopting Minimum Shift Keying (MSK) with Binary Offset Carrier (BOC) based on Low Density Parity Check (LDPC) codes, called the LDPC-MSK-BOC signal system, was presented. The navigation performance of BOC and MSK-BOC was evaluated using the parameters of typical Compass and GPS signals, in terms of power spectral density, code tracking error, multipath error envelope, bit error rate, anti-narrowband-jamming and anti-matched-spectrum-jamming merit factors for both demodulation and code tracking processing, and spectral separation coefficient. The theoretical analysis and simulation show that, under limited spectrum resources, the proposed system outperforms the BOC signal system in code tracking precision and multipath resistance. Meanwhile, the signal structure can further improve system reliability and band efficiency.

    Trilateration based clustering target tracking algorithm
    GAO Lei
    2014, 34(6):  1578-1581.  DOI: 10.11772/j.issn.1001-9081.2014.06.1578

    In target tracking applications, moving targets behave randomly and contingently, while tracking nodes have limited energy and small communication radii. In order to improve tracking accuracy while minimizing energy consumption and extending network lifetime, a trilateration-based clustering target tracking algorithm was proposed. It adopted the trilateration technique for target localization to improve tracking accuracy. To achieve energy balance, the cluster head and members were elected in the wake-up clustering establishment stage on the basis of two parameters: the distance between the node and the target, and the residual energy level of the node. The simulation results show that, compared with the Prediction-based Energy Saving (PES) scheme and the Hybrid Cluster-based Target Tracking (HCTT) protocol, the proposed algorithm performs better in network lifetime, path prediction and tracking accuracy.
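    Trilateration itself can be written compactly: subtracting one range circle from the others linearizes the equations, which are then solved by least squares. A sketch assuming 2D coordinates and at least three anchors:

    ```python
    import numpy as np

    def trilaterate(anchors, dists):
        """Locate a target from >=3 anchor positions and range estimates by
        linearizing the circle equations (subtract the last one)."""
        anchors = np.asarray(anchors, dtype=float)   # shape (k, 2)
        d = np.asarray(dists, dtype=float)
        xk, yk = anchors[-1]
        A = 2.0 * (anchors[:-1] - anchors[-1])
        b = (d[-1] ** 2 - d[:-1] ** 2
             + np.sum(anchors[:-1] ** 2, axis=1) - xk ** 2 - yk ** 2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # toy usage: target at (1, 2)
    print(trilaterate([(0, 0), (4, 0), (0, 4)],
                      [np.hypot(1, 2), np.hypot(3, 2), np.hypot(1, 2)]))
    ```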

    Self-elasticity cloud platform based on OpenStack and Cloudify
    PEI Chao WU Yingchuan LIU Zhiqin WANG Yaobin YANG Lei
    2014, 34(6):  1582-1586.  DOI: 10.11772/j.issn.1001-9081.2014.06.1582

    When confronted with highly concurrent requests, existing Web services suffer increased response time, and the server may even go down. To solve this problem, a distributed self-elasticity architecture for Web systems named ECAP (self-Elasticity Cloud Application Platform) was proposed based on cloud computing. The architecture was built on the Infrastructure as a Service (IaaS) platform OpenStack, combined with the Platform as a Service (PaaS) platform Cloudify. In addition, a fuzzy analytic hierarchy scheduling method was realized by building a fuzzy matrix over the scale values of virtual machine resource templates. Finally, test applications were deployed on the cloud platform and analyzed with a stress-testing tool. The experimental results show that ECAP performs better in average response time and load performance than a common application server.

    Improvement of matrix completion algorithm based on random projection
    WANG Ping CAI Sijia LIU Yu
    2014, 34(6):  1587-1590.  DOI: 10.11772/j.issn.1001-9081.2014.06.1587

    Using random projection to project the Singular Value Decomposition (SVD) of a high-dimensional matrix onto a lower-dimensional subspace can reduce the time consumption of SVD. A singular value random projection compression operator was defined to replace the singular value compression operator, and was then used to improve the Fixed Point Continuation (FPC) algorithm, yielding the FPCrp algorithm. Extensive experiments were conducted on the original algorithm and the improved one. The results show that the random projection technique reduces the time consumption of the FPC algorithm by more than 50% while maintaining its robustness and precision. The modified matrix completion algorithm based on random projection is therefore effective for solving large-scale problems.
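    The random projection acceleration referred to above is in the spirit of the standard randomized SVD: sample the range of the matrix with a random test matrix, orthonormalize, and take the exact SVD of the small projected matrix. A generic sketch (the Gaussian test matrix and oversampling parameter are common defaults, not necessarily the paper's):

    ```python
    import numpy as np

    def randomized_svd(M, rank, oversample=10):
        """Approximate truncated SVD: project M onto a random low-dimensional
        subspace, then take the exact SVD of the small projected matrix."""
        m, n = M.shape
        Omega = np.random.randn(n, rank + oversample)   # random test matrix
        Q, _ = np.linalg.qr(M @ Omega)                  # orthonormal range basis
        B = Q.T @ M                                     # small (r+p) x n matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
    ```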

    Distributed parallel algorithm of physically based ray tracing
    ZHANG Congpin YUE Dongli
    2014, 34(6):  1591-1594.  DOI: 10.11772/j.issn.1001-9081.2014.06.1591

    Ray tracing is a technique for image synthesis: creating a photorealistic picture of a 3D world. However, the technique tends to be time-consuming, and reducing its computational cost has become a research hotspot. Considering task partitioning and load balance, a dynamically self-adaptive distributed ray tracing algorithm with a two-level task partitioning method was developed to enhance the efficiency of ray tracing based on Physically Based Ray Tracing (PBRT), a well-known ray tracing framework from Stanford University. When using up to 80 CPU cores in the experiments, the speedup of the proposed algorithm over the original PBRT was close to the theoretical linear speedup. This shows that the proposed algorithm is effective and highly scalable with respect to PBRT, and can be applied to accelerate rendering in ray tracing.

    Parallel text hierarchical clustering based on MapReduce
    YU Xiaoshan WU Yangyang
    2014, 34(6):  1595-1599.  DOI: 10.11772/j.issn.1001-9081.2014.06.1595

    Concerning the poor scalability of traditional hierarchical clustering algorithms when dealing with large-scale text, a parallel hierarchical clustering algorithm based on the MapReduce programming model was proposed. A vertical data partitioning algorithm based on the statistical characteristics of the component groups of text vectors was developed for data partitioning in MapReduce. Additionally, the sorting characteristics of MapReduce were exploited to select the merge points, making the algorithm more efficient and conducive to improving clustering accuracy. The experimental results show that the proposed algorithm is effective and has good scalability.

MapReduce based image classification approach
    WEI Han ZHANG Xueqing CHEN Yang
    2014, 34(6):  1600-1603.  DOI: 10.11772/j.issn.1001-9081.2014.06.1600

    Many existing image classification algorithms cannot cope with big image data. A new approach was proposed to accelerate big image classification based on MapReduce. The whole image classification process was reconstructed to fit the MapReduce programming model. First, the Scale Invariant Feature Transform (SIFT) features were extracted with MapReduce and converted to sparse vectors using sparse coding to obtain the sparse features of the images. MapReduce was also used for distributed training of a random forest, on the basis of which large-scale image classification was performed in parallel. The MapReduce-based algorithm was evaluated on a Hadoop cluster. The experimental results show that the proposed approach can classify images on the Hadoop cluster with a good speedup rate.

    Fast algorithm for sparse decomposition of real first-order polynomial phase signal based on group testing
    OU Guojian WANG Weiqiang JIANG Qingping
    2014, 34(6):  1604-1607.  DOI: 10.11772/j.issn.1001-9081.2014.06.1604

    Concerning the huge computation of sparse decomposition, a fast sparse decomposition algorithm with low computational complexity was proposed for first-order Polynomial Phase Signals (PPS). In this algorithm, two concatenated dictionaries Df and Dp were first constructed, with the atoms of Df built from frequencies and the atoms of Dp built from phases. Secondly, for the dictionary Df, group testing was used to search for atoms matching the signal, and the correlation values of the atoms and the signal were tested twice to ensure reliability. Finally, according to the matching frequency atoms found by group testing, the dictionary Dp was constructed and the matching phase atoms were searched by the Matching Pursuit (MP) algorithm, completing the sparse decomposition of the real first-order PPS. The simulation results show that the computational efficiency of the proposed algorithm is about 604 times that of matching pursuit and about 139 times that of the genetic algorithm; the presented algorithm thus has lower computational complexity, only O(N), and can finish sparse decomposition fast.
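    For reference, the MP search over the phase dictionary greedily picks the atom most correlated with the current residual; a minimal sketch assuming unit-norm atoms stored as matrix columns:

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms):
        """Greedy MP: repeatedly select the dictionary atom (column) with the
        largest inner product against the residual, then deflate."""
        residual = np.asarray(signal, dtype=float).copy()
        chosen = []
        for _ in range(n_atoms):
            corr = dictionary.T @ residual        # inner products with all atoms
            k = int(np.argmax(np.abs(corr)))
            chosen.append((k, corr[k]))
            residual = residual - corr[k] * dictionary[:, k]  # unit-norm atoms
        return chosen, residual
    ```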

    Artificial intelligence
    Sparsity reconstruction-based discriminant analysis
    QI Mingming XIANG Yang
    2014, 34(6):  1608-1612.  DOI: 10.11772/j.issn.1001-9081.2014.06.1608

    In order to solve the sensitivity of existing discriminant analysis to external interference such as defects and occlusions, a Sparsity reconstruction-based Discriminant Analysis (SDA) for dimensionality reduction was proposed in terms of local sparse representation. The algorithm first used sparse representation to complete local sparsity reconstruction within each class, and then completed between-class sparsity reconstruction with the averages of the different classes. Finally, the algorithm preserved the ratio between the between-class and within-class sparsity reconstruction information during dimensionality reduction. The algorithm improves the computational efficiency of sparse representation and the robustness of discriminant analysis. The experimental results on the AR and UMIST face datasets show that, compared with the Graph-based Fisher Analysis (GbFA) and Reconstructive-based Discriminant Analysis (RDA) algorithms, the proposed algorithm improves the highest recognition accuracy based on nearest neighbor classification by 2-10 percentage points.

    High-dimensional data visualization based on random forest
    LYV Bing WANG Huazhen
    2014, 34(6):  1613-1617.  DOI: 10.11772/j.issn.1001-9081.2014.06.1613

    Current high-dimensional data mining methods are mostly based on mathematical theory rather than visual intuition. To facilitate visual analysis and evaluation of high-dimensional data, Random Forest (RF) was introduced to visualize high-dimensional data. Firstly, RF applied supervised learning to obtain a proximity measurement from the source data, and principal coordinate analysis was used for dimension reduction, transforming the high-dimensional data relationships into a low-dimensional space. Then scatter plots were used to visualize the data in the low-dimensional space. The results of experiments on high-dimensional gene datasets show that supervised dimension reduction based on RF illustrates the discrimination of class distributions well and outperforms traditional unsupervised dimension reduction.

    Modified proximal support vector machine algorithm for dealing with unbalanced samples
    LIU Yan ZHONG Ping CHEN Jing SONG Xiaohua HE Yun
    2014, 34(6):  1618-1621.  DOI: 10.11772/j.issn.1001-9081.2014.06.1618

    When the Proximal Support Vector Machine (PSVM) deals with unbalanced samples, it overfits the class with many samples and underestimates the misclassification error of the class with few samples, degrading the accuracy on the overall samples. To solve this problem, a modified PSVM for dealing with unbalanced samples was proposed. The new algorithm not only set different penalties for positive and negative samples, but also added a new parameter to the constraint, making the classification hyperplane more flexible. The algorithm first trained on the training set to obtain the optimal parameters, then the classification hyperplane was obtained and evaluated on the test set, and finally the classification results were output. The experiments on 9 datasets from the UCI database show that the new algorithm improves the classification accuracy of the samples by 2.19 and 3.14 percentage points in the linear and nonlinear cases respectively, and its generalization ability is strengthened effectively.

    Large scale ontology aligning approach based on NSGA-Ⅱ
    XUE Xingsi
    2014, 34(6):  1622-1625.  DOI: 10.11772/j.issn.1001-9081.2014.06.1622

    The application of existing evolutionary-algorithm-based ontology aligning technologies is limited by the huge search space of the large-scale ontology aligning problem. To this end, a large-scale ontology aligning approach based on the fast elitist Non-dominated Sorting Genetic Algorithm for multi-objective optimization (NSGA-Ⅱ) was proposed. Specifically, it worked in three steps: 1) a neighbor-similarity-based ontology partitioning algorithm was presented to split the source ontology into a set of disjoint concept blocks; 2) a relevant concept filtering method was proposed to determine the concept block in the target ontology associated with each source block; 3) NSGA-Ⅱ was utilized to align the concept block pairs, and a greedy algorithm was used to aggregate the results. The small-scale bibliographic ontology benchmark and the large-scale biomedical ontology benchmark in OAEI 2012 were used to test the proposed approach. The comparisons with the participants of OAEI 2012 show that the approach is able to determine good alignments in a short time, and is therefore effective.

    Precise text mining using low-rank matrix decomposition
    HUANG Xiaohai GUO Zhi HUANG Yu
    2014, 34(6):  1626-1630.  DOI: 10.11772/j.issn.1001-9081.2014.06.1626

    Applications such as information retrieval need a precise representation of text content, while representations using traditional topic models can only extract the topic background and cannot give a precise description. A new low-rank and sparse model was proposed to decompose text into a low-rank component representing the topic background and a sparse component representing keywords. To implement this model, the topic matrix was defined, and Robust Principal Component Analysis (RPCA) was introduced to realize the decomposition. The experimental results on a news corpus show that the model complexity is 25 percent lower than that of Latent Dirichlet Allocation (LDA). In practical applications, the low-rank component reduces the features needed in text classification by 28.7 percent, which helps to reduce feature dimensionality, and the sparse component improves the precision of information retrieval results by 10.8 percent compared with LDA, improving the hit rate of information retrieval.

    Cuckoo search algorithm based on differential evolution
    XIAO Huihui DUAN Yanming
    2014, 34(6):  1631-1635.  DOI: 10.11772/j.issn.1001-9081.2014.06.1631

    In order to solve the problems of the Cuckoo Search (CS) algorithm, including low optimization accuracy and weak local search ability, an improved CS algorithm with a differential evolution strategy was presented. Before the population entered the next iteration, each individual was mutated by adding two weighted differences of individuals, and crossover and selection operations were then performed to obtain the optimal individuals. This endows the CS algorithm, which originally lacks a mutation mechanism, with one, increasing population diversity, preventing individuals from falling into local optima, and enhancing the global optimization ability. The algorithm was tested on several classical test functions and a typical application example. The simulation results show that the new algorithm has better global search ability, and its convergence precision, convergence speed and optimization success rate are significantly better than those of the basic CS algorithm.
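    The mutation described above, adding two weighted differences to an individual, matches the DE/rand/2/bin pattern; one such generation is sketched below (F, CR and greedy selection against the parent are conventional DE defaults assumed here; the population must contain at least five individuals):

    ```python
    import numpy as np

    def de_rand2_step(pop, fitness, F=0.5, CR=0.9):
        """One DE/rand/2/bin generation: mutate with two weighted differences,
        binomial crossover, then greedy selection against the parent."""
        n, dim = pop.shape
        out = pop.copy()
        for i in range(n):
            a, b, c, d, e = np.random.choice(n, 5, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c]) + F * (pop[d] - pop[e])
            cross = np.random.rand(dim) < CR
            cross[np.random.randint(dim)] = True   # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            if fitness(trial) < fitness(pop[i]):   # minimization
                out[i] = trial
        return out
    ```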

    Multi-document sentiment summarization based on latent Dirichlet Allocation model
    XUN Jing LIU Peiyu YANG Yuzhen ZHANG Yanhui
    2014, 34(6):  1636-1640.  DOI: 10.11772/j.issn.1001-9081.2014.06.1636

    It is difficult for existing methods to capture the overall sentiment orientation of comment text. To solve this problem, a multi-document sentiment summarization method based on the Latent Dirichlet Allocation (LDA) model was proposed. In this method, all subjective sentences were extracted by sentiment analysis and described by the LDA model, then a summary was generated based on sentence weights combining word importance and sentence characteristics. The experimental results show that this method can effectively identify key sentiment sentences, and achieves good results in precision, recall and F-measure.

    Logarithmic adaption crowding genetic algorithm for multimodal function optimization
    LIU Wentao HU Jiabao
    2014, 34(6):  1645-1648.  DOI: 10.11772/j.issn.1001-9081.2014.06.1645

    The crowding genetic algorithm can obtain multiple optima of multimodal functions, but it has low efficiency and cannot reach high precision within limited iterations. In order to obtain all optima of a multimodal function quickly, a crowding genetic algorithm based on logarithmic adaption was presented, combining niche crowding genetic and climbing operators. The algorithm computed the distance values of the climbing operators by logarithmic adaption according to the iteration count, which kept the population genetically diverse throughout the process. Experiments and comparative analysis on several one-dimensional and two-dimensional multimodal functions show that the algorithm ensures both solution accuracy and convergence speed within limited iterations, and obtains all optimal solutions more stably. It is an effective algorithm for multimodal function problems.

    Optimization for test selection based on simulated annealing binary particle swarm optimization algorithm
    JIAO Xiaoxuan JING Bo HUANG Yifeng DENG Sen DOU Wen
    2014, 34(6):  1649-1652.  DOI: 10.11772/j.issn.1001-9081.2014.06.1649

    For the test selection problem of complex systems, a test selection optimization based on the Simulated Annealing Binary Particle Swarm Optimization (SA-BPSO) algorithm was adopted. The probabilistic jumping ability of simulated annealing was used to overcome the particle swarm's tendency to fall into local optima. The process and key steps of the algorithm for test selection in complex systems were introduced, and the complexity of the algorithm was analyzed. The simulation results show that the algorithm outperforms the genetic algorithm in running time and testing cost, and thus can be used to optimize the test points of complex systems.

Robust cooperative output tracking of linear multi-agent systems under switching communication topology
    SUN Wei
    2014, 34(6):  1653-1656.  DOI: 10.11772/j.issn.1001-9081.2014.06.1653

    A robust distributed output tracking controller was proposed for a class of linear multi-agent systems subject to external disturbances. The controller applies to the case where the communication topology among the agents is directed and possibly time-varying (i.e., switching). The controller is composed of two parts: the first part ensures that the tracking error converges uniformly exponentially to zero in the ideal case (without external disturbances), while the other part compensates for the effect of the disturbances. It is shown that the effect of constant disturbances can be completely attenuated by the proposed controller, that is, the tracking error converges asymptotically to zero even in the presence of constant disturbances; for other disturbances with bounded derivatives, the ultimate bound of the tracking error can be made arbitrarily small by choosing appropriate design parameters. Finally, the two theoretical results were verified by a simulation example.

    Shooting method for humanoid robot based on three-mass model
    LI Chunguang LIU Guodong
    2014, 34(6):  1657-1660.  DOI: 10.11772/j.issn.1001-9081.2014.06.1657

    To achieve a rapid and stable shooting action, a shooting trajectory planning method for humanoid robots was proposed based on a three-mass model. Firstly, Zero Moment Point (ZMP) equations containing the body and swing leg trajectories were obtained based on the three-mass model of the humanoid robot. After the trajectories of the swing leg and the ZMP were planned by cubic Bezier curves, the body trajectory could be obtained by solving the ZMP equations. Secondly, during the double-support phase, the center-of-mass trajectory of the robot was calculated based on a linear pendulum model, enabling quick adjustment of the shooting posture. Finally, the fast shooting action based on this method was realized on the RoboCup 3D simulation platform and compared with the shooting actions of other teams. The experimental results show that a stable shooting action can be achieved quickly with only manual debugging based on this method, the time of the shooting action is greatly reduced, and the competitiveness of the robot soccer team can be enhanced.

    Cross-site scripting detection in online social network based on classifiers and improved n-gram model
    LI Ruilei WANG Rui JIA Xiaoqi
    2014, 34(6):  1661-1665.  DOI: 10.11772/j.issn.1001-9081.2014.06.1661

    Due to the threat of Cross-Site Scripting (XSS) attacks in Online Social Networks (OSN), an approach combining classifiers and an improved n-gram model was proposed to detect malicious OSN webpages infected with XSS code. Firstly, similarity-based and difference-based features were extracted to build the classifiers and the improved n-gram model. The classifiers and the model were then combined to detect malicious webpages in the OSN. The experimental results show that, compared with traditional classifier-based detection methods, the proposed approach is more effective, with a false positive rate of about 5%.

    Cascading invulnerability attack strategy of complex network via community detection
    DING Chao YAO Hong DU Jun PENG Xingzhao LI Minhao
    2014, 34(6):  1666-1670. 

    In order to investigate cascading-failure attack strategies on complex networks via community detection, the initial load of a node was defined by the betweenness of the node and its neighbors, a definition that comprehensively considers node information; the load of a broken node was redistributed to its neighbors according to a local preferential probability. Under intentional attacks based on community detection, the influence of coupling strength on the cascading invulnerability of Watts-Strogatz (WS), Barabási-Albert (BA), Erdős-Rényi (ER) and World-Local (WL) networks, as well as of networks with overlapping and non-overlapping communities under different attack strategies, was studied. The results show that a network's cascading invulnerability is negatively related to coupling strength; for different types of networks, provided the fast division algorithm correctly detects the community structure, network invulnerability is lowest when the node with the largest betweenness is attacked; after detecting overlapping communities using the Clique Percolation Method (CPM), network invulnerability is lowest when the overlapping node with the largest betweenness is attacked. It is concluded that attack strategies based on community detection inflict the greatest damage on the network.

    Perceptual encryption algorithm for mobile communication VOD application
    GUO Yu BO Sen GUO Hui TANG Jianbo
    2014, 34(6):  1671-1675.  DOI: 10.11772/j.issn.1001-9081.2014.06.1671

    In Video-On-Demand (VOD) applications, it is desirable that encrypted multimedia data remain partially perceptible after encryption in order to stimulate purchases of the high-quality versions of multimedia products. Such perceptual encryption requires specific algorithms for encrypting video data. Due to the lack of H.264 video perceptual encryption algorithms for mobile communication applications, a video encryption algorithm based on the ZU Chongzhi (ZUC) stream cipher and Compressive Sensing (CS) was proposed. First of all, the ZUC algorithm was utilized to construct a random measurement matrix. Then the quantified Discrete Cosine Transformation (DCT) coefficients were measured by the measurement matrix, and the measured values were encoded as the new quantified DCT coefficients, realizing encryption through the difference between the original and new coefficients. Finally, the characteristics of a good perceptual encryption algorithm were defined. The experimental results show that the proposed algorithm has little effect on the video compression bit rate, has low time complexity, is sensitive to key changes, and provides good perceptual security.

    Identity-based public verifiable signcryption scheme in standard model
    BAI Yin HAN Yiliang YANG Xiaoyuan LU Wanxuan
    2014, 34(6):  1676-1680.  DOI: 10.11772/j.issn.1001-9081.2014.06.1676

    The existing identity-based signcryption schemes are mostly proven secure only in the random oracle model. To address this security limitation, a new efficient identity-based signcryption scheme in the standard model was proposed. The scheme is based on the hard problems of discrete logarithm and factorization, which effectively improves its security: its confidentiality relies on the Decisional Bilinear Diffie-Hellman (DBDH) assumption and its unforgeability relies on the Computational Diffie-Hellman (CDH) assumption. In addition, the scheme is publicly verifiable. The comparison and analysis show that the proposed scheme is more efficient and has a wider application range than similar schemes.

    Unidirectional and multi-hop identity-based proxy re-encryption scheme with constant ciphertext
    MENG Yichao ZHANG Minqing WANG Xu'an
    2014, 34(6):  1681-1685.  DOI: 10.11772/j.issn.1001-9081.2014.06.1681

    In current multi-hop unidirectional identity-based proxy re-encryption schemes, the ciphertext length increases with the number of hops, which reduces efficiency. To solve this issue, a new multi-hop unidirectional identity-based proxy re-encryption scheme was designed by changing the re-encryption key generation side: the re-encryption keys were generated by the sender. In the scheme, the first-level and second-level ciphertexts have the same form, and the length of the re-encrypted ciphertext remains unchanged. The efficiency analysis shows that the proposed scheme reduces the number of exponentiation, multiplication and bilinear pairing computations. The new scheme is proved chosen-ciphertext secure in the random oracle model under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.

    Network intrusion detection based on particle swarm optimization algorithm and information gain
    HUANG Huiqun SUN Hong
    2014, 34(6):  1686-1688.  DOI: 10.11772/j.issn.1001-9081.2014.06.1686

    In order to improve the detection accuracy of network intrusion, a network intrusion detection model named PSO-IG was proposed based on the Particle Swarm Optimization (PSO) algorithm and Information Gain (IG). Firstly, the PSO algorithm was used to eliminate redundant features of the original network data; then the weight values of the selected features were obtained using IG, and a Support Vector Machine (SVM) was used to establish the intrusion detection model. Finally, the KDD CUP 99 dataset was used to test the performance of PSO-IG. The results show that the proposed model can eliminate redundant features and reduce the input dimension to improve detection speed, and it can improve detection accuracy through reasonable selection of weight values.
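    The IG weighting step can be illustrated directly: the gain of a discrete feature is the class entropy minus the class entropy conditioned on that feature (a generic sketch; the discretization of continuous KDD CUP 99 features is not shown):

    ```python
    import numpy as np
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a label sequence, in bits."""
        counts = np.array(list(Counter(labels).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def information_gain(feature, labels):
        """IG = H(class) - H(class | feature) for one discrete feature column."""
        gain = entropy(labels)
        n = len(labels)
        for v in set(feature):
            idx = [i for i, f in enumerate(feature) if f == v]
            gain -= len(idx) / n * entropy([labels[i] for i in idx])
        return gain
    ```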

Zero-watermarking algorithm based on cellular automata and singular value decomposition
    WU Weimin DING Ran LIN Zhiyi ZOU Qinhui
    2014, 34(6):  1689-1693.  DOI: 10.11772/j.issn.1001-9081.2014.06.1689

    Concerning the low robustness of general watermarking algorithms against JPEG compression and geometric transform attacks, a zero-watermarking algorithm based on Cellular Automata (CA) and Singular Value Decomposition (SVD) was proposed. Firstly, an image was transformed by the 2-dimensional cellular automata transform, the low-frequency subband approximation image was isolated, and the CA parameters were saved as the key. After that, the approximation image was divided into blocks, the blocks were decomposed by SVD, and the zero-watermark was constructed by a CA rule over the SVD matrices. In image authentication, the image can be certified by comparing the similarity of the two watermarks against a threshold value. The experimental results show that this algorithm has good invisibility and strong robustness against JPEG compression and geometric transform attacks.

    Multi-stream based Tandem feature method for mispronunciation detection
    YUAN Hua CAI Meng ZHAO Hongjun ZHANG Weiqiang LIU Jia
    2014, 34(6):  1694-1698.  DOI: 10.11772/j.issn.1001-9081.2014.06.1694

    To deal with the shortage of labeled pronunciation data in mispronunciation detection, other data were used to improve the discriminability of features within the framework of a Tandem system. Taking Chinese learners of English as the object, unlabeled data, native Mandarin data and native English data, which are relatively easy to obtain, were selected as the assisting data. The experiments show that these types of data can effectively improve system performance, with the unlabeled data performing best. The effects on system performance of different lengths of frame context, of shallow versus deep neural networks (typified by the Multi-Layer Perceptron (MLP) and the Deep Neural Network (DNN)), and of different Tandem feature structures were also examined. Finally, a multi-stream merging strategy was used to further improve performance, and the best results were achieved by combining the DNN-based unlabeled data stream with the native English stream. Compared with the baseline system, the recognition accuracy is increased by 7.96%, and the diagnostic accuracy of mispronunciation types is increased by 14.71%.

    Improvement of UMHexagonS motion estimation algorithm in H.264
    XIAO Bingjun YANG Jing
    2014, 34(6):  1699-1705.  DOI: 10.11772/j.issn.1001-9081.2014.06.1699

    The UMHexagonS motion estimation algorithm in H.264 was studied, and an improved fast motion estimation algorithm was proposed. First, the fixed search range, the unsymmetrical cross search, the 5×5 small rectangular spiral search, the uneven multi-hexagon-grid search and the extended hexagon-based search were analyzed. Optimized search modes were then given for each: a dynamic search window, adaptive rood pattern search, a directional 3×3 small rectangular search pattern, predictive intensive direction search and a modified extended hexagon-based search. These optimized search modes form the Adaptive Pattern Direction Search (APDS) algorithm. The experimental results on different test sequences show that, compared with UMHexagonS, APDS saves about 29.64% of Motion Estimation (ME) time and reduces the average number of checking points per Motion Vector (MV) by about 21.64, with no obvious loss in reconstructed picture quality and only a small increase in bit rate. With this efficiency improvement of ME, the real-time performance of the encoder is further enhanced.

    Relative orientation approach based on direct resolving and iterative refinement
    YANG Ahua LI Xuejun LIU Tao LI Dongyue
    2014, 34(6):  1706-1710.  DOI: 10.11772/j.issn.1001-9081.2014.06.1706

    In order to improve the robustness and accuracy of relative orientation, an approach combining direct resolving and iterative refinement was proposed. Firstly, the essential matrix was estimated from corresponding points, and the initial relative position and posture of the two cameras were obtained by decomposing the essential matrix; the process for determining the unique position and posture parameters was described in detail. Finally, by constructing the horizontal epipolar coordinate system, a constraint equation group was built from the corresponding points based on the coplanarity constraint, and the initial position and posture parameters were refined iteratively. The algorithm resists outliers by applying the RANdom SAmple Consensus (RANSAC) strategy and dynamically removing outliers during iterative refinement. The simulation experiments illustrate that the resolving efficiency and accuracy of the proposed algorithm outperform those of the traditional algorithm under various levels of injected random error, and the experiment with real data demonstrates that the algorithm can be effectively applied to relative position and posture estimation in 3D reconstruction.

    Symmetry optimization of polar coordinate back-projection reconstruction algorithm for fan beam CT
    ZHANG Jing ZHANG Quan LIU Yi GUI Zhiguo
    2014, 34(6):  1711-1714.  DOI: 10.11772/j.issn.1001-9081.2014.06.1711

    To improve the speed of image reconstruction based on fan-beam Filtered Back Projection (FBP), a new optimized fast reconstruction method was proposed for the polar-coordinate back-projection algorithm. According to the symmetry of trigonometric functions, the preprocessed projection data were back-projected simultaneously onto symmetric positions in polar coordinates. During the coordinate transformation of the back-projection data, the computation of bilinear interpolation was reduced by exploiting the symmetry of the pixel position parameters. The experimental results show that, compared with the traditional convolution back-projection algorithm, the proposed method improves reconstruction speed by more than a factor of eight without sacrificing image quality. The method is also applicable to 3D cone-beam reconstruction and can be extended to multi-slice spiral 3D reconstruction.

    Panoramic density estimation method in complex scene
    HE Kun LIU Zhou WEI Luning YANG Heng ZHU Tong LIU Yanwei ZHOU Jimei
    2014, 34(6):  1715-1718.  DOI: 10.11772/j.issn.1001-9081.2014.06.1715

    To overcome the drawback that traditional density estimation methods cannot be applied on a large scale, being limited by the high configuration workload of the algorithms and the limited number of high-density-level samples, a panoramic density estimation method based on surveillance video was proposed. Firstly, a weight map of the scene was constructed automatically to eliminate the effect of projective distortion in the imaging process; this procedure robustly and automatically learned the corresponding weight map for each scene, effectively reducing the configuration workload of the algorithm. Secondly, a large number of high-density-level samples were constructed from low-density-level samples by simulation. Finally, features such as the area and perimeter of the training samples were extracted to train a Support Vector Regression (SVR) machine to predict the density level of each scene. During testing, the panoramic density distribution was displayed in real time through the mapping between 2D images and the panoramic Geographic Information System (GIS) map. The results of an in-depth application in the Beijing North Railway Station square area show that the proposed panoramic density estimation method can accurately, quickly and effectively estimate the dynamic changes of crowd density in complex scenes.

    Fast image completion algorithm based on random correspondence
    XIAO Mang LI Guangyao TAN Yunlan GENG Ruijin LV Yangjian XIE Li PENG Lei
    2014, 34(6):  1719-1723.  DOI: 10.11772/j.issn.1001-9081.2014.06.1719

    Traditional patch-based image completion algorithms repeatedly search the whole image for the most similar patches and are easily affected by the confidence factor during structure propagation; as a result, they are inefficient and computationally expensive. To overcome these shortcomings, a fast image completion algorithm based on randomized correspondence was proposed. It adopts a randomized correspondence algorithm to search for sample regions whose structure and texture are similar to those of the target region, thereby reducing the search space. Meanwhile, the method of computing filling priorities based on the confidence factor and edge information was optimized to improve the correctness of structure propagation, and the computation of the most similar patches was improved. The experimental results show that, compared with traditional algorithms, the proposed approach obtains a 5-10 times speedup in repair rate and performs better in image completion.

    Image inpainting using reference image texture and distress image color
    YANG Su YANG Zhaozhong
    2014, 34(6):  1724-1726.  DOI: 10.11772/j.issn.1001-9081.2014.06.1724

    When dealing with a target image with a large damaged area and complex structure, traditional image restoration algorithms cannot obtain sufficient information from the image itself, leading to unsatisfactory repair results. An algorithm based on the texture of a reference image and the color of the target image was proposed to overcome this defect. First, a suitable reference image was obtained by intelligent search among similar images in an image database, and template filling was performed on the damaged regions by selecting appropriate content from the reference image. Then, the restored boundary was smoothed using the texture information of both the reference image and the target image, and color transfer and colorization were applied to make the appearance of the repaired part consistent with its surroundings. The experimental results show that this approach obtains better results and keeps region boundaries visually smooth.

    Canny edge detection algorithm based on robust principal component analysis
    NIU Fafa CHEN Li ZHANG Yongxin LI Qin
    2014, 34(6):  1727-1730.  DOI: 10.11772/j.issn.1001-9081.2014.06.1727

    To improve the accuracy and robustness of image edge detection, a new Canny edge detection algorithm based on Robust Principal Component Analysis (RPCA) was proposed. The image was decomposed into a principal component and a sparse component by RPCA, and the edge information of the principal component was then extracted by the Canny operator. The proposed algorithm formulates image edge detection as edge detection on the principal component of the image, which eliminates the interference of image "stains" on the detection results and suppresses noise. The experimental results show that the proposed algorithm outperforms the Log, Canny and Susan edge detection algorithms in terms of both accuracy and robustness.
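    RPCA here refers to principal component pursuit: split the image matrix D into a low-rank part L and a sparse part S by alternating singular value thresholding with elementwise soft thresholding; the Canny operator is then run on L. A compact inexact-ALM sketch (the default lambda = 1/sqrt(max(m,n)) and the step-size schedule are the usual choices, assumed rather than taken from the paper):

    ```python
    import numpy as np

    def rpca(D, lam=None, n_iter=100):
        """Inexact-ALM principal component pursuit: D ~ L (low-rank) + S (sparse)."""
        m, n = D.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        norm2 = np.linalg.norm(D, 2)               # spectral norm
        Y = D / max(norm2, np.abs(D).max() / lam)  # dual variable init
        mu = 1.25 / norm2
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        for _ in range(n_iter):
            # low-rank update: singular value thresholding
            U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
            # sparse update: elementwise soft thresholding
            T = D - L + Y / mu
            S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
            Y = Y + mu * (D - L - S)
            mu *= 1.5
        return L, S
    ```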

    Multi-scale image salient region extraction based on frequency domain
    YANG Dawei SONG Chengcheng LI Songjiang LI Dan
    2014, 34(6):  1731-1734.  DOI: 10.11772/j.issn.1001-9081.2014.06.1731

    To overcome the problems that saliency extraction results fail to preserve edges and lack inner detail when extracting image salient regions, a new multi-scale extraction approach based on the frequency domain was proposed. To remove redundant information and retain the novel part, the image was Fourier-transformed to obtain the spectral residual at multiple resolutions, and normalization was then applied to obtain the final saliency map. The simulation results show that the proposed method has good visual effect: it keeps the edges of the salient region and uniformly highlights the whole salient target. The area under the Receiver Operating Characteristic (ROC) curve of the results is also satisfactory.
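    The single-scale spectral residual computation can be sketched as follows: take the log-amplitude spectrum, subtract its local average, and recombine with the original phase; the multi-resolution repetition and normalization described above are omitted. The 3x3 averaging filter is the customary choice, assumed here:

    ```python
    import numpy as np

    def spectral_residual_saliency(gray):
        """Single-scale spectral residual: log-amplitude minus its 3x3 local
        average, recombined with the original phase and inverse-transformed."""
        gray = np.asarray(gray, dtype=float)
        f = np.fft.fft2(gray)
        log_amp = np.log(np.abs(f) + 1e-8)
        phase = np.angle(f)
        pad = np.pad(log_amp, 1, mode="edge")      # 3x3 box filter of log_amp
        h, w = log_amp.shape
        avg = sum(pad[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
        residual = log_amp - avg
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return sal / sal.max()
    ```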

    Tensor-based super-resolution algorithm for single image
WANG Feng
    2014, 34(6):  1735-1737.  DOI: 10.11772/j.issn.1001-9081.2014.06.1735

    Edge details directly affect the visual quality of an image. In order to preserve the structural information of image edges as much as possible and thus improve the quality of super-resolution images, a tensor-based single-image super-resolution algorithm was proposed. Firstly, the local geometric characteristics of the image were described by tensors; then the local characteristics of the interpolation points were estimated from those of the sampling points. Finally, the gray values of the interpolation points were calculated from the estimated characteristics. The experimental results show that the tensor-based super-resolution method better preserves the structural information of edges in the image, and performs better in Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM) and subjective visual effect.

    Image decomposition with L0-norm regularization from local gradient
    PAN Kangjun XIE Dehong
    2014, 34(6):  1738-1740.  DOI: 10.11772/j.issn.1001-9081.2014.06.1738

    An image decomposition method based on minimizing a variational function with L0-norm regularization of the local gradient was proposed, addressing the problem that the typical gradient computed from the first-order derivative cannot discriminate between the gradient of noise and the gradient of an edge. The variational function consists of a fidelity term and a regularization term, where the regularization term is the L0-norm of the local gradient estimated from the first-order derivative. The base layer, which includes edges but excludes noise, is obtained by minimizing the proposed variational function. Compared with the decomposition algorithm with the typical L0 gradient regularization, the proposed algorithm preserves sharp edges and avoids the impact of noise.

    Remote sensing image classification using layer-by-layer feature associative conditional random field
    YANG Yun XU Li
    2014, 34(6):  1741-1745.  DOI: 10.11772/j.issn.1001-9081.2014.06.1741

    To address the difficulty of expressing spatial context in the classification of high-resolution remote sensing imagery, a new multi-scale Conditional Random Field (CRF) model was proposed. Specifically, a given image was first represented as three superpixel layers, from fine to coarse: region, object and scene. Features were then extracted layer by layer, and the features from the three layers were associated with each other to form a feature vector for each node in the region layer. Secondly, a Support Vector Machine (SVM) was adopted to define the association potential function, and a Potts model weighted by a feature contrast function was used to define the interaction potential function of the CRF model, forming a layer-by-layer feature-associative, multi-scale SVM-CRF model. To confirm the effectiveness of the proposed model, experiments were conducted on two complex scenes from Quickbird remote sensing imagery. The results show that the proposed model achieves an accuracy on average 2.68%, 2.37% and 3.75% higher than the SVM-CRF models based on the region, object and scene layers respectively, while consuming less classification time.

    Multi-camera person identification based on hidden Markov model
    GAO Peng GUO Lijun ZHU Yiwei ZHANG Rong
    2014, 34(6):  1746-1752.  DOI: 10.11772/j.issn.1001-9081.2014.06.1746
    Asbtract ( )   PDF (1042KB) ( )  
    References | Related Articles | Metrics

    In non-overlapping multi-camera systems, single-shot person identification methods cannot deal well with appearance and viewpoint changes. Based on the multiple frames acquired from surveillance cameras, a new technique combining the Hidden Markov Model (HMM) with appearance-based features was proposed. First, considering the structural constraints of the human body, the whole-body appearance of each individual was divided vertically into equal sub-images. A multi-level threshold method was then used to extract Segment Representative Color (SRC) and Segment Standard Variation (SSV) features. The feature dataset acquired from multiple frames was used to train a continuous-density HMM, and the final recognition was performed with these well-trained models. Extensive experiments on two public datasets show that the proposed method achieves a high recognition rate, improves robustness against viewpoint changes and low resolution, and is simple and easy to implement.
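
    A minimal sketch of the stripe-based appearance feature, using per-strip mean and standard deviation as stand-ins for the paper's multi-level-threshold SRC/SSV extraction:

    ```python
    import numpy as np

    def stripe_features(person, n_strips=6):
        """Divide a whole-body image (H x W x 3) into equal vertical strips
        and concatenate each strip's representative color and color variation."""
        h = person.shape[0]
        feats = []
        for i in range(n_strips):
            strip = person[i * h // n_strips:(i + 1) * h // n_strips].reshape(-1, 3)
            feats.append(np.concatenate([strip.mean(axis=0),   # ~SRC
                                         strip.std(axis=0)]))  # ~SSV
        return np.concatenate(feats)
    ```

    A sequence of such vectors over the frames of one track is what would feed the continuous-density HMM.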

    Fast haze removal algorithm for single image based on human visual characteristics
    ZHANG Hongying ZHANG Sainan WU Yadong WU Bin
    2014, 34(6):  1753-1757.  DOI: 10.11772/j.issn.1001-9081.2014.06.1753
    Asbtract ( )   PDF (953KB) ( )  
    References | Related Articles | Metrics

    In order to remove the effects of weather from degraded images, a fast haze removal algorithm for single images based on human visual characteristics was proposed. According to the luminance distribution of the hazy image and human visual characteristics, the proposed method first used the luminance component to estimate a coarse transmission map, then refined the transmission map with a linear spatial filter and obtained the dehazed image via the atmospheric scattering model. Finally, a new image enhancement fitting function was applied to the luminance component of the dehazed image to make it more natural and clear. The experimental results show that the proposed algorithm removes haze effectively and outperforms existing algorithms in terms of contrast, information entropy and computing time.
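
    The atmospheric scattering model the abstract relies on is I(x) = J(x)t(x) + A(1 - t(x)). A hedged sketch of the recovery step follows; the luminance-based coarse transmission below is an assumption standing in for the paper's exact estimator:

    ```python
    import numpy as np

    def coarse_transmission(lum, A, omega=0.95):
        """Coarse transmission map from the normalized luminance channel."""
        return 1.0 - omega * (lum / A)

    def recover_scene(I, t, A, t0=0.1):
        """Invert I = J*t + A*(1-t); clamping t avoids amplifying sensor noise."""
        t = np.clip(t, t0, 1.0)[..., None]
        return np.clip((I - A) / t + A, 0.0, 1.0)
    ```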

    Adaptive image diffusion filtering based on variable exponent in the complex wavelet domain
    Jin-Hua LIU
    2014, 34(6):  1758-1761.  DOI: 10.11772/j.issn.1001-9081.2014.06.1758
    Asbtract ( )   PDF (587KB) ( )  
    References | Related Articles | Metrics

    To overcome the staircase effect and edge blurring caused by the traditional anisotropic diffusion model, the perfect-reconstruction and direction-selectivity properties of the complex wavelet transform were exploited to design an adaptive diffusion model combining the gradient with the complex wavelet transform modulus in the complex wavelet domain, and an adaptive image diffusion filtering algorithm based on a variable exponent was proposed. The filtering performance of the proposed algorithm was tested by computer simulation. The experimental results show that noise can be filtered out effectively under low Signal-to-Noise Ratio (SNR) conditions, while edges and textures are well preserved by the proposed method.
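
    A spatial-domain sketch of the variable-exponent idea (the paper applies it to complex wavelet coefficients; the constants here are illustrative):

    ```python
    import numpy as np

    def variable_exponent_diffusion(img, n_iter=20, k=10.0, dt=0.15):
        """Anisotropic diffusion whose exponent adapts to the local gradient:
        p -> 1 near strong edges (preserving them), p -> 2 in flat regions
        (smoothing noise aggressively)."""
        u = img.astype(float)
        for _ in range(n_iter):
            upd = np.zeros_like(u)
            for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
                g = np.roll(u, shift, axis) - u
                p = 1.0 + 1.0 / (1.0 + (np.abs(g) / k) ** 2)
                upd += g / (1.0 + (np.abs(g) / k) ** p)
            u += dt * upd
        return u
    ```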

    Vehicle navigation method based on trinocular vision
    WANG Jun LIU Hongyan
    2014, 34(6):  1762-1764.  DOI: 10.11772/j.issn.1001-9081.2014.06.1762
    Asbtract ( )   PDF (607KB) ( )  
    References | Related Articles | Metrics

    A classification method based on trinocular stereovision, consisting of a geometric classifier and a color classifier, was proposed to autonomously guide vehicles on unstructured terrain. In this method, the stereovision system captured rich 3D data, including range and color information of the surrounding environment. The geometric classifier was used to detect the broad class of ground from the collected data, and the color classifier was adopted to label ground subclasses with different colors. During the classification stage, the classification data were updated continuously so that the vehicle could adapt to a changing environment. The two broad categories of terrain, drivable and non-drivable, were marked with different colors by the classification method. The experimental results show that the method classifies the terrain captured by the trinocular stereovision system accurately.
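
    A hedged sketch of the geometric half of such a classifier: grid the stereo point cloud over the ground plane and call a cell drivable when its height spread is small (cell size and step threshold are assumptions):

    ```python
    import numpy as np

    def geometric_classify(points, cell=0.2, max_step=0.15):
        """points: (N, 3) array of (x, y, z) in metres. Returns a dict mapping
        grid cells to True (drivable) or False (obstacle)."""
        ij = np.floor(points[:, :2] / cell).astype(int)
        spread = {}
        for key, z in zip(map(tuple, ij), points[:, 2]):
            lo, hi = spread.get(key, (z, z))
            spread[key] = (min(lo, z), max(hi, z))
        return {key: (hi - lo) < max_step for key, (lo, hi) in spread.items()}
    ```

    The color classifier would then be trained online on cells this test labels as ground, which is what lets the vehicle adapt as terrain appearance changes.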

    Blood cell image thresholding method using cloud model
    WU Tao
    2014, 34(6):  1765-1769.  DOI: 10.11772/j.issn.1001-9081.2014.06.1765
    Asbtract ( )   PDF (905KB) ( )  
    References | Related Articles | Metrics

    Traditional statistical thresholding methods that construct optimal threshold criteria directly from class variance are versatile, but in some cases lack specificity for the practical application. In order to select the optimal threshold for blood cell image segmentation and to extract white blood cell nuclei, a simple and fast method based on the cloud model was proposed. The method first generated cloud models corresponding to the white blood cell nuclei and the image background respectively, defined a new thresholding criterion using the hyper-entropy of the cloud models, obtained the optimal grayscale threshold by maximizing this criterion, and finally performed blood cell image thresholding and nucleus extraction. The experimental results indicate that, compared with traditional methods including maximum inter-class variance, maximum entropy, minimum error, minimum intra-class variance sum, and minimum maximal intra-class variance, the proposed method is well suited to blood cell image thresholding, and is reasonable and effective.
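
    A sketch of the backward cloud generator and a threshold scan built on it; the sign of the score below (preferring the split with the smallest total hyper-entropy) is an assumption standing in for the paper's exact criterion:

    ```python
    import numpy as np

    def backward_cloud(samples):
        """Estimate the cloud model digits (Ex, En, He) from 1D samples."""
        ex = samples.mean()
        en = np.sqrt(np.pi / 2.0) * np.abs(samples - ex).mean()
        he = np.sqrt(max(samples.var() - en ** 2, 0.0))
        return ex, en, he

    def cloud_threshold(gray):
        """Scan grey levels; score each split by the hyper-entropies of the
        nucleus and background clouds it induces."""
        best_t, best = 0, -np.inf
        flat = gray.ravel().astype(float)
        for t in range(1, 255):
            fg, bg = flat[flat > t], flat[flat <= t]
            if fg.size < 2 or bg.size < 2:
                continue
            score = -(backward_cloud(fg)[2] + backward_cloud(bg)[2])
            if score > best:
                best_t, best = t, score
        return best_t
    ```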

    Modeling and verification of service-oriented cyber physical systems
    LIU Mingxing MA Wubin DENG Su HUANG Hongbin
    2014, 34(6):  1770-1773.  DOI: 10.11772/j.issn.1001-9081.2014.06.1770
    Asbtract ( )   PDF (614KB) ( )  
    References | Related Articles | Metrics

    Concerning the problems and challenges in Cyber Physical Systems (CPS), a new modeling and verification method for CPS based on service composition was proposed. Firstly, a composition structure of CPS was proposed, covering the physical world, sensor systems, information processing systems, control systems and time constraints. Based on this structure, a service classification and composition framework for CPS resources was proposed. Physical environment modeling, atomic service modeling and service composition of CPS were then given based on timed automata theory. Finally, through case design and the model checking tool Uppaal, experimental results were given to illustrate the correctness of the service-oriented CPS modeling approach with respect to system safety, reachability, liveness and time constraints. The results verify these properties and the correctness of the proposed method.

    Hybrid Web service composition approach based on interface automata
    MA Changwei MA Hongjiang
    2014, 34(6):  1774-1778.  DOI: 10.11772/j.issn.1001-9081.2014.06.1774
    Asbtract ( )   PDF (731KB) ( )  
    References | Related Articles | Metrics

    To realize hybrid Web service composition where Web Services Description Language (WSDL) and Web Ontology Language for Services (OWL-S) coexist, an approach based on interface automata was presented. After analyzing the relations between WSDL and OWL-S, interface automata were employed to accomplish the automatic recognition and composition of Web services. Meanwhile, optimal results satisfying different service business logics were obtained by comparing the composed services against predefined service-quality requirements. The results on a tourism service example show that the approach is feasible and effective, improving the efficiency of service composition by 5%-10%.

    Test case generation method for Web applications based on state transition
    ZHANG Shaokang WANG Shuyan SUN Jiaze
    2014, 34(6):  1779-1782.  DOI: 10.11772/j.issn.1001-9081.2014.06.1779
    Asbtract ( )   PDF (683KB) ( )  
    References | Related Articles | Metrics

    To address the low error-detection rate of Web application testing, a test case generation method for Web applications based on state transition was proposed. The link relationships of a Web application were represented by constructing a page state transition diagram, an event transition table and a navigation transition table. The approach generated test paths from the page state transition tree derived from the state transition diagram. A coverage criterion based on equivalence partitioning was proposed, and a test case set was then produced by combining information from the event transition table and the navigation transition table. The results show that the proposed method can represent the link relationships of Web applications effectively and improve the error-detection rate of the test cases.
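
    A minimal sketch of deriving test paths by unfolding a page state-transition diagram into a tree (the data layout and stop rule are illustrative):

    ```python
    def test_paths(transitions, start, max_depth=6):
        """transitions: list of (src_page, event, dst_page). Each path reuses
        an edge at most once, mimicking a state transition tree unfolding."""
        paths = []

        def walk(state, path, used):
            nexts = [(e, s) for (src, e, s) in transitions
                     if src == state and (state, e, s) not in used]
            if not nexts or len(path) >= max_depth:
                paths.append(path)
                return
            for e, s in nexts:
                walk(s, path + [(state, e, s)], used | {(state, e, s)})

        walk(start, [], set())
        return paths

    # Example: login -> home -> results -> home
    edges = [("login", "submit", "home"), ("home", "search", "results"),
             ("results", "back", "home")]
    print(test_paths(edges, "login"))
    ```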

    Construction of service processes oriented to service clusters
    HU Qiang
    2014, 34(6):  1783-1787.  DOI: 10.11772/j.issn.1001-9081.2014.06.1783
    Asbtract ( )   PDF (715KB) ( )  
    References | Related Articles | Metrics

    To reduce building time, optimize service quality and improve the adaptive response of service processes, a method for constructing service processes based on service clusters was proposed. Service clusters were adopted as the component units of service processes. The construction of a service process was divided into two phases, recommending component services from service clusters and computing the optimal service quality of the process, and the corresponding solutions were given. A simulation experiment was conducted on 10000 Web services with different process patterns. Compared with service processes constructed from atomic Web services, building and rebuilding times were decreased by more than 50% and service quality was improved by more than 10%. The simulation results confirm that the proposed method can greatly reduce building time while optimizing service quality and adaptive response when building service processes.
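
    For the quality-computation phase, the standard aggregation rules for a sequential pattern look like the sketch below (attribute names are assumptions; parallel and choice patterns aggregate differently):

    ```python
    def sequence_qos(services):
        """QoS of a sequential composition: times and costs add,
        reliabilities multiply."""
        qos = {"time": 0.0, "cost": 0.0, "reliability": 1.0}
        for s in services:
            qos["time"] += s["time"]
            qos["cost"] += s["cost"]
            qos["reliability"] *= s["reliability"]
        return qos

    print(sequence_qos([{"time": 2, "cost": 1, "reliability": 0.99},
                        {"time": 3, "cost": 2, "reliability": 0.98}]))
    ```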

    Predicting inconsistent change probability of code clone based on latent Dirichlet allocation model
    YI Lili ZHANG Liping WANG Chunhui TU Ying LIU Dongsheng
    2014, 34(6):  1788-1791.  DOI: 10.11772/j.issn.1001-9081.2014.06.1788
    Asbtract ( )   PDF (748KB) ( )  
    References | Related Articles | Metrics

    Programmers' copy, paste and modify activities produce a large number of code clones in software systems, and the inconsistent change of code clones is a main cause of program errors and increased maintenance costs during the evolution of software versions. To address this problem, a new method was proposed. The mapping relationships between clone groups were built first; the topics of lineal clone genealogies were then extracted using the Latent Dirichlet Allocation (LDA) model; finally, the probability of inconsistent change of code clones was predicted. A software system spanning eight versions was tested and clear discrimination was obtained. The experimental results show that the method can effectively predict the probability of inconsistent change and can be used to evaluate software quality and credibility.
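
    A hedged sketch of the topic-extraction step with scikit-learn (the tokenization of clone source text and the number of topics are assumptions, not the paper's settings):

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # one "document" per lineal clone genealogy, built from its source text
    docs = ["read buffer loop copy length check",
            "parse token emit node error recover"]       # illustrative stand-ins
    X = CountVectorizer(token_pattern=r"[A-Za-z_]\w+").fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=10, random_state=0)
    theta = lda.fit_transform(X)   # per-genealogy topic distributions
    # A classifier trained on labelled version history could then map theta
    # to the probability of inconsistent change.
    ```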

    Weighted colored Petri-net modeling method for workflows
    DU Yibo
    2014, 34(6):  1792-1797.  DOI: 10.11772/j.issn.1001-9081.2014.06.1792
    Asbtract ( )   PDF (677KB) ( )  
    References | Related Articles | Metrics

    In the classical Petri net, workflow has no strict restriction or definition: transition tokens (including their type, quantity and flow direction) bind and arrive at subsequent places in different ways, and the description and analysis of multiple performance dimensions cannot be handled effectively. A weighted colored Petri-net modeling method for workflows was proposed by defining the workflow structure and the color set of the Petri net and adding multi-performance analysis. The concepts, weight vectors and structure of the method were introduced, and the dangerous chemicals logistics process was taken as an example to put forward a method for modeling and measuring the performance of that process along two dimensions, time and safety. The method was then used to model, measure and analyze the performance of the dangerous chemicals logistics process, yielding a total performance value of 3.8094. Finally, by screening for weaknesses in local performance, the bottleneck of the process was found, which shows that the method is a scientific approach to multi-performance analysis of workflows.
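
    As a toy illustration of two-dimension performance measurement (the weights and scores below are invented, not the paper's data):

    ```python
    def weighted_performance(activity_scores, dim_weights):
        """Average workflow performance: each activity's (time, safety) scores
        are combined by the dimension weights, then averaged along the route."""
        total = sum(sum(w * s for w, s in zip(dim_weights, scores))
                    for scores in activity_scores)
        return total / len(activity_scores)

    # two activities scored on (time, safety), weighted 0.6 / 0.4
    print(weighted_performance([(4.0, 3.5), (3.8, 4.2)], (0.6, 0.4)))
    ```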

    Cooperative shock search particle swarm optimization with chaos for resource-constrained project scheduling problems
    DAI Yueming TANG Jitao JI Zhicheng
    2014, 34(6):  1798-1802.  DOI: 10.11772/j.issn.1001-9081.2014.06.1798
    Asbtract ( )   PDF (759KB) ( )  
    References | Related Articles | Metrics

    For Resource-Constrained Project Scheduling Problems (RCPSP), a Cooperative Shock search Particle Swarm Optimization with Chaos (CSCPSO) was proposed. On the basis of the particle attractor, a bidirectional cooperative shock search mechanism was established to enhance search accuracy and population diversity: while converging to the particle attractor, particles also adjusted, by shock search, those dimensions whose adjacency relationships were inconsistent with the attractor's. Combined with particle-based topological sorting and the serial schedule generation scheme, the resulting scheduling scheme satisfies the project's resource and precedence constraints. Tests on specific instances show that the proposed algorithm achieves higher accuracy and better stability for RCPSP.
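
    A compact sketch of the serial schedule generation scheme the particles feed into (the data layout is illustrative; `order` is assumed precedence-feasible, as produced by the particle's topological sorting):

    ```python
    def ssgs(order, dur, pred, demand, cap, horizon=10_000):
        """Serial schedule generation: place each activity, in priority order,
        at the earliest precedence- and resource-feasible start time."""
        usage = [[0] * len(cap) for _ in range(horizon)]
        start = {}
        for j in order:
            t = max((start[p] + dur[p] for p in pred[j]), default=0)
            while any(usage[t + d][r] + demand[j][r] > cap[r]
                      for d in range(dur[j]) for r in range(len(cap))):
                t += 1
            start[j] = t
            for d in range(dur[j]):
                for r in range(len(cap)):
                    usage[t + d][r] += demand[j][r]
        return start
    ```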

    Efficient job scheduling method for injection molding workshop
    LI Qirui PENG Zhiping CHEN Xiaolong
    2014, 34(6):  1803-1806.  DOI: 10.11772/j.issn.1001-9081.2014.06.1803
    Asbtract ( )   PDF (551KB) ( )  
    References | Related Articles | Metrics

    To address the low scheduling efficiency of injection molding workshops, an improved job-shop scheduling method based on mold clustering was proposed. Production time was reduced by merging jobs with the same tool list, and energy consumption was reduced by preferentially scheduling jobs onto smaller injection machines. Theoretical analysis and experimental results show that the proposed method can improve productivity and reduce power consumption by more than 50%, making job scheduling in the injection molding workshop more efficient.
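
    A minimal sketch of the merging step (the job structure is an assumption): jobs sharing a tool list are batched so the mold is mounted once:

    ```python
    from collections import defaultdict

    def batch_by_tool_list(jobs):
        """jobs: iterable of (job_id, tool_list, shots). Jobs with identical
        tool lists merge into one batch, eliminating repeated mold changes;
        returning smaller batches first lets small machines be preferred."""
        batches = defaultdict(list)
        for job_id, tools, shots in jobs:
            batches[frozenset(tools)].append((job_id, shots))
        return sorted(batches.values(), key=lambda b: sum(s for _, s in b))
    ```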

    Application of biclustering algorithm in high-value telecommunication customer segmentation
    LIN Qin XUE Yun
    2014, 34(6):  1807-1811.  DOI: 10.11772/j.issn.1001-9081.2014.06.1807
    Asbtract ( )   PDF (773KB) ( )  
    References | Related Articles | Metrics

    To improve the accuracy of traditional customer segmentation methods, the Large Average Submatrix (LAS) biclustering algorithm was used, which clusters customer samples and consumption attributes simultaneously to identify upscale, high-value customers. By introducing a new value yardstick and a novel index named PA, the LAS biclustering algorithm was compared with the K-means clustering algorithm in a simulation experiment on the consumption data of a telecom corporation. The experimental results show that the LAS biclustering algorithm finds more groups of high-value customers and obtains more accurate clusters, making it better suited to the recognition and segmentation of high-value customers.
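
    A simplified single-bicluster sketch in the LAS spirit: alternately keep the rows and columns whose means over the current bicluster beat the global mean (the real LAS maximizes a significance score and extracts several biclusters):

    ```python
    import numpy as np

    def las_like_bicluster(M, n_iter=50, seed=0):
        """Greedy search for one high-average submatrix of M."""
        rng = np.random.default_rng(seed)
        rows = rng.random(M.shape[0]) < 0.5
        cols = np.ones(M.shape[1], dtype=bool)
        mu = M.mean()
        for _ in range(n_iter):
            if not rows.any():
                break
            cols = M[rows].mean(axis=0) > mu
            if not cols.any():
                break
            rows = M[:, cols].mean(axis=1) > mu
        return rows, cols   # boolean masks: customers x consumption attributes
    ```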

    4D flight trajectory prediction model based on improved Kalman filter
    WANG Taobo HUANG Baojun
    2014, 34(6):  1812-1815.  DOI: 10.11772/j.issn.1001-9081.2014.06.1812
    Asbtract ( )   PDF (563KB) ( )  
    References | Related Articles | Metrics

    To address the large number of parameters and low prediction precision of traditional aerodynamic 4D trajectory prediction models, an Improved Kalman Filter (IKF) algorithm was proposed to estimate 4D trajectories, increasing prediction accuracy through real-time estimation of the system noise. First, according to the varying direction and velocity of the aircraft during flight, the velocity was resolved into components. Then prediction models were set up separately with the KF and the IKF. Finally, the predictive deviations of the two algorithms in the X, Y and Z directions were compared and the smaller one was selected. The simulation results show that the IKF reduces the deviations in the X and Y directions by 17.65% and 98.03% respectively, while the KF has higher accuracy in the Z direction. Furthermore, analysis of the IKF over different time intervals shows that, within the protection zone width of the arrival procedure (9.46 km), the prediction time interval can be increased to 20 s.
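
    A hedged sketch of one filter step with an innovation-based noise update standing in for the paper's real-time system-noise estimation (the forgetting factor is an assumption):

    ```python
    import numpy as np

    def ikf_step(x, P, z, F, H, Q, R, alpha=0.3):
        """Kalman predict/update, then adapt the process noise Q from the
        innovation so the filter tracks manoeuvring flight segments."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        y = z - H @ x_pred                          # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        Q_new = (1 - alpha) * Q + alpha * (K @ np.outer(y, y) @ K.T)
        return x_new, P_new, Q_new
    ```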

    Geographic information aggregation model of spatial data described by Geography Markup Language
    MIAO Lizhi JIAO Donglai YANG Lijun
    2014, 34(6):  1816-1818.  DOI: 10.11772/j.issn.1001-9081.2014.06.1816
    Asbtract ( )   PDF (631KB) ( )  
    References | Related Articles | Metrics

    To implement dynamic aggregation of dispersed Geography Markup Language (GML) geospatial data, an aggregation mapping model based on the GeoRSS standard was proposed, considering the openness, self-description and dispersion characteristics of GML. A four-tier integration framework, its prototype architecture and its workflow were developed to apply the model. An aggregation prototype system was then designed and implemented for aggregating dispersed GML spatial data according to this architecture. Experiments on the prototype system verify the feasibility of GML geospatial data aggregation and confirm the correctness and availability of the model. The prototype helps users quickly find GML geospatial data within massive Geographic Information System (GIS) datasets, and supports selection, analysis and classification of existing data as well as instant updating and aggregation.
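
    A small sketch of the mapping the model performs, turning one GML point feature into an Atom entry carrying a georss:point (element layout follows the GeoRSS-Simple convention; the helper name is illustrative):

    ```python
    import xml.etree.ElementTree as ET

    ATOM = "http://www.w3.org/2005/Atom"
    GEORSS = "http://www.georss.org/georss"

    def to_georss_entry(title, lat, lon):
        """Wrap a feature as an Atom entry; georss:point holds 'lat lon'."""
        entry = ET.Element(f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}title").text = title
        ET.SubElement(entry, f"{{{GEORSS}}}point").text = f"{lat} {lon}"
        return ET.tostring(entry, encoding="unicode")

    print(to_georss_entry("station 042", 32.06, 118.79))
    ```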

    Fuzzy comprehensive evaluation method for emergency management capability based on formal concept analysis and analytic hierarchy process
    QIU Qizhi ZHANG Jinbao ZHOU Jie
    2014, 34(6):  1819-1824.  DOI: 10.11772/j.issn.1001-9081.2014.06.1819
    Asbtract ( )   PDF (851KB) ( )  
    References | Related Articles | Metrics

    Research on emergency management capability evaluation has mostly focused on evaluation methods and models, paying little attention to how the evaluation should vary with the emergency type, and suffering from a lack of dynamics and low participation by scholars and the public. A new emergency management capability evaluation method combining Formal Concept Analysis (FCA) and the Analytic Hierarchy Process (AHP) was put forward to address these problems. FCA was used to assign the first-level index weights, owing to its strength in processing data and extracting rules, while AHP was used to take the various factors into account. Fuzzy Comprehensive Evaluation (FCE) was then used to derive the evaluation results. As a result, the index weights, which are constant in the traditional approach, become adaptive, and the evaluation acquires a dynamic character. Finally, experiments show the feasibility and effectiveness of the method.
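
    The final FCE step is a small matrix product; a sketch with the weighted-average operator (grade names and numbers are invented):

    ```python
    import numpy as np

    def fuzzy_comprehensive(W, R):
        """b = W . R with the M(*, +) operator: W weights the indices,
        R holds each index's membership over the evaluation grades."""
        b = np.asarray(W) @ np.asarray(R)
        return b / b.sum()

    # 3 indices x 4 grades (excellent/good/fair/poor), illustrative values
    R = [[0.5, 0.3, 0.2, 0.0],
         [0.2, 0.5, 0.2, 0.1],
         [0.1, 0.4, 0.4, 0.1]]
    print(fuzzy_comprehensive([0.5, 0.3, 0.2], R))
    ```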

    Safety and energy saving detection technology of automatic door based on omni-directional vision sensor
    LIN Lulu JIANG Rongjian XU Haitao TANG Yiping
    2014, 34(6):  1825-1829.  DOI: 10.11772/j.issn.1001-9081.2014.06.1825
    Asbtract ( )   PDF (857KB) ( )  
    References | Related Articles | Metrics

    Concerning the efficiency and security issues of automatic doors, a safety and energy-saving detection technology for automatic doors based on an Omni-Directional Vision Sensor (ODVS) was proposed. Firstly, the 360° panoramic image around the automatic door was captured in real time by the ODVS and preprocessed according to the detection requirements. Secondly, moving targets were detected and tracked with the Motion History or Energy Images (MHoEI) algorithm. The behavior of pedestrians was then analyzed according to the motion direction and spatial position of the foreground objects. Finally, the automatic door was controlled to open or close according to pedestrian behavior and state, making it secure, energy-saving and comfortable; at the same time, the number of people passing through the door can be counted accurately, which can be used directly in intelligent monitoring and business surveys. The experimental results indicate that the proposed technology can recognize pedestrian behavior around the automatic door, avoid its security risks and improve the accuracy of people counting.
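
    The motion-history side of the MHoEI step can be sketched with the classic motion-history-image update rule (the duration constant is an assumption):

    ```python
    import numpy as np

    def update_mhi(mhi, motion_mask, timestamp, duration=1.0):
        """Pixels moving now take the current timestamp; entries older than
        `duration` seconds decay to zero, leaving a fading motion trail whose
        gradient gives the walking direction."""
        mhi = np.where(motion_mask, float(timestamp), mhi)
        mhi[mhi < timestamp - duration] = 0.0
        return mhi
    ```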

    Double four-step route phase-shifting average algorithm
    CHEN Liwei LIU Yong BI Guotang JIANG Yong
    2014, 34(6):  1830-1833.  DOI: 10.11772/j.issn.1001-9081.2014.06.1830
    Asbtract ( )   PDF (724KB) ( )  
    References | Related Articles | Metrics

    Gamma nonlinearity and random noise introduced by optical devices are the two main sources of phase error in structured light projection. The double three-step phase-shifting algorithm has a unique advantage in inhibiting both, but its measurements still suffer from two drawbacks: relatively high nonlinear error and low measuring precision. A double four-step phase-shifting average algorithm was proposed to resolve these problems: it applied the idea of phase-aligned averaging to the four-step phase-shifting algorithm to lower the effect of nonlinear error, and put forward a phase averaging method based on phase-field space transformation in multi-frequency heterodyne to weaken random noise and improve measuring precision. The experimental results show that the proposed method achieves higher accuracy and adaptability in phase unwrapping.
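
    For reference, the wrapped phase of one four-step set follows directly from the fringe model I_k = A + B cos(phi + k*pi/2); the "double" scheme then averages this phase with that of a second set offset by pi/4, which is what suppresses the gamma-induced ripple (this averaging detail is the standard construction, not necessarily the paper's exact one):

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Each image lags the previous by pi/2, so I4 - I2 = 2B sin(phi)
        and I1 - I3 = 2B cos(phi)."""
        return np.arctan2(i4 - i2, i1 - i3)
    ```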

    Feature evaluation for advanced radar emitter signals based on SPA-FAHP
    ZHU Bin JIN Weidong YU Zhibin ZHU Jianliang
    2014, 34(6):  1834-1838.  DOI: 10.11772/j.issn.1001-9081.2014.06.1834
    Asbtract ( )   PDF (715KB) ( )  
    References | Related Articles | Metrics

    Concerning the lack of effective means for the feature evaluation of Advanced Radar Emitter Signals (ARES) and the excessive dependence of the Analytic Hierarchy Process (AHP) on expert experience, a new ARES feature evaluation model named SPA-FAHP was proposed based on Set Pair Analysis (SPA) and the Fuzzy Analytic Hierarchy Process (FAHP). To handle the uncertain or fuzzy judgments of evaluators assessing large volumes of radar emitter signal data, traditional AHP was improved by introducing triangular fuzzy numbers, and the index weights of the ARES feature evaluation system were derived by FAHP. The expert decision matrix of traditional AHP was then improved and analyzed for identical degree by introducing SPA theory, resolving the over-reliance of AHP decisions on expert experience. Finally, ARES features were evaluated comprehensively by combining the index weight matrix with the identical-degree matrix of the decision. The calculation results show that the model is effective and feasible, and achieves a more objective analysis and evaluation of ARES features.
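
    A sketch of the triangular-fuzzy weighting step in Buckley's style, one common way to realize FAHP (the centroid defuzzification is an assumption; the SPA identical-degree analysis is not shown):

    ```python
    import numpy as np

    def fuzzy_ahp_weights(L, M, U):
        """L, M, U: lower/modal/upper triangular comparison matrices.
        Row geometric means give fuzzy weights; the centroid defuzzifies them."""
        n = L.shape[1]
        gl, gm, gu = (np.prod(A, axis=1) ** (1.0 / n) for A in (L, M, U))
        lo, mid, hi = gl / gu.sum(), gm / gm.sum(), gu / gl.sum()
        w = (lo + mid + hi) / 3.0
        return w / w.sum()
    ```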
