
Table of Contents

    01 April 2012, Volume 32 Issue 04
    Internet of things
    On Internet and Internet of things
    ZENG Hua-shen
    2012, 32(04):  893-899.  DOI: 10.3724/SP.J.1087.2012.00893
    Based on an analysis of the development history of the Internet and the changes in its application environment, this paper provided a comprehensive discussion of the motivation, connotation, organization and some key issues of the Internet of Things (IoT). After analyzing typical definitions of IoT, the author argued that a more reasonable broad definition is the one derived from the ITU's vision of full-scope information interoperability: IoT as an Internet platform for applications involving humans, machines and intelligent physical objects (SPO), and as a generalization of next-generation Internet applications toward Pervasive (Ubiquitous) Computing. Based on this definition, the author analyzed the basic organization of IoT, which consists of multiple Customer Premises Networks (CPN) interconnected via backbone subnetworks, and examined the two types of components of the SPO CPN and their supporting techniques. It was also pointed out that the introduction of SPO has little impact on Internet technology, except for the access part of the CPN of application systems. The author disagreed with the narrow definition of IoT given by the European Communities, which restricts IoT to an Internet interconnecting physical objects, and pointed out that their IoT models and architectures are essentially those of network application systems.
    Vehicular CPS:an application of IoT in vehicular networks
    LIU Xiao-yang WU Min-you
    2012, 32(04):  900-904.  DOI: 10.3724/SP.J.1087.2012.00900
    This paper discussed the connotation and extension of two hot concepts, namely "the Internet of Things (IoT)" and "vehicular CPS (Cyber-Physical System)". By analyzing their courses of development and interrelationship, and comparing their applications in daily life, the authors argued that vehicular CPS can be regarded as an application of IoT in vehicular networks. Furthermore, the authors looked ahead to the promising services and applications that vehicular CPS will bring. With this in mind, the authors discussed the key technologies for implementing vehicular CPS and introduced an initiative. The authors also discussed in detail the academic research at home and abroad in the areas of the Internet of Things and vehicular CPS.
    Traffic flow model in intelligent transport system based on precise sensor network
    WANG Tao LI Zhi-shu
    2012, 32(04):  905-909.  DOI: 10.3724/SP.J.1087.2012.00905
    First, the advantages of the precise sensor network over the traditional sensor network were introduced. Then a traffic flow model for the precise sensor network was built. To improve the interpretability of the model, precise license plate identification data were used, and variables such as the space travel time and the classified traffic flow were introduced. The established model was dynamic in essence. The experimental results show that the fitting accuracy is high: the mean absolute error between the fitted and the standard value is less than 9 seconds and the mean relative error is less than 5%, so the model has an accuracy above 90%. Finally, the great value of the precise sensor network in real traffic environments was summarized.
    Network and communications
    Analysis of coordinated transmission pre-coding in distributed wireless communication system
    YANG Jun ZHANG Zheng-xiao LI Min-zhi JIANG Zhan-jun
    2012, 32(04):  910-912.  DOI: 10.3724/SP.J.1087.2012.00910
    In distributed wireless communication and Coordinated Multi-Point (CoMP) systems, the deterioration of channel quality seriously affects the reception quality of edge users, and therefore coordinated coding processing is used to improve it. A coordinated transmission and joint pre-coding method was proposed in this paper. Coordinated Remote Antenna Units (RAU) were processed jointly: according to the channel state information, different pre-coding rules were adopted by each RAU to transmit data to the same user, and a Maximal-Ratio Combining (MRC) algorithm was used to combine the received signals at the terminal. The simulation results show that the proposed method can effectively decrease the Bit Error Rate (BER) and improve the transmission reliability.
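    The combining step at the terminal can be illustrated with a few lines of NumPy. The sketch below shows classic maximal-ratio combining of the signals arriving from several coordinated RAUs; the flat-fading channel, BPSK modulation and noise level are illustrative assumptions, not the configuration used in the paper.

        # Maximal-ratio combining (MRC) sketch: weight each branch by the conjugate
        # of its channel estimate, sum, then make symbol decisions.
        import numpy as np

        rng = np.random.default_rng(0)
        num_rau, num_symbols = 3, 1000
        symbols = rng.choice([-1.0, 1.0], size=num_symbols)            # BPSK symbols
        h = (rng.normal(size=num_rau) + 1j * rng.normal(size=num_rau)) / np.sqrt(2)
        noise = 0.3 * (rng.normal(size=(num_rau, num_symbols))
                       + 1j * rng.normal(size=(num_rau, num_symbols))) / np.sqrt(2)
        received = h[:, None] * symbols[None, :] + noise               # per-RAU received signals

        combined = (np.conj(h)[:, None] * received).sum(axis=0)        # MRC combining
        decisions = np.where(combined.real >= 0, 1.0, -1.0)
        print("BER:", np.mean(decisions != symbols))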
    Task allocation strategy in heterogeneous wireless sensor networks based on 0-1 programming
    JIANG Zhi-qiang LIAO Xiao-feng LIU Qun
    2012, 32(04):  913-916. 
    In order to reduce the total energy consumption of processing tasks in a Wireless Sensor Network (WSN), balance the residual energy of nodes and decrease the task scheduling time, a task allocation algorithm with these three objectives was proposed. A cost function was built with the theory of 0-1 nonlinear programming, and the energy variance was used to express the equilibrium degree of the residual energy of nodes. Discrete Particle Swarm Optimization (DPSO) was used to minimize the cost function and obtain an optimized task allocation strategy. The simulation results verify that the task allocation strategy based on 0-1 programming with DPSO can decrease the energy consumption efficiently, balance the residual energy of nodes and reduce the task scheduling time.
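    The 0-1 encoding and the discrete PSO search can be sketched as follows. This is a minimal binary-PSO sketch over a toy energy matrix with a penalty that forces each task onto exactly one node; the matrix, penalty weight and swarm parameters are illustrative assumptions rather than the cost function of the paper.

        # Binary PSO for a 0-1 task-allocation cost function (sketch).
        import numpy as np

        rng = np.random.default_rng(1)
        tasks, nodes = 6, 4
        energy = rng.uniform(1.0, 5.0, size=(tasks, nodes))    # energy to run task i on node j

        def cost(x):
            """Total energy plus a penalty when a task is not assigned to exactly one node."""
            x = x.reshape(tasks, nodes)
            penalty = 50.0 * np.abs(x.sum(axis=1) - 1).sum()
            return (x * energy).sum() + penalty

        dim, particles, iters = tasks * nodes, 20, 200
        pos = rng.integers(0, 2, size=(particles, dim)).astype(float)
        vel = rng.normal(scale=0.1, size=(particles, dim))
        pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[pbest_cost.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((particles, dim)), rng.random((particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            prob = 1.0 / (1.0 + np.exp(-vel))                  # sigmoid: velocity -> bit probability
            pos = (rng.random((particles, dim)) < prob).astype(float)
            costs = np.array([cost(p) for p in pos])
            improved = costs < pbest_cost
            pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
            gbest = pbest[pbest_cost.argmin()].copy()

        print("cost of best allocation:", cost(gbest))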
    Algorithms of neighbor discovery in wireless networks with directional antennas
    LIU Zhen LI Ying
    2012, 32(04):  917-919.  DOI: 10.3724/SP.J.1087.2012.00917
    To improve the efficiency of neighbor discovery in wireless networks with directional antennas, a busy-tone aided algorithm was proposed. With the help of omni-directional busy-tones, the problems of collision and idleness in wireless communication were effectively resolved; as a result, the channel utilization ratio was increased. The direction of antenna beam was adjusted according to the Direction of Arrival (DOA) of busy-tones. Through this strategy, communication efficiency was improved. The experimental results show that, compared with the conventional ALOHA-like algorithm and the neighbor discovery algorithm based on the feedback mechanism, the proposed algorithm has a better performance.
    Clustering algorithm based on backup path in wireless sensor network
    DING Ding LIU Fang-ai LI Qian-qian YANG Guang-xu
    2012, 32(04):  920-923.  DOI: 10.3724/SP.J.1087.2012.00920
    Clustering can be used in the routing algorithm to enhance the scalability of Wireless Sensor Network (WSN). Concerning the defects of the traditional clustering algorithm, a new strategy EDC (Energy-efficient, Dual-path, Clustering) was proposed, in which each member node has an optimal backup path. The strategy guaranteed that a member node can still transmit data through its backup path when its cluster head is dying in the WSN. The results of the simulation experiment on the OMNeT++ platform indicate that EDC performs much better than other WSN protocols in terms of network reconstruction time and number of failed nodes.
    Funneling-MAC protocol based on DRAND algorithm
    ZHU Xiu-li LI Ying-jie
    2012, 32(04):  924-926. 
    Concerning the disadvantages of the funneling-MAC protocol, this paper gave an improved proposal based on the DRAND algorithm. On top of the centralized Time Division Multiple Access (TDMA) scheduling of the funneling-MAC protocol, it introduced the DRAND scheme, which guaranteed that the time slots of nodes within two-hop range did not overlap, so interference and collision could be largely avoided. The NS-2 simulation results show that the improved protocol can effectively reduce system power consumption and maintain higher channel utilization.
    3D localization algorithm for wireless sensor networks in underground coal mine
    ZHU Xiao-juan WANG Jun-hao MENG Xiang-rui
    2012, 32(04):  927-931.  DOI: 10.3724/SP.J.1087.2012.00927
    Most of the existing localization algorithms for Wireless Sensor Networks (WSN) in underground coal mines suffer from low accuracy and high cost. A new 3D localization algorithm based on the regular deployment of beacon nodes was proposed for underground coal mines. First, the beacon nodes were deployed according to the characteristics of the underground tunnel; second, the beacon nodes and the unknown node were projected onto the same horizontal plane for location estimation; third, the 2D position was calculated by trilateration; finally, the 3D position was obtained by combining it with the height difference between the unknown node and the beacon nodes. The theoretical analysis and simulation results show that the algorithm has low computation and communication cost, high positioning accuracy and good stability.
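    The projection-plus-trilateration step can be sketched briefly. The code below projects the beacons onto the unknown node's horizontal plane, solves the 2D position by linearised trilateration and restores the third coordinate from the known height difference; the beacon layout and height difference are illustrative assumptions.

        import numpy as np

        beacons = np.array([[0.0, 0.0, 3.0], [10.0, 0.0, 3.0], [0.0, 8.0, 3.0]])
        true_node = np.array([4.0, 3.0, 1.0])
        height_diff = beacons[0, 2] - true_node[2]            # assumed known from the deployment

        ranges = np.linalg.norm(beacons - true_node, axis=1)  # measured 3D distances
        planar = np.sqrt(ranges**2 - height_diff**2)          # distances after projection

        # Linearise by subtracting the first circle equation from the others:
        # 2(xi - x0)x + 2(yi - y0)y = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
        x0, y0 = beacons[0, 0], beacons[0, 1]
        A, b = [], []
        for (xi, yi, _), di in zip(beacons[1:], planar[1:]):
            A.append([2 * (xi - x0), 2 * (yi - y0)])
            b.append(planar[0]**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
        xy, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        estimate = np.array([xy[0], xy[1], beacons[0, 2] - height_diff])
        print("estimated position:", estimate)                # close to true_node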
    Wspruce: an improved method of measuring available bandwidth
    JI De-zhi WU Wei-dong
    2012, 32(04):  932-934.  DOI: 10.3724/SP.J.1087.2012.00932
    Available bandwidth is a main parameter reflecting network status, and its accurate measurement and estimation is an essential problem in traffic engineering and network monitoring, yet there are many difficulties in its actual measurement. Spruce converges slowly and incurs high overhead. To solve these problems, Wspruce, an improved method of measuring available bandwidth, was proposed. By using Hidden Markov Model (HMM) time-series prediction, a more accurate analysis of the available bandwidth can be made. Actual measurements show that the method estimates the available bandwidth faster and with lower overhead.
    Data scheduling algorithm of P2P streaming media
    GUO Yuan-wei XU Xue-mei ZHANG Jian-yang HUANG Zheng-yu NI Lan
    2012, 32(04):  935-937.  DOI: 10.3724/SP.J.1087.2012.00935
    The data scheduling algorithm in a data-driven overlay network is one of the most influential factors affecting the performance of P2P streaming media systems. Considering that current algorithms fail to make efficient use of data blocks and nodes, which leads to low-quality streaming media services, a new data scheduling algorithm based on both the priority of data blocks and the capacity of nodes was proposed in this study. The algorithm derives the priority value from the scarcity and urgency of blocks, and the capacity of nodes from their uplink bandwidth, online time and relative distance. With this algorithm, higher-priority blocks and higher-capacity nodes were requested first, and the waiting time before playback was decreased. The OPNET simulations indicate that the algorithm can efficiently reduce the start-up delay of the streaming media playing system and the server load.
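    The two scoring ideas (block priority from scarcity and urgency, peer capacity from uplink bandwidth, online time and distance) can be illustrated with a short sketch; the weights and field names below are assumptions for illustration, not the paper's exact formulas.

        from dataclasses import dataclass

        @dataclass
        class Block:
            block_id: int
            copies_in_neighbourhood: int    # fewer copies -> scarcer
            seconds_until_playback: float   # smaller -> more urgent

        @dataclass
        class Peer:
            peer_id: int
            uplink_kbps: float
            online_seconds: float
            rtt_ms: float

        def block_priority(b: Block) -> float:
            scarcity = 1.0 / max(b.copies_in_neighbourhood, 1)
            urgency = 1.0 / max(b.seconds_until_playback, 0.1)
            return 0.5 * scarcity + 0.5 * urgency

        def peer_capacity(p: Peer) -> float:
            # Roughly normalise each term before combining.
            return 0.5 * (p.uplink_kbps / 1024.0) + 0.3 * (p.online_seconds / 3600.0) \
                   + 0.2 * (100.0 / max(p.rtt_ms, 1.0))

        blocks = [Block(1, 5, 8.0), Block(2, 1, 20.0), Block(3, 2, 2.0)]
        peers = [Peer(1, 512, 1800, 40), Peer(2, 2048, 7200, 120)]
        for blk in sorted(blocks, key=block_priority, reverse=True):   # scarce/urgent blocks first
            best = max(peers, key=peer_capacity)                       # from the most capable peer
            print(f"request block {blk.block_id} from peer {best.peer_id}")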
    Video resource search policy of user generated content based on P2P
    LI Yan CHEN Zhuo
    2012, 32(04):  938-942.  DOI: 10.3724/SP.J.1087.2012.00938
    Current User Generated Content (UGC) video systems mainly adopt the client/server architecture, which results in huge bandwidth pressure on the streaming server. This paper proposed a Peer-to-Peer (P2P) based online short-video search policy named FastSearch. It makes use of the relevancy relationship between video resources to locate resources, which can improve the sharing efficiency between peers and decrease the load on the streaming server. The simulation results show that FastSearch can search streaming source peers efficiently and can greatly reduce the bandwidth consumption of the streaming server.
    Network time protocol performance evaluation in LAN environment
    CHEN Chao-fu WANG Lei
    2012, 32(04):  943-945.  DOI: 10.3724/SP.J.1087.2012.00943
    Network Time Protocol (NTP) is a simple, economical and efficient way to achieve time and frequency synchronization of multiple nodes, yet relevant performance evaluation studies are hard to find in the literature, which makes it unclear whether NTP should be used in a given application. Concerning this problem, the NTP performance in a local network and the impact of system and network load were measured and analyzed on the Windows platform. By comparing the time values obtained from an IRIG-B time code reader and the Windows GetLocalTime API, the frequency skew of the computer clock was approximated. The skew value was close to the value calculated by NTP.
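    The skew approximation can be sketched in a few lines: with paired readings of a reference clock and the local clock, the slope of the local-clock offset against reference elapsed time is the fractional frequency skew. The synthetic readings below (50 ppm skew plus jitter) are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        true_skew = 50e-6                                # 50 ppm
        ref_time = np.arange(0.0, 600.0, 10.0)           # reference clock samples (s)
        local_time = ref_time * (1.0 + true_skew) + rng.normal(scale=2e-4, size=ref_time.size)

        offset = local_time - ref_time
        slope, intercept = np.polyfit(ref_time, offset, 1)
        print(f"estimated skew: {slope * 1e6:.1f} ppm (true {true_skew * 1e6:.0f} ppm)")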
    Iterative carrier phase and channel estimation algorithm in BICM-ID
    CUI Peng-hui YANG Yu-hong ZENG Xiang-feng
    2012, 32(04):  946-948.  DOI: 10.3724/SP.J.1087.2012.00946
    This paper proposed an iterative carrier phase and channel estimation algorithm for 16APSK in Bit-Interleaved Coded Modulation with Iterative Decoding (BICM-ID). The algorithm was based on Maximum Likelihood (ML) estimation; it made use of the decision information provided by the decoder and exchanged information among phase estimation, channel estimation and the decoder through iteration, so as to combine them. The simulation results show that the algorithm performs only 0.5 dB worse than the ideal case, and can effectively estimate phase offsets from -20 degrees to 20 degrees.
    Genetic algorithm based power distribution optimization scheme for indoor optical wireless communication
    XU Chun
    2012, 32(04):  949-952.  DOI: 10.3724/SP.J.1087.2012.00949
    As indoor Optical Wireless Communication (OWC) is characterized by enormous spectrum resources, low power consumption and immunity to interference from other Radio Frequency (RF) wireless devices, it has attracted increasing attention. However, due to multipath transmission, it is hard to obtain uniformly satisfactory signal quality at the receiving terminal, even for locations within the same room. A genetic algorithm based optimization scheme was proposed as a candidate approach for OWC to reduce the variability of the received power. The simulation results, based on a commercially available detector with a field of view of 50 degrees, show that the dynamic range of the received power relative to the peak optical power can be reduced from 50.3% to 34.6%, while the impact on the illumination function is negligible.
    Information security
    Dynamic trusted measurement model of operating system kernel
    XIN Si-yuan ZHAO Yong LIAO Jian-hua WANG Ting
    2012, 32(04):  953-956.  DOI: 10.3724/SP.J.1087.2012.00953
    Dynamic trusted measurement is a hot and difficult research topic in trusted computing. Concerning the measurement difficulty caused by the dynamic nature of the operating system kernel, a Dynamic Trusted Kernel Measurement (DTKM) model was proposed. The Dynamic Measurement Variable (DMV) was presented to describe and construct dynamic data objects and their relations, and a method of semantic constraint was proposed to measure the dynamic integrity of kernel components. In DTKM, the collection of memory data was implemented in real time, and the dynamic integrity was verified by checking whether the constructed DMV was consistent with semantic constraints defined on the basis of the security semantics. The analysis and application examples show that DTKM can effectively implement dynamic measurement of the kernel and detect illegal modification of dynamic kernel data.
    Efficient and secure identity-based multi-signcryption scheme in standard model
    LI Cong YAN De-qin ZHENG Hong-liang
    2012, 32(04):  957-959.  DOI: 10.3724/SP.J.1087.2012.00957
    To address the inefficiency of current identity-based multi-signcryption schemes, an efficient and secure identity-based multi-signcryption scheme was proposed. The new scheme reduced the number of multiplication operations and added a key verification procedure. Furthermore, through security and efficiency analysis in the standard model, the new scheme is proved secure under the computational Diffie-Hellman assumption and reduces the computation cost. Compared with the known schemes, the new scheme is more secure and efficient.
    Secure management of continuity key pre-distribution scheme based on SBIBD
    WU Qiu-lin LI Qiao-liang
    2012, 32(04):  960-963.  DOI: 10.3724/SP.J.1087.2012.00960
    In order to solve the problem of low connectivity in continuity key management schemes, the authors implemented a new scheme based on Symmetric Balanced Incomplete Block Design (SBIBD). In the new scheme, the key ring of each node corresponded to a block of the SBIBD, which ensured that any two nodes shared a common key in the deployment stage and that nodes deployed in different stages could be connected by bridge nodes. The simulation demonstrates that the new scheme can improve the global and local connectivity of the network, and save the overhead of establishing communication between nodes.
    Group key agreement and rekeying scheme in satellite network based on group key sequence
    PAN Yan-hui WANG Tao WU Yang ZHENG Yan-ru
    2012, 32(04):  964-967.  DOI: 10.3724/SP.J.1087.2012.00964
    Group key agreement is one of the important stages in carrying out secure multicast communication. A group controller node switching method was proposed to address the dynamically changing topology of satellite networks; it could adjust the controlling nodes dynamically. Then, authentication and integrity mechanisms were used to verify agreement messages and group keys, and a group key generation and renewal method was proposed, which improved the security of agreement messages. The results of simulation and analysis show that this group key agreement protocol achieves high efficiency and security.
    Enhanced secure RFID authentication protocol for EPC Gen2
    TANG Yong-zheng WANG Ming-hui WANG Jian-dong
    2012, 32(04):  968-970.  DOI: 10.3724/SP.J.1087.2012.00968
    Many current Radio Frequency IDentification (RFID) authentication protocols do not conform to the EPC Class 1 Gen 2 (EPC Gen2) standard or cannot meet the requirements of low-cost tags. A new RFID authentication protocol based on the EPC Gen2 standard was proposed, and a security proof was given with BAN logic. The security analysis shows that the proposed protocol meets the RFID security demands of information confidentiality, data integrity and identity authentication.
    Reversible digital watermarking based on public key system
    LI Li-zong GU Qiao-lun GAO Tie-gang
    2012, 32(04):  971-975.  DOI: 10.3724/SP.J.1087.2012.00971
    A reversible digital watermarking scheme based on a public key system was proposed to improve security, transparency and embedding capacity. The technique shifted the pixels between the peak and zero points of the histogram, extracted the characteristics of the original image, applied the Boolean exclusive OR between the characteristic value and the watermark image processed with a chaotic system, and finally embedded the result into the image with the public key. Verification was the inverse of embedding: after verification, the shifted pixels were recovered according to the relationship between the peak and zero points of the histogram, and the image was restored. The public key system and the chaotic system guarantee the security of the scheme, while the shift between the peak and zero pixels provides more embedded information, a higher peak signal-to-noise ratio, and authentication of all the pixels. The process was simulated on many images. The results show that the method is safer than others, can embed more information, and has better transparency.
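    The peak/zero histogram-shifting step that creates the embedding room can be sketched as follows. The grayscale image and payload are synthetic, and the XOR with the chaotic watermark and the public-key part of the scheme are omitted; only the shifting and bit embedding are shown.

        import numpy as np

        rng = np.random.default_rng(3)
        image = np.clip(rng.normal(128, 20, size=(64, 64)), 0, 255).astype(np.int64)
        hist = np.bincount(image.ravel(), minlength=256)
        peak = int(hist.argmax())
        zero = int(peak + 1 + np.where(hist[peak + 1:] == 0)[0][0])  # first empty bin above the peak

        bits = rng.integers(0, 2, size=int(hist[peak]))   # payload: one bit per peak pixel

        marked = image.copy()
        marked[(image > peak) & (image < zero)] += 1      # shift the (peak, zero) range up by one
        for (r, c), bit in zip(np.argwhere(image == peak), bits):
            marked[r, c] = peak + int(bit)                # bit 1 -> peak+1, bit 0 -> peak

        # Extraction scans for the peak/peak+1 values and reverses the shift.
        carrier = np.argwhere((marked == peak) | (marked == peak + 1))
        extracted = np.array([marked[r, c] - peak for r, c in carrier])
        print("payload recovered:", np.array_equal(extracted, bits))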
    Semi-fragile audio zero-watermarking algorithm for content authentication
    LIU Guang-yu ZHANG Xue-ying MA Zhao-yang
    2012, 32(04):  976-980.  DOI: 10.3724/SP.J.1087.2012.00976
    This paper proposed a new semi-fragile audio zero-watermarking algorithm which can be used to authenticate the copyright and content of digital data. The algorithm has the following features: (1) it extracted the low-frequency components of the host audio to construct the zero-watermark, which ensured the imperceptibility of the watermarking and achieved blind detection; (2) it distributed the pixels of the watermark image rationally by adopting an adaptive audio frame segmentation method, which improved the capability of tamper localization and the robustness against regular attacks; (3) it used multilevel scrambling to eliminate the correlation of the watermark images, improving its safety and robustness against regular attacks. Meanwhile, the algorithm can not only perform integrity authentication, but also locate the tampered area accurately through tamper assessment. The experimental results show that this algorithm has good robustness against regular attacks and a strong capability of tamper localization under malicious attacks.
    Advanced computing
    Hybrid orthogonal CMAES for solving global optimization problems
    HUANG Ya-fei LIANG Ximing CHEN Yi-xiong
    2012, 32(04):  981-985.  DOI: 10.3724/SP.J.1087.2012.00981
    In order to overcome the shortcomings of the Covariance Matrix Adaptation Evolution Strategy (CMAES), such as premature convergence and low precision in high-dimensional multimodal optimization, a hybrid algorithm combining CMAES with Orthogonal Design with Quantization (OD/Q) was proposed. Firstly, the small-population CMAES was used to realize fast searching. When the orthogonal CMAES algorithm was trapped in a local extremum, base vectors for OD/Q were selected dynamically based on the position of the current best solution, and the entire solution space, including the region around the extremum, was explored by trial vectors generated by OD/Q. This process guided the algorithm out of the local optimum. The new approach was tested on six high-dimensional multimodal benchmark functions. Compared with other algorithms, the new algorithm has better search precision, convergence speed and global search capability.
    Gene expression programming algorithm based on multi-threading evaluator
    NI Sheng-qiao TANG Chang-jie YANG Ning ZUO Jie
    2012, 32(04):  986-989.  DOI: 10.3724/SP.J.1087.2012.00986
    Combining the advantages of multi-core CPUs and multi-threading technology, a new Gene Expression Programming (GEP) algorithm with a multi-threading evaluator was introduced, which greatly improved the efficiency of GEP. The experimental results demonstrate that the proposed algorithm, MTEGEP, is more efficient than traditional GEP: it runs 1.89 times faster on average with a dual-core CPU and 6.48 times faster with an eight-core CPU.
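    The parallel-evaluator structure can be sketched briefly: individuals are evaluated concurrently while selection and the genetic operators stay sequential. The toy fitness and pool size are assumptions, a mutation-only loop stands in for GEP's genetic operators, and a process pool is used because CPython threads do not speed up pure-Python CPU-bound evaluation.

        from concurrent.futures import ProcessPoolExecutor
        import random

        SAMPLES = [(x, x * x + x) for x in range(-5, 6)]

        def evaluate(individual):
            """Fitness = negative squared error of a toy model a*x^2 + b*x + c."""
            a, b, c = individual
            return -sum((a * x * x + b * x + c - y) ** 2 for x, y in SAMPLES)

        def random_individual():
            return [random.uniform(-2, 2) for _ in range(3)]

        if __name__ == "__main__":
            population = [random_individual() for _ in range(200)]
            with ProcessPoolExecutor(max_workers=4) as pool:
                for generation in range(20):
                    fitness = list(pool.map(evaluate, population, chunksize=25))  # parallel evaluation
                    ranked = [ind for _, ind in sorted(zip(fitness, population), reverse=True)]
                    survivors = ranked[:50]
                    population = survivors + [
                        [g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                        for _ in range(150)
                    ]
            print("best fitness:", max(map(evaluate, population)))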
    QR factorization and algorithm for generalized row (column) symmetric matrix
    YUAN Hui-ping
    2012, 32(04):  990-993.  DOI: 10.3724/SP.J.1087.2012.00990
    The properties and the QR factorization of generalized row (column) symmetric matrices were studied, and some new results were obtained. A formula and a fast calculation method for the QR factorization of a generalized row (column) symmetric matrix were derived; the formula dramatically reduces the amount of computation for the QR factorization and saves CPU time and memory without any loss of numerical precision. Meanwhile, system parameter estimation was discussed, the results of two papers (ZOU H, WANG D, DAI Q, et al. QR factorization for row or column symmetric matrix. Science in China: Series A, 2002, 32(9): 842-849; LIN X L, JIANG Y L. QR decomposition and algorithm for unitary symmetric matrix. Chinese Journal of Computers, 2005, 28(5): 817-822) were generalized, and some mistakes of the latter were corrected.
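    A small numerical illustration of the idea that the QR factorization of a structured matrix can be assembled from the QR of a much smaller block is given below. The simple row-repeated form B = [A; A] is only an illustrative special case, not the paper's general definition of a generalized row (column) symmetric matrix.

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.normal(size=(5, 3))
        B = np.vstack([A, A])                      # a row-repeated matrix

        Q1, R1 = np.linalg.qr(A)                   # factor only the small block: A = Q1 R1

        # Assemble the factorization of the big matrix:
        # B = [A; A] = (1/sqrt(2)) [Q1; Q1] * (sqrt(2) R1)
        Q = np.vstack([Q1, Q1]) / np.sqrt(2)
        R = np.sqrt(2) * R1

        print("orthonormal columns:", np.allclose(Q.T @ Q, np.eye(3)))
        print("reconstructs B:     ", np.allclose(Q @ R, B))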
    Numerical simulation and analysis based on multidimensional independent component analysis
    XIE Yong-hong ZHANG Guo-wei
    2012, 32(04):  994-998.  DOI: 10.3724/SP.J.1087.2012.00994
    By introducing an indicator to evaluate the performance of Multidimensional Independent Component Analysis (MICA) algorithms, the separation was studied by numerical simulation. The multidimensional Amari separation error was used as an important indicator for measuring MICA algorithm performance. In the comparative separation performance analysis of four algorithms, namely vkMICA, cfMICA, MSOBI and SJADE, a randomly distributed letter signal was used for simulation and testing, and a visual representation of the MICA separation model and its uncertainty was obtained. The results show that MICA is a very effective method for multidimensional source signal analysis.
    High-speed packet processing by non-collision hash functions based on middle-point partition
    ZHANG Mo-hua LI Ge
    2012, 32(04):  999-1002.  DOI: 10.3724/SP.J.1087.2012.00999
    High-speed packet inspection can be achieved by storing attack signatures in high-speed on-chip memory. Concerning the limited on-chip memory, this paper proposed a new trie structure with non-collision hash functions based on middle-point partition. The algorithm evenly partitioned the attack signatures into multiple groups at each layer of the trie to achieve effective control of memory usage. The trie structure can be implemented on a single chip and perform query operations with pipelining and parallelism, thus achieving higher throughput. The space complexity of storing the middle points is O(n), and the construction time of the hash table grows linearly with the number of attack signatures. The experimental results show that the new structure decreases the demand for on-chip memory and facilitates access to the attack signatures in on-chip memory while performing the signature matching operation only once.
    Spatial data fusion algorithm of CO2 based on cloud computing platform
    HU Jun-guo QI Heng-nian
    2012, 32(04):  1003-1008.  DOI: 10.3724/SP.J.1087.2012.01003
    In order to fuse the massive CO2 data collected by a mobile sensor network at uncertain times and locations, the collected data were analyzed. First, the test area was divided into m×n grids, and the CO2 concentration was analyzed from the valid data of every grid. Second, exploiting the strong computing power of cloud computing, a combined cloud model was put forward, and common clouds, breeding clouds, visual clouds and adjacent clouds were designed; they ran relatively independently and interacted with each other, forming a distributed parallel computing system. Third, the ant family was modified, and common ants, breeding ants, visual ants and adjacent ants were designed. All kinds of ants, which walked by their own rules, were assigned to different clouds and worked together harmoniously, with the pheromones and the optimal solution exchanged within local clouds and between global clouds by the cloud server. Finally, 11080 samples were collected in Lingan, Zhejiang province, and a large number of experiments were done on the Clounding V2 simulation platform. The results show that the algorithm reaches stability after 105 searches, its optimization capability is 60 times that of the single algorithm, and setting the ants in common clouds, breeding clouds, visual clouds and adjacent clouds to a 2:2:1:1 ratio gives the best performance.
    Private cloud computing system based on dynamic service adaptable to large-scale data processing
    WANG Zhu MEI Lin LI Lei ZHAO Tai-yin HU Guang-min
    2012, 32(04):  1009-1012.  DOI: 10.3724/SP.J.1087.2012.01009
    In order to deal with the problems in a private cloud environment caused by computing tasks with large amounts of data, intensive computation and complex processing, an implementation of a private cloud system based on dynamic services was proposed, building on public cloud computing and the characteristics of the private cloud environment, which is able to adapt to large-scale data processing. In this implementation, computing tasks were described by job files, processing workflows were constructed dynamically from the job logic, service requests were driven by data streams, and large-scale data processing was handled more efficiently within the MapReduce parallel framework. The experimental results show that this implementation has high practical value and can deal with computing tasks involving large amounts of data, intensive computation and complex processing correctly and efficiently.
    Dynamic causal order control approach in distributed virtual environment on wide area network
    FU Sha ZHOU Hang-jun
    2012, 32(04):  1013-1016.  DOI: 10.3724/SP.J.1087.2012.01013
    In a Distributed Virtual Environment (DVE) system running on a Wide Area Network (WAN), the effect of causal order consistency control is determined by both the correctness of causality and real-time performance, owing to the large network transmission delay. In order to achieve a better trade-off between the quality of causal order consistency and real-time performance in a DVE, a new dynamic causal order control approach was proposed. The core idea of this approach was to dynamically adjust and balance the number of blocked effect messages and delayed causal messages on each site in a DVE system, so as to meet the demands of causality correctness and real-time message ordering and delivery. The evaluation results demonstrate that the proposed approach outperforms previous ones by effectively improving the quality of causal order control while preserving the real-time property of DVE systems.
    Artificial intelligence
    Sparse discriminant analysis
    CHEN Xiao-dong LIN Huan-xiang
    2012, 32(04):  1017-1021.  DOI: 10.3724/SP.J.1087.2012.01017
    Manifold embedding methods have the following issues: on one hand, the neighborhood graph is constructed in the high-dimensional original space, where it tends to work poorly; on the other hand, appropriate values for the neighborhood size and the heat kernel parameter involved in graph construction are generally difficult to assign. To address these problems, a new semi-supervised dimensionality reduction algorithm called SparsE Discriminant Analysis (SEDA) was proposed. Firstly, SEDA builds a sparse graph to preserve the global information and geometric structure of the data based on sparse representation. Secondly, it applies both the sparse graph and the Fisher criterion to seek the optimal projection. The experimental results on a broad range of data sets show that SEDA is superior to many popular dimensionality reduction methods.
    Improved particle swarm optimization for permutation flowshop scheduling problem
    ZHANG Qi-liang CHEN Yong-sheng HAN Bin
    2012, 32(04):  1022-1024.  DOI: 10.3724/SP.J.1087.2012.01022
    To solve the permutation flowshop scheduling problem, an improved particle swarm optimization algorithm was proposed. The improved algorithm introduced a method to judge the premature state of the particle swarm, used a reversion strategy to mutate the best particle after the swarm was trapped in premature convergence, and used a simulated annealing criterion to accept the new particle. The mutation of the best particle can guide the swarm to escape from local optima and overcome premature stagnation. The simulation results on the Car and Rec benchmarks of the permutation flowshop scheduling problem prove the effectiveness of the proposed algorithm.
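    The two ingredients highlighted above, a reversion move on the best permutation and simulated-annealing acceptance, can be sketched on a toy flowshop instance. The processing-time matrix and annealing schedule are assumptions, and the surrounding PSO loop and premature-convergence test are omitted.

        import math
        import random

        random.seed(5)
        jobs, machines = 8, 4
        proc = [[random.uniform(1, 10) for _ in range(machines)] for _ in range(jobs)]

        def makespan(perm):
            """Standard permutation-flowshop makespan recurrence with a rolling array."""
            completion = [0.0] * machines
            for job in perm:
                completion[0] += proc[job][0]
                for m in range(1, machines):
                    completion[m] = max(completion[m], completion[m - 1]) + proc[job][m]
            return completion[-1]

        best = list(range(jobs))
        random.shuffle(best)
        temperature = 10.0
        for step in range(500):
            i, j = sorted(random.sample(range(jobs), 2))
            candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]   # reversion (segment reversal)
            delta = makespan(candidate) - makespan(best)
            # Simulated annealing: always accept improvements, sometimes accept worse moves.
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                best = candidate
            temperature *= 0.99
        print("best makespan found:", round(makespan(best), 2))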
    HRRP feature extraction based on proportion of divergence criterion
    LIU Jing ZHAO Feng LIU Yi
    2012, 32(04):  1025-1029.  DOI: 10.3724/SP.J.1087.2012.01025
    Traditional Linear Discriminant Analysis (LDA) tends to keep the separability of class pairs with large within-class distances while discarding the separability of those with small within-class distances. Based on the viewpoint that the feature subspace should uniformly keep the separability of each class pair, a new criterion, the Proportion of Divergence (PD), was presented. The PD criterion is the mean, over all class pairs, of the proportion of the subspace divergence to the original-space divergence. The solution of the LDA maximizing the PD criterion (PD-LDA) was also presented. PD-LDA was used to perform feature extraction in the amplitude spectrum space of High Resolution Range Profiles (HRRP). A shortest-Euclidean-distance classifier and a Support Vector Machine (SVM) classifier were designed to evaluate the recognition performance. The experimental results on measured data show that, compared with traditional LDA, PD-LDA reduces the data dimension remarkably and improves the recognition rate effectively.
    Historical disaster classification method based on ant colony clustering
    JIA Zhi-juan HU Ming-sheng LIU Si
    2012, 32(04):  1030-1032.  DOI: 10.3724/SP.J.1087.2012.01030
    Aiming at the descriptiveness and parsimony problems of historical disaster records, a historical disaster classification method based on ant colony clustering was proposed. The disaster data were normalized using the grey relational analysis approach, and the levels of historical disasters were then divided according to the results of automatic ant colony clustering, so as to avoid arbitrary human interference. Compared with other classification methods, the experimental results show that this method has high accuracy and practicality.
    Artificial bee colony algorithm based on chaos local search operator
    WANG Xiang LI Zhi-yong XU Guo-yi WANG Yan
    2012, 32(04):  1033-1036.  DOI: 10.3724/SP.J.1087.2012.01033
    In order to improve the exploitation ability of the Artificial Bee Colony (ABC) algorithm, a new Chaos Artificial Bee Colony (CH-ABC) algorithm was proposed for continuous function optimization problems. A new chaotic local search operator was embedded in the framework of the algorithm; the operator, whose search radius shrinks with the evolution generation, performs a local search around the best food source. The simulation results show that, compared with the ABC algorithm, the new algorithm achieves better solution quality and convergence speed on Rosenbrock, and better convergence speed on Griewank and Rastrigin.
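    The chaotic local search operator can be sketched with a logistic map whose excursions around the best food source shrink as the generation counter grows. The test function (sphere), bounds and shrink schedule are illustrative assumptions; the employed, onlooker and scout phases of ABC are omitted.

        import random

        random.seed(6)

        def sphere(x):
            return sum(v * v for v in x)

        dim, low, high = 5, -5.0, 5.0
        best = [random.uniform(low, high) for _ in range(dim)]
        chaos = 0.7                                    # logistic-map state in (0, 1)

        for generation in range(1, 201):
            radius = (high - low) * 0.5 / generation   # search radius shrinks with the generation
            for _ in range(10):                        # a few chaotic trials per generation
                trial = []
                for v in best:
                    chaos = 4.0 * chaos * (1.0 - chaos)          # logistic map x = 4x(1 - x)
                    candidate = v + radius * (2.0 * chaos - 1.0)
                    trial.append(min(max(candidate, low), high))
                if sphere(trial) < sphere(best):       # greedy acceptance, as in ABC
                    best = trial
        print("best value:", sphere(best))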
    Threshold improvement method combining DSmT and DST
    LIU Yong-kuo LING Shuang-han
    2012, 32(04):  1037-1040.  DOI: 10.3724/SP.J.1087.2012.01037
    Dezert-Smarandache Theory (DSmT) is a data fusion method in which highly conflicting evidence sources can be successfully handled to realize efficient multi-source information fusion, while Dempster-Shafer Theory (DST) gives a better result at a lower computational cost when the conflict is low. Therefore, integrating the two methods, DST is adopted when the conflict is low and the DSmT fusion rules are used otherwise, which is a feasible way to raise the efficiency of information fusion. A single-value switching threshold between DSmT and DST had been proposed previously; concerning its deficiency, this article proposed using the conflict distance function as the judgment basis, so that single-value thresholds and multi-point value thresholds can be distinguished according to different evidence combinations.
    Solving combinatorial optimization problems based on harmony search algorithm
    LI Ning LIU Jian-qin HE Yi-chao
    2012, 32(04):  1041-1044.  DOI: 10.3724/SP.J.1087.2012.01041
    For solving combinatorial optimization problems, a Binary Harmony Search Algorithm (BHSA) based on three discrete operators of the Harmony Search Algorithm (HSA) was proposed. BHSA was then used to solve the well-known k-SAT problem and the 0-1 knapsack problem. The numerical results of BHSA, Binary Particle Swarm Optimization (BPSO) and the Genetic Algorithm (GA) show that BHSA is feasible and highly efficient.
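    A minimal binary harmony search for the 0-1 knapsack problem is sketched below, using the usual memory-consideration, pitch-adjustment (bit flip) and random-selection steps on bit strings. The item data, HMCR/PAR values and the penalty for overweight solutions are illustrative assumptions and may differ from the three discrete operators defined in the paper.

        import random

        random.seed(7)
        values = [10, 13, 7, 8, 12, 9, 6, 11]
        weights = [5, 8, 3, 4, 7, 5, 2, 6]
        capacity = 20
        n = len(values)
        HMS, HMCR, PAR, ITERS = 10, 0.9, 0.3, 2000

        def fitness(x):
            w = sum(wi for wi, xi in zip(weights, x) if xi)
            v = sum(vi for vi, xi in zip(values, x) if xi)
            return v if w <= capacity else capacity - w     # infeasible -> negative score

        memory = [[random.randint(0, 1) for _ in range(n)] for _ in range(HMS)]
        for _ in range(ITERS):
            new = []
            for j in range(n):
                if random.random() < HMCR:                  # memory consideration
                    bit = random.choice(memory)[j]
                    if random.random() < PAR:               # pitch adjustment = bit flip
                        bit = 1 - bit
                else:                                       # random selection
                    bit = random.randint(0, 1)
                new.append(bit)
            worst = min(range(HMS), key=lambda i: fitness(memory[i]))
            if fitness(new) > fitness(memory[worst]):       # replace the worst harmony
                memory[worst] = new

        best = max(memory, key=fitness)
        print("best value:", fitness(best), "selection:", best)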
    Gas emission prediction model of hybrid pi-sigma fuzzy neural network
    2012, 32(04):  1045-1049.  DOI: 10.3724/SP.J.1087.2012.01045
    A gas emission prediction model established with the reasoning method of hybrid pi-sigma fuzzy neural networks was proposed. The model adopted the Gaussian function as the fuzzy membership function, and the membership functions and conclusion parameters of the model could be adjusted online dynamically. Compared with neural network prediction models, the method has clear physical meaning, clear principles, fast convergence and high prediction accuracy. The simulation results on the gas emission data of a coal mine show that the prediction has high accuracy and fast convergence and that the prediction results are repeatable, which proves that the method is effective. To facilitate practical application, the authors developed a Graphical User Interface (GUI) application in the Matlab environment, and gave the method and prediction results. The experiments also show that, for these data, the generalization ability of the model is best when the training accuracy is set to 0.001, and that the training accuracy and the prediction accuracy of the model are not positively related.
    Multi-scale fused edge detection algorithm based on conflict redistribution DSmT
    QIAO Kui-xian YIN Shi-bai QU Sheng-jie
    2012, 32(04):  1050-1052.  DOI: 10.3724/SP.J.1087.2012.01050
    The single-scale edge detection operator is sensitive to noise, which leads to little difference between real and false edges, so the detected edges are not accurate in real environments, where ground object characteristics are complex and thin ground objects are intermingled with noise. Therefore, a new multi-scale fused edge detection algorithm based on conflict-redistribution DSmT was proposed in this paper. First, multi-scale edge measures were extracted and evidence theory was introduced. The basic belief assignment of the multi-scale edge measure was constructed by a new bidirectional exponent method and then fused by the conflict-redistribution DSmT combination rule. At last, edge points were extracted with multiple thresholds. The simulations on both optical and Synthetic Aperture Radar (SAR) images show that the proposed edge detection method suppresses noise effectively while preserving rich details.
    Landmark-oriented heuristic routing algorithm in traffic network
    MENG Ke ZHANG Chun-yan
    2012, 32(04):  1053-1055.  DOI: 10.3724/SP.J.1087.2012.01053
    To improve the query efficiency of road routing in large-scale traffic networks, a landmark-oriented algorithm based on the A* algorithm was proposed. The most important vertices and edges were selected as landmarks during preprocessing; in point-to-point routing, appropriate landmarks were chosen as reference parameters and the route was calculated in sections. The experimental results indicate that it has higher query efficiency and gives more reasonable results in long-distance road routing.
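    The landmark idea can be sketched with the classic ALT construction: distances from a few landmark vertices are precomputed with Dijkstra, and the triangle inequality turns them into an admissible heuristic for point-to-point A* queries. The toy road graph and landmark choice below are illustrative assumptions, not the selection rule of the paper.

        import heapq

        graph = {                                    # undirected weighted road graph
            "A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 2, "D": 4},
            "C": {"A": 5, "B": 2, "D": 1, "E": 7}, "D": {"B": 4, "C": 1, "E": 3},
            "E": {"C": 7, "D": 3},
        }

        def dijkstra(source):
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph[u].items():
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return dist

        landmarks = {name: dijkstra(name) for name in ("A", "E")}   # preprocessing

        def heuristic(v, target):
            # |d(L, target) - d(L, v)| <= d(v, target) for every landmark L.
            return max(abs(dist[target] - dist[v]) for dist in landmarks.values())

        def astar(source, target):
            g = {source: 0.0}
            heap = [(heuristic(source, target), source)]
            while heap:
                _, u = heapq.heappop(heap)
                if u == target:
                    return g[u]
                for v, w in graph[u].items():
                    if g[u] + w < g.get(v, float("inf")):
                        g[v] = g[u] + w
                        heapq.heappush(heap, (g[v] + heuristic(v, target), v))
            return float("inf")

        print("shortest B -> E distance:", astar("B", "E"))   # 6.0 via B-C-D-E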
    Self-stability evaluation model of surrounding rock based on improved BP neural network
    WANG Duo-dian QIU Guo-qing DAI Ting-ting WANG Yue
    2012, 32(04):  1056-1059.  DOI: 10.3724/SP.J.1087.2012.01056
    Command protection engineering is an important component of the national protection engineering system. To raise the construction level of command protection engineering, the Back Propagation (BP) neural network was improved to study the self-stability evaluation of surrounding rock. Firstly, the network topology was devised based on the characteristics of the surrounding rock. Secondly, the model was improved against its disadvantages by introducing a momentum term, a self-adaptively adjusted learning rate, a variable number of hidden nodes and a steepness factor; furthermore, a Genetic Algorithm (GA) was introduced to seek the best initial weights and threshold values. Finally, an instance was given to validate the algorithm. The results show that the model is scientifically reliable and of good value in engineering.
    Research of embedded systems recognition based on MADM
    ZHANG Ping JIANG Lie-hui LIU Tie-ming XIE Yao-bin
    2012, 32(04):  1060-1063.  DOI: 10.3724/SP.J.1087.2012.01060
    Aiming at the problem that the operating system type is difficult to recognize in embedded firmware reverse analysis, a recognition technology based on Multi-Attribute Decision Making (MADM) was proposed. The multiple features in the firmware were comprehensively analyzed, a recognition model was built, and the similarity was calculated using the vector included-angle cosine method. The basic idea of the recognition and the concrete realization process were described. The experimental results show that this method can obtain accurate recognition results even in cases where some features are missing.
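    The included-angle cosine step can be illustrated with a short sketch in which a firmware feature vector is matched against per-OS templates and the closest template wins; the feature encoding and template values are hypothetical, not the attribute set used in the paper.

        import math

        def cosine_similarity(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        templates = {                      # hypothetical per-OS feature templates
            "VxWorks": [0.9, 0.1, 0.8, 0.3],
            "uC/OS-II": [0.2, 0.9, 0.1, 0.7],
            "embedded Linux": [0.6, 0.4, 0.9, 0.9],
        }
        observed = [0.7, 0.3, 0.8, 0.8]    # features extracted from an unknown firmware image

        scores = {name: cosine_similarity(observed, vec) for name, vec in templates.items()}
        print(max(scores, key=scores.get), scores)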
    Optimization algorithm for fault diagnosis strategy based on failure feature information entropy
    LI Qi-zhi HU Guo-ping
    2012, 32(04):  1064-1066.  DOI: 10.3724/SP.J.1087.2012.01064
    Concerning the sequential fault diagnosis strategy problem of complicated electrical equipment, and in order to realize fast fault detection and isolation, an algorithm for designing a fault diagnosis strategy tree based on failure feature information entropy was presented. Taking test cost and fault probabilities into account, the algorithm selects test points and builds an optimal fault diagnosis strategy tree based on the value of the failure feature information entropy. An example shows that the algorithm is feasible and can accomplish fault detection and isolation with lower testing cost and fewer testing steps.
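    The entropy-guided selection can be illustrated as follows: each test splits the fault set into pass and fail groups, and the test with the highest information gain per unit cost is applied first; applying the rule recursively to each group yields the strategy tree. The fault probabilities, test signatures and costs below are illustrative assumptions.

        import math

        faults = {"F1": 0.4, "F2": 0.3, "F3": 0.2, "F4": 0.1}       # prior fault probabilities
        tests = {                                                   # faults that make a test fail, and its cost
            "T1": ({"F1", "F2"}, 1.0),
            "T2": ({"F1", "F3", "F4"}, 2.0),
            "T3": ({"F2", "F4"}, 1.5),
        }

        def entropy(probs):
            total = sum(probs)
            return -sum(p / total * math.log2(p / total) for p in probs if p > 0) if total else 0.0

        def gain_per_cost(test):
            detects, cost = tests[test]
            fail = [p for f, p in faults.items() if f in detects]
            passed = [p for f, p in faults.items() if f not in detects]
            before = entropy(list(faults.values()))
            after = (sum(fail) * entropy(fail) + sum(passed) * entropy(passed)) / sum(faults.values())
            return (before - after) / cost

        best_test = max(tests, key=gain_per_cost)
        print("apply first:", best_test, {t: round(gain_per_cost(t), 3) for t in tests})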
    Database technology
    Rough set based attribute reduction with consistent confidence
    GAO Can MIAO Duo-qian ZHANG Zhi-fei ZHANG Hong-yun
    2012, 32(04):  1067-1069.  DOI: 10.3724/SP.J.1087.2012.01067
    In order to solve the problem of reduction anomalies in existing probabilistic rough set models, non-parameterized and parameterized maximum decision entropy measures for attribute reduction were proposed by using the concept of the maximum confidence of an uncertain object. The monotonicity of the parameterized maximum decision entropy was explained, and the relationship between its attribute reduction and other reductions was analyzed. The definitions of core and relatively dispensable attributes in the proposed model were also given. Moreover, non-parameterized and parameterized confidence discernibility matrices were put forward, and the difference between the classical discernibility matrix and the proposed ones in characterizing uncertain objects was discussed. Finally, a case study was given to show the validity of the proposed model.
    Clustering model based on weighted intuitionistic fuzzy sets
    CHANG Yan ZHANG Shi-bin
    2012, 32(04):  1070-1073.  DOI: 10.3724/SP.J.1087.2012.01070
    Concerning the limitations of the existing clustering methods based on intuitionistic fuzzy sets, a clustering model called the Weighted Intuitionistic Fuzzy Set Model (WIFSCM) was proposed based on weighted intuitionistic fuzzy sets. In this model, the concepts of equivalent sample and weighted intuitionistic fuzzy set were put forward in a special feature space, based on which the objective function of the intuitionistic fuzzy clustering algorithm was defined. Iterative formulas for the clustering centers and the membership degree matrix were derived from the objective function. A density function based on weighted intuitionistic fuzzy sets was defined and used to obtain the initial clustering centers, reducing the number of iterations. The gray image segmentation experiments show that WIFSCM is effective and nearly a hundred times faster than the IFCM algorithm.
    Protein-protein interaction extraction based on contextual and syntactic features
    WANG Jian JI Ming-hui LIN Hong-fei YANG Zhi-hao
    2012, 32(04):  1074-1077.  DOI: 10.3724/SP.J.1087.2012.01074
    Considering the one-sidedness of the features used in many Protein-Protein Interaction (PPI) extraction methods, a new approach was proposed to extract rich features from context information and syntactic structure for PPI extraction. Various features, such as lexical, position, distance, dependency syntax and deep syntax features, constitute the feature set, and a Support Vector Machine (SVM) classifier was used for PPI extraction. The experimental evaluation on multiple PPI corpora reveals that the rich features utilize more comprehensive information and reduce the risk of missing important features. The method achieves state-of-the-art performance in comparable evaluations, with a 59.2% F-score and 85.6% Area Under Curve (AUC) on the AImed corpus.
    Improved suffix tree clustering for Uyghur text
    ZHAI Xian-min TIAN Sheng-wei YU Long FENG Guan-jun
    2012, 32(04):  1078-1081.  DOI: 10.3724/SP.J.1087.2012.01078
    In order to solve the problems of non-standard, repetition and redundancy of information in the process of selecting the base class phrases, an improved Suffix Tree Clustering (STC) method was proposed. Firstly, phrase mutual information algorithm was put forward to choose the base class phrases abiding by Uyghur grammar. Secondly, in order to reduce the repeated base class phrase, the phrase reduction algorithm based on Uyghur grammar was proposed. Thirdly, on the basis of the first two steps, the phrase redundancy algorithm based on Uyghur grammar was constructed to remove redundant phrase. The experimental results show that this method improves the recall and the precision compared with STC. This indicates that the improved algorithm can enhance clustering performance effectively.
    Optimization of sparse data sets to improve quality of collaborative filtering systems
    LIU Qing-peng CHEN Ming-rui
    2012, 32(04):  1082-1085.  DOI: 10.3724/SP.J.1087.2012.01082
    Currently, collaborative filtering is one of the most successful personalized recommendation technologies applied in personalized recommendation systems. As the numbers of users and items increase dramatically, the score matrix reflecting users' preference information becomes very sparse, and this sparsity seriously affects the recommendation quality of collaborative filtering. To solve this problem, this paper presented a comprehensive mean optimal filling method. Compared with the default-value method and the mode method, this method has two advantages: first, it takes the users' rating scales into account; second, it does not suffer from the "multiple mode" and "no mode" problems. On the same data set, traditional user-based collaborative filtering was used to test the effectiveness of the method, and the results prove that the new method can improve the recommendation quality of recommendation systems.
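    The pre-filling step can be sketched as follows. The fill rule used here (user mean plus the item's deviation from the global mean, which respects each user's rating scale) is an illustrative assumption and not necessarily the paper's exact formula; in this toy matrix every user and item has at least one rating.

        import numpy as np

        ratings = np.array([            # 0 marks a missing rating
            [5, 3, 0, 1],
            [4, 0, 0, 1],
            [1, 1, 0, 5],
            [0, 1, 5, 4],
        ], dtype=float)

        mask = ratings > 0
        global_mean = ratings[mask].mean()
        user_mean = (ratings * mask).sum(axis=1) / mask.sum(axis=1)
        item_mean = (ratings * mask).sum(axis=0) / mask.sum(axis=0)

        filled = ratings.copy()
        rows, cols = np.where(~mask)
        filled[rows, cols] = np.clip(user_mean[rows] + item_mean[cols] - global_mean, 1, 5)
        print(np.round(filled, 2))      # dense matrix ready for user-based similarity computation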
    Clustering user behavior patterns of E-commerce search engine based on mixture of Markov models
    QIN Jun XIAO Rong
    2012, 32(04):  1086-1089.  DOI: 10.3724/SP.J.1087.2012.01086
    Clustering the behavior patterns of customers helps provide more specific services for E-commerce applications. A mixture model based on Markov models was proposed to solve this problem for the search engine of an E-commerce website. The model assumed that the behaviors of every customer who uses the search engine can be represented by a Markov model and that every user is randomly assigned to a particular cluster. Based on the Bayesian Ying-Yang (BYY) harmony learning theory, a corresponding harmony function and an adaptive gradient algorithm were designed to deal with the parameter learning and model selection tasks. The experimental results show that the adaptive gradient algorithm achieves model selection and parameter learning more automatically and efficiently than the EM algorithm. Finally, this clustering approach was applied to real-world click-through logs of the search engine on www.taobao.com, and the results show that the method can capture the nature of customers' behaviors effectively.
    XML keyword search algorithm based on smallest lowest entity sub-tree interrelated
    YAO Quan-zhu YU Xun-bin
    2012, 32(04):  1090-1093.  DOI: 10.3724/SP.J.1087.2012.01090
    A query algorithm based on semantic relativity was proposed in this paper, concerning the many meaningless nodes contained in current XML keyword retrieval results. Based on the semi-structured and self-descriptive characteristics of XML files, the concept of the Smallest Lowest Entity Sub-Tree (SLEST), in which only a physical connection exists between keywords, was put forward by making full use of the semantic correlation between nodes. Based on the Smallest Interrelated Entity Sub-Tree (SIEST), an algorithm in which the result is represented by SLEST and SIEST instead of the Smallest Lowest Common Ancestor (SLCA) was proposed to capture the IDREF relation between keywords. The results show that the proposed algorithm can increase the precision of XML keyword retrieval.
    Ontology similarity computation using k-partite ranking method
    LAN Mei-hui REN You-jun XU Jian GAO Wei
    2012, 32(04):  1094-1096.  DOI: 10.3724/SP.J.1087.2012.01094
    This paper represented the information of each vertex in the ontology graph as a vector. According to the structure of the ontology graph, the vertices were divided into k parts. Vertices were chosen from each part and a ranking loss function was selected. A k-partite ranking learning algorithm was used to obtain the optimal ranking function, which maps each vertex of the ontology structure graph to a real number; the relative similarities of concepts were then calculated by comparing the differences between these real numbers. The experimental results show that the method is effective for calculating the relative similarity between ontology concepts.
    Graphics and image technology
    Hierarchical overlapping clustering and exemplar visualization of images returned by search engine
    GU Rui-jun CHEN Sheng-lei CHEN Geng WANG Jia-cai
    2012, 32(04):  1097-1100.  DOI: 10.3724/SP.J.1087.2012.01097
    To resolve the problems of high dimensionality, low accuracy and overlapping in image clustering, an effective link-clustering based multiple-cluster partition method for images was proposed in this paper. The method utilized image distance to measure similarity and identified overlapping clusters by using link clustering. As a result, an image may be partitioned into multiple clusters, and this multiple-cluster partition makes each cluster more specific. To validate the method, experiments were carried out on the data sets returned by a search engine when searching for some keywords. The results show that the proposed method can find explicit clusters with partial overlapping.
    Texture image retrieval based on complementary features
    QU Huai-jing
    2012, 32(04):  1101-1103.  DOI: 10.3724/SP.J.1087.2012.01101
    Because the performance of an image retrieval system can be effectively improved by using complementary features, a texture image retrieval method using L1 energy and generalized Gaussian distribution parameter features in the improved Contourlet transform domain was proposed. Firstly, the directional subband coefficients were modeled by a generalized Gaussian distribution with an improved approach. Then, the texture images were retrieved based on each single feature and its corresponding similarity measurement. Lastly, the texture images were retrieved using the complementary features and the direct sum of their similarity measurements. The experimental results show that, compared with a single feature, the average retrieval rates on the texture image database are effectively improved by the complementary features, which fully represent the structural information and the random distribution information.
    Markov edge descriptor for feature extraction algorithms of image
    CUI Ning-hai LIU Li-ping
    2012, 32(04):  1104-1107.  DOI: 10.3724/SP.J.1087.2012.01104
    After analyzing the texture feature extraction in the MPEG-7 standard, a new Markov Edge Descriptor (MED) based on the Edge Histogram Descriptor (EHD) was proposed by means of the Markov chain. It adopts the initial distribution of the Markov chain model to represent the spatial information between edges of the same kind, and the stationary distribution of the Markov chain model to represent the spatial information between different edges. The experimental results show that MED retains the merits of EHD and can describe the spatial information between image edges. It has excellent retrieval performance for edges of the same or different kinds, and its retrieval performance is better than that of EHD.
    New method of block-restoration for motor-vehicle blurred images
    LI Yu-cheng YU Hai-tao WANG Mu-shu
    2012, 32(04):  1108-1112.  DOI: 10.3724/SP.J.1087.2012.01108
    Abstract ( )   PDF (1003KB) ( )
    References | Related Articles | Metrics
    When actual motion-blurred images are restored with Wiener filtering, the restoration results are degraded by serious ringing artifacts and unsatisfactory local restoration. The main causes were identified through theoretical analysis, experimental comparisons and a study of the characteristics of the actual motion blurring process. Artificial boundary compensation and block restoration were then proposed to suppress the ringing artifacts and the unsatisfactory local restoration. The relations among blur parameters, spatial positions and speeds, as well as the criterion for block partitioning, were given. The experimental results verify that the proposed boundary compensation and block restoration can effectively reduce ringing artifacts and maintain the consistency of the overall restoration effect.
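    A schematic sketch of the pipeline follows, under stated assumptions: a uniform horizontal motion PSF, symmetric padding standing in for the artificial boundary compensation, and a fixed noise-to-signal ratio. Each block is deblurred separately and the padded border is discarded.

```python
# Schematic sketch: block-wise Wiener deconvolution with symmetric padding
# as a stand-in for the artificial boundary compensation.
import numpy as np

def motion_psf(length, shape):
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length               # horizontal motion-blur kernel
    return np.roll(psf, -(length // 2), axis=1)  # centre it to avoid a global shift

def wiener_deblur(block, psf_len, nsr=0.01, pad=16):
    g = np.pad(block.astype(float), pad, mode='symmetric')    # boundary compensation
    H = np.fft.fft2(motion_psf(psf_len, g.shape))
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * np.fft.fft2(g)  # Wiener filter
    return np.real(np.fft.ifft2(F))[pad:-pad, pad:-pad]

def block_restore(img, psf_len, block=128):
    out = np.zeros(img.shape, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            tile = img[r:r + block, c:c + block]
            out[r:r + block, c:c + block] = wiener_deblur(tile, psf_len)
    return out
```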
    New method of correcting barrel distortion on lattice model
    WU Kai-xing DUAN Ma-li
    2012, 32(04):  1113-1115.  DOI: 10.3724/SP.J.1087.2012.01113
    Abstract ( )   PDF (454KB) ( )
    References | Related Articles | Metrics
    In order to correct the barrel distortion of wide-angle lenses, a new correction method was proposed. Adopting the lattice model calibration method, the offset surfaces of the distorted pixels in the X- and Y-axis directions were obtained according to the positional relation of the dots between the distorted image and the ideal figure. The cubic B-spline interpolation function was then adopted to interpolate the surfaces, so that the offset of each pixel in the distorted image was obtained. Furthermore, the pixel shifts were rectified by coordinate conversion to achieve an undistorted image, and bilinear interpolation was used to reconstruct the gray levels of the pixels. The simulation results show that the proposed method corrects both the coordinate positions and the gray values well.
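    The sketch below illustrates the correction steps under simplifying assumptions: measured dot centres on a calibration lattice give per-dot offsets from the ideal grid, cubic B-splines interpolate the two offset surfaces, and the corrected image is resampled with bilinear interpolation. Function and parameter names are hypothetical.

```python
# Sketch: cubic B-spline offset surfaces from lattice dots, then bilinear
# resampling (order=1) of the distorted image.
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.ndimage import map_coordinates

def correct_barrel(img, grid_rows, grid_cols, distorted_pts):
    """grid_rows/grid_cols: strictly increasing ideal dot coordinates;
    distorted_pts: (len(grid_rows), len(grid_cols), 2) measured (row, col) centres."""
    ideal_r, ideal_c = np.meshgrid(grid_rows, grid_cols, indexing='ij')
    off_r = distorted_pts[..., 0] - ideal_r              # offset surface, Y direction
    off_c = distorted_pts[..., 1] - ideal_c              # offset surface, X direction
    spl_r = RectBivariateSpline(grid_rows, grid_cols, off_r, kx=3, ky=3)
    spl_c = RectBivariateSpline(grid_rows, grid_cols, off_c, kx=3, ky=3)
    rows, cols = np.arange(img.shape[0]), np.arange(img.shape[1])
    src_r = rows[:, None] + spl_r(rows, cols)             # source position of each corrected pixel
    src_c = cols[None, :] + spl_c(rows, cols)
    return map_coordinates(img.astype(float), [src_r, src_c], order=1, mode='nearest')
```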
    Weight-length consistent graph drawing algorithm for weighted undirected graphs
    ZHANG Wei ZENG Rui-bi HU Ming-xiao
    2012, 32(04):  1116-1118.  DOI: 10.3724/SP.J.1087.2012.01116
    Abstract ( )   PDF (621KB) ( )
    References | Related Articles | Metrics
    Concerning the problem that drawings of weighted undirected graphs need to represent edge weights with edge lengths, a new drawing algorithm for weighted undirected graphs based on the genetic algorithm was proposed. The algorithm obtained ideal vertex coordinates through crossover and mutation on the vertex coordinate coding, with a mutation operator that combined non-uniform mutation with single-neighbourhood mutation. In the fitness function, four evaluation criteria were employed: the average distance between vertices, the number of edge crossings, the uniformity of the angles around multi-degree vertices, and the uniformity of the ratios of edge weight to edge length. The experimental results show that graphs drawn by the proposed algorithm are free of edge crossings, show branches clearly and keep edge lengths consistent with weights. The visual outputs are clear, visually optimized and, above all, faithful to the weights. The algorithm is suitable for drawing most kinds of weighted undirected graphs and can be used in undirected graph drawing tools and prototypes.
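    As a toy illustration of the fitness design, the sketch below evaluates two of the four criteria, the number of edge crossings and the uniformity of weight-to-length ratios, with hypothetical weighting coefficients; the full genetic algorithm loop and the remaining criteria are omitted.

```python
# Toy illustration of two fitness terms (hypothetical coefficients a, b):
# edge crossings and weight-to-length uniformity; lower is better here.
import numpy as np

def segments_cross(p1, p2, p3, p4):
    d = lambda a, b, c: (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return d(p1, p2, p3) * d(p1, p2, p4) < 0 and d(p3, p4, p1) * d(p3, p4, p2) < 0

def fitness(coords, edges, weights, a=1.0, b=10.0):
    """coords: (n, 2) vertex positions; edges: list of (i, j); weights: array-like."""
    lengths = np.array([np.hypot(*(coords[i] - coords[j])) for i, j in edges])
    ratios = np.asarray(weights) / np.maximum(lengths, 1e-9)
    consistency = np.std(ratios) / (np.mean(ratios) + 1e-9)   # weight-length uniformity
    crossings = sum(segments_cross(coords[e1[0]], coords[e1[1]], coords[e2[0]], coords[e2[1]])
                    for k, e1 in enumerate(edges) for e2 in edges[k + 1:]
                    if len({*e1, *e2}) == 4)                   # skip edges sharing a vertex
    return a * consistency + b * crossings
```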
    Image denoising method based on dictionary learning with union of two orthonormal bases
    XIE Kai ZHANG Fen
    2012, 32(04):  1119-1121.  DOI: 10.3724/SP.J.1087.2012.01119
    Abstract ( )   PDF (484KB) ( )
    References | Related Articles | Metrics
    An overcomplete dictionary was used to represent images sparsely in order to improve image denoising performance; with the redundancy of an overcomplete dictionary, the sparse representation can efficiently capture the singular geometry of images. A global image prior model based on the sparse representation of image patches was presented in a Bayesian framework, and a maximum a posteriori probability estimator for the denoised image was constructed. The dictionary was composed of two orthonormal bases, and a method based on singular value decomposition was used for dictionary learning, in which the orthonormal property was exploited to update the chosen basis efficiently. The method improves the performance of image denoising, and the experimental results verify its validity.
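    A toy sketch of the orthonormal-basis update via the singular value decomposition (orthogonal Procrustes) follows, with a naive hard-thresholding step standing in for the paper's sparse coding; dimensions, iteration counts and the initialization are assumptions.

```python
# Toy sketch of the SVD-based update of one orthonormal basis; the patch
# matrix X is d x N, and D1, D2 are d x d orthonormal bases.
import numpy as np

def sparse_code(X, D1, D2, keep=8):
    A = np.vstack([D1.T @ X, D2.T @ X])            # analysis coefficients in both bases
    idx = np.argsort(np.abs(A), axis=0)[:-keep]    # zero all but the 'keep' largest per patch
    np.put_along_axis(A, idx, 0.0, axis=0)
    return A[:D1.shape[1]], A[D1.shape[1]:]

def update_basis(X, A_own, other_part):
    E = X - other_part                              # residual not explained by the other basis
    U, _, Vt = np.linalg.svd(E @ A_own.T)           # orthogonal Procrustes solution
    return U @ Vt                                   # nearest orthonormal basis to the update

def learn_union_dictionary(X, iters=10, keep=8):
    d = X.shape[0]
    D1 = np.eye(d)
    D2 = np.linalg.qr(np.random.default_rng(0).normal(size=(d, d)))[0]
    for _ in range(iters):                          # alternate coding and basis updates
        A1, A2 = sparse_code(X, D1, D2, keep)
        D1 = update_basis(X, A1, D2 @ A2)
        A1, A2 = sparse_code(X, D1, D2, keep)
        D2 = update_basis(X, A2, D1 @ A1)
    return D1, D2
```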
    Palm-dorsal vein recognition based on uniform discrete curvelet transformation
    WEI Shang-qing GU Xiao-dong
    2012, 32(04):  1122-1125.  DOI: 10.3724/SP.J.1087.2012.01122
    Abstract ( )   PDF (669KB) ( )
    References | Related Articles | Metrics
    Palm-dorsal vein recognition using the Uniform Discrete Curvelet Transform (UDCT) was proposed in this paper. After palm-dorsal Region Of Interest (ROI) extraction and image preprocessing, the UDCT was applied to the ROI. Then the phase variance of the curvelet coefficients was encoded. Finally, the Chi-square distance between the histograms of the codes was used for vein recognition. The experimental results show that the proposed method can identify palm-dorsal vein images with high robustness and high speed, even for poor-quality images.
    Face detection pre-processing method based on three-dimensional skin color model
    SUN Jin-guang ZHOU Yu-cheng MENG Xiang-fu LI Yang
    2012, 32(04):  1126-1129.  DOI: 10.3724/SP.J.1087.2012.01126
    Abstract ( )   PDF (645KB) ( )
    References | Related Articles | Metrics
    In order to improve face detection results under illumination changes and complex backgrounds, an algorithm for building a 3D color clustering model based on direct least squares estimation was proposed for the preprocessing phase. Firstly, the three plane projection distributions of skin color in CbCrCg space were taken as fitting objects; then smooth edges were obtained by a median filter and the Sobel operator; finally the best 3D color model was obtained through direct least squares fitting. In the experiments, public face databases and face images shot outdoors were used as test objects, and the results show that this algorithm achieves better segmentation than traditional color preprocessing algorithms and improves the detection rate more effectively.
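    A hedged sketch of the direct least-squares idea follows: an algebraic quadric surface is fitted to skin-colour samples in (Cb, Cr, Cg) space by minimizing the algebraic residual under a unit-norm constraint, and pixels evaluating inside or near the fitted model are kept as skin. The monomial design matrix and the thresholding rule are illustrative, not the paper's exact formulation.

```python
# Hedged sketch: the coefficient vector minimizing ||D a|| with unit norm
# is the last right singular vector of the monomial design matrix D.
import numpy as np

def _monomials(pts):
    x, y, z = pts[..., 0], pts[..., 1], pts[..., 2]
    return np.stack([x * x, y * y, z * z, x * y, x * z, y * z, x, y, z, np.ones_like(x)], axis=-1)

def fit_quadric(samples):
    """samples: (n, 3) skin-colour points; returns 10 quadric coefficients."""
    S = np.asarray(samples, dtype=float)
    _, _, Vt = np.linalg.svd(_monomials(S), full_matrices=False)
    a = Vt[-1]                                               # minimizes ||D a|| s.t. ||a|| = 1
    centre_val = (_monomials(S.mean(axis=0)[None]) @ a)[0]
    return a if centre_val < 0 else -a                       # make the model interior negative

def skin_mask(cbcrcg, coeffs, tol=0.0):
    # pixels evaluating inside (or, with tol > 0, near) the fitted model count as skin
    return _monomials(np.asarray(cbcrcg, dtype=float)) @ coeffs < tol
```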
    New fast face recognition algorithm based on Gabor filter
    KONG Rui HAN Ji-xuan
    2012, 32(04):  1130-1132.  DOI: 10.3724/SP.J.1087.2012.01130
    Abstract ( )   PDF (689KB) ( )
    References | Related Articles | Metrics
    Concerning the disadvantages of traditional face recognition algorithms, such as the high dimensionality of the extracted features and the heavy computation, a fast face recognition algorithm was proposed. The algorithm integrated a half-face recognition scheme, the Gabor filter, a Gabor feature selection method based on mutual information, and the nearest neighbor method for frontal face recognition. The face images in the training and testing sets were divided into left and right halves, and the half with the maximum entropy was chosen. The features of the face images were extracted by Gabor filters. The discriminating capability of each feature was then ranked by evaluating the classification error on the intra-set and extra-set with a weak classifier built from that single feature; the Gabor features with small errors were selected, while the mutual information between the selected features was examined at the same time. The nearest neighbor method was used to recognize the frontal face. The experimental results show that the proposed method has higher accuracy than the traditional half-face recognition algorithm and lower computational complexity than the traditional Gabor filter algorithm.
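    A simplified pipeline sketch follows: the half face with larger entropy is selected, Gabor magnitude responses are pooled on a coarse grid, features are ranked with scikit-learn's mutual_info_classif (a stand-in for the paper's error-plus-redundancy selection), and a 1-nearest-neighbour classifier performs recognition. Filter frequencies, grid size and the number of kept features are assumptions.

```python
# Simplified sketch: half-face selection by entropy, pooled Gabor magnitude
# features, mutual-information ranking and 1-NN recognition.
import numpy as np
from skimage.filters import gabor
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier

def entropy(img):
    p = np.histogram(img, bins=256, range=(0, 256))[0].astype(float)
    p /= p.sum()
    return -(p * np.log2(p + 1e-12)).sum()

def half_face(img):
    left, right = img[:, :img.shape[1] // 2], img[:, img.shape[1] // 2:]
    return left if entropy(left) >= entropy(right) else right   # keep the higher-entropy half

def gabor_features(img, freqs=(0.1, 0.2, 0.3), n_theta=4, grid=4):
    feats = []
    for f in freqs:
        for t in np.arange(n_theta) * np.pi / n_theta:
            real, imag = gabor(img, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            for strip in np.array_split(mag, grid, axis=0):      # coarse spatial pooling
                feats.extend(b.mean() for b in np.array_split(strip, grid, axis=1))
    return np.array(feats)

def recognize(train_imgs, train_labels, test_imgs, n_keep=64):
    X = np.array([gabor_features(half_face(i)) for i in train_imgs])
    Z = np.array([gabor_features(half_face(i)) for i in test_imgs])
    mi = mutual_info_classif(X, train_labels, random_state=0)
    keep = np.argsort(mi)[-n_keep:]                              # most informative features
    clf = KNeighborsClassifier(n_neighbors=1).fit(X[:, keep], train_labels)
    return clf.predict(Z[:, keep])
```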
    Binarization algorithm for CCD wool images with weak contour
    ZHOU Li BI Du-yan ZHA Yu-fei LUO Hong-kai HE Lin-yuan
    2012, 32(04):  1133-1136.  DOI: 10.3724/SP.J.1087.2012.01133
    Abstract ( )   PDF (633KB) ( )
    References | Related Articles | Metrics
    In order to solve the distortion of wool geometric dimensions resulting from the binarization of images with weak contours, an automatic binarization algorithm for Charge-Coupled Device (CCD) wool images was proposed with reference to a ramp-width-reduction approach based on intensity and gradient indices, using a classical global threshold method and a local one. In the algorithm, an edge-pixel-seeking step was added and the gray-adjusting factor was improved, with the Sobel operator and a ramp edge model introduced, to increase processing efficiency and avoid human intervention. Besides, every sub-image was processed with a mixed global and local threshold based on an analysis of Otsu's and Bernsen's methods, to intensify edge details and decrease distortion. The experimental results show that, compared with traditional methods, the new algorithm performs well in the automatic binarization of images with weak contours.
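    A minimal mixed global/local thresholding sketch in the spirit of the combination described above, without the ramp-width reduction or the improved gray-adjusting factor: low-contrast neighbourhoods follow the global Otsu threshold and the rest follow the local Bernsen mid-range threshold. Window size and contrast limit are assumptions.

```python
# Minimal mixed global/local thresholding sketch (Otsu plus Bernsen).
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from skimage.filters import threshold_otsu

def mixed_binarize(img, win=15, contrast=15):
    g = img.astype(float)
    t_global = threshold_otsu(img)                       # Otsu's global threshold
    local_max = maximum_filter(g, size=win)
    local_min = minimum_filter(g, size=win)
    t_local = (local_max + local_min) / 2.0              # Bernsen's mid-range threshold
    low_contrast = (local_max - local_min) < contrast    # flat regions: trust the global value
    out = np.where(low_contrast, g > t_global, g > t_local)
    return out.astype(np.uint8) * 255
```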
    OMR image segmentation based on mutation signal detection
    MA Lei LIU Jiang LI Xiao-peng CHEN Xia
    2012, 32(04):  1137-1140.  DOI: 10.3724/SP.J.1087.2012.01137
    Abstract ( )   PDF (636KB) ( )
    References | Related Articles | Metrics
    Concerning the accurate positioning of Optical Mark Recognition (OMR) images without any position information, an image segmentation approach based on mutation signal detection with the wavelet transform was proposed. Firstly, the horizontal and vertical projections were computed, and these projection functions were then transformed by wavelets to detect mutation points, which better reflect the boundaries of the OMR information. The adaptability of the algorithm relies on a limited number of wavelet transforms and mutation signal detections. The experimental results demonstrate that the method possesses high segmentation accuracy and stability, with a mean square error of segmentation accuracy of 0.4167 pixels. The processing is efficient because the segmentation uses only the horizontal and vertical projection information, and the algorithm is insensitive to noise thanks to the statistical characteristics of the projection functions and the multi-resolution characteristic of the wavelet transformation.
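    An illustrative sketch with PyWavelets: the binarized OMR image is projected onto each axis, single-level Haar detail coefficients of each projection are taken, and abrupt changes (mutation points) are read off as segmentation boundaries. The binarization threshold and the mutation criterion are assumptions.

```python
# Illustrative sketch: mutation detection on projection profiles via a
# single-level Haar wavelet decomposition.
import numpy as np
import pywt

def mutation_points(profile, k=3.0):
    _, detail = pywt.dwt(np.asarray(profile, dtype=float), 'haar')   # high-frequency part
    mags = np.abs(detail)
    idx = np.where(mags > mags.mean() + k * mags.std())[0]           # significant mutations
    return idx * 2                                                   # back to the original scale

def segment_omr(img, dark=128):
    binary = (img < dark).astype(float)               # dark marks on light paper (assumed)
    rows = mutation_points(binary.sum(axis=1))        # boundaries from the horizontal projection
    cols = mutation_points(binary.sum(axis=0))        # boundaries from the vertical projection
    return rows, cols
```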
    Compression of color image with 2D wavelet transform by set-partitioning RGB color components synchronously
    QIU Zi-hua HU Juan YANG Hua
    2012, 32(04):  1141-1143.  DOI: 10.3724/SP.J.1087.2012.01141
    Abstract ( )   PDF (622KB) ( )
    References | Related Articles | Metrics
    Concerning that conventional color image coding algorithms do not take advantage of the dependency among the RGB color components, a new algorithm that set-partitions the RGB color components synchronously, based on the Set Partitioning In Hierarchical Trees (SPIHT) algorithm, was proposed. In this algorithm, the RGB color components were treated as a whole, and sorting and set partitioning were carried out simultaneously using the same List of Insignificant Sets (LIS). The color embedded bit-stream generated by this algorithm can be truncated at any point and still reconstruct the color image. The simulation results show that the Peak Signal-to-Noise Ratio (PSNR) of the new algorithm on test images is about 0.1 dB to 0.70 dB higher than that of JPEG2000.
    New scheme for image transmission based on SPIHT
    FU Yao LIU Qing-li
    2012, 32(04):  1144-1146.  DOI: 10.3724/SP.J.1087.2012.01144
    Abstract ( )   PDF (441KB) ( )
    References | Related Articles | Metrics
    In this paper, a new real-time image transmission scheme based on Set Partitioning In Hierarchical Trees (SPIHT) was proposed. Firstly, the image data were transformed by wavelets. Secondly, in order to resist error propagation during transmission, the wavelet coefficients were separated into small blocks and encoded by SPIHT. Finally, in order to improve the quality of the reconstructed image, the wavelet coefficients of the highest level in every block were transmitted repeatedly, and an optimum frame length was derived to improve the throughput of the image transmission system. Both theoretical analysis and simulation results validate that the proposed scheme provides stronger error resilience than the traditional SPIHT-based scheme and can improve the peak signal-to-noise ratio of the reconstructed image by about 10 dB.
    Typical applications
    Estimating parameters of software reliability models by ant colony algorithm
    ZHENG Chang-you LIU Xiao-ming HUANG Song
    2012, 32(04):  1147-1151.  DOI: 10.3724/SP.J.1087.2012.01147
    Abstract ( )   PDF (762KB) ( )
    References | Related Articles | Metrics
    It is difficult to estimate the parameters of software reliability models, since most of them are nonlinear. The most widely used methods for parameter estimation of software reliability models were summarized, and a new approach based on the ant colony algorithm was proposed. Experiments on three typical models, the G-O model, the Weibull model and the M-O model, show that the algorithm has good applicability, and the results demonstrate that the proposed method solves the non-convergence problem of traditional methods. Compared with Particle Swarm Optimization (PSO), the proposed method converges up to two times faster, and on some subjects achieves ten times higher precision.
    Hardware reliability study of embedded system based on Markov chain
    GUO Rong-zuo HUANG Jun WANG Lin
    2012, 32(04):  1152-1156.  DOI: 10.3724/SP.J.1087.2012.01152
    Abstract ( )   PDF (919KB) ( )
    References | Related Articles | Metrics
    Embedded system products often suffer hardware failures in use, which affects the safety and reliability of the system. In this paper, reliability was studied at the hardware level of the embedded system. First, the reliability goal of the embedded system hardware was defined and the theory of Markov processes was introduced. Then, Markov models for a single IP hard core and for the embedded system hardware were set up. Finally, the hardware reliability of an embedded automatic block controller was calculated and analyzed with the established models. The results show that the Markov model can accurately describe the state changes of the embedded system hardware and can be used to calculate and analyze its reliability, so the model has practical value.
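    A small numerical sketch, with hypothetical failure and repair rates, of the kind of Markov analysis described above: a two-state continuous-time chain for one hardware module yields the transient state probabilities and the steady-state availability.

```python
# Small numerical sketch: two-state continuous-time Markov chain for one
# hardware module (rates are assumed, not from the paper).
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-4, 1e-2                    # failure and repair rates per hour (assumed)
Q = np.array([[-lam,  lam],             # state 0: working, state 1: failed
              [  mu,  -mu]])

def state_probabilities(t, p0=(1.0, 0.0)):
    return np.asarray(p0) @ expm(Q * t)        # transient solution p(t) = p(0) exp(Qt)

availability = mu / (lam + mu)                 # steady-state probability of the working state
print(state_probabilities(100.0), availability)
```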
    Effectiveness evaluation method based on statistical analysis of operations
    CHENG Kai ZHANG Rui ZHANG Hong-jun CHE Jun-hui
    2012, 32(04):  1157-1160.  DOI: 10.3724/SP.J.1087.2012.01157
    Abstract ( )   PDF (637KB) ( )
    References | Related Articles | Metrics
    The effect data of operations show significant randomness because of the many uncertain elements in the course of action. In order to explore the rules of warfare hidden behind the data, effectiveness evaluation was studied based on statistical analysis. The basic concepts of action and its effectiveness were analyzed. With simulation data produced by the Enhanced ISAAC Neural Simulation Toolkit (EINSTein), where ISAAC stands for Irreducible Semi-Autonomous Adaptive Combat, single-run, single-group and multi-group experimental methods were used to study the statistical characteristics of offensive actions. It was found that, for the party with a combat advantage, increasing the firepower radius achieves better operational results than increasing the number of personnel. On this basis, an evaluation method of action effectiveness was proposed and validated with simulation data, providing a feasible way to evaluate action effectiveness from actual combat data.
    Equity and real-time traffic signal scheduling algorithm
    LI HuiLi GUO Ai-huang
    2012, 32(04):  1161-1164.  DOI: 10.3724/SP.J.1087.2012.01161
    Abstract ( )   PDF (589KB) ( )
    References | Related Articles | Metrics
    Real-time traffic signal scheduling is an important way to relieve traffic congestion, and research on its equity is also vital. In view of the similarities between computer communication networks and transportation networks, and drawing on the ideas of max-min fairness and proportional fairness, a max-min fairness traffic signal scheduling algorithm and a proportional fairness traffic signal scheduling algorithm were proposed. A variety of simulations were conducted to compare their performance with the fixed-time control and minimum-queue-length control algorithms. The results show that minimum-queue-length control and fixed-time control may not treat every vehicle fairly, since they cause some vehicles to wait for a comparatively long time. Although max-min fairness treats each vehicle fairly, it performs badly when the traffic flow density is high. Proportional fairness performs well in terms of both average delay and fairness. The results provide a way to control traffic lights that is both efficient and fair, which has good application value.
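    A toy sketch of the two fairness notions applied to green-time allocation within one cycle (hypothetical demands and queue lengths, not the paper's controller): max-min fairness by progressive filling and proportional fairness by weighting green time with queue lengths.

```python
# Toy sketch: max-min fair and proportionally fair green-time splits.
import numpy as np

def max_min_green(demands, total):
    """Progressive filling: no phase receives more green time than it can use."""
    alloc = np.zeros(len(demands))
    remaining, active = float(total), list(range(len(demands)))
    while active and remaining > 1e-9:
        share = remaining / len(active)
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if alloc[i] >= demands[i] - 1e-9:
                active.remove(i)                  # this phase's demand is satisfied
    return alloc

def proportional_green(queues, total):
    w = np.asarray(queues, dtype=float)
    return total * w / w.sum()       # maximizes sum_i w_i * log(x_i) under the cycle budget

print(max_min_green([10, 25, 40, 60], total=90))
print(proportional_green([4, 10, 16, 24], total=90))
```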
    Test data generation based on K-means clustering and particle swarm optimization
    PAN Shuo WANG Shu-yan SUN Jia-ze
    2012, 32(04):  1165-1167.  DOI: 10.3724/SP.J.1087.2012.01165
    Abstract ( )   PDF (644KB) ( )
    References | Related Articles | Metrics
    In combinatorial testing, when the software under test has many factors and values, traditional Particle Swarm Optimization (PSO) for test data set generation requires many iterations and converges slowly. A test data set generation method based on the K-means clustering algorithm and PSO was therefore proposed. By clustering and partitioning the test data set, the diversity of the test data was enhanced and PSO was improved, with the compactness among the particles in each region increased. Several typical cases show that this method has advantages while still ensuring coverage.
    Information management model based on the chaordic organization
    SU Li-wen DU Gang
    2012, 32(04):  1168-1172.  DOI: 10.3724/SP.J.1087.2012.01168
    Abstract ( )   PDF (721KB) ( )
    References | Related Articles | Metrics
    The integration and sharing of information resources is one of the difficult problems in current E-government construction. This paper applied Dee Hock's idea and method of the chaordic organization to build an information management model for resource sharing, in order to provide a practical way to integrate and share E-government information resources. By analyzing the principle of the chaordic organization and the cooperative relations among its members, the authors constructed a management model for information connection and resource sharing. This common model was applied to the system design for E-government information resource integration and sharing, and a common management model for information resource integration and sharing was created, with which the system framework was constructed and problems such as the administrative examination and approval system of government departments and the management system for social trustworthiness were analyzed. By linking theory with practice, this paper establishes a soft system framework model and a technical method, with the features of the chaordic organization, for managing the integration and sharing of E-government information resources.
    Three-dimensional slope stability analysis based on DEM data
    ZHANG Shao-hua JI Wei-yong FAN Dong-juan CUI Jian-jun
    2012, 32(04):  1173-1175.  DOI: 10.3724/SP.J.1087.2012.01173
    Abstract ( )   PDF (458KB) ( )
    References | Related Articles | Metrics
    Concerning the application requirements of slope stability analysis over large areas and the defects of current calculation methods, a three-dimensional analysis method based on Digital Elevation Model (DEM) data was proposed. In this method, a sphere, instead of an ellipsoid, was used to search for the slip surface, and the three-dimensional slope safety factor was calculated through an integral operator applied to the results of the two-dimensional analysis. Finally, the position and shape of the possible landslide were determined according to the safety factor. The practical application results confirm that this method simplifies the search algorithm, ensures the accuracy of slope stability analysis, and improves the efficiency of analysis and calculation.
    Design of power supply for high-performance computer system
    YAO Xin-an SONG Fei HU Shi-ping
    2012, 32(04):  1176-1179.  DOI: 10.3724/SP.J.1087.2012.01176
    Abstract ( )   PDF (677KB) ( )
    References | Related Articles | Metrics
    To meet the high-efficiency, low-cost and high-reliability power requirements of high-performance computer systems, a distributed power system with a 12 V DC bus was developed. The block diagrams and operating principles of the power supplies for the cabinet and the motherboard were described, and the voltage regulator module for the processor on the motherboard was analyzed in detail. Based on adaptive voltage positioning control, a small-signal model of the voltage regulator module was presented, and its output impedance and control bandwidth were discussed. Following the proposed compensator design guidelines, the experimental results demonstrate very good transient response, and the application results show that the proposed power supply fully meets the power requirements of high-performance computer systems.
    Super resolution pitch detection based on LPC and AMDF
    WANG En-cheng SU Teng-fang YUAN Kai-guo WU Chun-hua
    2012, 32(04):  1180-1183.  DOI: 10.3724/SP.J.1087.2012.01180
    Abstract ( )   PDF (587KB) ( )
    References | Related Articles | Metrics
    According to the production mechanism of speech signals, a super-resolution pitch detection algorithm combining Linear Predictive Coding (LPC) with the Average Magnitude Difference Function (AMDF) was proposed. Firstly, the LPC residual was extracted by linear predictive analysis. Then, the cumulative mean normalized difference function and a revision of the difference signal were used to make the pitch valley sharper. Finally, parabolic interpolation and a pitch-multiple check were applied to select the real pitch period. The experimental results indicate that the pitch detection performance of the algorithm is superior to that of conventional algorithms; the proposed algorithm overcomes half-frequency errors and shows good accuracy and robustness under high Signal-to-Noise Ratio (SNR) conditions.
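    A sketch combining the main steps under illustrative settings (LPC order and search range are assumptions, and the difference-signal revision and pitch-multiple check from the paper are omitted): the LPC residual is computed by the autocorrelation method, the cumulative mean normalized difference function locates the pitch valley, and parabolic interpolation refines the lag beyond sample resolution.

```python
# Sketch: LPC residual + cumulative mean normalized difference function +
# parabolic interpolation for a super-resolution pitch estimate.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_residual(frame, order=12):
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = solve_toeplitz(r[:order], -r[1:order + 1])        # autocorrelation-method LPC
    return lfilter(np.concatenate(([1.0], a)), [1.0], frame)

def cmnd(x, max_lag):
    d = np.array([np.sum((x[:len(x) - t] - x[t:]) ** 2) for t in range(1, max_lag + 1)])
    return d * np.arange(1, max_lag + 1) / np.maximum(np.cumsum(d), 1e-12)

def pitch_period(frame, fs, fmin=60, fmax=400):
    e = lpc_residual(np.asarray(frame, dtype=float))
    nd = cmnd(e, int(fs / fmin))
    lo = int(fs / fmax)
    t = lo + int(np.argmin(nd[lo:]))                      # deepest valley in the search range
    if 0 < t < len(nd) - 1:                               # parabolic interpolation: super resolution
        a, b, c = nd[t - 1], nd[t], nd[t + 1]
        t = t + 0.5 * (a - c) / (a - 2 * b + c + 1e-12)
    return (t + 1) / fs                                   # index 0 corresponds to a one-sample lag
```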
    Application of contrast source inversion algorithm to image reconstruction of 2-D hybrid targets
    WANG Xue-jing MIAO Jing-hong René Marklein
    2012, 32(04):  1184-1187.  DOI: 10.3724/SP.J.1087.2012.01184
    Abstract ( )   PDF (621KB) ( )
    References | Related Articles | Metrics
    In view of the limited accuracy of imaging algorithms, a nonlinear Contrast Source Inversion (CSI) algorithm combined with regularization and Concurrent Frequency (CF) processing was proposed for reconstructing a hybrid target in an anechoic chamber. The experimental data were obtained from multi-frequency, multi-bistatic measurements. The reconstructed position, shape and contrast value of the target were presented, verifying the accuracy of the extended CSI algorithm for reconstructing complicated 2-D hybrid targets.
    Least square support vector machines model based on particle swarm optimization for hydrological forecasting
    LI Wen-li LI Yu-xia
    2012, 32(04):  1188-1190.  DOI: 10.3724/SP.J.1087.2012.01188
    Abstract ( )   PDF (482KB) ( )
    References | Related Articles | Metrics
    The Support Vector Machine (SVM) algorithm provides a new way to study mid- and long-term hydrological forecasting, which requires learning from finite samples. Concerning the time consumption and unsatisfactory performance of conventional parameter selection methods, a Least Squares Support Vector Machine (LS-SVM) model based on Particle Swarm Optimization (PSO) was given in this paper. The model was built on the regression principle of the least squares support vector machine, and its key parameters were optimized by a PSO algorithm with a random seeking strategy. Monthly runoff forecasting for the Yele Hydropower Station on the Nanya River indicates that the algorithm improves both efficiency and accuracy.
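    A compact sketch of the model follows, under assumptions not taken from the paper: an RBF kernel, a hold-out validation split and a basic PSO over log10(gamma) and log10(sigma); the station data and the random seeking strategy are not reproduced.

```python
# Compact sketch: LS-SVM regression with PSO-tuned (gamma, sigma).
import numpy as np

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    n = len(y)
    K = rbf(X, X, sigma) + np.eye(n) / gamma              # regularized kernel matrix
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))  # LS-SVM linear system
    return sol[0], sol[1:]                                # bias b and support values alpha

def lssvm_predict(X_train, b, alpha, sigma, X_new):
    return rbf(X_new, X_train, sigma) @ alpha + b

def pso_tune(X_tr, y_tr, X_va, y_va, n_particles=15, iters=30):
    rng = np.random.default_rng(0)
    pos = rng.uniform([-2, -2], [6, 3], size=(n_particles, 2))   # log10(gamma), log10(sigma)
    vel = np.zeros_like(pos)
    def cost(p):
        g, s = 10.0 ** p
        b, a = lssvm_fit(X_tr, y_tr, g, s)
        return np.mean((lssvm_predict(X_tr, b, a, s, X_va) - y_va) ** 2)
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]
        gbest = pbest[np.argmin(pcost)]
    return 10.0 ** gbest                                  # tuned (gamma, sigma)
```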