
Table of Contents

    01 September 2011, Volume 31 Issue 09
    Network and communications
    Study and implementation of optimization mechanism for hybrid P2P spatial indexing network
    WU Jia-gao SHAO Shi-wei HUA Zheng ZOU Zhi-qiang HU Bin
    2011, 31(09):  2301-2304.  DOI: 10.3724/SP.J.1087.2011.02301
    Concerning the insufficiency of current P2P Geographic Information System (GIS) in utilizing the network resources of clients, and based on an analysis and summary of existing hybrid P2P spatial indexing networks, a practical group strategy was proposed. In this strategy, peers with the same spatial data semantics were joined in the same group, in which the burden of queries was shared by the group members together. Furthermore, a replacement algorithm for current index nodes and a backup strategy were proposed to improve the query performance and stability of the overall network. The experimental results indicate that the indexing network with the group strategy can effectively make use of clients' network resources and improve query performance when a large number of queries arrive concurrently.
    Cross autonomous system cooperative model for flow monitoring in trustworthy and controllable network
    ZHANG Xiao-juan LI Wei
    2011, 31(09):  2304-2312.  DOI: 10.3724/SP.J.1087.2011.02308
    Flow measurement algorithms in the current network environment lack cooperation, so the coverage of monitored flows is low. To solve this problem, a new cooperative flow monitoring model based on a trustworthy and controllable network was proposed. This model realized cross-AS (Autonomous System) cooperative flow monitoring by packet marking and result sharing, and optimized the flow monitoring coverage by distributing monitoring responsibility across routers through the control node in each AS. Comparison with previous work in experiments verifies that this model makes the best use of network resources, balances the monitoring overhead of each router, improves monitoring coverage, and performs better than previous methods.
    Active queue management algorithm based on neuron adaptive variable structure control
    ZHOU Chuan WANG Zong-xin WU Yi-fei CHEN Qing-wei
    2011, 31(09):  2305-2307.  DOI: 10.3724/SP.J.1087.2011.02305
    Considering the nonlinearity of the TCP model, the uncertainty of Round Trip Time (RTT) and the fluctuation of network load, an Active Queue Management (AQM) scheme based on a Variable Structure Controller (VSC) with single-neuron adaptive learning was proposed. The nonlinear VSC was used to guarantee the swiftness and robustness of the queue response at the router. However, the jitter of the VSC would cause queue fluctuation and performance degradation. Therefore, a single neuron was introduced to adjust the parameters of the VSC in order to alleviate the effect of jitter and modeling uncertainty. The proposed scheme can greatly reduce the jitter and enhance the robustness of the AQM control system. Finally, simulation results on the NS-2 simulator show the effectiveness of the proposed algorithm.
    Optimal deployment of multiple sink nodes in wireless sensor networks
    LIU Qiang MAO Yu-ming LENG Su-peng LI Long-jiang ZHUANG Yi-qun
    2011, 31(09):  2313-2316.  DOI: 10.3724/SP.J.1087.2011.02313
    In a large-scale Wireless Sensor Network (WSN), the nodes closer to the single sink node use up their energy more quickly than others because they relay more packets, so the network fails rapidly. In order to prolong the network lifetime, the number of hops from sensor node to sink node needs to be reduced. An efficient method is to deploy multiple sink nodes instead of a single one; therefore, how many sink nodes should be deployed to minimize the network cost and maximize the network lifetime must be considered. A network lifetime model and a cost model were proposed for a WSN with multiple sink nodes, and a new method was presented to determine the optimal number of sink nodes by computing the Ratio of Lifetime to Cost (RLC). The theoretical studies show that the number of sink nodes is related to the cost of sensor nodes and sink nodes, the network scale, the number of critical sensor nodes and the transmission power of the sensor nodes. The simulation results prove the theoretical conclusion.
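    A minimal Python sketch of the Ratio of Lifetime to Cost selection described above: the candidate number of sink nodes that maximizes RLC = lifetime / cost is picked. The lifetime model lifetime_fn and all numeric values are illustrative assumptions, not the models derived in the paper.

        def optimal_sink_count(n_sensors, c_sensor, c_sink, lifetime_fn, max_sinks=20):
            # Evaluate RLC = lifetime(k) / total cost(k) for each candidate k.
            best_k, best_rlc = 1, float("-inf")
            for k in range(1, max_sinks + 1):
                cost = n_sensors * c_sensor + k * c_sink
                rlc = lifetime_fn(k) / cost
                if rlc > best_rlc:
                    best_k, best_rlc = k, rlc
            return best_k, best_rlc

        # Toy lifetime model: more sinks shorten hop counts, with diminishing returns.
        k_opt, rlc_opt = optimal_sink_count(1000, 10.0, 500.0, lambda k: 1e4 * k ** 0.5)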
    Download performance optimization in Hadoop distributed file system based on P2P
    LIAO Bin YU Jiong ZHANG Tao YANG Xing-yao
    2011, 31(09):  2317-2320.  DOI: 10.3724/SP.J.1087.2011.02317
    The data block storage mechanism and downloading process in a Hadoop Distributed File System (HDFS) cluster were analyzed. In combination with the multi-point, multi-threaded Peer-to-Peer (P2P) download idea, an efficiency optimization algorithm was proposed from the aspects of data block, file and cluster. Concerning the possible load imbalance caused by multi-threaded download in an HDFS cluster, a download-point selection algorithm was put forward to optimize the download-point selection. The mathematical analysis and experiments prove that the three methods can improve download efficiency and that the download-point selection algorithm can achieve load balance among the DataNodes in an HDFS cluster.
    Continuous wireless network coding based on sliding windows
    REN Zhi ZHENG Ai-li YAO Yu-kun LI Qing-yang
    2011, 31(09):  2321-2324.  DOI: 10.3724/SP.J.1087.2011.02321
    According to the characteristics of wireless single-hop broadcast networks, a network coding scheme based on sliding windows named NCBSW was proposed. The scheme designed a coding window which slides in chronological order over the matrix of data packets waiting for retransmission, and the data packets used for encoding were chosen from the sliding window. Meanwhile, the scheme ensured the solvability of the coded packets. The simulation results show that the proposed scheme outperforms the retransmission approach in wireless broadcasting based on network coding (NCWBR) in terms of the number of retransmissions, delay, network overhead and energy consumption.
    Limited feedback precoding for multiuser MIMO systems based on double codebook
    FU Hong-liang TAO Yong ZHANG Yuan
    2011, 31(09):  2325-2328.  DOI: 10.3724/SP.J.1087.2011.02325
    Concerning the performance loss due to limited feedback in multiuser Multiple Input Multiple Output (MIMO) downlink systems, a new limited feedback precoding scheme for multiuser MIMO systems based on a double codebook was proposed. The maximum SINR criterion was used to select the optimal codewords from the Grassmannian codebook and the perturbation codebook at the receiver, and the Grassmannian precoding codeword index and the perturbation codeword index were fed back to the transmitter; the perturbation codeword was then used at the transmitter to attain the optimal capacity and compensate for the capacity loss caused by limited feedback. The simulation results show that the proposed method ensures the Bit Error Rate (BER) performance at a limited feedback cost, and the system throughput is improved effectively.
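    A rough numpy sketch of the receiver-side max-SINR codeword selection step; the codebook, channel and interference model here are simplified single-stream assumptions, and the perturbation-codebook refinement stage of the proposed scheme is not modelled.

        import numpy as np

        def select_codeword(H, codebook, interferers, noise_var=1.0):
            # H: channel matrix (n_rx, n_tx); codebook: unit-norm precoders (n_words, n_tx);
            # interferers: precoding vectors already assigned to other users.
            interference = sum(np.linalg.norm(H @ v) ** 2 for v in interferers)
            best_idx, best_sinr = 0, -np.inf
            for i, w in enumerate(codebook):
                sinr = np.linalg.norm(H @ w) ** 2 / (noise_var + interference)
                if sinr > best_sinr:
                    best_idx, best_sinr = i, sinr
            return best_idx          # index fed back to the transmitter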
    DOA estimation of coherent NLFM signals based on DPT and virtual array
    GAO Chun-xia ZHANG Tian-qi WEI Shi-peng TAN Fang-qing
    2011, 31(09):  2329-2332.  DOI: 10.3724/SP.J.1087.2011.02329
    Because multi-path transmission and reflection are common in radio communications, wideband coherent sources must be considered. A new algorithm for Direction Of Arrival (DOA) estimation of coherent wideband Non-Linear Frequency Modulation (NLFM) signals based on Discrete Polynomial-phase Transform (DPT) and a virtual array was introduced, and comparison, analysis and improvement were carried out on it. The algorithm solves the problem that conventional MUSIC and ESPRIT algorithms cannot resolve coherent signals. It estimates more accurately without decreasing the effective array aperture, and it also improves the utilization of the array. An angle estimation algorithm for two closely-spaced emitters was then proposed. The simulation results verify the correctness and efficiency of the new algorithm.
    Time-frequency analysis of frequency-hopping signals based on window function design
    GUO Jian-tao LIU You-an WANG Lin
    2011, 31(09):  2333-2335.  DOI: 10.3724/SP.J.1087.2011.02333
    To suppress the cross-term and increase the aggregation of time-frequency signal components, a new time-frequency analysis method for frequency-hopping signals was proposed based on an adjusted window for the kernel function design of the Smoothed Pseudo Wigner-Ville Distribution (SPWVD). According to the ratio of auto-term energy to cross-term energy in the time-frequency plane, the shape of the kernel function of SPWVD was adjusted by changing the spread factors of the window function to obtain an excellent time-frequency representation at a fixed kernel width. Compared with a fixed window function, the proposed time-frequency representation allows the time-frequency parameters of frequency-hopping signals to be estimated efficiently with good noise immunity.
    Heterogeneous network selection algorithm based on extension theory and fuzzy analytic hierarchy process
    HU Tu JING Zhi-hong ZHANG Qiu-lin
    2011, 31(09):  2336-2339.  DOI: 10.3724/SP.J.1087.2011.02336
    The determination of index weights in current heterogeneous network selection algorithms is subjective. To solve this, a network selection algorithm based on fuzzy Analytic Hierarchy Process (AHP) and extension theory was proposed. Based on an analysis of the network requirements of different kinds of business, and in combination with extension theory, the collected performance parameters were mapped into corresponding calibration intervals. Through the establishment of the matter-element of the network and the calculation of its relative membership degree, a new decision matrix was constructed. The integrated weights of the network performance parameters were calculated by fuzzy AHP. Finally, the optimal access network was chosen through the weighted ranking of the relative membership degrees. The simulation results show that the algorithm can take account of different users' business types and the objective network performance, so that a multi-mode terminal can select the network accurately and effectively in heterogeneous networks.
    Uneven clustering routing algorithm for WSN based on particle swarm optimization
    SU Bing HUANG Guan-fa
    2011, 31(09):  2340-2343.  DOI: 10.3724/SP.J.1087.2011.02340
    Clustering algorithms provide an effective way to save energy for large-scale Wireless Sensor Network (WSN) remote monitoring systems, in which cluster-heads communicate data to the base station through multi-hop routing. The cluster-heads closer to the base station may be overloaded because they relay a lot of traffic from other cluster-heads, which may exhaust their energy and cause them to die early, thus partitioning the entire network. Concerning this uneven energy consumption in WSN clustering algorithms, an uneven clustering algorithm based on Particle Swarm Optimization (PSO), named PSO-UCA, was proposed. By using the PSO algorithm, PSO-UCA partitioned all nodes into clusters of unequal size, in which the clusters closer to the base station are smaller, so that their cluster-heads can preserve more energy for the inter-cluster relay traffic. The simulation results demonstrate that, compared with the LEACH algorithm, the proposed clustering algorithm can prolong the network lifetime by 30%.
    Enhanced TCP Westwood algorithm based on nonlinear congestion window increase
    ZHAO Wen-bo SUN Xiao-ke MA Cao-chuan
    2011, 31(09):  2344-2348.  DOI: 10.3724/SP.J.1087.2011.02344
    The congestion window of TCP Westwood (TCPW) increases in a linear mode at the congestion avoidance phase in high-speed networks; therefore, it cannot rapidly obtain or maintain high throughput. During the slow-start stage, the congestion window of TCPW grows exponentially, which causes the number of datagrams to increase too fast and raises the probability of congestion. For the above defects, TCPW was improved in two aspects, and the new algorithm was called NLTCPW. During the slow-start stage, the send window of NLTCPW reached 10 packets faster than that of TCPW; after that, the growth of the send window was decelerated. A simple nonlinear mode was used to increase the congestion window at the congestion avoidance stage. The performance analysis of the mathematical model and the simulation results show that the NLTCPW algorithm has better throughput, a lower packet loss rate and better fairness, and is friendly and stable in high-speed networks.
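    The window-growth idea can be illustrated with a small Python sketch of one per-RTT update; the specific growth functions below (exponential growth capped at 10 packets, then decelerated growth, and a sub-linear increase in congestion avoidance) are assumptions standing in for the unspecified nonlinear mode of NLTCPW.

        def nltcpw_update(cwnd, ssthresh, bdp_pkts, in_slow_start):
            # cwnd, ssthresh and bdp_pkts (bandwidth-delay product) are in packets.
            if in_slow_start:
                if cwnd < 10:
                    cwnd *= 2                       # classic exponential slow start
                else:
                    cwnd += max(1.0, cwnd ** 0.5)   # decelerated growth after 10 packets
                if cwnd >= ssthresh:
                    in_slow_start = False
            else:
                # Nonlinear congestion avoidance: growth slows as cwnd approaches the BDP.
                cwnd += 1.0 / (1.0 + cwnd / max(bdp_pkts, 1.0))
            return cwnd, in_slow_start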
    Selection scheme of message carried vehicles in vehicle network environment
    LIU Jing WANG Xin-hua WANG Shuo
    2011, 31(09):  2349-2351.  DOI: 10.3724/SP.J.1087.2011.02349
    Because of the dynamic topology of vehicular Ad-Hoc networks, with the existing information dissemination schemes it is difficult to completely download a file within the communication range of a single roadside Access Point (AP), and waiting for the next AP to continue the transfer causes a long delay. A method was proposed in which multiple vehicles within the range of idle APs download and spread file fragments. The message delivery delay was divided into direct and indirect encounter delays, which were discussed respectively, and a specific scheme was given for choosing the message-carrying vehicles. The experimental results on message loss rate and delay show that an environment using the proposed scheme can effectively improve the reliability of message downloading and shorten the delay of delivering the message to the destination vehicle, without adding significant load to the network.
    Information security
    Traceback of IPv6 based on deterministic linear network coding
    YAN Qiao NING Tu-wen
    2011, 31(09):  2352-2355.  DOI: 10.3724/SP.J.1087.2011.02352
    To solve the problem that the Probabilistic Packet Marking (PPM) method for IPv6 has excessive path reconstruction complexity and a high false alarm rate, a new traceback method for IPv6 based on deterministic linear network coding was proposed. The method chose the hop-by-hop options extension header of IPv6 as the marking region and applied deterministic linear network coding to probabilistic packet marking. Moreover, 64-bit sampling was employed to check the attack paths. The theoretical analysis and simulation in the NS2 environment show that the method decreases the wasted network bandwidth and the number of packets needed to reconstruct the path, reduces the computational complexity of reconstruction and the false alarm rate, and improves the marking efficiency.
    Multi-granular resource access control for cloud manufacturing based on K-shortest path algorithm
    LI Chun-quan SHANG Yu-ling HU Chun-yang ZHU Pan-feng
    2011, 31(09):  2356-2358.  DOI: 10.3724/SP.J.1087.2011.02356
    Multi-granular resource access control is a key issue of Cloud Manufacturing (CM). In this paper, a multi-granular resource access control model (MGAC) was proposed and the method of converting it into an MGAC digraph was analyzed on the basis of attribute-based access control. The K-shortest path solution algorithm was studied based on the Dijkstra algorithm, and the feasibility of the method was verified through an example. Finally, the algorithm performance was analyzed with respect to the number of vertices, the number of objectives and the change of K, and the validity of the algorithm was proved in comparison with related algorithms.
    Network anomaly detection based on anisotropic centroidal Voronoi diagram
    LI Xiao-lei WANG Lei
    2011, 31(09):  2359-2361. 
    Network anomaly detection is an important research topic in the field of intrusion detection. However, it is inefficient in practice because the detection rate and the false alarm rate restrain each other. Based on the anisotropic centroidal Voronoi diagram, a new network anomaly detection algorithm was proposed. In this new algorithm, the anisotropic centroidal Voronoi diagram was first used to cluster the data set, then the point density of each data point was computed and used to determine whether the data point was normal or not. The laboratory tests on the KDD Cup 1999 data sets show that the new algorithm has a higher detection rate and a lower false alarm rate.
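    A simplified Python stand-in for the point-density test applied after the clustering step: density is taken as the inverse of the mean distance to the k nearest neighbours, and the lowest-density points are flagged as anomalies. The Voronoi-diagram clustering itself and the actual threshold rule of the paper are not reproduced here.

        import numpy as np

        def density_anomaly_flags(X, k=10, percentile=5):
            # Pairwise Euclidean distances (fine for a sketch; O(n^2) memory).
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
            density = 1.0 / (knn_mean + 1e-12)
            threshold = np.percentile(density, percentile)
            return density, density < threshold      # True marks a suspected anomaly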
    Hidden process detection method based on multi-characteristics matching
    ZHOU Tian-yang ZHU Jun-hu WANG Qing-xian
    2011, 31(09):  2362-2366.  DOI: 10.3724/SP.J.1087.2011.02362
    Based on certain detection characteristics of processes, hidden processes can be uncovered by memory searching. However, malware developed with the help of evolving Rootkit techniques can hardly be detected, because its characteristics may have been manipulated or the virtual memory scan may be invalidated, which increases the difficulty of detection. To address this issue, a new multi-characteristics matching approach was proposed: the whole physical memory image is obtained by Page Table Entry (PTE) patching, the key fields are extracted from the process data structure to construct a template that improves the reliability of the characteristics, and a similarity measure is introduced to prevent detection leakage. The results show that the new detection approach is effective in hidden process searching.
    Dynamic taint analysis based on virtual technology
    CHEN Yan-ling ZHAO Jing
    2011, 31(09):  2367-2372.  DOI: 10.3724/SP.J.1087.2011.02367
    The records of current taint analysis tools are not accurate. To solve this, dynamic taint analysis based on virtualization technology was studied and implemented. A virtualization-based dynamic taint analysis framework was designed, and two kinds of taint signature models, based on Hook technology and Hash-traversal technology, were given for memory taint and hard disk taint respectively. A taint propagation strategy was put forward according to the instruction types classified by the instruction encoding formats of Intel and AMD, and a taint record strategy based on instruction filtering was given to solve the problem of redundant information records. The experimental results prove that the proposed method is effective, and can be well used in test case generation and vulnerability detection for fuzz testing.
    Blind fingerprint scheme against RSD attacks based on differential grid
    ZHAO Wei-guang YIN Zhong-hai ZHOU Yong-jun LIANG Shuang
    2011, 31(09):  2373-2377.  DOI: 10.3724/SP.J.1087.2011.02373
    Constructing a digital fingerprint embedding and acquisition scheme that resists rotation, scaling and distortion attacks can improve the anti-attack capability of digital fingerprints. The designed spatial-DCT (Discrete Cosine Transform) domain combinational embedding scheme provided the construction of differential characteristic points, on which the digital fingerprint embedding and acquisition algorithm was proposed, and an attack parameter recognition algorithm with high accuracy was presented. The simulation results show that the accuracy of the attack recognition algorithm can reach the sub-pixel level, and that the scheme can resist scaling attacks with a parameter larger than 0.5, any rotation attack with an angle less than 45° and any distortion attack with an angle less than 25°. In addition, the effect of the proposed scheme does not decrease with the increase of the rotation or distortion angle. The proposed scheme improves the robustness of the digital fingerprint and enables the digital fingerprint system to resist removal and CTP (cutting, trimming, pasting) attacks as well as RSD (rotation, scaling, distortion) attacks.
    Zero watermark algorithm for binary document images based on texture spectrum
    CHEN Xia WANG Xi-chang ZHANG Hua-ying LIU Jiang
    2011, 31(09):  2378-2381.  DOI: 10.3724/SP.J.1087.2011.02378
    Concerning the copyright protection of binary document images, a zero watermark algorithm was proposed. The algorithm constructed a texture image based on Local Binary Pattern (LBP), and then the zero watermark information was constructed from the texture spectrum histogram of the texture image. This method has better invisibility than other text image watermarking methods, and the original image information is not changed. Watermark attacks including image cropping, noise addition and rotation were tested. The experimental results show that the proposed zero watermark algorithm has good robustness: these attacks have little impact on the zero watermark information, and the algorithm is stable, with the lowest standard correlation above 0.85.
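    A small numpy sketch of the construction idea: compute an 8-neighbour LBP map of the document image, take its texture-spectrum histogram, and derive the zero-watermark bits by comparing each bin with the median count, so the original image is never modified. The bit length and the binarization rule are assumptions, not the paper's exact construction.

        import numpy as np

        def lbp_map(img):
            # 8-neighbour LBP codes (borders ignored); img is a 2-D grayscale/binary array.
            c = img[1:-1, 1:-1]
            shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros_like(c, dtype=np.int64)
            for bit, (dy, dx) in enumerate(shifts):
                nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                code = code | ((nb >= c).astype(np.int64) << bit)
            return code

        def zero_watermark(img, n_bins=256):
            # Texture-spectrum histogram -> one bit per bin (above/below the median count).
            hist, _ = np.histogram(lbp_map(np.asarray(img)), bins=n_bins, range=(0, 256))
            return (hist > np.median(hist)).astype(np.uint8)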
    Security protocol for sphere-sphere relation of different spatial coordinates
    WANG Tao-chun GU Fen-fei ZUO Kai-zhong
    2011, 31(09):  2382-2384.  DOI: 10.3724/SP.J.1087.2011.02382
    How to bring geometric objects from different spatial coordinate systems into the same coordinate system is a common problem when cooperating to complete certain work. However, since their own security and interests are involved, neither partner wants to disclose its secret input. Therefore, a security protocol for distance measurement between different spatial coordinate systems was first proposed and designed, and a security protocol for the sphere-sphere relation between different spatial coordinate systems was then developed. The correctness, security and efficiency of these two protocols were analyzed. With the private information of both sides protected, the problem of determining the relative position of two spheres is successfully solved by using the proposed protocols.
    Hardware acceleration based on IMPULSE C of ECC over GF(P)
    CUI Qiang-qiang JIN Tong-biao ZHU Yong
    2011, 31(09):  2385-2388.  DOI: 10.3724/SP.J.1087.2011.02385
    Elliptic Curve Cryptography (ECC) over GF(P) was studied deeply and programmed in IMPULSE C code. Firstly, a parallelization technique was proposed to speed up modular addition and modular doubling in standard projective coordinates, and further parallelization was obtained using the compiler while programming. Secondly, according to the characteristics of IMPULSE C, a rational partition of the ECC algorithm was made: the computationally intensive point multiplication was regarded as the hardware part, which was implemented and accelerated through a Field Programmable Gate Array (FPGA), while the ECC protocol was regarded as the software part and implemented on the CPU, and VHDL code was generated for the hardware part. The IMPULSE C code was simulated by CoDeveloper and the VHDL code was analyzed and synthesized by Xilinx ISE 10.1. On the basis of the previous work, the design was prototyped on a Xilinx Virtex-5 xc5vfx70t FPGA board. The experimental results indicate that the proposed method can perform a P-192 point multiplication within 2.9 ms at a 133 MHz clock, and shows better throughput compared with existing reported realizations.
    Database technology
    Range-based approach for multi-object convergence problem
    TAN Rong GU Jun-zhong LIN Xin CHEN Peng
    2011, 31(09):  2389-2394.  DOI: 10.3724/SP.J.1087.2011.02389
    In this paper, the concept of multi-object convergence problem was introduced. While some former query techniques could be used to deal with this problem, they are all point-based and unable to protect location privacy. Hence, a range-based spatial Skyline query algorithm named VRSSA was proposed. It utilized the Voronoi graph and supported the spatial anonymity techniques in Location-based Service (LBS). Furthermore, with respect to the changes of query conditions, another two algorithms, Dynamic Point Joining Algorithm (DPJA) and Dynamic Point Deleting Algorithm (DPDA), to dynamically update the query results were proposed so that heavy re-computation could be avoided. The experimental results show that the approaches could efficiently and effectively solve the problem.
    Design and implementation of index in main memory database system named SwiftMMDB
    ZHAO Yan-mei ZHENG Xin-fu XU Li-zhen
    2011, 31(09):  2395-2398.  DOI: 10.3724/SP.J.1087.2011.02395
    The T tree, combining the advantages of the AVL tree and the B tree, can organize index data efficiently, thus providing good storage efficiency and search performance for main memory databases. The design and implementation of the T tree index in the main memory database system SwiftMMDB, developed recently by the authors' research group, was presented. The insert and delete operations of the traditional T tree were improved through node splitting and node populating methods, and the number of rotations required for balancing the T tree was reduced. As a result, the retrieval efficiency and performance of the main memory database system are improved.
    ISMOTE algorithm for imbalanced data sets
    XU Dan-dan WANG Yong CAI Li-jun
    2011, 31(09):  2399-2401.  DOI: 10.3724/SP.J.1087.2011.02399
    In order to improve the classification performance on minority class instances in imbalanced data sets, a new algorithm named ISMOTE (Improved Synthetic Minority Over-sampling TEchnique) was proposed. ISMOTE improves the imbalanced distribution of data through random interpolation in the ball space formed by a minority class instance and its nearest neighbor. Experiments were carried out on real data sets. The experimental results show that ISMOTE has substantial advantages in prediction accuracy over SMOTE (Synthetic Minority Over-sampling Technique) and over directly classifying the imbalanced data, and it can effectively improve the performance of the classifier.
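    As a rough illustration of the over-sampling idea, the sketch below interpolates synthetic minority samples at random positions inside the ball spanned by a minority instance and one of its nearest minority neighbours; the exact ball definition and neighbour count are assumptions about ISMOTE rather than its published specification.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def ismote_oversample(X_min, n_new, k=5, seed=0):
            # X_min: minority-class samples, shape (n, d), with n > k; returns n_new synthetic samples.
            rng = np.random.default_rng(seed)
            nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
            _, idx = nn.kneighbors(X_min)            # idx[:, 0] is the point itself
            d = X_min.shape[1]
            synthetic = np.empty((n_new, d))
            for t in range(n_new):
                i = rng.integers(len(X_min))
                j = idx[i, rng.integers(1, k + 1)]   # one of the k nearest minority neighbours
                center = 0.5 * (X_min[i] + X_min[j])
                radius = 0.5 * np.linalg.norm(X_min[i] - X_min[j])
                direction = rng.normal(size=d)
                direction /= np.linalg.norm(direction) + 1e-12
                r = radius * rng.random() ** (1.0 / d)   # uniform sampling inside the ball
                synthetic[t] = center + r * direction
            return synthetic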
    Aggregate nearest neighbor query algorithm based on spatial distribution of query set
    XU Chao ZHANG Dong-zhan ZHENG Yan-hong RAO Li-li
    2011, 31(09):  2402-2404.  DOI: 10.3724/SP.J.1087.2011.02402
    Aggregate nearest neighbor query involves many query points, so it is more complicated than the traditional nearest neighbor query, and the distribution characteristic of the query set implies the region where its aggregate nearest neighbor exists. Taking full account of the distribution characteristic of the query set, a method that uses this characteristic to direct the search for the aggregate nearest neighbor was given. Based on this method, a new algorithm named AM was presented for aggregate nearest neighbor query. The AM algorithm can dynamically capture and use the distribution characteristic of the query set, which enables it to search data points in the right order and avoid unnecessary examination of data points. The experimental results show the efficiency of the algorithm.
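    For reference, a brute-force numpy definition of the aggregate nearest neighbour being sought (sum or max of distances to all query points); the AM algorithm's contribution is pruning this search using the spatial distribution of the query set, which the exhaustive version below does not attempt.

        import numpy as np

        def aggregate_nn(data, queries, agg="sum"):
            # data: (n, d) candidate points; queries: (m, d) query points.
            dist = np.linalg.norm(data[:, None, :] - queries[None, :, :], axis=-1)
            scores = dist.sum(axis=1) if agg == "sum" else dist.max(axis=1)
            best = int(np.argmin(scores))
            return best, scores[best]        # index of the aggregate NN and its score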
    Sequential patterns mining algorithm based on improved PrefixSpan
    GONG Wei LIU Pei-yu JIA Xian
    2011, 31(09):  2405-2407. 
    PrefixSpan, the classic sequential pattern mining algorithm, has the problem of producing a huge number of projected databases. To solve this problem, a sequential pattern mining algorithm named SPMIP based on an improved PrefixSpan was proposed. The algorithm reduces the scale of the projected databases and the time spent scanning them by adding a pruning step and reducing the generation and scanning of certain specific sequential patterns. In this way, the efficiency of the algorithm is raised while the needed sequential patterns are still obtained. The experimental results show that SPMIP is more efficient than PrefixSpan while the obtained sequential patterns are not affected.
    Personal recommendation algorithm in multidimensional and weighted social network
    ZHANG Hua-qing WANG Hong TENG Zhao-ming MA Xiao-hui
    2011, 31(09):  2408-2411.  DOI: 10.3724/SP.J.1087.2011.02408
    Personal recommendation is a crucial means of solving the problem of information overload on the Internet. On the basis of researching personal recommendation techniques and corresponding technologies, an application-driven personal recommendation algorithm for multidimensional and weighted social networks was proposed. First, the algorithm builds a multidimensional and weighted social network between users, then applies the complex network clustering method CPM (Clique Percolation Method) to find neighbor users, and finally makes recommendations on the grounds of the similarity between users. The experimental results show that a recommendation system for multidimensional networks applying this algorithm can achieve higher recall and precision than content-based and collaborative filtering recommendation systems, and the quality of personal recommendation is improved to some extent.
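    A toy Python sketch of the last two steps of this pipeline, using networkx's k-clique (CPM) communities to find a user's neighbours and ranking the items they like; the multidimensional relations and edge weights of the proposed network are collapsed into a plain graph here, and all names are illustrative.

        import networkx as nx
        from networkx.algorithms.community import k_clique_communities

        def recommend(user, edges, likes, k=3, top_n=5):
            # edges: iterable of (u, v) social ties; likes: dict user -> set of item ids.
            G = nx.Graph(edges)
            neighbours = set()
            for community in k_clique_communities(G, k):
                if user in community:
                    neighbours |= set(community) - {user}
            scores = {}
            for n in neighbours:
                for item in likes.get(n, set()) - likes.get(user, set()):
                    scores[item] = scores.get(item, 0) + 1
            return sorted(scores, key=scores.get, reverse=True)[:top_n]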
    MapReduce-based Bayesian anti-spam filtering mechanism
    TAO Yong-cai XUE Zheng-yuan SHI Lei
    2011, 31(09):  2412-2416.  DOI: 10.3724/SP.J.1087.2011.02412
    The Bayesian anti-spam filter has strong classification capability and high accuracy, but the mail training and learning at the early stage consume massive system and network resources and affect system efficiency. A MapReduce-based Bayesian anti-spam filtering mechanism was proposed, which first improved the traditional Bayesian filtering technique, and then optimized the mail training and learning by taking advantage of the massive data processing capability of MapReduce. The experimental results show that, compared with the traditional Bayesian filtering technique and the K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms, the MapReduce-based Bayesian anti-spam filtering mechanism performs better in recall, precision and accuracy, reduces the cost of mail learning and classification, and improves system efficiency.
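    The training phase can be pictured with the in-process map/reduce toy below, where mappers emit (token, class) counts from mail shards and a reducer aggregates them into the word-frequency tables a Bayesian filter needs; the function names and the sequential shuffle are illustrative, not Hadoop API calls.

        from collections import defaultdict

        def map_mail(text, label):
            # Emit one count per distinct token per mail, keyed by (token, label).
            for token in set(text.lower().split()):
                yield (token, label), 1

        def reduce_counts(pairs):
            counts = defaultdict(int)
            for key, value in pairs:
                counts[key] += value
            return counts

        def train(shards):
            # shards: iterable of (mail_text, label), label in {"spam", "ham"}.
            emitted = []
            for text, label in shards:           # map phase (parallel in a real cluster)
                emitted.extend(map_mail(text, label))
            return reduce_counts(emitted)        # reduce phase

        counts = train([("cheap pills buy now", "spam"), ("meeting at noon", "ham")])
        spam_total = sum(v for (tok, lab), v in counts.items() if lab == "spam")
        p_buy_spam = (counts[("buy", "spam")] + 1) / (spam_total + 2)   # Laplace smoothing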
    Extraction technology of blog comments based on functional semantic units
    FAN Chun-long XIA Jia XIAO Xin LV Hong-wei XU Lei
    2011, 31(09):  2417-2420.  DOI: 10.3724/SP.J.1087.2011.02417
    Blogs are an important kind of network information resource, and the extraction of their comments is basic work for public opinion analysis and related research. The current mainstream blog comment extraction algorithms were summarized, and the application of page structure in information extraction was described. Using the indicating phrases, such as "Home", that people rely on when understanding Web pages, a technology for extracting comment information was proposed that utilizes functional semantic units, which have clear semantics and functional indication. The technologies involved in the extraction process were detailed, including page structure linearization, functional semantic unit recognition, text discrimination and the comment extraction algorithm. Finally, the experimental results show that this technology achieves good results in extracting blog bodies and comments.
    Cloud pattern collaborative filtering recommender algorithm using user behavior correlation clustering
    WANG Xue-rong WAN Nian-hong
    2011, 31(09):  2421-2425.  DOI: 10.3724/SP.J.1087.2011.02421
    The traditional collaborative filtering recommender algorithms based on the Internet pattern study the E-commerce recommendation problem merely from one angle, and their recommendation quality is evidently not high. To improve recommendation efficiency and achieve scalability and utility of recommendation systems, a correlation clustering method was put forward by studying the user behavior similarity measure formula, the grade function and the correlation rule function based on the cloud pattern. To improve the corresponding algorithms, a cloud pattern collaborative filtering recommender algorithm based on user behavior correlation clustering was proposed. Finally, the improved algorithms were validated by local and global experiments using MovieLens and Alibaba cloud testing data. The experimental results show that the recommendation efficiency of the proposed algorithm is obviously higher than that of traditional algorithms, and it has stronger scalability and higher utility.
    Web news recommendation based on multiple topic tracking
    CHEN Hong CHEN Wei
    2011, 31(09):  2426-2428.  DOI: 10.3724/SP.J.1087.2011.02426
    A Web news recommendation method based on multiple topic tracking was proposed to improve the precision of recommendation. The proposed algorithm used multiple user profiles to represent user's interests in different topics, and dynamically updated user's profile to reflect the changing of user's interests. The central recommendation algorithm was implemented, and experiments on Reuters Corpus Volume 1 were carried out. The experimental results show that the proposed algorithms can effectively improve the precision of recommendation.
    Latent semantic features selection based on support vector machine
    LI Min-song DUAN Zhuo-hua
    2011, 31(09):  2429-2431.  DOI: 10.3724/SP.J.1087.2011.02429
    Latent Semantic Indexing (LSI) is an effective feature extraction method that can capture the underlying latent semantic structure between words in documents. However, the feature subspace selected by LSI is probably not the most appropriate for text classification, since the method orders the extracted features according to their variance without considering their classification capability. The high generalization ability of Support Vector Machine (SVM) makes it especially suitable for classifying high-dimensional data such as term-document matrices. Thus, a feature extraction method based on SVM was proposed to select the LSI features fit for classification: making use of the high generalization ability of SVM, the contribution of each LSI feature to classification was estimated from the parameters of the trained classifiers. The experimental results indicate that the method improves classification performance with a more compact representation, while requiring less training and testing time than LSI.
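    One plausible reading of this selection rule, sketched with scikit-learn under stated assumptions: project the term-document matrix onto LSI dimensions with truncated SVD, train a linear SVM, and keep the dimensions whose SVM weight magnitudes contribute most to the decision function instead of those with the largest variance.

        import numpy as np
        from sklearn.decomposition import TruncatedSVD
        from sklearn.svm import LinearSVC

        def select_lsi_features_by_svm(X_tfidf, y, n_lsi=100, n_keep=30):
            # X_tfidf: (documents x terms) matrix; y: class labels.
            svd = TruncatedSVD(n_components=n_lsi, random_state=0)
            Z = svd.fit_transform(X_tfidf)                  # LSI feature space
            clf = LinearSVC().fit(Z, y)
            contribution = np.abs(clf.coef_).sum(axis=0)    # per-dimension weight magnitude
            keep = np.argsort(contribution)[::-1][:n_keep]  # most discriminative LSI dims
            return Z[:, keep], keep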
    Ontology matching based on improved algorithm of similarity propagation
    ZHANG Yue LING Xing-hong YAO Wang-shu FU Yu-chen
    2011, 31(09):  2432-2435.  DOI: 10.3724/SP.J.1087.2011.02432
    In order to solve semantic heterogeneity, achieve interoperability between Web applications with different ontologies, and integrate data, an improved similarity propagation matching algorithm based on the RDF graph was proposed. First, it seeks initial similar seeds with WordNet. Then it expresses the ontology as RDF triples through preprocessing. According to the characteristics of the RDF graph, it extends similarity propagation to triples to find probable similar pairs, and then calculates similarities using the elements' features. The procedure of similarity propagation, finding probable similar pairs and calculating similarities is iterated until convergence. The experimental results show that the algorithm is effective and has better time performance.
    Computer software technology
    Reliability evaluation before the accomplishment of service-oriented architecture software
    LV Tang-qi HUANG Ning JIA Xiao-guang WANG Dong
    2011, 31(09):  2436-2439.  DOI: 10.3724/SP.J.1087.2011.02436
    A reliability evaluation method was proposed to evaluate the reliability of Service-Oriented Architecture (SOA) software before its realization. OWL-S (Ontology Web Language for Services), whose control-structure formal semantics were defined with Maude, was used to describe the information of software requirements and design. The operational profile of the software was built up with distribution functions. Then, the way in which the operational profile information and the software architecture take part in the reliability calculation was added in Maude. Finally, the reliability of the software could be obtained through rewriting with the support of the Maude system. In addition, the Software Reliability Predict Tool (SRPT) was developed based on this method. The impact of data flow, control flow, components, the operational profile and the architecture of the software on software reliability was considered. According to the design of the software, the method can estimate the reliability before the software is accomplished.
    Automatic generation of test data for extended finite state machine models based on Tabu search algorithm
    REN Jun ZHAO Rui-lian LI Zheng
    2011, 31(09):  2440-2443.  DOI: 10.3724/SP.J.1087.2011.02440
    Test case generation for Extended Finite State Machine (EFSM) models includes test path generation and test data generation; however, most current research into EFSM testing focuses on test path generation. In order to explore automatic test generation, a test data generation method oriented to the paths of EFSM models was proposed. A Tabu Search (TS) strategy was adopted to automatically generate test data, and the key factors that affect the performance of test data generation for EFSM models were analyzed. Moreover, the test generation efficiency was compared with that of a Genetic Algorithm (GA). The experimental results show that the proposed method is promising and effective, and it is obviously superior to the GA in test generation for EFSM models.
    Method of test data selection for Web services composition based on mutation
    ZHANG Mei-hua JIANG Ying
    2011, 31(09):  2444-2448.  DOI: 10.3724/SP.J.1087.2011.02444
    The amount of test data for the integration of Web services is quite huge, which influences the quality and efficiency of testing. A method of test data selection for Web services composition based on mutation technology was proposed. The paths were generated from the BPEL (Business Process Execution Language) document, and then interface mutation operators and path mutation operators were used to generate mutants. The effective test data were selected according to the ability of the initial test data to kill the mutants. The experimental results show that the method is effective and that the selected test data are fewer but more effective.
    Implementation of performance testing for TPC-DS benchmark
    CHEN Dan YE Xiao-jun SHI Lin
    2011, 31(09):  2449-2452.  DOI: 10.3724/SP.J.1087.2011.02449
    The data model, business model, execution schema and performance metric of TPC-DS benchmark for next generation Decision Support System (DSS) application performance evaluation were introduced. The implementation architecture and key technologies for a configurable TPC-DS performance testing tool were put forward, including configuration file, query execution control and data maintenance mechanism. By testing practices in different Database Management Systems (DBMSs), the configurability and usability of the proposed tool for implementation strategies were verified.
    Graphics and image technology
    New trend and challenges in 3D video coding
    DENG Zhi-pin JIA Ke-bin CHAN Yui-lam FU Chang-hong SIU Wan-chi
    2011, 31(09):  2453-2456.  DOI: 10.3724/SP.J.1087.2011.02453
    The key technologies of 3D video coding were introduced. Firstly, the developing directions and challenges of video-only format and depth-enhancement format 3D videos were elaborated. The depth estimation and view synthesis technologies were analyzed in detail. Subsequently, the process of standardizing the current 3DV/FTV standard of MPEG was summarized. The conclusion and prospect were given at last.
    H.264 scalable video coding inter-layer rate control
    YANG Jin SUN Yu SUN Shi-xin
    2011, 31(09):  2457-2460.  DOI: 10.3724/SP.J.1087.2011.02457
    An adaptive inter-layer rate control scheme was proposed for the H.264/AVC scalable extension. A switched model was put forward to predict the number of bits used for encoding an inter frame either from the previous frame of the current layer or from the current frame of the previous layer. First, a Rate-Complexity-Quantization (R-C-Q) model was extended to scalable video coding. Second, a Proportional-Integral-Derivative (PID) buffer controller was adopted to provide the inter-frame bit estimation according to the buffer state. Third, to achieve more accurate prediction when an abrupt change happens, the bit estimation was predicted from the actual bits of the current frame of the previous layer. Finally, the switched model was used to decide the bit estimation, and then the Quantization Parameter (QP) could be calculated according to the R-C-Q model. The simulation results demonstrate that the proposed algorithm outperforms the JVT-W043 rate control algorithm by providing a more accurate output bit rate for each layer, maintaining stable buffer fullness, reducing frame skipping and quality fluctuation, and improving the overall coding quality.
    New optimization algorithm for multi-view video coding
    YANG Zhong-hua DAI Sheng-kui
    2011, 31(09):  2461-2464.  DOI: 10.3724/SP.J.1087.2011.02461
    After analyzing and researching the performance and deficiencies of the TZSearch algorithm adopted in multi-view video coding, and concerning multi-view video sequences captured by parallel cameras, a new multi-view video coding optimization algorithm was put forward. The optimization covers three aspects, namely the selection of the search model, the search strategy and adaptive threshold setting, so as to reduce the computational complexity of the algorithm. Tests were conducted on the multi-view video coding test platform JMVC 4.0. The experimental results show that, while keeping the reconstructed video quality within tolerance and controlling the coding overhead, the optimized algorithm reduces the average encoding time by about 75% compared with the original algorithm and greatly improves the real-time performance of coding.
    Optimized management and interactive strategy for massive terrain data
    YIN Xiao-jing MU Xiao-dong XU Yi-wen CHEN Qi
    2011, 31(09):  2465-2467.  DOI: 10.3724/SP.J.1087.2011.02465
    Concerning the contradiction between the limited computer memory and the huge terrain data in terrain rendering, a tile-pyramid model was built and the optimum tile size was analyzed. An efficient quadtree tile-index method based on the relation of quadtree tiles and an idea of tile-based data compression were also put forward. The interactive strategy between the massive terrain data and the three-dimensional display was analyzed and optimized in terms of data pre-loading, a double-caching mechanism, multi-threading and memory. In combination with the classic Geometry Clipmaps algorithm, the terrain rendering was accomplished. The experimental results show excellent rendering effect and good real-time performance, and prove the feasibility and efficiency of the optimized strategy.
    SAR target feature extraction and recognition based on multi-wavelet sub-band weighted discrimination entropy ICA
    ZHANG Xin-zheng
    2011, 31(09):  2468-2472.  DOI: 10.3724/SP.J.1087.2011.02468
    Generally, single wavelet basis function and low frequency sub-band of the signal are used to perform Independent Component Analysis (ICA) in wavelet domain for Synthetic Aperture Radar (SAR) target feature extraction while high frequency sub-bands are ignored. For this defect, SAR images were decomposed utilizing multi-wavelet basis function. Then a new feature extraction method was proposed by multi-wavelet sub-band ICA according to sub-band weighted discrimination entropy criterion on the basis of the general wavelet-ICA algorithm. The SAR target recognition experiment was performed on the nearest criterion using features extracted by the new algorithm with MSTAR dataset. The experimental results show that the proposed algorithm is superior to the traditional wavelet-ICA algorithm.
    Aspect estimation method for SAR target based on Radon transform of leading edge
    HUANG Jia-xin LU Jun ZHAO Ling-jun
    2011, 31(09):  2473-2476.  DOI: 10.3724/SP.J.1087.2011.02473
    Using only the estimation of the target's leading edge will cause vertical and horizontal ambiguity. Therefore, a new Synthetic Aperture Radar (SAR) target aspect estimation method based on the Radon transform of the leading edge was proposed. The method eliminates the ambiguity between horizontal and vertical aspect estimates based on the length of the target region. Since it is difficult to separate a long leading edge from a short one, a discrimination rule for the target leading edge was introduced, so that the Radon-transform-based estimation algorithm solves a problem that many traditional algorithms try to settle. The experimental results on the MSTAR data prove the precision and robustness of the algorithm.
    Method for estimating building heights via registering catadioptric omnidirectional image and remote sensing image
    WANG Yuan-yuan CHEN Wang ZHANG Mao-jun WANG Wei XU Wei
    2011, 31(09):  2477-2480.  DOI: 10.3724/SP.J.1087.2011.02477
    A method was proposed for estimating building heights via registering catadioptric omni-directional image and remote sensing image, which can be applied to large-scale 3D city reconstruction. Firstly, the top edges of building roof were extracted from the catadioptric omni-directional image by using omnidirectional Hough transform. Then the catadioptric omni-directional image and the remote sensing image were registered based on the extracted top edges where the angle consistency nature of horizontal lines in catadioptric omni-directional imaging was used as evidence. Finally, according to the model of catadioptric omnidirectional camera, the building heights were estimated by using the registration results. The proposed method is simple and easy to implement. The experimental results show that the method is effective and the error of estimated building height is fairly small.
    Edge detection of high resolution remote sensing images based on morphology and wavelet phase filtering
    WANG Peng-wei NIU Rui-qing
    2011, 31(09):  2481-2484.  DOI: 10.3724/SP.J.1087.2011.02481
    In order to capture the edge information of high-resolution remote sensing images more effectively, a new edge extraction method was proposed. Firstly, the main information was collected by the Principal Component Analysis (PCA) transform. Secondly, the information was decomposed with the symletsA wavelet while the image at each scale was processed with morphological operators. Finally, the edges were enhanced by applying correlation filtering with a wavelet phase filtering algorithm to the images at the same scale, and the edge information was extracted by segmenting the image with the OTSU algorithm. The results show that, compared with existing algorithms, the image edges are located more accurately and the edge detection effect is more evident with this method.
    Color image edge detection with region homogeneous measure
    ZHENG Mei-zhu ZHAO Jing-xiu
    2011, 31(09):  2485-2488.  DOI: 10.3724/SP.J.1087.2011.02485
    Color image processing and analysis were implemented in the HSI color space, concerning the difficulty of distinguishing color similarity effectively in the RGB color space. Firstly, the chromatic aberration components of hue, saturation and intensity were calculated. Then, by introducing fuzzy entropy, a group of information measures based on fuzzy entropy was constructed to describe the natural characteristics of image edges quantitatively. Four component vectors were obtained from the trained image samples, a BP neural network was trained with eigenvectors of these four component vectors, and finally the trained BP neural network was used directly for edge detection. Both the architecture and the training of the BP neural network are simple. Moreover, the proposed edge detector needs no threshold, unlike conventional edge detection, and has a strong capability of retaining details.
    Improved object tracking algorithm based on particle filter and Galerkin's method
    LIANG Nan GAO Shi-wei GUO Lei WANG Ying
    2011, 31(09):  2489-2492.  DOI: 10.3724/SP.J.1087.2011.02489
    In the particle filter framework, the estimation accuracy strongly depends on the choice of proposal distribution. The traditional particle filter uses the system transition probability as the proposal distribution without considering the new observation information; therefore, it cannot give accurate estimates. A new tracking framework applying the particle filter algorithm was proposed, which uses Galerkin's method to construct the proposal distribution. This proposal distribution enhances the estimation accuracy compared with traditional filters. In the proposed framework, the color model and the shape model were fused adaptively, and a new model update scheme was also proposed to improve the stability of object tracking. The experimental results demonstrate the effectiveness of the proposed algorithm.
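    A generic bootstrap-style particle filter step is sketched below to show where the proposal distribution enters; the paper's contribution is replacing the transition-prior proposal with one constructed by Galerkin's method (and fusing color/shape cues), which is not reproduced here, so transition and observe_loglik are user-supplied assumptions.

        import numpy as np

        def particle_filter_step(particles, weights, transition, observe_loglik, rng):
            # particles: (N, d) states; weights: (N,) normalized importance weights.
            # Bootstrap proposal: sample each particle from the transition model.
            proposed = np.array([transition(p, rng) for p in particles])
            logw = np.log(weights + 1e-300) + np.array([observe_loglik(p) for p in proposed])
            w = np.exp(logw - logw.max())
            w /= w.sum()
            # Systematic resampling when the effective sample size drops too low.
            if 1.0 / np.sum(w ** 2) < 0.5 * len(w):
                positions = (rng.random() + np.arange(len(w))) / len(w)
                idx = np.minimum(np.searchsorted(np.cumsum(w), positions), len(w) - 1)
                proposed, w = proposed[idx], np.full(len(w), 1.0 / len(w))
            return proposed, w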
    Tracking algorithm of interested moving target under complex background
    FENG Xiao-min GUO Ji-chang ZHANG Yan
    2011, 31(09):  2493-2496.  DOI: 10.3724/SP.J.1087.2011.02493
    Concerning the problem of inaccurate tracking of an interested moving target under a complex background, a robust tracking algorithm based on adaptive multi-feature fusion was proposed. First, the algorithm obtains the weighted color distribution model of the interested moving target in the HSV color space. Then invariant moments are used to eliminate the interference of similar background colors and illumination changes. The algorithm fuses the two features in the particle filter by adjusting their weights and updating the particle weights adaptively; thus, it can track the moving target accurately and stably. The experimental results show that the algorithm can accurately track an interested moving target under complex background conditions such as translation, posture variation, occlusion of the moving object, varying illumination and the interference of similar background colors. The algorithm has strong robustness against background interference.
    New codebook model based on HSV color space
    FANG Xian-yong HE Biao LUO Bin
    2011, 31(09):  2497-2501.  DOI: 10.3724/SP.J.1087.2011.02497
    A new codebook model was proposed based on HSV color space to eliminate the effect of complex dynamic background in the moving object detection. The merits of this new model lie in three aspects: 1) HSV color space was introduced to effectively distinguish foreground and background for false targets removal; 2) a 4-tuple codeword was proposed for fast codebook training and small storage in comparison with the traditional 9-tuple codeword; 3) a new codebook learning and updating scheme was designed for easy and fast codebook training and detection. A global quantitative evaluation method named recall-precision curve was also proposed for the video sequence. Qualitative and quantitative experiments demonstrate that the proposed codebook model can effectively detect moving object under complex dynamic background.
    Embedded face recognition system based on Gabor uncertainty
    YE Ji-hua WANG Shi-min GUO Fan YU Min
    2011, 31(09):  2502-2505.  DOI: 10.3724/SP.J.1087.2011.02502
    Gabor uncertainty feature fusion can solve the problem that multiscale Gabor features are unsuitable for ARM platforms in embedded face recognition systems because of their huge data volume and dimensionality. Multiscale Gabor features were first extracted, then the uncertainty weights were calculated, and finally the multiscale Gabor features were fused into one. The embedded face recognition system detects faces by using Haar-like features and reduces dimensionality by using the 2-Dimensional Principal Component Analysis (2DPCA) algorithm. Based on the EELiod 270 development board, the face recognition performance was tested on ORL and Yale. Comparative results with other face recognition algorithms show a significant decline in the amount of arithmetic operations, and good real-time recognition is obtained while the recognition rate is ensured.
    Blurred direction identification based on local standard deviation and directional derivation
    FAN Hai-ju ZHANG Ai-li FENG Nai-qin
    2011, 31(09):  2506-2508.  DOI: 10.3724/SP.J.1087.2011.02506
    Concerning the shortcomings of large recognition error and poor real-time performance of the minimum differential directional algorithm, a method combining local standard deviation and directional derivative was put forward to identify the blurred direction of a motion-blurred image. Firstly, the motion-blurred image was filtered by local standard deviation to enhance the texture details in the blurred direction. Secondly, the minimum directional derivative summation was obtained by bilinear interpolation, and its corresponding direction is the blurred direction. Meanwhile, an inherent law was found by analyzing the minimum directional derivative summation curve; based on this law, a method of halving the search range was presented to search for the minimum, which reduces the number of searches. The simulation results show that this algorithm not only has high precision and strong immunity, but also meets the requirement of real-time calculation.
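    The core search can be illustrated with the numpy/scipy sketch below, which sums |I(x, y) - I(x + cos theta, y + sin theta)| via bilinear interpolation over candidate angles and returns the angle of the minimum; the local-standard-deviation pre-filter and the halved-range search of the proposed method are omitted.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def directional_derivative_sum(img, theta_deg):
            h, w = img.shape
            yy, xx = np.mgrid[1:h - 1, 1:w - 1].astype(float)
            dx, dy = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
            # Bilinear interpolation of the image shifted one pixel along the candidate direction.
            shifted = map_coordinates(img.astype(float), [yy + dy, xx + dx], order=1)
            return np.abs(img[1:h - 1, 1:w - 1] - shifted).sum()

        def estimate_blur_direction(img, step=1.0):
            angles = np.arange(0.0, 180.0, step)
            sums = [directional_derivative_sum(img, a) for a in angles]
            return angles[int(np.argmin(sums))]   # blurred direction in degrees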
    Image restoration algorithm using APEX method based on dark channel prior
    ZHANG Yong WANG Hao-xian LI Fang MAO Xing-peng PAN Wei-min LIANG WEI
    2011, 31(09):  2509-2511.  DOI: 10.3724/SP.J.1087.2011.02509
    In order to meet the demands for both availability and processing speed, an image restoration algorithm based on the Approximate Point Spread Function Examining (APEX) method commonly used in image deblurring was proposed with reference to dark channel prior estimation. Because images of different sizes under different weather conditions have different APEX parameters, the APEX parameter value was adjusted dynamically according to the degree of sandstorm and fog. Furthermore, unlike multi-image methods, the proposed algorithm requires only a single image as input. A color constancy algorithm was used to balance the source color components and further enhance the visual effect of the images. The experimental results show that the proposed algorithm is effective in restoring images.
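    For reference, the dark channel prior that the abstract builds on is straightforward to compute. The Python sketch below shows the standard dark-channel estimate and the classical haze-removal step (not the paper's APEX-based restoration); the patch size, weighting constants and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over the RGB channels and a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def atmospheric_light(img, dark, top_fraction=0.001):
    """Average colour of the brightest dark-channel pixels."""
    n = max(1, int(dark.size * top_fraction))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Classic dark-channel restoration: I = J*t + A*(1 - t), so J = (I - A)/t + A."""
    img = img.astype(float) / 255.0
    dark = dark_channel(img, patch)
    A = atmospheric_light(img, dark)
    t = np.clip(1.0 - omega * dark_channel(img / A, patch), t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```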
    Image denoising model in combination with partial differential equation and median filtering
    WAN Shan LI Lei-min HUANG Yu-qing
    2011, 31(09):  2512-2514.  DOI: 10.3724/SP.J.1087.2011.02512
    Abstract ( )   PDF (522KB) ( )  
    Related Articles | Metrics
    Denoising models based on Partial Differential Equations (PDE) cannot eliminate impulse noise, and low-order PDE produces a blocky effect. In order to solve these problems, a denoising model combining PDE and adaptive median filtering was proposed. Based on analysis of the image gradient, the model used the second-order model to denoise regions with obvious gradient change, and the fourth-order model to denoise smooth regions with tiny gradient change. The impulse noise region was localized by exploiting the fact that the gradient of impulse noise is far larger than the gradient of edges, and adaptive median filtering was applied in this region to eliminate the impulse noise. This method can eliminate impulse noise and protect image edges effectively; it also overcomes the blocky effect and improves denoising efficiency. The experiments prove the validity of the model.
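    A toy sketch of the switching idea, choosing between second-order diffusion, fourth-order diffusion and median filtering according to the local gradient, is shown below. The thresholds, the Perona-Malik and You-Kaveh style update terms and the iteration settings are illustrative assumptions rather than the model described in the paper.

```python
import numpy as np
from scipy.ndimage import laplace, median_filter

def hybrid_denoise(img, iters=10, dt=0.1, k=15.0, impulse_thresh=80.0, edge_thresh=10.0):
    """Toy hybrid scheme; the thresholds and update terms are illustrative only."""
    u = img.astype(float)
    for _ in range(iters):
        uy, ux = np.gradient(u)
        grad = np.hypot(ux, uy)

        impulse = grad > impulse_thresh          # impulse noise: gradient far larger than edge gradients
        edges = (grad > edge_thresh) & ~impulse  # region with obvious gradient change
        smooth = ~impulse & ~edges               # smooth region

        # second-order (Perona-Malik) diffusion: du/dt = div(c(|grad u|) * grad u)
        c = 1.0 / (1.0 + (grad / k) ** 2)
        second = np.gradient(c * ux, axis=1) + np.gradient(c * uy, axis=0)

        # fourth-order (You-Kaveh style) diffusion to avoid the blocky effect in smooth areas
        fourth = -laplace(laplace(u))

        u_new = u.copy()
        u_new[edges] += dt * second[edges]
        u_new[smooth] += dt * fourth[smooth]
        u_new[impulse] = median_filter(u, size=3)[impulse]
        u = u_new
    return u
```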
    Image denoising method using inter-scale and intra-scale dependencies of wavelet coefficients
    CAI Zheng TAO Shao-hua
    2011, 31(09):  2515-2517.  DOI: 10.3724/SP.J.1087.2011.02515
    Abstract ( )   PDF (461KB) ( )  
    Related Articles | Metrics
    In order to retain the edge information and remove the image noise as much as possible at the same time, a wavelet shrinkage algorithm was proposed, which took the inter-scale and intra-scale dependencies of wavelet coefficients into account. The proposed method used the correlation of wavelet coefficients and the average magnitudes of the surrounding wavelet coefficients within a local window to describe the inter-scale and intra-scale dependencies of wavelet coefficients, respectively. Thus, the image information and noise were identified. Meanwhile, a new threshold function was proposed to shrink wavelet coefficients. The experimental results show that the proposed denoising method can achieve high Peak Signal-to-Noise Ratio (PSNR).
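    The combination of inter-scale (parent coefficient) and intra-scale (local window) information can be sketched with PyWavelets as below. The shrinkage rule used here is an illustrative soft-threshold-like gain, not the threshold function proposed in the paper, and the wavelet, decomposition level and window size are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def denoise(img, wavelet='db4', level=3, win=3):
    """Illustrative shrinkage combining a parent (inter-scale) term with a
    local-window (intra-scale) magnitude estimate."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # robust noise estimate from the finest diagonal subband
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    new_coeffs = [coeffs[0]]
    for lvl in range(1, len(coeffs)):
        parent_bands = coeffs[lvl - 1] if lvl > 1 else (None, None, None)
        shrunk = []
        for b, band in enumerate(coeffs[lvl]):
            # intra-scale: average magnitude of surrounding coefficients in a small window
            signal = uniform_filter(np.abs(band), size=win)
            if parent_bands[b] is not None:
                # inter-scale: magnitude of the (upsampled) parent coefficient
                parent = np.repeat(np.repeat(np.abs(parent_bands[b]), 2, axis=0), 2, axis=1)
                parent = parent[:band.shape[0], :band.shape[1]]
                signal = np.sqrt(signal ** 2 + parent ** 2)
            gain = np.maximum(signal - sigma, 0.0) / (signal + 1e-12)   # soft-shrink-like gain
            shrunk.append(band * gain)
        new_coeffs.append(tuple(shrunk))
    return pywt.waverec2(new_coeffs, wavelet)
```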
    Image annotation in reference to visual attention weight and word correlation
    CHEN Zhi-hong FENG Zhi-yong JIA Yu
    2011, 31(09):  2518-2521.  DOI: 10.3724/SP.J.1087.2011.02518
    Abstract ( )   PDF (865KB) ( )  
    Related Articles | Metrics
    In order to overcome the semantic gap between low-level features and high-level semantic concepts of images, an image annotation approach based on visual attention weight and word correlation was proposed. When understanding an image, people pay more attention to the focus region, which can be extracted by computing the visual attention weight of image regions. Since the annotation word of the focus region is relevant to the annotation words of the other image regions, a proper annotation vector can be chosen by word correlation. The experimental results show that the new method can improve the precision of image annotation.
    Artificial intelligence
    Social emotional optimization algorithm based on quadratic interpolation method
    WU Jian-na CUI Zhi-hua LIU Jing
    2011, 31(09):  2522-2525.  DOI: 10.3724/SP.J.1087.2011.02522
    Abstract ( )   PDF (708KB) ( )  
    Related Articles | Metrics
    Social Emotional Optimization Algorithm (SEOA) is a new swarm intelligence population-based optimization algorithm that simulates human social behaviors. Individual decision-making ability and individual emotion, which affect the optimization results, are taken into account, so its diversity is much better than that of common swarm intelligence algorithms; however, its local search capability needs to be improved. The quadratic interpolation method behaves well in local search, so introducing it into SEOA improves the search capability. Tests of the optimization performance on benchmark functions prove that the local search ability can be improved by introducing the quadratic interpolation method into SEOA, thus increasing the global search capability.
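    The quadratic interpolation step that strengthens local search fits a parabola through three candidate solutions and jumps to its vertex. The Python sketch below shows that step in isolation, applied per dimension with the three objective values; it is not the full SEOA, and the safeguard against a degenerate parabola and the toy sphere function are assumptions.

```python
import numpy as np

def quadratic_interpolation(xa, xb, xc, f):
    """Per-dimension vertex of the parabola through three candidate solutions
    (xa, xb, xc are numpy vectors; f returns a scalar objective value)."""
    fa, fb, fc = f(xa), f(xb), f(xc)
    num = (xb**2 - xc**2) * fa + (xc**2 - xa**2) * fb + (xa**2 - xb**2) * fc
    den = (xb - xc) * fa + (xc - xa) * fb + (xa - xb) * fc
    ok = np.abs(den) > 1e-12
    vertex = np.where(ok, 0.5 * num / np.where(ok, den, 1.0), xb)
    return vertex if f(vertex) < fb else xb   # keep the better of vertex and current point

# toy usage on the sphere function with three random candidates
rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))
a, b, c = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
print(quadratic_interpolation(a, b, c, sphere))
```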
    Group argumentation model based on IBIS and Toulmin's argument schema
    CHEN Jun-liang CHEN Chao JIANG Xin ZHANG Zhen
    2011, 31(09):  2526-2529.  DOI: 10.3724/SP.J.1087.2011.02526
    Abstract ( )   PDF (644KB) ( )  
    Related Articles | Metrics
    An argumentation model is the theoretical basis for establishing a group argumentation environment. Based on the Issue-Based Information System (IBIS) model and Toulmin's argument schema, a group argumentation model was proposed, which is able to evaluate argumentative utterances. With this model, the group argumentative information can be structured as a graph consisting of utterance nodes and semantic links. A method of evaluating utterance nodes based on the Language Weighted Aggregation (LWA) operator and node reduction was proposed. A group argumentation on the issue of system architecture design was illustrated as an example to show the usability and effectiveness of the proposed model.
    Incremental learning method of Bayesian classification combined with feedback information
    XU Ming-ying WEI Yong-qing ZHAO Jing
    2011, 31(09):  2530-2533.  DOI: 10.3724/SP.J.1087.2011.02530
    Abstract ( )   PDF (634KB) ( )  
    Related Articles | Metrics
    Owing to the insufficiency of training sets, the performance of the initial classifier is not satisfactory and cannot track users' needs dynamically. Concerning this defect, an incremental learning method of Bayesian classification combined with feedback information was proposed. To reduce the redundancy between features effectively and improve the representative ability of the feedback feature subset, an improved feature selection method based on Genetic Algorithm (GA) was used to choose the best features from the feedback sets to amend the classifier. The experimental results show that the algorithm improves classification significantly and has good overall stability.
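    The incremental part, folding user-confirmed feedback documents into the classifier's counts without retraining from scratch, can be sketched as follows. The GA-based feature selection is omitted, and the class names, example documents and Laplace smoothing are illustrative assumptions.

```python
import math
from collections import defaultdict

class IncrementalNB:
    """Multinomial Naive Bayes whose counts can be amended with feedback documents."""
    def __init__(self):
        self.class_docs = defaultdict(int)                        # documents per class
        self.word_counts = defaultdict(lambda: defaultdict(int))  # class -> word -> count
        self.class_words = defaultdict(int)                       # total words per class
        self.vocab = set()

    def update(self, docs, label):
        """Fold new (initial or feedback) documents of a confirmed class into the counts."""
        for words in docs:
            self.class_docs[label] += 1
            for w in words:
                self.word_counts[label][w] += 1
                self.class_words[label] += 1
                self.vocab.add(w)

    def predict(self, words):
        total = sum(self.class_docs.values())
        best, best_lp = None, -math.inf
        for c in self.class_docs:
            lp = math.log(self.class_docs[c] / total)
            for w in words:   # Laplace smoothing
                lp += math.log((self.word_counts[c][w] + 1) / (self.class_words[c] + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = IncrementalNB()
nb.update([["cheap", "pills", "offer"]], "spam")
nb.update([["meeting", "schedule"], ["offer", "meeting"]], "ham")
nb.update([["cheap", "offer", "pills"]], "spam")   # feedback folded in later
print(nb.predict(["cheap", "offer"]))
```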
    Clustering based on energy diffusing model of sample points
    ZENG Zhao-xian ZHANG Mao-jun WANG Wei XIONG Zhi-hui
    2011, 31(09):  2534-2537.  DOI: 10.3724/SP.J.1087.2011.02534
    Abstract ( )   PDF (653KB) ( )  
    Related Articles | Metrics
    Clustering is a complex issue. Although there is a variety of clustering methods, many shortcomings still exist, such as slow convergence, unsatisfactory clustering results, and the need for user-provided parameters. To solve these problems, a new clustering idea was put forward. Firstly, each cluster was assumed to have a cluster center. Secondly, each sample point was considered as an energy source, radiating energy into the clustering space according to a reasonable physical or mathematical diffusing model, and cluster centers were confirmed by the total energy that each point gained. Finally, sample points could be easily clustered to their cluster centers. The experimental results demonstrate that this clustering approach converges fast, is easily extensible, and is suitable for natural clustering; additionally, it can obtain the same results as many classic clustering methods.
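    A minimal sketch of the energy-diffusing idea is given below: every sample radiates Gaussian energy, the highest-energy and mutually distant samples are taken as cluster centers, and the remaining points join their nearest center. The Gaussian diffusing model, the bandwidth and the separation heuristic are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def energy_clustering(X, k, bandwidth=1.0):
    """Each sample radiates Gaussian energy; the k highest-energy, mutually distant
    samples become cluster centers, and every point joins its nearest center."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    energy = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)

    centers = []
    for i in np.argsort(-energy):                          # highest energy first
        if all(d2[i, c] > (2 * bandwidth) ** 2 for c in centers):
            centers.append(i)
        if len(centers) == k:
            break

    labels = np.argmin(d2[:, centers], axis=1)
    return np.array(centers), labels

# toy usage with two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, labels = energy_clustering(X, k=2)
print(X[centers])
```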
    Kernel based intuitionistic fuzzy clustering algorithm
    FAN Cheng-li LEI Ying-jie
    2011, 31(09):  2538-2541.  DOI: 10.3724/SP.J.1087.2011.02538
    Abstract ( )   PDF (554KB) ( )  
    Related Articles | Metrics
    A kernel based intuitionistic fuzzy clustering algorithm named IFKCM was proposed on the basis of analyzing the deficiencies of existing clustering algorithms. By introducing a Gaussian kernel, the new algorithm mapped intuitionistic fuzzy sets from their original space to a high-dimensional kernel space, so as to obtain shorter computational time and more accurate results. Besides, it is robust to noise because it improves the constraint conditions used in the existing intuitionistic fuzzy clustering algorithm. Compared with the traditional algorithm, the proposed algorithm makes significant progress, and the experimental results prove its effectiveness.
    Classification method based on large margin and fuzzy kernel hyper-ball
    WANG Juan HU Wen-jun WANG Shi-tong
    2011, 31(09):  2542-2545.  DOI: 10.3724/SP.J.1087.2011.02542
    Abstract ( )   PDF (562KB) ( )  
    Related Articles | Metrics
    In order to improve multiclass classification accuracy, an algorithm called Large Margin and Fuzzy Kernel Hyper-Ball (LMFKHB) was proposed. First, the sample datasets were mapped into a high-dimensional feature space through a kernel function, and all decision functions were obtained using the proposed method. Meanwhile, a fuzzy membership function was introduced to resolve the misclassification of samples falling in the dead zone, which enhanced flexibility and improved classification accuracy. The experiments on artificial and real data demonstrate the effectiveness of the method.
    Clustering based on quantum genetic spectral clustering algorithm
    JIANG Yong TAN Huai-liang LI Guang-wen
    2011, 31(09):  2546-2550.  DOI: 10.3724/SP.J.1087.2011.02546
    Abstract ( )   PDF (878KB) ( )  
    Related Articles | Metrics
    Spectral clustering is difficult to apply to large datasets because of its large storage requirement and high time complexity. Hence, a multi-segment, upward-and-downward double-direction shrinking QR algorithm was adopted on the eigenvectors corresponding to the eigenvalues to achieve dimensionality reduction, and a new quantum genetic spectral clustering algorithm was proposed to cluster the sample points in the mapped space. The mapping yields a compact, low-dimensional input for quantum genetic spectral clustering, and the quantum genetic spectral clustering algorithm, characterized by rapid convergence to the global optimum and minimal sensitivity to initialization, can obtain good clustering results. The experimental results show that the proposed method is superior to the spectral clustering algorithm, K-means and the NJW algorithm in convergence and stability, and achieves a better overall optimal solution.
    Typical applications
    Naive Bayesian text classification algorithm in cloud computing environment
    JIANG Xiao-ping LI Cheng-hua XIANG Wen ZHANG Xin-fang
    2011, 31(09):  2551-2554.  DOI: 10.3724/SP.J.1087.2011.02551
    Abstract ( )   PDF (667KB) ( )  
    Related Articles | Metrics
    The major procedures of text classification such as uniform text format expression, training, testing and classifying based on Naive Bayesian text classification algorithm were implemented using MapReduce programming mode. The experiments were given in Hadoop cloud computing environment. The experimental results indicate basically linear speedup with an increasing number of node computers. A recall rate of 86% was achieved when classifying Chinese Web pages.
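    The training phase of Naive Bayes maps naturally onto MapReduce: mappers emit (class, word) count pairs and reducers sum them. The sketch below mimics that decomposition in plain Python rather than Hadoop code; the key layout and the in-process shuffle are illustrative assumptions.

```python
from itertools import groupby

# Mapper: one (class, word) count pair per token, plus a document-count key per class.
def mapper(doc):
    label, words = doc
    yield (("__doc__", label), 1)
    for w in words:
        yield ((label, w), 1)

# Reducer: sum the counts that share a key (Hadoop would handle the shuffle/sort).
def reducer(key, values):
    yield key, sum(values)

def run_job(docs):
    """Tiny in-process stand-in for a MapReduce run: map, shuffle, reduce."""
    intermediate = [kv for doc in docs for kv in mapper(doc)]
    intermediate.sort(key=lambda kv: kv[0])
    result = {}
    for key, group in groupby(intermediate, key=lambda kv: kv[0]):
        for k, v in reducer(key, (v for _, v in group)):
            result[k] = v
    return result

docs = [("spam", ["cheap", "pills"]), ("ham", ["project", "meeting"]),
        ("spam", ["cheap", "offer"])]
counts = run_job(docs)
print(counts[("spam", "cheap")])   # -> 2
```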
    Hybrid multi-objective algorithm based on probabilistic model
    LIU Yang XIAO Bao-qiu DAI Guang-ming
    2011, 31(09):  2555-2558.  DOI: 10.3724/SP.J.1087.2011.02555
    Abstract ( )   PDF (702KB) ( )  
    Related Articles | Metrics
    The traditional multi-objective algorithm NSGA-Ⅱ and the model-based multi-objective algorithm RM-MEDA were analyzed, and the deficiencies of the two algorithms were pointed out. On this basis, a hybrid multi-objective algorithm based on a probabilistic model was proposed, and the corresponding model metric for mixing the two algorithms was designed. The proposed algorithm takes advantage of both algorithms. It was compared with NSGA-Ⅱ and RM-MEDA on 10 test functions, and the experimental results show that it has good global convergence and diversity.
    Multi-project scheduling based on simulated harmonic oscillator algorithm
    NI Lin DUAN Chao ZHONG Hui
    2011, 31(09):  2559-2562.  DOI: 10.3724/SP.J.1087.2011.02559
    Abstract ( )   PDF (568KB) ( )  
    Related Articles | Metrics
    For the Resource-Constrained Multi-Project Scheduling Problem (RCMPSP), a simulated harmonic oscillator algorithm was introduced. By simulating the change of the potential energy state in a harmonic vibration system, the classical harmonic vibration stage was transformed into a quantum harmonic vibration stage, so that the algorithm moved from global search to local search; this two-stage search guarantees both convergence accuracy and search efficiency. Combined with an order-based method, the serial schedule generation scheme and a multi-project task list, the obtained scheduling scheme can satisfy the precedence-relation constraints of the project schedule. Tests on standard test functions indicate that the algorithm has high search efficiency and accuracy. Finally, three groups of multi-project scheduling examples were given.
    General computing resource sharing environments based on .NET over the Internet
    XIE Yan-hong
    2011, 31(09):  2563-2566.  DOI: 10.3724/SP.J.1087.2011.02563
    Abstract ( )   PDF (627KB) ( )  
    Related Articles | Metrics
    In order to aggregate and utilize network resources efficiently and conveniently, a .NET based approach for general computing resource sharing environments over the Internet, called GCRSE, was proposed. Using a node function role approach, GCRSE is composed of three kinds of role peers, namely servers, volunteers and clients. With .NET Web services, GCRSE can submit, execute and transport the tasks and subtasks of parallel applications across the Internet. Both the master-slave and divide-and-conquer styles of parallel programming are supported on GCRSE. A heartbeat message mechanism and a subtask-oriented fault tolerance policy are also used to achieve reliability. The results obtained from performance analysis show that the throughput and speedup of GCRSE are good, and that it is a feasible and efficient approach which provides a new way for computing resource sharing over the Internet.
    Temperature-aware thread scheduling algorithm for multi-core processors
    QU Shuang-xi ZHANG Min-xuan LIU Tao LIU Guang-hui
    2011, 31(09):  2567-2570.  DOI: 10.3724/SP.J.1087.2011.02567
    Abstract ( )   PDF (596KB) ( )  
    Related Articles | Metrics
    Multi-core microprocessors consume more power, which increases the number of hot spots and makes the temperature distribution uneven, creating a greater negative impact on performance. Concerning this problem, a temperature-aware thread scheduling algorithm was proposed to reduce thermal emergencies and improve throughput, and it was implemented in the Linux kernel on an Intel quad-core system. The experimental results show that this scheduling algorithm reduces thermal emergencies by 9.6%-78.5% under various combinations of workloads, and improves throughput by 5.2% on average and up to 9.7% compared with the standard Linux scheduler.
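    The scheduling idea can be illustrated outside the kernel with a toy heuristic that pairs the most power-hungry threads with the coolest cores and avoids cores above a temperature limit. The threshold, the per-thread heat estimates and the round-robin assignment below are assumptions for illustration, not the algorithm implemented in the Linux kernel by the authors.

```python
def temperature_aware_assignment(core_temps, thread_heat, hot_limit=80.0):
    """Toy heuristic: pair the hottest (most power-hungry) threads with the coolest cores,
    and keep cores above hot_limit idle for one scheduling epoch if possible."""
    cool_cores = sorted(range(len(core_temps)), key=lambda c: core_temps[c])
    usable = [c for c in cool_cores if core_temps[c] < hot_limit] or cool_cores
    hot_threads = sorted(range(len(thread_heat)), key=lambda t: -thread_heat[t])

    assignment = {}
    for i, t in enumerate(hot_threads):
        assignment[t] = usable[i % len(usable)]   # round-robin over the coolest cores
    return assignment

# usage: core 2 is near the limit, so the heaviest threads land on cores 1 and 3 first
print(temperature_aware_assignment([65.0, 55.0, 83.0, 58.0], [9.5, 3.1, 7.2, 1.0]))
```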
    Design and implementation of conjugate gradient iterative solver on FPGA
    SONG Qing-zeng GU Jun-hua
    2011, 31(09):  2571-2573.  DOI: 10.3724/SP.J.1087.2011.02571
    Abstract ( )   PDF (631KB) ( )  
    Related Articles | Metrics
    To overcome the inefficiency and poor real-time capability of software Conjugate Gradient (CG) iterative solvers, a CG iterative solver was designed and implemented on a Field Programmable Gate Array (FPGA) platform. The design was based on hardware/software co-design: a hardware CG co-processor implemented the computation-intensive code with simple control to accelerate the system, while the control-intensive code with little computation was still executed on the microprocessor. A row-interleaved data flow kept the system from stalling and improved performance. The experimental results illustrate that the hardware CG iterative solver achieves a speedup of about 5.7 times over the software version of the same algorithm.
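    For reference, the computation that such a co-processor accelerates is the standard conjugate gradient iteration; a plain software version in Python/NumPy is sketched below. It illustrates the algorithm only, not the FPGA design or the row-interleaved data flow.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Standard CG for a symmetric positive definite system A x = b."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)  # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # conjugate direction update
        rs_old = rs_new
    return x

# quick check on a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # close to [0.0909, 0.6364]
```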
    Path planning of unmanned aerial vehicle
    CHEN Hai-han LIU Yin DU Yun-lei
    2011, 31(09):  2574-2576.  DOI: 10.3724/SP.J.1087.2011.02574
    Abstract ( )   PDF (495KB) ( )  
    Related Articles | Metrics
    Path planning aims to make use of terrain, enemy threats and other information to plan the penetration trajectory with the largest survival probability for an Unmanned Aerial Vehicle (UAV). After analyzing the simulation needs of path planning, the path planning of UAV was studied. Firstly, a Voronoi diagram was constructed from the battlefield environment full of threats; the Voronoi diagram yields the optimal routes to travel among a set of threat source points while avoiding the threats. Then, the Dijkstra algorithm was used to search for the optimal route. Finally, a path planning simulation system was built on the Visual Studio .NET 2010 platform with an MS SQL Server 2008 database and the Visual C# 2008 language, and the simulation result was presented in graph form, which provides a good basis for further study.
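    The two planning steps, building a Voronoi diagram over the threat points and searching it with Dijkstra's algorithm, can be sketched as follows. The edge-cost formula mixing path length with proximity to threats, the snapping of start and goal to the nearest Voronoi vertices, and all constants are illustrative assumptions.

```python
import heapq
import numpy as np
from scipy.spatial import Voronoi

def build_graph(threats):
    """Finite Voronoi edges between threat points form candidate routes; an edge costs
    more when it is short on clearance from the nearest threat."""
    vor = Voronoi(threats)
    graph = {i: [] for i in range(len(vor.vertices))}
    for a, b in vor.ridge_vertices:
        if a == -1 or b == -1:        # skip edges that run to infinity
            continue
        pa, pb = vor.vertices[a], vor.vertices[b]
        mid = (pa + pb) / 2
        clearance = np.min(np.linalg.norm(threats - mid, axis=1))
        cost = np.linalg.norm(pa - pb) + 50.0 / (clearance + 1e-6)
        graph[a].append((b, cost))
        graph[b].append((a, cost))
    return vor, graph

def dijkstra(graph, start, goal):
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, np.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1]

threats = np.array([[1, 1], [2, 4], [4, 2], [5, 5], [3, 3], [6, 1], [1, 6]], dtype=float)
vor, graph = build_graph(threats)
start = int(np.argmin(np.linalg.norm(vor.vertices - np.array([0, 0]), axis=1)))
goal = int(np.argmin(np.linalg.norm(vor.vertices - np.array([7, 7]), axis=1)))
print([vor.vertices[i] for i in dijkstra(graph, start, goal)])
```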
    Time-based character motion adjusting technique
    HE Yi-hui MA Xiao-jian
    2011, 31(09):  2577-2580.  DOI: 10.3724/SP.J.1087.2011.02577
    Abstract ( )   PDF (627KB) ( )  
    Related Articles | Metrics
    Since motions cannot be adjusted in current motion control methods based on motion capture, a time-based motion adjusting technique was proposed. Firstly, time parameters were obtained by analyzing the space-time characteristics of the basic motions of the character model, and the real-time goal position of the character was calculated according to the control information and environment constraints. Subsequently, the joint parameters with the space-time characteristics of the basic motions were calculated, so that the character could make new motions when necessary. Finally, the simulation results show that, with the character model loaded in Unity, the character is able to adjust its motion when turning around and climbing a ladder.
    Fission reproduction particle filter-based track-before-detect algorithm
    FAN Ling
    2011, 31(09):  2581-2583.  DOI: 10.3724/SP.J.1087.2011.02581
    Abstract ( )   PDF (567KB) ( )  
    Related Articles | Metrics
    Since the Particle Filter-based Track-Before-Detect (PF TBD) algorithm is subject to severe sample impoverishment, a fission reproduction PF TBD algorithm was proposed. To fit the TBD problem, the particles were divided into three types (death, birth and survival) according to an existence variable indicating the presence or absence of a target in the data, and the survival particles were processed by fission reproduction. This process increases the diversity of particles and overcomes sample impoverishment. The simulation results demonstrate that, compared with the PF TBD, the proposed algorithm provides stable and reliable detection as well as accurate tracking.
    Vehicle multimedia player based on Atom Z510
    LI Xia NIN Hua-song
    2011, 31(09):  2584-2588.  DOI: 10.3724/SP.J.1087.2011.02584
    Abstract ( )   PDF (788KB) ( )  
    Related Articles | Metrics
    Since current vehicle audio systems have limited functions and make it difficult to update media files, the design and implementation of a high-performance vehicle player that can play both audio and video media files were given. The latest embedded processor Atom Z510 released by Intel was selected as the core processor of the system, Windows XP Embedded (XPE) was built on it, Qt Creator was used to develop the player software, and Graphical User Interface (GUI) technology was used to design a graphical user interface for the player. The player can not only play local media files, but also identify CF cards, USB flash drives and other removable storage devices and play the media files stored on them.
    Visual acuity distance control system based on Kalman filter CMAC-PID
    WANG Xu QIU Fei-yue
    2011, 31(09):  2589-2592.  DOI: 10.3724/SP.J.1087.2011.02589
    Abstract ( )   PDF (538KB) ( )  
    Related Articles | Metrics
    To improve the accuracy and flexibility of visual acuity measurement, a visual acuity distance control system was designed to control the acuity distance effectively, and its mathematical model was established. Concerning the nonlinearity, time variability and heavy interference of the system, a Cerebellar Model Articulation Controller combined with Proportion-Integration-Differentiation (CMAC-PID) control method based on Kalman filtering was proposed, in which the Kalman filter was used to suppress measurement noise and control interference. The simulation results show that this control method outperforms plain CMAC-PID in anti-interference, and the performance of the distance control system is improved.
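    A reduced sketch of the control loop is given below: a scalar Kalman filter smooths the noisy distance measurement before a PID controller computes the drive command. The CMAC part is omitted, and the plant model, noise levels and gains are illustrative assumptions rather than the system described in the paper.

```python
import numpy as np

class ScalarKalman:
    """1-D Kalman filter used to smooth noisy distance measurements."""
    def __init__(self, q=1e-3, r=0.05, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def update(self, z):
        self.p += self.q                      # predict
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (z - self.x)            # correct
        self.p *= (1 - k)
        return self.x

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy loop: drive a first-order "distance" plant to 5.0 m under measurement noise
rng = np.random.default_rng(0)
kf, pid, dist = ScalarKalman(x0=1.0), PID(2.0, 0.5, 0.1, dt=0.05), 1.0
for _ in range(200):
    noisy = dist + rng.normal(0, 0.2)
    u = pid.step(5.0, kf.update(noisy))
    dist += 0.05 * u                          # simple integrator plant
print(round(dist, 2))                         # should settle close to the 5.0 m setpoint
```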