
Table of Contents

    01 August 2012, Volume 32 Issue 08
    Network and communications
    Flexible link-state routing protocol for mobile Ad Hoc networks
    WANG Xiao-gang CAO Jian
    2012, 32(08):  2085-2094.  DOI: 10.3724/SP.J.1087.2012.02085
    The aim of Quality of Service (QoS) routing in a Mobile Ad Hoc NETwork (MANET) is to determine an efficient route with enough available mobile nodes to satisfy a source node's request, and the selected Multi-Point Relay (MPR) nodes lie on the optimal route that the routing protocol needs to evaluate. In order to reliably find the optimal QoS routing path with maximum bandwidth and minimum delay from a source node to a destination node in a shorter time, a new flexible link-state QoS routing protocol called FLSQR was proposed, which used a new link-state method in which each node's cache stores an Effective Decision Table (EDT) for routing calculation. FLSQR selected MPR1 and MPR2 according to the Effective Distance (ED) in the EDT to obtain the optimal and suboptimal routing paths, and further chose the path with the best bandwidth and delay using the presented metric model. The experimental results show that the proposed FLSQR protocol finds an optimal routing path in MANET better than the OLSR and QOLSR-MPR protocols.
    Tunnel MTU discovery for nested mobile networks based on location update
    CHEN Long TANG Hong-bo WANG Ling-wei
    2012, 32(08):  2090-2094.  DOI: 10.3724/SP.J.1087.2012.02090
    Concerning the Maximum Transmission Unit (MTU) problems of nested mobile networks, and based on an analysis of the existing mechanisms and the characteristics of the network structure, a tunnel MTU model and a tunnel MTU discovery mechanism based on location update were proposed. By storing the path MTU values between home agents at the home agents themselves, and adding MTU information to signaling messages such as router advertisement and binding update, the mechanism can track the tunnel MTU quickly and securely during the location update process, and it adapts to multihoming configurations and a variety of route optimization schemes. The simulation and analysis show that this mechanism can reduce packet delay and transmission overhead, and improve bandwidth utilization compared to the existing mechanisms.
    Optimized scheme about FMIPv6 with π-calculus verification
    LI Xiang-li WANG Xiao-yan WANG Zheng-bin QU Zhi-wei
    2012, 32(08):  2095-2102.  DOI: 10.3724/SP.J.1087.2012.02095
    In order to solve the problems of long handover delay and high packet loss rate in FMIPv6, an improved scheme named PI-FMIPv6 was designed. Information learning, proxy binding and a tunnel timer were introduced to complete the configuration of the New Care-of Address (NCoA), Duplicate Address Detection (DAD) and Binding Update (BU) in advance and to manage the tunnel. The π-calculus was used to define and verify a mathematical model of PI-FMIPv6, proving that the optimized scheme is well defined and precise. Furthermore, simulation results from NS-2 show that PI-FMIPv6 can reduce the handover delay by at least 60.7% and the packet loss rate by at least 61.5% compared to FMIPv6, which verifies that PI-FMIPv6 is superior to FMIPv6 and can better meet real-time requirements.
    Transmission scheme of two-way relaying systems based on asymmetric channel
    FAN Jin-hong HE Li
    2012, 32(08):  2100-2102.  DOI: 10.3724/SP.J.1087.2012.02100
    When the channel qualities from a two-way relay to its two stations are asymmetric, the data rates from the relay are limited by the weaker of the two links, and the channel gain of the stronger link is not fully used. Therefore, a new transmission scheme was proposed. Using the prior bit information method, the receiver on the weaker link exploited known bit information in each transmitted symbol and decoded on a subset of the transmit symbol constellation, which yields a different minimum Euclidean distance. The two links can thus transmit at different data rates matched to their link qualities while meeting the same bit error rate requirement. The simulation results show that the proposed transmission scheme can be applied to practical scenarios with asymmetric channel qualities.
    Improved algorithm of wireless sensor network node localization
    ZHANG Hong-jun MAO Yong-yi
    2012, 32(08):  2103-2105.  DOI: 10.3724/SP.J.1087.2012.02103
    In order to eliminate the influence of Non-Line-Of-Sight (NLOS) transmission error in wireless sensor network node localization, and to address the possibility that the Newton iterative algorithm fails to converge, a Newton iterative localization algorithm based on weighted residuals was proposed. First, the residual weighting algorithm was used to obtain a preliminary position of the unknown node; then this position was used as the initial value for the Newton iterative localization algorithm; finally, the precise position of the unknown node was obtained. The simulation results show that this algorithm can effectively restrain the effect of NLOS propagation error, improve the precision of sensor network node localization, and perform stably.
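As a rough sketch of the second stage described above, assuming a Gauss-Newton style refinement of a preliminary position estimate against range measurements (the anchor layout and exact ranges below are illustrative, not the authors' setup):

```python
import numpy as np

def gauss_newton_locate(anchors, dists, x0, iters=20):
    """Refine a node position estimate by Newton-type iteration
    on the range residuals r_i(x) = ||x - anchor_i|| - dist_i."""
    x = np.asarray(x0, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    for _ in range(iters):
        diff = x - anchors                  # (n, 2) vectors to each anchor
        r = np.linalg.norm(diff, axis=1)    # predicted ranges
        J = diff / r[:, None]               # Jacobian of ranges w.r.t. x
        res = r - dists                     # range residuals
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        x = x - step
    return x
```

With exact (noise-free) ranges the iteration converges to the true position from a reasonable starting point; in the paper's setting the starting point would come from the residual-weighting stage.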
    DOA estimation method based on sparse representation and constrained optimization
    GUO Ying MENG Cai-yun
    2012, 32(08):  2106-2127.  DOI: 10.3724/SP.J.1087.2012.02106
    For Direction-Of-Arrival (DOA) estimation of signals in additive noise, the traditional Multiple Signal Classification (MUSIC) algorithm cannot process coherent signals with few snapshots. Here, the search scope of the estimated DOA was treated as a redundant dictionary, so the estimated DOA corresponded to a few elements of the dictionary and could be represented sparsely by it. The problem was then cast under Second-Order Cone (SOC) constraints, and an efficient estimation algorithm using a single snapshot was developed. The constrained problem can be written in standard SOC form and solved with SeDuMi, an optimization toolbox. The simulation results show that the proposed algorithm has several advantages over existing subspace methods: only a single snapshot is needed, the number of source signals need not be known, and it works with both coherent and non-coherent signals.
    Real-valued extended propagator algorithm based on virtual interpolated array
    CHEN Hao JIA Wei LI Si-jia
    2012, 32(08):  2109-2112.  DOI: 10.3724/SP.J.1087.2012.02109
    A real-valued extended propagator method algorithm based on a virtual interpolated array, named the VIA-EPM real-valued algorithm, was proposed to apply the Virtual Interpolated Array (VIA) to non-circular Direction Of Arrival (DOA) estimation. The real array output was virtually transformed using a transformation matrix obtained from the real and virtual array manifolds. The real and imaginary parts of the transformed array output were then stacked in series to extend the dimensions, exploiting the property that the signal sources are real-valued, and a Propagator Method (PM) DOA estimation algorithm was obtained after splitting the extended array output matrix. The simulation results show that, when sensor position errors exist, the performance of the new algorithm is similar to that of the calibrated extended propagator method real-valued algorithm (EPM real-valued algorithm) using VIA with calibrated sensor position data, while keeping the advantages of array extension, high accuracy and high resolution; it is clearly better than the non-calibrated EPM real-valued algorithm in the case of two-dimensional sensor position errors. Analysis of the computational complexity of the VIA-EPM real-valued algorithm shows that it combines the advantages of the virtual interpolated array and the non-circular characteristic, with a computational complexity much lower than that of complex-valued algorithms.
    Turbo decoding algorithm based on linear approximation of correction function
    LI Zheng SONG Chun-lin ZHAO Yun-jie WU Zhu-jia
    2012, 32(08):  2113-2115. 
    As a new-generation communication system, LTE/LTE-A requires highly reliable communication to sustain its high throughput. Among Turbo decoding algorithms, the Log-MAP algorithm performs well as a simplified algorithm, but its high complexity and delay remain a problem, while the low-complexity Max-Log-MAP algorithm cannot match the performance of Log-MAP. This paper proposed an improved Turbo decoding algorithm based on a linear approximation of the correction function, adopting different correction fitting parameters in different regions. The simulation results demonstrate that, compared with the existing algorithms, the improved algorithm achieves the same Bit Error Rate (BER) performance as Log-MAP while effectively reducing the decoding delay; moreover, it is easier to implement.
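The correction function in question is f(x) = ln(1 + e^(-x)) in the max* operation max*(a, b) = max(a, b) + f(|a - b|), which Log-MAP computes exactly and Max-Log-MAP drops entirely. A region-wise linear fit of f can be sketched as follows (the breakpoints and slopes below are illustrative fits, not the paper's parameters):

```python
import math

def maxstar_exact(a, b):
    """Exact Log-MAP operation: max(a,b) + ln(1 + e^{-|a-b|})."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def maxstar_linear(a, b):
    """max* with the correction term replaced by a region-wise
    linear approximation (illustrative coefficients)."""
    x = abs(a - b)
    if x < 1.0:
        c = 0.6931 - 0.3798 * x           # linear fit on [0, 1)
    elif x < 3.0:
        c = 0.3133 - 0.1324 * (x - 1.0)   # linear fit on [1, 3)
    else:
        c = 0.0                           # correction negligible beyond 3
    return max(a, b) + c
```

The piecewise-linear form trades a small, bounded approximation error for a table-free, multiplier-light implementation, which is what makes it attractive for hardware decoders.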
    Orthogonal cooperative scheme for digital network coding
    TIAN Xin-ji SONG Cheng
    2012, 32(08):  2116-2122.  DOI: 10.3724/SP.J.1087.2012.02116
    Considering that error propagation cannot be avoided at the relay node in the Orthogonal Amplify-and-Forward (OAF) scheme for network coding, an orthogonal cooperative scheme for network coding with the Decode-and-Forward (DF) protocol was proposed in this paper, in which whether to forward the source signals is determined by the decoding results at the relay node. In the proposed scheme, error propagation is avoided and system reliability is improved. The theoretical analysis shows that the proposed scheme outperforms the OAF scheme in terms of average mutual information and reliability, and the simulation results confirm the validity of the theoretical analysis.
    Modulation recognition for multi-component PSK signals based on cyclic spectrum amplitude
    YU Zhi-bin YU Ning-yu
    2012, 32(08):  2119-2122.  DOI: 10.3724/SP.J.1087.2012.02119
    Concerning the modulation recognition problem of a single-channel receiver working in a multi-signal environment, a new approach that directly extracts the feature of each independent component was proposed to identify multi-component Phase Shift Keying (PSK) signals. In this method, the spectral correlation function of the multi-component signal was derived, signal features were extracted, and a classifier was designed. The theoretical analysis and experimental results show that the proposed algorithm can extract the feature of each independent component despite the mutual interference of the component signals, and it can effectively classify arbitrary combinations of dual signals in the given set; the average correct recognition rate over all combinations reaches 97% when the Signal-to-Noise Ratio (SNR) is 0 dB.
    Modulation identification algorithm based on cyclic spectrum characteristics in multipath channel
    LI Shi-ping CHEN Fang-chao WANG Long WANG Ai-hong
    2012, 32(08):  2123-2127.  DOI: 10.3724/SP.J.1087.2012.02123
    A new algorithm based on the cyclic spectrum was proposed for the classification of communication signals in multipath channels, addressing the problems of few identifiable types, difficult extraction of stable feature parameters, and low recognition rate. Firstly, the feature surfaces and projection planes of the cyclic spectrum, the squared cyclic spectrum and the fourth-power cyclic spectrum were extracted. Secondly, the correlation coefficients of the feature surfaces and projection planes were used as characteristic parameters. Finally, a suitable decision threshold was chosen, and seven signal types (BPSK, QPSK, 2FSK, 4FSK, MSK, 16QAM and OFDM) were identified automatically. The experimental results show that the characteristic parameters are highly robust against multipath interference and yield a high recognition rate: when the Signal-to-Noise Ratio (SNR) is higher than 2 dB, the overall recognition rate reaches 94%. Comparison with existing algorithms in simulation shows that the proposed algorithm is superior.
    Blind modulation recognition algorithm of burst QAM signal
    LIU Cong-jie PENG Hua WU Di
    2012, 32(08):  2128-2132.  DOI: 10.3724/SP.J.1087.2012.02128
    For the modulation recognition of seven kinds of Quadrature Amplitude Modulation (QAM) signals in non-cooperative communication, a new blind identification algorithm based on combined features was proposed. Building on a discussion and analysis of the cyclostationarity and instantaneous amplitude distribution of QAM signals, the algorithm combined three features: a cyclostationary detection feature, a fourth-order zero-conjugate cyclic cumulant feature, and an instantaneous envelope feature. A binary-tree support vector machine was used as the classifier for the seven Intermediate Frequency (IF) QAM signals. The simulation results show that the correct recognition rate of the algorithm exceeds 90% when the number of symbols is 1000 and the Signal-to-Noise Ratio (SNR) is more than 6 dB.
    Parallel weak signal detection algorithm based on envelope analysis
    LIU Lei FAN Tie-sheng WANG Yin-bin LI Zhi-hui TANG Chun-ge
    2012, 32(08):  2133-2136. 
    The most commonly used technique in weak signal detection uses correlation operations to detect whether a known periodic signal exists; however, it is complicated and cannot be applied widely. To solve this problem, an envelope-analysis-based algorithm was proposed from the perspective of mathematical morphology. In this algorithm, salient points were selected from a low-level envelope to form a higher-level envelope of the signal, finally converging in parallel at the peak position of every target signal. No prior knowledge about the target signal is needed, and the method is relatively insensitive to white Gaussian noise. The algorithm is effective in simulations with a signal-to-noise ratio of -10 dB, and the measured data demonstrate that it detects weak signals well.
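A toy version of the salient-point idea described above can be sketched as follows: at each level, keep only the local maxima of the previous envelope, so the surviving points converge toward the peaks. The stopping rule and level count here are illustrative assumptions, not the paper's construction:

```python
def envelope_peaks(signal, levels=3):
    """Iteratively keep local maxima ('salient points') to build
    higher-level envelopes; the survivors mark candidate peaks."""
    pts = list(enumerate(signal))
    for _ in range(levels):
        keep = [pts[i] for i in range(1, len(pts) - 1)
                if pts[i][1] >= pts[i - 1][1] and pts[i][1] >= pts[i + 1][1]]
        if not keep:
            break
        pts = keep
        if len(pts) <= 2:       # few enough survivors: treat as peaks
            break
    return [i for i, _ in pts]  # indices of candidate peak positions
```

Because every level is a simple local comparison, all candidate regions are processed in parallel, which matches the "no prior knowledge of the target signal" character of the method.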
    Statistical analysis of bit error in Ka band mobile satellite channel
    PAN Cheng-sheng LI Hua-fang LIU Chun-ling
    2012, 32(08):  2137-2140.  DOI: 10.3724/SP.J.1087.2012.02137
    In view of the characteristics of the Ka-band mobile satellite channel, a channel model was built that takes full account of fading caused by weather and the surrounding environment. The satellite channel with Binary Phase Shift Keying (BPSK) modulation was then simulated, and a probability model of the baseband channel bit error distribution was established; the probability of error occurrences was fitted using the least-squares method. According to the results, the number of channel error occurrences obeys a Poisson distribution. Moreover, the digital baseband simulation of Ka-band mobile satellite channels shows that the system error with BPSK modulation consists of burst errors and random errors that obey the Poisson distribution.
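Checking per-interval error counts against a Poisson model can be illustrated as follows; the counts below are made-up data, and λ is estimated by the sample mean rather than by the paper's fitting procedure:

```python
import math
from collections import Counter

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson(lam) variable."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def fit_poisson(counts):
    """Fit a Poisson model to per-interval error counts and report the
    sum-of-squares gap between empirical and model frequencies."""
    lam = sum(counts) / len(counts)      # sample-mean estimate of lambda
    freq = Counter(counts)
    n = len(counts)
    sse = sum((freq.get(k, 0) / n - poisson_pmf(k, lam)) ** 2
              for k in range(max(counts) + 1))
    return lam, sse
```

A small sum-of-squares value supports the Poisson hypothesis; a goodness-of-fit test (e.g. chi-squared) would be the more formal follow-up.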
    Advanced computing
    Methods of metadata management in block-level continuous data protection system
    LI Hong-yan
    2012, 32(08):  2141-2149.  DOI: 10.3724/SP.J.1087.2012.02141
    To effectively organize the historical data of Continuous Data Protection (CDP) systems and improve recovery efficiency in case of disaster, this paper presented three different methods for metadata management in CDP systems, which has an important impact on system recovery performance. The former two (DIR-MySQL and OPT-MySQL) were simple implementations based on the MySQL database, and the third (META-CDP) was designed with the characteristics of CDP metadata in mind. The experimental results show that all three methods can increase the recovery efficiency of CDP systems; the MySQL-based methods degrade as the amount of recovery data increases, whereas the META-CDP method is far less sensitive to it. The META-CDP method is far more efficient than the other two, and its performance is reasonably acceptable.
    Distributed computation method for traffic noise mapping based on service object-oriented architecture
    LI Nan FENG Tao LIU Bin LI Xian-hui LIU Lei
    2012, 32(08):  2146-2149.  DOI: 10.3724/SP.J.1087.2012.02146
    Current urban traffic noise mapping systems are not well suited to large-scale distributed computing in dynamic networks. This paper proposed a distributed computation method for noise mapping based on loosely coupled services and the mechanism of Service Object Oriented Architecture (SOOA), investigated the generation of the noise propagation calculation service, and introduced the deployment and management of services in the proposed system. A demonstration indicated that the distributed computation approach considerably reduced the calculation overhead while providing a flexible system architecture. The experimental results show that imbalance among parallel subtasks affects parallel efficiency; under normal circumstances, parallel efficiency can reach over 85%.
    Improved HDFS scheme based on erasure code and dynamical-replication system
    LI Xiao-kai DAI Xiang LI Wen-jie CUI Zhe
    2012, 32(08):  2150-2158.  DOI: 10.3724/SP.J.1087.2012.02150
    In order to improve the storage efficiency of the Hadoop Distributed File System (HDFS) and its load-balancing ability, this paper presented an improved solution named Noah to replace the original multiple-replication strategy. Noah introduced a coding module to HDFS: instead of the original multiple-replication strategy, the module encoded every HDFS data block into a greater number of data sections (pieces) and saved them dispersedly across the clusters of the storage system in a distributed fashion. In case of cluster failure, the original data can be recovered by decoding after collecting any 70% of the sections, while a dynamic replication strategy works synchronously, changing the number of copies dynamically with demand. The experimental results on simulated storage clusters show the feasibility and advantages of the proposed solution.
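The "any 70% of the pieces suffice" property is characteristic of a k-of-n erasure code. A minimal sketch of such a code (a generic polynomial/Reed-Solomon-style construction over a small prime field, not Noah's actual coding module) looks like this:

```python
P = 257  # prime field large enough for byte values (toy stand-in)

def encode(block, n):
    """Spread a k-byte block into n coded pieces; any k of them
    suffice to rebuild the block. Piece at x is the value of the
    polynomial whose coefficients are the block bytes."""
    return [(x, sum(b * pow(x, j, P) for j, b in enumerate(block)) % P)
            for x in range(1, n + 1)]

def decode(pieces, k):
    """Recover the block from any k pieces by solving the
    Vandermonde system mod P with Gaussian elimination."""
    A = [[pow(x, j, P) for j in range(k)] + [y % P] for x, y in pieces[:k]]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], P - 2, P)          # modular inverse of pivot
        A[col] = [v * inv % P for v in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(v - f * w) % P for v, w in zip(A[r], A[col])]
    return [A[j][k] for j in range(k)]            # polynomial coefficients
```

With k = 7 and n = 10, any 7 pieces (70%) rebuild the block, at a storage overhead of n/k ≈ 1.43x instead of the 3x of triple replication.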
    Community structure identification algorithm based on subcenter
    SHUI Chao LI Hui
    2012, 32(08):  2154-2158.  DOI: 10.3724/SP.J.1087.2012.02154
    Some community structure identification algorithms in complex network research identify communities correctly but inefficiently, while others are the opposite; balancing efficiency and correctness remains a challenge for community structure identification algorithms. In this paper, a new algorithm named CoreScan was proposed. Unlike existing algorithms, CoreScan finds special nodes called subcenters in the network and uses a preprocessing phase to divide nodes into several communities according to these subcenters, then clusters nodes by evaluating the D modularity of communities. The definition of the subcenter was given, along with proofs guaranteeing the correctness of the CoreScan algorithm. Finally, the algorithm was tested on two artificial networks and two real-world graphs. The experimental results show that the algorithm achieves linear efficiency while identifying communities no less correctly than existing algorithms, making it suitable for investigating large-scale network structures.
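As a point of reference for the modularity-based clustering step above, standard Newman modularity of a partition can be computed as follows (the abstract's "D modularity" is a variant; this sketch uses the standard definition):

```python
def modularity(adj, communities):
    """Newman modularity Q = (1/2m) * sum over within-community pairs
    of (A_ij - k_i * k_j / 2m), for an undirected adjacency matrix."""
    m = sum(sum(row) for row in adj) / 2          # number of edges
    deg = [sum(row) for row in adj]               # node degrees
    q = 0.0
    for c in communities:
        for i in c:
            for j in c:
                q += adj[i][j] - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)
```

For two triangles joined by a single edge, splitting at that edge gives Q = 5/14 ≈ 0.357, a typical "good partition" value.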
    Solving parameters of Van Genuchten equation by improved harmony search algorithm
    XING Chang-ming DAI Yan YANG Lin
    2012, 32(08):  2159-2164.  DOI: 10.3724/SP.J.1087.2012.02159
    The Van Genuchten equation is the most commonly used soil water characteristic curve equation, and the precision of its parameter values is the key to its use. In order to solve for these parameters accurately, the Harmony Search (HS) algorithm was introduced, and a new HS algorithm based on current global information, named IGHS, was proposed. The IGHS algorithm has the following characteristics: firstly, it employs a new method for generating new solution vectors that uses the current global optimum in the harmony memory; secondly, to avoid premature convergence and enhance global search ability, it disturbs the current global optimum with a certain probability; lastly, the algorithm is simple and easy to implement. The experimental results show that the solution accuracy of IGHS is similar to that of the random Particle Swarm Optimization (PSO) algorithm, but IGHS converges faster with less computation, so it can serve as a new method for calculating Van Genuchten equation parameters.
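A minimal sketch of the global-best-guided harmony search described above, shown on a generic minimization problem rather than the Van Genuchten fitting objective (the memory size, rates, and perturbation scale are assumed defaults, not the paper's settings):

```python
import random

def ighs_minimize(f, bounds, hms=10, iters=3000, hmcr=0.9, par=0.3, seed=1):
    """Harmony search where new vectors borrow from the current global
    best (IGHS's key idea) and are perturbed with probability `par`
    to avoid premature convergence."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    mem.sort(key=f)
    for _ in range(iters):
        best = mem[0]
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = best[d]                               # take from global best
                if rng.random() < par:                    # small disturbance
                    v += rng.uniform(-1, 1) * (hi - lo) * 0.01
            else:
                v = rng.uniform(lo, hi)                   # random consideration
            new.append(min(max(v, lo), hi))
        if f(new) < f(mem[-1]):                           # replace worst harmony
            mem[-1] = new
            mem.sort(key=f)
    return mem[0]
```

For the actual application, `f` would be the sum of squared deviations between measured soil water retention data and the Van Genuchten curve evaluated at candidate parameters.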
    Improved self-adaptive differential evolution algorithm for large-scale integer task assignment
    WANG Yong-jiao
    2012, 32(08):  2165-2167.  DOI: 10.3724/SP.J.1087.2012.02165
    In order to overcome the dimension-disaster problem of general 0-1 task assignment, an integer task assignment method based on an improved Self-Adaptive Differential Evolution (SADE) algorithm was proposed. Firstly, the 0-1 task assignment model was transformed into an integer task assignment model, which decreased both the dimension of the variables and the number of equality constraints. Then, the classical DE/rand/1/bin and DE/best/2/bin mutation operators were combined with a linear weight, which made the SADE algorithm converge quickly while reducing its dependence on the concrete problem, and the integer task assignment model was optimized by the improved SADE algorithm. Finally, several classic task assignment problems were tested. The experimental results show that the proposed algorithm is effective and fast on large-scale task assignment.
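The linearly weighted mutation described above can be sketched as a blend of the two classical donor vectors (here with a fixed weight `w`; in a self-adaptive DE the weight and `F` would change over generations):

```python
import random

def blended_mutant(pop, best, i, F=0.5, w=0.5, rng=random):
    """Donor vector for individual i: w * DE/rand/1 + (1 - w) * DE/best/2.
    `pop` is the population, `best` the current best individual."""
    dim = len(best)
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3, r4, r5 = rng.sample(idx, 5)
    # DE/rand/1: x_r1 + F * (x_r2 - x_r3)
    rand1 = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
    # DE/best/2: x_best + F * (x_r2 - x_r3) + F * (x_r4 - x_r5)
    best2 = [best[d] + F * (pop[r2][d] - pop[r3][d] + pop[r4][d] - pop[r5][d])
             for d in range(dim)]
    return [w * a + (1 - w) * b for a, b in zip(rand1, best2)]
```

For the integer assignment model, the donor would then go through binomial crossover and be rounded/repaired to a feasible integer assignment before selection.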
    Core-based multidimensional multiple-choice knapsack solution
    KANG Kun-peng
    2012, 32(08):  2168-2175.  DOI: 10.3724/SP.J.1087.2012.02168
    The core has been used to design efficient algorithms for the knapsack problem. Since no core-based algorithm exists for the Multidimensional Multiple-choice Knapsack Problem (MMKP), several methods were put forward to obtain a core for it. The method of obtaining a core of the Knapsack Problem (KP) was discussed first, and then the base solution and ordering relations of the MMKP were explained in detail according to metrics set beforehand. After that, the core, which can be viewed as a subspace, was obtained by three methods. The first method was based on observing which of E[lc] and E[d∞] was smaller. The second was based on the observation that the Manhattan distance of the base solution is not far from the optimal solution. In the third, a total ordering was defined among all items and the first k items were taken as the core. The comparative study shows that the first method is superior in both accuracy and enumeration time, and the MMKP can be solved efficiently with it.
    Information security
    Synergetic phase transition detection method for network traffic anomalies based on wavelet
    XIONG Wei
    2012, 32(08):  2171-2174.  DOI: 10.3724/SP.J.1087.2012.02271
    According to the nonlinear and non-stationary dynamic characteristics of network traffic, a technique based on synergetic phase transition theory was proposed for detecting network traffic anomalies. Using the nonlinear dynamic equation of the order parameter, the paper described the complex behaviors of the network traffic system in the discrete wavelet domain of the traffic time series, and the potential function was used to characterize the non-stationary phase transition process of the system. The relationship between network traffic status and various attack patterns was analyzed, and the synergetic model was used to calculate the network traffic order parameter; when the corresponding order parameter converged, the corresponding attack pattern or normal traffic pattern could be detected. Finally, the DARPA 1999 data set was used to evaluate the proposed method: the average detection rate is 90.00% and the average false alarm rate is 15.03%. The experimental results show that the proposed method is effective for network traffic anomaly detection.
    Advanced computing
    Definitions and analysis of general thermodynamic parameters in cloud computing phase space
    WANG Peng
    2012, 32(08):  2172-2175.  DOI: 10.3724/SP.J.1087.2012.02172
    The cloud computing system is a highly coupled system consisting of a large number of nodes. By defining general thermodynamic parameters projected into the phase space of the cloud computing system, i.e., physical quantities such as general normalized temperature, general absolute temperature, general normalized entropy and center of gravity, the analysis of the cloud computing system can be moved into the phase space of a thermodynamic system. These phase-space thermodynamic parameters reflect the system's overall performance and enable evaluation of external load requests, load balance, and node parameter changes. A simulation experiment shows that this analytical method is helpful for analyzing cloud computing systems.
    Database technology
    Concept drift detection method with limited amount of labeled data
    LI Nan GUO Gong-de CHEN Li-fei
    2012, 32(08):  2176-2185.  DOI: 10.3724/SP.J.1087.2012.02176
    Most existing algorithms for data stream mining use the true labels of testing data to detect concept drift and adjust the current model as required. This is impractical in real-world applications, as manually labeling instances that arrive continuously at high speed requires substantial human and material resources. Therefore, a concept drift detection method with a limited amount of labeled data was proposed. The method used the model clusters generated by the fast KNNModel algorithm to classify instances, and detected concept drift by testing, at a certain significance level, whether the number of instances not covered by any model cluster in the current block increased remarkably over that of the prior block. Once concept drift happened, domain experts were asked to label a few of the uncovered instances, and these representative instances were used to update the current model. The experimental results show that, compared with traditional classification algorithms, the proposed method not only adapts to concept drift but also achieves similar or better classification accuracy.
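The significance test on the uncovered-instance counts can be sketched as a one-sided two-proportion z-test (the z threshold below, roughly the 1% level, is an assumed choice, not the paper's):

```python
import math

def drift_detected(prior_uncovered, prior_n, cur_uncovered, cur_n, z_crit=2.33):
    """Flag concept drift when the share of instances not covered by any
    model cluster rises significantly on the current block."""
    p1 = prior_uncovered / prior_n
    p2 = cur_uncovered / cur_n
    p = (prior_uncovered + cur_uncovered) / (prior_n + cur_n)  # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / prior_n + 1 / cur_n))
    if se == 0:
        return False
    return (p2 - p1) / se > z_crit
```

A jump from 2% to 20% uncovered instances over two 1000-instance blocks triggers the test, while 2% to 2.5% does not.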
    Density-based clustering algorithm combined with limited regional sampling
    ZHOU Hong-fang ZHAO Xue-han ZHOU Yang
    2012, 32(08):  2182-2185.  DOI: 10.3724/SP.J.1087.2012.02182
    Concerning the poor time performance and low clustering accuracy of the traditional density-based algorithms DBSCAN and DBRS, this paper proposed an improved density-based clustering algorithm called DBLRS, which incorporates a limited regional sampling technique. The algorithm used the parameter Eps to search the neighborhood and expanded points of a core point without increasing time or space complexity, and sampled data within a limited region (Eps, 2Eps). The experimental results confirm that, by selecting representative points to expand a cluster, DBLRS reduces the probability of splitting large clusters and improves efficiency and clustering accuracy.
    Selection algorithm for K-means initial clustering center
    ZHENG Dan WANG Qian-ping
    2012, 32(08):  2186-2192.  DOI: 10.3724/SP.J.1087.2012.02186
    The initial clustering centers of the K-means algorithm are randomly selected, which may result in low accuracy and unstable clustering. To solve these problems, a selection algorithm for the K-means initial clustering centers was proposed. The locations of data points were determined by analyzing the Difference of K-dist (DK) graph, and the point with the least k-dist value on each main density curve was selected as an initial clustering center. The experimental results demonstrate that the improved algorithm selects unique initial clustering centers, yields stable clustering results with higher accuracy, and reduces the number of iterations.
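The quantity the DK-graph analysis is built on is each point's k-dist: the distance to its k-th nearest neighbor. A brute-force sketch (quadratic, fine for illustration; a KD-tree would be used at scale):

```python
import math

def k_dist(points, k):
    """Distance from each point to its k-th nearest neighbour.
    Dense-region points get small k-dist; outliers get large k-dist."""
    out = []
    for p in points:
        d = sorted(math.dist(p, q) for q in points if q is not p)
        out.append(d[k - 1])
    return out
```

Points with locally minimal k-dist sit at the densest spots of their region, which is why a smallest-k-dist point on a density curve is a natural initial center, while the isolated point's large k-dist keeps it from being chosen.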
    New clustering algorithm based on hybrid niching artificial fish swarm
    WANG Pei-chong QIAN Xu LEI Feng-jun
    2012, 32(08):  2189-2192.  DOI: 10.3724/SP.J.1087.2012.02189
    To correct weaknesses of the K-means algorithm, such as sensitivity to the initial value of K and premature convergence, this paper presented an improved K-means algorithm based on an artificial fish school mechanism (NAFS). Firstly, prior knowledge was exploited to randomly generate some cluster centers for the problem to be solved, composing the fish school environment. Secondly, the cooperation and competition mechanism of the fish individuals was used to search for a satisfactory outcome. In view of the tendency of an artificial fish school to fall into local optima, a niching algorithm based on fish crowding density was introduced to improve the diversity of the population and the solution accuracy. The results of experiments on KDDCUP99 show that NAFS has higher clustering accuracy and is appropriate for solving high-dimensional clustering problems.
    Outlier mining algorithm based on data-partitioning and grid
    TANG Cheng-long XING Chang-zheng
    2012, 32(08):  2193-2197.  DOI: 10.3724/SP.J.1087.2012.02193
    To solve the problems of inefficiency and poor adaptability of existing grid-based outlier mining algorithms, this paper proposed an outlier mining algorithm based on data partitioning and grids. Firstly, data partitioning was applied. Secondly, non-outliers were filtered out cell by cell and the intermediate results were temporarily stored. Thirdly, an improved Cell Dimension Tree (CD-Tree) was built to maintain the spatial information of the retained data. Afterwards, non-outliers were further filtered out by micro-cell and processed efficiently through two optimization strategies. Finally, after mining point by point, the outlier set was obtained. The theoretical analysis and experimental results show that the method is feasible and effective, and that it scales well to massive, high-dimensional data.
    Multi-class association rule generation algorithm
    ZENG An-ping
    2012, 32(08):  2198-2201.  DOI: 10.3724/SP.J.1087.2012.02198
    The association rules generated by traditional algorithms suffer from few rule classes and low correlation. Based on an analysis of these shortcomings, and combined with the Spearman rank correlation coefficient, a new multi-class association rule algorithm was proposed. Starting from the strong association rules generated by traditional algorithms, the new algorithm used the Spearman rank correlation coefficient to calculate the synchronous and asynchronous correlation. With the correlation coefficient as the interestingness threshold, the algorithm can generate synchronous positive rules, contrary positive rules, synchronous negative rules and contrary negative rules. Experiments were carried out to illustrate the effectiveness and superiority of the algorithm.
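    The Spearman rank correlation coefficient used as the interestingness threshold is simply the Pearson correlation of the ranks. The tie-free sketch below (the names `spearman_rho`, `up` and the sample series are illustrative) shows the two extreme cases the four rule types are built on: synchronous correlation (rho near +1) and contrary correlation (rho near -1).

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks
    (this simple double-argsort ranking assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

up = [1, 2, 3, 4, 5]
rho_sync = spearman_rho(up, [2, 4, 6, 8, 10])    # perfectly concordant
rho_contra = spearman_rho(up, [10, 8, 6, 4, 2])  # perfectly discordant
```

    With tied values a fractional-rank scheme would be needed; the double-argsort trick is only valid for distinct values.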
    Multivariate regression analytical method based on heuristic constructed variable under condition of incomplete data
    ZHANG Xi-xiang LI Tao-shen
    2012, 32(08):  2202-2274.  DOI: 10.3724/SP.J.1087.2012.02202
    Regression analysis is often used for filling in and predicting incomplete data, but it has a flaw when constructing the regression equation: the form of the independent variables is fixed and single. In order to solve this problem, the paper proposed an improved multivariate regression analytical method based on heuristically constructed variables. Firstly, optimized combination forms of the existing variables were found by means of a greedy algorithm; then the new constructed variable was chosen for multivariate regression analysis to obtain a better goodness of fit. Results of calculating and estimating incomplete data on the mechanical strength of wheat stalks prove that the proposed method is feasible and effective, and that it achieves a better goodness of fit when predicting incomplete data.
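    The greedy search over constructed-variable forms can be sketched as follows: fit a least-squares regression for each candidate form and keep the one with the best goodness of fit. The candidate set (`x`, `x^2`, `log(x)`, `1/x`) is a hypothetical illustration; the paper's actual combination forms are not specified in the abstract.

```python
import numpy as np

def r_squared(X, y):
    """Goodness of fit (R^2) of an ordinary least-squares regression."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = ((y - X @ coef) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def best_constructed_variable(x, y):
    """Greedily try a few candidate variable forms and keep the one
    with the best goodness of fit (the candidate set is illustrative)."""
    candidates = {"x": x, "x^2": x ** 2, "log(x)": np.log(x), "1/x": 1.0 / x}
    ones = np.ones_like(x)
    scores = {name: r_squared(np.column_stack([ones, v]), y)
              for name, v in candidates.items()}
    return max(scores, key=scores.get)

x = np.linspace(1.0, 10.0, 50)
y = 3.0 * x ** 2 + 5.0            # the true relation is quadratic
best = best_constructed_variable(x, y)
```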
    Spatiotemporal index for moving objects in networks: IMon-tree
    LI Feng LUO Lei
    2012, 32(08):  2205-2222.  DOI: 10.3724/SP.J.1087.2012.02205
    To overcome the shortcomings of the Mon-tree index, a method called the IMon-tree (Improved Mon-tree) index was proposed. It includes three layers: the top layer uses grids to index roads; the bottom layer indexes the objects' moving information; and the middle layer, which connects the other two, completes the mapping from roads to moving information. In order to implement trajectory queries, a hash table was used to store the moving information of objects. The experimental results show that the IMon-tree index outperforms the TMN-tree on trajectory queries. As to the average response time of the spatiotemporal query algorithm, the IMon-tree takes 65 percent of the time of the Mon-tree and 81 percent of that of the TMN-tree. Therefore, the suggested method can be applied to all kinds of geodatabases and geographic information systems.
    Artificial intelligence
    Probability analysis of capturing specific objects in stratified sampling model
    YANG Guan-ci LI Shao-bo ZHONG Yong
    2012, 32(08):  2209-2211.  DOI: 10.3724/SP.J.1087.2012.02209
    In order to increase the probability that a population-based search approach captures specific objects from the decision space within limited time, this paper established, on the basis of the classical probability model, an overall random sampling model and a partition sampling model inspired by stratified sampling, which divides the sample space into more than one subspace. By analyzing and comparing the probabilities of the two random events of obtaining the specific objectives at least once over repeated independent random samplings from these models, the paper proves that the partition sampling model's probability is always greater than the overall random sampling model's probability when the number of specific objectives in the population is 1 or 2.
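    For a single target, the two sampling models can be compared numerically. With n independent uniform draws (with replacement, an assumption of this sketch) from a population of size N, the capture probability is 1-(1-1/N)^n; splitting the space into s equal subspaces and spending n/s draws in each, the target's subspace of size N/s receives its n/s draws, giving 1-(1-s/N)^(n/s), which is larger.

```python
def p_overall(N, n):
    """P(hit the single target at least once) with n uniform draws,
    with replacement, from a population of size N."""
    return 1.0 - (1.0 - 1.0 / N) ** n

def p_partition(N, n, s):
    """Space split into s equal subspaces, n/s draws spent in each;
    the single target lies in one subspace of size N/s (simplified
    model of the stratified scheme)."""
    return 1.0 - (1.0 - s / N) ** (n // s)

p1 = p_overall(100, 20)          # ~0.182
p2 = p_partition(100, 20, s=4)   # ~0.185, the partition model wins
```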
    New algorithm to get optimal threshold for three-decision-making
    CHEN Gang LIU Bing-quan WU Yan
    2012, 32(08):  2212-2215.  DOI: 10.3724/SP.J.1087.2012.02212
    The traditional three-decision-making model relies on the experience of experts to set the thresholds, which impedes its wide application in many fields. To minimize the decision risk, a computational model of the risk loss was built and a new classification algorithm requiring no prior knowledge was given. The algorithm used the model conditions to determine the range of parameter values minimizing the risk loss, divided this range into several equal grids, and obtained the smallest parameter range, i.e. the optimal threshold, by searching these grids. Finally, a three-decision-making classifier was built with this threshold, and it was compared with a Bayesian classifier on part of the UCI data sets. The comparison shows that the three-decision-making classifier performs better, which demonstrates the effectiveness of the algorithm.
    Particle swarm optimization algorithm with composite strategy inertia weight
    GAO Zhen-hua MEI Li ZHU Yuan-jian
    2012, 32(08):  2216-2218.  DOI: 10.3724/SP.J.1087.2012.02216
    A new Particle Swarm Optimization (PSO) algorithm with linearly decreasing and dynamically changing inertia weight, named L-DPSO, was presented to solve the problem that a linearly decreasing inertia weight cannot match the nonlinear changing characteristics of PSO. The linear strategy of the linearly decreasing inertia weight and the nonlinear strategy of the dynamically changing inertia weight were combined in the algorithm, with a weight assigned to each of the two methods. Using the test functions Griewank and Rastrigin to compare L-DPSO with the linearly decreasing inertia weight (LPSO) and the dynamically changing inertia weight (DPSO), the experimental results show that the convergence speed of L-DPSO is obviously superior to LPSO and DPSO, and its convergence accuracy is also higher. Finally, the same test functions were used to compare L-DPSO with several other commonly used inertia weights, and the results show that L-DPSO has an obvious advantage there too.
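    The composite strategy can be sketched as a convex combination of a linear schedule and a nonlinear one. The exponential form of the dynamic schedule, the weight bounds `W_MAX`/`W_MIN`, and the mixing coefficient `alpha` are assumptions made for illustration only, since the abstract does not give the exact formulas.

```python
import math

W_MAX, W_MIN = 0.9, 0.4   # conventional PSO inertia-weight bounds

def w_linear(t, T):
    """Linearly decreasing inertia weight."""
    return W_MAX - (W_MAX - W_MIN) * t / T

def w_dynamic(t, T):
    """A dynamically (nonlinearly) changing weight; the exponential
    decay used here is a hypothetical stand-in for the paper's rule."""
    return W_MIN + (W_MAX - W_MIN) * math.exp(-4.0 * t / T)

def w_composite(t, T, alpha=0.5):
    """L-DPSO-style weight: the two strategies weighted separately
    and summed (alpha is an assumed mixing coefficient)."""
    return alpha * w_linear(t, T) + (1.0 - alpha) * w_dynamic(t, T)

ws = [w_composite(t, 100) for t in (0, 50, 100)]   # decays over the run
```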
    Application of particle filter algorithm in traveling salesman problem
    WU Xin-jie HUANG Guo-xing
    2012, 32(08):  2219-2222.  DOI: 10.3724/SP.J.1087.2012.02219
    Existing optimization algorithms for solving the Traveling Salesman Problem (TSP) easily fall into local extrema. To overcome this shortcoming, a new optimization method based on the particle filter, which regards the search for the best TSP route as a dynamic time-varying system, was put forward. The basic ideas of using the particle filter principle to search for the best route were enunciated, and the implementation procedures were given. In order to reduce the possibility of sinking into local extrema, the crossover and mutation operators of the Genetic Algorithm (GA) were introduced into the new algorithm to enhance the variety of particles in the sampling process. Finally, simulation experiments were performed to prove the validity of the new method: the optimization method based on the particle filter can find better solutions than other optimization algorithms.
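    The GA operator injected into the particle sampling step must preserve the permutation property of TSP routes. Order crossover (OX) is a common such operator and is used here as an illustrative sketch; the abstract does not name the specific crossover the paper uses.

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover (OX): copy a slice of parent 1, then fill the
    remaining positions with the missing cities in parent 2's order,
    so the child is always a valid tour."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    kept = set(p1[i:j])
    fill = iter(c for c in p2 if c not in kept)
    for idx in range(n):
        if child[idx] is None:
            child[idx] = next(fill)
    return child

rng = random.Random(1)
child = order_crossover([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 4, 2], rng)
```

    Applying such operators to resampled particles diversifies the route population without ever producing an invalid tour, which is what keeps the filter from collapsing into a local extremum.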
    Group path planning method based on improved group search optimization algorithm
    ZHENG Hui-jie LIU Hong ZHENG Xiang-wei
    2012, 32(08):  2223-2226.  DOI: 10.3724/SP.J.1087.2012.02223
    Concerning the problems that traditional path planning for group animation requires a long search time and optimizes poorly, the authors proposed a multi-threaded path planning algorithm based on group search optimization. Firstly, to address the tendency of the algorithm to get trapped in local optima, the Metropolis rule was introduced into the search mode. Secondly, random paths generated by multiple threads were stitched together, and the algorithm was applied to path planning. The simulation results show that the algorithm has better global convergence in both high-dimensional and low-dimensional cases, and that the method meets the requirements of path planning in complex animation environments.
    Research and application of fuzzy tensor machine image classification algorithm
    XING Di GE Hong-wei LI Zhi-wei
    2012, 32(08):  2227-2234.  DOI: 10.3724/SP.J.1087.2012.02227
    In small-sample image classification applications, most traditional classification models take vectors as inputs, which causes many defects and degrades classification performance. In this paper, a Fuzzy Support Tensor Machine (FSTM) classifier based on tensor theory and the fuzzy support vector machine was proposed. The algorithm takes tensors as inputs to obtain the optimal tensor plane. After verifying the performance of the algorithm on a handwritten digit image database, FSTM was applied to the category recognition of triangle nodes of feather and down. Compared with the traditional algorithms, FSTM achieves an approximately 6.3% increase in correct recognition rate on average. The experimental results show that the FSTM classifier is much better suited to image classification applications.
    Hybrid intelligent algorithm for solving stochastic chance-constrained programming and its application
    DUAN Fu YANG Rong
    2012, 32(08):  2230-2234.  DOI: 10.3724/SP.J.1087.2012.02230
    In order to find an algorithm that solves the Stochastic Chance-Constrained Programming (SCCP) problem more effectively, a hybrid intelligent algorithm based on the Clonal Selection Algorithm (CSA), random simulation and a neural network was proposed. Random simulation was used to produce a sample matrix of the random variables for training a Back Propagation (BP) neural network to approximate the stochastic function. The trained network was then used within the CSA to calculate fitness values and check the feasibility of solutions until the optimization problem was solved. To make the search fast and effective, double cloning operators and double mutation operators were adopted in the CSA. The simulation results show that a satisfactory result is achieved within 500 generations; moreover, compared with existing algorithms, the precision on the single-objective optimization problem is improved by 2.2% and the precision on multi-objective optimization problems by 65%. In addition, the algorithm was applied to an optimal reservoir scheduling problem, and the simulation results also confirm the correctness and effectiveness of the model and the algorithm.
    Method of kernel-based semi-supervised locality preserving projection
    XUE Si-zhong TAN Rui CHEN Xiu-hong
    2012, 32(08):  2235-2244.  DOI: 10.3724/SP.J.1087.2012.02235
    In order to effectively extract nonlinear features of a data set, the paper proposed a new method called Kernel Semi-supervised Locality Preserving Projection (KSSLPP). It redefines the between-class and within-class similarities using the valuable information contained in abundant labeled and unlabeled samples, and uses them to maximize the between-class separability and minimize the within-class scatter in a high-dimensional kernel space. The proposed method preserves the global and local structures of unlabeled samples in addition to separating labeled samples of different classes. Contrast experiments on the Olivetti face database and UCI databases verify the effectiveness of the proposed algorithm.
    Comparative analysis of impact of lexical semantic information on Chinese entity relation extraction
    LIU Dan-dan PENG Cheng QIAN Long-hua ZHOU Guo-dong
    2012, 32(08):  2238-2244.  DOI: 10.3724/SP.J.1087.2012.02238
    A method was proposed to incorporate semantic information from TongYiCi CiLin and HowNet into tree kernel-based Chinese relation extraction; the impact of these two kinds of semantic information on Chinese entity relation extraction was compared and analyzed, and the interplay between lexical semantic information and entity type information was explored. The experimental results show that this method can improve the performance of Chinese relation extraction to some degree, and that TongYiCi CiLin complements the entity type information to a certain extent: whether or not entity type information is involved, its semantic information significantly improves the extraction performance for most relation types. By contrast, some conflicts exist between HowNet and the entity type information, so HowNet improves performance only for several relation types when entity types are provided.
    Text feature selection method based on invasive weed optimization
    LIU Kui ZHOU Zhu-rong
    2012, 32(08):  2245-2249.  DOI: 10.3724/SP.J.1087.2012.02245
    In order to select text features more comprehensively and improve the accuracy of text feature selection, a new text feature selection method based on Invasive Weed Optimization (IWO) was proposed. The biggest advantage of IWO is that offspring individuals are randomly spread around their parents according to a Gaussian distribution whose standard deviation is adjusted dynamically during the evolution process; thus, the algorithm explores new areas aggressively to maintain the diversity of the population in the early and middle iterations, and intensifies selection around the optimal individuals in the final iterations. This mechanism ensures steady convergence to the global optimal solution and improves the accuracy of text feature selection. The experimental results indicate that this method gives low-weight terms a chance to be selected while preserving the selection advantage of high-weight terms, thereby enhancing the completeness and accuracy of text feature selection.
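    The IWO dispersal mechanism described above can be sketched directly: offspring are Gaussian-scattered around a parent, and the standard deviation shrinks nonlinearly over the run. The bounds `sigma_init`/`sigma_final` and the modulation index `n=3` are conventional IWO defaults assumed here, not values taken from the paper.

```python
import random

def iwo_sigma(t, T, sigma_init=3.0, sigma_final=0.01, n=3):
    """IWO standard-deviation schedule: sigma decays nonlinearly from
    sigma_init to sigma_final over T iterations (n is the nonlinear
    modulation index; 3 is the conventional choice)."""
    return sigma_final + (sigma_init - sigma_final) * ((T - t) / T) ** n

def spread_offspring(parent, t, T, count, rng):
    """Offspring scattered around the parent by a Gaussian whose
    spread shrinks as the run progresses."""
    s = iwo_sigma(t, T)
    return [parent + rng.gauss(0.0, s) for _ in range(count)]

rng = random.Random(42)
early = spread_offspring(0.0, t=1, T=100, count=200, rng=rng)
late = spread_offspring(0.0, t=99, T=100, count=200, rng=rng)
spread_early = max(abs(v) for v in early)   # wide exploration
spread_late = max(abs(v) for v in late)     # tight exploitation
```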
    Unsupervised feature selection method based on latent Dirichlet allocation model and mutual information
    DONG Yuan-yuan CHEN Ji-li TANG Xiao-xia
    2012, 32(08):  2250-2257.  DOI: 10.3724/SP.J.1087.2012.02250
    To overcome the lack of category information and the tendency to select low-frequency words in feature selection based on Mutual Information (MI), a method named LDA-σ was presented. Firstly, latent topics were extracted by the Latent Dirichlet Allocation (LDA) model, and then the standard deviation of the word-topic MI was calculated as the feature evaluation function. When conducting feature selection and categorization on Reuters-21578, the micro-averaged F1 of LDA-σ reached 0.9096, and its highest macro-averaged F1 was 0.7823, both higher than those of the other algorithms. The experimental results indicate that LDA-σ can be applied to feature selection in text sets.
    Dimensionality reduction method with data separability based on adaptive neighborhood selection
    LI Dong-rui XU Tong-de
    2012, 32(08):  2253-2257.  DOI: 10.3724/SP.J.1087.2012.02253
    Existing dimensionality reduction methods based on manifold learning are sensitive to the selection of local neighborhoods, and the reduced data do not have good separability. This paper proposed a dimensionality reduction method with data separability based on adaptive neighborhood selection, which adaptively selects the neighborhood at each sample point based on the estimated intrinsic dimensionality of the data and the local tangent orientation. Meanwhile, it clusters similar sample points by using clustering information when mapping the data, which guarantees good separability of the reduced data and achieves better dimensionality reduction results. The experimental results show that the new method derives better embedding results on artificially generated data sets; in addition, it achieves the expected results on face visualization classification and image retrieval.
    Information security
    Identity-based cluster key agreement scheme in Ad Hoc network
    LIU Xue-yan ZHANG Qiang WANG Cai-fen
    2012, 32(08):  2258-2327.  DOI: 10.3724/SP.J.1087.2012.02258
    In view of the limited energy and dynamic topology of Ad Hoc networks, an identity-based group key agreement scheme was presented. The topology was structured into clusters, allowing the synchronous execution of multi-party key agreement protocols based on pairings. The number of cluster members did not affect the key agreement, and no interaction was required during the agreement. The scheme provided authentication and supported dynamic membership. In addition, it was proved semantically secure under the Decisional Bilinear Diffie-Hellman (DBDH) assumption. Finally, compared with previous schemes, the proposed scheme has advantages in terms of negotiation rounds and authentication.
    New image group encryption algorithm based on high dimensional hyperchaos system and matrix tensor product
    TANG Song XU Gui-lan LI Qing-du
    2012, 32(08):  2262-2264.  DOI: 10.3724/SP.J.1087.2012.02262
    The shortcomings of current chaos-based image encryption schemes can mainly be classified into three types: firstly, low-dimensional chaotic sequences cause degradation of chaos; secondly, the structure of the chaotic system adopted is too simple; thirdly, the algorithm is related only to the structure of the chaotic system and the key. Concerning these shortcomings, this paper presented a new image group encryption algorithm. In order to overcome the degradation of chaos, the algorithm generated the pixel diffusion matrix by coupling, by means of the matrix tensor product, chaotic sequences of a high-dimensional hyperchaotic system with chaotic sequences of a one-dimensional chaotic map. In this process, the generation of the pixel diffusion matrix was controlled by the plaintext, so the algorithm was related to the plaintext, which enhanced its security.
    Semi-fragile watermark algorithm based on second generation Bandelet and slant transform
    WANG Shu ZHANG Min-qing SHEN Jun-wei XIAO Hai-yan
    2012, 32(08):  2265-2287.  DOI: 10.3724/SP.J.1087.2012.02265
    Concerning general image operations and attack methods, the paper proposed a new semi-fragile watermarking algorithm. In this algorithm, the recovery bits generated from the compressed original image were embedded into the Least Significant Bits (LSB) of the watermarked image; then the switch coefficients, coded by Turbo code after the second generation Bandelet transform, were embedded into mid-frequency blocked regions after the slant transform. Authentication can be realized and tampering located by comparing the error codes generated by the Turbo code with the coefficients of the Bandelet transform. The experimental results show that the algorithm is sufficiently robust to general image operations, can accurately detect and locate maliciously tampered regions, and recovers lost image content well.
    Database watermarking algorithm based on parity modulation in two-dimension space
    MA Rui-min CHEN Ji-hong
    2012, 32(08):  2268-2270.  DOI: 10.3724/SP.J.1087.2012.02268
    To decrease the modification of the original data during database watermarking and make the watermark information more covert, an algorithm based on parity modulation in a two-dimensional space was proposed. One dimension consisted of the primary keys' Hash values, the other of the numeric attributes' redundant bits. Watermark information was embedded by modulating the parity of the Hash values and the redundant bits. After watermarking, the modification of the data mean was 0.5296×10^(-2)% and the modification of the data variance was 0.6509×10^(-4)%. The experimental results indicate that the modification of the original data is reduced and that the algorithm is efficient and robust.
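    The parity-modulation idea can be sketched for a single numeric attribute: a watermark bit is encoded in the parity of the value combined with a key-dependent hash of the primary key, so embedding changes the stored value by at most 1 in its least significant position. The helper names and the +1 adjustment rule are illustrative simplifications of the paper's two-dimensional scheme.

```python
def embed_bit(value, bit, key_hash):
    """Force the parity of (value + key_hash) to equal the watermark
    bit; if the parity already matches, the value is left untouched."""
    if (value + key_hash) % 2 == bit:
        return value
    return value + 1           # minimal change to flip the parity

def extract_bit(value, key_hash):
    """Recover the embedded bit from the stored value."""
    return (value + key_hash) % 2

marked0 = embed_bit(1234, 0, key_hash=7)   # parity must be flipped
marked1 = embed_bit(1234, 1, key_hash=7)   # parity already matches
```

    Because at most half of the tuples need any change, and each change is one unit in a redundant bit, the distortion to the data mean and variance stays tiny, which is the property the reported figures reflect.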
    Remote attestation mechanism of platform configuration based on dynamic Huffman tree
    FU Dong-lai PENG Xin-guang CHEN Gou-xi YANG Qiu-xiang
    2012, 32(08):  2275-2282.  DOI: 10.3724/SP.J.1087.2012.02275
    In order to improve the efficiency of remote attestation of platform configuration, a method based on RAMT (Remote Attestation based on Merkle Hash Tree) was proposed, which improved the storage of the hash values of trusted entities by using a dynamic Huffman tree; the relevant proof of verification efficiency was also given. The problems of existing methods were analyzed from the point of view of the data structures used to store the integrity hash values of application software, and a detailed description of the architecture, measurement and verification of RADHT (Remote Attestation based on a Dynamic Huffman Tree) was given, together with an example integrity measurement algorithm for the proposed mechanism. The privacy protection ability and the efficiency of RADHT were discussed; compared with RAMT, the query probability of an integrity hash value and its updates were taken into account. The results show that the efficiency of remote attestation is improved.
    Security analysis of a RFID authentication protocol based on physically unclonable function
    ZHANG Long-xiang
    2012, 32(08):  2280-2282.  DOI: 10.3724/SP.J.1087.2012.02280
    Radio Frequency IDentification (RFID) authentication protocols based on Physically Unclonable Functions (PUF) have been a hot research topic in recent years. In 2011, Bassil et al. proposed a new PUF-based RFID authentication protocol at the International Conference on Internet Technology and Secured Transactions (BASSIL R, EL-BEAINO W, KAYSSI A, et al. A PUF-based ultra-lightweight mutual-authentication RFID protocol [C]// 2011 International Conference on Internet Technology and Secured Transactions. Piscataway: IEEE, 2011: 495-499). This paper analyzed the security of that protocol against a hypothetical adversary and found that it cannot resist secret disclosure attack, traceability attack, reader impersonation attack or desynchronization attack. The paper described the details of these attacks and computed their success probabilities and computational complexities.
    Graphics and image technology
    Image quality assessment based on phase congruency structural similarity
    SU Yuan-yuan SANG Qing-bing
    2012, 32(08):  2283-2287.  DOI: 10.3724/SP.J.1087.2012.02283
    Concerning the poor quality evaluation of Structural SIMilarity (SSIM) for blurred and highly textured images, this paper proposed a new full-reference image quality assessment algorithm based on phase congruency structural similarity. The new method kept the luminance and contrast functions of SSIM while replacing its structural function with a phase congruency function; the luminance, contrast and phase congruency information were then combined to evaluate image quality. The linear correlation coefficient and the Spearman rank correlation coefficient between the results of the proposed method on the LIVE database and the subjective quality measurements were 0.9501 and 0.9362 respectively. The experimental results demonstrate that phase congruency captures more of the structural information of an image, and that the proposed method performs better and provides more accurate assessments of blurred and highly textured images.
    Visual saliency detection algorithm based on Bayes theorem and statistical learning
    DAI Hua WANG Jian-ping
    2012, 32(08):  2288-2290.  DOI: 10.3724/SP.J.1087.2012.02288
    Image processing technology depends on the quality of the visual saliency map to obtain good results, but existing visual saliency detection methods usually produce only rough saliency maps, seriously affecting the final image processing results. This paper put forward a visual saliency detection method using Bayes' theorem and statistical learning. The method was based on a Bayesian formulation of top-down saliency and overall saliency for static images, and combined top-down knowledge with bottom-up saliency. For feature integration, the weight parameters of all related factors were learned both with a linear model using linear weighted combination and with a regularized neural network using nonlinear weighted combination. The ROC curves of the bottom-up visual saliency model on two fixation data sets, used for quantitative evaluation, show that the nonlinear combination performs better than the linear combination.
    Improved Dcut and its application in image segmentation
    ZOU Xiao-lin
    2012, 32(08):  2291-2298.  DOI: 10.3724/SP.J.1087.2012.02291
    Spectral clustering algorithms can cluster samples in any form of feature space and have globally optimal solutions. However, the Discriminant cut (Dcut) algorithm is time-consuming when calculating the regularized similarity matrix and its eigenvectors, and the subspace-based Dcut (SDcut) algorithm is unstable. Concerning these problems, the paper proposed an improved Dcut algorithm based on Principal Component Analysis (PCA), named PCA-Dcut. PCA-Dcut constructs a new matrix from the m eigenvectors corresponding to the m largest eigenvalues of the similarity matrix, uses matrix computations between this constructed matrix and the similarity and Laplacian matrices respectively to obtain an m-order regularized similarity matrix, calculates its eigenvectors, and then multiplies the constructed matrix by these eigenvectors to obtain the final feature vectors for classification. PCA-Dcut thus reduces the computational complexity of Dcut. Experiments on artificial data sets, UCI data sets and real images show that PCA-Dcut is comparable in accuracy with other spectral clustering algorithms such as Dcut, runs 5.4 times as fast as Dcut, and is faster and performs better than SDcut.
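    The first step of PCA-Dcut, building a matrix from the m leading eigenvectors of the similarity matrix and projecting down to an m-order matrix, can be sketched as follows. The subsequent regularization with the Laplacian matrix is omitted, and `W` is a toy similarity matrix, so this only illustrates the dimensionality-reduction idea.

```python
import numpy as np

def leading_eigvecs(W, m):
    """Matrix built from the m eigenvectors of the symmetric similarity
    matrix W that correspond to its m largest eigenvalues."""
    vals, vecs = np.linalg.eigh(W)            # eigenvalues in ascending order
    return vecs[:, np.argsort(vals)[::-1][:m]]

W = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
U = leading_eigvecs(W, 2)
Wm = U.T @ W @ U          # m-order matrix used in place of the full W
```

    Since the columns of `U` are orthonormal eigenvectors of `W`, the projected matrix `Wm` is diagonal with the leading eigenvalues on its diagonal, which is why eigen-decomposing the m-order matrix is cheap.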
    Improved Canny edge detection method based on self-adaptive threshold
    ZHANG Fan PENG Zhong-wei MENG Shui-jin
    2012, 32(08):  2296-2298.  DOI: 10.3724/SP.J.1087.2012.02296
    The traditional Canny operator uses a global threshold, but when the gray levels of the background and foreground of the input image vary greatly, the global threshold method loses some weak edges. Concerning this issue, an improved adaptive Canny operator was put forward. Firstly, the image was divided into blocks according to the gradient variance. Secondly, the threshold of each sub-block was obtained by the Otsu method. Then, the threshold matrix was obtained by interpolation. Finally, an improved edge connection algorithm was proposed to extract the edges with the threshold matrix. The experimental results show that the improved Canny algorithm not only resists noise well but also achieves very good precision.
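    The block-wise thresholding step can be sketched with a from-scratch Otsu applied per block. The gradient-variance partitioning and the interpolation of the threshold matrix are simplified here (fixed 8×8 blocks, nearest-neighbour fill), so this is an illustration of the idea rather than the paper's algorithm.

```python
import numpy as np

def otsu(block, bins=256):
    """Otsu's threshold: the gray level maximizing the between-class
    variance of the block's histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0, bins))
    p = hist / max(hist.sum(), 1)
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(bins))        # class-0 mean mass
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def threshold_matrix(img, block=8):
    """One Otsu threshold per block, expanded back to full image size
    (the paper interpolates between blocks instead)."""
    t = np.zeros(img.shape, dtype=float)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            t[i:i + block, j:j + block] = otsu(img[i:i + block, j:j + block])
    return t

img = np.zeros((16, 16), dtype=np.uint8)
img[:, 8:] = 200                 # bright right half
img[2:4, 10:12] = 100            # detail inside the bright half
tmat = threshold_matrix(img, block=8)
```

    Each region gets a threshold suited to its own contrast, which is exactly how weak edges in one part of the image survive a strong foreground elsewhere.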
    Digit recognition based on distance distribution histogram
    WU Shao-hong WANG Yun-kuan SUN Tao LI Bing
    2012, 32(08):  2299-2304.  DOI: 10.3724/SP.J.1087.2012.02299
    Due to the variability of unconstrained or handwritten digits, most algorithms in previous studies either sacrificed easy implementation for high accuracy, or vice versa. This paper proposed a new feature descriptor named Distance Distribution Histogram (DDH) and an adapted Shape Accumulate Histogram (SAH) descriptor based on shape context, which are not only easy to implement but also robust to noise and distortion. To make the hybrid features more comprehensive, some other adapted topological features were combined. The aggregated features are complementary, as they were formed from different original feature sets extracted by different means, and they are not complicated. Meanwhile, three Support Vector Machines (SVM) with different feature vectors were used as classifiers and their results were integrated to obtain the final classification. The average accuracy over several experiments on self-established data sets, MNIST and USPS is as high as 99.21%, which demonstrates that the proposed algorithm is robust and effective.
    Video image character recognition based on stroke-related weight
    SU Chang HU Xiao-dong WANG Bin-fu SHANG Feng-jun
    2012, 32(08):  2305-2312.  DOI: 10.3724/SP.J.1087.2012.02305
    In order to extract subtitles from video images, a robust method was proposed. First, image edge features were adopted in the caption location step, and a binarization method for text images using the edge information was given. Then, a method combining projection with region generation was used to locate each character. Finally, taking full account of the topology of text strokes, the stroke correlation among adjacent sub-grids was determined, and stroke fuzzy membership was used to complete the elastic grid feature extraction. This method can effectively obtain the binary image of characters from a complex background, and ensures stability and robustness in feature extraction. The experimental results show the method is effective, with a recognition rate of up to 92.1%.
    Uighur characters recognition based on locality preserving projection and hidden Markov model
    LIU Wei LI He-cheng
    2012, 32(08):  2309-2312.  DOI: 10.3724/SP.J.1087.2012.02309
    Abstract ( )   PDF (645KB) ( )  
    References | Related Articles | Metrics
    Concerning the shortcomings of the classical Hidden Markov Model (HMM) in handwritten Uighur character recognition, such as the largely varying width of characters, slow convergence and premature convergence, a new Uighur character recognition algorithm combining Locality Preserving Projection (LPP) and HMM was proposed. Firstly, the aspect ratio of the original image was maintained by a height-normalization method. Sub-images were obtained using a sliding window, and observation sequences were extracted from these windows. Secondly, the observation sequences were mapped into a low-dimensional space based on LPP, and the scale of the adjacency matrix was reduced via a random sampling technique. Finally, the HMM was trained with the obtained observation sequences. The algorithm decreases the dimension of the observation vectors, accelerates convergence, and effectively prevents premature convergence. The simulation results show the LPP-HMM algorithm is efficient and robust, decreasing the average number of convergence steps as well as the error rate.
    Dense noise face recognition based on sparse representation and algorithm optimization
    CAI Ti-jian FAN Xiao-ping LIU Jun-xiong
    2012, 32(08):  2313-2319.  DOI: 10.3724/SP.J.1087.2012.02313
    Abstract ( )   PDF (611KB) ( )  
    References | Related Articles | Metrics
    To improve the speed and noise robustness of face recognition based on sparse representation, the Cross-And-Bouquet (CAB) model and Compressed Sensing (CS) reconstruction algorithms were studied. Concerning the large matrix inversion in the reconstruction algorithm, a Fast Orthogonal Matching Pursuit (FOMP) algorithm was proposed, which converts the high-complexity matrix inversion into lightweight vector-matrix computations. To increase the amount of effective information in densely corrupted images, several practical and efficient methods were put forward. The experimental results verify that these methods can effectively improve the face recognition rate under dense noise, with an identifiable noise ratio of up to 75%. These methods are of practical value.
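The paper's FOMP reorganizes the computation to avoid large matrix inversions; the sketch below is only the standard Orthogonal Matching Pursuit baseline it accelerates (using NumPy's `lstsq` for the per-step least-squares solve), shown to illustrate the greedy structure being optimized:

```python
import numpy as np

def orthogonal_matching_pursuit(A, y, k):
    """Greedy recovery of a k-sparse x with y ~= A @ x.

    Each iteration adds the column of A most correlated with the current
    residual, then re-fits all selected columns by least squares, so the
    residual stays orthogonal to the chosen subspace.
    """
    y = np.asarray(y, dtype=float)
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Atom most correlated with what is still unexplained.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

In the CAB setting, roughly, the columns of A are the training face images plus an identity block modelling per-pixel corruption, so the recovered sparse coefficients indicate both the subject and the noise support.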
    Face recognition algorithm based on multi-level texture spectrum features and PCA
    DANG Xin-peng LIU Wen-ping
    2012, 32(08):  2316-2319.  DOI: 10.3724/SP.J.1087.2012.02316
    Abstract ( )   PDF (603KB) ( )  
    References | Related Articles | Metrics
    To improve the recognition rate of Principal Component Analysis (PCA) algorithm in face recognition, a new algorithm combining the image texture spectrum feature with PCA was proposed. Firstly, the texture unit operator was used to extract the texture spectrum feature of the face image. Secondly, PCA approach was used to reduce the dimensions of the texture spectrum feature. Finally, K-Nearest Neighbor (KNN) classification was chosen to recognize the face. ORL and Yale face database were used to test the proposed algorithm, and the recognition accuracies were 96.5% and 95% respectively, which were higher than those of PCA and Modular Two-Dimensional PCA (M2DPCA). The experimental results demonstrate the efficiency and accuracy of the proposed algorithm.
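The last two stages of this pipeline (dimensionality reduction, then nearest-neighbour classification) can be sketched with plain NumPy. This is a generic PCA + KNN skeleton under assumed function names, with the texture-spectrum feature extraction step omitted:

```python
import numpy as np

def pca_fit(X, n_components):
    """Learn a PCA projection: returns the data mean and the top
    principal directions (rows of Vt from the SVD of centred data)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project (centred) samples onto the learned principal axes."""
    return (X - mean) @ components.T

def knn_predict(train_Z, train_y, z, k=1):
    """Classify z by majority vote among its k nearest training points."""
    dists = np.linalg.norm(train_Z - z, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_y)[nearest],
                               return_counts=True)
    return labels[np.argmax(counts)]
```

In the paper's scheme the rows of `X` would be texture-spectrum feature vectors of training faces; test images are transformed with the same `mean` and `components` before the KNN vote.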
    Image segmentation method for bullet's primer surface defect
    SHI Jin-wei GUO Chao-yong LIU Hong-ning
    2012, 32(08):  2320-2323.  DOI: 10.3724/SP.J.1087.2012.02320
    Abstract ( )   PDF (675KB) ( )  
    References | Related Articles | Metrics
    Inspection of the bullet primer is the most important step in controlling the quality of bullet products. In order to segment the image of primer surface defects accurately, a new image segmentation method was proposed. According to the inspection requirements and the properties of the cartridge bottom, the region of the primer image to be detected was first located approximately, and the LoG operator was applied to extract the circular edge of the primer. After analyzing the advantages and disadvantages of both the Hough transform and the least square method, a new circle detection algorithm combining an improved Hough transform with the least square method was proposed, by which the circle center and radius were acquired accurately. Finally, the primer circle image was extracted using the circle parameters, the primer surface defect was segmented by thresholding, and the segmentation results were optimized by mathematical morphology. The experimental results show that the proposed method is accurate and robust in segmenting bullet primer surface defects: the average wrong segmentation rate is below 10%, and the average deviation is less than 17 pixels.
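The least-squares half of such a combined circle-detection scheme admits a compact closed form (the Kasa fit): the circle equation is rewritten as a model that is linear in the unknowns, so no iteration is needed. A minimal sketch, with illustrative names:

```python
import numpy as np

def fit_circle_lstsq(xs, ys):
    """Least-squares circle fit (Kasa method).

    The circle equation (x-cx)^2 + (y-cy)^2 = r^2 is rewritten as the
    linear model 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2,
    which a single linear least-squares solve handles.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    M = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```

In a Hough-plus-least-squares pipeline, the (coarse) Hough transform supplies a robust initial circle and its edge inliers, and a fit like this one refines the centre and radius on those inliers, since the plain least-squares fit alone is sensitive to outlier edge points.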
    Side information interpolation algorithm based on spatio-temporal correlations at decoder
    WANG Feng-qin CHEN Xiao-lei CHEN Yan
    2012, 32(08):  2324-2327.  DOI: 10.3724/SP.J.1087.2012.02324
    Abstract ( )   PDF (686KB) ( )  
    References | Related Articles | Metrics
    In Wyner-Ziv video coding, the coding efficiency depends mainly on the quality of the side information. However, poor motion vectors obtained from motion estimation at the decoder degrade the quality of the side information. To improve the performance of Wyner-Ziv video coding, a side information interpolation algorithm was proposed, which applied multiple macroblock partition modes to bi-directional motion estimation. The Sum of Bilateral Absolute Difference (SBAD) was utilized to measure the temporal continuity of a motion vector, the Boundary Absolute Difference (BAD) was used to measure its spatial continuity, and a spatio-temporal matching criterion was applied to find the optimal motion vector. The simulation results show that the proposed algorithm can achieve up to 1.41dB improvement in the Peak Signal-to-Noise Ratio (PSNR) of the side information while reducing the coding bit rate.
    Typical applications
    Logic Petri net based model for Web service cluster
    DENG Shi-yang DU Yu-yue
    2012, 32(08):  2328-2337.  DOI: 10.3724/SP.J.1087.2012.02328
    Abstract ( )   PDF (1048KB) ( )  
    References | Related Articles | Metrics
    In clustering-based Web service discovery, the service cluster is characterized by indeterminacy because parameters are uneven in name, quantity and order, which results in a great deal of parameter matching work. For this reason, a logic Petri net based model for Web service clusters was proposed. It builds a mapping between clusters and services by encoding each service's parameter set as a logic vector over the cluster's parameter set, giving unified management of service parameters. Therefore, parameter matching based on semantic similarity needs to be performed only at the cluster layer; within a service cluster, a parameter can be located directly by its position vector, and parameter matching can be realized by logical comparison. The matching magnitude and computational complexity are reduced enormously, and service discovery efficiency is improved.
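The logic-vector idea can be illustrated in a few lines: once a cluster fixes the order of its unified parameter list, each service's parameters become a bit vector, and matching inside the cluster reduces to element-wise comparison rather than per-service semantic matching (names below are illustrative, not the paper's notation):

```python
def to_logic_vector(cluster_params, service_params):
    """Encode a service's parameters as a bit vector over the cluster's
    unified, ordered parameter list."""
    return tuple(1 if p in service_params else 0 for p in cluster_params)

def covers(service_vec, request_vec):
    """True if the service provides every parameter the request needs;
    a pure logical comparison, no semantic matching required."""
    return all(s >= r for s, r in zip(service_vec, request_vec))

# Cluster layer fixes the vocabulary once (via semantic matching);
# afterwards each service/request is just a bit pattern.
cluster = ["city", "date", "price"]
svc = to_logic_vector(cluster, {"city", "date", "price"})
req = to_logic_vector(cluster, {"city", "price"})
```

Semantic similarity computation is thus paid once per cluster vocabulary, while per-service checks are constant-time bit operations.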
    Functional description extraction of code based on formal semantics of Clight
    WANG Tao CHEN Min-yi QI Jun
    2012, 32(08):  2333-2337.  DOI: 10.3724/SP.J.1087.2012.02333
    Abstract ( )   PDF (767KB) ( )  
    References | Related Articles | Metrics
    Code function description extraction is the basic premise of functional integration. To address the commonly low accuracy of code function extraction, a mechanism for extracting code function descriptions based on the formal semantics of Clight was proposed and realized by a Clight code function description algorithm. The mechanism is strictly based on the natural semantics inference rules of Clight; it ignores the intermediate details of code execution and is concerned only with the memory states before and after execution. Using these states as the functional description of the code, it improves the accuracy of function extraction and the success rate in key areas of software development.
    Static BPEL program slicing technique based on BPEL program dependence graphs
    WANG Hong-da XING Jian-chun SONG Wei YANG Qi-liang
    2012, 32(08):  2338-2341.  DOI: 10.3724/SP.J.1087.2012.02338
    Abstract ( )   PDF (590KB) ( )  
    References | Related Articles | Metrics
    The slices of BPEL programs obtained by traditional program slicing techniques are not complete. Therefore, a static BPEL program slicing technique based on BPEL program dependence graphs was proposed. This technique computes slices on BPEL program dependence graphs, which are created according to the characteristics of BPEL. The results of case analysis prove that the technique can obtain more complete slices, thus giving software engineers more help in testing, debugging and maintaining BPEL programs.
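Once a program dependence graph is built, computing a static backward slice is transitive reachability over the dependence edges. A minimal sketch with an adjacency-map representation (the node names are illustrative BPEL activities, not the paper's case study):

```python
def backward_slice(dependences, criterion):
    """Static backward slice: every node the slicing criterion
    transitively depends on.

    `dependences` maps each node to the nodes it depends on (the data
    and control edges of the dependence graph, direction: dependent ->
    depended-upon).
    """
    slice_set, stack = {criterion}, [criterion]
    while stack:
        node = stack.pop()
        for dep in dependences.get(node, ()):
            if dep not in slice_set:
                slice_set.add(dep)
                stack.append(dep)
    return slice_set

# Toy dependence graph over BPEL-like activities.
deps = {
    "reply": ["assign"],    # reply uses the value assign produced
    "assign": ["receive"],  # assign uses the received message
    "log": ["receive"],     # unrelated to the reply
}
s = backward_slice(deps, "reply")
```

Here the slice for `reply` pulls in `assign` and `receive` but correctly excludes `log`; the BPEL-specific work in the paper lies in building edges for constructs such as `flow`, links, and fault handlers that conventional dependence graphs miss.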
    Distributed business deployment platform based on service-oriented architecture
    DUAN Han-cong LI Tong-xing LI Lin XING Jian-chuan
    2012, 32(08):  2342-2345.  DOI: 10.3724/SP.J.1087.2012.02342
    Abstract ( )   PDF (679KB) ( )  
    References | Related Articles | Metrics
    Low resource utilization and the lack of reliability and scalability have always been problems when business systems run in distributed environments. Concerning these problems, this paper designed a distributed business deployment platform based on Service-Oriented Architecture (SOA). Distributed business systems can be automatically deployed on the platform, and the resources they need are dynamically assigned according to the state of each service component, which implements dynamic extension and withdrawal of business capacity. At the same time, dual-machine hot standby, high-availability clustering and a business migration mechanism were applied to guarantee high reliability. The simulation results indicate that the platform achieves high resource utilization while the Quality of Service (QoS) is ensured.
    BTopicMiner: domain-specific topic mining system for Chinese microblog
    LI Jin ZHANG Hua WU Hao-xiong XIANG Jun
    2012, 32(08):  2346-2349. 
    Abstract ( )   PDF (725KB) ( )  
    References | Related Articles | Metrics
    As microblog applications grow rapidly, how to automatically extract popular topics of interest to users from massive microblog information has become a challenging research area. This paper studied and proposed a topic extraction algorithm for Chinese microblogs based on an extended topic model. In order to deal with the data sparsity of microblogs, content-related microblog texts are first clustered to generate synthetic documents. Based on the assumption that the posting relationship among microblogs implies topical correlation, the traditional Latent Dirichlet Allocation (LDA) topic model was extended to model this posting relationship. Finally, after extracting topics with the extended LDA model, Mutual Information (MI) measurement was utilized to calculate the topic vocabulary for topic recommendation. Furthermore, a prototype system for domain-specific topic mining, named BTopicMiner, was implemented to verify the effectiveness of the proposed algorithm. The experimental results show that the proposed algorithm can extract topics from microblogs more accurately; meanwhile, the semantic similarity between the automatically calculated topic vocabulary and the manually selected topic vocabulary exceeds 75%.
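MI-based vocabulary selection scores each candidate word by how strongly it co-occurs with a topic relative to chance. A minimal pointwise-MI sketch from raw counts (the exact estimator and smoothing used in the paper are not specified in this abstract):

```python
import math

def mutual_information(n_wt, n_w, n_t, n_total):
    """Pointwise mutual information between a word and a topic.

    n_wt:    documents containing the word AND assigned to the topic
    n_w:     documents containing the word
    n_t:     documents assigned to the topic
    n_total: all documents

    Returns log( P(w, t) / (P(w) * P(t)) ): zero when the word and the
    topic are independent, positive when they co-occur more than chance.
    """
    p_wt = n_wt / n_total
    p_w, p_t = n_w / n_total, n_t / n_total
    return math.log(p_wt / (p_w * p_t))
```

Ranking a topic's candidate words by this score and keeping the top ones yields a topic vocabulary dominated by words that genuinely characterize the topic rather than merely being frequent overall.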
    Multi-context trust and reputation evaluation system for server selection in online transaction
    LIU Bin ZHANG Ren-jin
    2012, 32(08):  2350-2359.  DOI: 10.3724/SP.J.1087.2012.02350
    Abstract ( )   PDF (1229KB) ( )  
    References | Related Articles | Metrics
    In systems which select servers according to reputation and trust values, the common problems are that few factors are considered, the factors are limited to the system itself, and the methods lack flexibility, so the selected servers cannot meet users' diverse requirements. In order to resolve these problems, a multi-context reputation and trust evaluation system for selecting servers in online transactions was proposed. A server acquires its bootstrapping reputation and trust value through its registered quality attributes and guarantee fund, and gains its experimental reputation and trust value from its trading experience within the internal system and in external systems. The actual trust value is a dynamic linear combination of both values; as the number of transactions increases, the weight of the latter grows dynamically. A server is selected according to the context quality attributes set by the user and the actual trust and reputation value. This selection method was tested and contrasted with other trust and reputation systems. The experimental results indicate that the method can easily meet the requirements of all kinds of users; it affords new servers as well as established ones a competitive environment, and also reduces the possibility that a user selects a malicious server.
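The dynamic linear combination of bootstrapping and experimental trust can be sketched as follows. The weighting schedule (the `half_life` parameter) is an illustrative assumption; the abstract states only that the weight of experience grows with the number of transactions:

```python
def trust_value(bootstrap, experience, n_transactions, half_life=10):
    """Dynamic linear combination of bootstrapping and experimental trust.

    The experience weight w grows from 0 toward 1 with the transaction
    count; `half_life` is the count at which both terms contribute
    equally (hypothetical schedule, for illustration only).
    """
    w = n_transactions / (n_transactions + half_life)
    return (1 - w) * bootstrap + w * experience
```

A newly registered server is judged entirely by its bootstrapping value (registered quality attributes and guarantee fund); as its trading history accumulates, observed behaviour progressively dominates the score.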
    Two-stage optimal ordering policy of joint inventory with backlogging for seasonal commodity
    CHEN Mang GONG Cun-yu
    2012, 32(08):  2356-2359.  DOI: 10.3724/SP.J.1087.2012.02356
    Abstract ( )   PDF (614KB) ( )  
    References | Related Articles | Metrics
    This paper proposed a two-stage inventory model for seasonal commodity, in which the market and manufacturing channels were combined. This model can be used to solve the production policy and the order policies of the raw materials for the manufacturer. By assuming that the retailers' demand obeys normal distribution and that the retailer makes orders according to the newsboy model, the necessary and sufficient conditions were given for the optimal solution of production size, wholesale price, and replenishment cycle of raw materials for the manufacturer. Finally, the necessary condition was explored in order to obtain managerial insights and economic implications based on numerical examples and sensitivity analysis.
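The retailer side of this model is the classic newsboy (newsvendor) problem. For a discrete demand distribution the optimal order quantity can be found by direct expected-profit maximization; a minimal sketch with illustrative parameter names:

```python
def newsvendor_order(demand_pmf, price, cost, salvage=0.0):
    """Order quantity maximizing expected newsboy profit.

    `demand_pmf` maps each possible demand level to its probability;
    units bought at `cost` sell at `price`, and unsold units are
    recovered at `salvage` per unit.
    """
    def expected_profit(q):
        ep = 0.0
        for d, p in demand_pmf.items():
            sold = min(d, q)
            ep += p * (price * sold + salvage * (q - sold) - cost * q)
        return ep
    # Demand is bounded, so scanning all candidate quantities suffices.
    return max(range(max(demand_pmf) + 1), key=expected_profit)

# Uniform demand over {0,...,4}: critical ratio (10-3)/10 = 0.7,
# so the optimal order is the 0.7-quantile of demand.
q_star = newsvendor_order({d: 0.2 for d in range(5)}, price=10, cost=3)
```

For continuous demand this reduces to the critical-fractile rule F(q*) = (price - cost) / (price - salvage); with the normally distributed demand assumed in the paper, q* = mu + sigma * z for the corresponding standard-normal quantile z.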
    Visual representation model and automatic keywords extraction algorithm for hub Web pages
    2012, 32(08):  2360-2368.  DOI: 10.3724/SP.J.1087.2012.02360
    Abstract ( )   PDF (845KB) ( )  
    References | Related Articles | Metrics
    It is very hard to exactly extract keywords from hub Web pages because of their topic noise. To resolve this problem, a new hub Web page representation model and its automatic keyword extraction algorithm were proposed in this paper. First, the model segments a Web page into blocks using a block composition algorithm. Second, following the way humans visually recognize pages, the model computes a visual measurement for each block; a transmission rule for visual measurement makes the blocks containing keywords stand out more. The automatic keyword extraction algorithm can then find the keywords exactly in the most salient blocks. The experimental results show that the proposed algorithm improves accuracy by 20.9% on average compared with the keyword extraction algorithm based on the DocView model.
    Prediction on dispatching number of equipment maintenance people based on main factor method
    SHAN Li-li ZHANG Hong-jun ZHANG Rui CHENG Kai WANG Zhi-teng
    2012, 32(08):  2364-2368.  DOI: 10.3724/SP.J.1087.2012.02364
    Abstract ( )   PDF (778KB) ( )  
    References | Related Articles | Metrics
    In order to forecast the number of equipment maintenance personnel more easily and validly, a common approach for selecting the features of the input vector in Support Vector Machine (SVM), named the Main Factor Method (MFM), was proposed. The related terms "main factor", "driving factor", "voluntary action" and "action carrier" were defined, on which the theoretical MFM was constructed. Firstly, the main factor of the predicting vector's voluntary actions was set up by the "infinitely related principle" and the "action purpose" method. Then the driving factors, which can be regarded as the features of the SVM input vector, were refined through the selected main factor and the "selection principles of driving factors". The experimental results and a comparison with other congeneric methods show that the proposed method yields more accurate predictions, with an average relative error of 0.0109.
    Design and implementation of ultra high frequency RFID reader system
    XIA Hong WU Ji-wen
    2012, 32(08):  2369-2373.  DOI: 10.3724/SP.J.1087.2012.02369
    Abstract ( )   PDF (798KB) ( )  
    References | Related Articles | Metrics
    This paper introduced the design and implementation of an Ultra High Frequency (UHF) Radio Frequency Identification (RFID) reader system, which takes an ARM9 microprocessor as the controller and uses the austriamicrosystems UHF RFID reader IC AS3992 for RF processing. With the design of the external power amplifier circuit, the external RF power detection circuit and the digital tuning circuit for reader antenna impedance matching, the transmitter output power reaches +33dBm and the receiver Signal-to-Noise Ratio (SNR) is improved. The reader system achieves high-speed identification of ISO/IEC 18000-6C 900MHz RFID tags. Besides, an embedded Linux operating system was ported to the ARM9 hardware, and an embedded Web firmware control system was developed to realize tag operations and reader network configuration, which also provides a platform for data communication between readers and for secondary development. The UHF RFID reader system has been applied effectively in the RFID vehicle management system of a power plant; the operation results show that it identifies and processes vehicle tag information at a distance of 10m with stable performance, which satisfies the engineering requirements.
    Design and implementation of oracle-bone-script input system based on directed stroke
    WU Qin-xia LI Qing-sheng
    2012, 32(08):  2374-2377.  DOI: 10.3724/SP.J.1087.2012.02374
    Abstract ( )   PDF (590KB) ( )  
    References | Related Articles | Metrics
    This paper proposed a new method to describe oracle-bone-script based on the combination of directed strokes and stroke sets, aiming at solving the difficulties in inputting, quantifying and standardizing oracle-bone-script. Firstly, the strokes of oracle-bone-script were classified and analyzed; then the composition of the directed stroke was determined according to the strokes, and an algorithm for describing oracle-bone-script by directed strokes was formed. An input platform for oracle-bone-script was constructed. Through the designed human-computer instructions, users can input oracle-bone-script even in variant or unidentified forms. The experimental results prove that this input method can standardize the font, add characters to or remove them from the oracle-bone-script library, and freely modify glyphs.
    Traffic spillover identification based on fuzzy theory
    ZHANG Li-dong JIA Lei ZHU Wen-xing
    2012, 32(08):  2378-2384.  DOI: 10.3724/SP.J.1087.2012.02378
    Abstract ( )   PDF (583KB) ( )  
    References | Related Articles | Metrics
    Traffic spillover is an extreme traffic jam phenomenon which can lead to serious disorder of the traffic system. An identification algorithm must be in place before traffic spillover control can be realized, so a fuzzy identification algorithm based on fuzzy inference theory was presented. The fuzzy inference machine takes the car queue rate and the road average speed as input linguistic variables, and the traffic spillover severity index as the output variable; Mamdani implication is adopted. Based on the definition of the basic and discrete linguistic domains, the fuzzy rule table was built from traffic experts' knowledge. Finally, the simulation results indicate that the identification accuracy reaches 98%, which proves the fuzzy inference method is a good tool for recognizing the traffic spillover state.
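The structure of such a Mamdani-style identifier can be sketched with two toy rules. The membership functions, rule base, and singleton-consequent defuzzification below are illustrative simplifications, not the paper's expert rule table:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def spillover_severity(queue_rate, avg_speed):
    """Toy two-rule Mamdani inference for spillover severity in [0, 1].

    Rule 1: IF queue HIGH AND speed LOW  THEN severity SEVERE (1.0)
    Rule 2: IF queue LOW  AND speed HIGH THEN severity LIGHT  (0.0)
    AND uses min (Mamdani); the output is a weighted average of the
    rule consequents, a common simplification of centroid
    defuzzification for singleton consequents.
    """
    q_high = tri(queue_rate, 0.4, 1.0, 1.6)    # queue rate in [0, 1]
    q_low = tri(queue_rate, -0.6, 0.0, 0.6)
    s_low = tri(avg_speed, -30, 0, 60)         # speed in km/h
    s_high = tri(avg_speed, 20, 80, 140)
    w1 = min(q_high, s_low)                    # severe firing strength
    w2 = min(q_low, s_high)                    # light firing strength
    if w1 + w2 == 0:
        return 0.5                             # no rule fires: undecided
    return (w1 * 1.0 + w2 * 0.0) / (w1 + w2)
```

A deployed identifier would use the full expert rule table over both linguistic domains and proper centroid defuzzification over the output fuzzy sets, but the inference skeleton is the same.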
    Mixed H2/H∞ control of permanent magnet synchronous motor based on particle swarm optimization algorithm
    QIAN Miao-wang
    2012, 32(08):  2381-2384.  DOI: 10.3724/SP.J.1087.2012.02381
    Abstract ( )   PDF (565KB) ( )  
    References | Related Articles | Metrics
    A mixed H2/H∞ robust controller was designed to enhance the performances of Permanent Magnet Synchronous Motor (PMSM) control system. Firstly, the H∞ suboptimal controllers were designed with mixed sensitivity method. Then the controller with the best H2 performance standard was selected with Particle Swarm Optimization (PSO) and the H2/H∞ robust controller was obtained. Finally, the capability of mixed H2/H∞ controller was verified with simulation. The results indicate that the robust controller has better performance compared with traditional Proportional Integral (PI) controller, and the PSO algorithm is proved valid for the design of H2/H∞ controller in addition.
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn