
Table of Contents

    01 August 2011, Volume 31 Issue 08
    Network and communications
    Research survey on physical layer network coding
    Ming-Feng ZHAO Ya-jian ZHOU Quan YUAN Yi-xian YANG
    2011, 31(08):  2015-2020.  DOI: 10.3724/SP.J.1087.2011.02015
    It has been proved that Physical Layer Network Coding (PLNC) can improve system throughput and spectral efficiency by taking advantage of the broadcast nature of electromagnetic waves in wireless cooperative environments. In this paper, the basic idea of PLNC was introduced and its benefit over traditional forwarding and straightforward network coding under the two-way relay scenario was illustrated. Firstly, three types of physical layer network coding, namely Physical Network Coding over Finite Field (PNCF), Analog Network Coding (ANC) and Complex Field Network Coding (CFNC), were presented, the theoretical research progress of the three kinds of PLNC was overviewed, and new theories and technologies related to them were introduced. Secondly, the application and implementation of the ANC scheme in real wireless cooperative environments were overviewed. Finally, the open issues and challenges for PLNC concerning both theory and implementation in the near future were discussed. It is an important trend to improve the theory and implementation of PLNC, to study the security of PLNC, and to combine PLNC with other technologies such as channel coding and modulation, relay selection, effective scheduling and resource allocation.
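    As background for the finite-field variant (PNCF), the sketch below illustrates the canonical two-way relay idea with BPSK: the relay maps the superimposed signal of both end nodes directly to the XOR of their bits instead of decoding each bit separately. This is a minimal illustration of the general PNC mapping, not of the specific schemes surveyed in the paper; the threshold demapper, SNR value and noise model are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, snr_db = 10_000, 10
noise_std = 10 ** (-snr_db / 20)

b1 = rng.integers(0, 2, n_bits)          # bits of end node A
b2 = rng.integers(0, 2, n_bits)          # bits of end node B
s1, s2 = 2 * b1 - 1, 2 * b2 - 1          # BPSK symbols (+1 / -1)

# Multiple-access phase: the relay observes the superimposed signal.
y = s1 + s2 + noise_std * rng.standard_normal(n_bits)

# PNC demapping: |y| near 2 -> bits equal (XOR = 0), |y| near 0 -> bits differ (XOR = 1).
xor_hat = (np.abs(y) < 1).astype(int)

ber = np.mean(xor_hat != (b1 ^ b2))
print(f"relay XOR-mapping error rate: {ber:.4f}")
```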
    Super-peer topology construction based on k-perfect difference graph
    Yi-hong TAN Zhi-ping CHEN Xue-yong LI Ya-ping LIN
    2011, 31(08):  2021-2024. 
    In a super-peer network, the super-peer topology structure and its mechanisms of dynamic maintenance and search routing are important factors affecting network performance and search efficiency. In this paper, a new structure named k-Perfect Difference Graph (k-PDG) was proposed by analyzing the characteristics and deficiencies of the Perfect Difference Graph (PDG), a new super-peer network based on k-PDG (KPDGN) was constructed, and the mechanisms of dynamic maintenance and search routing in KPDGN were presented. The analysis and simulation results show that, compared with current super-peer topologies, KPDGN performs well with constant degree and fixed adjacent nodes, which reduces the bandwidth consumption during searching and the cost of topology construction and maintenance.
    Joint precoding scheme under condition of channel asymmetry
    Li-qing ZHENG Kai-zhi HUANG Yin-hai LI
    2011, 31(08):  2025-2028.  DOI: 10.3724/SP.J.1087.2011.02025
    In a Base Station (BS) cooperation system, the capacity gains that two BSs obtain by cooperating with each other differ owing to channel asymmetry. Thus, in the process of selecting cooperative BSs, when one BS wants to cooperate with another that prefers a different partner, it is difficult to judge whether to group the two into one cluster, and the whole system becomes capacity-limited. To address this, the authors proposed an overlapped clustering scheme and then designed a joint precoding algorithm called Zero-Forcing Tomlinson-Harashima Precoding (ZF-THP). In this scheme, several BSs were adjusted to be overlapped, and the THP technique was used to eliminate the interference caused by the overlapped BSs. The simulation results show that the proposed scheme resolves the clustering contradiction well, efficiently increases the system capacity and enhances the system fairness.
    Wireless broadband video transmission system based on adaptive choice of multiple networks
    Yi WU Xiao LIN Jian-yong CAI
    2011, 31(08):  2029-2032.  DOI: 10.3724/SP.J.1087.2011.02029
    Owing to the unstable bandwidth of mobile channels and the limited coverage of Wi-Fi access points, a wireless video transmission system based on adaptive selection among CDMA 1X, 3G-EVDO and Wireless Local Area Network (WLAN) was proposed. The adaptive selection algorithm includes adaptive access of multiple networks, dynamic switching among wireless heterogeneous networks and adaptive selection among homogeneous networks. The presented method can achieve high-quality and reliable transmission.
    Evaluation on information transmission ability of command information system network
    Xin WANG Pei-yang YAO Xiang-xiang ZHOU Jie-yong ZHANG
    2011, 31(08):  2033-2036.  DOI: 10.3724/SP.J.1087.2011.02033
    The information transmission ability of a command information system was analyzed from the angle of uncertainty. The command information system network was divided into a physical layer and a logical layer, and the relationships between information transmission and the two layers were expounded. The effective working probability of nodes and links, time delay, logical links and physical links were taken into account. Information quantity was used to measure the uncertainty of information transmission, and the information transmission ability of the command information system was then derived. An experiment was designed using a combat command relationship, reflecting the influences of the above factors on information transmission ability. The experimental results show that the evaluation method takes the demands of connectivity, timeliness and correctness in information transmission into consideration.
    Anti-jamming performance of frequency-hopping based on LDPC code
    Ming-hao XUE Lin-hua MA Zhi-guo LIN Xiao-dong YE
    2011, 31(08):  2037-2039.  DOI: 10.3724/SP.J.1087.2011.02037
    Frequency-hopping communication was combined with the Low-Density Parity-Check (LDPC) code to improve the anti-jamming performance of frequency-hopping communications. By using a greedy algorithm to reduce the complexity of the encoding algorithm, an offset layered quantization decoding algorithm called Layered Belief Propagation-Offset Min-Sum (LBP-OMS) was applied to improve the error-correction performance of the code words. The simulation results show that when certain frequency bands are covered by strong noise, the anti-interference ability of broadband frequency-hopping communications is improved by using the improved channel coding method.
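    For context, a minimal sketch of the offset min-sum check-node update that LBP-OMS-style decoders build on is given below; the offset value and message layout are illustrative assumptions, and the layered scheduling and quantization of the paper are not modeled.

```python
import numpy as np

def check_node_update_oms(msgs: np.ndarray, offset: float = 0.5) -> np.ndarray:
    """Offset min-sum update: for each outgoing edge, combine the signs and the
    minimum magnitude of all *other* incoming LLRs, then subtract the offset."""
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        sign = np.prod(np.sign(others))
        mag = max(np.min(np.abs(others)) - offset, 0.0)
        out[i] = sign * mag
    return out

# Example: LLRs arriving at one check node from its connected variable nodes.
incoming = np.array([+2.3, -0.8, +1.5, -3.1])
print(check_node_update_oms(incoming))
```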
    Double-threshold cooperative spectrum sensing algorithm based on multi-fusion rule
    Qing ZHU Chun-lin SONG Cai-ping TAN Xing-ge JIANG
    2011, 31(08):  2040-2043.  DOI: 10.3724/SP.J.1087.2011.02040
    In the research of cognitive radio networks, the usual spectrum sensing techniques do not consider fusing the judgment results of different cognitive users separately according to their reliability. To solve this problem, a double-threshold cooperative spectrum sensing algorithm based on a multi-fusion rule was proposed. Using a combination of the AND rule and the OR rule according to the different reliability of the cognitive users' judgment results, all the judgment results were fused separately. The theoretical analysis and simulation results show that the proposed algorithm significantly improves the spectrum sensing performance of cognitive radio networks compared with the conventional methods.
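    The sketch below illustrates the generic double-threshold energy detection idea that such schemes build on: users whose energy falls between the two thresholds are treated as unreliable, and the remaining hard decisions are combined with an OR or AND rule. The thresholds, noise model and the specific multi-fusion rule of the paper are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sense(signal_present: bool, n_samples=200, snr=0.5) -> float:
    """Return the normalized energy observed by one cognitive user."""
    noise = rng.standard_normal(n_samples)
    x = noise + (np.sqrt(snr) * rng.standard_normal(n_samples) if signal_present else 0)
    return float(np.mean(x ** 2))

def double_threshold_fusion(energies, lam_low=1.1, lam_high=1.4, rule="OR"):
    """Keep only reliable local decisions (energy outside [lam_low, lam_high])
    and fuse them with an OR or AND rule."""
    decisions = [e > lam_high for e in energies if e < lam_low or e > lam_high]
    if not decisions:                 # every user fell into the unreliable region
        return None
    return any(decisions) if rule == "OR" else all(decisions)

energies = [sense(signal_present=True) for _ in range(8)]
print("fused decision:", double_threshold_fusion(energies))
```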
    Borrowed address assignment algorithm for ZigBee network
    Yu-kun YAO Peng-xiang LI Zhi REN Yuan GU
    2011, 31(08):  2044-2047.  DOI: 10.3724/SP.J.1087.2011.02044
    Wireless Sensor Networks (WSN) adopt the default Distributed Address Assignment Mechanism (DAAM) of ZigBee technology to assign addresses to nodes without considering the optimization of the network topology, which wastes network depth. In this paper, the authors proposed the Distributed Borrowed Address Assignment (DBAA) algorithm to increase the success rate of node joining, which assigned free addresses borrowed from 2-hop neighbors to new nodes so as to optimize the network topology. The theoretical analysis and simulation results show that the DBAA algorithm outperforms both DAAM and the Single Level Address Reorganization (SLAR) scheme in terms of the success rate of address assignment, communication overhead, and the average time of assigning addresses.
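    For background, the sketch below shows the standard ZigBee DAAM address computation (the Cskip offset) that borrowed-address schemes modify; Cm, Rm and Lm are the usual maximum-children, maximum-routers and maximum-depth parameters, and the values used here are arbitrary examples rather than the paper's configuration.

```python
def cskip(depth: int, cm: int, rm: int, lm: int) -> int:
    """Size of the address block a router at the given depth reserves per child router
    under ZigBee's Distributed Address Assignment Mechanism (DAAM)."""
    if rm == 1:
        return 1 + cm * (lm - depth - 1)
    return 1 + cm * (rm ** (lm - depth - 1) - 1) // (rm - 1)

def child_addresses(parent_addr: int, depth: int, cm=6, rm=4, lm=3):
    """Addresses DAAM gives to the k-th child router and the n-th child end device."""
    skip = cskip(depth, cm, rm, lm)
    routers = [parent_addr + (k - 1) * skip + 1 for k in range(1, rm + 1)]
    end_devices = [parent_addr + rm * skip + n for n in range(1, cm - rm + 1)]
    return routers, end_devices

print(child_addresses(parent_addr=0, depth=0))
```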
    Anti-collision algorithm for RFID system with moving tags
    Jiang-hong HE Xiao-ye DING Yao-xu ZHAI
    2011, 31(08):  2048-2051.  DOI: 10.3724/SP.J.1087.2011.02048
    In practical applications, the tags of a Radio Frequency Identification (RFID) system are often moving. An RFID system on a conveyor belt was analyzed and simulated using Matlab. The results showed that if the tag density on the conveyor belt D and the conveyor belt speed V were constant, the identification rate reached its maximum when the frame size N was equal to the number of unidentified tags n. When the slot duration was constant, the identification rate was only related to D and V, and was not affected by the conveyor belt length L within the reader's coverage. Meanwhile, the existing tag estimation methods were amended according to the conveyor belt model and a new estimation method was proposed, which significantly improved the estimation precision for large numbers of tags compared with other estimation methods.
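    The claim that the identification rate peaks when the frame size equals the number of unidentified tags follows from standard framed slotted ALOHA analysis; the short sketch below, which simply evaluates the expected fraction of singleton slots, is a generic illustration of that result rather than the paper's conveyor-belt model.

```python
import numpy as np

def identification_rate(n_tags: int, frame_size: int) -> float:
    """Expected fraction of slots holding exactly one tag reply in framed slotted ALOHA."""
    p = 1.0 / frame_size
    return n_tags * p * (1 - p) ** (n_tags - 1)

n = 64
frames = np.arange(16, 257, 16)
rates = [identification_rate(n, N) for N in frames]
best = frames[int(np.argmax(rates))]
print(f"with {n} unidentified tags the best frame size among the candidates is {best}")
```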
    Anti-collision algorithm for adaptive multi-branch tree based on regressive-style search
    Wen-sheng SUN Ling-min HU
    2011, 31(08):  2052-2055.  DOI: 10.3724/SP.J.1087.2011.02052
    Concerning the common problem of tag collision in Radio Frequency Identification (RFID) systems, an improved anti-collision algorithm for a multi-branch tree was proposed based on the regressive-style search algorithm. According to the characteristics of tag collisions, the presented algorithm adopted a dormancy counter and switched to a quad-tree structure when continuous collisions appeared, which made it possible to choose the number of branches dynamically during the search process, reduced the search range and improved the identification efficiency. The performance analysis results show that the system efficiency of the proposed algorithm is about 76.5%; moreover, as the number of tags increases, its performance advantage becomes more obvious.
    Load-balanced adaptive group clustering algorithm for wireless sensor network
    Ya-ming HU Ya-ping DENG Jia YANG
    2011, 31(08):  2056-2058.  DOI: 10.3724/SP.J.1087.2011.02056
    In cluster-based routing algorithms, the drawbacks of the classical Low Energy Adaptive Clustering Hierarchy (LEACH) algorithm and the Steady Group Clustering Hierarchy (SGCH) algorithm were analyzed to propose a new Adaptive Group Clustering Hierarchy (AGCH) algorithm. During the grouping stage, candidate group heads were first randomly selected, and then all the network nodes were divided into fixed groups through range competition among the heads. When selecting cluster heads, each group considered not only the residual energy of nodes but also their inter-group communication cost. The simulation results show that the proposed algorithm can effectively balance the network energy consumption and prolong the stability period of sensor networks.
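    For reference, the classical LEACH cluster-head election threshold that algorithms of this family start from is the well-known formula below, where p is the desired cluster-head fraction, r the current round and G the set of nodes that have not served as head in the last 1/p rounds; AGCH's additional energy and inter-group-cost terms are not shown here.

```latex
T(n) =
\begin{cases}
\dfrac{p}{1 - p\left(r \bmod \dfrac{1}{p}\right)} & \text{if } n \in G,\\[2ex]
0 & \text{otherwise.}
\end{cases}
```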
    Improved model and performance analysis of simulation models for Rayleigh fading channels
    Xiao-lin SHI
    2011, 31(08):  2059-2061.  DOI: 10.3724/SP.J.1087.2011.02059
    The AutoRegression (AR) model and the Clarke model (CLARKE R H. A statistical theory of mobile-radio reception. Bell System Technology Journal, 1968, 47(6): 957-1000) are typically used for the simulation of Rayleigh fading channels. However, the results show that the AR model has unavoidable numerical difficulties, hence it cannot generate the desired correlations of Rayleigh fading channels. Based on the Clarke model, an improved Sum-Of-Sinusoids (SOS) model was proposed, which adjusted the distribution of the angles of arrival and ensured their randomness. Therefore, the new model can achieve better statistical properties of the generated channels. Judging by the variance of the time-averaged statistical properties from their ideal ensemble averages, the new model performs better and its statistical convergence to the desired correlations is faster than that of the Clarke model.
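    As a generic illustration of the sum-of-sinusoids approach (not the improved model of the paper), the sketch below generates a Rayleigh-fading gain by summing sinusoids with random angles of arrival and random phases; the number of sinusoids and the Doppler frequency are arbitrary example values.

```python
import numpy as np

def sos_rayleigh(n_samples=4000, fs=1000.0, fd=50.0, n_sin=16, seed=0):
    """Complex fading gain from a Clarke-type sum-of-sinusoids model with
    uniformly random angles of arrival and phases."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    alpha = rng.uniform(0, 2 * np.pi, n_sin)      # angles of arrival
    phi_i = rng.uniform(0, 2 * np.pi, n_sin)      # phases, in-phase branch
    phi_q = rng.uniform(0, 2 * np.pi, n_sin)      # phases, quadrature branch
    doppler = 2 * np.pi * fd * np.cos(alpha)
    gi = np.sqrt(2.0 / n_sin) * np.cos(np.outer(t, doppler) + phi_i).sum(axis=1)
    gq = np.sqrt(2.0 / n_sin) * np.cos(np.outer(t, doppler) + phi_q).sum(axis=1)
    return (gi + 1j * gq) / np.sqrt(2.0)          # unit average power

g = sos_rayleigh()
print("mean power of the generated channel:", np.mean(np.abs(g) ** 2))
```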
    Indoor location algorithm of wireless sensor network based on fuzzy control
    Chen DENG Yong-qi WANG
    2011, 31(08):  2062-2064.  DOI: 10.3724/SP.J.1087.2011.02062
    The authors proposed a system design scheme of indoor localization for Wireless Sensor Networks (WSN), which used an improved Received Signal Strength Indicator (RSSI) ranging technique based on a fuzzy algorithm. The new method established the fuzzy distribution parameters of climate and environmental obstacles by fuzzy state classification, so as to improve the traditional "distance-loss" model. Hence, by calculating their membership functions, a more accurate distance formula for computing the location information of mobile nodes could be obtained. The experimental results show that the proposed positioning algorithm for mobile node localization meets the actual needs in real-time performance and accuracy, and has application value.
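    The "distance-loss" model that the fuzzy correction refines is usually the log-distance path-loss relation; the sketch below inverts it to estimate range from RSSI. The parameter values (reference RSSI A and path-loss exponent n) are placeholder assumptions, and the fuzzy adjustment of the paper is not reproduced.

```python
import math

def rssi_to_distance(rssi_dbm: float, a_dbm: float = -45.0, n_exp: float = 2.5) -> float:
    """Invert the log-distance path-loss model RSSI(d) = A - 10*n*log10(d)
    (A: RSSI at 1 m, n: path-loss exponent) to get a range estimate in metres."""
    return 10 ** ((a_dbm - rssi_dbm) / (10.0 * n_exp))

for rssi in (-45, -60, -75):
    print(f"RSSI {rssi} dBm -> about {rssi_to_distance(rssi):.2f} m")
```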
    Design and realization of communication protocol based on Manchester code
    Qing-shan ZHOU Jue WANG Hui QIN
    2011, 31(08):  2065-2067.  DOI: 10.3724/SP.J.1087.2011.02065
    To transmit data accurately in strong-interference environments, a communication protocol based on Manchester code was designed. It is composed of a sending module and a receiving module: data were transferred in the form of packets at the sending end, while a dislocated code and a counter were used to solve the problem of clock synchronization at the receiving end. The communication protocol could solve the issues of identifying data boundaries and of the phase error caused by accumulated clock error. In a simulated interference environment, a transfer rate of 40 Mb/s was achieved on a test platform. The experimental results indicate that the communication protocol is able to transmit data accurately.
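    A minimal sketch of Manchester coding itself is shown below, using the convention that a 0 bit is sent as a low-to-high transition and a 1 bit as high-to-low (the opposite convention is equally common). The packet framing, dislocated code and counter-based synchronization of the protocol are not modeled.

```python
def manchester_encode(bits):
    """Each bit becomes two half-bit levels with a mid-bit transition."""
    table = {0: (0, 1), 1: (1, 0)}     # 0 -> low-high, 1 -> high-low (assumed convention)
    return [level for b in bits for level in table[b]]

def manchester_decode(levels):
    """Recover bits from consecutive half-bit pairs; a pair without a transition is invalid."""
    bits = []
    for first, second in zip(levels[0::2], levels[1::2]):
        if first == second:
            raise ValueError("missing mid-bit transition: lost synchronization")
        bits.append(1 if (first, second) == (1, 0) else 0)
    return bits

data = [1, 0, 1, 1, 0]
line = manchester_encode(data)
assert manchester_decode(line) == data
print(line)
```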
    Network and distributed technology
    Trust evaluation based on community discovery in multi-Agent system
    Xing-hua YANG Wen-jie WANG Xiao-feng WANG Zhong-zhi SHI
    2011, 31(08):  2068-2071.  DOI: 10.3724/SP.J.1087.2011.02068
    To solve the trust problem among Agents brought about by the characteristics of openness, dynamics and uncertainty of Multi-Agent System (MAS), a method for trust evaluation based on community discovery was proposed. Firstly, the G-N algorithm (GIRVAN M, NEWMAN M E J. Community structure in social and biological networks. Proceedings of the National Academy of Sciences of the United States of America, 2002, 99(12): 7821-7826) was employed to discover the community structure in the system. Both the inner and outer community reputations of the estimated Agents were calculated respectively by use of the belief of the recommending Agents, and then the total trust value was further assessed by combining the reputations and the direct trust values. Furthermore, the dynamic adjustment of Agent's trust value was realized via cooperation feedback. Lastly, the simulation results show that the community discovery-based trust evaluation method can effectively evaluate the Agent's trust value, and further enhance the ratio of successful interactions with the introduction of the feedback mechanism.
    Approximation algorithm of variable elimination of Bayesian network
    Wen-yu GAO Li ZHANG
    2011, 31(08):  2072-2074.  DOI: 10.3724/SP.J.1087.2011.02072
    Variable Elimination (VE) is a basic reasoning method for Bayesian networks; however, different elimination orders lead to significantly different computational complexity. Finding the optimal order is an NP-hard problem, so approximation algorithms are often used in practical applications. Based on the analysis of the moral graph of a Bayesian network, the edges added and removed during elimination were considered, several methods of reducing graph complexity and controlling elimination cost were proposed, and a new algorithm was presented. Finally, the new algorithm was tested by random simulations. The simulation results show that the new algorithm outperforms the minimum deficiency search algorithm.
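    For comparison, the baseline mentioned in the abstract, minimum-deficiency (min-fill) ordering, greedily eliminates at each step the variable whose elimination would add the fewest fill-in edges to the moral graph; a compact sketch of that heuristic is given below (the paper's improved cost criterion is not shown).

```python
from itertools import combinations

def min_deficiency_order(adjacency: dict[str, set[str]]) -> list[str]:
    """Greedy elimination order on an undirected moral graph:
    repeatedly remove the node that needs the fewest fill-in edges."""
    graph = {v: set(nbrs) for v, nbrs in adjacency.items()}
    order = []
    while graph:
        def fill_edges(v):
            return [(a, b) for a, b in combinations(graph[v], 2) if b not in graph[a]]
        v = min(graph, key=lambda node: len(fill_edges(node)))
        for a, b in fill_edges(v):          # connect v's neighbours (fill-in edges)
            graph[a].add(b)
            graph[b].add(a)
        for nbr in graph[v]:                # remove v from the graph
            graph[nbr].discard(v)
        del graph[v]
        order.append(v)
    return order

moral = {"A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B"}, "D": {"B"}}
print(min_deficiency_order(moral))
```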
    New context caching replacement algorithm in ubiquitous computing
    Bin WANG Wen ZOU Jin-fang SHENG Ying SUN
    2011, 31(08):  2075-2078.  DOI: 10.3724/SP.J.1087.2011.02075
    Due to the high dynamics of pervasive computing environments and the constraints of easy interruption and low transmission rate of wireless networks, the overhead of context access is very high. To solve these problems, a framework of a context-aware system was proposed in this paper, and then a rule-based context caching replacement algorithm (RCRA) was introduced. The proposed algorithm determined whether to replace a context in the cache based on its access probability, timeliness and access history. When a new context was to enter the context cache, the algorithm was used to ensure that the latest and most valuable contexts stayed in the cache. The experimental results show that RCRA improves the hit rate and effectively reduces the overhead of context access. RCRA is used in the rule-based context-aware system and has good utility.
    Parallel granularity selection technique for high performance SAR imaging program
    Jing DU Fu-jiang AO Hua-bin WANG Lian-dong WANG
    2011, 31(08):  2079-2082.  DOI: 10.3724/SP.J.1087.2011.02079
    High-performance parallel simulation programs need to adopt parallel optimization techniques to achieve efficient performance acceleration, and achieving an appropriate parallel granularity according to the characteristics of a given program is the basis of parallel optimization. Therefore, taking a typical Synthetic Aperture Radar (SAR) imaging program, namely Range-Doppler (RD), as a representative, the authors researched parallel granularity selection techniques for high-performance SAR imaging programs. In particular, two important components were studied: the basic rules of parallel granularity selection and the parallel granularity selection approach for the RD algorithm. The experimental results show that with the selected parallel granularity, the SAR imaging program can achieve obvious performance improvement and high scalability.
    Artificial intelligence
    Speaker recognition based on linear log-likelihood kernel function
    Liang HE Jia LIU
    2011, 31(08):  2083-2086.  DOI: 10.3724/SP.J.1087.2011.02083
    To improve the performance of a text-independent speaker recognition system, the authors proposed a speaker recognition system based on a linear log-likelihood kernel function. The linear log-likelihood kernel compressed the input cepstrum feature sequence of a speaker model by a Gaussian mixture model, and the log-likelihood between two utterances was simplified to the distance between the parameters of the Gaussian mixture models. The polarization identity was applied to obtain the mapping from a cepstrum feature sequence to a high-dimensional vector, and a Support Vector Machine (SVM) was used to train speaker models. The experimental results on a National Institute of Standards and Technology (NIST) corpus show that the proposed kernel has excellent performance.
    Short coding sequence identification of human genes based on YKW graphical representation
    Jia-wei LUO Jun YAN Hai-feng HE
    2011, 31(08):  2087-2091.  DOI: 10.3724/SP.J.1087.2011.02087
    According to the base bias at the three codon positions and the chemical properties of the bases, the YKW graph, a new graphical representation of gene sequences, was introduced for recognizing short coding sequences of human genes. Nine effective features of the area matrix were extracted from the YKW curves. In the identification process, an incremental feature selection algorithm was used to add four statistical features to improve the accuracy. Then the Principal Component Analysis (PCA) method was adopted to reduce dimensions and a Support Vector Machine (SVM) was applied to classify the coding/non-coding sequences in short human genes. Finally, the experimental results show that the proposed method uses fewer features (seven or four) and gets better recognition results than other methods.
    Object recognition based on one-class support vector machine in hyperspectral image
    Wei CHEN Xu-chu YU Peng-qiang ZHANG Zhi-chao WANG He WANG
    2011, 31(08):  2092-2096.  DOI: 10.3724/SP.J.1087.2011.02092
    Hyperspectral remote sensing images are rich in spectral information, so they have advantages in object recognition. The One-Class Support Vector Machine (OCSVM) not only retains the advantages of support vector machines but also needs only the training samples of the objects to be recognized. The algorithm proposed in this paper selected the mathematical model, designed the kernel function, adjusted the parameters adaptively, and incorporated the theory of OCSVM into the object recognition algorithm for hyperspectral images, which improved the recognition precision and reduced the demand for training samples. Finally, experiments were conducted on two hyperspectral images, and the results prove the validity of the proposed method.
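    A minimal one-class SVM sketch with scikit-learn is given below to show the workflow the abstract describes: train only on samples of the target class, then flag test pixels as target or background. The kernel, gamma and nu values are arbitrary assumptions and no hyperspectral-specific preprocessing is included.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in "spectra": 200 training pixels of the target class (10 bands each).
target_train = rng.normal(loc=0.6, scale=0.05, size=(200, 10))

# Test pixels: half target-like, half background-like.
test = np.vstack([rng.normal(0.6, 0.05, size=(50, 10)),
                  rng.normal(0.2, 0.05, size=(50, 10))])

model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1)   # illustrative parameters
model.fit(target_train)

pred = model.predict(test)          # +1 = recognized as target, -1 = rejected
print("pixels detected as target:", int(np.sum(pred == 1)))
```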
    Grading model of seed cotton based on fuzzy pattern recognition
    Rong-chang YUAN Long-qing SUN Chen-xi DONG Li WANG
    2011, 31(08):  2097-2100.  DOI: 10.3724/SP.J.1087.2011.02097
    Grade classification of seed cotton is a major issue that has a significant impact on the agricultural economy. According to characteristics such as impurities, yellowness and brightness extracted from images of seed cotton, fuzzy pattern recognition was used to improve the classification of cotton grade. A classification model of seed cotton was constructed based on fuzzy nearness. Fuzzy mathematics was combined with an artificial neural network to build an improved model and algorithm, and statistical distributions were used to calculate and select the model parameters. The numbers of impurities of different sizes were worked out by using the Euler numbers of the image. Based on this method of selecting model parameters, the proposed algorithm could be optimized step by step. After full learning, the seed cotton classification accuracy reached 92%. The experimental results show that the presented algorithm satisfies the actual application needs.
    Reasoning algorithm of geometry automatic reasoning platform with sustainable development by user
    Huan ZHENG Jing-zhong ZHANG
    2011, 31(08):  2101-2104.  DOI: 10.3724/SP.J.1087.2011.02101
    None of the available geometry theorem provers can be developed sustainably by their users. A knowledge representation with a general structure and a reasoning algorithm that could deal with all the rules were proposed. Based on these ideas, a geometry automatic reasoning platform that can be sustainably developed by the user was initially implemented. This platform allows the user to add geometric knowledge such as geometric objects, predicates and rules, and provides multiple reasoning algorithms such as the forward search method and a part of the area method, so it is more suitable for geometry teaching.
    Target recognition for low-resolution radar based on compressed sensing
    Hong-mei MI Tian-shuang QIU
    2011, 31(08):  2105-2107.  DOI: 10.3724/SP.J.1087.2011.02105
    According to the target echoes of low-resolution radar, a feature extraction method based on the theory of compressed sensing was proposed in this paper. An orthonormal wavelet basis was selected as the sparsifying basis and a random sub-Gaussian matrix as the measurement matrix, and the feature vector was composed of a few measurements. The proposed algorithm can not only capture the intrinsic driving source of the radar echo signal, but also keep the structure of the original signal and enough target information. The experimental results show that the feature extraction approach offers low dimensionality and high information density, and achieves better results.
    Unsupervised feature selection approach based on spectral analysis
    Feng PAN Jiang-dong WANG Ben NIU
    2011, 31(08):  2108-2110.  DOI: 10.3724/SP.J.1087.2011.02108
    To improve the performance of feature selection under the unsupervised scenario, the relationship between the distribution of the first K minimal eigenvalues of a normalized graph Laplacian matrix and the structure of the clusters was identified, and a new feature selection algorithm based on spectral analysis was proposed. The feature selection algorithm might be time-consuming; hence the Nyström method was applied to reduce the computational cost of the eigen-decomposition. The experiments on synthetic and real-world data sets show the efficiency of the proposed approach.
    Robust least square support vector regression
    Kuai-ni WANG Jin-feng MA Xiao-shuai DING
    2011, 31(08):  2111-2114.  DOI: 10.3724/SP.J.1087.2011.02111
    Least Squares Support Vector Regression (LS-SVR) is sensitive to noise and outliers. By setting an upper bound on the loss function, a non-convex ramp loss function was proposed, which has a strong ability to suppress the impact of outliers. Since the ramp loss function is neither convex nor differentiable and the corresponding non-convex optimization problem is difficult to solve directly, the Concave-Convex Procedure (CCCP) was employed to transform the non-convex optimization problem into a convex one. Finally, a Newton algorithm was introduced to solve the robust model and its computational complexity was analyzed. The numerical experimental results on artificial and benchmark data sets show that, in comparison with LS-SVR, the proposed approach is significantly more robust to noise and outliers while also reducing the training time.
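    The key step that makes CCCP applicable is writing the truncated (ramp-type) loss as a difference of two convex functions; one common decomposition for a squared loss truncated at level s is shown below as an illustration (the paper's exact ramp loss may be defined differently).

```latex
L_{\mathrm{ramp}}(r) \;=\; \min\!\left(r^{2},\, s^{2}\right)
\;=\; \underbrace{r^{2}}_{\text{convex}}
\;-\; \underbrace{\max\!\left(r^{2}-s^{2},\, 0\right)}_{\text{convex}}
```

    CCCP then iteratively linearizes the concave part (the negated second term) around the current solution, leaving a convex, weighted LS-SVR-like subproblem at each iteration.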
    Sparse representation of face feature recognition based on multiple dictionaries of double-density dual-tree complex wavelet transform
    Cheng-yu WANG Wei-hong LI
    2011, 31(08):  2115-2118.  DOI: 10.3724/SP.J.1087.2011.02115
    The difficulty in sparse representation of facial images based on over-complete dictionary is the dictionary generation. This paper first introduced the Double-Density Dual-Tree Complex Wavelet Transform (DD-DT CWT) for filtering the high-frequency sub-bands and the principle of energy distribution for selecting some sub-bands as the feature of a facial image to form multi-scale dictionaries, then viewed the similar feature of a test sample as the linear combination of some atoms in the overcomplete dictionary, finally got the recognition results via ensembling sparse representations on these dictionaries. The experimental results on AR face database demonstrate the efficiency of the proposed algorithm.
    Fatigue pattern recognition of human face based on Gabor wavelet transform
    Fen-hua CHENG Hai-yan YANG
    2011, 31(08):  2119-2122. 
    Fatigue is one of the main factors that cause traffic accidents. A new method for monitoring the fatigue state based on the Gabor wavelet transform was proposed. In this method, a frequent pattern mining algorithm was first designed to mine the fatigue patterns of fatigued facial image sequences during the training phase. Then, during the fatigue recognition phase, the face image sequence to be detected was represented by a fused feature sequence through the Gabor wavelet transform. Afterwards, the classification algorithm was used for fatigue detection of the human face sequence. The simulation results on 500 fatigue images sampled by the authors show that the proposed algorithm achieves a 92.8% correct detection rate and a 0.02% false detection rate, and outperforms some similar methods.
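    For illustration, the sketch below builds a small Gabor filter bank with OpenCV and stacks simple statistics of the filtered responses into a feature vector for one face image; the kernel parameters, the stand-in image and the fusion/classification stages of the paper are all assumptions.

```python
import cv2
import numpy as np

def gabor_features(gray: np.ndarray, scales=(9, 15), orientations=4) -> np.ndarray:
    """Filter the image with a bank of Gabor kernels and concatenate the
    mean and standard deviation of each response as a simple feature vector."""
    feats = []
    for ksize in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                        lambd=10.0, gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            feats.extend([resp.mean(), resp.std()])
    return np.asarray(feats)

face = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)  # stand-in image
print(gabor_features(face).shape)   # 2 scales * 4 orientations * 2 statistics = (16,)
```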
    IIHT: New improved iterative hard thresholding algorithm for compressive sensing
    Zong-nian ZHANG Jin-hui LI Ren-tai HUANG
    2011, 31(08):  2123-2125.  DOI: 10.3724/SP.J.1087.2011.02123
    To overcome the shortcomings of the Iterative Hard Thresholding (IHT) algorithm, namely its overdependence on the measurement matrix, high computational complexity and long computation time, a new Improved Iterative Hard Thresholding (IIHT) algorithm was proposed by studying the theory of signal reconstruction for compressive sensing. It improved the cost function and the step-size selection method of the IHT algorithm. The simulation results show that the proposed algorithm increases the probability of recovery and the speed of convergence, and reduces the computational complexity and time.
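    The baseline IHT iteration that the IIHT variant modifies is sketched below: a gradient step on the least-squares objective followed by keeping the s largest-magnitude coefficients. The step size, sparsity level, problem dimensions and stopping rule here are illustrative assumptions.

```python
import numpy as np

def iht(A: np.ndarray, y: np.ndarray, s: int, step: float = 0.5, iters: int = 300):
    """Basic iterative hard thresholding: x <- H_s(x + step * A^T (y - A x)),
    where H_s keeps only the s largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)
        small = np.argsort(np.abs(x))[:-s]   # indices of everything except the s largest
        x[small] = 0.0
    return x

rng = np.random.default_rng(0)
m, n, s = 128, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)          # random Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.choice([-1.0, 1.0], s)
y = A @ x_true

x_hat = iht(A, y, s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```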
    Illumination invariant face recognition based on wavelet transform and denoising model
    Xue CAO Li-gong YU Jing-yu YANG
    2011, 31(08):  2126-2129.  DOI: 10.3724/SP.J.1087.2011.02126
    The recognition of frontal facial appearance under varying illumination is a difficult task for face recognition. In this paper, a novel illumination invariant extraction method based on the wavelet transform and a denoising model was proposed to deal with the illumination problem. The illumination invariant was extracted in the wavelet domain by using wavelet-based denoising techniques. By manipulating the high-frequency wavelet coefficients combined with the denoising model, the edge features of the illumination invariants were enhanced and more useful information was restored in them, which leads to excellent face recognition performance. The experimental results on the Yale face database B and the CMU PIE face database show that a satisfactory recognition rate can be achieved by the proposed method.
    Orientation analysis for Chinese blog text based on semantic comprehension
    Feng-ying HE
    2011, 31(08):  2130-2133.  DOI: 10.3724/SP.J.1087.2011.02130
    Blog has been accepted by more and more people as a popular information and cultural carrier, and orientation analysis of blog text has become a hot spot in the field of information mining. Previous research on text orientation has mainly focused on plain text or news comments. A method of orientation analysis for blog text based on semantic comprehension was proposed according to the characteristics of blog text. Firstly, a Chinese basic emotional lexicon based on the HowNet emotional word set was constructed, and the emotional value of Chinese emotional words was calculated on the basis of the similarity of Chinese words. Then, adverbs and their influence on the identification of text orientation at the semantic level were analyzed. Finally, the results were amended by using bloggers' language style factors, and the sentiment classification of blog text was thus realized. The experimental results show that the proposed method can effectively judge the sentimental preference of blog text.
    HC_AL: New active learning method based on hierarchical clustering
    Jun-fang JIA
    2011, 31(08):  2134-2137.  DOI: 10.3724/SP.J.1087.2011.02134
    Concerning the slow convergence of unlabeled sample classification when the traditional Active Learning (AL) method deals with large-scale data, a Hierarchical Clustering Active Learning (HC_AL) algorithm was proposed. In this algorithm, the majority of the unlabeled data were clustered hierarchically and the center of each cluster was labeled to stand for the category label of that hierarchy; then the wrongly labeled data were added into the training data set. The experimental results on the data sets show that the proposed algorithm improves the generalization ability and the convergence speed. Moreover, by using hierarchical and stepwise refinement, it can greatly improve the active learning convergence speed and obtain relatively satisfactory learning ability.
    Professional literature annotation method based on domain ontology
    Mo-ji WEI Tao YU
    2011, 31(08):  2138-2142.  DOI: 10.3724/SP.J.1087.2011.02138
    An automatic annotation method for professional literature was proposed. Through comparing with other storage formats and literary styles, two features of professional literature were summarized, and then three assumptions were proposed. To improve annotation efficiency, based on topology structure, the domain ontology was partitioned into segments which were self-consistent, then the most related segments were located with the keywords extracted from document, finally the document with located segments was annotated and the annotation scope was expanded according to the correspondence between grammatical structure and semantic structure. The experimental results show that the proposed method can improve annotation efficiency, annotation quantity and annotation accuracy.
    Fault diagnostic method for power converter based on wavelet neural network with improved algorithm
    Qi-chang DUAN Liang ZHANG Jing-ming YUAN
    2011, 31(08):  2143-2145.  DOI: 10.3724/SP.J.1087.2011.02143
    As one of the core pieces of equipment in a doubly-fed induction wind power generation system, the power converter's operational reliability seriously influences the safety and stability of the power generation system. Since the Wavelet Neural Network (WNN) based on the Recursive Least Squares (RLS) algorithm has flaws such as low convergence precision and rate, as well as local minima and oscillation in the search space, the authors proposed a modified algorithm for fault diagnosis of power converters, in which variable weights and a variable learning coefficient were employed to resolve the above problems. After the modified WNN was trained and the faults were recognized from practical current data, comparison and analysis were carried out in simulation. The experimental results demonstrate that the modified algorithm provides higher diagnostic precision and requires less convergence time than the RLS algorithm.
    Information security
    Rough attack model based on object Petri net of expanded time
    Guang-qiu HUANG Chun-zi WANG Bin ZHANG
    2011, 31(08):  2146-2151.  DOI: 10.3724/SP.J.1087.2011.02146
    To solve the redundancy problem caused by similar attack methods and similar node objects in an attack model of a complex network, a rough network attack model based on the vulnerability relation model was put forward. The attribute set was defined on the node domain and the transition domain of a Petri net, and similar attack methods and similar node objects were classified to form the class space of the domain Petri nets. By defining the similarity degree of paths, all characteristic attack paths that can reach an attack goal can be searched out by the ant algorithm, and the maximal threat path that can access the goal node can be found among these characteristic attack paths. The experimental results show that the proposed model can quickly locate the node objects and the related attack methods from real-time monitoring information and accurately find their positions among the characteristic attack paths.
    Joint pollution model in Kad network
    Jie KONG Wan-dong CAI
    2011, 31(08):  2152-2155.  DOI: 10.3724/SP.J.1087.2011.02152
    In this paper, a joint pollution model that combines keyword pollution and location pollution was proposed. The degree of pollution, the exit rate and the waiting rate were taken into account in the model. The simulation results show that, under the impact of joint pollution, the number of users whose queries fail is much larger than the number of users whose queries succeed, and both become stable as time increases. The degree of pollution is the key factor influencing the effect of joint pollution; the effect of the exit rate is smaller than that of the pollution degree, and the effect of the waiting rate is the smallest.
    Application of improved information gain algorithm in intrusion forensics
    Jian JIA Pei-yu LIU Wei GONG
    2011, 31(08):  2156-2158.  DOI: 10.3724/SP.J.1087.2011.02156
    The feature selection algorithm based on Information Gain (IG) can solve the problem of high-dimensional and massive data in intrusion forensics, but it neglects the correlation between features, which leads to feature redundancy and affects the speed and accuracy of intrusion forensics. Therefore, an Improved Information Gain (IIG) algorithm based on feature redundancy was proposed. In the improved algorithm, the irrelevant features and the redundant features were removed by adding a judgment of redundancy between features, which effectively simplified the feature subset. The experimental results show that the proposed algorithm can effectively select features, ensure detection accuracy and improve processing speed.
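    For reference, the sketch below computes the plain information gain IG(Y; X) = H(Y) - H(Y|X) for discrete features and then drops features whose pairwise correlation with an already selected feature exceeds a threshold, as a generic stand-in for the redundancy judgment the abstract describes; the threshold and the toy data are illustrative assumptions.

```python
import numpy as np

def entropy(labels: np.ndarray) -> float:
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature: np.ndarray, labels: np.ndarray) -> float:
    """IG(Y; X) = H(Y) - sum_x P(X=x) * H(Y | X=x) for a discrete feature."""
    values, counts = np.unique(feature, return_counts=True)
    cond = sum((c / len(feature)) * entropy(labels[feature == v])
               for v, c in zip(values, counts))
    return entropy(labels) - cond

def select(features: np.ndarray, labels: np.ndarray, k: int = 2, corr_max: float = 0.9):
    """Rank features by IG, then drop any feature highly correlated with a kept one."""
    order = np.argsort([-information_gain(features[:, j], labels)
                        for j in range(features.shape[1])])
    kept = []
    for j in order:
        if all(abs(np.corrcoef(features[:, j], features[:, i])[0, 1]) < corr_max for i in kept):
            kept.append(j)
        if len(kept) == k:
            break
    return kept

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0], [1, 1, 0], [0, 0, 1]])
y = np.array([0, 0, 1, 1, 1, 0])
print("selected feature indices:", select(X, y))
```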
    Design of CCMP based on split medium access control of centralized wireless local area network
    Li-qun LIU
    2011, 31(08):  2159-2161.  DOI: 10.3724/SP.J.1087.2011.02159
    Concerning the potential security flaws of the Temporal Key Integrity Protocol (TKIP), a new scheme for implementing the Counter mode with Cipher-block chaining Message authentication code Protocol (CCMP) based on a Field Programmable Gate Array (FPGA) was proposed. The circuit architecture of the CCMP process was implemented on the existing centralized Wireless Local Area Network (WLAN) split Medium Access Control (MAC) architecture. By comparing the performance of four different Advanced Encryption Standard (AES) implementations, the test results indicate that the proposed scheme can provide higher encryption performance and enhance wireless confidentiality.
    Secure batch steganographic model without carrying secret information
    Yu-liang WU Gou-xi CHEN Hong-lei SHEN Peng-cheng ZHANG
    2011, 31(08):  2162-2164.  DOI: 10.3724/SP.J.1087.2011.02162
    Based on digital image scaling, a secure steganographic model for batch steganography was proposed, which conforms to the definition of absolute safety. After being divided into many blocks, the secret information was derived by using an image scaling algorithm rather than directly embedded in the carrier images. Firstly, a batch of carrier images were selected and zoomed to a specified magnification through a specific algorithm; then the relevance of the pixel information between secret image blocks and the new images was found; finally, the new images were reduced to the size of the original images for transmission. Since the scheme does not directly modify the image pixels, the original images are not required for extracting the secret images and the security of the stego-system is improved. The experimental results and analysis show that the proposed algorithm is effective and can be applied to covert image communication.
    Multiplicative watermarking algorithm based on wavelet visual model
    Er-song HUANG Jin-hua LIU Ru-hong WEN
    2011, 31(08):  2165-2168.  DOI: 10.3724/SP.J.1087.2011.02165
    Additive watermarking algorithms have good imperceptibility, but the robustness of the watermark is poor. Therefore, a multiplicative image watermarking method was proposed by incorporating a visual model in the wavelet domain. In the proposed embedding scheme, the middle-frequency subbands served as the watermark embedding space, which was used to achieve the tradeoff between the imperceptibility and the robustness of the watermarking system. Besides, the embedding strength factor was determined by considering the frequency masking, luminance masking and texture masking of the host image. In the proposed detection scheme, the probability density function of the wavelet coefficients was modeled by the Generalized Gaussian Distribution (GGD), the watermark decision threshold was obtained by using the Neyman-Pearson (NP) criterion, and the Receiver Operating Characteristic (ROC) curve relating the probability of false alarm and the probability of detection was derived. Finally, the robustness of the proposed watermarking was tested against common image processing attacks such as JPEG compression, Additive White Gaussian Noise (AWGN), scaling and cropping. The experimental results demonstrate that the proposed method has good detection performance and robustness.
    Semi-fragile watermarking algorithm based on dynamic image segmentation and information entropy
    Hai-yang WANG Sheng-bing CHE Xu SHU
    2011, 31(08):  2169-2173.  DOI: 10.3724/SP.J.1087.2011.02169
    Most of the existing semi-fragile watermarking algorithms adopt the means of double-step fixed quantization, do not consider the attack characteristics for carrier image, and only divide the original image into smooth region and texture region, so that the robustness of watermarking has reached a bottleneck. To improve the robustness of watermarking further, the authors proposed a new semi-fragile watermarking algorithm based on the technique of dynamic image region segmentation and information entropy. The technique of dynamic image region segmentation divided an image into several embedding regions, and determined the strength of every embedding region; a quantization algorithm based on entropy introduced information entropy into the quantization algorithm and could effectively measure the amount of sensitive information carried by different embedding regions. The experimental results show that, compared with the existing semi-fragile watermarking algorithm, the proposed algorithm has better masking performance and stronger robustness.
    Blind watermarking algorithm for 2D vector map
    Xiao-guang CHEN Yan LI
    2011, 31(08):  2174-2177.  DOI: 10.3724/SP.J.1087.2011.02174
    Vector digital watermarking is one of the most important means of copyright protection for graphics and vector maps. The authors discussed a blind watermarking method for 2D vector maps. First, the entire vector map was traversed to obtain the tolerance dynamically; then the classical Douglas-Peucker algorithm was used to extract all the feature nodes from the vector map; finally, the watermark was embedded into the feature nodes within the tolerance range. With the inverse of the embedding procedure, the watermark could be extracted. Under attacks including random point addition, random point deletion, compression and cropping, the correlation coefficient between the original watermark bits and the watermark bits extracted from the attacked watermarked maps was calculated. The experimental results show that the proposed method has great robustness.
    TD-ERCS chaotic sequences' improvement and color image encryption algorithm
    Qi-nan LIAO
    2011, 31(08):  2178-2182.  DOI: 10.3724/SP.J.1087.2011.02178
    To improve the performance of the Tangent-Delay Ellipse Reflecting Cavity map System (TD-ERCS) and achieve effective protection of color image information, the authors proposed an improved algorithm for the TD-ERCS chaotic sequences and a color image encryption algorithm based on the improved sequences. Four-dimensional chaotic real sequences and binary sequences with ideal random performance were obtained by improving the TD-ERCS. The image composed of the RGB components of the color image was scrambled in 8×8 blocks by using the improved TD-ERCS chaotic sequences, and then the image was encrypted. The theoretical analysis and experimental results show that the performance of the improved TD-ERCS sequences is better than that of the original sequences, and that the color image encryption algorithm has a larger key space, better encryption effect and higher encryption efficiency; it also has better security against statistical analysis and stronger resistance to JPEG compression attacks.
    Steganography algorithm of dynamic threshold bit-plane complexity segmentation based on image classification
    Lin MA Hui-cheng LAI
    2011, 31(08):  2183-2186.  DOI: 10.3724/SP.J.1087.2011.02183
    To improve the security of Bit-Plane Complexity Segmentation (BPCS) steganography algorithm, the authors proposed a BPCS steganography algorithm against statistical analysis. Firstly, an image was divided into blocks to calculate the information entropy and the wavelet contrast. Then the blocks were classified by Fuzzy C-Means (FCM) and each bit-plane block was set different complexity thresholds based on the classification result and a random number. Finally, if the similarity of secret data and carrier bit-plane block was less than 0.5, the secret data bit-plane block was negated to replace the carrier bit-plane block. The experimental results show that the proposed algorithm can effectively resist the detection of statistical analysis of the complexity histogram. Furthermore, the visual imperceptibility of stego image has been greatly improved.
    Anonymous fingerprinting scheme with straight-line extractors
    Xin LIU
    2011, 31(08):  2187-2191.  DOI: 10.3724/SP.J.1087.2011.02187
    Until now, the problem of constructing an anonymous fingerprinting scheme from group signatures has not been solved. To address this problem, an anonymous fingerprinting scheme with straight-line extractors was proposed, which incorporated the Canard-Gouget-Hufschmitt zero-knowledge proof of an OR statement (CANARD S, GOUGET A, HUFSCHMITT E. A handy multi-coupon system. ACNS 2006: Proceedings of the 4th International Conference on Applied Cryptography and Network Security, LNCS 3989. Berlin: Springer-Verlag, 2006: 66-81), the Chida-Yamamoto batch zero-knowledge proof and verification (CHIDA K, YAMAMOTO G. Batch processing for proofs of partial knowledge and its applications. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2008, E91-A(1): 150-159), and the straight-line extractable commitment scheme of Arita (ARITA S. A straight-line extractable non-malleable commitment scheme. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2007, E90-A(7): 1384-1394). Notably, one of the salient features of the new scheme is that it supports concurrent registration, so it is especially suitable for deployment over the Internet. Moreover, the proposed scheme has straight-line extractors, i.e., the security reduction algorithm does not depend on an inefficient rewinding strategy, and a tight security reduction is obtained. Formal security analysis shows that the proposed scheme achieves all the properties required of anonymous fingerprinting schemes.
    Analysis and improvement on new three-party password-based authenticated key agreement protocol
    Li-lin LI Zhu-wen LIU
    2011, 31(08):  2192-2195.  DOI: 10.3724/SP.J.1087.2011.02192
    Password-based Authenticated Key Agreement (PAKA) is an important research topic in Authenticated Key Agreement (AKA) protocols. The authors analyzed a new protocol named three-party Round-Efficient Key Agreement (3REKA) and found that if the verification values were stolen or lost, an adversary could mount a man-in-the-middle attack. The result of this attack is serious: the adversary can establish two session keys with two different participants. This attack was described and an improved protocol, called Improved 3REKA (I-3REKA), was proposed in this paper. The analysis of security and performance shows that the proposed protocol realizes secure communication with lower computational cost.
    Cryptanalysis and improvement of two signcryption schemes
    Han FAN Shao-wu ZHANG
    2011, 31(08):  2196-2200.  DOI: 10.3724/SP.J.1087.2011.02196
    A certificateless signcryption scheme and a self-certified proxy signcryption scheme based on the Discrete Logarithm Problem (DLP) were analyzed. It was pointed out that, in the certificateless signcryption scheme, besides the Type I attack proposed by Selvi et al. (SELVI S S D, VIVEK S S, RANGAN C P. Security weaknesses in two certificateless signcryption schemes. http://eprint.iacr.org/2010/092.pdf), there was another forgery attack that could successfully forge a signcryption passing the verification procedure, and the scheme did not have public verifiability. In the self-certified proxy signcryption scheme based on the DLP, because of the existence of a suspending factor, any dishonest receiver could forge a signcryption passing the verification procedure. The attack methods and the corresponding improvements were presented. The results prove that the improved schemes are secure and effective, and overcome the flaws in the original schemes.
    Graphics and image technology
    Regularized GMRES method for image restoration
    Tao MIN Miao-miao ZHAO Yao CHENG
    2011, 31(08):  2201-2203.  DOI: 10.3724/SP.J.1087.2011.02201
    Dealing with the problem of restoring images degraded by linear spatial displacement of the imaging system, a fully orthogonal regularized GMRES method based on Krylov vectors was proposed. The proposed algorithm took into account the ill-posedness of image restoration and the complexity of the computation, and combined the regularization method with the generalized minimal residual (GMRES) algorithm. By introducing regularization, the discretized integral equation was transformed into a well-posed discrete problem, and the numerical solution was obtained by the generalized minimal residual algorithm. In the numerical simulation, different methods were compared. The experimental results show that the proposed method can significantly improve the quality of image restoration.
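    As a generic illustration of combining regularization with GMRES (not the complete-orthogonalization variant of the paper), the sketch below deblurs a 1-D signal by applying SciPy's GMRES solver to the Tikhonov-regularized normal equations (A^T A + lambda*I) x = A^T b; the blur kernel, noise level and regularization parameter are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
x_true = np.zeros(n)
x_true[60:90], x_true[120:140] = 1.0, 0.5              # piecewise test signal

# Ill-conditioned blurring operator A (Gaussian-shaped point-spread function) plus noise.
psf = np.exp(-0.5 * (np.arange(n) / 4.0) ** 2)
psf /= psf.sum()
A = toeplitz(psf)
b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(n)

lam = 1e-2
normal_eq = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + lam * v)
x_reg, info = gmres(normal_eq, A.T @ b, maxiter=500)

print("GMRES exit code:", info, " restoration error:", np.linalg.norm(x_reg - x_true))
```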
    Fractured surface segmentation of triangular mesh of fragments for solid reconstruction
    Qun-hui LI Ming-quan ZHOU Guo-hua GENG
    2011, 31(08):  2204-2205.  DOI: 10.3724/SP.J.1087.2011.02204
    A method that segmented fracture surfaces for automatic reassembly of broken 3D solids was presented. Firstly, the fragments were segmented into a set of surfaces bounded by sharp curves according to the angle of normal vectors of adjacent triangles. Then according to perturbation value and perturbation image of the normal vectors, after the second segmentation, surfaces were divided into the original surface and fractures. The experimental results show that the proposed method can distinguish fractured surfaces of complex fragment correctly and quickly.
    Image inpainting based on variable splitting and alternate minimization
    Su XIAO
    2011, 31(08):  2206-2209.  DOI: 10.3724/SP.J.1087.2011.02206
    An algorithm that uses variable splitting and alternate minimization to solve the l1-regularized optimization problem was proposed and applied to image inpainting. Based on variable splitting, the proposed algorithm decouples the l1 and l2 portions of the objective function, which reduces the l1-regularized problem to a sequence of unconstrained optimization problems. Alternate minimization is used to solve these unconstrained problems, with a projection algorithm accelerating and simplifying the solution procedure. Experiments with and without noise were carried out on images with 30% of the pixels missing. The experimental results demonstrate that the proposed algorithm can solve a series of image restoration problems including image inpainting, and compared with other similar algorithms it shows competitive speed and inpainting results.
    Equivalent proof of two 2-D cross entropy thresholding methods and their fast implementation
    Xin-ming ZHANG Zhen-yun LI Yan-bin ZHENG
    2011, 31(08):  2210-2213.  DOI: 10.3724/SP.J.1087.2011.02210
    The two-dimensional oblique segmentation maximum inter-class cross entropy (TOSMICE) method and the two-dimensional maximum cross entropy linear type (TMCELT) method are effective cross-entropy thresholding methods. To compare their segmentation results, their equivalence was discussed in this paper. First, the two methods were analyzed: despite their different names, their underlying segmentation principles were proved to be alike. Then the formulae were deduced to obtain the simplest formula, the equivalence of the two methods was proved, and a recursive algorithm for the formula based on 2-D histogram oblique segmentation was derived. Finally, the features of the 2-D histogram and the algorithm were combined to obtain a novel recursive algorithm. The experimental results show that the two methods yield equal thresholds and that the proposed recursive algorithm is much faster than the current method based on 2-D oblique segmentation.
    C-V model with H1 regular term
    Shao-hua ZHANG
    2011, 31(08):  2214-2216.  DOI: 10.3724/SP.J.1087.2011.02214
    The Chan-Vese (C-V) model (CHAN T F, VESE L A. Active contours without edges. IEEE Transactions on Image Processing, 2001, 10(2): 266-277) is one of the well-known region-based image segmentation models. It is much less sensitive to the initialization of the contours and to noise; however, the range of images it can segment is not extensive enough. Therefore, by combining theoretical analysis with experiment and adding an H1 regular term to the C-V model, the algorithm was improved. A novel energy functional for image segmentation was proposed, and a region-based adaptive interpolation-fitting active contour model in a Partial Differential Equation (PDE) formulation was deduced. The experimental results show that the improved model can segment some images to which the C-V model is not applicable, is less sensitive to flexible initialization contours and is significantly less sensitive to noise.
    Image de-noising method based on multi-feature combination and weighted support vector machine
    Yan FU Ning NING
    2011, 31(08):  2217-2220.  DOI: 10.3724/SP.J.1087.2011.02217
    Asbtract ( )   PDF (665KB) ( )  
    References | Related Articles | Metrics
    The authors put forward an image de-noising method that combines multiple features with a weighted Support Vector Machine (SVM), building on SVM-based image de-noising. Firstly, according to the correlation between adjacent pixels and the characteristics of salt-and-pepper noise, multiple features were extracted from the noisy image. The noise points were then detected by a weighted SVM classifier, which improves classification on imbalanced datasets; Support Vector Regression (SVR) was subsequently used to predict the gray values of the noise points, and finally the image was reconstructed so as to remove them. The experimental results show that the proposed method improves the capability of the classifier and the recognition rate of noise points. Moreover, it retains image edge information while removing noise points and obtains a higher Peak Signal-to-Noise Ratio (PSNR).
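    A hedged sketch of such a pipeline, using scikit-learn's class_weight option as a stand-in for the paper's weighted SVM; the feature set, training procedure and parameters are illustrative assumptions.
        import numpy as np
        from scipy.ndimage import median_filter, uniform_filter
        from sklearn.svm import SVC, SVR

        def pixel_features(img):
            """Per-pixel features: gray value, 3x3 median, absolute deviation from the
            median, and the fraction of extreme (0/255) pixels in the neighbourhood."""
            img = img.astype(float)
            med = median_filter(img, size=3)
            extreme = ((img == 0) | (img == 255)).astype(float)
            frac = uniform_filter(extreme, size=3)
            return np.stack([img, med, np.abs(img - med), frac], axis=-1).reshape(-1, 4)

        # class_weight='balanced' plays the role of the weighted SVM: noise pixels are
        # the minority class, so their misclassification is penalised more heavily.
        detector  = SVC(kernel='rbf', class_weight='balanced')
        estimator = SVR(kernel='rbf')

        def denoise(noisy, train_noisy, train_clean):
            X_train = pixel_features(train_noisy)
            is_noise_train = (train_noisy != train_clean).ravel().astype(int)   # 1 = noise pixel
            detector.fit(X_train, is_noise_train)
            estimator.fit(X_train[is_noise_train == 1],
                          train_clean.ravel()[is_noise_train == 1])
            X = pixel_features(noisy)
            is_noise = detector.predict(X).astype(bool)
            out = noisy.astype(float).ravel()
            if is_noise.any():
                out[is_noise] = estimator.predict(X[is_noise])    # SVR restores gray values
            return out.reshape(noisy.shape)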
    Volume rendering acceleration method based on optimal bricking for large volume data
    Wei PENG Jian-xi LI Bin YAN Li TONG Jian CHEN Shi-yong GUAN
    2011, 31(08):  2221-2224.  DOI: 10.3724/SP.J.1087.2011.02221
    Asbtract ( )   PDF (828KB) ( )  
    References | Related Articles | Metrics
    GPU-based volume rendering has become an active research area in volume visualization. Large volume data cannot be uploaded to the GPU directly because of its limited memory, which has become a bottleneck for GPU applications. Bricking can solve this problem while maintaining the quality of the original volume-rendered image; however, the data exchange over the graphics bus is time consuming and degrades rendering performance. To address these difficulties, an optimal bricking of the large volume data was computed by establishing an optimal bricking model; in addition, a 3D texture named the node code texture was constructed and the distance template was improved to accelerate octree-based bricking volume rendering. The experimental results illustrate that the proposed method significantly accelerates bricking-based volume rendering for large volume data.
    Mixed interpolation algorithm integrating B-spline filter and generalized partial volume estimation in image registration
    Shun-bo HU
    2011, 31(08):  2225-2228.  DOI: 10.3724/SP.J.1087.2011.02225
    Asbtract ( )   PDF (639KB) ( )  
    References | Related Articles | Metrics
    Many false maxima are generated by B-spline Generalized Partial Volume Estimation (GPVE) or by the B-spline filter, but the two have opposite dispersion effects on the joint histogram of the images being registered. By integrating a B-spline filter and B-spline GPVE of the same order, a mixed interpolation algorithm was proposed. With the registered images rigidly aligned, the smoothness and the number of maxima of the normalized mutual information curves were compared for several interpolation algorithms, including the order 1, 3 and 5 B-spline filters, B-spline GPVE, and the new mixed interpolation algorithms. The experimental results show that the mixed interpolation algorithms of order n outperform the corresponding B-spline filter and B-spline GPVE in image registration.
    Image segmentation based on fast converging loopy belief propagation algorithm
    Sheng-jun XU Xin LIU Liang ZHAO
    2011, 31(08):  2229-2231.  DOI: 10.3724/SP.J.1087.2011.02229
    Asbtract ( )   PDF (682KB) ( )  
    References | Related Articles | Metrics
    Heavy computation and a high mis-classification rate are two disadvantages of the Loopy Belief Propagation (LBP) algorithm for image segmentation. A fast image segmentation method based on the LBP algorithm was therefore proposed. First, a local-region Gibbs energy model was built. Then the region messages were propagated by the LBP algorithm, with an efficient speed-up technique used to improve its running speed. Finally, the segmentation result was obtained by the Maximum A Posteriori (MAP) criterion on the local-region Gibbs energy. The experimental results show that the proposed algorithm not only obtains more accurate segmentation results, especially for noisy or textured images, but also runs faster.
    No-reference video quality evaluation model with variable weights
    Hai-feng WANG
    2011, 31(08):  2232-2235.  DOI: 10.3724/SP.J.1087.2011.02232
    Asbtract ( )   PDF (711KB) ( )  
    References | Related Articles | Metrics
    Video transmitted over a network may suffer from impairments introduced by packet losses, so video quality monitoring is an important task. A no-reference evaluation method can automatically predict video quality without imposing any extra load on the transmission channel. To improve the accuracy of no-reference video quality evaluation, a new variable-weight evaluation model was proposed. The model includes video features in both the spatial and the temporal domains, and it adjusts its weights in real time by using the motion vectors. The evaluation model achieves good linear correlation, with a simple correlation coefficient of 0.85. The experimental results show that the variable-weight evaluation method works better than the fixed-weight one, and that it has lower computational complexity and higher application value than other similar approaches.
    Detection of copy-move forgery image based on fractal and statistics
    Mei-hong LIU Wei-hong XU
    2011, 31(08):  2236-2239.  DOI: 10.3724/SP.J.1087.2011.02236
    Asbtract ( )   PDF (830KB) ( )  
    References | Related Articles | Metrics
    Most existing detection algorithms for image copy-move forgery cannot effectively detect sequential mixed forgeries with regional duplication, so a new detection method based on fractal and statistical features was proposed. The presented method first divided an image into overlapping blocks, and from each block a feature vector composed of a fractal dimension and three statistical values was extracted. All feature vectors were then lexicographically sorted. Finally, the forged regions were localized by means of the blocks' location information and the Euclidean distance between feature vectors. The proposed method can detect not only traditional copy-move forgery but also multi-region forgeries in which the copied regions have undergone rotation, flipping, or a mixture of these operations. The method is also robust to tampered images subjected to attacks such as Gaussian blurring, contrast adjustment and brightness adjustment. The experimental results show the validity of the method.
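    A compact sketch of the block-matching pipeline; the paper's fractal dimension and statistics are replaced here by simple block statistics, and the block size and thresholds are illustrative.
        import numpy as np

        def detect_copy_move(img, block=16, feat_tol=0.5, min_shift=24):
            """Overlapping-block matching for copy-move detection.
            A feature row per block is sorted lexicographically so that similar
            blocks become neighbours; pairs that are far apart spatially but
            close in feature space are reported as candidate duplications."""
            h, w = img.shape
            feats, coords = [], []
            for i in range(h - block + 1):
                for j in range(w - block + 1):
                    b = img[i:i+block, j:j+block].astype(float)
                    feats.append([b.mean(), b.std(),
                                  b[:block//2].mean(), b[:, :block//2].mean()])
                    coords.append((i, j))
            feats, coords = np.array(feats), np.array(coords, dtype=float)
            order = np.lexsort(feats.T[::-1])          # lexicographic sort of feature rows
            matches = []
            for a, b in zip(order[:-1], order[1:]):    # compare adjacent rows only
                if np.linalg.norm(feats[a] - feats[b]) < feat_tol and \
                   np.linalg.norm(coords[a] - coords[b]) > min_shift:
                    matches.append((tuple(coords[a]), tuple(coords[b])))
            return matches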
    Adaptive light radiation intensity estimation based on variable kernel
    Hai-bo WANG Wen-hui ZHANG Hui-hua YANG Huan CHEN
    2011, 31(08):  2240-2242.  DOI: 10.3724/SP.J.1087.2011.02240
    Asbtract ( )   PDF (633KB) ( )  
    References | Related Articles | Metrics
    The conventional light radiation intensity estimation with the K-Nearest Neighbor (K-NN) algorithm can only be improved by increasing the photon density. The authors replaced the K-NN algorithm with a Variable Kernel (VK) method that inherits the properties of a smoothing kernel, and estimated the light radiation intensity adaptively for each surface point by computing the ratio of the radius assigned to each photon to the distance between the photon and the surface point. The experimental results show that the VK algorithm is faster than the K-NN algorithm and can improve image quality without shooting a large number of photons.
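    A hedged sketch of a variable-kernel density estimate over gathered photons; the Epanechnikov kernel, per-photon radii and normalisation are assumptions about how such an estimator could look, not the paper's exact formulation.
        import numpy as np

        def radiance_vk(point, photon_pos, photon_power, photon_radius):
            """Variable-kernel estimate of irradiance at a surface point.
            Each photon carries its own smoothing radius; its contribution is
            weighted by an area-normalised Epanechnikov kernel of the ratio
            (distance to the surface point) / (photon radius)."""
            d = np.linalg.norm(photon_pos - point, axis=1)
            u = d / photon_radius
            w = np.where(u < 1.0, 2.0 / np.pi * (1.0 - u**2), 0.0)   # integrates to 1 over the unit disk
            return np.sum(w / photon_radius**2 * photon_power)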
    Filtering of ground point cloud based on scanning line and self-adaptive angle-limitation algorithm
    Jie GUO Jian-yong LIU You-liang ZHANG Yu ZHU
    2011, 31(08):  2243-2245.  DOI: 10.3724/SP.J.1087.2011.02243
    Asbtract ( )   PDF (451KB) ( )  
    References | Related Articles | Metrics
    Concerning the filtering of trees, buildings and other above-ground objects in field terrain reverse engineering, the disadvantages of the conventional angle-limitation algorithm were analyzed: it accumulates errors or relies on a single threshold, and thus cannot handle undulating terrain. Therefore, a self-adaptive angle-limitation algorithm based on scan lines was put forward. The method limits the angle formed by the scanning center, a reference point (a known ground point) and the point to be classified, with the limit adapting to the undulating terrain. The filtered point cloud was then optimized with a moving-window curve fitting method. The experimental results prove that the proposed algorithm gives sound control of the macro-terrain and filters point clouds of undulating terrain much better.
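    A minimal sketch of an adaptive angle test on a single scan line. Here the elevation angle from the last accepted ground point to the candidate is used as a simplified stand-in for the scanner-centred angle in the paper, and the adaptation rule and thresholds are illustrative assumptions.
        import numpy as np

        def filter_scanline(points, base_angle_deg=15.0, adapt=1.5):
            """Classify points on one scan line as ground / object.
            points : (N, 3) array ordered along the line; columns are x, y, z.
            The allowed angle grows with the slope of recently accepted ground
            points, so gently undulating terrain is not filtered away."""
            ground = [0]                       # assume the first point on the line is ground
            recent_slopes = [0.0]
            for k in range(1, len(points)):
                ref, cur = points[ground[-1]], points[k]
                run = np.hypot(cur[0] - ref[0], cur[1] - ref[1])
                if run == 0:
                    continue
                angle = np.degrees(np.arctan2(cur[2] - ref[2], run))
                limit = base_angle_deg + adapt * abs(np.mean(recent_slopes[-10:]))
                if abs(angle) <= limit:        # accept as ground, update local slope memory
                    ground.append(k)
                    recent_slopes.append(angle)
            return ground                      # indices of ground points on this scan line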
    Improved algorithm of feather image segmentation based on active contour model
    Hong-jiang LIU Ren-huang WANG Xue-cong LI
    2011, 31(08):  2246-2248.  DOI: 10.3724/SP.J.1087.2011.02246
    Asbtract ( )   PDF (655KB) ( )  
    References | Related Articles | Metrics
    Using an active contour model to extract the central shaft of a feather is affected by other strong edges, and the computation is heavy. According to the characteristics of feathers, a method of describing the object contour by its centerline and width was proposed. The two-dimensional contour in the model was converted into two independent one-dimensional functions, and the energy function of the model was modified accordingly. The improved algorithm exploits symmetry to avoid interference from strong edges, reduces the scale of computation, and achieves fully automatic segmentation. The experimental results show that the improved algorithm is robust to noise; it segments feather images well and meets industrial needs.
    Segmentation of microscopic images based on image patch classifier and conditional random field
    Wei YANG Shu-heng ZHANG Lian-yun WANG Su ZHANG
    2011, 31(08):  2249-2252.  DOI: 10.3724/SP.J.1087.2011.02249
    Asbtract ( )   PDF (611KB) ( )  
    References | Related Articles | Metrics
    An automatic segmentation method for pollen microscopic images was proposed, which is useful for developing a recognition system for airborne pollen. First, an image patch classifier was trained with normalized color components. Then, a conditional random field was employed to model the pollen images, and Maximum A Posteriori (MAP) estimation, optimized with the graph cut algorithm, was used to segment the pollen regions in the microscopic images. In experiments on 133 images, the average mean distance error was 7.3 pixels and the true positive rate was 87%. The experimental results show that the image patch classifier and the conditional random field model can accomplish segmentation of pollen microscopic images.
    Typical applications
    Semantic matching mechanism based on algebraic expression in workflow integration
    Jun QI Yue-ju ZHANG Tao WANG
    2011, 31(08):  2253-2257.  DOI: 10.3724/SP.J.1087.2011.02253
    Asbtract ( )   PDF (711KB) ( )  
    References | Related Articles | Metrics
    Concerning the low precision and recall of function matching in workflow integration research, the authors implemented a matching mechanism based on the formal semantics of the exact pre/post match pattern, and proposed matching principles based on algebraic expressions in high-level programming languages. A specific algorithm was presented, and an example was given to analyze and illustrate it. The proposed algorithm is suitable for function matching in workflow integration; being founded on a strict formal method, it can be conveniently analyzed and verified with mathematical tools. Its limitation is that it is restricted to elementary algebraic expressions.
    Service similarity checking based on event for Internet of things
    Chuan XIE Fang WANG
    2011, 31(08):  2258-2260.  DOI: 10.3724/SP.J.1087.2011.02258
    Asbtract ( )   PDF (409KB) ( )  
    References | Related Articles | Metrics
    To detect redundant services in the Internet of Things (IoT) and save resources, a novel similarity calculation model based on a service-event class diagram was proposed, which checks redundancy by means of the relations between events and services. The context of the IoT and the service types were analyzed, and an event-based service similarity measure was obtained from this model. Using this similarity calculation, a static service redundancy detection algorithm was proposed to remove duplicate function invocations of services, which saves the system resources occupied by services and decreases resource consumption in the IoT.
    Design and application of middleware for Web full-text retrieval
    Wei-gang ZHANG Yong-dong XU Xiao-qiang LEI Hui HE
    2011, 31(08):  2261-2264.  DOI: 10.3724/SP.J.1087.2011.02261
    Asbtract ( )   PDF (609KB) ( )  
    References | Related Articles | Metrics
    To provide better Web search services, the key techniques of full-text retrieval were studied and a middleware was designed and implemented. A multi-threaded website crawler collected the Web pages of the given URLs, and the Bloom filter algorithm was employed to eliminate large numbers of duplicate URLs from the collected pages. A new content extraction approach based on Web tags was presented to extract the full text of Web pages for indexing and searching; the experimental results verify the efficiency of this extraction method. Furthermore, to improve the users' search experience, the middleware provides several personalized search aids. Boso, a blog search engine, was developed to test and verify the presented middleware, and the results show that it can be applied to actual search engines.
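    A self-contained sketch of Bloom-filter URL de-duplication as it could be used in such a crawler; the bit-array size, hash count and class name are illustrative.
        import hashlib

        class BloomFilter:
            """Minimal Bloom filter for dropping duplicate URLs during crawling.
            k independent hashes are simulated by salting one cryptographic hash."""
            def __init__(self, n_bits=1 << 24, n_hashes=7):
                self.n_bits, self.n_hashes = n_bits, n_hashes
                self.bits = bytearray(n_bits // 8)

            def _positions(self, url):
                for salt in range(self.n_hashes):
                    h = hashlib.md5(f"{salt}:{url}".encode()).hexdigest()
                    yield int(h, 16) % self.n_bits

            def seen(self, url):
                """Return True if the URL was (probably) added before, then record it."""
                pos = list(self._positions(url))
                hit = all((self.bits[p // 8] >> (p % 8)) & 1 for p in pos)
                for p in pos:
                    self.bits[p // 8] |= 1 << (p % 8)
                return hit

        # usage: schedule a URL only when bf.seen(url) is False; false positives are
        # possible but false negatives are not, so no URL is crawled twice.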
    Design and implementation of hybrid index mechanism for real-time database
    Bo LIU Shi-ming FAN Hua LIU
    2011, 31(08):  2265-2269.  DOI: 10.3724/SP.J.1087.2011.02265
    Asbtract ( )   PDF (886KB) ( )  
    References | Related Articles | Metrics
    In satellite ground equipment monitoring, massive real-time data must be stored in a database and records must be queried in real time. Taking into account the characteristics of real-time data and of the Judy array, a bitmap memory allocation method based on memory-mapped files was proposed, and a hybrid index mechanism combining a Hash table, a B+ tree and a Judy array was designed. Experiments on inserting and querying massive numbers of records show that the bitmap allocation method avoids the generation of massive tiny memory holes, and that, combined with it, the hybrid index mechanism provides real-time index insertion and record querying for applications.
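    A toy sketch of the bitmap allocation idea: a pre-allocated pool of fixed-size blocks whose occupancy is tracked in a bitmap, so frequent insert/delete cycles cannot fragment the heap into tiny holes. Block size, pool size and names are illustrative assumptions.
        class BitmapPool:
            """Fixed-size block pool with a bitmap recording free (0) / used (1) blocks."""
            def __init__(self, n_blocks=1024, block_size=64):
                self.block_size = block_size
                self.bitmap = bytearray(n_blocks // 8)
                self.store = bytearray(n_blocks * block_size)

            def alloc(self):
                """Find the first free block, mark it used and return its index."""
                for i in range(len(self.bitmap) * 8):
                    if not (self.bitmap[i // 8] >> (i % 8)) & 1:
                        self.bitmap[i // 8] |= 1 << (i % 8)
                        return i
                raise MemoryError("pool exhausted")

            def free(self, idx):
                """Clear the bit so the block can be reused without touching the heap."""
                self.bitmap[idx // 8] &= ~(1 << (idx % 8))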
    Embedded gas detection system and its image analysis algorithm
    Xiao-gang LUO De-nuan WANG Xing-hong BAI
    2011, 31(08):  2270-2274.  DOI: 10.3724/SP.J.1087.2011.02270
    Asbtract ( )   PDF (1006KB) ( )  
    References | Related Articles | Metrics
    A fast gas detection system and its image analysis algorithm, based on an embedded platform and porphyrin sensor arrays, were designed to overcome the deficiencies of traditional gas detection methods. The system first captures images of the porphyrin sensor array before and after contact with the gas, using a camera with a USB interface. The color change of each porphyrin spot is then obtained with the image processing algorithm, and finally the type and concentration of the unknown gas are recognized with a pattern recognition algorithm. The design of the system structure and its software functions was also presented, together with the image processing algorithm for the porphyrin sensor array images and the pattern recognition used to match records in the standard database. Extensive tests were carried out on ammonia and other gases, and the experimental results show that the system and its analysis algorithm can identify the type and concentration of a gas.
    Solution to complex container loading problem based on ant colony algorithm
    Li-ning DU De-zhen ZHANG Shi-feng CHEN
    2011, 31(08):  2275-2278.  DOI: 10.3724/SP.J.1087.2011.02275
    Asbtract ( )   PDF (687KB) ( )  
    References | Related Articles | Metrics
    For the complex Container Loading Problem (CLP), an optimal loading plan based on heuristic information and the ant colony algorithm was proposed. Firstly, a mathematical model was formulated. Exploiting the strong search ability, inherent parallelism and scalability of the ant colony algorithm, the proposed algorithm was combined with a triple-tree structure to split the layout space in turn, and three-dimensional rectangular objects of different sizes were then placed into the layout space under the given constraints. An ant colony algorithm based on spatial partitioning was designed to find the optimal loading procedure. Finally, an example in which 700 pieces of goods were loaded into a 40-foot (12.025 m) high-cube container was calculated. The experimental results show that the proposed method can increase container utilization and has strong practicality.
    Management domain division algorithm based on analytic hierarchy process in space information network
    Li LIN De-jun GUANG
    2011, 31(08):  2279-2281.  DOI: 10.3724/SP.J.1087.2011.02279
    Asbtract ( )   PDF (606KB) ( )  
    References | Related Articles | Metrics
    A space information network is an intelligent system constituted by the information systems of land, sea, air and space. An algorithm for management domain division was proposed to meet the scalability requirements of network management in space information networks. An organizational model of management was designed; the Analytic Hierarchy Process (AHP) was used to select sub-management stations, from which appropriate management domains were formed. In the domain maintenance phase, the functions of a sub-management station were migrated by mobile Agents, and dynamic maintenance mechanisms such as domain merger/partition, reaffiliation and adaptive adjustment of the information update period were also designed. The overhead and the time spent on division and on collecting management data were analyzed through simulations. The experimental results show that the proposed management domain division algorithm is suitable for space information networks.
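    A small sketch of the AHP step that could be used to rank candidate sub-management stations; the criteria and the 1-9 pairwise judgements in the example are purely illustrative.
        import numpy as np

        RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}   # random consistency indices

        def ahp_weights(pairwise):
            """Weights and consistency ratio from an AHP pairwise comparison matrix.
            The principal eigenvector gives the criterion weights; a consistency
            ratio below 0.1 means the judgements are acceptably consistent."""
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            vals, vecs = np.linalg.eig(A)
            k = np.argmax(vals.real)
            w = np.abs(vecs[:, k].real)
            w /= w.sum()
            lam_max = vals[k].real
            ci = (lam_max - n) / (n - 1) if n > 2 else 0.0
            cr = ci / RI[n] if RI.get(n, 0) else 0.0
            return w, cr

        # Example: weigh three hypothetical criteria (e.g. residual energy,
        # link stability, load) from illustrative 1-9 scale judgements.
        w, cr = ahp_weights([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])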
    Parameter estimation of complex moving target based on micro-Doppler analysis
    Guang-feng CHEN Lin-rang ZHANG Gao-gao LIU Chun WANG
    2011, 31(08):  2282-2285.  DOI: 10.3724/SP.J.1087.2011.02282
    Asbtract ( )   PDF (597KB) ( )  
    References | Related Articles | Metrics
    The micro-Doppler signature produced by micro-motion contains movement and structure information, which is useful for radar classification and recognition. In this paper, a complex micro-motion scattering point with both rotation and acceleration was modeled. Based on a quantitative analysis of the micro-Doppler modulation, the acceleration, rotational frequency and rotational radius were estimated by a peak extraction method, which extracts the maxima of the time-frequency analysis matrix along the frequency axis, combined with least-squares straight-line fitting. Finally, the simulation results verify the correctness of the theoretical analysis and the validity of the parameter estimation.
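    A minimal sketch of the ridge-extraction and line-fitting step for the acceleration term (the rotational frequency and radius would be estimated from the residual sinusoidal component, which is omitted here); the array layout and names are assumptions.
        import numpy as np

        def estimate_acceleration(tf_matrix, times, freqs, wavelength):
            """tf_matrix : (n_freqs, n_times) time-frequency magnitude map.
            At each time instant the frequency of the peak is extracted, and a
            straight line is fitted to the ridge by least squares.  For a uniformly
            accelerating scatterer the Doppler ridge is linear and its slope gives
            the radial acceleration a = slope * wavelength / 2 (from f_d = 2v/lambda)."""
            ridge = freqs[np.argmax(np.abs(tf_matrix), axis=0)]   # peak along the frequency axis
            slope, intercept = np.polyfit(times, ridge, deg=1)    # least-squares line fit
            return slope * wavelength / 2.0, ridge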
    Design of cement calcination process controller based on dual heuristic programming algorithm
    Bao-sheng YANG Xiu-shui MA
    2011, 31(08):  2286-2288.  DOI: 10.3724/SP.J.1087.2011.02286
    Asbtract ( )   PDF (619KB) ( )  
    References | Related Articles | Metrics
    Because of the multiple variables, disturbances, nonlinearity and other properties of the cement clinker calcination process, it is very difficult to establish an accurate model of the cement kiln system, and actual production depends heavily on the experience of the operators. An error Back-Propagation (BP) neural network was used to establish a model of the calcination system, and a controller for the kiln was designed based on Dual Heuristic Programming (DHP). The DHP critic network outputs the partial derivative of the cost function J with respect to the state to obtain an optimal or sub-optimal control signal, while the action network outputs the control actions that drive the system along the desired trajectory. The simulation results show that the controller has a faster response time and less overshoot, features that contribute to the stable operation of the real system.
    Innovation extrapolation method for GPS/SINS tightly coupled system
    Guo-rong HUANG Xing-zhao PENG Chuang GUO Hong-bing CHENG
    2011, 31(08):  2289-2292.  DOI: 10.3724/SP.J.1087.2011.02289
    Asbtract ( )   PDF (530KB) ( )  
    References | Related Articles | Metrics
    Integrity is a critical parameter for a Global Positioning System (GPS)/Strapdown Inertial Navigation System (SINS) tightly coupled system. To reduce the time needed to detect satellite failures, an innovation extrapolation method based on the innovation test method was proposed. By processing the innovations produced during the extrapolation, the test statistic used for failure detection was formed. Applying the proposed method to a GPS/SINS tightly coupled system, the simulation results show that the innovation extrapolation method detects slowly growing failures faster than the innovation test method and suppresses the effect of outliers on failure detection.
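    For reference, a sketch of the baseline innovation chi-square test that the extrapolation method builds on (the extrapolated variant accumulates innovations over a prediction window, which is not shown); names and the false-alarm rate are illustrative.
        import numpy as np
        from scipy.stats import chi2

        def innovation_test(z, z_pred, H, P_pred, R, p_false_alarm=1e-3):
            """Standard innovation (residual) test: lambda = r^T S^{-1} r follows a
            chi-square law with dim(z) degrees of freedom when no failure is present.
            z, z_pred : measurement and its prediction;  H : measurement matrix;
            P_pred : predicted state covariance;  R : measurement noise covariance."""
            r = z - z_pred                              # innovation
            S = H @ P_pred @ H.T + R                    # innovation covariance
            stat = float(r.T @ np.linalg.solve(S, r))
            threshold = chi2.ppf(1.0 - p_false_alarm, df=len(z))
            return stat > threshold, stat               # True means a failure is declared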
    Trajectory tracking control of three-wheeled mobile robot
    Guo-liang ZHANG Lei AN Wen-jun TANG
    2011, 31(08):  2293-2296.  DOI: 10.3724/SP.J.1087.2011.02293
    Asbtract ( )   PDF (526KB) ( )  
    References | Related Articles | Metrics
    To address the unsmooth motion of a three-wheeled mobile robot during trajectory tracking control, a kinematic model of the mobile robot with its motion constraints was established. Based on the differential equations describing the robot's position and orientation errors, a trajectory tracking controller using backstepping and time-varying state feedback was designed. The stability analysis of the controller proves that it guarantees the uniform asymptotic stability of the closed-loop system. The simulation results verify the correctness of the kinematic model and the effectiveness of the trajectory tracking controller.
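    A hedged sketch of a classic backstepping-style tracking law for the unicycle kinematic model x' = v cos(theta), y' = v sin(theta), theta' = w; the gains and the exact form of the Lyapunov-derived law in the paper may differ.
        import numpy as np

        def tracking_control(pose, ref_pose, v_r, w_r, kx=1.0, ky=8.0, kth=3.0):
            """One control step.  pose/ref_pose = (x, y, theta) of the robot and the
            reference; v_r, w_r are the reference linear and angular velocities."""
            x, y, th = pose
            xr, yr, thr = ref_pose
            # tracking error expressed in the robot frame
            ex =  np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
            ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
            eth = thr - th
            # time-varying state feedback derived from a Lyapunov argument
            v = v_r * np.cos(eth) + kx * ex
            w = w_r + v_r * (ky * ey + kth * np.sin(eth))
            return v, w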
    Implementation and optimization of speech enhancement algorithm under soft-decision modification for digital signal processor
    Chao-fan BAN Xiao-ming LIU Yu TIAN
    2011, 31(08):  2297-2300.  DOI: 10.3724/SP.J.1087.2011.02297
    Asbtract ( )   PDF (619KB) ( )  
    References | Related Articles | Metrics
    The authors proposed a short-time spectral amplitude estimation approach based on speech absence probability and the masking properties of the human auditory system, to improve the performance of speech enhancement in low Signal-to-Noise Ratio (SNR) environments. Meanwhile, the hardware design and the algorithm optimization on a TMS320C5502 digital signal processor embedded system were introduced. The system tests show that the hardware platform works stably and reliably, that the optimization significantly improves processing speed, and that the output signal achieves a good balance between noise reduction and speech distortion.