
Table of Contents

    01 October 2014, Volume 34 Issue 10
    Network and communications
    Hierarchical networks-on-chip routing algorithm based on source region path selection
    HAN Guodong KONG Feng SHEN Jianliang
    2014, 34(10):  2761-2765.  DOI: 10.11772/j.issn.1001-9081.2014.10.2761

    To facilitate communication between remote and adjacent nodes on large-scale Networks-on-Chip (NoC), a hierarchical Cluster-like Hierarchical Mesh (CHM) topology based on region partition was proposed. Correspondingly, to avoid performance degradation caused by severe network congestion near the intermediate nodes, an adaptive source region path selection algorithm was elaborated. According to the region characteristic of CHM, the routing decision was made in the source region rather than at the source node only, and adaptive routing node pairs were distinguished between bottom and upper node pairs, which enhanced the route selection performance for those node pairs and thus alleviated the congestion. The experimental results show that, compared with the shortest path algorithm, the proposed algorithm can increase the saturation injection rate by up to 51% and 31% under synthetic and local traffic patterns respectively, an effective improvement in network throughput.

    Clustered data collection framework based on time series prediction model
    WANG Zhenglu WANG Jun CHENG Yong
    2014, 34(10):  2766-2770.  DOI: 10.11772/j.issn.1001-9081.2014.10.2766

    Due to the spatio-temporal continuity of physical attributes such as temperature and illumination, high spatio-temporal correlation exists among the sensed data in a high-density Wireless Sensor Network (WSN). The data redundancy produced by this correlation burdens network communication and shortens the network's lifetime. A Clustered Data Collection Framework (CDCF) based on a prediction model was proposed to exploit the data correlation and reduce network traffic. The framework included a time series prediction model based on the least-squares curve fitting method and an efficient error control strategy. In the process of data collection, the clustered structure exploited the spatial correlation, while the time series prediction model captured the temporal correlation in the sensed data. The simulation results show that, in a relatively stable environment, CDCF uses only 10%-20% of the raw data volume to complete data collection, and the error of the data restored at the sink is less than the user-defined threshold.
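
    The transmit-suppression idea behind the prediction model can be sketched in a few lines of Python (a minimal illustration under assumed names, not the authors' code: a plain linear least-squares fit stands in for the paper's curve-fitting model, and `needs_update` plays the role of the error control strategy):

```python
def fit_linear(ts, ys):
    """Closed-form ordinary least squares fit of y = a + b*t."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    sxy = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    sxx = sum((t - mt) ** 2 for t in ts)
    b = sxy / sxx
    return my - b * mt, b

def needs_update(history, new_t, new_y, threshold):
    """A sensor transmits only when the model's prediction error
    exceeds the user-defined threshold; otherwise the sink can
    reconstruct the reading from the model alone."""
    ts, ys = zip(*history)
    a, b = fit_linear(ts, ys)
    return abs((a + b * new_t) - new_y) > threshold
```

    The error bound seen at the sink is exactly the threshold passed to `needs_update`, which matches the framework's guarantee that restored data stay within the user-defined error.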

    Multi-round vote location verification mechanism based on weight and difference value in vehicular Ad Hoc network
    WANG Xueyin FENG Jianguo CHEN Jiawei ZHANG Fang XUE Xiaoping
    2014, 34(10):  2771-2776.  DOI: 10.11772/j.issn.1001-9081.2014.10.2771

    To solve the problem of location verification under collusion attack in Vehicular Ad Hoc NETworks (VANET), a multi-round vote location verification mechanism based on weight and difference value was proposed. In the mechanism, a static frame was introduced and the Beacon message format was redesigned to reduce the delay of location verification. By introducing a malicious-vehicle filtering process, the position in a specific region was voted on by neighbors with different degrees of trust, which yielded credible position verification. The experimental results illustrate that, in the case of collusion attack, the scheme achieves a higher accuracy of 93.4% compared to the Minimum Mean Square Estimation (MMSE) based location verification mechanism.
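
    The trust-weighted voting step can be illustrated as follows (a hypothetical sketch: the function names, the distance tolerance and the weighted-majority rule are assumptions, not the paper's exact mechanism):

```python
import math

def verify_position(claimed, neighbor_obs, trust, tol=5.0):
    """Trust-weighted vote on a claimed 2-D position.

    neighbor_obs: list of (neighbor_position, measured_distance) pairs;
    trust: one weight per neighbor. A neighbor votes 'yes' when the
    distance implied by the claimed position matches its own range
    measurement within tol; the claim is accepted on a weighted majority.
    """
    yes = 0.0
    for (pos, dist), w in zip(neighbor_obs, trust):
        implied = math.dist(claimed, pos)
        if abs(implied - dist) <= tol:
            yes += w
    return yes > 0.5 * sum(trust)
```

    Down-weighting low-trust neighbors is what blunts collusion: a clique of lying vehicles contributes little weight once the filtering process has lowered their trust.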

    Range-based localization algorithm with virtual force in wireless sensor and actor network
    WANG Haoyun WANG Ke LI Duo ZHANG Maolin XU Huanliang
    2014, 34(10):  2777-2781.  DOI: 10.11772/j.issn.1001-9081.2014.10.2777

    To solve the sensor node localization problem in Wireless Sensor and Actor Networks (WSAN), a range-based localization algorithm with virtual force was proposed, in which mobile actor nodes were used instead of Wireless Sensor Network (WSN) anchors, and Time Of Arrival (TOA) ranging was combined with virtual force. In this algorithm, the actor nodes were driven by virtual force to move close to the sensor node that sent a location request, and node localization was completed by calculating inter-node distances from the signal transmission time. The simulation results show that the localization success rate of the proposed algorithm is improved by 20%, while the average localization time and cost are lower than those of the traditional TOA algorithm. It is applicable to real-time scenarios with a small number of actor nodes.
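
    The TOA ranging and position calculation can be sketched as below (a minimal 2-D illustration with three reference nodes; the names and the linearized solver are assumptions, not the authors' implementation):

```python
C = 3e8  # propagation speed (radio, m/s); an assumption about the medium

def toa_distance(t_flight):
    """Range estimate from one-way signal flight time (the TOA step)."""
    return C * t_flight

def trilaterate(anchors, dists):
    """Linearized 2-D trilateration: subtracting the first range equation
    from the other two leaves a 2x2 linear system, solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 - x1**2) + (y2**2 - y1**2) - (d2**2 - d1**2)
    b2 = (x3**2 - x1**2) + (y3**2 - y1**2) - (d3**2 - d1**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

    In the paper's setting the three reference positions would be successive poses of the virtual-force-driven actor nodes rather than fixed anchors.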

    Dynamic spectrum access mechanism of multi-users based on restless multi-armed bandit model in cognitive networks
    ZHU Jiang HAN Chao YANG Jielei PENG Zhuxun
    2014, 34(10):  2782-2786.  DOI: 10.11772/j.issn.1001-9081.2014.10.2782

    Based on the Restless Multi-Armed Bandit (RMAB) model, a novel dynamic spectrum access mechanism was proposed for the problem of coordinating multiple users accessing multiple idle channels. Firstly, considering that channel sensing errors of cognitive users exist in practical networks, a Whittle index policy that deals with sensing errors effectively was derived. In this policy, each user maintained a belief value for every channel based on accumulated historical experience, and chose the channel to sense and access by weighing the immediate and future rewards implied by those belief values. Secondly, a multi-bid auction algorithm was used to resolve collisions among secondary users selecting the same channels, improving spectrum utilization. The simulation results demonstrate that, in the same environment, cognitive users with the proposed mechanism achieve higher throughput than mechanisms that ignore sensing errors or omit the multi-bid auction.

    Novel blind frequency offset estimation algorithm in orthogonal frequency division multiplexing system based on particle swarm optimization
    YANG Zhaoyang YANG Xiaopeng LI Teng YAO Kun NI
    2014, 34(10):  2787-2790.  DOI: 10.11772/j.issn.1001-9081.2014.10.2787

    To estimate the frequency offset in Orthogonal Frequency Division Multiplexing (OFDM) systems, a novel blind frequency offset estimation algorithm based on Particle Swarm Optimization (PSO) was proposed. Firstly, the mathematical model and cost function were designed according to the principle of minimizing the reconstruction error between the reconstructed signal and the signal actually received. The strong stochastic, parallel, global search capability of PSO was then used to minimize the cost function and obtain the frequency offset estimate. Two inertia weight strategies for the PSO algorithm, constant coefficient and differential descending, were simulated, and comparisons were made with the minimum output variance and golden section methods. The simulation results show that the proposed algorithm achieves high accuracy, about one order of magnitude higher than similar algorithms at the same Signal-to-Noise Ratio (SNR), and it is not restricted by modulation type, with a frequency estimation range of (-0.5, 0.5).
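
    The PSO search that minimizes the cost function can be sketched generically (a standard constant-inertia PSO under assumed parameter names; the OFDM signal model and the actual reconstruction-error cost are omitted, so any cost function over the offset range can be plugged in):

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Canonical PSO: each particle's velocity mixes inertia (w) with
    cognitive (c1, toward its own best) and social (c2, toward the swarm
    best) pulls; positions are clamped to bounds. Returns the best
    position found and its cost."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

    For the blind estimator, `dim` would be 1 and `bounds` the offset range (-0.5, 0.5), with `cost` measuring the reconstruction error of a candidate offset.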

    Power allocation algorithm in cognitive orthogonal frequency division multiplexing system based on interference temperature limit
    LAI Xiaojun SONG Guanghua YANG Bowei
    2014, 34(10):  2791-2795.  DOI: 10.11772/j.issn.1001-9081.2014.10.2791

    In cognitive Orthogonal Frequency Division Multiplexing (OFDM) systems, the transmission power of Cognitive Users (CU) needs to be controlled and allocated to avoid interfering with Primary Users (PU). Since existing methods cannot allocate transmission power properly or improve the data transmission rate effectively, a power allocation algorithm based on double-factor binary search optimization was proposed on the basis of the traditional water-filling algorithm. The presented algorithm took the interference temperature limit on the cognitive user channel into account. Firstly, a surplus function was introduced under the total power constraint. Secondly, exploiting the monotonicity of the surplus function, accurate values of the Lagrangian multipliers were obtained through a double binary search iteration. Finally, power was allocated to the sub-channels according to the Lagrangian multiplier values. The simulation results show that the proposed algorithm can effectively use the spectrum holes between primary users. The data transmission rate of the cognitive users is maximized under both the total power constraint and the Interference Temperature (IT) constraint, and approaches that of the traditional water-filling algorithm. Compared with the total power average control algorithm and the interference temperature average control algorithm, the data transmission rate of the presented algorithm is obviously higher, exceeding them by about 4×10^5 b/s under the same circumstances. Moreover, the algorithm requires less processing time and exhibits good robustness.
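
    The binary search on the multiplier can be illustrated for the single-constraint case (a classical water-filling sketch with assumed names; the paper's double binary search additionally handles the interference temperature constraint, which is omitted here):

```python
def waterfill(noise, P, tol=1e-9):
    """Classical water-filling: allocate p_i = max(0, mu - n_i) so that
    sum(p_i) = P. Total allocated power is monotone increasing in the
    water level mu, so mu is found by bisection."""
    lo, hi = min(noise), max(noise) + P  # bracket for the water level
    while hi - lo > tol:
        mu = (lo + hi) / 2
        total = sum(max(0.0, mu - n) for n in noise)
        if total > P:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - n) for n in noise]
```

    Channels whose noise level sits above the water level receive zero power, which is how spectrum holes with good conditions end up carrying most of the rate.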

    Simple efficient bit-flipping decoding algorithm for low density parity check code
    ZHANG Gaoyuan WEN Hong LI Tengfei SONG Huanhuan
    2014, 34(10):  2796-2799.  DOI: 10.11772/j.issn.1001-9081.2014.10.2796

    To improve the efficiency of Bit Flipping (BF) decoding, a weighted gradient descent bit-flipping decoding algorithm based on average magnitude was proposed for Low Density Parity Check (LDPC) codes. The average magnitude of the information nodes was first introduced as the reliability of the parity checks and used to weight the bipolar syndrome, yielding an effective bit-flipping function. Simulations were conducted at a Bit-Error Rate (BER) of 10^(-5) over an Additive White Gaussian Noise (AWGN) channel; coding gains of 0.08 dB and 0.29 dB were achieved over the conventional weighted Gradient Descent Bit-Flipping (GDBF) and Reliability Ratio based Weighted Gradient Descent Bit-Flipping (RRWGDBF) algorithms, while the average number of decoding iterations was reduced by 72.6% and 9.3% respectively. The simulation results show that the improved algorithm outperforms the conventional algorithms while also reducing the average number of decoding iterations, so the new scheme better balances error-correcting ability, decoding complexity and delay, and can be applied to high-speed communication systems with strict real-time requirements.
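
    For orientation, the plain (hard-decision) bit flipping that this line of work improves upon can be sketched as follows (hypothetical names; the paper's weighted gradient-descent flipping function is not reproduced):

```python
def bf_decode(H, r, max_iter=50):
    """Plain Gallager-style bit flipping for a binary code with
    parity-check matrix H (list of 0/1 rows) and hard-decision word r:
    while the syndrome is nonzero, flip the bit that participates in
    the most unsatisfied checks."""
    x = r[:]
    m, n = len(H), len(r)
    for _ in range(max_iter):
        syndrome = [sum(h[j] & x[j] for j in range(n)) % 2 for h in H]
        if not any(syndrome):
            return x  # all parity checks satisfied
        fails = [sum(s for h, s in zip(H, syndrome) if h[j])
                 for j in range(n)]
        x[max(range(n), key=lambda j: fails[j])] ^= 1
    return x
```

    Weighted variants such as GDBF replace the integer count `fails` with a real-valued flipping function built from channel reliabilities, which is where the paper's average-magnitude weighting enters.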

    Low-cost cloud storage scheme based on hybrid strategy
    LI Songtao JIN Xin
    2014, 34(10):  2800-2805.  DOI: 10.11772/j.issn.1001-9081.2014.10.2800

    To ensure high data availability in cloud storage systems, reduce the cost of data storage and bandwidth, and shorten the time for accessing data objects, a new scheme named Cache A Replica On Modification (CAROM) was proposed. It combined traditional replication strategies with erasure coding strategies to improve the flexibility and efficiency of cloud file systems. In addition, to balance cache size against efficiency, an adaptive method based on the convexity of the overall cost function was proposed to select the cache size. A large-scale evaluation using a real-world file system was conducted: CAROM outperformed replication-based schemes in storage cost by up to 60% and erasure-coding schemes in bandwidth cost by up to 43%, while maintaining access latencies close to those of replication-based schemes. The results indicate that, while preserving the consistency semantics of current cloud file systems, CAROM provides low bandwidth cost, low storage cost and low access latencies.

    Data management based on Hadoop for power geographic information system
    LIN Biying WANG Yanping
    2014, 34(10):  2806-2811.  DOI: 10.11772/j.issn.1001-9081.2014.10.2806

    Considering that the traditional power Geographic Information System (GIS) is limited in storage, computing and scalability, cloud computing was applied to the power GIS field and a solution using the Hadoop cloud platform to store and manage massive GIS data was proposed. After analyzing the characteristics of power GIS data, a data storage strategy combining a relational database with a non-relational database was proposed. Based on this strategy, the architecture of power GIS management on Hadoop was presented, and the data model and parallel data analysis based on MapReduce were designed. Finally, a number of experiments, including spatial analysis and operation data queries in single-machine and cluster environments, were carried out to compare and validate the performance. The experimental results show that the average time of data analysis and query declines by over 30% once the data volume reaches a certain amount. The proposed scheme has obvious advantages for large-scale data, with high efficiency and good feasibility.

    Community detection algorithm based on clustering granulation
    ZHAO Shu WANG Ke CHEN Jie ZHANG Yanping
    2014, 34(10):  2812-2815.  DOI: 10.11772/j.issn.1001-9081.2014.10.2812

    To balance the time complexity and accuracy of community detection in complex networks, a Community Detection Algorithm based on Clustering Granulation (CGCDA) was proposed. Granules were regarded as communities, so granulating a network amounts to partitioning it into communities. Firstly, each node in the network was regarded as an original granule, and the granule set was obtained by the initial granulation operation. Secondly, granules in this set that satisfied the granulation coefficient were merged by the clustering granulation operation, and the process continued until no granules in the set satisfied the granulation coefficient. Finally, nodes overlapping several granules were regarded as isolated points and merged into the corresponding granules by a neighbor-node voting algorithm, completing the community partition of the network. The Newman Fast Algorithm (NFA), Label Propagation Algorithm (LPA) and CGCDA were implemented on four benchmark datasets. The experimental results show that CGCDA achieves modularity 7.6% higher than LPA and takes 96% less time than NFA on average. CGCDA thus combines lower time complexity with higher modularity, striking a balance between the time complexity and accuracy of community detection; compared with NFA and LPA, its overall performance is better.

    Global convergence analysis of evolutionary algorithm based on state-space model
    WANG Dingxiang LI Maojun LI Xue CHENG Li
    2014, 34(10):  2816-2819.  DOI: 10.11772/j.issn.1001-9081.2014.10.2816

    The Evolutionary Algorithm based on a State-space model (SEA) is a new evolutionary algorithm using real-coded strings, with broad application prospects in engineering optimization. The global convergence of SEA was analyzed via homogeneous finite Markov chains to enrich the theoretical foundation of SEA and promote its application to engineering optimization, and it was proved that SEA is not globally convergent. A Modified Elastic Evolutionary Algorithm based on the State-space model (MESEA) was then presented by limiting the value ranges of elements in the state evolution matrix of SEA and introducing elastic search. The analytical results show that the search efficiency of SEA can be enhanced by introducing elastic search. MESEA is proved to be globally convergent, which provides a theoretical basis for applying the algorithm to engineering optimization problems.

    MWARM-SRCCCI: an efficient algorithm for mining matrix-weighted positive and negative association rules
    ZHOU Xiumei HUANG Mingxuan
    2014, 34(10):  2820-2826.  DOI: 10.11772/j.issn.1001-9081.2014.10.2820

    In view of the deficiency that existing weighted association rule mining algorithms cannot handle matrix-weighted data, a new itemset pruning strategy and an evaluation framework for matrix-weighted association patterns, SRCCCI (Support-Relevancy-Correlation Coefficient-Confidence-Interest), were first introduced, and then a novel mining algorithm, MWARM-SRCCCI (Matrix-Weighted Association Rules Mining based on SRCCCI), was proposed for mining matrix-weighted positive and negative patterns in databases. Using the new pruning technique and pattern evaluation standard, the algorithm overcame the defects of existing mining techniques, mined valid matrix-weighted positive and negative association rules, and avoided generating ineffective and uninteresting patterns. On the Chinese Web test dataset CWT200g (Chinese Web Test collection with 200GB web Pages), MWARM-SRCCCI reduced mining time by up to 74.74% compared with existing non-weighted positive and negative association rule mining algorithms. The theoretical analysis and experimental results show that the proposed algorithm has a better pruning effect, reducing the number of candidate itemsets and the mining time and improving mining efficiency markedly, and that its association patterns can provide reliable query expansion terms for information retrieval.

    Parameter estimation of Richards model and algorithm effectiveness based on particle swarm optimization algorithm
    YAN Zhen'gang HU Henian LI Guang
    2014, 34(10):  2827-2830.  DOI: 10.11772/j.issn.1001-9081.2014.10.2827

    Aiming at the practical difficulty of estimating the Richards model parameters, the parameter estimation problem was formulated as a multi-dimensional unconstrained function optimization problem. Using actual glutamic acid growth concentration data, in the Matlab 2012b environment, a least-squares fitness function was established and minimized by the Particle Swarm Optimization (PSO) algorithm to estimate the four parameters of the Richards model, from which the growth curve and the optimum curve were established. To further verify the effectiveness of the algorithm, PSO was compared with traditional parameter estimation methods such as the four-point method and the Genetic Algorithm (GA), using the correlation index and the residual standard deviation as evaluation indices. The results show that the PSO algorithm fits the Richards model better and has good applicability for parameter estimation.
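
    The least-squares fitness that a PSO or GA would minimize can be sketched as below (one common Richards parameterization is assumed; the paper may use a different form, and all names are hypothetical):

```python
import math

def richards(t, A, k, tm, v):
    """One common Richards growth curve (an assumed parameterization):
    y(t) = A * (1 + v * exp(k * (tm - t)))**(-1/v),
    where A is the asymptote, k the growth rate, tm the inflection
    time and v the shape parameter (v = 1 recovers the logistic)."""
    return A * (1.0 + v * math.exp(k * (tm - t))) ** (-1.0 / v)

def sse(params, data):
    """Sum-of-squared-errors fitness over (t, y) observations; this is
    the objective a swarm would minimize over (A, k, tm, v)."""
    A, k, tm, v = params
    return sum((richards(t, A, k, tm, v) - y) ** 2 for t, y in data)
```

    Casting the fit as minimizing `sse` is exactly what turns parameter estimation into the multi-dimensional unconstrained optimization problem the abstract describes.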

    Integral attack on SNAKE(2) block cipher
    GUAN Xiang YANG Xiaoyuan WEI Yuechuan LIU Longfei
    2014, 34(10):  2831-2833. 

    At present, security analysis of the SNAKE algorithm mainly concerns interpolation attacks and impossible differential attacks. This paper evaluated the security of the SNAKE(2) block cipher against integral attack. Based on the idea of higher-order integral attack, an 8-round distinguisher was designed. Using the distinguisher, integral attacks were mounted on 9- and 10-round SNAKE(2). The attack results show that the 10-round SNAKE(2) block cipher is not immune to integral attack.

    Secure identity-based proxy signcryption scheme in standard model
    MING Yang FENG Jie HU Qijun
    2014, 34(10):  2834-2839. 

    Concerning the security of proxy signcryption in practice, and motivated by Gu's proxy signature scheme (GU K, JIA W J, JIANG C L. Efficient identity-based proxy signature in the standard model. The Computer Journal, 2013: bxt132), a new secure identity-based proxy signcryption scheme in the standard model was proposed. Proxy signcryption allows the original signcrypter to delegate its signcryption authority to a proxy signcrypter, so that the latter can generate ciphertext on behalf of the former. By combining the functionalities of identity-based signcryption and proxy signatures, the new scheme has both the advantages of an identity-based signcryption scheme and the functions of a proxy signature scheme. Analysis shows that, under the Diffie-Hellman assumption, the proposed scheme is confidential and unforgeable. Compared with the known scheme, it requires 2 pairing computations in proxy key generation and 1 pairing computation in proxy signcryption, so it has higher computational efficiency.

    Video steganography algorithm based on invariant histogram of motion vector
    GUO Chaojiang ZHANG Minqing NIU Ke
    2014, 34(10):  2840-2843.  DOI: 10.11772/j.issn.1001-9081.2014.10.2840

    To solve the problem that some video steganography algorithms based on Motion Vectors (MV) change the statistical features of the MV histogram, a new video steganography algorithm that preserves the MV histogram was proposed. The secret information was hidden in the video MVs using a histogram-preserving data mapping. At the same time, a code-matching method was used to encode the secret information before embedding, so that the embedded data stream has the same statistical characteristics as the MVs, making the scheme theoretically secure. The experimental results show that the change of histogram features is effectively controlled, the bitrate increase is kept within 1%, and the steganalysis detection rate is decreased by 30% to 50% on average.

    Artificial intelligence
    Complete coverage path planning algorithm for mobile robot: progress and prospect
    JIAN Yi ZHANG Yue
    2014, 34(10):  2844-2849.  DOI: 10.11772/j.issn.1001-9081.2014.10.2844

    Complete coverage path planning algorithms for a single mobile robot were classified into three kinds: approaches based on potential field grids, on cellular decomposition, and on transformation between local and global transforms. Their performances were analyzed, their advantages and disadvantages pointed out, and the improved methods discussed. In addition, complete coverage path planning algorithms for multiple mobile robots, which combine single-robot path planning with task allocation, were investigated. Finally, further research directions for complete coverage algorithms were discussed. The analysis shows that making full use of the complementary advantages of current algorithms, or drawing on multiple disciplines, may lead to more efficient complete coverage algorithms for mobile robots.

    Path planning for mobile robots based on improved shuffled frog leaping algorithm
    PAN Guibing PAN Feng LIU Guodong
    2014, 34(10):  2850-2853.  DOI: 10.11772/j.issn.1001-9081.2014.10.2850

    To solve the problems that Shuffled Frog Leaping Algorithm (SFLA) path planning easily falls into local optima and its optimization effect is poor, an improved SFLA was proposed. Euclidean distance and the best frog of the whole population were added into the original algorithm's update strategy, and a method to generate a new individual with an adjustable control parameter replaced the original random update operation. The robot path planning problem was first transformed into a minimization problem, and the fitness of a frog was defined from the positions of the target and obstacles in the environment. The robot successively selected and moved to the position of the globally best frog in each iteration to realize optimal path planning. In the mobile robot simulation experiments, compared with other algorithms, the number of successes of the improved algorithm increased from 82 to 98, and the path planning time decreased from 9.7s to 5.3s. The experimental results show that the improved algorithm has stronger security and better optimization performance.

    Collaborative filtering recommendation based on tags of scenic spots
    SHI Yifan WEN Yimin CAI Guoyong MIU Yuqing
    2014, 34(10):  2854-2858.  DOI: 10.11772/j.issn.1001-9081.2014.10.2854

    In user-based collaborative filtering based on social relations, the ratings for some target items cannot be predicted. Moreover, in traditional item-based collaborative filtering, some items are not in the same class as the target item and are unsuitable as references for predicting ratings. To handle these problems, two new collaborative filtering recommendation algorithms were proposed, in which the type tags of scenic spots were introduced to compute the similarity between two scenic spots. The experimental results on a dataset of scenic spot ratings show that, compared with user-based collaborative filtering based on social relations, the algorithm based on social relations and tags increases accuracy and coverage by 10% and 4% respectively; compared with item-based collaborative filtering, the algorithm based on items and tags increases accuracy by 15%. This shows that introducing the type tags of scenic spots makes the computation of similarity between two scenic spots more accurate.
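
    The tag-based similarity and the resulting item-based prediction can be sketched as follows (Jaccard similarity over tag sets is an assumed stand-in for the paper's measure, and all names are hypothetical):

```python
def tag_similarity(tags_a, tags_b):
    """Jaccard similarity between two spots' tag sets."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_rating(user_ratings, spot_tags, target):
    """Item-based prediction: a tag-similarity-weighted mean of the
    user's ratings on other spots. Spots sharing no tags with the
    target contribute nothing, which filters out 'different-class'
    items automatically."""
    num = den = 0.0
    for spot, r in user_ratings.items():
        s = tag_similarity(spot_tags[spot], spot_tags[target])
        num += s * r
        den += s
    return num / den if den else None
```

    Returning `None` when no rated spot shares a tag with the target makes the coverage limitation of pure item-based filtering explicit.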

    Summary extraction of news comments based on weighed textual matrix factorization and information entropy model
    GUO Yujing JI Donghong
    2014, 34(10):  2859-2864.  DOI: 10.11772/j.issn.1001-9081.2014.10.2859

    This paper addressed the selection of the most interesting and useful comments for an online news article. For the news comment summary extraction problem, a new method combining Weighed Textual Matrix Factorization (WTMF) and information entropy was introduced and proved effective for the automatic extraction of social media comments. Information about tweets and news was modeled by a heterogeneous-graph-based WTMF model, which solved the sparsity problem of short texts and preserved similarity information. Meanwhile, according to the character distribution of tweets, binary entropy and continuous entropy were built to guarantee the diversity of information. Finally, exploiting submodularity, a greedy algorithm was designed to obtain an approximately optimal solution to the optimization problem. The experimental results show that combining WTMF and information entropy effectively improves comment summary extraction for social media. The recall rate and F1 value on ROUGE-2 reach 0.40074 and 0.27330 respectively, increases of 0.05 and 0.03 over the Latent Dirichlet Allocation (LDA) extended model, the Biterm Topic Model (BTM). The proposed model effectively improves the quality of news comment summaries.
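
    The entropy term and the greedy submodular selection can be illustrated generically (hypothetical names; the paper's WTMF-based objective is not reproduced, so the test exercises the greedy routine with a simple coverage gain):

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p); defined as 0 at the endpoints.
    Higher entropy marks a more informative/diverse distribution."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def greedy_select(candidates, gain, k):
    """Generic greedy maximization of a (sub)modular objective:
    repeatedly add the candidate with the largest marginal gain.
    For monotone submodular gains this is the classic (1 - 1/e)
    approximation to the optimum."""
    chosen, pool = [], list(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda c: gain(chosen, c))
        chosen.append(best)
        pool.remove(best)
    return chosen
```

    In the paper's setting, `gain` would score a comment by its WTMF-based relevance plus the entropy-driven diversity it adds to the already chosen summary.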

    Extracting method of emergency news headline and text from webpages
    LUO Yonglian ZHAO Changyuan
    2014, 34(10):  2865-2868.  DOI: 10.11772/j.issn.1001-9081.2014.10.2865

    Concerning the processing of emergency news webpage corpora, a news content extracting and locating method based on the characteristics of emergency news and webpage tags was proposed. Taking webpage tags and text similarity as machine learning features, the method extracted news headlines with a Bayes method. Meanwhile, it reduced the text processing load and the dimensionality of text vectors by exploiting the stability of emergency news vocabulary and the nesting of webpage tags, and then computed vector similarity to locate the beginning and end of the news text. The experimental results show that this method extracts news headlines with an accuracy rate of 86.5% and news texts with an average accuracy rate of more than 78%. The proposed method is effective and efficient, and offers a reference for mining webpage tags and the text's own information on webpages.

    Emotion classification with feature extraction based on part of speech tagging sequences in micro blog
    LU Weisheng GUO Gongde CHEN Lifei
    2014, 34(10):  2869-2873.  DOI: 10.11772/j.issn.1001-9081.2014.10.2869

    Traditional n-gram feature extraction tends to produce a high-dimensional feature vector. High-dimensional data not only increases the difficulty of classification but also increases classification time. Aiming at this problem, a feature extraction method based on Part-of-Speech (POS) tagging sequences was presented. The principle of this method is to use POS sequences as text features to reduce the feature dimension, exploiting the property that a POS sequence can represent a class of texts. In the experiments, compared with n-gram feature extraction, the POS-sequence-based feature extraction improved classification accuracy by at least 9% and reduced the dimensionality by 4816. The experimental results show that the method is suitable for emotion classification in micro blogs.
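
    The core of POS-sequence feature extraction can be sketched in a few lines (an assumed illustration; the tagset, n-gram order and names are not from the paper):

```python
def pos_features(tagged_tokens, n=2):
    """Map a POS-tagged sentence, given as (word, tag) pairs, to POS
    n-gram features. Discarding the words bounds the feature space by
    the (small) tagset rather than the vocabulary, which is what
    shrinks the dimensionality relative to word n-grams."""
    tags = [tag for _, tag in tagged_tokens]
    return ["_".join(tags[i:i + n]) for i in range(len(tags) - n + 1)]
```

    Two sentences with the same syntactic shape but different words map to identical features, which is the "POS sequence represents a class of texts" property the abstract relies on.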

    Evolutionary multi-objective optimization algorithm with hybrid scatter search
    WU Kunan YAN Xuanhui CHEN Zhenxing BAI Meng
    2014, 34(10):  2874-2879.  DOI: 10.11772/j.issn.1001-9081.2014.10.2874

    The diversity of the population, the search capability and the robustness are three keys to multi-objective optimization, directly affecting the convergence of the algorithm and the spread of the solution set. To better address these issues, a Scatter Search hybrid Multi-Objective Evolutionary optimization Algorithm (SSMOEA) was proposed. SSMOEA followed the scatter search structure but designed a new diversity-oriented selection strategy and integrated a co-evolution method into the subset generation process. Additionally, a novel adaptive multi-crossover operation was employed to improve the self-adaptability and robustness of the algorithm. The experimental results on twelve standard benchmark problems show that, compared with three state-of-the-art multi-objective optimizers, SPEA2, NSGA-Ⅱ and AbYSS, SSMOEA outperforms them in coverage, uniformity and approximation, while its robustness is also significantly improved.

    Hybrid decomposition and strength Pareto multi-objective evolutionary algorithm
    QIU Xingxing ZHANG Zhenzhen WEI Qiming
    2014, 34(10):  2880-2885.  DOI: 10.11772/j.issn.1001-9081.2014.10.2880
    Abstract ( )   PDF (866KB) ( )
    References | Related Articles | Metrics

    In multi-objective evolutionary optimization, the Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D), which uses a decomposition strategy, has low computational complexity, while the Strength Pareto Evolutionary Algorithm-2 (SPEA2), which uses a strength Pareto strategy, can generate uniformly distributed solution sets. By combining these two strategies, a novel multi-objective evolutionary algorithm was proposed for solving Multi-objective Optimization Problems (MOP) with complex and discontinuous Pareto fronts. In the proposed algorithm, the decomposition strategy was first used to make the solution set quickly approach the Pareto front; then the strength Pareto strategy was used to distribute the solution set uniformly along the Pareto front, and the weight vector set of the decomposition strategy was rearranged based on this solution set, so that the weight vectors adapted to the particular Pareto front; finally, the decomposition strategy was used again to make the solution set approach the Pareto front further. In terms of the Inverted Generational Distance (IGD) metric, the novel algorithm was compared with three state-of-the-art algorithms, MOEA/D, SPEA2 and paλ-MOEA/D, on twelve benchmark problems. The experimental results indicate that the performance of the proposed algorithm is optimal on seven benchmark problems and nearly optimal on the other five. Moreover, the algorithm can generate uniformly distributed solution sets whether the Pareto fronts of the MOP are simple or complex, continuous or discontinuous.

    Hybrid fireworks explosion optimization algorithm using elite opposition-based learning
    WANG Peichong GAO Wenchao QIAN Xu GOU Haiyan WANG Shenwen
    2014, 34(10):  2886-2890.  DOI: 10.11772/j.issn.1001-9081.2014.10.2886
    Abstract ( )   PDF (719KB) ( )
    References | Related Articles | Metrics

    Concerning the problem that the Fireworks Explosion Optimization (FEO) algorithm tends to converge prematurely and has low solution precision, an elite Opposition-Based Learning (OBL) strategy was proposed. In every iteration, OBL was applied to the current best individual to generate an opposition population within its dynamic search boundaries, thus guiding the search space of the algorithm toward the optimum region. This mechanism helps to improve the balance between exploration and exploitation in FEO. To keep the diversity of the population, the sudden-jump probability of each individual relative to the current best individual was calculated, and based on it, a roulette mechanism was adopted to choose the individuals entering the child population. Experimental simulations on five classical benchmark functions show that, compared with the related algorithms, the improved algorithm has a higher convergence rate and accuracy for numerical optimization, and it is suitable for solving high-dimensional optimization problems.
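
The opposition step and the roulette selection described above can be sketched as follows; the bounds and probabilities are illustrative, whereas the paper computes the dynamic boundaries and sudden-jump probabilities from the population:

```python
import random

def elite_opposition(elite, lows, highs):
    """Opposite point of the elite inside the current dynamic bounds:
    x'_j = low_j + high_j - x_j (classic opposition-based learning)."""
    return [lo + hi - x for x, lo, hi in zip(elite, lows, highs)]

def roulette_select(population, probs):
    """Roulette-wheel selection driven by per-individual probabilities
    (standing in here for the sudden-jump probabilities)."""
    total = sum(probs)
    r = random.uniform(0, total)
    acc = 0.0
    for ind, p in zip(population, probs):
        acc += p
        if acc >= r:
            return ind
    return population[-1]

elite = [2.0, -1.0]
opp = elite_opposition(elite, lows=[-5.0, -5.0], highs=[5.0, 5.0])
print(opp)  # [-2.0, 1.0]
```

The opposite point mirrors the elite across the centre of the current search box, so evaluating both sides of the box costs one extra evaluation per iteration.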

    Graph embedding method integrated multiscale features
    LI Zhijie LI Changhua YAO Peng LIU Xin
    2014, 34(10):  2891-2894.  DOI: 10.11772/j.issn.1001-9081.2014.10.2891
    Abstract ( )   PDF (797KB) ( )
    References | Related Articles | Metrics

    In the domain of structural pattern recognition, the existing graph embedding methods lack versatility and have high computational complexity. A new graph embedding method integrating multiscale features based on space syntax theory was proposed to solve this problem. It extracted global, local and detail features, and constructed a feature vector depicting the graph by multiscale histograms. The global features included the vertex number, edge number and intelligibility degree. The local features referred to the node topological feature, the edge-domain feature dissimilarity and the edge topological feature dissimilarity. The detail features comprised numerical and symbolic attributes on vertices and edges. In this way, structural pattern recognition was converted into statistical pattern recognition, so that a Support Vector Machine (SVM) could be applied to graph classification. The experimental results show that the proposed graph embedding method achieves higher classification accuracy on different graph datasets. Compared with other graph embedding methods, the proposed method can adequately render the graph's topology and merge non-topological features in terms of the graph's domain properties, and it has favorable universality and low computational complexity.

    Application of kernel parameter discriminant method in kernel principal component analysis
    ZHANG Cheng LI Na LI Yuan PANG Yujun
    2014, 34(10):  2895-2898.  DOI: 10.11772/j.issn.1001-9081.2014.10.2895
    Abstract ( )   PDF (549KB) ( )
    References | Related Articles | Metrics

    Aiming at the selection of the Gaussian kernel parameter (β) in Kernel Principal Component Analysis (KPCA), a kernel parameter discriminant method was proposed. It calculated the within-class and between-class kernel window widths of the training samples, and determined the kernel parameter by applying the discriminant method to these window widths. The kernel matrix based on the discriminantly selected kernel parameter could exactly describe the structural characteristics of the training space. Finally, Principal Component Analysis (PCA) was used to decompose the feature space, and the principal components were obtained to realize dimensionality reduction and feature extraction. The discriminant method chose a smaller window width in dense regions of the classification and a larger one in sparse regions. Simulations on a numerical process and the Tennessee Eastman Process (TEP) using the Discriminated Kernel Principal Component Analysis (Dis-KPCA) method, compared with KPCA and PCA, show that Dis-KPCA is effective in reducing the dimension of the sample data and separates the three classes of data 100%, therefore the proposed method has higher dimension-reduction precision.
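
The abstract does not give the exact discriminant formula, so the following is only a rough sketch of the idea: estimate within-class and between-class scales from pairwise distances and pick a Gaussian kernel width between the two (the geometric mean used here is an assumption, not the paper's rule):

```python
import math

def pairwise_mean(pts):
    """Mean Euclidean distance over all point pairs within one class."""
    d, n = 0.0, 0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d += math.dist(pts[i], pts[j])
            n += 1
    return d / n

def cross_mean(p, q):
    """Mean Euclidean distance over all cross-class point pairs."""
    return sum(math.dist(a, b) for a in p for b in q) / (len(p) * len(q))

def gaussian_kernel(a, b, beta):
    return math.exp(-math.dist(a, b) ** 2 / (2 * beta ** 2))

class_a = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]   # a dense class
class_b = [(3.0, 3.0), (4.0, 4.0), (3.0, 4.0)]   # a sparser class

within = (pairwise_mean(class_a) + pairwise_mean(class_b)) / 2
between = cross_mean(class_a, class_b)
beta = math.sqrt(within * between)  # a width between the two scales
print(round(within, 3), round(between, 3), round(beta, 3))
```

A width chosen this way is small enough to resolve the dense class and large enough not to fragment the sparse one, which is the trade-off the discriminant method targets.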

    Fuzzy rule extraction based on genetic algorithm
    GUO Yiwen LI Jun GENG Linxiao
    2014, 34(10):  2899-2903.  DOI: 10.11772/j.issn.1001-9081.2014.10.2899
    Abstract ( )   PDF (765KB) ( )
    References | Related Articles | Metrics

    To avoid the limitations of traditional fuzzy rules based on Genetic Algorithm (GA), a calculation method for fuzzy control rules containing weight coefficients was presented. GA was used to find the best weight coefficients for calculating the fuzzy rules. In this method, different weight coefficients could be provided according to different input levels, and the correlation and symmetry of the weight coefficients could be used to assess all the fuzzy rules and then reduce the influence of invalid rules. The performance comparison experiments show that the system composed of these fuzzy rules has small overshoot, short adjustment time and practical applicability in fuzzy control. The experiments with different stimulus signals show that the system composed of these fuzzy rules does not rely on the stimulus signal, while having a good tracking effect and strong robustness.

    Nonlinear modeling of power amplifier based on improved radial basis function networks
    LI Ling LIU Taijun YE Yan LIN Wentao
    2014, 34(10):  2904-2907.  DOI: 10.11772/j.issn.1001-9081.2014.10.2904
    Abstract ( )   PDF (535KB) ( )
    References | Related Articles | Metrics

    Aiming at the nonlinear modeling of a Power Amplifier (PA), an improved Radial Basis Function Neural Network (RBFNN) model was proposed. Firstly, time-delayed cross terms and output feedback were added to the input. The parameters (weights and centers) of the proposed model were extracted using the Orthogonal Least Squares (OLS) algorithm. Then a Doherty PA was trained and validated successfully with a 15MHz three-carrier Wideband Code Division Multiple Access (WCDMA) signal, and the Normalized Mean Square Error (NMSE) reached -45dB. Finally, an inverse class-F power amplifier was used to test the universality of the model. The simulation results show that the model can fit the characteristics of power amplifiers more accurately.

    Software reliability growth model based on self-adaptive step cuckoo search algorithm fuzzy neural network
    LIU Luo GUO Lihong
    2014, 34(10):  2908-2912.  DOI: 10.11772/j.issn.1001-9081.2014.10.2908
    Abstract ( )   PDF (736KB) ( )
    References | Related Articles | Metrics

    Concerning the poor applicability and fluctuating prediction accuracy of existing Software Reliability Growth Models (SRGM), this paper proposed a model based on a Fuzzy Neural Network (FNN) combined with a self-Adaptive Step Cuckoo Search (ASCS) algorithm. The weights and thresholds of the FNN were optimized by the ASCS algorithm, and then the FNN was used to establish the SRGM. Software defect data were used in the FNN's training process, the weights and thresholds of the FNN were adjusted by ASCS, and the prediction accuracy was improved correspondingly; at the same time, to reduce the fluctuation of the FNN's predictions, an averaging method was used to process the predicted results. On this basis, an SRGM was established by the self-Adaptive Step Cuckoo Search algorithm-Fuzzy Neural Network (ASCS-FNN). On three groups of software defect data, taking Average Error (AE) and Sum of Squared Errors (SSE) as measurements, the one-step-forward predictive ability of the SRGM established by ASCS-FNN was compared with that of the SRGMs established by Simulated Annealing-Dynamic Fuzzy Neural Network (SA-DFNN), FNN and Back Propagation Network (BPN). The simulation results confirm that, relative to the SRGMs based on SA-DFNN, FNN and BPN, the mean Relative Increase (RI) of prediction accuracy of the ASCS-FNN model is -1.48%, 54.8% and 33.8% for RI(AE), and 14.4%, 76% and 35.9% for RI(SSE). The prediction of the SRGM established by ASCS-FNN is steadier than that of the SRGMs established by FNN and BPN, and the network structure of ASCS-FNN is much simpler than that of SA-DFNN, so the SRGM established by ASCS-FNN has high prediction accuracy, prediction stability and some adaptability.

    Survey on image holistic scene understanding based on probabilistic graphical model
    LI Lin LIAN Jin WU Yue YE Mao
    2014, 34(10):  2913-2921.  DOI: 10.11772/j.issn.1001-9081.2014.10.2913
    Abstract ( )   PDF (1472KB) ( )
    References | Related Articles | Metrics

    In recent years, computer image understanding has had wide and profound applications in intelligent transportation, satellite remote sensing, machine vision, medical image analysis, Internet image search, etc. As its extension, image holistic scene understanding is more complex and integrated than the basic image scene understanding tasks. In this paper, the basic framework of image understanding, the research implications and value, and typical models for image holistic scene understanding were summarized. Four typical holistic scene understanding models were introduced, and their model frameworks were thoroughly compared. Finally, some research deficiencies and future directions in image holistic scene understanding were presented, offering new insights for further research in this area.

    Delaunay-based non-uniform sampling for noisy point cloud
    LI Guojun LI Zongchun HOU Dongxing
    2014, 34(10):  2922-2924.  DOI: 10.11772/j.issn.1001-9081.2014.10.2922
    Abstract ( )   PDF (581KB) ( )
    References | Related Articles | Metrics

    To satisfy the ε-sample condition of Delaunay-based triangulation surface reconstruction algorithms, a Delaunay-based non-uniform sampling algorithm for noisy point clouds was proposed. Firstly, the surface Medial Axis (MA) was approximated by the negative poles computed from the Voronoi vertices of the k-nearest neighbors. Secondly, the Local Feature Size (LFS) of the surface was estimated with the approximated medial axis. Finally, combined with the Bound Cocone algorithm, the unwanted interior points were removed. Experiments show that the new algorithm can simplify noisy point clouds accurately and robustly while preserving the boundary features well. The simplified point clouds are suitable for Delaunay-based triangulation surface reconstruction algorithms.

    No-reference Gaussian image quality assessment based on wavelet high frequency structural similarity
    HUANG Xiaosheng YAN Hao CAO Yiqin
    2014, 34(10):  2925-2929.  DOI: 10.11772/j.issn.1001-9081.2014.10.2925
    Abstract ( )   PDF (811KB) ( )
    References | Related Articles | Metrics

    Aiming at the high computation and application difficulty of traditional no-reference image assessment methods, a simple and direct no-reference Gaussian image quality assessment algorithm based on wavelet high-frequency Structural SIMilarity (SSIM) was proposed. The algorithm exploited the fact that the similarity among a natural image's high-frequency sub-bands at the same scale is reduced as distortion deepens. Three directional high-frequency sub-bands were first obtained by the wavelet transform, and then the Peak Signal-to-Noise Ratio (PSNR) and SSIM were combined to calculate the differences between the sub-bands as the final objective assessment index. The simulation results show that the proposed method has good consistency with subjective assessment on three common image databases; in addition, the algorithm needs only about 0.2s to evaluate an image, which gives it good practicality.
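
The PSNR component of the assessment index is standard; a minimal computation on images given as flat lists of gray levels:

```python
import math

def psnr(reference, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio between two equal-sized images:
    10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ref = [10, 20, 30, 40]
noisy = [11, 19, 31, 39]           # every pixel off by 1, so MSE = 1
print(round(psnr(ref, noisy), 2))  # 48.13
```

In the proposed index, PSNR-style and SSIM-style differences are computed between sub-band pairs rather than between a distorted image and a pristine reference, which is what makes the method no-reference.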

    Fundamental matrix estimation based on three-view constraint
    LI Cong ZHAO Hongrui FU Gang
    2014, 34(10):  2930-2933.  DOI: 10.11772/j.issn.1001-9081.2014.10.2930
    Abstract ( )   PDF (627KB) ( )
    References | Related Articles | Metrics

    The matching points cant be decided absolutely by its residuals just relying on epipolar geometry residuals, which influences the selection of optimum inlier set. So a novel fundamental matrix calculation algorithm was proposed based on three-view constraint. Firstly, the initial fundamental matrices were estimated by traditional RANdom SAmple Consensus (RANSAC) method. Then matching points existed in every view were selected, and the epipolar lines of points not in the common view were calculated in fundamental matrix estimation. Distances between the points in common view and the intersection of its matching points epipolar lines were calculated. Under judgment based on the distances, a new optimum inlier set was obtained. Finally, the M-Estimators (ME) algorithm was used to calculate the fundamental matrices based on the new optimum inlier set. Through a mass of experiments in case of mismatching and noise, the results indicate that the algorithm can effectively reduce the influence of mismatch and noise on accurate calculation of fundamental matrices. It gets better accuracy than traditional robust algorithms by limiting distance between point and epipolar line to about 0.3 pixels, in addition, an improvement in stability. So, it can be widely applied to fields such as 3D reconstruction based on image sequence and photogrammetry.

    Motion detection based on deep auto-encoder networks
    XU Pei CAI Xiaolu HE Wenwei XIE Yidao
    2014, 34(10):  2934-2937.  DOI: 10.11772/j.issn.1001-9081.2014.10.2934
    Abstract ( )   PDF (747KB) ( )
    References | Related Articles | Metrics

    To address the poor results of foreground extraction from dynamic backgrounds, a motion detection method based on deep auto-encoder networks was proposed. Firstly, background images not containing motion objects were subtracted from video frames using a three-layer deep auto-encoder network whose cost function contained the background as a variable. Then, another three-layer deep auto-encoder network was used to learn the subtracted background images obtained by a constructed separating function. To achieve online motion detection through deep auto-encoder learning, an online learning method for deep auto-encoder networks was also proposed, in which the weights of the network were merged according to the sensitivity of the cost function so that more video frames could be processed. The experimental results show that the proposed method obtains better motion detection accuracy by 6% and a lower false rate by 4.5% than Lu's work (LU C, SHI J, JIA J. Online robust dictionary learning. Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2013: 415-422). It also obtains better extraction results of background and foreground in real applications, and lays a better basis for video analysis.

    Image retrieval based on enhanced micro-structure and context-sensitive similarity
    HU Yangbo YUAN Jie WANG Lidong
    2014, 34(10):  2938-2943.  DOI: 10.11772/j.issn.1001-9081.2014.10.2938
    Abstract ( )   PDF (994KB) ( )
    References | Related Articles | Metrics

    A new image retrieval method based on an enhanced micro-structure descriptor and context-sensitive similarity was proposed to overcome the shortcomings of the high dimensionality of combined image features and hard-to-determine combination weights. A new local pattern map was first used to create a filter map, and then the enhanced micro-structure descriptor was extracted based on color co-occurrence relationships. The descriptor combined several features with the same dimension as a single color feature. Based on the extracted descriptor, the normal distance between image pairs was calculated and sorted. Combined with the iterative context-sensitive similarity, the initially sorted image series were re-ranked. With the number of iterations set to 50 and considering the top 24 images in the retrieved image set, comparative experiments with the Multi-Texton Histogram (MTH) and the Micro-Structure Descriptor (MSD) show that the retrieval precision of the proposed algorithm is increased by 13.14% and 7.09% respectively on the Corel-5000 image set, and by 11.03% and 6.8% on the Corel-10000 image set. By combining several features and using context information while keeping the dimension unchanged, the new method enhances the precision effectively.

    Fast image stitching algorithm based on improved speeded up robust feature
    ZHU Lin WANG Ying LIU Shuyun ZHAO Bo
    2014, 34(10):  2944-2947.  DOI: 10.11772/j.issn.1001-9081.2014.10.2944
    Abstract ( )   PDF (639KB) ( )
    References | Related Articles | Metrics

    A fast image stitching algorithm based on improved Speeded Up Robust Feature (SURF) was proposed to overcome the real-time and robustness problems of the original SURF-based stitching algorithms. A machine learning method was adopted to build a binary classifier, which identified the critical feature points obtained by SURF and removed the non-critical ones. In addition, the Relief-F algorithm was used to reduce the dimension of the improved SURF descriptor to accomplish image registration. A weighted threshold fusion algorithm was adopted to achieve seamless image stitching. Several experiments were conducted to verify the real-time performance and robustness of the improved algorithm. Furthermore, the efficiency of image registration and the speed of image stitching were improved.
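
The fusion step can be illustrated with a simple linear-ramp blend over the overlap region; this is a common form of weighted fusion, and the paper's exact weighting and threshold are not specified in the abstract:

```python
def weighted_blend(row_a, row_b, overlap):
    """Blend the overlapping tails of two scanlines with a linear
    ramp: the weight of image B grows from 0 to 1 across the overlap,
    giving a seam-free transition between the two images."""
    left = row_a[:-overlap]
    right = row_b[overlap:]
    seam = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)
        seam.append((1 - w) * row_a[len(row_a) - overlap + i] + w * row_b[i])
    return left + seam + right

a = [100, 100, 100, 100]  # right edge of the left image
b = [0, 0, 0, 0]          # left edge of the right image
blended = weighted_blend(a, b, overlap=2)
print(blended)
```

The ramp guarantees the stitched row agrees with image A at the left end of the overlap and with image B at the right end, which is what removes the visible seam.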

    Integer discrete cosine transform algorithm for distributed video coding framework
    WANG Yanming CHEN Bo GAO Xiaoming YANG Cheng
    2014, 34(10):  2948-2952.  DOI: 10.11772/j.issn.1001-9081.2014.10.2948
    Abstract ( )   PDF (915KB) ( )
    References | Related Articles | Metrics

    The integer Discrete Cosine Transform (DCT) algorithm of H.264 cannot be applied to the Distributed Video Coding (DVC) framework directly because of its high coder complexity. In view of this, the authors presented an integer DCT algorithm and a transform radix generating method based on fixed-length step quantization, with a step length of 2^x (x a positive integer). The transform radix in H.264 can be stretched; the authors took full advantage of this feature to find the transform radix best suited to the working principle of the hardware, and moved the contract-and-quantize stage from the coder to the decoder to reduce the complexity of the coder under the premise of a "small" transform radix. In the process of this "moving", the algorithm guaranteed image quality by saturated amplification of the DCT coefficients, guaranteed reliability by an overflow upper limit, and improved compression performance by reducing the radix error. The experimental results show that, compared with the corresponding module in H.264, the quantization method of this algorithm is convenient for bit-plane extraction, reduces the computation of the contract-and-quantize stage at the coder to 16 integer constant additions under the premise of quasi-lossless compression, and raises the ratio of image quality to compression by 0.239. The algorithm conforms to the DVC framework.
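
For reference, the 4×4 integer core transform of H.264, on which the discussion above builds, is Y = C_f X C_f^T with a small integer matrix, the scaling being folded into the quantization stage:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# H.264 4x4 forward integer core transform matrix.
CF = [[1, 1, 1, 1],
      [2, 1, -1, -2],
      [1, -1, -1, 1],
      [1, -2, 2, -1]]

def integer_dct4x4(X):
    """Y = CF * X * CF^T, computed entirely in integer arithmetic."""
    CFT = [list(col) for col in zip(*CF)]
    return matmul(matmul(CF, X), CFT)

flat_block = [[5] * 4 for _ in range(4)]
Y = integer_dct4x4(flat_block)
print(Y[0][0])  # 80: all energy of a flat block lands in the DC coefficient
```

Because the transform itself is additions and shifts on integers, the expensive multiply-and-scale work sits in quantization, which is exactly the stage the proposed algorithm relocates to the decoder.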

    Undersampling image reconstruction method based on second order total generalized variation model
    WEI Jinjin JIN Zhigang WANG Ying
    2014, 34(10):  2953-2956.  DOI: 10.11772/j.issn.1001-9081.2014.10.2953
    Abstract ( )   PDF (657KB) ( )
    References | Related Articles | Metrics

    Aiming at the convex optimization problem of undersampled image reconstruction, a new image reconstruction algorithm based on the second-order Total Generalized Variation (TGV) model was proposed. In the new model, the second-order TGV semi-norm of the image was used as the regularization term, which automatically balances the first- and second-order derivatives. These characteristics of TGV enable the new model to recover image edge information better, smooth noise and avoid the staircase effect. To compute the new model effectively, orthogonal projection and weight-threshold adjustment were introduced to adaptively amend the iteration result of each step so as to obtain accurate image reconstruction. The experimental results show that the proposed model achieves better reconstruction results, with larger Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) values, than the Orthogonal Matching Pursuit (OMP) and Total Variation (TV) models.

    Image denoising algorithm using fractional-order integral with edge compensation
    HUANG Guo CHEN Qingli XU Li MEN Tao PU Yifei
    2014, 34(10):  2957-2962.  DOI: 10.11772/j.issn.1001-9081.2014.10.2957
    Abstract ( )   PDF (1008KB) ( )
    References | Related Articles | Metrics

    To solve the problem that existing image denoising algorithms based on fractional-order integration lose edge and texture information, an image denoising algorithm using fractional-order integration with edge compensation was presented. The fractional-order integral operator has a sharp low-pass characteristic. The Cauchy integral formula was introduced into digital image denoising, and the numerical calculation of the fractional-order integral of the image was achieved by slope approximation. In the iterative denoising process, the algorithm built the denoising mask with a slightly higher fractional integral order at the rising stage of the image Signal-to-Noise Ratio (SNR), and with a slightly lower order at the declining stage. Additionally, it could partially restore the image edge and texture information through an edge compensation mechanism. The proposed algorithm thus applies different strategies for the fractional integral order together with edge compensation during iterative denoising. The experimental results show that, compared with traditional denoising algorithms, the proposed algorithm can remove noise to obtain higher SNR and better visual effects while appropriately restoring the edge and texture information of the image.

    Patch similarity anisotropic diffusion algorithm based on variable exponent for image denoising
    DONG Chanchan ZHANG Quan HAO Huiyan ZHANG Fang LIU Yi SUN Weiya GUI Zhiguo
    2014, 34(10):  2963-2966.  DOI: 10.11772/j.issn.1001-9081.2014.10.2963
    Abstract ( )   PDF (815KB) ( )
    References | Related Articles | Metrics

    Concerning the contradiction between edge preservation and noise suppression in image denoising, a patch-similarity anisotropic diffusion algorithm based on a variable exponent was proposed. The algorithm combined the adaptive variable-exponent Perona-Malik (PM) model with the idea of patch similarity, and constructed a new edge indicator and a new diffusion coefficient function. Traditional anisotropic diffusion denoising algorithms, which detect edges based on the intensity similarity (or gradient) of single pixels, cannot effectively preserve weak edges and details such as texture. The proposed algorithm, utilizing the intensity similarity of neighboring patches, can preserve more detail information while removing noise. The simulation results show that, compared with traditional image denoising algorithms based on Partial Differential Equations (PDE), the proposed algorithm improves the Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR) to 16.602480dB and 31.284672dB respectively, and enhances the anti-noise capability. At the same time, the filtered image preserves more detail features, such as weak edges and textures, and has good visual effects. Therefore, the algorithm achieves a good balance between noise reduction and edge maintenance.
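
The classic Perona-Malik diffusion that the variable-exponent model generalizes can be sketched in one dimension; the conductance k and step size λ below are illustrative:

```python
def pm_coefficient(grad, k):
    """Perona-Malik diffusion coefficient g(|∇I|) = 1/(1+(|∇I|/k)^2):
    close to 1 in flat regions (strong smoothing), close to 0 at
    strong edges (edge preservation)."""
    return 1.0 / (1.0 + (grad / k) ** 2)

def diffuse_1d(signal, k=10.0, lam=0.2):
    """One explicit diffusion step on a 1-D signal (borders fixed)."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        east = signal[i + 1] - signal[i]
        west = signal[i - 1] - signal[i]
        out[i] = signal[i] + lam * (pm_coefficient(abs(east), k) * east +
                                    pm_coefficient(abs(west), k) * west)
    return out

noisy_edge = [0, 2, 0, 100, 98, 100]  # small noise left, strong edge mid
smoothed = diffuse_1d(noisy_edge)
print(smoothed)
```

The proposed algorithm replaces the single-pixel gradient in g(·) with a patch-similarity measure, which is what lets it tell weak edges and texture apart from noise.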

    Weighted diffusion for Rician noise reduction in magnetic resonance imaging image
    HE Jianfeng CHEN Yong YI Sanli
    2014, 34(10):  2967-2970.  DOI: 10.11772/j.issn.1001-9081.2014.10.2967
    Abstract ( )   PDF (648KB) ( )
    References | Related Articles | Metrics

    Since isotropic diffusion easily blurs edge features and coherence-enhancing diffusion produces pseudo-striations in background regions during denoising, a weighted diffusion algorithm was proposed to reduce the Rician noise of Magnetic Resonance Imaging (MRI) images according to the distribution of the noise. A threshold was calculated from the Rician noise variance in the background region of the MRI image, which can be used to distinguish the image background from the edge of the Region Of Interest (ROI). A weighting function combining isotropic diffusion and coherence-enhancing diffusion based on the calculated threshold was constructed. The constructed function could adaptively adjust the weights of the two kinds of diffusion in different structural regions, giving full play to their advantages while overcoming their disadvantages. The experimental results show that the algorithm outperforms some classical diffusion algorithms in Peak Signal-to-Noise Ratio (PSNR) and Mean Structural SIMilarity (MSSIM), and thus has better performance in noise reduction and edge preservation or enhancement.

    Improving ideal low-pass filter with noise detection algorithm
    YANG Zhuzhong ZHOU Jiliu LANG Fangnian
    2014, 34(10):  2971-2975.  DOI: 10.11772/j.issn.1001-9081.2014.10.2971
    Abstract ( )   PDF (799KB) ( )
    References | Related Articles | Metrics

    To address the contradiction between filtering noise and preserving image detail in image denoising algorithms, a random noise detection algorithm based on the fractional differential gradient was proposed to improve the denoising performance of the ideal low-pass filter. Firstly, fractional differential gradient templates of different directions were convolved with the noisy image to calculate the fractional differential gradients in those directions. Then, according to a pre-set threshold, the fractional differential gradient detection maps in different directions were obtained; if a pixel's gradient jumped in all selected directions, the pixel was judged to be noise. Finally, only the detected noise pixels were processed by the ideal low-pass filter, so the denoised image removes the noise and preserves image detail at the same time. The experimental results show that the proposed algorithm achieves a better visual effect, and the Peak Signal-to-Noise Ratio (PSNR) indicates that the denoised image is closer to the original image: the maximum PSNR of the plain ideal low-pass filter is 29.0893dB, while the maximum PSNR of the proposed algorithm is 34.7027dB. The work is an exploration of fractional calculus for image denoising, and provides a new research direction for improving denoising performance.
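
The "jump in all directions" test can be sketched with plain first-order differences in the 4-neighbourhood; the paper's fractional differential gradient templates are not reproduced here:

```python
def detect_noise(img, threshold):
    """Mark an interior pixel as noise when its gray-level jump
    exceeds the threshold in EVERY examined direction (here the
    4-neighbourhood). An isolated spike jumps everywhere; a pixel
    on a real edge agrees with its neighbours along the edge."""
    h, w = len(img), len(img[0])
    noise = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            jumps = [abs(img[y][x] - img[y + dy][x + dx])
                     for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))]
            if all(j > threshold for j in jumps):
                noise.add((y, x))
    return noise

img = [[10, 10, 10, 10],
       [10, 10, 200, 10],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
print(detect_noise(img, threshold=50))  # {(1, 2)}: only the isolated spike
```

Filtering only the returned set is what spares the rest of the image from the ideal low-pass filter's blurring.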

    Fast moving objects segmentation algorithm based on boundary searching with convex hull
    QIAN Zenglei LIANG Jiuzhen
    2014, 34(10):  2976-2981.  DOI: 10.11772/j.issn.1001-9081.2014.10.2976
    Abstract ( )   PDF (863KB) ( )
    References | Related Articles | Metrics

    Most current segmentation algorithms in the H.264/AVC compressed domain lose the local Motion Vector (MV) field and have high time complexity because of global motion compensation. A new fast segmentation algorithm, Convex Hull segmentation in a Spatial-Temporal Filter based on Boundary Search in the compressed domain (BS-CHSTF), was proposed. The motion vector field in the bit stream was mainly used in this algorithm. Firstly, the Spatial-Temporal Filter (STF) was used to preprocess the MV field; then an eight-direction adaptive search algorithm was used to obtain connected regions, each filled by constructing the convex hull of its boundary. Afterwards, multiple connected regions were clustered by redefining the distance between connected regions. Finally, the moving object segmentation was obtained by optimizing the mask. Compared with the Gaussian Mixture Model (GMM) segmentation algorithm and the Ant Colony Algorithm (ACA), the experimental results show that the segmentation accuracy is improved by about 3% on average, rises by nearly 20% when the motion vector field is lacking, and the segmentation speed increases by nearly 25% on average. The method focuses on obtaining the moving object quickly, with good segmentation accuracy even when the Moving Object (MO) is incomplete.
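
The convex-hull filling step relies on a standard hull construction; a sketch using Andrew's monotone chain (the choice of hull algorithm is ours, not stated in the abstract):

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise
    order. A connected region can then be filled by rasterizing the
    hull of its boundary points, as in the filling step above."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

region = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
hull = convex_hull(region)
print(hull)  # [(0, 0), (4, 0), (4, 4), (0, 4)]: interior points dropped
```

The hull runs in O(n log n), which is consistent with the speed focus of the proposed segmentation pipeline.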

    Automatic selection algorithm of patch size for texture synthesis
    JIANG Julang LI Fei ZHU Zhu ZHAN Wenfa
    2014, 34(10):  2982-2984.  DOI: 10.11772/j.issn.1001-9081.2014.10.2982
    Abstract ( )   PDF (653KB) ( )
    References | Related Articles | Metrics

    In previous patch-based texture synthesis algorithms, the size of the patch requires artificial choice, which leads to uncertain texture synthesis quality, so an automatic patch-size selection algorithm for texture synthesis was presented. The patch was slid over the exemplar in scan-line order until the whole plane was traversed; the histograms of the patch and the exemplar were both pretreated by normalization and mean filtering, and then the intersection of the two histograms was calculated. Among the results for all positions, the maximum value was selected as the measurement of the color similarity between the patch and the exemplar. Owing to the approximately monotonically increasing relation between color similarity and patch size, the bisection method was adopted to find the abscissa of the color-similarity threshold point, and this abscissa was used as the patch size for texture synthesis. The experimental results on various types of textures show that the patch size automatically selected by this method coincides with the range of the best empirical values. Compared with other automatic patch-size selection methods, this method applies not only to structured texture synthesis but also to stochastic texture synthesis, and obtains suitable texture synthesis results.
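
The histogram-intersection similarity at the core of the method can be sketched as follows (normalization shown; the mean-filtering pretreatment is omitted):

```python
def normalize(hist):
    total = sum(hist)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection of two normalized histograms: 1.0 means
    identical colour distributions, 0.0 means disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))

exemplar = normalize([8, 4, 2, 2])  # colour histogram of the exemplar
patch = normalize([4, 2, 1, 1])     # a patch with the same distribution
other = normalize([0, 0, 8, 8])     # a patch with a different one
print(intersection(exemplar, patch), intersection(exemplar, other))
```

Because a larger patch covers more of the exemplar, its best intersection score grows roughly monotonically with size, which is what makes bisection on the size valid.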

    Reversible data hiding based on histogram pairs in MPEG-4 video
    HAN Yigang TONG Xuefeng XUAN Guorong QU Xin SHI Yunqing
    2014, 34(10):  2985-2989.  DOI: 10.11772/j.issn.1001-9081.2014.10.2985
    Abstract ( )   PDF (879KB) ( )  
    References | Related Articles | Metrics

    Concerning reversible data hiding in videos, a novel algorithm based on the histogram-pair method was proposed, which embedded data by selecting a reasonable fluctuation value, frequency range and area in the Discrete Cosine Transform (DCT) domain of I frames, achieving high-quality embedded MPEG-4 video. By embedding data in the 8×8 macroblocks of the optimum area, in the optimum frequency range within the macroblocks, and at the optimum DCT fluctuation, optimum reversible data hiding was achieved. Higher Peak Signal-to-Noise Ratio (PSNR) was obtained in experiments on six video sequences. For akiyo, the PSNR of the embedded I frame reached 45.33dB (1000b/frame), 43.58dB (2000b/frame) and 40.28dB (4000b/frame). Even at high capacity, the increase of bit rate is relatively low, approximately 6% on average. The proposed method embeds data in DCT coefficients and achieves higher PSNR than the method based on the DCT quantization table, and embedding information in I frames performs better than in B frames, which forms a relatively complete reversible data hiding method for video.

    Single image defogging algorithm based on HSI color space
    WANG Jianxin ZHANG Youhui WANG Zhiwei ZHANG Jing LI Juan
    2014, 34(10):  2990-2995.  DOI: 10.11772/j.issn.1001-9081.2014.10.2990
    Abstract ( )   PDF (910KB) ( )  
    References | Related Articles | Metrics

    Images captured in hazy weather suffer from poor contrast and low visibility. This paper proposed a single-image defogging algorithm that exploits the characteristics of the HSI color space. Firstly, the method converted the original image from the RGB color space to the HSI color space. Then, based on the different effects of haze on hue, saturation and intensity, a defogging model was established. Finally, the range of the weight in the saturation model was obtained by analyzing the saturation of the original image, the range of the weight in the intensity model was estimated likewise, and the original image was defogged. The experimental results show that, in comparison with other algorithms, the running efficiency of the proposed method is doubled, and the method effectively enhances clarity, so it is well suited to single-image defogging.
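The RGB-to-HSI conversion the method starts from is standard and can be written down directly; the sketch below uses the usual textbook formulas (the defogging model itself is not reproduced, since the abstract gives no formula for it).

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to (hue, saturation, intensity).
    Hue is in radians in [0, 2*pi); the eps guard handles gray pixels."""
    eps = 1e-10
    i = (r + g + b) / 3.0
    s = 0.0 if i < eps else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    h = math.acos(max(-1.0, min(1.0, num / den)))
    if b > g:                     # reflect to the lower half of the color circle
        h = 2 * math.pi - h
    return h, s, i
```

Working in HSI lets the method attenuate haze mainly through the saturation and intensity channels while leaving hue nearly untouched, which is the motivation the abstract gives for the color-space choice.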

    Improved enhancement algorithm of fog image based on multi-scale Retinex with color restoration
    LI Yaofeng HE Xiaohai WU Xiaoqiang
    2014, 34(10):  2996-2999.  DOI: 10.11772/j.issn.1001-9081.2014.10.2996
    Abstract ( )   PDF (828KB) ( )  
    References | Related Articles | Metrics

    An improved Multi-Scale Retinex with Color Restoration (MSRCR) algorithm was proposed to remove fog in the distant scene and to address the gray-world assumption problem. First, the original fog image was inverted and the MSRCR algorithm was applied to it. The result was inverted again and linearly superposed with the result of applying MSRCR to the original image directly. At the same time, the reflection component obtained during the extraction was linearly superposed with the original luminance, the mean and variance were calculated to decide the contrast-stretching degree adaptively, and finally the result was uniformly stretched to the display range. The experimental results show that the proposed algorithm removes fog better: evaluation values of the processed image, including standard deviation, average brightness, information entropy and squared gradient, are all improved over the original algorithm. It is easy to implement and is of practical significance for real-time video defogging.
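The invert / enhance / re-invert / blend pipeline can be sketched independently of the full MSRCR machinery. The single-scale retinex with a box-blur surround below is a deliberately crude stand-in for MSRCR (which uses multiple Gaussian surrounds plus color restoration), and the blend weight `alpha` is a hypothetical parameter; only the pipeline shape matches the abstract.

```python
import numpy as np

def box_blur(img, k=7):
    # Crude surround estimate (stand-in for the Gaussian surround of MSRCR).
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def retinex(img, k=7):
    # Single-scale retinex: log signal minus log surround.
    return np.log1p(img) - np.log1p(box_blur(img, k))

def stretch(img):
    # Uniform stretch to [0, 1] for display.
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def defog(img, alpha=0.5, k=7):
    # Invert, enhance, re-invert, and blend with the directly enhanced image.
    inv_branch = 1.0 - stretch(retinex(1.0 - img, k))
    direct_branch = stretch(retinex(img, k))
    return alpha * inv_branch + (1.0 - alpha) * direct_branch
```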

    3D hair reusable method based on scalp layer feature points
    LIU Haizhou HOU Jin
    2014, 34(10):  3000-3003.  DOI: 10.11772/j.issn.1001-9081.2014.10.3000
    Abstract ( )   PDF (750KB) ( )  
    References | Related Articles | Metrics

    Aiming at the misplacement and mismatch problems that arise when a 3D hair model is attached to different head models during reuse, a 3D hair reuse method based on scalp-layer feature points was proposed. Firstly, according to the data storage structure of the model file, the scalp layer was isolated from the hair model and its feature points were extracted. Secondly, combined with a 2D face image detection method, feature points of the hair-root region on the scalp were extracted. Then shift and zoom coefficients were calculated from the above feature points. Finally, the fitting of the scalp layer and the head model was handled individually, so the 3D hair was adapted to the target head model while keeping the hair styling information intact, and a close fit between the scalp layer and the head model was achieved. The experimental results show that the proposed method effectively improves the reusability of 3D hair models and is not restricted by the personalized parts of the hair model or their distribution area.

    Coherent skeleton extraction for time varying surfaces
    p zq ZHU Biying
    2014, 34(10):  3004-3008.  DOI: 10.11772/j.issn.1001-9081.2014.10.3004
    Abstract ( )   PDF (806KB) ( )  
    References | Related Articles | Metrics

    Concerning the low efficiency and non-coherency of skeleton extraction for time-varying surfaces, a new method that repairs incomplete initial skeletons through a registration-based propagation strategy was proposed. Firstly, complete skeletons were extracted for the key frames of the time-varying surfaces, and initial skeleton series were extracted directly for the surfaces between the key frames. Then a global skeleton alignment method was designed to warp each key skeleton to its neighboring skeletons. Finally, the information of the warped key skeleton was transferred to its neighboring skeletons to produce complete skeletons, and the entire skeleton series was processed in this way. The experimental results show that the method is efficient and accurate: it can be applied to raw scanned dynamic geometry data with significant noise, outliers and large areas of missing data, and still produces time-coherent skeleton series.

    Dual-scale fabric defect detection based on sparse coding
    ZHANG Longjian ZHANG Zhuo FAN Ci'en DENG Dexiang
    2014, 34(10):  3009-3013.  DOI: 10.11772/j.issn.1001-9081.2014.10.3009
    Abstract ( )   PDF (778KB) ( )  
    References | Related Articles | Metrics

    Defect detection is an important part of fabric quality control. To give the detection algorithm good generality and high accuracy, a dual-scale fabric defect detection algorithm based on sparse coding was proposed. The algorithm combined the high stability of the large scale with the high detection sensitivity of the small scale. First, the dictionaries for the large and small scales were obtained through a small-scale over-complete dictionary training method. Then, the projection of each detection image block on the over-complete dictionary was used to extract detection features. Finally, the detection results of the two scales were fused by distance fusion. By using a small-scale over-complete dictionary and downsampling the detection image at the large scale, the algorithm overcame the large computation cost introduced by the dual scales. The TILDA textile texture database was used in the experiments. The experimental results show that the algorithm can effectively detect defects on plain, gingham and striped fabrics with a comprehensive detection rate of 95.9%, and its moderate amount of computation satisfies the requirement of industrial real-time detection, so it has practical application value.

    Pitch measurement method of twisted-pair wire based on image detection
    WANG Gang SHI Shoudong LIN Yibing
    2014, 34(10):  3014-3019.  DOI: 10.11772/j.issn.1001-9081.2014.10.3014
    Abstract ( )   PDF (896KB) ( )  
    References | Related Articles | Metrics

    To measure the pitch of twisted-pair wires, an image detection framework was put forward. Through image segmentation, image restoration, image thinning, curve fitting and scale setting, the pitch of twisted-pair wires was calculated in real time. Within this framework, to deal with the slow running speed of the traditional two-dimensional maximum between-cluster variance algorithm (Otsu), a new fast algorithm based on regional diagonal points was proposed; by redefining the two-dimensional histogram regions and using a quick lookup table and recursion, it reduced running time drastically. To solve the problem of missing image regions, an edge detection algorithm was adopted for repair, after which the thinning operation was applied to the image. The least square method was used to fit the single-pixel points of the thinned image to obtain fitting curves, the pitch in the image was acquired by calculating the distance between the intersections of the fitting curves, and finally this distance was converted to a physical value by the scale. The experimental results show that the segmentation time of the fast algorithm is about 0.22% of that of the traditional algorithm, while the two algorithms produce identical segmentation results. Comparing the pitch from the image detection method with its real value, the absolute error between them is 0.48%. Through the image detection method, the pitch is measured accurately and the efficiency of twisted-pair pitch measurement is improved.
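The fit-and-intersect step can be sketched with ordinary polynomial least squares: fit each thinned centerline, subtract the two polynomials, and read the pitch off the spacing of the real roots. The degree, the synthetic data, and the use of `numpy.polyfit` are illustrative assumptions; the paper does not state its fitting model.

```python
import numpy as np

def fit_and_pitch(pts_a, pts_b, deg=2):
    """Least-squares fit of both wire centerlines (columns: x, y), then the
    spacings between consecutive intersection abscissae (pitch in pixels)."""
    ca = np.polyfit(pts_a[:, 0], pts_a[:, 1], deg)
    cb = np.polyfit(pts_b[:, 0], pts_b[:, 1], deg)
    roots = np.roots(np.polysub(ca, cb))      # where the two curves cross
    xs = np.sort(roots[np.isreal(roots)].real)
    return np.diff(xs)

# Synthetic single-pixel centerlines: y = x^2 and y = 1 intersect at x = +/-1.
x = np.linspace(-2, 2, 50)
curve_a = np.c_[x, x ** 2]
curve_b = np.c_[x, np.ones_like(x)]
pitch_px = fit_and_pitch(curve_a, curve_b)
```

The pixel-valued pitch would then be multiplied by the calibrated scale (mm per pixel) to get the physical measurement described in the abstract.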

    High speed data transfer and imaging for intravascular ultrasound
    WU Milong QIU Weibao LIU Baoqiang CHI Liyang MU Peitian LI Xiaolong ZHENG Hairong
    2014, 34(10):  3020-3023.  DOI: 10.11772/j.issn.1001-9081.2014.10.3020
    Abstract ( )   PDF (598KB) ( )  
    References | Related Articles | Metrics

    IntraVascular UltraSound (IVUS) imaging can provide information about coronary atherosclerotic plaque, allowing the doctor to make a comprehensive and accurate evaluation of a diseased vessel. Since some ultrasound data acquisition devices for imaging systems suffer from insufficient transfer speed, high cost or inflexibility, the authors presented a high-speed data transfer and imaging method for intravascular ultrasound. After being collected and processed, the ultrasound data was transferred to a computer through a USB3.0 interface. In addition, logarithmic compression and digital coordinate conversion were applied in the computer before imaging. The data transmission experiment shows that the transfer speed stays around 2040Mb/s. Finally, phantom imaging was conducted to demonstrate the performance of the system; it shows a clear pipe wall and a smooth luminal border.
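The logarithmic compression step is conventional in ultrasound display and can be sketched directly; the dynamic range value is a hypothetical parameter, and this is a generic formulation rather than the system's actual code.

```python
import numpy as np

def log_compress(envelope, dynamic_range_db=50.0):
    """Map envelope amplitudes to [0, 1] display values over a dB range:
    the peak maps to 1.0 and anything at or below -dynamic_range_db maps to 0."""
    env = np.asarray(envelope, dtype=float)
    env = env / env.max()                              # normalize to peak
    floor = 10.0 ** (-dynamic_range_db / 20.0)         # clamp to avoid log(0)
    db = 20.0 * np.log10(np.maximum(env, floor))
    return (db + dynamic_range_db) / dynamic_range_db
```

The compressed frame would then go through the coordinate conversion (polar scan lines to a Cartesian raster) before being shown on screen.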

    Method of marine radar image simulation based on electronic chart display information system
    WANG Shengzheng HUANG Yugui
    2014, 34(10):  3024-3028.  DOI: 10.11772/j.issn.1001-9081.2014.10.3024
    Abstract ( )   PDF (806KB) ( )  
    References | Related Articles | Metrics

    To meet the requirements of military and merchant marine radar simulation and enhance the realism of simulated radar images, a real-time scan simulation approach based on a sector-banded texture blending model was presented to simulate highly realistic radar echo images. In this method, the Electronic Navigation Chart (ENC) was regarded as the source data of the radar echo signal, and, according to the principle of radar echo, the sector-banded texture blending algorithm was proposed to replace the traditional pixel-scan radar image simulation and to generate the radar echo texture data. On this basis, simulation models of radar echo signal processing were presented to implement the basic functions of marine radar, such as gain adjustment, sea clutter suppression and rain/snow clutter suppression. The experimental results show that the proposed approach distinctly improves the efficiency and effectiveness of radar echo simulation, and it is a promising means to address the problem of integrated radar and Electronic Chart Display Information System (ECDIS) simulation.

    Implementation of IPv6 over low power wireless personal area network based on wireless sensor network in smart lighting
    HUANG Zucheng YUAN Feng LI Yin
    2014, 34(10):  3029-3033.  DOI: 10.11772/j.issn.1001-9081.2014.10.3029
    Abstract ( )   PDF (761KB) ( )  
    References | Related Articles | Metrics

    Concerning the disadvantages of smart lighting systems based on Power Line Communication (PLC), such as complex system structure, low compatibility and expansibility, time-consuming development and deployment, and poor security and anti-interference, a new smart lighting system based on IPv6 over Low power Wireless Personal Area Network (6LoWPAN) was proposed. An example of implementing 6LoWPAN in a smart lighting system was presented: the PLC nodes were replaced by 6LoWPAN nodes, the central controller was replaced by a border router, and the Constrained Application Protocol (CoAP) and the Internet Protocol for Smart Objects (IPSO) application framework were applied in the application layer. Compared with the PLC-based system, the new 6LoWPAN smart lighting system has a simpler architecture, higher compatibility and expansibility, and better network security and anti-interference, and its development and deployment time is reduced by more than 50%.

    Hub-and-spoke network design with constraint of hub cost for less-than-truckload logistics
    GAO Chaofeng XIAO Ling HU Zhihua
    2014, 34(10):  3034-3038.  DOI: 10.11772/j.issn.1001-9081.2014.10.3034
    Abstract ( )   PDF (803KB) ( )  
    References | Related Articles | Metrics

    The problem of hub construction under uncertain hub construction cost and freight flow was discussed. A mixed integer linear programming model based on life-cycle cost theory was built to minimize the total cost of a Less-than-TruckLoad (LTL) logistics hub-and-spoke network, and an uncertainty decision-making method based on an improved minimax regret value was proposed. Through numerical examples, the effects of the investment period, the inter-hub discount coefficient and the uncertain hub construction cost on network design were analyzed. The experimental results show that the operating cost obtained by the improved uncertainty decision-making method is reduced by an average of 2.17% compared with the other five scenarios, which indicates that this method can decrease the total operating cost of the hub-and-spoke network.
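The classic (unimproved) minimax-regret rule that the paper builds on can be stated in a few lines: compute each design's regret against the best design in every scenario, then pick the design whose worst-case regret is smallest. The cost matrix below is invented for illustration; the paper's improvement to the criterion is not reproduced.

```python
import numpy as np

def minimax_regret(cost_matrix):
    """cost_matrix[i, j]: total cost of network design i under scenario j.
    Regret = cost minus the best achievable cost in that scenario;
    return (index of chosen design, its worst-case regret)."""
    costs = np.asarray(cost_matrix, dtype=float)
    regret = costs - costs.min(axis=0)       # column-wise regret
    worst = regret.max(axis=1)               # each design's worst scenario
    return int(worst.argmin()), float(worst.min())

# Hypothetical example: three designs, two cost scenarios.
costs = [[10, 14],
         [12, 11],
         [15, 10]]
choice, regret = minimax_regret(costs)
```

Here design 1 is chosen: it is never best, but its regret is at most 2 in either scenario, whereas designs 0 and 2 can regret 4 and 5 respectively.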

    Interference suppression and quantitative analysis of instrument landing system signal based on modified fast Fourier transformation algorithm
    SUN Dan BAI Jie SHI Zhibo
    2014, 34(10):  3039-3043.  DOI: 10.11772/j.issn.1001-9081.2014.10.3039
    Abstract ( )   PDF (722KB) ( )  
    References | Related Articles | Metrics

    For the problem that increasingly busy airport airspace and surrounding buildings interfere with the Instrument Landing System (ILS) signal, and the defects of traditional analogue signal processing technology, a new method was given to realize ILS signal frequency separation based on modified Fast Fourier Transformation (FFT) spectrum correction and a Least Mean Square (LMS) adaptive filter. The LMS adaptive filter, with optimally set weight coefficients, was used to suppress ILS signal interference, and the ILS signal spectrum was separated and extracted based on the modified FFT technique in the time and frequency domains. The spectrum amplitude was corrected to eliminate the spectral leakage and picket fence effect caused by signal sampling, improving the recognition accuracy of the Difference in the Depth of Modulation (DDM). Simulations were carried out to verify the interference suppression and frequency separation of the ILS signal. The results reveal that the proposed signal processing scheme can effectively suppress interference and complete frequency-domain identification, providing reliable and accurate navigation information during the aircraft landing approach.
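A textbook LMS adaptive filter, the building block named above, can be sketched as follows; the filter order and step size are hypothetical parameters, and the paper's optimal weight-setting strategy is not reproduced.

```python
import numpy as np

def lms_filter(d, x, order=4, mu=0.01):
    """LMS adaptive filter: adapt weights w so that w . [x[k-1..k-order]]
    tracks the desired signal d; the error e is the filtered residual."""
    n = len(d)
    w = np.zeros(order)
    y, e = np.zeros(n), np.zeros(n)
    for k in range(order, n):
        xk = x[k - order:k][::-1]           # [x[k-1], ..., x[k-order]]
        y[k] = w @ xk                       # filter output
        e[k] = d[k] - y[k]                  # instantaneous error
        w = w + 2 * mu * e[k] * xk          # steepest-descent weight update
    return y, e, w
```

With a white reference input and a stable step size, the weights converge to the Wiener solution, which is what lets the filter strip correlated interference from the ILS signal before the FFT stage.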

    Design of dual-band wideband dipole antenna based on high frequency structure simulator and neural network
    NAN Jingchang SANG Baihang GAO Mingming
    2014, 34(10):  3044-3047.  DOI: 10.11772/j.issn.1001-9081.2014.10.3044
    Abstract ( )   PDF (589KB) ( )  
    References | Related Articles | Metrics

    To quickly design a double-sided printed dipole antenna for Wireless Local Area Network (WLAN) applications with small size, dual-band and broadband characteristics, the dipole patches were printed on opposite sides of the dielectric substrate, and a balun feed was used to achieve better broadband matching. The two arms of the dipole were slotted to achieve the small-size and dual-band characteristics and to meet the WLAN dual-band requirements of 2.45GHz and 5.49GHz. The size of the whole antenna is 28mm×44mm×1.6mm. A Neural Network (NN) was combined with the electromagnetic simulation software High Frequency Structure Simulator (HFSS) to optimize the key dimensions of the antenna and speed up the design process. The simulation results show that, for S11 less than -10dB, the bandwidth of the antenna in the low and high bands reaches 470MHz (2.29—2.76GHz) and 3650MHz (4.96—8.61GHz) respectively; for S11 less than -14dB, the bandwidth reaches 210MHz (2.36—2.57GHz) and 770MHz (5.13—5.9GHz) respectively. The pattern has good omnidirectionality, the measurement and simulation results are in good consistency, and the antenna meets the requirements of WLAN.

    Control allocation for fly-wing aircraft with multi-control surfaces based on estimation of distribution algorithm
    ZHAO Junwei ZHAO Jianjun YANG Libin
    2014, 34(10):  3048-3053.  DOI: 10.11772/j.issn.1001-9081.2014.10.3048
    Abstract ( )   PDF (1013KB) ( )  
    References | Related Articles | Metrics

    For the control allocation problem of a flexible flying-wing aircraft with multiple control surfaces, the machine vibration force index was put forward to measure the elastic vibration, a total control allocation model was established, and the superior performance of the Estimation of Distribution Algorithm (EDA) was used to solve the model. Firstly, the rudder structure was designed; the working mode and control capability of every aerodynamic rudder were analyzed, and the rudder functional configuration was built in accordance with the control efficiency of the redundant rudder, elevator aileron and aileron rudder in the aerodynamic data. During control allocation, the main performance indices were analyzed, and an overall multi-objective optimal evaluation function was established, combined with equality and inequality constraints, and solved by EDA. During the evolutionary process of EDA, the true distribution was estimated by building a probability model, the rudders were allocated according to their deflection efficiency, and the optimal solution was obtained with the optimization function. At last, the impact of wing flexibility on the static control performance of the system was analyzed. After considering aeroelasticity, the overshoot and transition time are decreased, the flying quality of the flying-wing aircraft is significantly improved, and the system efficiency is improved by at least 10% after optimization. The simulation results show that EDA can solve the control allocation problem well and improve the dynamic quality of the system, verifying the effectiveness of multi-surface control allocation.
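The core EDA loop, estimate a probability model from the elite samples, then resample from it, can be sketched with a continuous UMDA-style diagonal Gaussian. This is a generic stand-in minimizing an invented test objective, not the paper's allocation model or its exact operators; population sizes are illustrative.

```python
import numpy as np

def eda_minimize(f, dim, pop=60, elite=15, iters=80, seed=1):
    """Each generation: sample from a diagonal Gaussian, keep the elite
    samples, and refit the Gaussian to them (estimation of distribution)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    sigma = np.full(dim, 2.0)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(pop, dim))      # sample the model
        vals = np.apply_along_axis(f, 1, X)
        order = np.argsort(vals)[:elite]                # truncation selection
        if vals[order[0]] < best_f:
            best_f, best_x = float(vals[order[0]]), X[order[0]]
        mu = X[order].mean(axis=0)                      # re-estimate the
        sigma = X[order].std(axis=0) + 1e-6             # probability model
    return best_x, best_f
```

In the paper's setting, `f` would be the multi-objective evaluation function (with constraint penalties) and each dimension a surface deflection; here a shifted sphere stands in as the objective.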

    Cascade model-free adaptive tracking control for outer-rotor permanent magnet synchronous motor
    HU Wei TANG Jie
    2014, 34(10):  3054-3058.  DOI: 10.11772/j.issn.1001-9081.2014.10.3054
    Abstract ( )   PDF (726KB) ( )  
    References | Related Articles | Metrics

    To solve the problems of large space occupation, low transmission efficiency and frequent maintenance in existing coal mine belt conveyor transmission systems, a direct-drive structure using an outer-rotor Permanent Magnet Synchronous Motor (PMSM) was proposed, and a model-free adaptive control with a cascade structure was applied to the speed control system of the belt conveyor. According to the operating requirements of a mine belt conveyor, the detailed design parameters and mathematical model of this motor were given, and the ideal starting and steady-state speed curves were set. Using the model-free adaptive control algorithm, a cascade model-free adaptive control law was designed, and the cascade control system structure was given. The tracking of the ideal starting curve of the outer-rotor PMSM direct-drive coal mine belt conveyor was simulated in Matlab. The results show that the cascade model-free adaptive control algorithm reduces the speed tracking error and improves control precision, and it effectively suppresses system noise and the disturbance caused by load changes, achieving good startup and steady-state characteristics of the belt conveyor.

    Software architecture design for space manipulator control system based on C/S structure
    ZHANG Guanghui WANG Yaonan
    2014, 34(10):  3059-3064.  DOI: 10.11772/j.issn.1001-9081.2014.10.3059
    Abstract ( )   PDF (960KB) ( )  
    References | Related Articles | Metrics

    To achieve high-performance and practical space manipulator control software, a software architecture for the space manipulator control system based on multithreading and round-robin queues was proposed under the C/S structure, together with the details of its implementation. After analyzing the features and functional requirements of space manipulator control software, the various functions were distributed to four parallel threads according to the principle of transverse partitioning and vertical stratification, and two round-robin queues were created for caching, to improve the control system's data processing ability and reduce unnecessary waiting time. The four threads and two round-robin queues communicate with each other to work together. The experimental results show that space manipulators can be controlled easily and with short delay through this software architecture, and its performance meets actual needs, which proves the effectiveness and feasibility of the scheme.
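The thread-plus-caching-queue pattern described above can be sketched with one of the two queues: a bounded FIFO buffer decoupling a command-receiving thread from an executing thread. The two-thread reduction, the buffer size, and the sentinel-based shutdown are illustrative assumptions; the actual system uses four threads, two queues, and C/S networking.

```python
import threading
import queue

def run_pipeline(commands, buffer_size=8):
    """Decouple a receiver thread from an executor thread with a bounded
    FIFO buffer, so neither side busy-waits on the other."""
    buf = queue.Queue(maxsize=buffer_size)
    results = []

    def receiver():
        for cmd in commands:
            buf.put(cmd)          # blocks when the buffer is full
        buf.put(None)             # sentinel: no more commands

    def executor():
        while True:
            cmd = buf.get()
            if cmd is None:
                break
            results.append(f"executed:{cmd}")

    t1 = threading.Thread(target=receiver)
    t2 = threading.Thread(target=executor)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

The bounded buffer is what absorbs bursts of incoming commands so the executor thread never blocks the network-facing thread, which is the waiting-time reduction the architecture aims at.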

Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn