
Table of Contents

    10 May 2016, Volume 36 Issue 5
    High frequency cognitive frequency selection mechanism based on hidden Markov model
    WANG Dongli, CAO Peng, HUANG Guoce, SUN Qilu, LI Lianbao
    2016, 36(5):  1179-1182.  DOI: 10.11772/j.issn.1001-9081.2016.05.1179
    Given the inefficient use and unintelligent frequency selection of the HF (High Frequency) band, a method of HF cognitive frequency selection using a Hidden Markov Model (HMM) was proposed. Applying cognitive radio principles to HF communications, HF legacy users were treated as primary users, and HF radios using cognitive technologies were treated as secondary users. Firstly, an HMM was established to predict the channel states of HF legacy users based on historical spectrum-sensing data; secondly, channel parameters were estimated if the predicted state was idle; finally, the optimal frequency was selected among the channels whose predicted states were idle, according to the estimated channel parameters. Simulation results show that the proposed method can accurately predict HF legacy users' channel states and quickly estimate channel parameters. Under the given simulation conditions, the successful transmission ratio of the proposed method is 5.54% and 10.56% higher than that of random channel selection with HMM prediction and of energy detection respectively, so the proposed method can select the optimal channel.
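    As an illustration of the prediction step, here is a minimal Python sketch, assuming a two-state HMM (idle/busy) whose parameters have already been estimated from spectrum-sensing history; all transition, emission and initial probabilities below are invented for illustration, not taken from the paper.

        import numpy as np

        # 2-state HMM: state 0 = idle, state 1 = busy (illustrative numbers)
        A = np.array([[0.8, 0.2],      # transition matrix P(s_t | s_{t-1})
                      [0.3, 0.7]])
        B = np.array([[0.9, 0.1],      # emission matrix P(observation | state)
                      [0.2, 0.8]])
        pi = np.array([0.5, 0.5])      # initial state distribution

        def predict_next_state(observations):
            """Forward algorithm over the sensing history, then a
            one-step-ahead prediction of the channel state."""
            alpha = pi * B[:, observations[0]]
            alpha /= alpha.sum()
            for obs in observations[1:]:
                alpha = (alpha @ A) * B[:, obs]
                alpha /= alpha.sum()
            return alpha @ A           # [P(next idle), P(next busy)]

        history = [0, 0, 1, 0, 0]      # 0 = sensed idle, 1 = sensed busy
        p_idle, p_busy = predict_next_state(history)
        print("predicted idle probability:", p_idle)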
    Routing algorithm based on layered mechanism in river underwater sensor networks
    LIU Yang, PENG Jian, LIU Tang, WANG Bin
    2016, 36(5):  1183-1187.  DOI: 10.11772/j.issn.1001-9081.2016.05.1183
    Considering the unique environment of Underwater Wireless Sensor Networks (UWSN) in rivers, a model was built using fluid dynamics to obtain the real-time positions of sensor nodes and simulate their movement in a real river environment. Furthermore, to address data transmission in UWSN, a Routing Algorithm based on Layered Mechanism (RALM) was proposed for the river environment. Topology information was calculated and updated by each node periodically, based on the rate of packets received from the sink. A node with data to transmit chooses the upper-layer neighbor with the most residual energy as the next hop; if it has no upper-layer neighbor, the next hop is the same-layer neighbor with the most residual energy. The simulation results show that, compared with DBR (Depth-Based Routing) and Layered-DBR (Layered Depth-Based Routing), the RALM algorithm can effectively reduce network redundancy and packet loss rate, and extends the network lifetime by 71% and 45% respectively.
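    The next-hop rule lends itself to a compact sketch. Below is a minimal Python illustration of the layered selection described above, assuming each node knows its own layer (hop distance from the sink) and its neighbors' layers and residual energies; the data structure and names are invented for illustration.

        # neighbors: list of (node_id, layer, residual_energy)
        def choose_next_hop(node_layer, neighbors):
            upper = [n for n in neighbors if n[1] == node_layer - 1]
            if upper:                          # prefer the layer closer to the sink
                return max(upper, key=lambda n: n[2])[0]
            same = [n for n in neighbors if n[1] == node_layer]
            if same:                           # fall back to the same layer
                return max(same, key=lambda n: n[2])[0]
            return None                        # no feasible next hop

        neighbors = [("a", 2, 0.7), ("b", 3, 0.9), ("c", 3, 0.4)]
        print(choose_next_hop(3, neighbors))   # -> "a", the only upper-layer neighbor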
    WiFi-pedestrian dead reckoning fused indoor positioning based on particle filtering
    ZHOU Rui, LI Zhiqiang, LUO Lei
    2016, 36(5):  1188-1191.  DOI: 10.11772/j.issn.1001-9081.2016.05.1188
    In order to improve the accuracy and stability of indoor positioning, an indoor localization algorithm using particle filtering to fuse WiFi fingerprinting and Pedestrian Dead Reckoning (PDR) was proposed. To reduce the negative influence of complex indoor environments on WiFi fingerprinting, a Support Vector Machine (SVM)-based WiFi fingerprinting algorithm was proposed, which uses SVM classification and regression for more accurate location estimation. For smartphone-based PDR, in order to reduce inertial sensor error and the effects of random walk, a state-transition method was used to recognize gait cycles and count steps, with the state-transition parameters set dynamically from real-time acceleration data; the step length was calculated with Kalman filtering by exploiting the relationship between vertical acceleration and step size, and the relationship between adjacent step sizes. The experimental results show that the SVM-based WiFi fingerprinting outperforms the Nearest Neighbor (NN) algorithm by 34.4% and the K-Nearest Neighbors (KNN) algorithm by 27.7% in average error distance, and the enhanced PDR performs better than typical step detection software and step length estimation algorithms. After particle filtering, the trajectory of the fused solution is closer to the real trajectory than those of WiFi fingerprinting and PDR alone. The average error distance of linear walking is 1.21 m, better than 3.18 m for WiFi and 2.76 m for PDR; the average error distance of walking through several rooms is 2.75 m, better than 3.77 m for WiFi and 2.87 m for PDR.
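    To make the fusion step concrete, here is a minimal Python sketch of a particle filter that propagates particles with a PDR step and reweights them with a WiFi position fix; the noise levels and the Gaussian WiFi likelihood are illustrative assumptions, not the paper's exact models.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        particles = rng.normal([0.0, 0.0], 1.0, size=(N, 2))  # initial cloud
        weights = np.full(N, 1.0 / N)

        def pdr_predict(particles, step_len, heading):
            """Propagate each particle by one noisy PDR step."""
            L = step_len + rng.normal(0, 0.15, N)
            h = heading + rng.normal(0, 0.05, N)
            particles[:, 0] += L * np.cos(h)
            particles[:, 1] += L * np.sin(h)

        def wifi_update(weights, particles, wifi_pos, sigma=3.0):
            """Reweight particles by a Gaussian likelihood around the WiFi fix."""
            d2 = np.sum((particles - wifi_pos) ** 2, axis=1)
            weights *= np.exp(-d2 / (2 * sigma ** 2))
            weights /= weights.sum()

        def resample(particles, weights):
            idx = rng.choice(N, size=N, p=weights)
            return particles[idx], np.full(N, 1.0 / N)

        pdr_predict(particles, step_len=0.7, heading=0.0)
        wifi_update(weights, particles, np.array([0.8, 0.1]))
        particles, weights = resample(particles, weights)
        print("fused position estimate:", particles.mean(axis=0))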
    WiFi fingerprint database updating based on surface fitting
    TIAN Zengshan, DAI Haipeng
    2016, 36(5):  1192-1195.  DOI: 10.11772/j.issn.1001-9081.2016.05.1192
    To solve the problem that updating a fingerprint database requires a huge amount of time and labor, a new Received Signal Strength (RSS) estimation method was proposed for WiFi environments. A subset of remeasured RSS fingerprints was used to fit an RSS surface by radial basis function interpolation, the RSSs of unknown reference points near the known reference points were estimated, and then the whole fingerprint database was updated. Extensive experiments show that only a quarter of the reference points need to be remeasured: within a positioning error of 2 m, the cumulative error probability is similar to that of the fully remeasured database, so the method guarantees satisfactory positioning accuracy.
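    The surface-fitting step maps directly onto an off-the-shelf radial basis function interpolator. Below is a minimal Python sketch using SciPy, with a synthetic grid and synthetic RSS values standing in for the remeasured quarter of the reference points.

        import numpy as np
        from scipy.interpolate import Rbf

        rng = np.random.default_rng(1)
        # Remeasured reference points (positions in metres) and their RSS in dBm
        x, y = rng.uniform(0, 20, 50), rng.uniform(0, 20, 50)
        rss = -40 - 0.8 * np.hypot(x - 10, y - 10) + rng.normal(0, 1, 50)

        # Fit the RSS surface by radial basis function interpolation
        surface = Rbf(x, y, rss, function="multiquadric")

        # Estimate RSS at the reference points that were not remeasured
        xq, yq = np.meshgrid(np.arange(0, 20, 2.0), np.arange(0, 20, 2.0))
        rss_est = surface(xq.ravel(), yq.ravel())
        print(rss_est[:5])   # estimated fingerprints for the database update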
    Improved modified gain extended Kalman filter algorithm based on back propagation neural network
    LI Shibao, CHEN Ruixiang, LIU Jianhang, CHEN Haihua, DING Shuyan, GONG Chen
    2016, 36(5):  1196-1200.  DOI: 10.11772/j.issn.1001-9081.2016.05.1196
    In practical applications, the Modified Gain Extended Kalman Filter (MGEKF) algorithm generally uses erroneous measured values instead of the true values for calculation, so the modified results also contain errors. To solve this problem, an improved MGEKF algorithm based on a Back Propagation Neural Network (BPNN), termed BPNN-MGEKF, was proposed. During BPNN training, measured values were used as the input, and results modified by true values as the output. BPNN-MGEKF was applied to a single moving station bearings-only positioning experiment. The experimental results show that BPNN-MGEKF improves positioning accuracy by more than 10% compared to the extended Kalman filter, MGEKF and the smoothing modified gain extended Kalman filter algorithm, and is more stable.
    Region-based fault tolerant routing algorithm for 2D mesh network on chip
    HU Zhekun, YANG Shengchun, CHEN Jie
    2016, 36(5):  1201-1205.  DOI: 10.11772/j.issn.1001-9081.2016.05.1201
    In order to reduce the number of routing table entries and avoid using large numbers of Virtual Channels (VC), a Region-based Fault Tolerant Routing (RFTR) algorithm was proposed for wormhole-switching 2D Mesh Network on Chip (NoC) to reduce hardware resource consumption. According to the positions of faulty nodes and links, the 2D Mesh network was divided into several rectangular regions. Within each region a packet could be routed by deterministic or adaptive routing algorithms, while among regions the routing path was determined by the up*/down* routing algorithm. Moreover, with the Channel Dependency Graph (CDG) model, the proposed algorithm was proved deadlock-free using only two VCs. In a 6×6 Mesh network, the RFTR algorithm can reduce routing table resources by 25%. Simulation results show that, with the same amount of buffer resources, the RFTR algorithm achieves equivalent or even higher performance than the up*/down* and segment-based routing algorithms.
    Rent's rule-based localized traffic generation algorithm for network on chip
    ZHOU Yuhan, HAN Guodong, SHEN Jianliang, JIANG Kui
    2016, 36(5):  1206-1211.  DOI: 10.11772/j.issn.1001-9081.2016.05.1206
    In view of the problems that the spatial distribution of traffic in traditional Network on Chip (NoC) traffic models is not consistent with the communication locality of practical applications and that network bandwidth overhead is large, a novel localized traffic generation algorithm for NoC based on Rent's rule was proposed. By establishing a communication probability distribution model over a finite Mesh structure, the communication probability matrix was used to make each node send packets uniformly and obtain synthetic traffic, thus realizing locality. Experiments simulated traffic with different locality degrees and network sizes. The results show that the proposed algorithm achieves better traffic locality and is closer to actual traffic than five algorithms including Random Uniform, Bit Complement, Reversal, Transpose and Butterfly. In addition, its network bandwidth overhead is lower.
    Improved automatic classification algorithm of software bug report in cloud environment
    HUANG Wei, LIN Jie, JIANG Yu'e
    2016, 36(5):  1212-1215.  DOI: 10.11772/j.issn.1001-9081.2016.05.1212
    User-submitted bug reports are arbitrary and subjective, the accuracy of their automatic classification is not ideal, and much human intervention is therefore required. As bug report databases grow bigger and bigger, improving the accuracy of automatic classification becomes urgent. A TF-IDF (Term Frequency-Inverse Document Frequency) based Naive Bayes (NB) algorithm was proposed, which considers not only the relationship of a term across different classes but also the relationship of a term inside a class. It was also implemented in the distributed parallel environment of the MapReduce model on the Hadoop platform. The experimental results show that the proposed Naive Bayes algorithm improves the F1 measure to 71%, which is 27 percentage points higher than the state-of-the-art method. Moreover, it can deal with massive amounts of data in a distributed way by adding computational nodes, offering shorter running time and better performance.
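    On a single machine, the core classifier can be sketched in a few lines; the snippet below uses scikit-learn's standard TF-IDF weighting and multinomial Naive Bayes as a stand-in for the paper's modified term weighting, and does not reproduce the MapReduce parallelization. The reports and labels are invented.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        reports = [
            "app crashes when clicking save button",
            "memory leak after long running session",
            "button label misaligned on settings page",
            "ui color scheme hard to read",
        ]
        labels = ["crash", "performance", "ui", "ui"]

        # TF-IDF features feeding a Naive Bayes classifier
        clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
        clf.fit(reports, labels)
        print(clf.predict(["crash on save dialog"]))   # -> ['crash']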
    Adaptive multi-resource scheduling dominant resource fairness algorithm for Mesos in heterogeneous clusters
    KE Zunwang, YU Jiong, LIAO Bin
    2016, 36(5):  1216-1221.  DOI: 10.11772/j.issn.1001-9081.2016.05.1216
    The fairness of multi-resource allocation is one of the most important indicators of a resource scheduling subsystem. Dominant Resource Fairness (DRF), a general resource allocation algorithm for multi-resource scenarios, may be unfair in heterogeneous cluster environments. Based on a study of the DRF multi-resource fair allocation algorithm in the Mesos framework, the meDRF allocation algorithm was designed and implemented, which accounts for the factors influencing server performance. The machine performance scores of computing nodes were made a dominant factor in the DRF share calculation, so that computing tasks have an equal chance to obtain high-quality and poor computing resources. Experiments were conducted with K-means, Bayes and PageRank jobs under Hadoop. The experimental results show that, compared with the DRF allocation algorithm, meDRF reflects the fairness of resource allocation better and allocates resources more stably, which effectively improves the utilization of system resources.
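    The idea of weighting dominant shares by machine performance can be sketched briefly. Below is a minimal Python illustration of a DRF-style allocation loop in which each user's dominant share is divided by a per-node performance score, in the spirit of meDRF; the capacities, demands and scores are invented, and the real Mesos integration is not reproduced.

        total = {"cpu": 18.0, "mem": 36.0}            # cluster capacity
        used = {"cpu": 0.0, "mem": 0.0}
        demands = {"A": {"cpu": 1.0, "mem": 4.0},     # per-task demands
                   "B": {"cpu": 3.0, "mem": 1.0}}
        perf = {"A": 1.0, "B": 0.8}                   # machine performance scores
        alloc = {u: {"cpu": 0.0, "mem": 0.0} for u in demands}

        def weighted_dominant_share(user):
            share = max(alloc[user][r] / total[r] for r in total)
            return share / perf[user]                 # performance-weighted share

        while True:
            user = min(demands, key=weighted_dominant_share)  # neediest user
            d = demands[user]
            if any(used[r] + d[r] > total[r] for r in total):
                break                                 # next task no longer fits
            for r in total:
                used[r] += d[r]
                alloc[user][r] += d[r]

        print(alloc)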
    Strategy for object index based on RAMCloud
    WANG Yuefei, YU Jiong, LU Liang
    2016, 36(5):  1222-1227.  DOI: 10.11772/j.issn.1001-9081.2016.05.1222
    To improve memory utilization, RAMCloud changes the positions of objects, which causes hash-based object localization to fail and makes data search inefficient. Moreover, since the needed data cannot be located rapidly during data recovery, the segments returned from each backup cannot be organized well. To address these problems, a RAMCloud Global Key (RGK) and a binary index tree were proposed. The RGK is divided into three parts, locating the master, the segment, and the object. The first two parts constitute the Coordinator Index Key (CIK), meaning that during recovery the Coordinator Index Tree (CIT) can locate the master of a segment; the last two parts constitute the Master Index Key (MIK), with which the Master Index Tree (MIT) can obtain objects quickly even if the data has been moved in memory. Compared with a traditional RAMCloud cluster, the time to obtain objects is clearly reduced as data throughput increases, and both the idle time of the coordinator and the log recombination time decline. The experimental results show that the global key, supported by the binary index tree, can reduce the time of obtaining objects and of recovery.
    Parallel computation for image denoising via total variation dual model on GPU
    ZHAO Mingchao, CHEN Zhibin, WEN Youwei
    2016, 36(5):  1228-1231.  DOI: 10.11772/j.issn.1001-9081.2016.05.1228
    The problem of Total Variation (TV)-based image denoising was considered. Since traditional serial computation on a Central Processing Unit (CPU) is slow, a parallel computation based on the Graphics Processing Unit (GPU) was proposed. The dual model of total variation-based image denoising was derived, and the relationship between the primal and dual variables was considered. The projected gradient method was applied to solve the dual model. Numerical results obtained on CPU and GPU show that the GPU implementation is more efficient than the CPU one, and the advantage of GPU parallel computing becomes more pronounced as image size increases.
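    For reference, here is a minimal CPU sketch in Python of projected gradient on the TV dual model, in the Chambolle style; the step size, iteration count and boundary handling are common textbook choices and may differ from the paper's scheme, and the GPU kernel mapping is not reproduced (each update below is element-wise, which is what makes the method parallelizable).

        import numpy as np

        def grad(u):
            gx = np.roll(u, -1, axis=0) - u; gx[-1, :] = 0
            gy = np.roll(u, -1, axis=1) - u; gy[:, -1] = 0
            return gx, gy

        def div(px, py):
            dx = px - np.roll(px, 1, axis=0); dx[0, :] = px[0, :]; dx[-1, :] = -px[-2, :]
            dy = py - np.roll(py, 1, axis=1); dy[:, 0] = py[:, 0]; dy[:, -1] = -py[:, -2]
            return dx + dy

        def tv_denoise_dual(f, lam=10.0, tau=0.125, n_iter=200):
            """Solve the dual problem; the primal image is f - lam * div(p)."""
            px, py = np.zeros_like(f), np.zeros_like(f)
            for _ in range(n_iter):
                gx, gy = grad(div(px, py) - f / lam)
                norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
                px = (px + tau * gx) / norm     # projection keeps |p| <= 1
                py = (py + tau * gy) / norm
            return f - lam * div(px, py)

        f = 5.0 + np.random.default_rng(2).normal(0, 1, (64, 64))  # noisy image
        u = tv_denoise_dual(f)
        print(u.std() < f.std())   # True: noise is smoothed away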
    Step-by-step multi-radar track correlation algorithm based on fuzzy clustering
    ZHANG Shubin, FANG Yangwang, YONG Xiaoju, PENG Weishi, LI Wei
    2016, 36(5):  1232-1235.  DOI: 10.11772/j.issn.1001-9081.2016.05.1232
    Since the multi-radar track correlation algorithm based on transitive closure fuzzy clustering has high computational complexity, a step-by-step multi-radar track correlation algorithm based on fuzzy clustering was proposed. First, track correlation was judged based on Euclidean distance, and the track similarity matrix was simplified through fuzzy similarity calculation; this decreased the number of iterations and, in turn, reduced the computational demand of the proposed algorithm. The simulation results show that the proposed algorithm can determine targets' tracks accurately and saves 54% of the computation time while maintaining high accuracy.
    Security analysis and implementation for wireless local area network access protocol via near field communication authentication
    LI Yun, CHEN Pangsen, SUN Shanlin
    2016, 36(5):  1236-1245.  DOI: 10.11772/j.issn.1001-9081.2016.05.1236
    Aiming at the problems of the point-to-point communication model of the Wireless Local Area Network (WLAN) access protocol via Near Field Communication (NFC) authentication, such as plaintext transfer, anonymous user access, and data being easily tapped and tampered with, a security design of the WLAN protocol via NFC was put forward. A secure tunnel was built using the Diffie-Hellman key exchange algorithm and the second-generation Secure Hash Algorithm (SHA) to transfer random information, and user anonymity was eliminated using the Elliptic Curve Digital Signature Algorithm (ECDSA). A prototype implementation on a computer was given, covering requirement analysis, architecture design and the sequence of protocol steps. The experimental results using Colored Petri Net (CPN) modeling show that the proposed protocol executes stably and deals with the unauthorized access and eavesdropping problems of WLAN.
    Vulnerability detection algorithm of DOM XSS based on dynamic taint analysis
    LI Jie, YU Yan, WU Jiashun
    2016, 36(5):  1246-1249.  DOI: 10.11772/j.issn.1001-9081.2016.05.1246
    Concerning DOM XSS (Document Object Model (DOM)-based Cross Site Scripting (XSS)) vulnerability detection in Web clients, a detection algorithm for DOM XSS vulnerabilities based on dynamic taint analysis was proposed. By constructing a DOM model and modifying the Firefox SpiderMonkey script engine, a bytecode-level dynamic taint analysis method was used to detect DOM XSS vulnerabilities. First, taint data was marked by extending the attributes of the DOM object class and modifying the string encoding format of SpiderMonkey. Then, the execution route of the bytecode was traversed to generate the tainted data set. After that, all output points which might trigger DOM XSS attacks were monitored to determine whether the application contained DOM XSS vulnerabilities. In the experiment, a DOM XSS vulnerability detection system containing a crawler was designed and implemented. The experimental results show that the proposed algorithm can effectively detect DOM XSS vulnerabilities, with a detection rate of about 92%.
    Efficient certificate-based proxy re-encryption scheme without bilinear pairings
    XU Hailin, CHEN Ying, LU Yang
    2016, 36(5):  1250-1256.  DOI: 10.11772/j.issn.1001-9081.2016.05.1250
    All previous certificate-based Proxy Re-Encryption (PRE) schemes are based on computationally heavy bilinear pairings and thus have low computation efficiency. To solve this problem, a certificate-based proxy re-encryption scheme without bilinear pairings was proposed over an elliptic curve group. Under the hardness assumption of the Computational Diffie-Hellman (CDH) problem, the proposed scheme was formally proven indistinguishable against adaptive chosen-ciphertext attacks in the random oracle model. By avoiding time-consuming bilinear pairing operations, the proposed scheme significantly reduces computation cost. Analysis shows that, compared with previous certificate-based proxy re-encryption schemes using bilinear pairings, the proposed scheme has obvious advantages in both computation efficiency and communication cost, and is more suitable for computation-constrained and bandwidth-limited applications.
    Dynamic S-box construction method and dynamic cryptography property analysis
    HU Zhihua, YAN Shuo, XIONG Kuanjiang
    2016, 36(5):  1257-1261.  DOI: 10.11772/j.issn.1001-9081.2016.05.1257
    Based on the idea of using a reversible transformation and an affine transformation to generate S-boxes, a method to produce dynamic S-boxes by changing the rows of the affine transformation matrix was proposed. The cryptographic properties of a single S-box were analyzed, and the experimental results show that they reach the security standard of the AES S-box. Moreover, the dynamic differential probability, dynamic linear probability, dynamic nonlinearity, dynamic algebraic degree and number of impossible differentials of the dynamic S-boxes produced by the proposed method were analyzed. Theoretical analysis shows that these S-boxes possess good cryptographic properties, and experiments also confirm good dynamic nonlinearity, dynamic differential and impossible differential properties. Finally, analysis of the hardware implementation shows that the proposed method generates dynamic S-boxes with high hardware efficiency.
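    As a rough illustration of the construction (multiplicative inverse in GF(2^8) followed by an affine transformation whose row pattern can be varied), here is a Python sketch; the row_shift parameter is an invented stand-in for the paper's row-changing scheme, not its exact method, and with row_shift=0 the standard AES-style S-box is obtained.

        def gf_mul(a, b, poly=0x11B):          # multiplication in GF(2^8)
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                if a & 0x100:
                    a ^= poly
                b >>= 1
            return r

        def gf_inv(a):                          # brute-force inverse, 0 maps to 0
            return 0 if a == 0 else next(x for x in range(1, 256)
                                         if gf_mul(a, x) == 1)

        def make_sbox(row_shift=0, const=0x63):
            """Bit i of the output XORs input bits {i, i+4, i+5, i+6, i+7}
            (mod 8), all offset by row_shift (a rotated affine matrix)."""
            sbox = []
            for v in range(256):
                b, out = gf_inv(v), 0
                for i in range(8):
                    bit = 0
                    for k in (0, 4, 5, 6, 7):
                        bit ^= (b >> ((i + k + row_shift) % 8)) & 1
                    out |= bit << i
                sbox.append(out ^ const)
            return sbox

        print(hex(make_sbox(0)[0x01]))   # 0x7c, matching the AES S-box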
    Scene classification based on feature-level and decision-level fusion
    HE Gang, HUO Hong, FANG Tao
    2016, 36(5):  1262-1266.  DOI: 10.11772/j.issn.1001-9081.2016.05.1262
    Since a single feature yields low accuracy in scene classification, a classification method combining feature-level and decision-level fusion was proposed, inspired by information fusion. Firstly, the Scale Invariant Feature Transform-Bag of Words (SIFT-BoW), Gist, Local Binary Patterns (LBP), Laws texture and color histogram features of an image were extracted. Then, the classification results of each single feature were fused using Dezert-Smarandache Theory (DSmT) to obtain the decision-level fusion result; at the same time, the five features were serially concatenated into a new feature, which was used for classification to obtain the feature-level fusion result. Finally, the feature-level and decision-level results were adaptively fused to finish classification. To solve the Basic Belief Assignment (BBA) problem of DSmT, a method based on a posterior probability matrix was proposed. The accuracy of the proposed method on 21 classes of remote sensing images is 88.61% when training and testing samples are both 50, which is 12.27 percentage points higher than the best single-feature accuracy, and also higher than that of feature-level serial concatenation fusion or DSmT-based decision-level fusion alone.
    Clustering for point objects based on spatial proximity
    YU Li, GAN Shu, YUAN Xiping, LI Jiatian
    2016, 36(5):  1267-1272.  DOI: 10.11772/j.issn.1001-9081.2016.05.1267
    Spatial clustering is one of the vital research directions in spatial data mining and knowledge discovery. However, constrained by complex point distributions with uneven density, various shapes and multi-bridge connections, most clustering algorithms based on distance or density cannot identify highly aggregative point sets efficiently and effectively. A point clustering method based on spatial proximity was proposed. Adjacency relationships among points were recognized from the structure of the point Voronoi diagram, the similarity criterion was defined by Voronoi regions, and a tree structure was built to recognize point-target clusters. Comparison experiments were conducted against the K-means algorithm and the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. Results show that the proposed algorithm is capable of identifying clusters of arbitrary shapes, with different densities and connected only by bridges or chains, and is also suitable for aggregative pattern recognition in heterogeneous space.
    Clustering recommendation algorithm based on user interest and social trust
    XIAO Xiaoli, QIAN Yali, LI Danjiang, TAN Liubin
    2016, 36(5):  1273-1278.  DOI: 10.11772/j.issn.1001-9081.2016.05.1273
    Collaborative filtering is the most widely used algorithm in personalized recommendation systems. Focusing on the problems of data sparseness and poor scalability, a new clustering recommendation algorithm based on user interest and social trust was proposed. Firstly, according to user rating information, users were divided into different categories by clustering, and a user neighbor set based on interest was built; to improve the accuracy of interest similarity calculation, a modified cosine formula was used to eliminate differences in users' scoring criteria. Then, a trust mechanism was introduced: implicit trust values among users were measured by defining direct and indirect trust calculation methods, the social network was converted into a trust network, and a user neighbor set based on trust was built. Finally, the algorithm combined the predicted values of the two neighbor sets by weighting to generate recommendations. Simulation experiments on the Douban dataset were carried out to test performance and to find suitable values of α and k. Compared with user-based collaborative filtering and trust-based recommendation, the Mean Absolute Error (MAE) decreased by 6.7%, while precision, recall and F1 increased by 25%, 40% and 37% respectively. The proposed algorithm can effectively improve the quality of recommendation systems.
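    The modified cosine step is simple enough to show directly. Below is a minimal Python sketch of the adjusted cosine similarity that removes differences in users' scoring criteria by centering each user's ratings on that user's mean; the rating matrix is synthetic.

        import numpy as np

        R = np.array([[5.0, 3.0, 0.0, 4.0],   # rows: users, cols: items
                      [4.0, 0.0, 0.0, 5.0],   # 0 marks "not rated"
                      [1.0, 1.0, 5.0, 2.0]])

        def adjusted_cosine(u, v):
            mask = (R[u] > 0) & (R[v] > 0)           # co-rated items only
            if not mask.any():
                return 0.0
            du = R[u, mask] - R[u][R[u] > 0].mean()  # center on user means
            dv = R[v, mask] - R[v][R[v] > 0].mean()
            denom = np.linalg.norm(du) * np.linalg.norm(dv)
            return float(du @ dv / denom) if denom else 0.0

        print(adjusted_cosine(0, 1), adjusted_cosine(0, 2))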
    Academic paper recommendation model based on community partition
    HUANG Yonghang, TANG Yong, LI Chunying, TANG Zhikang, LIU Jiwei
    2016, 36(5):  1279-1283.  DOI: 10.11772/j.issn.1001-9081.2016.05.1279
    An academic paper recommendation model based on community partition was proposed, exploiting the sociability of social networks. The model regarded the largest connected component of a complex network as the logical unit of data processing and divided it into non-intersecting kernel sub-networks. Labels were established from the kernel sub-networks in a non-parametric manner. Communities in the scholar social network were divided through label propagation, and academic papers were recommended to users within communities based on the partition results. The proposed community partition method was compared with classic community partition methods in experiments on artificial networks. The experimental results show that the proposed method achieves good community partition quality on artificial networks with different characteristics.
    Importance sorting method of organizational entities based on trace clustering
    XU Tao, MENG Ye
    2016, 36(5):  1284-1289.  DOI: 10.11772/j.issn.1001-9081.2016.05.1284
    Aiming at the issue that social network analysis methods such as the hand-over network cannot express the importance of organizational entities precisely, a method to sort organizational entities by quantified importance under trace clusters was proposed. Firstly, a relation network was constructed to describe the relationship between trace clusters and organizational entities; secondly, a quantitative assessment of node importance in this network was defined; finally, the nodes were sorted according to their quantified importance. The experimental results show that this relation network expresses the actual importance of organizational entities more precisely than the hand-over network generated by trace clustering. Compared with the importance sorting algorithm for network community nodes based on topological potential, the proposed method fits actual business processes better and distinguishes distinct organizational entities better.
    Chinese natural language interface based on paraphrasing
    ZHANG Junchi, HU Jie, LIU Mengchi
    2016, 36(5):  1290-1295.  DOI: 10.11772/j.issn.1001-9081.2016.05.1290
    A novel method for a Chinese Natural Language Interface to Database (NLIDB) based on Chinese paraphrasing was proposed, to solve the problems that traditional methods based on syntactic parsing cannot achieve high accuracy and need a large amount of manually labeled training corpus. First, key entities of user statements were extracted against the database, and candidate tree sets and their tree expressions were generated. Then the most relevant semantic expressions were filtered by a paraphrase classifier trained on Internet Q&A corpus. Finally, candidate trees were translated into Structured Query Language (SQL). With the proposed method, the F1 score was 83.4% and 90% respectively on the Chinese versions of the American geography (GeoQueries880) and restaurant (RestQueries250) question datasets, better than syntax-based methods. The experimental results demonstrate that NLIDB based on paraphrasing bridges the semantic gap between users and databases better.
    Improved community detection algorithm based on local modularity
    WANG Tianhong, WU Xing, LAN Wangsen
    2016, 36(5):  1296-1301.  DOI: 10.11772/j.issn.1001-9081.2016.05.1296
    Focusing on the problem that most local community detection algorithms cannot accurately find the best neighbor nodes of communities, an improved local community detection algorithm based on local modularity was proposed. The concept of node intimacy was introduced to quantify the relationship between a community and its neighbor nodes, and nodes were added to communities in descending order of node intimacy. Finally, the expansion of the local community was terminated by the local modularity index. The algorithm was applied to real networks and artificial simulation networks and compared with four typical community detection algorithms, including the random walk algorithm based on information compression. The comprehensive evaluation index (F1 score) and Normalized Mutual Information (NMI) of its results are better than those of the comparison algorithms. The experiments show that the algorithm has better efficiency and accuracy, and is well suited for community detection in large-scale networks.
    Micro-blog hot-spot topic discovery based on real-time word co-occurrence network
    LI Yaxing, WANG Zhaokai, FENG Xupeng, LIU Lijun, HUANG Qingsong
    2016, 36(5):  1302-1306.  DOI: 10.11772/j.issn.1001-9081.2016.05.1302
    In view of the real-time, sparse and massive characteristics of micro-blogs, a topic discovery model based on a real-time word co-occurrence network was proposed. Firstly, the model extracted the set of keywords from the raw data, and relationship weights were calculated with a time parameter to construct the word co-occurrence network; sparsity was then reduced by finding strongly correlated potential features using a weight adjustment coefficient. Secondly, incremental topic clustering was achieved using an improved Single-Pass algorithm. Finally, the feature words of each topic were ranked by heat calculation to obtain the most representative keywords of the topic. The experimental results show that the accuracy and comprehensive index of the proposed model increase by 6% and 8% respectively compared with the Single-Pass algorithm, which proves the validity and accuracy of the proposed model.
    Feature selection algorithm based on enhanced bee colony optimization algorithm
    ZHANG Xia, PANG Xiuping
    2016, 36(5):  1307-1312.  DOI: 10.11772/j.issn.1001-9081.2016.05.1307
    Concerning the problem that traditional Bee Colony Optimization (BCO) has good exploration but weak exploitation performance, an exploitation-enhanced BCO algorithm was proposed and applied to the data feature selection problem to improve feature selection performance. Firstly, a global weight was introduced into the food source and used to evaluate the importance of each food source to the population, reducing the randomness of exploitation; then, a recruiting method with two-step filtering was designed to improve exploitation while keeping diversity; finally, a local weight was introduced into the food source to evaluate the correlation between the food source and the class labels used in the feature selection model. Simulation results show that the proposed method improves the effectiveness of BCO and performs well on the feature selection problem, outperforming Dissimilarity-based Artificial Bee Colony (DisABC) and Feature Selection based on Bee Colony Optimization (BCOFS).
    Relevance model estimation based on stable semantic clustering
    SUN Xinyu, WU Jiang, PU Qiang
    2016, 36(5):  1313-1318.  DOI: 10.11772/j.issn.1001-9081.2016.05.1313
    To solve the problem that relevance models based on unstable clustering estimation harm retrieval performance, a new Stable Semantic Relevance Model (SSRM) was proposed. A feedback data set was first formed from the top N documents of the user's initial query; after the stable number of semantic clusters had been detected, SSRM was estimated from the stable semantic clusters selected for higher user-query similarity. Finally, the retrieval performance of SSRM was verified by experiments. Compared with the Relevance Model (RM), the Semantic Relevance Model (SRM) and the clustering-based retrieval methods including the Cluster-Based Document Model (CBDM), the LDA-Based Document Model (LBDM) and Resampling, SSRM improves MAP by at least 32.11%, 0.41%, 23.64%, 19.59% and 8.03% respectively. The experimental results show that retrieval performance benefits from SSRM.
    Improved NSGA-Ⅱ algorithm based on adaptive hybrid non-dominated individual sorting strategy
    GENG Huantong, LI Huijian, ZHAO Yaguang, CHEN Zhengpeng
    2016, 36(5):  1319-1324.  DOI: 10.11772/j.issn.1001-9081.2016.05.1319
    In order to solve the problem that the population diversity preservation strategy of the Non-dominated Sorting Genetic Algorithm-Ⅱ (NSGA-Ⅱ), based only on crowding distance, cannot reflect the real crowding degree of individuals, an improved NSGA-Ⅱ algorithm based on an adaptive hybrid non-dominated individual sorting strategy (NSGA-Ⅱh) was proposed. First, a novel loop-clustering individual sorting strategy was designed. Second, according to the Pareto layer-sorting information, NSGA-Ⅱh adaptively chose between the classical crowding-distance and the loop-clustering sorting strategies. This improves the diversity maintenance mechanism, especially in the late period of evolutionary optimization. NSGA-Ⅱh was compared with three classical algorithms: NSGA-Ⅱ, Multi-Objective Particle Swarm Optimization (MOPSO) and GDE3. Experiments on five multi-objective benchmark functions show that NSGA-Ⅱh acquires 80% of the optimal Inverted Generational Distance (IGD) values, and the corresponding two-tailed t-test results at the 0.05 significance level are remarkable. The proposed algorithm improves both the convergence of the original algorithm and the distribution of the Pareto optimal set.
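    For context, the classical crowding distance that NSGA-Ⅱh augments can be sketched as follows; this is the textbook NSGA-Ⅱ computation, with synthetic objective values, and does not include the paper's loop-clustering strategy.

        import numpy as np

        def crowding_distance(F):
            """F: (n, m) objective values of one non-dominated front."""
            n, m = F.shape
            dist = np.zeros(n)
            for j in range(m):
                order = np.argsort(F[:, j])
                dist[order[0]] = dist[order[-1]] = np.inf   # boundary points
                span = F[order[-1], j] - F[order[0], j]
                if span == 0:
                    continue
                for k in range(1, n - 1):
                    dist[order[k]] += (F[order[k + 1], j]
                                       - F[order[k - 1], j]) / span
            return dist

        front = np.array([[0.1, 0.9], [0.4, 0.5], [0.5, 0.4], [0.9, 0.1]])
        print(crowding_distance(front))   # interior points get finite values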
    Two types of matroidal structure of generalized rough sets
    XU Guoye, WANG Zhaohao
    2016, 36(5):  1325-1329.  DOI: 10.11772/j.issn.1001-9081.2016.05.1325
    Based on the neighborhood-based and covering-based rough set models, two matroidal structures were constructed: the matroid induced by the neighborhood upper approximation number and the matroid induced by the covering upper approximation number. On one hand, the two types of upper approximation number were defined through generalized rough sets and proven to satisfy the rank function axioms of matroid theory, thus yielding two types of matroids from the viewpoint of the rank function. On the other hand, properties of these matroids, such as independent sets, circuits, closures and closed sets, were derived through the rough set approach, and the connections between upper approximation operators and closure operators were investigated. Furthermore, the relationship between coverings and matroids was studied. Results show that the elements of a covering, and any union of them, are closed sets of the matroid induced by the covering upper approximation number.
    Improved particle swarm optimization algorithm for support vector machine feature selection and optimization of parameters
    ZHANG Jin, DING Sheng, LI Bo
    2016, 36(5):  1330-1335.  DOI: 10.11772/j.issn.1001-9081.2016.05.1330
    Since feature selection and parameter optimization have a great impact on the classification accuracy of Support Vector Machines (SVM), an improved algorithm based on Particle Swarm Optimization (PSO) for SVM feature selection and parameter optimization (GPSO-SVM) was proposed to improve classification accuracy while selecting as few features as possible. To solve the problem that traditional PSO easily falls into local optima and premature convergence, crossover and mutation operators were introduced from the Genetic Algorithm (GA), allowing particles to perform crossover and mutation after each iteration and update. Crossover pairing between particles was determined by a non-correlation index, and the mutation probability was determined by fitness values, so that new particles were generated into the swarm. In this way, particles can jump out of previously searched optimal positions, improving population diversity and finding better solutions. Experiments on different data sets show that, compared with feature selection and SVM parameter optimization based on PSO and GA, the accuracy of GPSO-SVM is improved by 2% to 3% on average, and the number of selected features is reduced by 3% to 15%. The experimental results show that the proposed algorithm achieves better feature selection and parameter optimization.
    Prediction algorithm of dynamic trajectory based on weighted grey model (1,1)
    JIANG Yixian, ZHANG Qishan
    2016, 36(5):  1336-1340.  DOI: 10.11772/j.issn.1001-9081.2016.05.1336
    Dynamic trajectory prediction based on the Kalman filter requires assumptions about trajectory noise and motion. To remove this limitation, the metabolic GM(1,1) model was introduced into dynamic trajectory prediction, and a prediction algorithm based on a weighted grey GM(1,1) model (TR_GM_PR algorithm) was presented. Firstly, sub-trajectories of different lengths before the forecasting point were extracted in order, and the relative fitting errors and predicted values of the sub-trajectories were calculated using the grey GM(1,1) model. Secondly, the relative fitting errors were normalized, and the weights of the predicted values were set according to the result. Finally, the future trend of the trajectory was predicted by the linear combination of the predicted values and their corresponding weights. Experiments were conducted on Atlantic hurricane data from 2000 to 2008. Compared with the pattern-matching hurricane trajectory prediction method, the TR_GM_PR algorithm improves the 6-hour prediction accuracy by 2.6056 percentage points to 67.6056%. The experimental results show that TR_GM_PR is suitable for short-term trajectory prediction; moreover, the algorithm is computationally simple, highly real-time, and can effectively improve the prediction accuracy of dynamic trajectories.
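    The GM(1,1) building block that each sub-trajectory is fed through is compact enough to sketch; the snippet below is a standard GM(1,1) fit and one-step prediction in Python on an invented coordinate series, returning the relative fitting error that TR_GM_PR uses for weighting.

        import numpy as np

        def gm11_predict(x0, steps=1):
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                        # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])             # mean sequence
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            n = len(x0)
            k = np.arange(n + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.empty(n + steps)
            x0_hat[0], x0_hat[1:] = x0[0], np.diff(x1_hat)
            fit_err = np.mean(np.abs(x0_hat[:n] - x0) / np.abs(x0))
            return x0_hat[n:], fit_err                # predictions, rel. error

        track_x = [10.2, 10.9, 11.8, 12.6, 13.5]      # one trajectory coordinate
        pred, err = gm11_predict(track_x, steps=1)
        print("next position:", pred[0], "relative fitting error:", err)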
    Dynamic multi-species particle swarm optimization based on food chain mechanism
    LIU Jiao, MA Di, MA Tengbo, ZHANG Wei
    2016, 36(5):  1341-1346.  DOI: 10.11772/j.issn.1001-9081.2016.05.1341
    A novel Dynamic multi-Species Particle Swarm Optimization (DSPSO) algorithm based on a food chain mechanism was proposed, aiming at the problem that the basic Particle Swarm Optimization (PSO) algorithm easily falls into local optima when solving multimodal problems. Inspired by natural ecosystems, a food chain mechanism and a reproduction mechanism were employed to keep swarm diversity and good performance. In the food chain mechanism, the swarm was divided into several sub-swarms, each of which could prey on the others: the memory leader swarm evolved, the least-contributing particles were eliminated through predation, and new particles were generated through the reproduction mechanism. Diversity was kept through evaluation of the swarm, and efficiency was enhanced by eliminating the misleading effect of less-contributing particles. To verify effectiveness, ten benchmark problems including shifted and rotated problems were chosen to test DSPSO. The experimental results show that DSPSO performs well: compared with the PSO, Local version PSO (LPSO), Dynamic Multi-Swarm PSO (DMS-PSO) and Comprehensive Learning PSO (CLPSO) algorithms, DSPSO not only obtains more accurate solutions but also has higher reliability.
    MSNV: network structure visualization method based on multi-level community detection
    WANG Xiangang, YAO Zhonghua, SONG Hanchen
    2016, 36(5):  1347-1351.  DOI: 10.11772/j.issn.1001-9081.2016.05.1347
    Focusing on the issue that large-scale networks have huge numbers of nodes and high structural complexity, making it difficult to demonstrate their structural characteristics in limited screen space, a multi-level network visualization method based on community detection was proposed. Firstly, a community detection algorithm based on network modularity was used, with a greedy algorithm finding the partition of maximum modularity, to obtain communities at different levels of granularity. Then, to solve the problem that the Force-Directed Algorithm (FDA) cannot display network nodes hierarchically, the classic FDA was improved by adding a level binding force to achieve a hierarchical layout of the communities. Finally, high-level communities and low-level nodes were displayed separately using interactive methods such as multi-window views and Overview+Detail, meeting the requirements of displaying both the high-level macrostructure and the low-level details of the network. In simulation tests, the community detection algorithm is faster and more accurate than the classic GN (Girvan-Newman) algorithm. Theoretical analysis and simulation results show that the proposed method displays large-scale network structure well and supports effective interaction.
    Rendering algorithm of dynamic participating media based on optical flow
    WANG Yuanlong
    2016, 36(5):  1352-1355.  DOI: 10.11772/j.issn.1001-9081.2016.05.1352
    In order to achieve real-time rendering of continuous frames in participating media scenes, a rendering algorithm based on optical flow was proposed. First, a region-matching method was used to calculate the optical flow field between key frames. Then the optical flow fields between intermediate frames were calculated by interpolation, and an inter-frame optical coherence function was used to enforce the consistency of optical flow, so that the media motion would not change abruptly. Finally, the dynamic scene was rendered over continuous frames according to the optical flow field. In rendering a participating media scene of 5 continuous frames, the efficiency of the proposed algorithm is nearly five times that of a method based on the Radial Basis Function (RBF) model; real-time rendering of consecutive frames is achieved with relatively high rendering quality.
    Detection of continuous and repeated single-frame copy-move forgery in videos by quantized DCT coefficients
    LIN Jing, HUANG Tianqiang, LAI Yueicong, LU Henan
    2016, 36(5):  1356-1361.  DOI: 10.11772/j.issn.1001-9081.2016.05.1356
    Most existing detection algorithms for temporal-domain video frame copy-move forgery are designed for copied sequences containing at least 20 frames and have difficulty detecting single-frame forgery. Yet according to the characteristics of human visual perception, at least 15 frames must be modified to change the meaning of a video, so a visually effective tampering requires continuous, repeated single-frame operations. To detect such tampering, a detection algorithm based on quantized Discrete Cosine Transform (DCT) coefficients for continuous and repeated single-frame copy-move forgery in videos was proposed. Firstly, the video was converted into images, and quantized DCT coefficients were taken as the feature vector of each frame. Then, the similarity between frames was measured by the Bhattacharyya coefficient, and a threshold was set to judge abnormal similarity between adjacent frames. Finally, whether and where the video was tampered with was determined from the continuity of frames with abnormal similarity and the number of such continuous frames. The experimental results show that the proposed algorithm can detect videos with different scenarios, has fast detection speed, is unaffected by recompression, and achieves high accuracy with a low omission ratio.
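    The per-frame feature and the similarity test can be sketched briefly; the snippet below computes quantized low-frequency 2D DCT coefficients per grayscale frame and compares frames with the Bhattacharyya coefficient. The quantization step, the number of coefficients kept and the synthetic frames are illustrative assumptions.

        import numpy as np
        from scipy.fftpack import dct

        def frame_feature(frame, q=16, keep=8):
            """Quantized low-frequency DCT coefficients of a grayscale frame."""
            c = dct(dct(frame, axis=0, norm="ortho"), axis=1, norm="ortho")
            return (np.abs(c[:keep, :keep]) // q).ravel()

        def bhattacharyya(f1, f2):
            p = f1 / (f1.sum() + 1e-12)            # normalize to distributions
            q = f2 / (f2.sum() + 1e-12)
            return float(np.sum(np.sqrt(p * q)))   # 1.0 means identical

        rng = np.random.default_rng(3)
        a = rng.uniform(0, 255, (64, 64))
        b = a.copy()                          # a copied frame
        c = rng.uniform(0, 255, (64, 64))     # an unrelated frame
        fa, fb, fc = map(frame_feature, (a, b, c))
        print(bhattacharyya(fa, fb), bhattacharyya(fa, fc))  # ~1.0 vs lower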
    Improved algorithm for sample adaptive offset filter based on AVS2
    CHEN Zhixian, WANG Guozhong, ZHAO Haiwu, LI Guoping, TENG Guowei
    2016, 36(5):  1362-1365.  DOI: 10.11772/j.issn.1001-9081.2016.05.1362
    Sample Adaptive Offset (SAO) is a time-consuming part of the in-loop filter in the second-generation Audio Video coding Standard (AVS2) and the High Efficiency Video Coding (HEVC) standard. Aiming at the large amount of computation and high complexity of existing SAO algorithms, an improved fast rate-distortion algorithm was proposed. In this method, the originally defined table of offset values and the binary bit string written into the code stream were modified, based on an analysis of the relationship between the different offset values of each class in edge mode and the change of rate-distortion, so that an early termination condition could quickly find the best offset value for the current SAO unit without calculating the rate-distortion cost of every offset. The experimental results show that, compared with AVS2, the proposed algorithm reduces the number of cycles needed to find the best offset values by 75% and the running time of the in-loop filter by 33%, effectively lowering computational complexity while keeping the rate-distortion of the image almost unchanged.
    Feature extraction and reconstruction of environmental plane based on Kinect
    WANG Mei, YU Yuanfang, TU Dawei, ZHOU Hua
    2016, 36(5):  1366-1370.  DOI: 10.11772/j.issn.1001-9081.2016.05.1366
    Aiming at the large amount of data and complicated algorithms in 3D scene feature recognition, a feature extraction and reconstruction algorithm for environmental planes based on Kinect was proposed. Firstly, a RANdom SAmple Consensus (RANSAC) environment segmentation method combining geometric and color information was proposed, which overcomes the over-segmentation and under-segmentation of purely geometric approaches and improves accuracy. Secondly, according to the principle of perspective projection, a three-dimensional transformation matrix was derived to map the 3D environment onto a plane; contour points were then extracted in the two-dimensional space by searching object boundary information using the convex hull concept. Finally, the 3D information of the contour points was recovered by the inverse rotation transform, completing the reconstruction of environmental features. Three groups of scene data were used to verify the algorithm, and the experimental results show that the proposed algorithm yields more precise segmentation, reduces over-segmentation, and reconstructs objects with different shape features better.
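    The geometry-plus-color segmentation idea can be illustrated with a small RANSAC plane fit whose inlier test combines point-plane distance with a color-difference threshold; the thresholds, the color test and the synthetic point cloud below are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        rng = np.random.default_rng(4)

        def ransac_plane(xyz, rgb, n_iter=200, d_th=0.02, c_th=30.0):
            best = np.array([], dtype=int)
            for _ in range(n_iter):
                idx = rng.choice(len(xyz), 3, replace=False)
                p0, p1, p2 = xyz[idx]
                n = np.cross(p1 - p0, p2 - p0)
                if np.linalg.norm(n) < 1e-9:
                    continue                    # degenerate sample
                n /= np.linalg.norm(n)
                dist = np.abs((xyz - p0) @ n)   # point-plane distance
                cdiff = np.linalg.norm(rgb - rgb[idx].mean(axis=0), axis=1)
                inliers = np.where((dist < d_th) & (cdiff < c_th))[0]
                if len(inliers) > len(best):
                    best = inliers
            return best

        xyz = rng.uniform(-1, 1, (500, 3))
        xyz[:400, 2] = rng.normal(0, 0.005, 400)      # a noisy z = 0 plane
        rgb = np.full((500, 3), 120.0) + rng.normal(0, 5, (500, 3))
        print(len(ransac_plane(xyz, rgb)))   # roughly the 400 planar points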
    Fingertip tracking method based on temporal context tracking-learning-detection
    HOU Rongbo, KANG Wenxiong, FANG Yuxun, HUANG Rongen, XU Weizhao
    2016, 36(5):  1371-1377.  DOI: 10.11772/j.issn.1001-9081.2016.05.1371
    In video-based in-air signature verification systems, existing methods cannot meet the accuracy, real-time and robustness requirements of fingertip tracking simultaneously. To solve this problem, a Tracking-Learning-Detection (TLD) method based on temporal context was proposed. On the basis of the original TLD algorithm, temporal context information, namely the prior knowledge that fingertip movement is continuous between adjacent frames, was introduced to narrow the search range of detection and tracking adaptively, thereby improving tracking speed. The experimental results on 12 public and 1 self-made video sequences show that the improved TLD algorithm can track fingertips accurately at up to 43 frames per second. Compared with the original TLD algorithm, accuracy was increased by 15% and tracking speed by more than 100%, which makes the proposed method meet the real-time requirements of fingertip tracking.
    Defogging algorithm based on HSI luminance component and RGB space
    LI Huihui, QIN Pinle, LIANG Jun
    2016, 36(5):  1378-1382.  DOI: 10.11772/j.issn.1001-9081.2016.05.1378
    The purpose of image defogging is to remove the fog effect from surveillance video images and improve the visual quality of hazy images. At present, evaluation is often only a comparison between images before and after defogging, and the results are often seriously distorted and oversaturated, making it hard to preserve clear details and complete color information simultaneously. To tackle these problems, a new image recovery method combining the HSI luminance component and RGB space was proposed, based on the atmospheric scattering model and optical principles. In this method, the relative depth relationship of the image scene was analyzed by comparing images taken on clear and hazy days using the HSI luminance component, to which the eye is most sensitive. Finally, using the atmospheric scattering model and the depth-of-field comparison, videos captured in haze were recovered and the results evaluated. The experimental results show that, compared with defogging methods computed purely in RGB space, the proposed method produces clearer defogged results with less color distortion and oversaturation.
    Single Gaussian model for background using block-based gradient and linear prediction
    YANG Wenhao, LI Xiaoman
    2016, 36(5):  1383-1386.  DOI: 10.11772/j.issn.1001-9081.2016.05.1383
    To solve the problems that the Single Gaussian Model (SGM) for background cannot adapt to non-stationary scenes and that a "ghost" appears when a motionless object suddenly moves, an SGM using block-based gradients and linear prediction was put forward. Firstly, the SGM was implemented at the pixel level and updated adaptively according to changes in pixel values; at the same time each frame was processed by a block-based gradient algorithm, which obtains the background by judging whether the gradient of a sub-block is within a threshold, eliminating the "ghost". Then the foregrounds from the block-based gradient algorithm and from the SGM were combined by an AND operation, improving background judgment in non-stationary scenes. Lastly, linear prediction was employed to process the resulting foreground, resetting connected regions whose area was below a threshold to background. Simulation experiments were conducted on the CDNET 2012 and Wallflower datasets. In scenes that vary by a large margin, the accuracy of the proposed method is 40% higher than that of the Gaussian Mixture Model (GMM), although its detection rate is lower; in other scenes, its detection rate is 10% higher and its accuracy 25% higher. The simulation results show that the proposed method adapts to non-stationary scenes, wipes off the "ghost", and obtains a better background and more detailed foreground than GMM.
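    The pixel-level part of the model is easy to make concrete. Below is a minimal Python sketch of a single Gaussian background model with a per-pixel running mean and variance, a foreground test, and adaptive updating on background pixels only; the learning rate, threshold and synthetic frames are illustrative assumptions, and the block-gradient and linear-prediction stages are not reproduced.

        import numpy as np

        class SingleGaussianBG:
            def __init__(self, first_frame, alpha=0.05, k=2.5):
                self.mu = first_frame.astype(float)     # per-pixel mean
                self.var = np.full_like(self.mu, 50.0)  # per-pixel variance
                self.alpha, self.k = alpha, k

            def apply(self, frame):
                frame = frame.astype(float)
                d2 = (frame - self.mu) ** 2
                fg = d2 > (self.k ** 2) * self.var      # foreground mask
                bg = ~fg                                # update background only
                self.mu[bg] += self.alpha * (frame - self.mu)[bg]
                self.var[bg] += self.alpha * (d2 - self.var)[bg]
                return fg

        rng = np.random.default_rng(5)
        bgm = SingleGaussianBG(rng.normal(100, 3, (48, 64)))
        frame = rng.normal(100, 3, (48, 64))
        frame[10:20, 10:20] = 200                        # a sudden bright object
        print(bgm.apply(frame).sum())   # roughly the 100 object pixels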
    Illumination direction measurement based on halo analysis in high-dynamic range images
    LI Hua, WANG Xuyang, YANG Huamin, HAN Cheng
    2016, 36(5):  1387-1393.  DOI: 10.11772/j.issn.1001-9081.2016.05.1387
    Aiming at the illumination consistency of complex scenes in Augmented Reality (AR) systems, and analyzing marker images with High-Dynamic Range (HDR) technology, an improved measurement algorithm for illumination direction based on halo analysis in HDR images was proposed. To improve the immersion and realism of virtual objects, after studying existing illumination recovery algorithms, a camera calibration method exploiting the projective invariance of a pair of quadratic curves was proposed. To obtain detailed light information, HDR was used to process the marker image and improve accuracy. With reference to the Lambert illumination model, the light information of the image was analyzed to classify the shooting angle, improving traditional light source direction measurement; some light source directions outside the mirror ball's reflection range could be measured. Two shots of a single point light source were tested and analyzed. The experimental results show that the method is simple and robust, and can measure part of the illumination directions outside the mirror ball's reflection range whether or not the marker is partially shaded.
    Super-resolution reconstruction based on multi-dictionary learning and image patches mapping
    MO Jianwen, ZENG Ermeng, ZHANG Tong, YUAN Hua
    2016, 36(5):  1394-1398.  DOI: 10.11772/j.issn.1001-9081.2016.05.1394
    To overcome the unclear results and long running time of sparse-representation image super-resolution reconstruction with a single redundant dictionary, a single-image super-resolution reconstruction method based on multi-dictionary learning and image patch mapping was proposed. Within the traditional sparse representation framework, the gradient structure information of local image patches was first explored: a large number of training patches were clustered into several groups by their gradient angles, and the corresponding dictionary pairs were learned from the clustered patches. Then, in each cluster, the mapping function from low-resolution patch to high-resolution patch was computed from the learned dictionary pair with the idea of neighbor embedding. Finally, reconstruction reduces to projecting each input patch into the high-resolution space by multiplying with the corresponding precomputed mapping function, which improves image quality with less running time. The experimental results show that the proposed method improves visual quality significantly and increases the PSNR (Peak Signal-to-Noise Ratio) by at least 0.4 dB compared with the anchored neighborhood regression algorithm.
    Similar circular object recognition method based on local contour feature in natural scenario
    BAN Xiaokun, HAN Jun, LU Dongming, WANG Wanguo, LIU Liang
    2016, 36(5):  1399-1403.  DOI: 10.11772/j.issn.1001-9081.2016.05.1399
    In natural scenes it is difficult to extract a complete object outline because of background texture, lighting and occlusion, so an object recognition method based on local contour features was proposed. The local contour feature used here is a chain of two adjacent straight or curved contour segments (2AS). First, the angle between the adjacent segments, the segment lengths and the bending strength were analyzed, and a semantic model of the 2AS feature was defined. Then, based on the relative positions of an object's 2AS features, a 2AS mutual-relation model was defined. Next, the 2AS semantic model of the object template was coarsely matched against the 2AS features of the test image, after which the template's mutual-relation model was matched against them precisely. Finally, the detected pairs of 2AS local contour features were grouped iteratively, and each group was verified against the template's mutual-relation model. In a comparative experiment with a 2AS algorithm based on chains of similar straight lines, the proposed algorithm achieved higher accuracy and lower false positive and miss rates when recognizing grading rings, so it recognizes grading rings more effectively.
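    A toy version of a 2AS descriptor and a tolerance-based coarse match is sketched below; the exact attributes and thresholds are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def two_as_descriptor(p0, p1, p2):
    """Describe two adjacent contour segments p0->p1 and p1->p2
    by their signed turn angle and length ratio."""
    a = np.asarray(p1, float) - np.asarray(p0, float)
    b = np.asarray(p2, float) - np.asarray(p1, float)
    turn = np.arctan2(a[0] * b[1] - a[1] * b[0], a.dot(b))
    ratio = np.linalg.norm(b) / (np.linalg.norm(a) + 1e-9)
    return np.array([turn, ratio])

def semantic_match(d_template, d_image, tol=np.array([0.3, 0.4])):
    """Coarse semantic match: attributes agree within per-attribute tolerances."""
    return bool(np.all(np.abs(d_template - d_image) <= tol))
```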
    Real-time landmark matching algorithm supported by improved FAST feature point
    YANG Qili, ZHU Lanyan, LI Haitao
    2016, 36(5):  1404-1409.  DOI: 10.11772/j.issn.1001-9081.2016.05.1404
    Since matching time and accuracy requirements cannot be met simultaneously by existing image matching techniques, a feature-point matching method was proposed. Landmark matching was achieved with a Random Forest (RF), turning the matching problem into a simple classification problem and reducing the computational complexity of real-time matching. The landmark image was represented by Features from Accelerated Segment Test (FAST) feature points, whose scale and affine invariance were improved by a Gaussian pyramid structure and an affine augmentation strategy, raising the matching rate. Compared with the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Feature (SURF) algorithms, the experimental results show that the matching rate of the proposed algorithm reached about 90%, roughly on par with SIFT and SURF under scale change, occlusion or rotation, while its running time was an order of magnitude shorter than that of the other two algorithms. The method matches landmarks efficiently and its running time meets real-time requirements.
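    The classification view of matching can be sketched like this: each template keypoint becomes one class, random affine warps provide training samples, and a Random Forest classifies raw patches. OpenCV's FAST detector and scikit-learn's RandomForestClassifier stand in for the paper's components, and the patch size and warp ranges are assumed values:

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch(img, x, y, s=16):
    x, y = int(round(x)), int(round(y))
    if x < s // 2 or y < s // 2:
        return None
    p = img[y - s // 2:y + s // 2, x - s // 2:x + s // 2]
    return p.ravel() if p.shape == (s, s) else None

def train_matcher(template, n_warps=50):
    """template: 8-bit grayscale landmark image; one class per FAST keypoint."""
    kps = cv2.FastFeatureDetector_create().detect(template)
    pts = np.float32([kp.pt for kp in kps])
    X, y = [], []
    for _ in range(n_warps):
        M = np.float32([[1, 0, 0], [0, 1, 0]])
        M[:, :2] += np.random.uniform(-0.1, 0.1, (2, 2))    # small affine jitter
        warped = cv2.warpAffine(template, M, template.shape[::-1])
        for cls, (px, py) in enumerate(pts @ M[:, :2].T + M[:, 2]):
            f = patch(warped, px, py)
            if f is not None:
                X.append(f); y.append(cls)
    return RandomForestClassifier(n_estimators=50).fit(X, y), pts
```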
    Chinese speech segmentation method based on Gauss distribution of time spans of syllables
    ZHANG Yang, ZHAO Xiaoqun, WANG Digang
    2016, 36(5):  1410-1414.  DOI: 10.11772/j.issn.1001-9081.2016.05.1410
    To date there is no accurate method for segmenting Chinese natural speech into syllables, which would be useful for labeling speech against a reference text instead of manually. Based on two hypotheses, that the durations of Chinese syllables with the same pronunciation obey a Gaussian distribution and that a short-time energy valley exists between adjacent syllables, a Chinese speech segmentation method based on the Gaussian distribution of syllable durations was proposed. A simplified method based on the distribution of energy valleys was also given, which effectively reduces the time complexity of the segmentation. The experimental results show that the segmentation accuracy (the mean square value of the time offsets between manual labels and the labels produced by this method) reaches 10^-3, and the computation takes less than 1 s in MATLAB on a PC.
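    A hedged sketch of the valley-plus-prior idea follows: short-time energy valleys propose boundaries, and a Gaussian prior on syllable duration accepts or rejects them. The window, hop, duration mean/deviation and acceptance threshold are assumed values:

```python
import numpy as np

def short_time_energy(x, win=256, hop=128):
    frames = np.lib.stride_tricks.sliding_window_view(x, win)[::hop]
    return (frames ** 2).sum(axis=1)

def segment(x, sr=16000, mu=0.2, sigma=0.05, win=256, hop=128):
    e = short_time_energy(x, win, hop)
    valleys = [i for i in range(1, len(e) - 1) if e[i] < e[i - 1] and e[i] < e[i + 1]]
    bounds, last = [0], 0
    for v in valleys:
        dur = (v - last) * hop / sr                  # candidate syllable duration
        score = np.exp(-0.5 * ((dur - mu) / sigma) ** 2)
        if score > 0.1:                              # plausible under the Gaussian prior
            bounds.append(v); last = v
    return [b * hop / sr for b in bounds]            # boundary times in seconds
```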
    Exact pupil detection algorithm combining Hough transformation and contour matching
    MAO Shunbing
    2016, 36(5):  1415-1420.  DOI: 10.11772/j.issn.1001-9081.2016.05.1415
    To improve the precision of pupil diameter detection in infrared eye videos, an exact pupil detection algorithm (Hough-Contour) combining the Hough transform and contour matching was proposed. Firstly, each frame was converted to grayscale and filtered; secondly, the image edges were extracted and an initial circle was detected as the pupil parameter by a revised Hough gradient method; finally, a circular contour whose position and radius varied within a limited range around the initial circle was matched to the pupil, yielding the pupil centre coordinates and diameter. In the Hough transform phase, sorting candidate circle centres by accumulator value in descending order was replaced by searching for the maximum, reducing both the time spent in this step and the subsequent radius computation. In the experiments, a threshold on the maximum of the accumulator array was determined and used to exclude frames with closed eyes. In the contour matching phase, the experiments show that when the range of contour translation and scaling was set to one tenth of the initial circle's radius and the number of point pairs was set to 40, the detection precision on pupils reached 99.8%, compared with around 10% attained by OpenCV's circle transform. In the timing experiments, the proposed algorithm needed 60 ms per frame on low-end computers, and real-time detection on infrared eye videos can be achieved on high-end computers.
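    The two-stage idea can be sketched with OpenCV's stock HoughCircles providing the initial circle and a local search over centre and radius scoring edge support on the circle; the search range of one tenth of the radius and the 40 sample points follow the abstract, while all other parameters are assumed:

```python
import cv2
import numpy as np

def detect_pupil(gray):
    blur = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=30, minRadius=10, maxRadius=80)
    if circles is None:
        return None                                   # e.g. a closed-eye frame
    x0, y0, r0 = circles[0, 0]
    edges = cv2.Canny(blur, 50, 150)
    best, step = None, max(1, int(r0 / 10))           # one tenth of the radius
    theta = np.linspace(0, 2 * np.pi, 40)             # 40 sample points
    for dx in range(-step, step + 1):
        for dy in range(-step, step + 1):
            for dr in range(-step, step + 1):
                xs = (x0 + dx + (r0 + dr) * np.cos(theta)).astype(int)
                ys = (y0 + dy + (r0 + dr) * np.sin(theta)).astype(int)
                ok = (xs >= 0) & (xs < edges.shape[1]) & (ys >= 0) & (ys < edges.shape[0])
                score = edges[ys[ok], xs[ok]].sum()   # edge support on the contour
                if best is None or score > best[0]:
                    best = (score, x0 + dx, y0 + dy, r0 + dr)
    return best[1:]                                   # refined centre x, y and radius
```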
    Feature combination method based on Fisher criterion in speaker recognition
    XIE Xiaojuan, ZENG Yicheng, XIONG Bingfeng
    2016, 36(5):  1421-1425.  DOI: 10.11772/j.issn.1001-9081.2016.05.1421
    To improve speaker recognition accuracy, multiple feature parameters should be used together. Since each dimension of a combined feature contributes differently to the recognition result and weighting them equally may be suboptimal, a feature extraction method based on the Fisher criterion was proposed, combining the Mel-Frequency Cepstrum Coefficient (MFCC), the Linear Prediction Mel-Frequency Cepstrum Coefficient (LPMFCC) and the Teager Energy Operator Cepstrum Coefficient (TEOCC). Firstly, MFCC, LPMFCC and TEOCC parameters were extracted from the speech signal; then the Fisher ratio of each MFCC and LPMFCC dimension was calculated, and six components of each were selected by the Fisher criterion and combined with the TEOCC parameter into a mixed feature, which was used for speaker recognition on the TIMIT acoustic-phonetic continuous speech corpus and the NOISEX-92 noise library. The simulation results show that, using a Gaussian Mixture Model (GMM) and a Back Propagation (BP) neural network, the average recognition rate of the proposed method exceeds MFCC, LPMFCC, MFCC+LPMFCC, a Fisher-criterion MFCC extraction method, and a Principal Component Analysis (PCA) based extraction method by 21.65, 18.39, 15.61, 15.01 and 22.70 percentage points respectively on the clean speech database, and by 15.15, 10.81, 8.69, 7.64 and 17.76 percentage points under 30 dB noise. The results show that the mixed feature effectively improves the recognition rate and is more robust.
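    The per-dimension Fisher ratio (between-class variance over within-class variance) and the top-six selection can be sketched as follows; the concatenation at the end mirrors the abstract's mixed feature, with array names being illustrative:

```python
import numpy as np

def fisher_ratio(X, y):
    """X: (n_frames, n_dims) features, y: speaker labels; one ratio per dimension."""
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_top(X, y, k=6):
    idx = np.argsort(fisher_ratio(X, y))[::-1][:k]    # k most discriminative dims
    return X[:, idx], idx

# mixed = np.hstack([mfcc_top6, lpmfcc_top6, teocc])  # combined feature, as in the abstract
```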
    Audio feature extraction algorithm based on weight tensor of sparse representation
    LIN Jing, YANG Jichen, ZHANG Xueyuan, LI Xinchao
    2016, 36(5):  1426-1429.  DOI: 10.11772/j.issn.1001-9081.2016.05.1426
    A joint time-frequency audio feature extraction algorithm based on a Gabor dictionary and a sparse-representation weight tensor was proposed to describe non-stationary audio signals. Conventional sparse representation encodes the audio signal over a predefined dictionary as a sparse weight vector. Here, the elements of the weight vector were reorganized into a tensor whose orders characterize the time, frequency and duration properties of the signal respectively, making it a joint time-frequency-duration representation. Through tensor decomposition, the frequency factors and duration factors were concatenated as the audio feature. To solve the over-fitting problem of sparse tensor factorization, a factorization algorithm with an automatically adjusted penalty coefficient was proposed. The experimental results show that in 15-category audio classification the proposed feature outperforms the Mel-Frequency Cepstrum Coefficient (MFCC) feature, the MFCC+MP feature (MFCC concatenated with Matching Pursuit (MP) features), and the nonuniform scale-frequency map feature by 28.0%, 19.8% and 6.7% respectively.
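    To make the reorganization concrete, the sketch below pools Gabor-atom weights into a (time, frequency, duration) tensor and takes leading singular vectors of the frequency and duration unfoldings as the feature; this SVD step is a simple stand-in for the paper's penalized factorization, and the bin counts are assumed:

```python
import numpy as np

def weight_tensor(atoms, n_t=20, n_f=40, n_s=8):
    """atoms: iterable of (t_bin, f_bin, s_bin, weight) from a Gabor sparse code."""
    T = np.zeros((n_t, n_f, n_s))
    for t, f, s, w in atoms:
        T[t, f, s] += abs(w)
    return T

def tensor_feature(T, r=3):
    Tf = T.transpose(1, 0, 2).reshape(T.shape[1], -1)   # frequency unfolding
    Ts = T.transpose(2, 0, 1).reshape(T.shape[2], -1)   # duration unfolding
    Uf = np.linalg.svd(Tf, full_matrices=False)[0][:, :r]
    Us = np.linalg.svd(Ts, full_matrices=False)[0][:, :r]
    return np.concatenate([Uf.ravel(), Us.ravel()])     # concatenated factors
```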
    Wear-leveling algorithm for NAND flash memory based on separation of hot and cold logic pages
    WANG Jinyang, YAN Hua
    2016, 36(5):  1430-1433.  DOI: 10.11772/j.issn.1001-9081.2016.05.1430
    To address the problems of existing garbage collection algorithms for NAND flash memory, an efficient algorithm called AWGC (Age With Garbage Collection) was presented to improve wear leveling. A hybrid policy combining the age of invalid pages, the erase count of physical blocks and the update frequency of physical blocks was defined to select the block to be reclaimed. Meanwhile, a new method for calculating the hotness of logical pages was derived, and the valid pages in the reclaimed block were separated into cold and hot pages. Compared with the GReedy (GR), Cost-Benefit (CB), Cost-Age-Time (CAT) and File-aware Garbage Collection (FaGC) algorithms, the proposed algorithm not only achieves good wear leveling but also significantly reduces the total numbers of erase and copy operations.
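    A hedged sketch of an AWGC-style victim score follows; the weights and the exact combination of age, wear and update frequency are assumptions for illustration, not the paper's formula:

```python
import time

class Block:
    def __init__(self):
        self.erase_count = 0
        self.invalid_pages = []        # (page_id, invalidated_at) pairs
        self.updates = 0               # how often pages in this block were rewritten

def victim_score(b, now=None, w_age=1.0, w_wear=1.0, w_hot=1.0):
    now = now or time.time()
    if not b.invalid_pages:
        return float("-inf")           # nothing to reclaim here
    age = sum(now - t for _, t in b.invalid_pages) / len(b.invalid_pages)
    return (w_age * age                # older garbage -> better victim
            - w_wear * b.erase_count   # heavily erased blocks are spared
            - w_hot * b.updates)       # hot blocks will be dirtied again soon

def pick_victim(blocks):
    return max(blocks, key=victim_score)
```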
    Method for processing power quality data based on selective reloading
    ZHAO Xia, LIN Tianhua, MA Suxia, QI Linhai
    2016, 36(5):  1434-1438.  DOI: 10.11772/j.issn.1001-9081.2016.05.1434
    The volume of monitoring data in power quality monitoring systems grows quickly. A new method based on partial storage and selective reloading was proposed to solve the repetitive sorting and redundant processing of traditional methods. When the daily index is calculated, the daily data are sorted and only a portion is stored, according to a saving rate. When a weekly (monthly, seasonal or yearly) index is calculated, the partially saved daily data of the period are merged by a multi-way merge algorithm to compute a provisional 95th percentile (CP95), which determines which days' data must be reloaded. The reloaded data and the other required data are then re-sorted to compute the steady-state index. Since sorting only touches the partially stored daily data plus a small amount of reloaded data, the redundant processing of the traditional method is effectively avoided. The experimental results show that, compared with the traditional method, efficiency increases more than 3-fold when the daily sample count is relatively small, and more than 15-fold when it exceeds 2880; the larger the sample volume, the more pronounced the improvement. The method has been applied successfully in the monitoring systems of Shanxi, Hebei and other provinces, and practice has proved it correct and effective.
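    The selective-reloading logic can be sketched as follows: each day keeps only its largest samples, the merged tails give a provisional CP95 that is a lower bound on the true one, and a day is reloaded only if its discarded portion could still matter. The 10% saving rate is an assumed value and reload_day is a hypothetical callback:

```python
import numpy as np

SAVE_RATE = 0.10      # assumed; must exceed the 5% tail so CP95 stays inside it

def store_day(samples):
    """Sort the day's data and keep only the largest SAVE_RATE fraction."""
    s = np.sort(samples)
    keep = max(1, int(len(s) * SAVE_RATE))
    return {"tail": s[-keep:], "n": len(s)}

def period_cp95(days, reload_day):
    """days: store_day() results; reload_day(i) re-reads day i's full data."""
    n_total = sum(d["n"] for d in days)
    m = n_total - int(np.ceil(0.95 * n_total)) + 1       # CP95 is the m-th largest
    merged = np.sort(np.concatenate([d["tail"] for d in days]))
    provisional = merged[-m]                             # a lower bound on CP95
    pool = []
    for i, d in enumerate(days):
        # if even the smallest kept value exceeds the provisional percentile,
        # the day's discarded part might matter, so reload its full data
        pool.append(reload_day(i) if d["tail"][0] > provisional else d["tail"])
    return np.sort(np.concatenate(pool))[-m]             # exact CP95 (ties aside)
```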
    Group consensus of heterogeneous multi-Agent systems with time delay
    LI Xiangjun, LIU Chenglin, LIU Fei
    2016, 36(5):  1439-1444.  DOI: 10.11772/j.issn.1001-9081.2016.05.1439
    For the stationary group consensus problem of heterogeneous multi-Agent systems composed of first-order and second-order Agents, two stationary group consensus protocols were proposed, for a fixed interconnection topology and for switching interconnection topologies respectively. By constructing Lyapunov-Krasovskii functionals, sufficient conditions formulated as linear matrix inequalities were obtained for the system to converge asymptotically to group consensus under the protocol with an identical time-varying communication delay. Finally, the simulation results show that the heterogeneous multi-Agent system with time delay converges asymptotically to group consensus under these conditions.
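    A hedged toy simulation of the setting (not the paper's protocol or its LMI conditions) is sketched below: two first-order and two second-order agents, a fixed signed topology whose inter-group in-degrees are balanced so each group can settle to its own value, and a constant delay. The gains, graph and delay are assumed:

```python
import numpy as np

def simulate(steps=20000, dt=0.005, delay=20):
    x = np.random.randn(4)              # agents 0,1 first-order; 2,3 second-order
    v = np.random.randn(2)              # velocities of the second-order agents
    # signed weights with balanced inter-group in-degrees (a common
    # group-consensus condition), so group states can differ at equilibrium
    A = np.array([[0, 1, 1, -1],
                  [1, 0, -1, 1],
                  [1, -1, 0, 1],
                  [-1, 1, 1, 0]], float)
    hist = [x.copy() for _ in range(delay + 1)]
    for _ in range(steps):
        xd = hist[0]                                    # states delayed by `delay` steps
        u = A @ xd - A.sum(axis=1) * x                  # diffusive coupling
        x[:2] += dt * u[:2]                             # first-order dynamics
        v += dt * (u[2:] - 2.0 * v)                     # damped second-order dynamics
        x[2:] += dt * v
        hist.append(x.copy()); hist.pop(0)
    return x, v    # expect x[0]~x[1], x[2]~x[3], v~0 for a small enough delay
```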
    Design of measurement and control system for car body-in-white detection
    LI Zhenghui, GUO Yin, ZHANG Hongbin, ZHANG Bin
    2016, 36(5):  1445-1449.  DOI: 10.11772/j.issn.1001-9081.2016.05.1445
    To achieve unified management and remote communication of the measuring equipment in a car body-in-white online visual inspection station and to improve working efficiency, a measurement and control system for body-in-white detection was designed. With an STM32F407 as the core, μC/OS-Ⅱ and LwIP were ported to build a Web server, which realized remote communication. Multithreaded tasks were established to exchange information between the serial port and the network port. The data security issues along the data's route and the packet loss observed during transmission were analyzed, and a solution was proposed. A 2D normalized cross-correlation method was used to realize 2D image positioning and to increase processing speed. The experimental results show that the system provides remote communication, reduces cost, and improves the efficiency of equipment management.
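    The 2D normalized cross-correlation step can be illustrated with a plain NumPy sketch, written for clarity rather than the speed an embedded implementation would need:

```python
import numpy as np

def ncc_locate(image, tmpl):
    """Exhaustive normalized cross-correlation; returns best top-left corner."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tn = np.sqrt((t ** 2).sum())
    best, pos = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw] - image[y:y + th, x:x + tw].mean()
            denom = tn * np.sqrt((w ** 2).sum())
            score = (w * t).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, pos = score, (x, y)
    return pos, best              # match position and correlation score
```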
    Design and implementation of online real-time mapping system based on image stereo vision
    HOU Yifan, WANG Dong, XING Shuai, XU Qing, GE Zhongxiao
    2016, 36(5):  1450-1454.  DOI: 10.11772/j.issn.1001-9081.2016.05.1450
    To meet the real-time demand of a deep space probe mapping the topography of a celestial body, an online mobile real-time mapping prototype system based on stereo vision was designed and implemented. Stereo images of the target are acquired in real time by a stereo camera; each group of stereo images is used to reconstruct the local surface shape, and the local reconstructions are then connected to generate the complete surface topography model of the target. The feasibility, accuracy and speed of the system were verified by experiment, and the results show that it meets the demand of real-time mapping.
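    The per-view reconstruction step rests on standard stereo triangulation, which the following hedged sketch illustrates with OpenCV block matching and depth = f·B/d; the focal length and baseline are assumed rectified-camera parameters, not values from the paper:

```python
import cv2
import numpy as np

F_PX, BASELINE_M = 700.0, 0.12           # assumed focal length (px) and baseline (m)

def depth_map(left_gray, right_gray):
    """left_gray/right_gray: rectified 8-bit single-channel stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.where(disp > 0, F_PX * BASELINE_M / np.maximum(disp, 1e-6), 0.0)
    return depth                          # metres; 0 where disparity is invalid
```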
    Extended Kalman filtering algorithm based on polynomial fitting
    WU Hanzhou, SONG Weidong, XU Jingqing
    2016, 36(5):  1455-1457.  DOI: 10.11772/j.issn.1001-9081.2016.05.1455
    The data acquired by the satellite positioning receiver on a trajectory correction projectile must be filtered in real time to predict the impact point. Traditional filtering is time-consuming and cannot meet the real-time requirement, so an extended Kalman filtering algorithm based on polynomial fitting was proposed. Within each time interval, the projectile flight data were replaced by fitted interpolation data, so the filtering frequency could be reduced. Simulation results show that the computation time of the proposed method is reduced by 7/8 compared with the traditional extended Kalman filter without loss of filtering precision, and the real-time performance is improved. The method provides a useful reference for key technologies of trajectory correction projectiles; it can also be applied to other filtering algorithms and is highly portable.
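    A hedged sketch of the rate-reduction idea follows: run the Kalman update only every DECIM-th sample and fill the gaps by fitting a polynomial to the filtered outputs. The filter here is reduced to a 1-D toy model with assumed noise levels, standing in for the paper's full extended Kalman filter:

```python
import numpy as np

DECIM, DEG = 8, 2      # assumed decimation factor and polynomial degree

def filter_decimated(z, q=1e-3, r=1e-1):
    """z: raw measurement sequence; returns an interpolated full-rate track."""
    x, p, out_t, out_x = z[0], 1.0, [], []
    for k in range(0, len(z), DECIM):     # Kalman-style update at 1/DECIM rate
        p += q                            # predict
        g = p / (p + r)                   # gain
        x += g * (z[k] - x)               # correct
        p *= 1 - g
        out_t.append(k); out_x.append(x)
    coeffs = np.polyfit(out_t, out_x, DEG)          # fit the filtered points
    return np.polyval(coeffs, np.arange(len(z)))    # polynomial interpolation
```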
    Prediction of airport energy demand based on improved fuzzy support vector regression
    WANG Kun, YUAN Xiaoyang, WANG Li
    2016, 36(5):  1458-1463.  DOI: 10.11772/j.issn.1001-9081.2016.05.1458
    Since outliers interfere with the analysis and prediction of airport energy data, a prediction model based on improved Fuzzy Support Vector Regression (FSVR) was established for airport energy demand. Firstly, a fuzzy statistical method was applied to the test sample sets, the parameters and the model outputs, from which a basic membership function form consistent with the data distribution was derived. Secondly, the membership function was re-learned against expert experience: the parameters a and b of the normal membership function, the boundary parameters of the semi-trapezoidal membership function and the parameters p and d of the triangular membership function were gradually refined, so that outliers detrimental to data mining were eliminated or down-weighted while key points were retained. Finally, a prediction model was built with the Support Vector Regression (SVR) algorithm and its feasibility was verified. The experimental results show that, compared with a Back Propagation (BP) neural network, the prediction accuracy of FSVR increases by 2.66% and the outlier recognition rate by 3.72%.
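    One common way to realize fuzzy SVR, sketched below under assumptions, is to use the membership value of each sample as its training weight; here a Gaussian ("normal") membership function plays that role, with a and b assumed to act as centre and spread, and the weights passed to scikit-learn's SVR via sample_weight:

```python
import numpy as np
from sklearn.svm import SVR

def normal_membership(y, a=None, b=None):
    """Gaussian membership: samples far from the centre (outliers) get low weight."""
    a = y.mean() if a is None else a            # centre parameter
    b = y.std() if b is None else b             # spread parameter
    return np.exp(-((y - a) / (b + 1e-12)) ** 2)

def fit_fsvr(X, y):
    w = normal_membership(y)                    # fuzzy weights per sample
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
    model.fit(X, y, sample_weight=w)            # down-weights likely outliers
    return model
```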
    Step dynamic auto-regression kernel principal component analysis and its application in fault diagnosis
    ZHANG Minlong, WANG Tao, WANG Xuping, CHANG Hongwei, WANG Fang
    2016, 36(5):  1464-1468.  DOI: 10.11772/j.issn.1001-9081.2016.05.1464
    Moving-window adaptive Kernel Principal Component Analysis (KPCA) suffers from over-fitting and missed detections when dealing with sensitive parameters or slow degradation, so a step dynamic auto-regression KPCA was proposed. Firstly, the initial model was built step by step from a dynamic data matrix. Then an exponential weighting rule was introduced for processing real-time data, and the model was updated on the basis of moving-window adaptive KPCA. Finally, the algorithm complexity was analyzed and concrete steps were given. Simulation data were used to analyze the influence of the decomposition coefficient and the weighting factor. The results show that, compared with moving-window adaptive KPCA, the efficiency of the proposed algorithm improves by nearly 90% and the number of false positives is almost zero with suitable parameters; its adaptivity to various dynamic problems can also be controlled by adjusting the weighting factor. The algorithm was applied to experimental data from compressor surge and bearing faults, verifying its ability to handle sensitive parameters and slow degradation.
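    A hedged sketch of exponentially weighted moving-window KPCA follows: older samples in the window are down-weighted in the kernel matrix, and a simple T²-like statistic monitors new samples. The window size, RBF width, forgetting factor and component count are assumed values, and the test-vector centring is simplified:

```python
import numpy as np

WIN, GAMMA, LAM, N_PC = 100, 0.5, 0.98, 3      # assumed values

def rbf(A, B):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-GAMMA * d)

def kpca_stat(window, x_new):
    """window: (n, d) recent samples; x_new: (d,) sample to monitor."""
    w = LAM ** np.arange(len(window))[::-1]            # older samples weigh less
    K = rbf(window, window) * np.outer(np.sqrt(w), np.sqrt(w))
    n = len(window)
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                                     # centre the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    alphas = vecs[:, -N_PC:] / np.sqrt(np.maximum(vals[-N_PC:], 1e-12))
    k = rbf(window, x_new[None, :]).ravel() * np.sqrt(w)
    t = alphas.T @ (k - k.mean())                      # scores (simplified centring)
    return (t ** 2).sum()                              # T^2-like monitoring statistic
```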
    Synchronization estimation algorithm for attitude algorithm and external force acceleration
    MENG Tangyu, PU Jiantao, FANG Jianjun, LIANG Lanzhen
    2016, 36(5):  1469-1474.  DOI: 10.11772/j.issn.1001-9081.2016.05.1469
    Aiming at the mutual interference between attitude computation and external force acceleration estimation in inertial navigation systems, a new method based on quaternions and the extended Kalman filter was proposed. Firstly, the accelerometer data were corrected with the estimated external force acceleration to obtain an accurate reversed gravity vector, which was combined with the geomagnetic field vector in a gradient descent algorithm to compute the orientation quaternion. Secondly, an extended Kalman filter model was constructed to propagate the orientation quaternion and the external force acceleration, yielding their predicted values. Finally, the measured quaternion and acceleration data were fused with the predictions by Kalman filtering, giving accurate estimates of the orientation quaternion and the three-axis external force acceleration in the reference frame. The experimental results show that the method converges quickly and accurately recovers both the attitude and the external force acceleration, with a Euler angle error of ±1.95° and an acceleration error of ±0.12 m/s². The method effectively suppresses the influence of external force acceleration on attitude computation and estimates the external force accurately.
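    The gradient descent attitude step can be sketched as one iteration that nudges the quaternion so its predicted gravity direction matches the external-force-corrected accelerometer reading; the step gain is an assumed value and the numerical Jacobian is used for brevity (an analytic one would normally be preferred):

```python
import numpy as np

BETA = 0.1    # assumed gradient step gain

def grav_from_quat(q):
    """Gravity direction in the body frame predicted by unit quaternion q=(w,x,y,z)."""
    w, x, y, z = q
    return np.array([2 * (x * z - w * y), 2 * (w * x + y * z), w * w - x * x - y * y + z * z])

def gd_step(q, acc, a_ext_est):
    """One correction step; acc is the accelerometer reading and a_ext_est
    the current external force estimate being fed back, as in the abstract."""
    a = acc - a_ext_est                          # remove estimated external force
    a = a / np.linalg.norm(a)                    # measured (reversed) gravity direction
    f = grav_from_quat(q) - a                    # residual to drive to zero
    eps, grad = 1e-6, np.zeros(4)
    for i in range(4):                           # numerical Jacobian-transpose product
        dq = q.copy(); dq[i] += eps
        grad[i] = ((grav_from_quat(dq) - grav_from_quat(q)) / eps).dot(f)
    q = q - BETA * grad / (np.linalg.norm(grad) + 1e-12)
    return q / np.linalg.norm(q)                 # renormalize the quaternion
```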