
Table of Contents

    10 May 2015, Volume 35 Issue 5
    Delay-aware algorithm of cross-layer design for device-to-device communication based on max-weighted queue
    YU Shengsheng, GE Wancheng, GUO Aihuang
    2015, 35(5):  1205-1208.  DOI: 10.11772/j.issn.1001-9081.2015.05.1205
    Abstract | PDF (564KB)

    The Max Weighted Queue (MWQ) control policy, based on the theory of Lyapunov optimization, is a cross-layer control policy that achieves queue stability and optimal delay performance. To meet the real-time and delay-sensitive demands of Device-to-Device (D2D) communication services, a novel MWQ algorithm for D2D communication was proposed. The algorithm jointly considers the Channel State Information (CSI) of the PHY layer and the Queue State Information (QSI) of the MAC layer, takes maximum system throughput as the objective function, and dynamically controls the power of the D2D nodes. Compared with the fixed-power algorithm, the CSI-based algorithm and the QSI-based algorithm, the MWQ algorithm decreases the average delay by about 0.5 s when the average packet arrival rate exceeds 10 Mb/s, and requires 26 dB less power to achieve the same average delay. The MWQ algorithm thus achieves good performance and provides a reference for obtaining low latency in D2D communication.
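    The max-weight idea underlying the abstract above can be sketched in a few lines. This is a generic illustration of max-weight scheduling (serve the queue with the largest backlog × rate product), not the paper's exact cross-layer power-control algorithm; the arrival rate, channel rates and queue count are all made-up parameters:

```python
import random

def max_weight_schedule(queues, rates):
    """Pick the link whose (queue length x channel rate) product is largest."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

def simulate(slots=1000, arrival=0.4, seed=1):
    """Toy slotted system: random arrivals, time-varying channel rates,
    and one link served per slot by the max-weight rule."""
    random.seed(seed)
    queues = [0, 0, 0]
    for _ in range(slots):
        for i in range(len(queues)):          # Bernoulli packet arrivals
            if random.random() < arrival:
                queues[i] += 1
        rates = [random.choice([1, 2]) for _ in queues]  # good/bad channel states
        served = max_weight_schedule(queues, rates)
        queues[served] = max(0, queues[served] - rates[served])
    return queues
```

    Under the max-weight rule the backlogs stay bounded whenever the arrival rates are inside the capacity region, which is the stability property the paper builds on.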

    Hybrid trajectory compression algorithm based on multiple spatiotemporal characteristics
    WU Jiagao, QIAN Keyu, LIU Min, LIU Linfeng
    2015, 35(5):  1209-1212.  DOI: 10.11772/j.issn.1001-9081.2015.05.1209
    Abstract | PDF (593KB)

    To reduce the storage space of trajectory data and improve the speed of data analysis and transmission in the Global Positioning System (GPS), a hybrid trajectory compression algorithm based on multiple spatiotemporal characteristics was proposed. On the one hand, a new online trajectory compression strategy based on multiple spatiotemporal characteristics was adopted to choose characteristic points more accurately, using the position, direction and speed information of each GPS point. On the other hand, a hybrid compression strategy combining online compression with batched compression was used, in which the Douglas-Peucker batched compression algorithm performs the second compression pass. The experimental results show that the compression error of the new online strategy decreases significantly, although its compression ratio falls slightly compared with the existing spatiotemporal compression algorithm. By choosing an appropriate batching cycle time, both the compression ratio and the compression error of the proposed algorithm improve over the existing spatiotemporal compression algorithm.
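    The Douglas-Peucker pass used for batched compression is a standard algorithm; a minimal 2-D sketch (only the batched step, not the paper's full online/batched hybrid with direction and speed information):

```python
def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:
        return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, eps):
    """Keep the endpoints; recursively keep the farthest interior point if it
    deviates more than eps from the chord, otherwise drop all interior points."""
    if len(points) < 3:
        return list(points)
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right      # avoid duplicating the split point
```

    A larger `eps` gives a higher compression ratio at the cost of a larger compression error, which is exactly the trade-off the abstract reports.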

    Community structure detection based on node similarity in complex networks
    LIANG Zongwen, YANG Fan, LI Jianping
    2015, 35(5):  1213-1217.  DOI: 10.11772/j.issn.1001-9081.2015.05.1213
    Abstract | PDF (877KB)

    Concerning the complexity of finding community structure in complex networks, a community discovery algorithm based on node similarity was proposed. The basic idea of the algorithm is that node pairs with higher similarity are more likely to belong to the same community. Integrating local and global similarity, the algorithm constructs a similarity matrix in which each element represents the similarity of a pair of nodes, and then merges the most similar nodes into the same community. The experimental results show that the proposed algorithm obtains the correct community structure of networks and achieves better performance in community detection than the Label Propagation Algorithm (LPA), GN (Girvan-Newman) and CNM (Clauset-Newman-Moore) algorithms.
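    As a rough illustration of similarity-based merging, the sketch below uses plain Jaccard neighborhood similarity and a fixed merge threshold; both are simplifying assumptions, since the paper integrates local and global similarity rather than a single local measure:

```python
def jaccard(adj, u, v):
    """Neighborhood similarity of nodes u and v (each node counts as its own neighbor)."""
    nu, nv = adj[u] | {u}, adj[v] | {v}
    return len(nu & nv) / len(nu | nv)

def communities(adj, threshold=0.5):
    """Merge every node pair whose similarity reaches the threshold
    into one community, using union-find to track the groups."""
    parent = {v: v for v in adj}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    nodes = list(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if jaccard(adj, u, v) >= threshold:
                parent[find(u)] = find(v)
    groups = {}
    for v in adj:
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())
```

    On two triangles joined by a single edge, this recovers the two triangles as separate communities, since cross-edge node pairs share few neighbors.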

    Novel channel-adaptive coded cooperation scheme
    QIAO Ying, HE Yucheng, ZHOU Lin
    2015, 35(5):  1218-1223.  DOI: 10.11772/j.issn.1001-9081.2015.05.1218
    Abstract | PDF (871KB)

    To overcome the severe performance loss of conventional coded cooperation schemes under dynamic channel conditions in mobility scenarios, a novel adaptive coded cooperation scheme was proposed by using rate-compatible Low-Density Parity Check (LDPC) codes in combination with a Hybrid Automatic Repeat reQuest (HARQ) protocol. It was assumed that channel state information changed during each transmission. By automatic retransmission of unequal length incremental redundancy, the equivalent code rates at the cooperative and destination nodes could be nonlinearly adjusted with channel conditions. The expressions for outage probability and throughput were derived for evaluating the system performance of the proposed scheme, and theoretical analysis and simulation results were presented. These results show that, compared with conventional schemes and equal-length retransmission schemes, the proposed scheme with properly designed compatible rates can effectively reduce the system outage probability, increase the throughput, and improve the transmission reliability of cooperative communications in mobility scenarios.

    Survivability analysis of interdependent network with incomplete information
    JIANG Yuxiang, LYU Chen, YU Hongfang
    2015, 35(5):  1224-1229.  DOI: 10.11772/j.issn.1001-9081.2015.05.1224
    Abstract | PDF (1051KB)

    This paper proposed a method for analyzing the survivability of interdependent networks with incomplete information. Firstly, definitions of structure information and attack information were given, and a novel model of an interdependent network with incomplete attack information was built by treating the acquisition of attack information as unequal-probability sampling, parameterized by information breadth and information accuracy, under the condition that the structure information is known. Secondly, with the help of generating functions and percolation theory, survivability analysis models of the interdependent network under random incomplete information and preferential incomplete information were derived. Finally, the scale-free network was taken as an example for further simulation. The results show that both the information breadth and information accuracy parameters have a tremendous impact on the percolation threshold of the interdependent network, with the accuracy parameter having the greater impact. Information on a small number of high-accuracy nodes yields the same survivability performance as information on a large number of low-accuracy nodes, and knowing a small number of the most important nodes can reduce the survivability of the interdependent network to a large extent. The interdependent network has far lower survivability than a single network, even under incomplete attack information.

    QoS-aware joint allocation of sensing time and resource in cognitive radio networks
    LI Li
    2015, 35(5):  1230-1233.  DOI: 10.11772/j.issn.1001-9081.2015.05.1230
    Abstract | PDF (740KB)

    In order to maximize the throughput of Cognitive Radio Users (CRU) while guaranteeing their Quality of Service (QoS) requirements, the joint optimization of sensing time and resource allocation was investigated, and a joint sensing-time and resource allocation algorithm was proposed. In multichannel cognitive radio networks, both spectrum sensing and resource allocation influence the network throughput. The original joint optimization problem, which considers both issues, was divided into two sub-problems: transmit power and channel allocation with fixed sensing time, and a one-dimensional exhaustive search for the optimal sensing time with a fixed resource allocation strategy. Using the proposed algorithm, the optimal sensing time is obtained by exhaustive search and the optimal resource allocation strategy by the subgradient method. Simulation results show that the proposed algorithm maximizes the throughput of the cognitive radio network while guaranteeing the QoS requirements of the CRUs.
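    The one-dimensional exhaustive search over the sensing time can be illustrated with the classic sensing-throughput trade-off model for an energy detector (longer sensing lowers the false-alarm probability but leaves less frame time for transmission). The model form and every numeric parameter below are generic textbook assumptions, not values from the paper:

```python
import math

def throughput(tau, frame=0.1, snr_db=-15, fs=6e6, c=6.0):
    """Toy sensing-throughput trade-off for a single channel:
    R(tau) = (frame - tau)/frame * C * (1 - Pf(tau))."""
    snr = 10 ** (snr_db / 10)
    # energy-detector false-alarm probability at target detection prob. 0.9
    q_inv_pd = -1.2816                      # Q^-1(0.9)
    arg = q_inv_pd * math.sqrt(2 * snr + 1) + math.sqrt(tau * fs) * snr
    pf = 0.5 * math.erfc(arg / math.sqrt(2))  # Q(x) = erfc(x/sqrt(2))/2
    return (frame - tau) / frame * c * (1 - pf)

def best_sensing_time(steps=1000, frame=0.1):
    """One-dimensional exhaustive search over the sensing time grid."""
    taus = [frame * i / steps for i in range(1, steps)]
    return max(taus, key=throughput)
```

    The optimum lies strictly inside the frame: sensing too little inflates the false-alarm rate, sensing too long wastes transmission time.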

    Outage probability of hybrid satellite terrestrial cooperative system with best selection
    CHEN Liuwei, LIANG Jun, ZHU Wei, ZHANG Hengyang, WANG Yi
    2015, 35(5):  1234-1237.  DOI: 10.11772/j.issn.1001-9081.2015.05.1234
    Abstract | PDF (528KB)

    Focusing on the fading and shadowing effects in the satellite channel, a Hybrid Satellite-Terrestrial Cooperative System (HSTCS) was presented, and the closed-form expression of its outage probability was evaluated using the Land Mobile Satellite (LMS) channel. A selective Decode-and-Forward (DF) scheme was implemented between the source node (the satellite) and the destination node (a terrestrial station), and the signals from the satellite and the terrestrial relay were combined at the destination. The analytical expression of the outage probability was verified by Matlab simulation. The results show that, compared with direct transmission, the system improves the outage performance through diversity gain.

    Node localization of wireless sensor networks based on hybrid bat-quasi-Newton algorithm
    YU Quan, SUN Shunyuan, XU Baoguo, CHEN Shujuan, HUANG Yanli
    2015, 35(5):  1238-1241.  DOI: 10.11772/j.issn.1001-9081.2015.05.1238
    Abstract | PDF (628KB)

    Concerning the low positioning accuracy of the least-squares method in the third stage of the DV-Hop algorithm, a localization algorithm fusing a hybrid bat-quasi-Newton algorithm with DV-Hop was proposed. First, the Bat Algorithm (BA) was improved in two aspects: the random vector β was adjusted adaptively according to the bats' fitness, giving the pulse frequency adaptive ability; and the bats were guided by the average position of all the best individuals found before the current iteration, making the speed variable. Then, in the third stage of DV-Hop, the improved bat algorithm was used to estimate the node location, and the quasi-Newton algorithm continued the search using that estimate as the initial point. The simulation results show that, compared with the traditional DV-Hop algorithm and the improved DV-Hop based on the bat algorithm (BADV-Hop), the positioning precision of the proposed algorithm increases by about 16.5% and 5.18% respectively, with better stability, making it suitable for situations requiring high positioning precision and stability.
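    The third-stage refinement can be caricatured as a stochastic search over candidate positions that minimizes the squared ranging error. The toy below replaces both the improved bat algorithm and the quasi-Newton polishing with a simple greedy random search under a shrinking step, so it only conveys the shape of the idea; all step/decay constants are arbitrary:

```python
import random

def ranging_error(p, anchors, dists):
    """Sum of squared differences between estimated and measured ranges."""
    return sum((((p[0] - ax) ** 2 + (p[1] - ay) ** 2) ** 0.5 - d) ** 2
               for (ax, ay), d in zip(anchors, dists))

def refine_position(anchors, dists, init, iters=200, step=5.0, seed=0):
    """Greedy stochastic refinement of an initial position estimate:
    propose a random move, keep it only if the ranging error drops,
    and shrink the step size (a stand-in for loudness/pulse-rate decay)."""
    rnd = random.Random(seed)
    best = (init[0], init[1])
    best_c = ranging_error(best, anchors, dists)
    for _ in range(iters):
        cand = (best[0] + rnd.uniform(-step, step),
                best[1] + rnd.uniform(-step, step))
        c = ranging_error(cand, anchors, dists)
        if c < best_c:
            best, best_c = cand, c
        step *= 0.99
    return best
```

    Starting from a crude DV-Hop estimate, any accepted move strictly decreases the objective, so the refined position is never worse than the starting one.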

    Sensor network queue management algorithm based on duty cycle control and delay guarantee
    ZENG Zhendong, CHEN Xiao, SUN Bo, WU Shuxin
    2015, 35(5):  1242-1245.  DOI: 10.11772/j.issn.1001-9081.2015.05.1242
    Abstract | PDF (775KB)

    In order to meet the delay requirements of Wireless Sensor Networks (WSN) while minimizing power consumption, a sensor network queue management algorithm based on duty cycle control and delay guarantees (DQC) was proposed. A two-way controller was used to control the node duty cycle and queue thresholds under changing network conditions. The controller provides a delay notification mechanism that determines an appropriate sleep time and queue length for each node according to the application requirements and the time-varying delay requirements. The stability of the two-way controller was derived based on control theory, yielding a condition on the control parameters that guarantees an asymptotically stable steady state. Simulation results show that, compared with the algorithm based on adaptive duty cycle control and the queue-based congestion management mechanism for performance improvement, the proposed algorithm shortens the end-to-end delay by 38.8% and 36.0% and reduces the average power consumption by 46.5 mW and 27.5 mW respectively, showing better performance in both delay control and energy efficiency.

    New algorithm for problem of minimum cut/maximum flow based on augmenting path restoration
    ZHAO Lifeng, YAN Ziheng
    2015, 35(5):  1246-1249.  DOI: 10.11772/j.issn.1001-9081.2015.05.1246
    Abstract | PDF (596KB)

    The NW (Newman-Watts) small-world network and the BA (Barabasi-Albert) scale-free network are two kinds of networks common in reality, and both have a high probability of containing multiple paths between any pair of vertices. Simply abandoning saturated augmenting chains and searching for new ones from scratch is therefore inefficient. This motivated a new high-performance minimum cut/maximum flow algorithm based on augmenting path restoration for these two kinds of networks. The new algorithm constantly seeks vertices, following greedy heuristics, to restore the saturated augmenting paths generated by adjusting flow on shortest paths. Experimental comparisons on the NW small-world network and the BA scale-free network show that the new algorithm is several times faster than the Ford-Fulkerson algorithm and uses half as much RAM as the Dinic algorithm; as a consequence, it is capable of handling growing telecommunication and traffic networks.
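    For contrast, the classic shortest-augmenting-path baseline the new algorithm is compared against (Ford-Fulkerson with BFS, i.e. Edmonds-Karp) can be written compactly over an adjacency-matrix capacity graph:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly find a shortest augmenting path by BFS
    in the residual graph and push the bottleneck flow along it."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: done
            return total
        bottleneck, v = float('inf'), t
        while v != s:                # find the path bottleneck
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                # push flow; reverse arcs allow cancellation
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```

    Each augmentation here rebuilds the BFS tree from scratch; the paper's restoration idea is precisely to avoid redoing that work after a path saturates.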

    Structure evolution based design method for infinite impulse response digital filters
    MAO Junyong, CHEN Lijia, LIU Mingguo
    2015, 35(5):  1250-1254.  DOI: 10.11772/j.issn.1001-9081.2015.05.1250
    Abstract | PDF (704KB)
    Focused on the issue that, with traditional design methods, the transfer function of Infinite Impulse Response (IIR) digital filters is not optimal over the entire design process, a structure-evolution based design method for IIR digital filters using a Genetic Algorithm (GA) was proposed. The method evolves the filter structure directly, without first preparing a transfer function. Firstly, Structure Generation Instruction Sequences (SGIS) were generated randomly; an SGIS both controls the process of structure generation and represents the resulting structure. Then, the SGIS were coded and treated as chromosomes. Finally, GA was used to optimize these chromosomes to obtain the best filter. In comparison experiments with the traditional coefficient-evolution based GA design method for IIR digital filters, the pass-band ripple of the proposed algorithm decreased by 40.58%, the transition-zone width decreased by 87.62%, and the minimum stop-band attenuation declined by 9.22%.
    null
    HE Hua, LIN Chuang, ZHAO Zenghua, PANG Shanchen
    2015, 35(5):  1255-1261.  DOI: 10.11772/j.issn.1001-9081.2015.05.1255
    Abstract | PDF (1124KB)

    null

    Communication access control method based on software defined networking for virtual machines in IaaS platforms
    HAN Zhenyang, CHEN Xingshu, HU Liang, CHEN Lin
    2015, 35(5):  1262-1266.  DOI: 10.11772/j.issn.1001-9081.2015.05.1262
    Abstract | PDF (770KB)

    Concerning the problem of network access control for Virtual Machines (VM) in cloud computing Infrastructure as a Service (IaaS) platforms, a communication access control method for VMs in IaaS platforms was proposed. The method, based on Software Defined Networking (SDN), customizes communication access control rules from Layer 2 to Layer 4. The experimental results show that the method can flexibly manage the communication access permissions of tenants' VMs and ensure the security of tenants' networks.

    Hadoop big data processing system model based on context-queue under Internet of things
    LI Min, NI Shaoquan, QIU Xiaoping, HUANG Qiang
    2015, 35(5):  1267-1272.  DOI: 10.11772/j.issn.1001-9081.2015.05.1267
    Abstract | PDF (911KB)

    In order to solve the low real-time response capability of heterogeneous big data processing in the Internet Of Things (IOT), data processing and persistence schemes based on Hadoop were analyzed, and a Hadoop big data processing system model based on "Context", named HDS (Hadoop big Data processing System), was proposed. The model uses the Hadoop framework to perform parallel data processing and persistence, with heterogeneous data abstracted as "Contexts", the unified objects processed in HDS. Definitions of "Context Distance" and "Context Neighborhood System (CNS)" were proposed based on the temporal-spatial characteristics of Contexts, and a "Context Queue (CQ)" was designed as auxiliary storage to overcome the low real-time response capability of the Hadoop framework. In particular, the optimization of task reorganization for client-requested CQs, based on the temporal and spatial characteristics of Contexts, was described in detail. Finally, taking the vehicle scheduling problem in petroleum products distribution as an example, the data processing performance and real-time response capability were tested by MapReduce distributed parallel computing experiments. The experimental results show that, compared with the ordinary computing system SDS (Single Data processing System), HDS not only offers obviously superior big data processing capability but also effectively overcomes the low real-time response of Hadoop: in a 10-server experimental environment, HDS outperforms SDS in data processing capability by a factor of more than 200, and the CQ assistance improves the real-time response capability of HDS by a factor of more than 270.

    Improved weighted centroid localization algorithm in narrow space
    LIU Yong, ZHANG Jinlong, ZHANG Yanbo, WANG Tao
    2015, 35(5):  1273-1275.  DOI: 10.11772/j.issn.1001-9081.2015.05.1273
    Abstract | PDF (627KB)

    Concerning problems such as the severe signal multipath effect and the low accuracy of sensor node positioning in narrow spaces, a new Weighted Centroid Localization (WCL) method based on the Received Signal Strength Indicator (RSSI) was proposed. The algorithm, intended for long and narrow strip spaces, dynamically acquires the path decline index from the RSSI and distances of neighboring beacon node signals, improving the environmental adaptability of RSSI-based distance detection. In addition, the algorithm improves the weight coefficients of the weighted centroid algorithm by introducing an environment-based correction factor, which improves the localization accuracy. Theoretical analysis and simulation results show that the algorithm is well adapted to narrow spaces: compared with the standard WCL algorithm, in roadway environments with widths of 3 m, 5 m, 8 m and 10 m and 10 beacon nodes, the positioning precision increases by 22.1%, 19.2%, 16.1% and 16.5% respectively, and the stability increases by 23.4%, 21.5%, 18.1% and 15.4% respectively.
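    A bare-bones weighted centroid with RSSI ranging can be sketched as follows. The log-distance path-loss constants (`a`, `n`) and the weight exponent `g` are illustrative assumptions, and the paper's dynamically acquired decline index and correction factor are omitted:

```python
def rssi_to_distance(rssi, a=-45.0, n=3.0):
    """Log-distance path-loss model: RSSI = a - 10*n*log10(d),
    where a is the RSSI at 1 m and n the environment decay index."""
    return 10 ** ((a - rssi) / (10 * n))

def weighted_centroid(beacons, rssis, g=1.0):
    """Closer beacons (stronger RSSI, smaller estimated distance)
    receive larger weights in the centroid estimate."""
    ws = [1.0 / rssi_to_distance(r) ** g for r in rssis]
    total = sum(ws)
    x = sum(w * bx for w, (bx, _) in zip(ws, beacons)) / total
    y = sum(w * by for w, (_, by) in zip(ws, beacons)) / total
    return x, y
```

    With equal RSSI values this degenerates to the plain centroid; a stronger signal from one beacon pulls the estimate toward it.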

    Metadata management mechanism of massive spatial data storage
    YANG Wenhui, LI Guoqiang, MIAO Fang
    2015, 35(5):  1276-1279.  DOI: 10.11772/j.issn.1001-9081.2015.05.1276
    Abstract | PDF (643KB)

    In order to manage the metadata of massive spatial data storage effectively, a distributed metadata server management structure based on consistent hashing was introduced, and on this basis a metadata wheeled backup strategy was proposed, which stores metadata on the hash nodes produced by executing the consistent hashing algorithm according to the data backup method, effectively alleviating the single-point and access-bottleneck problems of metadata management. Finally, by testing the wheeled backup strategy, the optimal number of metadata node backups was obtained. Compared with a single-point metadata server, the proposed strategy improves metadata safety, reduces access delay, and, combined with virtual nodes, improves the load balance of the distributed metadata servers.
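    A minimal consistent-hash ring with virtual nodes and successor-based replica placement, as a sketch of the structure described above. The `vnodes` count and MD5 hashing are arbitrary choices, and having `lookup` return ring successors is only one plausible reading of the wheeled backup strategy:

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Each metadata server is mapped to several virtual nodes on a ring;
    a key is stored on the first server clockwise from its hash, and the
    next distinct servers on the ring hold the backups."""

    def __init__(self, servers, vnodes=100):
        self.ring = []
        for s in servers:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{s}#{i}"), s))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def lookup(self, key, replicas=2):
        """Primary server plus (replicas - 1) distinct ring successors."""
        h = self._hash(key)
        idx = bisect_right(self.ring, (h,))   # first virtual node at/after h
        chosen, i = [], idx
        while len(chosen) < replicas:
            s = self.ring[i % len(self.ring)][1]
            if s not in chosen:
                chosen.append(s)
            i += 1
        return chosen
```

    Because only the keys between a departed server's virtual nodes and their predecessors move, adding or removing a metadata server remaps a small fraction of keys, which is what keeps the structure free of a single hot spot.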

    Parallel test task scheduling based on graph coloring theory and genetic-bee colony algorithm
    WU Yong, WANG Xue, ZHAO Huanyi
    2015, 35(5):  1280-1283.  DOI: 10.11772/j.issn.1001-9081.2015.05.1280
    Abstract | PDF (802KB)

    For the problem of parallel test task scheduling, a solution based on graph coloring theory and a genetic-bee colony algorithm was proposed. Firstly, a relation model of test tasks was established based on graph coloring theory, in which the occupation of device resources by test tasks is represented as a graph. Based on this relation model, the optimal solution was searched for by combining the artificial bee colony algorithm with the crossover and mutation operations unique to the genetic algorithm, avoiding premature convergence while accelerating convergence. Eventually, a grouping scheme with maximized parallelism was generated. Simulation results verify that the proposed method can effectively realize parallel testing and improve the test efficiency of automatic test systems.

    Design and implementation of abnormal behavior detection system in cloud computing
    YU Hongyan, CEN Kailun, YANG Tengxiao
    2015, 35(5):  1284-1289.  DOI: 10.11772/j.issn.1001-9081.2015.05.1284
    Abstract | PDF (997KB)

    Worms, Address Resolution Protocol (ARP) broadcasts and other abnormal behaviors that attack the cloud computing platform from within its virtual machines cannot be detected by traditional network security components. To solve this problem, an abnormal behavior detection architecture for the cloud computing platform was designed, featuring detection of both signature and non-signature worm behaviors based on mutation theory and a "Detection-Isolation-Cure-Restore" intelligent process for cloud security. Anomaly detection, event and defense management, and ARP broadcast detection for the cloud platform were integrated in the system. The experimental results show that the system can detect and defend against abnormal behavior inside the cloud computing platform and provide real-time collection and analysis of that behavior, with traffic information refreshed automatically every 5 seconds, system throughput reaching 640 Gb, and the bandwidth occupied by abnormal flows reduced to less than 5% of the total bandwidth of the protected link.

    Design and implementation of local processor in a distributed system
    WEI Min, LIU Yi'an, WU Hongyan
    2015, 35(5):  1290-1295.  DOI: 10.11772/j.issn.1001-9081.2015.05.1290
    Abstract | PDF (860KB)

    Concerning the large amount of data that needs to be processed in real time during the production process, a local processor based on a multi-thread co-processing architecture and two data buffer mechanisms was implemented. The multi-functional thread design in Hadoop's parallel architecture, especially the MapReduce principle, strongly influenced the design of the local processor. Based on this user-defined architecture, the local processor ensures data concurrency and correctness during receiving, computing and uploading. The system has been in production for over one year; it meets the enterprise requirements and shows good stability, real-time performance, effectiveness and scalability. The application results show that the local processor achieves synchronized analysis and processing of massive data.

    Utilizing multi-core CPU to accelerate remote sensing image classification based on K-means algorithm
    WU Jiexuan, CHEN Zhenjie, ZHANG Yunqian, PIAN Yuzhe, ZHOU Chen
    2015, 35(5):  1296-1301.  DOI: 10.11772/j.issn.1001-9081.2015.05.1296
    Abstract | PDF (963KB)

    Concerning the application requirements for fast classification of large-scale remote sensing images, a parallel classification method based on the K-means algorithm was proposed. Combining the process-level and thread-level parallelism features of multi-core CPUs, reasonable strategies for data-granularity decomposition and task scheduling between processes and threads were implemented. The algorithm achieves satisfactory parallel acceleration while ensuring classification accuracy. Experimental results on large-volume, multi-scale remote sensing images show that the proposed parallel algorithm significantly reduces classification time, achieves good speedup (up to 13.83) and good load balancing, and can thus solve remote sensing image classification problems for large areas.
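    A toy thread-parallel K-means iteration (chunked nearest-center assignment in worker threads, followed by a serial center update) conveys the decomposition idea; it uses thread-level parallelism only, and the paper's process-level decomposition and scheduling strategies are not reproduced:

```python
from concurrent.futures import ThreadPoolExecutor

def assign_chunk(chunk, centers):
    """Label each pixel vector in the chunk with its nearest center."""
    labels = []
    for p in chunk:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
        labels.append(d.index(min(d)))
    return labels

def kmeans_parallel(pixels, k, iters=10, workers=4):
    centers = [list(pixels[i]) for i in range(k)]   # naive initialization
    labels = [0] * len(pixels)
    for _ in range(iters):
        # strided chunks so every worker gets a similar load
        chunks = [pixels[i::workers] for i in range(workers)]
        with ThreadPoolExecutor(max_workers=workers) as ex:
            parts = list(ex.map(assign_chunk, chunks, [centers] * workers))
        for w, part in enumerate(parts):            # stitch labels back
            for j, lab in enumerate(part):
                labels[w + j * workers] = lab
        for c in range(k):                          # serial update step
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers
```

    The assignment step dominates the cost and is embarrassingly parallel, which is why chunking it across workers yields near-linear speedup on large images.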

    Novel K-medoids clustering algorithm based on breadth-first search
    YAN Hongwen, ZHOU Yamei, PAN Chu
    2015, 35(5):  1302-1305.  DOI: 10.11772/j.issn.1001-9081.2015.05.1302
    Abstract | PDF (626KB)

    Due to disadvantages of the traditional K-medoids clustering algorithm such as sensitivity to the initial selection of centers, random center selection and poor accuracy, a breadth-first search strategy for centers, built on effective initialization by granular computing, was proposed. The new algorithm first selects K granules using granular computing and takes their corresponding centers as the K initial centers. Secondly, according to the similarity between objects, the algorithm builds a binary tree of similar objects for each initial center, taken as the root node, and then traverses each binary tree by breadth-first search to find the K optimal centers. Moreover, the fitness function was optimized using within-cluster and between-cluster distances. The experimental results on the standard UCI data sets Iris and Wine show that the proposed algorithm effectively reduces the number of iterations while guaranteeing the accuracy of clustering.

    Applications of unbalanced data classification based on optimized support vector machine ensemble classifier
    ZHANG Shaoping, LIANG Xuechun
    2015, 35(5):  1306-1309.  DOI: 10.11772/j.issn.1001-9081.2015.05.1306
    Abstract | PDF (588KB)

    Traditional classification algorithms are mostly based on balanced datasets; when the sample distribution is imbalanced, their performance often decreases significantly. For the classification of imbalanced data, an optimized Support Vector Machine (SVM) ensemble classifier model was proposed. Firstly, the model preprocesses the imbalanced data with KSMOTE and Bootstrap and generates the corresponding SVM models in parallel. Then the parameters of these SVM models are optimized using the complex method. Finally, the optimized SVM ensemble classifier is built from the resulting models and produces the final result by a voting mechanism. Experiments on 5 UCI standard data sets show that the optimized SVM ensemble classifier model achieves higher classification accuracy than the SVM model, the optimized SVM model and others, and also verify the effect of different bootNum values on the optimized SVM ensemble classifier.

    Classification method of text sentiment based on emotion role model
    HU Yang, DAI Dan, LIU Li, FENG Xupeng, LIU Lijun, HUANG Qingsong
    2015, 35(5):  1310-1313.  DOI: 10.11772/j.issn.1001-9081.2015.05.1310
    Abstract | PDF (780KB)

    In order to solve the misjudgment caused by emotions pointing to unknown or missing hidden views in traditional emotion classification methods, a text sentiment classification method based on an emotion role model was proposed. The method first identifies the evaluation objects in the text, and uses a measure based on local semantic analysis to tag the emotion of sentences containing potential evaluation objects. It then distinguishes the positive and negative polarity of the evaluation objects by defining their emotional roles, and integrates the tendency value of the emotional role into the feature space to improve the feature-weight computation. Finally, the concept of "feature convergence" is introduced to reduce the dimension of the model. The experimental results show that, compared with approaches that pick strongly subjective emotional items as features, the proposed method effectively improves the accuracy of text sentiment classification by 3.2%.

    Automatic Chinese sentences group method based on multiple discriminant analysis
    WANG Rongbo, LI Jie, HUANG Xiaoxi, ZHOU Changle, CHEN Zhiqun, WANG Xiaohua
    2015, 35(5):  1314-1319.  DOI: 10.11772/j.issn.1001-9081.2015.05.1314
    Abstract | PDF (995KB)

    In order to solve problems in the Chinese sentence grouping domain, including the lack of computational linguistics data and of joint markers in a discourse, and the fact that the sentence group has rarely been treated as a grammar unit, an automatic Chinese sentence grouping method based on Multiple Discriminant Analysis (MDA) was proposed. An annotated evaluation corpus for Chinese sentence groups was constructed based on Chinese sentence group theory, and a group of evaluation functions J was then designed based on the MDA method to realize automatic Chinese sentence grouping. The experimental results show that the length of a segmented unit and a discourse's joint markers contribute to the grouping performance, and that the Skip-Gram model works better than the traditional Vector Space Model (VSM). The evaluation parameter Pμ reaches 85.37% and WindowDiff reduces to 24.08%. The proposed method achieves better grouping performance than the original MDA method.

    Method of bursty events detection based on sentiment filter
    FEI Shaodong, YANG Yuzhen, LIU Peiyu, WANG Jian
    2015, 35(5):  1320-1323.  DOI: 10.11772/j.issn.1001-9081.2015.05.1320
    Abstract | PDF (624KB)

    On we-media platforms such as microblogs, emergencies are characterized by suddenness and multiple bursting points, which makes their detection difficult. This paper therefore proposed a bursty event detection method based on sentiment filtering. Firstly, the topic is mapped to a hierarchical model. Then the model characteristics are dynamically adjusted in a timing-driven way to detect new topics in the information stream. On this basis, the method analyzes users' emotional attitudes toward these topics, divides the topics into positive and negative emotional tendencies, and regards topics dominated by negative emotion as emergent topics. The experimental results show that the accuracy and recall of the proposed method both increase by about 10% compared with the baseline.

    Method by using time factors in recommender system
    FAN Jiabing, WANG Peng, ZHOU Weibo, YAN Jingjing
    2015, 35(5):  1324-1327.  DOI: 10.11772/j.issn.1001-9081.2015.05.1324
    Abstract ( )   PDF (722KB) ( )  
    References | Related Articles | Metrics

    Concerning the problem that traditional recommendation algorithms ignore time factors, and according to the similarity of individuals' short-term behavior, a calculation method of item correlation using a time decay function based on users' interest was proposed, and a new item similarity was derived from it. At the same time, the TItemRank algorithm was proposed, which improves the ItemRank algorithm by combining it with the user-interest-based item correlation. The experimental results show that the improved algorithms achieve better recommendation effects than the classical ones when the recommendation list is short. In particular, when the recommendation list has 20 items, the precision of the user-interest-based item similarity is 21.9% higher than Cosine similarity and 6.7% higher than Jaccard similarity; when the recommendation list has 5 items, the precision of TItemRank is 2.9% higher than ItemRank.
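    The time-decay idea above can be sketched as follows. This is an illustrative sketch, not the paper's actual formula: the function names, the exponential half-life form and all parameter values are assumptions.

```python
import math

def time_decay(t_now, t_rated, half_life=30.0):
    """Exponential decay weight: an interaction loses half of its
    influence every `half_life` days (the half-life is illustrative)."""
    return math.pow(0.5, (t_now - t_rated) / half_life)

def decayed_correlation(events_i, events_j, t_now, half_life=30.0):
    """Correlation of two items: sum of decayed weights over users who
    interacted with both, taking the older of the two interaction times."""
    score = 0.0
    for user, t_i in events_i.items():
        if user in events_j:
            t = min(t_i, events_j[user])
            score += time_decay(t_now, t, half_life)
    return score

# Toy data: items i and j were both touched by users u1 and u2
i = {"u1": 0.0, "u2": 50.0}   # user -> interaction time (days)
j = {"u1": 10.0, "u2": 55.0}
print(round(decayed_correlation(i, j, t_now=60.0), 4))
```

    Recent co-interactions contribute close to 1 each, old ones close to 0, so the same co-occurrence counts are re-weighted toward short-term behavior.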

    Collaborative ranking algorithm by explicit and implicit feedback fusion
    LI Gai
    2015, 35(5):  1328-1332.  DOI: 10.11772/j.issn.1001-9081.2015.05.1328
    Abstract ( )   PDF (874KB) ( )  
    References | Related Articles | Metrics

    The problem with previous research on collaborative ranking is that it does not make full use of the information in the dataset, focusing either on explicit feedback data or on implicit feedback data; no prior work has studied collaborative ranking algorithms that fuse explicit and implicit feedback. To overcome these defects, a new collaborative ranking algorithm fusing explicit and implicit feedback, named MERR_SVD++, was proposed, which optimizes Expected Reciprocal Rank (ERR) based on the newest Extended Collaborative Less-is-More Filtering (xCLiMF) model and the Singular Value Decomposition++ (SVD++) algorithm. The experimental results on real-world datasets show that the values of Normalized Discounted Cumulative Gain (NDCG) and ERR for MERR_SVD++ are increased by 25.9% compared with the xCLiMF, Cofi Ranking (CofiRank), PopRec and Random collaborative ranking algorithms, and that the running time of MERR_SVD++ grows linearly with the number of ratings. Because of its high precision and good scalability, MERR_SVD++ is suitable for processing big data and has wide application prospects in the field of Internet information recommendation.

    Naïve differential evolution algorithm
    WANG Shenwen, ZHANG Wensheng, QIN Jin, XIE Chengwang, GUO Zhaolu
    2015, 35(5):  1333-1335.  DOI: 10.11772/j.issn.1001-9081.2015.05.1333
    Abstract ( )   PDF (434KB) ( )  
    References | Related Articles | Metrics

    To overcome the singleness of existing mutation strategies, a naïve mutation strategy was proposed that moves toward the best individual and away from the worst one. In addition, a scale-factor self-adaptation mechanism was used: the parameter is set to a small value when the dimension values of three random individuals are very close to each other, and to a large value otherwise. The results show that Differential Evolution (DE) with the new mechanism exhibits robust convergence behavior as measured by the average number of fitness evaluations, the success rate and the acceleration rate.
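    A minimal one-dimensional sketch of the two ideas, the move-toward-best/away-from-worst mutation and the closeness-based scale factor, follows; the exact update form, the threshold and the two F values are illustrative assumptions, not the paper's.

```python
def adaptive_F(x1, x2, x3, eps=1e-3, f_small=0.3, f_large=0.9):
    """Scale-factor heuristic from the abstract: use a small F when the
    three random individuals are nearly identical in this dimension,
    a large F otherwise (eps and the two F values are illustrative)."""
    spread = max(x1, x2, x3) - min(x1, x2, x3)
    return f_small if spread < eps else f_large

def naive_mutation(x, best, worst, F):
    """Move toward the best individual and away from the worst one."""
    return x + F * (best - x) + F * (x - worst)

# One-dimensional demo: the mutant is pulled toward 0.0, pushed from 5.0
print(naive_mutation(1.0, 0.0, 5.0, adaptive_F(1.0, 1.01, 5.0)))
```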

    Particle swarm optimization algorithm using opposition-based learning and adaptive escape
    LYU Li, ZHAO Jia, SUN Hui
    2015, 35(5):  1336-1341.  DOI: 10.11772/j.issn.1001-9081.2015.05.1336
    Abstract ( )   PDF (853KB) ( )  
    References | Related Articles | Metrics

    To overcome the slow convergence velocity of Particle Swarm Optimization (PSO) and its tendency to fall into local optima, a PSO algorithm using opposition-based learning and adaptive escape was proposed. The proposed algorithm divides the states of population evolution into a normal state and a premature state by setting a threshold. If the population is in the normal state, the standard PSO algorithm is adopted to evolve; otherwise, the population has fallen into prematurity, and a strategy of opposition-based learning and adaptive escape is adopted: the individual optimal location generates an opposite solution by opposition-based learning, which increases the learning ability of the particles, enhances the ability to escape from local optima, and raises the optimization rate. Experiments were conducted on 8 classical benchmark functions. The experimental results show that the proposed algorithm has better convergence velocity and precision than classical PSO algorithms such as Fully Informed Particle Swarm optimization (FIPS), self-organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients (HPSO-TVAC), Comprehensive Learning Particle Swarm Optimizer (CLPSO), Adaptive Particle Swarm Optimization (APSO), Double Center Particle Swarm Optimization (DCPSO) and Particle Swarm Optimization algorithm with Fast convergence and Adaptive escape (FAPSO).
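    The opposition-based learning step has a standard form, sketched below; the `is_premature` variance test is a simplified stand-in for the paper's threshold-based state division, not its actual criterion.

```python
def opposite(position, lo, hi):
    """Opposition-based learning: reflect each coordinate of a position
    inside the search bounds [lo, hi]."""
    return [lo + hi - x for x in position]

def is_premature(fitnesses, threshold=1e-6):
    """Simplified stand-in for the premature-state test: population
    fitness variance below a threshold marks the premature state."""
    mean = sum(fitnesses) / len(fitnesses)
    var = sum((f - mean) ** 2 for f in fitnesses) / len(fitnesses)
    return var < threshold

pbest = [3.0, -1.0, 4.0]
print(opposite(pbest, -5.0, 5.0))   # -> [-3.0, 1.0, -4.0]
```

    When the swarm stagnates, evaluating the opposite of a personal best probes the far side of the search space at no extra modelling cost.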

    Improved discrete particle swarm algorithm for solving flexible flow shop scheduling problem
    XU Hua, ZHANG Ting
    2015, 35(5):  1342-1347.  DOI: 10.11772/j.issn.1001-9081.2015.05.1342
    Abstract ( )   PDF (963KB) ( )  
    References | Related Articles | Metrics

    An improved Discrete Particle Swarm Optimization (DPSO) algorithm was proposed for solving the Flexible Flow Shop scheduling Problem (FFSP) with the makespan criterion. The proposed algorithm redefined the particle's velocity and position operators, and introduced an encoding matrix and a decoding matrix to represent the relationships between jobs, machines and scheduling. To improve the quality of the initial population of the improved DPSO algorithm for the FFSP, by analyzing the relationship between the initial machine selection and the total completion time, a shortest-time decomposition strategy based on the NEH algorithm was proposed. The experimental results show that the algorithm performs well in solving the flexible flow shop scheduling problem and is an effective scheduling algorithm.

    Double subgroups fruit fly optimization algorithm with characteristics of Levy flight
    ZHANG Qiantu, FANG Liqing, ZHAO Yulong
    2015, 35(5):  1348-1352.  DOI: 10.11772/j.issn.1001-9081.2015.05.1348
    Abstract ( )   PDF (713KB) ( )  
    References | Related Articles | Metrics

    In order to overcome the low convergence precision of the Fruit fly Optimization Algorithm (FOA) and its tendency to relapse into local optima, an improved FOA with Levy flight, called the double subgroups FOA with the characteristics of Levy flight (LFOA), was proposed. Firstly, the fruit fly group was dynamically divided, according to each individual's evolutionary level, into two subgroups (an advanced subgroup and a drawback subgroup) whose centers were the best individual and the worst individual in the contemporary group, respectively. Secondly, a global search was performed by the drawback subgroup under the guidance of the best individual, and a fine local search was performed by the advanced subgroup through Levy flights around the best individual, so that the global and local search abilities were balanced, while the occasional long-distance jumps of the Levy flight helped the fruit flies escape from local optima. Finally, the two subgroups exchanged information by updating the overall optimum and recombining the subgroups. The experimental results on 6 typical functions show that the new method has better global search ability, faster convergence and higher convergence precision.
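    Levy-flight steps are commonly generated with Mantegna's algorithm; a sketch of that generator and of a Levy move around the best individual follows. The step scale `alpha` and this exact move form are illustrative assumptions, not the paper's update rule.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-flight step via Mantegna's algorithm: heavy-tailed steps
    give occasional long jumps that help escape local optima."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_move(position, best, alpha=0.01, beta=1.5, rng=random):
    """Perturb a solution around the best individual with Levy steps
    (the step scale alpha and this move form are illustrative)."""
    return [x + alpha * levy_step(beta, rng) * (x - b)
            for x, b in zip(position, best)]

random.seed(42)
steps = [levy_step() for _ in range(1000)]
```

    Most steps are small, which gives the fine local search; the heavy tail occasionally produces a jump orders of magnitude larger, which supplies the escape behavior the abstract relies on.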

    Range-parameterized square root cubature Kalman filter using hybrid coordinates for bearings-only target tracking
    ZHOU Deyun, ZHANG Hao, ZHANG Kun, ZHANG Kai, PAN Qian
    2015, 35(5):  1353-1357.  DOI: 10.11772/j.issn.1001-9081.2015.05.1353
    Abstract ( )   PDF (535KB) ( )  
    References | Related Articles | Metrics

    In order to solve the problems of nonlinear observation equations and susceptibility to the initial filtering value in bearings-only target tracking, a range-parameterized hybrid-coordinates Square Root Cubature Kalman Filter (SRCKF) algorithm was proposed. Firstly, the SRCKF was applied in hybrid coordinates, obtaining a better tracking effect than the SRCKF in Cartesian coordinates. Then the range parameterization strategy was combined with the SRCKF in hybrid coordinates, eliminating the impact of the unobservable range. The simulation results show that the proposed algorithm can significantly improve accuracy and robustness although the computational complexity increases slightly.

    Consensus of the second-order multi-Agent systems with random time-delays
    ZONG Xin, CUI Yan
    2015, 35(5):  1358-1360.  DOI: 10.11772/j.issn.1001-9081.2015.05.1358
    Abstract ( )   PDF (535KB) ( )  
    References | Related Articles | Metrics

    This paper studied consensus problems for second-order multi-Agent systems with random time-delays. Networks under both fixed and switching topologies were taken into consideration. The time-delay-dependent stability criteria of the multi-Agent systems were analyzed by constructing a proper Lyapunov function, and sufficient conditions for all Agents to achieve consensus were given in the form of Linear Matrix Inequalities (LMI). Finally, the simulation results show the correctness and effectiveness of the conclusions.

    Intelligent control based on ε-support vector regression theory for regional traffic signal system
    YOU Ziyi, CHEN Shiguo, WANG Yi
    2015, 35(5):  1361-1366.  DOI: 10.11772/j.issn.1001-9081.2015.05.1361
    Abstract ( )   PDF (901KB) ( )  
    References | Related Articles | Metrics

    Intelligent control of urban traffic signals is an important element of intelligent transportation systems. In order to meet the real-time and accuracy requirements of regional traffic signal coordinated control, this paper presented an Intelligent Control Strategy for Regional Traffic Signals (ICSRTS) based on ε-SVR (Support Vector Regression) nonlinear regression theory. Combined with an existing data aggregation algorithm, ICSRTS was built on a wireless sensor network structure and adopted a clustering strategy to create a discrete switching system model that integrates information scheduling and control for the regional traffic system. In the discrete switching system, the network delay and the packet loss rate of data transmission were considered; furthermore, the observer used a modified ε-SVR theory to realize online prediction of the traffic state from multi-source data, and the controller then carried out coordinated control of the overall traffic signals. The asymptotic stability of the discrete switching system was analyzed using a Lyapunov function. Simulation results show that ICSRTS achieves a better average intersection delay time than ordinary fuzzy neural control and an ordinary ε-SVR prediction algorithm. Therefore, this method can realize regional traffic signal coordinated control effectively and in real time, and reduce the area of traffic congestion and the energy consumption.

    Parameter identification in chaotic system based on feedback teaching-learning-based optimization algorithm
    LI Ruiguo, ZHANG Hongli, WANG Ya
    2015, 35(5):  1367-1372.  DOI: 10.11772/j.issn.1001-9081.2015.05.1367
    Abstract ( )   PDF (775KB) ( )  
    References | Related Articles | Metrics

    Concerning the low precision and slow speed of traditional intelligent optimization algorithms for parameter identification in chaotic systems, a new parameter identification method based on a feedback teaching-learning-based optimization algorithm was proposed. The method builds on the teaching-learning-based optimization algorithm, with a feedback stage introduced at the end of the teaching and learning stages; meanwhile, the parameter identification problem was converted into a function optimization problem in parameter space. Taking the three-dimensional quadratic autonomous generalized Lorenz system, the Jerk system and the Sprott-J system as models, comparison experiments among the particle swarm optimization algorithm, the quantum particle swarm optimization algorithm, the teaching-learning-based optimization algorithm and the feedback teaching-learning-based optimization algorithm were conducted. The identification error of the feedback teaching-learning-based optimization algorithm was zero, while the number of searches was decreased significantly. The simulation results show that the feedback teaching-learning-based optimization algorithm markedly improves the precision and speed of parameter identification in chaotic systems, which demonstrates the feasibility and effectiveness of the algorithm.
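    The teaching phase of a standard teaching-learning-based optimization algorithm can be sketched as below (the paper's added feedback stage is not shown); the toy sphere objective and all constants are illustrative.

```python
import random

def sphere(x):
    """Toy objective to minimise."""
    return sum(v * v for v in x)

def teaching_phase(pop, rng=random):
    """Teaching phase of teaching-learning-based optimisation: every
    learner moves toward the teacher (current best) relative to the class
    mean, and a move is kept only if it improves that learner."""
    dim = len(pop[0])
    teacher = min(pop, key=sphere)
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    new_pop = []
    for x in pop:
        tf = rng.choice([1, 2])                     # teaching factor
        r = rng.random()
        cand = [x[d] + r * (teacher[d] - tf * mean[d]) for d in range(dim)]
        new_pop.append(cand if sphere(cand) < sphere(x) else x)
    return new_pop

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
better = teaching_phase(pop)
print(min(map(sphere, better)) <= min(map(sphere, pop)))   # greedy: never worse
```

    Parameter identification then amounts to running such phases with the objective replaced by the mismatch between the observed chaotic trajectory and the trajectory simulated with candidate parameters.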

    Building protocol interactive process based on message sequence chart
    SHI Wang, YANG Yingjie, TANG Huilin, DONG Lipeng
    2015, 35(5):  1373-1378.  DOI: 10.11772/j.issn.1001-9081.2015.05.1373
    Abstract ( )   PDF (936KB) ( )  
    References | Related Articles | Metrics

    In order to effectively grasp protocol interactive behavior, a method to automatically build the protocol interactive process based on message sequence charts was proposed. Firstly, according to the characteristics of the protocol interactive process, a dependency graph was defined to represent the partial order of events in a message sequence, and network flows were converted into dependency graphs. Secondly, basic message sequences were used to describe fragments of protocol interactive behavior, and the basic message sequences were mined by defining the event maximum suffix. Finally, the maximal dependency graphs found were connected and merged to build a message sequence chart. The experimental results show that the proposed method has high accuracy and that the built message sequence chart can visually represent the protocol interactive process.

    Visual fusion and analysis for multivariate heterogeneous network security data
    ZHANG Sheng, SHI Ronghua, ZHAO Ying
    2015, 35(5):  1379-1384.  DOI: 10.11772/j.issn.1001-9081.2015.05.1379
    Abstract ( )   PDF (1085KB) ( )  
    References | Related Articles | Metrics

    With the growing richness of modern network security devices, network security logs show a trend of multi-source heterogeneity. In order to handle large-scale, heterogeneous, rapidly changing network logs, a visual method was proposed for fusing network security logs and understanding the network security situation. Firstly, according to eight selected characteristics of heterogeneous security logs, information entropy, a weighting method and statistical methods were used respectively to pre-process the network characteristics. Secondly, a treemap and glyphs were used to dig into the security details at the micro level, and a time-series chart was used to show the development trend of the network at the macro level. Finally, the system also created graphical features to visually analyze network attack patterns. The experimental analysis of the network security datasets from VAST Challenge 2013 shows the substantial advantages of this proposal in understanding the network security situation, identifying anomalies, discovering attack patterns and removing false positives.
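    The entropy-based preprocessing of a log characteristic can be sketched with Shannon entropy; the choice of destination ports as the example feature is an assumption for illustration, not a detail from the paper.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of one log-feature column; a sudden
    entropy drop on, e.g., destination ports can hint at focused
    scanning, while a spike can hint at randomised behaviour."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

ports_normal = [80, 443, 22, 8080, 53, 80, 443, 25]   # mixed services
ports_scan = [445] * 8                                # single target port
print(shannon_entropy(ports_normal))
print(shannon_entropy(ports_scan) == 0.0)
```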

    Conditional privacy-preserving authentication scheme for vehicular Ad Hoc network
    LIU Dan, SHI Runhua, ZHONG Hong, ZHANG Shun, CUI Jie, XU Yan
    2015, 35(5):  1385-1392.  DOI: 10.11772/j.issn.1001-9081.2015.05.1385
    Abstract ( )   PDF (1336KB) ( )  
    References | Related Articles | Metrics

    Focusing on the privacy preservation of identity authentication in Vehicular Ad Hoc NETworks (VANET), a conditional privacy-preserving authentication scheme was proposed. Firstly, this paper introduced short signature technology and constructed a new identity-based short signature scheme. Compared with the well-known Conditional Privacy-Preserving Authentication Scheme (CPAS), the proposed scheme reduces the computation cost of both the signing and verification processes and improves the communication efficiency. Secondly, the scheme divides the private signature key into two correlative sub-segments, so that it can effectively solve the issue of key escrow; the scheme is therefore especially suitable for the VANET environment. Based on the proposed signature scheme, a conditional privacy-preserving authentication scheme was presented, which achieves identity authentication with conditional privacy preservation. The theoretical and efficiency analysis shows that the scheme needs only three dot multiplications in the signing process, and one dot multiplication and two pairing operations in the verification process. In particular, the proposed scheme uses batch verification with small-coefficient tests to accelerate authentication and reduce the error rate.

    Network security situation awareness based on weighted factor
    WEN Zhicheng, CAO Chunli
    2015, 35(5):  1393-1398.  DOI: 10.11772/j.issn.1001-9081.2015.05.1393
    Abstract ( )   PDF (913KB) ( )  
    References | Related Articles | Metrics

    Concerning the problems of present network security Situation Awareness (SA), namely limited scope, high time and space complexity, single-source information and accuracy deflection, a weighted-factor framework of network security situation awareness was presented. It fully considers the fusion of multi-source, multi-level heterogeneous information, thus displaying the current network security situation comprehensively and reflecting it accurately. Finally, the proposed weighted-factor model and algorithm of network security situation awareness were verified on instance data from a network, and the experimental results show the validity of the proposed method.

    Identity-based group key exchange protocol for unbalanced network environment
    YUAN Simin, MA Chuangui, XIANG Shengqi
    2015, 35(5):  1399-1405.  DOI: 10.11772/j.issn.1001-9081.2015.05.1399
    Abstract ( )   PDF (1048KB) ( )  
    References | Related Articles | Metrics

    In consideration of unbalanced wireless networks whose participants have unbalanced computing power, this article analyzed the security of the IDentity-based Authenticated Group Key Agreement (ID-AGKA) protocol and pointed out that the protocol cannot resist ephemeral key leakage attacks. The generation of the agreement signature was then improved, which improved the security and effectively reduced the computational and communication costs, making the improved protocol more suitable for unbalanced wireless networks. Meanwhile, the protocol uses a designated verifier signature, which can effectively solve the privacy problem of the signer. Moreover, the dynamic mechanism of the unbalanced network group key agreement protocol was improved by having the powerful node make full use of the low-power nodes' computation information before users join or leave; this improvement greatly reduces the unnecessary computation of the low-power nodes, making the new protocol better conform to actual needs. Finally, the security of the improved Group Key Agreement (GKA) protocol was proved in the random oracle model based on the Divisible Decisional Diffie-Hellman (DDDH) assumption.

    Reliability and randomness enhancing techniques for physical unclonable functions
    ZHAN Huo, LIN Yaping, ZHANG Jiliang, TANG Bing
    2015, 35(5):  1406-1411.  DOI: 10.11772/j.issn.1001-9081.2015.05.1406
    Abstract ( )   PDF (927KB) ( )  
    References | Related Articles | Metrics

    Due to the impact of temperature, voltage and device aging, traditional Ring Oscillator based Physical Unclonable Functions (RO-PUF) suffer from two main issues: unreliability of the Physical Unclonable Function (PUF) response and nonrandom distribution of Ring Oscillator (RO) frequencies. In order to improve PUF reliability, an approximate frequency slope compensation method, which uses the slope relationship between RO frequency and temperature to compensate unstable RO frequencies, was proposed in this paper. As a result, the unstable Challenge-Response Pairs (CRP) yield reliable responses. To enhance security, a new scheme based on Mean Absolute Difference (MAD) was proposed. Firstly, the scheme measures the average RO frequency of each chip, then filters the corresponding average RO frequencies multiple times to extract the truly random frequency component, so that the output of the PUF follows a random distribution. The experimental results show that the proposed scheme can improve the reliability and security of RO-PUF effectively.

    Hardware/Software co-design of SM2 encryption algorithm based on the embedded SoC
    ZHONG Li, LIU Yan, YU Siyang, XIE Zhong
    2015, 35(5):  1412-1416.  DOI: 10.11772/j.issn.1001-9081.2015.05.1412
    Abstract ( )   PDF (797KB) ( )  
    References | Related Articles | Metrics

    Concerning the problems that the development cycle of existing elliptic-curve algorithm system-level design is long and the performance-overhead indicators are unclear, a Hardware/Software (HW/SW) co-design method based on Electronic System Level (ESL) design was proposed. The method derived several HW/SW partitions by analyzing the theory and implementation of the SM2 algorithm, and generated cycle-accurate models of the HW modules with SystemC. Module and system verification compared the execution cycle counts of the HW/SW modules to obtain the best partition. Finally, the ESL models were converted to Register Transfer Level (RTL) models according to the Control Flow Graph (CFG) and Data Flow Graph (DFG) for logic synthesis and comparison. At 50 MHz in a 180 nm CMOS technology, with the best-performance partition, the execution time of a point multiplication was 20 ms, with 83 000 gates and a power consumption of 2.23 mW. The experimental results show that the system-level analysis is conducive to performance and resource evaluation, and has high applicability to encryption chips based on elliptic-curve algorithms. An embedded SoC (System on Chip) based on this algorithm can choose an appropriate architecture according to performance and resource constraints.

    Audio watermarking scheme based on empirical mode decomposition
    WU Penghui, YANG Bailong, ZHAO Wenqiang, GUO Wenpu
    2015, 35(5):  1417-1420.  DOI: 10.11772/j.issn.1001-9081.2015.05.1417
    Abstract ( )   PDF (702KB) ( )  
    References | Related Articles | Metrics

    For the issue that the robustness of traditional audio watermarking algorithms based on Empirical Mode Decomposition (EMD) is not strong, a blind audio watermarking algorithm based on the extrema of the Intrinsic Mode Function (IMF) was presented. The original audio signal was segmented first, and each audio frame was decomposed into a series of IMFs by EMD. Watermarking bits and a synchronization code were embedded in the extrema of the last IMF by mean quantization. The embedding payload of the proposed method is 46.9-50.3 b/s, and the watermarked audio signal keeps the perceptual quality of the original audio signal. Several signal attacks, such as noise addition, MP3 compression, re-sampling, filtering and cropping, were imposed on the watermarked audio; the extracted watermarking bits changed little, which shows the robustness of the proposed scheme. Compared with time-domain and wavelet-domain methods, the proposed method can resist 32 kb/s MP3 compression attacks at a high embedding payload.
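    Mean quantization embedding belongs to the quantization-index-modulation family; a generic one-value sketch follows. The step size, function names and the scalar (rather than mean-of-extrema) carrier are illustrative assumptions, not the paper's parameters.

```python
def embed_bit(value, bit, step=0.25):
    """Quantisation-style embedding: snap a value onto the lattice that
    encodes `bit` (lattice for 0 at multiples of `step`, lattice for 1
    offset by half a step)."""
    q = round((value - bit * step / 2) / step)
    return q * step + bit * step / 2

def extract_bit(value, step=0.25):
    """Blind extraction: decide which lattice the value lies closer to."""
    d0 = abs(value - round(value / step) * step)
    d1 = abs(value - (round((value - step / 2) / step) * step + step / 2))
    return 0 if d0 <= d1 else 1

marked = embed_bit(1.07, 1)
print(extract_bit(marked), extract_bit(marked + 0.05))
```

    Extraction needs no original signal, which is what makes the scheme blind; perturbations smaller than a quarter step leave the decoded bit unchanged.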

    Key techniques for fast instruction set simulator
    FU Lin, HU Jin, LIANG Liping
    2015, 35(5):  1421-1425.  DOI: 10.11772/j.issn.1001-9081.2015.05.1421
    Abstract ( )   PDF (752KB) ( )  
    References | Related Articles | Metrics

    In order to meet the requirement on Instruction Set Simulator (ISS) simulation speed in embedded system development, an improved ISS technique was put forward. The technique introduced instruction preprocessing, a dynamic decode cache structure, multi-threaded C function generation and dynamic scheduling on top of an existing static multi-core simulator to optimize the simulator's performance. The technique was applied successfully in OPT-ISS, which is based on the IME-Diamond multi-core DSP processor. The experimental results show that the technique indeed improves the simulation speed.

    Design of scheduling algorithm for embedded real-time system based on feedback control
    DENG Teng, DAI Zibin, ZHANG Lichao, WU Xuetao
    2015, 35(5):  1426-1429.  DOI: 10.11772/j.issn.1001-9081.2015.05.1426
    Abstract ( )   PDF (790KB) ( )  
    References | Related Articles | Metrics

    In order to solve the problems that the real-time task miss rate of embedded real-time systems is too high and that real-time scheduling is poorly stable when the system load is uncertain, a scheduling model based on feedback control was proposed. The model consists of an improved multi-level queue scheduler and three controllers: an access controller, an execution-level controller and a Proportional-Integral-Derivative (PID) controller. The deviation of the task miss rate is fed back to the PID controller, which generates a corresponding adjustment; the adjustment acts on the other two controllers to adjust the level of real-time tasks, and the adjusted tasks are executed by the scheduler. After structural adjustment and improvement, the scheduling model was applied in the embedded Configurable operating system (eCos). The experimental results show that the proposed method can reduce the task miss rate and solve the overload problem of the system.

    Extremum displacement measurement for laser speckle images with multi-level crossing
    ZHANG Caihong, PAN Guangzhen, YANG Jian, LIU Ting
    2015, 35(5):  1430-1434.  DOI: 10.11772/j.issn.1001-9081.2015.05.1430
    Abstract ( )   PDF (937KB) ( )  
    References | Related Articles | Metrics

    Aiming at the problem of mismatching of extreme points in the extremum method for digital speckle images, a whole-pixel extremum displacement measurement algorithm for laser speckle images with multi-level crossing was proposed. The extremum method was used to find the extreme points of the speckle images before and after displacement, construct the extremum value matrices and generate 3-D figures. Truncation points were then obtained by crossing the 3-D figures with multiple specified gray planes, and the relative displacement matrix of the truncation points was analyzed to calculate the displacement of the object. The simulation results under noise-free and noisy conditions prove that the improved algorithm, while guaranteeing the accuracy of displacement measurement, decreases the number of mismatched extreme points and improves the operation efficiency by a factor of 103. The algorithm was applied to laser mouse positioning; the experimental results show that the resolution of the measured displacement reached 1 micron, and the moving direction angle error was less than 2.72°. It is concluded that the multi-level truncation extremum displacement method based on laser speckle images is a rapid, efficient and practical algorithm.

    Enhancement algorithm for fog and dust images in coal mine based on dark channel prior theory and bilateral adaptive filter
    DU Mingben, CHEN Lichao, PAN Lihu
    2015, 35(5):  1435-1438.  DOI: 10.11772/j.issn.1001-9081.2015.05.1435
    Abstract ( )   PDF (769KB) ( )  
    References | Related Articles | Metrics

    Video images captured in coal mines filled with coal dust and mist often suffer from quality problems such as heavy noise, low resolution and blur. To solve this problem, an enhancement algorithm for fog and dust images in coal mines based on dark channel prior theory and an adaptive bilateral filter was proposed. On the basis of the dark channel prior, the soft matting process was replaced with adaptive bilateral filtering to obtain a refined transmittance map. Then, according to the special circumstances of coal mines, the global atmospheric light and the coarse transmittance map were obtained from a new perspective, and image denoising was realized on the basis of the image degradation model. The experimental results show that the processing time for an image with a resolution of 1024×576 is 1.9 s; compared with He's algorithm (HE K, SUN J, TANG X. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(12): 1-13.), the efficiency increased 5 times. Compared with other algorithms such as the histogram equalization method, the proposed algorithm effectively enhances image detail, making the images more suitable for human vision as a whole.
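    The dark channel itself (from He et al.'s prior, on which the algorithm builds) can be computed as below; this pure-Python sketch assumes color values normalized to [0, 1] and nested-list images, both illustrative simplifications.

```python
def dark_channel(image, patch=3):
    """Dark channel of an RGB image given as nested lists of (r, g, b)
    tuples with values in [0, 1]: per pixel, the minimum colour value
    over a patch x patch neighbourhood (clamped at the borders)."""
    h, w = len(image), len(image[0])
    rad = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            m = 1.0
            for dy in range(-rad, rad + 1):
                for dx in range(-rad, rad + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    m = min(m, *image[yy][xx])
            out[y][x] = m
    return out

# 2x2 toy image: one dark pixel drags every patch minimum down to 0.05
img = [[(0.9, 0.8, 0.7), (0.6, 0.9, 0.8)],
       [(0.05, 0.4, 0.5), (0.7, 0.7, 0.9)]]
print(dark_channel(img))
```

    Haze-free regions give dark-channel values near zero; uniformly bright values signal haze (or dust), which is what the transmittance estimate is derived from.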

    Real-time simulation and application of depth-of-field based on post-processing
    CAO Yanjue, AN Bowen, LI Qiming
    2015, 35(5):  1439-1443.  DOI: 10.11772/j.issn.1001-9081.2015.05.1439
    Abstract ( )   PDF (776KB) ( )  
    References | Related Articles | Metrics

    In order to solve the problem of rendering Depth Of Field (DOF) in Virtual Reality (VR) systems, an improved post-processing DOF algorithm based on the Graphics Processing Unit (GPU) was proposed. First, the scene was rendered at full resolution to an offscreen buffer, and the linear depth of each pixel was output to the alpha channel of the buffer. Then, the fully rendered scene was downsampled to 1/16 of the original size. Next, the downsampled scene was blurred by two passes of a separable Gaussian filter and stored as a texture. Finally, the two textures were blended according to the linear depth of each pixel with the size of the Circle of Confusion (CoC). The algorithm was applied in a virtual roaming scene and achieved a good real-time simulation of the DOF effect. The experimental results show that the algorithm alleviates problems of traditional post-processing filter algorithms, such as poor continuity and brightness diffusion, and meets the demand of real-time interaction in VR.
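    The final blending step can be sketched as a per-pixel lerp driven by a simple CoC model; the CoC formula, its parameters and the function names here are illustrative assumptions, not the paper's exact camera model.

```python
def circle_of_confusion(depth, focal_depth, focal_range, max_coc=1.0):
    """Normalised circle-of-confusion size from linear depth: zero inside
    the in-focus range, growing with distance from it, clamped at 1."""
    d = abs(depth - focal_depth)
    return min(max((d - focal_range) / focal_range, 0.0), max_coc)

def blend(sharp, blurred, coc):
    """Final colour: per-pixel lerp between the full-resolution texture
    and the downsampled, Gaussian-blurred texture by the CoC size."""
    return tuple(s + (b - s) * coc for s, b in zip(sharp, blurred))

sharp_px, blurred_px = (1.0, 0.5, 0.0), (0.4, 0.4, 0.4)
print(blend(sharp_px, blurred_px, circle_of_confusion(10.0, 10.0, 2.0)))
print(blend(sharp_px, blurred_px, circle_of_confusion(30.0, 10.0, 2.0)))
```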

    Correction of single circular fisheye image
    ZHANG Jun, WANG Zhizhou, YANG Zhengling
    2015, 35(5):  1444-1448.  DOI: 10.11772/j.issn.1001-9081.2015.05.1444
    Abstract ( )   PDF (782KB) ( )  
    References | Related Articles | Metrics

    Focused on the issues that circular-domain extraction is not accurate and that the effective corrected field angle cannot reach 180 degrees in the vertical direction, a Variable Angle Line Scan (VALS) method and a Longitudinal Compression Cylindrical Projection (LCCP) method were proposed respectively. By changing the inclination angle of the scan line, the VALS method obtains the coordinates of the cut points, filters out invalid cut-point coordinates, and then derives the parameters of the circular domain using the Kasa circle fitting method. The LCCP method artificially bends the optical path of the traditional cylindrical projection so that light projected onto the point at infinity can be projected back onto the cylindrical surface, thus preserving the image information effectively. Comparison with two known methods, longitude-latitude mapping and Mercator mapping, proves the effectiveness of the proposed algorithm in weakening the blurring caused by stretching at the edge of the corrected image; the result looks more natural.
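    The Kasa circle fit mentioned above solves a linear least-squares problem; a self-contained sketch (normal equations plus a tiny Gaussian elimination) follows, with the demo points chosen for illustration.

```python
def kasa_fit(points):
    """Kasa circle fit: least-squares solution of
    x^2 + y^2 = 2*a*x + 2*b*y + c, giving centre (a, b) and radius
    sqrt(c + a^2 + b^2), via the 3x3 normal equations and a small
    Gaussian elimination (no external libraries)."""
    A = [[x, y, 1.0] for x, y in points]
    d = [x * x + y * y for x, y in points]
    n = len(points)
    M = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(3)]
         for i in range(3)]
    v = [sum(A[k][i] * d[k] for k in range(n)) for i in range(3)]
    for i in range(3):                      # elimination with pivoting
        p = max(range(i, 3), key=lambda row: abs(M[row][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for row in range(i + 1, 3):
            f = M[row][i] / M[i][i]
            for j in range(i, 3):
                M[row][j] -= f * M[i][j]
            v[row] -= f * v[i]
    z = [0.0, 0.0, 0.0]                     # back substitution
    for i in (2, 1, 0):
        z[i] = (v[i] - sum(M[i][j] * z[j] for j in range(i + 1, 3))) / M[i][i]
    a, b = z[0] / 2, z[1] / 2
    return (a, b), (z[2] + a * a + b * b) ** 0.5

# Four points lying exactly on the circle centred at (2, -1) with radius 3
pts = [(5.0, -1.0), (2.0, 2.0), (-1.0, -1.0), (2.0, -4.0)]
(cx, cy), r = kasa_fit(pts)
print(round(cx, 6), round(cy, 6), round(r, 6))
```

    Because the circle equation is linear in (2a, 2b, c), the fit needs no iteration, which is why it suits the many cut points produced by the scan lines.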

    Improved Stoilov algorithm based on short distance priority and weighted mean
    LIU Ting, PAN Guangzhen, YANG Jian, ZHANG Caihong
    2015, 35(5):  1449-1453.  DOI: 10.11772/j.issn.1001-9081.2015.05.1449

When the mean filtering algorithm is used to repair singular points in the Stoilov phase-shift algorithm, details of the phase information are lost, which leads to incorrect phase calculation. To solve this problem, a new weighted mean filtering algorithm based on short distance priority was proposed. First, singular points were marked by a statistical approach. Then, a filter window was built for each singular point following the short distance priority principle, with its size determined by the number of available non-singular points and the shortest distance to them. Last, the singular point was replaced with the weighted mean of the non-singular points in the window to implement the correction. The experimental results show that the filter window adapts more finely in this method, and the proposed method can effectively remove impulse noise, has an advantage in protecting phase details, and reduces the Root Mean Squared Error (RMSE) to less than 0.06 cm in actual measurement experiments.
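The window-growing and weighted-mean steps can be sketched as follows (the statistical marking of singular points is assumed already done; the minimum neighbour count and the inverse-distance weights are illustrative choices, not the paper's exact rule):

```python
import numpy as np

def repair_singular_points(phase, singular_mask, max_radius=5):
    # For each marked singular point, grow a square window outward until
    # it contains enough non-singular neighbours (short-distance
    # priority), then replace the point with an inverse-distance
    # weighted mean of those neighbours.
    out = phase.copy()
    rows, cols = phase.shape
    for i, j in zip(*np.nonzero(singular_mask)):
        for radius in range(1, max_radius + 1):
            r0, r1 = max(i - radius, 0), min(i + radius + 1, rows)
            c0, c1 = max(j - radius, 0), min(j + radius + 1, cols)
            valid = ~singular_mask[r0:r1, c0:c1]
            if valid.sum() >= 3:           # enough valid neighbours
                ii, jj = np.nonzero(valid)
                d = np.hypot(ii + r0 - i, jj + c0 - j)
                w = 1.0 / d                # closer points weigh more
                out[i, j] = np.sum(w * phase[r0:r1, c0:c1][valid]) / w.sum()
                break
    return out
```

Weighting by inverse distance keeps the repaired value close to its nearest reliable neighbours, which is what protects local phase detail compared with a plain mean filter.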

    Pickup algorithm for data points of Catmull-Clark subdivision mesh
    ZHANG Xiangyu, MA Xiqing
    2015, 35(5):  1454-1458.  DOI: 10.11772/j.issn.1001-9081.2015.05.1454

Focused on the issue that directly applying the OpenGL selection mechanism to the data points of a Catmull-Clark subdivision mesh may overflow the name stack because of the large data volume, and referencing the intersection theory of subdivision surfaces, a new pickup method for subdivision models was proposed. By extracting and subdividing the neighboring mesh of the pickup objects, the method converted the pickup of data points of the subdivision mesh into the pickup of points, edges and faces of the initial mesh and of the local subdivision meshes at successive levels, and finally the pickup of points of the local subdivision mesh at the last level. Comparison and analysis experiments on several pickup examples were conducted. Both the total number of named objects and the pickup time of the given method were far less than those of the traditional OpenGL selection method when the subdivision mesh had plenty of data points. The experimental results show that the proposed method can quickly and accurately pick up the data points of a subdivision mesh, is especially suitable for complex subdivision models with a large number of data points, and effectively avoids pickup errors caused by name stack overflow.

    Feature selection based on statistical random forest algorithm
    SONG Yuan, LIANG Xuechun, ZHANG Ran
    2015, 35(5):  1459-1461.  DOI: 10.11772/j.issn.1001-9081.2015.05.1459

Focused on the problems that traditional feature selection methods for the brain functional connectivity matrix derived from Resting-state functional Magnetic Resonance Imaging (R-fMRI) suffer from feature redundancy and cannot determine the final feature dimension, a new feature selection algorithm was proposed. The algorithm combined the Random Forest (RF) algorithm with statistical methods and applied it to an experiment identifying schizophrenic patients versus normal controls, selecting features according to the classification results on the out-of-bag data. The experimental results show that, compared with traditional Principal Component Analysis (PCA), the proposed algorithm can effectively retain important features and improve recognition accuracy, and the retained features have a good medical interpretation.

    3D segmentation method combining region growing and graph cut for coronary arteries computed tomography angiography images
    JIANG Wei, LYU Xiaoqi, REN Xiaoying, REN Guoyin
    2015, 35(5):  1462-1466.  DOI: 10.11772/j.issn.1001-9081.2015.05.1462

In order to solve the problem of low efficiency when segmenting three-dimensional Computed Tomography Angiography (CTA) coronary artery images, which have a complex structure and a small region of interest, a segmentation algorithm combining region growing and graph cut was proposed. Firstly, a threshold-based region growing method was used to divide the images into several regions, which removed irrelevant pixels, simplified the structure and highlighted the regions of interest. Then, according to gray-level and spatial information, the simplified images were constructed as a graph. Finally, the graph was segmented using graph cut theory to obtain the segmentation of the coronary arteries. The experimental results show that, compared with the traditional graph cut, segmentation efficiency increases by about 51.7%, which reduces the computational complexity. In terms of rendering quality, the target areas of the segmented coronary artery images are complete, which helps doctors analyze lesions correctly.
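The first stage, threshold-based region growing, can be illustrated with a minimal sketch (4-connectivity and a fixed tolerance around the seed intensity are illustrative assumptions; the paper's exact thresholding rule may differ):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    # Threshold-based region growing: starting from the seed pixel,
    # repeatedly add 4-connected neighbours whose intensity lies
    # within `tol` of the seed intensity; all other pixels are
    # excluded, which prunes the image before graph construction.
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < image.shape[0] and 0 <= nj < image.shape[1]
                    and not mask[ni, nj]
                    and abs(float(image[ni, nj]) - seed_val) <= tol):
                mask[ni, nj] = True
                queue.append((ni, nj))
    return mask
```

Only the pixels inside the grown mask would then be turned into graph nodes, which is where the efficiency gain over running graph cut on the full volume comes from.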

    Dynamic time warping gesture authentication algorithm based on improved Mahalanobis distance
    ZHOU Zhiping, MIAO Minmin
    2015, 35(5):  1467-1470.  DOI: 10.11772/j.issn.1001-9081.2015.05.1467

Concerning the problem that the Euclidean Distance (ED) treats each feature equally and cannot take account of the correlations between features in the Dynamic Time Warping (DTW) method commonly used in dynamic gesture authentication, a DTW gesture authentication algorithm based on an improved Mahalanobis Distance (MD) was put forward. An embedded three-axis accelerometer was utilized to capture the real-time dynamic hand signals; after data preprocessing, acceleration signal similarity was measured by the improved DTW method, in which the time complexity of the distance calculation was optimized according to the characteristics of the covariance matrix; finally the authentication result was obtained by template matching. The experimental results show that the Equal Error Rate (EER) decreases from 3.02% to 1.39% and the average authentication response time reduces by 87.84% after optimization. The proposed method improves the accuracy of dynamic hand gesture authentication while maintaining good real-time performance.
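The core combination, DTW with a Mahalanobis point distance, can be sketched as follows (the covariance-based speed-up and the template-matching stage are omitted; `inv_cov` would be estimated from training data):

```python
import numpy as np

def dtw_distance(a, b, dist):
    # Classic dynamic time warping: D[i, j] is the minimal cumulative
    # cost of aligning the first i samples of a with the first j of b.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def mahalanobis(inv_cov):
    # Mahalanobis distance between two samples: the inverse covariance
    # weights the features, so correlated axes are not double-counted
    # the way plain Euclidean distance would.
    def dist(x, y):
        d = x - y
        return float(np.sqrt(d @ inv_cov @ d))
    return dist
```

With `inv_cov` set to the identity matrix this reduces to Euclidean DTW, which makes the role of the covariance weighting easy to isolate in experiments.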

    Cosparsity analysis subspace pursuit algorithm
    ZHANG Zongnian, LIN Shengxin, MAO Huanzhang, HUANG Rentai
    2015, 35(5):  1471-1473.  DOI: 10.11772/j.issn.1001-9081.2015.05.1471

Since the subspace pursuit algorithm under the cosparsity analysis model in compressed sensing has the shortcomings of a low probability of completely successful reconstruction and poor reconstruction performance, a cosparsity analysis subspace pursuit algorithm was proposed. The proposed algorithm adopted a selected random tight frame as the analysis dictionary and redesigned the target optimization function; the method of selecting the cosparsity value and the iteration process were also improved. The simulation experiments show that the proposed algorithm has an obviously higher probability of completely successful reconstruction than Analysis Subspace Pursuit (ASP) and five other algorithms, and a higher comprehensive average Peak Signal-to-Noise Ratio (PSNR) for the reconstructed signal than ASP and three other algorithms, though a little lower than Gradient Analysis Pursuit (GAP) and two other algorithms when the original signal contains Gaussian noise. The new algorithm can be used in audio and image signal processing.

    Face recognition based on local binary pattern and deep learning
    ZHANG Wen, WANG Wenwei
    2015, 35(5):  1474-1478.  DOI: 10.11772/j.issn.1001-9081.2015.05.1474

In order to solve the problem that deep learning ignores the local structure features of faces when extracting face features in face recognition, a novel face recognition approach combining block Local Binary Pattern (LBP) and deep learning was presented. First, LBP features were extracted from different blocks of a face image and concatenated to serve as the texture description of the whole face. Then, the LBP feature was input to a Deep Belief Network (DBN), which was trained level by level to obtain classification capability. Finally, the trained DBN was used to recognize unseen face samples. The experimental results on the ORL, YALE and FERET face databases show that the proposed method achieves better recognition performance than Support Vector Machine (SVM) in small-sample face recognition.
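The blockwise LBP texture description in the first stage can be sketched as follows (basic 3x3 LBP with a 2x2 block grid; the block count and bit ordering are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def lbp_image(gray):
    # Basic 3x3 Local Binary Pattern: each interior pixel is encoded by
    # comparing its 8 neighbours with the centre (1 if >=, else 0) and
    # packing the bits, starting at the top-left, into one byte.
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:],
                  g[1:-1, 2:], g[2:, 2:], g[2:, 1:-1],
                  g[2:, :-2], g[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def blocked_lbp_histogram(gray, blocks=(2, 2)):
    # Split the LBP code image into blocks and concatenate the
    # per-block 256-bin histograms into one texture descriptor,
    # preserving the local structure that a global histogram loses.
    code = lbp_image(gray)
    feats = []
    for rows in np.array_split(code, blocks[0], axis=0):
        for block in np.array_split(rows, blocks[1], axis=1):
            feats.append(np.bincount(block.ravel(), minlength=256))
    return np.concatenate(feats)
```

The concatenated histogram vector is what would be fed to the DBN input layer in place of raw pixels.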

Dynamic evaluation for train line planning in high speed railway based on improved data envelopment analysis
    PU Song, LYU Hongxia
    2015, 35(5):  1479-1482.  DOI: 10.11772/j.issn.1001-9081.2015.05.1479

Concerning the problem that Data Envelopment Analysis (DEA) has shortcomings in reflecting differences in the weights of evaluation indexes and in sorting and adjusting efficient decision making units, an improved DEA was proposed. Firstly, the weights of the indexes were determined by the Analytic Hierarchy Process (AHP) and a preference cone model was built. Then all the decision making units were sorted in terms of cross efficiency, and some of them were adjusted according to the attendance ratio and the ideal decision making units. Finally, the line planning of the Beijing-Shanghai high speed railway was evaluated. The results show that four of the six lines are efficient and two are inefficient, and one of the efficient lines needs to be adjusted. The experimental results show that the proposed method can provide a theoretical basis for the adjustment of train line planning.

    Flame recognition algorithm based on Codebook in video
    SHAO Liangshan, GUO Yachan
    2015, 35(5):  1483-1487.  DOI: 10.11772/j.issn.1001-9081.2015.05.1483

In order to improve the accuracy of flame recognition in video, a flame recognition algorithm based on the Codebook background model was proposed. The algorithm combined static and dynamic features of flame, applied the Codebook background model in YUV color space to detect the flame region, and updated the background regularly. Firstly, the algorithm extracted frames from the video and used the linear relation between the R, G, B components as the color model to obtain candidate flame-colored areas. Then, taking advantage of the YUV color space, the images were converted from RGB to YUV format, and the dynamic flame-colored foreground was extracted by background learning and background subtraction with the Codebook background model. Finally, a Back Propagation (BP) neural network was trained with feature vectors such as flame area change rate, flame area overlap rate and flame centroid displacement, and the trained network was used to judge flame in video. The recognition accuracy of the proposed algorithm in complex video scenes was above 96% for videos with fixed camera position and direction. The experimental results show that, compared with three state-of-the-art detection algorithms, the proposed algorithm has higher accuracy and a lower misrecognition rate.

    Research and simulation of radar side-lobe suppression based on Kalman-minimum mean-square error
    ZHANG Zhaoxia, WANG Huihui, FU Zheng, YANG Lingzhen, WANG Juanfen, LIU Xianglian
    2015, 35(5):  1488-1491.  DOI: 10.11772/j.issn.1001-9081.2015.05.1488

Concerning the problem that a weak target might be covered by the range side-lobes of a strong one and that the range side-lobes could only be suppressed to a certain level, an improved Kalman-Minimum Mean-Square Error (K-MMSE) algorithm was proposed. The algorithm combines the Kalman filter with the Minimum Mean-Square Error (MMSE) criterion, and is an effective method for suppressing the range side-lobes of adaptive pulse compression. In the simulation, the proposed algorithm was compared with the traditional matched filter and with improved matched filters such as MMSE in single-target and multi-target environments; the side-lobe levels, the Peak-SideLobe Ratio (PSLR) and the Integrated SideLobe Ratio (ISLR) of the Point Spread Function (PSF) all decreased obviously in comparison with the previous two methods. The simulation results show that the method can suppress range side-lobes and detect weak targets well under both single-target and multi-target conditions.

    Block sparse Bayesian learning algorithm for reconstruction and recognition of gait pattern from wireless body area networks
    WU Jianning, XU Haidong
    2015, 35(5):  1492-1498.  DOI: 10.11772/j.issn.1001-9081.2015.05.1492

In order to achieve optimal performance in gait pattern recognition and in the reconstruction of non-sparse acceleration data for Wireless Body Area Network (WBAN)-based telemonitoring, a novel approach applying the Block Sparse Bayesian Learning (BSBL) algorithm to improve the reconstruction of non-sparse accelerometer data was proposed, which contributes to superior gait pattern recognition. The basic idea is that, within the Compressed Sensing (CS) framework of WBAN-based telemonitoring, the original acceleration data acquired at a sensor node in the WBAN are compressed only by a sparse measurement matrix (a simple linear projection), and the compressed data are transmitted to the remote terminal, where the BSBL algorithm, treating the signal as block-structured and exploiting intra-block correlation, recovers the non-sparse acceleration data for further gait pattern recognition with high accuracy. Acceleration data from the open USC-HAD database, covering walking, running, jumping, going upstairs and going downstairs, were employed to test the effectiveness of the proposed method. The experimental results show that the reconstruction performance of the BSBL algorithm on acceleration data significantly outperforms some conventional CS algorithms designed for sparse data, and a best accuracy of 98% can be obtained by a BSBL-based Support Vector Machine (SVM) classifier for gait pattern recognition. These results demonstrate that the proposed method not only significantly improves the reconstruction of non-sparse acceleration data for accurate gait pattern recognition, but also facilitates the design of low-cost sensor node hardware with lower energy consumption, making it a promising approach for energy-efficient WBAN-based telemonitoring of human gait.
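The sensor-node side of this pipeline, compression by a sparse measurement matrix, amounts to a single linear projection; a minimal sketch (the block sizes `N`, `M` and the few-nonzeros-per-column pattern are illustrative assumptions, and the BSBL recovery at the terminal is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an N-sample acceleration block compressed to M values.
N, M = 256, 100

# Sparse binary measurement matrix: each column has only a few non-zero
# entries, so computing y = Phi @ x costs just a few additions per
# sample, which suits a low-power sensor node.
Phi = np.zeros((M, N))
for col in range(N):
    Phi[rng.choice(M, size=4, replace=False), col] = 1.0

x = rng.standard_normal(N)   # one block of raw acceleration samples
y = Phi @ x                  # compressed block transmitted over the WBAN
```

All the computational burden of reconstruction is thus shifted to the remote terminal, which is the energy argument made in the abstract.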

    Reasoning and forecasting of regional fire data based on adaptive fuzzy generalized regression neural network
    JIN Shan, JIN Zhigang
    2015, 35(5):  1499-1504.  DOI: 10.11772/j.issn.1001-9081.2015.05.1499

When BP neural networks, classical probability theory and their derivative algorithms are used for fire loss prediction, the system structure is complex, the detection data are unstable, and the result easily falls into a local minimum. To resolve these problems, a method for reasoning and forecasting regional fire data based on an adaptive fuzzy Generalized Regression Neural Network (GRNN) was proposed. An improved fuzzy C-means clustering algorithm was used to correct the weights of the initial data in the network input layer, which reduced the influence of noise and isolated points and improved the approximation accuracy of the predicted values. An adaptive function optimization of the GRNN algorithm was introduced to adjust the expansion speed of the iterative convergence, change the step size, and find the global optimal solution, which resolved the premature convergence problem and improved the search efficiency. With identified fire loss data fed into the algorithm, the experimental results show that the method can overcome the instability of the detection data, and has good nonlinear approximation ability and generalization capability.
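A plain GRNN prediction, before the fuzzy weighting and adaptive spread optimization described above, is simply a Gaussian-kernel weighted average of the training targets; a minimal sketch (a fixed `sigma` stands in for the adaptively tuned spread parameter):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # Generalized Regression Neural Network: each prediction is a
    # Gaussian-kernel weighted average of the training targets, where
    # the spread `sigma` controls how local the averaging is.
    X_train = np.asarray(X_train, dtype=float)
    X_query = np.asarray(X_query, dtype=float)
    preds = []
    for q in X_query:
        d2 = np.sum((X_train - q) ** 2, axis=1)  # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer outputs
        preds.append(np.dot(w, y_train) / w.sum())
    return np.array(preds)
```

Because the model has no iterative weight training, the only quantity left to optimize is `sigma`, which is exactly where the adaptive optimization in the proposed method intervenes.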
