
Table of Contents

    10 January 2016, Volume 36 Issue 1
    Implementation of distributed index in cluster environment
    WENG Haixing, GONG Xueqing, ZHU Yanchao, HU Hualiang
    2016, 36(1):  1-7.  DOI: 10.11772/j.issn.1001-9081.2016.01.0001
    To address the performance issues caused by using non-primary keys to access data on a distributed storage system, the key technologies for implementing indexing on such a system were discussed. Based on a thorough analysis of the features of new distributed storage systems, the key points in the design and implementation of a distributed index were presented. By combining the characteristics of distributed storage systems with related indexing technologies, index organization and maintenance, data concurrency and other issues were described. A distributed indexing mechanism was then designed and implemented on the open-source version of OceanBase, a distributed database system, and performance tests were run with the benchmarking tool YCSB. The experimental results show that although the distributed auxiliary index degrades system performance, the overhead can be kept within 5% across different data scales by taking system and storage characteristics into account. In addition, index performance can be improved by up to 100% with a redundant-column scheme.
    Probery: probability-based data query system for big data
    WU Jinbo, SONG Jie, ZHANG Li, BAO Yubin
    2016, 36(1):  8-12.  DOI: 10.11772/j.issn.1001-9081.2016.01.0008
    Since the time consumption of full-result queries over big data is excessively high, a system named Probery was proposed. Different from traditional approximate query, Probery adopted an approximate full-result query method, an original approach to querying data, in which the approximation refers to the probability that the query results contain all data satisfying the query conditions. Firstly, Probery divided the data stored in the system into multiple data segments. Secondly, Probery placed the data in a Distributed File System (DFS) according to a probability placement model. Finally, given a query condition, Probery adopted a heuristic method to query data probabilistically. The query performance was compared with that of other dominant non-relational data management systems. With an 8% loss in result-set completeness, the query time of Probery was 51% less than that of HBase, 23% less than that of Cassandra, 12% less than that of MongoDB, and 3% less than that of Hive. The experimental results show that Probery improves query performance when a moderate loss of result completeness is acceptable. In addition, Probery has good generality, adaptability and extensibility for big data query.
    Supersonic-based parallel group-by aggregation
    ZHANG Bing, SUN Hui, FAN Xu, LI Cuiping, CHEN Hong, WANG Wen
    2016, 36(1):  13-20.  DOI: 10.11772/j.issn.1001-9081.2016.01.0013
    To solve the time-consuming problem of group-by aggregation in data-intensive computation, a cache-friendly group-by aggregation method was proposed. The operation was optimized in two aspects: first, a cache-friendly group-by aggregation algorithm was designed on Supersonic, an open-source column-oriented query execution engine, to take full advantage of column storage for in-memory computation; second, the algorithm was rewritten with multiple threads to speed up the query. Four parallel aggregation algorithms were put forward, named Shared-Nothing Parallel Group-by Aggregation (NSHPGA), Table-Lock Shared-Hash Parallel Group-by Aggregation (TLSHPGA), Bucket-Lock Shared-Hash Parallel Group-by Aggregation (BLSHPGA) and Node-Lock Shared-Hash Parallel Group-by Aggregation (NLSHPGA). Through a series of comparison experiments on different group power sets and different numbers of worker threads, the NLSHPGA algorithm was shown to achieve the best performance in both speed-up ratio and concurrency, reaching 10x speedups on some queries. Besides, considering cache misses and memory utilization, the results show that NSHPGA is suitable for smaller group power sets (8 in the experiments), while NLSHPGA performs better than NSHPGA as the group power set grows.
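    A minimal sketch of the bucket-lock shared-hash idea (the BLSHPGA/NLSHPGA family), assuming a toy (key, value) row layout: all worker threads share one hash table, but locking is per bucket rather than per table, so threads touching different buckets do not serialize. Bucket count, thread count and data are illustrative, not the paper's setup.

    ```python
    import threading
    from collections import defaultdict

    N_BUCKETS = 16
    buckets = [defaultdict(int) for _ in range(N_BUCKETS)]
    locks = [threading.Lock() for _ in range(N_BUCKETS)]

    def aggregate(rows):
        """Worker: sum `value` per `key`, locking only the key's bucket."""
        for key, value in rows:
            b = hash(key) % N_BUCKETS
            with locks[b]:
                buckets[b][key] += value

    data = [(k % 100, 1) for k in range(100_000)]
    chunk = len(data) // 4
    threads = [threading.Thread(target=aggregate,
                                args=(data[i * chunk:(i + 1) * chunk],))
               for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    result = {k: v for b in buckets for k, v in b.items()}  # read-only merge
    ```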
    Relational algebraic operation algorithm on compressed data
    DING Xinzhe, ZHANG Zhaogong, LI Jianzhong, TAN Long, LIU Yong
    2016, 36(1):  21-26.  DOI: 10.11772/j.issn.1001-9081.2016.01.0021
    In massive data management, some operations can be performed on compressed data without decompressing it first. Under the condition of normally distributed data and according to the features of columnar data storage, a new column-oriented compression algorithm called CCA (Column Compression Algorithm) was proposed. Firstly, the lengths of data items were classified; secondly, a sampling method was used to find frequently repeated prefixes; finally, dictionary coding was used for compression, with the Column Index (CI) and Column Reality (CR) serving as the compressed data structures to reduce the storage requirement of massive data. In this way, basic relational algebraic operations such as select, project and join were supported directly and efficiently on compressed data. A prototype database system based on CCA, called D-DBMS (Ding-Database Management System), was implemented. Theoretical analysis and experiments on 1 TB of data show that the proposed algorithm can significantly improve storage efficiency and data-manipulation performance. Compared with the BAP (Bit Address Physical) and TIDC (TupleID Center) methods, the compression rate of CCA was improved by 51% and 14%, and its running speed by 47% and 42%, respectively.
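    A minimal sketch of the dictionary-coded column layout the abstract outlines, assuming illustrative names: frequent prefixes found by sampling become dictionary entries, a CI-like array holds dictionary codes, a CR-like array holds the residual suffixes, and an equality select is evaluated directly on the compressed form. The prefix length, sampling rate and dictionary size are assumptions.

    ```python
    import random
    from collections import Counter

    def build_prefix_dict(column, prefix_len=4, sample_rate=0.1, top_k=64):
        sample = random.sample(column, max(1, int(len(column) * sample_rate)))
        common = Counter(v[:prefix_len] for v in sample).most_common(top_k)
        return {prefix: i for i, (prefix, _) in enumerate(common)}

    def compress(column, pdict, prefix_len=4):
        ci, cr = [], []                   # stand-ins for Column Index / Reality
        for v in column:
            code = pdict.get(v[:prefix_len], -1)  # -1: prefix not in dictionary
            ci.append(code)
            cr.append(v[prefix_len:] if code >= 0 else v)
        return ci, cr

    def select_eq(ci, cr, pdict, target, prefix_len=4):
        """Equality selection without decompressing the column."""
        code = pdict.get(target[:prefix_len], -1)
        rest = target[prefix_len:] if code >= 0 else target
        return [i for i, (c, r) in enumerate(zip(ci, cr))
                if c == code and r == rest]

    col = ["user_001", "user_002", "admin_01", "user_003"]
    d = build_prefix_dict(col)
    print(select_eq(*compress(col, d), d, "user_002"))   # -> [1]
    ```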
    Partition-based incremental processing method for string similarity join
    YAN Cairong, ZHU Bin, WANG Jian, HUANG Yongfeng
    2016, 36(1):  27-32.  DOI: 10.11772/j.issn.1001-9081.2016.01.0027
    String similarity join is an essential operation in data quality management and a key step in extracting the value of data. Since existing methods cannot meet the demands of incremental processing in the era of big data, an incremental string similarity join method for streaming data, called Inc-Join, was proposed, and its string index was optimized. Firstly, based on the Pass-Join algorithm, strings were divided into disjoint substrings by a partition technique; secondly, an inverted index of the strings was created and maintained as a state; finally, when new data arrived, similarity computation was performed against the state, which was updated after each join operation. The experimental results show that Inc-Join can reduce the number of duplicate matchings between short or long strings to √n (where n is the number of matchings under the batch processing model) without affecting join accuracy. On three different datasets, the elapsed time of batch string similarity join was 1 to 4.7 times that of Inc-Join and tended to increase sharply. The minimum elapsed time of the optimized Inc-Join was only 3/4 of that of the original Inc-Join, and as the number of strings grows, the elapsed time of the optimized Inc-Join accounts for a smaller and smaller proportion of the original. Since the state no longer needs to be saved, the optimized Inc-Join further reduces the time and space costs of Inc-Join.
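    A minimal sketch, under simplified assumptions, of the partition-based state: each indexed string is cut into tau+1 disjoint segments (Pass-Join's pigeonhole idea: two strings within edit distance tau must share a segment), the segments feed an inverted index kept as the state, and each arriving string first probes and then updates that state. Real Pass-Join probes substrings under position constraints; this sketch probes equal segments only, and verifies with a plain edit-distance check.

    ```python
    from collections import defaultdict

    TAU = 1  # edit-distance threshold

    def segments(s, k=TAU + 1):
        step = len(s) // k or 1
        cuts = [i * step for i in range(k)] + [len(s)]
        return [s[cuts[i]:cuts[i + 1]] for i in range(k)]

    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[-1] + 1,
                               prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    state = defaultdict(list)  # segment -> ids of indexed strings containing it
    strings = {}

    def inc_join(new_id, s):
        cands = {sid for seg in segments(s) if seg for sid in state[seg]}
        matches = [sid for sid in cands if edit_distance(s, strings[sid]) <= TAU]
        strings[new_id] = s                      # update the state afterwards
        for seg in segments(s):
            if seg:
                state[seg].append(new_id)
        return matches

    print(inc_join(0, "hello"))   # []
    print(inc_join(1, "hallo"))   # [0]: shares segment "llo", distance 1
    ```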
    Mining mobility patterns based on deep representation model
    CHEN Meng, YU Xiaohui, LIU Yang
    2016, 36(1):  33-38.  DOI: 10.11772/j.issn.1001-9081.2016.01.0033
    Since the order of locations and their timing play a pivotal role in understanding user mobility patterns in spatio-temporal trajectories, a novel deep representation model for trajectories was proposed. The model considered two characteristics of spatio-temporal trajectories: 1) different orders of locations indicate different user mobility patterns; 2) trajectories tend to be cyclical and change over time. First, two time-ordered locations were combined into a location sequence; second, the sequence and its corresponding time bin were combined into a temporal location sequence, the basic unit for describing trajectory features; finally, the deep representation model was used to train a feature vector for each sequence. To verify the effectiveness of the model, the temporal location sequence vectors were applied to user mobility pattern mining, and empirical studies were performed on a real check-in dataset from Gowalla. The experimental results confirm that the proposed method can discover explicit movement patterns (e.g., working, shopping) that Word2Vec has difficulty discovering.
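    A minimal sketch of the basic unit the abstract defines, with an assumed 6-hour time binning and token syntax: two time-ordered locations form a location sequence, the move's time bin is attached, and the resulting temporal location sequence is the token a representation model would be trained on.

    ```python
    def time_bin(hour, width=6):
        return hour // width                   # four daily bins, an assumption

    def temporal_tokens(checkins):
        """checkins: list of (location_id, hour) in time order."""
        return [f"{l1}>{l2}@{time_bin(h2)}"
                for (l1, h1), (l2, h2) in zip(checkins, checkins[1:])]

    trace = [("home", 8), ("office", 9), ("cafe", 12), ("office", 13),
             ("home", 19)]
    print(temporal_tokens(trace))
    # ['home>office@1', 'office>cafe@2', 'cafe>office@2', 'office>home@3']
    ```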
    Moving object location prediction algorithm based on Markov model and trajectory similarity
    SONG Lujie, MENG Fanrong, YUAN Guan
    2016, 36(1):  39-43.  DOI: 10.11772/j.issn.1001-9081.2016.01.0039
    To address the low prediction accuracy of low-order Markov models and the high sparsity rate of high-order Markov models, a moving object location prediction algorithm based on the Markov Model and Trajectory Similarity (MMTS) was proposed. The moving object's historical trajectories were modeled with a Markov approach, and trajectory similarity was used as an important factor in location prediction: taking the result set predicted by the Markov model as the candidate set, the trajectory similarity factor was combined with it to obtain the final prediction. The experimental results show that, compared with the k-order Markov model, the predictive capability of MMTS is not greatly affected by changes in training sample size or the value of k, and its average accuracy is improved by more than 8% while significantly reducing the sparsity rate of the k-order Markov model. The proposed method thus not only alleviates the high sparsity rate and low prediction accuracy of the k-order Markov model, but also improves prediction stability.
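    A minimal sketch of the MMTS combination step, under assumptions: a first-order Markov model over discretized locations proposes the candidate set, and a trajectory-similarity score re-ranks it. The similarity measure (a longest-common-subsequence-style ratio) and the equal weighting are illustrative, not the paper's exact formulation.

    ```python
    from collections import defaultdict
    from difflib import SequenceMatcher

    trans = defaultdict(lambda: defaultdict(int))   # location -> next -> count

    def train(trajectories):
        for traj in trajectories:
            for a, b in zip(traj, traj[1:]):
                trans[a][b] += 1

    def predict(history, trajectories, alpha=0.5):
        cur = history[-1]
        total = sum(trans[cur].values()) or 1
        scores = {}
        for cand, cnt in trans[cur].items():        # Markov candidate set
            sim = max((SequenceMatcher(None, history + [cand], t).ratio()
                       for t in trajectories), default=0.0)
            scores[cand] = alpha * cnt / total + (1 - alpha) * sim
        return max(scores, key=scores.get) if scores else None

    trajs = [list("ABCD"), list("ABCE"), list("ABCD")]
    train(trajs)
    print(predict(list("ABC"), trajs))              # 'D' wins on both factors
    ```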
    Population flow analysis based on cellphone trajectory data
    KONG Yangxin, JIN Cheqing, WANG Xiaoling
    2016, 36(1):  44-51.  DOI: 10.11772/j.issn.1001-9081.2016.01.0044
    With the development of communication technology and the popularization of smartphones, massive cellphone trajectory data gathered by base stations plays an important role in applications such as urban planning and population flow analysis. A Movement Features-based Judging Urban Population Flow (MF-JUPF) algorithm using cellphone trajectory data was proposed to address the population flow problem. First, users' activity trajectories were mined from cellphone trajectory data after preprocessing. Second, movement features were extracted according to the patterns of entering and leaving a city, and the feature parameters were trained with various classification models on real datasets. Finally, the trained classification models were used to judge whether a user entered or left the city. To enhance efficiency and scalability, a MapReduce-based algorithm was developed to analyze massive cellphone trajectory datasets. Experiments on real datasets show that the precision and recall of the proposed method for judging entering and leaving behaviors both exceed 80%. Compared with the Signal Disappears-based Judging Urban Population Flow (SD-JUPF) algorithm, the precision and recall of entering-city judgment increased by 19.0% and 13.9%, and those of leaving-city judgment increased by 17.3% and 6.1%. Compared with the non-filtering algorithm, the time cost of the improved filtering algorithm was reduced by 36.1% owing to the traits of these data. Theoretical analysis and experimental results illustrate the high accuracy and flexibility of MF-JUPF, which has practical value in urban planning and other fields.
    Multi-label classification algorithm based on Bayesian model
    ZHANG Luoyang, MAO Jiali, LIU Bin, WU Tao
    2016, 36(1):  52-56.  DOI: 10.11772/j.issn.1001-9081.2016.01.0052
    Since Binary Relevance (BR) ignores the relations between labels, a multi-label classifier built on it easily outputs labels that do not exist in, or rarely emerge from, the training data. The Multi-Label classification algorithm based on the Bayesian Model (MLBM) and the Markov Multi-Label classification algorithm based on the Bayesian Model (MMLBM) were proposed. Firstly, to analyze the shortcomings of the BR algorithm, a simulation model was established; considering that the value of a label should be decided by both attribute confidence and label confidence, MLBM was proposed, where the attribute confidence was calculated by traditional classification and the label confidence was obtained directly from the training data. Secondly, because MLBM had to consider all previously classified labels when computing label confidence, unrelated or weakly related labels could degrade classifier performance; to overcome this weakness, MMLBM was proposed, which uses a Markov model to simplify the calculation of label confidence. Theoretical analysis and simulation results demonstrate that, compared with the BR algorithm, the average classification accuracy of MMLBM increases by 4.8% on the emotions dataset, 9.8% on the yeast dataset and 7.3% on the flags dataset. The experimental results show that MMLBM can effectively improve classification accuracy when the label cardinality of the training data is large.
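    A minimal sketch, under loudly assumed forms, of an MMLBM-style decision rule: a label's score multiplies an attribute confidence (what a per-label base classifier would output) by a label confidence estimated from label co-occurrence in the training data, restricted Markov-style to already-decided related labels. The product form, the averaging and the 0.5 threshold are all illustrative.

    ```python
    def label_confidence(label, decided, cooccur, prior):
        """Confidence of `label` given already-decided related labels."""
        probs = [cooccur[(label, d)] for d in decided if (label, d) in cooccur]
        return sum(probs) / len(probs) if probs else prior[label]

    def predict(attr_conf, prior, cooccur, threshold=0.5):
        decided = []
        for label in sorted(attr_conf, key=attr_conf.get, reverse=True):
            score = attr_conf[label] * label_confidence(label, decided,
                                                        cooccur, prior)
            if score >= threshold:
                decided.append(label)
        return decided

    attr_conf = {"sunny": 0.9, "beach": 0.7, "snow": 0.6}
    prior = {"sunny": 0.6, "beach": 0.4, "snow": 0.3}
    cooccur = {("beach", "sunny"): 0.8, ("snow", "sunny"): 0.1}
    print(predict(attr_conf, prior, cooccur))   # ['sunny', 'beach']
    ```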
    Key frame preprocessing of H.264 real-time streaming based on timing control algorithm for analyzing unit of group of pictures
    DU Dan, FENG Lijun, LIU Yintian
    2016, 36(1):  57-60.  DOI: 10.11772/j.issn.1001-9081.2016.01.0057
    Network-based audio/video calls and video conferencing may lose packets under limited network bandwidth, which degrades streaming quality and reduces the effectiveness of video calls and conferences. A real-time streaming quality control algorithm was proposed. The algorithm adopted a timing control approach to detect and remove bad key frames, thereby reducing the occurrence of blurred screens. The proposed algorithm efficiently reduces time and space costs, which in turn enhances streaming fluency. Experiments were conducted on original frame playback, post-processing playback, and key-frame preprocessing playback. The comparisons showed that the proposed algorithm improves playback quality and fluency, while the computational complexity of playback processing decreases by more than 40%. The results show that the proposed algorithm is highly effective in improving live-streaming quality and reducing the occurrence of blurred screens.
    Improved relay forwarding scheme based on network coding
    GUO Qiang, QIN Yue
    2016, 36(1):  61-65.  DOI: 10.11772/j.issn.1001-9081.2016.01.0061
    To solve the problems of traditional relay forwarding protocols in two-way relay systems, namely amplifying noise along with the signal, possibly forwarding incorrectly decoded signals, and low reliability of decisions at the relay node, two Mutual Information Forwarding (MIF) schemes based on network coding were proposed. Firstly, the relay node forwarded the hard decisions on the signals from the two source nodes together with the reliability of those decisions. Then the receiving terminal detected and decided on the combined result, based on the forwarded signal from the relay node and its reliability as well as the signal from the other source node. Expressions for the received Signal-to-Noise Ratio (SNR) were derived, numerical simulations were performed over the Additive White Gaussian Noise (AWGN) channel, and Monte Carlo simulations were conducted over both AWGN and Rayleigh channels. The results show that over the two channels the proposed schemes achieve 1 dB and 1-2 dB gains in Bit Error Ratio (BER) performance over the Estimate-and-Forward (EF) scheme, demonstrating that the MIF-based network coding scheme significantly improves BER performance in relay forwarding for two-way relay networks.
    Hierarchical routing protocol based on non-uniform clustering for wireless sensor network
    HUANG Tinghui, YI Kai, CUI Gengshen, WANG Yuliang
    2016, 36(1):  66-71.  DOI: 10.11772/j.issn.1001-9081.2016.01.0066
    To address the excessive energy consumption caused by the unreasonable distribution of cluster head nodes in large-scale Wireless Sensor Network (WSN), a Hierarchical Routing Protocol based on Non-uniform Clustering (HRPNC) was designed. HRPNC combined the clustering idea of Low Energy Adaptive Clustering Hierarchy (LEACH) and, on the basis of stratification, improved the competitive-radius algorithm of the Energy-Balanced Unequal Clustering routing protocol for WSN (DEBUC). By taking advantage of the hierarchical and competition mechanisms, the distribution of cluster heads became more reasonable and the energy consumption of such nodes was balanced effectively. In Matlab simulations, the life cycle of HRPNC exceeded those of LEACH and DEBUC by about 500 and 300 rounds respectively. The average residual energy of nodes under HRPNC was higher than under LEACH and DEBUC, and energy consumption remained lower and more stable during the survival phase. Moreover, the aggregate number of data packets of HRPNC was 300% and 130% higher than that of LEACH and DEBUC respectively, and under different simulations the packet loss rate of HRPNC was lower than that of both. The experimental results show that HRPNC can not only extend the network lifetime, increase network stability and the amount of data transmission, but also effectively reduce the data-transmission loss rate.
    Cloud resource sharing design supporting multi-attribute range query based on SkipNet
    SUN Lihua, CHEN Shipin
    2016, 36(1):  72-76.  DOI: 10.11772/j.issn.1001-9081.2016.01.0072
    In the cloud resource sharing service model, to realize multi-attribute range queries over cloud resources, an improved E-SkipNet network was proposed. Firstly, based on the traditional Distributed Hash Table (DHT) network SkipNet, data attributes were incorporated into the NameID setting and physical nodes were added to a single attribute domain so that E-SkipNet could support multi-attribute range queries. Secondly, in the improved E-SkipNet, each physical node was simultaneously mapped to multiple logical nodes joining multiple attribute domains, and resources were published to different logical nodes according to different attributes. Finally, resources were mapped to logical nodes with a uniform locality-preserving hash function, which was the key to supporting efficient range queries. The simulation results show that the routing efficiency of the improved E-SkipNet was increased by 18.09% and 20.47% compared with E-SkipNet and the Multi-Attribute Addressable Network (MAAN) respectively. The improved E-SkipNet can support more efficient multi-attribute range queries over cloud resources and achieve load balancing in heterogeneous environments.
    Routing algorithm on dynamic adjustment of forward angle based on residual energy
    ZHANG Maoxing, WANG Haifeng, XIANG Fenghong, MAO Jianlin, ZHANG Chuanlong
    2016, 36(1):  77-80.  DOI: 10.11772/j.issn.1001-9081.2016.01.0077
    The routing scheme is one of the key factors influencing the lifetime of Wireless Sensor Network (WSN): heavy communication can quickly exhaust the energy of key nodes and paralyze the whole network. To solve this problem, a new ant colony routing algorithm with Dynamic Adjustment of the Forward Angle based on Residual Energy (DAFARE) was proposed. Firstly, nodes chose the next-hop node according to residual energy and distance within an initial forward angle; secondly, the forward angle was adjusted dynamically in view of the residual energy of the nodes within its scope; finally, the early death of key nodes was thereby avoided. Simulations suggest that DAFARE improves the effective lifetime by approximately 50% compared with the ant colony optimization algorithm based on the Function of Multi-object Evaluation and Positive-Negative Feedback (FMEPNF). The experimental results show that DAFARE can effectively balance network energy consumption, prolong the lifetime, and guarantee the coverage of the WSN.
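    A minimal sketch of DAFARE's two rules as the abstract states them, with assumed weights and thresholds: pick the next hop inside a forward angle by residual energy and distance to the destination, and widen the angle when the nodes inside it are energy-poor.

    ```python
    import math

    def in_cone(src, dst, node, half_angle):
        bearing = math.atan2(dst[1] - src[1], dst[0] - src[0])
        theta = math.atan2(node[1] - src[1], node[0] - src[0])
        diff = abs((theta - bearing + math.pi) % (2 * math.pi) - math.pi)
        return diff <= half_angle

    def next_hop(src, dst, nodes, energy, half_angle=math.pi / 6, w=0.7):
        while half_angle <= math.pi:
            cands = [n for n in nodes if in_cone(src, dst, n, half_angle)]
            strong = [n for n in cands if energy[n] > 0.2]  # assumed threshold
            if strong:                   # favor energy, penalize distance
                return max(strong, key=lambda n: w * energy[n]
                           - (1 - w) * math.dist(n, dst))
            half_angle *= 1.5            # widen the forward angle and retry
        return None

    nodes = [(1, 0.2), (1, -0.3), (0.5, 2)]
    energy = {nodes[0]: 0.9, nodes[1]: 0.15, nodes[2]: 0.8}
    print(next_hop((0, 0), (3, 0), nodes, energy))   # -> (1, 0.2)
    ```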
    ZigBee hybrid routing algorithm based on uneven clustering mechanism
    BAI Leqiang, WANG Yutao
    2016, 36(1):  81-86.  DOI: 10.11772/j.issn.1001-9081.2016.01.0081
    The existing ZigBee network routing algorithms suffer from unbalanced energy consumption. To solve this problem, a ZigBee hybrid routing algorithm based on an uneven clustering mechanism was proposed on the basis of the tree routing algorithm and the Ad-Hoc On-demand Distance Vector junior (AODVjr) algorithm. The algorithm divided the network into several uneven logical clusters; clusters close to the coordinator were smaller in scale, which reduced their forwarding load and balanced energy consumption. Based on the clustering, transmission was divided into intra-cluster and inter-cluster transmission. Intra-cluster transmission used the neighbor-table-based tree routing algorithm. Inter-cluster transmission used the AODVjr algorithm, since the neighbor-table-based tree routing algorithm was invalid there; AODVjr could find shorter paths between two cluster head nodes, and meanwhile only cluster head nodes and gateway nodes were allowed to broadcast Route Request (RREQ) packets, which helped reduce redundant RREQ packets. The simulation results show that the proposed algorithm can effectively delay the appearance of dead nodes and prolong the network lifetime, thus improving network performance.
    Wide-band spectrum sensing algorithm using sparsity estimation based on Gerschgorin theorem
    ZHAO Zhijin, CHEN Jinglai
    2016, 36(1):  87-90.  DOI: 10.11772/j.issn.1001-9081.2016.01.0087
    To solve the problems of under-estimating sparsity at low Signal-to-Noise Ratio (SNR) and over-estimating it at high SNR, a wide-band spectrum sensing algorithm using sparsity estimation based on the Gerschgorin theorem was proposed. Firstly, the Gerschgorin theorem was used to separate the signal disks from the noise disks in order to estimate the sparsity. Then, the spectrum support set was obtained with the Orthogonal Matching Pursuit (OMP) algorithm, and the wide-band spectrum sensing was accomplished. The simulation results show that the proposed algorithm, the AIC-OMP (Akaike Information Criterion-Orthogonal Matching Pursuit) algorithm and the MDL-OMP (Minimum Description Length-Orthogonal Matching Pursuit) algorithm need SNRs of 4.6 dB, 8.5 dB and 9.7 dB respectively for their detection probability to reach 95%; the false alarm probability of the proposed algorithm tends to 0 when the SNR is higher than 13 dB, far lower than that of the BPD-OMP (Bayesian Predictive Density-Orthogonal Matching Pursuit) and GDRI-OMP (Gerschgorin Disk Radii Iteration-Orthogonal Matching Pursuit) algorithms. The proposed algorithm therefore balances sparsity estimation at both low and high SNR, and its spectrum sensing performance is better than that of the AIC-OMP, MDL-OMP, BPD-OMP and GDRI-OMP algorithms.
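    A minimal sketch of Gerschgorin-disk sparsity estimation, under assumptions: eigenvectors of the leading (M-1)x(M-1) block of the sample covariance rotate the matrix so that noise rows get near-zero Gerschgorin radii while signal rows keep large ones; counting radii above a fraction of their mean estimates the number of occupied channels. The 0.5 decision factor is an illustrative choice, not the paper's criterion.

    ```python
    import numpy as np

    def gerschgorin_sparsity(X, factor=0.5):
        """X: M x N array, N snapshots over M channels."""
        M, N = X.shape
        R = X @ X.conj().T / N                     # sample covariance
        _, U1 = np.linalg.eigh(R[:M - 1, :M - 1])  # eigenvectors, leading block
        T = np.eye(M, dtype=complex)
        T[:M - 1, :M - 1] = U1
        G = T.conj().T @ R @ T
        radii = np.abs(G[:M - 1, M - 1])           # rotated Gerschgorin radii
        return int(np.sum(radii > factor * radii.mean()))

    rng = np.random.default_rng(1)
    s = rng.normal(size=(2, 500))                          # 2 active sources
    A = rng.normal(size=(8, 2))
    X = A @ s + 0.1 * rng.normal(size=(8, 500))            # 8 channels + noise
    print(gerschgorin_sparsity(X))                         # typically 2 here
    ```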
    Design and simulation of satellite channel three-state Markov model
    GUO Yecai, ZHAO Weijuan, ZHANG Xiuzai
    2016, 36(1):  91-95.  DOI: 10.11772/j.issn.1001-9081.2016.01.0091
    To study the influence of the atmospheric space environment on satellite communications, weather conditions were divided into clear sky, cloudy and foggy weather, and rainy and snowy weather, and the satellite channel propagation characteristics under each condition were studied, establishing the Rice, C.Loo and Suzuki models respectively. A three-state Markov process was introduced into the satellite channel model to describe the channel state transitions caused by weather changes, and the statistical characteristics of the resulting three-state Markov model were analyzed. Finally, the model was designed and simulated based on the sum-of-sinusoids method and measured meteorological satellite cloud image data. The simulation results show that the proposed model can be used to simulate the propagation characteristics of actual satellite channels.
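    A minimal sketch of the three-state Markov switching between weather-conditioned fading draws. The transition matrix and the per-state distributions (Rice for clear sky, a lognormal-LOS-plus-scatter stand-in for C.Loo, a Rayleigh-times-lognormal stand-in for Suzuki) are illustrative, not the paper's fitted parameters or its sum-of-sinusoids generator.

    ```python
    import numpy as np

    P = np.array([[0.90, 0.08, 0.02],  # clear -> clear / cloud-fog / rain-snow
                  [0.10, 0.85, 0.05],
                  [0.05, 0.15, 0.80]])
    rng = np.random.default_rng(0)

    def envelope(state, n):
        if state == 0:                 # Rice: strong line-of-sight, K = 10
            k = 10.0
            sigma = np.sqrt(1 / (2 * (k + 1)))
            scatter = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
            return np.abs(np.sqrt(k / (k + 1)) + scatter)
        if state == 1:                 # C.Loo-like: lognormal LOS + scatter
            los = rng.lognormal(-0.1, 0.3, n)
            return np.abs(los + rng.normal(0, 0.2, n)
                          + 1j * rng.normal(0, 0.2, n))
        return rng.rayleigh(0.5, n) * rng.lognormal(-0.5, 0.5, n)  # Suzuki-like

    states = [0]
    for _ in range(999):               # weather-driven state transitions
        states.append(rng.choice(3, p=P[states[-1]]))
    samples = np.concatenate([envelope(s, 10) for s in states])
    ```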
    Design of remote data acquisition driver with king view supported by middleware
    LIU Xueduo, JIAO Donglai, JI Feng, YANG Hao
    2016, 36(1):  96-100.  DOI: 10.11772/j.issn.1001-9081.2016.01.0096
    To solve the problems of lower-computer compatibility, data sharing and simplicity of presentation at the client in configuration networks, a data acquisition model for KingView supported by middleware was proposed. Based on configuration software featuring strong network connectivity and secondary development ability, and taking the general KingView software as an example, the model was analyzed in depth with configuration software theory: the communication middleware was combined with the communication protocol, the device and variable information of KingView was defined, the KingView display interface was drawn, and the availability of the model was verified. The verification results show that, compared with the traditional configuration network model, the proposed model improves the expandability and compatibility of configuration-network data acquisition, supports data sharing and varied presentation at the client, and further accelerates the fusion of configuration technology and Internet of Things (IoT) technology.
    Estimation algorithm of radio frequency identification tags based on non-empty slot number
    LONG Zhaohua, GONG Tengfei
    2016, 36(1):  101-106.  DOI: 10.11772/j.issn.1001-9081.2016.01.0101
    To address the long estimation time and large estimation error of existing tag estimation algorithms in Radio Frequency Identification (RFID) systems, a novel tag estimation method based on the number of non-empty slots was proposed. Firstly, the model of the Dynamic Frame Slotted ALOHA (DFSA) algorithm was analyzed to point out the necessity of tag estimation; secondly, existing tag estimation algorithms were reviewed and their shortcomings listed; thirdly, by studying the relationship between the average number of non-empty slots and the total number of tags to be identified under different frame lengths, a normalized curve independent of frame length was obtained and applied to estimate tag numbers. Furthermore, by introducing an accuracy requirement, the total number of polling rounds K under different tag numbers was determined using probability theory and binary search; finally, simulations compared the proposed method with existing tag estimation algorithms in terms of estimation accuracy and estimation time. The simulation results show that the maximum error rate of the proposed algorithm is only 1%. When the frame length was 128 and the total tag number was 400, the error rate of the proposed algorithm was 66.7%, 78.3% and 72.2% lower than that of the Adaptive Slotted ALOHA Protocol (ASAP), Fast Zero Estimation (FZE) and Maximum A Posteriori (MAP) estimation algorithms respectively. Moreover, for the same number of tags, the estimation time of the proposed algorithm was also significantly less than that of the above three algorithms. The proposed method thus offers higher estimation accuracy and efficiency and can identify tags quickly and accurately in RFID systems.
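    A minimal sketch of the inversion behind a non-empty-slot estimator: in a frame of L slots holding n tags, the expected number of non-empty slots is L*(1-(1-1/L)**n), so observing c non-empty slots and solving for n yields an estimate; averaging over K polling rounds follows the abstract's accuracy-driven repetition. The closed-form inversion is the standard occupancy approximation, assumed here.

    ```python
    import math, random

    def estimate_tags(c, L):
        c = min(c, L - 1)      # clamp a saturated frame to keep the log finite
        return math.log(1 - c / L) / math.log(1 - 1 / L)

    def poll_once(n, L, rng):
        slots = set(rng.randrange(L) for _ in range(n))  # one slot per tag
        return len(slots)                                # non-empty slot count

    rng = random.Random(42)
    L, n_true, K = 128, 400, 10
    est = sum(estimate_tags(poll_once(n_true, L, rng), L)
              for _ in range(K)) / K
    print(round(est))          # close to 400 on average
    ```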
    Optimization of cloud task scheduling based on discrete artificial bee colony algorithm
    NI Zhiwei, LI Rongrong, FANG Qinghua, PANG Shanshan
    2016, 36(1):  107-112.  DOI: 10.11772/j.issn.1001-9081.2016.01.0107
    To meet the high quality requirements of virtual resource services in cloud computing applications and to address the fact that current cloud task scheduling typically considers only a single objective, a Discrete Artificial Bee Colony (DABC) algorithm for cloud task scheduling optimization was proposed, taking into account the users' shortest waiting time, resource load balancing and economic principles. First, a multi-objective mathematical model of cloud task scheduling was established. Second, by combining a preference satisfaction policy, introducing a local search operator and changing the search behavior of the scout bee, an optimization strategy based on the Multi-objective DABC (MDABC) algorithm was proposed. Simulation results on different cloud task scheduling problems show that the proposed MDABC algorithm obtains higher comprehensive satisfaction than the basic DABC algorithm, the Genetic Algorithm (GA) and the classical greedy algorithm. MDABC can thus better improve the performance of cloud task scheduling in virtual resource systems and generalizes well.
    Overbooking decision-making method of multiple instances under cloud computing resource market
    CHEN Donglin, YAO Mengdi, DENG Guohua
    2016, 36(1):  113-116.  DOI: 10.11772/j.issn.1001-9081.2016.01.0113
    Considering the low load rate of data centers of cloud providers and the uncertainty and variety of cloud user demand, an overbooking model of multiple instance types under uncertain demand was established to improve the average profit of cloud providers. The model combined the influences of overbooking on cloud data center load balancing and on the Service Level Agreement (SLA) in a realistic cloud computing resource market, imposed multiple constraints on overbooking, and derived the optimal allocation policy for each instance type. The simulation results show that when the unused rate of reservations is 0.25, the average profit is relatively high and the load rate of the data center reaches 78%, with the optimal allocation of each instance type determined accordingly.
    Virtual machine deployment strategy based on particle swarm optimization algorithm
    YANG Jing, ZHANG Hongjun, ZHAO Shuining, ZHAN Donghui
    2016, 36(1):  117-121.  DOI: 10.11772/j.issn.1001-9081.2016.01.0117
    To solve the virtual machine deployment problem in the Infrastructure as a Service (IaaS) layer of cloud computing, a virtual machine deployment strategy based on the Particle Swarm Optimization (PSO) algorithm was proposed. Since PSO converges slowly and is easily trapped in local optima when dealing with large-scale, complex problems such as virtual machine deployment, a Multiple-population Gaussian Learning Particle Swarm Optimization (MGL-PSO) algorithm was first proposed, using a multiple-population evolution model to accelerate convergence and a Gaussian learning strategy to avoid local optima. Then, following the deployment model and using the Round Robin (RR) algorithm to initialize MGL-PSO, a virtual machine deployment strategy aiming at load balancing was proposed. Simulations in CloudSim validate that MGL-PSO converges faster than PSO and reduces the load imbalance degree by 13%. In the two experimental scenarios, compared with the Opportunistic Load Balancing (OLB) algorithm, the load imbalance degree of the proposed algorithm decreases by 25% and 15% respectively, and compared with the Greedy Algorithm (GA) it decreases by 19% and 7% respectively.
    Online compression of global positioning system trajectory data based on motion state change
    LIU Leijun, FANG Cheng, ZHANG Lei, BAO Suning
    2016, 36(1):  122-127.  DOI: 10.11772/j.issn.1001-9081.2016.01.0122
    Existing online Global Positioning System (GPS) trajectory compression based on motion state change gives insufficient consideration to cumulative error and offset, while online compression based on offset calculation evaluates key points insufficiently. To address this, an online GPS trajectory compression algorithm based on motion state change, named the Synchronous Euclidean Distance (SED) Limited Thresholds Algorithm (SLTA), was proposed. The algorithm used the steering angle and speed change to evaluate the information carried by a trajectory point, and introduced SED to limit the offset of trajectory points, thereby achieving better information retention. The experimental results show that the trajectory compression ratio reaches about 50%. Compared with the Thresholds Algorithm (TA), the average SED error of SLTA (less than 5 m) is negligible. Among the compared trajectory compression algorithms, SLTA has the lowest average angle error (1.5°-2.3°) and the most stable running time. SLTA can therefore perform online GPS trajectory data compression stably and effectively.
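    A minimal sketch of SLTA's two tests as described, with illustrative thresholds and strictly increasing timestamps assumed: keep a point when its turning angle or speed change crosses a threshold, and also bound the synchronized Euclidean distance (SED) of points that would otherwise be dropped.

    ```python
    import math

    def sed(p_start, p_end, p):        # points are (x, y, t)
        (x1, y1, t1), (x2, y2, t2) = p_start, p_end
        r = 0.0 if t2 == t1 else (p[2] - t1) / (t2 - t1)
        sx, sy = x1 + r * (x2 - x1), y1 + r * (y2 - y1)  # synchronized position
        return math.hypot(p[0] - sx, p[1] - sy)

    def compress(points, angle_th=math.radians(30), speed_th=5.0, sed_th=5.0):
        kept = [points[0]]
        for prev, cur, nxt in zip(points, points[1:], points[2:]):
            v1 = ((cur[0] - prev[0]) / (cur[2] - prev[2]),
                  (cur[1] - prev[1]) / (cur[2] - prev[2]))
            v2 = ((nxt[0] - cur[0]) / (nxt[2] - cur[2]),
                  (nxt[1] - cur[1]) / (nxt[2] - cur[2]))
            a1, a2 = math.atan2(v1[1], v1[0]), math.atan2(v2[1], v2[0])
            turn = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
            dv = abs(math.hypot(*v2) - math.hypot(*v1))
            if (turn > angle_th or dv > speed_th
                    or sed(kept[-1], nxt, cur) > sed_th):
                kept.append(cur)
        kept.append(points[-1])
        return kept

    pts = [(0, 0, 0), (1, 0, 1), (2, 0, 2), (3, 2, 3), (4, 2, 4)]
    print(compress(pts))   # straight-line point dropped, turn points kept
    ```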
    Optimization of baggage tag reader layout based on improved particle swarm optimization
    GAO Qingji, LI Yongsheng, LUO Qijun
    2016, 36(1):  128-132.  DOI: 10.11772/j.issn.1001-9081.2016.01.0128
    When civil aviation passengers check in, various uncertainties exist in the number, positions and angles of baggage tag readers. To solve these problems, the Dynamic Population-Double Fitness Particle Swarm Optimization (DPDF-PSO) algorithm was proposed. Firstly, a mathematical model of baggage tag detection was established and transformed into an optimization problem; secondly, the problem was solved with the standard Particle Swarm Optimization (PSO) algorithm; finally, the standard PSO was improved in accordance with the model features. The simulation results show that, compared with standard PSO, the simulation time of DPDF-PSO is reduced by 23.6% and the objective function value is increased by 3.7%. DPDF-PSO overcomes the long simulation times and the troublesome handling of boundary optima in standard PSO: identity information can be read quickly and accurately by a reader layout of lower cost.
    Support vector machine situation assessment algorithm based on MapReduce
    CHEN Zhen, XIA Jingbo, YANG Juan, WEI Zekun
    2016, 36(1):  133-137.  DOI: 10.11772/j.issn.1001-9081.2016.01.0133
    Support Vector Machine (SVM) performs well on dimensionality disaster, overfitting and nonlinearity, which traditional situation assessment algorithms do not handle, but SVM is inefficient on large-scale data. To confront the challenge of big data effectively, a MapReduce-based SVM (MR-SVM) situation assessment algorithm was proposed. Considering the characteristics of the SVM algorithm, its parallelization and parameter selection were implemented with MapReduce programming by designing the map and reduce functions. The performance of MR-SVM and SVM was compared on a Hadoop platform: MR-SVM was less efficient than SVM on small-scale data but performed much better on large-scale data; the training time of SVM grew exponentially with data scale, while that of MR-SVM grew slowly. The experimental results show that MR-SVM solves the data scalability problem and is therefore suitable for situation assessment in big data environments.
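    A minimal sketch of the map/reduce split, written as plain functions rather than Hadoop jobs and using the common cascade-style approximation (an assumption here, not necessarily the paper's exact design): each map task trains a local SVM and emits only its support vectors, and the reduce step retrains on their union.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def svm_map(X_part, y_part):
        clf = SVC(kernel="rbf", C=1.0).fit(X_part, y_part)
        sv = clf.support_                        # indices of support vectors
        return X_part[sv], y_part[sv]

    def svm_reduce(partials):
        X = np.vstack([x for x, _ in partials])
        y = np.concatenate([t for _, t in partials])
        return SVC(kernel="rbf", C=1.0).fit(X, y)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4000, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    parts = [(X[i::4], y[i::4]) for i in range(4)]       # four "map" splits
    model = svm_reduce([svm_map(Xp, yp) for Xp, yp in parts])
    ```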
    Improved local-search-based chaotic discrete particle swarm optimization algorithm for solving traveling salesman problem
    CHENG Biyun, LU Haiyan, XU Xiangping, SHEN Wanqiang
    2016, 36(1):  138-142.  DOI: 10.11772/j.issn.1001-9081.2016.01.0138
    In view of the drawbacks of the standard Discrete Particle Swarm Optimization (DPSO) algorithm, such as slow convergence and easily trapping into local optima, an Improved Local-search-based Chaotic Discrete Particle Swarm Optimization (ILCDPSO) algorithm based on excellence coefficients was proposed and applied to the Traveling Salesman Problem (TSP). In this algorithm, each edge was assigned an appropriate excellence coefficient based on the principle of roulette selection, which improved the selection probability of short edges and thereby the optimization ability and convergence speed. To further improve solution accuracy, a local search strategy was employed that adjusts the routes among the cities in a given neighborhood of each city, improving the exploration ability. Moreover, a chaotic sequence was integrated into the iteration formula to enhance the randomness and diversity of particles and hence the global search ability. The algorithm was evaluated on typical instances from the internationally used TSP library (TSPLIB) and compared with the Particle Swarm Optimization (PSO), Improved Particle Swarm Optimization (IPSO) and Chaotic Particle Swarm Optimization (CPSO) algorithms, among others. The experimental data show that, under the same conditions, ILCDPSO reaches optimal solutions in fewer average iterations than the other algorithms and has the highest rate of obtaining optimal solutions. ILCDPSO thus outperforms the compared algorithms in convergence speed, global optimization ability and stability, and is a promising intelligent algorithm for the TSP.
    Novel differential evolution algorithm based on simplex-orthogonal experimental design
    LI Kangshun, ZUO Lei, LI Wei
    2016, 36(1):  143-149.  DOI: 10.11772/j.issn.1001-9081.2016.01.0143
    Focusing on defects such as slow convergence and premature convergence of the traditional Differential Evolution (DE) algorithm in solving constrained optimization problems, a novel DE algorithm based on Simplex-Orthogonal experimental design (SO-DE) was proposed. The algorithm designed a new hybrid crossover operator combining simplex crossover and orthogonal experimental design to improve the search ability of DE, and used an improved comparison criterion to select individuals. In the hybrid crossover operator, several parent individuals produced multiple offspring through simplex crossover, from which orthogonal experimental design then selected excellent individuals to copy into the next generation. The improved comparison criterion applied different treatment schemes to different stages of the population, aiming to effectively weigh the objective function value against the degree of constraint violation so that better individuals enter the next generation. Simulation experiments were conducted on 13 standard test functions and 2 engineering design problems. SO-DE outperforms HEAA (Hybrid Evolutionary Algorithm and Adaptive constraint-handling technique) and COEA/ODE (a novel Constrained Optimization Evolutionary Algorithm based on Orthogonal Experimental Design) in the accuracy and standard variance of the final solutions. The experimental results demonstrate that the SO-DE algorithm has better accuracy and stability.
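    A minimal sketch of the simplex crossover (SPX) inside the hybrid operator: several parents span a simplex, the simplex is expanded about its centroid, and offspring are sampled uniformly inside it via Dirichlet weights. The expansion rate, parent count and the plain best-offspring pick (standing in for the orthogonal-design selection) are assumptions.

    ```python
    import numpy as np

    def simplex_crossover(parents, n_offspring=6, eps=1.0, rng=None):
        rng = rng or np.random.default_rng()
        P = np.asarray(parents, dtype=float)              # shape (m, dim)
        centroid = P.mean(axis=0)
        expanded = centroid + (1 + eps) * (P - centroid)  # expanded simplex
        weights = rng.dirichlet(np.ones(len(P)), size=n_offspring)
        return weights @ expanded                         # points inside it

    def sphere(x):                                        # demo objective
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(0)
    parents = rng.uniform(-5, 5, size=(3, 4))
    kids = simplex_crossover(parents, rng=rng)
    best = kids[np.argmin([sphere(k) for k in kids])]     # survivor pick
    ```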
    Fuzzy clustering algorithm based on midpoint density function
    ZHOU Yueyue, HU Jie, SU Tao
    2016, 36(1):  150-153.  DOI: 10.11772/j.issn.1001-9081.2016.01.0150
    In the traditional Fuzzy C-Means (FCM) clustering algorithm, the initial cluster centers are uncertain and the number of clusters must be preset in advance, which may lead to inaccurate results. A fuzzy clustering algorithm based on a midpoint density function was put forward. Firstly, stepwise-regression thinking was integrated into the selection of initial cluster centers to avoid convergence into local cycles; then the number of clusters was determined; finally, fuzzy clustering validity indexes covering overlap degree and resolution were evaluated on the results to determine the optimal number of clusters. Compared with traditional improved FCM, the proposed algorithm reduces the number of iterations and increases the average accuracy by 12%. The experimental results show that the proposed algorithm reduces clustering time and outperforms the comparison algorithm in average accuracy and clustering performance indexes.
    Correlation between phrases in advertisement based on recursive autoencoder
    HU Qinghui, WEI Shiwei, XIE Zhongqian, REN Yafeng
    2016, 36(1):  154-157.  DOI: 10.11772/j.issn.1001-9081.2016.01.0154
    Most research on the correlation between advertising phrases stays at the literal level and cannot exploit the deep semantic information of the phrases, which limits task performance. A novel method was therefore proposed to calculate phrase correlation using deep learning. A Recursive AutoEncoder (RAE) was developed to make full use of the semantic information in word order and phrases, so that the phrase vector contained deeper semantic information, and a correlation measure suited to the advertising setting was built. Specifically, for a given list of phrases, a reconstruction error was produced by merging two adjacent elements; a phrase tree, similar to a Huffman tree, was built by repeatedly merging the two elements with the smallest reconstruction error. Gradient descent was used to minimize the reconstruction error of the phrase tree, and cosine distance was used to measure the correlation between phrases. The experimental results show that introducing weight information increases the contribution of important phrases in the final phrase vector representation, and that RAE is well suited to phrase computation. At a recall rate of 50%, the proposed method increases accuracy by 4.59% and 3.21% compared with the LDA (Latent Dirichlet Allocation) and BM25 algorithms respectively, proving its effectiveness.
    Spam filtering based on modified stack auto-encoder
    SHEN Cheng'en, HE Jun, DENG Yang
    2016, 36(1):  158-162.  DOI: 10.11772/j.issn.1001-9081.2016.01.0158
    Concerning the problem that the Stack Auto-encoder (SA) is prone to overfitting, which reduces spam classification accuracy, a modified SA method based on dynamic dropout was proposed. Firstly, the specificity of spam classification was analyzed, and the dropout algorithm was employed in the SA to handle overfitting. Then, to address the fault of dropout that leaves some nodes stalled for long periods, an improved dropout algorithm was proposed in which the static dropout rate was replaced by a dynamic dropout rate that decreases with the training step according to a dynamic function. Finally, the dynamic dropout algorithm was used to improve the pre-training model of the SA. The simulation results show that, compared with the Support Vector Machine (SVM) and Back Propagation (BP) neural network, the average accuracy of the modified SA is 97.66%, and its Matthews correlation coefficient on every dataset is higher than 89%, exceeding that of the conventional SA by 3.27%, 1.68%, 2.16%, 1.51%, 1.58% and 1.07% respectively. The modified SA with dynamic dropout thus has higher accuracy and better robustness.
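    A minimal sketch of the central tweak: a dropout rate that decays with the training step instead of staying fixed. The exponential decay form and its constants are assumptions; the abstract only states that the rate decreases over training.

    ```python
    import numpy as np

    def dynamic_dropout_rate(step, p0=0.5, p_min=0.1, decay=1e-3):
        return p_min + (p0 - p_min) * np.exp(-decay * step)

    def dropout(h, step, rng):
        p = dynamic_dropout_rate(step)
        mask = rng.random(h.shape) >= p       # drop units with probability p
        return h * mask / (1.0 - p)           # inverted-dropout rescaling

    rng = np.random.default_rng(0)
    h = rng.normal(size=(32, 128))            # a hidden-layer activation batch
    for step in (0, 1000, 5000):
        print(step, round(dynamic_dropout_rate(step), 3))  # 0.5, 0.247, 0.103
    ```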
    Web information extraction in health field
    LI Rujun, ZHANG Jun, ZHANG Xiaomin, GUI Xiaoqing
    2016, 36(1):  163-170.  DOI: 10.11772/j.issn.1001-9081.2016.01.0163
    To apply Web Information Extraction (WIE) technology to the health field, a Web information extraction method based on WebHarvest was proposed. Through structural analysis of different health websites and the design of health entity extraction rules, automatic extraction of health entities and their attributes based on WebHarvest was realized; the results were stored in a relational database after consistency checking; finally, the values of entity attributes were analyzed with the natural language processing tool Ansj to recognize entities and extract the relationships among them. In the health entity extraction experiments, the average F-measure reached 99.9%; in the entity relationship extraction experiments, the average F-measure reached 80.51%. The experimental results show that the proposed method extracts health information with high quality and credibility.
    Location-based asymmetric similarity for collaborative filtering recommendation algorithm
    WANG Fuqiang, PENG Furong, DING Xiaohuan, LU Jianfeng
    2016, 36(1):  171-174.  DOI: 10.11772/j.issn.1001-9081.2016.01.0171
    To improve the accuracy of recommendation systems, a Location-Based Asymmetric Similarity for Collaborative Filtering (LBASCF) recommendation algorithm was proposed to address the fact that traditional Collaborative Filtering (CF) does not consider location information. Firstly, the cosine similarity and the Location-Based Asymmetric Similarity (LBAS) between users were calculated from the user-item rating matrix and the users' historical consumption locations; secondly, a new user similarity measure was obtained by fusing the two, reflecting the user's preferences in both location and interest. Finally, new items were recommended to the user based on the ratings of the user's nearest neighbors. The effectiveness of the algorithm was evaluated on a Dianping dataset and a Foursquare dataset. The experimental results on the Dianping dataset show that, compared with CF, the recall and precision of LBASCF increased by 1.64% and 0.37% respectively; compared with the Location-Aware Recommender System (LARS), they increased by 1.53% and 0.35% respectively. LBASCF can achieve better recommendation quality than CF and LARS in location-based service applications.
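    A minimal sketch of the similarity fusion, with the overlap form and fusion weight as assumptions: an asymmetric location similarity (the share of user u's visited places that v has also visited, which differs from v's share) is blended with the rating cosine similarity.

    ```python
    import numpy as np

    def cosine(ru, rv):
        denom = np.linalg.norm(ru) * np.linalg.norm(rv)
        return float(ru @ rv / denom) if denom else 0.0

    def lbas(locs_u, locs_v):                # asymmetric: depends on |locs_u|
        return len(locs_u & locs_v) / len(locs_u) if locs_u else 0.0

    def blended_sim(ru, rv, locs_u, locs_v, beta=0.5):
        return beta * cosine(ru, rv) + (1 - beta) * lbas(locs_u, locs_v)

    ru, rv = np.array([5, 0, 3, 1]), np.array([4, 0, 3, 0])
    print(blended_sim(ru, rv, {"cafeA", "mallB"}, {"cafeA"}))  # u -> v
    print(blended_sim(rv, ru, {"cafeA"}, {"cafeA", "mallB"}))  # v -> u differs
    ```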
    User sentiment model oriented to product attribute
    JIA Wenjun, ZHANG Hui, YANG Chunming, ZHAO Xujian, LI Bo
    2016, 36(1):  175-180.  DOI: 10.11772/j.issn.1001-9081.2016.01.0175
    Traditional sentiment models face two main problems in analyzing users' emotions in product reviews: 1) the lack of fine-grained emotion analysis of product attributes; 2) the number of product attributes must be defined in advance. To alleviate these problems, a fine-grained model for product attributes named the User Sentiment Model (USM) was proposed. Firstly, entities were clustered into product attributes by Hierarchical Dirichlet Processes (HDP), so that the number of product attributes could be obtained automatically. Then, a combination of the entity weights within product attributes, the evaluation phrases of product attributes and a sentiment lexicon was used as the prior. Finally, Latent Dirichlet Allocation (LDA) was used to classify the emotion of product attributes. The experimental results show that the model achieves high accuracy in sentiment classification, with an average accuracy of 87%. Compared with traditional sentiment models, the proposed model obtains higher accuracy in both extracting product attributes and classifying the sentiment of evaluation phrases.
    Action recognition method based on dense optical flow trajectory and sparse coding algorithm
    ZHAO Xiaojian, ZENG Xiaoqin
    2016, 36(1):  181-187.  DOI: 10.11772/j.issn.1001-9081.2016.01.0181
    Focusing on the low recognition rates achieved by existing action feature extraction methods, a novel unsupervised action recognition method combining Dense Optical Flow trajectories and Sparse Coding (DOF-SC) was proposed. First, trajectory-centered image patches were sampled as raw features based on Dense Optical Flow (DOF) extraction. Then a sparse dictionary was trained under the Sparse Coding (SC) framework and sparse representations of the trajectories were obtained through the dictionary; a trajectory codebook was built by clustering under the Bag-of-Features (BF) model, each trajectory of an action was assigned to codewords by voting, and action features were formed by counting the occurrences of each codeword. Finally, actions were classified with a Support Vector Machine (SVM) using the histogram intersection kernel. With a trajectory sampling rate of 10%, the accuracy of DOF-SC exceeds that of the Motion Boundary Histogram (MBH) feature by 0.9% on the KTH (Kungliga Tekniska Högskolan) dataset and by 1.2% on the YouTube dataset, which proves the effectiveness of the unsupervised action feature extraction method.
    Data discretization algorithm based on adaptive improved particle swarm optimization
    DONG Yuehua, LIU Li
    2016, 36(1):  188-193.  DOI: 10.11772/j.issn.1001-9081.2016.01.0188
    Focusing on the fact that classical rough sets can only deal with discrete attributes, a discretization algorithm based on Adaptive Hybrid Particle Swarm Optimization (AHPSO) was proposed. Firstly, an adaptive adjustment strategy was introduced, which not only overcomes the particle swarm's tendency to fall into local extrema but also improves its ability to seek the global optimum. Secondly, Tabu Search (TS) was applied to the global best particle of each generation to obtain the best global particle, enhancing the local search ability of the swarm. Finally, with the classification ability of the decision table preserved, the attribute discretization points were used to initialize the particle swarm, and the optimal discretization points were sought through the interaction between particles. Using the J48 decision tree classifier on the WEKA (Waikato Environment for Knowledge Analysis) platform, the classification accuracy of the proposed algorithm improved by about 10% to 20% compared with discretization algorithms based on attribute importance and on information entropy, and by about 2% to 5% compared with discretization algorithms based on Niche Discrete PSO (NDPSO) and linearly-decreasing-weight PSO. The experimental results show that the proposed algorithm significantly enhances the accuracy of J48 decision tree classification and is effective for the discretization of continuous attributes.
    Network security situation evaluation method based on attack pattern recognition
    WANG Kun, QIU Hui, YANG Haopu
    2016, 36(1):  194-198.  DOI: 10.11772/j.issn.1001-9081.2016.01.0194
    Analysis and comparison of existing network security situation evaluation methods show that they cannot accurately reflect the large-scale, coordinated, multi-stage features gradually exhibited by network attack behaviors, so a network security situation evaluation method based on attack pattern recognition was proposed. Firstly, causal analysis of alarm data in the network was performed, and the attack intention and the current attack phase were recognized. Secondly, situation evaluation based on the attack phase was carried out. Lastly, the State Transition Diagram (STG) of attack phases was created, and network security situation forecasting was realized by combining host vulnerability and configuration information. A simulation experiment on the proposed evaluation model was performed on example networks; as the attack phase deepened, the network security situation value increased. The experimental results show that the proposed method reflects the real attack situation more accurately and, since it requires no training on historical sequences, is more effective in situation forecasting.
    Network security situation prediction method based on harmony search algorithm and relevance vector machine
    LI Jie, ZHANG Zhaowei
    2016, 36(1):  199-202.  DOI: 10.11772/j.issn.1001-9081.2016.01.0199
    To deal with the time-varying and nonlinear properties of network security and the difficulty of its prediction and assessment, a network security situation prediction method based on the Harmony Search algorithm and the Relevance Vector Machine (HS-RVM) was proposed to offset the accuracy drawbacks of existing prediction methods. In the prediction process, network security situation samples were first normalized and the phase space was reconstructed; then the Harmony Search (HS) algorithm was adopted to find the optimal Relevance Vector Machine (RVM) hyperparameters, building a prediction model with improved accuracy and speed; finally, Wilcoxon signed rank tests were used to verify the significance of differences in prediction performance. Simulation cases indicate that the Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) of the proposed method are 0.49575 and 0.02096 respectively, better than those of the Improved Harmony Search algorithm and Regularized Extreme Learning Machine (IHS-RELM) prediction model and the PSO and Support Vector machine for Regression (PSO-SVR) prediction model, and the Wilcoxon signed rank tests show a significant difference. The proposed method can depict the changing rules of the network security situation relatively well, helping network administrators grasp its tendency in time.
    Attack graph based risk assessment method for cyber security of cyber-physical system
    WU Wenbo, KANG Rui, LI Zi
    2016, 36(1):  203-206.  DOI: 10.11772/j.issn.1001-9081.2016.01.0203
Abstract ( )   PDF (608KB) ( )  
    References | Related Articles | Metrics
Recent incidents such as the Stuxnet worm have shown that cyber attacks can cause serious physical damage to a Cyber-Physical System (CPS). Aiming at this problem, a risk assessment method based on attack graphs was proposed. Firstly, the attack behavior of CPS was analyzed; the results showed that vulnerabilities in physical devices such as Programmable Logic Controllers (PLC) are the key to cross-domain attacks, and their exploitation modes and impact were described. Secondly, a risk assessment model was proposed together with a successful-attack-probability index and an attack-impact index. The successful-attack-probability index was calculated from the intrinsic characteristics of the vulnerabilities and the ability of the attacker; the attack-impact index was calculated from the importance of the host and the exploitation mode of the vulnerabilities. The method assesses the cyber layer and the physical layer as a whole system and takes into account the impact of multiple cross-domain attacks on system risk. Numerical examples show that the risk of a combined attack is five times that of a single attack and that the obtained risk value is more accurate.
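The two indices can be combined along an attack path roughly as follows; the way probability and impact are aggregated here, and the example step values, are invented for illustration and are not the paper's exact formulas:

```python
# Hedged sketch: risk of a multi-step (cross-domain) attack path as the joint
# success probability times the accumulated impact. All numbers are toy values.
def path_risk(steps):
    """steps: list of (p_success, host_importance, impact_factor) per attack step."""
    p = 1.0
    impact = 0.0
    for p_succ, importance, factor in steps:
        p *= p_succ                      # all steps along the path must succeed
        impact += importance * factor    # impact accumulates across domains
    return p * impact

single = path_risk([(0.6, 5, 1.0)])                    # one cyber-layer attack
combined = path_risk([(0.9, 2, 0.5), (0.8, 5, 2.0)])   # cyber step, then PLC step
print(single, combined)
```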
    Edge partitioning approach for protecting sensitive relationships in social network
    FAN Guoting, LUO Yonglong, SUN Dandan, WANG Taochun, ZHENG Xiaoyao
    2016, 36(1):  207-211.  DOI: 10.11772/j.issn.1001-9081.2016.01.0207
Abstract ( )   PDF (949KB) ( )  
    References | Related Articles | Metrics
Sensitive relationships between users are important private information in social networks. Focusing on the leakage of sensitive relationships between users, an edge partitioning algorithm was proposed. Firstly, after the sensitive edges were deleted from the social network, every non-sensitive edge was partitioned into several sub-edges. Secondly, each sub-edge was assigned part of the information belonging to the original non-sensitive edge, so that every sub-edge carried only a portion of the original edge's information. The anonymized social network that preserves privacy was thus generated. In comparison experiments with the cluster-edge algorithm and the cluster-based with constraints algorithm, the edge partitioning algorithm reduced the probability of sensitive relationship leakage by about 30% and 20% respectively while maintaining high data availability. As a result, the edge partitioning algorithm can effectively protect sensitive relationships in social networks.
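A small sketch of the partitioning step. How the information of an edge is split across sub-edges (here: equal weight shares chained through dummy nodes) is an assumption, since the paper defines its own assignment:

```python
# Illustrative edge partitioning: sensitive edges are deleted, every surviving
# edge is split into k sub-edges that each carry a share of its information.
def partition_edges(edges, sensitive, k=2):
    """edges: {(u, v): weight}; sensitive: set of (u, v) pairs to delete."""
    out = {}
    dummy = 0
    for (u, v), w in edges.items():
        if (u, v) in sensitive or (v, u) in sensitive:
            continue                      # sensitive edges are removed outright
        prev = u
        for _ in range(k - 1):            # chain u - d0 - d1 - ... - v
            d = f"_d{dummy}"; dummy += 1
            out[(prev, d)] = w / k
            prev = d
        out[(prev, v)] = w / k
    return out

g = partition_edges({("a", "b"): 1.0, ("b", "c"): 2.0}, sensitive={("a", "b")})
print(g)   # only ("b", "c") survives, split into two weighted sub-edges
```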
    Wireless communication security strategy based on differential flag byte
    TANG Shang, LI Yonggui, ZHU Yonggang
    2016, 36(1):  212-215.  DOI: 10.11772/j.issn.1001-9081.2016.01.0212
Abstract ( )   PDF (757KB) ( )  
    References | Related Articles | Metrics
Since the computational complexity of public key cryptography is high and the Power Delay Profile (PDP) model is limited by the distance between the attacker and the user, a wireless communication security strategy based on a Differential Flag Byte (DFB) was proposed for identifying and defending against impersonation attacks, and the equation for generating the DFB was given. The strategy used the transmitted data to generate the DFB equation, establishing the correlation that the flag byte of the current data frame is determined by the relevant parameters of the previous frame. The receiving terminal then identified attacks by checking the DFB received in the data frame against a decision threshold. Theoretical analysis shows that the DFB can prevent repeated impersonation attacks even when the attacker knows the communication parameters; it shortens the attacker's effective attack time, lengthens the attack cycle, and confines the attacker to a finite ellipse in space. Simulation analysis with a simple DFB shows that, with an appropriate threshold, the strategy can identify and defend against impersonation attacks when the Signal-to-Noise Ratio (SNR) of the communication system is above -4 dB.
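A toy DFB chain, with a truncated hash standing in for the paper's DFB generation equation; the threshold test on the flag byte follows the idea described above:

```python
import hashlib

# Toy DFB chain: the flag byte of frame n is derived from parameters of frame
# n-1. Using a truncated SHA-256 as the "DFB equation" is an assumption for
# illustration; the paper defines its own generation equation.
def dfb(prev_payload: bytes, prev_seq: int) -> int:
    return hashlib.sha256(prev_payload + prev_seq.to_bytes(4, "big")).digest()[0]

def verify(frame_flag: int, prev_payload: bytes, prev_seq: int, threshold: int = 0) -> bool:
    # Hamming distance between received and expected flag bytes vs. a threshold
    diff = bin(frame_flag ^ dfb(prev_payload, prev_seq)).count("1")
    return diff <= threshold

flag = dfb(b"frame-0-data", 0)           # sender computes the flag for frame 1
assert verify(flag, b"frame-0-data", 0)  # receiver accepts a legitimate frame
```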
    Outsourced data encryption scheme with access privilege revocation
    LI Chengwen, WANG Xiaoming
    2016, 36(1):  216-221.  DOI: 10.11772/j.issn.1001-9081.2016.01.0216
Abstract ( )   PDF (958KB) ( )  
    References | Related Articles | Metrics
The scheme proposed by Zhou et al. (ZHOU M, MU Y, SUSILO W, et al. Privacy enhanced data outsourcing in the cloud. Journal of Network and Computer Applications, 2012, 35(4): 1367-1373) was analyzed, and its lack of access privilege revocation was pointed out. To address this shortcoming, an outsourced data encryption scheme with access privilege revocation was proposed. Firstly, the data were divided into several blocks and each block was encrypted separately. Secondly, a key derivation method was used to reduce the number of keys that the data owner must store and manage. Finally, multiple decryption keys were constructed for one piece of encrypted data so that the access privileges of some users could be revoked without affecting legitimate users. Compared with Zhou's scheme, the proposed scheme retains the privacy protection of the outsourced data while adding user access privilege revocation. The analysis results show that the proposed scheme is secure under the Discrete Logarithm Problem (DLP) assumption.
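A sketch of the key-derivation idea: the owner stores one master key and derives a distinct key per data block on demand. HMAC-based derivation is a stand-in here, not the paper's DLP-based construction:

```python
import hmac, hashlib

# One stored master key; per-block keys derived deterministically, so the owner
# need not store a key per block. HKDF-style HMAC derivation is an assumption.
def block_key(master: bytes, block_id: int) -> bytes:
    return hmac.new(master, b"block:%d" % block_id, hashlib.sha256).digest()

master = b"\x01" * 32
k0, k1 = block_key(master, 0), block_key(master, 1)  # independent per-block keys
```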
    Feature extraction for stereoscopic vision depth map based on principal component analysis and histogram of oriented depth gradient
    DUAN Fengfeng, WANG Yongbin, YANG Lifang, PAN Shujing
    2016, 36(1):  222-226.  DOI: 10.11772/j.issn.1001-9081.2016.01.0222
Abstract ( )   PDF (794KB) ( )  
    References | Related Articles | Metrics
To address the low accuracy and high complexity of feature extraction from stereoscopic vision depth maps, a feature extraction algorithm based on Principal Component Analysis and Histogram of Oriented Depth Gradient (PCA-HODG) was proposed. Firstly, disparity computation and depth map extraction were performed on binocular stereoscopic images to obtain high-quality depth maps. Secondly, edge detection and gradient calculation were performed on the depth map within fixed-size windows to acquire quantized histograms of region shape, and dimensionality reduction by Principal Component Analysis (PCA) was applied. Finally, a sliding-window detection method was used to extract features from the whole depth map accurately and completely, with dimensionality reduction applied once again. In feature matching and classification experiments on the frames of the test sequence Street, the average classification accuracy of the proposed algorithm was 1.15% higher than that of the Range-Sample Depth Feature (RSDF) algorithm; on Tanks, Tunnel and Temple it was 0.69%, 1.95% and 0.49% higher respectively than that of the Geodesic Invariant Feature (GIF) algorithm. Meanwhile, the average running time was 71.65%, 78.05% and 80.06% lower than that of the Histogram of Oriented Depth (HOD), RSDF and GIF algorithms respectively. The experimental results show that the proposed algorithm not only detects and extracts depth map features more accurately but also greatly reduces running time through dimensionality reduction, and it is also more robust.
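A minimal HODG-style descriptor plus SVD-based PCA. The bin count, the window source and the reduced dimension are illustrative assumptions:

```python
import numpy as np

# Orientation histogram of depth gradients per window, then PCA via SVD.
def hodg(depth, bins=9):
    gy, gx = np.gradient(depth.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

def pca_reduce(features, k):
    x = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T                              # project onto top-k components

windows = np.random.rand(100, 32, 32)                # stand-in depth-map windows
feats = np.stack([hodg(w) for w in windows])
reduced = pca_reduce(feats, k=4)                     # compact per-window descriptors
```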
    New improved 1-2-order fractional differential edge detection model based on Riemann-Liouville integral
    WANG Chengxiao, HUANG Huixian, YANG Hui, XU Jianmin
    2016, 36(1):  227-232.  DOI: 10.11772/j.issn.1001-9081.2016.01.0227
Abstract ( )   PDF (962KB) ( )  
    References | Related Articles | Metrics
Focusing on the failure of integer-order differential and 0-1-order fractional differential mask operators to pinpoint edge information accurately and their loss of texture detail in digital image processing, a new 1-2-order edge detection operator based on the Laplacian operator was proposed. Derived from the Riemann-Liouville (R-L) definition, the 1-2-order fractional differential enhances high-frequency signals and reinforces medium-frequency signals. The simulation results demonstrate that the proposed operator achieves a higher recognition rate in subjective evaluation and extracts edge information better, especially for images with rich texture detail in smooth regions with little gray-scale change. Objectively, its integrated location error rate is 7.41%, lower than that of integer-order differential operators (at best 10.36%) and 0-1-order differential operators (at best 9.97%). The quantitative indicators show that the new fractional operator effectively improves edge positioning accuracy and is particularly suitable for edge detection of high-frequency information.
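For readers who want to experiment, Grünwald-Letnikov coefficients give a standard way to build a fractional-order (1 < v < 2) differential mask; this is an analogue for illustration only, not the paper's R-L-derived operator:

```python
import numpy as np

# Grünwald-Letnikov coefficients c_k = (-1)^k * binom(v, k), computed by the
# recurrence c_k = c_{k-1} * (1 - (v + 1) / k). Convolving an image row/column
# with this 1-D mask approximates the order-v fractional derivative.
def gl_coefficients(v, n):
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (v + 1.0) / k)
    return c

mask_1d = gl_coefficients(v=1.5, n=5)   # e.g. an order-1.5, 5-tap mask
print(mask_1d)
```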
    Image compressive sensing reconstruction via total variation and adaptive low-rank regularization
    LIU Jinlong, XIONG Chengyi, GAO Zhirong, ZHOU Cheng, WANG Shuxian
    2016, 36(1):  233-237.  DOI: 10.11772/j.issn.1001-9081.2016.01.0233
Abstract ( )   PDF (789KB) ( )  
    References | Related Articles | Metrics
Aiming at the problem that collaborative sparse image Compressive Sensing (CS) reconstruction based on fixed transform bases cannot adequately exploit the self-similarity of images, an improved reconstruction algorithm combining Total Variation (TV) with adaptive low-rank regularization was proposed. Firstly, similar patches were found by block matching and grouped into nonlocal similar patch groups. Then, weighted low-rank approximation of the nonlocal patch groups was adopted to replace the 3D wavelet transform filtering used in collaborative sparse representation. Finally, a regularization term combining gradient sparsity with the low-rank prior of the nonlocal patch groups was embedded into the reconstruction model, which was solved by the Alternating Direction Method of Multipliers (ADMM) to obtain the reconstructed image. The experimental results show that, compared with the collaborative sparse recovery (RCoS) algorithm, the proposed method increases the Peak Signal-to-Noise Ratio (PSNR) of reconstructed images by about 2 dB on average and significantly improves reconstruction quality, keeping texture details better because the nonlocal self-similar structure is described precisely.
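The adaptive low-rank step can be sketched as weighted singular value thresholding of a similar-patch group. The specific weighting rule below (threshold inversely proportional to singular value magnitude) is a common choice and an assumption, not necessarily the paper's:

```python
import numpy as np

# Weighted singular value thresholding of a matrix whose columns are
# vectorized similar patches: larger singular values shrink less.
def weighted_svt(patch_group, lam=0.1, eps=1e-8):
    u, s, vt = np.linalg.svd(patch_group, full_matrices=False)
    w = lam / (s + eps)                  # per-singular-value threshold
    s_hat = np.maximum(s - w, 0.0)
    return (u * s_hat) @ vt

group = np.random.rand(64, 20)           # 20 similar 8x8 patches, vectorized
low_rank = weighted_svt(group)           # low-rank approximation of the group
```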
    High-speed method for extracting center of line structured light
    SU Xiaoqin, XIONG Xianming
    2016, 36(1):  238-242.  DOI: 10.11772/j.issn.1001-9081.2016.01.0238
Abstract ( )   PDF (718KB) ( )  
    References | Related Articles | Metrics
In three-dimensional measurement with line structured light, the speed and accuracy of extracting the light stripe center directly affect the overall performance of the system. Based on the geometric center method, the direction template method and the barycenter method, a high-speed algorithm for extracting the center of line structured light was proposed. First, the image was preprocessed and a contour line was quickly extracted as a coarse center using the geometric center method. Then a position-based normal detection method was designed to extract the normal direction of the contour line; compared with the direction template method, its computation speed is significantly higher. Finally, the center of the structured light was extracted precisely from the gray-value weights of the pixels along the normal direction of the contour line. The experimental results indicate that the proposed algorithm is significantly faster and its precision reaches sub-pixel level.
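A sketch of the final refinement: a gray-weighted barycenter along the stripe normal at a coarse center point. Sampling the normal at integer offsets is a simplification of the method described above:

```python
import numpy as np

# Sub-pixel center by gray-weighted barycenter along the stripe normal.
def subpixel_center(img, row, col, normal, half=3):
    offs = np.arange(-half, half + 1)
    ys = np.clip((row + offs * normal[0]).round().astype(int), 0, img.shape[0] - 1)
    xs = np.clip((col + offs * normal[1]).round().astype(int), 0, img.shape[1] - 1)
    w = img[ys, xs].astype(float)
    t = (offs * w).sum() / (w.sum() + 1e-12)   # weighted offset along the normal
    return row + t * normal[0], col + t * normal[1]

img = np.zeros((9, 9)); img[3:6, 4] = (1.0, 3.0, 1.0)   # toy vertical stripe
print(subpixel_center(img, 4, 4, normal=(1.0, 0.0)))    # -> (4.0, 4.0)
```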
    Adaptive total generalized variation denoising algorithm for low-dose CT images
    HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Fang, ZHANG Pengcheng, LIU Yi, SUN Weiya, GUI Zhiguo
    2016, 36(1):  243-247.  DOI: 10.11772/j.issn.1001-9081.2016.01.0243
Abstract ( )   PDF (796KB) ( )  
    References | Related Articles | Metrics
A new denoising algorithm, Adaptive Total Generalized Variation (ATGV), was proposed for removing streak artifacts from reconstructed low-dose Computed Tomography (CT) images. Considering that traditional Total Generalized Variation (TGV) blurs edge details, an intuitionistic fuzzy entropy capable of distinguishing smooth regions from detail regions was introduced into the TGV algorithm, so that different areas of the image were processed with different denoising intensities and image details were well preserved. Firstly, the Filtered Back Projection (FBP) algorithm was used to obtain a reconstructed image. Secondly, an edge indicator function based on intuitionistic fuzzy entropy was applied to improve the TGV algorithm. Finally, the new algorithm was employed to reduce the noise in the reconstructed image. Simulations of low-dose CT reconstruction on the Shepp-Logan model and a thorax phantom were used to test the effectiveness of the proposed algorithm. The experimental results show that, compared with the Total Variation (TV) and TGV algorithms, the proposed algorithm yields smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) values on both test images, while achieving Peak Signal-to-Noise Ratios (PSNR) of 26.90 dB and 44.58 dB respectively. The proposed algorithm can therefore effectively preserve image details and edges while reducing streak artifacts.
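To show how spatially adaptive denoising weights are formed, a gradient-based edge-stopping function is used below as a stand-in for the paper's intuitionistic-fuzzy-entropy edge indicator, which is not reproduced here:

```python
import numpy as np

# Spatially adaptive regularization weights: strong smoothing in flat regions,
# weak smoothing at edges. The indicator and contrast constant k are assumptions.
def adaptive_weights(img, k=0.05):
    gy, gx = np.gradient(img.astype(float))
    g2 = gx ** 2 + gy ** 2
    return 1.0 / (1.0 + g2 / k ** 2)     # ~1 in smooth areas, ~0 near edges

img = np.random.rand(64, 64)
w = adaptive_weights(img)                 # would multiply the TGV penalty pointwise
```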
    River information extraction from high resolution remote sensing image based on homomorphic system filtering
    YANG Yipu, YANG Fan, PAN Guofeng, ZHANG Huimin
    2016, 36(1):  248-253.  DOI: 10.11772/j.issn.1001-9081.2016.01.0248
Abstract ( )   PDF (1143KB) ( )  
    References | Related Articles | Metrics
Concerning the incomplete extraction of river information from high-resolution remote sensing images, which is easily affected by the phenomenon of different ground objects sharing similar spectra, and considering the characteristic frequency texture of river regions, a river information extraction method based on homomorphic system filtering was proposed. Spectral analysis of the remote sensing image was used to define frequency-domain identification features of river information. A windowed short-time analysis along one-dimensional profile lines was adopted to locate rivers and estimate their widths automatically and quickly. Finally, a low-pass filter under the homomorphic system was designed to extract the low-frequency river information. With GF-1 images as experimental data, the extraction accuracy of the proposed algorithm was higher than that of the traditional method based on spectral information, the Kappa coefficient reached 0.85, and the extraction process was automatic. The experimental results show that the proposed algorithm can extract river information from regions of complex geomorphology quickly and effectively.
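A generic homomorphic low-pass filter (log, FFT, Gaussian low-pass, inverse FFT, exp). The cutoff d0 is an assumption; the paper designs its filter from the measured river frequency signature:

```python
import numpy as np

# Homomorphic low-pass filtering: multiplicative image components become
# additive in the log domain, where a Gaussian low-pass keeps the low-frequency
# (river-like) component.
def homomorphic_lowpass(img, d0=20.0):
    logged = np.log1p(img.astype(float))
    spec = np.fft.fftshift(np.fft.fft2(logged))
    h, w = img.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    lowpass = np.exp(-(x ** 2 + y ** 2) / (2.0 * d0 ** 2))   # Gaussian LPF
    filtered = np.fft.ifft2(np.fft.ifftshift(spec * lowpass)).real
    return np.expm1(filtered)

img = np.random.rand(128, 128)
low_freq = homomorphic_lowpass(img)      # low-frequency river candidate layer
```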
    Research of domain ontology driven enterprise on-line analytical processing systems
    LIU Xinrui, REN Fengyu, LEI Guoping
    2016, 36(1):  254-259.  DOI: 10.11772/j.issn.1001-9081.2016.01.0254
Abstract ( )   PDF (997KB) ( )  
    References | Related Articles | Metrics
At present, insufficient participation of formalized business knowledge in On-Line Analytical Processing (OLAP) restricts in-depth analysis. To overcome this limitation, a new approach for building OLAP systems based on domain ontology was proposed. Firstly, after analyzing the limitations of existing ontology construction methods, a semi-automatic domain ontology construction method was put forward, in which local ontologies are generated from databases and a global top-level domain ontology is then designed by experts, using a similarity evaluation algorithm based on multiple weighted features of entity classes, to formally describe mine-production domain knowledge. Then, key indicators of mine production capacity were chosen as measures and a Multi-Dimensional Ontology (MDO) carrying business semantic concepts was built. Finally, the method was tested in a practical metal-mine decision-making project. The experimental results show that the proposed method can dynamically integrate the heterogeneous information resources of the mine production process, facilitate unambiguous interpretation of query results, and discover association rules and implicit knowledge through the formal expression and reasoning capability of the domain ontology. Meanwhile, by using views of high-frequency and general concepts, it avoids repeated queries and improves the performance of traditional OLAP systems.
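The entity-class similarity step might look like the following sketch; the chosen features (name, attributes, instances) and the weights are assumptions, since the paper defines its own weighted feature pattern:

```python
# Multi-feature weighted similarity between two entity classes (illustrative).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def class_similarity(c1, c2, weights=(0.4, 0.4, 0.2)):
    name_sim = 1.0 if c1["name"].lower() == c2["name"].lower() else 0.0
    attr_sim = jaccard(c1["attrs"], c2["attrs"])
    inst_sim = jaccard(c1["instances"], c2["instances"])
    w1, w2, w3 = weights
    return w1 * name_sim + w2 * attr_sim + w3 * inst_sim

ore = {"name": "OreBlock", "attrs": {"grade", "tonnage"}, "instances": {"b1", "b2"}}
blk = {"name": "oreblock", "attrs": {"grade", "volume"}, "instances": {"b2"}}
print(class_similarity(ore, blk))   # high score suggests the classes should merge
```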
    Harmfulness prediction of clone code based on Bayesian network
    ZHANG Liping, ZHANG Ruixia, WANG Huan, YAN Sheng
    2016, 36(1):  260-265.  DOI: 10.11772/j.issn.1001-9081.2016.01.0260
Abstract ( )   PDF (875KB) ( )  
    References | Related Articles | Metrics
During software development, programmers' copy-and-paste activities produce many code clones, and inconsistent changes to cloned code are often harmful to the program. To find harmful code clones in programs effectively, a method for predicting harmful clones using a Bayesian network was proposed. First, drawing on research in software defect prediction and clone evolution, two kinds of software metrics, static metrics and evolution metrics, were proposed to characterize clone code. Then a prediction model was constructed using the core algorithms of Bayesian networks. Finally, the probability of a clone being harmful was predicted. The model was evaluated on five open-source software systems of different types, comprising 99 versions written in C. The experimental results show that the proposed method predicts clone harmfulness with good applicability and high accuracy, reducing the threat of harmful code clones and improving software quality.
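As a simplified stand-in for the Bayesian-network model (which captures dependencies between metrics), a naive Bayes classifier over static and evolution metrics illustrates the prediction step; the feature independence assumption and all data below are fabricated toy values:

```python
import numpy as np

# Gaussian naive Bayes: rows of X are clone groups described by metrics,
# y marks harmful (1) vs. harmless (0) groups.
def fit(X, y):
    params = {}
    for c in np.unique(y):
        xc = X[y == c]
        params[c] = (xc.mean(0), xc.var(0) + 1e-9, len(xc) / len(y))
    return params

def predict_proba(params, x):
    scores = {}
    for c, (mu, var, prior) in params.items():
        logp = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores[c] = np.log(prior) + logp.sum()
    m = max(scores.values())
    z = {c: np.exp(s - m) for c, s in scores.items()}
    total = sum(z.values())
    return {c: v / total for c, v in z.items()}

X = np.array([[10, 2, 0.1], [12, 3, 0.2], [40, 9, 0.9], [38, 8, 0.7]], float)
y = np.array([0, 0, 1, 1])
print(predict_proba(fit(X, y), np.array([35, 7, 0.8])))  # P(harmful) dominates
```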
    Global optimal matching method for 3D fragments based on swarm intelligence
    SUN Jiaze, GENG Guohua
    2016, 36(1):  266-270.  DOI: 10.11772/j.issn.1001-9081.2016.01.0266
Abstract ( )   PDF (781KB) ( )  
    References | Related Articles | Metrics
Aiming at the error accumulation problem in the traditional global matching of three-dimensional (3D) models, a global optimal matching method based on swarm intelligence was proposed. The process of globally matching multiple 3D fragments was abstracted and a mathematical model of global optimal matching was set up, converting the problem into a combinatorial optimization problem of finding an optimal match matrix subject to certain constraints. A discretized hybrid social cognitive optimization algorithm was proposed to solve this NP (Non-deterministic Polynomial) problem. Finally, analyses of classical examples verify that the proposed algorithm has global optimization ability and strong robustness independent of the initial position, providing an efficient method for the global matching of 3D fragments.
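For small instances, a constrained optimal match matrix can be computed exactly with the Hungarian algorithm as a baseline; this is illustrative only and is not the paper's social cognitive algorithm, and the cost values are invented:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: matching error between fracture surface i and surface j (toy data)
cost = np.array([[0.2, 0.9, 0.7],
                 [0.8, 0.1, 0.6],
                 [0.5, 0.7, 0.3]])
rows, cols = linear_sum_assignment(cost)   # exact one-to-one assignment
match_matrix = np.zeros_like(cost)
match_matrix[rows, cols] = 1               # the optimal match matrix
print(match_matrix, cost[rows, cols].sum())
```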
    Optimization analysis for serial bottleneck system of urban rail transit station
    LIU Jie, HE Shengxue, ZHANG Haodong
    2016, 36(1):  271-274.  DOI: 10.11772/j.issn.1001-9081.2016.01.0271
Abstract ( )   PDF (797KB) ( )  
    References | Related Articles | Metrics
Since bottleneck relief for urban rail transit station infrastructure lacks quantitative, systematic analysis and its cost is uncertain, a quantitative analysis model for station bottlenecks was put forward, and a new optimization strategy was proposed on this basis. First, a passenger flow diagram was established and a system optimization model based on a series-parallel hybrid queuing network was constructed. Second, in addition to the existing optimization strategy, a new control strategy was put forward: changing the sequence, i.e., exchanging the physical order of the security check devices and the automatic ticket gates. Finally, optimization schemes based on the two strategies were designed for Shanghai Xinzhuang subway station and simulated. All three optimization schemes effectively reduce passenger queuing time, but their total costs differ greatly. For a given arrival rate, compared with no optimization, scheme one (adding one security check device) reduced waiting time by 92.5% and increased total cost by 3.2%; scheme two (exchanging the order of the security check device and the ticket gate) reduced waiting time by 80.3% and decreased total cost by 50.4%; scheme three, the combination of schemes one and two, almost completely eliminated queue waiting time but increased total cost by 29.6%. The analysis results show that the proposed model simulates bottleneck relief cost well and that the new strategy is superior to the traditional strategy in reducing total cost.
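The serial bottleneck can be roughly approximated by two M/M/1 queues in series; the arrival and service rates below are invented for illustration, whereas the paper uses a full series-parallel hybrid queuing network:

```python
# Mean queuing time in a two-stage serial system (security check, then ticket
# gate), each stage approximated as an M/M/1 queue with W_q = lam / (mu*(mu-lam)).
def mm1_wait(lam, mu):
    assert lam < mu, "queue is unstable"
    return lam / (mu * (mu - lam))

lam = 1.5                               # passengers per second (toy value)
w_security = mm1_wait(lam, 2.0)         # slower stage dominates the bottleneck
w_gate = mm1_wait(lam, 2.5)
print(w_security + w_gate)              # total expected queuing time per passenger
```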
    Data recovery algorithm in chemical process based on locally weighted reconstruction
    GUO Jinyu, YUAN Tangming, LI Yuan
    2016, 36(1):  282-286.  DOI: 10.11772/j.issn.1001-9081.2016.01.0282
Abstract ( )   PDF (800KB) ( )  
    References | Related Articles | Metrics
To address missing data in chemical processes, a Locally Weighted Recovery Algorithm (LWRA) was proposed that preserves the local structure of the data. Missing data points were located and marked with the symbol NaN (Not a Number), and the data set was divided into a complete set and an incomplete set. For each incomplete sample, in descending order of completeness, its k nearest neighbors were found in the complete set, and the neighbor weights were calculated by minimizing the sum of squared errors. Finally, each missing point was reconstructed from its k nearest neighbors and their weights. The algorithm was applied to two types of chemical process data with different missing rates and compared with two traditional data recovery algorithms, Expectation Maximization Principal Component Analysis (EM-PCA) and the Mean Algorithm (MA). The results reveal that the proposed method has the lowest error and runs on average 2 times faster than EM-PCA. The experimental results demonstrate that the proposed algorithm not only recovers data efficiently but also improves data utilization, and it is suitable for recovering nonlinear chemical process data.
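A sketch of the reconstruction step for one incomplete sample: find the k nearest complete rows using the observed columns, solve least-squares weights on those columns, then fill the missing columns. The value of k and the toy data are assumptions:

```python
import numpy as np

# Locally weighted reconstruction of one row with NaN entries.
def lwra_fill(row, complete, k=3):
    obs = ~np.isnan(row)
    d = np.linalg.norm(complete[:, obs] - row[obs], axis=1)
    nn = complete[np.argsort(d)[:k]]                     # k nearest neighbors
    w, *_ = np.linalg.lstsq(nn[:, obs].T, row[obs], rcond=None)  # min squared error
    filled = row.copy()
    filled[~obs] = nn[:, ~obs].T @ w                     # reconstruct missing part
    return filled

complete = np.array([[1., 2., 3.], [2., 4., 6.], [1., 1., 1.]])
print(lwra_fill(np.array([1.5, 3.0, np.nan]), complete))  # missing value -> 4.5
```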
    Path planning algorithm for unmanned aerial vehicles based on improved artificial potential field
    DING Jiaru, DU Changping, ZHAO Yao, YIN Dengyu
    2016, 36(1):  287-290.  DOI: 10.11772/j.issn.1001-9081.2016.01.0287
Abstract ( )   PDF (678KB) ( )  
    References | Related Articles | Metrics
The traditional Artificial Potential Field (APF) method still has some issues, such as poor adaptability to complex environments, susceptibility to local standstill, and unsmooth paths. To solve these problems, an improved artificial potential field method was proposed. Firstly, the connectivity of threats was analyzed and the optimal feasible solution domain was obtained by geometric topology. Secondly, track points were pre-planned within the feasible solution domain; the pre-planning was based on the global distribution of threats and remedied the deficiencies of falling into local minima and failing to find a feasible path. Finally, the attraction function of the artificial potential field method was improved, and a sufficiently smooth flight path was obtained through several iterations and curvature checking. The simulation results show that the improved algorithm meets the path planning requirements of unmanned aerial vehicles and is simple and feasible, with strong search ability and adaptability.
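For context, one gradient step of a basic attractive/repulsive potential field is sketched below; the gains, influence radius and stopping rule are illustrative assumptions, and the paper's improved attraction function is not reproduced:

```python
import numpy as np

# Classic APF step: attraction toward the goal plus repulsion from obstacles
# inside an influence radius d0; the step follows the normalized net force.
def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0, step=0.1):
    force = k_att * (goal - pos)                       # attraction toward goal
    for ob in obstacles:
        diff = pos - ob
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                              # repulsion inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * diff
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.2])]
for _ in range(300):                                   # follow the field to the goal
    pos = apf_step(pos, goal, obstacles)
    if np.linalg.norm(goal - pos) < 0.2:
        break
print(pos)
```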