Table of Contents

    10 June 2017, Volume 37 Issue 6
    Game-theoretic algorithm for joint power control and rate allocation in cognitive networks
    ZHU Jiang, BA Shaowei, DU Qingmin
    2017, 37(6):  1521-1526.  DOI: 10.11772/j.issn.1001-9081.2017.06.1521
    Abstract | PDF (995KB)
    Aiming at the uplink resource allocation problem in cognitive radio networks, a game-theoretic algorithm for joint power control and rate allocation adapted to multi-cell cognitive radio networks was proposed. To control users' power and rate more reasonably and reduce interference among Secondary Users (SUs), different cost factors for power and rate were first set, so as to prevent users from excessively increasing their transmission power. Then, the existence and uniqueness of the Nash Equilibrium (NE) of the proposed game were proved, and the convergence of the proposed algorithm was demonstrated. Finally, to solve the optimization problem of transmission power and transmission rate, the iterative updating flowchart of the proposed joint power control and rate allocation algorithm was presented. Theoretical analysis and simulation results show that, compared with similar game algorithms, the proposed algorithm enables users to achieve a higher transmission rate and a higher Signal to Interference plus Noise Ratio (SINR) at lower transmission power while guaranteeing communication quality, reduces the interference among users, and improves the system capacity of SUs.
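The abstract does not give the concrete update rule, but the kind of fixed point (NE) involved can be illustrated with a classic distributed power-control iteration of the Foschini-Miljanic type, where each user scales its power toward a target SINR. The gain matrix, noise level and targets below are made-up toy values, not from the paper, which additionally prices both power and rate with cost factors.

```python
# Illustrative distributed power-control iteration (Foschini-Miljanic style).
# All numeric values are toy assumptions for demonstration only.

def sinr(p, g, noise, i):
    """SINR of user i given power vector p and link-gain matrix g."""
    interference = sum(g[i][j] * p[j] for j in range(len(p)) if j != i)
    return g[i][i] * p[i] / (noise + interference)

def iterate_powers(g, noise, targets, steps=100):
    """Jacobi-style best-response iteration: each user sets the minimum
    power that meets its target SINR given the others' current powers."""
    p = [1.0] * len(targets)
    for _ in range(steps):
        p = [targets[i] * (noise + sum(g[i][j] * p[j]
                                       for j in range(len(p)) if j != i)) / g[i][i]
             for i in range(len(p))]
    return p

g = [[1.0, 0.1], [0.2, 1.0]]          # toy link-gain matrix
p = iterate_powers(g, noise=0.1, targets=[2.0, 2.0])
```

When the target SINRs are feasible, the iteration contracts to the unique fixed point at which every user exactly meets its target, which is the equilibrium flavor the paper's game formalizes.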
    Ring-based clustering algorithm for nodes non-uniform deployment
    SUN Chao, PENG Li, ZHU Xuefang
    2017, 37(6):  1527-1531.  DOI: 10.11772/j.issn.1001-9081.2017.06.1527
    Abstract | PDF (777KB)
    Aiming at the energy-hole problem in the ring-based non-uniform node deployment network model of Wireless Sensor Networks (WSNs), a Ring-based Clustering Algorithm for Nodes Non-uniform Deployment (RCANND) was proposed. The optimal number of cluster heads in each ring was calculated by minimizing the energy consumption of each ring in the non-uniform deployment model. The cluster-head selectivity was calculated from the residual energy of the nodes, the distance from the base station, and the average distance from the neighbor nodes. Cluster-head rotation was carried out according to the cluster-head selection sequence within each cluster, and the number of cluster formation phases was reduced to improve network energy efficiency. The proposed algorithm was tested in simulation experiments. The results show that the fluctuation of average node energy consumption is very small under the same radius but different node deployment models, and is also not obvious under the same deployment model but different radii. With the network lifetime defined as the time until 50% of the network nodes survive, in the case of non-uniform node deployment, the network lifetime of the proposed algorithm is about 18.1% longer than that of the Unequal Hybrid Energy Efficient Distributed algorithm (UHEED) and about 11.5% longer than that of the Rotated Unequal Hybrid Energy Efficient Distributed algorithm (RUHEED). In the case of uniform node deployment, it is about 6.4% longer than that of sub-Ring-based Energy-efficient Clustering Routing for WSN (RECR). The proposed algorithm can effectively balance energy consumption under different node deployment models and prolong the network lifetime.
    An adaptive clustering algorithm based on grades for wireless sensor networks
    XIAO Wei, TU Yaqing
    2017, 37(6):  1532-1538.  DOI: 10.11772/j.issn.1001-9081.2017.06.1532
    Abstract | PDF (1081KB)
    To solve the problems of short lifetime and low network throughput caused by heterogeneity and mobility in Wireless Sensor Network (WSN) clustering, an Adaptive Clustering Algorithm based on Grades (ACA_G) was proposed. The proposed algorithm runs in rounds, each composed of three stages: the adaptive clustering stage, the cluster construction stage and the data transmission stage. In the adaptive clustering stage, each partition may be subdivided or merged with adjacent partitions according to the change of its number of nodes, so as to keep an appropriate number of nodes in every partition. This adaptive clustering measure solves the unreasonable cluster-head numbers and cluster scales caused by node mobility in WSN. To deal with the phenomenon that heterogeneity causes some nodes to die too fast and shortens the lifetime of the WSN, the node with the highest grade was selected as the cluster head in the cluster construction stage. The grade of each node was calculated according to its residual energy, its speed of energy consumption, its distance to the base station, and its accumulated distance to the other nodes in the same cluster. The experiment was simulated by OMNeT++ and Matlab on a WSN with energy heterogeneity, in which the node mobile speed was chosen randomly in 0-0.6 m/s. The experimental results show that, compared with the Low Energy Adaptive Clustering Hierarchy-Mobile (LEACH-Mobile) algorithm and the Distributed Energy-Efficient Clustering (DEEC) algorithm, the lifetime of the WSN clustered by the proposed algorithm is 30.9% longer, and its network throughput is at least 1.15 times that of the other two algorithms.
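The grade computation described above combines four per-node quantities. A minimal sketch of such a score and the resulting cluster-head election follows; the weights and the linear form are illustrative assumptions, since the abstract does not give the actual formula.

```python
def node_grade(residual_energy, energy_rate, dist_bs, dist_cluster,
               w=(0.4, 0.2, 0.2, 0.2)):
    """Higher residual energy raises the grade; faster energy drain and
    larger distances lower it. Weights w are illustrative, not from the paper."""
    return (w[0] * residual_energy
            - w[1] * energy_rate
            - w[2] * dist_bs
            - w[3] * dist_cluster)

def elect_cluster_head(nodes):
    """nodes: list of (id, residual_energy, energy_rate, dist_bs, dist_cluster).
    Returns the id of the node with the highest grade."""
    return max(nodes, key=lambda n: node_grade(*n[1:]))[0]
```

In practice each quantity would be normalized to a common scale before weighting, so that no single term dominates the grade.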
    Dynamic clustering target tracking based on energy optimization in wireless sensor networks
    WEI Mingdong, HE Xiaomin, XU Liang
    2017, 37(6):  1539-1544.  DOI: 10.11772/j.issn.1001-9081.2017.06.1539
    Abstract | PDF (945KB)
    Concerning the high energy consumption caused by data collision and the cluster selection process in dynamic-clustering target tracking in Wireless Sensor Networks (WSNs), a dynamic clustering method based on energy optimization was proposed. Firstly, a time-division election transmission model was proposed, which actively avoids data collision to reduce the energy consumption of nodes in a dynamic cluster. Secondly, based on energy information and tracking quality, an energy-balanced farthest-node scheduling strategy was proposed to optimize cluster-head node scheduling. Finally, the target tracking task was completed according to the weighted centroid localization algorithm. Under randomly deployed nodes, the experimental results show that the average tracking accuracy of the proposed method for non-linearly moving targets is 0.65 m, which is equivalent to that of the Dynamic Cluster Member Selection method for multi-target tracking (DCMS) and 45.8% better than that of the Distributed Event Localization and Tracking Algorithm (DELTA). Compared with DCMS and DELTA, the proposed algorithm can effectively reduce the energy consumption of the dynamic tracking clusters by 61.1% and prolong the network lifetime.
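The weighted centroid localization step mentioned above is a standard technique: the target position is estimated as the weight-averaged position of the sensing nodes, with weights typically derived from signal strength. A minimal sketch (node positions and weights below are illustrative):

```python
def weighted_centroid(anchors, weights):
    """anchors: list of (x, y) positions of nodes that sense the target;
    weights: per-node weights, e.g. normalized RSSI readings.
    Returns the estimated target position as the weighted centroid."""
    total = sum(weights)
    x = sum(w * ax for (ax, _), w in zip(anchors, weights)) / total
    y = sum(w * ay for (_, ay), w in zip(anchors, weights)) / total
    return x, y

# Four nodes at the corners of a unit square, equal weights:
# the estimate falls at the center of the square.
estimate = weighted_centroid([(0, 0), (1, 0), (0, 1), (1, 1)], [1, 1, 1, 1])
```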
    Energy-balanced routing algorithm in rechargeable wireless sensor networks
    XIE Xiaojun, YU Hao, TAO Lei, ZHANG Xinming
    2017, 37(6):  1545-1549.  DOI: 10.11772/j.issn.1001-9081.2017.06.1545
    Abstract | PDF (845KB)
    Aiming at the energy-balanced routing problem in rechargeable Wireless Sensor Networks (WSNs), a new multi-path routing algorithm and an opportunistic routing algorithm were proposed for the scenario of wireless charging with stable power in a monitoring data collection network, so as to achieve energy balance of the network. Firstly, the relationship model between the charging power and the received power of wireless sensor nodes was constructed based on the theory of electromagnetic propagation. Then, considering the sending and receiving energy consumption of wireless sensor nodes, the energy-balanced routing problem was transformed into a max-min optimization problem of network node lifetime, and the link traffic obtained by linear programming was used to guide data flow allocation in routing. Finally, considering the more realistic scenario of low-power WSNs, an energy-balanced routing algorithm based on opportunistic routing was proposed. The experimental results show that, compared with the Shortest Path Routing (SPR) and Expected Duty-Cycled wakeups minimal routing (EDC) algorithms, the two proposed routing algorithms can effectively improve the utilization ratio of the collected energy and the network lifetime within the working period.
    Fingerprint matching indoor localization algorithm based on dynamic time warping distance for Wi-Fi network
    ZHANG Mingyang, CHEN Jian, WEN Yingyou, ZHAO Hong, WANG Yugang
    2017, 37(6):  1550-1554.  DOI: 10.11772/j.issn.1001-9081.2017.06.1550
    Abstract | PDF (856KB)
    Focusing on the low accuracy of regular fingerprint matching indoor localization algorithms for Wi-Fi networks under signal fluctuation or jamming, a fingerprint matching indoor localization algorithm based on Dynamic Time Warping (DTW) distance for Wi-Fi networks was proposed. Firstly, the Wi-Fi signal characteristics in the localization area were converted into time-series fingerprints according to the sampling sequence, and the similarity between the locating data and the sampling data was obtained by computing the DTW distance of the Wi-Fi signal fingerprints. Then, according to the structural characteristics of the sampling area, the fingerprint sampling of the Wi-Fi signal was divided into three basic dynamic-path sampling methods. Finally, the accuracy and completeness of the fingerprint feature information were increased by combining multiple dynamic-path sampling methods, which improved the accuracy and precision of fingerprint matching. The experimental results show that, compared with the instantaneous fingerprint matching indoor localization algorithm, within a location error of 3 m, the cumulative error frequency of the proposed algorithm is 10% higher for uniform motion and 13% higher for variable motion within the routing area, and 9% higher for crossed curvilinear motion and 3% higher for S-type curvilinear motion within the open area. The proposed algorithm can effectively improve the accuracy and precision of fingerprint matching in real indoor localization applications.
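The DTW distance at the core of the matching step is the classic dynamic-programming recurrence over two sequences; a compact sketch follows (the example sequences are illustrative RSSI-like values, not data from the paper):

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance
    between two 1-D sequences, e.g. RSSI fingerprints sampled along
    a path. Smaller means more similar under elastic time alignment."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of: insertion, deletion, match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Unlike a point-wise Euclidean comparison, DTW tolerates sequences of different lengths and local time shifts, which is what makes it robust to sampling-speed differences along a walking path.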
    Load balancing scheme based on multi-objective optimization for software defined network
    LIU Biguo, SHU Yong'an, FU Yinghui
    2017, 37(6):  1555-1559.  DOI: 10.11772/j.issn.1001-9081.2017.06.1555
    Abstract | PDF (966KB)
    In order to solve the load balancing problem of the Software Defined Network (SDN) control plane, a Dynamic Switch Migration Algorithm based on Multi-objective optimization (M-DSMA) was proposed. Firstly, the mapping relationship between switches and controllers was transformed into a 0-1 matrix optimization problem. Then, two conflicting objective functions, the control plane load balancing degree and the communication overhead generated by switch migration, were optimized simultaneously by a multi-objective genetic algorithm based on the Non-dominated Sorting Genetic Algorithm-Ⅱ (NSGA-Ⅱ). In the multi-objective optimization process, individuals were selected by the fitness function for crossover and mutation, and fast non-dominated sorting together with an elitist strategy was applied to the population; the next generation was thus produced and the whole population continually evolved to search for the global optimal solution. The simulation results show that the proposed M-DSMA can effectively balance the control plane load and reduce the communication overhead by 30% to 50% compared with the Dynamic Switch Migration Algorithm (DSMA), and it has significant advantages in improving control plane scalability.
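The fast non-dominated sorting used by NSGA-Ⅱ ranks candidate solutions into Pareto fronts by their objective vectors. A minimal sketch (minimization convention; the sample points are illustrative two-objective values, not load/overhead figures from the paper):

```python
def fast_nondominated_sort(points):
    """Rank objective vectors (minimization) into Pareto fronts, as in
    NSGA-II. Returns a list of fronts, each a list of point indices."""
    n = len(points)
    dominates = lambda p, q: (all(a <= b for a, b in zip(p, q))
                              and any(a < b for a, b in zip(p, q)))
    dominated_by = [0] * n                  # how many points dominate i
    dominates_set = [[] for _ in range(n)]  # points that i dominates
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominates_set[i].append(j)
            elif dominates(points[j], points[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominates_set[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]
```

In M-DSMA the two objective values per individual would be the load-balancing degree and the migration overhead; the first front holds the non-dominated trade-offs from which the elitist strategy fills the next generation.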
    Design of fault-tolerant router for virtual channel dynamic allocation based on port fault granularity partition
    HANG Yanxi, XU Jinfu, NAN Longmei, GUO Pengfei
    2017, 37(6):  1560-1568.  DOI: 10.11772/j.issn.1001-9081.2017.06.1560
    Abstract | PDF (1275KB)
    High reliability is an important issue in the development of high-performance network-on-chip routers. Concerning the problem that the ports of routers with dynamically allocated virtual channels are prone to failure, a fault-tolerant router design based on port fault granularity partition was proposed. Firstly, a fault and congestion model for ports based on granularity partition was established by combining the specialty of dynamic virtual channel allocation with the characteristics of faults. Then, the related fault-tolerant circuit was designed on the basis of the model together with real-time fault detection methods: an adjacent-port sharing module was added and a fault-tolerant read/write pointer control logic circuit was designed. Finally, a fault-tolerance and congestion mitigation scheme was put forward based on the design. The experimental results show that the proposed router maintains fault tolerance under various port failure modes with little performance degradation, and it has a high ratio of performance improvement to area overhead.
    Application of space-domain adaptive anti-jamming technology in data-link communication
    WU Di
    2017, 37(6):  1569-1573.  DOI: 10.11772/j.issn.1001-9081.2017.06.1569
    Abstract | PDF (801KB)
    The data-link system is easily subjected to interference in the complex battlefield electromagnetic environment, which reduces the interconnectivity performance of the communication system. In order to solve this problem, a new anti-jamming method was proposed that integrates the spatial nulling technology of smart array antennas with the relative positioning technology of the data link. The Direction Of Arrival (DOA) of the desired signal was first obtained through the relative positioning technology of the data link. Then, the radiation pattern of the multi-antenna array was automatically rebuilt, keeping constant gain in the direction of the desired communication signal and forming null steering in the direction of the jamming signal, so as to suppress the jamming signal. The proposed method was tested and validated on a software defined radio platform in the laboratory environment. The experimental results show that the proposed method improves the interference suppression ability of the system by more than 40 dB. It extends the anti-jamming means of the data-link system from the traditional time/frequency domain to the spatial domain, and can be used for anti-jamming in relevant communication systems.
    Dynamic resource allocation strategy in Spark Streaming
    LIU Bei, TAN Xinming, CAO Wenbin
    2017, 37(6):  1574-1579.  DOI: 10.11772/j.issn.1001-9081.2017.06.1574
    Abstract | PDF (982KB)
    When Spark Streaming is selected as the stream processing component of a hybrid large-scale computing platform, the existing resource allocation strategy has a long resource adjustment cycle and cannot sufficiently meet the individual needs of different applications and users. In order to solve these problems, a Dynamic Resource Allocation strategy for Multi-application (DRAM) was proposed, in which global variables were added to control the dynamic resource allocation process. Firstly, the historical data feedback and the global variables were obtained. Then, whether to increase or decrease the resources of each application was determined. Finally, the resource increase or decrease was carried out. The experimental results show that, compared with the original Spark platform strategies such as Streaming and Core, the proposed strategy can effectively adjust resource quotas and reduce processing delay under both stable and unstable data streams, and it can also improve the utilization of cluster resources.
    Energy-efficient strategy for threshold control in big data stream computing environment
    PU Yonglin, YU Jiong, WANG Yuefei, LU Liang, LIAO Bin, HOU Dongxue
    2017, 37(6):  1580-1586.  DOI: 10.11772/j.issn.1001-9081.2017.06.1580
    Abstract | PDF (1225KB)
    In the field of big data real-time analysis and computing, the importance of stream computing keeps growing, while the energy consumed by processing data on stream computing platforms rises constantly. In order to solve this problem, an Energy-efficient Strategy for Threshold Control (ESTC) was proposed by changing how nodes process data in stream computing. First of all, the threshold of each work node was determined according to the system load difference. Secondly, according to the threshold of the work node, the system data stream was randomly sampled to determine the physical voltage to which the system should be adjusted under different data processing situations. Finally, the system power was determined according to the different physical voltages. The experimental results and theoretical analysis show that, in a stream computing cluster consisting of 20 ordinary PCs, a system based on ESTC saves about 35.2% more energy than the original system. In addition, the ratio of performance to energy consumption under ESTC is 0.0803 tuple/(s·J), compared with 0.0698 tuple/(s·J) for the original system. Therefore, the proposed ESTC can effectively reduce energy consumption without affecting system performance.
    Dynamic trust level based ciphertext access control scheme
    CHEN Danwei, YANG Sheng
    2017, 37(6):  1587-1592.  DOI: 10.11772/j.issn.1001-9081.2017.06.1587
    Abstract | PDF (1146KB)
    Concerning the problems of Attribute-Based Encryption (ABE), such as high computational consumption and lack of flexibility in the mobile Internet, a dynamic trust level based Ciphertext-Policy ABE (CP-ABE) scheme was proposed. Firstly, a "trust level" attribute was defined to indicate a user's trusted level and divide users into different classes. A user with a high "trust level" is able to decrypt the message with constant computational overhead. Meanwhile, the Central Authority (CA) is allowed to evaluate the user's access behavior within a certain time threshold, and only the user's "trust level" is updated dynamically by the updating algorithm instead of completely re-generating the secret key. Theoretical analysis and experimental results show that, with a growing proportion of high "trust level" users, the total time consumption of the proposed scheme decreases until becoming stable, finally outperforming the traditional scheme. The proposed scheme can improve access control efficiency in the mobile Internet while keeping the security standard.
    A private set intersection protocol against malicious attack
    LUO Xiaoshuang, YANG Xiaoyuan, WANG Xu'an
    2017, 37(6):  1593-1598.  DOI: 10.11772/j.issn.1001-9081.2017.06.1593
    Abstract | PDF (942KB)
    Aiming at the problem of private set intersection calculation in secure two-party computation, an improved private set intersection protocol based on the Bloom filter was proposed. On the premise of ensuring the security of both parties' privacy, the intersection of two datasets can be calculated such that only one party obtains the intersection elements while the other party learns nothing about them. Neither party can obtain or infer any set elements of the other party beyond the intersection, which ensures the security of both parties' sensitive information. The proposed protocol introduces an identity-based key agreement protocol, which can resist malicious attacks by illegal users, protect privacy, resist the risk of key disclosure, and reduce the amount of encryption and decryption. The proposed protocol is able to support large-scale data computation.
    Privacy-preserving incomplete data Skyline query protocol in two-tiered sensor networks
    ZUO Kaizhong, SHANG Ning, TAO Jian, WANG Taochun
    2017, 37(6):  1599-1604.  DOI: 10.11772/j.issn.1001-9081.2017.06.1599
    Abstract | PDF (1108KB)
    The sensed data of sensor nodes is easily influenced by the external environment, so incomplete data exists widely in wireless sensor networks while the sensed data faces serious privacy threats. Aiming at the problem of privacy leakage during the query of incomplete data in two-tiered sensor networks, a Privacy-Preserving Incomplete data Skyline query protocol for two-tiered sensor networks (PPIS), based on a replacement algorithm and bucket technology, was proposed. To realize Skyline queries over incomplete data, the value of each missing attribute was replaced with the upper bound of the data domain and the incomplete data was then mapped into buckets. To preserve data privacy, the range of each bucket was transformed into a prefix encoding and the prefix encoding was loaded into Bloom filters, so that query processing can be executed by the storage node without the clear text of the sensed data or the real range of the bucket. To preserve the integrity of query results, a Merkle hash tree was used to construct an integrity verification code for verifying the integrity of query results. Theoretical analysis and simulation experiments on a real dataset confirm the privacy and efficiency of PPIS: compared with the existing privacy-preserving Skyline query protocols SMQ (Secure Multidimensional Query) and SSQ (Secure Skyline Query), the proposed PPIS can save more than 70% of the communication cost.
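The replacement step and the Skyline query itself can be sketched in a few lines: missing attributes are filled with the domain upper bound, and a point survives if no other point dominates it. The upper bound and sample tuples below are illustrative assumptions (smaller-is-better convention), not values from the paper.

```python
UPPER = 10  # assumed upper bound of the data domain (illustrative)

def complete(p):
    """Replace missing attributes (None) with the domain upper bound,
    as the protocol does before mapping data into buckets."""
    return tuple(UPPER if v is None else v for v in p)

def dominates(p, q):
    """p dominates q: no worse in every dimension and strictly better
    in at least one (smaller is better here)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Return the non-dominated points after completing missing values."""
    pts = [complete(p) for p in points]
    return [p for p in pts if not any(dominates(q, p) for q in pts)]
```

Filling a missing attribute with the upper bound makes the incomplete tuple maximally pessimistic in that dimension, so it can never unfairly knock a complete tuple out of the Skyline.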
    Multidimensional zero-correlation linear cryptanalysis on Zodiac cipher algorithm
    CHENG Lu, WEI Yuechuan, PAN Xiaozhong, LI Anhui
    2017, 37(6):  1605-1608.  DOI: 10.11772/j.issn.1001-9081.2017.06.1605
    Abstract | PDF (751KB)
    Zodiac is a block cipher algorithm supporting three master key lengths, referred to as Zodiac-128, Zodiac-192 and Zodiac-256. The security of the Zodiac algorithm was evaluated by zero-correlation linear cryptanalysis. Firstly, 10-round zero-correlation linear approximations of the Zodiac algorithm were constructed according to the structural characteristics of the algorithm. Then, multidimensional zero-correlation linear cryptanalysis of 16-round Zodiac-192 was conducted. The analysis results show that 19 key bytes are recovered in the attack, with a data complexity of about 2^124.40 known ciphertexts and a computational complexity of about 2^181.58 16-round encryptions. Thus the full 16-round Zodiac-192 algorithm with a 192-bit key is not immune to zero-correlation linear cryptanalysis.
    Diamond encoding steganography algorithm based on algebraic multigrid
    YANG Ming, HUANG Ying
    2017, 37(6):  1609-1615.  DOI: 10.11772/j.issn.1001-9081.2017.06.1609
    Abstract | PDF (1121KB)
    Concerning the security of steganography algorithms, a Diamond Encoding (DE) steganography algorithm based on Algebraic MultiGrid (AMG) was proposed. Firstly, an image was divided into a coarse-grid part and a fine-grid part by the AMG method. Then, the confidential information was embedded into the pixels of the two parts by the DE method. Changes to pixels in the coarse-grid part have little influence on the whole image quality, while changes to pixels in the fine-grid part affect it greatly. Since the k value of DE is closely associated with the embedding capacity, and pixels change more as k increases, the k value used for the coarse-grid part was set no less than that of the fine-grid part in the embedding process. Finally, with the k value of DE chosen as 1 and 2, three steganography schemes were proposed. The proposed algorithm was compared with Least Significant Bit (LSB) replacement, random LSB matching, the DE algorithm and an adaptive edge detection algorithm. The experimental results show that the first-order Markov security metric of the proposed algorithm is superior to those of the contrasted steganography algorithms.
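Diamond encoding itself is compact enough to sketch: a pixel pair (x, y) carries one secret digit in base l = 2k^2 + 2k + 1 via the value f(x, y) = ((2k+1)x + y) mod l, and embedding moves the pair within the diamond neighbourhood |dx| + |dy| <= k, whose f-values cover all residues mod l. The sample pixel values are illustrative, and pixel-range clamping at 0/255 is omitted for brevity.

```python
def de_embed(x, y, digit, k=1):
    """Diamond encoding: adjust pixel pair (x, y) within the diamond
    neighbourhood |dx| + |dy| <= k so the extraction function equals
    the secret digit in base l = 2k^2 + 2k + 1 (l = 5 when k = 1)."""
    l = 2 * k * k + 2 * k + 1
    f = lambda a, b: ((2 * k + 1) * a + b) % l
    for dx in range(-k, k + 1):
        for dy in range(-k, k + 1):
            if abs(dx) + abs(dy) <= k and f(x + dx, y + dy) == digit % l:
                return x + dx, y + dy
    raise ValueError("unreachable: the diamond covers every residue mod l")

def de_extract(x, y, k=1):
    """Recover the embedded digit from a (possibly modified) pixel pair."""
    l = 2 * k * k + 2 * k + 1
    return ((2 * k + 1) * x + y) % l
```

The distortion per pair is bounded by k in L1 norm, which is why the scheme described above assigns the larger k (larger capacity, larger change) to the coarse-grid pixels where distortion is less visible.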
    Information hiding algorithm based on 3D high efficiency video coding background
    REN Shuai, SUO Li, ZHANG Tao, YANG Tao, MU Dejun
    2017, 37(6):  1616-1619.  DOI: 10.11772/j.issn.1001-9081.2017.06.1616
    Abstract | PDF (663KB)
    In order to solve the security and capacity problems of confidential information transmitted over public networks, an information hiding algorithm based on the High Efficiency Video Coding (HEVC) background was proposed, with the background image of multi-view images in HEVC used as the carrier. Firstly, the background image was decomposed into three gray-scale components by using the lαβ color space theory. Then, the α and β components were transformed by the Discrete Cosine Transform (DCT). Finally, the confidential information was repeatedly embedded into the carrier: the mid-frequency DCT coefficient regions of the α and β components, which carry relatively low energy weight, were both chosen as the hidden regions, which gives the proposed algorithm good invisibility and robustness. The experimental results show that, compared with intra-frame and inter-frame based algorithms, the invisibility of the proposed algorithm is improved by about 16.1% and 11.4% respectively, while its robustness is increased by about 55.5% and 20.2% respectively.
    Software pipelining realization method of AES algorithm based on cipher stream processor
    WANG Shoucheng, XU Jinhui, YAN Yingjian, LI Gongli, JIA Yongwang
    2017, 37(6):  1620-1624.  DOI: 10.11772/j.issn.1001-9081.2017.06.1620
    Abstract | PDF (816KB)
    Aiming at the excessively long execution time of round functions in block cipher implementations, a software pipelining realization method of the Advanced Encryption Standard (AES) algorithm based on a Reconfigurable Cipher Stream Processor (RCSP) was proposed. The operations of the round function were divided into several pipeline segments, with different segments mapped to different cipher resources, and instruction-level parallelism was exploited to accelerate the round function by executing the pipeline segments of multiple round functions in parallel, thus improving the execution efficiency of the block cipher algorithm. The pipeline segment partitioning and the software pipelining mapping of the AES algorithm were analyzed for the computing resources of one cluster, two clusters and four clusters of the RCSP. The experimental results show that the proposed method, which processes different data fragments of one or multiple blocks in parallel, can not only improve the performance of serial execution of a single block, but also improve the performance of parallel execution of multiple blocks by exploiting inter-block parallelism.
    Design and implementation of cloud platform intrusion prevention system based on software defined network
    CHI Yaping, JIANG Tingting, DAI Chuping, SUN Wei
    2017, 37(6):  1625-1629.  DOI: 10.11772/j.issn.1001-9081.2017.06.1625
    Abstract | PDF (941KB)
    The traditional intrusion prevention system is serially connected in the network, so its ability to handle intrusions is limited and it can easily cause network congestion. In order to solve these problems, an intrusion prevention scheme for cloud computing applications was designed based on Software Defined Network (SDN). Firstly, the SDN controller was integrated into the OpenStack platform. Then, by using the programmable characteristics of the controller, a linkage mechanism between intrusion detection and the controller was designed to realize intrusion prevention: when the intrusion detection system detects an intrusion, the intrusion information is passed to the controller, which issues a security policy to the virtual switch to filter the intrusion traffic and dynamically block the intrusion. Finally, the proposed scheme was compared with the traditional intrusion prevention scheme in experiments. The comparison and analysis results show that the proposed scheme can detect more than 90% of intrusions at an arrival rate of 40000 packets per second, while the traditional scheme only detects 85% of intrusions at 12000 packets per second. The proposed scheme can improve the detection efficiency of intrusion prevention in the cloud environment.
    Improved weight distribution method of vulnerability basic scoring index
    XIE Lixia, XU Weihua
    2017, 37(6):  1630-1635.  DOI: 10.11772/j.issn.1001-9081.2017.06.1630
    Abstract | PDF (896KB)
    The basic scoring index weight distribution of the Common Vulnerability Scoring System (CVSS) relies too much on expert experience, which leads to a lack of objectivity. In order to solve this problem, a vulnerability basic scoring index weight distribution method was proposed. Firstly, the relative importance of the scoring elements was sorted. Then, an optimal search method for index weight combinations was used to search for weight combination schemes. Finally, combined with the grey relational analysis method, multiple weight distribution schemes based on expert decisions were used as input to obtain the final weight combination scheme. The experimental results show that, quantitatively, the proposed method produces a gentler score distribution than CVSS, effectively avoiding excessive extreme values, and the dispersion of the score distribution can objectively and effectively distinguish the severity of different vulnerabilities. Qualitative comparative analysis shows that, while the vast majority of vulnerabilities (92.9%) in CVSS are designated with a high severity level, the proposed method achieves a more balanced distribution of vulnerability severity grades.
    Intrusion detection method of deep belief network model based on optimization of data processing
    CHEN Hong, WAN Guangxue, XIAO Zhenjiu
    2017, 37(6):  1636-1643.  DOI: 10.11772/j.issn.1001-9081.2017.06.1636
    Abstract | PDF (1400KB)
    Well-known types of network intrusions can currently be detected with a high detection rate, but it is still very difficult to detect new, unknown types of network intrusions. In order to solve this problem, a network intrusion detection method using a Deep Belief Network (DBN) model based on optimized data processing was proposed, in which the data processing and the model were improved without destroying existing knowledge or seriously increasing detection time. Firstly, data processed by Probability Mass Function (PMF) encoding and MaxMin normalization was fed to the DBN model. Then, a relatively optimal DBN structure was selected by fixing the other parameters, varying one parameter at a time, and using cross validation. Finally, the proposed method was tested on the benchmark NSL-KDD dataset. The experimental results show that the optimized data processing can improve the classification accuracy of the DBN model, that the proposed DBN-based intrusion detection method has good adaptability and a higher recognition ability for unknown samples, and that the detection time of the DBN algorithm is similar to those of the Support Vector Machine (SVM) algorithm and the Back Propagation (BP) neural network model.
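The two preprocessing steps named above are simple to sketch. PMF encoding here is taken to mean replacing each categorical value by its empirical frequency in the column, and MaxMin normalization rescales a numeric column to [0, 1]; the exact variants used in the paper may differ, and the sample columns are illustrative.

```python
def pmf_encode(values):
    """Replace each categorical value by its empirical probability
    (relative frequency) over the column."""
    n = len(values)
    freq = {}
    for v in values:
        freq[v] = freq.get(v, 0) + 1
    return [freq[v] / n for v in values]

def minmax(values):
    """MaxMin normalization: rescale a numeric column to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)  # constant column: map to zeros
    return [(v - lo) / (hi - lo) for v in values]
```

Both steps put every feature on a comparable numeric scale, which is what lets a DBN trained on mixed categorical and numeric intrusion records learn effectively.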
    Distributed denial of service attack recognition based on bag of words model
    MA Linjin, WAN Liang, MA Shaoju, YANG Ting, YI Huifan
    2017, 37(6):  1644-1649.  DOI: 10.11772/j.issn.1001-9081.2017.06.1644
    Abstract   PDF (1115KB)
    The payload of a Distributed Denial of Service (DDoS) attack changes drastically, manually set warning thresholds rely on experience, and the signatures of abnormal traffic are not updated in a timely manner. To address these problems, an improved DDoS attack detection algorithm based on the Binary Stream Point Bag of Words (BSP-BoW) model was proposed. The Stream Points (SPs) were extracted automatically from current network traffic data, adaptive anomaly detection was carried out for networks with different topologies, and labor cost was reduced by shrinking the frequently updated feature set. Firstly, mean clustering was carried out on the existing attack traffic and normal traffic to find the SPs in the network traffic. Then, the original traffic was mapped to the corresponding SPs and formalized as a histogram. Finally, DDoS attacks were detected and classified by Euclidean distance. The experimental results on the public dataset DARPA LLDOS1.0 show that, compared with Locally Weighted Learning (LWL), Support Vector Machine (SVM), Random Tree (RT), logistic regression analysis (Logistic) and Naive Bayes (NB), the proposed algorithm has a higher recognition rate of abnormal network traffic. The proposed BoW-based algorithm has good recognition performance and generalization ability in recognizing the abnormal network traffic of denial of service attacks, and is suitable for deployment on the network traffic equipment of Small and Medium-sized Enterprises (SME).
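The pipeline above can be sketched in miniature: traffic records are assigned to their nearest stream point (a cluster centre), counted into a normalized histogram, and classified by Euclidean distance to reference histograms. This is an illustrative simplification with made-up 2-D records and centres, not the paper's exact feature set.

```python
import math

def nearest(point, centres):
    """Index of the centre (stream point) closest to the record."""
    return min(range(len(centres)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(point, centres[i])))

def to_histogram(records, centres):
    """Bag-of-words histogram: frequency of records per stream point."""
    hist = [0] * len(centres)
    for r in records:
        hist[nearest(r, centres)] += 1
    total = sum(hist)
    return [h / total for h in hist]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

centres = [(0.0, 0.0), (10.0, 10.0)]                 # stream points from clustering
normal  = to_histogram([(0, 1), (1, 0), (9, 10)], centres)
attack  = to_histogram([(9, 9), (10, 11), (11, 10)], centres)
sample  = to_histogram([(10, 10), (9, 11)], centres)
label = "attack" if euclidean(sample, attack) < euclidean(sample, normal) else "normal"
```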
    Android malware application detection using deep learning
    SU Zhida, ZHU Yuefei, LIU Long
    2017, 37(6):  1650-1656.  DOI: 10.11772/j.issn.1001-9081.2017.06.1650
    Abstract   PDF (1160KB)
    Traditional Android malware detection algorithms have low detection accuracy and cannot identify Android malware that employs repackaging and code obfuscation. To solve these problems, the DeepDroid algorithm was proposed. Firstly, the static and dynamic features of Android applications were extracted, and application features were created by combining the two. Secondly, the Deep Belief Network (DBN) of deep learning was used to train on the collected training set and generate a deep learning network. Finally, untrusted Android applications were detected by the generated network. The experimental results show that, on the same test set, the accuracy of the DeepDroid algorithm is 3.96 percentage points higher than that of the Support Vector Machine (SVM) algorithm, 12.16 percentage points higher than that of the Naive Bayes algorithm, and 13.62 percentage points higher than that of the K-Nearest Neighbor (KNN) algorithm. The DeepDroid algorithm combines the static and dynamic features of Android applications; by uniting dynamic and static detection, it makes up for the insufficient code coverage of static detection and the high false positive rate of dynamic detection. By using the DBN algorithm for feature recognition, DeepDroid achieves both high network training speed and high detection accuracy.
    Fast proximity testing method with privacy preserving in mobile social network
    CUI Weirong, DU Chenglie
    2017, 37(6):  1657-1662.  DOI: 10.11772/j.issn.1001-9081.2017.06.1657
    Abstract   PDF (948KB)
    Concerning the problem of protecting users' location privacy in proximity testing, a new method for fast proximity testing with privacy preserving was proposed. The map was divided into a grid by the proposed method. In the process of proximity testing, firstly, the vicinity region of a user was transformed into a collection of surrounding grid cells. Then, the intersection of the users' vicinity regions was computed in a privacy-preserving manner by using Private Set Intersection (PSI). Finally, proximity was determined by whether the intersection was empty. The results of analysis and experiment show that, compared with the existing methods based on private equality testing and on coordinate transformation, the proposed method can solve the fairness issue of privacy preserving in proximity testing, resist collusion attacks between the server and a user, and achieve higher computational efficiency.
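The grid idea can be sketched as follows. For illustration, the vicinity is a square of the given radius and the PSI protocol is replaced by a plain set intersection; in the actual method the intersection would be computed cryptographically so neither party reveals its cells.

```python
def vicinity_cells(x, y, cell_size, radius):
    """Grid cells covering a square vicinity of the given radius (illustrative)."""
    cells = set()
    gx0, gx1 = int((x - radius) // cell_size), int((x + radius) // cell_size)
    gy0, gy1 = int((y - radius) // cell_size), int((y + radius) // cell_size)
    for gx in range(gx0, gx1 + 1):
        for gy in range(gy0, gy1 + 1):
            cells.add((gx, gy))
    return cells

def is_proximate(cells_a, cells_b):
    """Users are proximate iff their vicinity cell sets intersect
    (stand-in for the PSI step)."""
    return len(cells_a & cells_b) > 0

alice = vicinity_cells(12.0, 7.0, cell_size=10.0, radius=10.0)
bob   = vicinity_cells(25.0, 9.0, cell_size=10.0, radius=10.0)
carol = vicinity_cells(95.0, 90.0, cell_size=10.0, radius=10.0)
near = is_proximate(alice, bob)    # overlapping vicinities
far  = is_proximate(alice, carol)  # disjoint vicinities
```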
    Scheduling strategy of value evaluation for output-event of actor based on cyber-physical system
    ZHANG Jing, CHEN Yao, FAN Hongbo, SUN Jun
    2017, 37(6):  1663-1669.  DOI: 10.11772/j.issn.1001-9081.2017.06.1663
    Abstract   PDF (1059KB)
    The real-time process of state transitions in a cyber-physical system affects the performance and correctness of the system. To solve this problem, aiming at the state transition process of a system driven by actor output-events, a new scheduling strategy of value evaluation for the output-events of actors, named Value Evaluation-Information Entropy and Quality of Data (VE-IE&QoD), was proposed. Firstly, the real-time property of an event was expressed through the super-dense time model, and the self-information of the output-event, the information entropy of the actor, and the quality of data were defined as the function indexes of value evaluation. Then, value evaluation was performed on the process of the actor executing a task, and a suitably increased weighting coefficient for the parametric equation was considered. Finally, discrete event models containing the proposed VE-IE&QoD scheduling strategy, the traditional Earliest Deadline First (EDF) scheduling algorithm and the Information Entropy* (IE*) scheduling strategy were built on the Ptolemy Ⅱ platform. The operation of the different algorithm models was analyzed, and the changes in value evaluation and execution time of the different models were compared. The experimental results show that the VE-IE&QoD scheduling strategy can reduce the average system execution time and improve memory usage efficiency and task value evaluation, thereby improving system performance and correctness to some extent.
    Adaptive control design for a class of nonlinear systems based on extended BP neural network
    CHEN Haoguang, WANG Yinhe
    2017, 37(6):  1670-1673.  DOI: 10.11772/j.issn.1001-9081.2017.06.1670
    Abstract   PDF (611KB)
    Aiming at the uncertainty of Single-Input Single-Output (SISO) nonlinear systems, a novel adaptive control design based on an extended Back Propagation (BP) neural network was proposed. Firstly, the weight vectors of the BP neural network were trained on offline data. Then, the scaling factor and the estimation parameter of the approximation accuracy were adjusted online through the update law to control the whole system. In the design of the controller, Lyapunov stability analysis was used to establish an adaptive control scheme guaranteeing that all the states of the closed-loop system are Uniformly Ultimately Bounded (UUB). Compared with the traditional adaptive control method based on BP neural networks, the proposed method effectively decreases the number of parameters adjusted online and reduces the computational burden. The simulation results show that the proposed method can make all the states of the closed-loop system tend to zero, which means the system reaches the steady state.
    Method for solving Lasso problem by utilizing multi-dimensional weight
    CHEN Shanxiong, LIU Xiaojuan, CHEN Chunrong, ZHENG Fangyuan
    2017, 37(6):  1674-1679.  DOI: 10.11772/j.issn.1001-9081.2017.06.1674
    Abstract   PDF (809KB)
    The Least absolute shrinkage and selection operator (Lasso) performs well in data dimension reduction and anomaly detection. Concerning the low accuracy of Lasso-based anomaly detection, a Least Angle Regression (LARS) algorithm based on multi-dimensional weight was proposed. Firstly, the fact that each regression variable carries a different weight in the regression model was taken into account; that is, the attribute variables differ in importance in the overall evaluation. Accordingly, when calculating the angular bisector in the LARS algorithm, the joint correlation between the regression variables and the residual vector was introduced to distinguish the effects of different attribute variables on the detection results. Then, three weight estimation methods, Principal Component Analysis (PCA), independent weight evaluation, and CRiteria Importance Through Intercriteria Correlation (CRITIC), were added to the LARS algorithm respectively, further optimizing the approach direction and the selection of approach variables in the LARS solution. Finally, the Pima Indians Diabetes dataset was used to verify the proposed algorithm. The experimental results show that, under the same constraint condition with a smaller threshold value, the LARS algorithm based on multi-dimensional weight has higher accuracy than traditional LARS and is better suited to anomaly detection.
    Linear kernel support vector machine based on dual random projection
    XI Xi, ZHANG Fengqin, LI Xiaoqing, GUAN Hua, CHEN Guirong, WANG Mengfei
    2017, 37(6):  1680-1685.  DOI: 10.11772/j.issn.1001-9081.2017.06.1680
    Abstract   PDF (809KB)
    Aiming at the low classification accuracy of large-scale Support Vector Machines (SVM) after random-projection-based feature dimensionality reduction, a linear kernel SVM based on dual random projection (drp-LSVM) for large-scale classification problems was proposed by introducing the dual recovery theory. Firstly, the relevant geometric properties of drp-LSVM were analyzed and demonstrated. It was proved that, while maintaining geometric advantages similar to those of the linear kernel SVM based on random projection (rp-LSVM), the separating hyperplane of drp-LSVM is closer to the primitive classifier trained on the complete data. Then, for the fast solution of drp-LSVM, the traditional Sequential Minimal Optimization (SMO) algorithm was improved, and a drp-LSVM classifier based on the improved SMO algorithm was implemented. The experimental results show that drp-LSVM inherits the advantages of rp-LSVM, reduces the classification error, improves the training accuracy, and brings all of its performance indexes closer to those of the classifier trained on the primitive data; the classifier designed with the improved SMO algorithm can reduce memory consumption and achieve higher training accuracy.
    Ant colony optimization algorithm based on improved pheromones double updating and local optimization for solving TSP
    XU Kaibo, LU Haiyan, CHENG Biyun, HUANG Yang
    2017, 37(6):  1686-1691.  DOI: 10.11772/j.issn.1001-9081.2017.06.1686
    Abstract   PDF (961KB)
    Concerning the drawbacks of the Ant Colony Optimization (ACO) algorithm, such as a low convergence rate and a tendency to fall into local optima, an ACO algorithm based on Improved Pheromones Double Updating and Local Optimization (IPDULACO) was proposed. Double updating was performed on the pheromones of the subpaths whose contribution degrees to the current global optimal solution obtained by the colony exceeded a prescribed path contribution threshold. The selection probability of the subpaths constituting a potential optimal solution was thus increased, accelerating the convergence of the algorithm. Then, when the ant colony fell into a local optimal solution during the search, the random insertion method was utilized to change the city sequence of the current local optimal solution in order to enhance the algorithm's ability to jump out of local optima. The improved algorithm was applied to several classical Traveling Salesman Problem (TSP) instances in simulation experiments. The experimental results show that, for small-size TSP instances, IPDULACO can obtain the known optimal solution in fewer iterations, and for relatively large-size TSP instances, it can obtain the optimal solution with higher accuracy in fewer iterations. Therefore, IPDULACO has a stronger ability to search for the global optimal solution, converges faster, and can be used to solve TSP effectively.
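The random-insertion perturbation can be sketched as follows: one city is removed from the current best tour and reinserted at a random position, producing a nearby tour from which the search can escape a local optimum. This is a minimal sketch of the perturbation step only, not of the full IPDULACO algorithm.

```python
import random

def random_insertion(tour, rng=random):
    """Remove a random city from the tour and reinsert it at a random position."""
    new_tour = tour[:]
    i = rng.randrange(len(new_tour))
    city = new_tour.pop(i)
    j = rng.randrange(len(new_tour) + 1)
    new_tour.insert(j, city)
    return new_tour

rng = random.Random(42)       # fixed seed for reproducibility
tour = [0, 1, 2, 3, 4, 5]     # a toy tour over six cities
perturbed = random_insertion(tour, rng)
# The perturbed tour visits exactly the same cities, generally in a new order.
```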
    Improved multi-class AdaBoost algorithm based on stagewise additive modeling using a multi-class exponential loss function
    ZHAI Xiyang, WANG Xiaodan, LEI Lei, WEI Xiaohui
    2017, 37(6):  1692-1696.  DOI: 10.11772/j.issn.1001-9081.2017.06.1692
    Abstract   PDF (877KB)
    Stagewise Additive Modeling using a Multi-class Exponential loss function (SAMME) is a multi-class AdaBoost algorithm. To further improve the performance of SAMME, the influence of using the weighted error rate and the pseudo loss on the SAMME algorithm was studied, and a dynamically weighted Adaptive Boosting (AdaBoost) algorithm named SAMME with Resampling and Dynamic weighting (SAMME.RD) was proposed, based on classifying the effective neighborhood area of a sample with the base classifier. Firstly, whether to use the weighted probability and the pseudo loss was determined. Then, the effective neighborhood area of the sample to be tested was found in the training set. Finally, the weighting coefficient of the base classifier was determined according to the classification result of the base classifier on the effective neighborhood area. The experimental results show that calculating the weighting coefficient of the base classifier with the real error rate works better; selecting the base classifier by the real probability performs better when the dataset has fewer classes and a balanced distribution, while selecting it by the weighted probability performs better when the dataset has more classes and an imbalanced distribution. The proposed SAMME.RD algorithm can effectively improve the multi-class classification accuracy of AdaBoost.
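For context, the base-classifier weight in the standard SAMME algorithm (which SAMME.RD builds on) adds a log(K-1) term to the binary AdaBoost weight, so a multi-class base classifier only needs to beat random guessing. The numbers below are illustrative.

```python
import math

def samme_alpha(weighted_error, num_classes):
    """Base-classifier weight in SAMME: log((1-err)/err) + log(K-1).
    Positive whenever the classifier beats random guessing (err < (K-1)/K)."""
    return (math.log((1.0 - weighted_error) / weighted_error)
            + math.log(num_classes - 1))

# With K = 4 classes, random guessing errs 75% of the time, so even a 60%
# weighted error still yields a positive weight, unlike in binary AdaBoost.
alpha = samme_alpha(0.6, num_classes=4)
```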
    Multi-label classification algorithm based on user identity
    ZHENG Xiaoxue, ZHANG Dafang, DIAO Zulong
    2017, 37(6):  1697-1701.  DOI: 10.11772/j.issn.1001-9081.2017.06.1697
    Abstract   PDF (857KB)
    At present, there is no way to measure home-school communication in a smart campus. Concerning the obvious identity characteristics shown when chatting in a smart campus, a new multi-label classification algorithm named Adaboost.ML (multi-class, multi-label version of Adaboost based on user identity) was proposed. Firstly, heuristic rules were added to the proposed algorithm. Then, the Adaboost.MH (multi-class, multi-label version of Adaboost based on Hamming loss) algorithm was introduced, and the concept of dataset sharding was discarded. Finally, single data items were used as the focus of analysis, which reduced the inference time and the error caused by the edges of time slices, and a comprehensive decision about the relationship between the chatting users was made. The experimental results show that, compared with the rule-based heuristic algorithm, the false positive rate of the proposed algorithm is decreased by 53% and its false negative rate by 66% on the smart campus dataset. The proposed algorithm also achieves good classification results on the WeChat dataset. It has been applied to a smart campus project, where it captures home-school communication quickly and accurately.
    Aircraft detection and recognition based on deep convolutional neural network
    YU Rujie, YANG Zhen, XIONG Huilin
    2017, 37(6):  1702-1707.  DOI: 10.11772/j.issn.1001-9081.2017.06.1702
    Abstract   PDF (1130KB)
    Aiming at the specific application scenario of aircraft detection in large-scale satellite images of military airports, a real-time target detection and recognition framework was proposed, which applies deep Convolutional Neural Networks (CNN) to the detection and recognition of aircraft in large-scale satellite images. Firstly, the task of aircraft detection was regarded as a regression problem over spatially independent bounding-boxes, and a 24-layer convolutional neural network model was used for bounding-box prediction. Then, an image classification network was used to classify the target slices. Traditional target detection and recognition algorithms on large-scale images can rarely achieve a breakthrough in time efficiency; the proposed CNN-based framework makes full use of the computing hardware and greatly shortens the execution time. The framework was tested on a self-collected dataset consistent with the application scenario. The average time of the proposed framework is 5.765 s per input image, and the precision is 79.2% at the operating point with a recall of 65.1%. The average time of the classification network is 0.972 s per image with a Top-1 error rate of 13%. The proposed framework provides a new solution for aircraft detection in large-scale satellite images of military airports, with relatively high efficiency and precision.
    Improved pedestrian detection method based on convolutional neural network
    XU Chao, YAN Shengye
    2017, 37(6):  1708-1715.  DOI: 10.11772/j.issn.1001-9081.2017.06.1708
    Abstract   PDF (1327KB)
    In order to choose a better model and acquire a more accurate bounding-box when using a Convolutional Neural Network (CNN) for pedestrian detection, an improved pedestrian detection method based on CNN was proposed. The improvements cover two aspects: how to determine the number of training iterations of the CNN, and how to merge multiple responses for one object. For the first improvement, multiple candidate CNN classifiers were learned from different training samples at different training iterations, and a new strategy was proposed to select the model with better generalization ability, considering both the accuracy on the validation set and the stability of the accuracies during the iterative training procedure. For the second improvement, an enhanced refined bounding-box combination method, different from Non-Maximum Suppression (NMS), was proposed. Each coarse bounding-box output by the CNN detection procedure was taken as the input of a CNN accurate-positioning process that produced a one-to-one refined bounding-box. The multiple refined bounding-boxes were then merged according to the correction probability of each bounding-box; specifically, the final output bounding-box was obtained as the weighted average of the relevant refined bounding-boxes with respect to their correction probabilities. Comprehensive experiments on the well-recognized pedestrian detection benchmark dataset ETH show that the two proposed improvements effectively improve the detection performance of the system. Compared with the benchmark method of Fast Region-based CNN (Fast R-CNN), the detection performance of the method with both improvements is raised by 5.06 percentage points under the same test conditions.
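The weighted-average merge of refined bounding-boxes can be sketched as follows; the boxes and correction probabilities are made-up values, and unlike NMS the low-confidence box is averaged in rather than discarded.

```python
def merge_boxes(boxes, probs):
    """Merge boxes (x1, y1, x2, y2) by a weighted average with respect to
    their correction probabilities, instead of keeping only the top box."""
    total = sum(probs)
    return tuple(sum(p * b[k] for p, b in zip(probs, boxes)) / total
                 for k in range(4))

# Three refined boxes for the same pedestrian (illustrative coordinates):
boxes = [(10, 10, 50, 90), (14, 12, 54, 94), (30, 30, 70, 110)]
probs = [0.9, 0.8, 0.1]   # correction probability of each refined box
merged = merge_boxes(boxes, probs)
# The merged box stays close to the two high-confidence detections.
```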
    Dictionary learning algorithm based on Fisher discriminative criterion constraint of atoms
    LI Zhengming, YANG Nanyue, CEN Jian
    2017, 37(6):  1716-1721.  DOI: 10.11772/j.issn.1001-9081.2017.06.1716
    Abstract   PDF (1114KB)
    In order to improve the discriminative ability of a dictionary, a dictionary learning algorithm based on a Fisher discriminative criterion constraint on the atoms, called Fisher Discriminative Dictionary Learning of Atoms (AFDDL), was proposed. Firstly, a class-specific dictionary learning algorithm was used to assign a class label to each atom, and the scatter matrices of within-class atoms and between-class atoms were calculated. Then, the difference between the within-class and between-class scatter matrices was taken as the Fisher discriminative criterion constraint to maximize the differences between atoms of different classes. The difference between atoms of the same class was minimized as their autocorrelation was reduced, which makes atoms of the same class reconstruct one type of sample as far as possible and improves the discriminative ability of the dictionary. Experiments were carried out on the AR, FERET and LFW face databases and the USPS handwriting database. The experimental results show that, on the four image databases, the proposed algorithm achieves a higher recognition rate and less training time than the Label Consistent K-means-based Singular Value Decomposition (LC-KSVD) algorithm, the Locality Constrained and Label Embedding Dictionary Learning (LCLE-DL) algorithm, the Support Vector Guided Dictionary Learning (SVGDL) algorithm and the Fisher Discriminative Dictionary Learning (FDDL) algorithm, as well as a higher recognition rate than Sparse Representation based Classification (SRC) and Collaborative Representation based Classification (CRC).
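The two quantities the Fisher constraint trades off can be illustrated on toy labelled atoms; the sketch below computes the traces of the within-class and between-class scatter (sums of squared distances to the class mean and of class means to the overall mean), with made-up 2-D atoms rather than learned dictionary columns.

```python
def mean(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

def scatter(vectors, centre):
    """Sum of squared distances of the vectors to the given centre."""
    return sum(sum((v[k] - centre[k]) ** 2 for k in range(len(centre)))
               for v in vectors)

atoms = {0: [[1.0, 0.1], [0.9, 0.0]],    # toy class-0 atoms
         1: [[0.0, 1.0], [0.1, 0.9]]}    # toy class-1 atoms
overall = mean([a for group in atoms.values() for a in group])
within = sum(scatter(group, mean(group)) for group in atoms.values())
between = sum(len(group) * scatter([mean(group)], overall)
              for group in atoms.values())
# Fisher-style constraint: minimise `within` while maximising `between`.
```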
    CT/MR brain image fusion method via improved coupled dictionary learning
    DONG Xia, WANG Lifang, QIN Pinle, GAO Yuan
    2017, 37(6):  1722-1727.  DOI: 10.11772/j.issn.1001-9081.2017.06.1722
    Abstract   PDF (1146KB)
    At present, the dictionary training process is time-consuming, and it is difficult to obtain an accurate sparse representation of brain medical images with a single dictionary, which makes image fusion inefficient. To solve these problems, a Computed Tomography (CT)/Magnetic Resonance (MR) brain image fusion method via improved coupled dictionary learning was proposed. Firstly, the CT and MR images were taken as the training set, and coupled CT and MR dictionaries were obtained through joint dictionary training based on an improved K-means-based Singular Value Decomposition (K-SVD) algorithm. The atoms of the CT and MR dictionaries were regarded as features of the training images, and feature indicators of the dictionary atoms were calculated by information entropy. Then, the atoms with small differences in their feature indicators were regarded as common features, and the remaining atoms as innovative features. A fusion dictionary was obtained by using the "mean" and "choose-max" rules to fuse the common and innovative features of the CT and MR dictionaries separately. Furthermore, the registered source images were arranged into column vectors and the mean value was subtracted. The accurate sparse representation coefficients were computed by the Coefficient Reuse Orthogonal Matching Pursuit (CoefROMP) algorithm under the fusion dictionary, and the sparse representation coefficients and the mean vectors were fused by the "2-norm max" and "weighted average" rules respectively. Finally, the fusion image was obtained by reconstruction. The experimental results show that, compared with three methods based on multi-scale transform and three methods based on sparse representation, the images fused by the proposed method have better visual quality in brightness, sharpness and contrast; the mean values of the objective metrics, namely mutual information and the gradient-based, phase-congruency-based and universal image quality indexes, are 4.1133, 0.7131, 0.4636 and 0.7625 respectively under three groups of experimental conditions, and the average time of the dictionary learning phase over 10 experimental conditions is 5.96 min. The proposed method can be used for clinical diagnosis and assistant treatment.
    Knowledge integration and semantic annotation in closed-loop lifecycle management system
    SANG Cheng, CHENG Jian, SHI Yiming
    2017, 37(6):  1728-1734.  DOI: 10.11772/j.issn.1001-9081.2017.06.1728
    Abstract   PDF (1131KB)
    The knowledge in a closed-loop lifecycle management system is isolated and cannot be shared. To solve this problem, aiming at the characteristics of the closed-loop lifecycle, a new knowledge integration and semantic annotation method was proposed. Firstly, the connotation of knowledge integration and semantic annotation in a closed-loop lifecycle management system was briefly expounded. Secondly, a multi-dimensional and multi-level knowledge integration framework was constructed for low-temperature plasma equipment by using ontology technology. Then, on this basis, a computing method for extracting and matching document semantic vectors and ontology semantic vectors was designed, and the semantic annotation of the knowledge documents of one subsystem of the low-temperature plasma equipment was completed. Finally, test experiments were designed and verified. The experimental results show that, when using the knowledge document dataset of the closed-loop lifecycle management system for semantic annotation, the average precision of the proposed method is 84% and its average recall is 79%. The proposed method can realize the sharing and reuse of knowledge documents in the closed-loop lifecycle management system.
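Matching a document semantic vector against ontology concept vectors typically reduces to a cosine similarity over a shared concept vocabulary; the sketch below shows this with made-up term weights (the vector construction itself is the paper's contribution and is not reproduced here).

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two semantic vectors; 0.0 for zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy term-weight vectors over a shared concept vocabulary (illustrative only):
doc_vec       = [0.8, 0.1, 0.0, 0.3]   # document semantic vector
concept_vec   = [0.7, 0.0, 0.1, 0.4]   # matching ontology concept
unrelated_vec = [0.0, 0.9, 0.1, 0.0]   # unrelated ontology concept
sim_match     = cosine_similarity(doc_vec, concept_vec)
sim_unrelated = cosine_similarity(doc_vec, unrelated_vec)
# The document would be annotated with the concept of highest similarity.
```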
    Product property sentiment analysis based on neural network model
    LIU Xinxing, JI Donghong, REN Yafeng
    2017, 37(6):  1735-1740.  DOI: 10.11772/j.issn.1001-9081.2017.06.1735
    Abstract   PDF (897KB)
    Concerning the poor results of product property sentiment analysis obtained by simple word-vector-based neural network models, a gated recursive neural network model integrating discrete features and word embeddings was proposed. Firstly, sentences were modeled with a directed recurrent graph, and the gated recursive neural network model was adopted to perform product property sentiment analysis. Then, the discrete features and word embeddings were integrated into the gated recursive neural network. Finally, feature extraction and sentiment analysis were performed under three different task models: a pipeline model, a joint model and a collapsed model. Experiments were conducted on the laptop and restaurant review datasets of SemEval-2014, with the macro F1 score as the evaluation indicator. The gated recursive neural network model achieved F1 scores of 48.21% and 62.19%, nearly 1.5 percentage points higher than those of the ordinary recursive neural network model, indicating that the gated network can capture complicated features and enhance performance on product property sentiment analysis. The proposed model integrating discrete features and word embeddings achieved F1 scores of 49.26% and 63.31%, higher than the baseline methods by 0.5 to 1.0 percentage points, showing that discrete features and word embeddings complement each other, and that a neural network model based only on word embeddings still has room for improvement. Among the three task models, the pipeline model achieves the highest F1 scores, so it is better to perform feature extraction and sentiment analysis separately.
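The macro F1 evaluation metric used above is the unweighted mean of per-class F1 scores; a minimal sketch with made-up gold and predicted sentiment labels:

```python
def macro_f1(gold, pred, labels):
    """Unweighted mean of per-class F1 scores."""
    scores = []
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)

# Illustrative gold labels and predictions for five product properties:
gold = ["pos", "pos", "neg", "neu", "neg"]
pred = ["pos", "neg", "neg", "neu", "neg"]
score = macro_f1(gold, pred, ["pos", "neg", "neu"])
```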
    Sentence composition model for reading comprehension
    WANG Yuanlong
    2017, 37(6):  1741-1746.  DOI: 10.11772/j.issn.1001-9081.2017.06.1741
    Abstract   PDF (965KB)
    Reading comprehension of documents in Natural Language Processing (NLP) requires technologies for document representation, understanding and reasoning. Aiming at the multiple-choice questions of literature reading comprehension in the college entrance examination, a sentence composition model based on the hierarchical composition model was proposed, which can measure semantic consistency at the sentence level. Firstly, a neural network model was trained on triples consisting of word and phrase vectors. Then, sentence vectors were composed by the trained neural network model (with two composition methods: recursive and recurrent) to obtain the distributed vector of a sentence, and the similarity between two sentences was measured by the cosine similarity of their sentence vectors. To verify the proposed method, 769 simulation materials and 13 Beijing college entrance examination materials (each including the source text and the choice questions) were collected as the test set. The experimental results show that, compared with the traditional optimal method based on HowNet semantics, the precision of the proposed recurrent method is improved by 7.8 percentage points on the college entrance examination materials and by 2.7 percentage points on the simulation materials.
    A new compressed vertex chain code
    WEI Wei, DUAN Xiaodong, LIU Yongkui, GUO Chen
    2017, 37(6):  1747-1752.  DOI: 10.11772/j.issn.1001-9081.2017.06.1747
    Abstract   PDF (940KB)
    Chain codes are a coding technology that can represent lines, curves and region boundaries with small data storage. To improve the compression efficiency of chain codes, a new compressed vertex chain code, named Improved Orthogonal 3-Direction Vertex Chain Code (IO3DVCC), was proposed. It combines the statistical characteristics of the Vertex Chain Code (VCC) with the directional characteristics of the OrThogonal 3-direction chain code (3OT), and defines five code values in total. The combinations 1,3 and 3,1 in VCC were merged and expressed by code 1. Code 2 has the same expression as the corresponding VCC code value, and code 3 the same as code value 2 of 3OT. Codes 4 and 5 correspond to two consecutive code values 1 of IO3DVCC and eight consecutive code values 2 of VCC, respectively. Based on Huffman coding, the new chain code is a variable-length code. The code value probability, average expression ability, average length and efficiency of IO3DVCC, the Enhanced Relative 8-Direction Freeman Chain Code (ERD8FCC), the Arithmetic-encoded Variable-length Relative 4-direction Freeman chain code (AVRF4), Arithmetic coding applied to the 3OT chain code (Arith_3OT), the Compressed VCC (CVCC) and the Improved CVCC (ICVCC) were calculated on the contour boundaries of 100 images. The experimental results show that the efficiency of IO3DVCC is the highest. The total code number, total number of binary bits, and compression ratio relative to the 8-Direction Freeman Chain Code (8DFCC) of the three chain codes IO3DVCC, Arith_3OT and ICVCC were calculated on the contour boundaries of 20 randomly selected images. The results demonstrate that the compression effect of IO3DVCC is the best.
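How Huffman coding yields the average length and efficiency of a 5-symbol chain code can be sketched as follows; the symbol probabilities are illustrative assumptions, not the frequencies measured on the paper's 100 test images.

```python
import heapq, math

def huffman_lengths(probs):
    """Code length of each symbol under an optimal Huffman tree."""
    # Heap items: (probability, unique tiebreak id, {symbol: depth}).
    heap = [(p, i, {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    tick = len(probs)
    while len(heap) > 1:
        p1, _, l1 = heapq.heappop(heap)
        p2, _, l2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**l1, **l2}.items()}  # deepen subtree
        heapq.heappush(heap, (p1 + p2, tick, merged))
        tick += 1
    return heap[0][2]

probs = [0.40, 0.25, 0.20, 0.10, 0.05]   # hypothetical 5-code-value frequencies
lengths = huffman_lengths(probs)
avg_len = sum(p * lengths[i] for i, p in enumerate(probs))
entropy = -sum(p * math.log2(p) for p in probs)
efficiency = entropy / avg_len   # closer to 1 means better coding
```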
    Sketch-based image retrieval method using local geometry moment invariant
    BAO Zhenhua, KANG Baosheng, ZHANG Lei, ZHANG Jing
    2017, 37(6):  1753-1758.  DOI: 10.11772/j.issn.1001-9081.2017.06.1753
    Abstract ( )   PDF (925KB) ( )  
    References | Related Articles | Metrics
    The difficulty in sketch-based image retrieval lies in effectively recognizing images with different scales, positions, rotations and deformations. In order to identify and retrieve images of different scales, positions and rotations more accurately, a Sketch-Based Image Retrieval method Using Local Geometry Moment Invariant (SBIRULGMI) was proposed. Firstly, the geometric characteristics of an image were used to determine its coordinate system. Secondly, the image was divided evenly into blocks based on the generated coordinate system, and the geometry moment invariants of the blocks were calculated to form a feature vector. Then, the similarities between the query sketch and the images in the database were calculated by Euclidean distance. Finally, the retrieval results obtained from the similarity ranking were optimized with Ant Colony Optimization (ACO). Compared with Shape Context (SC), Edge Orientation Histogram (EOH), GAbor Local lIne-based Feature (GALIF) and MindFinder, the retrieval accuracy of the proposed method on the MPEG-7 shape1 part B image database was increased by 17 percentage points on average. The experimental results show that the proposed method not only recognizes images well after translation, scaling and flipping transformations, but is also robust to a certain degree of rotation and deformation.
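    As an illustration of block-wise geometry moment invariants, the sketch below computes Hu's first invariant (a translation- and scale-normalized second-order moment, assumed here as a stand-in for the paper's exact invariants) over an evenly divided grid:

```python
import numpy as np

def hu_phi1(block):
    """Hu's first moment invariant phi1 = eta20 + eta02 of a 2-D block."""
    block = np.asarray(block, dtype=float)
    total = block.sum()
    if total == 0:
        return 0.0
    ys, xs = np.mgrid[:block.shape[0], :block.shape[1]]
    xbar = (xs * block).sum() / total          # centroid (translation norm.)
    ybar = (ys * block).sum() / total
    mu20 = ((xs - xbar) ** 2 * block).sum()    # central moments
    mu02 = ((ys - ybar) ** 2 * block).sum()
    return (mu20 + mu02) / total ** 2          # eta normalization (scale)

def block_feature_vector(img, grid=4):
    """Split img into grid x grid blocks, one invariant per block."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            blk = img[i * h // grid:(i + 1) * h // grid,
                      j * w // grid:(j + 1) * w // grid]
            feats.append(hu_phi1(blk))
    return np.array(feats)
```

    Similarity between a sketch and a database image then reduces to the Euclidean distance between their two feature vectors.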
    Feature detection and description algorithm based on ORB-LATCH
    LI Zhuo, LIU Jieyu, LI Hui, ZHOU Xiaogang, LI Weipeng
    2017, 37(6):  1759-1762.  DOI: 10.11772/j.issn.1001-9081.2017.06.1759
    Abstract ( )   PDF (794KB) ( )  
    References | Related Articles | Metrics
    The binary descriptor based on Learned Arrangements of Three Patch Codes (LATCH) lacks scale invariance, and its rotation invariance depends on the feature detector, so a new feature detection and description algorithm was proposed based on Oriented fast and Rotated Binary robust independent elementary feature (ORB) and LATCH. Firstly, Features from Accelerated Segment Test (FAST) was adopted to detect corner features on the scale space of the image pyramid. Then, the intensity centroid method of ORB was used to obtain orientation compensation. Finally, LATCH was used to describe the features. The experimental results indicate that the proposed algorithm has low computational complexity, high real-time performance, rotation invariance and scale invariance. Under the same accuracy, the recall rate of the proposed algorithm is better than those of the ORB and HARRIS-LATCH algorithms, and its inlier matching rate is higher than that of ORB by 4.2 percentage points. In conclusion, the proposed algorithm narrows the performance gap with histogram-based algorithms such as Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Feature (SURF) while maintaining real-time performance, and can process image sequences quickly and accurately in real time.
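    The intensity centroid orientation step that the algorithm reuses from ORB can be sketched in a few lines (pure Python, square patch assumed):

```python
import math

def intensity_centroid_angle(patch):
    """Patch orientation via ORB's intensity centroid:
    theta = atan2(m01, m10), with first-order moments m10, m01
    taken about the geometric centre of the patch."""
    n = len(patch)
    c = (n - 1) / 2.0  # patch centre
    m10 = sum((x - c) * patch[y][x] for y in range(n) for x in range(n))
    m01 = sum((y - c) * patch[y][x] for y in range(n) for x in range(n))
    return math.atan2(m01, m10)
```

    The angle is then used to rotate the LATCH sampling pattern, which is how the combined detector gains rotation invariance.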
    Shape correspondence analysis based on feature matrix similarity measure
    TIAN Hua, LIU Yunan, GU Jiaying, CHEN Qiao
    2017, 37(6):  1763-1767.  DOI: 10.11772/j.issn.1001-9081.2017.06.1763
    Abstract ( )   PDF (920KB) ( )  
    References | Related Articles | Metrics
    Aiming at the urgent requirement for rapid and efficient 3D model shape analysis and retrieval, a new method of 3D model shape correspondence analysis combining intrinsic heat kernel features and local volume features was proposed. Firstly, the intrinsic shape features of the model were extracted by using Laplacian Eigenmaps and the heat kernel signature. Then, the feature matching matrix was established by combining the stability of the model's heat kernel features with the significance of local space volume. Finally, model registration and shape correspondence analysis were implemented through feature matrix similarity measurement and shortest-path searching. The experimental results show that the proposed method, combining heat kernel distance with a local volume constraint, not only effectively improves the efficiency of model shape matching but also identifies the structural features of models of the same class. The method can further be applied to co-segmentation and shape retrieval of multiple groups of models.
    Hyperspectral remote sensing image classification based on active learning algorithm with unlabeled information
    ZHANG Liang, LUO Yimin, MA Hongchao, ZHANG Fan, HU Chuan
    2017, 37(6):  1768-1771.  DOI: 10.11772/j.issn.1001-9081.2017.06.1768
    Abstract ( )   PDF (666KB) ( )  
    References | Related Articles | Metrics
    In hyperspectral remote sensing image classification, traditional active learning algorithms use only labeled data as training samples and ignore the massive amount of unlabeled data. In order to solve this problem, a new active learning algorithm combined with unlabeled information was proposed. Firstly, through a triple screening of the K nearest neighbor consistency principle, the prediction consistency principle, and the information evaluation of active learning, unlabeled samples carrying a certain amount of information and highly reliable prediction labels were obtained. Then, the prediction labels were added to the labeled sample set as real labels. Finally, an optimized classification model was produced by training on the samples. The experimental results show that, compared with passive learning algorithms and traditional active learning algorithms, the proposed algorithm achieves higher classification accuracy under the same manual labeling cost and shows better parameter sensitivity.
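    A minimal sketch of the K nearest neighbor consistency screening, assuming Euclidean distance and majority voting (the paper's exact screening criteria may differ):

```python
def knn_consistent(unlabeled_x, pred_label, labeled, k=3):
    """Keep an unlabeled sample only if its classifier-predicted label
    agrees with the majority label of its k nearest labeled neighbours.
    labeled: list of (feature_tuple, label) pairs."""
    by_dist = sorted(labeled,
                     key=lambda p: sum((a - b) ** 2
                                       for a, b in zip(p[0], unlabeled_x)))
    votes = [lab for _, lab in by_dist[:k]]
    majority = max(set(votes), key=votes.count)
    return majority == pred_label
```

    Samples passing this test (and the subsequent prediction-consistency and information checks) are promoted into the labeled set with their predicted labels.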
    Robust video watermarking algorithm for HEVC based on intra-frame prediction modes of multi-partitioning
    CAI Chunting, FENG Gui, WANG Chi, HAN Xue
    2017, 37(6):  1772-1776.  DOI: 10.11772/j.issn.1001-9081.2017.06.1772
    Abstract ( )   PDF (778KB) ( )  
    References | Related Articles | Metrics
    Considering the low robustness of existing watermarking algorithms based on the High Efficiency Video Coding (HEVC) standard, a robust video watermarking algorithm for HEVC based on intra-frame prediction modes of multi-partitioning was proposed. Firstly, in order to eliminate intra-frame error propagation after embedding the watermark, embeddable regions were selected and the texture direction was calculated for 4×4 luminance blocks. Secondly, a scheme was proposed in which the 33 angular prediction modes were divided into four pattern sets, recorded as upper horizontal, lower horizontal, upper vertical and lower vertical. Finally, the four pattern sets were mapped to the values of the current and the next watermark bits to be embedded. Once a 4×4 luminance block matched a pattern set, the 33 angular prediction modes were truncated to the current pattern set and the watermark was embedded. The watermark was extracted from the texture direction and the four prediction pattern sets at the decoding side. The experimental results show that the average Peak Signal to Noise Ratio (PSNR) of the proposed algorithm is almost unchanged, and that the algorithm achieves a Bit Error Rate (BER) of 14.1% under re-encoding attacks. Therefore, the proposed algorithm introduces little video distortion and is robust against re-encoding attacks.
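    The mapping between two watermark bits and the four angular-mode pattern sets can be illustrated as below; the concrete mode ranges are assumptions for illustration only, not the paper's actual partition of HEVC's angular modes 2-34:

```python
# Illustrative partition of the 33 angular intra modes into four sets,
# each addressed by a pair of watermark bits.
PATTERN_SETS = {
    (0, 0): range(2, 10),    # "lower horizontal" modes (assumed range)
    (0, 1): range(10, 18),   # "upper horizontal" modes (assumed range)
    (1, 0): range(18, 26),   # "lower vertical" modes (assumed range)
    (1, 1): range(26, 35),   # "upper vertical" modes (assumed range)
}

def set_for_bits(b0, b1):
    """Embedding: restrict the encoder's mode search to the set selected
    by the two watermark bits."""
    return PATTERN_SETS[(b0, b1)]

def extract_bits(mode):
    """Extraction: invert the lookup from the decoded prediction mode."""
    for bits, modes in PATTERN_SETS.items():
        if mode in modes:
            return bits
    return None
```

    Truncating the rate-distortion search to one set is what allows blind extraction: the decoder only needs the chosen mode, not the original video.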
    Moving object detection based on background subtraction for video sequence
    LIU Zhongmin, HE Shengjiao, HU Wenjin, LI Zhanming
    2017, 37(6):  1777-1781.  DOI: 10.11772/j.issn.1001-9081.2017.06.1777
    Abstract ( )   PDF (789KB) ( )  
    References | Related Articles | Metrics
    Moving object detection is an essential step in object recognition, marking and tracking for video sequences, and background subtraction is widely used for it. Concerning the problem that illumination changes, noise and local motion seriously affect the accuracy of moving object detection, a moving object detection algorithm based on background subtraction for video sequences was proposed. Background subtraction was combined with inter-frame difference to estimate the motion state of the pixels of the current frame, and the related pixels in static and motion regions were replaced and updated respectively. The Otsu method was used to extract the moving objects, and mathematical morphology operations were used to eliminate noise and redundant information in the objects. The experimental results show that the proposed algorithm has good visual effect and high accuracy in detecting moving objects in video sequences, and can overcome shortcomings such as local motion and noise.
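    A minimal sketch of the background subtraction plus Otsu binarisation steps (the inter-frame update rule and morphological cleanup are omitted):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                      # background class weight
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0                    # foreground class weight
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def detect_moving(frame, background):
    """Absolute background difference, binarised with Otsu's threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int)).astype(np.uint8)
    return (diff > otsu_threshold(diff)).astype(np.uint8)
```

    In the full algorithm the binary mask would then be cleaned with morphological opening/closing before extracting the object contours.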
    Unstructured road detection based on improved region growing with PCA-SVM rule
    WANG Xinqing, MENG Fanjie, LYU Gaowang, REN Guoting
    2017, 37(6):  1782-1786.  DOI: 10.11772/j.issn.1001-9081.2017.06.1782
    Abstract ( )   PDF (861KB) ( )  
    References | Related Articles | Metrics
    Unstructured road detection for intelligent vehicles involves many characteristic parameters, which makes feature fusion and recognition difficult and computation complex; moreover, the similarity between some road areas and the background can lead to misidentification. In order to solve these problems, an unstructured road detection method based on improved region growing with a Principal Component Analysis-Support Vector Machine (PCA-SVM) rule was proposed. Firstly, complex characteristic parameters such as the color and texture of the unstructured road were extracted, and PCA was used to reduce the dimension of the extracted characteristic information. An SVM trained on the principal characteristics reduced by PCA served as the classifier of complex road cells. Prior knowledge such as the location of the road, the initial cell and the characteristics of road boundary cells was used to improve the region growing method, and the classifier was used to decide the direction of growth during cell growing so as to eliminate misjudged areas. Test results on actual roads show that the proposed method has good adaptability and robustness and can identify unstructured road areas effectively. Comparison results show that, relative to the traditional algorithm, the proposed method shortens the computation time by more than half by cutting the characteristics from ten dimensions to three while preserving accuracy, and eliminates the roughly 10% of areas that the traditional algorithm misjudges because of the similarity between road and background. The proposed method provides a feasible way to shorten recognition time and eliminate background interference in vision-based local path planning and navigation in wild environments.
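    The PCA dimension reduction feeding the SVM can be sketched with an SVD-based implementation (the SVM training itself, and the ten specific road features, are omitted):

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA via SVD of the centred data matrix.
    Returns (mean, components); rows of components are orthonormal
    principal directions, ordered by explained variance."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_transform(X, mean, components):
    """Project samples onto the retained principal directions."""
    return (X - mean) @ components.T

# E.g. reduce 10-D road-cell feature vectors to 3-D before the SVM,
# as the abstract describes (ten dimensions cut to three).
X = np.random.RandomState(0).randn(20, 10)
mean, comps = pca_fit(X, 3)
Z = pca_transform(X, mean, comps)   # shape (20, 3)
```

    Training the SVM on `Z` instead of `X` is what yields the reported halving of computation time.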
    Building extraction from high-resolution remotely sensed imagery based on neighborhood total variation and potential histogram function
    SHI Wenzao, LIU Jinqing
    2017, 37(6):  1787-1792.  DOI: 10.11772/j.issn.1001-9081.2017.06.1787
    Abstract ( )   PDF (1093KB) ( )  
    References | Related Articles | Metrics
    Concerning the low accuracy and high data requirements of existing methods for building identification and extraction from high-resolution remotely sensed imagery, a new method based on Neighborhood Total Variation (NTV) and Potential Histogram Function (PHF) was proposed. Firstly, the value of the weighted NTV likelihood function was calculated for each pixel of the imagery, segmentation was performed with a region growing method, and candidate buildings were selected from the segmentation results under constraints on rectangularity and aspect ratio. Then, shadows were detected automatically and processed with morphology operations. The buildings were extracted by computing the adjacency relationship between the processed shadows and the candidate buildings, and the building boundaries were fitted with minimum enclosing rectangles. To verify the validity of the proposed method, nine representative sub-images were chosen from PLEIADES images covering Shenzhen for the experiment. The experimental results show that the average precision and recall of the proposed method are 97.71% and 84.21% in the object-based evaluation, and that its overall F1 performance is more than 10% higher than that of two other building extraction methods based on level sets and color invariant features.
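    The rectangularity and aspect ratio screening of segmented regions can be illustrated as below; the thresholds are hypothetical values chosen for the sketch, not the paper's:

```python
def candidate_building(region_area, bbox_w, bbox_h,
                       min_rect=0.7, max_aspect=4.0):
    """Keep a segmented region as a building candidate if it fills enough
    of its bounding box (rectangularity) and is not too elongated
    (aspect ratio). Thresholds are illustrative assumptions."""
    rect = region_area / (bbox_w * bbox_h)         # rectangular degree
    aspect = max(bbox_w, bbox_h) / min(bbox_w, bbox_h)
    return rect >= min_rect and aspect <= max_aspect
```

    Candidates passing this filter are then cross-checked against adjacent detected shadows to confirm them as buildings.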
    Orthogonalization of regressors in functional magnetic resonance imaging based on general linear model
    DAI Hepu, LIU Gang, HE Yanyan
    2017, 37(6):  1793-1797.  DOI: 10.11772/j.issn.1001-9081.2017.06.1793
    Abstract ( )   PDF (904KB) ( )  
    References | Related Articles | Metrics
    Concerning the collinearity problem between regressors in functional Magnetic Resonance Imaging (fMRI) models, a method of orthogonalization was proposed. Firstly, the regressors of interest and the regressors to be orthogonalized were determined. Then, the part correlated with the regressors of interest was removed from the regressors to be orthogonalized, and the collinear regressors of the model were orthogonally decomposed into independent parts to eliminate the effect of collinearity. The influence of orthogonalization on the General Linear Model (GLM) was also discussed and analyzed. Finally, experiments were carried out on synthetic data and with a popular fMRI data analysis software package, the Functional magnetic resonance imaging of the brain Software Library (FSL). The experimental results show that orthogonalization can eliminate the collinearity in the model and improve the significance of the regressors of interest, achieving accurate brain functional localization. The proposed method can be used in basic research on and clinical treatment of the brain.
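    For a single pair of regressors, removing the correlated part amounts to subtracting a projection, a standard Gram-Schmidt step:

```python
import numpy as np

def orthogonalize(x, y):
    """Return x with its component along y removed:
    x_perp = x - ((x.y) / (y.y)) * y, so that x_perp is orthogonal to the
    regressor of interest y and no longer shares variance with it."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return x - (x @ y) / (y @ y) * y
```

    After this step the GLM attributes all shared variance to the regressor of interest `y`, which is exactly why orthogonalization changes parameter estimates only for the orthogonalized regressor.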
    High-precision calibration and measurement method based on stereo vision
    KONG Yingqiao, ZHAO Jiankang, XIA Xuan
    2017, 37(6):  1798-1802.  DOI: 10.11772/j.issn.1001-9081.2017.06.1798
    Abstract ( )   PDF (757KB) ( )  
    References | Related Articles | Metrics
    In a stereo vision measurement system, distortion caused by the optical system makes the imaging of the target deviate from the theoretical imaging point, which results in measurement error. In order to improve the accuracy of the measuring system, a new measurement method based on stereo vision was proposed. Firstly, a quartic polynomial over the whole imaging plane was fitted from the pixel resolution at each corner point of the calibration board, with the polynomial coefficients proportional to the distance from the object to the camera. Then, the longitudinal distance of the detected object was measured by using the distance measuring principle of the binocular model. Finally, based on the obtained polynomial, a monocular camera was used to measure the transverse dimension of the detected object. The experimental results show that when the distance between the object and the camera is within 5 m, the longitudinal distance error of the proposed method is less than 5%; when the object is 1 m from the camera, the measurement error of the transverse width is within 0.5 mm, approaching the theoretical maximum resolution.
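    A one-dimensional sketch of the polynomial-resolution idea, assuming hypothetical calibration samples (the paper fits a quartic over the whole imaging plane; here it is reduced to the distance axis for illustration):

```python
import numpy as np

# Hypothetical calibration data: object distance (m) -> pixel resolution
# (mm per pixel), as would be measured at calibration-board corner points.
distances = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
resolutions = np.array([0.5, 1.0, 1.5, 2.1, 2.6])

# Fit a quartic polynomial resolution model, as in the paper
# (reduced here to one dimension).
coeffs = np.polyfit(distances, resolutions, 4)

def transverse_width(pixel_count, distance):
    """Physical width of an object spanning pixel_count pixels, given its
    longitudinal distance (obtained from the binocular measurement)."""
    return pixel_count * np.polyval(coeffs, distance)
```

    The binocular pair supplies the distance; the fitted polynomial then converts a monocular pixel span into a metric width.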
    Obfuscating algorithm based on congruence equation and improved flat control flow
    WANG Yan, HUANG Zhangjin, GU Naijie
    2017, 37(6):  1803-1807.  DOI: 10.11772/j.issn.1001-9081.2017.06.1803
    Abstract ( )   PDF (720KB) ( )  
    References | Related Articles | Metrics
    Aiming at the simple obfuscation results of existing control flow obfuscation algorithms, an obfuscation algorithm based on congruence equations and improved control flow flattening was presented. First, a kind of opaque predicate for use in the basic blocks of the source code was created using secret keys and a group of congruence equations. Then, a new algorithm for creating N-state opaque predicates was presented based on the Logistic chaotic map and applied to improve the existing control flow flattening algorithm. Finally, the two proposed algorithms were combined to obfuscate the source code, increasing the complexity of the flattened control flow and making it more difficult to crack. Compared with the control flow flattening algorithm based on chaotic opaque predicates, code obfuscated by the proposed algorithm increases the tamper-proof attack time by more than 22% on average and the total cyclomatic complexity by more than 34% on average. The experimental results show that the proposed algorithm guarantees the correctness of the execution results of the obfuscated program and yields high cyclomatic complexity, so it can effectively resist static and dynamic attacks.
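    A classic congruence-style always-true opaque predicate illustrates the idea; the paper constructs its predicates from secret keys and a system of congruence equations, which is not reproduced here:

```python
def opaque_true(x):
    """Always-true opaque predicate based on a simple congruence:
    x*(x+1) is a product of two consecutive integers, hence even, so
    x*(x+1) mod 2 == 0 holds for every integer x. A static analyzer that
    cannot prove this treats both branches guarded by it as reachable."""
    return (x * (x + 1)) % 2 == 0
```

    In flattened control flow, such predicates guard dead branches and dispatcher states, inflating the cyclomatic complexity that the abstract measures.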
    Partition configuration and initialization in integrated modular avionics
    WANG Yunsheng, LEI Hang
    2017, 37(6):  1808-1813.  DOI: 10.11772/j.issn.1001-9081.2017.06.1808
    Abstract ( )   PDF (1045KB) ( )  
    References | Related Articles | Metrics
    Regarding resource allocation and partition starting time in Integrated Modular Avionics (IMA), a Unified Modeling Language (UML) model of partition configuration and initialization was proposed based on a case study of the VxWorks 653 partition operating system. The proposed model, including a class diagram and an initialization sequence diagram for partitions, was established to facilitate analysis of the mechanism of partition configuration and starting/initialization. The contents and function of partition configuration in the processes of resource allocation, operating system compilation and partition initialization were discussed in detail, as well as the differences between the "cold start" and "warm start" modes. A platform was set up for testing the startup times of the two startup modes; the test results showed that the cold start time was 148 ms and the warm start time was 8.5 ms. Furthermore, the applicable scenarios for the cold start and warm start modes were discussed, and policies of partition configuration and application software initialization were proposed based on the starting time. The partition start mode and partition initialization time should be fully considered when establishing the partition main time frame and identifying the health management policy. The designed policies are applicable to other partitioned system designs in high-security applications.
    Application of greedy search algorithm in satellite scheduling
    SHAN Guohou, LIU Jian, SHUI Yan, LI Lihua, YU Guangye
    2017, 37(6):  1814-1819.  DOI: 10.11772/j.issn.1001-9081.2017.06.1814
    Abstract ( )   PDF (916KB) ( )  
    References | Related Articles | Metrics
    In order to solve the problem that observation image quality and profit are low when satellite scheduling relies on lagged weather-forecast cloud information, a mathematical model capturing real-time cloud distribution was proposed, and an Agile Earth Observation Satellite (AEOS) scheduling model was built based on the real-time cloud information. Since the Greedy Search Algorithm (GSA) performs local optimization and can give full consideration to constraints such as cloud cover over satellite observations and limited storage resources, its application to the satellite scheduling problem was studied. Firstly, the cloud coverage of each observation task was considered in priority order by GSA: the image quality value of a task was calculated according to the size of its cloud coverage, and tasks were selected by the ranking of image quality values. Secondly, the task with the maximum profit was selected according to task size, deadline and satellite storage resources. Finally, satellite observation and task transmission were completed according to their ability to improve profit. The simulation experiments show that, with 100 tasks, the task profit of satellite scheduling using GSA was improved by 14.82% and 10.32% compared with the Dynamic Programming Algorithm (DPA) and the Local Search Algorithm (LSA) respectively. Besides, the image quality obtained with GSA is higher than that with DPA and LSA under the same circumstances. The experimental results show that GSA can effectively improve the image observation quality and task observation profit of satellite scheduling.
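    The greedy selection step can be sketched as profit-density ordering under a storage budget, with time windows and cloud cover abstracted away (so this is an illustrative simplification, not the paper's full scheduler):

```python
def greedy_schedule(tasks, capacity):
    """Greedily pick observation tasks by profit per unit of storage until
    the satellite's storage capacity is exhausted.
    tasks: list of (name, storage_size, profit) tuples."""
    chosen, used, profit = [], 0, 0
    for name, size, p in sorted(tasks, key=lambda t: t[2] / t[1],
                                reverse=True):
        if used + size <= capacity:   # skip tasks that no longer fit
            chosen.append(name)
            used += size
            profit += p
    return chosen, profit
```

    The real scheduler additionally reorders candidates by cloud-dependent image quality and deadlines before applying this profit-driven pass.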
    Application of improved principal component analysis-Bayes discriminant analysis method to petroleum drilling safety evaluation
    REN Dongmei, ZHANG Yuyang, DONG Xinling
    2017, 37(6):  1820-1824.  DOI: 10.11772/j.issn.1001-9081.2017.06.1820
    Abstract ( )   PDF (753KB) ( )  
    References | Related Articles | Metrics
    Focusing on the issue that Principal Component Analysis-Bayes Discriminant Analysis (PCA-BDA) only supports safety evaluation but cannot detect dangerous factors, an improved PCA-BDA algorithm introducing the concept of attribute importance degree was proposed and applied to petroleum drilling safety evaluation. Firstly, the safety ranking of each record was evaluated by the original PCA-BDA algorithm. Secondly, the attribute importance was computed from the eigenvector matrix in PCA, the classification function coefficients in BDA, and the weights of the safety rankings. Finally, the attributes were regulated and controlled with reference to their importance. In comparison experiments with Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE), the accuracy of the improved PCA-BDA reached 96.7%, obviously higher than that of AHP and FCE. In the simulation experiment, more than 70% of the safety rankings of petroleum drilling records were improved by regulating the 3 most important attributes, while the safety rankings did not change when the 3 least important attributes were adjusted. The experimental results show that the improved PCA-BDA can accurately accomplish safety evaluation and find the critical attributes, making petroleum drilling safety management more targeted.
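    One plausible composition of the three quantities named in the abstract is sketched below; treating importance as |V|·|C|·w is an assumption made for illustration, not the paper's exact formula:

```python
import numpy as np

def attribute_importance(eigvecs, bda_coeffs, rank_weights):
    """Importance of each original attribute, sketched as |V| @ |C| @ w:
    eigvecs (attributes x components) maps attributes to principal
    components, bda_coeffs (components x classes) maps components to
    discriminant scores, and rank_weights weights the safety rankings.
    Magnitudes are used so contributions of opposite sign do not cancel."""
    return (np.abs(np.asarray(eigvecs, float))
            @ np.abs(np.asarray(bda_coeffs, float))
            @ np.asarray(rank_weights, float))
```

    An attribute with zero loadings on every retained component then gets zero importance, matching the intuition that adjusting it cannot move the safety ranking.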
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn