
Table of Contents

    10 February 2017, Volume 37 Issue 2
    Helius: a lightweight big data processing system
    DING Mengsu, CHEN Shimin
    2017, 37(2):  305-310.  DOI: 10.11772/j.issn.1001-9081.2017.02.0305

    Concerning the limitations of Spark, including immutable datasets and the significant costs of code execution, memory management and data serialization/deserialization caused by the Java Virtual Machine (JVM) runtime environment, a lightweight big data processing system named Helius was implemented in C/C++. Helius supports the basic operations of Spark while allowing a data set to be modified as a whole. In Helius, C/C++ is utilized to optimize memory management and network communication, and a stateless worker mechanism is utilized to simplify the fault tolerance and recovery process of the distributed computing platform. The experimental results showed that in 5 iterations, the running time of Helius was only 25.12% to 53.14% of that of Spark when running PageRank iterative jobs, and only 57.37% of that of Spark when processing TPC-H Q6. On one iteration of PageRank, the incoming and outgoing IP traffic of the master node in Helius was about 40% and 15% of that in Spark, and the total memory consumed on the worker node in Helius was only 25% of that in Spark. Compared with Spark, Helius has the advantages of saving memory, eliminating the need for serialization and deserialization, reducing network interaction and simplifying fault tolerance.

    Spatio-temporal index for massive traffic data based on HBase
    FANG Jun, LI Dong, GUO Huiyun, WANG Jiayi
    2017, 37(2):  311-315.  DOI: 10.11772/j.issn.1001-9081.2017.02.0311

    Focusing on the issue that HBase storage without a spatio-temporal index degrades traffic data query performance, several row-key-based HBase spatio-temporal indexes were proposed for massive traffic data. Firstly, a Geohash-based dimensionality reduction method was used to convert two-dimensional spatial position data into a one-dimensional code, which was then combined with the temporal dimension. Secondly, four index models were put forward based on the combination order, and the structures of the models and the conditions under which they suit traffic data queries were discussed. Finally, an index creation algorithm and a traffic data query algorithm were proposed. Experimental results show that the proposed HBase spatio-temporal index structures can effectively enhance traffic data query performance. In addition, the query performance of the four spatio-temporal index structures was compared for different data sizes, query radii and query time ranges, which verified the scenarios each index structure is suited to in traffic data queries.
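
    The sketch below illustrates the row-key idea only, not the paper's code: a standard Geohash prefix is combined with a time suffix ("space first, time second" is one of the four combination orders mentioned above). The precision, separator and timestamp format are illustrative assumptions.

```python
# Illustrative sketch: Geohash-based dimensionality reduction combined with a time suffix
# to form an HBase row key. Precision, separator and combination order are assumptions.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    """Interleave longitude/latitude bits and base32-encode them (standard Geohash)."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    use_lon, bits, ch, out = True, 0, 0, []
    while len(out) < precision:
        rng, val = (lon_rng, lon) if use_lon else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2.0
        if val >= mid:
            ch = (ch << 1) | 1
            rng[0] = mid
        else:
            ch = ch << 1
            rng[1] = mid
        use_lon = not use_lon
        bits += 1
        if bits == 5:
            out.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(out)

def spatio_temporal_row_key(lat, lon, epoch_minutes):
    """One of the four possible combination orders: spatial code first, time second."""
    return "%s_%012d" % (geohash_encode(lat, lon), epoch_minutes)

if __name__ == "__main__":
    print(spatio_temporal_row_key(39.9042, 116.4074, 24726180))
```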

    Frequent sequence pattern mining with differential privacy
    LI Yanhui, LIU Hao, YUAN Ye, WANG Guoren
    2017, 37(2):  316-321.  DOI: 10.11772/j.issn.1001-9081.2017.02.0316

    Focusing on the issue that releasing frequent sequence patterns and their true supports may reveal individuals' privacy when the data set contains sensitive information, a Differentially Private Frequent Sequence Mining (DP-FSM) algorithm was proposed. The downward closure property was used to generate a candidate set of sequence patterns, a smart-truncating-based technique was used to sample frequent patterns from the candidate set, and the geometric mechanism was utilized to perturb the true support of each sampled pattern. In addition, to improve the usability of the results, a threshold modification method was proposed to reduce the truncation error and propagation error in the mining process. The theoretical analysis shows that the proposed method is ε-differentially private. The experimental results demonstrate that the proposed method has lower False Negative Rate (FNR) and Relative Support Error (RSE) than those of the comparison algorithm PFS2, thus effectively improving the accuracy of the mining results.
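
    The following is a minimal sketch of the geometric-mechanism perturbation step only, assuming sensitivity 1 and the standard sampling-by-difference trick; it is not the authors' implementation of DP-FSM.

```python
# Minimal sketch of the geometric mechanism used to perturb a pattern's true support.
import math
import random

def two_sided_geometric(epsilon, sensitivity=1):
    """Noise with P(k) proportional to alpha^|k|, where alpha = exp(-epsilon/sensitivity)."""
    alpha = math.exp(-epsilon / sensitivity)
    def geometric():
        # floor(log U / log alpha) is geometric on {0, 1, ...} with success prob 1 - alpha
        u = 1.0 - random.random()          # uniform in (0, 1]
        return int(math.floor(math.log(u) / math.log(alpha)))
    # the difference of two i.i.d. geometric variables is two-sided geometric
    return geometric() - geometric()

def perturb_support(true_support, epsilon):
    """Noisy support released for one sampled frequent sequence pattern."""
    return max(0, true_support + two_sided_geometric(epsilon))

if __name__ == "__main__":
    random.seed(7)
    print([perturb_support(120, 1.0) for _ in range(5)])
```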

    Domain-driven high utility co-location pattern mining method
    JIANG Wanguo, WANG Lizhen, FANG Yuan, CHEN Hongmei
    2017, 37(2):  322-328.  DOI: 10.11772/j.issn.1001-9081.2017.02.0322

    A spatial co-location pattern represents a subset of spatial features whose instances are frequently located together in spatial neighborhoods. The existing interestingness metrics for spatial co-location pattern mining take no account of the differences between features or the diversity among instances of the same feature. In addition, with the traditional data-driven spatial co-location pattern mining method, the mining results often contain a lot of useless or uninteresting patterns. In view of the above problems, firstly, a more general study object, the spatial instance with a utility value, was proposed, and the Utility Participation Index (UPI) was defined as the new interestingness metric for spatial high utility co-location patterns. Secondly, domain knowledge was formalized into three kinds of semantic rules and applied to the mining process, and a new domain-driven iterative mining framework was put forward. Finally, through extensive experiments, the differences between the results mined with different interestingness metrics were compared in terms of utility ratio and frequency, as well as the changes in the mining results after taking domain knowledge into account. Experimental results show that the proposed UPI metric is a more reasonable measure that considers both frequency and utility, and the domain-driven mining method can effectively find the co-location patterns that users are really interested in.
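
    The sketch below shows one plausible reading of a utility-weighted participation index: the classic participation index with instance counts replaced by instance utilities, taking the minimum ratio over the pattern's features. The exact UPI definition in the paper may differ; the data layout and the "minimum over features" rule are assumptions.

```python
# Illustrative sketch only: a utility-weighted analogue of the participation index.
def utility_participation_index(pattern_features, all_instances, participating_instances):
    """
    all_instances: dict feature -> {instance_id: utility} for every instance of the feature.
    participating_instances: dict feature -> set of instance_ids that appear in row
    instances of the co-location pattern.
    """
    ratios = []
    for f in pattern_features:
        total = sum(all_instances[f].values())
        part = sum(u for iid, u in all_instances[f].items()
                   if iid in participating_instances.get(f, set()))
        ratios.append(part / total if total > 0 else 0.0)
    return min(ratios) if ratios else 0.0

if __name__ == "__main__":
    all_inst = {"A": {"A1": 5.0, "A2": 1.0}, "B": {"B1": 2.0, "B2": 2.0}}
    part = {"A": {"A1"}, "B": {"B1", "B2"}}
    print(utility_participation_index(["A", "B"], all_inst, part))  # min(5/6, 4/4)
```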

    Top-Rank-k frequent patterns mining algorithm based on TCM prescription database
    QIN Qibing, TAN Long
    2017, 37(2):  329-334.  DOI: 10.11772/j.issn.1001-9081.2017.02.0329

    The dependency on empirical parameters in frequent pattern mining of Traditional Chinese Medicine (TCM) prescriptions should be reduced to improve the accuracy of mining results. Aiming at the characteristics of TCM prescription data, an efficient Top-Rank-k frequent pattern mining algorithm based on a Weighted Undirected Graph (WUG) was proposed. The new algorithm can directly mine frequent k-itemsets (k≥3) without mining frequent 1-itemsets and 2-itemsets, and then quickly backtrack from the frequent itemsets of core drug combinations to the corresponding prescriptions. Besides, the compression mechanism of the Dynamic Bit Vector (DBV) was used to store the edge weights of the undirected graph to improve the spatial storage efficiency of the algorithm. Experiments were conducted on TCM prescription datasets, real datasets (Chess, Pumsb and Retail) and synthetic datasets (T10I4D100K and Test2K50KD1). The experimental results show that compared with iNTK (improved Node-list Top-Rank-K) and BTK (B-list Top-Rank-K), the proposed algorithm has better performance in terms of time and space, and it can be applied to other types of data sets.

    Diversified top-k shapelets transform for time series classification
    SUN Qifa, YAN Qiuyan, YAN Xinming
    2017, 37(2):  335-340.  DOI: 10.11772/j.issn.1001-9081.2017.02.0335

    Focusing on the issue that shapelet candidates can be very similar in time series classification by shapelet transform, a diversified top-k shapelet transform method named DivTopKShapelet was proposed. In DivTopKShapelet, the diversified top-k query method was used to filter similar shapelets and select the k most representative ones. Then the optimal shapelets were used to transform the data, so as to improve the accuracy and time efficiency of typical time series classification methods. Experimental results show that compared with the clustering based shapelet classification method (ClusterShapelet) and the coverage based shapelet classification method (ShapeletSelection), DivTopKShapelet can not only improve traditional time series classification methods but also increase the accuracy by up to 48.43% and 32.61% respectively; at the same time, the proposed method improves computational efficiency on 15 data sets by a factor of at least 1.09 and at most 287.8.
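
    A minimal sketch of the diversification idea, not the paper's DivTopKShapelet implementation: candidates are scanned in order of quality and a candidate is kept only if it is sufficiently different from those already chosen. The length-normalized Euclidean distance and the threshold are illustrative assumptions.

```python
# Greedily select k high-quality shapelets while filtering out near-duplicates.
import math

def _distance(s1, s2):
    """Length-normalized Euclidean distance between two equal-length shapelets."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)) / len(s1))

def diversified_top_k(candidates, k, min_distance=0.5):
    """candidates: list of (quality, shapelet); returns up to k diverse shapelets."""
    selected = []
    for quality, shapelet in sorted(candidates, key=lambda c: c[0], reverse=True):
        if all(_distance(shapelet, s) >= min_distance for _, s in selected):
            selected.append((quality, shapelet))
        if len(selected) == k:
            break
    return selected

if __name__ == "__main__":
    cands = [(0.90, [1.0, 2.0, 3.0]), (0.85, [1.1, 2.1, 3.1]), (0.60, [3.0, 1.0, 0.0])]
    # the second candidate is dropped as a near-duplicate of the first
    print(diversified_top_k(cands, k=2))
```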

    Probabilistic bichromatic reverse-kNN query on road network
    XU Wei, LI Wengen, ZHANG Yichao, GUAN Jihong
    2017, 37(2):  341-346.  DOI: 10.11772/j.issn.1001-9081.2017.02.0341

    Considering the road network constraint and the uncertainty of moving object locations, a new reverse-kNN query on road networks termed Probabilistic Bichromatic Reverse-kNN (PBRkNN) was proposed, which finds the set of uncertain points whose kNN contains the given query point with probability greater than a specified threshold. Firstly, a basic algorithm called Probabilistic Eager (PE) was proposed, which uses the Dijkstra algorithm for pruning. Then, the Pre-compute Probabilistic Eager (PPE) algorithm, which pre-computes the kNN for each point, was proposed to improve query efficiency. In addition, to further improve query efficiency, the Pre-compute Probabilistic Eager External (PPEE) algorithm, which uses a grid index to accelerate range queries, was proposed. The experimental results on the road networks of Beijing and California show that the proposed pre-computation strategies can help to efficiently process probabilistic bichromatic reverse-kNN queries on road networks.

    Query probability-based location privacy protection approach
    ZHAO Dapeng, SONG Guangxuan, JIN Yuanyuan, WANG Xiaoling
    2017, 37(2):  347-351.  DOI: 10.11772/j.issn.1001-9081.2017.02.0347

    The existing privacy protection technologies rarely consider query probability, map data, semantic information of Points of Interest (POI) and other side information, so an attacker can deduce the privacy information of a user by combining the side information with the location data. To resolve this problem, a new algorithm named ARB (Anonymous Region Building) was proposed to protect the location privacy of users. Firstly, the space was divided into grid cells, and historical statistics were utilized to obtain the query probability of each cell. Then, the anonymous region for each user was built based on the query probabilities of the corresponding cells to protect the user's location privacy. Finally, location information entropy was used as the measure of privacy protection performance, and the performance of the proposed method was verified by comparison with two existing methods on a real data set. The experimental results show that ARB achieves a better privacy protection effect with lower computational complexity.
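
    A minimal sketch of the evaluation metric only, assuming that the anonymous region is a set of grid cells and that location information entropy is computed from the cells' normalized query probabilities; it is not the ARB construction itself.

```python
# Location information entropy of a candidate anonymous region built from grid cells.
import math

def region_entropy(cell_query_probs):
    """cell_query_probs: historical query probabilities of the cells inside the region."""
    total = sum(cell_query_probs)
    if total == 0:
        return 0.0
    entropy = 0.0
    for p in cell_query_probs:
        if p > 0:
            q = p / total          # normalized probability of the cell within the region
            entropy -= q * math.log2(q)
    return entropy                 # maximal when cells are equally likely to be queried

if __name__ == "__main__":
    print(region_entropy([0.01, 0.01, 0.01, 0.01]))      # 2.0 bits, well-balanced region
    print(region_entropy([0.09, 0.005, 0.003, 0.002]))   # much lower entropy
```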

    KSRG: an efficient optimal route query algorithm for multi-keyword coverage
    JIN Pengfei, NIU Baoning, ZHANG Xingzhong
    2017, 37(2):  352-359.  DOI: 10.11772/j.issn.1001-9081.2017.02.0352

    To alleviate the high complexity and poor scalability of keyword-aware optimal route query algorithms on large-scale graphs or with many query keywords, an effective algorithm based on keyword sequence route generation was proposed. The algorithm satisfies the coverage of query keywords first, and constructs candidate paths efficiently through path expansion guided by the keyword coverage property rather than aimless adjacent-edge expansion. With the aid of a scaling method and ineffective route pruning, the search space was reduced from factorial order to polynomial order, which further reduces the complexity and enhances the scalability. Experiments conducted on four graph datasets verified the accuracy of the proposed algorithm and its improvement in efficiency and scalability.

    Construction and inference of latent variable model oriented to user preference discovery
    GAO Yan, YUE Kun, WU Hao, FU Xiaodong, LIU Weiyi
    2017, 37(2):  360-366.  DOI: 10.11772/j.issn.1001-9081.2017.02.0360
    Large amounts of user rating data, involving plentiful user opinions and preferences, are produced in e-commerce applications. A construction and inference method for a latent variable model (i.e., a Bayesian network with a latent variable) oriented to user preference discovery from rating data was proposed to infer user preferences accurately. First, the unobserved values in the rating data were filled by the Biased Matrix Factorization (BMF) model to address the sparseness problem of rating data. Second, the latent variable was used to represent user preference, and the construction of the latent variable model based on Mutual Information (MI), maximal semi-cliques and Expectation Maximization (EM) was given. Finally, a Gibbs sampling based algorithm for probabilistic inference of the latent variable model and user preference discovery was given. The experimental results demonstrate that, compared with collaborative filtering, the latent variable model is more efficient in describing the dependence relationships and the corresponding uncertainties of related attributes among rating data, and can infer user preferences more accurately.
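
    The sketch below covers only the first step, filling unobserved ratings with Biased Matrix Factorization trained by stochastic gradient descent; the prediction rule mu + b_u + b_i + p_u·q_i is standard BMF, while the hyperparameters and initialization are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of Biased Matrix Factorization (BMF) for filling unobserved ratings.
import random

def train_bmf(ratings, n_users, n_items, k=10, lr=0.01, reg=0.02, epochs=50):
    """ratings: list of (user, item, value); returns a predict(u, i) function."""
    mu = sum(r for _, _, r in ratings) / len(ratings)
    bu = [0.0] * n_users
    bi = [0.0] * n_items
    p = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = mu + bu[u] + bi[i] + sum(p[u][f] * q[i][f] for f in range(k))
            err = r - pred
            bu[u] += lr * (err - reg * bu[u])
            bi[i] += lr * (err - reg * bi[i])
            for f in range(k):
                puf, qif = p[u][f], q[i][f]
                p[u][f] += lr * (err * qif - reg * puf)
                q[i][f] += lr * (err * puf - reg * qif)
    return lambda u, i: mu + bu[u] + bi[i] + sum(p[u][f] * q[i][f] for f in range(k))

if __name__ == "__main__":
    data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
    predict = train_bmf(data, n_users=3, n_items=2)
    print(round(predict(2, 0), 2))  # filled value for an unobserved (user, item) pair
```
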
    Fast influence maximization algorithm in social network under budget control
    LIU Yuanying, GUO Jingfeng, WEI Lidong, HU Xinzhuan
    2017, 37(2):  367-372.  DOI: 10.11772/j.issn.1001-9081.2017.02.0367
    Concerning the high time complexity of influence maximization under budget control, a fast influence maximization algorithm, namely BCIM, was proposed. Firstly, a new information dissemination model which propagates from the initial nodes multiple times was proposed. Secondly, nodes with high influence ranking values were selected as candidate seeds, and the computation of a node's influence scope was reduced based on short-distance influence. Lastly, at most one seed was selected from each set of candidate seeds by using dynamic programming. The experimental results show that, compared with Random (random algorithm), Greedy_MII (greedy algorithm based on the maximum influence increment) and Greedy_MICR (greedy algorithm based on the maximum ratio of influence increment to cost), the influence scope of BCIM is close to or slightly better than those of Greedy_MICR and Greedy_MII, and much better than that of Random; the quality of the seed sets of BCIM, Greedy_MICR and Greedy_MII is similar, and much better than that of Random; the running time of BCIM is several times that of Random, while the running times of both greedy algorithms are hundreds of times that of BCIM. In summary, the BCIM algorithm can find a more effective seed set in a short time.
    Secure cloud storage method based on three-dimensional stereo model
    LYU Hongwu, CAI Yaoqi, WANG Huiqiang, GUO Fangfang
    2017, 37(2):  373-377.  DOI: 10.11772/j.issn.1001-9081.2017.02.0373
    Focusing on data loss and unavailability in cloud storage, a secure cloud storage method based on a Three-Dimensional model (TD-model) was proposed. Firstly, the base nodes of the TD-model were formed by encoding the data, and were stored uniformly on two opposite sides of the TD-model. Secondly, normal nodes were formed on each side by mathematical computation, and the nodes of each side were kept connected. Finally, high data availability was achieved through the correlation of all six sides. The experimental results show that compared with traditional replica storage methods, the secure cloud storage method based on the TD-model enhances data recovery efficiency and ensures data integrity. In addition, the proposed method overcomes the drawback of traditional methods that only single-node failures can be recovered.
    Trustworthiness attestation scheme for virtual machine based on certificateless ring signature
    RONG Xing, ZHAO Yong
    2017, 37(2):  378-382.  DOI: 10.11772/j.issn.1001-9081.2017.02.0378
    Due to the complexity and dynamic behavior of virtual environments, the efficiency of traditional methods for proving the secure state of virtual machines is low. Ring signatures have high computational efficiency and strong anonymity, and the key management problem can be solved by using a certificateless public key system. A trustworthiness attestation scheme which adopts a certificateless ring signature scheme for Virtual Machines (VM) was put forward. After the trusted physical environment of the virtual platform is validated by the Private Key Generator (PKG), the virtual Trusted Platform Module (vTPM) signature key is generated by the PKG and the vTPM manager using a certificateless signature algorithm, and the ring signature is employed by the VM to perform remote attestation and hide the attestor's identity among the ring members, which realizes attestation of the VM's identity and state while preserving anonymity. After the proof preparation is completed, the VM does not need to generate virtual Attestation Identity Key (vAIK) certificates repeatedly during attestation and migration, thus greatly improving the efficiency of attestation. Consequently, the proposed scheme has strong security and anonymity, and it is suitable for cloud computing environments with huge numbers of VMs.
    Security resource allocation and service scheduling framework for multiple tenants in IaaS
    YUAN Zhongliang, CHEN Xingshu, WANG Yitong
    2017, 37(2):  383-387.  DOI: 10.11772/j.issn.1001-9081.2017.02.0383
    In an Infrastructure-as-a-Service (IaaS) environment, the limited security service resources and the uneven allocation of security resources among multiple tenants cause low efficiency of security service scheduling. To resolve this problem, a tenant security service scheduling framework was designed. Based on the minimum fairness algorithm and the scheduling idea of Fair Scheduler, minimum shared resources and resource demand attributes were set for each tenant. Then, a security service resource allocation algorithm was used to satisfy tenants' resource demands as fairly as possible while guaranteeing each tenant's minimum shared resources. Finally, the tenant security service scheduling framework was implemented by combining a job scheduling algorithm within each tenant and a resource preemption algorithm among tenants. The experimental results show that under the condition of random resource allocation, the proposed security service resource allocation algorithm is better than traditional algorithms in terms of resource utilization and operation efficiency, and the security service scheduling framework can resolve the uneven allocation of security resources among multiple tenants.
    Virtual machine file integrity monitoring based on hardware virtualization
    ZHAO Cheng, CHEN Xingshu, JIN Xin
    2017, 37(2):  388-391.  DOI: 10.11772/j.issn.1001-9081.2017.02.0388
    In order to protect the integrity of Virtual Machine (VM) sensitive files and make up for shortcomings such as the high performance overhead, low compatibility and poor flexibility of out-of-box monitoring based on instruction monitoring, OFM (Out-of-box File Monitoring) based on hardware virtualization was proposed. In OFM, the Kernel-based Virtual Machine (KVM) was used as the virtual machine monitor to dynamically configure sensitive file access control strategies in real time; in addition, OFM modifies the system call table entries related to file operations of the virtual machine to determine the legitimacy of the files operated on by VM processes, and deals with illegal processes. Unixbench was deployed in a virtual machine to test the performance of OFM. The experimental results demonstrate that OFM outperforms instruction monitoring methods in file monitoring and has no effect on other types of system calls of the virtual machine. Meanwhile, OFM can effectively monitor the integrity of virtual machine files and provides better compatibility and flexibility with lower performance loss.
    File recovery based on SQLite content carving
    MA Qingjie, LI Binglong, WEI Lina
    2017, 37(2):  392-396.  DOI: 10.11772/j.issn.1001-9081.2017.02.0392
    SQLite is used by a number of Instant Messaging (IM) applications to store history data. In the process of IM forensics, to impede the investigation of the judiciary, important SQLite data are often hidden, deleted or overwritten by criminals. The current data recovery methods are inefficient and cannot extract overwritten data. A content carving algorithm based on SQLite was proposed to resolve the above problems. According to the storage characteristics and data deletion mechanism of SQLite databases, free blocks were treated as units and linked into a free list, page content was used as the fine-grained structural unit of carving, and scattered data blocks were spliced efficiently on the basis of the positions of overwritten data. The experimental results show that the proposed SQLite content carving algorithm can effectively recover IM data on local and mobile terminals: the recovery rate reaches 100% when the database is not damaged, and still reaches about 50% when the deleted area is overwritten to different degrees.
    Predefined-class based algorithm for compact regular expression matching
    MAI Taotao, PAN Xiaozhong, WANG Yaqi, SU Yang
    2017, 37(2):  397-401.  DOI: 10.11772/j.issn.1001-9081.2017.02.0397
    Since hardware regular expression matching algorithms face challenges in memory space and throughput, a Predefined-class Compact Finite Automaton (Pre-class CFA) matching algorithm based on the Extended Finite Automaton (XFA) regular expression matching algorithm was proposed. By using predefined classes, the algorithm not only achieves class-character matching in regular expressions, but also matches special character sets through priority setting; besides, the number of transition edges is reduced while eliminating the Deterministic Finite Automaton (DFA) state space explosion problem. At the same time, based on the characteristics of Field-Programmable Gate Array (FPGA) and transition edges, a transition edge access method based on parallel Read-Only Memory (ROM) was designed to realize parallel reading and matching of multiple transition edges of the same state. The algorithm was tested on the ALTERA DE2-70 FPGA platform, which is a low-performance platform. The throughput of the experimental system is 1.30 Gb/s, which can support intrusion detection and spam filtering in gigabit networks.
    Secure instant-messaging method for mobile intelligent terminal
    ZHANG Fan, ZHANG Cong, ZHAO Zemao, XU Mingdi
    2017, 37(2):  402-407.  DOI: 10.11772/j.issn.1001-9081.2017.02.0402
    Instant messaging is fundamental to various mobile Internet applications; however, it is still an open problem to implement secure instant messaging in untrusted Internet environments. An approach for secure instant messaging on mobile intelligent terminals was presented, and a protocol for Trusted Session Key Agreement (TSKA) was designed and implemented. Theoretical analysis shows that the proposed TSKA can ensure the authenticity, freshness and confidentiality of the negotiated session key even when neither the instant messaging server nor the communication channel is trusted. After TSKA, instant audio/video messages can be sent to the other side confidentially and completely. Experimental results in a real Internet environment show that the proposed approach is efficient and secure: the session key can be negotiated within 1-2 seconds, and attackers cannot obtain any plaintext of instant messages.
    Spreading activation method to encrypt data with images as references
    FU Xixu, GONG Xizhang
    2017, 37(2):  408-411.  DOI: 10.11772/j.issn.1001-9081.2017.02.0408
    Complexity, non-linearity and correctness are important features in cryptography, and human knowledge processing happens to have these features. Spreading activation theory describes how human beings deal with complex and non-linear knowledge correctly. Inspired by spreading activation theory, a new method for encoding data with references to a bitmap was proposed. In this method, a symbol is represented by a deviation range and a relative position of a contour, and the encrypted data are generated by a spreading activation process. The same symbol can be represented by different codes, and the same code can represent different symbols. This method ensures the complexity of deciphering by using the spreading activation model to create a massive potential search space; on the other hand, it also reduces the correlation between the ciphertext and the plaintext.
    Data protection mechanism of local-no-data for iSCSI disk
    ZHANG Jinqing, YAO Shuzhen, TAN Huobin
    2017, 37(2):  412-416.  DOI: 10.11772/j.issn.1001-9081.2017.02.0412
    The existing Internet Small Computer System Interface (iSCSI) disk data protection measures cannot guarantee that data will not be stolen when an unexpected user logs into the system legally. By combining disk read/write redirection with transparent disk encryption and decryption, a local-no-data iSCSI disk data protection mechanism named iSCSI_SEC (iSCSI disk data SECurity) was proposed. The concept of local-no-data means that the important data on an iSCSI disk are stored only on the server and cannot end up in local storage through user operations, program copies or other means, which guarantees the confidentiality of important data on the disk. iSCSI_SEC was realized in the system kernel by loading a disk filter layer. The experimental results show that although iSCSI_SEC decreases disk read and write performance, the decrease is less than that of TrueCrypt. iSCSI_SEC not only guarantees the confidentiality of data on iSCSI disks, but also performs better than TrueCrypt in an iSCSI environment.
    Research and application for terminal location management system based on firmware
    SUN Liang, CHEN Xiaochun, ZHENG Shujian, LIU Ying
    2017, 37(2):  417-421.  DOI: 10.11772/j.issn.1001-9081.2017.02.0417
    Pasting a Radio Frequency Identification (RFID) tag on the shell of a computer so as to trace its location in real time has been the most frequently used method for terminal location management. However, the RFID tag loses direct control of the computer once it leaves the authorized area. Therefore, a terminal location management system based on firmware and RFID was proposed. First of all, the authorized area is delimited by the RFID radio signal, and the computer is allowed to boot only if the firmware receives the authorized RFID signal at the boot stage via interaction between the firmware and the RFID tag. Secondly, the computer can function normally only if it receives the RFID signal while the operating system is running. At last, the location management software Agent is protected by the firmware to prevent it from being altered or deleted. When the computer moves out of RFID signal coverage, this is caught by the terminal's software Agent; the terminal is then locked and its data destroyed. The terminal location management system prototype was deployed in an office area to control about thirty computers, so that they could be used normally in authorized areas and were locked immediately once out of the authorized areas.
    Multicast routing in cognitive radio networks: algorithms and protocols
    ZHOU Kunxiao, ZHAO Hui, YUAN Huaqiang
    2017, 37(2):  422-426.  DOI: 10.11772/j.issn.1001-9081.2017.02.0422
    Cognitive Radio Networks (CRN) play a critical role in achieving better wireless bandwidth utilization and improving the quality of wireless applications. Multicasting in CRN is a challenging problem due to the dynamic nature of the spectrum opportunities available to secondary users. Researchers have proposed a variety of schemes for efficient multicast in cognitive radio networks, including schemes based on optimization theory, network coding, machine learning and game theory. The algorithms and techniques for solving the multicast problem effectively were summarized, a comprehensive survey of protocols was given, and future research directions were identified.
    Improved wireless sensor network localization algorithm based on aggregation, collinearity and connectivity of anchor nodes
    HUANG Liang
    2017, 37(2):  427-431.  DOI: 10.11772/j.issn.1001-9081.2017.02.0427
    To improve the positioning accuracy of Wireless Sensor Networks (WSN), the relationship between positioning accuracy and the distribution of anchor nodes was studied, and a new anchor node selection algorithm based on the Degree of Aggregation-Collinearity (DAC) and Node Degree (ND) of anchors, namely DAC-ND, was proposed. Firstly, experimental analysis showed that anchor nodes in collinear or concentrated distributions have a large influence on positioning accuracy. Secondly, by analyzing and comparing anchor node selection algorithms based on the Degree of Collinearity (DC), it was found that the two kinds of anchor node selection algorithms based on the minimum angle (DC-A) or minimum height (DC-H) have certain disadvantages. Finally, combining the advantages of both algorithms, the new concept of DAC was proposed, and the DAC-ND algorithm was put forward based on DAC and ND. The Matlab simulation results illustrate that the average positioning error of DAC-ND is reduced by 54%-73% compared with the random selection algorithm, and by 15%-23% and 12%-23% compared with the anchor selection algorithms based on DC-A and DC-H respectively. The experimental results demonstrate that the DAC-ND algorithm obtains much higher positioning accuracy than algorithms based on DC-A or DC-H, which proves its effectiveness.
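
    The sketch below illustrates only the "minimum height" style collinearity measure (DC-H) for a triple of anchor nodes: the smaller the minimum altitude of the triangle they form, the closer they are to collinear. How the paper combines this with aggregation and node degree into DAC-ND is not reproduced here.

```python
# Minimum triangle altitude as a collinearity indicator for three 2D anchor positions.
import math

def min_triangle_height(a, b, c):
    """Smallest altitude of triangle abc; near zero means the anchors are nearly collinear."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # twice the triangle area from the cross product
    area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    longest = max(dist(a, b), dist(b, c), dist(a, c))
    return area2 / longest if longest > 0 else 0.0

if __name__ == "__main__":
    print(min_triangle_height((0, 0), (10, 0), (5, 8)))    # well-spread anchors
    print(min_triangle_height((0, 0), (10, 0), (5, 0.1)))  # nearly collinear anchors
```
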
    Unknown protocol frame segmentation algorithm based on preamble mining
    LEI Dong, WANG Tao, WANG Xiaohan, MA Yunfei
    2017, 37(2):  440-444.  DOI: 10.11772/j.issn.1001-9081.2017.02.0440
    Concerning the poor efficiency of unknown protocol frame segmentation, an unknown protocol frame segmentation algorithm based on preamble mining was proposed. Firstly, the principle of using the preamble as the start of a frame was introduced. Since existing frequent sequence mining algorithms cannot mine long preambles directly, the problems in candidate sequence selection were analyzed; combined with the characteristics of preambles, two methods were given for selecting candidate sequences from the target bit streams and for selecting candidate sequences based on the variation of the size of the candidate sequence set. Secondly, an algorithm that infers the length of the preamble and mines the preamble was put forward for unknown protocol frame segmentation. Finally, experiments were conducted with real bit streams captured from the Ethernet. The experimental results show that the proposed algorithm can rapidly and accurately mine the preamble sequence in the bit stream of an unknown protocol with low space and time complexity.
    Dynamic centrality analysis of vehicle Ad Hoc networks
    FENG Huifang, WANG Junxia
    2017, 37(2):  445-449.  DOI: 10.11772/j.issn.1001-9081.2017.02.0445
    Dynamic network topology is one of the important characteristics of Vehicular Ad Hoc Networks (VANET). Based on the Intelligent Driver Model with Lane Changes (IDM-LC), VanetMobiSim was used to study the dynamic centrality of VANET topology in detail. A temporal network model of VANET was built, and an evaluation method of dynamic centrality based on an attenuation factor and an information store-and-forward index was established, which not only describes the relation between the current network topology and the historical one, but also depicts the store-and-forward mechanism of information transmission in VANET. Finally, the dynamic centrality of VANET was analyzed through simulation experiments. The results show that although the dynamic centrality of the VANET topology varies with time and parameters, the ranking of important nodes remains relatively stable. This conclusion not only helps identify the relay nodes of information transmission to achieve successful delivery of information, but also provides guidance for the invulnerability of VANET topology.
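
    A minimal sketch of the attenuation-factor idea only: a node's degree in older topology snapshots contributes less than in recent ones. The store-and-forward index of the paper is not reproduced; the decay value and snapshot representation are illustrative assumptions.

```python
# Decayed temporal degree centrality over a chronological list of VANET topology snapshots.
def temporal_degree_centrality(snapshots, node, decay=0.8):
    """snapshots: list of edge sets (tuples of node ids); the most recent snapshot has
    weight 1, and each earlier snapshot is attenuated by an extra factor of `decay`."""
    t_last = len(snapshots) - 1
    score = 0.0
    for t, edges in enumerate(snapshots):
        degree = sum(1 for u, v in edges if node in (u, v))
        score += (decay ** (t_last - t)) * degree
    return score

if __name__ == "__main__":
    snaps = [{(1, 2), (2, 3)}, {(1, 2)}, {(1, 3), (2, 3)}]
    for n in (1, 2, 3):
        print(n, round(temporal_degree_centrality(snaps, n), 3))
```
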
    New improved algorithm for superword level parallelism
    ZHANG Suping, HAN Lin, DING Lili, WANG Pengxiang
    2017, 37(2):  450-456.  DOI: 10.11772/j.issn.1001-9081.2017.02.0450
    The SLP (Superword Level Parallelism) algorithm cannot effectively handle large-scale applications that contain little parallel code, and some code that can be vectorized may even be adverse to vectorization. A new improved SLP algorithm, namely NSLPO, was proposed. First of all, non-isomorphic statements which cannot be vectorized were transformed into isomorphic statements wherever possible, thus recovering the vectorization opportunities that SLP has lost. Secondly, the Maximum Common Subgraph (MCS) was built by adding redundant nodes, and the supplement diagram of SLP was obtained through optimizations such as redundancy deletion, which can greatly increase the parallelism of the program. At last, by using a cutting method, code that is harmful to vectorization was excluded from vectorization and executed serially; only code that is valuable for vectorization was vectorized to improve program efficiency as far as possible. Experiments were conducted on widely used kernel test sets. The experimental results show that compared with the SLP algorithm, the proposed NSLPO algorithm has better performance and reduces running time by 9.1%.
    Adjacent vertex-distinguishing equitable V-total coloring algorithm of graph based on multi-objective optimization
    CAO Daotong, LI Jingwen, JIANG Hongdou, WEN Fei
    2017, 37(2):  457-462.  DOI: 10.11772/j.issn.1001-9081.2017.02.0457
    Adjacent Vertex-Distinguishing Equitable V-Total Coloring (AVDEVTC) of a graph means that, on the basis of adjacent vertex-distinguishing V-total coloring, the numbers of times any two colors are used differ by no more than one. The minimum number of colors needed to complete an AVDEVTC is called the Adjacent Vertex-Distinguishing Equitable V-Total Chromatic Number (AVDEVTCN). A multi-objective optimization coloring algorithm was proposed to solve the AVDEVTC problem of graphs. A main objective function and four sub-objective functions were designed according to the conditions of AVDEVTC. Every sub-objective function was optimized to meet the requirements of the main objective function through iterative exchange operations on the color set of every vertex in the coloring matrix, thus completing the coloring. Theoretical analysis and experimental comparison show that all simple connected graphs with at most eight vertices have an AVDEVTC, and their AVDEVTCN lies between the maximum degree plus 1 and the maximum degree plus 2. The experimental results indicate that the proposed coloring algorithm can correctly compute the AVDEVTCN of graphs with up to 1000 vertices in a short time.
    Performance analysis of cluster computing systems using binary decision diagram
    XU Meiling, QIAO Ying, MO Yuchang, ZHONG Farong
    2017, 37(2):  463-467.  DOI: 10.11772/j.issn.1001-9081.2017.02.0463
    To analyze the performance of cluster computing systems with identical computing power but different failure distributions, a k-to-l-out-of-n structure was used to model system performance, and a new analytical method based on the Binary Decision Diagram (BDD) was proposed for the performance analysis. A new and efficient top-down BDD generation algorithm that makes full use of the special k-to-l-out-of-n structure was also proposed, which solves the problem that the traditional bottom-up generation algorithm must generate a large number of intermediate redundant nodes. The proposed BDD was then used to efficiently calculate the probability of the system being at a specific performance level. At last, examples were provided to illustrate the proposed BDD-based performance analysis methodology and its efficiency in analyzing large-scale cluster computing systems.
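
    For intuition, the quantity being evaluated, the probability that between k and l of n nodes with possibly different reliabilities are working, can be computed directly by a small dynamic program, as in the sketch below; this is not the paper's BDD construction, only the target probability it encodes.

```python
# Probability that a k-to-l-out-of-n system is at the given performance level.
def prob_k_to_l_out_of_n(work_probs, k, l):
    """work_probs[i] is the probability that node i works; returns P(k <= #working <= l)."""
    n = len(work_probs)
    dist = [1.0] + [0.0] * n          # dist[j] = P(exactly j of the nodes seen so far work)
    for p in work_probs:
        new = [0.0] * (n + 1)
        for j, pj in enumerate(dist):
            if pj:
                new[j] += pj * (1.0 - p)      # this node fails
                new[j + 1] += pj * p          # this node works
        dist = new
    return sum(dist[k:l + 1])

if __name__ == "__main__":
    probs = [0.9, 0.8, 0.95, 0.85]     # heterogeneous node reliabilities
    print(round(prob_k_to_l_out_of_n(probs, 2, 3), 4))
```
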
    Constructing method of service network based on social network and linked data
    LI Zhiming, TANG Yongzhong
    2017, 37(2):  468-472.  DOI: 10.11772/j.issn.1001-9081.2017.02.0468
    An increasingly large number of services are available on the network, which plays a great role in promoting the development of service-oriented computing technology. Concerning the problems that service scale and utilization fall far short of initial expectations and that the interactions needed to make services work together are complicated, a method for constructing a service network based on social networks and linked data was proposed. First of all, in order to improve service scale and utilization, the concept of a Service Network (SN) combining social networks and linked data was proposed. Secondly, service communities in the service network were established to improve the feasibility and effectiveness of service discovery. Then, in order to solve the problem of service interoperation, service relationships and properties were formally expressed in the service network. Finally, a case study of tourism services was used to analyze the proposed method. The analysis results validate the efficiency and feasibility of the proposed method in solving the problems of low service utilization and complex service relationships.
    Anonymous broadcast encryption based access control method for cloud storage
    XU Shengwei, LIN Muqing
    2017, 37(2):  473-482.  DOI: 10.11772/j.issn.1001-9081.2017.02.0473
    Focusing on the deficiencies in performance and security of existing anonymous broadcast encryption schemes, a new anonymous broadcast encryption scheme based on the Lagrange interpolation polynomial was proposed. Firstly, an anonymous broadcast encryption security model against adaptive adversaries was defined. Then the scheme was constructed based on the Lagrange interpolation polynomial in the composite-order bilinear group setting, which ensures the anonymity of user identities and achieves efficient encryption and decryption at the same time. Finally, based on the subgroup decision assumption and the composite decisional bilinear Diffie-Hellman assumption, the security was proved in the standard model, which shows that the proposed scheme has both ciphertext confidentiality and receiver anonymity against adaptive adversaries. Experimental results and performance analysis show that the proposed method has low communication and computation overhead, and can efficiently solve the anonymous access control issue for ciphertext data in cloud storage.
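
    The sketch below shows only the Lagrange-interpolation building block over a small prime field, evaluating the interpolated polynomial at zero to recover a shared value; the composite-order bilinear-group construction of the paper is not reproduced, and the field size and points are illustrative.

```python
# Lagrange interpolation at x = 0 over GF(p).
def lagrange_at_zero(points, p):
    """points: list of (x, y) with distinct x over GF(p); returns f(0) mod p."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * (-xj)) % p
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

if __name__ == "__main__":
    # f(x) = 7 + 3x + 2x^2 mod 97; three points determine f, so f(0) = 7 is recovered
    pts = [(1, 12), (2, 21), (3, 34)]
    print(lagrange_at_zero(pts, 97))
```
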
    Secure transmission method of mission planning system in white-box attack context
    CUI Xining, DONG Xingting, MU Ming, WU Jiao
    2017, 37(2):  483-487.  DOI: 10.11772/j.issn.1001-9081.2017.02.0483
    Concerning the problem that the communication keys used in the transmission of a mission planning system are easily stolen in the White-Box Attack Context (WBAC), a new secure transmission method for mission planning systems was proposed based on a modified white-box Advanced Encryption Standard (white-box AES). First, the AES was split into many lookup tables and the keys were embedded into these lookup tables, then the lookup tables were merged in accordance with the executing order of the AES. Secondly, on the ground, different white-box AES programs were generated in accordance with the given white-box AES generation algorithm using different keys. In the end, the white-box AES programs were embedded into the secure transmission of the mission planning system. When the key needs to be replaced, the original white-box AES program is erased on the ground and a new white-box AES program is generated. Theoretical analysis shows that compared with the traditional secure transmission of mission planning systems, the modified method raises the attack complexity to 2^91, which achieves sufficient security and protects the communication key.
    Privacy-preserving trajectory data publishing based on non-sensitive information analysis
    DENG Jingsong, LUO Yonglong, YU Qingying, CHEN Fulong
    2017, 37(2):  488-493.  DOI: 10.11772/j.issn.1001-9081.2017.02.0488
    Focusing on the issue of privacy disclosure between trajectories and non-sensitive information, a trajectory privacy preserving algorithm based on non-sensitive information analysis was proposed. Firstly, the correlation between trajectories and non-sensitive information was analyzed to build a trajectory privacy disclosure decision model, from which the Minimal Violating Sequences (MVS) were obtained. Secondly, using common subsequences, the doublets in MVS with the minimal loss of trajectory data were selected as the suppression objects when removing the privacy risks caused by MVS, and an anonymized trajectory dataset with privacy preservation and low data loss was thus obtained. In comparison experiments with the LKC-Local algorithm and the Trad-Local algorithm, when the sequence length is 3, the average instance loss of the proposed algorithm is about 6% and 30% lower respectively, and the average MFS (Maximal Frequent Sequence) loss is about 7% and 60% lower respectively. The experimental results verify that the proposed algorithm can effectively improve the quality of recommendation services.
    Private information retrieval protocol based on point function secret sharing
    YUAN Dazeng, HE Mingxing, LI Xiao, ZENG Shengke
    2017, 37(2):  494-498.  DOI: 10.11772/j.issn.1001-9081.2017.02.0494
    Focusing on the privacy security problem of Private Information Retrieval (PIR), a private information retrieval protocol based on point Function Secret Sharing (FSS) was proposed. The retrieval index was regarded as a special 0-1 point function, and the key group of this point function was generated by using the point function secret sharing technique and sent to the p servers respectively. The retrieval result was obtained by an XOR operation over the responses returned by the p servers. The correctness, security and efficiency of the protocol were analyzed, which proves that the proposed protocol is secure and efficient. A concrete example was given to illustrate the validity of the protocol. Finally, applications of the protocol to multi-term private information retrieval and keyword-based private information retrieval were introduced.
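
    A minimal sketch of the two-server special case, assuming an information-theoretic sharing of the 0-1 point function: the indicator of index i is split into two subsets that differ only at i, and XOR-ing the servers' responses recovers the queried record. The paper's p-server FSS-based protocol is not reproduced here.

```python
# Two-server XOR-based PIR where the point function is additively shared over GF(2).
import secrets

def make_queries(n, i):
    """Return two share vectors (bit lists) that differ exactly at position i."""
    share1 = [secrets.randbelow(2) for _ in range(n)]
    share2 = list(share1)
    share2[i] ^= 1
    return share1, share2

def server_answer(database, share):
    """Each server XORs the records selected by its share, learning nothing about i."""
    ans = 0
    for rec, bit in zip(database, share):
        if bit:
            ans ^= rec
    return ans

if __name__ == "__main__":
    db = [0x11, 0x22, 0x33, 0x44, 0x55]      # records as integers
    q1, q2 = make_queries(len(db), 3)
    print(hex(server_answer(db, q1) ^ server_answer(db, q2)))  # recovers db[3] = 0x44
```
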
    Image encryption based on combination of spatial and wavelet domain
    CAO Guanghui, LI Chunqiang
    2017, 37(2):  499-504.  DOI: 10.11772/j.issn.1001-9081.2017.02.0499
    Aiming at the weak encryption strength of hybrid-domain image encryption algorithms based on chaos theory, a new image encryption algorithm based on the combination of the spatial and wavelet domains was proposed. First, a one-level two-dimensional discrete wavelet decomposition was performed on the original image to extract the low-frequency wavelet coefficients. Second, using the chaotic sequence generated by the 2D Sine Logistic chaotic dynamic system, the low-frequency sub-band was scrambled by chaotic magic transformation, and then the inverse wavelet transform was executed on the decomposed image. For the scrambled image, a chaotic sequence based on the intertwining Logistic map was generated as the encryption key in the spatial domain, and the pixels were encrypted by combining element multiplication in a Galois field with XOR. At the same time, chaotic disturbance and encryption feedback were introduced to implement a one-time running key. Theoretical analysis and experimental results show that the proposed algorithm has the advantages of a large key space, resistance to reconstruction attacks and differential attacks, feasible encryption efficiency and strong security.
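
    The sketch below is a much-simplified stand-in, not the paper's scheme: a 1D logistic map generates a keystream that is XOR-ed with pixel bytes, illustrating how chaotic sequences drive spatial-domain encryption. The paper additionally uses a 2D Sine Logistic system, chaotic magic scrambling in the wavelet domain, Galois-field multiplication and encryption feedback; the parameter values here are arbitrary.

```python
# Logistic-map keystream XOR-ed with pixel bytes (simplified illustration only).
def logistic_keystream(x0, mu, length, burn_in=200):
    """Iterate x <- mu*x*(1-x) and quantize each post-burn-in state to one keystream byte."""
    x, stream = x0, []
    for i in range(burn_in + length):
        x = mu * x * (1.0 - x)
        if i >= burn_in:
            stream.append(int(x * 256) % 256)
    return stream

def xor_encrypt(pixels, x0=0.3731, mu=3.99):
    return [p ^ k for p, k in zip(pixels, logistic_keystream(x0, mu, len(pixels)))]

if __name__ == "__main__":
    plain = [10, 200, 33, 45, 99]
    cipher = xor_encrypt(plain)
    print(cipher, xor_encrypt(cipher) == plain)   # XOR with the same keystream decrypts
```
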
    Sound recognition based on optimized orthogonal matching pursuit and deep belief network
    CHEN Qiuju, LI Ying
    2017, 37(2):  505-511.  DOI: 10.11772/j.issn.1001-9081.2017.02.0505
    Concerning the influence of various environmental ambiences on sound event recognition, a sound event recognition method based on Optimized Orthogonal Matching Pursuit (OOMP) and a Deep Belief Network (DBN) was proposed. Firstly, the Particle Swarm Optimization (PSO) algorithm was used to optimize the Orthogonal Matching Pursuit (OMP) sparse decomposition of the sound signal, which realizes fast sparse decomposition, preserves the main body of the sound signal and reduces the influence of noise. Then, an optimized composite feature, called the OOMP feature, was formed from the Mel-Frequency Cepstral Coefficients (MFCC), the time-frequency OMP feature and the pitch feature extracted from the reconstructed sound signal. Finally, a DBN was employed to learn the OOMP feature and recognize 40 classes of sound events in different environments and at different Signal-to-Noise Ratios (SNR). The experimental results show that the proposed method combining OOMP and DBN is suitable for sound event recognition in various environments and can effectively recognize sound events; it still maintains an average accuracy of 60% even when the SNR is 0 dB.
    Unsupervised local feature learning for robust face recognition
    FENG Shu
    2017, 37(2):  512-516.  DOI: 10.11772/j.issn.1001-9081.2017.02.0512
    Image representation is a fundamental issue in face recognition, and it is desirable to design a discriminative and robust feature to alleviate the effects of illumination, occlusion, pose, etc. Motivated by the convolutional architecture and the advantages (stable results and fast convergence) of the K-means algorithm in building filter banks, a very simple yet effective face recognition approach was presented. It consists of three main parts: convolutional filter learning, nonlinear processing and spatial mean pooling. Firstly, K-means was employed on preprocessed image patches to construct the convolution filters quickly, and each filter was convolved with the face image to extract sufficient and discriminative feature information. Secondly, the typical hyperbolic tangent function was applied for nonlinear projection of the convolved features. Thirdly, spatial mean pooling was used to denoise and reduce the dimensionality of the learned features. The classification stage only requires a linear regression classifier. The experimental results on two widely used databases, AR and Extended Yale B, demonstrate that the proposed method is simple and effective, and has strong robustness to illumination and occlusion.
    Hybrid imperialist competitive algorithm for solving job-shop scheduling problem
    YANG Xiaodong, KANG Yan, LIU Qing, SUN Jinwen
    2017, 37(2):  517-522.  DOI: 10.11772/j.issn.1001-9081.2017.02.0517
    For the Job-shop Scheduling Problem (JSP) with the objective of minimizing the makespan, a hybrid algorithm combining the Imperialist Competitive Algorithm (ICA) and Tabu Search (TS) was proposed. Based on the imperialist competitive algorithm, the crossover and mutation operators of the Genetic Algorithm (GA) were applied as the assimilation step to strengthen its global search ability. To overcome the weakness of the imperialist competitive algorithm in local search, TS was used to improve the offspring of assimilation, with a hybrid neighborhood structure and a novel selection strategy making the search more efficient. Combining these global and local search abilities, the proposed algorithm was tested on 13 classic benchmark scheduling problems and compared with four other hybrid algorithms proposed in recent years; the experimental results show that it is effective and stable.
    Optimization of integrated flexible process planning and job shop scheduling based on artificial bee colony
    SONG Shuanjun, YANG Peili, SHI Wenli
    2017, 37(2):  523-529.  DOI: 10.11772/j.issn.1001-9081.2017.02.0523
    To optimize integrated flexible process planning and job shop scheduling, taking into account the flexibility of the manufacturing process, operation order and manufacturing machine of the workpieces, an artificial bee colony algorithm based on crossover and mutation was proposed for minimizing the maximum completion time of the product processing task. Aiming at the discrete characteristics of integrated flexible process planning and job shop scheduling, the process route was encoded as a sequence and the job scheduling was encoded on the basis of operations. To improve the performance of the algorithm, through crossover and mutation operations on the process population and the scheduling population, the employed bees and onlooker bees sought local optima, while the scouts sought the global optimum. On this basis, the necessity of the integration research and the effectiveness of the improved algorithm were verified by two test cases.
    Traffic sign recognition based on optimized convolutional neural network architecture
    WANG Xiaobin, HUANG Jinjie, LIU Wenju
    2017, 37(2):  530-534.  DOI: 10.11772/j.issn.1001-9081.2017.02.0530
    Among the existing algorithms for traffic sign recognition, some train quickly but have low recognition rates, while others achieve high recognition rates but require long training. To resolve these problems, the Convolutional Neural Network (CNN) architecture was optimized by using Batch Normalization (BN), Greedy Layer-Wise Pretraining (GLP) and by replacing the classifier with a Support Vector Machine (SVM), and a new traffic sign recognition algorithm based on the optimized CNN architecture was proposed. The BN method was used to change the data distribution of the middle layers: the output of each convolutional layer was normalized to zero mean and unit variance, thus accelerating training convergence and reducing training time. With the GLP method, the first convolutional layer is trained and its parameters are preserved when training is over, then the second layer is trained in the same way, and so on until all the convolutional layers are trained; GLP can effectively improve the recognition rate of the convolutional network. The SVM classifier only focuses on misclassified samples and no longer processes correctly classified samples, thus speeding up training. The experiments were conducted on the German traffic sign recognition benchmark: compared with the traditional CNN, the training time of the new algorithm was reduced by 20.67%, and its recognition rate reached 98.24%. The experimental results prove that the new algorithm greatly shortens training time and reaches a high recognition rate by optimizing the structure of the traditional CNN.
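
    A minimal sketch of the Batch Normalization step described above, assuming per-channel statistics over a mini-batch and learnable gamma/beta fixed to 1 and 0 for illustration; it is not the authors' training code.

```python
# Batch Normalization forward pass: normalize each channel to zero mean and unit variance.
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """x: mini-batch activations of shape (N, C); returns the normalized activations."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

if __name__ == "__main__":
    np.random.seed(0)
    batch = np.random.randn(8, 4) * 3.0 + 5.0      # skewed pre-activation distribution
    out = batch_norm_forward(batch, gamma=np.ones(4), beta=np.zeros(4))
    print(out.mean(axis=0).round(6), out.std(axis=0).round(3))
```
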
    Three random under-sampling based ensemble classifiers for Web spam detection
    CHEN Musheng, LU Xiaoyong
    2017, 37(2):  535-539.  DOI: 10.11772/j.issn.1001-9081.2017.02.0535
    In order to address the slightly imbalanced classification problem in Web spam detection, three ensemble classifiers based on random under-sampling were proposed: Random Under-Sampling once without replacement (RUS-once), Random Under-Sampling multiple times without replacement (RUS-multiple) and Random Under-Sampling with replacement (RUS-replacement). At first, the imbalanced training dataset was converted into several balanced datasets using one of the under-sampling techniques. Secondly, Classification And Regression Tree (CART) classifiers were trained on the balanced datasets. Finally, an ensemble classifier was constructed from all the CART classifiers using a simple voting rule and used to classify the test samples. The experimental results show that the three random under-sampling based ensemble classifiers achieve good classification results, and the performance of RUS-multiple and RUS-replacement is better than that of RUS-once. Compared with CART, Bagging with CART and AdaBoost with CART, the AUC values of RUS-multiple and RUS-replacement increase by about 10% on WEBSPAM UK-2006 and about 25% on WEBSPAM UK-2007; compared with several state-of-the-art baseline classification models, RUS-multiple and RUS-replacement achieve the best AUC values.
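
    A minimal sketch of the RUS-multiple variant under assumptions (CART via scikit-learn's DecisionTreeClassifier, label 1 as the minority class, simple majority voting); it is not the authors' implementation.

```python
# RUS-multiple: train one CART per balanced subset obtained by under-sampling the majority class.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rus_multiple_ensemble(X, y, n_rounds=10, random_state=0):
    rng = np.random.default_rng(random_state)
    minority = np.flatnonzero(y == 1)       # assume label 1 (spam) is the minority class
    majority = np.flatnonzero(y == 0)
    trees = []
    for _ in range(n_rounds):
        sampled = rng.choice(majority, size=minority.size, replace=False)
        idx = np.concatenate([minority, sampled])
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return trees

def predict_vote(trees, X):
    votes = np.mean([t.predict(X) for t in trees], axis=0)
    return (votes >= 0.5).astype(int)        # simple majority voting rule

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 5))
    y = (rng.random(300) < 0.15).astype(int)   # slightly imbalanced labels
    trees = rus_multiple_ensemble(X, y)
    print(predict_vote(trees, X[:10]))
```
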
    Hesitant fuzzy decision-making method based on regret theory and evidence theory
    ZHU Lun
    2017, 37(2):  540-545.  DOI: 10.11772/j.issn.1001-9081.2017.02.0540
    Under the hesitant fuzzy environment, and considering the decision makers' psychological behavior, a method based on regret theory and evidence theory was proposed to cope with Multi-Attribute Group Decision Making (MAGDM) problems in which the attribute values are hesitant fuzzy information and the attribute weights and state probability information are completely unknown. First, evidence theory was utilized to calculate the probability information of the states. Then, based on the interval fuzzy matrices, t-distribution estimation and the score function matrices, the utility values of the attribute values were determined. Moreover, the decision makers' perceived utility values were obtained by using regret theory, the overall perceived utility of each alternative was acquired by the weighted arithmetic mean, and all the alternatives were then ranked. Finally, the proposed approach was applied to a numerical example about enterprise selection. The experimental results show that the proposed method obtains the same results as the existing methods while only a small number of parameters need to be considered in the decision process. The comparative analysis demonstrates that the decision results obtained by the proposed method are reasonable and reliable, and reflect the actual situation.
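
    A minimal sketch of the regret-theory step only, assuming the commonly used power utility u(x) = x**alpha and regret-rejoice function R(d) = 1 - exp(-delta*d); the parameter values are illustrative and the evidence-theory and aggregation steps of the paper are not reproduced.

```python
# Perceived utility of an outcome x relative to the forgone alternative y (regret theory).
import math

def perceived_utility(x, y, alpha=0.88, delta=0.3):
    u = lambda v: v ** alpha                         # power utility function
    regret_rejoice = lambda d: 1.0 - math.exp(-delta * d)
    return u(x) + regret_rejoice(u(x) - u(y))

if __name__ == "__main__":
    # choosing an alternative worth 0.6 while the best rival is worth 0.9 incurs regret
    print(round(perceived_utility(0.6, 0.9), 4))
    print(round(perceived_utility(0.9, 0.6), 4))     # the reverse choice yields rejoicing
```
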
    Applications of fractional partial differential equations in image processing
    ZHOU Shangbo, WANG Liping, YIN Xuehui
    2017, 37(2):  546-552.  DOI: 10.11772/j.issn.1001-9081.2017.02.0546
    The application of fractional partial differential equations in image processing, especially in image denoising and image Super Resolution (SR) reconstruction, has attracted wide attention, and existing research results have demonstrated the advantages and effects of fractional-order approaches. The theory and models of fractional partial differential equations for image denoising and image super-resolution reconstruction were introduced and discussed. The simulation results show that methods based on fractional partial differential equations have advantages over methods based on integer-order partial differential equations in terms of denoising and reducing the staircase effect. Finally, related open research problems were pointed out.
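    As a worked example of the fractional-order operator such models build on (a sketch using the common Grunwald-Letnikov discretization, not taken from any specific paper in the survey), the code below computes a truncated fractional difference of order alpha along a 1-D signal; image models apply similar differences along several directions.

        import numpy as np

        def gl_fractional_diff(signal, alpha, n_terms=20):
            """Truncated Grunwald-Letnikov fractional derivative of order alpha
            along a 1-D signal sampled on a unit grid."""
            # recursive coefficients: w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)
            w = np.empty(n_terms)
            w[0] = 1.0
            for k in range(1, n_terms):
                w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
            out = np.zeros_like(signal, dtype=float)
            for k in range(n_terms):
                out[k:] += w[k] * signal[:len(signal) - k]
            return out

        row = np.linspace(0.0, 1.0, 256)                # a smooth intensity ramp
        print(gl_fractional_diff(row, alpha=1.5)[64])   # one sample of the fractional gradient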
    Image inpainting algorithm based on non-subsampled contourlet transform
    ZOU Weigang, ZHOU Zhihui, WANG Yang
    2017, 37(2):  553-558.  DOI: 10.11772/j.issn.1001-9081.2017.02.0553
    Multi-scale analysis has been widely used in digital image processing, and inpainting images with large damaged areas remains a hot and difficult topic. Based on the principle of multi-resolution analysis and traditional image inpainting methods, a new image inpainting algorithm based on the non-subsampled contourlet transform was proposed. Firstly, the image was decomposed into low-frequency and high-frequency parts by the non-subsampled contourlet transform, and the parts of different frequencies were then inpainted separately. The low-frequency component was inpainted by an improved texture synthesis method; because the non-subsampled contourlet transform keeps the low-frequency and high-frequency components spatially aligned, the corresponding positions of the high-frequency components could be repaired at the same time as the low-frequency component. Finally, the inpainted texture image was obtained through the reconstruction process of the non-subsampled contourlet transform. In general, appropriate selection of the inpainting parameters gives the best visual effect, and a counter-example was given for verification. The structural similarity of the proposed algorithm differs little from that of the classical Criminisi algorithm and the wavelet-based inpainting algorithm, while the Peak Signal-to-Noise Ratio (PSNR) results vary with the texture characteristics of the images and the locations of the damaged areas. The simulation results show that the proposed method promotes the application of the non-subsampled contourlet transform to image inpainting and achieves a better repair effect when inpainting images with large damaged areas.
    Increasing fusion image spectral fidelity by using midway histogram equalization
    ZHAO Liling, SUN Quansen
    2017, 37(2):  559-563.  DOI: 10.11772/j.issn.1001-9081.2017.02.0559
    To solve the problem of spectral distortion in remote sensing image fusion, an improved transform method based on midway image equalization was proposed. First, the multispectral image was decomposed by the IHS (Intensity, Hue, Saturation) transform; then, by midway image equalization, the cumulative histograms of the panchromatic image and of the intensity component of the multispectral image were adjusted to be the same; finally, the inverse IHS transform was applied and a high-quality fusion image was obtained. The theoretical analysis and experimental results show that the proposed algorithm not only suppresses the spectral distortion of the fusion image but also preserves its spatial resolution effectively, and it is simple and easy to implement. Compared with traditional fusion algorithms such as the IHS transform, Principal Component Analysis (PCA), Wavelet Transform (WT) and Brovey, the fusion images generated by the proposed algorithm have good visual effects; in addition, the proposed algorithm performs better in terms of Peak Signal-to-Noise Ratio (PSNR), spectral distortion and information entropy. Fusion images obtained with midway image equalization maintain spatial information well and show little spectral distortion.
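    Midway equalization itself is simple to sketch: for two equally sized grayscale images, each pixel is replaced by the average of the two images' sorted intensity sequences at that pixel's rank, so both end up with the same "midway" histogram. The code below is a minimal sketch of that step only; the IHS decomposition and reconstruction are omitted.

        import numpy as np

        def midway_equalize(a, b):
            """Give two equally sized grayscale images the same midway histogram."""
            flat_a, flat_b = a.ravel().astype(float), b.ravel().astype(float)
            order_a, order_b = np.argsort(flat_a), np.argsort(flat_b)
            midway = (flat_a[order_a] + flat_b[order_b]) / 2.0   # shared target histogram
            out_a, out_b = np.empty_like(flat_a), np.empty_like(flat_b)
            out_a[order_a] = midway                               # map back by rank
            out_b[order_b] = midway
            return out_a.reshape(a.shape), out_b.reshape(b.shape)

        pan = np.random.rand(64, 64)          # stand-in for the panchromatic image
        intensity = np.random.rand(64, 64)    # stand-in for the intensity component
        eq_pan, eq_intensity = midway_equalize(pan, intensity)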
    Improved nonlinear brightness-lifting model for restoring backlight images
    MAN Le, ZHAO Yu, WANG Haoxian
    2017, 37(2):  564-568.  DOI: 10.11772/j.issn.1001-9081.2017.02.0564
    Photo observation and identification are often affected by insufficient light and unsuitable shooting angles. Existing nonlinear brightness enhancement methods can improve the brightness of the backlit area, but distortion still occurs in the highlighted area because of excessive enhancement. To solve this problem, a new adaptive backlight image restoration method based on a nonlinear brightness-lifting model was proposed. Image segmentation and a logarithmic function were used to enhance the image brightness, where the segmentation threshold was determined by Otsu thresholding and the adjustment coefficient in the transition function was calculated from the ratio of the number of pixels in the backlit area to the total number of pixels. Simulation results show that, compared with the method of applying a logarithmic conversion function and adjusting the image brightness in the HSI color model, the proposed method not only improves the image quality and preserves the natural appearance of the image without distortion, but also achieves a clear improvement in overall performance.
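    A minimal sketch of the idea, with assumptions: Otsu's threshold separates the backlit region of a grayscale image in [0, 1], a logarithmic curve lifts those pixels, and the lift is weighted by the ratio of backlit pixels to all pixels; the paper's exact transition function is not reproduced here.

        import numpy as np
        from skimage.filters import threshold_otsu

        def lift_backlit(gray):
            """Brighten the backlit (dark) region of a grayscale image in [0, 1]."""
            t = float(threshold_otsu(gray))
            backlit = gray < t
            ratio = backlit.mean()                               # backlit pixels / total pixels
            x = gray[backlit]
            lifted = t * np.log1p(x / t) / np.log(2.0)           # concave lift of [0, t] onto [0, t]
            out = gray.copy()
            out[backlit] = (1.0 - ratio) * x + ratio * lifted    # blend by the backlit ratio
            return out

        photo = np.random.rand(64, 64)        # stand-in for a backlit photograph
        restored = lift_backlit(photo)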
    Ultrasound image segmentation based on pixel clustering
    HUANG Zhibiao, YAO Yu
    2017, 37(2):  569-573.  DOI: 10.11772/j.issn.1001-9081.2017.02.0569
    B-mode cardiac ultrasound image segmentation is a fundamental step before estimating cardiac functional parameters. Aiming at the problems that the segmentation accuracy is low because of the low resolution of ultrasound images and that model-based segmentation algorithms need large training sets, an image segmentation algorithm based on pixel clustering was proposed, combined with prior knowledge of B-mode cardiac ultrasound images. Firstly, anisotropic diffusion was used to preprocess the image. Secondly, one-dimensional K-means was used to cluster the pixels. Finally, each pixel was assigned the value of its best cluster center according to the clustering results and the prior knowledge. The theoretical analysis shows that the proposed algorithm can obtain the maximum Peak Signal-to-Noise Ratio (PSNR) of the ultrasound image; the experimental results show that the proposed algorithm performs better than the Otsu algorithm, with a PSNR 11.5% higher. The proposed algorithm works even on a single ultrasound image and is suitable for ultrasound image segmentation of arbitrary shapes, so it is conducive to estimating cardiac functional parameters more accurately.
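    The clustering step can be sketched with scikit-learn (the anisotropic-diffusion preprocessing and the prior-knowledge assignment are omitted; this is an illustration, not the authors' code): one-dimensional K-means groups the pixel intensities and each pixel is replaced by its cluster centre.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_segment(image, n_clusters=3, random_state=0):
            """Cluster grayscale pixel intensities with 1-D K-means and replace
            each pixel by the centre of the cluster it belongs to."""
            pixels = image.reshape(-1, 1).astype(float)
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(pixels)
            centres = km.cluster_centers_.ravel()
            return centres[km.labels_].reshape(image.shape)

        noisy = np.random.rand(128, 128)              # stand-in for a B-mode image
        segmented = cluster_segment(noisy, n_clusters=3)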
    Well-formedness checking algorithm of interface automaton and its realization
    LI Xue, ZHU Jiagang
    2017, 37(2):  574-580.  DOI: 10.11772/j.issn.1001-9081.2017.02.0574
    To address the issue that non-well-formed components in a component-based system may cause the whole system to work abnormally, an algorithm for checking the well-formedness of a component was proposed based on its Interface Automaton (IA) model, and a relevant prototype tool was developed. Firstly, the reachability graph isomorphic with the given IA was constructed. Secondly, an ordered set containing all transitions of the reachability graph was obtained by performing a depth-first search on the graph. Finally, the well-formedness check was completed by checking, according to the ordered set, whether each action belonging to a method of the IA could autonomously reach its return action without exception, under the condition that the external environment meets the input hypothesis. As a realization of the proposed algorithm, a prototype tool named T-CWFC (Tool for Component Well-Formedness Checking) was developed on the Eclipse platform; it can model a given component, build its reachability graph, check its well-formedness and output the check result. The validity of the proposed algorithm was verified by running the tool on a set of components.
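    The core reachability question can be sketched as a depth-first search over the transition graph: from the state reached after a method's call action, check whether the matching return action is reachable. The representation below (triples of state, action label, next state) and the toy automaton are simplifying assumptions, not the IA formalism used by the tool.

        from collections import defaultdict

        def reaches_return(transitions, start_state, return_action):
            """Depth-first search over (state, action, next_state) triples to test
            whether return_action is reachable from start_state."""
            graph = defaultdict(list)
            for src, action, dst in transitions:
                graph[src].append((action, dst))
            stack, seen = [start_state], set()
            while stack:
                state = stack.pop()
                if state in seen:
                    continue
                seen.add(state)
                for action, dst in graph[state]:
                    if action == return_action:
                        return True
                    stack.append(dst)
            return False

        # toy automaton: the call 'get?' leads to state 1; its return is 'get_ret!'
        ia = [(0, "get?", 1), (1, "work;", 2), (2, "get_ret!", 0)]
        print(reaches_return(ia, 1, "get_ret!"))   # True: the method call can reach its return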
    Model composition method of decision support system driven by double factors of state and data
    TANG Chaosheng, LIU Shihong, CHENG Jieren
    2017, 37(2):  581-586.  DOI: 10.11772/j.issn.1001-9081.2017.02.0581
    In Decision Support System (DSS) model composition, the single-factor-driven method has a narrow range of application and low efficiency, while the multi-factor-driven method lacks a description of the logical relationship between the algorithm and sub-model levels and provides a low level of black-box abstraction. Therefore, a composition method driven by the double factors of state and data was proposed. Based on the definition of a workflow process model, each DSS sub-model was first wrapped as a workflow activity, a portable and independent unit that contains the model-solving algorithm and exposes external interface parameters. Secondly, according to the execution dependencies among activities, corresponding Event-Condition-Action (ECA) rules were formulated to bind the composition relations between them dynamically. Finally, the network activity diagram was converted and the composition was executed with the aid of a workflow platform. Experimental results show that the proposed method can solve general model composition problems and has good generality and expansibility compared with other methods.
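    A minimal sketch of how an ECA rule can bind two sub-model activities is given below; the classes and the toy forecast/optimise composition are illustrative assumptions, not the paper's workflow platform.

        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class Activity:
            """A workflow activity wrapping one DSS sub-model: it exposes its
            interface through a dict of parameters and a solve function."""
            name: str
            solve: Callable[[Dict], Dict]

        @dataclass
        class ECARule:
            """Event-Condition-Action: when the 'event' activity finishes and the
            condition on the shared data holds, trigger the 'action' activity."""
            event: str
            condition: Callable[[Dict], bool]
            action: str

        def run(activities: Dict[str, Activity], rules: List[ECARule], start: str, data: Dict):
            """Chain activities through ECA rules until no rule fires."""
            current, state = start, dict(data)
            while current is not None:
                state = activities[current].solve(state)
                current = next((r.action for r in rules
                                if r.event == current and r.condition(state)), None)
            return state

        # illustrative composition: a forecast sub-model feeds an optimisation sub-model
        acts = {"forecast": Activity("forecast", lambda s: {**s, "demand": 120}),
                "optimise": Activity("optimise", lambda s: {**s, "plan": s["demand"] * 0.9})}
        rule = ECARule(event="forecast", condition=lambda s: s["demand"] > 100, action="optimise")
        print(run(acts, [rule], "forecast", {}))     # {'demand': 120, 'plan': 108.0}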
    Multi-fractal Web log simulation generation algorithm based on stable process
    PENG Xingxiong, XIAO Ruliang
    2017, 37(2):  587-592.  DOI: 10.11772/j.issn.1001-9081.2017.02.0587
    Software systems running on server clusters need large-scale Web log data sets for performance testing, but existing simulation generation algorithms, built on a single model, cannot meet this requirement. Aiming at this problem, a new multi-fractal Web log simulation generation algorithm based on the alpha-stable process was proposed. Firstly, the Long Range Dependence (LRD) self-similarity of Web logs was described by an alpha-stable process. Secondly, the Short Range Dependence (SRD) multi-fractal behavior of Web logs was described by a binomial-b model. Finally, the long range dependence model and the short range dependence model were integrated into an improved ON/OFF framework. Compared with single-model approaches, the parameters of the proposed algorithm have clear physical meaning, and the algorithm captures both self-similarity and multi-fractality well. The experimental results show that the proposed algorithm can accurately simulate real Web logs and can be effectively applied to Web log simulation generation for large-scale data sets.
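    The two ingredients can be sketched separately: SciPy's levy_stable generates the heavy-tailed alpha-stable fluctuations, and a binomial-b multiplicative cascade produces the bursty multifractal structure. How the two are combined inside the improved ON/OFF framework follows the paper, not this sketch, and all parameter values below are assumptions.

        import numpy as np
        from scipy.stats import levy_stable

        def binomial_cascade(b, depth):
            """Binomial-b cascade: each cell recursively splits its mass into
            fractions b and 1-b, yielding a multifractal measure on 2**depth cells."""
            mass = np.array([1.0])
            for _ in range(depth):
                mass = np.repeat(mass, 2) * np.tile([b, 1.0 - b], len(mass))
            return mass

        def simulate_request_counts(alpha=1.7, beta=0.0, b=0.7, depth=10, scale=50.0, seed=1):
            """Bursty request-count series: cascade trend plus alpha-stable noise."""
            n = 2 ** depth
            noise = levy_stable.rvs(alpha, beta, size=n, random_state=seed)
            trend = binomial_cascade(b, depth) * n       # normalise average mass to 1 per cell
            return np.maximum(0.0, trend * scale + np.abs(noise))

        print(simulate_request_counts()[:5])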
    Real-time vehicle monitoring algorithm for single-lane based on DSP
    YANG Ting, LI Bo, SHI Wenjing, ZHANG Chengfei
    2017, 37(2):  593-596.  DOI: 10.11772/j.issn.1001-9081.2017.02.0593
    Traditional traffic flow detection systems based on sensor devices require complex hardware, and general traffic flow detection algorithms cannot distinguish the direction of vehicles. To resolve these problems, a real-time single-lane vehicle monitoring algorithm based on a Digital Signal Processor (DSP) was proposed and applied to a parking lot. Firstly, a background differential algorithm with an improved mean background modeling method was used to detect vehicles in a virtual detection zone. Then, an adjacent-frame two-value classification algorithm was proposed to distinguish the direction of vehicles. Finally, the virtual detection zone was used for vehicle counting, and the number of empty parking spots was displayed in real time on a Light Emitting Diode (LED) screen. The feasibility of the proposed algorithm was verified by simulation. Actual tests showed that the accuracy of the adjacent-frame two-value classification algorithm for direction detection was 96.5% and the accuracy of the parking spot monitoring algorithm was 92.2%. The proposed algorithm achieves high accuracy with little detection equipment, so it can be applied to real-time vehicle monitoring in single-lane parking lots.
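    The background-differencing and zone-occupancy idea (not the direction classifier, and not the DSP implementation itself) can be sketched as follows; the zone coordinates, thresholds and update rate are illustrative assumptions.

        import numpy as np

        class BackgroundMonitor:
            """Running-mean background model with differencing over a virtual zone."""
            def __init__(self, first_frame, alpha=0.05, diff_thresh=30):
                self.background = first_frame.astype(float)
                self.alpha = alpha                    # background update rate
                self.diff_thresh = diff_thresh        # per-pixel difference threshold

            def vehicle_present(self, frame, zone, occupancy=0.2):
                """True if enough pixels in the zone differ from the background."""
                y0, y1, x0, x1 = zone
                frame = frame.astype(float)
                diff = np.abs(frame - self.background)
                occupied = (diff[y0:y1, x0:x1] > self.diff_thresh).mean() > occupancy
                still = diff <= self.diff_thresh      # update the mean only where static
                self.background[still] += self.alpha * (frame - self.background)[still]
                return occupied

        frames = (np.random.rand(5, 120, 160) * 255).astype(np.uint8)   # stand-in video
        monitor = BackgroundMonitor(frames[0])
        for f in frames[1:]:
            print(monitor.vehicle_present(f, zone=(40, 80, 60, 120)))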
    Finite-time adaptive chaos control for permanent magnet synchronous motor
    GAO Junshan, SHI Lanlan, DENG Liwei
    2017, 37(2):  597-601.  DOI: 10.11772/j.issn.1001-9081.2017.02.0597
    Aiming at the issues that a chaotic attractor exists in the Permanent Magnet Synchronous Motor (PMSM) and that chaos synchronization control of the PMSM cannot be realized except around the equilibrium point, a zero-error-system algorithm based on automatic control theory and the finite-time control principle was proposed. Firstly, the error system was established from the mathematical model of the PMSM and the relation between each state variable and its expected value. Secondly, a synchronization controller and correction rate were designed for the error system model, and it was proved with Lyapunov's stability criterion that the error system converges to zero in finite time. Finally, disturbances were imposed on the error system and the robustness of the algorithm was analyzed. The theoretical analysis and simulation results show that the proposed algorithm keeps the system at zero after the error reaches zero, effectively suppresses the chaotic attractor and flexibly adjusts the input and output of the PMSM; moreover, the PMSM system is robust to parameter uncertainty and external disturbance while maintaining normal operation.
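    For context, the chaotic behaviour itself can be reproduced with the commonly used dimensionless smooth-air-gap PMSM model; the equations and the parameter values below (sigma = 5.46, gamma = 20, zero inputs and load) are values widely reported to yield a chaotic attractor and are assumptions here, not taken from the paper, and the finite-time controller is not reproduced.

        import numpy as np
        from scipy.integrate import solve_ivp

        def pmsm(t, state, sigma=5.46, gamma=20.0):
            """Dimensionless PMSM model with zero inputs and zero load torque."""
            i_d, i_q, w = state
            return [-i_d + w * i_q,                 # d-axis current
                    -i_q - w * i_d + gamma * w,     # q-axis current
                    sigma * (i_q - w)]              # rotor speed

        sol = solve_ivp(pmsm, (0.0, 100.0), [0.1, 0.1, 0.1], max_step=0.01)
        print(sol.y[:, -1])     # the trajectory wanders on the chaotic attractor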
    Genetic algorithm with preference matrix for solving long-term carpooling problem
    GUO Yuhan, ZHANG Meiqi, ZHOU Nan
    2017, 37(2):  602-607.  DOI: 10.11772/j.issn.1001-9081.2017.02.0602
    A Preference Matrix based Genetic Algorithm (PMGA) was introduced for solving the Long-Term Car Pooling Problem (LTCPP), in which users who own vehicles and share the same destination are assigned to carpooling groups so as to minimize the total travel cost. First, the objective function calculating the cost of all users was set up, and a long-term carpooling model with user time window and car capacity constraints was designed. Then, based on the characteristics of the model and the classic Genetic Algorithm (GA), a preference matrix mechanism was integrated into the crossover and mutation operators to memorize and update the preference information among users, thus improving both the quantity and the quality of feasible solutions. The experimental results show that, in the same computing environment, the best of 20 solutions obtained by PMGA equals the value found by the exact algorithm when the number of users is below 200, and PMGA delivers remarkable solution quality on large instances. The proposed algorithm can significantly improve the solution quality of the long-term carpooling problem, and can play an important role in reducing vehicle emissions and traffic congestion.
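    A mutation-only skeleton of the preference-matrix idea is sketched below: user pairs that appear together in good solutions are reinforced in the matrix, and mutation moves a user toward its most-preferred partner's group. The cost function, the missing crossover operator and all constants are placeholders, not the paper's PMGA.

        import numpy as np

        def preference_guided_mutation(assignment, pref, rng):
            """Move one user into the group of the partner it prefers most."""
            u = int(rng.integers(len(assignment)))
            partner = int(np.argmax(pref[u]))
            child = assignment.copy()
            child[u] = assignment[partner]
            return child

        def update_preferences(pref, assignment, reward=1.0):
            """Reinforce user pairs that share a group in a good solution."""
            for u in range(len(assignment)):
                pref[u, assignment == assignment[u]] += reward
                pref[u, u] = 0.0
            return pref

        def pmga_sketch(cost_fn, n_users, n_groups, generations=200, pop_size=30, seed=0):
            """Tiny GA loop: cost_fn scores an assignment of users to groups
            (travel cost, time windows, capacity) and is supplied by the caller."""
            rng = np.random.default_rng(seed)
            pop = [rng.integers(n_groups, size=n_users) for _ in range(pop_size)]
            pref = np.zeros((n_users, n_users))
            for _ in range(generations):
                pref = update_preferences(pref, min(pop, key=cost_fn))
                children = [preference_guided_mutation(p, pref, rng) for p in pop]
                pop = sorted(pop + children, key=cost_fn)[:pop_size]
            return pop[0]

        # toy cost: penalise groups larger than 4 and prefer fewer groups
        toy_cost = lambda a: sum(max(0, c - 4) for c in np.bincount(a)) + len(np.unique(a))
        print(pmga_sketch(toy_cost, n_users=12, n_groups=6))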
    Multi-function radar emitter identification based on stochastic infinite automaton
    CAO Shuai, WANG Buhong, LI Longjun, LIU Shuaiqi
    2017, 37(2):  608-612.  DOI: 10.11772/j.issn.1001-9081.2017.02.0608
    To deal with the emitter identification problem in Multi-Function Radar (MFR) based on the Stochastic Context-Free Grammar (SCFG) model, an MFR emitter identification method based on Stochastic Infinite State Automata (SISA) was proposed on the basis of syntactic modeling. The grammar production rules of the "Mercury" MFR control module and the characteristic production rules of the "Mercury" MFR system were used to reconstruct an SCFG, which was then used to construct an SISA for identification. Theoretical analysis and simulation results show that the proposed method can realize MFR emitter identification; within a certain range, the average recognition rate can be improved by increasing the number of grammar production rules, and the identification performance is superior to that of the Stochastic Push-Down Automata (SPDA) constructed from the SCFG. The experimental results validate the reliability and effectiveness of the proposed method.