
Table of Contents

    10 December 2017, Volume 37 Issue 12
    Power control algorithm based on network utility maximization in Internet of vehicles
    ZUO Yuxing, GUO Aihuang, HUANG Bo, WANG Lu
    2017, 37(12):  3345-3350.  DOI: 10.11772/j.issn.1001-9081.2017.12.3345
    Channel congestion occurs when vehicular traffic density rises beyond a certain level in the Internet of Vehicles (IoV), even when the wireless channel carries only beacons. To solve this problem, a Distributed-Weighted Fair Power Control (D-WFPC) algorithm was proposed. Firstly, considering the actual channel characteristics in IoV, the Nakagami-m fading channel model was used to establish a random channel model. Then, taking the mobility of IoV nodes into account, a power control optimization problem was formulated based on the Network Utility Maximization (NUM) model, which kept the local channel load under a threshold to avoid congestion. Finally, a distributed algorithm was designed by solving the problem with dual decomposition and an iterative method; the transmit power of each vehicle was adjusted dynamically according to the beacons received from neighboring vehicles. In simulation experiments, compared with fixed transmit power schemes, the D-WFPC algorithm effectively reduced delay and packet loss ratio as traffic density increased, with maximum reductions of 24% and 44% respectively; compared with the Fair distributed Congestion Control with transmit Power (FCCP) algorithm, the D-WFPC algorithm performed better throughout, with maximum reductions in delay and packet loss ratio of 10% and 4% respectively. The simulation results show that the D-WFPC algorithm converges quickly and ensures that messages are transmitted with low delay and high reliability in IoV.
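    A minimal sketch of the dual-decomposition iteration described above, assuming a simplified linear channel-load model and logarithmic weighted utilities; the function name, utility form and all parameters are illustrative rather than the paper's exact formulation:

    ```python
    import numpy as np

    def d_wfpc_step(p, lam, g, w, load_max, alpha=0.01):
        """One iteration of a dual-decomposition power update (illustrative).

        p        -- current transmit powers of the N vehicles
        lam      -- dual variables (congestion prices), one per vehicle
        g        -- N x N channel-gain matrix (g[i][j]: gain from j to i)
        w        -- utility weights for weighted fairness
        load_max -- channel-load threshold kept under to avoid congestion
        """
        # Primal step: with U_i(p_i) = w_i * log(p_i), the unconstrained
        # maximizer is p_i = w_i / price_i; damp toward it for stability.
        price = g.T @ lam                      # aggregate price seen by each sender
        p_new = np.clip(0.5 * p + 0.5 * w / np.maximum(price, 1e-9), 0.0, 1.0)

        # Dual step: raise the price where the local channel load exceeds
        # the threshold, lower it otherwise (projected subgradient ascent).
        load = g @ p_new                       # simplified local channel load
        lam_new = np.maximum(lam + alpha * (load - load_max), 0.0)
        return p_new, lam_new
    ```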
    Load balancing mechanism for hierarchical controllers based on software defined network
    ZHU Shike, SHU Yong'an
    2017, 37(12):  3351-3355.  DOI: 10.11772/j.issn.1001-9081.2017.12.3351
    Aiming at the problems of large inter-controller communication overhead and low controller throughput during multi-controller load balancing in Software Defined Network (SDN), a hierarchical controller load balancing mechanism was proposed. Based on the hierarchical architecture, load balancing was completed through the collaboration of the super controller and the domain controllers, and a predefined load threshold was used to reduce the message exchange overhead between domain controllers and the super controller. At the same time, the most overloaded domain controller was effectively selected, several switches conforming to the migration criteria were chosen from those it controlled, and the selected switches were migrated to several domain controllers with high overall performance, which solved the problem of load imbalance among multiple controllers. The experimental results showed that, compared with the COoperative Load BAlancing Scheme for hierarchical SDN controllers (COLBAS) and the Dynamic and Adaptive algorithm for controller Load Balancing (DALB), the proposed mechanism reduced the number of system messages by about 79%, and its throughput was about 8.57% higher than that of DALB and 52.01% higher than that of COLBAS. The proposed mechanism can effectively reduce communication overhead and improve system throughput, achieving a better load balancing effect.
    Dynamic pilot allocation based on graph coloring in massive MIMO systems
    FAN Zifu, HU Min, LI Yuening
    2017, 37(12):  3356-3360.  DOI: 10.11772/j.issn.1001-9081.2017.12.3356
    Aiming at the pilot contamination problem in massive Multiple-Input Multiple-Output (MIMO) systems, a dynamic pilot allocation scheme based on graph coloring was proposed. To allocate pilots more reasonably and mitigate pilot contamination, an edge-weighted interference graph based on cooperation among cells was first constructed to describe the strength of pilot contamination among multi-cell users, in which two users in different cells were connected by a weighted edge. Then, extending traditional graph coloring theory, pilot resources were preferentially allocated to heavily contaminated users according to the weights of the edges connecting them. The theoretical analysis and simulation results show that, compared with the existing distributed pilot allocation scheme, the proposed scheme can reduce inter-cell interference and enhance the uplink achievable sum rate by considering pilot reuse across all cells and a centralized, graph-coloring-based pilot allocation mechanism.
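    The greedy coloring step can be sketched as follows, assuming the edge-weighted interference graph is given as a symmetric matrix; the ordering heuristic and function names are illustrative, not the paper's exact procedure:

    ```python
    import numpy as np

    def assign_pilots(weights, num_pilots):
        """Greedy pilot assignment on an edge-weighted interference graph.

        weights    -- symmetric matrix; weights[u][v] measures how strongly
                      users u and v would contaminate each other's pilot
        num_pilots -- number of orthogonal pilot sequences ("colors")
        Users with the heaviest total interference are colored first; each
        user takes the pilot minimizing the summed weight to users that
        already hold that pilot.
        """
        n = weights.shape[0]
        order = np.argsort(-weights.sum(axis=1))   # most-contaminated first
        pilot = [-1] * n
        for u in order:
            cost = np.zeros(num_pilots)
            for v in range(n):
                if pilot[v] >= 0:
                    cost[pilot[v]] += weights[u, v]
            pilot[u] = int(np.argmin(cost))
        return pilot
    ```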
    Capacity optimization of secondary user system in MIMO cognitive networks based on non-orthogonal multiple access
    LIAO Han, MA Dongya, YIN Lixin
    2017, 37(12):  3361-3367.  DOI: 10.11772/j.issn.1001-9081.2017.12.3361
    Concerning the demands for large capacity and high spectrum utilization in future mobile communication systems, a method for optimizing the capacity of the secondary user system in Multiple-Input Multiple-Output (MIMO) cognitive networks based on Non-Orthogonal Multiple Access (NOMA) was proposed. Firstly, the transmitted signals were precoded, and the cognitive users were clustered according to channel gains. Secondly, power allocation was performed for the clustered users. Finally, the Non-deterministic Polynomial-hard (NP-hard) multi-cluster objective function was transformed into solving the capacity of each sub-cluster. Meanwhile, taking into account the Quality of Service (QoS) of cognitive users and the requirements of Successive Interference Cancellation (SIC), the optimal power allocation coefficient, a constant between 0 and 1, was solved by using the Lagrange function and the Karush-Kuhn-Tucker (KKT) conditions. The simulation results show that the proposed method outperforms the average power allocation method, and that when the channel quality is poor, it improves the capacity of the secondary user system significantly compared with the MIMO cognitive network based on Orthogonal Multiple Access (OMA).
    Energy-efficiency scheme based on load balancing in dense small cell networks
    WEI Shihong, ZHANG Li, HUANG Xiaoge
    2017, 37(12):  3368-3373.  DOI: 10.11772/j.issn.1001-9081.2017.12.3368
    In order to solve the problems of high outage probability and unbalanced load in dense small cell networks, an energy-efficient scheme based on load balancing was proposed. The network energy efficiency was maximized through joint optimization of the load balancing and base station on/off control strategies under the constraints of guaranteed user outage probability and minimum rate. The optimization problem is a non-convex Non-deterministic Polynomial-hard (NP-hard) problem whose optimal solution is quite complex to obtain, so the original problem was decomposed into two subproblems. Firstly, the optimal load balancing strategy under a given base station on/off control strategy was obtained by the proposed load balancing scheme. Secondly, the optimal base station on/off control strategy was designed under the constraint of satisfying the minimum user rate. The experimental results show that, when the number of users is less than 180, the outage probability of the proposed scheme is zero while that of the traditional Maximum Signal to Interference plus Noise Ratio (Max-SINR) algorithm reaches 11%. The network energy efficiency of the proposed scheme is higher than those of the base station Randomly-off (Ran-off) and base station Not-off (No-off) algorithms. The proposed scheme can improve network energy efficiency while ensuring load balance.
    WSN routing algorithm based on optimal number of clusters and improved gravitational search
    LI Xinchun, GAO Baisheng
    2017, 37(12):  3374-3380.  DOI: 10.11772/j.issn.1001-9081.2017.12.3374
    In order to improve the energy efficiency of Wireless Sensor Network (WSN), a WSN routing algorithm based on Optimal Number of Clusters and Improved Gravitational Search (ONCIGS) was proposed. Firstly, the optimal number of clusters was calculated according to the idea of uneven clustering, and the improved AGglomerative NESting (AGNES) algorithm was adopted to cluster the network reasonably. Secondly, an opposition-based learning mechanism and an elite strategy were introduced into the gravitational search algorithm, and the gravitational force was adjusted adaptively based on population density to improve search precision and speed up convergence. Then, the standard deviation of the residual energy of cluster heads was taken as the objective function to search for an energy-balanced inter-cluster data forwarding path. The experimental results show that, compared with the Low Energy Adaptive Clustering Hierarchy (LEACH) and Distributed Energy Balanced Unequal Clustering (DEBUC) routing algorithms, ONCIGS prolongs the network life cycle by 41.94% and 5.77% respectively on a 100 m×100 m network, and by 76.60% and 7.82% respectively on a 200 m×200 m network. ONCIGS can effectively prolong network lifetime and improve energy efficiency.
    Adaptive compressed sensing algorithm based on observation matrix optimization
    HU Qiang, LIN Yun
    2017, 37(12):  3381-3385.  DOI: 10.11772/j.issn.1001-9081.2017.12.3381
    In order to improve the anti-noise performance of traditional Compressed Sensing (CS) recovery algorithms, an Adaptive Compressed Sensing (ACS) algorithm was proposed based on the ideas of observation matrix optimization and adaptive observation. The observation energy was allocated entirely to the support positions estimated by a traditional CS recovery algorithm; since the true support positions are contained in the estimated support set, this effectively improves the observed Signal-to-Noise Ratio (SNR). The optimal new observation vector was then derived from the perspective of observation matrix optimization, that is, its nonzero part was designed as an eigenvector of the Gram matrix. The simulation results show that, as the number of observations increases, the energy of the non-diagonal elements of the Gram matrix grows more slowly than in the traditional CS algorithm, and the normalized mean square error of reconstruction of the proposed algorithm is more than 10 dB and 5 dB lower than those of the traditional CS algorithm and a typical Bayesian method respectively, under the same number of observations, sparsity and SNR. The analysis shows that the proposed adaptive observation mechanism can effectively improve the energy efficiency and anti-noise performance of traditional CS recovery algorithms.
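    A sketch of the observation-vector design step, assuming the new row is supported on the estimated support set and taken as an eigenvector of the restricted Gram matrix; which eigenvector is used is our assumption (the smallest one flattens the Gram spectrum), not stated by the abstract:

    ```python
    import numpy as np

    def next_observation(Phi, support):
        """Append a new observation row concentrated on the estimated support.

        Phi     -- current M x N observation matrix
        support -- indices of the estimated support set
        The new row is zero outside the estimated support; its nonzero part
        is the eigenvector of the smallest eigenvalue of the Gram matrix of
        the restricted columns (illustrative design choice).
        """
        sub = Phi[:, support]                  # restrict to estimated support
        gram = sub.T @ sub
        _, eigvecs = np.linalg.eigh(gram)      # eigenvalues in ascending order
        v = eigvecs[:, 0]                      # minor eigenvector
        row = np.zeros(Phi.shape[1])
        row[support] = v / np.linalg.norm(v)
        return np.vstack([Phi, row])
    ```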
    Virtual machine placement algorithm for heterogeneous cloud environment based on resource demand distribution feature
    XUE Hongye, ZHU Tianlei, LUO Xiangyu, FENG Jian
    2017, 37(12):  3386-3390.  DOI: 10.11772/j.issn.1001-9081.2017.12.3386
    Focusing on the problem of Virtual Machine Placement (VMP) in a heterogeneous cloud environment, a Resource Demand Distribution Feature based Placement Algorithm (RDDFPA) for virtual machines was proposed. Firstly, a method of describing virtual machine requirements and physical machine configurations based on the scale factor of CPU and memory resources was established, and all the virtual machines were sorted by this scale factor. Secondly, by analyzing the proportional relationship between virtual machine requirements and physical machine configurations in terms of CPU and memory resources, the proportion demarcation points were determined and the partition of the virtual machine set was completed; the size of each virtual machine subset reflected the required proportion of physical machines with the matching configuration. Finally, using a heuristic algorithm such as First Fit, each virtual machine subset was placed on the subset of physical machines with the matching configuration. Theoretical analysis and simulation results show that, compared with using physical machines of any single configuration, the proposed algorithm reduces the total number of physical machines required by 2%-17%. RDDFPA can determine the number of physical machines of each configuration according to the distribution of virtual machine resource requirements and efficiently complete virtual machine placement, improving resource utilization and reducing system energy consumption.
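    The matching step can be illustrated with a First Fit sketch over VMs sorted by their CPU:memory scale factor; the demarcation-point partition is omitted and all names are illustrative:

    ```python
    def first_fit_placement(vms, hosts):
        """First Fit placement after sorting VMs by CPU:memory scale factor.

        vms   -- list of (cpu, mem) demands
        hosts -- list of [cpu, mem] remaining capacities (mutated in place)
        Returns a list mapping each VM index to a host index (or -1).
        A sketch of the matching step only, under the assumption that all
        memory demands are positive.
        """
        order = sorted(range(len(vms)), key=lambda i: vms[i][0] / vms[i][1])
        placement = [-1] * len(vms)
        for i in order:
            cpu, mem = vms[i]
            for h, cap in enumerate(hosts):
                if cap[0] >= cpu and cap[1] >= mem:   # first host that fits
                    cap[0] -= cpu
                    cap[1] -= mem
                    placement[i] = h
                    break
        return placement
    ```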
    Effective algorithm for granulation reduction of multi-granulation rough set
    HU Shanzhong, XU Yi, HE Minghui, WANG Ran
    2017, 37(12):  3391-3396.  DOI: 10.11772/j.issn.1001-9081.2017.12.3391
    Aiming at the low efficiency of existing granulation reduction algorithms for multi-granulation rough sets, an Effective Algorithm for Granulation Reduction of Multi-granulation Rough Set (EAGRMRS) was proposed. Firstly, taking the decision information system as the object, the lower approximation Boolean matrix of decision classes was defined; this matrix converts redundant and repeated set operations into Boolean operations during granulation reduction. Based on this matrix, an algorithm for computing the lower approximation of decision classes and an algorithm for computing the importance measure of granularity were presented. Then, focusing on the redundant computation involved in evaluating the importance measure of granularity, a fast algorithm for computing this measure under dynamically increasing granularity was presented, on the basis of which EAGRMRS was built. The time complexity of the proposed algorithm is O(|A|·|U|² + |A|²·|U|), where |A| is the size of the granulation set and |U| is the number of instances in the decision information system. The experimental results on UCI datasets show that the proposed algorithm is effective and efficient, and its efficiency advantage over the Heuristic Approach to Granular Structure Selection (HAGSS) for multi-granulation rough sets becomes more pronounced as the dataset grows.
    Parallel algorithm for triangle enumeration
    WANG Zhuo, SUO Bo, PAN Wei
    2017, 37(12):  3397-3400.  DOI: 10.11772/j.issn.1001-9081.2017.12.3397
    The classical Graph Twiddling (GT) algorithm is a MapReduce implementation of parallel triangle enumeration, but it can only enumerate the triangle structure of the whole graph and cannot directly enumerate the triangles of candidate vertices. To solve this problem, a parallel algorithm for directly enumerating the triangles of candidate vertices was proposed. Firstly, the set of all combinations of candidate vertices that could form triangles was derived by analyzing the distribution of candidate vertices. Then, by screening this set, the triangle structure of the candidate vertices was enumerated directly. Finally, the proposed algorithm was implemented on Spark for efficiency and generality. Contrast experiments were conducted on artificial and real datasets. The experimental results show that the running time of the proposed algorithm is only 1/3 of that of the GT algorithm, and its running time on Spark is only 1/7 of that on Hadoop. The proposed algorithm can generate the triangle dataset of any candidate vertex directly and efficiently.
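    A serial sketch of the screening idea, under the assumption that candidate triangles are found by closing wedges centered at candidate vertices; the paper's Spark parallelization is not reproduced here:

    ```python
    from itertools import combinations

    def candidate_triangles(adj, candidates):
        """Enumerate triangles containing at least one candidate vertex.

        adj        -- dict: vertex -> set of neighbors (undirected graph)
        candidates -- iterable of candidate vertices
        """
        seen = set()
        for v in candidates:
            for u, w in combinations(sorted(adj[v]), 2):
                if w in adj[u]:                 # u-w edge closes the triangle
                    seen.add(tuple(sorted((v, u, w))))
        return seen

    # Example: a square with one diagonal has two triangles through vertex 0.
    g = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    print(candidate_triangles(g, [0]))   # {(0, 1, 2), (0, 2, 3)}
    ```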
    Improved Spark Shuffle memory allocation algorithm
    HOU Weifan, FAN Wei, ZHANG Yuxiang
    2017, 37(12):  3401-3405.  DOI: 10.11772/j.issn.1001-9081.2017.12.3401
    Shuffle performance is an important indicator of cluster performance for big data frameworks. Spark's own Shuffle memory allocation algorithm tries to allocate memory evenly to every Task in the memory pool, but experiments revealed that memory was wasted and efficiency was low due to the imbalance of memory requirements across Tasks. In order to solve this problem, an improved Spark Shuffle memory allocation algorithm was proposed. According to the amount of memory requested and historical operating data, Tasks were divided into two categories by memory requirement. "Split" processing was carried out for Tasks with small memory requirements, while for Tasks with large memory requirements, memory was allocated based on the number of Task overflows and the waiting time after overflow. By taking full advantage of the free memory in the memory pool, Task memory allocation could be adjusted adaptively under the unbalanced Task memory requirements caused by data skew. The experimental results show that, compared with the original algorithm, the improved algorithm can reduce the overflow rate of Tasks, decrease their turnaround time, and improve the running performance of the cluster.
    Application of honey encryption mechanism in protection of private data
    YIN Wei, ZHOU Hongjian, XING Guoqiang
    2017, 37(12):  3406-3411.  DOI: 10.11772/j.issn.1001-9081.2017.12.3406
    In order to solve the vulnerability of traditional encryption mechanisms to brute-force attacks, the honey encryption mechanism was applied to three types of private data, namely identity card numbers, mobile phone numbers and bank card passwords, to protect data storage security. Firstly, the principle of honey encryption was analyzed and discussed, and a distribution-transforming encoder for the honey encryption system was designed. Then, the message space was abstracted, the system was implemented and its performance evaluated; a performance overhead problem was found and an enhanced mechanism was proposed. Message spaces with uniform and non-uniform distributions were taken into account, and both a symmetric encryption algorithm and a public key encryption mechanism were covered in the design and implementation. From the proposed design, system implementation and experimental results, the following conclusions are drawn: 1) due to the performance issue, honey encryption is more suitable for a small message space than a large one; 2) the design of the message space needs to be considered comprehensively and must not introduce fingerprint characteristics, otherwise the vulnerability to brute-force attacks remains; 3) the protection ability of honey encryption varies with the application; 4) the implementation of honey encryption has to be customized for different applications.
    Design of secure network coding scheme by double encryption based on chaotic sequences
    XU Guangxian, ZHAO Yue, GONG Zhongsheng
    2017, 37(12):  3412-3416.  DOI: 10.11772/j.issn.1001-9081.2017.12.3412
    Concerning the problems of existing network coding schemes against global wiretapping attacks, such as a large amount of computation, low bandwidth efficiency and low security, a secure network coding scheme with double encryption based on chaotic sequences was proposed. Firstly, one key was used to encrypt the last dimension of the transmission data, and the chaotic sequences were perturbed by the data itself during encryption. Then, another key and a random-number key were used to generate the coding coefficient matrix, with the chaotic sequences perturbed by an m-sequence. Finally, the obtained coding coefficient matrix was used to linearly combine the encrypted and unencrypted messages against global wiretapping attacks. Since the coding coefficient matrix was generated from the keys, the coding coefficients did not need to be transmitted in the channel; compared with the traditional Secure Practical Network Coding (SPOC) scheme, the proposed scheme saves the bandwidth overhead of transmitting coding coefficients in the network. The analysis and experimental results show that the proposed scheme improves the security of the network, as both ciphertext-only attacks and known-plaintext attacks can be resisted, and it also improves transmission efficiency with moderate algorithm complexity.
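    A toy illustration of encrypting with a chaotic sequence, assuming a logistic map as the chaos source; the paper's self-perturbation of the sequence and the m-sequence perturbation are omitted, and all constants are illustrative:

    ```python
    def logistic_keystream(x0, mu, n, skip=200):
        """Generate a byte keystream from the logistic map x <- mu*x*(1-x)."""
        x = x0
        for _ in range(skip):                  # discard the transient
            x = mu * x * (1.0 - x)
        out = bytearray()
        for _ in range(n):
            x = mu * x * (1.0 - x)
            out.append(int(x * 256) & 0xFF)    # quantize state to one byte
        return bytes(out)

    def xor_encrypt(data, x0=0.3141592, mu=3.99):
        ks = logistic_keystream(x0, mu, len(data))
        return bytes(b ^ k for b, k in zip(data, ks))

    msg = b"network coding payload"
    assert xor_encrypt(xor_encrypt(msg)) == msg   # XOR stream cipher is symmetric
    ```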
    Scheme of sharing cloud data audit supporting user traceability and lightweight
    JIN Yu, CAI Chao, HE Heng
    2017, 37(12):  3417-3422.  DOI: 10.11772/j.issn.1001-9081.2017.12.3417
    In cloud computing, data is usually shared by a group of users, and a third-party auditor can infer the identities of group members from their signatures on data blocks. To protect member identities, existing public audit schemes for shared data hide them; however, this anonymity means a group member can maliciously modify the shared data without being identified, and the computation required of resource-constrained devices to generate signatures is large. In order to solve these problems, a Scheme of sharing cloud Data Audit supporting user traceability and lightweight computation (ASDA) was proposed. Firstly, a security mediator was used to sign on behalf of users, protecting the identities of group members; the user's information was saved while signing, so which member modified a data block could be traced back through this information, ensuring the traceability of data block identity. Then, a new data block blinding technology was used to reduce the amount of client computation. The experimental results show that, compared with the Storing shared Data on the cloud Via Security-mediator (SDVS) scheme, the proposed scheme reduces user computing time and realizes the traceability of shared data blocks.
    Ciphertext policy attribute-based encryption scheme with weighted attribute revocation
    WANG Jingwei, YIN Xinchun
    2017, 37(12):  3423-3429.  DOI: 10.11772/j.issn.1001-9081.2017.12.3423
    Most existing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) schemes cannot support the multi-state representation of attributes, and the computational overhead of the encryption and decryption phases is huge. In order to solve these problems, a CP-ABE scheme with Weighted Attribute Revocation (CPABEWAR) was proposed. On the one hand, the expressive ability of attributes was improved by introducing the concept of weighted attributes. On the other hand, to reduce the computational cost, part of the calculation tasks were outsourced to the Cloud Service Provider (CSP) under the premise of ensuring data security. The analysis results show that the proposed CPABEWAR is proved secure against chosen plaintext attacks under the Decisional Bilinear Diffie-Hellman (DBDH) assumption. The proposed scheme simplifies the access tree structure at the cost of a small amount of storage space, improving system efficiency and the flexibility of access control, and is suitable for cloud users with limited computing power.
    Fully homomorphic encryption scheme without Gaussian noise
    LI Mingxiang, LIU Zhao, ZHANG Mingyan
    2017, 37(12):  3430-3434.  DOI: 10.11772/j.issn.1001-9081.2017.12.3430
    Recently, a leveled fully homomorphic encryption scheme was proposed based on the Learning With Rounding (LWR) problem. The LWR problem is a variant of the Learning With Errors (LWE) problem that dispenses with costly Gaussian noise sampling; thus, compared with existing LWE-based fully homomorphic encryption schemes, the LWR-based scheme is much more efficient. However, the homomorphic evaluator of that scheme needs to obtain the user's evaluation key. Accordingly, a new leveled fully homomorphic encryption scheme was constructed based on the LWR problem whose homomorphic evaluator does not need the user's evaluation key. Since the new scheme can be used to construct schemes such as identity-based and attribute-based fully homomorphic encryption, it has wider applications than the recently proposed LWR-based fully homomorphic encryption scheme.
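    For reference, the LWR rounding operation that replaces Gaussian noise can be sketched as follows (toy moduli and floor rounding; an illustration of the hardness assumption, not the paper's scheme):

    ```python
    import secrets

    def lwr_sample(s, q=2**12, p=2**8):
        """One LWR sample: round the inner product <a, s> from Z_q down to Z_p.

        Unlike LWE, the 'noise' comes deterministically from rounding, so no
        Gaussian sampling is needed.
        """
        n = len(s)
        a = [secrets.randbelow(q) for _ in range(n)]    # public random vector
        b = (p * (sum(ai * si for ai, si in zip(a, s)) % q)) // q  # floor(p/q*<a,s>)
        return a, b % p

    secret = [secrets.randbelow(2**12) for _ in range(16)]
    print(lwr_sample(secret))
    ```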
    User identification method across social networks based on weighted hypergraph
    XU Qian, CHEN Hongchang, WU Zheng, HUANG Ruiyang
    2017, 37(12):  3435-3441.  DOI: 10.11772/j.issn.1001-9081.2017.12.3435
    With the emergence of various social networks, more and more researchers analyze social media network data from the perspective of variety. The fusion of data from multiple social networks relies on user identification across those networks. Concerning the low utilization of heterogeneous relations between social networks in the traditional Friend Relationship-based User Identification (FRUI) algorithm, a new Weighted Hypergraph based User Identification (WHUI) algorithm across social networks was proposed. Firstly, a weighted hypergraph was accurately constructed on the friend relation networks to describe both the friend relations and the heterogeneous relations within the same network, which improved the accuracy of describing the topological environment of nodes. Then, on the basis of the constructed weighted hypergraph, the cross-network similarity between nodes was defined according to the consistency of the nodes' topological environments in different networks. Finally, the user pair with the highest cross-network similarity was matched in each round by an iterative matching algorithm, with two-way authentication and result pruning added to ensure recognition accuracy. Experiments were carried out on DBLP cooperation networks and real social networks. The experimental results show that, compared with the existing FRUI algorithm, the average precision, recall and F-measure of the proposed algorithm are improved by 5.5, 3.4 and 4.6 percentage points respectively on the real social networks. The WHUI algorithm can effectively improve the precision and recall of user identification in practical applications while utilizing only network topology information.
    Dynamic defense method of Web server based on Linux namespace
    CHEN Gang, GUO Yudong, WEI Xiaofeng
    2017, 37(12):  3442-3446.  DOI: 10.11772/j.issn.1001-9081.2017.12.3442
    Web servers are widely deployed on cloud computing platforms represented by Docker containers and face serious security challenges. In order to improve the security and defense capability of such Web servers, a dynamic defense method for Web servers based on Linux namespaces was proposed. Firstly, the running environment of the Web server was built with namespaces while ensuring that the Web service works normally. Then, dynamic transformation of the Web server was realized by alternately running multiple environments, confusing intruders and increasing the difficulty of attacking the server. Finally, the running environment of the Web server was periodically deleted and rebuilt to eliminate the impact of intrusions, effectively improving the dynamic defense capability of the Web server. The experimental results show that the proposed method can effectively enhance the security of the Web server with little effect on system performance: the response time for requesting 100 KB of data is 0.02-0.07 ms.
    Scale invariant feature transform-based fast image copy detection
    ZHENG Lijun, LI Xinwei, BU Xuhui
    2017, 37(12):  3447-3451.  DOI: 10.11772/j.issn.1001-9081.2017.12.3447
    Focusing on the low feature extraction speed and low matching efficiency of traditional image copy detection algorithms based on the Scale Invariant Feature Transform (SIFT) feature, a fast image copy detection algorithm based on the location and orientation distributions of SIFT feature points was proposed. Firstly, the two-dimensional location information of the SIFT feature points was extracted, and the number of feature points in each interval was counted block by block from the distance and angle between each feature point and the image center; a binary hash sequence was generated from these quantitative relationships to construct the first-level robust feature. Then, the numbers of feature points in the sub-intervals of all directions were counted according to the one-dimensional orientation distribution of the feature points, and the second-level image feature was constructed from the quantitative relationships. Finally, a cascade filter framework was used in copy detection to judge whether an image is a copy. The simulation results show that, compared with the traditional copy detection algorithm that builds a hash sequence from the 128-dimensional SIFT descriptor, the feature extraction time of the proposed algorithm is shortened to 1/20 of the original, and the matching time is reduced by more than half. The proposed algorithm therefore meets the requirements of online copy detection.
    Tampering detection algorithm based on noise consistency for digital voice heterologous splicing
    YANG Fan, YAN Diqun, XU Hongwei, WANG Rangding, JIN Chao, XIANG Li
    2017, 37(12):  3452-3457.  DOI: 10.11772/j.issn.1001-9081.2017.12.3452
    Heterologous splicing is a typical tampering operation on digital voice: audio editing software is used to splice together voice clips recorded in different scenes, so as to change the semantics of the voice. Considering the difference in background noise across scenes, a tampering detection algorithm for heterologous splicing of digital voice based on noise consistency was proposed. Firstly, the Time-Recursive Averaging (TRA) algorithm was applied to extract the background noise contained in the voice to be detected. Then, a Change-Point Detection (CPD) algorithm was used to detect abrupt changes in the noise variance, which determined whether the voice had been tampered with and located the tampering position. The experimental results show that the proposed algorithm performs well in detecting the tampering positions of heterologous splicing in digital voice.
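    A simplified stand-in for the variance change-point test, assuming the TRA noise estimate is already available; the window length and detection ratio are illustrative, not the paper's CPD algorithm:

    ```python
    import numpy as np

    def variance_change_points(noise, win=4000, ratio=2.0):
        """Flag positions where the noise variance jumps between windows.

        noise -- background-noise signal extracted from the test voice
        win   -- window length in samples
        ratio -- variance ratio between adjacent windows treated as abrupt
        """
        vars_ = [np.var(noise[i:i + win])
                 for i in range(0, len(noise) - win + 1, win)]
        points = []
        for k in range(1, len(vars_)):
            lo, hi = sorted((vars_[k - 1], vars_[k]))
            if lo > 0 and hi / lo > ratio:
                points.append(k * win)     # sample index of the suspected splice
        return points
    ```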
    Three-dimensional spatial structured encoding deep network for RGB-D scene parsing
    WANG Zeyu, WU Yanxia, ZHANG Guoyin, BU Shuhui
    2017, 37(12):  3458-3466.  DOI: 10.11772/j.issn.1001-9081.2017.12.3458
    Efficient feature extraction from RGB-D images and accurate learning of 3D spatial structure are two keys to improving the performance of RGB-D scene parsing. Fully Convolutional Neural Networks (FCNNs) have a powerful feature extraction ability, but they cannot learn 3D spatial structure information sufficiently. In order to solve this problem, a new neural network architecture called Three-dimensional Spatial Structured Encoding Deep Network (3D-SSEDN) was proposed. A graphical model network and a spatial structured encoding algorithm were organically combined through an embedded structural learning layer, so that the 3D spatial distribution of objects could be precisely learned and described. Through the proposed 3D-SSEDN, not only could the Hierarchical Visual Feature (HVF) and Hierarchical Depth Feature (HDF) containing hierarchical shape and depth information be extracted, but also a spatial structure feature containing 3D structural information could be generated; by fusing these three kinds of features, a hybrid feature was obtained that accurately expresses the semantic information of RGB-D images. The experimental results on the standard NYUDv2 and SUNRGBD RGB-D datasets show that, compared with previous state-of-the-art scene parsing methods, the proposed 3D-SSEDN can significantly improve the performance of RGB-D scene parsing.
    Feature selection method of high-dimensional data based on random matrix theory
    WANG Yan, YANG Jun, SUN Lingfeng, LI Yunuo, SONG Baoyan
    2017, 37(12):  3467-3471.  DOI: 10.11772/j.issn.1001-9081.2017.12.3467
    Traditional feature selection methods usually remove redundant features by means of correlation measures, without considering the large amount of noise in a high-dimensional correlation matrix, which seriously affects the feature selection result. In order to solve this problem, a feature selection method based on Random Matrix Theory (RMT) was proposed. Firstly, the singular values of the correlation matrix that fall within the random matrix prediction were removed, thereby obtaining the denoised correlation matrix and the number of features to select. Then, singular value decomposition was performed on the denoised correlation matrix, and the correlation between each feature and each class was obtained from the decomposed matrices. Finally, feature selection was accomplished according to the feature-class correlations and the redundancy between features. In addition, a feature selection optimization method was proposed, which further optimizes the result by treating each feature in turn as a random variable and comparing its singular value vector with the original singular value vector. The classification experimental results show that the proposed method can effectively improve classification accuracy and reduce the training data scale.
    Collaborative filtering algorithm based on bounded matrix low rank approximation and nearest neighbor model
    WEN Zhankao, YI Xiushuang, TIAN Shenshen, LI Jie, WANG Xingwei
    2017, 37(12):  3472-3476.  DOI: 10.11772/j.issn.1001-9081.2017.12.3472
    To overcome the limitations and improve the accuracy of matrix factorization in Collaborative Filtering (CF) algorithms, a Collaborative Filtering algorithm based on Bounded Matrix low-rank Approximation (BMA) and the Nearest neighbor model (BMAN-CF) was proposed to improve the accuracy of item rating prediction. Firstly, the BMA matrix factorization algorithm was introduced to extract the implicit feature information of sub-matrices and improve the accuracy of neighborhood search. Then, the target users' ratings of target items were predicted by the traditional user-based and item-based collaborative filtering algorithms respectively, an equilibrium factor and a control factor were used to dynamically balance the two prediction results, and the target users' item ratings were obtained. Finally, the data was partitioned and the proposed algorithm was parallelized in a Hadoop environment using the characteristics of the MapReduce computing framework. The experimental results show that BMAN-CF has higher rating prediction accuracy than other matrix factorization algorithms, and the speedup experiment shows that the parallelized algorithm has good scalability.
    Improved clustering algorithm for multivariate time series with unequal length
    HUO Weigang, CHENG Zhen, CHENG Wenli
    2017, 37(12):  3477-3481.  DOI: 10.11772/j.issn.1001-9081.2017.12.3477
    Aiming at the slow speed of existing model-based Multivariate Time Series (MTS) clustering algorithms when dealing with MTS of unequal length, an improved clustering algorithm named MUltivariate Time Series Clustering Algorithm based on Lift Ratio (LR) Component Extraction (MUTSCA〈LRCE〉) was proposed. Firstly, an equal-frequency discretization method was used to symbolize the MTS. Then, the LR vector was calculated to express the temporal patterns between the dimensions of each MTS sample; each LR vector was sorted, a fixed number of distinct key components were extracted from both ends, and all the extracted key components were spliced into a model vector representing the MTS sample. In this way, the MTS sample set of unequal lengths was transformed into a set of model vectors of equal length. Finally, the k-means algorithm was used for cluster analysis of the generated model vector set. The experimental results on several public datasets show that, compared with the model-based MTS clustering algorithm MUTSCA〈LR〉, the proposed algorithm significantly improves the clustering speed for MTS datasets of unequal length while guaranteeing the clustering quality.
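    The component-extraction step can be sketched as follows, assuming the LR vectors are already computed; the trimming to a fixed number of components at both ends follows the abstract, and scikit-learn's k-means stands in for the final clustering:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def model_vectors(lr_vectors, m):
        """Turn variable-length LR vectors into equal-length model vectors.

        For each sample, sort its lift-ratio vector and keep the m smallest
        and m largest components; all samples then share dimension 2*m
        regardless of the original series length.
        """
        out = []
        for lr in lr_vectors:
            s = np.sort(np.asarray(lr))
            out.append(np.concatenate([s[:m], s[-m:]]))
        return np.vstack(out)

    # Toy usage: three samples of different lengths, clustered with k-means.
    samples = [np.random.rand(50), np.random.rand(80), np.random.rand(65)]
    X = model_vectors(samples, m=5)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    ```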
    Efficient clustering algorithm for fast recognition of density backbone
    QIU Baozhi, TANG Yamin
    2017, 37(12):  3482-3486.  DOI: 10.11772/j.issn.1001-9081.2017.12.3482
    In order to find the density backbone quickly and improve the accuracy of clustering results on high-dimensional data, a new algorithm for fast recognition of the high-density backbone, named Efficient CLUstering based on density Backbone (ECLUB), was put forward. Firstly, on the basis of the defined local density of objects, the high-density backbone was identified quickly according to the mutual consistency of k-nearest neighbors and the local density relations of neighboring points. Then, the unassigned low-density points were allocated according to neighborhood relations to obtain the final clustering. The experimental results on synthetic and real datasets show that the proposed algorithm is effective. The clustering results on the Olivetti Faces dataset show that the Adjusted Rand Index (ARI) and Normalized Mutual Information (NMI) of the proposed ECLUB algorithm are 0.8779 and 0.9622 respectively. Compared with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, the Clustering by Fast search and find of Density Peaks (CFDP) algorithm and the CLUstering based on Backbone (CLUB) algorithm, the proposed ECLUB algorithm is more efficient and achieves higher clustering accuracy on high-dimensional data.
    Exact SAT algorithm based on dynamic branching strategy of award and punishment
    LIU Yanli, XU Zhenxing, XIONG Dan
    2017, 37(12):  3487-3492.  DOI: 10.11772/j.issn.1001-9081.2017.12.3487
    The limited number and high similarity of learnt clauses lead to limited historical information and an imbalanced search tree. In order to solve these problems, a dynamic branching strategy of award and punishment was proposed. Firstly, the variables of every unit propagation were punished, with different penalty functions established according to whether the variable produced a conflict and to the conflict interval. Then, in the learning phase, the variables contributing to the conflict were found from the learnt clause, and their activities were increased nonlinearly. Finally, the variable with the maximum activity was chosen as the new branching variable. On the basis of the glucose3.0 algorithm, an improved dynamic award-and-punishment algorithm named Award and Punishment 7 (AP7) was implemented. The experimental results show that, compared with the glucose3.0 algorithm, the branch pruning rate of the AP7 algorithm is improved by 14.2%-29.3%, and by up to 51% on a few instances; the running time of AP7 is more than 7% shorter than that of glucose3.0. The proposed branching strategy efficiently reduces the size of the search tree, makes the search tree more balanced, and reduces computation time.
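    An illustrative activity update in the spirit of the described strategy, assuming dictionary-based activities; the penalty constants and the nonlinear award formula are invented for the sketch, not taken from the paper:

    ```python
    def update_activities(activity, propagated, conflicting, helpful, decay=0.95):
        """Award-and-punishment style activity update (illustrative).

        propagated  -- variables assigned by unit propagation this round
        conflicting -- subset of those involved in a conflict
        helpful     -- variables appearing in the learnt clause (awarded)
        Punishment is mild for conflict-free propagations and heavier for
        conflicting ones; awards grow nonlinearly so that branching favors
        variables that actually produce useful learnt clauses.
        """
        for v in propagated:
            penalty = 0.5 if v in conflicting else 0.1
            activity[v] -= penalty
        for v in helpful:
            activity[v] += 1.0 + 0.5 * activity[v] ** 2 / (1.0 + abs(activity[v]))
        for v in activity:                    # periodic decay keeps values bounded
            activity[v] *= decay
        return max(activity, key=activity.get)    # next branching variable
    ```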
    Improved grey wolf optimizer algorithm using dynamic weighting and probabilistic disturbance strategy
    CHEN Chuang, Ryad CHELLALI, XING Yin
    2017, 37(12):  3493-3497.  DOI: 10.11772/j.issn.1001-9081.2017.12.3493
    The basic Grey Wolf Optimizer (GWO) algorithm easily falls into local optima, which leads to low search precision. In order to solve this problem, an Improved GWO (IGWO) was proposed. On the one hand, the position vector updating equation was dynamically adjusted by introducing a weighting factor derived from the coefficient vectors of the GWO algorithm. On the other hand, a probabilistic disturbance strategy was adopted to increase the population diversity at the later stage of iteration, enhancing the algorithm's ability to jump out of local optima. Simulation experiments were carried out on multiple benchmark test functions. The experimental results show that, compared with the GWO algorithm, the Hybrid GWO (HGWO) algorithm, the Gravitational Search Algorithm (GSA) and the Differential Evolution (DE) algorithm, the proposed IGWO can effectively escape local convergence and has obvious advantages in search precision, stability and convergence speed.
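    A sketch of one IGWO-style position update under stated assumptions: the weighting factor is derived here from the magnitudes of the random coefficient vectors, and the disturbance is a small Gaussian perturbation; both are illustrative choices, not the paper's exact equations:

    ```python
    import numpy as np

    def igwo_update(wolves, alpha, beta, delta, a, p_disturb=0.1):
        """Position update of an IGWO-style iteration (illustrative sketch).

        wolves             -- population matrix, one row per wolf
        alpha, beta, delta -- the three best positions found so far
        a                  -- control parameter decreasing from 2 to 0
        """
        n, dim = wolves.shape
        new = np.empty_like(wolves)
        for i in range(n):
            cand, coeffs = [], []
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                cand.append(leader - A * np.abs(C * leader - wolves[i]))
                coeffs.append(np.abs(A).mean())
            w = np.array(coeffs) / (sum(coeffs) + 1e-12)  # dynamic weighting factor
            new[i] = w[0] * cand[0] + w[1] * cand[1] + w[2] * cand[2]
            if np.random.rand() < p_disturb:              # probabilistic disturbance
                new[i] += 0.01 * np.random.randn(dim)
        return new
    ```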
    Chinese short text classification method by combining semantic expansion and convolutional neural network
    LU Ling, YANG Wu, YANG Youjun, CHEN Menghan
    2017, 37(12):  3498-3503.  DOI: 10.11772/j.issn.1001-9081.2017.12.3498
    Chinese news titles usually consist of a single word to dozens of words, and problems such as few characters and sparse features make it difficult to improve the accuracy of news title classification. In order to solve these problems, a new text semantic expansion method based on word embeddings was proposed. Firstly, each news title was expanded into a triple consisting of title, subtitle and keywords: the subtitle was constructed by combining synonyms of the title words with part-of-speech filtering, and the keywords were extracted from the semantic composition of words within multi-scale sliding windows. Then, a Convolutional Neural Network (CNN) model was constructed to categorize the expanded text, with max pooling for feature filtering and random dropout to avoid overfitting. Finally, the text formed by splicing title and subtitle, together with the multi-keyword set, was fed into the model. Experiments were conducted on the news title classification dataset of the Natural Language Processing & Chinese Computing conference in 2017 (NLP&CC2017). The experimental results show that the combined model of expanding news titles to triples with CNN achieves a classification precision of 79.42% over 18 categories of news titles, 9.5% higher than the original CNN model without expansion, and the keyword expansion improves the convergence rate of the model. The proposed triple expansion method and the constructed CNN model are thus verified to be effective.
    Convolutional neural network based method for diagnosis of Alzheimer's disease
    LIN Weiming, GAO Qinquan, DU Min
    2017, 37(12):  3504-3508.  DOI: 10.11772/j.issn.1001-9081.2017.12.3504
    Alzheimer's Disease (AD) usually leads to atrophy of the hippocampus region. According to this characteristic, a Convolutional Neural Network (CNN) based method using the hippocampus region in brain Magnetic Resonance Imaging (MRI) was proposed for the diagnosis of AD. All the test data were obtained from the ADNI database, including 188 AD and 229 Normal Control (NC) subjects. Firstly, all the brain MRI scans were preprocessed by skull stripping and aligned to a template space. Secondly, a linear regression model was used to correct for age-related brain atrophy. Then, after preprocessing, multiple 2.5D images were extracted from the hippocampus region of the 3D brain image of each subject. Finally, a CNN was used to train on and recognize the extracted 2.5D images, and the recognition results of the same subject were combined for the joint diagnosis of AD. The experiments were carried out with multiple runs of ten-fold cross validation. The experimental results show that the average recognition accuracy of the proposed method reaches 88.02%; compared with a Stacked Auto-Encoder (SAE) method, the proposed method improves the diagnosis of AD when only MRI is used.
    Summary of facial expression recognition methods based on image
    XU Linlin, ZHANG Shumei, ZHAO Junli
    2017, 37(12):  3509-3516.  DOI: 10.11772/j.issn.1001-9081.2017.12.3509
    In recent years, facial expression recognition has received extensive attention in education, medicine, psychoanalysis and business. Aiming at the insufficiently systematic treatment and fuzzy concepts of facial expression recognition methods, the steps and methods of facial expression recognition were reviewed and discussed. Firstly, the commonly used facial expression databases were introduced and the development of facial expression recognition was reviewed. Then, two aspects of the field, facial expression coding and facial expression recognition, were introduced, and the four stages of facial expression recognition were summarized; for the two stages of feature extraction and expression classification, the classical algorithms, their basic principles, and comparisons of their advantages and disadvantages were summarized with emphasis. Finally, the existing problems and possible future development trends of facial expression recognition were pointed out.
    Image classification algorithm based on multi-scale feature fusion and Hessian sparse coding
    LIU Shengqing, SUN Jifeng, YU Jialin, SONG Zhiguo
    2017, 37(12):  3517-3522.  DOI: 10.11772/j.issn.1001-9081.2017.12.3517
    Traditional sparse coding image classification algorithms extract a single type of feature, ignore the spatial structure information of images, and cannot make full use of the topological structure information of features during feature coding. In order to solve these problems, an image classification algorithm based on multi-scale feature fusion and Hessian Sparse Coding (HSC) was proposed. Firstly, the image was divided into sub-regions with a multi-scale spatial pyramid. Secondly, the Histogram of Oriented Gradients (HOG) and Scale-Invariant Feature Transform (SIFT) features were effectively merged in each subspace layer. Then, to make full use of the feature topology information, a second-order Hessian energy function was introduced into the traditional sparse coding objective function as a regularization term. Finally, a Support Vector Machine (SVM) was used to classify the images. The experimental results on the Scene15 dataset show that the accuracy of HSC is 3-5 percentage points higher than that of Locality-constrained Linear Coding (LLC), and 1-3 percentage points higher than that of Support Discrimination Dictionary Learning (SDDL) and other comparison methods. Timing experiments on the Caltech101 dataset show that HSC takes about 40% less time than Multiple Kernel Learning Sparse Coding (MKLSC). The proposed HSC can effectively improve the accuracy of image classification, and its efficiency is also better than that of the comparison algorithms.
    Novel image segmentation algorithm based on Snake model
    HU Xuegang, QIU Xiulan
    2017, 37(12):  3523-3527.  DOI: 10.11772/j.issn.1001-9081.2017.12.3523
    Existing image segmentation algorithms based on the Snake model generally suffer from poor noise robustness, a limited application range, leakage at weak edges, and difficulty in converging the contour curve to small and deep concave boundaries. In order to solve these problems, a novel image segmentation algorithm based on the Snake model was proposed. Firstly, the Laplacian operator with isotropic smoothing was replaced by a newly chosen diffusion term. Secondly, a p-Laplacian functional was introduced into the smoothness energy term to strengthen the external force in the normal direction. Finally, an edge-preserving term was used to keep the external force field parallel to the edge direction, preventing leakage at weak edges and driving the contour curve to converge to small and deep concave boundaries. The experimental results show that the proposed model overcomes the drawbacks of existing Snake-based image segmentation algorithms, yields better segmentation results, obviously improves noise resistance and corner positioning accuracy, and consumes less time. The proposed model is suitable for segmenting noisy images, medical images, and natural images with many weak edges.
    Image segmentation algorithm based on fusion of group intelligent algorithm optimized OTSU-entropy and pulse coupled neural network
    CHENG Shuli, WANG Liejun, QIN Jiwei, DU Anyu
    2017, 37(12):  3528-3535.  DOI: 10.11772/j.issn.1001-9081.2017.12.3528
    Image segmentation under the maximum between-class variance (OTSU) criterion suffers from insufficient use of the original information and poor real-time performance, and the number of iterations of the Pulse Coupled Neural Network (PCNN) model is difficult to determine. In order to solve these problems, a new automatic image segmentation algorithm was proposed that fuses swarm-intelligence-optimized OTSU-entropy (OTSU-H) with PCNN. Firstly, the gray distribution information and related information of the image were fused to exploit the redundancy, competition and complementarity of the image effectively; two- and three-dimensional observation spaces were constructed, and a fast recursive algorithm for the OTSU-H criterion was proposed. Secondly, the objective function of the fast recursive algorithm was used as the fitness function of four swarm intelligence algorithms: the Cuckoo Search (CS) algorithm, the Firefly Algorithm (FA), Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA). Finally, the optimized OTSU-H was introduced into the PCNN model to determine the number of iterations automatically. The experimental results show that, compared with the original OTSU, the maximum entropy criterion, and image segmentation algorithms based on graph theory, pixel clustering and candidate region semantic segmentation, the proposed algorithm achieves better segmentation, reduces computational complexity, saves computer storage space, and has strong noise resistance. In addition, the proposed algorithm requires little time and no training, so it has a wide range of applications.
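    For reference, the underlying OTSU criterion that the OTSU-H recursion accelerates can be written directly (baseline exhaustive form, not the paper's fast recursion or 2D/3D variants):

    ```python
    import numpy as np

    def otsu_threshold(hist):
        """Classic OTSU: pick the threshold maximizing between-class variance.

        hist -- 256-bin gray-level histogram of the image
        """
        hist = hist.astype(float) / hist.sum()
        levels = np.arange(256)
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = hist[:t].sum(), hist[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (levels[:t] * hist[:t]).sum() / w0
            mu1 = (levels[t:] * hist[t:]).sum() / w1
            var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
            if var > best_var:
                best_t, best_var = t, var
        return best_t
    ```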
    Improved geodesic active contour image segmentation model based on Bessel filter
    LIU Guoqi, LI Chenjing
    2017, 37(12):  3536-3540.  DOI: 10.11772/j.issn.1001-9081.2017.12.3536
    The active contour model is widely used in image segmentation and object contour extraction, and the edge-based Geodesic Active Contour (GAC) model is widely used to extract objects with obvious edges, but its evolution requires many iterations and a long time. In order to solve these problems, the GAC model was improved with Bessel filter theory. Firstly, the image was smoothed by a Bessel filter to reduce noise. Secondly, a new edge stopping term was constructed from the edge detection function of the Bessel filter and incorporated into the GAC model. Finally, a Reaction Diffusion (RD) term was added to the constructed model to avoid re-initialization of the level set. The experimental results show that, compared with several edge-based models, the proposed model improves time efficiency while ensuring the accuracy of segmentation results, and is more suitable for practical applications.
    Image inpainting algorithm for partitioning feature subregions
    LI Mengxue, ZHAI Donghai, MENG Hongyue, CAO Daming
    2017, 37(12):  3541-3546.  DOI: 10.11772/j.issn.1001-9081.2017.12.3541
    In order to inpaint missing information in large damaged regions with rich texture and complex structure, an image inpainting algorithm that partitions feature subregions was proposed. Firstly, according to the different features contained in the image, a feature formula was used to extract the features, and the feature subregions were divided by their statistical eigenvalues to improve inpainting speed. Secondly, on the basis of the original Criminisi algorithm, the priority calculation was improved, and structural fractures were avoided by increasing the influence of the structure term. Then, the optimal sample patch set was determined by using the target patch and its optimal neighboring similar patches to constrain the selection of sample patches. Finally, the optimal sample patch was synthesized by weight assignment. The experimental results show that, compared with the original Criminisi algorithm, the Peak Signal-to-Noise Ratio (PSNR) of the proposed algorithm is improved by 2-3 dB, and compared with the patch priority weight computation algorithm based on sparse representation, its inpainting efficiency is obviously improved. The proposed algorithm is therefore not only suitable for inpainting small-scale damaged images, but also achieves better results on large damaged regions with rich texture and complex structure, and the restored images accord better with human visual perception.
    Image feature point matching algorithm based on center surround filter detection
    SUN Zengyou, DUAN Yushuai, LI Ya
    2017, 37(12):  3547-3553.  DOI: 10.11772/j.issn.1001-9081.2017.12.3547
    Aiming at the poor stability and accuracy of feature point detection in traditional image matching algorithms, a new image feature point matching algorithm based on Scale-invariant Center surround Filter Detection (SCFD) was proposed. Firstly, a multi-scale space was constructed, a center surround filter was used to detect the feature points of the image at different scales, and the Harris method and sub-pixel interpolation were applied to obtain stable feature points. Secondly, the Oriented FAST and Rotated BRIEF (ORB) algorithm was combined to determine the main direction of the feature points and to construct their descriptors. Finally, the Hamming distance was used to complete the matching, and the Least Median of Squares (LMedS) theorem and Maximum Likelihood (ML) estimation were used to eliminate wrong matches. The experimental results show that the matching precision of the proposed algorithm reaches 96.6%, about twice that of the ORB algorithm under scale changes, while its running time is 19.8% of that of Scale Invariant Feature Transform (SIFT) and 28.3% of that of Speeded-Up Robust Features (SURF). The proposed algorithm can effectively improve the stability and accuracy of feature point detection, and achieves good matching under changes of viewing angle, scale, rotation and brightness.
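    The Hamming-distance matching step can be sketched as follows for packed binary descriptors; the LMedS/ML outlier rejection that follows in the paper is omitted, and the distance threshold is illustrative:

    ```python
    import numpy as np

    def hamming_match(desc1, desc2, max_dist=40):
        """Brute-force Hamming matching of binary descriptors.

        desc1, desc2 -- uint8 arrays of packed binary descriptors
                        (e.g. 32 bytes per ORB-style descriptor)
        Each descriptor in desc1 is paired with its nearest descriptor in
        desc2; pairs above max_dist are rejected.
        """
        matches = []
        for i, d in enumerate(desc1):
            # popcount of XOR gives the Hamming distance to every candidate
            dists = np.unpackbits(np.bitwise_xor(desc2, d), axis=1).sum(axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= max_dist:
                matches.append((i, j, int(dists[j])))
        return matches
    ```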
    Medical image fusion algorithm based on linking synaptic computation network
    GAO Yuan, JIA Ziting, QIN Pinle, WANG Lifang
    2017, 37(12):  3554-3557.  DOI: 10.11772/j.issn.1001-9081.2017.12.3554
    Abstract ( )   PDF (871KB) ( )  
    References | Related Articles | Metrics
    Traditional fusion methods based on Pulse Coupled Neural Network (PCNN) have too many parameters, the parameters and the number of network iterations are difficult to set accurately, and the fusion effect is poor. To solve these problems, a new image fusion algorithm using the connection item (L item) of the Linking Synaptic Computing Network (LSCN) model was proposed. Firstly, the two images to be fused were input into the LSCN model respectively. Secondly, the L item was used to replace the ignition frequency of the traditional PCNN as the output. Then, the iteration was terminated by the multi-pass operation. Finally, the pixels of the fused image were obtained by comparing the values of the L items. The theoretical analysis and experimental results show that, compared with image fusion algorithms using the improved PCNN model and new models proposed on the basis of the PCNN model, the fused images generated by the proposed algorithm have better visual effects. In addition, compared with the LSCN fusion algorithm that uses the ignition frequency as the output, the proposed algorithm is superior in edge information evaluation factor, information entropy, standard deviation, spatial frequency and average gradient. The proposed algorithm is simple and convenient; it not only reduces the number of parameters to be determined and the computational complexity, but also avoids the difficulty of determining the number of iterations in the traditional model.
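    The final fusion rule can be sketched in a few lines, assuming the per-pixel L maps have already been produced by running the LSCN model to termination; computing L itself is not reproduced here, and L_a/L_b below are hypothetical stand-ins.
```python
import numpy as np

def fuse_by_L(img_a, img_b, L_a, L_b):
    """Pixel-wise fusion rule from the paper: keep the pixel whose LSCN
    connection term L is larger. L_a and L_b stand in for the per-pixel
    L values after the iterations terminate."""
    return np.where(L_a >= L_b, img_a, img_b)

# toy 2x2 example with hypothetical L maps
a = np.array([[10, 20], [30, 40]], dtype=float)
b = np.array([[15, 5], [35, 25]], dtype=float)
La = np.array([[0.9, 0.2], [0.4, 0.8]])
Lb = np.array([[0.3, 0.7], [0.6, 0.1]])
print(fuse_by_L(a, b, La, Lb))
```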
    Delaunay triangulation subdivision algorithm of spherical convex graph and its convergence analysis
    XIA Jun, LI Yinghua
    2017, 37(12):  3558-3562.  DOI: 10.11772/j.issn.1001-9081.2017.12.3558
    Abstract ( )   PDF (738KB) ( )  
    References | Related Articles | Metrics
    When computing surface Ricci Flow, non-convergence emerges due to undersized angles in triangular meshes. Concerning this problem, a Delaunay triangulation subdivision algorithm of spherical convex graphs that enhances the minimum angle was proposed. First of all, the Delaunay triangulation subdivision algorithm of the spherical convex graph was given. The proposed algorithm has two key operations: 1) if a Delaunay minor arc is "encroached upon", the midpoint of the minor arc is added to split it; 2) if there is a "skinny" spherical triangle, it is broken up by adding the center of the minor circle circumscribing it. Then, the convergence criteria of the proposed algorithm were explored in terms of the local feature size, and an upper-bound formula for the number of output vertices was given. The meshes output in the experiments show that the spherical triangles generated by the proposed algorithm contain no narrow angles, so they are suitable for computing Ricci Flow.
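    The two key operations reduce to unit-vector arithmetic; the sketch below, assuming points on an embedded unit sphere, shows the arc-midpoint split and the circumcenter insertion (the sign convention is our simplification, not the paper's exact construction).
```python
import numpy as np

def arc_midpoint(a, b):
    """Midpoint of the minor arc between unit vectors a and b: normalize
    the chord midpoint back onto the sphere. This is the point inserted
    when a Delaunay minor arc is encroached upon."""
    m = a + b
    return m / np.linalg.norm(m)

def circumcenter_on_sphere(a, b, c):
    """Center of the minor circle circumscribing spherical triangle abc:
    the unit normal of the plane through a, b, c is equidistant (in
    angle) from all three. It is inserted to break up a 'skinny'
    triangle; the sign is chosen toward the triangle's side."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    if np.dot(n, a + b + c) < 0:
        n = -n
    return n

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])
print(arc_midpoint(a, b), circumcenter_on_sphere(a, b, c))
```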
    Application of MRF's spatial correlation model in NMF-based linear unmixing
    YUAN Bo
    2017, 37(12):  3563-3568.  DOI: 10.11772/j.issn.1001-9081.2017.12.3563
    Abstract ( )   PDF (1038KB) ( )  
    References | Related Articles | Metrics
    Aiming at the initialization and "local minima" problems of Non-negative Matrix Factorization (NMF) in hyperspectral unmixing, a spatial correlation constrained NMF linear unmixing algorithm based on Markov Random Field (MRF), called MRF-NMF, was proposed. Firstly, the number of endmembers was estimated by the Hyperspectral Signal identification by minimum error (HySime) method, and the endmember matrix and abundance matrix were initialized by Vertex Component Analysis (VCA) and Fully Constrained Least Squares (FCLS). Secondly, an energy function depicting the spatial correlation distribution features of ground objects was established by using MRF. Finally, the MRF-based spatial correlation constraint function and the standard NMF objective function were used for unmixing in alternating iterations, yielding the endmember information and abundance decomposition results of the hyperspectral data. The theoretical analysis and experimental results on real data show that, on hyperspectral data of low spatial correlation, compared with the three reference algorithms Minimum Volume Constrained NMF (MVC-NMF), Piecewise Smoothness NMF with Sparseness Constraints (PSNMFSC) and NMF with Alternating Projected Subgradients (APS-NMF), the endmember decomposition precision of MRF-NMF increases by 7.82%, 12.4% and 10.1%, and its abundance decomposition precision increases by 8.34%, 12.6% and 9.87%. The proposed MRF-NMF can make up for NMF's deficiency in depicting spatial correlation features, and reduces the spatial energy distribution error of ground objects.
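    A rough sketch of the alternating scheme, assuming standard Lee-Seung multiplicative NMF updates with a simple neighbor-smoothing step as a surrogate for the MRF energy; the VCA/FCLS initialization, the true MRF energy function and the smoothing weight lam are all omitted or assumed here.
```python
import numpy as np

def mrf_nmf(X, k, iters=200, lam=0.1, rng=None):
    """Sketch of NMF unmixing with a spatial-smoothness surrogate.

    X: (bands, pixels) non-negative data; k: number of endmembers.
    W holds endmember signatures, H abundances. The MRF constraint is
    approximated by pulling each abundance toward the mean of its 1-D
    pixel neighbours (lam is our assumption)."""
    rng = rng or np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)       # Lee-Seung update
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        # MRF-style smoothing step on abundances (alternating constraint)
        Hn = (np.roll(H, 1, axis=1) + np.roll(H, -1, axis=1)) / 2
        H = (1 - lam) * H + lam * Hn
        H /= H.sum(axis=0, keepdims=True) + 1e-9    # sum-to-one abundances
    return W, H

X = np.random.default_rng(1).random((50, 100))
W, H = mrf_nmf(X, k=3)
print(W.shape, H.shape, float(np.abs(X - W @ H).mean()))
```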
    Evaluation model of mobile application crowdsourcing testers
    LIU Ying, ZHANG Tao, LI Kun, LI Nan
    2017, 37(12):  3569-3573.  DOI: 10.11772/j.issn.1001-9081.2017.12.3569
    Abstract ( )   PDF (937KB) ( )  
    References | Related Articles | Metrics
    Mobile application crowdsourcing testers are anonymous and non-contractual, which makes it difficult for task publishers to accurately evaluate the ability of crowdsourcing testers and the quality of test results. To solve these problems, a new Analytic Hierarchy Process (AHP) evaluation model for mobile application crowdsourcing testers was proposed. The ability of crowdsourcing testers was evaluated comprehensively and hierarchically by using multiple indexes, such as activity degree, test ability and integrity degree. The combination weight vector of each level of indexes was calculated by constructing judgment matrices and performing consistency tests. Then, the proposed model was improved by introducing a requirement list and a description list, which made testers and crowdsourcing tasks match better. The experimental results show that the proposed model can evaluate the ability of testers accurately, support the selection and recommendation of crowdsourcing testers based on the evaluation results, and improve the efficiency and quality of mobile application crowdsourcing testing.
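    The AHP weight computation and consistency test are standard; below is a self-contained sketch with an illustrative 3x3 judgment matrix over the three indexes (the matrix values are assumptions, not taken from the paper).
```python
import numpy as np

def ahp_weights(J, RI={1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}):
    """Weights and consistency check for an AHP judgment matrix J.

    The weight vector is the principal eigenvector of J; consistency is
    checked via CI = (lambda_max - n)/(n - 1) and CR = CI/RI."""
    n = J.shape[0]
    vals, vecs = np.linalg.eig(J)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    lam = vals[i].real
    CI = (lam - n) / (n - 1)
    CR = CI / RI[n] if RI[n] > 0 else 0.0
    return w, CR

# illustrative pairwise comparisons: activity, test ability, integrity
J = np.array([[1, 1/3, 1/2],
              [3,   1,   2],
              [2, 1/2,   1]], dtype=float)
w, CR = ahp_weights(J)
print("weights:", w.round(3), "CR:", round(CR, 3), "(consistent if CR < 0.1)")
```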
    Constructing method of PLC program model based on state transition
    CHANG Tianyou, WEI Qiang, GENG Yangyang
    2017, 37(12):  3574-3580.  DOI: 10.11772/j.issn.1001-9081.2017.12.3574
    Abstract ( )   PDF (1124KB) ( )  
    References | Related Articles | Metrics
    In NuSMV model checking, a Programmable Logic Controller (PLC) program has to be modeled manually, which is not only labor-intensive but also error-prone. To solve these problems, an automatic construction method of PLC program models based on state transition was proposed. Firstly, the features of the Structured Text (ST) language were analyzed and the ST program was parsed into an abstract syntax tree. Secondly, according to the abstract syntax tree, the control flow graph was generated from the different syntactic structures by control flow analysis, and then the program dependence graph was obtained by data flow analysis. Finally, the NuSMV input model was generated according to the program dependence graph. The experimental results show that the proposed method achieves automatic construction from an ST program to a NuSMV input model, and the constructed model not only retains the original characteristics of the ST program but also conforms to the input standard of the NuSMV model checking tool. Compared with the traditional manual model construction method, the proposed method improves the efficiency and accuracy of model generation.
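    The final code-generation step can be sketched as a small emitter producing NuSMV syntax; the ST parsing and dependence analysis are not reproduced, and the two boolean PLC variables below are hypothetical.
```python
def emit_nusmv(booleans, transitions):
    """Emit a minimal NuSMV module from boolean PLC variables and their
    init/next expressions (strings in NuSMV syntax). This illustrates
    only the last step of the pipeline described in the paper."""
    lines = ["MODULE main", "VAR"]
    lines += [f"  {v} : boolean;" for v in booleans]
    lines.append("ASSIGN")
    for v, (init, nxt) in transitions.items():
        lines.append(f"  init({v}) := {init};")
        lines.append(f"  next({v}) := {nxt};")
    return "\n".join(lines)

# hypothetical two-variable fragment: the motor follows the start button
print(emit_nusmv(
    ["start", "motor"],
    {"start": ("FALSE", "{TRUE, FALSE}"),   # nondeterministic input
     "motor": ("FALSE", "start")},
))
```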
    Control flow analysis method of PLC program
    ZHANG Ye, LU Yuliang
    2017, 37(12):  3581-3585.  DOI: 10.11772/j.issn.1001-9081.2017.12.3581
    Abstract ( )   PDF (723KB) ( )  
    References | Related Articles | Metrics
    Programmable Logic Controller (PLC) is one of the most important components of industrial control systems, controlling a variety of physical equipment and production processes. Faults in PLC control programs caused by malicious tampering by attackers or programming errors by internal personnel seriously threaten equipment safety and personal safety in industrial fields. To solve this problem, a control flow analysis method for PLC programs was proposed. Firstly, the lexical and syntactic structure of the source code was analyzed by using flex and bison. Then, an intermediate representation free of instruction side effects was generated and optimized by analyzing the Abstract Syntax Tree (AST). Finally, basic blocks were divided on the basis of the intermediate representation, and the control flow graph of the program was constructed by taking the basic block as the basic unit. The experimental results show that the proposed method can recover the control flow structure of PLC programs in statement list form, and provides a basis for program understanding and security analysis.
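    The basic-block division follows the classic leader rule: the first instruction, every jump target, and every instruction after a jump start a new block. A minimal sketch over a toy instruction list (the opcodes are hypothetical, not a real statement-list dialect):
```python
def basic_blocks(instrs):
    """Partition a linear instruction list into basic blocks.

    instrs: list of (op, target) tuples where op is 'jmp', 'br'
    (conditional branch) or any other opcode, and target is an
    instruction index or None."""
    leaders = {0}
    for i, (op, target) in enumerate(instrs):
        if op in ("jmp", "br"):
            if target is not None:
                leaders.add(target)          # jump target starts a block
            if i + 1 < len(instrs):
                leaders.add(i + 1)           # fall-through starts a block
    starts = sorted(leaders)
    return [list(range(s, e)) for s, e in zip(starts, starts[1:] + [len(instrs)])]

# toy program: 0:ld 1:br->4 2:add 3:jmp->5 4:sub 5:st
prog = [("ld", None), ("br", 4), ("add", None), ("jmp", 5), ("sub", None), ("st", None)]
print(basic_blocks(prog))  # [[0, 1], [2, 3], [4], [5]]
```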
    Learning-based performance monitoring and analysis for Spark in container environments
    PI Aidi, YU Jian, ZHOU Xiaobo
    2017, 37(12):  3586-3591.  DOI: 10.11772/j.issn.1001-9081.2017.12.3586
    Abstract ( )   PDF (985KB) ( )  
    References | Related Articles | Metrics
    The Spark computing framework has been adopted as a big data analysis framework by an increasing number of enterprises. However, since it is typically deployed in distributed and cloud environments, the complexity of the system increases, and it is considered difficult to monitor the performance of the Spark framework and to find the jobs that lead to performance degradation. To solve this problem, a real-time monitoring and analysis method for Spark performance in distributed container environments was proposed and implemented. Firstly, the resource consumption information of jobs at runtime was acquired and integrated through code instrumentation in Spark and monitoring of the Application Programming Interface (API) files of Docker containers. Then, a Gaussian Mixture Model (GMM) was trained on the job history information of Spark. Finally, the trained model was used to classify the resource consumption information of Spark jobs at runtime and find the jobs that led to performance degradation. The experimental results show that the proposed method can detect 90.2% of abnormal jobs while introducing only 4.7% degradation to the performance of Spark jobs. The proposed method can lighten the burden of error checking and help users find abnormal Spark jobs in a shorter time.
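    A minimal sketch of the GMM step using scikit-learn, assuming per-job resource vectors as features; the synthetic metrics, the number of mixture components and the 5th-percentile likelihood threshold are all our assumptions, not the paper's configuration.
```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Historical per-job resource samples: [cpu%, mem%, disk MB/s, net MB/s],
# synthetic stand-ins for the metrics collected from Spark and Docker.
rng = np.random.default_rng(0)
history = rng.normal([50, 40, 20, 10], [8, 6, 4, 2], size=(500, 4))

gmm = GaussianMixture(n_components=3, random_state=0).fit(history)
threshold = np.quantile(gmm.score_samples(history), 0.05)  # 5% tail

def is_abnormal(sample):
    """Flag a runtime job whose log-likelihood under the trained GMM
    falls below the historical 5th percentile."""
    return gmm.score_samples(np.atleast_2d(sample))[0] < threshold

print(is_abnormal([52, 41, 19, 11]))   # typical job -> False
print(is_abnormal([95, 90, 80, 60]))   # saturated job -> likely True
```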
    Reduction method of test suites based on mutation analysis
    WANG Shuyan, CHEN Pengyuan, SUN Jiaze
    2017, 37(12):  3592-3596.  DOI: 10.11772/j.issn.1001-9081.2017.12.3592
    Abstract ( )   PDF (825KB) ( )  
    References | Related Articles | Metrics
    In regression testing, the scale of test suites constantly expands and the cost of testing increases as test requirements change. To solve these problems, a Reduction method of Test suites based on the analysis of Mutation (RTM) was proposed. Firstly, the test suites were classified, and a transaction set matrix of mutants was created in binary form according to whether each designated mutant could be detected by a test suite. Then, the association relations between test suites were obtained by using an improved association mining algorithm. Finally, the test suites were effectively reduced according to these relations. Simulation results on six classical programs show that the test suite reduction rate of the proposed RTM reaches 37%, 6% higher than that of the traditional greedy algorithm and heuristic algorithm, while guaranteeing the test coverage rate; the coverage rate of a single test suite even increases by 11% on average. The proposed method can meet more test requirements with fewer test suites, effectively improving test efficiency and reducing test cost.
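    For orientation, here is the greedy baseline the paper compares against, operating on the same kind of binary mutant-kill matrix; RTM's association-mining step is not reproduced, and the matrix is a toy example.
```python
import numpy as np

def reduce_suite(kill):
    """Greedy reduction over a binary mutant-kill matrix.

    kill[i, j] = 1 if test i detects mutant j. Repeatedly keep the test
    that kills the most not-yet-killed mutants (greedy set cover)."""
    kill = kill.astype(bool).copy()
    kept, uncovered = [], kill.any(axis=0)
    while uncovered.any():
        gains = (kill & uncovered).sum(axis=1)
        best = int(gains.argmax())
        if gains[best] == 0:
            break
        kept.append(best)
        uncovered &= ~kill[best]
    return kept

kill = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 1],
                 [1, 0, 0, 1]])
print(reduce_suite(kill))  # [0, 2] covers all four mutants
```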
    Modeling of high-density crowd emergency evacuation based on floor-field particle swarm optimization algorithm
    WANG Chao, WANG Jian
    2017, 37(12):  3597-3601.  DOI: 10.11772/j.issn.1001-9081.2017.12.3597
    Abstract ( )   PDF (980KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of congestion management and emergency evacuation of high-density crowds under unconventional emergencies, a four-layer crowd Evacuation Cyber-Physical System (E-CPS) framework was proposed, containing a sensing layer, transport layer, calculation layer and application layer. In the calculation layer, a Floor-Field Particle Swarm Optimization (FF-PSO) crowd evacuation model was proposed by introducing static floor-field modeling rules into classical Particle Swarm Optimization (PSO). The FF-PSO evacuation model combines the simple rules and fast computation of the static floor field with the fast search and convergence of PSO. In addition, a new fitness function was designed and introduced into the proposed FF-PSO model to realize dynamic adjustment of the evacuation strategy. Numerical and instance simulations were carried out to verify the feasibility and effectiveness of the proposed FF-PSO model in congestion management. The instance simulation of the National Exhibition and Convention Center (Shanghai) shows that, on average, 66 more pedestrians per minute can be evacuated from the accident area by the proposed model with congestion management than by the model considering only the shortest distance; moreover, the evacuation time is reduced by 19 min and the evacuation efficiency is improved by 13.4%.
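    A toy sketch of the idea, assuming the fitness combines a static floor-field term (distance to the exit) with a congestion penalty; the weights, step cap and neighbourhood radius are our assumptions, not the paper's fitness function.
```python
import numpy as np

rng = np.random.default_rng(0)
exit_pos = np.array([0.0, 0.0])

def fitness(pos, crowd):
    """Illustrative FF-PSO fitness: static floor-field term (distance to
    the exit) plus a congestion penalty counting neighbours within 1 m."""
    field = np.linalg.norm(pos - exit_pos, axis=1)
    congestion = np.array([(np.linalg.norm(crowd - p, axis=1) < 1.0).sum() - 1
                           for p in pos])
    return field + 0.5 * congestion

# standard PSO velocity/position update over pedestrian positions
pos = rng.uniform(5, 20, size=(30, 2)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), fitness(pos, pos)
for _ in range(50):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random((2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + np.clip(vel, -1.5, 1.5)        # capped step ~ walking speed
    f = fitness(pos, pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
print("mean distance to exit:", float(np.linalg.norm(pos - exit_pos, axis=1).mean()))
```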
    Improved hybrid bat algorithm for vehicle routing problem of perishable fresh goods
    YIN Ya, ZHANG Huizhen
    2017, 37(12):  3602-3607.  DOI: 10.11772/j.issn.1001-9081.2017.12.3602
    Abstract ( )   PDF (944KB) ( )  
    References | Related Articles | Metrics
    The choice of distribution routes for vehicles carrying perishable fresh goods is affected not only by various factors such as the type of goods, changes in the refrigeration environment, vehicle capacity limits and delivery time, but also by targets such as minimum cost and maximum customer satisfaction. To solve this problem, a multi-objective model for the Vehicle Routing Problem (VRP) of perishable fresh goods was constructed and an improved hybrid bat algorithm was proposed to solve it. Firstly, a time-window fuzzy processing method was used to define the customer satisfaction function, and the multi-objective model of optimal path selection was established by subdividing the types of perishable fresh goods and defining the refrigeration cost. Since the bat algorithm easily falls into local optima and converges prematurely on discrete problems, the speed update formula of the classical bat algorithm was then simplified, and a single-point or multi-point mutation selection mechanism was set up for the hybrid bat algorithm to improve its performance. Finally, the performance of the proposed algorithm was tested. Compared with the basic bat algorithm and several existing hybrid bat algorithms, the proposed improved hybrid bat algorithm improves customer satisfaction by 1.6%-4.2% in solving VRP and reduces the average total cost by 0.68%-2.91%. The proposed algorithm has better computational performance, higher computational efficiency and better stability.
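    A compact sketch of the discrete bat search with single-/multi-point swap mutation, under strong simplifying assumptions: a single-objective tour length stands in for the paper's multi-objective cost, and the loudness/pulse parameters are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.uniform(1, 10, size=(8, 8)); D = (D + D.T) / 2  # symmetric distances

def route_cost(perm):
    """Toy route cost: tour length over a distance matrix (stands in
    for the paper's cost with refrigeration and satisfaction terms)."""
    return D[perm, np.roll(perm, -1)].sum()

def mutate(perm, points):
    """Single- or multi-point swap mutation on a route permutation."""
    p = perm.copy()
    for _ in range(points):
        i, j = rng.integers(0, len(p), 2)
        p[i], p[j] = p[j], p[i]
    return p

best = rng.permutation(8); best_cost = route_cost(best)
loudness, pulse = 0.9, 0.5
for t in range(300):
    points = 1 if rng.random() < pulse else 2       # single vs multi-point
    cand = mutate(best, points)
    if route_cost(cand) < best_cost and rng.random() < loudness:
        best, best_cost = cand, route_cost(cand)
        loudness *= 0.97                            # bat-style loudness decay
print(best, round(float(best_cost), 2))
```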
    Path planning of robot based on glowworm swarm optimization algorithm of scene understanding
    LUO Tianhong, LIANG Shuang, HE Zeyin, ZHANG Xia
    2017, 37(12):  3608-3613.  DOI: 10.11772/j.issn.1001-9081.2017.12.3608
    Abstract ( )   PDF (946KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of oscillation and poor adaptability of the robot motion state in path planning for traditional unstructured environments, a new path planning strategy based on a Glowworm Swarm Optimization algorithm with Scene understanding (SGSO) was proposed. Initialization was realized based on the regularity, randomness and generalization of chaotic systems, and the golden section method was used for subsequent optimization, which improved the diversity of the population and suppressed premature and local convergence of the algorithm. Moreover, by introducing the glowworm's "natural enemy" for scene understanding, the selection mechanism of the glowworm swarm was optimized to resolve the grounding phenomenon of glowworms during tracing in unstructured environments, enhancing the adaptability and robustness of the algorithm. Simulation results on four test functions show that the proposed algorithm is superior to the basic Glowworm Swarm Optimization (GSO) algorithm in solution precision and convergence efficiency. Applied to the path planning of mobile robots in unstructured environments, the path planned by SGSO is shorter and its corners are smoother, which effectively avoids the additional load on the power system caused by large-angle steering of the robot, verifying the feasibility and effectiveness of the proposed algorithm.
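    For reference, the standard GSO core that SGSO builds on looks as follows (luciferin update, then movement toward a brighter neighbour); the chaotic initialization, golden-section refinement and "natural enemy" mechanism of the paper are not reproduced, and all parameters are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

def J(x):
    """Test objective (sphere function, negated so brighter = better);
    the paper uses four benchmark functions and a path-planning cost."""
    return -np.sum(x**2, axis=-1)

n, dim = 25, 2
x = rng.uniform(-5, 5, size=(n, dim))
luciferin = np.full(n, 5.0)
rho, gamma, step, radius = 0.4, 0.6, 0.05, 2.0

for _ in range(100):
    luciferin = (1 - rho) * luciferin + gamma * J(x)   # standard GSO update
    for i in range(n):
        d = np.linalg.norm(x - x[i], axis=1)
        nbrs = np.where((d < radius) & (luciferin > luciferin[i]))[0]
        if len(nbrs):                                  # move toward a brighter neighbour
            j = rng.choice(nbrs)
            x[i] += step * (x[j] - x[i]) / (np.linalg.norm(x[j] - x[i]) + 1e-12)
print("best point:", x[luciferin.argmax()].round(3))
```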
    Multi-robot odor source localization based on brain storm optimization algorithm
    LIANG Zhigang, GU Junhua, DONG Yongfeng
    2017, 37(12):  3614-3619.  DOI: 10.11772/j.issn.1001-9081.2017.12.3614
    Abstract ( )   PDF (1048KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of multi-robot odor source localization in indoor turbulent environments, such as the low utilization of historical concentration information and the lack of a mechanism to balance global and local search, a multi-robot cooperative search algorithm combining the Brain Storm Optimization (BSO) algorithm and upwind search was proposed. Firstly, each location searched by a robot was initialized as an individual, and clustering was performed with the robot positions as centers, effectively exploiting the guiding role of historical information. Secondly, upwind search was defined as an individual mutation operation to dynamically adjust the number of new individuals generated by fusing selected individuals within one class or between two classes, which effectively balanced global and local search. Finally, the odor source was confirmed according to the two indexes of concentration and persistence. In simulation experiments in two environments, with and without obstacles, the proposed algorithm was compared with three swarm intelligence multi-robot odor source localization algorithms. The experimental results show that the average search time of the proposed algorithm is reduced by more than 33% and its localization accuracy is 100%. The proposed algorithm can effectively balance the global and local search of the robots, and locate the odor source quickly and accurately.
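    A rough BSO-flavoured sketch under heavy assumptions: searched positions act as ideas clustered with k-means, new candidates are fused within one or between two clusters, and the paper's upwind mutation is replaced by a simple greedy concentration check over a synthetic odor field.
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def concentration(p):
    """Stand-in odor concentration field with a source at (8, 3);
    a real run would read robot sensors instead."""
    return np.exp(-np.sum((p - np.array([8.0, 3.0]))**2, axis=-1) / 4)

ideas = rng.uniform(0, 10, size=(20, 2))    # searched positions as 'ideas'
for _ in range(40):
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ideas)
    new = []
    for i in range(len(ideas)):
        if rng.random() < 0.8:   # new individual from one cluster center
            c = km.cluster_centers_[rng.integers(0, 3)]
            cand = c + rng.normal(0, 0.5, 2)
        else:                    # fuse two cluster centers
            a, b = km.cluster_centers_[rng.choice(3, 2, replace=False)]
            cand = 0.5 * (a + b) + rng.normal(0, 0.5, 2)
        # greedy acceptance replaces the paper's upwind mutation
        new.append(cand if concentration(cand) > concentration(ideas[i]) else ideas[i])
    ideas = np.array(new)
print("estimated source:", ideas[concentration(ideas).argmax()].round(2))
```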
    Multi-robot dynamic task allocation algorithm based on Pareto improvement
    JIANG Dong, XU Xin
    2017, 37(12):  3620-3624.  DOI: 10.11772/j.issn.1001-9081.2017.12.3620
    Abstract ( )   PDF (813KB) ( )  
    References | Related Articles | Metrics
    To solve the optimization problem of dynamic task allocation in multi-robot systems, a secondary task allocation algorithm based on Pareto improvement was proposed on top of the initial contract-net task allocation. When fire-fighting tasks are performed by a multi-robot system in parallel, firstly, the robots were divided into several subgroups through the initial task allocation. Then, each fire-fighting task was undertaken by one subgroup; while a subgroup performed its task, the robots to be migrated were determined by Pareto improvement between the subgroup and its nearest subgroup, achieving Pareto optimality between the two subgroups. Finally, global Pareto optimality was achieved by applying Pareto improvement over all subgroups with a post-order binary tree traversal. The theoretical analysis and simulation results show that, compared with the reinforcement learning algorithm and the ant colony algorithm, the fire-fighting time of the proposed algorithm is reduced by 26.18% and 37.04% respectively. Compared with the traditional contract net method, the proposed algorithm performs fire-fighting tasks efficiently in time, and also has an obvious advantage in system revenue.
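    The pairwise migration step can be sketched with a toy subgroup utility: move one robot between two neighbouring subgroups only if neither utility drops and at least one rises (the utility function below is a stand-in, not the paper's revenue model).
```python
def group_utility(size, demand):
    """Toy utility of a fire-fighting subgroup: penalise both under-
    and over-staffing relative to the task demand."""
    return -abs(size - demand)

def pareto_migrate(size_a, demand_a, size_b, demand_b):
    """Move one robot between two subgroups if doing so is a Pareto
    improvement: neither utility drops and at least one rises."""
    ua0 = group_utility(size_a, demand_a)
    ub0 = group_utility(size_b, demand_b)
    for da, db in ((-1, 1), (1, -1)):        # try both directions
        ua = group_utility(size_a + da, demand_a)
        ub = group_utility(size_b + db, demand_b)
        if ua >= ua0 and ub >= ub0 and (ua > ua0 or ub > ub0):
            return size_a + da, size_b + db
    return size_a, size_b

# subgroup A has 6 robots for a 4-robot task, B has 2 for a 4-robot task
print(pareto_migrate(6, 4, 2, 4))   # -> (5, 3): both utilities improve
```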
    Envelope extraction algorithm for acoustic signal of vehicle pressing line based on variable step size
    LAN Zhangli, HUANG Fen
    2017, 37(12):  3625-3630.  DOI: 10.11772/j.issn.1001-9081.2017.12.3625
    Abstract ( )   PDF (812KB) ( )  
    References | Related Articles | Metrics
    The acoustic waveform of a vehicle passing over a deceleration zone differs from that of a vehicle running normally on the road, and the extraction of its feature parameters is crucial for automatically judging the number, speed and type of vehicles; compared with the original signal, the envelope curve of the acoustic signal has many advantages for extracting these feature parameters. However, for acoustic signals in this traffic domain, the traditional envelope extraction algorithm produces many burrs, and the extracted feature parameters hardly reflect the true properties and features of the signal. To solve these problems, combined with the characteristics of the acoustic signals of vehicles passing over the deceleration zone, a new envelope extraction algorithm for the acoustic signal of vehicle line-pressing based on variable step size was proposed. Different step sizes were set to traverse the signal; the curve was plotted by using the maximum point within each step and compared with the original signal waveform, and the sharpness of the curve and the error of feature point extraction were taken as judgment criteria to realize effective extraction of the acoustic signal envelope. The experimental results show that, for the same number of sampling points, the envelope curve extracted by the proposed algorithm is clearer and has fewer burrs than that of the traditional envelope extraction algorithm, and the extraction error of the feature parameters is smaller.
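    The max-per-window rule is easy to sketch; below, a synthetic burst stands in for a real line-pressing recording, and the candidate step sizes are illustrative (the paper selects the step by comparing sharpness and feature error across candidates).
```python
import numpy as np

def envelope(signal, step):
    """Upper envelope by the paper's rule: traverse the signal in
    windows of `step` samples and keep the maximum of each window.
    Larger steps give smoother curves with fewer burrs."""
    n = len(signal) // step * step            # drop the ragged tail
    return signal[:n].reshape(-1, step).max(axis=1)

t = np.linspace(0, 1, 4000)
sig = np.abs(np.sin(2 * np.pi * 3 * t)) * np.exp(-2 * t)  # toy pressing-line burst
for step in (20, 50, 100):
    env = envelope(sig, step)
    print(step, len(env), float(env.max().round(3)))
```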