Table of Contents

    10 August 2017, Volume 37 Issue 8
    Link prediction method for complex network based on closeness between nodes
    DING Dazhao, CHEN Yunjie, JIN Yanqing, LIU Shuxin
    2017, 37(8):  2129-2132.  DOI: 10.11772/j.issn.1001-9081.2017.08.2129
    Abstract | PDF (734KB)
    Many link prediction methods focus only on the standard metric AUC (Area Under the receiver operating characteristic Curve), ignoring the precision metric and the closeness between common neighbors and endpoints under different topological structures. To solve these problems, a link prediction method based on closeness between nodes was proposed. To describe the similarity between endpoints more accurately, the closeness of common neighbors was designed by considering the local topological information around common neighbors, and it was adjusted for different networks through a parameter. Empirical studies on six real networks show that, compared with similarity indexes such as Common Neighbor (CN), Resource Allocation (RA), Adamic-Adar (AA), Local Path (LP) and Katz, the proposed index can improve the prediction accuracy.
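    The closeness-weighted common-neighbor idea can be sketched in a few lines. The inverse-degree weight with a tunable exponent alpha below is an illustrative stand-in for the paper's closeness measure, which the abstract does not fully specify (alpha = 1 reduces to the Resource Allocation index, alpha = 0 to plain Common Neighbors):

```python
def closeness_weighted_cn(adj, x, y, alpha=1.0):
    """Score a candidate link (x, y) by summing a closeness weight over
    the common neighbors of x and y. adj maps node -> set of neighbors."""
    score = 0.0
    for z in adj[x] & adj[y]:          # common neighbors of x and y
        score += 1.0 / (len(adj[z]) ** alpha)
    return score

# toy undirected graph
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

score = closeness_weighted_cn(adj, 0, 3)   # common neighbors: 1 and 2
```

Tuning alpha per network plays the role of the adjustable parameter mentioned in the abstract.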
    Energy-efficient micro base station deployment method in heterogeneous network with quality of service constraints
    ZHANG Yangyang, TANG Hongbo, YOU Wei, WANG Xiaolei, ZHAO Yu
    2017, 37(8):  2133-2138.  DOI: 10.11772/j.issn.1001-9081.2017.08.2133
    Abstract | PDF (967KB)
    Aiming at the problem of high energy consumption caused by the increase of base station density in dense heterogeneous networks, an energy-efficient method for micro base station deployment in heterogeneous networks was proposed. Firstly, the feasibility of micro base station positions was considered to mitigate the effects of environmental conditions. Then the optimization target value was weighed under different user distribution probabilities to enhance adaptability to different user distribution scenarios. Finally, an energy-efficient deployment algorithm for micro base stations was proposed by jointly optimizing the number, deployment positions and power configuration of the micro base stations. Simulation results show that the proposed method improves energy efficiency by up to 26% compared with the scheme that only optimizes the number and locations of micro base stations. The experimental results demonstrate that the joint optimization method can improve the energy efficiency of the system compared with a deployment method that ignores the power factor, and verify the influence of micro base station power on the energy efficiency of heterogeneous networks.
    Data scheduling algorithm based on software defined network for vehicular Ad Hoc network
    WU Yi, MA Liangyi, WEI Yunfeng, XU Zhexin
    2017, 37(8):  2139-2144.  DOI: 10.11772/j.issn.1001-9081.2017.08.2139
    Abstract | PDF (1150KB)
    Focusing on the issue that the Road Side Unit (RSU) responds inefficiently to the requests of vehicles in Vehicular Ad Hoc Network (VANET), a data scheduling algorithm based on Software Defined Network (SDN) architecture, namely SDDS, was proposed. Firstly, a graph of conflicting policies was generated based on the status information of vehicles, and a maximum weighted independent set of the graph was solved to maximize the number of satisfied requests in the current cycle. Secondly, the redundancy of data in vehicles was analyzed to figure out the optimum parameter, and a selection mechanism for collaborative vehicles was designed based on geographical position. Finally, the characteristics of handover vehicles and the factors that affect multi-RSU cooperation were analyzed, and a multi-RSU cooperation mechanism based on collision avoidance was put forward. In addition, a new evaluation indicator, service efficiency, was proposed to estimate the overall quality of service. Simulation results showed that compared with Most Requests First (MRF) and Cooperative Data Dissemination (CDD) algorithms, the service efficiency of the SDDS algorithm was increased by up to 15% and 20% respectively. The simulation results prove that the SDDS algorithm can observably improve the service efficiency and quality of the scheduling system.
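    The first step, maximizing the number of satisfied requests via a maximum weighted independent set of the conflict graph, can be approximated greedily. The abstract does not state the exact solver, so the heuristic and the toy data below are illustrative:

```python
def greedy_mwis(weights, conflicts):
    """Greedy approximation of a maximum-weight independent set.
    weights: node -> weight (e.g., number of requests a broadcast would
    satisfy); conflicts: node -> set of conflicting nodes."""
    chosen, banned = set(), set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if v not in banned:
            chosen.add(v)
            banned |= conflicts.get(v, set()) | {v}
    return chosen

weights = {'a': 5, 'b': 4, 'c': 3, 'd': 2}
conflicts = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}, 'd': set()}
picked = greedy_mwis(weights, conflicts)   # 'b' conflicts with both a and c
```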
    Hybrid dynamic clustering algorithm for long term evolution system in unlicensed bands
    ZHANG Gang, JIANG Wei, LIU Shixiao
    2017, 37(8):  2145-2149.  DOI: 10.11772/j.issn.1001-9081.2017.08.2145
    Abstract | PDF (929KB)
    For the problem of cross-subframe interference caused by dynamic subframe configuration in the Long Term Evolution in Unlicensed band (LTE-U) system, a hybrid dynamic clustering algorithm considering large-scale loss and cell traffic volume was proposed. Firstly, the large-scale loss and cell traffic volume were periodically measured by the base station, and the corresponding relevance metric value was calculated. Secondly, according to the relevance metric value, the cells were clustered by polling to periodically update the clustering result. Finally, dynamic subframe configuration was performed according to the updated cell clustering result. Simulation results showed that, compared with the traditional static clustering algorithm under medium traffic arrival rate conditions, the uplink and downlink user average throughput of the hybrid dynamic clustering algorithm was increased by about 16.92% and 34.33% respectively, and the uplink and downlink user average delay was decreased by about 14.18% and 36.32%. The simulation results demonstrate that the hybrid dynamic clustering algorithm can effectively reduce the impact of cross-subframe interference and improve system throughput, outperforming the traditional static clustering algorithm.
    Path planning algorithm for mobile sink with optimized network lifetime and shortest path in wireless sensor network
    MO Wenjie, ZHENG Lin
    2017, 37(8):  2150-2156.  DOI: 10.11772/j.issn.1001-9081.2017.08.2150
    Abstract | PDF (1109KB)
    In order to alleviate the imbalanced energy consumption and hotspot problems caused by the uneven distribution of nodes and the differing amounts of sensed data in a Wireless Sensor Network (WSN), a path planning algorithm for a mobile sink, named MSPPA, was proposed to optimize the network lifetime and the sink's path length. Firstly, by dividing the network area into grids, several candidate sites for the mobile sink were distributed in each grid, and the sink node selected one site in each grid for sojourning and collecting data from the nodes. Secondly, based on the relationship between network lifetime and the selection of sink sites, an optimization model was established to trade off network lifetime against the travel distance of the sink. Finally, a double-stranded genetic algorithm was proposed to plan the order in which the mobile sink traverses the grids and the site selected in each grid, yielding the optimal path of the mobile sink. The simulation results show that, compared with the Low-Energy Adaptive Clustering Hierarchy (LEACH) algorithm and the LEACH clustering algorithm optimized with Mobile Sink and Rendezvous Nodes (MS-LEACH-RN), the network lifetime of MSPPA was increased by 60%. MSPPA also balances energy consumption well. The experimental results indicate that MSPPA can effectively alleviate the imbalance of energy consumption and the hotspot problem, prolonging the network lifetime.
    Optimization of the TCP connection initiating process in data centers
    XIA Yu, LIAO Pingxiu, CUI Lei
    2017, 37(8):  2157-2162.  DOI: 10.11772/j.issn.1001-9081.2017.08.2157
    Abstract | PDF (915KB)
    A SYN packet might be dropped when initiating a Transmission Control Protocol (TCP) connection in data centers, causing tasks to miss their expected deadlines. Accordingly, without changing the existing devices, applications or TCP itself, a viable mechanism based on Weighted Random Early Detection (WRED) was proposed to avoid dropping SYN packets. Three related key problems were solved by the proposed method: how to mark and recognize the SYN packets; how to reserve space for the SYN packets in switches; and how much space needs to be reserved. Compared with the original TCP, the TCP connection establishment time was greatly reduced after optimization. The simulation results show that the connection initialization optimization method can solve the problem of tasks missing their expected deadlines.
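    The reservation idea can be sketched as a simple admission rule: hold back the last few buffer slots for SYN packets, so data packets are dropped early (as WRED would drop them) while SYNs still get through. The function and its thresholds are illustrative, not the paper's exact switch configuration:

```python
def admit(queue_len, capacity, reserved, is_syn):
    """Admit a packet into a switch queue that reserves `reserved` slots
    for SYN packets: data packets see a smaller effective capacity."""
    if is_syn:
        return queue_len < capacity
    return queue_len < capacity - reserved

# a nearly full queue still accepts SYNs but rejects data packets
syn_ok = admit(95, 100, 8, is_syn=True)
data_ok = admit(95, 100, 8, is_syn=False)
```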
    Anti-collision algorithm for readers in radio frequency identification based on graph theory
    XU Yafeng, CUI Yinghua
    2017, 37(8):  2163-2167.  DOI: 10.11772/j.issn.1001-9081.2017.08.2163
    Abstract | PDF (887KB)
    Radio Frequency IDentification (RFID) systems often require multiple readers to cover the entire target area. When there are too many readers, mutual interference between them reduces the recognition efficiency of the whole RFID system. To resolve this problem, a new reader anti-collision algorithm based on graph theory was proposed. Firstly, the reader network was modeled as a simple graph and the readers were grouped by time slot: readers sharing a time slot formed one group, and adjacent readers were assigned different time slots, avoiding the interference caused by overlapping coverage. At the same time, to address frequency interference within a time-slot group, readers in a group sharing a frequency were regarded as a subgroup, and adjacent readers were assigned different frequencies, avoiding the frequency collisions caused by a too-large interference range. Next, according to the grouping information, the time slots and frequency resources were assigned to each reader by the central server through the arrangement command. Finally, the working order of each group of readers was assigned by the central server through the ordering commands. The simulation results showed that, compared with the Neighbor-Friendly Reader Anti-collision (NFRA) algorithm, the average work efficiency of the proposed scheme was improved by 6.5 percentage points, and the efficiency of a system with 1000 readers was improved by 9.5 percentage points. The results demonstrate that the proposed graph-theoretic algorithm can optimize the number of working readers in a given time and reduce the number of idle readers.
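    Assigning different time slots to adjacent readers is a graph coloring problem. A minimal greedy coloring sketch follows (with an assumed reader ordering; the paper's central server may order readers differently), and the same routine could be reused within a group for frequency assignment:

```python
def greedy_slot_assignment(adj):
    """Give each reader the smallest time slot not used by an already-
    assigned neighbor (greedy graph coloring). adj: reader -> set of
    interfering readers."""
    slot = {}
    for r in sorted(adj):
        used = {slot[n] for n in adj[r] if n in slot}
        s = 0
        while s in used:
            s += 1
        slot[r] = s
    return slot

# three mutually interfering readers plus one isolated reader
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
slots = greedy_slot_assignment(adj)
```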
    Anti-collision algorithm for RFID based on counter and bi-slot
    MO Lei, CHEN Wei, REN Ju
    2017, 37(8):  2168-2172.  DOI: 10.11772/j.issn.1001-9081.2017.08.2168
    Abstract | PDF (831KB)
    Focusing on the problems of the binary search anti-collision algorithm in Radio Frequency IDentification (RFID) systems, namely too many search iterations and a large amount of communication data, a new anti-collision algorithm for RFID with a counter and bi-slots, namely CBS, was proposed based on the regressive search tree algorithm and the time slot algorithm. The tags were searched step by step according to the slot counter in each tag and the collision bit information received by the reader. The responding tags were divided into two groups, which returned their data to the reader in two time slots. The reader only sent the position of the highest collision bit, and the tags only sent the data bits after the highest collision bit. Theoretical analysis and simulation results showed that, compared with the traditional Regressive Binary Search (RBS) algorithm, the number of searches of the CBS algorithm was reduced by more than 51%, and the communication data was reduced by more than 65%. The CBS algorithm is superior to the commonly used anti-collision algorithms: it greatly reduces the search iterations and communication data, and improves the search efficiency.
    Optimal power control based on interference power constraint in cognitive satellite terrestrial networks
    SHI Shengchao, LI Guangxia, LI Zhiqiang
    2017, 37(8):  2173-2176.  DOI: 10.11772/j.issn.1001-9081.2017.08.2173
    Abstract | PDF (739KB)
    In cognitive satellite terrestrial networks where the satellite users are secondary users, power control is necessary to guarantee the communication quality of the terrestrial primary user in the uplink case. In the context of fading channels, maximizing the Ergodic Capacity (EC) of the satellite user was selected as the objective function, and two optimal power control schemes were proposed based on the Peak Interference power Constraint (PIC) and the Average Interference power Constraint (AIC), respectively. Meanwhile, closed-form expressions for the optimal transmit power were given. Simulation results show that the ergodic capacity of the satellite user increases when the satellite link experiences weaker shadowing; moreover, under a specific satellite link condition, the performance of the satellite user improves as the fading parameters of the terrestrial interference link increase. In addition, the power control method based on AIC is superior to that based on PIC.
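    Under a peak interference power constraint, the standard spectrum-sharing form of the transmit power is p = min(p_max, Q_peak / g), where g is the instantaneous gain of the interference link to the primary receiver. This sketch follows that common form; the paper's exact closed-form expressions (and the AIC counterpart, which involves a Lagrangian water-filling level) may differ:

```python
def power_pic(q_peak, g_interference, p_max=float('inf')):
    """Transmit power of the secondary (satellite) user under a Peak
    Interference power Constraint: never exceed q_peak at the primary
    receiver, and never exceed the device limit p_max."""
    return min(p_max, q_peak / g_interference)

p_weak = power_pic(q_peak=0.1, g_interference=0.5)              # constraint binds
p_capped = power_pic(q_peak=0.1, g_interference=0.01, p_max=1.0)  # device limit binds
```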
    Consensus clock synchronization algorithm based on Kalman filter estimation
    YOU Luyao, HUANG Qingqing, DUAN Sijing
    2017, 37(8):  2177-2183.  DOI: 10.11772/j.issn.1001-9081.2017.08.2177
    Abstract | PDF (1066KB)
    For Wireless Sensor Network (WSN), many applications rely on nodes with synchronized clocks. However, the crystal oscillator of a node is affected by the node itself and the external environment, so the clock skew and clock offset vary and the clocks fall out of synchronization. Consequently, a new clock synchronization algorithm based on distributed Kalman filtering and consistency compensation, namely DKFCC, was proposed. First of all, the optimal estimates of the clock skew and offset were obtained by a two-way message exchange mechanism and distributed Kalman filtering. Then a consistency compensation method based on the optimal estimates of the clock parameters was adopted to achieve clock synchronization. The experimental results show that, compared with the Asynchronous Consensus-based time synchronization (AC) algorithm in a WSN with 100 randomly deployed nodes, the synchronization root mean square error of the DKFCC algorithm with virtual global consistency is reduced by about 95%, which means the DKFCC algorithm has higher synchronization accuracy. At the same time, the proposed algorithm achieves synchronization at the clock parameter level without performing clock synchronization frequently, so it is more energy-efficient than the AC synchronization algorithm.
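    A scalar Kalman filter tracking the clock offset shows the estimation step in miniature. The paper uses a two-state (skew and offset) distributed filter; the noise variances and measurement values below are illustrative:

```python
def kalman_offset(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter tracking a slowly drifting clock offset from
    noisy two-way-exchange measurements. q: process noise variance,
    r: measurement noise variance."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    for z in measurements[1:]:
        p += q                    # predict: offset drifts a little
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with new measurement
        p *= (1 - k)
    return x

est = kalman_offset([0.52, 0.49, 0.51, 0.50, 0.50])
```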
    Multi-target detection via sparse recovery of least absolute shrinkage and selection operator model
    HONG Liugen, ZHENG Lin, YANG Chao
    2017, 37(8):  2184-2188.  DOI: 10.11772/j.issn.1001-9081.2017.08.2184
    Abstract | PDF (828KB)
    Focusing on the issue that the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm may introduce false targets in moving target detection in the presence of multipath reflections, a dimension-reduction method for the design matrix based on LASSO was proposed. Multipath propagation increases the spatial diversity and provides different Doppler shifts over different paths, while the use of a broadband OFDM signal provides frequency diversity; the introduction of spatial and frequency diversity makes the target space sparse. The sparseness of the multiple paths and environment knowledge were exploited to estimate the paths along which target responses are received. Simulation results show that the improved LASSO algorithm based on the design matrix dimension-reduction method has better detection performance than traditional algorithms such as Basis Pursuit (BP), the Dantzig Selector (DS) and LASSO at a Signal-to-Noise Ratio (SNR) of -5 dB, and the target detection probability of the improved LASSO algorithm was 30% higher than that of LASSO at a false alarm rate of 0.1. The proposed algorithm can effectively filter out false targets and improve the radar target detection probability.
    Blind estimation of combination code sequence for TDDM-BOC based on Sanger neural network
    ZHANG Ting, ZHANG Tianqi, XIONG Mei
    2017, 37(8):  2189-2194.  DOI: 10.11772/j.issn.1001-9081.2017.08.2189
    Abstract | PDF (845KB)
    Concerning the blind estimation of the combination code sequence of a Time Division Data Modulation-Binary Offset Carrier (TDDM-BOC) modulated signal under low Signal-to-Noise Ratio (SNR), a new method based on the Sanger Neural Network (Sanger NN), a kind of multi-principal-component neural network, was proposed. Firstly, the segmented TDDM-BOC signal was used as the input signal, and the weight vectors of its multiple feature components were adaptively extracted by the Sanger NN algorithm. Secondly, the weight vectors were trained repeatedly until convergence by continuously inputting the segmented TDDM-BOC signal. Finally, the combination code sequence was rebuilt by applying the sign function to each weight vector, thus realizing blind estimation of the TDDM-BOC signal. Furthermore, an optimal variable-step method was used in the Sanger NN algorithm to greatly improve the convergence speed. Theoretical analysis and simulation results demonstrate that the Sanger NN algorithm can achieve blind estimation of the TDDM-BOC combination code sequence at SNRs as low as -20.9 dB to 0 dB, and its complexity is significantly lower than that of Singular Value Decomposition (SVD) and the on-line unsupervised learning neural network for adaptive feature extraction via principal component analysis (LEAP). Although the Sanger NN algorithm requires more data groups to converge than the LEAP algorithm, its convergence time is shorter.
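    The core of the method is Sanger's rule (the generalized Hebbian algorithm), which extracts successive principal components by deflating each weight vector with the components above it. A minimal fixed-step, in-place sketch on a toy input follows (the paper adds a variable step for faster convergence):

```python
def sanger_step(W, x, lr=0.1):
    """One update of Sanger's rule: dW_i = lr * y_i * (x - sum_{j<=i} y_j * W_j),
    where y_i = W_i . x. W is a list of weight vectors, x an input vector."""
    y = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]
    for i in range(len(W)):
        resid = list(x)                      # deflate x by components up to i
        for j in range(i + 1):
            for d in range(len(resid)):
                resid[d] -= y[j] * W[j][d]
        for d in range(len(W[i])):
            W[i][d] += lr * y[i] * resid[d]
    return W

W = [[0.3, 0.1], [0.1, 0.3]]
for _ in range(300):
    sanger_step(W, (1.0, 0.0))   # repeated input along the first axis
```

With this input the first weight vector converges to the dominant direction (1, 0), mirroring how each weight vector comes to encode one feature component of the segmented signal.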
    New least mean square algorithm with variable step based on underwater acoustic communication
    ZHENG Yifeng, HAO Xueyuan, YAN Xiaohong
    2017, 37(8):  2195-2199.  DOI: 10.11772/j.issn.1001-9081.2017.08.2195
    Abstract | PDF (929KB)
    In underwater acoustic communication, the multipath channel can cause severe Inter-Symbol Interference (ISI). In view of the problems of existing equalization algorithms when dealing with ISI, including slow convergence, large steady-state error, high complexity and difficulty of hardware porting, a new variable-step Least Mean Square (LMS) algorithm was proposed with an arccosine step function and three adjustment parameters within the Feed-Forward Equalizer and Decision Feedback Equalizer (FFE-DFE) structure. Firstly, simulations over the three adjustment parameters α, β and r were carried out to optimize the algorithm and compare it with the traditional LMS algorithm, the Modified Arctangent based Variable Step LMS (MA-VSLMS) and the Hyperbolic Secant function based Variable Step size LMS (HS-VSLMS) in terms of convergence and steady-state error. The simulation results showed that, compared with the traditional LMS algorithm, the convergence speed of the proposed algorithm was 57.9% higher and the steady-state error was reduced by 2 dB; compared with HS-VSLMS and MA-VSLMS, the convergence speed was 26.3% and 15.8% higher, respectively, and the steady-state error was reduced by 1-2 dB. Finally, the proposed algorithm was ported to a signal processing module and tested in an underwater experiment. Experimental results indicate that the signal is recovered well after the equalizer, and the ISI caused by the multipath effect is mitigated in the actual scene.
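    The essence of a variable-step LMS is letting the step size track the error magnitude. The arccosine-shaped step below, rising from 0 toward alpha as |e| grows, is an illustrative guess at the paper's function, since the abstract gives neither the exact formula nor the roles of α, β and r; the channel being identified is likewise a toy example:

```python
import math

def vs_lms(x, d, taps=4, alpha=0.05, beta=1.0, r=2.0):
    """LMS filter whose step size rises from 0 toward alpha with the
    error magnitude through an arccosine-shaped curve."""
    w = [0.0] * taps
    errors = []
    for n in range(taps - 1, len(x)):
        frame = x[n - taps + 1:n + 1][::-1]      # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, frame))
        e = d[n] - y
        mu = alpha * (2 / math.pi) * math.acos(math.exp(-beta * abs(e) ** r))
        for i in range(taps):
            w[i] += mu * e * frame[i]
        errors.append(e)
    return w, errors

# identify a toy 2-tap channel d[n] = 0.5*x[n] + 0.25*x[n-1]
# from a pseudo-random input generated by a small LCG
x, seed = [], 1
for _ in range(4000):
    seed = (1103515245 * seed + 12345) % (1 << 31)
    x.append(seed / float(1 << 30) - 1.0)
d = [0.5 * x[n] + (0.25 * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w, errors = vs_lms(x, d)
```

Large errors drive a large step (fast convergence); as the error shrinks, so does the step (small steady-state error), which is the trade-off the abstract describes.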
    Method for exploiting function level vectorization on single instruction multiple data extensions
    LI Yingying, GAO Wei, GAO Yuchen, ZHAI Shengwei, LI Pengyuan
    2017, 37(8):  2200-2208.  DOI: 10.11772/j.issn.1001-9081.2017.08.2200
    Abstract | PDF (1353KB)
    Currently, the two vectorization methods that exploit Single Instruction, Multiple Data (SIMD) parallelism are the loop-based method and the Superword Level Parallelism (SLP) method. Focusing on the problem that current compilers cannot realize function-level vectorization, a method of function-level vectorization based on static single assignment was proposed. Firstly, the variable properties of the program were analyzed, and then a set of compiler directives, including SIMD function annotations, uniform clauses and linear clauses, was used to realize function-level vectorization. Finally, the vectorized code was optimized by using the variable attribute results. Test cases from the fields of multimedia and image processing were selected to test the functionality and performance of the proposed function-level vectorization on the Sunway platform. Compared with the scalar execution results, the program after function-level vectorization executes more efficiently. The experimental results show that function-level vectorization can achieve the same effect as task-level parallelism, which is instructive for realizing automatic function-level vectorization.
    Storage load balancing algorithm based on storage entropy
    ZHOU Weibo, ZHONG Yong, LI Zhendong
    2017, 37(8):  2209-2213.  DOI: 10.11772/j.issn.1001-9081.2017.08.2209
    Abstract | PDF (807KB)
    In distributed storage systems, Disk space Utilization (DU) is generally used to measure the load balance of the storage nodes: when every node has equal disk space utilization, the storage load of the whole distributed storage system is considered balanced. In practice, however, a storage node with relatively low disk I/O speed and reliability becomes a bottleneck for data I/O performance in the whole storage system. Therefore, in heterogeneous distributed storage systems, especially those whose storage nodes differ greatly in disk I/O speed and reliability, the speed of data I/O is limited when disk space utilization is the only evaluation criterion of storage load balance. A new idea based on read-write efficiency was proposed to measure the storage load balance of a distributed storage system. According to the definition of Storage Entropy (SE), derived from load balance theory and entropy, a load balancing algorithm based on SE was proposed. Through system load and single-node load determination as well as load shifting, quantitative adjustment of the storage load of the distributed storage system was achieved. The proposed algorithm was tested and compared with the load balancing algorithm based on disk space utilization. Experimental results show that the proposed algorithm balances storage load well in the distributed storage system, effectively restraining system load imbalance and improving the overall read-write efficiency of the distributed storage system.
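    An entropy of the normalized load distribution is one natural way to quantify balance: it is maximal when every node carries the same load and drops as the distribution skews. The sketch below illustrates the idea only; the paper's SE definition may additionally weight loads by read-write efficiency:

```python
import math

def storage_entropy(loads):
    """Shannon entropy of the normalized load distribution across nodes:
    log(n) for n equally loaded nodes, smaller for skewed distributions."""
    total = sum(loads)
    probs = [l / total for l in loads if l > 0]
    return -sum(p * math.log(p) for p in probs)

balanced = storage_entropy([25, 25, 25, 25])   # equals log(4)
skewed = storage_entropy([70, 10, 10, 10])
```

A load-shifting step would then move data off overloaded nodes whenever the entropy falls below a threshold.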
    Load balancing strategy of cloud storage based on Hopfield neural network
    LI Qiang, LIU Xiaofeng
    2017, 37(8):  2214-2217.  DOI: 10.11772/j.issn.1001-9081.2017.08.2214
    Abstract | PDF (646KB)
    Focusing on the shortcomings of current Hadoop, namely low storage efficiency and high recovery cost after replica failure, a Hopfield Neural Network (HNN) was used to improve the overall performance. Firstly, the resource characteristics that affect storage efficiency were analyzed. Secondly, a resource constraint model was established, and the Hopfield energy function was designed and simplified. Finally, the average utilization rate of 8 nodes was analyzed by using the standard test case Wordcount, and the performance and resource utilization of the proposed strategy were compared with three typical approaches: a dynamic resource allocation algorithm, an energy-efficient algorithm, and the Hadoop default storage strategy. The comparison results showed that the average efficiency of the HNN-based storage strategy was improved by 15.63%, 32.92% and 55.92% respectively. The results indicate that the proposed algorithm can achieve resource load balancing, help improve the storage capacity of Hadoop, and speed up retrieval.
    Phase space probabilistic clustering algorithm based on multi-scale quantum harmonic oscillator algorithm
    WANG Ziyi, AN Junxiu, WANG Peng
    2017, 37(8):  2218-2222.  DOI: 10.11772/j.issn.1001-9081.2017.08.2218
    Abstract | PDF (761KB)
    A Phase Space Probabilistic Clustering Algorithm based on Multi-scale Quantum Harmonic Oscillator Algorithm (PSPCA-MQHOA) was proposed to solve the task scheduling and resource allocation of large clusters. Firstly, the cluster operating status was projected into the phase space, and the complex working state was transformed into the point set in the phase space. Furthermore, the phase space was meshed to form the Multi-scale Quantum Harmonic Oscillator Algorithm (MQHOA) for discrete objective function. Finally, probabilistic clustering of cluster nodes was carried out by using the probability interpretation of wave function in the MQHOA process. PSPCA-MQHOA inherits the advantages of MQHOA, such as explicit physical model, strong search capabilities and accurate results, and it has few iterations due to the discretized phase space. Experimental results show that PSPCA-MQHOA can be applied to clusters in a variety of load conditions.
    Multi-attribute decision making method based on comparison possibility degree
    HU Xin, CHANG Wenjun, SUN Chaoping
    2017, 37(8):  2223-2228.  DOI: 10.11772/j.issn.1001-9081.2017.08.2223
    Abstract | PDF (1013KB)
    Based on the comparison possibility degree of Distributed Preference Relations (DPRs), which considers four pairwise relations between alternatives (superiority, inferiority, indifference and uncertainty), a multi-attribute decision making method with unknown attribute weights was proposed. Firstly, DPRs were transformed into score intervals by using grade scores. Based on the score intervals, the comparison possibility degree of DPRs under an assumed uniform distribution was constructed. Secondly, the relationship between the proposed possibility degree and the possibility degree that ignores the probability distribution, together with the range of the difference between their values, was theoretically analyzed and proven. At last, by using the proposed possibility degree, a multi-attribute decision making process for unknown attribute weights was developed, which produces highly distinguishable decision solutions. The proposed method was applied to evaluate the strategic emerging industries of Wuhu, which verified its applicability and validity.
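    Once DPRs are reduced to score intervals, comparing two alternatives becomes an interval comparison. The sketch below uses the common interval possibility-degree formula as an illustration; the paper derives a refined version under the uniform-distribution assumption:

```python
def possibility_degree(a, b):
    """Possibility degree that interval a = (a_lo, a_hi) is not less than
    interval b = (b_lo, b_hi), via the common interval-comparison formula
    P(a >= b) = clip((a_hi - b_lo) / (len(a) + len(b)), 0, 1)."""
    la, lb = a[1] - a[0], b[1] - b[0]
    if la + lb == 0:                      # both degenerate to points
        return 1.0 if a[0] >= b[0] else 0.0
    return min(max((a[1] - b[0]) / (la + lb), 0.0), 1.0)

p = possibility_degree((2, 6), (1, 3))    # heavily overlapping intervals
```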
    Learning neuron model based on non-associative learning mechanism
    BI Song, DIAO Qi, CHAI Xiaofeng, HAN Cunwu
    2017, 37(8):  2229-2233.  DOI: 10.11772/j.issn.1001-9081.2017.08.2229
    Abstract | PDF (810KB)
    Biological neurons have non-associative learning mechanisms, and a novel neuron model with such mechanisms, namely the learning neuron, was designed. Firstly, simplified descriptions of the habituation and dishabituation mechanisms among the non-associative learning mechanisms were studied. Secondly, mathematical models of the habituation and dishabituation learning mechanisms were established. Finally, based on the classical M-P (McCulloch-Pitts) neuron model, a learning neuron model with habituation and dishabituation learning ability was proposed. The simulation results verify that the proposed learning neuron has typical habituation and dishabituation learning ability, and provides a good foundation for the construction of new neural networks.
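    Habituation and dishabituation can be caricatured with a single gain variable that decays while the same stimulus repeats and recovers when the stimulus changes. The constants and update rules below are illustrative, not the paper's exact model:

```python
def habituating_neuron(stimuli, decay=0.8, recovery=0.5):
    """Toy non-associative dynamics: the response gain shrinks under a
    repeated stimulus (habituation) and recovers on a novel stimulus
    (dishabituation). Returns the response gain * stimulus at each step."""
    gain, prev, responses = 1.0, None, []
    for s in stimuli:
        if s == prev:
            gain *= decay                       # habituation
        else:
            gain = min(1.0, gain + recovery)    # dishabituation
        prev = s
        responses.append(gain * s)
    return responses

r = habituating_neuron([1, 1, 1, 1, 2])   # response fades, then recovers
```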
    Link prediction algorithm based on network representation learning and random walk
    LIU Si, LIU Hai, CHEN Qimai, HE Chaobo
    2017, 37(8):  2234-2239.  DOI: 10.11772/j.issn.1001-9081.2017.08.2234
    Abstract | PDF (953KB)
    In unweighted networks, the transition process of existing random-walk-based link prediction indexes is strongly random and does not consider the effect of the structural similarity between different neighbor nodes on the transition probability. In order to solve these problems, a new link prediction algorithm based on network representation learning and random walk was proposed. Firstly, the latent structural features of network nodes were learned by DeepWalk, a network representation learning algorithm based on deep learning, and the network nodes were encoded into a low-dimensional vector space. Secondly, the similarity between neighbor nodes in the vector space was incorporated into the transition processes of Random Walk with Restart (RWR) and Local Random Walk (LRW) respectively, and the transition probability of each random walk was redefined. Finally, extensive experiments were carried out on five real datasets. The experimental results show that the AUC (Area Under the receiver operating characteristic Curve) values of the proposed algorithms are improved by up to 3.34% compared with eight representative link prediction benchmarks based on network structure.
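    The redefined transition step can be sketched as a random walk with restart whose neighbor probabilities are weighted by an embedding similarity instead of being uniform. The similarity values below stand in for cosine similarities of DeepWalk vectors, which the sketch does not compute:

```python
def rwr_scores(adj, sim, source, restart=0.15, iters=100):
    """Random walk with restart where the probability of stepping from u
    to neighbor v is proportional to sim[(u, v)] (default 1.0, i.e. the
    plain unweighted walk). Returns the stationary visit distribution."""
    nodes = sorted(adj)
    p = {v: 1.0 if v == source else 0.0 for v in nodes}
    for _ in range(iters):
        q = {v: restart if v == source else 0.0 for v in nodes}
        for u in nodes:
            total = sum(sim.get((u, v), 1.0) for v in adj[u])
            for v in adj[u]:
                q[v] += (1 - restart) * p[u] * sim.get((u, v), 1.0) / total
        p = q
    return p

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
sim = {(0, 1): 0.9, (0, 2): 0.5}   # node 0 is "closer" to 1 in embedding space
scores = rwr_scores(adj, sim, source=0)
```

The score of a non-neighbor then serves as the link prediction index for the candidate edge (source, v).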
    Visual dictionary construction for human actions recognition based on improved information gain
    WU Feng, WANG Ying
    2017, 37(8):  2240-2243.  DOI: 10.11772/j.issn.1001-9081.2017.08.2240
    Abstract | PDF (816KB)
    Since term frequency is not considered by traditional information gain in the Bag-of-Words (BoW) model, a new visual dictionary construction method based on improved information gain was proposed to improve human action recognition accuracy. Firstly, spatio-temporal interest points of human action videos were extracted using 3D Harris and then clustered by K-means to construct the initial visual dictionary. Secondly, the concentration of term frequency within clusters and the dispersion of term frequency between clusters were introduced to improve the information gain, which was then computed over the initial dictionary, and the visual words with larger information gain were selected to build the new visual dictionary. Finally, human actions were recognized by a Support Vector Machine (SVM) based on the dictionary built with the improved information gain. The proposed method was verified on human action recognition with the KTH and Weizmann databases. Compared with the traditional information gain, the action recognition accuracy was increased by 1.67% and 3.45% respectively with the dictionary constructed by the improved information gain. Experimental results show that the visual dictionary based on improved information gain increases the accuracy of human action recognition by selecting more discriminative visual words.
    Image classification method based on optimized bag-of-visual words model
    ZHANG Yong, YANG Hao
    2017, 37(8):  2244-2247.  DOI: 10.11772/j.issn.1001-9081.2017.08.2244
    Abstract | PDF (790KB)
    Concerning the problem that an overly large visual dictionary increases the time cost of image classification in the Bag-Of-Visual words (BOV) model, a Weighted-Maximal Relevance-Minimal Semantic similarity (W-MR-MS) criterion was proposed to optimize the visual dictionary. Firstly, the Scale Invariant Feature Transform (SIFT) features of images were extracted, and the K-Means algorithm was used to generate an original visual dictionary. Secondly, the correlation between visual words and image categories and the semantic similarity among visual words were calculated, and a weighting parameter was introduced to measure the importance of the correlation and the semantic similarity in image classification. Finally, based on the weighting result, visual words whose correlation with image categories was weak and whose semantic similarity to other words was high were removed, thus optimizing the visual dictionary. The experimental results show that the classification precision of the proposed method is 5.30% higher than that of the traditional K-Means algorithm under the same visual dictionary scale, and the time cost of the proposed method is reduced by 32.18% compared with the traditional K-Means algorithm under the same classification precision. Therefore, the proposed method has high classification efficiency and is suitable for image classification.
    Improved local directional pattern based on interest point location for face recognition
    LUO Yuan, LI Huimin, ZHANG Yi
    2017, 37(8):  2248-2252.  DOI: 10.11772/j.issn.1001-9081.2017.08.2248
    To solve the problem that the Local Directional Pattern (LDP) adopts a fixed average-block scheme in face feature extraction, which cannot reflect the characteristics of different images well, an improved LDP based on interest point location was proposed. The positions of interest points contain rich feature information, and the interest points can be obtained automatically for each particular image. Firstly, the locations of interest points were determined by the Speeded-Up Robust Features (SURF) algorithm and the K-means clustering algorithm. Secondly, 4-direction LDP (4-LDP) codes were calculated within feature extraction windows centered at each interest point. Finally, a Support Vector Machine (SVM) was used to identify the face. The proposed method was evaluated on the Yale and FERET databases and compared with the original LDP, 4-LDP and PCA-LDP (a feature extraction method combining Principal Component Analysis and LDP). The experimental results show that the proposed method can obviously improve the recognition rate and stability while ensuring the real-time performance of the system.
    TEE standard plane classification based on improved multi-class AdaBoost algorithm
    WANG Lili, FU Zhongliang, TAO Pan, ZHU Kai
    2017, 37(8):  2253-2257.  DOI: 10.11772/j.issn.1001-9081.2017.08.2253
    Due to the redundancy of ultrasound image samples, the high similarity between different planes caused by disease, and the inaccurate positioning of the region of interest, a classification method for TransEsophageal Echocardiography (TEE) standard planes was proposed by combining the Bag of Features (BOF) model, active learning and an improved multi-class AdaBoost algorithm. Firstly, the BOF model was constructed to describe the ultrasound images. Secondly, active learning was adopted to select the most informative samples as the training data set for the classifiers. Lastly, an improved multi-class AdaBoost algorithm was proposed, in which the weight update rule of multi-class AdaBoost was modified according to the classification results of the temporary strong learner, and the TEE standard planes were classified by the improved algorithm. The experimental results on the TEE data set and three UCI data sets show that, compared with AdaBoost.SAMME, multi-class Support Vector Machine (SVM), BP neural network and AdaBoost.M2, the G-mean value, the total classification accuracy and the classification accuracy of most classes were improved to varying degrees, and the accuracy of easily misclassified classes was improved most significantly. The results illustrate that the improved multi-class AdaBoost algorithm can significantly improve the G-mean value and the accuracy of easily misclassified classes on datasets containing similar samples between classes.
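    For reference, one round of the standard SAMME weight update, which the improved algorithm modifies, can be sketched as follows; the modification driven by the temporary strong learner is not reproduced here, and the epsilon guard against zero error is an assumption of this sketch.

```python
import numpy as np

def samme_round(weights, y_true, y_pred, n_classes):
    """One boosting round of multi-class AdaBoost (SAMME).
    Returns normalized sample weights and the learner's vote alpha."""
    miss = (y_true != y_pred).astype(float)
    err = np.sum(weights * miss) / np.sum(weights)
    # SAMME learner weight: positive while err < 1 - 1/K
    alpha = np.log((1 - err) / max(err, 1e-10)) + np.log(n_classes - 1)
    # up-weight only the misclassified samples
    weights = weights * np.exp(alpha * miss)
    return weights / weights.sum(), alpha
```

    The `log(K - 1)` term is what distinguishes SAMME from binary AdaBoost: it keeps alpha positive whenever the weak learner beats random guessing over K classes.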
    Path planning algorithm of mobile robot based on particle swarm optimization
    HAN Ming, LIU Jiaomin, WU Shuomei, WANG Jingtao
    2017, 37(8):  2258-2263.  DOI: 10.11772/j.issn.1001-9081.2017.08.2258
    Concerning the slow convergence and local optima of traditional robot path planning algorithms in complicated environments, a new path planning algorithm for mobile robots based on the Particle Swarm Optimization (PSO) algorithm in a repulsive potential field was proposed. Firstly, the grid method was used to produce a preliminary path plan for the robot, which was taken as the initial particle population. The size of the grids was determined by the obstacles of different shapes and sizes and the total obstacle area in the map, and the mathematical model of the planned path was then completed. Secondly, particle positions and velocities were constantly updated through cooperation between particles. Finally, a high-safety fitness function was constructed using the repulsive potential field of obstacles to obtain an optimal path from the robot's starting point to its target. Simulation experiments were carried out in MATLAB. The experimental results show that the proposed algorithm can optimize paths and safely avoid obstacles in complex environments; the comparative experiments indicate that the proposed algorithm converges fast and can escape local optima.
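    The particle update and a safety-aware fitness of the kind described can be sketched as below; the gains `w`, `c1`, `c2` and the inverse-distance repulsion penalty are illustrative assumptions, not the paper's exact potential-field formulation.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Standard PSO velocity/position update iterated by the planner:
    inertia plus attraction to personal and global best positions."""
    rng = rng or np.random.default_rng(0)
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

def path_fitness(path, obstacles, k=1.0):
    """Path length plus an inverse-distance repulsion penalty near
    obstacles -- a minimal stand-in for the safety-aware fitness.
    path: (n_points, 2) waypoints; obstacles: (m, 2) obstacle centers."""
    length = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    d = np.linalg.norm(path[:, None, :] - obstacles[None, :, :], axis=2)
    return length + k * (1.0 / d.min(axis=1)).sum()
```

    In a full planner, each particle encodes a candidate path's waypoints, `path_fitness` scores it, and `pso_step` is iterated until the swarm converges.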
    Second-order information based formation control in multi-Agent system
    CHAI Yun, XIONG Tao
    2017, 37(8):  2264-2269.  DOI: 10.11772/j.issn.1001-9081.2017.08.2264
    In order to speed up state convergence in multi-Agent formation control, a formation control method based on multi-hop network technology was proposed. Firstly, the relative velocity deviation between Agents of the Multi-Agent System (MAS) was introduced into the control protocol. Then, the deviation between each Agent's absolute displacement and its prescribed displacement was introduced. Finally, multi-hop network technology was applied to the communication topology, so that more information was passed around and each Agent enlarged its "available" neighborhood. A six-Agent formation control example was used to verify the proposed protocol. The simulation results show that the proposed control method can make the system build up the specified formation, and the time required for state convergence is reduced by nearly 10 seconds compared with the control method that does not use multi-hop network technology, which verifies that the proposed method is more efficient.
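    A minimal sketch of a second-order protocol of this kind, combining offset-corrected relative displacement with relative velocity, is given below. The gains `k1`, `k2` and the Euler integration are assumptions of this sketch; a denser adjacency matrix can stand in for the enlarged multi-hop neighborhood.

```python
import numpy as np

def formation_step(x, v, adj, offsets, dt=0.01, k1=1.0, k2=1.5):
    """One Euler step of a second-order formation protocol.
    x, v: positions and velocities (1-D here for brevity);
    offsets: desired formation offset of each Agent;
    adj[i, j] != 0 means Agent i can hear Agent j."""
    n = len(x)
    u = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                # consensus on offset-corrected displacement ...
                u[i] += k1 * ((x[j] - offsets[j]) - (x[i] - offsets[i]))
                # ... plus relative velocity damping
                u[i] += k2 * (v[j] - v[i])
    return x + dt * v, v + dt * u
```

    Iterating this step drives the inter-Agent displacements to the prescribed offsets while the velocity deviations decay to zero.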
    Mobile secure payment solution based on encrypted SMS verification code
    LI Sai, LI Xiaoyu
    2017, 37(8):  2270-2274.  DOI: 10.11772/j.issn.1001-9081.2017.08.2270
    Aiming at the problem that the payment verification code is easily leaked during mobile payment, a two-factor mobile payment solution based on encrypted SMS was proposed. Based on the public key system, the Public Key Infrastructure/Certification Authority (PKI/CA) authentication method was used to authenticate the server and the client online, and the registered user name, password and encrypted transaction-verification SMS were used to ensure that even if the verification code ciphertext was leaked, an attacker could not obtain the verification code, thus eliminating the risk of theft caused by verification code leakage. The simulation results show that the response time of the encrypted verification solution using the SMS interface differs little from that of the unencrypted solution; its growth trend is consistent with the unencrypted solution and increases linearly with user access, so the solution balances the security and efficiency of the system.
    ECC-based RFID authentication protocol enabling tag ownership transfer
    YANG Xingchun, XU Chunxiang, LI Chaorong
    2017, 37(8):  2275-2280.  DOI: 10.11772/j.issn.1001-9081.2017.08.2275
    To solve privacy leakage and other security problems in Radio-Frequency Identification (RFID) tag authentication and tag ownership transfer, and to simplify the design of tag ownership transfer protocols, an RFID authentication protocol enabling tag ownership transfer was proposed for tags that support Elliptic Curve Cryptography (ECC). The structure of the protocol is similar to that of the Diffie-Hellman key exchange, and the tag privacy of the protocol rests on the hardness of the computational Diffie-Hellman problem. An analysis of the tag privacy and other security properties of the protocol was given, and recent ECC-based authentication protocols were compared with the proposed protocol. The results show that the proposed protocol achieves the best overall performance under a comprehensive evaluation of support for tag ownership transfer, tag computation cost, communication cost and tag privacy protection. In addition, a simplified version that realizes tag authentication to the reader and is suitable for secure environments was also given.
    Research on anti-attack capability of control plane in software-defined network based on Byzantine fault-tolerance
    GAO Jie, WU Jiangxing, HU Yuxiang, LI Junfei
    2017, 37(8):  2281-2286.  DOI: 10.11772/j.issn.1001-9081.2017.08.2281
    The centralized control plane of Software-Defined Network (SDN) brings great convenience, but it also introduces many security risks. In light of single-point failure, unknown vulnerabilities and back doors, static configuration and other security problems of the controller, a secure SDN architecture based on a Byzantine protocol was proposed, in which the Byzantine protocol was executed among controllers, each switching device was controlled by a controller view, and control messages were decided jointly by several controllers. Furthermore, dynamics and heterogeneity were introduced into the proposed structure, so that the attack chain was broken and the network's active defense capability was enhanced; moreover, based on a quantification of controller heterogeneity, a two-stage algorithm was designed to seek the controller view, so that the availability of the network and the security of the controller view were ensured. Simulation results show that the proposed structure is more resistant to attacks than the traditional structure.
    BGN type outsourcing the decryption of attribute-based encryption ciphertexts
    LI Zhenlin, ZHANG Wei, BAI Ping, WANG Xu'an
    2017, 37(8):  2287-2291.  DOI: 10.11772/j.issn.1001-9081.2017.08.2287
    Cloud computing security is the key bottleneck restricting its development, and access control over the results of cloud computing is a hot research topic. Based on the classical homomorphic encryption BGN (Boneh-Goh-Nissim) scheme, and combined with outsourcing the decryption of Ciphertext-Policy Attribute-Based Encryption (CP-ABE) ciphertexts, a BGN-type scheme for outsourcing the decryption of ABE ciphertexts was constructed. In the scheme, partial decryption of ciphertexts was outsourced to the cloud, and only users whose attributes meet the access policy could obtain the correct decryption result, thus reducing users' storage and computation overhead. Compared with existing ABE outsourcing schemes, the proposed scheme supports an arbitrary number of additions and one multiplication on ciphertexts. Finally, the security of the scheme was analyzed: the proposed scheme is semantically secure under the subgroup decision assumption, and its attribute security is proved under the random oracle model.
    Line segment feature matching algorithm across video frames based on geometric constraints and 0-1 programming
    LI Haifeng, HU Zunhe, FAN Longfei, JIANG Zizheng, CHEN Xinwei
    2017, 37(8):  2292-2297.  DOI: 10.11772/j.issn.1001-9081.2017.08.2292
    To deal with the problems in line segment matching caused by occlusion, fragmentation and inaccurate extraction of line segment endpoints, especially the loss of true correspondences when the "most similar matching" criterion is taken in "multiple-to-multiple" matching, a line segment matching algorithm based on geometric constraints and 0-1 programming was proposed. Firstly, a candidate matching set was computed based on spatial adjacency after rectifying the video frames. Secondly, multiple geometric constraints, such as the epipolar constraint, the homography matrix and point-to-line adjacency, were employed to remove false correspondences. Then, the matching problem was modeled as a large-scale 0-1 program. Finally, a two-stage method based on a grouping strategy was designed to solve this problem, realizing exact "one-to-one" matching of line segment features. The experimental results show that, compared with the LS (Line Signature) and LJL (Line-Junction-Line) methods, the proposed method achieves a similar correct matching ratio while finding over 60% and 11% more matches, respectively. The proposed method can fulfill line segment matching across video frames, which lays the foundation for line-based visual SLAM (Simultaneous Localization and Mapping).
    Augmented reality approach based on digital camera and temporal psycho-visual modulation
    LU Xiaoyong, YOU Bin, LIN Pei-Yu, CHEN Musheng
    2017, 37(8):  2298-2301.  DOI: 10.11772/j.issn.1001-9081.2017.08.2298
    In order to extend the practicality of Augmented Reality (AR), a method realizing the AR effect based on Temporal Psycho-Visual Modulation (TPVM) technology and a digital camera was proposed. First, AR tags were embedded in the digital screen of the media. Then, based on the difference between how the human eye perceives and how a digital camera captures the image formed on a digital screen or projector, a digital camera was used to capture the screen image containing AR tags that are not easily detected by the human eye. Finally, the AR effect was displayed on the smart device that captured the AR tags. Simulation results show that the combination of AR and TPVM technology can accurately identify the AR tags in the image and achieve the AR effect, while the human eye cannot detect the tags. By using a mobile phone instead of 3D glasses and other extra equipment, the restrictions on using AR are reduced and its practicality is expanded.
    Real-time interaction based modeling method for 3D objects with relief-texture
    ZHANG Luosheng, TONG Jing
    2017, 37(8):  2302-2306.  DOI: 10.11772/j.issn.1001-9081.2017.08.2302
    In order to reconstruct 3D objects with relief texture quickly, a modeling method for 3D objects based on real-time interaction was proposed. Firstly, the input object or image from which the relief texture was to be generated was converted into an initial depth map, and the depth map was then converted into a gradient image. The gradient image was compressed and filtered, and the continuous relief depth map was reconstructed by solving a linear equation. Secondly, using the proposed relief-texture-mapping algorithm based on mesh intersection, the optimized relief depth image was pasted onto the target object surface through real-time interaction, including translation, rotation and scaling, directly in the 3D scene. Finally, an object with relief texture was obtained through re-triangulation of the target model. The experimental results show that the proposed method can quickly generate concave or convex reliefs, and in particular multiple reliefs, on the target object, and the resulting model can be applied directly to 3D printing without further processing.
    Automatic camera calibration method for video-based vehicle speed detection
    CHEN Ke
    2017, 37(8):  2307-2312.  DOI: 10.11772/j.issn.1001-9081.2017.08.2307
    To address the inefficiency and inconvenience of the manual camera calibration required by existing video-based vehicle speed detection, a robust algorithm was proposed to automatically calibrate a typically configured road-monitoring camera, including its focal length, pitch angle and distance to the road surface. The relationship between the vanishing points formed by two groups of orthogonal lines was used to calibrate the focal length and pitch angle of the camera. Then the statistical width data collected from multiple vehicles detected in the video stream were used to estimate the distance between the camera and the road surface. The experimental results show that the proposed algorithm has high efficiency, excellent accuracy and high reliability. It can serve as an effective automatic camera calibration method that helps existing traffic monitoring video devices automatically collect, analyze and monitor vehicle speed, vehicle type and traffic flow data and enforce speed limit laws.
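    The first calibration stage can be sketched from the classical vanishing-point constraint: with the principal point at the image origin and zero skew, two orthogonal vanishing points (u1, v1) and (u2, v2) satisfy u1*u2 + v1*v2 + f^2 = 0, and the pitch follows from the height of the horizon through them. The image-axis sign convention used for the pitch is an assumption of this sketch.

```python
import math

def calibrate_from_vps(vp1, vp2):
    """Focal length from two orthogonal vanishing points (principal
    point at the image origin, zero skew), then pitch angle from the
    horizon height: theta = atan(-v / f), assuming the image y-axis
    points up. Returns (focal_length, pitch_radians)."""
    u1, v1 = vp1
    u2, v2 = vp2
    f = math.sqrt(-(u1 * u2 + v1 * v2))   # vp1^T * omega * vp2 = 0
    theta = math.atan2(-v1, f)            # horizon height -> pitch
    return f, theta
```

    A quick synthetic check: a camera with focal length 1000 pitched down by 0.2 rad places the vanishing points of two 45-degree ground directions at (±f/cos θ, −f·tan θ), from which both parameters are recovered exactly.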
    Adaptive bi-lp-l2-norm based blind super-resolution reconstruction for single blurred image
    LI Tao, HE Xiaohai, TENG Qizhi, WU Xiaoqiang
    2017, 37(8):  2313-2318.  DOI: 10.11772/j.issn.1001-9081.2017.08.2313
    An adaptive bi-lp-l2-norm based blind super-resolution reconstruction method was proposed to improve the quality of a low-resolution blurred image; it includes an independent blur-kernel estimation sub-process and a non-blind super-resolution reconstruction sub-process. In the blur-kernel estimation sub-process, bi-lp-l2-norm regularization was imposed on both the sharp image and the blur kernel. Moreover, by introducing threshold segmentation of image gradients, the lp-norm and l2-norm constraints on the sharp image were combined adaptively. With the estimated blur kernel, the non-blind super-resolution reconstruction method based on non-locally centralized sparse representation was used to reconstruct the final high-resolution image. In simulation experiments, compared with the bi-l0-l2-norm based method, the average Peak Signal-to-Noise Ratio (PSNR) gain of the proposed method was 0.16 dB higher, the average Structural Similarity Index Measure (SSIM) gain was 0.0045 higher, and the average Sum of Squared Differences (SSD) ratio was reduced by a further 0.13. The experimental results demonstrate the superior performance of the proposed method in terms of kernel estimation accuracy and reconstructed image quality.
    Image restoration based on natural patch likelihood and sparse prior
    LI Junshan, YANG Yawei, ZHU Zijiang, ZHANG Jiao
    2017, 37(8):  2319-2323.  DOI: 10.11772/j.issn.1001-9081.2017.08.2319
    Concerning the problem that images captured by an optical system suffer unsteady degradation, including noise, blurring and geometric distortion, when the imaging process is affected by defocus, motion, atmospheric disturbance and photoelectric noise, a generic image restoration framework based on natural patch likelihood and a sparse prior was proposed. Firstly, on the basis of the natural-image sparse prior model, several patch likelihood models were compared; the results indicate that the image patch likelihood model can improve restoration performance. Secondly, the expected patch log likelihood model was constructed and optimized, which reduced the running time and simplified the learning process. Finally, image restoration based on the optimized expected patch log likelihood and a Gaussian Mixture Model (GMM) was accomplished through an approximate Maximum A Posteriori (MAP) algorithm. The experimental results show that the proposed approach can restore images degraded by various kinds of blur and additive noise, and it outperforms state-of-the-art sparse-prior-based image restoration methods in both Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), with a better visual effect.
    Improved fast image defogging algorithm based on dark channel prior
    ZHANG Jiangxin, ZHOU Jiabo, MENG Limin
    2017, 37(8):  2324-2328.  DOI: 10.11772/j.issn.1001-9081.2017.08.2324
    Due to the high complexity of the dark channel prior defogging algorithm, a fast defogging algorithm based on the dark channel prior was proposed. Firstly, computation of the dark channel map was accelerated by processing the image in blocks. Secondly, the block effect was eliminated by smoothing with a linear interpolation algorithm. Then, the transmission map was acquired from the dark channel prior. Finally, a clear, haze-free image was recovered from the atmospheric scattering model. The experimental results show that the defogging effect of the proposed algorithm is as good as that of the original algorithm while its complexity is reduced effectively: the proposed algorithm needs about one-tenth of the original algorithm's time and approaches the requirement of real-time processing.
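    For context, the unaccelerated dark-channel pipeline that the fast algorithm builds on (He et al.'s dark channel → atmospheric light → transmission → scattering-model inversion) can be sketched as below. The block-and-interpolate speed-up itself is not reproduced, and the transmission here is the coarse per-pixel version without smoothing.

```python
import numpy as np

def dehaze(img, patch=7, omega=0.95, t0=0.1):
    """Minimal dark-channel-prior dehazing for a float RGB image in [0, 1].
    omega keeps a trace of haze for realism; t0 bounds the transmission."""
    h, w, _ = img.shape
    pad = patch // 2
    # dark channel: per-pixel channel minimum, then local patch minimum
    mins = img.min(axis=2)
    padded = np.pad(mins, pad, mode='edge')
    dark = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    # atmospheric light from the brightest 0.1% of dark-channel pixels
    n = max(1, int(0.001 * h * w))
    idx = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].max(axis=0)
    # coarse per-pixel transmission (no block smoothing here)
    t = 1 - omega * (img / A).min(axis=2)
    t = np.maximum(t, t0)
    # invert the atmospheric scattering model I = J*t + A*(1 - t)
    return (img - A) / t[..., None] + A
```

    On a synthetic image of a dark object seen through uniform haze, the object is restored to near black while the pure-haze background is left at the atmospheric light value.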
    Modified image dehazing algorithm of traffic sign image in fog and haze weather
    XU Zhe, CHEN Meizhu
    2017, 37(8):  2329-2333.  DOI: 10.11772/j.issn.1001-9081.2017.08.2329
    When the existing defogging algorithms are applied directly to traffic images, the transitional region is obvious and the color cast is serious, which cannot meet the requirements of subsequent traffic sign detection. To solve this problem, a modified single-image dehazing algorithm for traffic images based on the dark channel prior was proposed. Firstly, a modified mean shift algorithm was used to segment the sky region of the traffic image; then the histogram equalization algorithm was used to defog the segmented sky region, and a dark channel prior algorithm based on an efficient bilateral filter was used to defog the non-sky region. Finally, image dehazing was completed by image fusion. Experimental results show that, compared with typical image dehazing algorithms, the proposed algorithm performs better in the transitional region of the sky, its color cast is not serious, and its processing speed is faster; the quantitative analysis indicates that the proposed algorithm has a better dehazing effect and can meet the requirements of a subsequent traffic sign detection system.
    Image denoising model with adaptive non-local data-fidelity term and bilateral total variation
    GUO Li, LIAO Yu, LI Min, YUAN Hailin, LI Jun
    2017, 37(8):  2334-2342.  DOI: 10.11772/j.issn.1001-9081.2017.08.2334
    Aiming at the over-smoothing, residual noise around singular structures, contrast loss and staircase effect of common denoising methods, an image denoising model with an adaptive non-local data fidelity term and bilateral total variation regularization was proposed, which provides an adaptive non-local regularization energy function and the corresponding variational framework. Firstly, the data fidelity term was obtained by a non-local means filter with an adaptive weighting method. Secondly, bilateral total variation regularization was introduced into this framework, and a regularization factor was used to balance the data fidelity term and the regularization term. Finally, the optimal solutions for different noise statistics were obtained by minimizing the energy function, so as to reduce residual noise and correct excessive smoothing. Theoretical analysis and simulation results on both simulated and real noisy images show that the proposed model can deal with different statistical noise in images: its Peak Signal-to-Noise Ratio (PSNR) can be increased by up to 0.6 dB compared with the adaptive non-local means filter; compared with the total variation regularization algorithm, the subjective visual effect of the proposed model is obviously improved, the details of image texture and edges are well preserved during denoising, the PSNR is increased by up to 10 dB, and the Multi-Scale Structural Similarity index (MS-SSIM) is increased by 0.3. Therefore, the proposed model can better handle noise while preserving the high-frequency details of the image, and it has good practical value in video and image resolution enhancement.
    Shapelet classification method based on trend feature representation
    YAN Xinming, MENG Fanrong, YAN Qiuyan
    2017, 37(8):  2343-2348.  DOI: 10.11772/j.issn.1001-9081.2017.08.2343
    A shapelet is a recognizable time series subsequence that identifies local characteristics to classify time series accurately. The original shapelet discovery algorithm has low efficiency, and much work has focused on improving the efficiency of shapelet discovery. However, for time series with trend changes, using a typical time series representation for shapelet discovery tends to lose the trend information in the sequence. To solve this problem, a new trend-based diversified top-k shapelet classification method was proposed. Firstly, trend feature symbolization was used to represent the trend information of a time series. Then, the shapelet candidate set was obtained according to the trend symbols of the sequence. Finally, the k most representative shapelets were selected from the candidate set by introducing a diversified top-k query algorithm. The experimental results on time series classification show that, compared with traditional classification algorithms, the accuracy of the proposed method was improved on 11 experimental data sets; compared with the FastShapelet algorithm, efficiency was improved and the running time was shortened, especially for data with obvious trend information. The results indicate that the proposed method can effectively improve both the accuracy and the efficiency of time series classification.
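    The trend feature symbolization step can be sketched as follows; the three-symbol alphabet ('u'p, 'd'own, 's'table) and the tolerance `eps` are illustrative assumptions of this sketch, not the paper's exact encoding.

```python
def trend_symbols(series, eps=1e-8):
    """Map a numeric series to a string of trend symbols between
    consecutive points: 'u' (up), 'd' (down), 's' (stable)."""
    out = []
    for a, b in zip(series, series[1:]):
        if b - a > eps:
            out.append('u')
        elif a - b > eps:
            out.append('d')
        else:
            out.append('s')
    return ''.join(out)
```

    Shapelet candidates would then be drawn from substrings of this symbol sequence, so that subsequences sharing the same shape but different amplitudes map to the same candidate.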
    Clustering algorithm of time series with optimal u-shapelets
    YU Siqin, YAN Qiuyan, YAN Xinming
    2017, 37(8):  2349-2356.  DOI: 10.11772/j.issn.1001-9081.2017.08.2349
    Focusing on the low quality of u-shapelets (unsupervised shapelets) in u-shapelet-based time series clustering, a time series clustering method based on optimal u-shapelets, named DivUshapCluster, was proposed. Firstly, the influence of different subsequence quality assessment methods on u-shapelet-based time series clustering results was discussed. Secondly, the best-performing subsequence quality assessment method was used to evaluate the quality of the u-shapelet candidates. Then, diversified top-k query technology was used to remove redundant u-shapelets from the candidates and select the optimal u-shapelets. Finally, the optimal u-shapelet set was used to transform the original dataset, so as to improve the accuracy of time series clustering. The experimental results show that the DivUshapCluster method is superior to traditional time series clustering methods in clustering accuracy. Compared with the BruteForce method and the SUSh method, the average clustering accuracy of DivUshapCluster is increased by 18.80% and 19.38% respectively on 22 datasets. The proposed method can effectively improve the clustering accuracy of time series while ensuring overall efficiency.
    Fast algorithm for mining frequent patterns based on B-list
    LI Xiaolin, DU Tuo, LIU Biao
    2017, 37(8):  2357-2361.  DOI: 10.11772/j.issn.1001-9081.2017.08.2357
    In order to solve the problems of complex tree building and low mining efficiency in existing frequent pattern mining algorithms, a high-performance algorithm for mining frequent patterns, named B-List Frequent Pattern Mining (BLFPM), was proposed. A new data structure, the B-list (Building list), was employed to represent frequent itemsets, and frequent patterns were discovered directly by intersecting two B-lists without scanning the database. Aiming at the high time complexity of connecting two B-lists, a linear-time connection method was proposed to improve the time efficiency of BLFPM. Besides, a set-enumeration search tree and an efficient pruning strategy were adopted to greatly reduce the search space and speed up execution. Compared with the N-list and Subsume-based algorithm for mining Frequent Itemsets (NSFI) and the PrePost algorithm, the time efficiency of BLFPM was improved by about 12% to 29%, and its space efficiency was improved by about 10% to 24%. The experimental results show that BLFPM performs well on both sparse and dense databases.
    Resident behavior model analysis method based on multi-source travel data
    XU Xiaowei, DU Yi, ZHOU Yuanchun
    2017, 37(8):  2362-2367.  DOI: 10.11772/j.issn.1001-9081.2017.08.2362
    Mining and analyzing smart traffic card data can provide strong support for urban traffic construction and urban management. However, most existing research data cover only buses or subways, and research mainly focuses on macro travel patterns. In view of this problem, taking one city's traffic card data as an example, which contain the daily multi-source travel data of urban residents by bus, subway and taxi, the concept of a tour chain was put forward to model resident behavior. On this basis, periodic travel characteristics of different dimensions were given. Then a spatial periodic feature extraction method based on the longest common subsequence was proposed, and the travel rules of urban residents were analyzed by clustering. Finally, the effectiveness of this method was verified by five evaluation indexes defined on the rules; the clustering result was improved by 6.8% by applying the spatial periodic feature extraction method, which is helpful for discovering residents' behavior patterns.
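    The longest-common-subsequence measure underlying the spatial periodic feature can be sketched with the textbook dynamic program; comparing two days' station sequences this way is robust to occasional detours inserted into an otherwise repeated route.

```python
def lcs(a, b):
    """Length of the longest common subsequence of two sequences,
    e.g. two days' ordered station/stop sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

    Normalizing `lcs(day1, day2)` by the shorter sequence length gives a similarity in [0, 1] that can feed the clustering step.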
    HIC-MedRank:improved drug recommendation algorithm based on heterogeneous information network
    ZOU Linlin, LI Xueming, LI Xue, YUAN Hong, LIU Xing
    2017, 37(8):  2368-2373.  DOI: 10.11772/j.issn.1001-9081.2017.08.2368
    Abstract ( )   PDF (1110KB) ( )  
    References | Related Articles | Metrics
    With the rapid growth of medical literature, it is difficult for physicians to keep their knowledge up to date by reading biomedical publications. The MedRank algorithm can recommend influential medications from the literature by analyzing an information network, based on the assumption that "a good treatment is likely to be found in a good medical article published in a good journal, written by good author(s)", recommending the most effective drugs for all types of disease patients. But the algorithm still has several problems:1) the diseases, as the inputs, are not independent; 2) the outputs are not specific drugs; 3) some factors such as the publication time of an article are not considered; 4) there is no definition of "good" for articles, journals and authors. An improved algorithm named HIC-MedRank was proposed, which introduces the H-index of authors, the impact factor of journals and the citation count of articles as the criteria for defining good authors, journals and articles, and recommends antihypertensive agents for patients suffering from hypertension with Chronic Kidney Disease (CKD) by further considering the publication time, supporting institutions, publication type and other factors of articles. The experimental results on Medline datasets show that the drugs recommended by HIC-MedRank are more precise than those of MedRank, and are better recognized by attending physicians; the consistency rate with the JNC guidelines is up to 80%.
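    Of the three quality criteria the improved algorithm introduces, the H-index is the simplest to state precisely: the largest h such that the author has at least h papers with at least h citations each. A minimal sketch (the citation counts are made up):

```python
def h_index(citations):
    """H-index: largest h such that at least h papers have >= h citations.

    Sort counts in descending order; the index stops growing at the first
    position where the i-th paper has fewer than i citations.
    """
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4 and 3 times has H-index 4.
h = h_index([10, 8, 5, 4, 3])
```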
    User identification across multiple social networks based on information entropy
    WU Zheng, YU Hongtao, LIU Shuxin, ZHU Yuhang
    2017, 37(8):  2374-2380.  DOI: 10.11772/j.issn.1001-9081.2017.08.2374
    Abstract ( )   PDF (1186KB) ( )  
    References | Related Articles | Metrics
    The precision of user identification is low because subjective weighting algorithms ignore the specific meanings and effects of attributes in different applications. To solve this problem, an Information Entropy based Multiple Social Networks User Identification Algorithm (IE-MSNUIA) was proposed. Firstly, the data types and physical meanings of different attributes were analyzed, and a corresponding similarity calculation method was adopted for each. Secondly, the weight of each attribute was determined according to its information entropy, so that the latent information of each attribute could be fully exploited. Finally, all chosen attributes were integrated to determine whether an account pair matches. Theoretical analysis and experimental results show that, compared with machine-learning-based and subjective weighting algorithms, the proposed algorithm performs better on different datasets: its average precision reaches 97.2%, its average recall reaches 94.1%, and its average comprehensive evaluation metric reaches 95.6%. The proposed algorithm can accurately identify user accounts across multiple social networks.
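    The entropy-based weighting can be sketched with the classic entropy weight method: an attribute whose similarity scores are nearly constant across candidate account pairs carries little discriminating information and receives a small weight. A simplified illustration (the score matrix is hypothetical, and the exact formula in IE-MSNUIA may differ):

```python
import math

def entropy_weights(score_matrix):
    """Entropy weight method over a pairs-by-attributes similarity matrix.

    score_matrix[i][j] >= 0 is the similarity of candidate pair i on
    attribute j. Each column is normalized to a distribution, its entropy
    computed, and weights set proportional to (1 - entropy). Requires at
    least two rows.
    """
    n, m = len(score_matrix), len(score_matrix[0])
    k = 1.0 / math.log(n)
    raw = []
    for j in range(m):
        col = [score_matrix[i][j] for i in range(n)]
        total = sum(col) or 1.0
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        raw.append(1.0 - e)
    s = sum(raw) or 1.0
    return [w / s for w in raw]

# Attribute 0 (e.g. username similarity) varies across pairs; attribute 1
# (e.g. gender match) is constant, so all weight goes to attribute 0.
pairs = [[0.9, 0.5], [0.1, 0.5], [0.8, 0.5]]
w = entropy_weights(pairs)
```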
    Dynamic weighted real-time map matching algorithm considering spatio-temporal property
    ZHENG Linjang, LIU Xu, YI Bing
    2017, 37(8):  2381-2386.  DOI: 10.11772/j.issn.1001-9081.2017.08.2381
    Abstract ( )   PDF (891KB) ( )  
    References | Related Articles | Metrics
    Focusing on the difficulty that current real-time map matching algorithms cannot maintain both high efficiency and high accuracy, an improved dynamic weighted real-time map matching algorithm was proposed. Firstly, considering the time, speed, heading and direction constraints of Global Positioning System (GPS) points and the topological structure of the road network, a weighted model based on spatio-temporal analysis was constructed, which consists of a proximity weight, a heading weight, a direction weight and a connectivity weight. Then, according to the properties of the GPS points, a dynamic weighting coefficient model was created. Lastly, the best matching road segment was selected according to the confidence level of the current GPS point. Experiments were conducted on three city bus trajectories with a total length of 36 km in Chongqing: the average matching accuracy of the algorithm was 97.31% and the average matching delay per GPS point was 17.9 ms. The experimental results show that, compared with the contrast algorithms, the proposed algorithm has higher accuracy and efficiency, and performs better at matching Y-junctions and parallel roads.
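    The weighted model can be illustrated as a linear combination of the four weights; the candidate values and coefficients below are invented for illustration, and the paper's dynamic coefficient model would adjust the coefficients per GPS point rather than fix them:

```python
def match_score(cand, w_prox, w_head, w_dir, w_conn):
    """Weighted sum over the four terms of the matching model.

    Each candidate field is a normalized score in [0, 1]; the coefficients
    are illustrative, not the published values.
    """
    return (w_prox * cand["proximity"] + w_head * cand["heading"]
            + w_dir * cand["direction"] + w_conn * cand["connectivity"])

# Two candidate road segments for one GPS point (hypothetical scores):
# road_A is close and topologically connected to the previous match.
candidates = [
    {"id": "road_A", "proximity": 0.9, "heading": 0.8,
     "direction": 0.7, "connectivity": 1.0},
    {"id": "road_B", "proximity": 0.6, "heading": 0.9,
     "direction": 0.9, "connectivity": 0.2},
]
best = max(candidates, key=lambda c: match_score(c, 0.4, 0.2, 0.2, 0.2))
```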
    Density clustering based removal heuristic for vehicle routing problem
    YANG Wang, HE Guochao, WU Yan
    2017, 37(8):  2387-2394.  DOI: 10.11772/j.issn.1001-9081.2017.08.2387
    Abstract ( )   PDF (1337KB) ( )  
    References | Related Articles | Metrics
    Focusing on the large-scale vehicle routing problem with a heterogeneous fleet, a new neighborhood mapping method, namely a density clustering based removal heuristic, was proposed under the Adaptive Large Neighborhood Search (ALNS) framework. ALNS includes two phases, destruction and reconstruction, and improves the current solution by repeatedly destroying and reconstructing it. In the destruction phase, a route was randomly selected and its stores were grouped into clusters by density clustering, then stores were removed from the route according to the clusters. In the reconstruction phase, the Greedy or Regret-2 insertion algorithm was randomly chosen to place the removed stores into suitable routes. Test results on benchmark instances validate the effectiveness of the proposed method. Compared with other existing algorithms, the density clustering based removal heuristic under ALNS achieves a lower error rate and better solution quality; in real situations, the proposed algorithm can provide an optimized solution within a limited time.
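    The destroy-and-repair loop of ALNS can be sketched as follows; for brevity this toy uses random removal in place of the paper's density-clustering removal, a one-dimensional cost model, and greedy acceptance without the adaptive operator weighting of full ALNS:

```python
import random

def alns(initial, cost, destroy_ops, repair_ops, iters=300, seed=42):
    """Minimal ALNS skeleton: destroy part of the best solution, repair it,
    and keep the candidate only if it improves the cost."""
    rng = random.Random(seed)
    best, best_cost = initial, cost(initial)
    for _ in range(iters):
        partial, removed = rng.choice(destroy_ops)(best, rng)
        candidate = rng.choice(repair_ops)(partial, removed, rng)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

def tour_cost(route):
    # Total travel distance for stores placed on a line (toy cost model).
    return sum(abs(a - b) for a, b in zip(route, route[1:]))

def random_removal(route, rng):
    # Destroy: drop two randomly chosen stores from the route.
    route = list(route)
    removed = [route.pop(rng.randrange(len(route))) for _ in range(2)]
    return route, removed

def greedy_insert(route, removed, rng):
    # Repair: put each removed store back at its cheapest position.
    route = list(route)
    for store in removed:
        best_i = min(range(len(route) + 1),
                     key=lambda i: tour_cost(route[:i] + [store] + route[i:]))
        route.insert(best_i, store)
    return route

route, c = alns([5, 1, 9, 3, 7], tour_cost, [random_removal], [greedy_insert])
```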
    Tourism route recommendation based on dynamic clustering
    XIAO Chunjing, XIA Kewen, QIAO Yongwei, ZHANG Yuxiang
    2017, 37(8):  2395-2400.  DOI: 10.11772/j.issn.1001-9081.2017.08.2395
    Abstract ( )   PDF (916KB) ( )  
    References | Related Articles | Metrics
    In session-based Collaborative Filtering (CF), a user's interaction history is divided into sessions using a fixed time window, and user preference is expressed by the sequence of sessions. In tourism data, however, some sessions contain no interaction, and the high sparsity makes it difficult to select neighbors. To alleviate data sparsity and better exploit the characteristics of tourism data, a new tourism route recommendation method based on dynamic clustering was proposed. Firstly, the characteristics distinguishing tourism data from other standard data were analyzed. Secondly, the user's interaction history was divided into sessions by a variable time window using dynamic clustering, and the user preference model was built by combining the probabilistic topic distribution obtained by Latent Dirichlet Allocation (LDA) from each session with time penalty weights. Then, the set of neighbors and candidate routes was obtained through user feature vectors reflecting tourist age, route season and price. Finally, routes were recommended according to the relevance of the probabilistic topic distributions between candidate routes and tourists. The variable time window not only alleviates data sparsity, but also yields the optimal number of windows automatically from the data; and using user feature vectors instead of tourism-data similarity to select neighbors avoids the computational difficulty caused by data sparsity. The experimental results on real tourism data indicate that the proposed method not only adapts to the characteristics of tourism data, but also improves recommendation accuracy.
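    The variable time window can be approximated by splitting a user's interaction timestamps wherever a large gap occurs, a simplified stand-in for the dynamic clustering described above (the gap threshold and timestamps are illustrative):

```python
def split_sessions(timestamps, max_gap_hours=6.0):
    """Split sorted, non-empty interaction timestamps (in hours) into
    sessions wherever consecutive events are more than max_gap_hours apart.

    Unlike a fixed time window, session boundaries adapt to where the
    activity actually pauses.
    """
    sessions, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > max_gap_hours:
            sessions.append(current)
            current = []
        current.append(t)
    sessions.append(current)
    return sessions

# Hypothetical browse times (hours since start): two bursts a day apart.
ts = [0.0, 0.5, 1.2, 26.0, 26.3]
groups = split_sessions(ts)
```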
    Algorithm for exploring coal mine disaster environment by multi-UAV cooperation
    LIU Dong, TONG Minming, LU Hongrui
    2017, 37(8):  2401-2404.  DOI: 10.11772/j.issn.1001-9081.2017.08.2401
    Abstract ( )   PDF (749KB) ( )  
    References | Related Articles | Metrics
    Focusing on the low efficiency of rescue robots in a coal mine disaster environment, an improved boundary exploration algorithm based on multiple Unmanned Aerial Vehicles (multi-UAV) was proposed. Building on the utility-value boundary exploration algorithm, the flight angle parameter of the UAV was taken into account, and a distribution function was introduced as a judgment mechanism to construct the objective function; the ant colony algorithm was then used to solve the objective function. Simulation experiments were carried out on a grid map in Matlab. The simulation results show that the improved boundary exploration algorithm reduces repeated coverage and crowding and shortens the exploration time, while the energy required by the UAVs is reduced by about 30%, thus improving the overall exploration efficiency of the multi-UAV system.
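    An objective of the kind described, rewarding nearby frontier cells, small turning angles, and cells not already targeted by other UAVs, might look as follows; the functional form and weights are assumptions for illustration, not the paper's exact objective function:

```python
import math

def frontier_utility(dist, turn_angle, assigned, alpha=1.0, beta=0.5, gamma=0.8):
    """Illustrative utility of a frontier cell for one UAV.

    dist: distance from the UAV to the cell; turn_angle: required change in
    flight direction (radians); assigned: number of other UAVs already
    heading to this cell (the 'distribution' term discouraging crowding).
    """
    return (math.exp(-alpha * dist)
            * math.exp(-beta * abs(turn_angle))
            / (1.0 + gamma * assigned))

# A close, unassigned frontier beats a distant, crowded one.
u_near = frontier_utility(dist=2.0, turn_angle=0.2, assigned=0)
u_far = frontier_utility(dist=5.0, turn_angle=1.0, assigned=2)
```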
    Residential electricity consumption analysis based on regularized matrix factorization
    WANG Yang, WU Fan, YAO Zongqiang, LIU Jie, LI Dong
    2017, 37(8):  2405-2409.  DOI: 10.11772/j.issn.1001-9081.2017.08.2405
    Abstract ( )   PDF (757KB) ( )  
    References | Related Articles | Metrics
    Focusing on the features of electricity user groups, a residential electricity consumption analysis method based on geographically regularized matrix factorization was proposed for the smart grid, to explore the characteristics of electricity users and provide decision support for personalized power dispatching. In the proposed algorithm, customers were first mapped into a hidden feature space that represents the characteristics of their electricity usage behavior, and the k-means clustering algorithm was then employed to segment customers in that space. In particular, geographic information was innovatively introduced as a regularization factor of the matrix factorization, which makes the hidden feature space not only satisfy the orthogonality characteristics of user groups, but also map geographically close users to nearby points in the hidden feature space, consistent with the real physical space. To verify its effectiveness, the proposed algorithm was applied to a real residential data analysis and mining task of the smart grid application in the Sino-Singapore Tianjin Eco-City (SSTEC). The experimental results show that, compared with baseline algorithms including the Vector Space Model (VSM) and the Nonnegative Matrix Factorization (NMF) algorithm, the proposed algorithm obtains better user segmentation clustering results and reveals distinct power consumption modes of different user groups, which helps to improve the management and service level of the smart grid.
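    A geographic regularizer of this kind is often written as mu * tr(U^T L U), where L is the graph Laplacian of the geographic adjacency, so that geographically linked users are pulled together in the hidden feature space. A plain gradient-descent sketch with toy load curves (this is an assumed formulation, not the paper's solver):

```python
import numpy as np

def geo_regularized_mf(R, G, k=2, lam=0.05, mu=0.1, lr=0.02, iters=1000, seed=0):
    """Factorize the user-by-time load matrix R ~ U @ V.T.

    Adds mu * tr(U.T @ L @ U), with L the Laplacian of the geographic
    adjacency G, so that users with G[i, j] = 1 end up close in the hidden
    feature space; lam is a plain L2 weight-decay term.
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(m, k))
    L = np.diag(G.sum(axis=1)) - G  # graph Laplacian of the geo adjacency
    for _ in range(iters):
        E = R - U @ V.T  # reconstruction residual
        U_new = U + lr * (E @ V - lam * U - mu * (L @ U))
        V = V + lr * (E.T @ U - lam * V)
        U = U_new
    return U, V

# Toy data: users 0-1 and 2-3 have similar load curves and are geo-neighbors.
R = np.array([[5, 1, 1], [5, 1, 1], [1, 1, 5], [1, 1, 5]], float)
G = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
U, V = geo_regularized_mf(R, G)
```

The rows of U can then be fed to k-means, as in the method above.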
    Dimension reduction method of brain network state observation matrix based on Spectral Embedding
    DAI Zhaokun, LIU Hui, WANG Wenzhe, WANG Yanan
    2017, 37(8):  2410-2415.  DOI: 10.11772/j.issn.1001-9081.2017.08.2410
    Abstract ( )   PDF (1084KB) ( )  
    References | Related Articles | Metrics
    Since the brain network state observation matrix reconstructed from functional Magnetic Resonance Imaging (fMRI) is high-dimensional and hard to characterize, a dimensionality reduction method based on Spectral Embedding was presented. Firstly, a Laplacian matrix was constructed from the similarity measurements between samples. Secondly, to map the dataset from high dimension to low dimension, the first two main eigenvectors obtained from the Laplacian matrix decomposition were selected to construct a two-dimensional eigenvector space. The method was applied to reduce the dimensionality of the matrix and visualize it in two-dimensional space, and the results were evaluated by cluster validity indexes. Compared with dimensionality reduction algorithms such as Principal Component Analysis (PCA), Locally Linear Embedding (LLE) and Isometric Mapping (Isomap), the mapped points in the low-dimensional space obtained by the proposed method show clearer category structure. According to the cluster validity indexes, compared with the Multi-Dimensional Scaling (MDS) and t-distributed Stochastic Neighbor Embedding (t-SNE) algorithms, the Di index (the average distance among within-class samples) of the proposed method was decreased by 87.1% and 65.2% respectively, and the Do index (the average distance among between-class samples) was increased by 351.3% and 25.5% respectively. Finally, the visualization results over a number of samples show a certain regularity, validating the effectiveness and generality of the proposed method.
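    The core of Spectral Embedding, building a Laplacian from pairwise similarities and keeping its smallest non-trivial eigenvectors, can be sketched with NumPy; the similarity matrix below is a toy two-cluster example, and the constant eigenvector is skipped:

```python
import numpy as np

def spectral_embedding(S, dim=2):
    """Embed samples in `dim` coordinates via the normalized graph Laplacian.

    S is a symmetric, non-negative similarity matrix with positive row sums.
    Eigenvalues from eigh come in ascending order, so columns 1..dim are the
    smallest non-trivial eigenvectors (column 0 is the trivial one).
    """
    d = S.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]

# Two loosely linked clusters of three samples each (toy similarity matrix).
S = np.array([[0, 1, 1, .01, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [.01, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], float)
Y = spectral_embedding(S, dim=2)
```

The first coordinate (the Fiedler vector) separates the two clusters by sign.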
    Excitation piecewise expansion method for speech bandwidth expansion based on hidden Markov model
    GUO Leiyong, LI Yu, LIN Shengyi, TAN Hongzhou
    2017, 37(8):  2416-2420.  DOI: 10.11772/j.issn.1001-9081.2017.08.2416
    Abstract ( )   PDF (810KB) ( )  
    References | Related Articles | Metrics
    Speech bandwidth extension enhances auditory quality by artificially recovering the lost components in the high-band spectrum of narrow-band speech. Aiming at the problem of excitation extension in the source-filter speech extension model, a piecewise extension method was proposed: the upper part of the narrow-band excitation spectrum was used as the excitation source for the lower part of the extension band, and white noise with energy equal to that of the narrow-band excitation frame was used for the upper part. Finally, the wideband excitation signal was composed of these two components together with the original narrow-band one. Experimental results of wideband speech reconstruction with Hidden Markov Model (HMM) based spectrum envelope estimation show that the proposed method is superior to the spectrum-shift excitation extension method.
    Modeling of power amplifier based on dynamic X-parameter of new two-dimensional kernel function
    NAN Jingchang, CUI Hongyan
    2017, 37(8):  2421-2426.  DOI: 10.11772/j.issn.1001-9081.2017.08.2421
    Abstract ( )   PDF (864KB) ( )  
    References | Related Articles | Metrics
    In order to accurately describe the characteristics of a Radio Frequency (RF) power amplifier with memory effects, a new dynamic X-parameter modeling method was proposed on the basis of the traditional dynamic X-parameter model, combining long-term and short-term memory effects. The new model preserves the static kernel function of the X-parameter model, while a nonlinear memory function extracted with the two-memory-path model replaces the dynamic kernel function. A novel FeedBack (FB) structure was proposed in which the output signal is a bivariate function of amplitude and frequency, and a time-varying frequency variable was introduced to simplify the dynamic kernel to a two-dimensional kernel function. A power amplifier was designed and modeled with the MW6S010N transistor. Simulation results indicate that the new model correctly represents the characteristics of the power amplifier under single-tone large-signal and Code Division Multiple Access (CDMA) excitation. The Normalized Mean Square Error (NMSE) of the proposed method was decreased by 8.0 dB, 6.3 dB and 2.5 dB respectively, in comparison with the static X-parameter model, the traditional dynamic X-parameter model and the FeedForward (FF) structure X-parameter model. This proves that the proposed model fits the characteristics of a power amplifier with nonlinear memory effects more accurately.
    Time synchronization of ship fault recording device based on first-order Lagrange interpolation
    HUANG Leiming, WANG Liming, CHEN Zhongqin
    2017, 37(8):  2427-2432.  DOI: 10.11772/j.issn.1001-9081.2017.08.2427
    Abstract ( )   PDF (947KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problem that external clock synchronization methods are difficult to implement, susceptible to interference and insecure under marine conditions, a time synchronization method for a ship fault recording device based on interpolation was proposed. Firstly, the recording host extracted the original sampling values and the rated delay information from the packets uploaded by the merging units. Secondly, the real sampling time was restored after time correction. Finally, first-order Lagrange interpolation was used to obtain the resampled values at the synchronous instants. Matlab simulation shows that the amplitude error can be reduced by appropriately increasing the sampling frequency, and the equivalent delay corresponding to the phase error is no more than 2 μs. A prototype test shows that the error of the RMS values does not exceed ±0.01% under normal conditions; after the fault signal is added, the errors of the fundamental and the 5th harmonic are less than ±0.006% and -0.5% respectively. Moreover, both equivalent delays are less than -3.5 μs, and the synchronization accuracy complies with the T4 level specified in IEC 61850.
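    First-order Lagrange interpolation between the two real samples surrounding each synchronous instant reduces to the familiar linear form; a sketch of the resampling step (the signal values are invented):

```python
def lagrange_linear(t0, y0, t1, y1, t):
    """First-order Lagrange interpolation between two samples:
    P(t) = y0*(t - t1)/(t0 - t1) + y1*(t - t0)/(t1 - t0)."""
    return y0 * (t - t1) / (t0 - t1) + y1 * (t - t0) / (t1 - t0)

def resample(times, values, sync_times):
    """Re-sample a merging unit's record at the recorder's synchronous
    instants, interpolating between the two surrounding real samples.
    Assumes `times` is sorted and each sync instant lies within its range."""
    out = []
    for t in sync_times:
        i = max(j for j, tj in enumerate(times) if tj <= t)
        i = min(i, len(times) - 2)  # clamp so (i, i+1) is a valid pair
        out.append(lagrange_linear(times[i], values[i],
                                   times[i + 1], values[i + 1], t))
    return out

# Real samples at t = 0, 1, 2 of a toy ramp, resampled at the sync instants.
sync = resample([0.0, 1.0, 2.0], [0.0, 2.0, 4.0], [0.5, 1.5])
```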
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn