
Table of Contents

    10 June 2018, Volume 38 Issue 6
    Review of spike sequence learning methods for spiking neurons
    XU Yan, XIONG Yingjun, YANG Jing
    2018, 38(6):  1527-1534.  DOI: 10.11772/j.issn.1001-9081.2017112768
    Abstract | PDF (1516KB)
    The spiking neuron is a novel artificial neuron model. The goal of its supervised learning is to train the neuron to fire a prescribed spike train that expresses specific information through precise temporal coding, hence the name spike sequence learning. Because spike sequence learning for a single neuron has significant application value, diverse theoretical foundations and many influencing factors, the existing spike sequence learning methods were reviewed and contrasted. Firstly, the basic concepts of spiking neuron models and spike sequence learning were introduced. Then, the typical spike sequence learning methods were described in detail, and the theoretical basis and synaptic weight adjustment rule of each method were pointed out. Finally, the performance of these learning methods was compared through experiments, the characteristics of each method were systematically summarized, the current research situation of spike sequence learning was discussed, and future directions of development were pointed out. The research results are helpful for the comprehensive application of spike sequence learning methods.
    Evaluation method for simulation credibility based on cloud model
    ZHENG Yaoyu, FANG Yangwang, WEI Xianzhi, CHEN Shaohua, GAO Xiang, WANG Hongke, PENG Weishi
    2018, 38(6):  1535-1541.  DOI: 10.11772/j.issn.1001-9081.2017122944
    Abstract | PDF (1043KB)
    The cloud model is not suitable for non-normal distributions. To solve this problem, a new one-dimensional backward cloud algorithm based on the uniform distribution was proposed and applied to the credibility evaluation of a simulation system. Firstly, the importance of simulation credibility was expounded, and credibility evaluation indexes for the evaluation results of a type of equipment's anti-jamming capability were established based on an actual project background. Secondly, the system was evaluated by the cloud-model-based simulation credibility evaluation method, and the evaluation method was improved. Finally, to improve the evaluation method, a one-dimensional backward cloud algorithm based on the uniform distribution was derived, and an experiment was designed to verify the validity of the algorithm. The simulation results show that the average absolute error of the proposed backward cloud algorithm is less than 5% for large data volumes, so it has high applicability and offers an approach toward perfecting cloud model theory. In addition, the simulation credibility evaluation results show that the proposed method has high accuracy and captures the dispersion and agglomeration information of the data, so it can provide more comprehensive evaluation and prediction of erroneous data.
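    The backward-cloud idea above, recovering distribution parameters from sample data, can be illustrated for the uniform case. The sketch below is a minimal assumed reconstruction, not the paper's exact algorithm: for U(a, b) the endpoints follow directly from the sample mean and standard deviation.

```python
import random
import statistics


def backward_uniform(samples):
    """Recover the endpoints (a, b) of a uniform distribution from samples.

    For U(a, b): mean = (a + b) / 2 and stdev = (b - a) / sqrt(12),
    so the endpoints follow from the sample mean and standard deviation.
    """
    m = statistics.fmean(samples)
    s = statistics.stdev(samples)
    half_width = s * 12 ** 0.5 / 2
    return m - half_width, m + half_width


random.seed(42)
data = [random.uniform(2.0, 8.0) for _ in range(100000)]
a, b = backward_uniform(data)   # should approach (2.0, 8.0)
```

    With 100 000 samples the recovered endpoints typically land within a few hundredths of the true values, consistent with the small average absolute error reported above.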
    Application of dual-channel convolutional neural network in sentiment analysis
    LI Ping, DAI Yueming, WU Dinghui
    2018, 38(6):  1542-1546.  DOI: 10.11772/j.issn.1001-9081.2017122926
    Abstract | PDF (780KB)
    A single-channel Convolutional Neural Network (CNN) views text from only one perspective and therefore cannot fully learn its feature information. To solve this problem, a new Dual-Channel CNN (DCCNN) algorithm was proposed. Firstly, word vectors were trained by Word2Vec, and the semantic information of a sentence was obtained through the word vectors. Secondly, convolution operations were carried out over two different channels, one fed with character vectors and the other with word vectors; the fine-grained character vectors assisted the word vectors in capturing deep semantic information. Finally, convolution kernels of different sizes were used to find higher-level abstract features within the sentence. The experimental results show that the proposed DCCNN algorithm can accurately identify the sentiment polarity of text; its accuracy and F1 value are above 95%, significantly better than those of logistic regression, Support Vector Machine (SVM) and single-channel CNN.
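    As a rough illustration of the dual-channel idea, the numpy sketch below convolves a word-vector channel and a character-vector channel separately, max-pools each over time, and concatenates the pooled features. The embedding dimensions, kernel sizes and random filters are illustrative assumptions, not the trained DCCNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_maxpool(seq, kernel):
    """Valid 1-D convolution over time followed by ReLU and max-over-time pooling.

    seq: (T, d) sequence of embeddings; kernel: (k, d, f) filter bank.
    Returns an (f,)-dimensional feature vector.
    """
    T, d = seq.shape
    k, _, f = kernel.shape
    out = np.stack([np.tensordot(seq[t:t + k], kernel, axes=([0, 1], [0, 1]))
                    for t in range(T - k + 1)])      # (T-k+1, f)
    return np.maximum(out, 0).max(axis=0)            # ReLU + max pooling

# Toy inputs: a 10-word sentence with 8-dim word vectors (one channel)
# and its 30-character form with 4-dim character vectors (other channel).
words = rng.standard_normal((10, 8))
chars = rng.standard_normal((30, 4))

word_feat = conv1d_maxpool(words, rng.standard_normal((3, 8, 16)))
char_feat = conv1d_maxpool(chars, rng.standard_normal((5, 4, 16)))
features = np.concatenate([word_feat, char_feat])    # fused dual-channel feature
```

    In the full model this fused vector would feed a softmax sentiment classifier; here it only demonstrates how the two channels contribute complementary features.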
    Heterogeneous compound transfer learning method for video content annotation
    TAN Yao, RAO Wenbi
    2018, 38(6):  1547-1553.  DOI: 10.11772/j.issn.1001-9081.2017112815
    Abstract | PDF (1021KB)
    Traditional machine learning requires a large amount of manual annotation to train a model, and most current transfer learning methods are applicable only to homogeneous spaces. To solve these problems, a new Heterogeneous Compound Transfer Learning (HCTL) method for video content annotation was proposed. Firstly, based on the correspondence between videos and images, Canonical Correlation Analysis (CCA) was applied to make the feature spaces of the image domain (source domain) and the video domain (target domain) isomorphic. Then, based on the idea of minimizing the cost of projecting these two feature spaces into a common space, a transformation matrix aligning the source domain feature space to the target domain feature space was found. Finally, the source domain features were translated into the target domain feature space by this alignment matrix, realizing knowledge transfer and completing the video content annotation task. The mean annotation precision of HCTL on the Kodak database reaches 35.81%, which is 58.03%, 23.06%, 45.04%, 6.70%, 15.52%, 13.07% and 6.74% higher than that of Standard Support Vector Machine (S_SVM), Domain Adaptation Support Vector Machine (DASVM), Heterogeneous Transductive Transfer Learning (HTTL), Cross Domain Structural Model (CDSM), Domain Selection Machine (DSM), Multi-domain Adaptation with Heterogeneous Sources (MDA-HS) and Discriminative Correlation Analysis (DCA) methods respectively; on the Columbia Consumer Video (CCV) database it reaches 20.73%, with relative increases of 133.71%, 37.28%, 14.34%, 24.88%, 16.40%, 20.73% and 12.48% respectively. The experimental results show that the homogenize-then-align compound transfer idea can effectively improve recognition accuracy in heterogeneous domain adaptation problems.
    Semantic segmentation of blue-green algae based on deep generative adversarial net
    YANG Shuo, CHEN Lifang, SHI Yu, MAO Yiming
    2018, 38(6):  1554-1561.  DOI: 10.11772/j.issn.1001-9081.2017122872
    Abstract | PDF (1306KB)
    Concerning the insufficient accuracy of traditional image segmentation algorithms on blue-green algae images, a new network structure named Deep Generative Adversarial Net (DGAN), based on Deep Neural Network (DNN) and Generative Adversarial Net (GAN), was proposed. Firstly, based on the Fully Convolutional Network (FCN), a 12-layer FCN was constructed as the Generator (G), which learned the data distribution and generated segmentation results for blue-green algae images (Fake). Secondly, a 5-layer Convolutional Neural Network (CNN) was constructed as the Discriminator (D) to distinguish the segmentation results produced by the generator (Fake) from the manually annotated ground-truth segmentations (Label); G tried to generate Fakes that deceive D, while D tried to detect Fakes and penalize G. Finally, through the adversarial training of the two networks, better segmentation results were obtained, since the Fakes generated by G could fool D. Training and test results on an image set of 3075 blue-green algae images show that the proposed DGAN is far ahead of iterative threshold segmentation in precision, recall and F1 score, exceeds other DNN algorithms such as FCNNet (SHELHAMER E, LONG J, DARRELL T. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640-651) and Deeplab (CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs. Computer Science, 2014(4): 357-361) by more than 4 percentage points, and yields more accurate segmentation results. In terms of speed, DGAN takes 0.63 s per image, slightly slower than FCNNet's 0.46 s but much faster than Deeplab's 1.31 s. The balanced segmentation accuracy and speed of DGAN provide a feasible technical scheme for image-based semantic segmentation of blue-green algae.
    Intelligent vehicle path tracking algorithm based on cubic B-spline curve fitting
    ZHANG Yonghua, DU Yu, PAN Feng, WEI Yue
    2018, 38(6):  1562-1567.  DOI: 10.11772/j.issn.1001-9081.2017102563
    Abstract | PDF (947KB)
    The tangential angle acquisition of traditional geometric path tracking algorithms depends on high-precision inertial navigation equipment. To solve this problem, a new path tracking algorithm based on cubic B-spline curve fitting was proposed. Firstly, a smooth path was generated by fitting the discrete path points in the prior map. Then, discrete path points were regenerated by interpolation according to the path equation, and the tangential angle at each point was calculated, realizing the optimization and tracking of the multi-sensor fusion path. On a real intelligent vehicle experiment platform, 20 km/h low-speed circular and 60 km/h high-speed straight-path tracking tests were carried out in two real road scenes. Under these two typical scenarios of low speed with large curvature and high speed on a straight path, the maximum lateral error of path tracking stays within 0.3 m. The experimental results show that the proposed algorithm effectively removes the traditional geometric path tracking algorithm's dependence on inertial navigation devices while maintaining good tracking performance.
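    The fit-then-interpolate step can be sketched with a uniform cubic B-spline evaluated in matrix form: dense points and tangential angles are regenerated directly from the curve equation, with no inertial sensor involved. The waypoints and sampling density below are illustrative only.

```python
import math

# Uniform cubic B-spline basis matrix; the 1/6 factor is applied below.
M = [[-1, 3, -3, 1],
     [3, -6, 3, 0],
     [-3, 0, 3, 0],
     [1, 4, 1, 0]]

def segment(ctrl4, t):
    """Point and tangential angle of one spline segment at t in [0, 1]."""
    pos_w = [sum(t ** (3 - r) * M[r][c] for r in range(4)) / 6 for c in range(4)]
    vel_w = [sum((3 - r) * t ** (2 - r) * M[r][c] for r in range(3)) / 6
             for c in range(4)]
    x = sum(w * p[0] for w, p in zip(pos_w, ctrl4))
    y = sum(w * p[1] for w, p in zip(pos_w, ctrl4))
    dx = sum(w * p[0] for w, p in zip(vel_w, ctrl4))
    dy = sum(w * p[1] for w, p in zip(vel_w, ctrl4))
    return (x, y), math.atan2(dy, dx)   # tangential angle from the derivative

def resample(path, per_segment=10):
    """Regenerate dense path points with tangential angles from sparse ones."""
    out = []
    for i in range(len(path) - 3):
        for j in range(per_segment):
            out.append(segment(path[i:i + 4], j / per_segment))
    return out

# Collinear waypoints along x: every tangential angle should be zero.
waypoints = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
dense = resample(waypoints)
```

    A real implementation would fit the control points to mapped waypoints in a least-squares sense; here the waypoints are used directly as control points for brevity.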
    Human posture detection method based on long short term memory network
    ZHENG Yi, LI Feng, ZHANG Li, LIU Shouyin
    2018, 38(6):  1568-1574.  DOI: 10.11772/j.issn.1001-9081.2017112831
    Abstract | PDF (1094KB)
    Concerning the problem that, under the network structure of a Recurrent Neural Network (RNN), distant historical signals cannot be transmitted to the current time step, the Long Short Term Memory (LSTM) network was proposed as a variant of RNN. While inheriting the RNN's excellent memory ability for time series, LSTM overcomes their long-term dependence problem and performs remarkably in natural language processing and speech recognition. Since human behavior data, as time series, also exhibit the long-term dependence problem, and the traditional sliding-window data collection prevents real-time detection, LSTM was extended and applied to human posture detection, and a human posture detection method based on LSTM was proposed. Using real-time data collected by the accelerometers, gyroscopes, barometers and orientation sensors of smartphones, a human posture dataset with a total of 3336 manually annotated samples was produced. Five kinds of daily behavior postures, namely walking, running, going upstairs, going downstairs and calmness, as well as four kinds of sudden behavior postures, namely falling, standing, sitting and jumping, were predicted and classified. The LSTM network was compared with commonly used methods such as shallow learning algorithms, deep fully connected neural networks and convolutional neural networks. The experimental results show that, with the end-to-end deep learning approach, the proposed method improves the accuracy by 4.49 percentage points over the other models trained on the produced dataset. The generalization ability of the proposed network structure is verified, and it is more suitable for posture detection.
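    A single LSTM cell step, written out in numpy, shows how the gates carry distant history through the cell state. The dimensions (6 sensor channels, hidden size 4, a 20-step window) and random weights are illustrative assumptions, not the trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates let distant history flow through c."""
    z = W @ x + U @ h + b                      # (4*H,) pre-activations
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))               # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))          # forget gate
    o = 1 / (1 + np.exp(-z[2 * H:3 * H]))      # output gate
    g = np.tanh(z[3 * H:])                     # candidate state
    c_new = f * c + i * g                      # long-term memory update
    h_new = o * np.tanh(c_new)                 # short-term output
    return h_new, c_new

D, H = 6, 4                                    # e.g. 6 smartphone sensor channels
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)

h = c = np.zeros(H)
for x in rng.standard_normal((20, D)):         # a 20-step sensor window
    h, c = lstm_step(x, h, c, W, U, b)
```

    In the posture detector the final hidden state would feed a softmax over the nine posture classes; the cell state c is what lets signals from early in the window influence that decision.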
    Spatio-temporal index method for moving objects in road network based on HBase
    FENG Jun, LI Dingsheng, LU Jiamin, ZHANG Lixia
    2018, 38(6):  1575-1583.  DOI: 10.11772/j.issn.1001-9081.2017122977
    Abstract | PDF (1599KB)
    HBase supports only key-based queries and is ill-suited to the multidimensional queries of moving objects in road networks, which makes index storage and querying inefficient. To solve this problem, an efficient HBase indexing framework for Road network Moving objects (RM-HBase) was designed and implemented on top of the HBase storage structure. Firstly, the upper-layer HMaster and lower-layer HRegionServer of the native HBase index structure were improved to solve the hot-spot distribution problem of distributed cluster data and improve the query efficiency of spatial data. Secondly, a road network moving object index, the Road Network tree (RN-tree), was proposed to solve the "dead space" problem in space division and, at the same time, improve the query efficiency for road sections. Then, based on the above improvements, query algorithms for spatio-temporal range queries, spatio-temporal K Nearest Neighbor (KNN) queries and moving object trajectory queries were designed. Finally, the Spatial-TEmporal HBase IndeX (STEHIX) framework based on the HBase distributed database was selected as the baseline, and the performance of RM-HBase was analyzed in terms of both the index framework and the query algorithms. The experimental results show that the proposed RM-HBase outperforms STEHIX in both data distribution balance and the query performance of the spatio-temporal query algorithms, and it helps to improve the efficiency of spatio-temporal indexing for massive road network moving object data.
    Design of secondary indexes in HBase based on memory
    CUI Chen, ZHENG Linjiang, HAN Fengping, HE Mujun
    2018, 38(6):  1584-1590.  DOI: 10.11772/j.issn.1001-9081.2017112777
    Abstract | PDF (1073KB)
    In the age of big data, HBase, which can store massive data, is widely used. However, HBase only indexes the rowkey and creates no indexes on non-rowkey columns, which severely limits the efficiency of complex conditional queries. To solve this problem, a memory-based secondary index scheme for HBase was proposed. Indexes mapping the queried columns to rowkeys were established and stored in a memory environment built with Spark. At query time, the rowkey was first obtained from the index and then used to fetch the corresponding record quickly from HBase. The index type was determined by the column's cardinality and by whether range queries were needed, and different index types were constructed for the three resulting situations. Meanwhile, Spark's in-memory computation and parallelization were used to improve index query efficiency. The experimental results show that the proposed secondary index achieves better query performance, with query times lower than those of Solr-based secondary indexes. The proposed scheme solves the low query efficiency caused by the lack of non-rowkey indexes in HBase and improves the query efficiency of big data analysis based on HBase storage.
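    The value-to-rowkey index idea can be sketched in a few lines of Python. A plain dict stands in for the Spark memory store, the table contents are made up, and only the exact-match (low-cardinality) index type is shown.

```python
from collections import defaultdict

class MemorySecondaryIndex:
    """In-memory secondary index: column value -> set of rowkeys.

    Mimics the two-step lookup described above: resolve rowkeys from the
    in-memory index first, then fetch the full records by rowkey (a fast
    direct get in HBase).
    """

    def __init__(self, table, column):
        self.table = table                      # rowkey -> record dict
        self.index = defaultdict(set)
        for rowkey, record in table.items():
            self.index[record[column]].add(rowkey)

    def query(self, value):
        rowkeys = sorted(self.index.get(value, ()))
        return [self.table[rk] for rk in rowkeys]

table = {
    "r1": {"city": "Chongqing", "speed": 60},
    "r2": {"city": "Wuhan", "speed": 45},
    "r3": {"city": "Chongqing", "speed": 80},
}
idx = MemorySecondaryIndex(table, "city")
hits = idx.query("Chongqing")
```

    Without the index, the same query would require a full table scan with a column filter; with it, only the matching rowkeys are fetched.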
    New ensemble classification algorithm for data stream with noise
    YUAN Quan, GUO Jiangfan
    2018, 38(6):  1591-1595.  DOI: 10.11772/j.issn.1001-9081.2017122900
    Abstract | PDF (838KB)
    Concerning concept drift and noise in data streams, a new incremental-learning ensemble classification algorithm for data streams was proposed. Firstly, a noise filtering mechanism was introduced to filter out noise. Then, a hypothesis testing method was introduced to detect concept drift, and an incremental C4.5 decision tree was used as the base classifier to construct a weighted ensemble model. Finally, incremental learning of examples was realized and the classification model was updated dynamically. The experimental results show that the detection accuracy of the proposed ensemble classifier for concept drift reaches 95%-97%, and its noise immunity in data streams stays above 90%. The proposed algorithm has higher classification accuracy and performs better in both concept drift detection accuracy and noise immunity.
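    A hypothesis test for concept drift can be sketched as a two-proportion z-test comparing the classifier's error rate on a reference window with that on a recent window. The window sizes and the 99% one-sided threshold are illustrative assumptions, not necessarily the test used in the paper.

```python
import math

def drift_detected(err_old, n_old, err_new, n_new, z_crit=2.58):
    """Two-proportion z-test on classifier error counts of two windows.

    Flags concept drift when the recent window's error rate is
    significantly higher than the reference window's (~99% level).
    """
    p = (err_old + err_new) / (n_old + n_new)        # pooled error rate
    se = math.sqrt(p * (1 - p) * (1 / n_old + 1 / n_new))
    z = (err_new / n_new - err_old / n_old) / se
    return z > z_crit

stable = drift_detected(50, 1000, 55, 1000)   # 5% vs 5.5%: no drift
drift = drift_detected(50, 1000, 200, 1000)   # 5% vs 20%: drift
```

    On a detected drift the ensemble would retrain or re-weight its base classifiers; noise filtering upstream keeps isolated mislabeled examples from inflating the recent error rate and triggering false alarms.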
    Efficient block-based sampling algorithm for aggregation query processing on duplicate charged records
    PAN Mingyu, ZHANG Lu, LONG Guobiao, LI Xianglong, MA Dongxue, XU Liang
    2018, 38(6):  1596-1600.  DOI: 10.11772/j.issn.1001-9081.2017112632
    Abstract | PDF (982KB)
    Existing query analysis methods usually treat entity resolution as an offline preprocessing step that cleans the whole dataset. However, as data sizes keep growing, this offline cleaning mode with high computational complexity can hardly meet the real-time analysis needs of most applications. To solve the problem of aggregation queries over duplicate charged records, a new method integrating entity resolution with approximate aggregation query processing was proposed. Firstly, a block-based sampling strategy was adopted to collect samples. Then, an entity resolution method was used to identify duplicate entities among the sampled records. Finally, an unbiased estimate of the aggregation result was reconstructed according to the entity resolution results. The proposed method avoids the time cost of resolving all entities and returns query results that satisfy user needs by resolving only a small number of sampled records. The experimental results on both real and synthetic datasets demonstrate the efficiency and reliability of the proposed method.
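    The sample-then-resolve estimator can be sketched as follows. The block layout, charge values and `same_entity` key function are made up, and for simplicity duplicates are assumed not to straddle blocks, so per-block deduplication scaled by the inverse sampling fraction is unbiased under these assumptions.

```python
import random

def approx_dedup_sum(blocks, m, same_entity, seed=7):
    """Estimate a deduplicated SUM by sampling m of the blocks.

    Duplicates are resolved only inside sampled blocks, and the partial
    sum is scaled by len(blocks) / m, which is unbiased when duplicate
    records never straddle block boundaries.
    """
    rng = random.Random(seed)
    sampled = rng.sample(blocks, m)
    total = 0.0
    for block in sampled:
        kept = {}
        for record in block:               # keep one record per entity
            kept[same_entity(record)] = record["charge"]
        total += sum(kept.values())
    return total * len(blocks) / m

# 8 blocks of 6 records; each entity appears twice within its block.
blocks = [[{"id": f"b{i}r{j}", "entity": f"e{i}{j % 3}", "charge": 10.0}
           for j in range(6)] for i in range(8)]
exact = approx_dedup_sum(blocks, m=8, same_entity=lambda r: r["entity"])
```

    Sampling all 8 blocks resolves every duplicate and returns the exact deduplicated sum (24 distinct entities at 10.0 each, i.e. 240); smaller m trades accuracy for resolution cost.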
    Density peaks clustering algorithm based on shared near neighbors similarity
    BAO Shuting, SUN Liping, ZHENG Xiaoyao, GUO Liangmin
    2018, 38(6):  1601-1607.  DOI: 10.11772/j.issn.1001-9081.2017122898
    Abstract | PDF (1016KB)
    Density peaks clustering is an efficient density-based clustering algorithm. However, it is sensitive to the global parameter dc, and manual intervention on the decision graph is needed to select clustering centers. To solve these problems, a new density peaks clustering algorithm based on shared-near-neighbor similarity was proposed. Firstly, Euclidean distance and shared-near-neighbor similarity were combined to define the local density of a sample, which avoids setting the parameter dc of the original algorithm. Secondly, the selection of clustering centers was optimized so that initial centers are chosen adaptively. Finally, each sample was assigned to the cluster of its nearest neighbor among higher-density samples. The experimental results on UCI datasets and artificial datasets show that, compared with the original density peaks clustering algorithm, the average accuracy, Normalized Mutual Information (NMI) and F-Measure of the proposed algorithm are increased by about 22.3%, 35.7% and 16.6% respectively. The proposed algorithm can effectively improve the accuracy of clustering and the quality of clustering results.
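    Shared-near-neighbor similarity itself is easy to state: the size of the overlap between two points' k-nearest-neighbor sets. A small self-contained sketch follows; k and the toy points are arbitrary choices for illustration.

```python
def knn(points, i, k):
    """Indices of the k nearest neighbours of point i (squared Euclidean)."""
    order = sorted(range(len(points)),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(points[i], points[j])))
    return set(order[1:k + 1])                 # skip the point itself

def snn_similarity(points, i, j, k=3):
    """Shared-near-neighbour similarity: size of the kNN overlap."""
    return len(knn(points, i, k) & knn(points, j, k))

# Two well-separated toy clusters.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10)]
close = snn_similarity(pts, 0, 3)              # both in the tight cluster
far = snn_similarity(pts, 0, 4)                # points in different clusters
```

    Points inside the same dense region share most of their neighbor sets, so weighting local density by this overlap makes density estimates scale-free and removes the need for the global cutoff dc.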
    Threat and defense of new ransomware worm in industrial control system
    LIU Yukun, ZHUGE Jianwei, WU Yixiong
    2018, 38(6):  1608-1613.  DOI: 10.11772/j.issn.1001-9081.2017112703
    Abstract | PDF (1077KB)
    Industrial Control Systems (ICS) are widely used in critical infrastructure related to the national economy and people's livelihood, such as power generation, transmission and distribution, petrochemicals, and water treatment and transport. A large-scale attack on ICS is a huge threat to critical infrastructure. At present, proposed ransomware worms for ICS are limited by the isolation of industrial control networks and can hardly spread on a large scale. Based on observed real-world ICS development scenarios, a novel ransomware worm threat model with a new attack path was proposed to overcome the high isolation of ICS. Firstly, the engineer station was taken as the initial infection target. Then, the engineer station was used as a springboard to attack the industrial control devices in the internal network. Finally, worm infection and ransom were carried out. Based on the proposed threat model, ICSGhost, a ransomware worm prototype, was implemented. In a closed experimental environment, ICSGhost can infect ICS along a predetermined attack path. The defense against such ransomware worm threats was also discussed. The experimental results show that the threat is real, and because its propagation path is based on actual ICS development scenarios, it is difficult to detect and guard against.
    Adaptive weight allocation method of risk assessment index for access control of cloud platform
    YANG Hongyu, NING Yuguang
    2018, 38(6):  1614-1619.  DOI: 10.11772/j.issn.1001-9081.2017122940
    Abstract | PDF (924KB)
    Aiming at the subjectivity and rigidity of risk assessment index weights in risk-based access control models for cloud platforms, an adaptive weight allocation method for risk assessment indexes was proposed. Firstly, the adaptive weight allocation model of risk assessment indexes was designed as a multivariate linear regression with constraints. Secondly, a programming regression algorithm was proposed and optimized to solve for the corresponding weights. Finally, a quantitative risk formula with adaptively allocated weights was constructed to calculate the risk value of each access request dynamically. The experimental results show that, compared with the Dynamic Risk-based Access Control (DRAC) model and an access control model based on system security risk, the accuracy of the proposed method's risk values is increased by 2.8% and 1.7% on average, and the sensitivity by 18.5% and 18.7%, on training sets of the same order of magnitude. Compared with the DRAC model, the Dynamic Attribute-based Risk Aware Access Control (DA-RAAC) model and the access control model based on system security risk, the response time of the proposed method is shortened by 9.2%, 34.6% and 96.6% on average for the same number of access requests. The proposed method yields more accurate and sensitive risk values for large numbers of concurrent users with shorter response time, making it more suitable for cloud environments.
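    One plausible reading of the constrained-regression step, assumed here for illustration, is a least-squares fit of index weights constrained to sum to one, which has a closed form via the KKT system. The index scores and true weights below are synthetic.

```python
import numpy as np

def constrained_weights(X, y):
    """Least-squares index weights constrained to sum to 1.

    Solves min ||Xw - y||^2 s.t. 1'w = 1 through the KKT linear system
    [2X'X  1; 1'  0] [w; lam] = [2X'y; 1].
    """
    n = X.shape[1]
    ones = np.ones((n, 1))
    kkt = np.block([[2 * X.T @ X, ones], [ones.T, np.zeros((1, 1))]])
    rhs = np.concatenate([2 * X.T @ y, [1.0]])
    return np.linalg.solve(kkt, rhs)[:n]       # drop the multiplier lam

rng = np.random.default_rng(3)
w_true = np.array([0.5, 0.3, 0.2])             # assessment-index weights
X = rng.standard_normal((40, 3))               # historical index scores
y = X @ w_true                                 # observed risk values
w = constrained_weights(X, y)
```

    As new access records accumulate, refitting X and y updates the weights automatically, which is what makes the allocation adaptive rather than fixed by expert judgment.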
    Biometric and password two-factor cross domain authentication scheme based on blockchain technology
    ZHOU Zhicheng, LI Lixin, GUO Song, LI Zuohui
    2018, 38(6):  1620-1627.  DOI: 10.11772/j.issn.1001-9081.2017122891
    Abstract | PDF (1299KB)
    Existing cross-domain authentication schemes are few and complex. To solve these problems, a new biometric-and-password two-factor cross-domain authentication scheme based on blockchain technology was proposed. Firstly, fuzzy extraction technology was used to extract a random key from biometrics for participation in authentication, solving the problem that leaked biometrics become permanently unusable. Secondly, the tamper-proof blockchain was used to store the public information of the biometrics, countering the vulnerability of fuzzy extraction to active attacks. Finally, based on the distributed storage function of blockchain and a consortium blockchain architecture, two-factor cross-domain authentication of users in both local and remote environments was realized. Security and efficiency analyses show that, in terms of security, the proposed scheme resists man-in-the-middle and replay attacks; in terms of efficiency and feasibility, its overhead is moderate, users do not need to carry smart cards, and the system is highly scalable.
    Physical layer parallel interpolation encryption algorithm based on orthogonal frequency division multiplexing
    GAO Baojian, WANG Shaodi, HU Yun, CAO Yanjun
    2018, 38(6):  1628-1632.  DOI: 10.11772/j.issn.1001-9081.2017122981
    Abstract | PDF (777KB)
    Traditional link-layer security mechanisms cannot fundamentally protect the information transmitted by wireless communication systems. To solve this problem, a parallel interpolation encryption algorithm based on the parallel modulation characteristics of Orthogonal Frequency Division Multiplexing (OFDM) and physical-layer security was proposed. Firstly, the number of inserted symbols was determined by the number of subcarriers modulated by the OFDM system, and the insertion positions were generated under the control of a key. Secondly, the original OFDM symbols before and after each insertion position were taken out, and the inserted symbol was computed as their mean value. Finally, the pseudo-random interpolation was completed together with the Inverse Fast Fourier Transform (IFFT). Compared with traditional link-layer security methods, the proposed algorithm encrypts all modulation symbols, protects signaling, flag and data information alike, and effectively reduces algorithmic complexity. The simulation results show that the proposed algorithm can effectively resist various eavesdropping attacks while barely affecting the inherent performance of the communication system. Furthermore, it adapts well to both Gaussian and multipath channels and shows a certain ability to resist multipath fading.
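    The interpolation scheme can be sketched with numpy. In this sketch the dummy symbols are inserted into the frequency-domain symbol vector before the IFFT, fixed insertion positions stand in for the key-generated stream, and the QPSK data are random; all of these are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def encrypt(symbols, key_positions):
    """Insert key-controlled dummy symbols, then OFDM-modulate via IFFT.

    Each inserted value is the mean of the symbols adjacent to the
    insertion point, so the dummies blend into the constellation.
    Positions must be >= 1 and given in ascending order.
    """
    out = list(symbols)
    for pos in sorted(key_positions):
        left, right = out[pos - 1], out[pos % len(out)]  # wraps at the end
        out.insert(pos, (left + right) / 2)
    return np.fft.ifft(out)

def decrypt(signal, key_positions):
    """FFT back to symbols and strip the key-controlled insertions."""
    symbols = np.fft.fft(signal)
    drop = set(key_positions)
    return np.array([s for i, s in enumerate(symbols) if i not in drop])

rng = np.random.default_rng(9)
data = np.exp(2j * np.pi * rng.integers(0, 4, 16) / 4)   # 16 QPSK symbols
key = [3, 7, 12]                                          # stand-in key stream
received = decrypt(encrypt(data, key), key)
```

    A receiver holding the key removes the dummies exactly and recovers the original symbols; an eavesdropper without the positions cannot tell dummies from data.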
    Efficient outsourced computing based on extended attribute-based functional encryption
    LI Cong, YANG Xiaoyuan, WANG Xu'an
    2018, 38(6):  1633-1639.  DOI: 10.11772/j.issn.1001-9081.2017112657
    Abstract | PDF (1066KB)
    Current Attribute-Based Encryption (ABE) schemes suffer from problems such as single-function access policies and ciphertext size and decryption time that grow with the complexity of the access formula. To solve these problems, a multi-function ABE scheme for efficient outsourced computing was proposed. Firstly, through fine-grained access control of sensitive data, encryption systems with different functions were implemented. Then, the huge computing power of the cloud server was used to perform partial decryption, converting the attribute ciphertext of a user satisfying the access policy into a (constant-size) ElGamal-style ciphertext. At the same time, the correctness of the outsourced computation was ensured by efficient verification methods. The theoretical analysis results show that, compared with traditional attribute-based functional encryption schemes, the user-side decryption of the proposed scheme is reduced to one exponentiation and one pairing operation, saving users considerable bandwidth and decryption time without increasing the amount of transmission.
    Verifiable ciphertext retrieval scheme with user revocation
    BAI Ping, ZHANG Wei, LI Cong, WANG Xu'an
    2018, 38(6):  1640-1643.  DOI: 10.11772/j.issn.1001-9081.2017122938
    Abstract | PDF (787KB)
    A malicious cloud server may return incorrect or forged query results to the user, and an authorized user may privately pass key information to an unauthorized user after completing a retrieval. To solve these problems, a new verifiable ciphertext retrieval scheme supporting user revocation was constructed. Firstly, an encryption algorithm was used to encrypt user documents and sign the keywords. Secondly, a search algorithm was used to retrieve the desired documents. Finally, a verification algorithm and a user revocation algorithm were used to verify the retrieval results and re-encrypt the unretrieved documents. The analysis results show that the proposed scheme completes accurate retrieval while guaranteeing data integrity, realizes user revocation through re-encryption, and guarantees system security. Moreover, the proposed scheme satisfies Indistinguishability under Chosen Keyword Attack (IND-CKA) security.
    Improvement of hybrid encryption scheme based on Niederreiter coding
    LIU Xiangxin, YANG Xiaoyuan
    2018, 38(6):  1644-1647.  DOI: 10.11772/j.issn.1001-9081.2017122960
    Abstract | PDF (612KB)
    Coding-based encryption, with its quantum resistance and fast encryption and decryption, is one of the candidate families for post-quantum cryptography. Existing coding-based hybrid encryption schemes achieve INDistinguishability under Chosen Ciphertext Attack (IND-CCA) security but suffer from the large public key used to encrypt the shared secret key of the sender and receiver. The large-public-key problem of the Niederreiter-coding-based hybrid encryption scheme was addressed in three steps. Firstly, the private key of the Niederreiter scheme was split randomly. Then, the plaintext was split randomly. Finally, the encryption and decryption processes were improved accordingly. Analysis concludes that the public key of the improved scheme is smaller than that of the Maurich scheme: at the 80-bit security level it shrinks from 4801 bits to 240 bits, and at the 128-bit security level from 9857 bits to 384 bits. Although the improved scheme is more complicated than the original, its storage and computation costs are lower and its practicability is enhanced.
    Playback speech detection algorithm based on modified cepstrum feature
    LIN Lang, WANG Rangding, YAN Diqun, LI Can
    2018, 38(6):  1648-1652.  DOI: 10.11772/j.issn.1001-9081.2017112822
    Abstract | PDF (932KB)
    With the development of speech technology, spoofed speech, typified by playback speech, has brought serious challenges to voiceprint authentication systems and audio forensics. Aiming at playback attacks on voiceprint authentication systems, a new detection algorithm based on a modified cepstrum feature was proposed. Firstly, the coefficient of variation was used to analyze the difference between original and playback speech in the frequency domain. Secondly, a new filter bank composed of inverse-Mel filters and linear filters was used in place of the Mel filter bank in the extraction of Mel Frequency Cepstral Coefficients (MFCC), yielding a modified cepstrum feature based on the new filter bank. Finally, a Gaussian Mixture Model (GMM) was used as the classifier to discriminate the speech. The experimental results show that the modified cepstrum feature can effectively detect playback speech, with an equal error rate of about 3.45%.
    Security analysis and evaluation of representational state transfer based on attack graph
    ZHANG Youjie, ZHANG Qingping, WU Wei, SHI Zhe
    2018, 38(6):  1653-1657.  DOI: 10.11772/j.issn.1001-9081.2017112756
    Abstract   PDF (800KB)
    The security mechanism of the REpresentational State Transfer (REST) architecture is imperfect. To address this, a security analysis and evaluation method for the REST architecture based on attack graphs was proposed, realizing quantitative security evaluation. Firstly, the possible attacks on the REST architecture were predicted, an attack graph model was constructed accordingly, and the attack probability and attack realization parameters were calculated. Then, according to the attack states and attack behaviors in the graph, security protection measures were proposed; the attack graph model was reconstructed and the two parameters were recalculated. By comparison, after the protection measures were adopted, the attack probability parameter was reduced to about 1/10 of its original value and the attack realization parameter to about 1/86. The comparison results show that the constructed attack graph can effectively and quantitatively evaluate the security of the REST architecture.
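The attack-probability parameter can be illustrated with a toy attack graph: under an independence assumption (ours, not necessarily the paper's), the probability of an attack path is the product of its edge probabilities, and the worst case over all paths is the quantity a protection measure should drive down. The graph and numbers below are hypothetical.

```python
def path_probability(edges, path):
    """Success probability of one attack path, assuming independent steps."""
    p = 1.0
    for u, v in zip(path, path[1:]):
        p *= edges[(u, v)]
    return p

def worst_case_probability(edges, paths):
    """Most probable attack path to the target."""
    return max(path_probability(edges, p) for p in paths)
```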
    Workload uncertainty-based virtual machine consolidation method
    LI Shuangli, LI Zhihua, YU Xinrong, YAN Chengyu
    2018, 38(6):  1658-1664.  DOI: 10.11772/j.issn.1001-9081.2017112741
    Abstract   PDF (1090KB)
    The uncertainty of workload in physical hosts easily leads to high overload risk and low resource utilization, which in turn affects the energy consumption and service quality of a data center. To solve this problem, a Workload Uncertainty-based Virtual Machine Consolidation (WU-VMC) method was proposed by analyzing the workload records of physical hosts and the historical resource requests of virtual machines. To stabilize the workload of each host in the cloud data center, the workloads of physical hosts were first fitted according to the resource requests of virtual machines, and the matching degree between virtual machines and physical hosts was computed by gradient descent. Then, the virtual machines were consolidated according to the matching degree, alleviating the increased energy consumption and degraded service quality caused by uncertain workloads. The simulation results show that WU-VMC reduces the energy consumption and the number of virtual machine migrations of the data center while improving its resource utilization and service quality.
    Task performance collection and classification method in cloud platforms
    LIU Chunyi, ZHANG Xiao, QIN Yuansong, LU Shangqi
    2018, 38(6):  1665-1669.  DOI: 10.11772/j.issn.1001-9081.2017102790
    Abstract   PDF (797KB)
    When actually using cloud platforms, it is difficult for users to choose the right type of cloud host, which results in low utilization of cloud platform resources. Typical remedies either optimize placement algorithms from the cloud provider's perspective, where user selection still limits resource utilization, or collect and predict task performance over a short period, which reduces the accuracy of task classification. To improve cloud platform resource utilization and simplify user operations, a multi-attribute task performance collection tool named Lbenchmark was proposed to collect the performance characteristics of tasks comprehensively; its collection load is more than 50% lower than that of Ganglia. Then, using the performance data, a K-Nearest Neighbor (KNN) application performance classification algorithm based on multiple K-Dimension (KD) trees with configurable weights was proposed: suitable parameters were selected to build multiple KD-tree KNN classifiers, and cross validation was used to adjust the weight of each attribute in the different classifiers. The experimental results show that, compared with the traditional KNN algorithm, the proposed algorithm computes about 10 times faster and its accuracy is improved by about 10% on average. The proposed algorithm can use data feature mapping to provide resource recommendations to both users and cloud providers, improving the overall utilization of cloud platforms.
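The attribute-weighting idea in the KNN classifier can be sketched as follows: each attribute is scaled by a configurable weight before the nearest-neighbour search. A brute-force search stands in for the paper's KD-tree ensemble, and the weight values would in practice come from cross validation; both are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def weighted_knn_predict(X_train, y_train, x, k, weights):
    """Majority vote among the k nearest training points under a
    weighted Euclidean metric: d(a, b)^2 = sum_i w_i * (a_i - b_i)^2."""
    w = np.sqrt(np.asarray(weights, dtype=float))
    Xw = np.asarray(X_train, dtype=float) * w    # scale each attribute
    d = np.linalg.norm(Xw - np.asarray(x, dtype=float) * w, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(np.asarray(y_train)[nearest]).most_common(1)[0][0]
```

Raising an attribute's weight makes differences along that attribute count more toward the distance, which is exactly what tuning per-attribute weights by cross validation adjusts.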
    Resource scheduling algorithm of cloud computing based on ant colony optimization-shuffled frog leaping algorithm
    CHEN Xuan, XU Jianwei, LONG Dan
    2018, 38(6):  1670-1674.  DOI: 10.11772/j.issn.1001-9081.2017112854
    Abstract   PDF (928KB)
    Aiming at the low efficiency of resource scheduling in cloud computing, a new resource scheduling algorithm based on Quality of Service (QoS) was proposed. Firstly, a quality function and a convergence factor were used in the Ant Colony Optimization (ACO) algorithm to ensure efficient pheromone updating, and a feedback factor was set to improve the probabilistic selection. Secondly, the local search efficiency of the Shuffled Frog Leaping Algorithm (SFLA) was improved by introducing crossover and mutation factors. Finally, the local and global search of the SFLA were introduced into each iteration of the ACO algorithm, improving the efficiency of the algorithm. The simulation results show that, compared with the basic ACO algorithm, the SFLA, the Improved Particle Swarm Optimization (IPSO) algorithm and the Improved Artificial Bee Colony (IABC) algorithm, the proposed algorithm performs best on four QoS indices: the least completion time, the lowest consumption cost, the highest satisfaction and the lowest abnormal value. The proposed algorithm can be used effectively for resource scheduling in cloud computing.
    Variance reduced stochastic variational inference algorithm for topic modeling of large-scale data
    LIU Zhanghu, CHENG Chunling
    2018, 38(6):  1675-1681.  DOI: 10.11772/j.issn.1001-9081.2017112786
    Abstract   PDF (1144KB)
    Stochastic Variational Inference (SVI) has been successfully applied to many types of models including topic models. Although it scales to large data sets by mapping the inference problem to an optimization problem involving stochastic gradients, the inherent noise of the stochastic gradient in SVI produces a large variance, which hinders fast convergence. To solve this problem, an improved Variance Reduced SVI (VR-SVI) algorithm was proposed. Firstly, a sliding-window method was used to recalculate the noise term in the stochastic gradient, constructing a new stochastic gradient and reducing the influence of noise. It was then proved that the proposed algorithm reduces the variance of the stochastic gradient relative to SVI. Finally, the influence of window size on the algorithm was discussed, and the convergence of the algorithm was analyzed. The experimental results show that VR-SVI can both reduce the variance of the stochastic gradient and save computation time, achieving fast convergence.
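A minimal sketch of the variance-reduction idea, under our simplifying assumption that the noise term is suppressed by averaging the last W stochastic gradients (the paper's exact reconstruction of the noise term may differ): averaging W roughly independent noise samples scales the noise variance by about 1/W.

```python
from collections import deque
import numpy as np

class SlidingWindowGradient:
    """Keep the last `window` stochastic gradients and return their mean,
    trading a little bias for a large variance reduction."""
    def __init__(self, window):
        self.buf = deque(maxlen=window)

    def smooth(self, grad):
        self.buf.append(np.asarray(grad, dtype=float))
        return np.mean(self.buf, axis=0)
```

The window size trades off staleness of old gradients against the amount of noise averaged out, which matches the paper's discussion of how window size influences the algorithm.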
    Supernetwork link prediction method based on spatio-temporal relation in location-based social network
    HU Min, CHEN Yuanhui, HUANG Hongcheng
    2018, 38(6):  1682-1690.  DOI: 10.11772/j.issn.1001-9081.2017122904
    Abstract   PDF (1605KB)
    The accuracy of link prediction in existing methods for Location-Based Social Networks (LBSN) is low because social, location and time factors are not integrated effectively. To solve this problem, a supernetwork link prediction method based on spatio-temporal relations was proposed for LBSN. Firstly, considering the heterogeneity of the network and the spatio-temporal relations among users, the network was divided into a four-layer "spatio-temporal-user-location-category" supernetwork to reduce the coupling between the influencing factors. Secondly, considering the impact of edge weights on the network, the edge weights of the subnets were defined and quantified by mining user influence, implicit associations, user preferences and node degree information, yielding a four-layer weighted supernetwork model. Finally, on the basis of this model, super edges and weighted super-edge structures were defined to mine the multivariate relationships among users for prediction. The experimental results show that, compared with link prediction methods based on homogeneity and heterogeneity, the proposed method improves precision, recall, F1-measure (F1) and Area Under the receiver operating characteristic Curve (AUC); its AUC is 4.69% higher than that of the heterogeneity-based method.
    Compressive data gathering based on even clustering for wireless sensor networks
    QIAO Jianhua, ZHANG Xueying
    2018, 38(6):  1691-1697.  DOI: 10.11772/j.issn.1001-9081.2017123013
    Abstract   PDF (1104KB)
    Compressive Data Gathering (CDG), which combines Compressed Sensing (CS) theory with sparse random projection, can greatly reduce the amount of data transmitted over a Wireless Sensor Network (WSN). Selecting projection nodes randomly as cluster heads, however, makes the overall network energy consumption unstable and unbalanced. To address this, two compressive data gathering methods with balanced projection nodes were proposed. For a WSN with uniformly distributed nodes, an even clustering method based on spatial location was proposed: the area is first divided into even grids, a projection node is then selected in each grid and the remaining nodes are clustered to it by the shortest-distance principle, and finally the projection nodes forward the intra-cluster data to the sink node, so that the projection nodes are distributed evenly and the network energy consumption is balanced. For a WSN with unevenly distributed nodes, an even clustering method based on node density was proposed, which considers both node locations and densities: grids with few nodes select no projection node, and their nodes are allocated to adjacent grids, balancing the network energy and prolonging the network lifetime. The simulation results show that, compared with the random projection node method, the two proposed methods extend the network lifetime by more than 25%, and the number of surviving nodes in the middle stage of network operation is about twice as large. Both methods maintain better network connectivity and significantly increase the overall network lifetime.
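The location-based even clustering for the uniform case can be sketched as below; the square area, grid size and head-selection rule (the node closest to the cell centre becomes the projection node) are illustrative assumptions.

```python
import math

def grid_cluster(nodes, area, grid):
    """Partition a square `area` x `area` field into `grid` x `grid`
    cells; in each cell pick the node closest to the cell centre as the
    projection node (cluster head) and assign the cell's nodes to it."""
    cell = area / grid
    clusters = {}
    for x, y in nodes:
        i = min(int(x // cell), grid - 1)
        j = min(int(y // cell), grid - 1)
        clusters.setdefault((i, j), []).append((x, y))
    heads = {}
    for (i, j), members in clusters.items():
        cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
        heads[(i, j)] = min(members, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    return heads, clusters
```

Because each cell elects exactly one projection node, the cluster heads end up spread evenly over the field, which is the balancing effect the method relies on.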
    Application of asymmetric information in link prediction
    XIE Rui, HAO Zhifeng, LIU Bo, XU Shengbing
    2018, 38(6):  1698-1702.  DOI: 10.11772/j.issn.1001-9081.2017102467
    Abstract   PDF (941KB)
    The accuracy of link prediction based on node similarity is reduced when asymmetric information is ignored. To solve this problem, a novel node similarity measure using asymmetric information was proposed. Firstly, the drawback of similarity measures based on Common Neighbors (CN) was analyzed: they consider only the number of common neighbors, not the total number of neighbors of each node. Secondly, the similarity between nodes was defined as the ratio of the number of common neighbors to the number of all neighbors of a node. Then, the symmetric and asymmetric similarity information between nodes were combined to describe the similarity between nodes in detail. Finally, the proposed method was applied to link prediction in complex networks. The experimental results on real data sets show that, compared with previous common-neighbor-based similarity measures such as CN, Adamic-Adar (AA) and Resource Allocation (RA), the proposed method improves the accuracy of node similarity measurement and of link prediction in complex networks.
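The asymmetric measure described above, the ratio of common neighbours to one node's own neighbourhood size, can be written directly. How the paper weights the symmetric and asymmetric parts when combining them is not specified here, so only the asymmetric part is shown.

```python
def asym_similarity(adj, x, y):
    """Fraction of x's neighbours that are also neighbours of y;
    asymmetric because it normalizes by |neighbours(x)| only,
    so in general asym_similarity(adj, x, y) != asym_similarity(adj, y, x)."""
    nx, ny = adj[x], adj[y]
    return len(nx & ny) / len(nx) if nx else 0.0
```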
    Content sharing algorithm for device-to-device cache communication with minimum inner-cluster energy consumption
    TONG Piao, LONG Long, HAN Xue, QIU Dawei, HU Qian
    2018, 38(6):  1703-1708.  DOI: 10.11772/j.issn.1001-9081.2017123015
    Abstract   PDF (941KB)
    In Device-to-Device (D2D) cache communication, the limited battery capacity of terminal devices and the large energy consumption of data transmission between devices lower the file offloading rate. To solve this problem, a Caching communication content Sharing Algorithm minimizing inner-Cluster node energy consumption (CCSA) was proposed. Firstly, in view of the random distribution of user terminals, the user nodes in the network were modeled as a Poisson cluster process; an offloading model was established based on the energy and communication distance of the node devices, and an adaptive weighting formula for cluster-head selection was designed. Secondly, the energy-and-distance weighted sums of the nodes were traversed, and the locally optimal principle of the greedy algorithm was used to select the cluster-head node. The communication distance of user nodes was thus optimized so that users' energy consumption is minimized, their survival cycles are prolonged, and the offloading rate of the system is improved. The experimental results show that, compared with the clustered Random cluster head (Random) algorithm and the non-clustered Energy Cost optimal (EC) algorithm, the proposed algorithm prolongs the system survival cycle by about 60 percentage points and 72 percentage points respectively at optimal network energy consumption. CCSA can improve the offloading rate and reduce the offloading energy consumption of the system.
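The greedy cluster-head choice can be sketched as maximizing a weighted score of residual energy against mean distance to the other members. The weighting coefficient and the (x, y, energy) node representation are assumptions for illustration, not the paper's exact formula.

```python
import math

def select_cluster_head(cluster, alpha=0.5):
    """Greedy, locally optimal head selection: pick the node maximizing
    alpha * energy - (1 - alpha) * mean distance to the other members.
    Each node is a tuple (x, y, energy)."""
    def score(n):
        others = [m for m in cluster if m is not n]
        mean_d = sum(math.hypot(n[0] - m[0], n[1] - m[1]) for m in others) / max(len(others), 1)
        return alpha * n[2] - (1 - alpha) * mean_d
    return max(cluster, key=score)
```

Favouring high-energy, centrally located heads is what shortens intra-cluster links and spreads the energy drain, prolonging node survival.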
    Regularized weighted incomplete robust principal component analysis method and its application in fitting trajectory of wireless sensor network nodes
    SUN Wange, XIA Kewen, LAN Pu
    2018, 38(6):  1709-1714.  DOI: 10.11772/j.issn.1001-9081.2017112728
    Abstract   PDF (961KB)
    The Sparsity Rank Singular Value Decomposition (SRSVD) method and the Semi-Exact Augmented Lagrange Multiplier (SEALM) algorithm cannot fit the node trajectories of a Wireless Sensor Network (WSN) accurately when the sampling rate is small, the sparse noise is large, or Gaussian noise is present. To solve these problems, a novel Regularized Weighted Incomplete Robust Principal Component Analysis (RWIRPCA) method was proposed. Firstly, Incomplete Robust Principal Component Analysis (IRPCA) was applied to the fitting of node trajectories. Then, on the basis of IRPCA, the low-rank matrix and the sparse matrix were weighted separately to better describe the low rank and sparsity of the matrices and to strengthen the model's resistance to Gaussian noise. Finally, the Frobenius norm of the Gaussian noise matrix was used as a regularization term in the trajectory fitting. The simulation results show that IRPCA and RWIRPCA fit better than SRSVD and SEALM when the sampling rate is small and the sparse noise is large; in particular, the proposed RWIRPCA still obtains accurate and stable results when sparse and Gaussian noise are present simultaneously.
    Traffic scheduling strategy based on improved Dijkstra algorithm for power distribution and utilization communication network
    XIANG Min, CHEN Cheng
    2018, 38(6):  1715-1720.  DOI: 10.11772/j.issn.1001-9081.2017112825
    Abstract   PDF (939KB)
    Concerning the congestion that easily arises during data aggregation in a power distribution and utilization communication network, a novel hybrid edge-weighted traffic scheduling and routing algorithm was proposed. Firstly, a hierarchical node model was established according to hop count. Then, the priorities of power distribution and utilization services and the node congestion levels were divided. Finally, edge weights were calculated from a composite index of hop count, traffic load rate and link utilization; the nodes requiring traffic scheduling selected routes according to the improved Dijkstra algorithm, and severely congested nodes were scheduled in accordance with service priorities. Compared with the Shortest Path First (SPF) algorithm and the Greedy Backpressure Routing Algorithm (GBRA), at a data generation rate of 80 kb/s the proposed algorithm reduces the packet loss rate of emergency services by 81.3% and 67.7% respectively, and that of key services by 79% and 63.8% respectively. The simulation results show that the proposed algorithm can effectively alleviate network congestion, improve the effective throughput of the network, and reduce the end-to-end delay and the packet loss rate of high-priority services.
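A sketch of the routing step: a composite edge weight feeds a standard Dijkstra search. The coefficients in `edge_weight` are illustrative, not the paper's values, and the graph in the usage example is hypothetical.

```python
import heapq

def dijkstra(adj, src, dst):
    """Standard Dijkstra; `adj` maps node -> list of (neighbour, weight),
    where each weight is a composite of hops, load rate and utilization."""
    dist, prev = {src: 0.0}, {}
    heap, seen = [(0.0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]

def edge_weight(hops, load, util, a=1.0, b=2.0, c=2.0):
    """Composite edge weight; the coefficients a, b, c are illustrative."""
    return a * hops + b * load + c * util
```

Heavily loaded or highly utilized links receive larger weights, so the shortest-path search naturally steers traffic around congested nodes.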
    Routing algorithm based on cluster-head optimization for self-energized wireless sensor network
    WANG Guan, WANG Ruiyao
    2018, 38(6):  1721-1725.  DOI: 10.11772/j.issn.1001-9081.2017122963
    Abstract   PDF (979KB)
    In the Energy Balanced Clustering algorithm for Self-energized wireless sensor networks (EBCS), there is no energy threshold in the cluster-head election, so a node with low energy may be elected as cluster head; a cluster head serves only one round, so an energy-rich node cannot be reappointed; and, despite the self-energized characteristic, there is no election mechanism for a dead node that has been revived. To solve these problems, a new Clustering routing algorithm based on Cluster-head Optimization for Self-energized wireless sensor networks (CCOS) was proposed. Firstly, the energy threshold of the cluster-head election was optimized to exclude nodes with insufficient energy from the election. Secondly, a cluster-head reappointment mechanism was introduced and improved, allowing a cluster head to decide, based on its own energy-harvesting level, whether to serve again in the next round. Moreover, a threshold-sensitive node revival mechanism was proposed: soft and hard revival thresholds were set so that a dead node revives when its harvested energy reaches the corresponding threshold. The experimental results show that, under different energy harvesting scenarios, CCOS increases the number of available nodes in the network by about 8% and the success ratio of data transmission by about 5% compared with EBCS. CCOS makes more rational use of renewable energy and facilitates the deployment of self-energized sensor networks.
    Method for determining boundaries of binary protocol format keywords based on optimal path search
    YAN Xiaoyong, LI Qing
    2018, 38(6):  1726-1731.  DOI: 10.11772/j.issn.1001-9081.2017112846
    Abstract   PDF (953KB)
    Aiming at the field segmentation problem in the reverse analysis of binary protocol message formats, a novel algorithm targeting format keywords was proposed, which determines the boundaries of binary protocol format keywords optimally through an improved n-gram algorithm and an optimal path search. Firstly, by introducing a position factor into the n-gram algorithm, a boundary extraction algorithm for format keywords based on an iterative n-gram-position algorithm was proposed, which effectively solves two problems of the plain n-gram algorithm: the difficulty of choosing n and the extraction of candidate boundaries for format keywords at fixed offsets. Then, a branch metric was defined based on the hit ratio of frequent-item boundaries and the left and right branch information entropies, and constraints were constructed from the difference in the rate of change of the n-gram-position value between keywords and non-keywords; a boundary selection algorithm based on optimal path search was proposed to determine the jointly optimal boundaries of the format keywords. The experimental results on five protocol message data sets (AIS1, AIS18, ICMP00, ICMP03 and NetBios) show that the proposed algorithm accurately determines the boundaries of different protocol format keywords, with F values all above 83%; compared with the classical Variance of the Distribution of Variances (VDV) and AutoReEngine algorithms, its F value is higher by about 8 percentage points on average.
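The position-aware n-gram statistic at the core of the boundary extraction can be sketched as counting (offset, n-gram) pairs over a set of messages; an n-gram that recurs at the same offset across messages is a candidate format keyword at a fixed position. The message contents below are hypothetical.

```python
from collections import Counter

def ngram_position_counts(messages, n):
    """Count every n-gram together with the byte offset at which it
    occurs; high-frequency (offset, n-gram) pairs mark candidate
    boundaries of fixed-offset format keywords."""
    counts = Counter()
    for msg in messages:
        for i in range(len(msg) - n + 1):
            counts[(i, msg[i:i + n])] += 1
    return counts
```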
    Uplink clock synchronization method for low earth orbit satellite based on location information
    YAO Guangji, WANG Ling, HUANG Shengchun
    2018, 38(6):  1732-1736.  DOI: 10.11772/j.issn.1001-9081.2017102466
    Abstract   PDF (714KB)
    To avoid the frequent distance updates required by traditional uplink synchronization methods based on ranging information, an uplink clock synchronization method based on location information was proposed. Firstly, pseudorange measurements were used to form a nonlinear system of equations, and the location of the terrestrial unit was solved by a least-squares method. Then, since the satellite's trajectory is known, the variation of the satellite-ground distance with time was obtained; this distance was converted into a propagation delay, giving the time advance for the terrestrial unit's uplink transmission. Finally, the transmitter of the terrestrial unit was adjusted so that the uplink signal arrives at the satellite exactly in the assigned time slot with high accuracy, achieving uplink clock synchronization. The simulation results show that the proposed method realizes highly accurate uplink clock synchronization in a satellite constellation communication system for static units anywhere on the Earth's surface, while avoiding frequent ranging updates.
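The delay conversion in the second step is a one-liner: the timing advance is the one-way propagation delay over the satellite-ground distance. The coordinates in the usage example are hypothetical and the geometry is simplified to a straight-line path.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def timing_advance(sat_pos, ground_pos):
    """One-way propagation delay from the ground unit to the satellite:
    the uplink must be transmitted this much earlier than the slot time
    so that it arrives at the satellite in the assigned slot."""
    d = sum((s - g) ** 2 for s, g in zip(sat_pos, ground_pos)) ** 0.5
    return d / C
```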
    Safety verification of stochastic continuous system using stochastic barrier certificates
    SHEN Minjie, ZENG Zhenbing, LIN Wang, YANG Zhengfeng
    2018, 38(6):  1737-1744.  DOI: 10.11772/j.issn.1001-9081.2017112824
    Abstract   PDF (1360KB)
    Aiming at the safety verification problem of a class of stochastic continuous systems with both a random initial state and stochastic differential equation dynamics, a new computation method based on stochastic barrier certificates and initial-set selection was proposed. Firstly, the relevant concepts of stochastic continuous systems and their safety verification were introduced. Then, the determination of the initial state set was discussed for initial variables obeying several different distributions, and the safety verification problem was converted into a polynomial optimization problem using stochastic barrier certificates over the selected initial set. Finally, the sum-of-squares relaxation method was used to transform the problem into a sum-of-squares programming problem, and a lower bound on the safety probability was obtained with the SOSTOOLS tool. Theoretical analysis and experimental results show that the proposed method has polynomial time complexity and can effectively compute a lower bound on the safety probability of a stochastic continuous system over unbounded time.
    Obfuscator low level virtual machine deobfuscation framework based on symbolic execution
    XIAO Shuntao, ZHOU Anmin, LIU Liang, JIA Peng, LIU Luping
    2018, 38(6):  1745-1750.  DOI: 10.11772/j.issn.1001-9081.2017122892
    Abstract   PDF (972KB)
    The deobfuscation result of the Miasm framework is a picture, which cannot be decompiled to recover the program source code. After in-depth study of the obfuscation strategies of the Obfuscator Low Level Virtual Machine (OLLVM) and of Miasm's deobfuscation approach, a general OLLVM automatic deobfuscation framework based on symbolic execution was proposed and implemented. Firstly, a basic-block identification algorithm was used to find the useful and useless basic blocks in the obfuscated program. Secondly, symbolic execution was used to determine the topological relations among the useful blocks. Then, instruction repair was applied directly to the assembly code of the basic blocks. Finally, a deobfuscated executable file was obtained. The experimental results show that, while keeping the deobfuscation time as short as possible, the code similarity between the deobfuscated program and the non-obfuscated source program is 96.7%. The proposed framework can deobfuscate OLLVM-obfuscated C/C++ files under the x86 architecture very well.
    Real-time visual tracking algorithm via channel stability weighted complementary learning
    FAN Jiaqing, SONG Huihui, ZHANG Kaihua
    2018, 38(6):  1751-1754.  DOI: 10.11772/j.issn.1001-9081.2017112735
    Abstract   PDF (584KB)
    To solve the tracking failure of the Sum of template and pixel-wise learners (Staple) algorithm under in-plane rotation and partial occlusion, a simple and effective Channel Stability-weighted Staple (CSStaple) tracking algorithm was proposed. Firstly, a standard correlation filter classifier was employed to compute the response of each channel. Then, the stability weight of each channel was calculated and multiplied into the weight of each layer to obtain the correlation filter response. Finally, the response of the color complementary learner was integrated to obtain the final response, and the location of its maximum is the tracking result. The proposed algorithm was compared with several state-of-the-art trackers, including the Channel and Spatial Reliability Discriminative Correlation Filter (CSR-DCF) tracker, Hedged Deep Tracking (HDT), the Kernelized Correlation Filter (KCF) tracker and Staple. The experimental results show that the proposed algorithm performs best in success rate, exceeding Staple by 2.5 percentage points on OTB50 and by 0.9 percentage points on OTB100, which proves its effectiveness under target in-plane rotation and partial occlusion.
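The channel-fusion step can be sketched as below, with an assumed stability measure (peak response divided by its spread); the paper's exact stability definition may differ, so this is a stand-in that shows the weighting mechanism only.

```python
import numpy as np

def stability_weighted_response(responses, eps=1e-12):
    """Fuse per-channel correlation responses of shape (channels, H, W):
    channels whose response is sharp relative to its fluctuation get
    larger normalized weights in the weighted sum."""
    r = np.asarray(responses, dtype=float)
    peak = r.max(axis=(1, 2))
    spread = r.std(axis=(1, 2)) + eps   # eps avoids division by zero
    w = peak / spread
    w = w / w.sum()
    return np.tensordot(w, r, axes=1)   # weighted sum over channels
```

Down-weighting unstable channels keeps a noisy channel from dragging the fused response map, and hence the tracked location, away from the target.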
    Hierarchical three-dimensional shape ring feature extraction method
    ZUO Xiangmei, JIA Lijiao, HAN Pengcheng
    2018, 38(6):  1755-1759.  DOI: 10.11772/j.issn.1001-9081.2017112816
    Abstract   PDF (1054KB)
    Most existing local features of three-dimensional shapes lack spatial structure information and describe only a single property. To solve these problems, a hierarchical feature extraction framework integrating the topological connection information of the three-dimensional shape was proposed to obtain a shift-invariant ring feature. Firstly, based on low-level feature extraction, the local region of each feature point was modeled by isometric geodesic rings, extracting mid-level features rich in spatial geometric structure information. Then, the mid-level features were further abstracted by sparse coding to obtain more discriminative, information-rich high-level features. Compared with the existing Scale-Invariant Heat Kernel Signature (SI-HKS) on the two tasks of three-dimensional shape correspondence and shape retrieval, the accuracy of the obtained high-level feature is higher by 24.5 percentage points and 7.2 percentage points respectively. The experimental results show that the proposed feature is more discriminative and recognizable than existing feature descriptors.
    Online behavior recognition using space-time interest points and probabilistic latent-dynamic conditional random field model
    WU Liang, HE Yi, MEI Xue, LIU Huan
    2018, 38(6):  1760-1764.  DOI: 10.11772/j.issn.1001-9081.2017112805
    Abstract   PDF (783KB)
    To improve the recognition of continuous sequences of online behavior and enhance the stability of the behavior recognition model, an online behavior recognition method based on a Probabilistic Latent-Dynamic Conditional Random Field (PLDCRF) was proposed for surveillance video. Firstly, Space-Time Interest Points (STIP) were used to extract behavior features. Then, the PLDCRF model was applied to identify the activity state of the indoor human body. The PLDCRF model incorporates hidden state variables and can construct the substructure of gesture sequences; it selects the dynamic features of gestures and labels unsegmented sequences directly, and it also labels the transitions between behaviors correctly, greatly improving the recognition effect. The recognition rates on 10 different behaviors, compared with the Hidden Conditional Random Field (HCRF), the Latent-Dynamic Conditional Random Field (LDCRF) and the Latent-Dynamic Conditional Neural Field (LDCNF), show that the proposed PLDCRF model has a stronger recognition ability for continuous behavior sequences and better stability.
    Local binary pattern based on dominant gradient encoding for pollen image recognition
    XIE Yonghua, HAN Liping
    2018, 38(6):  1765-1770.  DOI: 10.11772/j.issn.1001-9081.2017112791
    Abstract   PDF (1090KB)
    Because of microscopic sensors and irregular collection methods, pollen images are often disturbed by noise of varying degrees and rotated by different angles, which generally leads to low recognition accuracy. To solve this problem, a Dominant Gradient encoding based Local Binary Pattern (DGLBP) descriptor was proposed and applied to pollen image recognition. Firstly, the gradient magnitude of each image block in its dominant gradient direction was calculated. Secondly, the radial, angular and composite gradient differences of the image block were calculated. Then, binary codes were produced from the gradient differences of each block and weighted adaptively according to the texture distribution of each local region, and texture feature histograms of the pollen image were extracted in three directions. Finally, the histograms at different scales were fused, and the Euclidean distance was used to measure the similarity between images. The average correct recognition rates of DGLBP on the Confocal and Pollenmonitor data sets are 94.33% and 92.02% respectively, which are 8.9 and 8.6 percentage points higher on average than those of other compared pollen recognition methods, and 18 and 18.5 percentage points higher on average than those of other improved LBP-based methods. The experimental results show that the proposed DGLBP descriptor is robust to noise and rotation changes of pollen images and achieves a better recognition effect.
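The binary coding of gradient differences follows the usual LBP pattern: each difference is thresholded at zero into one bit. The bit order below is an assumption for illustration.

```python
def binary_code(diffs):
    """Encode a sequence of gradient differences as an LBP-style integer:
    bit k is set when the k-th difference is non-negative."""
    code = 0
    for k, d in enumerate(diffs):
        if d >= 0:
            code |= 1 << k
    return code
```

Histogramming these codes over a region, as the descriptor does in three directions, turns the per-block patterns into a texture feature.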
    Optimization algorithm of dynamic time warping for speech recognition of aircraft towing vehicle
    XIE Benming, HAN Mingming, ZHANG Pan, ZHANG Wei
    2018, 38(6):  1771-1776.  DOI: 10.11772/j.issn.1001-9081.2017122876
    In order to study the intelligent voice control of aircraft towing vehicles, realize accurate and efficient recognition of pilots' voice commands in the airport environment, and solve the problems of large computation, high time complexity and low recognition efficiency of the traditional Dynamic Time Warping (DTW) algorithm, a new optimized DTW algorithm constrained by a hexagonal warping window was proposed for vehicle speech recognition. Firstly, the influence of the warping window on the accuracy and efficiency of the DTW algorithm was analyzed from three aspects: the principles of the DTW algorithm, the speech characteristics of towing vehicle instructions, and the airport environment. Then, on the basis of the DTW optimization algorithm constrained by the Itakura Parallelogram rhombic warping window, a global DTW optimization algorithm constrained by a hexagonal warping window was further proposed. Finally, by varying the optimization coefficient, the optimal DTW algorithm under the hexagonal warping window constraint was realized. The experimental results on isolated-word recognition show that, compared with the traditional DTW algorithm and the DTW algorithm with the rhombic warping window constraint, the recognition error rate of the proposed optimized algorithm is reduced by 77.14% and 69.27% respectively, and its recognition efficiency is increased by 48.92% and 27.90% respectively. The proposed optimized algorithm is more robust and more timely, and can serve as an ideal instruction input port for the intelligent control of aircraft towing vehicles.
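    The idea common to all such warping-window constraints is to exclude distant cells of the DTW grid from the dynamic program, trading a little alignment freedom for a large reduction in computation. The sketch below uses a plain band-shaped (Sakoe-Chiba style) window rather than the paper's hexagonal one, whose exact geometry is not specified in the abstract; the DP structure is the same.

```python
def dtw_windowed(a, b, band=2):
    """Dynamic Time Warping distance with a band-shaped warping window.

    Grid cells (i, j) with |i - j| > band are skipped, cutting the cost
    of the dynamic program roughly from O(n*m) to O(n*band).  A tighter
    window (rhombic, hexagonal, ...) prunes more cells in the same way.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(i - j) > band:
                continue                      # outside the warping window
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

With `band=0` only the diagonal survives, which degenerates to a plain element-wise distance; widening the band restores the flexibility of unconstrained DTW.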
    Image super-resolution reconstruction based on four-channel convolutional sparse coding
    CHEN Chen, ZHAO Jianwei, CAO Feilong
    2018, 38(6):  1777-1783.  DOI: 10.11772/j.issn.1001-9081.2017112742
    In order to solve the problem of low image resolution, a new image super-resolution reconstruction method based on four-channel convolutional sparse coding was proposed. Firstly, the input image was rotated by 90° successively to form the inputs of the four channels, and each input image was decomposed into a high-frequency part and a low-frequency part by a low-pass filter and a gradient operator. Then, the high-frequency part and the low-frequency part of the low-resolution image in each channel were reconstructed by the convolutional sparse coding method and the cubic interpolation method respectively. Finally, the outputs of the four channels were averaged with weights to obtain the reconstructed high-resolution image. The experimental results show that the proposed method outperforms some classical super-resolution methods in Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM) and noise immunity. The proposed method can not only overcome the destruction of consistency between image patches caused by overlapping patches, but also improve the detail contours of the reconstructed image and enhance its stability.
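    The four-channel fusion step can be sketched independently of the per-channel reconstruction. In the toy code below, `reconstruct` is a placeholder for the paper's per-channel pipeline (convolutional sparse coding for the high-frequency part plus cubic interpolation for the low-frequency part), and uniform weights are used instead of the paper's learned weighting.

```python
def rot90(img):
    """Rotate a 2-D list (rows of equal length) by 90 degrees CCW."""
    return [list(row) for row in zip(*img)][::-1]

def four_channel_average(img, reconstruct):
    """Run `reconstruct` on the image rotated by 0/90/180/270 degrees,
    rotate each result back to the original orientation, and average
    the four outputs pixel-wise."""
    outputs = []
    for k in range(4):
        rotated = img
        for _ in range(k):
            rotated = rot90(rotated)
        out = reconstruct(rotated)
        for _ in range(k):                 # 3 more CCW turns = 1 CW turn
            out = rot90(rot90(rot90(out)))
        outputs.append(out)
    h, w = len(img), len(img[0])
    return [[sum(o[i][j] for o in outputs) / 4.0 for j in range(w)]
            for i in range(h)]
```

With an identity `reconstruct` the fusion returns the input unchanged, which is a useful sanity check that the rotations and un-rotations cancel.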
    Logarithmic function based non-local total variation image inpainting model
    YANG Wenxia, ZHANG Liang
    2018, 38(6):  1784-1789.  DOI: 10.11772/j.issn.1001-9081.2017112855
    The total variation minimization based image inpainting method easily causes the staircase effect in smooth regions. In order to solve this problem, a novel non-local total variation image inpainting model based on a logarithmic function was proposed, in which the integrand of the total variation energy functional is a logarithmic function of the gradient magnitude. Under the partial-differential-equation framework of the total variation model and the anisotropic diffusion model, the proposed model was first proven theoretically to satisfy all the properties required for good diffusion; in addition, its local diffusion behavior was analyzed, and its good diffusion properties along the isophote (equal-illumination) direction and the gradient direction were proved. Then, in order to exploit the similarity of image blocks and avoid local blur, non-local logarithmic total variation was used in the numerical implementation. The experimental results show that, compared with three types of total variation models and the exemplar-based inpainting model, the proposed model achieves the best performance: it avoids local blur, better suppresses the staircase effect in smooth regions, and obtains a more natural inpainting effect for texture images. Compared with the average results of the comparison models (figure 2, figure 3, figure 4), the Structural Similarity Index Measure (SSIM) of the proposed model is improved by 0.065, 0.022 and 0.051, while its Peak Signal-to-Noise Ratio (PSNR) is improved by 5.94 dB, 4.00 dB and 6.22 dB. The inpainting results of noisy images show that the proposed model is robust and also achieves good inpainting results for noisy images.
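    Diffusion-based inpainting of this family can be illustrated with a much simpler relative: harmonic inpainting, which repeatedly replaces each missing sample with the average of its neighbours. The 1-D sketch below is only the isotropic baseline; the paper's model replaces this plain averaging with a non-local, logarithmic total-variation diffusion that preserves edges and suppresses the staircase effect.

```python
def diffuse_inpaint(signal, mask, iters=200):
    """Toy harmonic inpainting on a 1-D signal.

    Entries where mask[i] is True are treated as missing and are
    repeatedly overwritten by the average of their two neighbours,
    which converges to the linear interpolant of the known values.
    """
    x = list(signal)
    for _ in range(iters):
        for i in range(1, len(x) - 1):
            if mask[i]:
                x[i] = 0.5 * (x[i - 1] + x[i + 1])
    return x
```

On smooth data this averaging is exactly what produces over-smoothing across edges in 2-D, which is the behaviour anisotropic and total-variation diffusions are designed to avoid.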
    Hierarchical speech recognition model in multi-noise environment
    CAO Jingjing, XU Jieping, SHAO Shengqi
    2018, 38(6):  1790-1794.  DOI: 10.11772/j.issn.1001-9081.2017112678
    Focusing on speech recognition in multi-noise environments, a new hierarchical speech recognition model that treats environmental noise as the context of speech recognition was proposed. The proposed model consists of two layers: a noisy-speech classification model and acoustic models for specific noise environments. The noisy-speech classification model reduces the difference between training data and test data, which removes the noise-stationarity limitation required in feature-space research and overcomes the low recognition rate caused by traditional multi-condition training in certain noise environments. Furthermore, a Deep Neural Network (DNN) was used to build the acoustic models, which further enhances their ability to distinguish noise from speech and improves the noise robustness of speech recognition in the model space. In the experiments, the proposed model was compared with a benchmark model obtained by multi-condition training. The experimental results show that the proposed hierarchical speech recognition model reduces the Word Error Rate (WER) by a relative 20.3% compared with the benchmark model, and is thus helpful for enhancing the noise robustness of speech recognition.
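    The two-layer structure amounts to a classify-then-dispatch pipeline. The sketch below shows only that control flow; the noise classifier and the per-environment decoders are passed in as callables, standing in for the paper's classifier and DNN acoustic models, which are not specified in the abstract beyond their roles.

```python
def hierarchical_recognize(features, classify_noise, acoustic_models):
    """Two-layer decoding: the first layer labels the noise
    environment of the utterance, the second layer decodes it with
    the acoustic model trained for that environment.

    `classify_noise`  -- callable: features -> environment label
    `acoustic_models` -- dict: environment label -> decoder callable
    """
    noise_type = classify_noise(features)
    return acoustic_models[noise_type](features)
```

Because each second-layer model only ever sees utterances from its own environment, the train/test mismatch that hurts a single multi-condition model is reduced.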
    Parallel test scheduling optimization method for three-dimensional chip with multi-core and multi-layer
    CHEN Tian, WANG Jiawei, AN Xin, REN Fuji
    2018, 38(6):  1795-1800.  DOI: 10.11772/j.issn.1001-9081.2017123002
    In order to reduce the high cost of chip testing in the manufacturing of Three-Dimensional (3D) chips, a new scheduling method based on Time Division Multiplexing (TDM) was proposed to cooperatively optimize the testing resources across layers and among the cores within a layer. Firstly, shift registers were placed on each layer of the 3D chip, and the testing frequency was divided properly between layers and among the cores of the same layer under the control of the shift register group on the input data, so that cores in different locations could be tested in parallel. Secondly, a greedy algorithm was used to optimize the allocation of registers to reduce the idle test cycles of core-parallel testing. Finally, the Discrete Binary Particle Swarm Optimization (DBPSO) algorithm was used to find the best 3D stack layout, so that the transmission potential of the Through Silicon Vias (TSV) could be fully used to improve parallel testing efficiency and reduce testing time. The experimental results show that, under power constraints, the utilization rate of the optimized overall Test Access Mechanism (TAM) is increased by an average of 16.28%, and the testing time of the optimized 3D stack is reduced by an average of 13.98%. The proposed method can shorten the testing time and reduce the testing cost.
    Muscle fatigue state classification system based on surface electromyography signal
    CAO Ang, ZHANG Shenjia, LIU Rui, ZOU Lian, FAN Ci'en
    2018, 38(6):  1801-1808.  DOI: 10.11772/j.issn.1001-9081.2017102549
    In order to realize the accurate detection and classification of muscle fatigue states, a complete muscle fatigue detection and classification system based on human surface ElectroMyoGraphy (sEMG) signals was proposed. Firstly, human sEMG signals were collected through AgCl surface patch electrodes and the high-precision analog front-end device ADS1299, and after denoising preprocessing with the wavelet transform, time-domain and frequency-domain features reflecting human muscle fatigue states were extracted. Then, in addition to the common features such as Integrated ElectroMyoGraphy (IEMG), Root Mean Square (RMS), Median Frequency (MF) and Mean Power Frequency (MPF), the Band Spectral Entropy (BSE) of the sEMG spectrum was introduced to depict the fatigue states of human muscle more finely; and to compensate for the weakness of the Fourier transform in dealing with non-stationary signals, a time-frequency feature, the mean instantaneous frequency based on Ensemble Empirical Mode Decomposition and the Hilbert transform (EEMD-HT), was introduced. Finally, in order to improve the classification accuracy of muscle non-fatigue and fatigue states, a Support Vector Machine optimized by a Particle Swarm Optimization algorithm with mutation (PSO-SVM) was used to classify the sEMG signals and detect human muscle fatigue states. Fifteen healthy young men were recruited for sEMG signal acquisition experiments, and an sEMG signal database was established from which features were extracted for classification. The experimental results show that the proposed system realizes high-accuracy sEMG signal acquisition and high-accuracy classification of muscle fatigue states, with a classification accuracy above 90%.
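    Three of the standard features named above are simple enough to state directly. The sketch below computes IEMG, RMS, and (from a precomputed power spectrum) the median frequency; the paper additionally uses MPF, band spectral entropy, and the EEMD-HT instantaneous frequency, which are omitted here.

```python
import math

def iemg(window):
    """Integrated EMG: sum of absolute amplitudes over the window."""
    return sum(abs(v) for v in window)

def rms(window):
    """Root Mean Square amplitude of the window."""
    return math.sqrt(sum(v * v for v in window) / len(window))

def median_frequency(power, freqs):
    """Median Frequency: the frequency that splits the power spectrum
    into two halves of equal energy.  `power` and `freqs` are parallel
    lists (here assumed precomputed; in practice the spectrum comes
    from the denoised sEMG window)."""
    total = sum(power)
    acc = 0.0
    for p, f in zip(power, freqs):
        acc += p
        if acc >= total / 2:
            return f
    return freqs[-1]
```

A downward drift of the median frequency across successive windows is the classic spectral signature of muscle fatigue that such systems classify on.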
    Six-legged robot path planning algorithm for unknown map
    YANG Yang, TONG Dongbing, CHEN Qiaoyu
    2018, 38(6):  1809-1813.  DOI: 10.11772/j.issn.1001-9081.2017112671
    The global map cannot be accurately known in the path planning of mobile robots. In order to solve this problem, a local path planning algorithm based on fuzzy rules and the artificial potential field method was proposed. Firstly, a ranging group and fuzzy rules were used to classify the shapes of obstacles and construct local maps. Secondly, a modified repulsive force function was introduced into the artificial potential field method, and local path planning was performed on the local maps by using the artificial potential field method. Finally, as the robot moved, time breakpoints were set to reduce path oscillation. On maps with random obstacles and with bumpy obstacles, the traditional and the improved artificial potential field methods were simulated respectively. The experimental results show that, in the case of random obstacles, the improved artificial potential field method significantly reduces collisions with obstacles compared with the traditional one; in the case of bumpy obstacles, the improved method successfully completes the path planning goal. The proposed algorithm adapts to terrain changes and can realize the path planning of a six-legged robot on unknown maps.
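    As background for the modification described above, the textbook artificial potential field combines a linear attraction toward the goal with a repulsion from each obstacle inside an influence radius. The sketch below implements that baseline form (the paper modifies the repulsive term and adds fuzzy obstacle classification, neither of which is detailed in the abstract); `k_att`, `k_rep` and `d0` are illustrative gain and radius values.

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Resultant force of a basic 2-D artificial potential field.

    Attraction grows linearly with the distance to the goal; each
    obstacle closer than the influence radius d0 adds a repulsion
    that blows up as the robot approaches it.
    """
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

Following this force at each step drives the robot toward the goal while steering around nearby obstacles; the well-known failure modes (local minima, oscillation) are exactly what repulsive-function modifications and time breakpoints target.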
    Hub backup to deal with hub failure in hub and spoke network
    HU Jingjing, HUANG Youfang
    2018, 38(6):  1814-1819.  DOI: 10.11772/j.issn.1001-9081.2017102564
    In order to improve the reliability of a hub-and-spoke network and maintain its normal operation during the failure of an initial hub, a new hub backup optimization method for the hub-and-spoke network was proposed, in which a backup hub is selected for each hub point so as to optimize the initial cost and the backup cost of the network. Firstly, hub backup variables were introduced into the basic model of the hub-and-spoke network, and an extended nonlinear programming model was established. The extended model was linearized by variable substitution, and the mathematical solver CPLEX was used to solve small-scale instances of the hub backup problem. Then, the number of network nodes was increased, and a genetic algorithm was designed to solve the large-scale hub backup optimization problem. Finally, in both CPLEX and the genetic algorithm, the proportional weights of the initial network cost and the backup cost were adjusted, and the exact and optimal solutions of the initial cost, backup cost, hub locations and backup hubs were obtained respectively. The optimal values of the initial hub-and-spoke network, the backup hubs and the objective function were obtained through example experiments. The experimental results show that the backup hub selected by the proposed method shares the traffic and capacity of the initial hub, and when the initial hub fails, the backup hub can take over its transportation tasks and keep the network running. The proposed hub backup optimization method can be applied to emergency logistics and the security management of logistics networks.
    Prediction method of tectonic coal thickness based on particle swarm optimized hybrid kernel extreme learning machine
    FAN Jun, WANG Xin, XU Hui
    2018, 38(6):  1820-1825.  DOI: 10.11772/j.issn.1001-9081.2017112807
    Aiming at the low accuracy of tectonic coal thickness prediction, a method based on an Extreme Learning Machine (ELM) optimized by the Particle Swarm Optimization (PSO) algorithm was proposed for predicting tectonic coal thickness. Firstly, Principal Component Analysis (PCA) was used to reduce the dimensionality of 3D seismic attributes, which reduced the number of attribute dimensions and eliminated the correlation among variables. Then, a Hybrid Kernel Extreme Learning Machine (HKELM) model combining a global polynomial kernel function and a local Gaussian radial basis kernel function was constructed, and the kernel parameters of the HKELM were optimized by the PSO algorithm. Furthermore, to prevent the PSO algorithm from easily falling into local optima, the idea of simulated annealing, an inertia weight decreasing with the number of iterations, and a mutation operation based on opposition-based learning were added to the PSO algorithm, making it easier to jump out of local minima and obtain better results. In addition, to enhance the generalization ability of the model, an L2 regularization term was added on top of the kernel function, which effectively avoids the influence of noisy data and outliers on the generalization performance of the model. Finally, the improved prediction model was applied to the No.15 coal seam in the central part of the Luonan No.2 mining area of the Xinjing Mining Area of Yangquan Coal Mine, and the predicted tectonic coal thickness showed high consistency with the actual geological data. The experimental results show that the prediction error of the tectonic coal thickness model built with the improved-PSO-optimized HKELM is smaller, so the proposed method can be extended to tectonic coal thickness prediction in actual mining areas.
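    The hybrid kernel at the core of the HKELM is simply a convex combination of the two kernels named above. The sketch below shows that combination; the mixing weight `w`, polynomial degree and RBF width `gamma` are illustrative placeholders for the hyper-parameters that the improved PSO would search over.

```python
import math

def hybrid_kernel(x, y, w=0.5, degree=2, gamma=0.5):
    """Convex combination of a global polynomial kernel and a local
    Gaussian radial basis (RBF) kernel, as used in HKELM-style models.

    k(x, y) = w * (x.y + 1)^degree + (1 - w) * exp(-gamma * ||x - y||^2)
    """
    dot = sum(a * b for a, b in zip(x, y))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    poly = (dot + 1.0) ** degree
    rbf = math.exp(-gamma * sq_dist)
    return w * poly + (1.0 - w) * rbf
```

The polynomial term gives the model good extrapolation (global) behaviour while the RBF term gives good interpolation (local) behaviour, which is the usual motivation for mixing them.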
    Cis-regulatory motif finding algorithm in chromatin immunoprecipitation sequencing datasets
    FENG Yanxia, ZHANG Zhihong, ZHANG Shaoqiang
    2018, 38(6):  1826-1830.  DOI: 10.11772/j.issn.1001-9081.2017112749
    Aiming at the motif finding problem in Chromatin Immunoprecipitation Sequencing (ChIP-Seq) datasets from Next-Generation Sequencing (NGS), a new motif finding algorithm based on Fisher's exact test, called FisherNet, was proposed. Firstly, Fisher's exact test was used to compute the P value of every k-mer, and some k-mers were selected as motif seeds. Secondly, the position weight matrix of the initial motif was constructed. Finally, the position weight matrix was used to scan all k-mers to obtain the final motif. The algorithm was verified on ChIP-Seq datasets of mouse Embryonic Stem Cells (mESC), mouse erythrocytes and human lymphoblastoid cell lines, as well as the ENCODE database. The verification results show that the proposed algorithm is more accurate and faster than other common motif finding algorithms, and it can find more than 80% of the core motifs of known transcription factors and their co-factors. The proposed algorithm can be applied to large-scale sequencing datasets while maintaining high accuracy.
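    The seeding step reduces to a 2x2 contingency test per k-mer. The sketch below computes the one-sided Fisher's exact p-value from first principles (a hypergeometric tail sum using `math.comb`); how FisherNet builds its particular table from peak and background counts is an interpretation of the abstract, not a specification from the paper.

```python
from math import comb

def fisher_right_tail(a, b, c, d):
    """One-sided (right-tail) Fisher's exact test p-value for the 2x2
    table [[a, b], [c, d]]: the probability, with all margins fixed,
    of a top-left count of a or more.  For k-mer seeding, a would be
    the k-mer's occurrences in the ChIP-Seq peak sequences and c its
    occurrences in background sequences.
    """
    n = a + b + c + d
    p = 0.0
    for k in range(a, min(a + b, a + c) + 1):
        p += comb(a + c, k) * comb(b + d, a + b - k) / comb(n, a + b)
    return p
```

k-mers with the smallest p-values are enriched in the peaks relative to background and serve as motif seeds for the position weight matrix.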
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn