Table of Contents
10 July 2018, Volume 38 Issue 7
Semantic relation extraction model via attention-based neural Turing machine
ZHANG Runyan, MENG Fanrong, ZHOU Yong, LIU Bing
2018, 38(7): 1831-1838. DOI: 10.11772/j.issn.1001-9081.2017123009
Focusing on the problems of poor memory over long sentences and the neglect of core words' influence in semantic relation extraction, an Attention-based bidirectional Neural Turing Machine (Ab-NTM) model was proposed. First, instead of a Recurrent Neural Network (RNN), a Neural Turing Machine (NTM) was used, with a Long Short-Term Memory (LSTM) network acting as its controller; the NTM's larger, interference-free storage allows it to hold longer memories than an RNN. Second, an attention layer was used to organize context information at the word level so that the model could attend to the core words in a sentence. Finally, the labels were obtained through a classifier. Experiments on the SemEval-2010 Task 8 dataset show that the proposed model outperforms most state-of-the-art methods, achieving an F1-score of 86.2%.
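As an illustration of the word-level attention step described above, here is a minimal NumPy sketch of attention pooling over a sequence of controller hidden states (the dimensions, weight vector and random inputs are purely illustrative, not the paper's parameters):

```python
import numpy as np

def attention_pool(H, w):
    """H: (T, d) hidden states from the LSTM-controlled NTM; w: (d,) learned query.
    Returns a (d,) sentence vector weighted toward the core words."""
    scores = H @ w                          # alignment score per word
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ H                        # weighted sum of hidden states

H = np.random.randn(12, 64)                 # 12 words, 64-dim controller states
w = np.random.randn(64)
sentence_vec = attention_pool(H, w)          # fed to the relation classifier
```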
Multi-intention recognition model combining syntactic features and convolutional neural network
YANG Chunni, FENG Chaosheng
2018, 38(7): 1839-1845. DOI: 10.11772/j.issn.1001-9081.2017122996
Multi-Intention (MI) recognition of short texts is a problem in Spoken Language Understanding (SLU). Effective features are difficult to extract in short-text classification because the features are sparse and only a few words carry most of the information. To solve this problem, a multi-intention recognition model combining syntactic features and a Convolutional Neural Network (CNN) was proposed. Firstly, a sentence was syntactically analyzed to determine whether it contains multiple intentions. Secondly, the number of intentions and a distance matrix were calculated by using Term Frequency-Inverse Document Frequency (TF-IDF) and word embeddings. Then the distance matrix was used as the input of the CNN model to classify intentions. Finally, the emotional polarity of each intention was judged to return the user's true intentions. Experiments were carried out on real data from an existing intelligent customer service system. The experimental results show that the single-classification precision of the combined model is 93.5% over 10 intentions, 1.4 percentage points higher than the original CNN model without syntactic features; in multi-intention recognition, its classification precision is 30 percentage points higher than that of the compared methods.
Semantic matching model of knowledge graph in question answering system based on transfer learning
LU Qiang, LIU Xingyu
2018, 38(7): 1846-1852. DOI: 10.11772/j.issn.1001-9081.2018010186
To address the difficulty of achieving high semantic matching accuracy between questions and relations with only small-scale labeled samples in a single-fact question answering system, a transfer learning model based on Recurrent Neural Network (RNN) was proposed. Firstly, by reconstructing sequences, an RNN-based sequence-to-sequence unsupervised learning algorithm was used to learn the semantic distribution (word vectors and RNN parameters) of questions from a large number of unlabeled samples. Then, by assigning values to the parameters of a neural network, the learned semantic distribution was used to initialize the supervised semantic matching algorithm. Finally, using the inner product of question features and relation features, the semantic matching model was trained on the labeled samples. The experimental results show that, compared with the supervised learning methods Embed-AVG and RNNrandom, the semantic matching accuracy of the proposed model is on average 5.6 and 8.8 percentage points higher respectively in an environment with few labeled samples and many unlabeled samples. The proposed model can significantly improve semantic matching accuracy by pre-learning the semantic distribution of a large number of unlabeled samples.
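The two-stage idea (unsupervised pre-learning, then parameter transfer and inner-product matching) can be sketched as follows; the toy encoder and all parameter shapes are assumptions for illustration, not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stage 1 (unsupervised): parameters learned by the sequence-to-sequence autoencoder
pretrained = {"embedding": rng.normal(size=(5000, 128)),
              "rnn": rng.normal(size=(128, 128))}

# Stage 2 (supervised): initialize the matching network from the pretrained values
matcher = {k: v.copy() for k, v in pretrained.items()}

def encode(token_ids, params):
    # Toy encoder: mean of embedded tokens passed through the recurrent weights
    return np.tanh(params["embedding"][token_ids].mean(axis=0) @ params["rnn"])

def match_score(question_ids, relation_ids):
    # Inner product of question and relation features, as in the abstract
    return float(encode(question_ids, matcher) @ encode(relation_ids, matcher))

print(match_score([12, 7, 301], [44, 9]))
```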
Fine-grained image classification method based on multi-feature combination
ZOU Chengming, LUO Ying, XU Xiaolong
2018, 38(7): 1853-1856. DOI: 10.11772/j.issn.1001-9081.2017122920
Since the limitations of single-feature representation may cause low accuracy in fine-grained image classification, a multi-feature combination representation method based on Convolutional Neural Network (CNN) and Scale Invariant Feature Transform (SIFT) was proposed. Features were extracted comprehensively from the entire target, its key parts and its key points. Firstly, two CNN models were trained with the target-entirety regions and the head-only regions of the fine-grained image library respectively, and were used to extract the target-entirety and head-only CNN features. Secondly, SIFT key points were extracted from all target-entirety regions in the image library, and a codebook was generated through K-means clustering. Then, the SIFT descriptors of each target-entirety region were encoded into a feature vector by using the Vector of Locally Aggregated Descriptors (VLAD) with the codebook. Finally, a Support Vector Machine (SVM) was used to classify the fine-grained images using the combination of multiple features. The method was evaluated on the CUB-200-2011 database and compared with single-feature representation methods. The experimental results show that the proposed method improves classification accuracy by 13.31% compared with the single CNN feature representation, which demonstrates the positive effect of multi-feature combination on fine-grained image classification.
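The VLAD encoding step can be captured in a few lines of NumPy; this is a generic sketch of VLAD (descriptor counts, codebook size and the normalization detail are illustrative):

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """descriptors: (n, d) SIFT descriptors of one region; codebook: (k, d) K-means centers.
    Returns the (k*d,) VLAD vector: per-center sums of residuals, L2-normalized."""
    k, d = codebook.shape
    dists = ((descriptors[:, None, :] - codebook[None]) ** 2).sum(axis=-1)
    assign = dists.argmin(axis=1)             # nearest center for each descriptor
    v = np.zeros((k, d))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - codebook[c]  # accumulate residuals per center
    v = v.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)
```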
Interaction based algorithm for feature selection in text categorization
TANG Xiaochuan, QIU Xiwei, LUO Liang
2018, 38(7): 1857-1861. DOI: 10.11772/j.issn.1001-9081.2018010114
Focusing on the issue of feature selection in text categorization, an interaction-maximizing feature selection algorithm, called Max-Interaction, was proposed. Firstly, an information-theoretic feature selection model was established based on Joint Mutual Information (JMI). Secondly, the assumptions of existing feature selection algorithms were relaxed, and feature selection was transformed into an interaction optimization problem. Thirdly, the max-of-min method was employed to avoid overestimating higher-order interaction. Finally, a text categorization feature selection algorithm based on sequential forward search and high-order interaction was proposed. In the comparison experiments, the average classification accuracy of Max-Interaction was 5.5% higher than that of Interaction Weight Feature Selection (IWFS) and 6% higher than that of Chi-square, and Max-Interaction outperformed the other methods in 93% of the experiments. Therefore, Max-Interaction can effectively improve the performance of feature selection in text categorization.
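A minimal sketch of the sequential forward search with the max-of-min rule might look like this; `relevance` and `interaction` are assumed to be supplied, e.g. computed from JMI terms:

```python
def max_interaction_select(candidates, n_select, relevance, interaction):
    """Greedy forward search: start from the most relevant feature, then repeatedly
    add the candidate whose *minimum* interaction with the selected set is largest,
    which avoids overestimating higher-order interaction."""
    selected = [max(candidates, key=relevance)]
    pool = [f for f in candidates if f != selected[0]]
    while len(selected) < n_select and pool:
        best = max(pool, key=lambda f: min(interaction(f, s) for s in selected))
        selected.append(best)
        pool.remove(best)
    return selected
```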
Imbalanced image classification approach based on convolutional neural network and cost-sensitivity
TAN Jiefan, ZHU Yan, CHEN Tung-shou, CHANG Chin-chen
2018, 38(7): 1862-1865. DOI: 10.11772/j.issn.1001-9081.2018010152
Focusing on the issues that the recall of the minority class is low, the cost of classification is high, and manual feature selection is expensive in imbalanced image classification, an imbalanced image classification approach based on a Triplet-sampling Convolutional Neural Network (Triplet-sampling CNN) and a Cost-Sensitive Support Vector Machine (CSSVM), called Triplet-CSSVM, was proposed. The method has two parts: feature learning and cost-sensitive classification. Firstly, a coding method that maps images to a Euclidean space end-to-end was learned by the CNN with the Triplet loss as its loss function. Then, the dataset was rescaled by sampling to balance the distribution. Finally, the classification result with the minimum cost was obtained by the CSSVM classification algorithm, which assigns different cost factors to different classes. Experiments with the portrait dataset FaceScrub were conducted on the deep learning framework Caffe. The experimental results show that the precision of the proposed method is increased by 31 percentage points and the recall by 71 percentage points compared with VGGNet-SVM (Visual Geometry Group Net-Support Vector Machine) at an imbalance rate of 1:3.
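For reference, the Triplet loss named above has the standard form below (a plain NumPy sketch; the margin value is illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-class embeddings together and push different-class embeddings
    apart by at least `margin` in the learned Euclidean space."""
    d_pos = np.sum((anchor - positive) ** 2)   # anchor-positive squared distance
    d_neg = np.sum((anchor - negative) ** 2)   # anchor-negative squared distance
    return max(0.0, d_pos - d_neg + margin)
```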
Improved hybrid recommendation algorithm based on stacked denoising autoencoder
YANG Shuai, WANG Juan
2018, 38(7): 1866-1871. DOI: 10.11772/j.issn.1001-9081.2017123060
Traditional collaborative filtering algorithms utilize only users' ratings on items when generating recommendations, without considering users' labels or comments; as a result they cannot reflect users' real preferences for different items, their prediction accuracy is limited, and they easily overfit. To address this, a Stacked Denoising AutoEncoder (SDAE)-based improved Hybrid Recommendation (SDHR) algorithm was proposed. Firstly, SDAE was used to extract items' explicit features from users' free-text labels. Then, the Latent Factor Model (LFM) algorithm was improved by replacing LFM's abstract item features with the extracted explicit features to train the matrix factorization model. Finally, the user-item preference matrix was used to generate recommendations. Experiments on the MovieLens dataset show that the accuracy of the proposed algorithm is improved by 38.4%, 16.1% and 45.2% respectively compared with three recommendation models (a collaborative filtering model with label-based weights, a model based on SDAE and extreme learning machine, and a model based on recurrent neural networks). The experimental results show that the proposed algorithm can make full use of items' free-text label information to improve recommendation performance.
Mechanism of sparse restricted Boltzmann machine based on competitive learning
ZHOU Lijun, LIU Kai, LYU Haiyan
2018, 38(7): 1872-1876. DOI: 10.11772/j.issn.1001-9081.2018010001
To resolve the problems of feature homogeneity in unsupervised training of Restricted Boltzmann Machines (RBM) and the non-adaptiveness of the Sparse Restricted Boltzmann Machine (SRBM), a new sparse mechanism for RBM based on competitive learning was designed. Firstly, a distance measure based on the cosine of the angle between a neuron's weight vector and the input vector was designed to evaluate similarity. Secondly, the best-matching hidden unit under this distance measure was selected for each sample during training. Thirdly, the sparse penalty for the other hidden units was calculated according to the activation state of the best-matching hidden unit. Finally, the parameters were updated and the competitive sparsity was applied to the construction of a Deep Boltzmann Machine (DBM) following the deep model training process. Handwritten digit recognition results show that, compared with the mechanism using the sum of squared errors as the regularization factor, the classification accuracy of the DBM based on the new sparse mechanism is improved by 0.74% and the average sparsity measure is increased by 5.6%, without the need to set sparsity parameters. Therefore, the proposed sparse mechanism can improve the training efficiency of unsupervised models such as RBM and can be applied to the construction of deep models.
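The competitive selection step can be sketched as below: pick the hidden unit whose weight vector is most similar to the input under the cosine measure, then penalize the others (the shapes and the simple winner/loser mask are assumptions for illustration, not the paper's exact penalty):

```python
import numpy as np

def best_matching_unit(W, v):
    """W: (n_hidden, n_visible) RBM weights; v: (n_visible,) input sample.
    Returns the index of the hidden unit closest to v under the cosine measure."""
    cos = (W @ v) / (np.linalg.norm(W, axis=1) * np.linalg.norm(v) + 1e-12)
    return int(np.argmax(cos))

W = np.random.randn(100, 784)
v = np.random.rand(784)
winner = best_matching_unit(W, v)
penalty_mask = np.ones(100)
penalty_mask[winner] = 0.0   # sparse penalty applied to all units except the winner
```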
List-wise matrix factorization algorithm with combination of item popularity
ZHOU Ruihuan, ZHAO Hongyu
2018, 38(7): 1877-1881. DOI: 10.11772/j.issn.1001-9081.2017123066
Addressing the difference between the rating rules of the variant Singular Value Decomposition (SVD++) algorithm in the model training and prediction stages, and the equal Top-1 ranking probabilities in the List-wise Matrix Factorization (ListRank-MF) algorithm caused by a large number of items with the same rating, a list-wise matrix factorization algorithm combined with item popularity was proposed. Firstly, the current item to be rated was removed from the set of items the user had rated in the rating rule. Secondly, item popularity was used to improve the Top-1 ranking probability. Then the stochastic gradient descent algorithm was used to solve the objective function and make Top-N recommendations. Based on the modified SVD++ rating rule, the proposed algorithm was compared with point-wise and list-wise SVD++ algorithms on the MovieLens and Netflix datasets. Compared with the list-wise SVD++ algorithm, the Normalized Discounted Cumulative Gain (NDCG) of Top-N recommendation accuracy was increased by 5%-8% on the MovieLens datasets and by about 1% on the Netflix datasets. The experimental results show that the proposed algorithm can effectively improve Top-N recommendation accuracy.
Cooperative hybrid imperialist competitive algorithm for flexible job-shop scheduling problem
LYU Cong, WEI Kanglin
2018, 38(7): 1882-1887. DOI: 10.11772/j.issn.1001-9081.2017122933
For the Flexible Job-shop Scheduling Problem (FJSP), which is NP-hard, a cooperative hybrid imperialist competitive algorithm was proposed to minimize the maximum makespan. Firstly, based on the process characteristics of the standard Imperialist Competitive Algorithm (ICA), adaptive parameters were designed to improve the convergence speed of the algorithm. Secondly, reform of the empires and colonies was introduced: for the stages of operation sequencing and machine selection, a multi-mutation reform strategy was proposed to improve local search efficiency. Finally, a mechanism for exchange and cooperation among the countries on the mainland was created to promote the exchange of information among outstanding countries and improve the global search capability of the algorithm. Tests on many flexible job-shop scheduling instances show that the proposed algorithm outperforms many swarm intelligence algorithms in terms of solution quality and stability, and is well suited to solving this kind of scheduling problem.
Hyperspectral image classification method based on spatial-spectral fusion network
OUYANG Ning, ZHU Ting, LIN Leping
2018, 38(7): 1888-1892. DOI: 10.11772/j.issn.1001-9081.2017122905
Concerning the weak representation ability and high dimensionality of extracted spatial-spectral features, a HyperSpectral Image (HSI) classification method based on a Spatial-Spectral Fusion Network (SSF-Net) was proposed. Firstly, a Two-channel Convolutional Neural Network (Two-CNN) was used to extract spectral-domain and spatial-domain features of the HSI respectively. Secondly, Multimodal Compact Bilinear pooling (MCB) was employed to project the outer product of the extracted multimodal features into a low-dimensional space to produce joint spatial-spectral features. The fusion network can not only model the complex relationships between elements of the spectral and spatial eigenvectors, but also avoid directly computing their outer product, which would incur high dimensionality and high computation cost. The experimental results show that, compared with state-of-the-art neural-network-based classification methods, the proposed algorithm achieves higher pixel classification accuracy, indicating that the spatial-spectral joint vector extracted by the proposed network has stronger representation ability.
Resilience computation for queries with inequalities in databases
LIN Jie, QIN Biao, QIN Xiongpai
2018, 38(7): 1893-1897. DOI: 10.11772/j.issn.1001-9081.2018010078
Focusing on the causality problem of conjunctive queries with inequalities in databases, resilience computation was introduced and implemented. To reduce the computational time complexity for path conjunctive queries with inequalities, a Dynamic Programming for Resilience (DPResi) algorithm was proposed. Firstly, according to the properties of path conjunctive queries with inequalities and the max-flow/min-cut theorem, a Min-Cut algorithm with polynomial time complexity was implemented. Then, by compiling the lineage expression of a Boolean conjunctive query with inequalities into a lineage graph, the resilience problem was transformed into computing the shortest distance in the lineage graph. Combining the inclusion property with the optimal substructure of the lineage graph, the DPResi algorithm with linear time complexity was implemented by applying the idea of dynamic programming. Extensive experiments on TPC-H datasets show that the DPResi algorithm greatly improves the efficiency of resilience computation and has good scalability compared with the Min-Cut algorithm.
Efficient subgraph matching method based on structure segmentation of RDF graph
GUAN Haoyuan, ZHU Bin, LI Guanyu, ZHAO Ling
2018, 38(7): 1898-1904. DOI: 10.11772/j.issn.1001-9081.2017122950
As the structural complexity of query graphs increases, the efficiency of graph-based querying in SPARQL query processing becomes lower and lower. By analyzing the basic structure of Resource Description Framework (RDF) graphs, a subgraph matching method based on structure segmentation of the query graph, called RSM (RDF Subgraph Matching), was proposed. Firstly, a query graph was divided into several simple query subgraphs, and the search space of query graph nodes was defined through a structure index of adjacent predicates. Secondly, the search space was narrowed down using the adjacent subgraph structure, and a matchable subgraph was found in the data graph according to the search area within the search space. Finally, the result graph was output by joining the related subgraphs. The query response times of RSM, RDF-3X, R3F and GraSS were compared on query graphs of different structural complexity over different datasets. The experimental results show that, compared with the other three methods, RSM has shorter query response time and higher query efficiency when processing complex query graphs.
Outlier detection algorithm based on neighborhood value difference metric
YUAN Zhong, FENG Shan
2018, 38(7): 1905-1909. DOI: 10.11772/j.issn.1001-9081.2017123028
Aiming at the problems that symbolic attribute datasets cannot be processed effectively by traditional distance measures and numerical attribute datasets cannot be processed effectively by classical rough set methods, an improved Neighborhood Value Difference Metric (NVDM) method was proposed for outlier detection, utilizing the granulation features of neighborhood rough sets. Firstly, with attribute values normalized, a Neighborhood Information System (NIS) was constructed based on an optimized Heterogeneous Euclidean-Overlap Metric (HEOM) and a neighborhood radius with adaptive characteristics. Secondly, the Neighborhood Outlier Factor (NOF) of each data object was constructed based on the NVDM. Finally, a Neighborhood Value Difference Metric-based Outlier Detection (NVDMOD) algorithm was designed and implemented, which improves on the traditional one-by-one unordered model by making full use of ordered binary search and nearest-neighbor search when computing the Single Attribute Neighborhood Cover (SANC). The NVDMOD algorithm was compared with existing outlier detection algorithms, including the NEighborhood outlier Detection (NED) algorithm, the DIStance-based outlier detection (DIS) algorithm and the K-Nearest Neighbor (KNN) algorithm, on UCI standard datasets. The experimental results show that the NVDMOD algorithm has much higher adaptability and effectiveness, and provides a more effective new method for outlier detection in mixed-attribute datasets.
Time series classifier design based on piecewise dimensionality reduction and updated dynamic time warping
CHANG Bingguo, ZANG Hongying
2018, 38(7): 1910-1915. DOI: 10.11772/j.issn.1001-9081.2018010106
Since the traditional Dynamic Time Warping (DTW) measure is prone to over-bending and suffers from high computational complexity and low efficiency, an Updated Dynamic Time Warping (UDTW) measure based on path correction was proposed. Firstly, the characteristic information of a time series was extracted by the Piecewise Local Max-smoothing (PLM) dimensionality-reduction method, reducing the computational cost of UDTW. Secondly, considering the morphological similarity requirements of sequences, a dynamic penalty factor was set to correct the bending degree of overly bent paths. Finally, based on the updated distance metric, a 1-nearest-neighbor classification algorithm was used to classify time series data, improving the accuracy and efficiency of time series similarity measurement. The experimental results show that the UDTW measure outperforms the traditional DTW measure on 15 UCR datasets, reaching 100% accuracy on 3 of them. In comparison experiments with the Derivative DTW (DDTW) measure, UDTW increases classification accuracy by up to 71.8%, and the execution time of PLM-UDTW is reduced by 99% without decreasing classification accuracy.
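A compact sketch of DTW with an additive bending penalty follows; the fixed `penalty` stands in for the paper's dynamic penalty factor, so it illustrates the mechanism rather than the exact UDTW rule:

```python
import numpy as np

def dtw_with_penalty(x, y, penalty=0.1):
    """Dynamic-programming DTW distance in which non-diagonal (bending) moves
    pay an extra additive cost, discouraging over-bent warping paths."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j - 1],        # diagonal move: no penalty
                                 D[i - 1, j] + penalty,   # vertical move: penalized
                                 D[i, j - 1] + penalty)   # horizontal move: penalized
    return D[n, m]
```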
Driver behavior spectrum analysis method based on vehicle driving data
CHEN Jingren, WU Yefu, WU Bing
2018, 38(7): 1916-1922. DOI: 10.11772/j.issn.1001-9081.2018010090
Focusing on the fact that driver behavior spectrum research in China is still imperfect and there is no corresponding behavior spectrum analysis tool in the professional field, a complete driver behavior spectrum system for commercial passenger vehicles was proposed and an analysis tool was designed. Firstly, the characteristic indexes and evaluation indexes of the driver behavior spectrum were designed and defined. Secondly, analysis methods and algorithms for the characteristic indexes were given: an improved K-means algorithm based on Markov chain Monte Carlo sampling and outlier removal was used to analyze drivers' driving styles, and regression learning was used to analyze drivers' driving skills. Then, a basic data acquisition scheme and preprocessing methods for the driver behavior spectrum based on vehicle networking and big data were designed and proposed. Finally, the Java language and the Spring MVC (Model View Controller) architecture were used to develop the driver behavior spectrum profiling tool. Data mining and machine learning analysis methods were combined with traffic safety, which is of theoretical significance for perfecting the driver behavior spectrum framework. The work provides a scientific and quantitative analysis tool for driver behavior spectrum analysis in China, offers guidance for traffic management departments in standardizing driving behaviors, improves the road safety index and supports reasonable traffic safety management strategies.
Map matching algorithm based on massive bus trajectory data mining
CHEN Hui, JIANG Guifeng, JIANG Guiyuan, WU Jigang
2018, 38(7): 1923-1928. DOI: 10.11772/j.issn.1001-9081.2017123041
Concerning the poor matching performance of existing map matching algorithms (such as the classical Hidden Markov model, its variants, and other advanced algorithms) on low-frequency trajectory data, a trajectory data mining method based on massive historical bus trajectory data was proposed. Taking bus stations as the sequence skeleton, high-frequency trajectory data were formed by mining, extracting, regrouping and sorting a large number of low-frequency trajectory points; the high-frequency trajectory sequence was then processed by a map matching algorithm based on the classical hidden Markov model to obtain the bus route map matching results. Compared with matching on low-frequency data not processed by the mining algorithm, the proposed method reduces the matching error by an average of 6.3%, requires a smaller data size and costs less time. In addition, the method is robust to low-frequency, unstable and noisy trajectory data, and is suitable for map matching of all bus routes.
Research and implementation of key module of data security processing mechanism in software defined network
LI Zhaobin, LI Weilong, WEI Zhanzhen, LIU Mengtian
2018, 38(7): 1929-1935. DOI: 10.11772/j.issn.1001-9081.2017123007
To solve the data leakage problem of the data plane in Software Defined Networking (SDN), a new data security processing mechanism based on the OpenFlow protocol was proposed. Firstly, the flow table structure of the OpenFlow protocol was extended, and OpenFlow data security policies, including security matching fields and security actions, were designed and implemented. Secondly, a centralized management controller was designed to sense changes in the network in a timely manner through multiple functional modules, effectively controlling the global network and maintaining and distributing data encryption/decryption keys and data security policies. Thirdly, the Open vSwitch (OVS) architecture was deeply restructured: the complete process of data security policy matching and data security processing was designed, and an extraction interface for data payload information was implemented. Through these modules, OVS can match data packets against fine-grained data security policies and perform complete data security processing on the matched packets. Finally, on a hardware and software test platform, the encryption and decryption mechanisms were verified, and latency, throughput and CPU utilization were measured and compared. The experimental results show that the proposed mechanism performs data encryption and decryption correctly. Its latency and throughput are at normal levels, but its CPU usage is between 45% and 60%, indicating that further optimization is needed.
Address hopping proactive defense model in IPv6 based on sliding time window
KONG Yazhou, ZHANG Liancheng, WANG Zhenxing
2018, 38(7): 1936-1940. DOI: 10.11772/j.issn.1001-9081.2018010073
Aiming at the problem that IPv6 nodes are easily probed and attacked once end-to-end communication is restored in IPv6 networks, a proactive defense model of Address Hopping based on a Sliding Time Window in IPv6 (AHSTW) was proposed. Session parameters such as the address hopping interval were first negotiated using a shared key, and the concepts of sending and receiving time windows were introduced: the two communicating parties send or receive only the packets within the current time window. Through a Time Window Adaptive Adjustment (TWAA) algorithm, the time window can be adjusted in time according to changes in network delay to adapt to the network environment. Theoretical analysis shows that the proposed model can effectively resist data interception attacks and Denial of Service (DoS) attacks on target IPv6 nodes. The experimental results show that, for transmission of the same packet volume, the extra CPU overhead of the AHSTW model is 2-5 percentage points, with no significant increase in communication cost and no significant decline in communication efficiency. The addresses and ports of the two communicating parties appear random, decentralized and out of order, which greatly increases the cost and difficulty for attackers and protects IPv6 network security.
Anomaly detection based on synthetic minority oversampling technique and deep belief network
SHEN Xueli, QIN Shujuan
2018, 38(7): 1941-1945. DOI: 10.11772/j.issn.1001-9081.2018010178
To solve the low intrusion detection rate for minority-class samples in massive imbalanced datasets, an anomaly detection method based on the Synthetic Minority Oversampling Technique (SMOTE) and a Deep Belief Network (DBN), called SMOTE-DBN, was proposed. Firstly, SMOTE was used to increase the number of samples in minority categories. Secondly, on the preprocessed balanced dataset, the dimensionality of the high-dimensional data was reduced by unsupervised Restricted Boltzmann Machines (RBM). Thirdly, the model parameters were fine-tuned by the Back Propagation (BP) algorithm to obtain a low-dimensional representation of the preprocessed data. Finally, a softmax classifier was used to classify the low-dimensional data. Simulation results on KDD1999 show that, compared with the DBN method and the Support Vector Machine (SVM) method, the detection rate of the SMOTE-DBN method is increased by 3.31 and 7.34 percentage points respectively, and the false alarm rate is decreased by 1.11 and 2.67 percentage points respectively.
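The SMOTE step works by interpolating between a minority sample and one of its nearest minority neighbors; a self-contained NumPy sketch (k, sample counts and seed are illustrative):

```python
import numpy as np

def smote_sample(minority, k=5, n_new=100, seed=0):
    """minority: (n, d) array of minority-class samples.
    Returns (n_new, d) synthetic samples on segments between neighbors."""
    rng = np.random.default_rng(seed)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d2 = np.sum((minority - minority[i]) ** 2, axis=1)
        neighbors = np.argsort(d2)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbors)
        lam = rng.random()                     # interpolation coefficient in [0, 1)
        new.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.asarray(new)
```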
Interrupt path optimization method of virtual cryptographic device with reducing context switching
LI Shuai, SUN Lei, GUO Songhui
2018, 38(7): 1946-1950. DOI: 10.11772/j.issn.1001-9081.2017122890
Aiming at the problem that cryptographic performance is degraded by the excessive interrupt-delivery cost of cipher devices in virtualized environments, an interrupt path optimization method for virtual cryptographic devices with Reduced Context Switching (RCS) was proposed. Firstly, a mapping table of the host to Virtual Cipher Machine (VCM) relationship was established in the virtual machine kernel. Then, the types of interrupt requests that the host delivers to the VCM were judged against the mapping table, and types unassigned in the VCM were registered. Finally, interrupts were handled directly by the VCM interrupt handler. In this process, the context-switching overhead caused by host intervention was reduced and cryptographic performance was improved. The encryption speed of the VCM was used as the performance reference in the experiments. The results show that the speed of the VCM using the Advanced Encryption Standard (AES) algorithm is increased by 16.35%, and that using the Secure Hash Algorithm (SHA256) is increased by 12.25%.
SIR rumor propagation model on dynamic homogeneity network
FU Wei, WANG Jing, PAN Xiaozhong, LIU Yazhou
2018, 38(7): 1951-1955. DOI: 10.11772/j.issn.1001-9081.2018010132
To account for infected nodes moving out of the system during rumor propagation, a new SIR (Susceptible-Infective-Removal) rumor propagation model on a dynamic homogeneous network was proposed by modifying the normalization condition of the classical SIR rumor propagation model. Firstly, according to the propagation rules and mean-field theory, the rumor propagation dynamics equations were established on the homogeneous network. Then the steady state and infection peak of the rumor propagation process were analyzed theoretically. Finally, the influence of the infection rate, immunity rate, real immunity coefficient and average network degree on rumor propagation was studied through numerical simulation. The research indicates that, as infected nodes move out of the system, the steady-state value decreases and the infection peak slightly increases compared with the classical SIR rumor propagation model. The study also shows that the peak value of rumor infection increases as the infection probability increases and the immunity probability decreases. As the real immunity coefficient increases, the steady-state value of immune nodes increases. The average network degree has no influence on the steady state of rumor propagation, but the larger the average degree, the earlier the infection peak arrives. This research extends the application scope of the SIR propagation model from closed systems to non-closed systems, providing theoretical guidance and numerical support for rumor prevention measures.
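For orientation, a classical homogeneous mean-field SIR simulation (not the paper's modified normalization condition) can be integrated with a simple Euler loop; all parameter values below are illustrative:

```python
def simulate_sir(beta=0.3, gamma=0.1, k_avg=6, s0=0.99, i0=0.01, steps=500, dt=0.1):
    """Euler integration of ds/dt = -beta*k*s*i, di/dt = beta*k*s*i - gamma*i,
    dr/dt = gamma*i on a homogeneous network with average degree k_avg."""
    s, i, r, history = s0, i0, 0.0, []
    for _ in range(steps):
        new_inf = beta * k_avg * s * i * dt
        rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        history.append((s, i, r))
    return history

peak_i = max(i for _, i, _ in simulate_sir())   # the infection peak discussed above
print(peak_i)
```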
Improvement of Niederreiter public key cryptosystem
LIU Xiangxin, YANG Xiaoyuan
2018, 38(7): 1956-1959. DOI: 10.11772/j.issn.1001-9081.2018010033
Given that the Niederreiter public key cryptosystem is vulnerable to distinguishing attacks and Information Set Decoding (ISD), an improved Niederreiter public key cryptosystem was proposed. Firstly, the permutation matrix in the Niederreiter scheme was improved by replacing the original permutation matrix with a random matrix. Secondly, the error vector in the scheme was randomly split to conceal its Hamming weight. Finally, the encryption and decryption processes were revised accordingly to improve security. Analysis shows that the improved scheme can resist distinguishing attacks and ISD. The public key size of the improved scheme is smaller than that of the scheme proposed by Baldi et al. (BALDI M, BIANCHI M, CHIARALUCE F, et al. Enhanced public key security for the McEliece cryptosystem. Journal of Cryptology, 2016, 29(1): 1-27). At the 80-bit security level, the public key of the improved scheme is reduced from 28408 bits to 4800 bits; at the 128-bit security level, it is reduced from 57368 bits to 12240 bits. As an anti-quantum cryptography scheme, the improved scheme gains in viability and competitiveness.
Differential privacy budget allocation method for data of tree index
WANG Xiaohan, HAN Huihui, ZHANG Zepei, YU Qingying, ZHENG Xiaoyao
2018, 38(7): 1960-1966. DOI: 10.11772/j.issn.1001-9081.2018010014
Noise must be added for differential privacy protection of spatial data with tree indexes. Most existing differential privacy budget methods allocate the budget uniformly, so ordinary users cannot personalize the choice. To solve this problem, an arithmetic-sequence privacy budget allocation method and a geometric-sequence privacy budget allocation method were proposed. Firstly, the spatial data was indexed with a tree structure. Secondly, users could personalize the difference or ratio between the privacy budgets assigned to two adjacent layers, dynamically adjusting the budget according to their needs for privacy protection and query accuracy. Finally, the privacy budget was allocated to each layer of the tree, realizing personalized, on-demand allocation. Theoretical analysis and experimental results show that the two methods are more flexible in privacy budget allocation than uniform allocation, and that the geometric-sequence method outperforms the arithmetic-sequence method.
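Both allocation rules reduce to a few lines once the user supplies the ratio or difference between adjacent layers; a sketch under the usual sequential-composition assumption that per-level budgets sum to the total:

```python
def geometric_budgets(total_eps, depth, ratio=2.0):
    """Adjacent tree levels differ by a user-chosen ratio (value illustrative);
    budgets are scaled so they sum to total_eps."""
    weights = [ratio ** level for level in range(depth)]
    scale = total_eps / sum(weights)
    return [w * scale for w in weights]

def arithmetic_budgets(total_eps, depth, diff=0.05):
    """Adjacent levels differ by a fixed diff; the first term follows from
    total_eps = depth*a0 + diff*depth*(depth-1)/2."""
    a0 = (total_eps - diff * depth * (depth - 1) / 2) / depth
    return [a0 + diff * level for level in range(depth)]

print(geometric_budgets(1.0, 4))   # e.g. [0.0667, 0.1333, 0.2667, 0.5333]
```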
Steganalysis based on Bayesian network for compressed speech
YANG Jie, LI Songbin, DENG Haojiang
2018, 38(7): 1967-1973. DOI: 10.11772/j.issn.1001-9081.2017122883
Among steganography methods for low-bit-rate compressed speech based on Quantization Index Modulation (QIM), Nearest-neighbor Projection Point QIM (NPP-QIM) steganography has high embedding efficiency and security. Focusing on the low accuracy of existing steganalysis methods against NPP-QIM steganography, a steganalysis approach based on Bayesian inference was proposed. Firstly, a Codeword Spatiotemporal Transition Network (CSTN) was constructed using the Vector Quantization (VQ) codewords VQ1, VQ2 and VQ3. Secondly, a codeword transition index was introduced to simplify the CSTN into a Steganography-Sensitive CSTN (SS-CSTN). Thirdly, a Codeword Bayesian Network (CBN) was constructed based on the SS-CSTN. Finally, the network parameters of the CBN were learned using the Dirichlet distribution as the prior to implement QIM steganalysis. The experimental results indicate that, at 100% embedding strength and 10 s speech length, the detection accuracy of the proposed CBN method against NPP-QIM steganography is improved by 25 and 37 percentage points compared with the Index Distribution Characteristic (IDC) method and the Derivative Mel-Frequency Cepstral Coefficients (DMFCC) method respectively. In terms of time performance, the CBN method can detect a 10 s speech segment in real time, in about 21 ms.
WSN hierarchical routing algorithm based on fuzzy C-means clustering and swarm intelligence
QI Pan, BAO Kaiyang, MA Xiaoyuan
2018, 38(7): 1974-1980. DOI: 10.11772/j.issn.1001-9081.2018010144
To improve energy efficiency and prolong the life cycle of Wireless Sensor Networks (WSN), a WSN hierarchical routing algorithm based on Fuzzy C-Means (FCM) clustering and Swarm Intelligence (FCM-SI) was proposed. Firstly, the FCM clustering algorithm was adopted to cluster the network, optimizing the distance between common nodes and Cluster Heads (CH). Then, an Artificial Bee Colony (ABC) algorithm with three parameters was used to select the optimal CHs. Finally, an Ant Colony Optimization (ACO) algorithm was adopted to search the multi-hop path between the CHs and the Base Station (BS), taking energy consumption and load balancing into consideration. The simulation results show that, compared with Improved Low Energy Adaptive Clustering Hierarchy (I-LEACH) based on uniform clustering, ABC-based LEACH (ABC-LEACH) and ACO-based LEACH (ANT-LEACH), FCM-SI prolongs the network life cycle by 65.2%, 49.6% and 29.0% respectively for an initial network of 100 nodes deployed in a 100 m * 100 m area. FCM-SI can effectively prolong network life and improve energy efficiency.
RSSI collaborative location algorithm with preferential accuracy selection for wireless sensor networks
WANG Ming, XU Liang, HE Xiaomin
2018, 38(7): 1981-1988. DOI: 10.11772/j.issn.1001-9081.2017123050
Concerning the insufficient and blind use of Received Signal Strength Indicator (RSSI) information among unknown nodes, a new RSSI collaborative location algorithm with preferential accuracy selection for Wireless Sensor Networks (WSN) was proposed. Firstly, nodes with high locating accuracy were selected from the coarsely located unknown nodes based on RSSI thresholds. Secondly, a subset judgment method was used to find the unknown nodes least affected by the environment as the secondary collaborative backbone nodes. Then, based on the positioning errors of the anchor nodes, an anchor node replacement criterion was used to further extract high-precision nodes from the secondary cooperative nodes as the optimal collaborative backbone nodes. Finally, with the collaborative backbone nodes as cooperative objects, the unknown nodes were corrected according to precision priority. In the simulation experiments, the average localization accuracy of the proposed algorithm was within 1.127 m on a 100 m * 100 m grid. In terms of locating accuracy, the proposed algorithm improves average accuracy by 15% compared with an improved WSN locating algorithm using the RSSI model; in terms of time efficiency, it improves on the traditional RSSI collaborative location algorithm by 20% under the same conditions. The proposed algorithm can therefore effectively enhance locating accuracy, reduce computational complexity and improve time efficiency.
UWB high-precision localization in underground coal mine based on region determination
FANG Wenhao, LU Yang, WEI Xing
2018, 38(7): 1989-1994. DOI: 10.11772/j.issn.1001-9081.2017122994
To meet the increasing demand for high-precision localization in coal mines, a set of underground positioning anchors and tags based on Ultra WideBand (UWB) communication was designed and implemented using the high-precision wireless transceiver chip DW1000. The Asymmetric Double-Sided Two-Way Ranging (ADS-TWR) algorithm was used to improve the ranging accuracy between anchor and tag, effectively suppressing the error caused by clock drift. Aiming at the large amount of invalid communication generated when a tag broadcasts request frames to start localization in an underground multi-anchor layout, a region determination strategy for tags based on ADS-TWR was proposed, so that a tag communicates only with the anchors in its own region. At the same time, region-abnormality self-checking and region-correction mechanisms for tags were introduced to ensure efficient and stable operation of the system. In the coordinate computation phase for tags, the triangle centroid algorithm was used to further improve localization accuracy on top of the high-precision ranging and to reduce localization processing time. The experimental results show that the localization accuracy for tags is within 15 cm, which meets the requirement of high-precision localization in underground coal mines.
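The drift-canceling property of ADS-TWR comes from combining the two round-trip and two reply intervals; the standard ADS-TWR time-of-flight formula can be sketched as follows (the timestamp unit is an assumption):

```python
C = 299_792_458.0  # speed of light, m/s

def ads_twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    """Standard asymmetric double-sided two-way ranging estimate: this combination
    of the two round-trip and two reply intervals cancels first-order clock-drift
    error between anchor and tag. All intervals are assumed to be in seconds."""
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2)
    return tof * C
```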
Uneven clustering protocol based on odd-even round clustering and double cluster head
LI Anchao, CHEN Guifen
2018, 38(7): 1995-2000. DOI: 10.11772/j.issn.1001-9081.2017123081
To address the "energy hotspot" problem and poor system robustness in Wireless Sensor Networks (WSN), an Uneven Clustering protocol based on Odd-even round clustering and Double cluster heads (UCOD) was proposed. Firstly, the competitive radius function was optimized to make the cluster head distribution more reasonable. Secondly, a main/vice cluster head mechanism was introduced: the main cluster head sleeps when its energy falls below a set threshold, and the vice cluster head takes over the functions of both heads to improve robustness. Then, different clustering mechanisms were adopted for odd and even rounds: in odd-numbered rounds all nodes compete for cluster headship, while in even-numbered rounds cluster heads are selected from those of the odd-numbered rounds, reducing the energy nodes consume in cluster-head selection. Finally, the network was layered, and each node selected its next-level relay node according to location, energy, forwarding count and the number of surrounding nodes. In comparison experiments with DEBUC (Distributed Energy-Balanced Unequal Clustering routing protocol) and HRPNC (Hierarchical Routing Protocol for wireless sensor networks based on Non-uniform Clustering), the network lifetime of UCOD was increased by 28.4% and 13.7% respectively, and its packet loss rate was reduced by 39.1 and 27.5 percentage points at a cluster-head damage rate of 50%. The experimental results show that UCOD can effectively improve energy efficiency and system robustness.
State machine based video rate adaptation algorithm
HUANG Sheng, HU Lingwei, FU Yuanpeng
2018, 38(7): 2001-2004. DOI: 10.11772/j.issn.1001-9081.2017122934
Due to the inherent randomness of bandwidth, existing rate adaptation algorithms based on Dynamic Adaptive Streaming over Hyper Text Transfer Protocol (DASH) fail to balance playback fluency and video quality. Concerning this problem, a State machine-based DASH (SDASH) algorithm was proposed to analyze and control the rate-switching process. Firstly, the factors influencing the client's Quality of Experience (QoE) were fully considered and analyzed numerically. Secondly, six bitrate states were proposed according to these factors, and the relations between the video bitrate and the changes in the factors' values were used as the state transition conditions. Finally, when the playback buffer and the bitrate deviation ratio satisfied the threshold conditions, the video bitrate was switched to the level with the best overall trade-off between playback fluency and video quality. The experimental results demonstrate that, compared with a fuzzy-based DASH adaptation algorithm, the proposed algorithm can improve the video bitrate while avoiding bitrate plunges, thus balancing playback fluency and video quality and improving QoE.
Network faulty link identification based on linear programming relaxation method
FAN Xiaobo, LI Xingming
2018, 38(7): 2005-2008. DOI: 10.11772/j.issn.1001-9081.2018010155
Concerning the NP (Nondeterministic Polynomial)-hard problem of localizing link failures from end-to-end measurements in communication networks, a new tomography method that relaxes the Boolean constraints was proposed. Firstly, the relationship between path states and link states in a network was modeled as Boolean algebraic equations, and failure localization was treated as an optimization problem under the constraints of these equations. The NP-hardness derives from the binary (normal/failed) link states in this optimization problem; therefore, by relaxing the Boolean constraints, the proposed method transforms the problem into a Linear Programming (LP) problem, which can be solved by any LP solver to obtain the set of failed links. Simulation experiments were conducted to identify link failures on real network topologies. The experimental results show that the false negative rate of the proposed method is 5%-30% lower than that of the classical heuristic algorithm TOMO (TOMOgraphy).
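A minimal sketch of the relaxation using scipy.optimize.linprog on a toy topology (the link and path data below are assumptions; healthy paths pin their links to 0, and each failed path must contain at least one failed link):

```python
import numpy as np
from scipy.optimize import linprog

n_links = 5
good_paths = [[0, 1], [3]]        # measured healthy: every link on them is up
bad_paths = [[1, 2], [2, 3, 4]]   # measured failed: at least one link is down

c = np.ones(n_links)              # minimize the total number of failed links
good_links = {l for p in good_paths for l in p}
bounds = [(0.0, 0.0) if l in good_links else (0.0, 1.0) for l in range(n_links)]

# Each failed path needs sum of its x_l >= 1, written as -sum(x_l) <= -1.
A_ub = np.zeros((len(bad_paths), n_links))
for row, path in enumerate(bad_paths):
    A_ub[row, path] = -1.0
b_ub = -np.ones(len(bad_paths))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print([l for l, v in enumerate(res.x) if v > 0.5])   # round the relaxed solution -> [2]
```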
Channel estimation algorithm based on cell reference signal in LTE-A system
LI Huimin, ZHANG Zhizhong, LI Linxiao
2018, 38(7): 2009-2014. DOI: 10.11772/j.issn.1001-9081.2017123054
Interpolation algorithms are usually used to estimate the channel frequency response at data positions in Long Term Evolution-Advanced (LTE-A) systems. Concerning the problems that the traditional Linear Minimum Mean Square Error (LMMSE) algorithm needs channel statistics in advance and suffers from high computational complexity due to a matrix inversion, an improved LMMSE channel estimation interpolation algorithm was proposed. Firstly, the pilots were interpolated to add virtual pilots, improving the performance of the algorithm. Secondly, an approximate estimation method for the autocorrelation matrix and the Signal-to-Noise Ratio (SNR) was given, exploiting the fact that channel energy is more concentrated in the time domain. Finally, a sliding window method was adopted to further reduce the complexity of the LMMSE interpolation in the frequency domain. The simulation results show that the overall performance of the proposed algorithm is better than that of linear interpolation and Discrete Fourier Transform (DFT) interpolation, and that it achieves Bit Error Rate (BER) and Mean Squared Error (MSE) performance similar to the traditional LMMSE interpolation algorithm. Furthermore, it reduces complexity by 98.67% compared with the traditional LMMSE estimator without degrading overall BER and MSE performance, making it suitable for practical engineering applications.
Performance analysis of Luby transform codes under Gaussian elimination decoding
SUO Longlong, ZHANG Gengxin, BIAN Dongming, XIE Zhidong, TIAN Xiang
2018, 38(7): 2015-2019. DOI: 10.11772/j.issn.1001-9081.2017122989
Concerning the complexity and inaccuracy of existing performance analysis methods for Luby Transform (LT) codes under Gaussian elimination decoding, a novel performance analysis method based on a probability transfer function was proposed. Firstly, for two LT codes with simple uniform degree distributions, the precise performance was studied and its quantitative expression was given. Secondly, general LT codes were investigated, and a simple but effective qualitative analysis method was proposed. Finally, simulations were done to verify the new method. Compared with the traditional method, which gives only the upper and lower bounds of the rank of the generator matrix, the maximum error of the performance analysis results for simple uniform-degree LT codes is reduced to 0.0124, and the analysis complexity for general LT codes decreases to O(k^2). Theoretical analysis shows that the proposed method can effectively guide the optimized design of LT codes in communication applications.
Capture method of direct sequence spread spectrum signal based on cascade stochastic resonance
WANG Aizhen, HU Jiao, HAN Hangcheng
2018, 38(7): 2020-2023. DOI: 10.11772/j.issn.1001-9081.2018030514
To overcome the influence of low Signal-to-Noise Ratio (SNR) and large frequency offset on signal acquisition in direct sequence spread spectrum systems, a new acquisition method based on cascaded stochastic resonance was proposed. Firstly, the input signal was filtered by a partial matched filter; when the local pseudo-code was aligned with the pseudo-code phase of the input signal, only the residual Doppler frequency offset remained in the output. The signal was then processed by cascaded stochastic resonance to increase its SNR. Finally, Fast Fourier Transform (FFT) spectrum analysis was used to find the spectral peak and compute the Doppler frequency offset. Theoretical analysis and simulations show that the proposed algorithm improves acquisition sensitivity for direct sequence spread spectrum signals at an input SNR of -26 dB, with the output SNR improved by 15 dB after the two-stage cascaded stochastic resonance system; at the same time, the correct detection probability is improved by about 4 dB compared with the traditional acquisition algorithm. The proposed method not only suppresses noise but also converts part of the noise energy into signal energy, while decreasing the influence of large Doppler frequency offsets; it therefore has clear advantages in acquiring weak signals.
Automatic custom instructions identification method for high level synthesis
XIAO Chenglong, LIN Jun, WANG Shanshan, WANG Ning
2018, 38(7): 2024-2031. DOI: 10.11772/j.issn.1001-9081.2018010062
Aiming at the difficulty of improving performance and reducing power consumption in High Level Synthesis (HLS), an automatic custom-instruction identification method for high level synthesis was proposed. The enumeration and selection of custom instructions were performed before high level synthesis, providing a universal automatic custom-instruction identification method. Firstly, the high-level source code was transformed into a Control Data Flow Graph (CDFG) and preprocessed. Secondly, a subgraph enumeration algorithm was used to enumerate, in a bottom-up manner, all connected convex subgraphs of the Data Flow Graphs (DFG) within the control data flow graph, which effectively improved the user's ability to flexibly modify constraints. Then, considering area, performance and code size, subgraph selection algorithms were used to select a subset of optimal subgraphs as the final custom instructions. Finally, new code incorporating the selected custom instructions was regenerated as the input for high level synthesis. Compared with traditional high level synthesis, pattern selection based on frequency of occurrence reduced area by an average of 19.1%, while subgraph selection based on critical paths reduced latency by an average of 22.3%. In addition, compared with the Transitive Digraph (TD) algorithm, the enumeration efficiency of the proposed algorithm was improved by an average of 70.8%. The experimental results show that the automatic custom-instruction identification method can significantly improve performance and reduce area and code size for high level synthesis in circuit design.
Priority calculation method of software crowdsourcing task release
ZHAO Kunsong, YU Dunhui, ZHANG Wanshan
2018, 38(7): 2032-2036. DOI: 10.11772/j.issn.1001-9081.2018010001
Aiming at the problem that existing software crowdsourcing platforms do not consider the order of task release, a method for calculating the Task Release Priority (TRP) of software crowdsourcing tasks based on task publisher weight and task weight was proposed. Firstly, a time weight function based on a half-sine curve was used to measure the activity of the task publisher and the cumulative turnover of the task, so as to calculate the task publisher weight. Secondly, the task complexity was calculated from the system architecture diagram and data flow diagram, measuring module complexity, design complexity and data complexity, and the task benefit factor and task urgency factor were calculated from the task quotation and task duration, yielding the task weight. Finally, the task release priority was derived from the task publisher weight and the task weight. The experimental results show that the proposed algorithm is effective and reasonable, achieving a success rate of up to 98%.
Recommending clone refactoring method based on decision tree
SHE Rongrong, ZHANG Liping, HOU Min, YAN Sheng
2018, 38(7): 2037-2043. DOI: 10.11772/j.issn.1001-9081.2017122997
Aiming at the long-term software maintenance burden and even the introduction of errors caused by extensive use of cloned code, a classifier based on a decision tree was proposed to recommend clones for refactoring. Firstly, clone detection was performed using NiCad. Secondly, features related to the cloning relationship, the cloned code segment and the clone context were collected. Thirdly, a decision tree classifier was trained. Finally, the classification results were evaluated by K-fold cross-validation. The experiments were conducted on nearly 600 clones in five open-source software systems. The experimental results show that the proposed method achieves 80% accuracy when recommending clone refactoring instances for each target system.
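The training-and-evaluation pipeline is conventional enough to sketch with scikit-learn; the feature matrix below is random stand-in data, not features mined from NiCad output:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((600, 8))       # ~600 clones, 8 relation/segment/context features (hypothetical)
y = rng.integers(0, 2, 600)    # 1 = clone recommended for refactoring

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # K-fold cross-validation as in the abstract
print(scores.mean())
```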
Construction and optimization of orthopedic plates based on average bone model
ZHANG Rongli, HE Kunjin, ZHANG Yuxue
2018, 38(7): 2044-2049. DOI: 10.11772/j.issn.1001-9081.2017123031
Serial orthopedic plates are not ideal in terms of material saving and stress dispersion. To design orthopedic plates quickly and conveniently, a method to optimize plates by editing semantic parameters based on an average bone model was proposed. Firstly, to obtain a reasonable distribution of serial plates in number and size, an average bone model with weights was created from existing bones. Secondly, a common orthopedic plate with semantic parameters was constructed based on the average bone model, so that it could be conveniently modified and optimized. Finally, the thickness of the plate was optimized through finite element analysis and a bisection strategy to meet the stress condition with the least material. The experimental results show that the volumes of the optimized femur clover plate and the type Ⅲ distal femur plate are decreased by 2.7% and 12.2% respectively, while the maximum stress is decreased by 56.9% and 24.4% respectively. These results indicate that the proposed method can save material and disperse the stress of plates, effectively constructing and optimizing orthopedic plates.
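The bisection strategy for plate thickness can be sketched as below: find the smallest thickness whose peak stress stays under the allowable stress. Here peak_stress() is a placeholder standing in for a finite element analysis call, an illustrative assumption rather than the authors' FEA model.

```python
# Bisection on plate thickness under a stress constraint (sketch).
def peak_stress(thickness_mm):
    return 400.0 / thickness_mm   # placeholder: stress falls as thickness grows

def min_thickness(allowable_mpa, lo=1.0, hi=10.0, tol=1e-3):
    assert peak_stress(hi) <= allowable_mpa, "upper bound must be feasible"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if peak_stress(mid) <= allowable_mpa:
            hi = mid   # feasible: try a thinner plate
        else:
            lo = mid   # infeasible: must be thicker
    return hi

print(f"minimal thickness ~ {min_thickness(120.0):.3f} mm")
```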
Fast workpiece matching method for flexible clamping robot based on improved SURF algorithm
DU Liuqing, XU Hezuo, YU Yongwei, ZHANG Jianheng
2018, 38(7): 2050-2055. DOI: 10.11772/j.issn.1001-9081.2018010117
Since the traditional SURF (Speeded-Up Robust Feature) algorithm takes a long time to construct local feature descriptors, an improved SURF algorithm was proposed to meet real-time requirements. Firstly, the Determinant of Hessian (DoH) matrix was adopted to detect the key points of an image, and non-maximum suppression and interpolation were used to search for and position the extreme points. Secondly, the gray centroid method was adopted to determine the main direction of each key point. Then a binary descriptor, BRIEF (Binary Robust Independent Elementary Feature), was adopted to describe the key points, and the main direction of each key point was used to construct a directed feature descriptor so as to keep rotation invariance. Finally, Hamming distance was used to preliminarily determine the matching points, and mismatched points were then removed by the ratio test and the RANSAC (Random Sample Consensus) algorithm to improve matching accuracy. The experimental results show that, when the improved SURF algorithm is applied to the flexible clamping robot, the matching time is reduced from 214.10 ms to 86.29 ms and the matching accuracy is increased by 2.6% compared with the traditional SURF algorithm. Therefore, the proposed algorithm can effectively improve the workpiece image matching speed and matching precision of flexible clamping robots.
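A comparable matching pipeline can be sketched in OpenCV as below. ORB (intensity-centroid orientation plus BRIEF) is used here as a stand-in for the paper's custom oriented-BRIEF descriptor, and the image file names are placeholders.

```python
# Sketch: binary descriptors + Hamming matching + ratio test + RANSAC.
import cv2
import numpy as np

img1 = cv2.imread("workpiece_template.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("workpiece_scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)     # Hamming distance on binary codes
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]    # Lowe's ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # drop RANSAC outliers
print(f"{int(mask.sum())} inlier matches of {len(good)}")
```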
Color feature coding and classification of single polarized synthetic aperture radar image
DENG Xu, XU Xin, DONG Hao
2018, 38(7): 2056-2063. DOI: 10.11772/j.issn.1001-9081.2017112780
Aiming at the problem of poor detail and visibility in current color coding methods for single-polarization Synthetic Aperture Radar (SAR) images, a color feature coding method was proposed. Firstly, texture features were extracted from a single-polarization SAR image. Secondly, each feature was quantized to the range 0-255, and an RGB color was assigned to each gray level to generate a color feature map. Finally, the features were ranked by importance calculated with a random forest, and pseudo-color maps were generated by mapping each group of three features to the R, G and B channels. Based on the presented color feature coding method, a new classification method was proposed. Firstly, the pseudo-color map with the best geographical separability was selected according to the visual effect and segmented by the Statistical Region Merging (SRM) algorithm. Secondly, all the RGB pseudo-color maps were used as classification features, with a random forest as the classifier, to obtain preliminary results. Finally, a relative majority vote over the preliminary results yielded the final classification results. In the method verification, two sets of TerraSAR-X single-polarization SAR data were used. Compared with the corresponding grayscale images and an HSI-based color coding method, the information entropy of the color images generated by the proposed color feature coding method is greatly improved, as is the classification accuracy of each type of ground feature for both data sets. This demonstrates that the proposed method preserves more details through richer color information and is more conducive to visualization and terrain classification, indicating that the proposed color feature coding method is feasible.
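The feature-to-color coding step can be sketched as follows: quantize each texture feature to 0-255 and map a ranked triple of features to the R, G and B channels. The random feature maps stand in for real SAR texture features and the "ranking" is an assumption for illustration.

```python
# Sketch: quantize feature maps and stack a top-3 triple into a pseudo-color image.
import numpy as np

def quantize(f):
    f = (f - f.min()) / (np.ptp(f) + 1e-12)
    return (f * 255).astype(np.uint8)

rng = np.random.default_rng(0)
features = [rng.random((64, 64)) for _ in range(6)]   # e.g. GLCM / Gabor maps
ranked = features[:3]                                 # top-3 by (assumed) importance

pseudo_color = np.dstack([quantize(f) for f in ranked])  # H x W x 3 RGB image
print(pseudo_color.shape, pseudo_color.dtype)
```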
Face alignment method based on scale self-adaption and incremental learning
CHEN Ping, GONG Xun
2018, 38(7): 2064-2069. DOI: 10.11772/j.issn.1001-9081.2017122928
To solve the problems of traditional regression-based methods, such as the loss of texture caused by face scale normalization, the high time cost of retraining on an expanded data set to improve the generalization ability of the original model, and even potential non-convergence and incomputability, a face alignment method based on scale self-adaption and Incremental Learning (IL) was proposed. Firstly, the mapping relationship between the initial face points and the standard face points was established. Secondly, texture features were extracted on the original image and face scales were normalized via this mapping relationship. Finally, incremental learning was applied to new data sets using the existing models, which quickly improved the generalization ability of the original model. The experimental results show that the proposed method achieves higher alignment accuracy than traditional regression-based methods. On the AFW (Annotated Faces in the Wild) dataset (68 feature points), the accuracy is increased by 2 to 4 percentage points; on a 100000-level large dataset (5 feature points), the robustness is increased by 1 to 2 percentage points compared with methods based on Deep Learning (DL). In addition, the proposed incremental learning scheme is applicable not only to the face alignment regression model but also to other regression models.
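The incremental-update idea can be sketched as below: an existing regressor is refined on newly arrived samples without full retraining. SGDRegressor with partial_fit is a stand-in for the authors' regression model, and the data is synthetic.

```python
# Sketch: incremental learning via partial_fit instead of full retraining.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
X_old, X_new = rng.random((1000, 10)), rng.random((100, 10))
w = rng.random(10)
y_old, y_new = X_old @ w, X_new @ w

model = SGDRegressor(random_state=0)
model.partial_fit(X_old, y_old)    # "existing" model trained on old data
model.partial_fit(X_new, y_new)    # incremental update on the new batch only
print(model.score(X_new, y_new))
```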
Road extraction from multi-source high resolution remote sensing image based on fully convolutional neural network
ZHANG Yonghong, XIA Guanghao, KAN Xi, HE Jing, GE Taotao, WANG Jiangeng
2018, 38(7): 2070-2075. DOI: 10.11772/j.issn.1001-9081.2017122923
Semi-automatic road extraction methods need considerable manual participation, are time-consuming, and have low extraction accuracy. To solve these problems, a new road extraction method for multi-source high resolution remote sensing images based on a Fully Convolutional neural Network (FCN) was proposed. Firstly, GF-2 and WorldView high resolution remote sensing images were divided into small tiles, and the tiles containing roads were identified by a Convolutional Neural Network (CNN). Then, the Canny operator was used to extract road edge feature information. Finally, the RGB channels, grayscale and ground truth were combined and fed into the FCN model for training, extending the existing FCN model to a new FCN model with multi-satellite-source and multi-feature-source input. The Shigatse region of Tibet was chosen as the research area. The experimental results show that the proposed method achieves an extraction precision of 99.2% for roads in high resolution remote sensing images and effectively reduces the extraction time.
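The multi-feature input preparation can be sketched as below: Canny edges are stacked with the RGB and grayscale channels to form the FCN input. The thresholds and tile file name are illustrative assumptions.

```python
# Sketch: build a 5-channel FCN input from RGB + gray + Canny edges.
import cv2
import numpy as np

tile = cv2.imread("gf2_tile.png")                 # H x W x 3 BGR tile
gray = cv2.cvtColor(tile, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                  # road-edge feature channel

fcn_input = np.dstack([tile, gray, edges]).astype(np.float32) / 255.0
print(fcn_input.shape)                            # H x W x 5 multi-feature input
```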
Proximal smoothing iterative algorithm for magnetic resonance image reconstruction based on Moreau-envelope
LIU Xiaohui, LU Lijun, FENG Qianjin, CHEN Wufan
2018, 38(7): 2076-2082. DOI: 10.11772/j.issn.1001-9081.2017122980
To solve the problem of two non-smooth regularization terms in sparse reconstruction of Magnetic Resonance Imaging (MRI) based on Compressed Sensing (CS), a new Proximal Smoothing Iterative Algorithm (PSIA) based on the Moreau envelope was proposed. Classical CS-based sparse reconstruction for MRI minimizes an objective function that is a linear combination of three terms: the least-squares data fidelity term, the sparse regularization term in the wavelet domain, and the Total Variation (TV) regularization term. Firstly, proximal smoothing was applied to the wavelet-domain regularization term in the objective function. Then, the linear combination of the data fidelity term and the smoothed wavelet regularization term was treated as a new, continuously differentiable convex function. Finally, PSIA was used to solve the resulting optimization problem. The proposed algorithm can not only cope with the two regularization constraints simultaneously, but also avoid the robustness problems caused by fixed weights. The experimental results on simulated phantom images and real MR images show that, compared with four classical sparse reconstruction algorithms, namely the Conjugate Gradient (CG) descent algorithm, the TV l1 Compressed MRI (TVCMRI) algorithm, the Reconstruction From Partial k-space algorithm (RecPF) and the Fast Composite Splitting Algorithm (FCSA), the proposed algorithm achieves better reconstruction results in terms of image signal-to-noise ratio, relative error and structural similarity index, with complexity comparable to the fastest existing reconstruction algorithm FCSA.
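For reference, the proximal smoothing in PSIA rests on the standard Moreau envelope construction below; this is the textbook definition, with notation chosen here rather than taken from the paper.

```latex
% Moreau envelope of a convex function g with smoothing parameter \mu > 0
M_{\mu g}(x) = \min_{y}\; g(y) + \frac{1}{2\mu}\,\lVert y - x\rVert_2^2
% It is convex and continuously differentiable, with gradient
\nabla M_{\mu g}(x) = \frac{1}{\mu}\bigl(x - \operatorname{prox}_{\mu g}(x)\bigr),
\qquad
\operatorname{prox}_{\mu g}(x) = \arg\min_{y}\; g(y) + \frac{1}{2\mu}\,\lVert y - x\rVert_2^2
```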
Retinal vessel segmentation algorithm based on hybrid phase feature
LI Yuanyuan, CAI Yiheng, GAO Xurong
2018, 38(7): 2083-2088. DOI: 10.11772/j.issn.1001-9081.2017123045
Focusing on the issue that the phase consistency feature is deficient in detecting vessel centers, a new retinal vessel segmentation algorithm based on a hybrid phase feature was proposed. Firstly, an original retinal image was preprocessed. Secondly, every pixel was represented by a 4-D vector composed of Hessian matrix, Gabor transformation, Bar-selective Combination Of Shifted FIlter REsponses (B-COSFIRE) and phase features. Finally, a Support Vector Machine (SVM) was used for pixel classification to realize the segmentation of retinal vessels. Among the four features, the phase feature was a new hybrid phase feature formed from the phase consistency feature and the Hessian matrix feature through wavelet fusion. This new feature not only preserves the good vascular edge information of the phase consistency feature, but also compensates for its deficient detection of vessel centers. The average Accuracy (Acc) of the proposed algorithm on the Digital Retinal Images for Vessel Extraction (DRIVE) database is 0.9574, and the average Area Under receiver operating characteristic Curve (AUC) is 0.9702. In the experiment of single-feature vessel extraction through pixel classification, using the hybrid phase feature instead of the phase consistency feature improves the average accuracy from 0.9191 to 0.9478 and the AUC from 0.9359 to 0.9702. The experimental results show that the hybrid phase feature is more suitable for retinal vessel segmentation based on pixel classification than the phase consistency feature.
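Wavelet fusion of two feature maps can be sketched as below: average the approximation coefficients and keep the stronger detail coefficients. The random inputs stand in for a phase consistency map and a Hessian vesselness map, and the wavelet choice is an assumption.

```python
# Sketch: wavelet fusion of two feature maps into a hybrid feature.
import numpy as np
import pywt

rng = np.random.default_rng(0)
phase_map, hessian_map = rng.random((64, 64)), rng.random((64, 64))

(cA1, d1), (cA2, d2) = pywt.dwt2(phase_map, "db2"), pywt.dwt2(hessian_map, "db2")
cA = (cA1 + cA2) / 2                                    # fuse approximations
details = tuple(np.where(np.abs(a) > np.abs(b), a, b)   # keep stronger details
                for a, b in zip(d1, d2))
hybrid = pywt.idwt2((cA, details), "db2")
print(hybrid.shape)
```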
Emergency resource assignment for requirements of multiple disaster sites in view of fairness
DU Xueling, MENG Xuelei, YANG Bei, TANG Lin
2018, 38(7): 2089-2094. DOI: 10.11772/j.issn.1001-9081.2018010118
Focusing on the issue of emergency resource assignment from multiple supply points to multiple demand points in railway emergencies, an emergency resource assignment model for multiple rescue targets was established based on the concept of a "soft time window". Maximum fairness and minimum total assignment cost were taken as the optimization objectives, and a parallel selection genetic algorithm was used to solve the model. The population was equally divided into subpopulations, whose number equaled the number of objective functions. An objective function was assigned to each subpopulation and selection was performed independently, so that individuals with high fitness were selected from each subpopulation to form a new population; crossover and mutation were then applied to generate the next generation. The computing cases show that, compared with Particle Swarm Optimization (PSO) and a two-phase heuristic algorithm, the proposed algorithm reduces the variance of the resource satisfaction degree over all demand points by 93.88% and 89.88% respectively, and cuts down the cost by 5% and 0.15% respectively. The proposed algorithm effectively reduces the variance of the resource satisfaction degree of all demand points, that is, it improves fairness across demand points while reducing cost, and can obtain higher quality solutions for multi-objective programming problems.
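The parallel selection step can be sketched for two objectives as below: the population is split into two subpopulations, each selected against one objective, then recombined. The objectives, encoding and selection scheme are illustrative placeholders, not the paper's model.

```python
# Sketch: per-objective selection on subpopulations, then recombination.
import numpy as np

rng = np.random.default_rng(0)

def fairness(x):    # placeholder: negative variance of satisfaction degrees
    return -np.var(x, axis=1)

def neg_cost(x):    # placeholder: negative total assignment cost
    return -np.sum(x, axis=1)

pop = rng.random((40, 6))                  # 40 individuals, 6 genes each
half = len(pop) // 2
subpops, objs = (pop[:half], pop[half:]), (fairness, neg_cost)

selected = []
for sub, obj in zip(subpops, objs):
    f = obj(sub)
    p = f - f.min() + 1e-12                # roulette-wheel selection weights
    p /= p.sum()
    idx = rng.choice(len(sub), size=len(sub), p=p)
    selected.append(sub[idx])

new_pop = np.vstack(selected)              # recombine; crossover/mutation follow
print(new_pop.shape)
```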
Branch and bound algorithm for solving hybrid flow shop robotic cells scheduling problem with blocking
ZHAO Xiaofei, GUO Xiuping
2018, 38(7): 2095-2099. DOI: 10.11772/j.issn.1001-9081.2018010135
Focusing on the hybrid flow shop robotic cells scheduling problem with blocking, a branch and bound method was proposed to optimize the robotic operation sequence and the part input sequence simultaneously. Firstly, robotic activities were defined to transform the double sequence into a single sequence. Secondly, an order insertion rule was proposed to obtain feasible solutions. Finally, the branching process was designed according to the order insertion rule. Randomly generated instances were solved by CPLEX and by the branch and bound method. When there were three tanks in the robotic cells, the objective values obtained by the branch and bound method were the same as those of CPLEX, while the average computation time was reduced by 38.58%, confirming the method's effectiveness. When the number of tanks exceeded three and the computation time was limited to 600 seconds, the branch and bound method found better solutions than CPLEX for 85.19% of the instances. This indicates that the branch and bound method is particularly valuable for large-scale problems.
Flight delay prediction model based on dual-channel convolutional neural network
WU Renbiao, LI Jiayi, QU Jingyi
2018, 38(7): 2100-2106. DOI: 10.11772/j.issn.1001-9081.2018010037
Flight delay prediction nowadays involves a large amount of data, and feature extraction is difficult; traditional models cannot solve these problems effectively. Therefore, a flight delay prediction model based on a Dual-Channel Convolutional Neural Network (DCNN) was proposed. Firstly, flight data and meteorological data were fused in the model. Then, the DCNN was used to extract features automatically, and Batch Normalization (BN) and a padding strategy were used to improve the classification performance for arrival delay levels. Secondly, to guarantee lossless transmission of the feature matrices and keep information flowing smoothly through the deep network, a straight channel was used in the Convolutional Neural Network (CNN). Meanwhile, a convolution attenuation factor was introduced to control the sparseness of the feature matrices and the proportion of feature matrices from different depths, guaranteeing the stability of the model. The experimental results indicate that the proposed model has a stronger data processing capability than traditional models, and that fusing meteorological data improves its accuracy by 1 percentage point. When the network is deepened, the model keeps the gradients stable and can train deeper networks, raising the accuracy to 92.1%. The proposed DCNN model achieves sufficient feature extraction and better prediction performance than the contrast models, and can better serve civil aviation decision-making.
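A dual-channel architecture of this general shape can be sketched in Keras as below. The input shapes, filter counts and number of delay classes are illustrative assumptions, not the published network.

```python
# Sketch: two-input CNN with BatchNormalization and 'same' padding,
# fusing flight and meteorological channels.
from tensorflow import keras
from tensorflow.keras import layers

def branch(inp):
    x = layers.Conv2D(32, 3, padding="same")(inp)   # padding keeps feature size
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.MaxPooling2D()(x)

flight_in = keras.Input(shape=(32, 32, 1), name="flight")
wx_in = keras.Input(shape=(32, 32, 1), name="weather")

merged = layers.Concatenate()([branch(flight_in), branch(wx_in)])
x = layers.Conv2D(64, 3, padding="same", activation="relu")(merged)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(5, activation="softmax")(x)      # 5 delay levels (assumed)

model = keras.Model([flight_in, wx_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```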
Charged system search based route planning method for unmanned underwater vehicle
ZHAO Yunqin, CAI Chao, WANG Houjun, LI Dongwu
2018, 38(7): 2107-2112. DOI: 10.11772/j.issn.1001-9081.2017112774
To solve the problems of long computation time and large space occupation in route planning for Unmanned Underwater Vehicles (UUVs) under complex environments and multi-constraint conditions, a new route planning method based on Charged System Search (CSS) was proposed. Firstly, the UUV route planning problem was modeled and the cost function was designed. Then, a route planning method based on CSS was presented, in which each charged particle was driven by the electric field forces of the other charged particles in the search space to achieve iterative optimization. In addition, a nonlinear adjustment method for the speed and acceleration parameters was proposed, which could effectively balance the global and local search processes and avoid premature convergence. Finally, the proposed method was compared with route planning methods based on the A* algorithm, ant colony optimization and particle swarm optimization in terms of planned route quality and computation time. The experimental results show that the proposed method has faster convergence speed and lower time complexity while ensuring the quality of the planned route.
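One CSS-style iteration can be sketched as below: each particle is pulled by Coulomb-like forces toward better solutions. The constants, damping and the toy cost function are illustrative assumptions, not the paper's formulation.

```python
# Sketch: charged-particle attraction toward better solutions in a 2-D space.
import numpy as np

rng = np.random.default_rng(0)

def cost(x):                        # toy cost: distance to a target waypoint
    return np.linalg.norm(x - np.array([5.0, 5.0]), axis=-1)

pos = rng.uniform(0, 10, (20, 2))   # 20 charged particles
vel = np.zeros_like(pos)
for _ in range(100):
    f = cost(pos)
    q = (f.max() - f) / (f.max() - f.min() + 1e-12)   # charge: better => larger
    force = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]
        r = np.linalg.norm(d, axis=1) + 1e-9
        pull = (q / r**2)[:, None] * d / r[:, None]   # inverse-square attraction
        force[i] = pull.sum(axis=0)
    vel = 0.5 * vel + 0.5 * force                     # damped velocity update
    pos = np.clip(pos + vel, 0, 10)
print(pos[np.argmin(cost(pos))])    # best particle found
```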
Vibration detection system based on process field bus technology
SU Leihao, ZHU Minghua
2018, 38(7): 2113-2118. DOI: 10.11772/j.issn.1001-9081.2017122909
Current vibration detection systems suffer from large delay, poor controllability of the sensor network and low detection precision. To solve these problems, a new vibration detection system based on Process field bus (Profibus) technology was proposed. Firstly, complex calculations such as Kalman filtering and the Fast Fourier Transform (FFT) were implemented at each detection node. Compared with the traditional scheme of transmitting a large amount of raw vibration data, the network load was reduced by about 95%, the network transmission time and the workstation computing time were shortened, and the real-time performance and computing power of the system were improved. Secondly, the Profibus protocol was used to realize the management and data transmission of the vibration detection device network, which ensured the stability and controllability of the sensor network. In addition, a high-precision vibration sensor was used on each detection node and the vibration data was filtered on the node, giving a detection precision of up to 0.0039 mg. Moreover, the independent design and development of the Profibus slave station made function customization and secondary development of the detection devices convenient. The RT-Thread embedded system kernel was used for resource allocation and task scheduling on the detection nodes, improving real-time performance and reliability. The experimental results show that the proposed system can quickly process raw vibration data in the field and transmit the processed data to the workstation computer with high real-time performance. Meanwhile, the Profibus network composed of vibration detection devices can display node status information in real time and promptly alert users to network faults, showing good controllability.
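The node-side preprocessing idea can be sketched as below: compute an FFT on the node and send only the dominant spectral peaks instead of the raw vibration stream, which is what makes the large reduction in network load plausible. The sample rate and synthetic signal are illustrative assumptions.

```python
# Sketch: send a few FFT peaks instead of raw samples.
import numpy as np

fs = 1000.0                                   # sample rate (Hz), illustrative
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
top = np.argsort(spectrum)[-3:]               # 3 dominant bins ~ tiny payload
payload = list(zip(freqs[top], spectrum[top]))
print(payload)                                # vs. 1000 raw samples per second
```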
Cyanobacterial bloom forecast method based on genetic algorithm-first order lag filter and long short-term memory network
YU Jiabin, SHANG Fangfang, WANG Xiaoyi, XU Jiping, WANG Li, ZHANG Huiyan, ZHENG Lei
2018, 38(7): 2119-2123. DOI: 10.11772/j.issn.1001-9081.2017122959
The process of algal bloom evolution in rivers and lakes is sudden and uncertain, which leads to low prediction accuracy for algal blooms. To solve this problem, chlorophyll-a concentration was used as the surface index of the cyanobacterial bloom evolution process, and a cyanobacterial bloom forecast model based on Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) was proposed. Firstly, an improved Genetic algorithm-First order lag filter (GF) optimization algorithm was used to smooth the data. Secondly, a GF-LSTM network model was built to accurately predict cyanobacterial blooms. Finally, data sampled from Meiliang Lake in the Taihu area were used to test the forecast model, and the model was compared with the traditional RNN and LSTM networks. The experimental results show that the mean relative error of the proposed GF-LSTM network model is 16%-18%, lower than those of the RNN model (28%-32%) and the LSTM network model (19%-22%). The proposed model smooths and filters the data well, has higher prediction accuracy and better adaptability to the samples, and avoids the well-known vanishing and exploding gradient problems of traditional RNN models in long-term training.
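An LSTM forecaster of this general shape can be sketched as below: sliding windows of past values of a smoothed series predict the next value. The window length, layer sizes and synthetic series are illustrative assumptions, not the GF-LSTM configuration.

```python
# Sketch: windowed LSTM forecasting on a smoothed 1-D series.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(0).random(500)
win = 10
X = np.array([series[i:i + win] for i in range(len(series) - win)])[..., None]
y = series[win:]

model = keras.Sequential([
    layers.Input(shape=(win, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[-1:], verbose=0))   # one-step-ahead forecast
```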
Grading of diabetic retinopathy based on cost-sensitive semi-supervised ensemble learning
REN Fulong, CAO Peng, WAN Chao, ZHAO Dazhe
2018, 38(7): 2124-2129. DOI: 10.11772/j.issn.1001-9081.2018010123
Since the lack of lesion labels and the unbalanced data distribution in datasets prevent supervised classification models from effectively classifying lesions in traditional Diabetic Retinopathy (DR) grading systems, a Cost-Sensitive Semi-supervised Bagging (CS-SemiBagging) algorithm for DR classification was proposed. Firstly, retinal vessels were removed from a fundus image, and suspicious red lesions (MicroAneurysms (MAs) and HEMorrhages (HEMs)) were detected on the vessel-free image. Secondly, a 22-dimensional feature based on color, shape and texture was extracted to describe each candidate lesion region. Thirdly, a CS-SemiBagging model was constructed for the classification of MAs and HEMs. Finally, the severity of DR was graded into four levels based on the numbers of different lesions. Evaluated on the publicly available MESSIDOR database, the proposed method achieved an average accuracy of 90.2%, which was 4.9 percentage points higher than that of the classical semi-supervised learning method based on co-training. The CS-SemiBagging algorithm can effectively classify DR without label information for the suspicious lesions, avoiding both the time-consuming effort of lesion labeling by specialists and the bad influence of unbalanced samples on classification.
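The cost-sensitive bagging idea can be sketched as below: several decision trees are trained on bootstrap samples with class weights that penalize errors on the minority (lesion) class, then combined by voting. The data, cost ratio and ensemble size are illustrative assumptions, and the semi-supervised part is omitted for brevity.

```python
# Sketch: cost-sensitive bagging with class-weighted trees and majority voting.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 22))                  # 22-D candidate-region features
y = (rng.random(500) < 0.15).astype(int)   # unbalanced: ~15% lesions

trees = []
for seed in range(11):
    idx = rng.integers(0, len(X), len(X))  # bootstrap sample
    t = DecisionTreeClassifier(class_weight={0: 1, 1: 5},  # misclassifying a
                               random_state=seed)          # lesion costs more
    trees.append(t.fit(X[idx], y[idx]))

votes = np.mean([t.predict(X) for t in trees], axis=0)
pred = (votes >= 0.5).astype(int)
print(f"predicted lesion rate: {pred.mean():.2f}")
```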
Fault detection for multistage process based on improved local neighborhood standardization and kNN
FENG Liwei, ZHANG Cheng, LI Yuan, XIE Yanhong
2018, 38(7): 2130-2135. DOI: 10.11772/j.issn.1001-9081.2017112701
Concerning the problem that multistage process data has multiple centers and different process structures, a fault detection method based on Improved Local Neighborhood Standardization and k Nearest Neighbors (ILNS-kNN) was proposed. Firstly, the K local neighbor set of the k neighbors of a sample was found. Secondly, the sample was standardized with the mean and standard deviation of the K local neighbor set to obtain the standard sample. Finally, fault detection was carried out by calculating the cumulative neighbor distance of samples in the standard sample set. By Improved Local Neighborhood Standardization (ILNS), the center of each stage's data was shifted to the origin and the dispersion degree of each stage's data was adjusted to be approximately the same, so that the multistage process data was fused into single-stage data obeying a multivariate Gaussian distribution. A fault detection experiment on the penicillin fermentation process was carried out. The experimental results show that the ILNS-kNN method achieves a detection rate of more than 97% for six types of faults. The ILNS-kNN method can detect faults not only in general multistage processes, but also in multistage processes with significantly different variances, better ensuring the safety of multistage processes and high product quality.
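The two stages can be sketched as below: each sample is standardized by the mean and standard deviation of its local neighborhood, then monitored via its cumulative kNN distance against a threshold. The data, neighbor counts and quantile threshold are illustrative assumptions.

```python
# Sketch: local neighborhood standardization + cumulative kNN distance statistic.
import numpy as np

rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0, 1, (100, 3)),    # stage 1
                   rng.normal(8, 3, (100, 3))])   # stage 2: other center/spread

def lns(X, x, k=10):
    nbrs = np.argsort(np.linalg.norm(X - x, axis=1))[:k]   # local neighbor set
    mu = X[nbrs].mean(axis=0)
    sd = X[nbrs].std(axis=0) + 1e-12
    return (x - mu) / sd                # shifts each stage toward the origin

Z = np.array([lns(train, x) for x in train])

def stat(z, k=5, skip=0):               # cumulative kNN distance in standard space
    d = np.sort(np.linalg.norm(Z - z, axis=1))
    return d[skip:skip + k].sum()

threshold = np.quantile([stat(z, skip=1) for z in Z], 0.99)  # skip self-distance
fault = rng.normal(20, 1, 3)                                 # far from both stages
print(stat(lns(train, fault)) > threshold)                   # True: flagged
```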
Bearing fault diagnosis method based on Gibbs sampling
WANG Yan, LUO Qian, DENG Hui
2018, 38(7): 2136-2140. DOI: 10.11772/j.issn.1001-9081.2018010035
To overcome the one-sided judgments of existing bearing fault diagnosis methods, a bearing fault diagnosis method based on Gibbs sampling was proposed. Firstly, the bearing vibration signal was decomposed by Local Characteristic-scale Decomposition (LCD) to obtain Intrinsic Scale Components (ISCs). Secondly, time-domain features were extracted from the bearing vibration signal and the ISCs, and ranked by feature sensitivity; the top-ranked features were selected to form feature sets. Thirdly, the feature sets were used to train a multi-dimensional Gaussian distribution model based on Gibbs sampling. Finally, posterior analysis was used to obtain the fault probabilities and realize bearing fault diagnosis. The experimental results show that the diagnostic accuracy of the proposed method reaches 100%; compared with a bearing diagnosis method based on SVM (Support Vector Machine), the diagnostic accuracy is improved by 11.1 percentage points when the number of features is 43. The proposed method can effectively diagnose rolling bearing faults and also performs well on high-dimensional and complex bearing fault data.
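The core sampling routine behind such a model is ordinary Gibbs sampling from a multivariate Gaussian via its conditionals; a minimal two-dimensional sketch follows, with the correlation value an illustrative assumption.

```python
# Sketch: Gibbs sampling from a correlated 2-D Gaussian via 1-D conditionals.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                        # correlation of the target bivariate Gaussian
x, y = 0.0, 0.0
samples = []
for _ in range(5000):
    # each conditional of a standard bivariate normal is a 1-D normal
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples.append((x, y))
samples = np.array(samples[500:])            # drop burn-in
print(np.corrcoef(samples.T)[0, 1])          # recovers ~0.8
```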