Table of Contents
10 April 2019, Volume 39 Issue 4
Ensemble learning training method based on AUC and Q statistics
ZHANG Ning, CHEN Qin
2019, 39(4): 935-939. DOI: 10.11772/j.issn.1001-9081.2018102162
Abstract | PDF (884KB) | References | Related Articles | Metrics
Focusing on the information asymmetry problem in the process of lending, and in order to integrate different data sources and loan default prediction models more effectively, an ensemble learning training method named TABAQ (Training Algorithm Based on AUC and Q statistics) was proposed and implemented, which measures the accuracy and diversity of learners by Area Under Curve (AUC) value and Q statistics. Empirical analyses based on Peer-to-Peer (P2P) loan data show that the performance of ensemble learning is closely related to the accuracy and diversity of the base learners and has low correlation with their number, and that statistical ensemble performs best among all the ensemble learning methods. The experiments also show that integrating the information sources of the borrower side and the investor side effectively reduces the information asymmetry in loan default prediction. TABAQ combines the advantages of information source fusion and ensemble learning: with the prediction accuracy steadily improved, the number of forecast errors was further reduced by 4.85%.
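A minimal sketch of the two quantities the TABAQ idea rests on, AUC for base-learner accuracy and the pairwise Q statistic for diversity, is given below. The selection rule and the thresholds auc_min and q_max are illustrative assumptions, not the authors' exact algorithm.

```python
# Hedged sketch: scoring a pool of base learners by AUC (accuracy) and
# pairwise Q statistics (diversity). Thresholds are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

def q_statistic(pred_a, pred_b, y_true):
    """Yule's Q statistic between two classifiers' hard predictions."""
    a_ok = (pred_a == y_true)
    b_ok = (pred_b == y_true)
    n11 = np.sum(a_ok & b_ok)      # both correct
    n00 = np.sum(~a_ok & ~b_ok)    # both wrong
    n10 = np.sum(a_ok & ~b_ok)     # only A correct
    n01 = np.sum(~a_ok & b_ok)     # only B correct
    denom = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0

def select_learners(probas, y_true, auc_min=0.6, q_max=0.8):
    """Keep learners with AUC above auc_min whose pairwise Q stays below q_max."""
    preds = [(p >= 0.5).astype(int) for p in probas]
    order = np.argsort([-roc_auc_score(y_true, p) for p in probas])
    chosen = []
    for i in order:
        if roc_auc_score(y_true, probas[i]) < auc_min:
            continue
        if all(abs(q_statistic(preds[i], preds[j], y_true)) < q_max for j in chosen):
            chosen.append(i)
    return chosen
```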
Generalization error bound guided discriminative dictionary learning
XU Tao, WANG Xiaoming
2019, 39(4): 940-948. DOI: 10.11772/j.issn.1001-9081.2018081785
Abstract | PDF (1327KB) | References | Related Articles | Metrics
In the process of improving the discriminative ability of a dictionary, max-margin dictionary learning methods ignore that the generalization of the classifier constructed from the newly coded data is related not only to the principle of maximum margin, but also to the radius of the Minimum Enclosing Ball (MEB) containing all the data. Aiming at this fact, a Generalization Error Bound Guided discriminative Dictionary Learning (GEBGDL) algorithm was proposed. Firstly, the discriminant condition of the Support Vector Guided Dictionary Learning (SVGDL) algorithm was improved based on the upper bound theory of the generalization error of Support Vector Machine (SVM). Then, the SVM large-margin classification principle and the MEB radius were used as constraint terms to maximize the margin between coding vectors of different classes and to minimize the radius of the MEB containing all coding vectors. Finally, to better account for the generalization of the classifier, the dictionary, coding coefficients and classifiers were updated respectively by an alternate optimization strategy, obtaining classifiers with larger margins between the coding vectors and a better-learned dictionary with improved discriminative ability. Experiments were carried out on the handwritten digit dataset USPS, the face datasets Extended Yale B, AR and ORL, and the object datasets Caltech 101, COIL20 and COIL100 to discuss the influence of hyperparameters and data dimension on recognition rate. The experimental results show that in most cases, the recognition rate of GEBGDL is higher than that of Label Consistent K-Singular Value Decomposition (LC-KSVD), Locality Constrained and Label Embedding Dictionary Learning (LCLE-DL), Fisher Discriminative Dictionary Learning (FDDL) and SVGDL, and is also higher than that of the Sparse Representation based Classifier (SRC), the Collaborative Representation based Classifier (CRC) and SVM.
Improved artificial bee colony algorithm with enhanced exploitation ability
ZHANG Zhiqiang, LU Xiaofeng, SUN Qindong, WANG Kan
2019, 39(4): 949-955. DOI: 10.11772/j.issn.1001-9081.2018091984
Abstract | PDF (930KB) | References | Related Articles | Metrics
The basic Artificial Bee Colony (ABC) algorithm has shortcomings such as slow convergence, low precision and a tendency to get trapped in local optima. To overcome these issues, an improved ABC algorithm with enhanced exploitation ability was proposed. On one hand, the best solution obtained so far was directly introduced into the search equations of the employed bees in two different ways to guide their neighborhood search, which enhanced the exploitation (local search) ability of the algorithm. On the other hand, in the search equations of the onlooker bees, the search was performed by combining the current solution with a random neighbor, which improved the global optimization ability of the algorithm. Simulation results on common benchmark functions show that, in convergence rate, precision and global optimization (exploration) ability, the proposed ABC algorithm is generally better than similar improved ABC algorithms such as the global best ABC (ABC/best) algorithm, as well as ABC algorithms with hybrid search strategies such as the ABC algorithm with Variable Search Strategy (ABCVSS) and the ABC algorithm based on Multi-Search Strategy Cooperative Evolution (ABCMSSCE).
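The following sketch illustrates one common form of a best-solution-guided employed-bee update of the kind described above; the paper's two specific update equations are not reproduced here, and the coefficients phi and psi are illustrative.

```python
# Hedged sketch: a gbest-guided employed-bee update, shown only to
# illustrate how the current best solution can steer neighborhood search.
import numpy as np

def employed_bee_update(x, gbest, i, rng, lower, upper):
    """Generate a candidate for food source i guided by the best solution so far."""
    n, dim = x.shape
    k = rng.choice([idx for idx in range(n) if idx != i])  # random neighbour source
    j = rng.integers(dim)                                   # one dimension to perturb
    phi = rng.uniform(-1.0, 1.0)
    psi = rng.uniform(0.0, 1.5)
    v = x[i].copy()
    v[j] = x[i, j] + phi * (x[i, j] - x[k, j]) + psi * (gbest[j] - x[i, j])
    return np.clip(v, lower, upper)                         # keep inside the search bounds
```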
Random forest based on double features and relaxation boundary for anomaly detection
HU Miao, WANG Kaijun
2019, 39(4): 956-962. DOI: 10.11772/j.issn.1001-9081.2018091966
Abstract | PDF (1029KB) | References | Related Articles | Metrics
Aiming at the low performance of existing anomaly detection algorithms based on random forest, a random forest algorithm combining double features and relaxation boundary was proposed for anomaly detection. Firstly, when constructing the binary decision trees of the random forest with normal-class data only, the ranges of two features (each feature with its corresponding eigenvalue range) were recorded in each node of the binary decision tree, and these double-feature eigenvalue ranges were used as the basis for abnormal point judgment. Secondly, during anomaly detection, if a sample did not satisfy the double-feature eigenvalue range in a decision tree node, the sample was marked as a candidate exception class; otherwise, the sample entered the lower nodes of the decision tree and continued the comparison with the corresponding double-feature eigenvalue ranges, and was marked as a candidate normal class if there were no lower nodes. Finally, the discriminative mechanism of the random forest algorithm was used to determine the class of the samples. Experimental results on five UCI datasets show that the proposed method performs better than the existing random forest algorithms for anomaly detection, and its comprehensive performance is equivalent to or better than isolation Forest (iForest) and One-Class SVM (OCSVM), remaining stable at a high level.
Evolution model of normal aging human brain functional network
DING Chao, ZHAO Hai, SI Shuaizong, ZHU Jian
2019, 39(4): 963-971. DOI: 10.11772/j.issn.1001-9081.2018081850
Abstract | PDF (1354KB) | References | Related Articles | Metrics
In order to explore the topological changes of the Normal Aging human Brain Functional Network (NABFN), a network evolution Model based on Naive Bayes (NBM) was proposed. Firstly, the probability of an edge existing between nodes was defined based on a Naive Bayes (NB) link prediction algorithm and anatomical distance. Secondly, starting from the brain functional networks of young people, a specific network evolution algorithm was used to gradually obtain simulation networks of the corresponding middle-aged and old-aged groups by constantly adding edges. Finally, a network Similarity Index (SI) was proposed to evaluate the degree of similarity between the simulation network and the real network. In comparison experiments with the network evolution Model based on Common Neighbor (CNM), the SI values between the simulation networks constructed by NBM and the real networks (4.479 4, 3.402 1) are higher than those of CNM (4.100 4, 3.013 2). Moreover, the SI values of both simulation networks are significantly higher than those of simulation networks derived from a random network evolution algorithm (1.892 0, 1.591 2). The experimental results confirm that NBM can predict the topological changing process of NABFN more accurately.
Retrieval matching question and answer method based on improved CLSM with attention mechanism
YU Chongchong, CAO Shuai, PAN Bo, ZHANG Qingchuan, XU Shixuan
2019, 39(4): 972-976. DOI: 10.11772/j.issn.1001-9081.2018081691
Abstract | PDF (752KB) | References | Related Articles | Metrics
Focusing on the problem that the Retrieval Matching Question and Answer (RMQA) model has weak adaptability to Chinese corpora and neglects the semantic information of the sentence, a Chinese text semantic matching model based on the Convolutional neural network Latent Semantic Model (CLSM) was proposed. Firstly, the word-N-gram layer and letter-N-gram layer of CLSM were removed to enhance the adaptability of the model to Chinese corpora. Secondly, focusing on the vector information of the input Chinese words, an entity attention layer model was established based on the attention mechanism to strengthen the weight information of the core words in a sentence. Finally, a Convolutional Neural Network (CNN) was used to effectively capture the context structure information of the input sentence, and the pooling layer was used to reduce the dimension of the semantic information. In experiments based on a medical question and answer dataset, compared with traditional semantic models, traditional translation models and deep neural network models, the proposed model achieves an improvement of 4-10 percentage points in Normalized Discounted Cumulative Gain (NDCG).
Person re-identification based on Siamese network and bidirectional max margin ranking loss
QI Ziliang, QU Hanbing, ZHAO Chuanhu, DONG Liang, LI Bozhao, WANG Changsheng
2019, 39(4): 977-983. DOI: 10.11772/j.issn.1001-9081.2018091889
Abstract | PDF (1221KB) | References | Related Articles | Metrics
Focusing on the low accuracy of person re-identification caused by the fact that, in reality, the similarity between images of different pedestrians can be higher than that between images of the same pedestrian, a person re-identification method based on a Siamese network combining identification loss with a bidirectional max margin ranking loss was proposed. Firstly, a neural network model pre-trained on a large dataset was structurally modified, especially its final fully-connected layer, so that it could output correct results on the person re-identification dataset. Secondly, the training of the network on the training set was supervised by the combination of identification loss and ranking loss, which required the difference between the similarities of the positive and negative sample pairs to be greater than a predetermined margin, making the distance between a negative sample pair larger than that between a positive sample pair. Finally, the trained neural network model was used on the test set to extract features and compare the cosine similarity between the features. Experimental results on the open datasets Market-1501, CUHK03 and DukeMTMC-reID show that the rank-1 recognition rates of the proposed method reach 89.4%, 86.7% and 77.2% respectively, which are higher than those of other classical methods. Moreover, the proposed method achieves a rank-1 rate improvement of up to 10.04% over the baseline network structure.
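Below is a hedged PyTorch-style sketch of how an identification loss might be combined with a bidirectional max-margin ranking loss of the kind described; the margin value, the use of cosine similarity and the equal weighting of the two losses are assumptions for illustration, not the authors' exact settings.

```python
# Hedged sketch: identification (classification) loss plus a bidirectional
# max-margin ranking loss over embedding triplets.
import torch
import torch.nn.functional as F

def combined_loss(anchor, positive, negative, logits, labels, margin=0.3):
    """anchor/positive/negative: embedding batches; logits/labels: for the ID loss."""
    id_loss = F.cross_entropy(logits, labels)          # identification loss
    sim_ap = F.cosine_similarity(anchor, positive)     # same-identity similarity
    sim_an = F.cosine_similarity(anchor, negative)     # different-identity similarity
    sim_pn = F.cosine_similarity(positive, negative)
    # push the positive pair above both negative pairs by at least `margin`
    rank_loss = (F.relu(margin - (sim_ap - sim_an)) +
                 F.relu(margin - (sim_ap - sim_pn))).mean()
    return id_loss + rank_loss
```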
Random walking recommendation algorithm based on combinational category space
FAN Wei, XIE Cong, XIAO Chunjing, CAO Shuyan
2019, 39(4): 984-988. DOI: 10.11772/j.issn.1001-9081.2018081822
Abstract | PDF (827KB) | References | Related Articles | Metrics
The traditional category-driven approaches only consider the association between categories or organize them into a flat or hierarchical structure, but the relationships between items and categories are complex, so other information is ignored. Aiming at this problem, a random walk recommendation algorithm based on a combinational category space was proposed to better organize the category information of items and alleviate data sparsity. Firstly, a combinational category space of items represented by Hasse diagrams was constructed to map the one-to-many relationship between items and categories into one-to-one simple relationships and to represent the user's jumps between items at higher and lower levels, the same level and across levels. Then, the semantic relationships and two types of semantic distances - the links and the preferences - were defined to better describe the changes of the user's dynamic preferences qualitatively and quantitatively. Afterwards, the user's personalized category preference model was constructed based on random walking, combining the semantic relationship, semantic distance, user behavior jumping, jumping times, time sequence and scores of the user's browsing graph in the combinational category space. Finally, items were recommended to users by collaborative filtering based on the user's personalized category preference. Experimental results on the MovieLens dataset show that compared with the User-based Collaborative Filtering (UCF) model and the category-based recommendation models (UBGC and GENC), the recommended F1-score was improved by 6 to 9 percentage points and the Mean Absolute Error (MAE) was reduced by 20% to 30%; compared with the Category Hierarchy Latent Factor (CHLF) model, the recommended F1-score was improved by 10%. Therefore, the proposed algorithm has an advantage in ranking recommendation and is superior to other category-based recommendation algorithms.
Adaptive Monte-Carlo localization algorithm integrated with two-dimensional code information
HU Zhangfang, ZENG Linquan, LUO Yuan, LUO Xin, ZHAO Liming
2019, 39(4): 989-993. DOI: 10.11772/j.issn.1001-9081.2018091910
Abstract | PDF (790KB) | References | Related Articles | Metrics
The Monte Carlo Localization (MCL) algorithm has problems such as large computation and poor positioning accuracy. Because of the diversity of information carried by two-dimensional codes and the usability and convenience of two-dimensional code recognition, an adaptive MCL algorithm integrated with two-dimensional code information was proposed. Firstly, the cumulative error of the odometer model was corrected by the absolute position information provided by the two-dimensional codes, and then sampling was performed. Secondly, the measurement model provided by the laser sensor was used to determine the importance weights of the particles. Finally, as the fixed sample set used in the resampling step caused large computation, Kullback-Leibler Distance (KLD) was utilized in resampling to reduce the computation by adaptively adjusting the number of particles required for the next iteration according to the distribution of particles in state space. Experimental results on a mobile robot show that the proposed algorithm improves the localization accuracy by 15.09% and reduces the localization time by 15.28% compared to the traditional Monte Carlo algorithm.
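For reference, the standard KLD-sampling bound that adaptive MCL uses to pick the particle count is sketched below; the error bound epsilon and the quantile are illustrative values, and the two-dimensional-code correction step is not shown.

```python
# Hedged sketch: Fox's KLD-sampling bound for adapting the number of particles.
import math

def kld_sample_bound(k, epsilon=0.05, z_quantile=2.326):   # z for 1 - delta = 0.99
    """Particles needed so the KL divergence to the true posterior stays below
    epsilon, given k non-empty histogram bins in state space."""
    if k <= 1:
        return 1
    term = 1.0 - 2.0 / (9.0 * (k - 1)) + math.sqrt(2.0 / (9.0 * (k - 1))) * z_quantile
    return int(math.ceil((k - 1) / (2.0 * epsilon) * term ** 3))
```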
GNSS/INS global high-precision positioning method based on Elman neural network
DENG Tianmin, FANG Fang, YUE Yunxia, YANG Qizhi
2019, 39(4): 994-1000. DOI: 10.11772/j.issn.1001-9081.2018091920
Abstract | PDF (1000KB) | References | Related Articles | Metrics
Aiming at the positioning failure that occurs when the positioning and navigation system of an intelligent connected vehicle fails to receive Global Navigation Satellite System (GNSS) signals, a GNSS/Inertial Navigation System (INS) global high-precision positioning method based on Elman neural network was proposed. Firstly, a GNSS/INS high-precision positioning training model and a GNSS failure prediction model based on Elman neural network were established. Then, by using GNSS, INS, Real-Time Kinematic (RTK) and other positioning techniques, a data acquisition experiment system for GNSS/INS high-precision positioning was designed. Finally, effective experimental data were collected to compare the performance of the training models based on the Back Propagation (BP) neural network, the Cascade-Forward BP (CFBP) neural network and the Elman neural network, and the prediction model for GNSS signal outage based on the Elman network was verified. The experimental results show that the training error of the GNSS/INS prediction model based on the Elman network is better than those based on BP and CFBP neural networks. When GNSS fails for 1 min, 2 min and 5 min, the prediction Mean Absolute Error (MAE), Variance (VAR) and Root Mean Square Error (RMSE) were 18.88 cm, 19.29 cm, 58.83 cm; 8.96, 8.45, 5.68; and 20.90, 21.06, 59.10 respectively, and with the increase of GNSS signal outage time, the positioning prediction accuracy decreases.
Monocular vision obstacle avoidance method for quadcopter based on deep learning
ZHANG Wuyang, ZHANG Wei, SONG Fang, LONG Lin
2019, 39(4): 1001-1005. DOI: 10.11772/j.issn.1001-9081.2018091952
Abstract | PDF (890KB) | References | Related Articles | Metrics
A monocular vision obstacle avoidance method for quadcopters based on deep learning was proposed to help quadcopters avoid obstacles. Firstly, the position of an object in the image was obtained by object detection, and the distance between the quadcopter and the obstacle was estimated by calculating the height of the object box in the image. Then, whether to perform obstacle avoidance was determined by the synergetic computer. Finally, experiments were conducted on a flight test platform based on the Pixhawk flight control board. The results show that the proposed method can be applied to quadcopter obstacle avoidance at low speed. Compared with traditional active sensor methods, the proposed method greatly reduces the occupied volume with only one monocular camera as the sensor. The method is robust and can identify people with different postures as obstacles.
Application of improved A* algorithm in indoor path planning for mobile robot
CHEN Ruonan, WEN Congcong, PENG Ling, YOU Chengzeng
2019, 39(4): 1006-1011. DOI: 10.11772/j.issn.1001-9081.2018091977
Abstract | PDF (972KB) | References | Related Articles | Metrics
For indoor path planning of mobile robots in particular scenarios with multiple U-shaped obstacles, the traditional A* algorithm has problems such as ignoring the actual size of the robot and long computational time. An improved A* algorithm was proposed to solve these problems. Firstly, a neighborhood matrix was introduced to perform obstacle search, improving path safety. Then, the effects of different types and sizes of neighborhood matrices on the performance of the algorithm were studied and summarized. Finally, the heuristic function was improved by combining the angle information and the distance information (calculated with different expressions as the situation changes) to improve the calculation efficiency. The experimental results show that the proposed algorithm can obtain different safety spacings by changing the size of the obstacle search matrix, ensuring the safety of different types of robots in different environments. Moreover, in the complex environment, compared with the traditional A* algorithm, the path planning speed is improved by 28.07% and the search range is narrowed by 66.55%, which improves the responsiveness of the robot's secondary planning when encountering dynamic obstacles.
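A minimal sketch of the neighborhood-matrix obstacle check on a grid map follows; the 5x5 window size is an illustrative choice standing in for the robot-dependent matrix sizes studied in the paper.

```python
# Hedged sketch: rejecting an A* candidate cell if any obstacle lies inside a
# square neighbourhood matrix around it, so the path keeps a safety margin.
import numpy as np

def is_cell_safe(grid, row, col, half_size=2):
    """Return True if the (2*half_size+1)^2 window around (row, col) is obstacle-free.
    grid: 2D array with 1 = obstacle, 0 = free."""
    r0, r1 = max(0, row - half_size), min(grid.shape[0], row + half_size + 1)
    c0, c1 = max(0, col - half_size), min(grid.shape[1], col + half_size + 1)
    return not np.any(grid[r0:r1, c0:c1])
```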
Network representation learning algorithm incorporated with node profile attribute information
LIU Zhengming, MA Hong, LIU Shuxin, LI Haitao, CHANG Sheng
2019, 39(4): 1012-1020. DOI: 10.11772/j.issn.1001-9081.2018081851
Abstract | PDF (1354KB) | References | Related Articles | Metrics
In order to enhance the quality of network representation learning with node profile information, and to address the problems of semantic dispersion and incompleteness of node profile attribute information in social networks, a network representation learning algorithm incorporating node profile information was proposed, namely NPA-NRL. Firstly, attribute information was encoded by one-hot encoding, and a data augmentation method based on random perturbation was introduced to overcome the incompleteness of node profile attribute information. Then, attribute coding and structure coding were combined as the input of a deep neural network to realize mutual complementation of the two types of information. Finally, an attribute similarity measure function based on network homogeneity and a structural similarity measure function based on the SkipGram model were designed to mine the fused semantic information through joint training. The experimental results on three real network datasets, GPLUS, OKLAHOMA and UNC, demonstrate that, compared with the classic DeepWalk, Text-Associated DeepWalk (TADW), User Profile Preserving Social Network Embedding (UPP-SNE) and Social Network Embedding (SNE) algorithms, the proposed NPA-NRL algorithm achieves a 2.75% improvement in average Area Under Curve of ROC (AUC) on the link prediction task, and a 7.10% improvement in average F1 value on the node classification task.
Multiple kernel concept factorization algorithm based on global fusion
LI Fei, DU Liang, REN Chaohong
2019, 39(4): 1021-1026. DOI: 10.11772/j.issn.1001-9081.2018081817
Abstract | PDF (890KB) | References | Related Articles | Metrics
The Non-negative Matrix Factorization (NMF) algorithm can only be used to find a low rank approximation of the original non-negative data, while the Concept Factorization (CF) algorithm extends matrix factorization to a single non-linear kernel space, improving the learning ability and adaptability of matrix factorization. In an unsupervised environment, to design or select a proper kernel function for a specific dataset, a new algorithm called Globalized Multiple Kernel CF (GMKCF) was proposed. Multiple candidate kernel functions were input at the same time and learned in the CF framework based on global linear fusion, obtaining clustering results with high quality and stability and solving the kernel function selection problem faced by CF. The convergence of the proposed algorithm was verified by solving the model with alternate iteration. The experimental results on several real datasets show that the proposed algorithm outperforms comparison algorithms in data clustering, such as Kernel K-Means (KKM), Spectral Clustering (SC), Kernel CF (KCF), Co-regularized multi-view spectral clustering (Coreg) and Robust Multiple KKM (RMKKM).
Improved BIRCH clustering algorithm based on connectivity distance and intensity
FAN Zhongxin, WANG Xing, MIAO Chunsheng
2019, 39(4): 1027-1031. DOI: 10.11772/j.issn.1001-9081.2018081790
Abstract | PDF (778KB) | References | Related Articles | Metrics
Focusing on the issues that the clustering results of Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) depend on the adding order of data objects, that BIRCH has poor clustering effect on non-convex clusters, and that each cluster of BIRCH can only contain a similar number of data objects because of the cluster diameter threshold, an improved BIRCH algorithm was proposed. In this algorithm, the cluster diameter threshold was replaced by a connectivity distance and intensity threshold describing the connectivity between data objects, and a cluster merging step was added to the generation of the cluster feature tree. Experimental results on a custom dataset and the iris, wine and pendigits datasets show that the proposed algorithm has higher clustering accuracy than existing improved algorithms such as multi-threshold BIRCH and density-improved BIRCH; especially on large datasets, the proposed algorithm has accuracy increased by 6 percentage points and running time reduced by 61% compared to density-improved BIRCH. The proposed algorithm can be applied to online real-time incremental data processing, can identify non-convex clusters and clusters with uneven volume, has a denoising function and significantly reduces time complexity and space complexity.
Functional module mining in uncertain protein-protein interaction network based on fuzzy spectral clustering
MAO Yimin, LIU Yinping, LIANG Tian, MAO Dinghui
2019, 39(4): 1032-1040. DOI: 10.11772/j.issn.1001-9081.2018091880
Abstract | PDF (1499KB) | References | Related Articles | Metrics
Aiming at the problem that Protein-Protein Interaction (PPI) network functional module mining methods based on spectral clustering and Fuzzy C-Means (FCM) clustering have low accuracy and low running efficiency and are susceptible to false positives, a method for Functional Module mining in uncertain PPI networks based on Fuzzy Spectral Clustering (FSC-FM) was proposed. Firstly, in order to overcome the effect of false positives, an uncertain PPI network was constructed, in which every protein-protein interaction was endowed with an existence probability measure by using the edge aggregation coefficient. Secondly, based on the edge aggregation coefficient and flow distance, the similarity calculation of spectral clustering was modified using the Flow distance of Edge Clustering coefficient (FEC) strategy to overcome the sensitivity of spectral clustering to the scaling parameter. Then the spectral clustering algorithm was used to preprocess the uncertain PPI network data, reducing the dimension of the data and improving the accuracy of clustering. Thirdly, a Density-based Probability Center Selection (DPCS) strategy was designed to solve the problem that the FCM algorithm is sensitive to the initial cluster centers and the number of clusters, and the processed PPI data were clustered by the FCM algorithm to improve the running efficiency and sensitivity of the clustering. Finally, the mined functional modules were filtered by an Edge-Expected Density (EED) strategy. Experiments on the yeast DIP dataset show that, compared with the Detecting protein Complexes based on Uncertain graph model (DCU) algorithm, FSC-FM has F-measure increased by 27.92% and running efficiency increased by 27.92%; compared with the uncertain model-based approach for identifying Dynamic protein Complexes in Uncertain protein-protein interaction Networks (CDUN), the Evolutionary Algorithm (EA) and the Medical Gene or Protein Prediction Algorithm (MGPPA), FSC-FM also has higher F-measure and running efficiency. The experimental results show that FSC-FM is suitable for functional module mining in uncertain PPI networks.
Time series similarity measure based on Siamese neural network
JIANG Yifan, YE Qing
2019, 39(4): 1041-1045. DOI: 10.11772/j.issn.1001-9081.2018081837
Abstract | PDF (673KB) | References | Related Articles | Metrics
In data mining tasks such as time series classification, the similarity performance based on category differs significantly across datasets, so a reasonable and effective similarity measure is crucial to data mining. Traditional methods such as Euclidean Distance (ED), cosine distance and Dynamic Time Warping (DTW) only focus on the similarity formula of the data themselves, but ignore the influence of the knowledge annotation contained in different datasets on the similarity measure. To solve this problem, a learning method of time series similarity measure based on Siamese Neural Network (SNN) was proposed. In this method, the neighborhood relationship between the data was learnt from the supervision information of sample tags, and an efficient distance measure between time series was established. Similarity measurement and confirmatory classification experiments were performed on time series datasets provided by UCR. Experimental results show that compared with ED/DTW-1NN (one Nearest Neighbor), the overall classification quality of SNN is improved significantly. The DTW-based 1NN classification method outperforms the SNN-based 1NN classification method on some data, but SNN outperforms DTW in the complexity and speed of similarity calculation during classification. The results show that the proposed method can significantly improve the measurement efficiency of dataset similarity classification, and has good performance for high-dimensional and complex time series data classification.
Time series trend prediction at multiple time scales
WANG Jince, DENG Yueping, SHI Ming, ZHOU Yunfei
2019, 39(4): 1046-1052. DOI: 10.11772/j.issn.1001-9081.2018091882
Abstract | PDF (983KB) | References | Related Articles | Metrics
A time series trend prediction algorithm at multiple time scales based on a novel feature model was proposed to solve the trend prediction problem of stock and fund time series data. Firstly, a feature tree with features at multiple time scales was extracted from the original time series, describing the time series with the characteristics of the series at each level and the relationships between levels. Then, the hidden states in the feature sequences were extracted by clustering. Finally, a Multiple Time Scaled Trend Prediction Algorithm (MTSTPA) was designed by using a Hidden Markov Model (HMM) to simultaneously predict the trend and the length of the trend at different scales. In experiments on real stock datasets, the prediction accuracy at every scale is more than 60%. Compared with the algorithm without the feature tree, the model using the feature tree is more efficient, and the accuracy is up to 10 percentage points higher at a certain scale. At the same time, compared with the classical Auto-Regressive Moving Average (ARMA) model and the pattern-based Hidden Markov Model (PHMM), MTSTPA performs better, verifying its validity.
Agricultural greenhouse temperature prediction method based on improved deep belief network
ZHOU Xiangyu, CHENG Yong, WANG Jun
2019, 39(4): 1053-1058. DOI: 10.11772/j.issn.1001-9081.2018091876
Abstract | PDF (890KB) | References | Related Articles | Metrics
Concerning the low representation ability and long learning time for complex and variable environmental factors in greenhouses, a prediction method based on an improved Deep Belief Network (DBN) combined with Empirical Mode Decomposition (EMD) and Gated Recurrent Unit (GRU) was proposed. Firstly, the temperature environment factor was decomposed by EMD, and then the decomposed intrinsic mode functions and residual signal were predicted at different degrees. Secondly, glia were introduced to improve the DBN, and the decomposed signals combined with illumination and carbon dioxide were used for multi-attribute feature extraction. Finally, the signal components predicted by GRU were summed to obtain the final prediction result. The simulation results show that compared with the empirical mode decomposition belief network (EMD-DBN) and the DBN with glial chains (DBN-g), the prediction error of the proposed method is reduced by 6.25% and 5.36% respectively, verifying its effectiveness and feasibility for prediction in greenhouse time series environments with strong noise and coupling.
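The decomposition-then-recombination structure described above can be sketched as follows, assuming the PyEMD package is available; predict_component is a placeholder standing in for the GRU and glia-improved DBN predictors, not the authors' models.

```python
# Hedged sketch: decompose the temperature series with EMD, predict each
# component separately, then sum the component predictions.
import numpy as np
from PyEMD import EMD   # assumes the PyEMD (EMD-signal) package is installed

def predict_component(component, horizon):
    # Placeholder: persistence forecast; the paper uses GRU / improved DBN here.
    return np.repeat(component[-1], horizon)

def emd_forecast(series, horizon=12):
    emd = EMD()
    imfs = emd.emd(np.asarray(series, dtype=float))     # intrinsic mode functions (+ residue)
    parts = [predict_component(imf, horizon) for imf in imfs]
    return np.sum(parts, axis=0)                        # recombine into the final forecast
```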
Dynamic multi-keyword ranked search over encrypted data supporting semantic extension
PANG Xiaoqiong, YAN Xiaolong, CHEN Wenjun, YU Benguo, NIE Mengfei
2019, 39(4): 1059-1065. DOI: 10.11772/j.issn.1001-9081.2018091865
Abstract | PDF (1001KB) | References | Related Articles | Metrics
Since existing dynamic multi-keyword ranked search schemes over encrypted data in cloud storage cannot support semantic extension and do not have forward and backward security, a multi-keyword ranked search scheme over encrypted cloud data was proposed, which supports semantic search and achieves forward and backward security. The semantic extension of query keywords was achieved by constructing a semantic relationship graph, the retrieval and dynamic update of data were achieved by use of a tree-based index structure, the multi-keyword ranked search was achieved based on the vector space model, and the extended index and query vectors were encrypted by using the secure K-nearest neighbor algorithm. Security analysis indicates that the proposed scheme is secure under the known ciphertext model and achieves forward and backward security during dynamic update. Efficiency analysis and simulation experiments show that this scheme is superior to schemes of the same type with the same security or function in server retrieval efficiency.
Real-time defence against dynamic host configuration protocol flood attack in software defined network
ZOU Chengming, LIU Panwen, TANG Xing
2019, 39(4): 1066-1072. DOI: 10.11772/j.issn.1001-9081.2018091852
Abstract | PDF (1082KB) | References | Related Articles | Metrics
In Software Defined Network (SDN), Dynamic Host Configuration Protocol (DHCP) flood attack packets can actively enter the controller in reactive mode, which causes a huge hazard to SDN. Aiming at the problem that traditional defense methods against DHCP flood attacks cannot keep the SDN network from control-link blocking caused by the attack, a Dynamic Defense Mechanism (DDM) against DHCP flood attacks was proposed. DDM is composed of a detection model and a mitigation model. In the detection model, different from static threshold detection methods, a dynamic peak estimation model was constructed from two key parameters - the average DHCP traffic and the IP pool surplus - to evaluate whether the ports were attacked; if so, the mitigation model was informed. In the mitigation model, IP pool cleaning was performed based on the response character of the Address Resolution Protocol (ARP), and an interval interception mechanism was designed to intercept the attack source, mitigating the congestion and minimizing the impact on users during interception. Simulation experimental results show that the detection error of DDM is 18.75% on average, lower than that of static threshold detection. The DDM mitigation model can effectively intercept traffic and reduce the waiting time for users to access the network during interception by an average of 81.45%.
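A hedged sketch of a per-port dynamic-peak check driven by the two parameters named above (average DHCP traffic and IP pool surplus) is shown below; the combining formula and base_factor are illustrative assumptions, not the paper's estimation model.

```python
# Hedged sketch: a dynamic per-port threshold that tightens as the DHCP
# address pool empties. Purely illustrative of the detection idea.
def port_under_attack(dhcp_rate, avg_dhcp_rate, ip_pool_free, ip_pool_size,
                      base_factor=3.0):
    """Flag a port when its DHCP request rate exceeds the dynamic peak."""
    pool_ratio = ip_pool_free / ip_pool_size if ip_pool_size else 0.0
    dynamic_peak = base_factor * avg_dhcp_rate * max(pool_ratio, 0.1)
    return dhcp_rate > dynamic_peak
```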
Mechanism of trusted storage in Ethereum based on smart contract
CAO Didi, CHEN Wei
2019, 39(4): 1073-1080. DOI: 10.11772/j.issn.1001-9081.2018092005
Abstract | PDF (1333KB) | References | Related Articles | Metrics
Aiming at the problem that the Ethereum platform has simple data management functions and poor performance in terms of low throughput and high latency, a trusted storage mechanism based on smart contracts in Ethereum was proposed. Firstly, a framework of trusted storage based on smart contracts was proposed to solve the data management problems exposed in Ethereum. Secondly, the framework and implementation of the proposed mechanism were expounded from the aspects of centralized data processing, authenticated data distributed storage and dynamic forensics. Finally, the feasibility of the mechanism was proved by system development based on smart contracts. The experimental and analysis results show that compared with traditional relational database storage, the proposed method increases processing credibility, storage credibility and access credibility; compared with blockchain storage, it enriches the data management functions, reduces the cost of block storage and improves the efficiency of storage.
Malicious webpage integrated detection method based on Stacking ensemble algorithm
PIAOYANG Heran, REN Junling
2019, 39(4): 1081-1088. DOI: 10.11772/j.issn.1001-9081.2018091926
Abstract | PDF (1165KB) | References | Related Articles | Metrics
Aiming at the problems of excessive resource cost, long detection period and poor classification performance of mainstream malicious webpage detection technologies, a Stacking-based malicious webpage integrated detection method was proposed, applying heterogeneous classifier integration to malicious webpage detection and recognition. By extracting and analyzing the relevant factors of webpage features and performing classification and ensemble learning, the detection model was obtained. In the detection model, the primary classifiers were constructed based on the K-Nearest Neighbors (KNN) algorithm, the logistic regression algorithm and the decision tree algorithm respectively, and a Support Vector Machine (SVM) classifier was used as the secondary classifier. Compared with traditional malicious webpage detection methods, the proposed method improves the recognition accuracy by 0.7% and obtains a high accuracy of 98.12% under the conditions of low resource consumption and high speed. The experimental results show that the detection model constructed by the proposed method can recognize malicious webpages efficiently and accurately.
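The base/meta layout described above maps directly onto a standard stacking implementation; the scikit-learn sketch below uses the named learners with illustrative hyperparameters and omits the webpage feature extraction.

```python
# Hedged sketch: Stacking with KNN, logistic regression and a decision tree as
# primary classifiers and an SVM as the secondary (meta) classifier.
from sklearn.ensemble import StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

def build_stacking_detector():
    base_learners = [
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=10)),
    ]
    return StackingClassifier(estimators=base_learners,
                              final_estimator=SVC(kernel="rbf"),
                              cv=5)

# Usage: model = build_stacking_detector(); model.fit(X_train, y_train)
```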
Intrusion detection method for industrial control system with optimized support vector machine and K-means++
CHEN Wanzhi, XU Dongsheng, ZHANG Jing, TANG Yu
2019, 39(4): 1089-1094. DOI: 10.11772/j.issn.1001-9081.2018091932
Abstract | PDF (829KB) | References | Related Articles | Metrics
Aiming at the problem that traditional single detection algorithm models have a low detection rate and slow detection speed on different types of attacks in industrial control systems, an intrusion detection model combining an optimized Support Vector Machine (SVM) and the K-means++ algorithm was proposed. Firstly, the original dataset was preprocessed by Principal Component Analysis (PCA) to eliminate its correlation. Secondly, an adaptive mutation process was added to the Particle Swarm Optimization (PSO) algorithm to avoid falling into a local optimal solution during training. Thirdly, the PSO with Adaptive Mutation (AMPSO) algorithm was used to optimize the kernel function and penalty parameters of the SVM. Finally, a K-means algorithm improved by the density center method was combined with the optimized SVM to form the intrusion detection model, achieving anomaly detection for industrial control systems. The experimental results show that the proposed method can significantly improve the detection speed and the detection rate for various attacks.
Group key management scheme based on distributed path computing element in multi-domain optical network
ZHOU Yang, WU Qiwu, JIANG Lingzhi
2019, 39(4): 1095-1099. DOI: 10.11772/j.issn.1001-9081.2018092045
Abstract | PDF (786KB) | References | Related Articles | Metrics
A group key management scheme based on the distributed Path Computation Element (PCE) architecture was proposed, aiming at the communication characteristics and key management requirements of multi-domain optical networks under the PCE architecture. Firstly, the key relations of a multi-domain optical network under the distributed PCE architecture were modeled as a two-layer key hypergraph by using hypergraph theory. Then, a key management method based on a self-authenticated public key cryptosystem and member filtering technique was adopted in the autonomous domain layer, and a group key agreement method based on the elliptic curve cryptosystem was adopted in the PCE layer. Finally, the generation, distribution, update and dynamic management of the keys were completed, and the confidentiality problem of member private keys and the impersonation problem of third-party nodes were well solved, while the computational overhead of key update was reduced. The performance analysis shows that the proposed scheme has forward security, backward security and private key confidentiality, and resists collusion attacks. Compared with the typical decentralized scheme, the proposed scheme achieves better performance in terms of key storage capacity, encryption/decryption times and communication overhead.
Low-density 3D model information hiding algorithm based on multiple fusion states
REN Shuai, XU Zhenchao, WANG Zhen, HE Yuan, ZHANG Tao, SU Dongxu, MU Dejun
2019, 39(4): 1100-1105. DOI: 10.11772/j.issn.1001-9081.2018091855
Abstract | PDF (929KB) | References | Related Articles | Metrics
Aiming at the problem that the existing 3D model information hiding algorithms cannot effectively resist uneven compression, a multi-carrier low-density information hiding algorithm based on multiple fusion states was proposed. Firstly, multiple 3D models were positioned, oriented and stereotyped by translation and scaling. Secondly, the 3D models were rotated at different angles and merged by using the center point as merging point to obtain multiple fusion states. Thirdly, local height and Mean Shift clustering analysis were used to divide the energy of the vertices of the fusion state model, obtaining the vertices with different energies. Finally, by changing the vertex coordinates, the secret information changed by Arnold scrambling was quickly hidden in multiple fusion states and 3D models. Experimental results show that the proposed algorithm is robust against uneven compression attacks and has high invisibility.
Task scheduling strategy based on data stream classification in Heron
ZHANG Yitian, YU Jiong, LU Liang, LI Ziyang
2019, 39(4): 1106-1116. DOI: 10.11772/j.issn.1001-9081.2018081848
Abstract | PDF (1855KB) | References | Related Articles | Metrics
In a new platform for big data stream processing called Heron, the round-robin scheduling algorithm is usually used for task scheduling by default, which does not consider the topology runtime state and the impact of different communication modes among task instances on Heron's performance. To solve this problem, a task scheduling strategy based on Data Stream Classification in Heron (DSC-Heron) was proposed, including data stream classification algorithm, data stream cluster allocation algorithm and data stream classification scheduling algorithm. Firstly, the instance allocation model of Heron was established to clarify the difference in communication overhead among different communication modes of the task instances. Secondly, the data stream was classified according to the real-time data stream size between task instances based on the data stream classification model of Heron. Finally, the packing plan of Heron was constructed by using the interrelated high-frequency data streams as the basic scheduling unit to complete the scheduling to minimize the communication cost by transforming inter-node data streams into intra-node ones as many as possible. After running SentenceWordCount, WordCount and FileWordCount topologies in a Heron cluster environment with 9 nodes, the results show that compared with the Heron default scheduling strategy, DSC-Heron has 8.35%, 7.07% and 6.83% improvements in system complete latency, inter-node communication overhead and system throughput respectively; in the load balancing aspect, the standard deviations of CPU usage and memory usage of the working nodes are decreased by 41.44% and 41.23% respectively. All experimental results show that DSC-Heron can effectively improve the performance of the topologies, and has the most significant optimization effect on FileWordCount topology which is close to the real application scenario.
Optimal combination prediction based on polynomial coefficient autoregressive model for radar performance parameter
WU Jie, LYU Yongle
2019, 39(4): 1117-1121. DOI: 10.11772/j.issn.1001-9081.2018091878
Abstract | PDF (795KB) | References | Related Articles | Metrics
Aiming at low prediction accuracy of the variation trend of radar performance parameters in Prognostics and Health Management (PHM) of radar, a prediction method based on Polynomial Coefficient AutoRegressive (PCAR) model was proposed. Firstly, the form of PCAR model and methods of determining order and parameters were introduced. Compared with the traditional linear model, PCAR model expanded the model selection range and effectively reduced the modeling deviation. Then, to further improve prediction accuracy, the performance parameter monitoring sequence was divided into subsequences corresponding to each failure factor by selecting the optimal threshold on the basis of Singular Value Decomposition Filtering Algorithm (SVDFA). Finally, PCAR models with different orders were employed to realize the prediction. As shown in the simulation experiment, compared with the results predicted by the single AutoRegressive Moving Average model, the combined prediction method improves the accuracies of the three performance parameter monitoring sequences by 79.7%, 97.6% and 82.8% respectively. The results show that the proposed method can be applied to the prediction of radar performance parameters and improve the operational reliability of radar.
Rate smooth switching algorithm based on DASH standard
HUANG Sheng, FU Yuanpeng, ZHANG Qianyun
2019, 39(4): 1122-1126. DOI: 10.11772/j.issn.1001-9081.2018091933
Abstract | PDF (887KB) | References | Related Articles | Metrics
Concerning the fact that existing rate adaptation algorithms based on Dynamic Adaptive Streaming over HTTP (DASH) suffer from frequent bitrate switching and low average bitrate in wireless networks, a Rate Smooth Switching (RSS) algorithm based on the DASH standard was proposed. Firstly, a sliding window was used by the bandwidth detection mechanism of the algorithm to sample the download speed of historical segments and calculate the bandwidth offset coefficient; the fluctuation of the bandwidth was initially determined according to the value of the offset coefficient, and it was further determined whether the fluctuation showed a consistent variation trend, thereby distinguishing continuous variation from short-term jitter of the bandwidth, and the bandwidth prediction value corresponding to each circumstance was calculated. Secondly, with bandwidth fluctuation, buffer occupancy and variation, and the bandwidth prediction value considered, the rate decision model of the algorithm adopted Fast Buffering (FB), Slow Switching (SS), Fast Rising (FR), Limited Declining (LD) and Stable Holding (SH) strategies and a sleeping mechanism to dynamically control the video bitrate selection process. The experimental results show that compared with the fuzzy-based DASH rate adaptation algorithm and the modulated throughput driven rate adaptation algorithm, the proposed algorithm can not only increase the bitrate to the optimum level in the shortest time at the beginning of video playback to improve the average bitrate, but also minimize the number of bitrate switches in the case of sudden change and frequent fluctuation of bandwidth, thus providing a good quality of experience for wireless video users.
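A minimal sketch of a sliding-window bandwidth estimator in the spirit of the detection mechanism above follows; taking the offset coefficient as the coefficient of variation of recent segment throughputs and using a jitter threshold of 0.3 are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: sliding-window bandwidth estimation that separates a
# consistent trend from short-term jitter before predicting bandwidth.
from collections import deque
from statistics import mean, pstdev

class BandwidthEstimator:
    def __init__(self, window=5, jitter_threshold=0.3):
        self.samples = deque(maxlen=window)
        self.jitter_threshold = jitter_threshold

    def add_segment(self, throughput_kbps):
        self.samples.append(throughput_kbps)

    def estimate(self):
        if not self.samples:
            return 0.0
        history = list(self.samples)
        avg = mean(history)
        offset = pstdev(history) / avg if avg else 0.0   # bandwidth offset coefficient
        rising = all(b >= a for a, b in zip(history, history[1:]))
        falling = all(b <= a for a, b in zip(history, history[1:]))
        if offset <= self.jitter_threshold:
            return avg                  # stable bandwidth: use the window average
        if rising or falling:
            return history[-1]          # consistent trend: follow the latest sample
        return min(history)             # short-term jitter: be conservative
```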
Dynamic adaptive step-wise bitrate switching algorithm for HTTP streaming
TU Daxi, JIANG Yuhao, XU Cheng, YU Linchen
2019, 39(4): 1127-1132. DOI: 10.11772/j.issn.1001-9081.2018091893
Abstract | PDF (998KB) | References | Related Articles | Metrics
Aiming at the problem of low quality of video viewing experience in dynamic network environment with limited cache capacity, a Dynamic Adaptive Step-wise Bitrate Switching (DASBS) algorithm for HTTP streaming considering network bandwidth and cache capacity was proposed. Firstly, a sliding window was used to analyze the recent downloaded fragments, obtaining the initial bandwidth estimation. Then, according to the real-time bandwidth fluctuation degree and cache state, two correction factors were set to further smooth the bandwidth estimation. Finally, a cache threshold was set to establish a correlation with the current bitrate, and the bandwidth estimation and the cache dynamic threshold were used to jointly control the bitrate switching. Experimental results on platform libdash show that DASBS is better than Video Quality Control for QoE (VQCQ) algorithm in switching smoothness and its average bitrate of video playback is higher, which effectively improves the bandwidth utilization. Although the average bitrate is slightly lower than that of Evolution of Adaptive Bitrate Switching (EABS) algorithm, the number of switching times is greatly reduced, improving the switching stability. The experimental results show that the proposed algorithm has high bandwidth utilization, switching smoothness and switching stability in dynamic network environment, which can effectively improve user experience.
Low complexity reactive tabu search detection algorithm in MIMO-GFDM systems
ZHOU Wei, XIANG Danlei, GUO Mengyu
2019, 39(4): 1133-1137. DOI: 10.11772/j.issn.1001-9081.2018092002
Abstract | PDF (721KB) | References | Related Articles | Metrics
The equivalent channel matrix dimension of a Generalized Frequency Division Multiplexing with Multiple Input Multiple Output (MIMO-GFDM) system is very large, and traditional Multiple Input Multiple Output (MIMO) detection algorithms have high complexity and poor performance. Aiming at those problems, the Reactive Tabu Search (RTS) detection algorithm used in massive MIMO systems was applied to the MIMO-GFDM system, and the high complexity of computing the initial value in the RTS algorithm was also addressed. Firstly, by using the positive definite symmetry of the matrix used in the Minimum Mean Squared Error (MMSE) detection algorithm, Cholesky decomposition was applied to the matrix and combined with the Sherman-Morrison formula to iteratively calculate the initial value, reducing the high complexity of the initial value inversion. Then, with the result of the improved MMSE detection as the initial value of the RTS algorithm, the optimum solution was searched globally starting from this initial value. Finally, the number of iterations and the Bit Error Rate (BER) performance were studied through simulations. Theoretical analysis and simulation results show that in MIMO-GFDM, the improved RTS signal detection algorithm has a much lower BER than traditional signal detection algorithms. With 4 Quadrature Amplitude Modulation (4QAM), the RTS algorithm achieves a signal-to-noise ratio performance gain of approximately 6 dB over MMSE detection (when the BER is 10⁻³); with 16QAM, the gain over MMSE detection is approximately 4 dB (when the BER is 10⁻²). Compared with the traditional RTS algorithm, the proposed algorithm has lower complexity without affecting the BER performance.
Phase error analysis and amplitude improvement algorithm for asymmetric paired carrier multiple access signal
XU Xingchen, CHENG Jian, TANG Jingyu, ZHANG Jian
2019, 39(4): 1138-1144. DOI: 10.11772/j.issn.1001-9081.2018092003
Abstract | PDF (935KB) | References | Related Articles | Metrics
To solve the signal demodulation problem of asymmetric Paired Carrier Multiple Access (PCMA) signals composed of main station and small station signals at the same frequency, a framework to realize this kind of signal demodulation was constructed. Parameter estimation is an indispensable part in realizing two-way signal separation and demodulation for asymmetric PCMA communication systems. For the estimation accuracy of amplitude parameters, a searching amplitude estimation algorithm based on the fourth-power method was proposed. Firstly, the demodulation model for asymmetric PCMA systems was established and the basic assumptions were made. Then the phase errors under different assumptions were compared with each other, and the influence of the phase error on the amplitude estimation algorithm was analyzed. Finally, a new amplitude estimation algorithm was proposed. Experimental results show that, under the same Signal-to-Noise Ratio (SNR), the demodulation performance of the small station signal under normal phase error is inferior to its demodulation performance under the mean value condition. When the order of magnitude of the Bit Error Rate (BER) is 10⁻⁴, the demodulation performance of the small station signal is improved by 1 dB with the improved algorithm, proving that the improved algorithm is better than the fourth-power method.
Fast scale adaptive object tracking algorithm with separating window
YANG Chunde, LIU Jing, QU Zhong
2019, 39(4): 1145-1149. DOI: 10.11772/j.issn.1001-9081.2018081821
Abstract | PDF (807KB) | References | Related Articles | Metrics
In order to solve the problem of object drift caused by the Kernelized Correlation Filter (KCF) tracking algorithm when the scale changes, a Fast Scale Adaptive tracking of Correlation Filter (FSACF) algorithm was proposed. Firstly, a global gradient combination feature map based on salient color features was obtained by directly extracting features from the original frame image, reducing the effect of subsequent scale calculation on performance. Secondly, a separating-window method was applied to the global feature map, adaptively selecting the scale and calculating the corresponding maximum response value. Finally, a defined confidence function was used to adaptively update the iterative template function, improving the robustness of the model. Experimental results on video sets with different interference attributes show that compared with the KCF algorithm, the accuracy of the FSACF algorithm was improved by 7.4 percentage points and the success rate was increased by 12.8 percentage points; compared with the algorithm without global features and the separating window, the Frames Per Second was improved by 1.5 times. The experimental results show that the FSACF algorithm avoids object drift when facing scale change while maintaining efficiency, and is superior to the comparison algorithms in accuracy and success rate.
Object tracking algorithm based on correlation filter with spatial structure information
HU Xiuhua, WANG Changyuan, XIAO Feng, WANG Yawen
2019, 39(4): 1150-1156. DOI: 10.11772/j.issn.1001-9081.2018091884
Abstract | PDF (1190KB) | References | Related Articles | Metrics
To solve the tracking drift problem caused by the low discriminability of sample information in the typical correlation filtering framework, a correlation filter based object tracking algorithm with spatial structure information was proposed. Firstly, the spatial context structure constraint was introduced to optimize the model construction, and the regularized least squares and matrix decomposition ideas were exploited to obtain the closed-form solution. Then, complementary features were used for the target appearance description, and a scale factor pool was utilized to deal with target scale changes. Finally, according to the occlusion of the target judged by motion continuity, the corresponding model updating strategy was designed. Experimental results demonstrate that compared with the traditional algorithm, the precision of the proposed algorithm is increased by 17.63% and the success rate is improved by 24.93% in various typical test scenarios, achieving a more robust tracking effect.
Positioning accuracy analysis of optical micropositioning system
CHEN Xiong, ZOU Xiangjun, FAN Ke, LU Jun
2019, 39(4): 1157-1161. DOI: 10.11772/j.issn.1001-9081.2018091895
Abstract | PDF (830KB) | References | Related Articles | Metrics
In order to improve the accuracy of identification and localization of cell microorganisms by an optical micropositioning system, the hand-eye calibration method should be optimized on the one hand, and the accuracy of global image recognition should be improved on the other. Aiming at these, a two-step method for hand-eye calibration of the system was proposed. Firstly, the origin of the system was determined by calibrating a fixed target, and the transformation relationship of the vision module to the origin of the system was obtained. Then, according to the starting point of each photograph, the number of photographs and the step size of movement, the transformation relationship of the global image to the origin of the system was solved. Finally, in order to further improve the accuracy of the global transformation relationship, an error correction method based on the Fourier transform was used to obtain the error of the vision module during movement, and the error was added to the system for compensation. Experimental results show that after error compensation, the micropositioning system has the mean error in the X-axis direction reduced from 10.23 μm to -0.002 μm, the mean error in the Y-axis direction reduced from 6.9 μm to -0.50 μm, and an average positioning accuracy over 99%. The results show that the proposed method can be applied to optical micropositioning systems for high-precision automated capture of cell microorganisms.
Weakly illuminated image enhancement algorithm based on convolutional neural network
CHENG Yu, DENG Dexiang, YAN Jia, FAN Ci'en
2019, 39(4): 1162-1169. DOI: 10.11772/j.issn.1001-9081.2018091979
Abstract | PDF (1448KB) | References | Related Articles | Metrics
Existing weakly illuminated image enhancement algorithms are strongly dependent on the Retinex model and require manual adjustment of parameters. To solve those problems, an algorithm based on Convolutional Neural Network (CNN) was proposed to enhance weakly illuminated images. Firstly, four image enhancement techniques were used to process the weakly illuminated image to obtain four derivative images: the contrast limited adaptive histogram equalization derivative image, the Gamma correction derivative image, the logarithmic correction derivative image and the bright channel enhancement derivative image. Then, the weakly illuminated image and its four derivative images were input into the CNN. Finally, the enhanced image was output by the CNN after activation. The proposed algorithm can directly map the weakly illuminated image to the normally illuminated image in an end-to-end way without estimating the illumination map or reflection map according to the Retinex model and without adjusting any parameters. The proposed algorithm was compared with the Naturalness Preserved Enhancement Algorithm for non-uniform illumination images (NPEA), Low-light image enhancement via Illumination Map Estimation (LIME), LightenNet (LNET), etc. In the experiment on synthetic weakly illuminated images, the average Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity index (SSIM) metrics of the proposed algorithm are superior to those of the comparison algorithms. In the experiment on real weakly illuminated images, the average Natural Image Quality Evaluator (NIQE) and entropy metrics of the proposed algorithm are the best of all comparison algorithms, and the average contrast gain metric ranks second among all algorithms. Experimental results show that compared with the comparison algorithms, the proposed algorithm has better robustness, and the images enhanced by it have richer details, higher contrast, and better visual effect and image quality.
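The four derivative images can be produced with standard operations, as in the OpenCV/NumPy sketch below; the gamma value, log scaling and bright-channel window size are illustrative choices, not the paper's settings, and the input is assumed to be an 8-bit BGR image.

```python
# Hedged sketch: generating the four derivative inputs (CLAHE, gamma,
# logarithmic, bright-channel enhancement) before feeding them to the CNN.
import cv2
import numpy as np

def derivative_images(img_bgr, gamma=0.5, patch=15):
    img = img_bgr.astype(np.float32) / 255.0

    # contrast limited adaptive histogram equalization on the L channel
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l_ch, a_ch, b_ch = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    merged = cv2.merge((clahe.apply(l_ch), a_ch, b_ch))
    clahe_img = cv2.cvtColor(merged, cv2.COLOR_LAB2BGR).astype(np.float32) / 255.0

    gamma_img = np.power(img, gamma)            # gamma correction
    log_img = np.log1p(img) / np.log(2.0)       # logarithmic correction

    # illustrative bright-channel enhancement: divide by the local bright channel
    bright = cv2.dilate(img.max(axis=2), np.ones((patch, patch), np.uint8))
    bright_img = np.clip(img / (bright[..., None] + 1e-6), 0.0, 1.0)

    return clahe_img, gamma_img, log_img, bright_img
```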
Foreground detection with weighted Schatten-p norm and 3D total variation
CHEN Lixia, LIU Junli, WANG Xuewen
2019, 39(4): 1170-1175. DOI:
10.11772/j.issn.1001-9081.2018092038
Low-rank and sparse methods generally treat the foreground as anomalous pixels in the background, which degrades foreground detection precision in complex scenes. To address this, a foreground detection method combining the weighted Schatten-p norm with 3D Total Variation (3D-TV) was proposed. Firstly, the observed data were decomposed into a low-rank background, a moving foreground and dynamic disturbance. Then, 3D total variation was used to constrain the moving foreground and strengthen the prior on the spatio-temporal continuity of foreground objects, effectively suppressing random disturbance from anomalous pixels in the discontinuous dynamic background. Finally, the low-rank property of the video background was constrained by the weighted Schatten-p norm to remove noise interference. Experimental results show that, compared with Robust Principal Component Analysis (RPCA), Higher-order RPCA (HoRPCA) and Tensor RPCA (TRPCA), the proposed model achieves the highest F-measure and optimal or sub-optimal recall and precision. It can be concluded that the proposed model better overcomes interference in complex scenes, such as dynamic backgrounds and severe weather, and improves both the extraction accuracy and the visual effect of moving objects.
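The weighted Schatten-p norm constraint on the background is typically handled by a weighted shrinkage of singular values inside the solver. The sketch below assumes the special case p = 1, where the shrinkage reduces to weighted soft-thresholding; the paper's update for general p and its weighting rule may differ.

```python
import numpy as np

def weighted_svt(X, tau, eps=1e-6):
    """Weighted singular value thresholding (p = 1 special case, sketch only).

    Larger singular values get smaller weights, so dominant background
    structure is shrunk less than noise-like components.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = 1.0 / (s + eps)                 # reweighting: large singular value, small penalty
    s_shrunk = np.maximum(s - tau * w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```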
Palm vein enhancement method based on adaptive fusion
LOU Mengying, YUAN Lisha, LIU Yaqin, WAN Xuemei, YANG Feng
2019, 39(4): 1176-1182. DOI:
10.11772/j.issn.1001-9081.2018092043
To address the degradation of recognition performance caused by unclear palm vein contours and low image contrast and brightness, a palm vein enhancement method based on adaptive fusion was proposed. Firstly, a Dark Channel Prior (DCP) enhanced image was obtained by the DCP defogging algorithm, with the defogging coefficient selected adaptively according to the variation coefficient of the palm vein image, and a Partial Overlapped Sub-block Histogram Equalization (POSHE) enhanced image was obtained by the POSHE algorithm. Secondly, the image was divided into 16 sub-blocks, and the weight of each sub-block was determined by its gray mean and standard deviation. Finally, the two enhanced images were fused adaptively according to the sub-block weights, yielding the adaptively fused enhanced image. The method retains the advantage of the DCP algorithm in enhancing image contrast and brightness without introducing significant noise, and the advantage of the POSHE algorithm in doing so without losing local details. Meanwhile, the adaptive fusion of the two algorithms compensates for the palm veins missing in the shadow areas of DCP images and reduces the blocking artifacts produced by POSHE. Experiments on two public databases and a self-built database show equal error rates of 0.0004, 0.0472 and 0.0579 and correct recognition rates of 99.98%, 94.27% and 92.05% respectively, indicating that, compared with existing image enhancement methods, the proposed method reduces the equal error rate and improves recognition accuracy.
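The block-wise adaptive fusion step can be sketched as follows: divide the image into 16 sub-blocks, score each sub-block of the two enhanced images by its gray mean and standard deviation, and blend them block by block. The specific weighting formula below is an assumption for illustration, not the exact rule of the paper.

```python
import numpy as np

def fuse_blocks(img_dcp, img_poshe, grid=(4, 4)):
    """Blend two enhanced images block by block using per-block statistics (sketch).

    Assumes both inputs are 2-D arrays of equal shape with dimensions divisible
    by the grid size (4 x 4 = 16 sub-blocks).
    """
    h, w = img_dcp.shape
    out = np.empty((h, w), dtype=np.float32)
    bh, bw = h // grid[0], w // grid[1]
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys, xs = slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw)
            a = img_dcp[ys, xs].astype(np.float32)
            b = img_poshe[ys, xs].astype(np.float32)
            # Score each candidate block by brightness (mean) and contrast (std).
            score_a = a.mean() + a.std()
            score_b = b.mean() + b.std()
            alpha = score_a / (score_a + score_b + 1e-12)   # weight of the DCP block
            out[ys, xs] = alpha * a + (1.0 - alpha) * b
    return out
```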
Automatic segmentation of nasopharyngeal neoplasm in MR image based on U-net model
PAN Peike, WANG Yan, LUO Yong, ZHOU Jiliu
2019, 39(4): 1183-1188. DOI:
10.11772/j.issn.1001-9081.2018091908
Because nasopharyngeal tumors grow in uncertain directions and have complex anatomical structure, doctors usually delineate the tumor regions in MR images manually, which is time-consuming, and the result depends heavily on the doctor's experience. To solve this problem, a U-net based automatic segmentation algorithm for nasopharyngeal tumors in MR images was proposed, in which the max-pooling operations of the original U-net model were replaced by convolution operations to keep more feature information. Firstly, 128×128 regions were extracted from all slices containing tumor regions as data samples. Secondly, the patient samples were divided into a training set and a testing set, and data augmentation was performed on the training samples. Finally, all training samples were used to train the model. To evaluate the proposed model, all slices of the patients in the testing set were segmented; the final average results are a Dice Similarity Coefficient (DSC) of 80.05%, a Prevent Match (PM) coefficient of 85.7%, a Correspondence Ratio (CR) coefficient of 71.26% and an Average Symmetric Surface Distance (ASSD) of 1.1568. Compared with a Convolutional Neural Network (CNN) based model, the DSC, PM and CR coefficients of the proposed method are increased by 9.86, 19.61 and 16.02 percentage points respectively, and the ASSD is decreased by 0.4364. Compared with a Fully Convolutional Network (FCN) model and a max-pooling based U-net model, the DSC and CR coefficients of the proposed method are the best, while the PM coefficient is 2.55 percentage points lower than the maximum of the two comparison models and the ASSD is slightly higher than their minimum by 0.0046. The experimental results show that the proposed model achieves good segmentation of nasopharyngeal neoplasms and can assist doctors in diagnosis.
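The key architectural change described above, replacing max-pooling downsampling with a convolution, could look like the following PyTorch block; the channel counts, kernel size and placement in the network are assumptions.

```python
import torch.nn as nn

class ConvDown(nn.Module):
    """Downsampling by a strided convolution instead of 2x2 max-pooling (sketch)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            # A stride-2 convolution halves the spatial size but, unlike max-pooling,
            # its weights are learned, so more feature information can be kept.
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```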
Automatic screening of abnormal cervical nucleus based on maximum section feature
HAN Ying, ZHAO Meng, CHEN Shengyong, WANG Zhaoxi
2019, 39(4): 1189-1195. DOI:
10.11772/j.issn.1001-9081.2018091904
Fine segmentation of cervical cell images is so complex that automatic abnormal cell screening based on cell image segmentation is difficult to achieve. To address this, a cervical cell classification algorithm without a fine segmentation step was proposed. Firstly, a new feature named MAXimum Section (MAXSection) was defined to describe the distribution of pixel values, and it was combined with a Back Propagation (BP) neural network and the Selective Search algorithm to extract the nucleus Region Of Interest (ROI) accurately (with a highest accuracy of 100%). Secondly, two parameters named estimated length and estimated width were defined based on MAXSection to describe the morphological changes of abnormal nuclei. Finally, exploiting the fact that cervical nuclei enlarge markedly when cervical cancer occurs, these two parameters were used to classify nuclei as abnormal (at least one of estimated length and width greater than 65) or normal (both less than 65). Experimental results show that the proposed algorithm achieves a screening accuracy of 98.89%, a sensitivity of 98.18% and a specificity of 99.20%. The algorithm covers the whole process from the input of a whole Pap smear image to the output of the final screening result, realizing automated screening of abnormal cervical cells.
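The final screening rule stated above is a simple threshold test on the two shape parameters; a literal transcription is given below, where the threshold of 65 is quoted from the abstract and the function name is ours.

```python
def is_abnormal(estimated_length, estimated_width, threshold=65):
    """A nucleus is flagged abnormal if either estimated dimension exceeds the threshold."""
    return estimated_length > threshold or estimated_width > threshold
```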
Fractional differential algorithm based on wavelet transform applied on texture enhancement of liver tumor in CT image
QIU Jiajun, WU Yue, HUI Bei, LIU Yanbo
2019, 39(4): 1196-1200. DOI:
10.11772/j.issn.1001-9081.2018081823
Smooth texture details are easily lost during image texture enhancement. Fractional-order differential enhancement can preserve the texture details of smooth regions nonlinearly, but it is sensitive to frequency resolution. To address this, a fractional differential texture enhancement algorithm based on the wavelet transform was proposed and applied to the texture enhancement of liver tumor regions in plain Computed Tomography (CT) images. Firstly, the wavelet transform was used to decompose the image region of interest into multiple subband components. Then, a fractional differential mask with a compensation parameter was constructed based on the fractional-order differential definition. Finally, the mask was convolved with each high-frequency subband component, and the region of interest was recombined by the inverse wavelet transform. The experimental results show that the algorithm effectively preserves low-frequency smooth texture details while noticeably enhancing the high-frequency contour information of the tumor region at a relatively large fractional order: compared with the original region, the enhanced hepatocellular carcinoma region has its information entropy increased by 36.56% on average and its average gradient increased by 321.56% on average, with a mean absolute difference of 9.287 on average; the enhanced hepatic hemangioma region has its information entropy increased by 48.77% on average and its average gradient increased by 511.26% on average, with a mean absolute difference of 14.097 on average.
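A fractional differential mask is usually built from the Grünwald-Letnikov coefficients. The sketch below computes those coefficients and a 1-D mask with a simple additive compensation term; it is a generic illustration, and the compensation rule and mask size are assumptions rather than the paper's exact construction.

```python
import numpy as np

def gl_coefficients(order, n_terms):
    """Grünwald-Letnikov coefficients (-1)^k * C(order, k) for k = 0..n_terms-1."""
    c = np.empty(n_terms)
    c[0] = 1.0
    for k in range(1, n_terms):
        c[k] = c[k - 1] * (k - 1 - order) / k
    return c

# Example: a 5-tap 1-D fractional differential mask of order 0.5 with a small
# compensation term added to the leading tap (the compensation rule is assumed).
order, taps, compensation = 0.5, 5, 0.1
mask = gl_coefficients(order, taps)
mask[0] += compensation
print(mask)
```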
Feature point localization of left ventricular ultrasound image based on convolutional neural network
ZHOU Yujin, WANG Xiaodong, ZHANG Lige, ZHU Kai, YAO Yu
2019, 39(4): 1201-1207. DOI:
10.11772/j.issn.1001-9081.2018091931
To address the low accuracy of feature point localization in left ventricular ultrasound images by the traditional cascaded Convolutional Neural Network (CNN), an improved cascaded CNN, with the target region extracted by a Faster Region-based CNN (Faster-RCNN) model, was proposed to locate the left ventricular endocardial and epicardial feature points in ultrasound images. Firstly, the traditional cascaded CNN was improved with a two-stage cascade structure: in the first stage, an improved convolutional network was used to roughly locate the joint endocardial and epicardial feature points; in the second stage, four improved convolutional networks were used to fine-tune the endocardial and epicardial feature points separately, after which the positions of the joint contour feature points were output. Secondly, the improved cascaded CNN was combined with target region extraction: the region containing the left ventricle was extracted by the Faster-RCNN model and then fed into the improved cascaded CNN. Finally, the left ventricular contour feature points were located from coarse to fine. Experimental results show that, compared with the traditional cascaded CNN, the proposed method locates left ventricular feature points much more accurately and its predicted points are closer to the ground truth; under the root mean square error criterion, the feature point localization accuracy is improved by 32.6 percentage points.
Node recognition for different types of sugarcanes based on machine vision
SHI Changyou, WANG Meili, LIU Xinran, HUANG Huili, ZHOU Deqiang, DENG Ganran
2019, 39(4): 1208-1213. DOI:
10.11772/j.issn.1001-9081.2018092016
Sugarcane nodes are difficult to recognize because different types of sugarcane have diverse and complex surfaces. To solve this problem, a machine-vision based node recognition method suitable for different types of sugarcane was proposed. Firstly, using an iterative linear fitting algorithm, the target region was extracted from the original image and its slope angle to the horizontal axis was estimated; according to this angle, the target was rotated to be nearly parallel to the horizontal axis. Secondly, the Double-Density Dual Tree Complex Wavelet Transform (DD-DTCWT) was used to decompose the image, and the image was reconstructed from the wavelet coefficients that were perpendicular or approximately perpendicular to the horizontal axis. Finally, a line detection algorithm was applied to obtain the lines near the sugarcane nodes, and recognition was achieved by further verifying the density, length and mutual distances of the edge lines. Experimental results show that the complete recognition rate reaches 92%, the localization error of about 80% of the nodes is less than 16 pixels, and that of 95% of the nodes is less than 32 pixels. The proposed method thus recognizes nodes of different types of sugarcane under different backgrounds with high positioning accuracy.
Dense subgraph based telecommunication fraud detection approach in bank
LIU Xiao, WANG Xiaoguo
2019, 39(4): 1214-1219. DOI:
10.11772/j.issn.1001-9081.2018091861
Banks have accumulated little labeled telecommunication fraud data, and manual labeling is costly, so labeled data for supervised telecommunication fraud detection are insufficient. To solve this problem, an unsupervised method based on dense subgraphs was proposed to detect telecommunication fraud. Firstly, fraud accounts were identified by searching for subgraphs with a high anomaly degree in the network of accounts and resources (IP addresses and MAC addresses). Then, a subgraph anomaly degree metric matching the characteristics of telecommunication fraud was designed. Finally, a disk-resident, memory-efficient suspicious subgraph search algorithm with a theoretical guarantee was proposed. On two synthetic datasets, the F1-scores of the proposed method are 0.921 and 0.861, higher than those of the CrossSpot, fBox and EvilCohort algorithms and very close to those of the M-Zoom algorithm (0.899 and 0.898), while the average running time and peak memory consumption of the proposed method are lower than those of M-Zoom. On a real-world dataset, the F1-score of the proposed method is 0.550, higher than those of fBox and EvilCohort and very close to that of M-Zoom (0.529). Theoretical analysis and simulation results show that the proposed method can be applied effectively to telecommunication fraud detection in banks and is suitable for large datasets in practice.
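Dense-subgraph search of this kind is often realized as greedy peeling: repeatedly remove the node that contributes least to the current density and keep the densest intermediate subgraph. The sketch below does this for a plain undirected graph with average-degree density; the paper's own anomaly-degree metric and disk-resident variant are not reproduced, so this is only a simplified stand-in.

```python
import networkx as nx

def densest_subgraph_greedy(G):
    """Greedy peeling for the average-degree densest subgraph (simplified sketch)."""
    H = G.copy()
    best_nodes, best_density = set(H.nodes), 0.0
    while H.number_of_nodes() > 0:
        density = 2.0 * H.number_of_edges() / H.number_of_nodes()
        if density >= best_density:
            best_density, best_nodes = density, set(H.nodes)
        # Remove the node with minimum degree (least contribution to density).
        v = min(H.degree, key=lambda kv: kv[1])[0]
        H.remove_node(v)
    return best_nodes, best_density
```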
Humanoid robot local environment and capability map model based on Octomap
YI Kang, ZHAO Yuting, QI Xinshe
2019, 39(4): 1220-1223. DOI:
10.11772/j.issn.1001-9081.2018091935
A 3D capability map model of a humanoid robot built from 3D point cloud data requires a large amount of voxel grid searching. Considering the hierarchical advantage of the OcTree in subdividing 3D space, a local environment and capability map model based on Octomap was proposed. Firstly, a binary-tree-like kinematics model of the NAO humanoid robot was constructed from the joint composition, forward kinematics, inverse kinematics and rigid body coordinate transformations of the NAO robot. Secondly, forward kinematics was used to compute the 3D discrete reachable point cloud in Cartesian space, which served as the basic workspace of the robot's end effector. Thirdly, the methods for transforming the point cloud representation into an Octomap node representation, especially the probability update of a space node, were described in detail. Finally, an optimization method for selecting the update order of space nodes was proposed based on the geometric relationship between nodes, efficiently realizing an optimized spatial representation of the humanoid robot's capability map. Experimental results show that, compared with the original Octomap update method, the proposed algorithm reduces the number of space nodes by nearly 30% and improves computational efficiency.
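The probability update of a space node in Octomap follows the standard log-odds occupancy rule with clamping, sketched below in isolation; the numerical constants are values commonly used in occupancy mapping, not values quoted from the paper.

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

L_OCC, L_FREE = logodds(0.7), logodds(0.4)     # measurement model (assumed values)
L_MIN, L_MAX = logodds(0.12), logodds(0.97)    # clamping thresholds (assumed values)

def update_node(l_prev, hit):
    """Update a node's log-odds occupancy after one observation and clamp it."""
    l_new = l_prev + (L_OCC if hit else L_FREE)
    return min(max(l_new, L_MIN), L_MAX)

def occupancy_probability(l):
    """Convert a log-odds value back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```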
Brain network analysis method based on feature vector of electroencephalograph subsequence
YANG Xiong, YAO Rong, YANG Pengfei, WANG Zhe, LI Haifang
2019, 39(4): 1224-1228. DOI:
10.11772/j.issn.1001-9081.2018092037
Most complex network analyses of working memory use channels as nodes and analyze from the spatial perspective, while channel networks are rarely analyzed from the temporal perspective. Considering the high temporal resolution of the ElectroEncephaloGraph (EEG) and the difficulty of segmenting time series, a method for constructing and analyzing networks from the temporal perspective was proposed. Firstly, microstates were used to divide the EEG signal of each channel into sub-segments serving as network nodes. Secondly, effective features were extracted and selected from the sub-segments as sub-segment feature vectors, and the correlations between these feature vectors were computed to construct a channel time-sequence complex network. Finally, the attributes and similarity of the constructed network were analyzed and verified on EEG data of schizophrenia patients. The experimental results show that analyzing schizophrenia data with the proposed method makes full use of the temporal characteristics of EEG signals, helps to understand the channel time-sequence networks constructed during working memory of patients with schizophrenia from the temporal perspective, and reveals significant differences between patients and normal controls.
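The network construction step above amounts to computing pairwise correlations between sub-segment feature vectors and thresholding them into edges. A minimal sketch, assuming Pearson correlation and an arbitrary threshold:

```python
import numpy as np

def build_network(features, threshold=0.6):
    """Build an adjacency matrix from node feature vectors (sketch).

    `features` is an (n_nodes, n_features) array; an edge is kept when the
    absolute Pearson correlation between two nodes exceeds the threshold.
    """
    corr = np.corrcoef(features)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)     # no self-loops
    return adj
```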
Multivariate time series fault warning for wind turbine gearbox
LIU Shuai, LIU Changliang, ZHEN Chenggang
2019, 39(4): 1229-1233. DOI:
10.11772/j.issn.1001-9081.2018102087
In wind turbine fault warning, the original Dynamic Time Warping (DTW) algorithm cannot effectively measure the distance between two multivariate time series of wind turbine data. Aiming at this problem, a DTW algorithm based on Hesitation Fuzzy Sets (HFS-DTW) was proposed. It extends the original DTW algorithm, can measure the distance of both univariate and multivariate time series, and offers higher accuracy and speed than the original DTW algorithm. Taking the sub-sequence similarity distance as the cost function, the sub-sequence length and step parameters of the HFS-DTW algorithm were optimized with the Imperialist Competitive Algorithm (ICA). The study shows that, compared with the original DTW algorithm and HFS-DTW with non-optimal parameters, HFS-DTW with optimal parameters can mine more information at multi-dimensional feature points, and the output similar sequences of multi-dimensional feature points contain more details. Based on the proposed algorithm, a wind turbine gearbox fault can be warned of 10 days in advance.
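The distance computation extended by HFS-DTW is the classic dynamic-programming DTW recursion applied to multivariate samples. The plain (non-hesitation-fuzzy) version is sketched below for reference; the hesitation fuzzy weighting and the ICA parameter search of the paper are not reproduced.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two multivariate series of shape (length, n_vars) (sketch)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # Euclidean distance of samples
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```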
High-precision positioning algorithm for deformation monitoring based on carrier phase difference
CHEN Kai, SUN Xiyan, JI Yuanfa, WANG Shouhua, CHEN Ziqiang
2019, 39(4): 1234-1239. DOI:
10.11772/j.issn.1001-9081.2018071454
The traditional carrier phase difference algorithm is not well suited to deformation monitoring: the accuracy of Real-Time Kinematic (RTK) positioning cannot meet the requirements, and static relative positioning based on double-differenced carrier phase tracks deformation poorly under continuous calculation. To solve these problems, after an in-depth study of dynamic and static algorithms, an adaptive dynamic-static fusion algorithm based on carrier phase difference was proposed. The convergence of the positioning results was judged in real time by a variance-change test, and the state prior estimation of the Extended Kalman Filter (EKF) was then adjusted adaptively: at convergence epochs, the covariance of the prior estimation error of the position parameters was increased, so that the EKF posterior tended to trust the measurements; at non-convergence epochs, EKF iteration was used, so that the posterior tended to trust the state prediction. The experimental results show that, compared with traditional RTK, the accuracy of the new algorithm is improved, with a horizontal accuracy of ±2 mm and a vertical accuracy of ±4 mm; compared with static positioning, the observation period is shortened and the tracking of micro-deformation is improved.
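The adaptive part of the filter amounts to inflating the prior error covariance of the position states at detected convergence epochs so that the update step leans on the new measurement. A schematic sketch, with the inflation factor, state indexing and convergence test left as assumptions:

```python
import numpy as np

def predict_covariance(P, F, Q, converged, inflation=100.0, pos_idx=(0, 1, 2)):
    """EKF covariance prediction with adaptive inflation of the position block (sketch)."""
    P_prior = F @ P @ F.T + Q
    if converged:
        # At a detected convergence epoch, enlarge the prior uncertainty of the
        # position states so the posterior trusts the measurements more.
        for i in pos_idx:
            P_prior[i, i] *= inflation
    return P_prior
```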
Average consensus of heterogeneous multi-agent based on model reference
YU Jiaxing, WEI Haiping, JIN Lina, WEI Yufeng
2019, 39(4): 1240-1246. DOI:
10.11772/j.issn.1001-9081.2018081824
For heterogeneous linear Multi-Agent Systems (MAS) with unknown parameters, a fixed-output average consensus protocol was proposed for undirected or balanced directed networks to drive the output of each agent to the average of the initial outputs. Firstly, each agent in the network was modeled as an unknown linear system of different order with relative degree 1 or 2, whose state is updated according to the outputs of itself and its neighboring nodes. Then, based on the model reference control method, corresponding reference models were defined for agents with different relative degrees. Finally, a consensus protocol was proposed that makes the output of each agent converge to the output of its reference model, thereby achieving fixed-output average consensus. Simulation of an illustrative example demonstrates the effectiveness and convergence of the proposed protocol.
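The average-consensus behaviour targeted by the protocol can be illustrated with the standard discrete-time consensus iteration on an undirected graph, in which every state converges to the average of the initial values; the heterogeneous model-reference design of the paper is not reproduced here.

```python
import numpy as np

def consensus(x0, A, step=0.1, iters=200):
    """Discrete-time average consensus x(k+1) = x(k) - step * L x(k) (sketch)."""
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian of the undirected network
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = x - step * (L @ x)
    return x                              # each entry approaches mean(x0)

# Example: a 4-node path graph; all states converge to 2.5.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(consensus([1.0, 2.0, 3.0, 4.0], A))
```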