Table of Contents
10 February 2019, Volume 39 Issue 2
Automatic text summarization scheme based on deep learning
ZHANG Kejun, LI Weinan, QIAN Rong, SHI Taimeng, JIAO Meng
2019, 39(2): 311-315. DOI: 10.11772/j.issn.1001-9081.2018081958
Aiming at the problems of inadequate semantic understanding, improperly formed summary sentences and inaccurate summaries in abstractive automatic summarization in Natural Language Processing (NLP), a new automatic summarization solution was proposed, including an improved word vector generation technique and an abstractive automatic summarization model. The improved word vector generation technique built on the word vectors generated by the skip-gram method; combining them with the characteristics of summaries, three word features — part of speech, word frequency and inverse document frequency — were introduced, which effectively improved the understanding of words. The proposed Bi-MulRnn+ abstractive summarization model was based on the sequence-to-sequence (seq2seq) framework and an autoencoder structure. By introducing an attention mechanism, the Gated Recurrent Unit (GRU) gate structure, a Bi-directional Recurrent Neural Network (BiRnn) and a Multi-layer Recurrent Neural Network (MultiRnn), the model improved the summary accuracy and sentence fluency of abstractive summarization. Experimental results on the Large-Scale Chinese Short Text Summarization (LCSTS) dataset show that the proposed scheme can effectively solve the abstractive summarization problem for short texts and performs well under the ROUGE evaluation metrics, improving summary accuracy and sentence fluency.
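A minimal sketch of the feature-augmented word vectors the abstract describes, assuming pre-trained skip-gram embeddings are available; the dimensions and tag set below are illustrative, not the paper's configuration:

    import numpy as np

    def augment_word_vector(vec, pos_onehot, tf, idf):
        """Concatenate a skip-gram vector with part-of-speech, TF and IDF features."""
        return np.concatenate([vec, pos_onehot, [tf, idf]])

    base = np.random.rand(128)            # stand-in for a skip-gram embedding
    pos = np.eye(8)[2]                    # one-hot part-of-speech tag (8 tags assumed)
    augmented = augment_word_vector(base, pos, tf=0.02, idf=3.1)
    print(augmented.shape)                # (138,) = 128 + 8 + 2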
Multi-attribute decision making method based on Pythagorean fuzzy Frank operator
PENG Dinghong, YANG Yang
2019, 39(2): 316-322. DOI: 10.11772/j.issn.1001-9081.2018061195
To solve multi-attribute decision making problems in a Pythagorean fuzzy environment, a multi-attribute decision making method based on the Pythagorean fuzzy Frank operator was proposed. Firstly, Pythagorean fuzzy numbers and the Frank operator were combined to obtain operation rules based on the Frank operator. Then the Pythagorean fuzzy Frank operators were proposed, including the Pythagorean fuzzy Frank weighted average operator and the Pythagorean fuzzy Frank weighted geometric operator, and the properties of these operators were discussed. Finally, a multi-attribute decision making method based on the Pythagorean fuzzy Frank operator was proposed and applied to an example of green supplier selection. The example analysis shows that the proposed method can solve practical multi-attribute decision making problems and can be further applied to areas such as risk management and artificial intelligence.
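For background, the Frank t-norm and t-conorm that underlie such operation rules take the following standard form (a reference definition with parameter \lambda > 0, \lambda \neq 1, not reproduced from the paper itself):

    T_\lambda(x, y) = \log_\lambda\!\left(1 + \frac{(\lambda^x - 1)(\lambda^y - 1)}{\lambda - 1}\right),
    \qquad
    S_\lambda(x, y) = 1 - \log_\lambda\!\left(1 + \frac{(\lambda^{1-x} - 1)(\lambda^{1-y} - 1)}{\lambda - 1}\right)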
Task assignment method of product development based on knowledge similarity
CHEN Youling, ZUO Lidan, NIU Yufei, WANG Long
2019, 39(2): 323-329. DOI: 10.11772/j.issn.1001-9081.2018061325
Focusing on the issue that knowledge is unequal in task assignment for product development, a task assignment model of product development based on bilateral matching between tasks and designers was proposed. Firstly, the matching between task and designer was transformed into the knowledge similarity between them from the perspective of knowledge quantification, then the order value matrix was established and transformed into the satisfaction degree matrix of task-to-designer matching. Secondly, according to the degree of preference of the designer for the task under different task attributes, the order value matrix of designer-to-task satisfaction was obtained. Thirdly, based on the principle of maximum satisfaction for both parties, a multi-objective optimization model based on knowledge similarity and designer preference was constructed. The weighted-sum method based on membership functions was used to convert the multi-objective optimization model into a linear programming model, which was then solved by Matlab programming. Finally, taking a crankshaft linkage mechanism produced by an enterprise as an example, the matching result between four tasks and seven designers was obtained to determine the final assignment plan. Compared with task assignment methods of product development based on clustering analysis and bilateral matching, the apparent difference in knowledge similarity and designer preference between designers 3 and 7 indicates that the proposed method can assign tasks more efficiently.
Scheduled competition learning based multi-objective particle swarm optimization algorithm
LIU Ming, DONG Minggang, JING Chao
2019, 39(2): 330-335. DOI: 10.11772/j.issn.1001-9081.2018061201
In order to improve the diversity of the population and the convergence performance of the algorithm, a Scheduled competition learning based Multi-Objective Particle Swarm Optimization (SMOPSO) algorithm was proposed. The multi-objective particle swarm optimization algorithm and the competition learning mechanism were combined, and the competition learning mechanism was applied every fixed number of iterations to maintain the diversity of the population. Meanwhile, to improve the convergence of the algorithm without using a global best external archive, elite particles were selected from the current swarm, and then a global best particle was randomly selected from these elite particles. The performance of the proposed algorithm was verified on 21 benchmarks and compared with 8 algorithms, such as Multi-objective Particle Swarm Optimization algorithm based on Decomposition (MPSOD), Competitive Mechanism based multi-Objective Particle Swarm Optimizer (CMOPSO) and Reference Vector guided Evolutionary Algorithm (RVEA). The experimental results show that the proposed algorithm obtains a more uniform Pareto front and a smaller Inverted Generational Distance (IGD).
Krill herd algorithm based on generalized opposition-based learning and its application in data clustering
DING Cheng, WANG Qiuping, WANG Xiaofeng
2019, 39(2): 336-342. DOI: 10.11772/j.issn.1001-9081.2018061437
In order to solve the premature convergence caused by the decrease of population diversity in the optimization process of the Krill Herd (KH) algorithm, an improved krill herd algorithm based on Generalized Opposition-Based Learning was proposed, namely GOBL-KH. Firstly, step size factors were determined by a cosine decreasing strategy to balance the exploration and exploitation abilities of the algorithm. Then, a generalized opposition-based learning strategy was added to the search of each krill, which enhanced the ability of the krill to explore the neighborhood space around it. The proposed algorithm was tested on fifteen benchmark functions and compared with the original KH algorithm, KH with Linear Decreasing step (KHLD) and KH with Cosine Decreasing step (KHCD). The experimental results show that the proposed algorithm can effectively avoid premature convergence and has higher accuracy. To further demonstrate its effectiveness, it was combined with the K-means algorithm to solve the data clustering problem, namely HK-KH. In this fusion algorithm, after each iteration, the worst individual was replaced by the optimal individual or a new individual produced by the K-means iteration. Five UCI datasets were used to test the HK-KH algorithm, and the results were compared with those of K-means, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), KH, KH Clustering Algorithm (KHCA) and Improved KH (IKH) on clustering problems. The experimental results show that the HK-KH algorithm is suitable for data clustering and has strong global convergence and high stability.
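A minimal sketch of the generalized opposition-based learning step, assuming a population bounded by [a, b] in each dimension and a user-supplied fitness function (all names and the toy objective are illustrative):

    import numpy as np

    def gobl_step(population, a, b, fitness):
        """Generalized opposition-based learning: x* = k*(a+b) - x with random k in (0,1);
        keep whichever of x and x* has the better (lower) fitness."""
        k = np.random.rand()
        opposite = np.clip(k * (a + b) - population, a, b)   # stay inside search bounds
        keep = fitness(population) <= fitness(opposite)
        return np.where(keep[:, None], population, opposite)

    pop = np.random.uniform(-5, 5, size=(30, 2))   # 30 krill in a 2-D search space
    sphere = lambda x: (x ** 2).sum(axis=1)        # toy objective
    pop = gobl_step(pop, -5, 5, sphere)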
Object tracking algorithm based on parallel tracking and detection framework and deep learning
YAN Ruoyi, XIONG Dan, YU Qinghua, XIAO Junhao, LU Huimin
2019, 39(2): 343-347. DOI: 10.11772/j.issn.1001-9081.2018061211
In the context of air-ground robot collaboration, the appearance of a moving ground object changes greatly from the perspective of a drone, and traditional object tracking algorithms can hardly accomplish target tracking in such scenarios. To solve this problem, an object detection and tracking algorithm based on the Parallel Tracking And Detection (PTAD) framework and deep learning was proposed. Firstly, the Single Shot MultiBox Detector (SSD) object detection algorithm based on Convolutional Neural Network (CNN) was used as the detector in the PTAD framework to process keyframes, obtain object information and provide it to the tracker. Secondly, the detector and tracker processed image frames in parallel, and the overlap between the detection and tracking results as well as the confidence level of the tracking results were calculated. Finally, the proposed algorithm determined whether the tracker or the detector needed to be updated according to the tracking or detection status, realizing real-time tracking of the object in image frames. Compared with the original PTAD algorithms on video sequences captured from the perspective of a drone, the experimental results show that the performance of the proposed algorithm is better than that of the best algorithm within the PTAD framework, and its real-time performance is improved by 13%, verifying its effectiveness.
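The overlap between detector and tracker outputs is commonly computed as an intersection-over-union (IoU) score; a minimal sketch with boxes given as (x, y, w, h), all values illustrative:

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
        ax, ay, aw, ah = box_a
        bx, by, bw, bh = box_b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    det, trk = (10, 10, 50, 80), (14, 12, 48, 80)
    drifted = iou(det, trk) < 0.5   # a low overlap could trigger tracker re-initialization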
Continuous action segmentation and recognition based on sliding window and dynamic programming
YANG Shiqiang, LUO Xiaoyu, QIAO Dan, LIU Peilei, LI Dexin
2019, 39(2): 348-353. DOI: 10.11772/j.issn.1001-9081.2018061344
Concerning the fact that there is little research on continuous action recognition in the field of action recognition and that single algorithms perform poorly on continuous actions, a segmentation and recognition method for continuous actions based on single-action modeling was proposed, combining the sliding window method and dynamic programming. Firstly, the single-action model was constructed based on the Deep Belief Network and Hidden Markov Model (DBN-HMM). Secondly, the log-likelihood of the trained action model and the sliding window method were used to estimate the score of the continuous action and detect the initial segmentation points. Thirdly, dynamic programming was used to optimize the locations of the segmentation points and identify the single actions. Finally, testing experiments on continuous action segmentation and recognition were conducted with the public action database MSR Action3D. The experimental results show that dynamic programming based on the sliding window can optimize the selection of segmentation points and improve the recognition accuracy, so the method can be used to recognize continuous actions.
Image caption generation algorithm based on multi-attention and multi-scale feature fusion
CHEN Longjie, ZHANG Yu, ZHANG Yumei, WU Xiaojun
2019, 39(2): 354-359. DOI: 10.11772/j.issn.1001-9081.2018071464
Focusing on the problems of low caption quality, insufficient utilization of image features and the single-level structure of recurrent neural networks in image caption generation, an image caption generation algorithm based on multi-attention and multi-scale feature fusion was proposed. A pre-trained target detection network was used to extract features of the image from the convolutional neural network, and these features were input into multi-attention structures at different layers. Each attention part, fed with features of a different level, was connected to the multi-level recurrent neural networks sequentially, constructing a multi-level image caption generation network model. By introducing residual connections in the recurrent networks, the network complexity was reduced and the network degradation caused by deepening the network was avoided. On the MSCOCO dataset, the BLEU-1 and CIDEr scores of the proposed algorithm reach 0.804 and 1.167, clearly superior to the top-down image caption generation algorithm based on a single attention structure. Both manual observation and comparison results validate that the image captions generated by the proposed algorithm show better details.
Efficient subgraph matching method based on resource description framework graph segmentation and vertex selectivity
GUAN Haoyuan, ZHU Bin, LI Guanyu, CAI Yongjia
2019, 39(2): 360-369. DOI: 10.11772/j.issn.1001-9081.2018061262
As graph-based queries in SPARQL query processing become more and more inefficient due to the increasing structural complexity of Resource Description Framework (RDF) graphs, RDF Triple Patterns Selectivity (RTPS), a graph structure segmentation rule based on the selectivity of RDF vertices, was proposed to improve the efficiency of subgraph matching on RDF graphs, by analyzing the basic structure of RDF graphs and the selectivity of RDF vertices. Firstly, according to the commonality of the predicate structure in the data graph and the query graph, an RDF Adjacent Predicate Path (RAPP) index was built, and the data graph structure was transformed into an incoming-outgoing predicate path structure to determine the search space of query vertices and speed up the filtering of RDF vertices. Secondly, an Integer Linear Programming (ILP) model was built to divide an RDF query graph with complicated structure into several query subgraphs with simple structure. By analyzing the structural characteristics of the RDF vertices in the adjacent subgraphs, the selectivity of the query vertices was established and the optimal segmentation was determined. Thirdly, with the search space narrowed down by the RDF vertex selectivity and the structural characteristics of adjacent subgraphs, the matchable RDF vertices in the data graph were found. Finally, the RDF data graph was traversed to find the subgraphs whose structure matched that of the query subgraphs, and the result graph was output by joining these subgraphs together. The controlled variable method was used in the experiments to compare the query response time of RTPS, RDF Subgraph Matching (RSM), RDF-3X, GraSS and R3F. The experimental results show that, compared with the other four methods, RTPS has shorter query response time and higher query efficiency when the number of triple patterns in a query graph is more than 9.
Fault detection method for batch process based on deep long short-term memory network and batch normalization
WANG Shuo, WANG Peiliang
2019, 39(2): 370-375. DOI: 10.11772/j.issn.1001-9081.2018061371
Traditional data-driven fault detection methods for batch processes often need to make assumptions about the distribution of process data, and often lead to false positives and false negatives when dealing with non-linear and other complex data. To solve this problem, a supervised learning algorithm based on Long Short-Term Memory (LSTM) network and Batch Normalization (BN) was proposed, which needs no assumptions about the distribution of the original data. Firstly, a preprocessing method based on variable-wise unfolding and continuous sampling was applied to the raw batch process data, so that the processed data could be input to the LSTM unit. Then, an improved deep LSTM network was used for feature learning; by adding BN layers and using the cross-entropy loss, the network was able to effectively extract the characteristics of the batch process data and learn quickly. Finally, a simulation experiment was performed on a semiconductor etching process. The experimental results show that, compared with the Multilinear Principal Component Analysis (MPCA) method, the proposed method can identify more fault types and effectively identify various faults, with an overall fault detection rate above 95%. Compared with the traditional single-LSTM model, it achieves higher recognition speed and an overall fault detection rate increased by more than 8%, and it is suitable for fault detection problems with non-linear and multi-case characteristics in batch processes.
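A minimal sketch of such an LSTM-plus-BN classifier in Keras; the layer sizes, window length and number of fault classes are illustrative assumptions, not the paper's configuration:

    from tensorflow.keras import layers, models

    def build_lstm_bn(window=10, n_vars=17, n_classes=5):
        """Stacked LSTM with Batch Normalization, trained with cross-entropy loss."""
        model = models.Sequential([
            layers.Input(shape=(window, n_vars)),
            layers.LSTM(64, return_sequences=True),
            layers.BatchNormalization(),
            layers.LSTM(32),
            layers.BatchNormalization(),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_lstm_bn()
    model.summary()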
Fish recognition method for submarine observation video based on deep learning
ZHANG Junlong, ZENG Guosun, QIN Rufu
2019, 39(2): 376-381. DOI: 10.11772/j.issn.1001-9081.2018061372
As it is hard to recognize marine fishes appearing in submarine observation videos due to the bad undersea environment and the low quality of the videos, a recognition method based on deep learning was proposed. Firstly, the video was split into pictures; as this type of video contains a large proportion of useless data, a background subtraction algorithm was used to filter out pictures without fish, saving the time of processing all the data. Then, considering that the undersea environment is blurry with low brightness, the pictures were preprocessed based on the dark channel prior algorithm to improve their quality before recognition. Finally, a deep learning recognition model based on Convolutional Neural Network (CNN) was constructed with a weighted convolution process to improve the robustness of the model. The experimental results show that, on submarine observation video frames of poor quality, the method with preprocessing and weighted convolution as hidden layers increases the recognition accuracy by 23% compared with a traditional CNN, contributing to the recognition of marine fishes in submarine observation videos.
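A minimal sketch of the dark channel prior computation used in such preprocessing; the patch size and input layout are assumptions:

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        """Dark channel prior: per-pixel minimum over RGB, then a local minimum filter.
        img is an H x W x 3 float array in [0, 1]."""
        min_rgb = img.min(axis=2)                       # minimum across color channels
        return minimum_filter(min_rgb, size=patch)      # minimum over a local patch

    frame = np.random.rand(240, 320, 3)                 # stand-in for a video frame
    dc = dark_channel(frame)                            # low values indicate haze-free regions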
Improved remote sensing image classification algorithm based on deep learning
WANG Xin, LI Ke, XU Mingjun, NING Chen
2019, 39(2): 382-387. DOI: 10.11772/j.issn.1001-9081.2018061324
In order to solve the problem that traditional deep-learning-based remote sensing image classification algorithms cannot effectively fuse multiple deep learning features and that their classifiers perform poorly, an improved high-resolution remote sensing image classification algorithm based on deep learning was proposed. Firstly, a seven-layer convolutional neural network was designed and constructed. Secondly, high-resolution remote sensing images were input into the network to train it, and the outputs of the last two fully connected layers were taken as two different high-level features of the remote sensing images. Thirdly, Principal Component Analysis (PCA) was applied to the output of the fifth pooling layer in the network, and the dimensionality reduction result was taken as the third high-level feature of the remote sensing images. Fourthly, the above three kinds of features were concatenated to obtain an effective deep-learning-based remote sensing image feature. Finally, a logistic-regression-based classifier was designed for remote sensing image classification. The experimental results show that, compared with traditional deep learning algorithms, the proposed algorithm performs well in terms of classification accuracy, misclassification rate and Kappa coefficient, and achieves good classification results.
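A minimal sketch of the feature fusion and classification stage with scikit-learn, assuming the three feature arrays have already been extracted from the network (all shapes and data are illustrative stand-ins):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    n = 200                                   # number of training images (illustrative)
    fc6 = np.random.rand(n, 512)              # stand-in: first fully connected layer output
    fc7 = np.random.rand(n, 512)              # stand-in: second fully connected layer output
    pool5 = np.random.rand(n, 2048)           # stand-in: fifth pooling layer output
    labels = np.random.randint(0, 8, size=n)  # stand-in class labels

    pool5_pca = PCA(n_components=128).fit_transform(pool5)   # reduce pooling features
    fused = np.hstack([fc6, fc7, pool5_pca])                 # concatenated deep feature

    clf = LogisticRegression(max_iter=1000).fit(fused, labels)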
Hyperspectral face recognition system based on VGGNet and multi-band recurrent network
XIE Zhihua, JIANG Peng, YU Xinhe, ZHANG Shuai
2019, 39(2): 388-391. DOI: 10.11772/j.issn.1001-9081.2018081788
To improve the effectiveness of facial features represented by hyperspectral face data, a hyperspectral face recognition method based on VGGNet and multi-band recurrent training was proposed. Firstly, a Multi-Task Convolutional Neural Network (MTCNN) was used to locate the hyperspectral face image accurately in the preprocessing phase, and the hyperspectral face data was enhanced by channel mixing. Then, a VGG12 deep network based on the Convolutional Neural Network (CNN) structure was built for hyperspectral face recognition. Finally, multi-band recurrent training was introduced to train the VGG12 network and perform recognition according to the characteristics of hyperspectral face data. The experimental results on the UWA-HSFD and PolyU-HSFD databases reveal that the proposed method is superior to other deep networks such as DeepID, DeepFace and VGGNet.
Image retrieval algorithm for pulmonary nodules based on multi-scale dense network
QIN Pingle, LI Qi, ZENG Jianchao, ZHANG Na, SONG Yulong
2019, 39(2): 392-397. DOI: 10.11772/j.issn.1001-9081.2018071451
Aiming at the insufficient feature extraction in existing Content-Based Medical Image Retrieval (CBMIR) algorithms, which results in imperfect semantic information representation and poor image retrieval performance, an algorithm based on a multi-scale dense network was proposed. Firstly, the size of pulmonary nodule images was reduced from 512×512 to 64×64, and dense blocks were added to bridge the gap between the extracted low-level features and high-level semantic features. Secondly, as the information of pulmonary nodule images extracted by different layers of the network varies, the multi-scale method was used to combine the global features of the image and the local features of the nodules to generate the retrieval hash code, improving retrieval accuracy and efficiency. Finally, the experimental results show that, compared with the Adaptive Bit Retrieval (ABR) algorithm, the average retrieval accuracy for pulmonary nodule images of the proposed algorithm under a 64-bit hash code reaches 91.17%, an increase of 3.5 percentage points, and the average time required to retrieve a lung slice is 48 μs. The retrieval results of the proposed algorithm are superior to those of the other compared network structures in expressing rich semantic features and in retrieval efficiency, and the algorithm can contribute to doctor diagnosis and patient treatment.
Integrated algorithm based on density peaks and density-based clustering
WANG Zhihe, HUANG Mengying, DU Hui, QIN Hongwu
2019, 39(2): 398-402. DOI: 10.11772/j.issn.1001-9081.2018061411
In order to solve the problem that Clustering by Fast Search and Find of Density Peaks (CFSFDP) needs to manually select centers on the decision graph, an Integrated Algorithm Based on Density Peaks and Density-based Clustering (IABDPDC) was proposed. Firstly, following the principle of CFSFDP, the data point with the largest local density was selected as the first center. Then, starting from the first center, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, improved by the Warshall algorithm, was used to cluster and obtain the first category. Finally, from the data that had not yet been clustered, the point with the maximum local density was found as the center of the next category and clustering was performed again by the above algorithm, until all the data was clustered or some data was marked as noise. The proposed algorithm not only solves the problem of manual center selection in CFSFDP, but also optimizes the DBSCAN algorithm, in which every iteration starts from the current best point (the point with the largest local density). Comparisons with classical algorithms (such as CFSFDP, DBSCAN, Fuzzy C-Means (FCM) and K-means) on visual and non-visual datasets show that the proposed algorithm has better clustering effect with higher accuracy.
Mixed density peaks clustering algorithm
WANG Jun, ZHOU Kai, CHENG Yong
2019, 39(2): 403-408. DOI: 10.11772/j.issn.1001-9081.2018061373
As a new density-based clustering algorithm, clustering by fast search and find of Density Peaks (DP) regards each density peak as a potential clustering center when dealing with a single cluster that has multiple density peaks, so it is difficult for it to determine the correct number of clusters in the dataset. To solve this problem, a mixed density peak clustering algorithm named C-DP was proposed. Firstly, the density peak points were taken as the initial clustering centers and the dataset was divided into sub-clusters. Then, drawing on the Clustering Using REpresentatives (CURE) algorithm, scattered representative points were selected from the sub-clusters, the clusters of the representative point pairs with the smallest distance were merged, and a contraction factor parameter was introduced to control the shape of the clusters. The experimental results show that the C-DP algorithm achieves better clustering than the DP algorithm on four synthetic datasets; the comparison of the Rand Index on real datasets shows that on the datasets S1 and 4k2_far, the performance of C-DP is 2.32% and 1.13% higher than that of DP. Thus the C-DP algorithm improves the accuracy of clustering when a single cluster in the dataset contains multiple density peaks.
Clustering by fast search and find of density peaks based on spectrum analysis
HAN Zhonghua, BI Kaiyuan, SI Wen, LYU Zhe
2019, 39(2): 409-413. DOI: 10.11772/j.issn.1001-9081.2018061381
Since Clustering by Fast Search and Find of Density Peaks (CFSFDP) achieves inconsistent clustering effects on different datasets, an improved CFSFDP algorithm based on spectral analysis, namely CFSFDP-SA (CFSFDP based on Spectrum Analysis), was proposed. Firstly, a high-dimensional non-linear dataset was mapped into a low-dimensional subspace to realize dimensionality reduction, transforming the clustering problem into an optimal partitioning problem of a graph and enhancing the algorithm's adaptability to the global structure of the data. Secondly, the CFSFDP algorithm was used to cluster the processed dataset. Combining the advantages of the two clustering algorithms further improved the clustering performance. The clustering results on two artificial linear datasets, three artificial nonlinear datasets and four real UCI datasets show that, compared with CFSFDP, the CFSFDP-SA algorithm has higher clustering precision, achieving up to 14% improvement in accuracy on high-dimensional datasets, which means CFSFDP-SA adapts better to the original datasets.
Time series motif discovery algorithm based on subsequence full join and maximum clique
ZHU Yuelong, ZHU Xiaoxiao, WANG Jimin
2019, 39(2): 414-420. DOI: 10.11772/j.issn.1001-9081.2018061326
Existing time series motif discovery algorithms have high computational complexity and cannot find multi-instance motifs. To overcome these defects, a Time Series motif discovery algorithm based on Subsequence full Joins and Maximum Clique (TSSJMC) was proposed. Firstly, a fast time series subsequence full join algorithm was used to obtain the distances between all subsequences and generate the distance matrix. Then, a similarity threshold was set, the distance matrix was transformed into the adjacency matrix, and the subsequence similarity graph was constructed. Finally, the maximum clique in the similarity graph was extracted by a maximum clique search algorithm, and the time series corresponding to the vertices of the maximum clique were the motifs containing the most instances. In experiments on public time series datasets, the TSSJMC algorithm was compared with the Brute Force and Random Projection algorithms, which can also find multi-instance motifs, in terms of accuracy, efficiency, scalability and robustness. The experimental results demonstrate that, compared with the Random Projection algorithm, TSSJMC has obvious advantages in efficiency, scalability and robustness; compared with the Brute Force algorithm, it finds slightly fewer motif instances, but with better efficiency and scalability. Therefore, TSSJMC is an algorithm that balances quality and efficiency.
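A minimal sketch of the adjacency-graph and maximum-clique stage with NetworkX, assuming the pairwise distance matrix has already been computed (the threshold and data below are illustrative):

    import numpy as np
    import networkx as nx

    dist = np.random.rand(20, 20)          # stand-in pairwise subsequence distances
    dist = (dist + dist.T) / 2             # make it symmetric
    np.fill_diagonal(dist, np.inf)         # a subsequence never matches itself

    threshold = 0.2
    adjacency = dist <= threshold          # similar pairs become edges
    graph = nx.from_numpy_array(adjacency.astype(int))

    motif = max(nx.find_cliques(graph), key=len)   # vertices of the largest clique
    print(motif)                                   # indices of the motif's instances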
Time lag based temporal dependency episodes discovery
GU Peiyue, LIU Zheng, LI Yun, LI Tao
2019, 39(2): 421-428. DOI: 10.11772/j.issn.1001-9081.2018061366
Concerning the problem that traditional frequent episode discovery usually mines simple association dependencies between events within a predefined time window and cannot effectively handle interleaved temporal correlations between events, the concept of time-lag episode discovery was proposed, and on the basis of frequent episode discovery, an Adjacent Event Matching set (AEM) based time-lag episode discovery algorithm was proposed. Firstly, a probabilistic statistical model with time lag was introduced to realize event sequence matching and handle optional interleaved associations without a predefined time window. Then the discovery of time lag was formulated as an optimization problem which can be solved iteratively to obtain the time interval distribution between time-lag episodes. Finally, hypothesis testing was used to distinguish serial and parallel time-lag episodes. The experimental results show that, compared with the Iterative Closest Event (ICE) algorithm, the latest method for time-lag mining, the Kullback-Leibler (KL) divergence between the true and experimentally discovered distributions for AEM is 0.056 on average, a decrease of 20.68%. The AEM algorithm measures the possibility of multiple event matches through the probabilistic statistical model of time lag and obtains a one-to-many adjacent event matching set, which simulates the actual situation more effectively than the one-to-one matching set in ICE.
Bus arrival time prediction system based on Spark and particle filter algorithm
LIU Jing, XIAO Guanfeng
2019, 39(2): 429-435. DOI: 10.11772/j.issn.1001-9081.2018081800
To improve the accuracy of bus arrival time prediction, a Particle Filter (PF) algorithm with stream computing characteristics was used to establish a bus arrival time prediction model. To address the prediction error and particle optimization problems arising in the use of the PF algorithm, the prediction model was improved by introducing the latest bus speed and constructing observations, making its predictions of bus arrival time closer to actual road conditions and allowing the arrival times of multiple buses to be predicted simultaneously. Based on the above model and the Spark platform, a real-time bus arrival time prediction software system was implemented. Compared with actual results, in the off-peak period the maximum absolute error was 207 s and the mean absolute error was 71.67 s; in the peak period the maximum absolute error was 270 s and the mean absolute error was 87.61 s. The mean absolute error of the predicted results was within 2 min, a commonly recognized ideal result. The experimental results show that the proposed model and the implemented system can accurately predict bus arrival time and meet passengers' actual demand.
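A minimal sketch of one predict-update-resample cycle of a generic particle filter; the motion and observation models here are simplistic stand-ins, not the paper's model:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    particles = rng.normal(100.0, 10.0, n)      # particle positions along the route (m)
    weights = np.full(n, 1.0 / n)

    def pf_step(particles, weights, speed, dt, observation, obs_std=5.0):
        """Predict with the latest bus speed, weight by the observation, resample."""
        particles = particles + speed * dt + rng.normal(0, 1.0, particles.size)
        likelihood = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
        weights = weights * likelihood
        weights /= weights.sum()
        idx = rng.choice(particles.size, particles.size, p=weights)  # resample
        return particles[idx], np.full(particles.size, 1.0 / particles.size)

    particles, weights = pf_step(particles, weights, speed=8.3, dt=5.0, observation=140.0)
    print(particles.mean())    # state estimate after one cycle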
Anomaly detection method for hydrologic sensor data based on SparkR
LIU Zihao, LI Ling, YE Feng
2019, 39(2): 436-440. DOI: 10.11772/j.issn.1001-9081.2018081782
To efficiently detect outliers in massive hydrologic sensor data, an anomaly detection method for hydrological time series based on SparkR was proposed. Firstly, a sliding window and the Autoregressive Integrated Moving Average (ARIMA) model were used to forecast the cleaned data on the SparkR platform. Then, the confidence interval was calculated for the prediction results, and results outside the interval were judged as anomalies. Finally, based on the detection results, the K-Means algorithm was used to cluster the original data, the state transition probability was calculated, and the quality of the anomaly data was evaluated. Taking hydrologic sensor data obtained from the Chu River as experimental data, experiments on detection time and outlier detection performance were carried out. The results show that computing millions of records with two slave nodes costs more time than with one, but when computing tens of millions of records, two slaves take less time than one, with a maximum reduction of 16.21%, and the sensitivity of the evaluation is increased from 5.24% to 92.98%. This shows that on a big data platform, the proposed algorithm, which is based on the characteristics of hydrological data and combines forecast testing and cluster testing, can effectively improve the computational efficiency of hydrologic time series detection on tens of millions of records, with a significant improvement in sensitivity.
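A minimal sketch of the forecast-plus-confidence-interval check using statsmodels in Python (the ARIMA order, window length and data are illustrative; the paper's SparkR pipeline is not reproduced here):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    series = np.cumsum(rng.normal(0, 0.5, 200))          # stand-in water-level readings

    window, new_value = series[-100:], series[-1] + 5.0  # sliding window and a fresh reading
    res = ARIMA(window, order=(2, 1, 1)).fit()
    forecast = res.get_forecast(steps=1)
    low, high = forecast.conf_int(alpha=0.05)[0]         # 95% confidence interval

    is_anomaly = not (low <= new_value <= high)          # outside the interval => outlier
    print(is_anomaly)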
Improved canonical-order tree algorithm based on restructure
DU Yuan, ZHANG Shiwei
2019, 39(2): 441-445. DOI: 10.11772/j.issn.1001-9081.2018061328
In order to solve problems such as too many nodes and low compressibility in the tree structure constructed by the CANonical-order tree (CAN-tree) algorithm, an improved CAN-tree algorithm based on restructuring was proposed. Firstly, a tree structure was constructed directly in canonical order, scanning the database only once in the frequent itemset mining algorithm. Then, to obtain a tree structure with high compressibility, a pruning operation with supports in descending order was used to restructure the tree. Finally, frequent itemsets were mined from the reconstructed tree structure. The experimental results show that, compared with the original CAN-tree algorithm, the number of nodes constructed by the improved algorithm is reduced to less than 20%, and the execution efficiency is improved by 4 to 6 times. The proposed algorithm shortens the execution time of frequent itemset mining and effectively compresses the tree structure.
5G network slicing function migration strategy based on security threat prediction
HE Zanyuan, WANG Kai, NIU Ben, YOU Wei, TANG Hongbo
2019, 39(2): 446-452. DOI: 10.11772/j.issn.1001-9081.2018061399
With the development of virtualization technology, co-resident attacks have become a common means of stealing sensitive information from users. Aiming at the lag of existing dynamic virtual machine migration methods in reacting to co-resident attacks, a virtual network function migration strategy based on security threat prediction in the context of 5G network slicing was proposed. Firstly, network slicing operation security was modeled based on the Hidden Markov Model (HMM), and network security threats were predicted from multi-source heterogeneous data. Then, according to the security prediction results, the corresponding virtual network function migration strategy was adopted to minimize the migration cost. Simulation results show that the proposed strategy can effectively predict security threats by using HMM and effectively reduces the migration overhead and information leakage time, providing a better defense against co-resident attacks.
Controller deployment and switch dynamic migration strategy in software defined WAN
GUO Xuancheng, LIN Hui, YE Xiucai, XU Chuanfeng
2019, 39(2): 453-457. DOI: 10.11772/j.issn.1001-9081.2018082061
Due to the wide coverage of the Wide Area Network (WAN), single-controller deployment in Software Defined-Wide Area Network (SD-WAN) cannot meet the needs in capacity, load and security, so the deployment of multiple controllers becomes necessary. However, the static configuration of the whole network after multi-controller deployment can hardly adapt to changes in dynamic network flow, which easily leads to controller load imbalance and reduces network performance. To solve this problem, a multi-controller deployment algorithm named SC-cSNN (Spectral Clustering-closeness of the Shared Nearest Neighbors) was proposed to reduce the propagation delay between controllers and switches, and a dynamic switch migration method based on features such as time delay, capacity and security was proposed to solve the controller overload problem. Simulation results indicate that, compared with existing controller deployment algorithms based on k-means and spectral clustering, the multi-controller deployment algorithm and the dynamic switch migration method effectively minimize the average maximum delay between the controller and the switch and solve the controller overload problem.
Security mechanism for Internet of things information sharing based on blockchain technology
GE Lin, JI Xinsheng, JIANG Tao, JIANG Yiming
2019, 39(2): 458-463. DOI: 10.11772/j.issn.1001-9081.2018061247
A lightweight framework for Internet of Things (IoT) information sharing security based on blockchain technology was proposed to solve problems in IoT information sharing such as source data being susceptible to tampering, lack of a credit guarantee mechanism, and information islands. The framework used a double-chain pattern including a data blockchain and a transaction blockchain. Distributed storage and tamper-proofing were realized on the data blockchain, and the registration efficiency was improved through a modified Practical Byzantine Fault Tolerance (PBFT) protocol. Resource and data transactions were realized on the transaction blockchain, and the transaction efficiency was improved and privacy protection realized through an improved algorithm based on partially blind signatures. Simulation experiments were carried out to analyze, test and verify the anti-attack capability, double-chain processing capacity and time delay. Simulation results show that the proposed framework is secure, effective and feasible, and can be applied to most real IoT situations.
Contextual authentication method based on device fingerprint of Internet of Things
DU Junxiong, CHEN Wei, LI Xueyan
2019, 39(2): 464-469. DOI: 10.11772/j.issn.1001-9081.2018081955
Aiming at the security problem of remote control caused by illegal device access in the Internet of Things (IoT), a contextual authentication method based on device fingerprints was proposed. Firstly, the fingerprint of an IoT device was extracted from the interaction traffic by a proposed single-byte analysis method. Secondly, the process framework of the authentication was proposed, and identity authentication was performed according to six contextual factors including the device fingerprint. Finally, in experiments on IoT devices, relevant device fingerprint features were extracted and combined with a decision tree classification algorithm to verify the feasibility of the contextual authentication method. Experimental results show that the classification accuracy of the proposed method is 90%, and the 10% false negatives are special cases that still meet the authentication requirements. The results show that contextual authentication based on IoT device fingerprints can ensure that only trusted IoT terminal devices access the network.
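A minimal sketch of a decision-tree check over contextual factors with scikit-learn; the six factors and the data below are illustrative placeholders, not the paper's feature set:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(2)
    # columns: fingerprint match, time-of-day, location, protocol, frequency, history score
    X = rng.random((300, 6))
    y = (X[:, 0] > 0.5).astype(int)          # stand-in labels: 1 = trusted access

    clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
    new_session = rng.random((1, 6))
    print("trusted" if clf.predict(new_session)[0] else "rejected")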
Multi-authority access control scheme with policy hiding of satellite network
WANG Yaqiong, SHI Guozhen, XIE Rongna, LI Fenghua, WANG Yazhe
2019, 39(2): 470-475. DOI: 10.11772/j.issn.1001-9081.2018081959
Satellite networks have unique characteristics that differ from traditional networks, such as channel openness, node exposure and limited onboard processing capability, and existing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) access control is not suitable for them due to policy explosion and its attribute-based authorization manner. To address this problem, a multi-authority access control scheme with policy hiding for satellite networks was proposed. A Linear Secret Sharing Scheme (LSSS) matrix access structure was adopted to guarantee data confidentiality and to hide the access control policy completely by obfuscating the access structure. In addition, multiple authorities were used to achieve fine-grained attribute management, eliminating the performance bottleneck of a central authority. Each attribute authority worked independently and generated a partial key for the user, which makes the scheme resistant to collusion attacks. The security and performance analysis shows that the proposed scheme satisfies the security requirements of data confidentiality, collusion attack resistance and complete policy hiding, and is more suitable for satellite networks than the compared solutions.
Certificateless authentication group key agreement protocol for Ad Hoc networks
CAO Zhenhuan, GU Xiaozhuo, GU Menghe
2019, 39(2): 476-482. DOI: 10.11772/j.issn.1001-9081.2018051019
Security and efficiency are two key factors that determine whether a certificateless authenticated group key agreement protocol can be applied in Ad Hoc networks. To improve the security and efficiency of key management in securing group communications of Ad Hoc networks, a certificateless group key agreement protocol was proposed, which utilizes Elliptic Curve Cryptography (ECC) multiplication to achieve group key agreement and authentication without pairings. Meanwhile, a Huffman key tree was used to optimize the rounds of key negotiation, decreasing the computation and communication overheads and improving the efficiency of group key negotiation. Security analysis and performance comparison demonstrate that the proposed protocol achieves good efficiency and security in group key negotiation, and can satisfy group key establishment and rekeying for dynamic groups with constrained resources.
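A minimal sketch of building a Huffman-style key tree with heapq, where members with larger weight end up closer to the root and thus need fewer negotiation rounds (the weights and member names are illustrative assumptions):

    import heapq
    import itertools

    def huffman_tree(members):
        """members: list of (weight, name). Returns a nested-tuple tree."""
        counter = itertools.count()                  # tie-breaker so heap tuples compare cleanly
        heap = [(w, next(counter), name) for w, name in members]
        heapq.heapify(heap)
        while len(heap) > 1:
            w1, _, a = heapq.heappop(heap)           # merge the two lightest subtrees
            w2, _, b = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, next(counter), (a, b)))
        return heap[0][2]

    tree = huffman_tree([(5, "A"), (9, "B"), (2, "C"), (7, "D")])
    print(tree)    # deeper leaves correspond to lower-weight members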
3D medical image reversible watermarking algorithm based on unidirectional prediction error expansion
LI Qi, YAN Bin, CHEN Na, YANG Hongmei
2019, 39(2): 483-487. DOI: 10.11772/j.issn.1001-9081.2018071471
For the application of reversible watermarking in three-Dimensional (3D) medical images, a 3D medical image reversible watermarking algorithm based on unidirectional prediction error expansion was proposed. Firstly, intermediate pixels were predicted according to the 3D gradient changes between them and their neighborhood pixels to obtain the prediction errors. Then, considering the features of 3D medical images generated by magnetic resonance imaging, external information was embedded into the 3D medical image by combining unidirectional histogram shifting with prediction error expansion. Finally, the pixels were re-predicted to extract the external information and restore the original 3D image. Experimental results on MR-head and MR-chest data show that, compared with two-dimensional (2D) gradient-based prediction, the mean absolute deviation of the prediction error produced by 3D gradient-based prediction is reduced by 1.09 and 1.40 respectively, and the maximal embedding capacity per pixel is increased by 0.0456 and 0.1291 bits respectively. The proposed algorithm predicts pixels more accurately and embeds more external information, so it is applicable to 3D medical image tampering detection and privacy protection for patients.
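A minimal sketch of classic prediction error expansion for a single pixel (a textbook form with a toy threshold; the paper's 3D-gradient predictor and unidirectional shifting are not reproduced here):

    def pee_embed(pixel, predicted, bit, threshold=4):
        """Embed one bit by expanding a small prediction error; shift larger errors."""
        e = pixel - predicted
        if abs(e) < threshold:
            e = 2 * e + bit                 # expandable error: carries the payload bit
        elif e >= threshold:
            e = e + threshold               # shift to keep ranges disjoint (reversible)
        else:
            e = e - threshold
        return predicted + e

    marked = pee_embed(pixel=120, predicted=118, bit=1)
    print(marked)                           # 118 + (2*2 + 1) = 123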
Business data security of system wide information management based on content mining
MA Lan, WANG Jingjie, CHEN Huan
2019, 39(2): 488-493. DOI: 10.11772/j.issn.1001-9081.2018071449
Considering the data security problems of service sharing in SWIM (System Wide Information Management), the risks in SWIM business processes were analyzed, and a malicious data filtering method based on the Latent Dirichlet Allocation (LDA) topic model and content mining was proposed. Firstly, big data analysis was performed on four kinds of SWIM business data, then the LDA model was used for feature extraction of the business data to realize content mining. Finally, the pattern string was searched for in the main string using the KMP (Knuth-Morris-Pratt) matching algorithm to detect SWIM business data containing malicious keywords. The proposed method was tested in the Linux kernel. The experimental results show that the proposed method can effectively mine the content of SWIM business data and has better detection performance than other methods.
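A minimal sketch of KMP string matching as used in the keyword-detection step (a generic textbook implementation; the sample strings are illustrative):

    def kmp_search(text, pattern):
        """Return the first index of pattern in text, or -1, in O(len(text)+len(pattern))."""
        # failure function: longest proper prefix of pattern that is also a suffix
        fail = [0] * len(pattern)
        k = 0
        for i in range(1, len(pattern)):
            while k and pattern[i] != pattern[k]:
                k = fail[k - 1]
            if pattern[i] == pattern[k]:
                k += 1
            fail[i] = k
        # scan the text, reusing matched prefixes instead of backtracking
        k = 0
        for i, ch in enumerate(text):
            while k and ch != pattern[k]:
                k = fail[k - 1]
            if ch == pattern[k]:
                k += 1
            if k == len(pattern):
                return i - k + 1
        return -1

    print(kmp_search("flight plan data with malicious payload", "malicious"))  # 22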
Online task and resource scheduling designing for container cloud queue based on Lyapunov optimization method
LI Lei, XUE Yang, LYU Nianling, FENG Min
2019, 39(2): 494-500. DOI: 10.11772/j.issn.1001-9081.2018061243
To improve resource utilization with Quality of Service (QoS) guarantees, a task and resource scheduling method based on Lyapunov optimization for container cloud queues was proposed. Firstly, based on the queueing model of cloud computing, a Lyapunov function was used to analyze the variation of the task queue length. Secondly, a minimum energy consumption objective function was constructed under the task QoS guarantee. Finally, the Lyapunov optimization method was used to solve the minimum cost objective function and obtain an optimized scheduling policy for online tasks and container resources, improving resource utilization while guaranteeing QoS. CloudSim simulation results show that the proposed task and resource scheduling policy achieves high resource utilization under the QoS guarantee, realizing online task and resource scheduling optimization for container clouds and providing a necessary reference for queueing-model-based overall optimization of cloud computing tasks and resources.
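For reference, Lyapunov optimization of this kind typically minimizes a drift-plus-penalty bound over queue backlogs Q_i(t) (a standard textbook formulation, not the paper's exact notation), where P(t) is the energy cost at slot t and V trades off energy against backlog (delay):

    L(\Theta(t)) = \frac{1}{2}\sum_i Q_i(t)^2, \qquad
    \Delta(\Theta(t)) = \mathbb{E}\left[\, L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t) \,\right]

    \min \;\; \Delta(\Theta(t)) + V\,\mathbb{E}\left[\, P(t) \mid \Theta(t) \,\right]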
Scheduling strategy of cloud robots based on parallel reinforcement learning
SHA Zongxuan, XUE Fei, ZHU Jie
2019, 39(2): 501-508. DOI: 10.11772/j.issn.1001-9081.2018061406
In order to solve the problem of slow convergence of reinforcement learning tasks with large state spaces, a priority-based parallel reinforcement learning task scheduling strategy was proposed. Firstly, the convergence of Q-learning in the asynchronous parallel computing mode was proved. Secondly, complex problems were divided according to state spaces, sub-problems and computing nodes were matched at the scheduling center, and each computing node completed the reinforcement learning task of its sub-problem and gave feedback to the center, realizing parallel reinforcement learning in the computer cluster. Finally, the experimental environment was built based on CloudSim, parameters such as the optimal step length, discount rate and sub-problem size were solved, and the performance of the proposed strategy with different numbers of computing nodes was verified by solving practical problems. With 64 computing nodes, compared with round-robin scheduling and random scheduling, the efficiency of the proposed strategy was improved by 61% and 86% respectively. Experimental results show that the proposed scheduling strategy can effectively speed up convergence under parallel computing, taking about 1.6×10^5 s to obtain the optimal strategy for a control problem with 1 million states.
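A minimal sketch of the tabular Q-learning update that each computing node would run on its sub-problem (a generic update rule with illustrative sizes; the scheduling layer is not shown):

    import numpy as np

    n_states, n_actions = 100, 4
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.9               # step length and discount rate

    def q_update(s, a, r, s_next):
        """One Q-learning step: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
        td_target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (td_target - Q[s, a])

    q_update(s=3, a=1, r=1.0, s_next=4)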
Three-length-path structure connectivity and substructure connectivity of hypercube networks
YANG Yuxing, LI Xiaohui
2019, 39(2): 509-512. DOI: 10.11772/j.issn.1001-9081.2018061402
In order to evaluate the reliability and fault tolerance of multi-processor systems that take hypercubes as underlying networks, and considering that structural faults often occur when a system is invaded by computer viruses, the three-length-path structure connectivity and substructure connectivity of the n-cube network were investigated. Firstly, by using a three-length-path structure-cut of the n-cube network, an upper bound on the three-length-path structure connectivity of the network was obtained. Secondly, by using an equivalent transformation or a reductive transformation of the three-length-path substructure-set of the n-cube network, a lower bound on the three-length-path substructure connectivity of the network was obtained. Finally, combining the property that the three-length-path structure connectivity of a network is not less than its three-length-path substructure connectivity, it was proved that both the three-length-path structure connectivity and substructure connectivity of the n-cube network are half of n. The results show that to destroy the enemy's multi-processor system taking n-cubes as underlying networks under the three-length-path structure fault model, at least n/2 three-length-path structures or substructures of the system should be attacked.
Energy-saving method for wireless body area network based on synchronous prediction with penalty error matrix
ZHENG Zhuoran, ZHENG Xiangwei, TIAN Jie
2019, 39(2): 513-517. DOI: 10.11772/j.issn.1001-9081.2018071478
To solve the problem that traditional Wireless Body Area Network (WBAN) prediction models have low prediction accuracy, large computational complexity and high energy consumption, an adaptive cubic exponential smoothing algorithm based on a penalty error matrix was proposed. Firstly, a lightweight prediction model was established between the sensing node and the routing node. Secondly, blanket search was used to optimize the parameters of the prediction model. Finally, the penalty error matrix was used to further refine the parameters of the prediction model. The experimental results show that, compared with the ZigBee protocol, the proposed method saves about 12% energy over a range of 1000 time slots, and compared with the blanket search method, the prediction accuracy is improved by 3.306% by using the penalty error matrix. The proposed algorithm can effectively reduce the computational complexity and further reduce the energy consumption of WBAN.
Routing algorithm based on node cognitive interaction in Internet of vehicles environment
FAN Na, ZHU Guangyuan, KANG Jun, TANG Lei, ZHU Yishui, WANG Luyang, DUAN Jiaxin
2019, 39(2): 518-522. DOI: 10.11772/j.issn.1001-9081.2018061256
In order to solve problems such as low transmission efficiency and high network resource overhead in the Internet of Vehicles (IoV) environment, a new routing algorithm based on node cognitive interaction, suitable for urban traffic environments, was proposed. Firstly, based on trust theory, the concept of cognitive interaction degree was proposed. Then, based on this, the vehicle nodes in the IoV were classified and given different initial values of cognitive interaction degree. Meanwhile, influence factors such as interaction time, interaction frequency, physical distance, hops between nodes and the Time-To-Live of the message were introduced, and a cognitive interaction evaluation model of vehicle nodes was constructed. The cognitive interaction degrees of vehicle nodes were calculated and updated using the proposed model, and after comparison between nodes, a neighbor node with a higher cognitive interaction degree than the others was selected as the relay node to forward messages. Simulation results show that, compared with the Epidemic and Prophet routing algorithms, the proposed algorithm effectively increases the message delivery rate and reduces message delivery delay, while significantly reducing network resource overhead and helping to improve the quality of message transmission in the IoV environment.
Two-dimensional parameter estimation of near-field sources based on iterative adaptive approach
WANG Bo, LIU Deliang
2019, 39(2): 523-527. DOI: 10.11772/j.issn.1001-9081.2018061417
A Near-Field Iterative Adaptive Approach (NF-IAA) was proposed for the joint estimation of the Direction Of Arrival (DOA) and range of near-field sources. Firstly, all possible source locations in the near-field region were represented by dividing two-dimensional grids, each location being considered to have a potential incident source mapping to the array, which defines the output data model of the array. Then, through loop iterations, the signal covariance matrix was constructed using the previous spectral estimation results, and the inverse of the covariance matrix was used as the weighting matrix to estimate the energy of the potential source corresponding to each location. Finally, the three-dimensional energy spectrum was plotted; since only the energy of a really existing source is nonzero, the angles and distances corresponding to the peaks are the two-dimensional location parameters of the real sources. Simulation results show that the DOA resolution probability of the proposed NF-IAA reaches 90% with 10 snapshots, while that of the Two-Dimensional MUltiple SIgnal Classification (2D-MUSIC) algorithm is only 40%. When the number of snapshots is reduced to 2, 2D-MUSIC fails, but NF-IAA can still distinguish 3 incident sources and accurately estimate their two-dimensional location parameters. As the number of snapshots and the Signal-to-Noise Ratio (SNR) increase, NF-IAA always performs better than 2D-MUSIC. The results show that NF-IAA can estimate the two-dimensional location parameters of near-field sources with high precision and high resolution even when the number of snapshots is low.
Software crowdsourcing worker selection mechanism based on active time grouping
ZHOU Zhuang, YU Dunhui, ZHANG Wanshan, WANG Yi
2019, 39(2): 528-533. DOI: 10.11772/j.issn.1001-9081.2018061309
Concerning the problem that existing software crowdsourcing worker selection mechanisms do not consider collaboration among workers, a crowdsourcing worker selection mechanism with a bidding model based on active-time grouping was proposed. Firstly, crowdsourcing workers were divided into multiple collaborative working groups based on active time. Then, the weights of the working groups were calculated according to the development capabilities of the workers in each group and collaboration factors. Finally, the collaborative working group with the highest weight was selected as the optimal working group, and the most suitable worker in this group was selected for each task module according to the complexity of the module. The experimental results show that the proposed mechanism yields a gap of only 0.57% in average worker ability compared with the ability-only allocation method, while reducing project risk by an average of 32% by ensuring cooperation among workers, so it can effectively guide worker selection for multi-person collaborative crowdsourcing software tasks.
Test suite reduction method based on weak mutation criterion
WANG Shuyan, YUAN Jiajuan, SUN Jiaze
2019, 39(2): 534-539. DOI: 10.11772/j.issn.1001-9081.2018071467
In view of the problem that a large number of test suites increase the test cost in regression testing, a test suite reduction method based on the weak mutation criterion was proposed. Firstly, the relation matrix between test suites and mutation branches was obtained based on the weak mutation criterion. Then, four kinds of invalid test requirements and subset test suites were reduced repeatedly. Finally, the current optimal test suite was selected using the artificial fish swarm algorithm, and the reduction and test suite selection operations were performed alternately until all test requirements were covered. Compared with the Greedy algorithm and the HGS (Harrold-Gupta-Soffa) algorithm on six classical programs, when using the weak mutation criterion with the mutation score unchanged or only slightly changed, the reduction rate was improved by 73.4% and 8.2% respectively, and the time consumption was decreased by 25.3% and 56.1% respectively. The experimental results show that the proposed method can effectively reduce test suites and save test cost in regression testing.
Improved panchromatic sharpening algorithm based on sparse representation
WU Zongjun, WU Wei, YANG Xiaomin, LIU Kai, Gwanggil Jeon, YUAN Hao
2019, 39(2): 540-545. DOI: 10.11772/j.issn.1001-9081.2018061374
Abstract | PDF (1149KB) | References | Related Articles | Metrics
In order to combine more effectively the detail information of a high-resolution PANchromatic (PAN) image with the spectral information of a low-resolution MultiSpectral (MS) image, an improved panchromatic sharpening algorithm based on sparse representation was proposed. Firstly, the intensity channel of the MS image was down-sampled and then up-sampled to obtain its low-frequency components. Secondly, the low-frequency components were subtracted from the MS intensity channel to obtain its high-frequency components, and random sampling was performed in the acquired high- and low-frequency components to construct a dictionary. Thirdly, the PAN image was decomposed with the constructed overcomplete dictionary to obtain its high-frequency components. Finally, the high-frequency components of the PAN image were injected into the MS image to obtain the desired high-resolution MS image. Extensive experiments show that the proposed algorithm subjectively preserves the spectral information while injecting a large amount of spatial detail. Compared with the component substitution method, the multiresolution analysis method and a sparse representation method, the high-resolution MS image reconstructed by the proposed algorithm is clearer, and its correlation coefficient and other objective evaluation indicators are also better.
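The frequency split at the heart of this pipeline fits in a few lines. The sketch below shows only the down/up-sampling split and the detail injection, omitting the dictionary construction and sparse decomposition; the zoom factor, unit injection gain and random test images are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import zoom

def freq_split(band, factor=4):
    # low frequencies survive a down- then up-sampling round trip
    low = zoom(zoom(band, 1 / factor, order=1), factor, order=1)
    low = low[:band.shape[0], :band.shape[1]]        # guard against rounding
    return low, band - low                           # (low, high) components

rng = np.random.default_rng(0)
intensity = rng.random((64, 64))          # stand-in MS intensity channel
pan = rng.random((64, 64))                # stand-in PAN image
ms_low, _ = freq_split(intensity)
_, pan_high = freq_split(pan)
sharpened = ms_low + pan_high             # inject PAN details (gain = 1)
```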
Kernelized correlation filtering method based on fast discriminative scale estimation
XIONG Xiaoxuan, WANG Wenwei
2019, 39(2): 546-550. DOI: 10.11772/j.issn.1001-9081.2018061360
Abstract | PDF (881KB) | References | Related Articles | Metrics
Focusing on the issue that the Kernelized Correlation Filter (KCF) cannot respond to target scale changes, a KCF target tracking algorithm based on fast discriminative scale estimation was proposed. Firstly, the target position was estimated by KCF. Then, a fast discriminative scale filter was learned online from a set of target samples at different scales. Finally, an accurate estimate of the target size was obtained by applying the learned scale filter at the target position. Experiments were conducted on the Visual Tracker Benchmark video sequences, with comparisons against the KCF algorithm based on Discriminative Scale Space Tracking (DSST) and the traditional KCF algorithm. The experimental results show that when the target scale changes, the tracking accuracy of the proposed algorithm is 2.2% to 10.8% higher than that of the two contrast algorithms, and its average frame rate is 19.1% to 68.5% higher than that of the DSST-based KCF algorithm. The proposed algorithm adapts well to target scale changes with high real-time performance.
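A compact sketch of the scale-search step follows: patches are sampled around the position KCF reports, at a set of candidate scales, and the scale whose patch best matches a stored target template wins. Plain normalized cross-correlation scoring and nearest-neighbour resizing are assumed stand-ins for the learned discriminative scale filter.

```python
import numpy as np

def best_scale(frame, center, base_size, template, scales=(0.9, 1.0, 1.1)):
    cy, cx = center
    th, tw = template.shape
    responses = []
    for s in scales:
        h, w = int(base_size[0] * s), int(base_size[1] * s)
        y0, x0 = max(cy - h // 2, 0), max(cx - w // 2, 0)
        patch = frame[y0:y0 + h, x0:x0 + w]
        # resize the patch onto the template grid by nearest-neighbour indexing
        ys = np.linspace(0, patch.shape[0] - 1, th).astype(int)
        xs = np.linspace(0, patch.shape[1] - 1, tw).astype(int)
        resized = patch[np.ix_(ys, xs)]
        # normalized cross-correlation against the stored template
        a, b = resized - resized.mean(), template - template.mean()
        responses.append((a * b).sum() /
                         (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return scales[int(np.argmax(responses))]
```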
Image denoising algorithm based on grouped dictionaries and variational model
TAO Yongpeng, JING Yu, XU Cong
2019, 39(2): 551-555. DOI: 10.11772/j.issn.1001-9081.2018061198
Abstract | PDF (838KB) | References | Related Articles | Metrics
Aiming at the problem of additive Gaussian noise removal, an improved image restoration algorithm based on the existing K-means Singular Value Decomposition (K-SVD) method was proposed by integrating dictionary learning and a variational model. Firstly, according to geometric and photometric information, image blocks were clustered into different groups, and these groups were classified into different types according to texture and edge categories; then an adaptive dictionary was trained according to the type of each group, with the atom size determined by the noise level. Secondly, a variational model was constructed by fusing the sparse representation prior obtained from the dictionary with the non-local similarity prior of the image itself. Finally, the denoised image was obtained by solving the variational model. The experimental results show that compared with similar denoising algorithms, the proposed method has better visual effect at high noise levels, alleviating poor accuracy, serious texture loss and visual artifacts; the structural similarity index is also significantly improved, and the Peak Signal-to-Noise Ratio (PSNR) is increased by more than 10% on average.
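The grouping step can be illustrated with a small sketch: image patches are clustered by simple geometric/photometric statistics (mean intensity and gradient energy) so each group can later train its own dictionary. The two-feature k-means below is an assumed simplification of the paper's grouping and type classification.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def patch_groups(image, patch=8, k=3, seed=0):
    h, w = image.shape
    feats, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            gy, gx = np.gradient(p.astype(float))
            # photometric feature (mean) plus geometric feature (gradient energy)
            feats.append([p.mean(), np.hypot(gy, gx).mean()])
            coords.append((y, x))
    _, labels = kmeans2(np.asarray(feats), k, seed=seed, minit='points')
    return coords, labels   # one group label per patch, e.g. smooth/edge/texture
```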
Video shadow removal method using region matching guided by illumination transfer
LIAO Bin, WU Wen
2019, 39(2): 556-563. DOI: 10.11772/j.issn.1001-9081.2018061227
Abstract | PDF (1465KB) | References | Related Articles | Metrics
In order to solve the spatio-temporal incoherence problem of traditional shadow removal methods on videos captured by freely moving cameras, a shadow detection and removal approach using region matching guided by illumination transfer was proposed. Firstly, the input video was segmented by the Mean Shift method based on Scale Invariant Feature Transform (SIFT), and video shadows were detected by a Support Vector Machine (SVM) classifier. Secondly, the input video was decomposed into overlapping 2D patches and a Markov Random Field (MRF) was set up for the video, so that the corresponding lit patch for every shadow patch was found via region matching guided by optical flow. Finally, to obtain spatio-temporally coherent results, each shadow patch was processed with its matched lit patch by a local illumination transfer operation and global shadow removal. The experimental results show that the proposed algorithm achieves higher accuracy and lower error than traditional illumination-transfer-based methods: the comprehensive evaluation metric is improved by about 6.23% and the Root Mean Square Error (RMSE) is reduced by about 30.12%. It obtains shadow removal results with better spatio-temporal coherence in much less time.
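As a small sketch of the local illumination transfer step: once a shadow patch is matched to a lit patch, the shadow pixels can be rescaled so their mean and standard deviation match the lit patch's statistics. This moment-matching form is an assumed simplification of the paper's operator, shown only to make the idea concrete.

```python
import numpy as np

def illumination_transfer(shadow_patch, lit_patch, eps=1e-6):
    # shift and rescale shadow statistics toward those of the matched lit patch
    s, l = shadow_patch.astype(float), lit_patch.astype(float)
    return (s - s.mean()) / (s.std() + eps) * l.std() + l.mean()
```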
Construction of non-separable Laplacian pyramid and its application in multi-spectral image fusion
LIU Bin, XIN Jianan, CHEN Wenjiang, XIAO Huiyong
2019, 39(2): 564-570. DOI: 10.11772/j.issn.1001-9081.2018061346
Abstract | PDF (1259KB) | References | Related Articles | Metrics
In order to solve the problem that the classical Laplacian Pyramid (LP) transform in image fusion seriously loses high-frequency information of the fused image and lacks translation invariance, a new construction method of non-sampled non-separable LP was proposed and applied to multi-spectral image fusion, exploiting the translation invariance and accurate detail description of non-separable wavelets. Firstly, a six-channel non-separable low-pass filter was constructed and used to build non-sampled non-separable wavelet pyramids for the multi-spectral and panchromatic images, from which the non-sampled non-separable LP decomposition was obtained. Then, different fusion rules were applied to different decomposition layers. Finally, the fused image was obtained by the non-separable LP reconstruction algorithm. The experimental results show that compared with the algorithms based on Discrete Wavelet Transformation (DWT), Contourlet Transformation (CT) and Midway Histogram Equalization (MHE), the spatial correlation coefficient of the proposed method is increased by 1.84%, 1.56% and 11.06% respectively, and the relative global dimensional synthesis error is reduced by 49.26%, 48.15% and 89.19% respectively. The proposed method can effectively improve spatial resolution while retaining good spectral information, and well preserves the edge and structure information of the image.
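A minimal sketch of a sampling-free Laplacian-style pyramid follows: each level stores the detail left after low-pass filtering, and reconstruction simply sums the details back onto the coarsest layer, so no information is lost to decimation. The separable Gaussian blur stands in for the paper's six-channel non-separable filter bank, which is precisely the part the proposed method replaces.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_lp(image, levels=3):
    pyr, current = [], image.astype(float)
    for i in range(levels):
        low = gaussian_filter(current, sigma=2 ** i)  # no down-sampling
        pyr.append(current - low)                     # detail (high-pass) layer
        current = low
    pyr.append(current)                               # coarsest approximation
    return pyr

def reconstruct(pyr):
    return sum(pyr)      # exact inverse of the subtraction scheme above

img = np.random.default_rng(1).random((32, 32))
assert np.allclose(reconstruct(build_lp(img)), img)   # perfect reconstruction
```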
Automatic stitching and restoration algorithm for paper fragments based on angle and edge features
SHI Baozhu, LI Mei'an
2019, 39(2): 571-576. DOI: 10.11772/j.issn.1001-9081.2018061369
Abstract | PDF (934KB) | References | Related Articles | Metrics
In order to solve the problems of excessive trial-and-error, slow splicing speed, and low restoration accuracy and completeness in manually restored paper cultural relics, an automatic splicing algorithm based on the angle and edge length of fragments was proposed. Firstly, the fragment images were pre-processed and coarsely matched according to their angle values, finding the fragment images with the same angle value. Then, on the basis of coarse matching, fine matching was performed using the edge lengths adjacent to those angles to reduce overlap, yielding the basic matching results of the fragment images. Finally, a concave-convex function was used to match fragment images of opposite orientation, and an oscillating function was used to fill the gaps of the final matched images to obtain complete splicing results. Theoretical analysis and splicing simulation results show that compared with automatic splicing algorithms based on feature points, approximate polygon fitting and angle sequence matching, the splicing accuracy, splicing completeness and splicing time of the proposed algorithm are improved by at least 12, 11 and 10 percentage points respectively. The proposed algorithm reduces cumbersome image computation and accurately corrects fragment matching results, enabling efficient and highly accurate matching of irregular fragments in actual relic restoration.
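A toy sketch of the coarse-then-fine matching follows: fragments are first bucketed by corner angle, then candidates in the same bucket are ranked by how closely their adjacent edge lengths agree. The fragment tuples, tolerances and tie-breaking rule are all illustrative assumptions rather than the paper's exact procedure.

```python
fragments = [  # (fragment id, corner angle in degrees, adjacent edge lengths)
    ("A", 92.7, (40.0, 25.0)), ("B", 92.8, (40.2, 25.1)),
    ("C", 92.5, (18.0, 33.0)), ("D", 93.1, (40.0, 25.0)),
]

def coarse_match(frag, pool, angle_tol=0.5):
    # keep fragments whose corner angle is (nearly) the same as frag's
    return [f for f in pool
            if f[0] != frag[0] and abs(frag[1] - f[1]) <= angle_tol]

def fine_match(frag, candidates):
    # prefer the candidate whose adjacent edge lengths line up best
    return min(candidates,
               key=lambda f: sum(abs(a - b) for a, b in zip(frag[2], f[2])))

candidates = coarse_match(fragments[0], fragments)  # B, C and D pass
print(fine_match(fragments[0], candidates)[0])      # 'D': identical edge lengths
```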
Image super-resolution reconstruction based on image patch classification
DU Kaimin, KANG Baosheng
2019, 39(2): 577-581. DOI: 10.11772/j.issn.1001-9081.2018061368
Abstract | PDF (920KB) | References | Related Articles | Metrics
Concerning the poor quality of existing image super-resolution reconstruction caused by using a single dictionary, a new single-image super-resolution algorithm based on image patch classification and cartoon-texture decomposition was proposed. Firstly, the image was divided into patches that were classified into smooth patches, edge patches and texture patches, and each texture patch was further decomposed into a cartoon part and a texture part by the Morphological Component Analysis (MCA) algorithm. Secondly, the edge patches and the cartoon and texture parts of the texture patches were used separately to train low-resolution and high-resolution dictionary pairs. Finally, the sparse coefficients were calculated, and the image patches were reconstructed using the corresponding high-resolution dictionary and sparse coefficients. In comparison experiments with the Sparse Coding Super-Resolution (SCSR) algorithm and the Single Image Super-Resolution (SISR) algorithm, the Peak Signal-to-Noise Ratio (PSNR) of the proposed algorithm was increased by 0.26 dB and 0.14 dB respectively. The experimental results show that the proposed algorithm recovers more texture details with better reconstruction effect.
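A short sketch of the patch classification step follows: patches with low variance are treated as smooth, patches with a strongly dominant gradient orientation as edges, and the rest as texture, so that each class can use its own dictionary pair. The structure-tensor test and both thresholds are illustrative assumptions.

```python
import numpy as np

def classify_patch(p, var_thresh=1e-3, edge_thresh=0.6):
    if p.var() < var_thresh:
        return "smooth"
    gy, gx = np.gradient(p.astype(float))
    # dominant-orientation strength from the 2x2 structure tensor eigenvalues
    j11, j22, j12 = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    tr, det = j11 + j22, j11 * j22 - j12 ** 2
    lam1 = tr / 2 + np.sqrt(max(tr ** 2 / 4 - det, 0.0))
    lam2 = tr - lam1
    anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-12)   # 1 = pure edge
    return "edge" if anisotropy > edge_thresh else "texture"
```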
Non-rigid multi-modal brain image registration by using improved Zernike moment based local descriptor and graph cuts discrete optimization
WANG Lifang, WANG Yanli, LIN Suzhen, QIN Pinle, GAO Yuan
2019, 39(2): 582-588. DOI: 10.11772/j.issn.1001-9081.2018061423
Abstract | PDF (1232KB) | References | Related Articles | Metrics
When noise and intensity distortion exist in brain images, methods based on structural information cannot accurately extract image intensity information, edges and texture features at the same time, and the computational complexity of continuous optimization is relatively high. To solve these problems, a non-rigid multi-modal brain image registration method based on an Improved Zernike Moment based Local Descriptor (IZMLD) and Graph Cuts (GC) discrete optimization was proposed. Firstly, the image registration problem was regarded as a discrete labeling problem of a Markov Random Field (MRF), and an energy function was constructed from two terms: the pixel similarity and the smoothness of the displacement vector field. Secondly, a smoothness constraint based on the first derivative of the deformation vector field was used to penalize displacement labels with sharp changes between adjacent pixels, and a similarity metric based on IZMLD was used as the data term to represent pixel similarity. Thirdly, the Zernike moments of image patches were used to compute the self-similarity of the reference and floating images in a local neighborhood and to construct an effective local descriptor, with the Sum of Absolute Differences (SAD) between descriptors taken as the similarity metric. Finally, the whole energy function was discretized and minimized by an extended GC optimization algorithm. The experimental results show that compared with registration methods based on the Sum of Squared Differences on Entropy images (ESSD), the Modality Independent Neighborhood Descriptor (MIND) and the Stochastic Second-Order Entropy Image (SSOEI), the mean target registration error of the proposed method was decreased by 18.78%, 10.26% and 8.89% respectively, and its registration time was about 20 s shorter than that of the continuous optimization algorithm. The proposed method achieves efficient and accurate registration of images with noise and intensity distortion.
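The shape of the data term can be sketched schematically: each pixel gets a local descriptor computed from patch moments, and the cost of a candidate displacement is the SAD between the reference and floating descriptors. Plain low-order raw moments stand in here for the improved Zernike moments of the paper; the patch size and feature set are assumptions.

```python
import numpy as np

def patch_descriptor(image, y, x, half=3):
    p = image[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # low-order raw moments as a tiny stand-in descriptor
    return np.array([p.mean(), (p * ys).mean(), (p * xs).mean(),
                     (p * ys * xs).mean()])

def sad_data_cost(ref, flo, y, x, dy, dx):
    # MRF data cost of assigning displacement label (dy, dx) at pixel (y, x)
    d_ref = patch_descriptor(ref, y, x)
    d_flo = patch_descriptor(flo, y + dy, x + dx)
    return np.abs(d_ref - d_flo).sum()
```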
Modeling and analysis of fault tolerant service composition for intelligent logistics systems of Internet of Things
GUO Rongzuo, FENG Chaosheng, QIN Zhiguang
2019, 39(2): 589-597. DOI: 10.11772/j.issn.1001-9081.2018061320
Abstract | PDF (1487KB) | References | Related Articles | Metrics
In order to solve the problem that service composition in the logistics field has poor fault tolerance and unreliable services, a fault-tolerant logistics service composition model for the intelligent logistics system of the Internet of Things (IoT), based on π-net, was built. Firstly, after a brief introduction to the IoT intelligent logistics system, a fault-tolerant service composition framework for the system was provided. Then, a π-net-based model of fault-tolerant logistics service composition was built, and the fault-tolerance correctness and fitting degree of the model were analyzed. Finally, the service reliability and fault-tolerant reliability of the model were tested, and comparisons with Petri-net, QoS (Quality of Service) dynamic prediction, fuzzy Kano model and modified particle swarm optimization methods were carried out in terms of service composition execution time, user satisfaction, reliability and optimality. The results show that the proposed model has high service reliability and fault-tolerant reliability, and holds certain advantages in service composition execution time, user satisfaction, reliability and optimality.
Optimization of intercity train operation plan considering regional coordination
LIN Li, MENG Xuelei, SONG Zhongzhong
2019, 39(2): 598-603. DOI: 10.11772/j.issn.1001-9081.2018061337
Abstract | PDF (895KB) | References | Related Articles | Metrics
Concerning the problem that intercity train operation plans fail to match urban rail transit capacity effectively, an optimization method for intercity train operation plans considering regional coordination was proposed. Firstly, minimizing the passenger travel cost and maximizing the benefit of the railway department were taken as the optimization objectives, with the transport capacity of intercity trains, the traffic demand between origins and destinations, and the carrying capacity as the constraints of the model. Secondly, considering the limit on the matching degree of transportation capacity, a multi-objective nonlinear programming model of intercity train operation planning with regional coordination was constructed, and an improved simulated annealing algorithm was designed to solve it. Finally, the Guangzhou-Shenzhen intercity railway was taken as an example for two pairs of comparative analyses. The experimental results show that the train operation plan considering regional coordination reduces the total travel cost of passengers by 4.06%, increases the railway department's revenue by 9.58%, and decreases the total cost of passengers and the railway system by 23.27%. Compared with the genetic algorithm, the improved simulated annealing algorithm is better in solution quality and convergence speed. The proposed model and algorithm give full consideration to the interests of both passengers and the railway department, providing an effective solution for the optimization of intercity train operation plans.
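For readers unfamiliar with the solver, a generic simulated-annealing loop looks as follows: a neighbouring plan is sampled, improving plans are always kept, and worse ones are accepted with a temperature-controlled probability. The one-dimensional toy cost stands in for the paper's multi-objective passenger-plus-railway cost, and the cooling schedule is an assumption.

```python
import math, random

def anneal(cost, neighbour, x0, t0=100.0, cooling=0.95, steps=500):
    x, best, t = x0, x0, t0
    for _ in range(steps):
        cand = neighbour(x)
        delta = cost(cand) - cost(x)
        # accept improving moves always, worsening moves with prob exp(-delta/t)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        if cost(x) < cost(best):
            best = x
        t *= cooling                      # geometric cooling schedule
    return best

random.seed(0)
cost = lambda x: (x - 3.0) ** 2 + 1.0     # toy cost with minimum at x = 3
sol = anneal(cost, lambda x: x + random.uniform(-0.5, 0.5), x0=0.0)
print(round(sol, 2))                      # converges close to 3.0
```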
Two-echelon closed-loop logistics network location-routing optimization based on customer clustering and product recovery
LIANG Xi, Kevin Assogba
2019, 39(2): 604-610. DOI: 10.11772/j.issn.1001-9081.2018061318
Abstract | PDF (1191KB) | References | Related Articles | Metrics
With regard to unreasonable waste collection and the considerable environmental pollution caused by logistics activities, a two-echelon closed-loop logistics network location-routing optimization model based on customer clustering and product recovery was proposed. Firstly, considering the dynamic nature of actual logistics networks, customer demand and recovery rate were assumed to be uncertain, and a location-routing optimization model minimizing operating cost and environmental impact was established. Secondly, based on an improved multi-objective evolutionary algorithm, a solution algorithm for the model was proposed. Finally, the performance of the proposed algorithm was analyzed, and the model and algorithm were applied to the location-routing problem of a company in Chongqing. The analyses show that the proposed model and algorithm can ease the final decision-making difficulty and improve the operational efficiency of the logistics system, while the obtained optimization scheme reduces both the total cost and the environmental impact.
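A brief sketch of the customer-clustering preprocessing step follows: customers are grouped by location with k-means so that each cluster can be served by one second-echelon route before facility locations and routes are optimized. The coordinates, cluster count and seed are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(42)
customers = rng.random((30, 2)) * 100      # 30 customers on a 100x100 grid
centroids, labels = kmeans2(customers, 4, seed=42, minit='points')
for c in range(4):
    members = np.flatnonzero(labels == c)
    # each centroid is a natural candidate site for a transfer facility
    print(f"cluster {c}: {len(members)} customers, depot candidate at "
          f"({centroids[c][0]:.1f}, {centroids[c][1]:.1f})")
```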
Multi-type liner scheduling considering tidal effects
ZHENG Hongxing, WANG Quanhui, REN Yaqun
2019, 39(2): 611-617. DOI: 10.11772/j.issn.1001-9081.2018071470
Abstract | PDF (1008KB) | References | Related Articles | Metrics
The multi-type liner scheduling problem faced by liner enterprises, caused by fluctuations of cargo demand and tide when the liner schedule is announced in advance, was studied. Firstly, the structure of the near-sea transportation routes of a liner enterprise was systematically analyzed. Then, taking into account real situations such as large ships needing to ride the tide in and out of ports, ship renting being permitted under appropriate conditions, and the limits of the liner schedule, a nonlinear programming model of multi-type liner scheduling was built with the objective of minimizing the total transportation cost. Finally, in view of the characteristics of the model, an Improved Genetic Algorithm (IGA) embedded with gene repair was designed to solve the problem. Experimental results show that the proposed liner scheduling scheme improves the ship utilization ratio by 25%-35% compared with the traditional empirical scheme; the CPU time of IGA is reduced by 32% on average compared with CPLEX on medium-scale instances, and the transportation cost of IGA is reduced by 12% on average compared with the ant colony algorithm on medium- and large-scale instances. These results demonstrate the validity of the proposed model and algorithm, which can provide a reference for liner enterprises in liner scheduling.
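The gene-repair idea can be sketched in miniature: after crossover or mutation, any voyage assigned a ship type too small for its cargo is repaired by swapping in the smallest feasible ship type. The chromosome encoding, ship capacities and cargo volumes are illustrative assumptions, not the paper's instance data.

```python
capacities = {0: 800, 1: 1200, 2: 2000}     # assumed ship type -> capacity (TEU)
cargo = [700, 1500, 900, 1900]              # assumed demand per voyage

def repair(chromosome):
    # chromosome[i] is the ship type assigned to voyage i
    fixed = []
    for gene, demand in zip(chromosome, cargo):
        if capacities[gene] >= demand:
            fixed.append(gene)              # feasible gene kept as-is
        else:
            # smallest ship type that can still carry the demand
            fixed.append(min(t for t, cap in capacities.items()
                             if cap >= demand))
    return fixed

print(repair([0, 0, 1, 1]))   # -> [0, 2, 1, 2]: infeasible genes repaired
```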
Credit card fraud classification based on GAN-AdaBoost-DT imbalanced classification algorithm
MO Zan, GAI Yanrong, FAN Guanlong
2019, 39(2): 618-622. DOI: 10.11772/j.issn.1001-9081.2018061382
Abstract | PDF (771KB) | References | Related Articles | Metrics
Concerning that traditional single classifiers perform poorly on imbalanced data classification, a new binary-class imbalanced data classification algorithm based on Generative Adversarial Nets (GAN) and ensemble learning, namely Generative Adversarial Nets-Adaptive Boosting-Decision Tree (GAN-AdaBoost-DT), was proposed. Firstly, a generative model obtained by GAN training produced minority-class samples to reduce the imbalance ratio. Then, the minority-class samples were brought into the Adaptive Boosting (AdaBoost) learning framework and their weights were adjusted to improve the AdaBoost model and the classification performance of AdaBoost with Decision Tree (DT) as the base classifier. The Area Under the Curve (AUC) was used to evaluate classifier performance on imbalanced classification problems. The experimental results on a credit card fraud data set illustrate that compared with the synthetic minority over-sampling ensemble learning method, the accuracy of the proposed algorithm was increased by 4.5% and its AUC was improved by 6.5%; compared with the modified synthetic minority over-sampling ensemble learning method, the accuracy was increased by 4.9% and the AUC by 5.9%; compared with the random under-sampling ensemble learning method, the accuracy was increased by 4.5% and the AUC by 5.4%. The experimental results on other data sets from UCI and KEEL illustrate that the proposed algorithm can improve the accuracy of imbalanced classification and the overall classifier performance.
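A condensed sketch of the ensemble half of this pipeline follows: minority-class samples (here faked with Gaussian jitter, standing in for a trained GAN generator) are added to lower the imbalance ratio before fitting AdaBoost over decision trees. Replacing `synthesize` with real GAN sampling is the paper's actual contribution; the rest is standard scikit-learn (the `estimator` keyword assumes scikit-learn 1.2+), on synthetic data rather than the credit card set.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (rng.random(1000) < 0.05).astype(int)          # ~5% minority (fraud) class

def synthesize(X_min, n):
    # stand-in for GAN sampling: jittered copies of real minority points
    idx = rng.integers(0, len(X_min), size=n)
    return X_min[idx] + rng.normal(scale=0.1, size=(n, X_min.shape[1]))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
X_min = X_tr[y_tr == 1]
X_syn = synthesize(X_min, n=len(X_tr) - 2 * len(X_min))   # balance the classes
X_bal = np.vstack([X_tr, X_syn])
y_bal = np.concatenate([y_tr, np.ones(len(X_syn), dtype=int)])

clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                         n_estimators=100, random_state=0)
clf.fit(X_bal, y_bal)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```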