Table of Contents
10 December 2020, Volume 40 Issue 12
2020 China Conference on Granular Computing and Knowledge Discovery (CGCKD 2020)
Fast spectral clustering algorithm without eigen-decomposition
LIU Jingshu, WANG Li, LIU Jinglei
2020, 40(12): 3413-3422. DOI: 10.11772/j.issn.1001-9081.2020061040
The traditional spectral clustering algorithm needs too much time to perform eigen-decomposition when the number of samples is very large. In order to solve this problem, a fast spectral clustering algorithm without eigen-decomposition was proposed to reduce the time overhead by multiplicative update iteration. Firstly, the Nyström algorithm was used for random sampling in order to establish the relationship between the sampling matrix and the original matrix. Then, the indicator matrix was updated iteratively based on the principle of multiplicative update iteration. Finally, the correctness and convergence analysis of the designed algorithm were given theoretically. The proposed algorithm was tested on five widely used real datasets and three synthetic datasets. Experimental results on the real datasets show that the average Normalized Mutual Information (NMI) of the proposed algorithm is 0.45, which is 12.5% higher than that of the k-means clustering algorithm; the computing time of the proposed algorithm is 61.73 s, which is 61.13% less than that of the traditional spectral clustering algorithm; and the performance of the proposed algorithm is superior to that of the hierarchical clustering algorithm, which verifies the effectiveness of the proposed algorithm.
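For illustration only, the Nyström sampling step described above can be sketched in Python as follows; the landmark count, the RBF affinity and the helper name nystrom_affinity are assumptions of this sketch, and the paper's multiplicative-update iteration of the indicator matrix is not reproduced.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_affinity(X, m=200, gamma=1.0, seed=0):
    """Approximate the n x n affinity matrix of X using m random landmark points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    landmarks = X[idx]
    C = rbf_kernel(X, landmarks, gamma=gamma)   # n x m block between all points and landmarks
    W = C[idx]                                  # m x m block between the landmarks themselves
    # Nystrom extension: K ~= C @ pinv(W) @ C.T, never materialized for large n
    return C, np.linalg.pinv(W)
```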
Improved label propagation algorithm based on random walk
ZHENG Wenping, YUE Xiangdou, YANG Gui
2020, 40(12): 3423-3429. DOI: 10.11772/j.issn.1001-9081.2020061048
Community detection is a useful tool for mining hidden information in social networks, and Label Propagation Algorithm (LPA) is a commonly used community detection algorithm that requires no prior knowledge and runs fast. Aiming at the instability of community detection results caused by the strong randomness of label propagation, an improved Label Propagation Algorithm based on Random Walk (LPARW) was proposed. Firstly, according to a random walk on the network, the importance order of nodes was determined, so as to obtain the update order of nodes. Secondly, the update sequence of nodes was traversed and the similarity between each node and the node before it was calculated; if the two nodes were neighbors and their similarity was greater than the threshold, the preceding node was selected as a seed node. Finally, the labels of the seed nodes were propagated to the rest of the nodes in order to obtain the final community division. The proposed algorithm was comparatively analyzed with some classic label propagation algorithms on 4 labeled networks and 5 unlabeled real networks. Experimental results show that the proposed algorithm is better than the comparison algorithms on classic evaluation indicators such as Normalized Mutual Information (NMI), Adjusted Rand Index (ARI) and modularity. It can be seen that the proposed algorithm has a good community division effect.
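As a rough illustration of the node-ordering idea (not the authors' implementation; LPARW's similarity threshold and seed-node rules are omitted), the sketch below estimates node importance with a random-walk stationary distribution and then propagates labels in that order.

```python
import networkx as nx
from collections import Counter

def order_and_propagate(G, max_iter=20):
    rank = nx.pagerank(G)                      # random-walk importance estimate
    order = sorted(G.nodes, key=rank.get, reverse=True)
    labels = {v: v for v in G.nodes}           # every node starts with its own label
    for _ in range(max_iter):
        changed = False
        for v in order:                        # update nodes in importance order
            votes = Counter(labels[u] for u in G.neighbors(v))
            if votes:
                best = votes.most_common(1)[0][0]
                if best != labels[v]:
                    labels[v], changed = best, True
        if not changed:
            break
    return labels
```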
Semi-supervised learning algorithm of graph based on label metric learning
LYU Yali, MIAO Junzhong, HU Weixin
2020, 40(12): 3430-3436. DOI: 10.11772/j.issn.1001-9081.2020060893
Most graph-based semi-supervised learning methods do not use the known label information, or the label information obtained during label propagation, when measuring the similarity between samples. At the same time, the measurement methods used by these approaches are relatively fixed and cannot effectively measure the similarity between data samples with complex and varied distribution structures. In order to solve these problems, a semi-supervised learning algorithm of graph based on label metric learning was proposed. Firstly, the similarity measurement method of samples was given, and then the similarity matrix was constructed. Secondly, labels were propagated based on the similarity matrix and k samples with low entropy were selected as the newly obtained label information. Finally, the similarity measurement method was updated by fully using all label information, and this process was repeated until all label information was learned. The proposed algorithm not only uses label information to improve the measurement of similarity between samples, but also makes full use of intermediate results to reduce the demand for labeled data in semi-supervised learning. Experimental results on six real datasets show that, compared with three traditional graph-based semi-supervised learning algorithms, the proposed algorithm achieves higher classification accuracy in more than 95% of the cases.
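A minimal sketch of the low-entropy selection step, under the assumption that F holds the soft label distribution produced by one propagation round (the metric-learning update itself is not shown):

```python
import numpy as np

def pick_low_entropy(F, unlabeled_idx, k):
    """Select the k unlabeled samples whose propagated label distributions have the
    lowest entropy, i.e. the most confident pseudo-labels. unlabeled_idx: int array."""
    P = F[unlabeled_idx]
    P = P / P.sum(axis=1, keepdims=True)
    entropy = -(P * np.log(P + 1e-12)).sum(axis=1)
    chosen = np.argsort(entropy)[:k]
    return unlabeled_idx[chosen], P[chosen].argmax(axis=1)
```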
Multi-category active learning algorithm based on multiple clustering algorithms and multivariate linear regression
WANG Min, WU Yubo, MIN Fan
2020, 40(12): 3437-3444. DOI: 10.11772/j.issn.1001-9081.2020060921
Concerning the problem that traditional lithology identification methods have low recognition accuracy and are difficult to integrate with geological experience organically, a multi-category Active Learning algorithm based on multiple Clustering algorithms and multivariate Linear regression algorithm (ALCL) was proposed. Firstly, the category matrix corresponding to each algorithm was obtained through multiple heterogeneous clustering algorithms, and the category matrices were labeled and pre-classified by querying common points. Secondly, the key examples used to train the weight coefficient model of the clustering algorithm were selected through the proposed priority largest search strategy and the most confusing query strategy. Thirdly, the objective solving function was defined, and the weight coefficients of clustering algorithms were obtained by training the key examples. Finally, the samples with high confidence in the results were classified by performing the classification calculation combined with the weight coefficient. Six public lithology datasets of oil wells in Daqing oilfield were used to carry out experiments. Experimental results show that when the classification accuracy of ALCL is the highest, it is improved by 2.07%-14.01% compared with those of the traditional supervised learning algorithms and other active learning algorithms. The results of hypothesis test and significance analysis prove that ALCL has better classification effect in lithology identification.
Collaborative filtering recommendation algorithm based on dual most relevant attention network
ZHANG Wenlong, QIAN Fulan, CHEN Jie, ZHAO Shu, ZHANG Yanping
2020, 40(12): 3445-3450. DOI: 10.11772/j.issn.1001-9081.2020061023
Item-based collaborative filtering learns user preferences from the user's historical interaction items and recommends similar new items based on those preferences. Existing collaborative filtering methods assume that the historical items a user has interacted with all have the same impact on the user, and that all historical interaction items contribute equally to the prediction of the target item, which limits the accuracy of these recommendation methods. In order to solve these problems, a new collaborative filtering recommendation algorithm based on a dual most relevant attention network was proposed, which contains two attention network layers. Firstly, the item-level attention network was used to assign different weights to different historical items in order to capture the most relevant items among the user's historical interaction items. Then, the item-interaction-level attention network was used to perceive the correlation degrees of the interactions between the different historical items and the target item. Finally, the fine-grained preferences of users on the historical interaction items and the target item were simultaneously captured through the two attention network layers, so as to make better recommendations for the next step. The experiments were conducted on two real datasets, MovieLens and Pinterest. Experimental results show that the proposed algorithm improves the recommendation hit rate by 2.3 percentage points and 1.5 percentage points respectively compared with the benchmark model Deep Item-based Collaborative Filtering (DeepICF) algorithm, which verifies the effectiveness of the proposed algorithm in making personalized recommendations for users.
Lightweight convolutional neural network based on cross-channel fusion and cross-module connection
CHEN Li, DING Shifei, YU Wenjia
2020, 40(12): 3451-3457. DOI: 10.11772/j.issn.1001-9081.2020060882
In order to solve the problems of too many parameters and high computational complexity of traditional convolutional neural networks, a lightweight convolutional neural network architecture named C-Net, based on cross-channel fusion and cross-module connection, was proposed. Firstly, a method called cross-channel fusion was proposed. With it, the shortcoming that grouped convolution lacks information flow between different groups was solved to a certain extent, and the information communication between different groups was realized efficiently and easily. Then, a method called cross-module connection was proposed. With it, the shortcoming that the basic building blocks in traditional lightweight architectures are independent of each other was overcome, and the information fusion between different modules with the same resolution feature mapping within the same stage was achieved, enhancing the feature extraction capability. Finally, a novel lightweight convolutional neural network architecture C-Net was designed based on the two proposed methods. The accuracy of C-Net on the Food_101 dataset is 69.41%, and its accuracy on the Caltech_256 dataset is 63.93%. Experimental results show that C-Net reduces the memory cost and computational complexity in comparison with state-of-the-art lightweight convolutional neural network models. The ablation experiment on the Cifar_10 dataset verifies the effectiveness of the two proposed methods.
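The paper's cross-channel fusion is not spelled out here; a channel-shuffle-style rearrangement, as popularized by ShuffleNet, is one common way to restore information flow between convolution groups and is sketched below for reference only.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so grouped convolutions can exchange information."""
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# Example: y = channel_shuffle(torch.randn(1, 8, 32, 32), groups=2)
```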
Detection of negative emotion burst topic in microblog text stream
LI Yanhong, ZHAO Hongwei, WANG Suge, LI Deyu
2020, 40(12): 3458-3464. DOI: 10.11772/j.issn.1001-9081.2020060880
How to find negative emotion burst topic in time from massive and noisy microblog text stream is essential for emergency response and handling of emergencies. However, the traditional burst topic detection methods often ignore the differences between negative emotion burst topic and non-negative emotion burst topic. Therefore, a Negative Emotion Burst Topic Detection (NE-BTD) algorithm for microblog text stream was proposed. Firstly, the accelerations of keyword pairs in microblog and the change rate of negative emotion intensity were used as the basis for judging the topics of negative emotion. Secondly, the speeds of burst word pairs were used to determine the window range of negative emotion burst topics. Finally, a Gibbs Sampling Dirichlet Multinomial Mixture model (GSDMM) clustering algorithm was used to obtain the topic structures of the negative emotion burst topics in the window. In the experiments, the proposed NE-BTD algorithm was compared with an existing Emotion-Based Method of Topic Detection (EBM-TD) algorithm. The results show that the NE-BTD algorithm was at least 20% higher in accuracy and recall than the EBM-TD algorithm, and it can detect negative emotion burst topic at least 40 minutes earlier.
Intrusion detection method based on variable precision covering rough set
OU Binli, ZHONG Xiaru, DAI Jianhua, YANG Tian
2020, 40(12): 3465-3470. DOI: 10.11772/j.issn.1001-9081.2020060918
It is an important task for an Intrusion Detection System (IDS) to identify abnormal user behaviors accurately and quickly. In order to solve the problems of high dimensionality and large sample size of intrusion detection data, a related family attribute reduction method based on variable precision covering rough set was proposed and applied to intrusion detection data. Firstly, the variable precision related families of the condition attributes were generated based on the covering decision table. Then, a heuristic algorithm was used to obtain the attribute reduction of the decision table based on all the variable precision related families of the condition attributes. Finally, the intrusion detection data was detected by combining the reduction with a classifier. Experimental results show that the proposed method has low time complexity for calculating the attribute reduction; on large sample datasets, the running time of the attribute reduction algorithm based on fuzzy rough set dependency, Neighborhood Fuzzy Rough Sets (NFRS), is 96 times that of the proposed method. On the NSL-KDD dataset, the proposed method can identify key attributes quickly and eliminate invalid information, with an overall accuracy of 90.53% and an accuracy of 97% on the Normal class.
Super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning
SUN Zhongfan, ZHOU Zhenghua, ZHAO Jianwei
2020, 40(12): 3471-3477. DOI: 10.11772/j.issn.1001-9081.2020060966
For the problem that the existing deep-learning based super-resolution reconstruction methods mainly study on the reconstruction problem of amplifying integer times, not on the cases of amplifying arbitrary times (e.g. non-integer times), a super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning was proposed. Firstly, the coordinate projection was used to find the correspondence between the coordinates of high-resolution image and low-resolution image. Secondly, based on the meta-learning network, considering the spatial information of feature map, the extracted spatial features and coordinate positions were combined as the input of weighted prediction network. Finally, the convolution kernels predicted by the weighted prediction network were combined with the feature map in order to amplify the size of feature map effectively and obtain the high-resolution image with arbitrary magnification. The proposed spatial meta-learning module was able to be combined with other deep networks to obtain super-resolution reconstruction methods with arbitrary magnification. The provided super-resolution reconstruction method with arbitrary magnification (non-integer magnification) was able to solve the reconstruction problem with a fixed size but non-integer scale in the real life. Experimental results show that, when the space complexity (network parameters) is equivalent, the time complexity (computational cost) of the proposed method is 25%-50% of that of the other reconstruction methods, the Peak Signal-to-Noise Ratio (PSNR) of the proposed method is 0.01-5 dB higher than that of the others, and the Structural Similarity (SSIM) of the proposed method is 0.03-0.11 higher than that of the others.
Multi-level feature selection algorithm based on mutual information
YONG Juya, ZHOU Zhongmei
2020, 40(12): 3478-3484. DOI: 10.11772/j.issn.1001-9081.2020060871
Focusing on the problems in feature selection that the process of removing redundancy becomes very complicated when a large number of features are selected, and that some features are strongly correlated with the label only after being combined with other features, a Multi-Level Feature Selection algorithm based on Mutual Information (MI_MLFS) was proposed. Firstly, the features were divided into strongly correlated, sub-strongly correlated and other features according to the degrees of correlation between the features and the label. Secondly, after selecting the strongly correlated features, features with low redundancy among the sub-strongly correlated features were selected. Finally, the features which were able to enhance the correlation between the selected feature subset and the label were selected. MI_MLFS was compared with ReliefF, the minimal-Redundancy-Maximal-Relevance criterion (mRMR), Joint Mutual Information (JMI), the Conditional Mutual Information Maximization criterion (CMIM) and Double Input Symmetrical Relevance (DISR) on 15 datasets. The results show that MI_MLFS achieves the highest classification accuracy on 13 and 11 datasets with the Support Vector Machine (SVM) and Classification And Regression Tree (CART) classifiers respectively. MI_MLFS has better classification performance than many classical feature selection algorithms.
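A hedged sketch of the first level of such a method, splitting features by their mutual information with the label; the quantile thresholds and function name are assumptions, and the redundancy and combination stages of MI_MLFS are not reproduced.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def split_by_relevance(X, y, strong_q=0.8, sub_q=0.5):
    """Return index arrays of strongly correlated, sub-strongly correlated and other features."""
    mi = mutual_info_classif(X, y, random_state=0)
    strong_t, sub_t = np.quantile(mi, strong_q), np.quantile(mi, sub_q)
    strong = np.where(mi >= strong_t)[0]
    sub = np.where((mi >= sub_t) & (mi < strong_t))[0]
    other = np.where(mi < sub_t)[0]
    return strong, sub, other
```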
2020 Asian Conference on Artificial Intelligence Technology (ACAIT 2020)
Chinese short text classification model with multi-head self-attention mechanism
ZHANG Xiaochuan, DAI Xuyao, LIU Lu, FENG Tianshuo
2020, 40(12): 3485-3489. DOI: 10.11772/j.issn.1001-9081.2020060914
Aiming at the problem that semantic ambiguity caused by the lack of context information in Chinese short texts results in feature sparsity, a text classification model combining Convolutional Neural Network and Multi-Head self-Attention mechanism (CNN-MHA) was proposed. Firstly, the existing Bidirectional Encoder Representations from Transformers (BERT) pre-training language model was used to format the sentence-level short texts as character-level vectors. Secondly, in order to reduce noise, the Multi-Head self-Attention mechanism (MHA) was used to learn the word dependence inside the text sequence and generate a hidden layer vector with global semantic information. Then, the hidden layer vector was input into the Convolutional Neural Network (CNN) to generate the text classification feature vector. In order to improve the optimization effect of classification, the output of the convolutional layer was fused with the sentence features extracted by the BERT model, and then input to the classifier for re-classification. Finally, the CNN-MHA model was compared with the TextCNN, BERT and TextRCNN models respectively. Experimental results show that the F1 score of the improved model on the SogouCS dataset is increased by 3.99%, 0.76% and 2.89% respectively compared with those of the comparison models, which proves the effectiveness of the improved model.
General chess piece positioning method under uneven illumination
WANG Yajie, ZHANG Yunbo, WU Yanyan, DING Aodong, QI Bingzhi
2020, 40(12): 3490-3498. DOI: 10.11772/j.issn.1001-9081.2020060892
Focusing on the problem of chess piece positioning errors in chess robot systems under uneven illumination, a general chess piece positioning method based on block convex hull detection and image mask was proposed. Firstly, the set of points on the outline of the chessboard was extracted, and the coordinates of the four vertices of the chessboard were detected using the block convex hull method. Secondly, the coordinates of the four vertices of the chessboard in the standard chessboard image were defined, and the transformation matrix was calculated by the perspective transformation principle. Thirdly, the type of the chessboard was recognized based on the difference between the small square areas of different chessboards. Finally, the captured chessboard images were successively corrected to standard chessboard images, the difference images of two adjacent standard chessboard images were obtained, and then dilation, image mask multiplication and erosion operations were performed on the difference images in order to obtain the effective areas of the chess pieces and calculate their center coordinates. Experimental results demonstrate that the proposed method achieves average positioning accuracies of 95.5% and 99.06% for Go and Chinese chess pieces respectively under four kinds of uneven illumination conditions, which is a significant improvement over other chess piece positioning algorithms. At the same time, the proposed method can solve the inaccurate local positioning of chess pieces caused by piece adhesion, piece projection and lens distortion.
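The rectification-and-differencing part of this pipeline maps onto standard OpenCV calls; in the sketch below the corner coordinates, board size and kernel size are placeholders, and bitwise masking stands in for the image-mask multiplication.

```python
import cv2
import numpy as np

def rectify(board_img, corners, size=450):
    """Warp the four detected board corners onto a size x size standard board image."""
    dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(board_img, M, (size, size))

def piece_regions(prev_std, curr_std, mask, ksize=5):
    """Difference two rectified boards, then dilate, mask and erode to isolate new pieces."""
    diff = cv2.absdiff(prev_std, curr_std)
    kernel = np.ones((ksize, ksize), np.uint8)
    diff = cv2.dilate(diff, kernel)
    diff = cv2.bitwise_and(diff, diff, mask=mask)   # stands in for image-mask multiplication
    return cv2.erode(diff, kernel)
```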
Artificial intelligence
Multi-robot path planning algorithm based on 3D spatiotemporal maps and motion decomposition
QU Licheng, LYU Jiao, ZHAO Ming, WANG Haifei, QU Yihua
2020, 40(12): 3499-3507. DOI: 10.11772/j.issn.1001-9081.2020050673
In view of the shortcomings of current multi-robot path planning strategies, such as high path coupling, long total paths, long waiting times for collision avoidance, and the resulting low system robustness and low robot utilization, a multi-robot path planning algorithm based on 3D spatiotemporal maps and motion decomposition was proposed. Firstly, dynamic temporary obstacles in the time dimension were generated according to the existing path set and the current robot positions, and were expanded into a 3D search space together with the static obstacles. Secondly, in the 3D search space, the total path motion time was decomposed into three parameters: motion time, turning time, and in-situ dwell time, and a conditional depth-first search strategy was used to compute the set of all paths from the starting node to the target node that met the parameter requirements. Finally, all paths in the path set were traversed; for each path, the actual total time consumption was calculated, and if the difference between the actual and theoretical total time consumption of a path was less than the specified maximum error, the path was taken as the shortest path; otherwise, the traversal continued with the remaining paths. If the differences between the actual and theoretical total times of all paths in the set were greater than the maximum error, the parameters were adjusted dynamically and the initial steps of the algorithm were executed again. Experimental results show that the paths planned by the proposed algorithm have short total length, short running time, no collisions and high robustness, and that the proposed algorithm can solve the problem of completing continuous random tasks with a multi-robot system.
Path planning of mobile robot based on improved artificial potential field method
XU Xiaoqiang, WANG Mingyong, MAO Yan
2020, 40(12): 3508-3512. DOI: 10.11772/j.issn.1001-9081.2020050640
Aiming at the problem that the traditional artificial potential field method easily falls into trap areas and local minima during path planning, an improved artificial potential field method was proposed. Firstly, the concept of safe distance was proposed to avoid unnecessary paths, so as to solve the problems of long path length and long algorithm running time. Then, in order to prevent the robot from being trapped in a local minimum or trap area, the predictive distance was introduced into the algorithm, so that the algorithm was able to react before the robot became trapped. Finally, the robot was guided to avoid the local minimum and trap area by setting virtual target points reasonably. Experimental results show that the improved algorithm can effectively solve the problem that the traditional algorithm easily falls into local minima and trap areas. At the same time, compared with the traditional artificial potential field method, the path length planned by the proposed algorithm is reduced by 5.2% and its speed is increased by 405.56%.
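For reference, a bare-bones attractive/repulsive update of the classical artificial potential field method is sketched below; the paper's safe distance, predictive distance and virtual target points are additions on top of this and are not reproduced, and all gains are illustrative.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One step of the classical APF: attractive pull toward the goal plus
    repulsive push from obstacles closer than the influence range d0."""
    force = k_att * (goal - pos)                      # attractive term
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 0 < d < d0:                                # repulsion only inside range d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (pos - obs) / d
    return pos + step * force / (np.linalg.norm(force) + 1e-9)
```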
Hot new word discovery applied for detection of network hot news
WANG Yu, XU Jianmin
2020, 40(12): 3513-3519. DOI: 10.11772/j.issn.1001-9081.2020040549
By analyzing the characteristics of hot words in network news, a hot new word discovery method was proposed for the detection of network hot news. Firstly, the Frequent Pattern tree (FP-tree) algorithm was improved to extract frequent word strings as hot new word candidates. A lot of useless information in the news data was removed by deleting infrequent 1-word strings from the news data and cutting the news data based on infrequent 1-word and 2-word strings, so as to greatly decrease the complexity of the FP-tree. Secondly, the multivariate Pointwise Mutual Information (PMI) was formed by expanding the binary PMI, and the Time PMI (TPMI) was formed by introducing the time features of hot words. TPMI was used to judge the internal cohesion degree and timeliness of the hot new word candidates, so as to remove unqualified candidates. Finally, the branch entropy was used to determine the boundaries of new words for selecting new hot words. A dataset formed by 7 222 news headlines collected from Baidu network news was used for the experiments. When events reported at least 8 times in half a month were selected as hot news and the adjustment coefficient of the time feature was set to 2, TPMI correctly recognized 51 hot words, missed 2 hot words because they had been hot for a long time, and missed 2 less-hot words because they occurred insufficiently; the multivariate PMI without time features correctly recognized all 55 hot words, but incorrectly recognized 97 non-hot words. The analysis shows that the time and space cost is reduced by decreasing the complexity of the FP-tree, and the experimental results show that the recognition rate of hot new words is improved by introducing the time feature into hot new word judgement.
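The binary PMI that TPMI extends can be computed directly from document co-occurrence counts; a small sketch (the time-feature weighting of TPMI is not included, and smoothing is omitted):

```python
import math
from collections import Counter

def pmi(docs):
    """docs: list of tokenized headlines. Returns PMI of word pairs based on
    document-level co-occurrence frequencies."""
    n = len(docs)
    word_df = Counter(w for d in docs for w in set(d))
    pair_df = Counter(p for d in docs
                      for p in {(a, b) for a in set(d) for b in set(d) if a < b})
    return {p: math.log((pair_df[p] / n) / ((word_df[p[0]] / n) * (word_df[p[1]] / n)))
            for p in pair_df}
```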
Recognition and localization method of super-large-scale variance objects in the same scene
WANG Yiting, ZHANG Ke, LI Jie, HAO Zongbo, DUAN Chang, ZHU Ce
2020, 40(12): 3520-3525. DOI: 10.11772/j.issn.1001-9081.2020040466
In recent years, deep learning achieves very good results and has great improvement in object detection. However, in some special scenes, for example, when it is required to simultaneously detect objects with greatly different scales (difference greater than 100 times), common object recognition methods' performance will drop dramatically. Aiming at the problem of recognizing and locating objects with super-large-scale variance in the same scene, the You Only Look Once version3 (YOLOv3) framework was improved, the image pyramid technology was combined to extract the multi-scale features of the image. And in the training process, the strategy of using dynamic Intersection over Union (IoU) was proposed for different scale objects, which was able to better solve the problem of sample imbalance. Experimental results show that the proposed model significantly improves the recognition ability of super-large and super-small objects in the same scene. The proposed model has been applied to the airport environment and achieved good application results.
Object detection algorithm based on asymmetric hourglass network structure
LIU Ziwei, DENG Chunhua, LIU Jing
2020, 40(12): 3526-3533. DOI: 10.11772/j.issn.1001-9081.2020050641
Anchor-free deep learning based object detection is a mainstream single-stage object detection algorithm. An hourglass network structure that incorporates multiple layers of supervisory information can significantly improve the accuracy of the anchor-free object detection algorithm, but its speed is much lower than that of a common network at the same level, and the features of different scale objects will interfere with each other. In order to solve the above problems, an object detection algorithm based on asymmetric hourglass network structure was proposed. The proposed algorithm is not constrained by the shape and size when fusing the features of different network layers, and can quickly and efficiently abstract the semantic information of network, making it easier for the model to learn the differences between various scales. Aiming at the problem of object detection at different scales, a multi-scale output hourglass network structure was designed to solve the problem of feature mutual interference between different scale objects and refine the output detection results. In addition, a special non-maximum suppression algorithm for multi-scale outputs was used to improve the recall rate of the detection algorithm. Experimental results show that the AP50 index of the proposed algorithm on Common Objects in COntext (COCO) dataset reaches 61.3%, which is 4.2 percentage points higher than that of anchor-free network CenterNet. The proposed algorithm surpasses the original algorithm in the balance of accuracy and time, and is particularly suitable for real-time object detection in industry.
Remaining useful life prediction for turbofan engines by genetic algorithm-based selective ensembling and temporal convolutional network
ZHU Lin, NING Qian, LEI Yinjie, CHEN Bingcai
2020, 40(12): 3534-3540. DOI: 10.11772/j.issn.1001-9081.2020050661
As the turbofan engine is one of the core pieces of equipment in the aerospace field, its health condition determines whether the aircraft can work stably and reliably, and the prediction of the Remaining Useful Life (RUL) of turbofan engines is an important part of equipment monitoring and maintenance. In view of characteristics of the turbofan engine monitoring process such as complicated operating conditions, diverse monitoring data and long time spans, a remaining useful life prediction model for turbofan engines integrating Genetic Algorithm-based Selective ENsembling (GASEN) and Temporal Convolutional Network (TCN) (GASEN-TCN) was proposed. Firstly, TCN was used to capture the inner relationship between data over long spans, so as to predict the RUL. Then, GASEN was applied to ensemble multiple independent TCNs to enhance the generalization performance of the model. Finally, the proposed model was compared with popular machine learning methods and other deep neural networks on the general Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset. Experimental results show that the proposed model has higher prediction accuracy and lower prediction error than the state-of-the-art Bidirectional Long-Short Term Memory (Bi-LSTM) network under many different operating modes and fault conditions. Taking the FD001 dataset as an example: on this dataset, the Root Mean Square Error (RMSE) of the proposed model is 17.08% lower than that of Bi-LSTM, and the relative accuracy (Accuracy) of the proposed model is 12.16% higher than that of Bi-LSTM. It can be seen that the proposed model has considerable application prospects in the intelligent overhaul and maintenance of equipment.
Civil aviation engine module maintenance level decision-making and cost optimization based on annealing frog leaping particle swarm algorithm
ZHANG Qing, ZHENG Yan
2020, 40(12): 3541-3549. DOI: 10.11772/j.issn.1001-9081.2020040565
For the problems of scope decision-making of maintenance for civil aviation engine module and cost optimization of full-life maintenance, the engine module maintenance level decision-making and cost optimization model based on annealing frog leaping particle swarm optimization algorithm with return time interval as variable was proposed. Firstly, according to the maintenance logic diagram for each module in maintenance instruction manual and the replacement situation of life-limited parts, the engine shop visit cost function was built. Secondly, by using the annealing frog leaping particle swarm optimization algorithm, the shop visit costs of different return times and the maintenance level for each module in full life time were determined. Finally, based on examples, the proposed algorithm was compared with the basic particle swarm optimization algorithm, annealing particle swarm optimization algorithm and shuffled frog leaping optimization algorithm, and the influence of different return times on maintenance cost and reliability was analyzed. Experimental results indicate that, when the engine has five shop visits in its full life time, the average cost obtained using annealing frog leaping particle swarm optimization algorithm was 322.479 1 $/flight hour, which was the optimum value compared with those of the other three optimization algorithms. The proposed algorithm can facilitate the shop visit decision-making of airlines and overhaul companies.
Deep learning classification method of Landsat 8 OLI images based on inaccurate prior knowledge
XU Changqing, CHEN Zhenjie, HOU Renfu
2020, 40(12): 3550-3557. DOI: 10.11772/j.issn.1001-9081.2020040446
Remote sensing image interpretation plays an important role in the acquisition of Land Use and Land Cover (LULC) information, and automatic classification is the key to improving the efficiency of LULC information acquisition. Actual scenes contain a great amount of inaccurate prior knowledge, and extracting and integrating the usable knowledge in it can help to further improve the accuracy, automation rate and large-scale application ability of image classification methods. Based on this situation, a new deep learning classification method of Landsat 8 OLI images based on inaccurate prior knowledge was proposed. In the proposed method, inaccurate units in the prior knowledge were avoided automatically, realizing automatic region selection and feature extraction of classified samples and obtaining high-confidence knowledge in the constraint space of patches. Then, a deep residual network was trained with these classified samples, and the accurate classification of large-area images was achieved. In the experiment, Xinbei district of Changzhou city was taken as the example, the 2009 land use status data of this district was selected as the prior data, and the 2014 Landsat 8 OLI image of the district was selected as the image to be classified. The experimental results show that the proposed method has advantages such as the integration of inaccurate prior knowledge and the accurate classification of large-area contiguous LULC information. Besides, it can obtain accurate boundaries of the main land use patches, with a patch classification accuracy of 88.7% over the whole image and a Kappa coefficient of 0.842. The proposed method can cooperate with deep learning methods to achieve high-precision Landsat 8 OLI remote sensing image classification.
Remote sensing image target detection and identification based on deep learning
SHI Wenxu, BAO Jiahui, YAO Yu
2020, 40(12): 3558-3562. DOI: 10.11772/j.issn.1001-9081.2020040579
In order to improve the precision and speed of existing remote sensing image target detection algorithms in small-scale target detection, a remote sensing image target detection and identification algorithm based on deep learning was proposed. Firstly, a dataset of remote sensing images with different scales was constructed for model training and testing. Secondly, based on the original Single Shot multibox Detector (SSD) network model, the shallow feature fusion module, shallow feature enhancement module and deep feature enhancement module were designed and fused. Finally, the focal loss function was introduced into the training strategy to solve the problem of the imbalance of positive and negative samples in the training process, and the experiment was carried out on the remote sensing image dataset. Experimental results on high-resolution remote sensing image dataset show that the detection mean Average Precision (mAP) of the proposed algorithm achieves 77.95%, which is 3.99 percentage points higher than that of SSD network model, and has the detection speed of 33.8 frame/s. In the extended experiment, the performance of the proposed algorithm is better than that of SSD network model for the detection of fuzzy targets in high-resolution remote sensing images. Experimental results show that the proposed algorithm can effectively improve the precision of remote sensing image target detection.
Network and communications
Heterogeneous directional sensor node scheduling algorithm for differentiated coverage
LI Ming, HU Jiangping, CAO Xiaoli, PENG Peng
2020, 40(12): 3563-3570. DOI: 10.11772/j.issn.1001-9081.2020050696
In order to prolong the lifespan of heterogeneous directional sensor networks, a node scheduling algorithm based on the Enhanced Coral Reef Optimization algorithm (ECRO), which supports different monitoring requirements for different monitoring targets, was proposed. ECRO was utilized to divide the sensor set into multiple sets satisfying the coverage requirements, so that the network lifespan could be prolonged by scheduling among the sets. The improvement of the Coral Reef Optimization algorithm (CRO) is reflected in four aspects. Firstly, the migration operation of the biogeography-based optimization algorithm was introduced into the brooding of the coral reef to preserve the excellent solutions of the original population. Secondly, a differential mutation operator with a chaotic parameter was adopted in brooding to enhance the optimization ability of the offspring. Thirdly, a random reverse learning strategy was applied to the worst individual of the population in order to improve population diversity. Fourthly, by combining CRO with the simulated annealing algorithm, the local search capability of the algorithm was increased. Extensive simulation experiments on both numerical benchmark functions and node scheduling were conducted. The results of the numerical tests show that, compared with the genetic algorithm, simulated annealing algorithm, differential evolution algorithm and an improved differential evolution algorithm, ECRO has better optimization ability. The results of sensor network node scheduling show that, compared with the greedy algorithm, the Learning Automata Differential Evolution (LADE) algorithm and the original CRO, ECRO improves the network lifespan by 53.8%, 19.0% and 26.6% respectively, which demonstrates the effectiveness of the proposed algorithm.
Multipath transmission selection algorithm based on immune connectivity model
ZHANG Zhengwan, ZHANG Chunjiong, LI Hongbing, XIE Tao
2020, 40(12): 3571-3577. DOI: 10.11772/j.issn.1001-9081.2020040492
In order to solve the problems of high node energy consumption and low data transmission reliability caused by the uneven node deployment in Wireless Sensor Network (WSN), a multipath transmission selection algorithm based on immune connectivity model was proposed. When data transmission failed, the immune mechanism was used to select the fitness functions of paths, so as to optimize the transmission path and reduce the energy consumption of nodes. The experiments were performed to evaluate the proposed algorithm by the indicators such as network lifetime, end-to-end transmission delay, coverage ratio, transmission reliability and load distribution. The experimental results show that the proposed algorithm can better balance the load, improve the life cycle of network, and ensure the reliability of data transmission. The proposed algorithm can be applied to the design of sensor networks with high requirements on energy efficiency, scalability, prolonging network life and reducing network overhead.
Overlapping community detection algorithm fusing label preprocessing and node influence
WU Qingshou, CHEN Rongwang, YU Wensen, LIU Genggeng
2020, 40(12): 3578-3585. DOI: 10.11772/j.issn.1001-9081.2020060942
Aiming at the problems of scattered initial labels and the large randomness of label propagation, an overlapping community detection algorithm fusing label preprocessing and node influence was proposed. Firstly, the influence value of each node was calculated, and the node with the largest influence value was gradually selected as a central node. Secondly, the label of the central node was used to preprocess the labels of its homogeneous neighbor nodes, so as to reduce the number of initial labels as well as the randomness of subsequent label propagation, and to preliminarily identify the overlapping nodes. Thirdly, the overlapping nodes were identified by the label belonging coefficient, and the labels of non-overlapping nodes were selected by the node influence values, improving the stability and accuracy of the proposed algorithm. Finally, in order to maximize the increment of the adaptive function, communities with weak cohesion were merged to improve the quality of the communities. Simulation results show that the proposed algorithm has the largest extended modularity value on half of the six real networks, and has the best Normalized Mutual Information (NMI) performance on artificial benchmark networks with different mixing degrees, node overlapping degrees and maximum numbers of communities to which a node belongs. In conclusion, the algorithm adapts well to all kinds of networks and has nearly linear time complexity.
Security-risk-oriented distributed resource allocation method in power wireless private network
HUANG Xiuli, HUANG Jin, YU Pengfei, MIAO Weiwei, YANG Ruxia, LI Yijing, YU Peng
2020, 40(12): 3586-3593. DOI: 10.11772/j.issn.1001-9081.2020040488
Aiming at the problem of ensuring terminal communication in scenarios of strong interference and high failure risk in the power wireless private network, a security-risk-oriented energy-efficient distributed resource allocation method was proposed. Firstly, the energy consumption composition of the base stations was analyzed, and a resource allocation model for system energy efficiency maximization was established. Then, the K-means++ algorithm was adopted to cluster the base stations in the network, so as to divide the whole network into several independent areas and handle the high-risk base stations separately within each cluster. Next, in each cluster, the high-risk base stations were switched into sleep mode based on their risk values, and the users under the high-risk base stations were transferred to other base stations in the same cluster. Finally, the transmission powers of the normal base stations in the clusters were optimized. Theoretical analysis and simulation results show that the clustering of base stations greatly reduces the complexity of base station sleeping as well as power optimization and allocation, and that the overall network energy efficiency is increased from 0.158 9 Mb/J to 0.195 4 Mb/J after turning off the high-risk base stations. The proposed distributed resource allocation method can effectively improve the energy efficiency of the system.
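The clustering step corresponds to a standard k-means++ call; in the sketch below the coordinates, cluster count and risk threshold are assumed placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_sleep(bs_coords, risk, k=5, risk_threshold=0.8):
    """Group base stations into k clusters and flag high-risk members per cluster."""
    labels = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=0).fit_predict(bs_coords)
    sleep = {}
    for c in range(k):
        members = np.where(labels == c)[0]
        sleep[c] = [int(i) for i in members if risk[i] > risk_threshold]  # candidates to switch off
    return labels, sleep
```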
Computer software technology
Performance optimization of distributed file system based on new type storage devices
DONG Cong, ZHANG Xiao, CHENG Wendi, SHI Jia
2020, 40(12): 3594-3603. DOI: 10.11772/j.issn.1001-9081.2020050632
The I/O performance of new type storage devices is usually an order of magnitude higher than that of traditional Solid State Disk (SSD). However, simply replacing SSD with new type storage device will not significantly improve the performance of distributed file system. This means that the current distributed file system cannot give full play to the performance of new type storage devices. To solve the problem, the data writing process and transmission process of Hadoop Distributed File System (HDFS) were analyzed quantitatively. Through quantitative analysis of the time consumptions of different stages of HDFS writing process, the most time-consuming data transmission between nodes was found in each stage of writing data. Therefore, the corresponding optimization strategy was proposed, that is, the processes of data transmission and processing were parallelized by using asynchronous write. So that the processing stages of different data packets were parallel to each other, shortening the total processing time of data writing, thereby the write performance of HDFS was improved. Experimental results show the proposed scheme improves the HDFS write throughput by 15%-24%, and reduces the overall write execution time by 28%-36%.
Microservice identification method based on class dependencies under resource constraints
SHAO Jianwei, LIU Qiqun, WANG Huanqiang, CHEN Yaowang, YU Dongjin, SALAMAT Boranbaev
2020, 40(12): 3604-3611. DOI: 10.11772/j.issn.1001-9081.2020040495
To effectively improve the automation level of legacy software system reconstruction based on the microservice architecture, according to the principle that there is a certain correlation between resource data operated by two classes with dependencies, a microservice identification method based on class dependencies under resource constraints was proposed. Firstly, the class dependency graph was built based on the class dependencies in the legacy software program, and the resource entity label for each class was set. Then, a dividing algorithm was designed for the class dependency graph based on the resource entity label, which was used to divide the original software system and obtain the candidate microservices. Finally, the candidate microservices with higher dependency degrees were combined to obtain the final microservice set. Experimental results based on four open source projects from GitHub demonstrate that, the proposed method achieves the microservice division accuracy of higher than 90%, which proves that it is reasonable and effective to identify microservices by considering both class dependencies and resource constraints.
Virtual reality and multimedia computing
Text-to-image synthesis method based on multi-level progressive resolution generative adversarial networks
XU Yining, HE Xiaohai, ZHANG Jin, QING Linbo
2020, 40(12): 3612-3617. DOI: 10.11772/j.issn.1001-9081.2020040575
To address the problem that the results of text-to-image synthesis tasks have wrong target structures and unclear image textures, a Multi-level Progressive Resolution Generative Adversarial Network (MPRGAN) model was proposed based on the Attentional Generative Adversarial Network (AttnGAN). Firstly, a semantic separation-fusion generation module was used in the low-resolution layer: the text feature was separated into three feature vectors under the guidance of a self-attention mechanism, and the feature vectors were used to generate feature maps respectively. Then, the feature maps were fused into a low-resolution map, and mask images were used as semantic constraints to improve the stability of the low-resolution generator. Finally, a progressive resolution residual structure was adopted in the high-resolution layers, and the word attention mechanism and pixel shuffle were combined to further improve the quality of the generated images. Experimental results show that the Inception Score (IS) of the proposed model reaches 4.70 and 3.53 on the Caltech-UCSD Birds-200-2011 (CUB-200-2011) and 102 category flower (Oxford-102) datasets respectively, which is 7.80% and 3.82% higher than those of AttnGAN. The MPRGAN model can alleviate the instability of structure generation to a certain extent, and the images generated by the proposed model are closer to the real images.
Single image super-resolution method based on non-local channel attention mechanism
YE Yang, CAI Qiong, DU Xiaobiao
2020, 40(12): 3618-3623. DOI: 10.11772/j.issn.1001-9081.2020050681
Single image super-resolution is an ill-posed problem which aims to reconstruct texture patterns from a given blurry, low-resolution image. Recently, the Convolutional Neural Network (CNN) was introduced into the field of super-resolution. Although current studies obtain excellent performance by designing the structure and connection patterns of CNN, the use of edge data for training a more powerful model has been ignored. Therefore, a Non-local Channel Attention (NCA) method for single image super-resolution based on edge data enhancement was proposed. The proposed method makes full use of the training data and improves performance through non-local channel attention; it not only provides a guideline for designing the network, but also offers an interpretation of the super-resolution task. The NCA Network (NCAN) model is composed of a main branch and an edge enhancement branch: taking the low-resolution images as input, self-attention was performed in the main branch to reconstruct the super-resolution images, and the edge data was predicted. Experimental results show that, compared with the Second-order Attention Network (SAN) model, NCAN has the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) improved by 0.21 dB and 0.009 respectively on the benchmark dataset BSD100 at a magnification factor of 3; compared with the deep Residual Channel Attention Network (RCAN) model, NCAN has the PSNR and SSIM significantly improved on the benchmark datasets Set5 and Set14 at magnification factors of 3 and 4. NCAN outperforms the state-of-the-art models with comparable parameters.
Abdominal MRI image multi-scale super-resolution reconstruction based on parallel channel-spatial attention mechanism
FAN Fan, GAO Yuan, QIN Pinle, WANG Lifang
2020, 40(12): 3624-3630. DOI: 10.11772/j.issn.1001-9081.2020050670
In order to effectively solve the problems of indistinct boundaries and unclear abdominal organ display caused by high-frequency detail loss, as well as the inconvenience of applying single-model single-scale reconstruction, in the super-resolution reconstruction of abdominal Magnetic Resonance Imaging (MRI) images, a multi-scale super-resolution algorithm based on a parallel channel-spatial attention mechanism was proposed. Firstly, parallel channel-spatial attention residual blocks were built: the correlation between the key areas and high-frequency information was obtained by the spatial attention module, and the channel attention module was used to learn the weights of the image channels according to their response to the key information. At the same time, the feature extraction layer of the network was widened to increase the feature information flowing into the attention modules. In addition, a weight normalization layer was added to ensure the training efficiency of the network. Finally, a multi-scale up-sampling layer was applied at the end of the network to increase the flexibility and applicability of the network. Experimental results show that, compared with the image super-resolution using very deep Residual Channel Attention Network (RCAN), the proposed algorithm increases the Peak Signal-to-Noise Ratio (PSNR) by 0.68 dB on average at the ×2, ×3 and ×4 scales. The proposed algorithm effectively improves the quality of the reconstructed images.
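A hedged PyTorch sketch of a parallel channel/spatial attention residual block in the spirit described above; the layer sizes and the way the two branches are combined are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ParallelCSAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(              # channel attention: global pooling + bottleneck
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(              # spatial attention: single-channel map
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        f = self.body(x)
        f = f * self.channel(f) + f * self.spatial(f)   # parallel attention branches
        return x + f                                    # residual connection
```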
Interactive augmentation method for aircraft engine borescope inspection images based on style transfer
FAN Wei, DUAN Bokun, HUANG Rui, LIU Ting, ZHANG Ning
2020, 40(12): 3631-3636. DOI: 10.11772/j.issn.1001-9081.2020040585
In the aircraft engine borescope inspection image defect detection task, the number of defect region samples is far less than that of normal region samples, and the defect samples cannot cover the whole sample space, which results in poor generalization of detection algorithms. In order to solve these problems, a new interactive data augmentation method based on style transfer was proposed. Firstly, a background image and defect targets were selected through the interactive interface, and information such as the size, angle and position of the target to be pasted was specified according to the background image. Then, the style of the background image was transferred to the target image through style transfer, so that the background image and the target to be detected had the same style. Finally, the boundary of the fusion region was modified by the Poisson fusion algorithm to achieve a natural transition of the connected region. Two-class classification and defect detection experiments were conducted to verify the effectiveness of the proposed method. On the dataset containing both real and augmented images, the testers achieve an average classification error rate of 44.0% in the two-class classification task. In the detection task based on the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, the proposed method has the Average Precision (AP) of classification and segmentation improved by 99.5% and 91.9% respectively compared with those of the traditional methods.
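The Poisson-fusion step at the end of this pipeline corresponds closely to OpenCV's seamless cloning; a minimal usage sketch with placeholder file names and paste position.

```python
import cv2
import numpy as np

# Hypothetical inputs: a style-transferred defect patch and a borescope background image.
defect = cv2.imread("defect_styled.png")
background = cv2.imread("background.png")

mask = 255 * np.ones(defect.shape[:2], dtype=np.uint8)          # paste the whole patch
center = (background.shape[1] // 2, background.shape[0] // 2)   # (x, y); patch must fit around it

# Poisson blending smooths the boundary between the pasted defect and the background.
fused = cv2.seamlessClone(defect, background, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("augmented.png", fused)
```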
Indoor robot simultaneous localization and mapping based on RGB-D image
ZHAO Hong, LIU Xiangdong, YANG Yongjuan
2020, 40(12): 3637-3643. DOI: 10.11772/j.issn.1001-9081.2020040518
Simultaneous Localization and Mapping (SLAM) is a key technology for robots to realize autonomous navigation in unknown environments. Aiming at the poor real-time performance and low accuracy of the commonly used RGB-Depth (RGB-D) SLAM system, a new RGB-D SLAM system was proposed to further improve the real-time performance and accuracy. Firstly, the Oriented FAST and Rotated BRIEF (ORB) algorithm was used to detect the image feature points, the extracted feature points were processed by using the quadtree-based homogenization strategy, and the Bag of Words (BoW) was used to perform feature matching. Then, in the stage of system camera pose initial value estimation, an initial value which was closer to the optimal value was provided for back-end optimization by combining the Perspective-n-Point (PnP) and nonlinear optimization methods. In the back-end optimization, the Bundle Adjustment (BA) was used to optimize the initial value of the camera pose iteratively for obtaining the optimal value of the camera pose. Finally, according to the correspondence between the camera pose and the point cloud map of each frame, all the point cloud data were registered in a coordinate system to obtain the dense point cloud map of the scene, and the octree was used to compress the point cloud map recursively, so as to obtain a 3D map for robot navigation. On the TUM RGB-D dataset, the proposed RGB-D SLAM system, RGB-D SLAMv2 system and ORB-SLAM2 system were compared. Experimental results show that the proposed RGB-D SLAM system has better comprehensive performance on real-time and accuracy.
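The front-end steps named above (ORB extraction, matching and a PnP-based initial pose) map onto standard OpenCV calls; a condensed sketch, assuming the 3D coordinates of the previous frame's keypoints are already known from the depth image.

```python
import cv2
import numpy as np

def estimate_pose(prev_gray, curr_gray, prev_pts3d, K):
    """prev_pts3d: 3D coordinates (from depth) aligned with the keypoints of prev_gray;
    K: 3x3 camera intrinsic matrix."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    obj = np.float32([prev_pts3d[m.queryIdx] for m in matches])   # 3D points in previous frame
    img = np.float32([kp2[m.trainIdx].pt for m in matches])       # 2D points in current frame
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return rvec, tvec   # initial pose handed to the back-end bundle adjustment
```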
Portrait segmentation on mobile devices based on deep neural network
YANG Jianwei, YAN Qun, YAO Jianmin, LIN Zhixian
2020, 40(12): 3644-3650. DOI: 10.11772/j.issn.1001-9081.2020050699
Most existing portrait segmentation algorithms ignore the hardware limitations of mobile devices and pursue accuracy alone, so they cannot meet the segmentation speed requirements of mobile terminals. Therefore, a portrait segmentation network that can run efficiently on mobile devices was proposed. Firstly, the network was constructed on a lightweight U-shaped encoder-decoder architecture. Secondly, to compensate for the fact that the Fully Convolutional Network (FCN) is limited by a small receptive field and cannot fully capture long-range information, an Expectation Maximization Attention Unit (EMAU) was introduced between the encoder and the decoder. Thirdly, to improve the accuracy of the portrait boundary contour, a multi-layer boundary auxiliary loss was added at the training stage. Finally, the model was quantized and compressed. The proposed network was compared with networks such as PortraitFCN+, ENet and BiSeNet on the Veer dataset. Experimental results show that the proposed network improves both inference speed and segmentation quality, and processes RGB images with a resolution of 224×224 at an accuracy of 95.57%.
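A condensed PyTorch sketch of an Expectation-Maximization Attention unit in the spirit of the EMAU mentioned above is given below; the channel count, number of bases and iteration count are illustrative choices, not the paper's configuration.

```python
# Hedged sketch of an EM-attention unit: E-step responsibilities, M-step base update,
# then a low-rank reconstruction added back to the input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EMAU(nn.Module):
    def __init__(self, channels=64, num_bases=32, num_iters=3):
        super().__init__()
        self.num_iters = num_iters
        mu = torch.randn(1, channels, num_bases)
        self.register_buffer("mu", F.normalize(mu, dim=1))   # attention bases
        self.conv_in = nn.Conv2d(channels, channels, 1)
        self.conv_out = nn.Sequential(nn.Conv2d(channels, channels, 1),
                                      nn.BatchNorm2d(channels))

    def forward(self, x):
        b, c, h, w = x.shape
        feats = self.conv_in(x).view(b, c, h * w)                  # (B, C, N)
        mu = self.mu.expand(b, -1, -1)                             # (B, C, K)
        for _ in range(self.num_iters):
            z = torch.softmax(feats.transpose(1, 2) @ mu, dim=2)   # E-step: (B, N, K)
            z_norm = z / (z.sum(dim=1, keepdim=True) + 1e-6)
            mu = F.normalize(feats @ z_norm, dim=1)                # M-step: (B, C, K)
        recon = (mu @ z.transpose(1, 2)).view(b, c, h, w)          # low-rank reconstruction
        return F.relu(x + self.conv_out(recon))                    # residual connection

# Usage: attn = EMAU(64); y = attn(torch.randn(2, 64, 28, 28))
```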
Semantic face image inpainting based on U-Net with dense blocks
YANG Wenxia, WANG Meng, ZHANG Liang
2020, 40(12): 3651-3657. DOI: 10.11772/j.issn.1001-9081.2020040522
When the areas to be inpainted in a face image are large, existing methods produce visual defects such as implausible image semantics and incoherent boundaries. To solve this problem, an end-to-end image inpainting model with a U-Net structure based on dense blocks was proposed to inpaint semantic faces under arbitrary masks. Firstly, the idea of generative adversarial networks was adopted: in the generator, the convolutional layers of U-Net were replaced with dense blocks to capture the semantic information of the missing regions and to reuse the features of previous layers. Then, skip connections were adopted to reduce the information loss caused by down-sampling, so as to extract the semantics of the missing regions. Finally, the generator was trained with a joint loss combining adversarial loss, content loss and local Total Variation (TV) loss to ensure visual consistency between the inpainted boundary and the surrounding real image, while the discriminator was trained with hinge loss. The proposed model was compared with Globally and Locally Consistent image completion (GLC), Deep Fusion (DF) and Gated Convolution (GC) on the CelebA-HQ face dataset. Experimental results show that the proposed model effectively extracts the semantic information of face images, and its inpainting results have naturally transitioning boundaries and clear local details. Compared with the second-best method GC, the proposed model improves the Structural SIMilarity (SSIM) index and Peak Signal-to-Noise Ratio (PSNR) by 5.68% and 7.87% respectively and reduces the Fréchet Inception Distance (FID) by 7.86% for central masks; for random masks, SSIM and PSNR are improved by 7.06% and 4.80% respectively while FID is reduced by 6.85%.
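The joint training objective described above can be sketched in PyTorch as follows; the weighting coefficients and the restriction of the TV term to the masked region are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of a joint inpainting loss: content + adversarial + local TV for the
# generator, hinge loss for the discriminator.
import torch
import torch.nn.functional as F

def local_tv_loss(img, mask):
    """Total variation restricted to the inpainted (mask == 1) region."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs() * mask[:, :, 1:, :]
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs() * mask[:, :, :, 1:]
    return dh.mean() + dw.mean()

def generator_loss(d_fake, fake, real, mask, w_adv=0.01, w_tv=0.1):
    adv = -d_fake.mean()                 # hinge-style generator term
    content = F.l1_loss(fake, real)      # content (reconstruction) loss
    tv = local_tv_loss(fake, mask)       # smoothness around the filled region
    return content + w_adv * adv + w_tv * tv

def discriminator_hinge_loss(d_real, d_fake):
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()
```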
Lightweight face liveness detection method based on multi-modal feature fusion
PI Jiatian, YANG Jiezhi, YANG Linxi, PENG Mingjie, DENG Xiong, ZHAO Lijun, TANG Wanmei, WU Zhiyou
2020, 40(12): 3658-3665. DOI: 10.11772/j.issn.1001-9081.2020050660
Face liveness detection is an important part of the face recognition pipeline and is particularly important for the security of identity verification. To counter spoofing attacks such as photos, videos, masks, hoods and head models in the face recognition process, the RGB and depth maps of the face were collected with an Intel RealSense camera, and a lightweight feature-fusion liveness detection network based on MobileNetV3 was proposed, which fuses the features of the depth map and the RGB map and is trained end to end. To reduce the large number of parameters typical of deep learning models and to distinguish regions of different importance at the network tail, a Streaming Module was introduced at the tail of the network. Experiments were performed on the CASIA-SURF dataset and the constructed CQNU-LN dataset. The results show that, on both datasets, the proposed method achieves 95% TPR@FPR=10^-4 (True Positive Rate at a False Positive Rate of 10^-4), which is increased by 0.1% and 0.05% respectively compared with ShuffleNet, the most accurate comparison method. On the constructed CQNU-3Dmask dataset, the proposed method reaches 95.2% TPR@FPR=10^-4, which is improved by 0.9% and 6.5% respectively compared with training on RGB maps only and on depth maps only. In addition, the proposed model has only 1.8 MB of parameters and about 1.5×10^6 FLoating-point Operations (FLOPs). The proposed method can perform accurate and real-time liveness detection on extracted face targets in practical applications.
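A two-stream RGB + depth fusion classifier in the general spirit of this abstract can be sketched with torchvision's MobileNetV3-Small backbones as below; the concatenation fusion head and layer sizes are illustrative assumptions, and the paper's Streaming Module is not reproduced.

```python
# Hedged sketch of RGB + depth feature fusion on MobileNetV3-Small backbones.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class RGBDLivenessNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.rgb_branch = mobilenet_v3_small(weights=None).features
        self.depth_branch = mobilenet_v3_small(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Sequential(
            nn.Linear(576 * 2, 128), nn.Hardswish(), nn.Dropout(0.2),
            nn.Linear(128, num_classes))

    def forward(self, rgb, depth):
        # The single-channel depth map is repeated to 3 channels so it fits the
        # stock first convolution; a dedicated stem would also be reasonable.
        f_rgb = self.pool(self.rgb_branch(rgb)).flatten(1)
        f_depth = self.pool(self.depth_branch(depth.repeat(1, 3, 1, 1))).flatten(1)
        return self.classifier(torch.cat([f_rgb, f_depth], dim=1))

# Usage: net = RGBDLivenessNet()
#        logits = net(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224))
```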
Face recognition security system based on liveness detection and authentication
CHEN Fang, LIU Xiaorui, YANG Mingye
2020, 40(12): 3666-3672. DOI: 10.11772/j.issn.1001-9081.2020040478
Face recognition is widely applied in practical scenarios such as access control due to its convenience and practicability, but it is vulnerable to various spoofing attacks (such as photo attacks and video attacks). Liveness detection based on deep Convolutional Neural Networks (CNN) can solve this problem, but suffers from high computational cost, unfriendly interaction and difficult deployment on embedded devices. Therefore, a real-time and lightweight security classification method for face recognition was proposed. A face liveness detection algorithm based on color and texture analysis was integrated with a face authentication algorithm, yielding a face recognition algorithm that performs liveness detection and authentication with a monocular camera and without user cooperation. The proposed algorithm supports real-time face recognition with a higher liveness recognition rate and better robustness. To validate its performance, the Chinese Academy of Sciences Institute of Automation Face Anti-Spoofing Dataset (CASIA-FASD) and the Replay-Attack dataset were used as benchmarks. The experimental results show that, in liveness detection, the proposed algorithm achieves a Half Total Error Rate (HTER) of 9.7% and an Equal Error Rate (EER) of 5.5%, and takes 0.12 s to process one frame in the whole pipeline, which verifies its feasibility and effectiveness.
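The abstract does not specify the exact color and texture descriptors used; as one common instantiation of "color and texture analysis" for liveness detection, the sketch below computes per-channel LBP histograms in the YCbCr and HSV color spaces and feeds them to an SVM. All descriptor and classifier choices here are illustrative assumptions.

```python
# Hedged sketch of a color-texture liveness feature: per-channel LBP histograms + SVM.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def color_texture_feature(bgr_face, radius=1, points=8):
    feats = []
    for space in (cv2.COLOR_BGR2YCrCb, cv2.COLOR_BGR2HSV):
        converted = cv2.cvtColor(bgr_face, space)
        for ch in cv2.split(converted):
            lbp = local_binary_pattern(ch, points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)

# X_train: list of cropped face images (BGR); y_train: 1 = live, 0 = spoof.
# clf = SVC(kernel="linear", probability=True)
# clf.fit([color_texture_feature(f) for f in X_train], y_train)
```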
Palm vein image recognition based on side chain connected convolution neural network
LOU Mengying, WANG Tianjing, LIU Yaqin, YANG Feng, HUANG Jing
2020, 40(12): 3673-3678. DOI: 10.11772/j.issn.1001-9081.2020050667
To overcome the performance degradation of palm vein recognition systems caused by the small number and uneven quality of palm vein images, a palm vein image recognition method based on a side chain connected convolutional neural network was proposed. Firstly, palm vein features were extracted by the convolution and pooling layers of a ResNet-based model. Secondly, the Exponential Linear Unit (ELU) activation function, Batch Normalization (BN) and Dropout were used to improve and optimize the model, so as to alleviate gradient vanishing, prevent overfitting, speed up convergence and enhance the generalization ability of the model. Finally, a Densely Connected Network (DenseNet) was introduced to make the extracted palm vein features richer and more effective. Experimental results on two public databases and one self-built database show that the recognition rates of the proposed method on the three databases are 99.98%, 97.95% and 97.96% respectively, indicating that the proposed method can effectively improve the performance of palm vein recognition systems and is suitable for practical palm vein recognition applications.
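A PyTorch sketch of the kind of building block the abstract suggests is shown below: a ResNet-style residual path refined with ELU, BatchNorm and Dropout, plus a DenseNet-style concatenated side chain. Channel counts and the dropout rate are illustrative, not the paper's settings.

```python
# Hedged sketch of a residual block with an additional dense (concatenated) side chain.
import torch
import torch.nn as nn

class SideChainBlock(nn.Module):
    def __init__(self, channels=64, growth=32, p_drop=0.2):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ELU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.dense = nn.Sequential(                   # side chain producing new features
            nn.Conv2d(channels, growth, 3, padding=1, bias=False),
            nn.BatchNorm2d(growth), nn.ELU())
        self.act = nn.ELU()

    def forward(self, x):
        res = self.act(x + self.residual(x))          # residual (ResNet) path
        return torch.cat([res, self.dense(x)], dim=1) # dense (DenseNet) concatenation

# Usage: block = SideChainBlock(); y = block(torch.randn(1, 64, 56, 56))  # y: (1, 96, 56, 56)
```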
Power station rotary switch status recognition based on YOLO-tiny-RFB model
SHI Meng'an, LU Zhenyu
2020, 40(12): 3679-3686. DOI: 10.11772/j.issn.1001-9081.2020071084
Data samples were always limited for multi-category object detection in some specific scenes. In order to improve the stability and accuracy of the light-weight neural networks for small object recognition in robotic system, an object status recognition module based on Robotic Operating System (ROS) was designed. Firstly, considering the computing power limitation of embedded devices, a lightweight network YOLO-tiny was used as the main architecture of object recognition model, then the Respective Field Block (RFB) was introduced in YOLO-tiny, so as to construct the YOLO-tiny-RFB model. Secondly, MobileNet was employed to conduct an accurate classification of multiple statuses of rotary switches. Finally, the data association rules were designed, and algorithms such as image alignment and Intersection Over Union (IOU) calculation were used to make the recognition module complete the fusion of multiple recognition results of the same scene, so that users were able to track the statuses of each meter at different times. Experimental results show that on the constructed power station instrument recognition dataset, compared with the YOLO-tiny, the YOLO-tiny-RFB model increases the object recognition mean Average Precision (mAP) by 17.9%, which is achieved to 82.4% with a small increase in computational load of model. In the case of extremely unbalanced rotary switch data distribution, the average accuracy of model reaches 90.7% by introducing various data enhancement methods. The proposed object detection module and status recognition network model can complete the status recognition of all kinds of instruments effectively and accurately, meanwhile they can fuse the recognition results of instrument status at different times.
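The IOU calculation and a simple IOU-based association of detections from two passes over the same scene, in the general spirit of the fusion step above, can be sketched as follows; the threshold and the (x1, y1, x2, y2) box format are illustrative assumptions.

```python
# Minimal sketch: IOU between axis-aligned boxes and greedy IOU-based association.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def associate(dets_t0, dets_t1, thresh=0.5):
    """Greedily match boxes detected at time t0 to boxes detected at time t1 by IOU."""
    matches, used = [], set()
    for i, a in enumerate(dets_t0):
        best_j, best_iou = -1, thresh
        for j, b in enumerate(dets_t1):
            if j in used:
                continue
            score = iou(a["box"], b["box"])
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j >= 0:
            used.add(best_j)
            matches.append((i, best_j, best_iou))
    return matches

# Example: associate([{"box": (10, 10, 50, 50)}], [{"box": (12, 11, 52, 49)}])
```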
Frontier & interdisciplinary applications
Joint optimization of picking operation based on nested genetic algorithm
SUN Junyan, CHEN Zhirui, NIU Yaru, ZHANG Yuanyuan, HAN Fang
2020, 40(12): 3687-3694. DOI: 10.11772/j.issn.1001-9081.2020050639
It is difficult for the traditional approach of optimizing order batching and the picking path separately to obtain an overall optimal solution for picking operations in a logistics distribution center. To improve picking efficiency, a joint picking strategy based on a nested genetic algorithm for order batching and path optimization was proposed. Firstly, a joint optimization model of order batching and picking path was established with the shortest total picking time as the objective function. Then, considering the complexity of the two nested optimizations, a nested genetic algorithm was designed to solve the model: the order batching result was continuously optimized in the outer layer, and the picking path was optimized in the inner layer according to the outer-layer batching result. Results on the examples show that, compared with the traditional strategies of step-by-step order optimization and step-by-step optimization in batches, the proposed strategy reduces the picking time by 45.6% and 6% respectively, and the joint optimization model based on the nested genetic algorithm yields shorter picking paths and less picking time. To verify that the proposed algorithm performs well on orders of different sizes, simulation experiments were performed on examples with 10, 20 and 50 orders respectively. The results show that, as the order quantity increases, the overall picking distance and time are further reduced, and the reduction in picking time rises from 6% to 7.2%. The joint optimization model of picking operations based on the nested genetic algorithm and its solution algorithm can effectively solve the joint optimization problem of order batching and picking path, providing a basis for the optimization of picking systems in distribution centers.
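The nested structure can be sketched as below: an outer genetic algorithm searches over order-to-batch assignments, and for each candidate batching an inner routine evaluates the picking route per batch (here a simple nearest-neighbour tour stands in for the inner optimization). Item coordinates, batch count and GA parameters are all illustrative assumptions.

```python
# Hedged sketch of a nested optimization: outer GA over batching, inner route per batch.
import random

ORDERS = {i: [(random.randint(0, 20), random.randint(0, 10)) for _ in range(3)]
          for i in range(10)}                 # order id -> picking locations (toy data)
NUM_BATCHES, DEPOT = 3, (0, 0)

def route_length(points):
    """Inner step: nearest-neighbour tour from the depot over all points (Manhattan)."""
    remaining, pos, dist = list(points), DEPOT, 0.0
    while remaining:
        nxt = min(remaining, key=lambda p: abs(p[0] - pos[0]) + abs(p[1] - pos[1]))
        dist += abs(nxt[0] - pos[0]) + abs(nxt[1] - pos[1])
        pos = nxt
        remaining.remove(nxt)
    return dist + abs(pos[0] - DEPOT[0]) + abs(pos[1] - DEPOT[1])

def fitness(assign):
    """Total picking distance of a batching (lower is better)."""
    total = 0.0
    for b in range(NUM_BATCHES):
        pts = [p for o, batch in enumerate(assign) if batch == b for p in ORDERS[o]]
        if pts:
            total += route_length(pts)
    return total

def outer_ga(pop_size=30, generations=100, mut_rate=0.1):
    pop = [[random.randrange(NUM_BATCHES) for _ in ORDERS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(ORDERS))
            child = a[:cut] + b[cut:]                       # one-point crossover
            if random.random() < mut_rate:                  # mutation: reassign one order
                child[random.randrange(len(ORDERS))] = random.randrange(NUM_BATCHES)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)

# best_assignment, best_distance = outer_ga()
```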
Pseudoinverse-based motion planning scheme for deviation correction of rail manipulator joint velocity
LI Kene, ZHANG Zeng, WANG Wenxin
2020, 40(12): 3695-3700. DOI: 10.11772/j.issn.1001-9081.2020040560
Aiming at the problem that the joint velocity of a rail manipulator deviates from the expected value during task execution, a pseudoinverse-based motion planning scheme for correcting the joint velocity deviation of a rail manipulator was proposed. Firstly, according to the joint angle state of the manipulator and the motion state of the end-effector, the pseudoinverse algorithm was used to resolve the redundancy of the rail manipulator at the velocity level. Secondly, a time-varying function was designed to constrain and adjust the joint velocity, making the deviated joint velocity converge to the expected value. Thirdly, an error correction method was employed to reduce the position error of the end-effector and ensure successful execution of the trajectory tracking task. Finally, the motion planning scheme was simulated in MATLAB, taking a four-link redundant manipulator with a base moving along a straight line and along a circle as examples. The simulation results show that the proposed motion planning scheme can correct the joint velocity of the rail manipulator when it deviates from the expected value during task execution, and enables the end-effector to achieve higher trajectory tracking accuracy.
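The velocity-level pseudoinverse resolution can be illustrated as below; the planar three-link Jacobian and the exponential blending profile are stand-ins for the paper's rail-manipulator model and its designed time-varying function, chosen only to show the overall shape of the computation.

```python
# Hedged NumPy sketch: joint velocity via Jacobian pseudoinverse plus a time-varying
# blend that pulls a deviated joint velocity back toward the pseudoinverse solution.
import numpy as np

L = np.array([1.0, 0.8, 0.6])                     # assumed link lengths

def jacobian(theta):
    """Position Jacobian of a planar 3-link arm (angles accumulate along the chain)."""
    c = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def corrected_joint_velocity(theta, xdot, dq_measured, t, lam=2.0):
    """Blend the measured (deviated) joint velocity toward the pseudoinverse solution
    with a time-varying weight s(t) = 1 - exp(-lam * t)."""
    dq_ref = np.linalg.pinv(jacobian(theta)) @ xdot     # velocity-level resolution
    s = 1.0 - np.exp(-lam * t)
    return (1.0 - s) * dq_measured + s * dq_ref

# Example step:
# corrected_joint_velocity(np.array([0.1, 0.3, -0.2]), np.array([0.05, 0.0]),
#                          dq_measured=np.zeros(3), t=0.5)
```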
Extended target tracking algorithm based on ET-PHD filter and variational Bayesian approximation
HE Xiangyu, LI Jing, YANG Shuqiang, XIA Yujie
2020, 40(12): 3701-3706. DOI: 10.11772/j.issn.1001-9081.2020040451
Aiming at the problem of tracking multiple extended targets under unknown measurement noise covariance, an extension of the standard Extended Target Probability Hypothesis Density (ET-PHD) filter and its analytical implementation were proposed by combining the ET-PHD filter with Variational Bayesian (VB) approximation theory. Firstly, based on the target state and measurement equations of the standard ET-PHD filter, the augmented state variable composed of the target state and the measurement noise covariance, as well as the joint transition function of these variables, were defined. Then, the prediction and update equations of the extended ET-PHD filter were established on the basis of the standard ET-PHD filter. Finally, under linear Gaussian assumptions, the joint posterior intensity function was expressed as a mixture of Gaussian and Inverse-Gamma (IG) distributions, realizing the analytical implementation of the extended ET-PHD filter. Simulation results demonstrate that the proposed algorithm obtains reliable tracking results and can effectively track multiple extended targets when the measurement noise covariance is unknown.
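The core variational-Bayes idea behind such filters can be illustrated for a single Gaussian component: the unknown diagonal measurement-noise variances are modelled as Inverse-Gamma and refined by fixed-point iteration during the measurement update, in the style of VB adaptive Kalman filtering. The sketch below is a simplified single-target illustration under assumed parameters, not the paper's ET-PHD recursion.

```python
# Hedged NumPy sketch: VB measurement update with Inverse-Gamma noise variances.
import numpy as np

def vb_update(m, P, y, H, alpha, beta, n_iter=5):
    """m, P: predicted state mean/covariance; y: measurement; H: measurement matrix;
    alpha, beta: Inverse-Gamma parameters of each measurement-noise variance."""
    alpha = alpha + 0.5                          # one measurement per dimension
    beta_new = beta.copy()
    for _ in range(n_iter):
        R = np.diag(beta_new / alpha)            # current noise-covariance estimate
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        m_new = m + K @ (y - H @ m)
        P_new = P - K @ S @ K.T
        resid = y - H @ m_new
        beta_new = beta + 0.5 * (resid ** 2 + np.diag(H @ P_new @ H.T))
    return m_new, P_new, alpha, beta_new

# Example with a 2D position measured directly:
# m0, P0 = np.zeros(2), np.eye(2)
# vb_update(m0, P0, np.array([0.3, -0.1]), np.eye(2), np.ones(2) * 2.0, np.ones(2) * 0.5)
```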
Superintended by: Sichuan Associations for Science and Technology
Sponsored by: Sichuan Computer Federation; Chengdu Branch, Chinese Academy of Sciences
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao, XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address: No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803, 028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn