
Table of Contents

    10 November 2020, Volume 40 Issue 11
    Artificial intelligence
    Social event participation prediction based on event description
    SUN Heli, SUN Yuzhu, ZHANG Xiaoyun
    2020, 40(11):  3101-3106.  DOI: 10.11772/j.issn.1001-9081.2020030418
    In the related research of Event Based Social Networks (EBSNs), it is difficult to predict the participation of social events based on event description. The related studies are very limited, and the research difficulty mainly comes from the evaluation subjectivity of event description and limitations of language modeling algorithms. To solve these problems, first the concepts of successful event, similar event and event similarity were defined. Based on these concepts, the social data collected from the Meetup platform was extracted. At the same time, the analysis and prediction methods based on Lasso regression, Convolutional Neural Network (CNN) and Gated Recurrent Neural Network (GRNN) were separately designed. In the experiment, part of the extracted data was selected to train the three models, and the remaining data was used for the analysis and prediction. The results showed that, compared with the events without event description, the prediction accuracy of the events processed by the Lasso regression model was improved by 2.35% to 3.8% in different classifiers, and the prediction accuracy of the events processed by the GRNN model was improved by 4.5% to 8.9%, and the result of the CNN model processing was not ideal. This study proves that event description can improve event participation, and the GRNN model has the highest prediction accuracy among the three models.
    Deep transfer adaptation network based on improved maximum mean discrepancy algorithm
    ZHENG Zongsheng, HU Chenyu, JIANG Xiaoyi
    2020, 40(11):  3107-3112.  DOI: 10.11772/j.issn.1001-9081.2020020263
In the study of model-parameter-based transfer learning, both the sample distribution discrepancy between the two domains and the co-adaptation between convolutional layers of the source model affect the performance of the model. In response to these problems, a Multi-Convolution Adaptation (MCA) deep transfer framework was proposed and applied to the grade classification of typhoons in satellite cloud images, and a CE-MMD loss function was defined by adding the improved L-MMD (Maximum Mean Discrepancy) algorithm as a regular term to the cross-entropy function and applying linear unbiased estimation to the distribution of the samples in Reproducing Kernel Hilbert Space (RKHS). In the back propagation process, the residual error and the distribution discrepancy between the samples in the two domains were used as common indexes to update the network parameters, making the model converge faster and reach higher accuracy. Comparison experimental results of L-MMD and two measurement algorithms, Bregman divergence and KL (Kullback-Leibler) divergence, on the self-built typhoon dataset show that the precision of the proposed algorithm is improved by 11.76 percentage points and 8.05 percentage points respectively compared to those of the other two algorithms. This verifies that L-MMD is superior to the other measurement algorithms and that the MCA deep transfer framework is feasible.
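For readers unfamiliar with the regular term involved, the following is a minimal NumPy sketch of the standard linear-time unbiased MMD estimator with a Gaussian kernel, which a CE-MMD-style loss adds to the cross-entropy; the paper's specific L-MMD modification is not reproduced here, and the function names and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """RBF kernel between paired rows of two sample batches."""
    d2 = np.sum((a - b) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def linear_mmd2(xs, xt, sigma=1.0):
    """Linear-time unbiased estimate of squared MMD between
    source samples xs and target samples xt (Gretton et al.)."""
    m = min(len(xs), len(xt)) // 2 * 2          # use an even number of samples
    x1, x2 = xs[0:m:2], xs[1:m:2]
    y1, y2 = xt[0:m:2], xt[1:m:2]
    h = (gaussian_kernel(x1, x2, sigma) + gaussian_kernel(y1, y2, sigma)
         - gaussian_kernel(x1, y2, sigma) - gaussian_kernel(x2, y1, sigma))
    return h.mean()

def ce_mmd_loss(cross_entropy, xs_feat, xt_feat, lam=0.5):
    """Hypothetical CE-MMD objective: cross-entropy plus an MMD regular term."""
    return cross_entropy + lam * linear_mmd2(xs_feat, xt_feat)
```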
    Learning monkey algorithm based on Lagrange interpolation to solve discounted {0-1} knapsack problem
    XU Xiaoping, XU Li, WANG Feng, LIU Long
    2020, 40(11):  3113-3118.  DOI: 10.11772/j.issn.1001-9081.2020040482
The purpose of the Discounted {0-1} Knapsack Problem (D{0-1}KP) is to maximize the sum of the value coefficients of all items loaded into the knapsack without exceeding the weight limit of the knapsack. In order to address the low accuracy of existing algorithms on large-scale, high-complexity D{0-1}KP instances, the Lagrange interpolation based learning monkey algorithm (LSTMA) was proposed. Firstly, the length of the visual field was redefined during the look process of the basic monkey algorithm. Then, the best individual in the population was introduced as the second pivot point and the search mechanism was adjusted during the jump process. Finally, the Lagrange interpolation operation was introduced after the jump process to improve the search performance of the algorithm. The simulation results on four types of instances show that LSTMA solves the D{0-1}KP with higher accuracy than the comparison algorithms and has good robustness.
    Dynamic cooperative random drift particle swarm optimization algorithm assisted by evolution information
    ZHAO Ji, CHENG Cheng
    2020, 40(11):  3119-3126.  DOI: 10.11772/j.issn.1001-9081.2020040481
A dynamic Cooperative Random Drift Particle Swarm Optimization (CRDPSO) algorithm assisted by evolution information was proposed in order to improve the population diversity of random drift particle swarm optimization. By using the vector information of context particles, the population diversity was increased through dynamic cooperation between the particles, which improved the search ability of the swarm and made the whole swarm cooperatively search for the global optimum. At the same time, at each iteration during evolution, the positions and the fitness values of the evaluated solutions were stored in a binary space partitioning tree structure archive, which enabled fast fitness function approximation. Because the fitness function approximation enhanced the mutation strategy, the mutation was adaptive and nonparametric. CRDPSO was compared with Differential Evolution (DE), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), continuous Non-revisiting Genetic Algorithm (cNrGA) and three improved Quantum-behaved Particle Swarm Optimization (QPSO) algorithms on a series of standard test functions. Experimental results show that the performance of CRDPSO is the best on both unimodal and multimodal test functions, which proves the effectiveness of the algorithm.
    Particle flow filter algorithm based on “innovation error”
    ZHOU Deyun, LIU Bin, SU Qian
    2020, 40(11):  3127-3132.  DOI: 10.11772/j.issn.1001-9081.2020030402
There exist some problems in the process of Particle Filter (PF), such as particle weight degeneracy, the curse of dimensionality, and high computational cost. By constructing a logarithmic homotopy function, the particle flow filter can avoid the problem of particle weight degeneracy, but it relies too much on the observation equation when solving the boundary value problem and performs poorly when the noise is high. To address these problems, an improved particle flow filter algorithm was proposed. Firstly, an "innovation error" structure was introduced into the particle flow process, so that the update of each particle became independent. Then, the Galerkin finite element method was utilized to obtain the numerical solution of the boundary value problem, so as to avoid the numerical instability that may be caused by fitting the sample prior. Finally, the performance of the improved algorithm was tested on a common nonlinear filter model and a maneuvering target tracking model. Simulation results show that the improved algorithm suppresses the dependence of the system on observation information, keeps relatively good results as the noise increases, and effectively improves the filtering accuracy; in multi-dimensional target tracking cases, its computational efficiency and filtering accuracy are higher than those of the standard particle filter.
    Newton-soft threshold iteration algorithm for robust principal component analysis
    WANG Haipeng, JIANG Ailian, LI Pengxiang
    2020, 40(11):  3133-3138.  DOI: 10.11772/j.issn.1001-9081.2020030375
Aiming at the Robust Principal Component Analysis (RPCA) problem, a Newton-Soft Threshold Iteration (NSTI) algorithm was proposed to reduce the time complexity of RPCA algorithms. Firstly, the NSTI model was constructed by using the sum of the Frobenius norm of the low-rank matrix and the l1-norm of the sparse matrix. Secondly, two different optimization methods were used to calculate different parts of the model at the same time: Newton's method was used to quickly compute the low-rank matrix, and the soft threshold iteration algorithm was used to quickly compute the sparse matrix. The decomposition of the original data into a low-rank matrix and a sparse matrix was obtained by alternating between the two optimization methods. Finally, the low-rank features of the original data were obtained. Under the condition that the data scale is 5 000×5 000 and the rank of the low-rank matrix is 20, NSTI improves the time efficiency by 24.6% and 45.5% compared with the Gradient Descent (GD) algorithm and the Low-Rank Matrix Fitting (LMaFit) algorithm. For foreground and background separation of a 180-frame video, NSTI takes 3.63 s, with time efficiency 78.7% and 82.1% higher than that of GD and LMaFit respectively. In the image denoising experiment, NSTI takes 0.244 s and the residual error between the image processed by NSTI and the original image is 0.381 3; its time efficiency and accuracy are 64.3% and 45.3% better respectively than those of GD and LMaFit. Experimental results prove that NSTI can effectively solve the RPCA problem and improve the time efficiency of RPCA.
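As an illustration of the soft-threshold half of the scheme, here is a minimal NumPy sketch of an alternating low-rank/sparse split; the element-wise soft-thresholding matches the description above, while a truncated SVD stands in for the paper's Newton step, so this is a simplified stand-in rather than the NSTI algorithm itself.

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_alternating(D, rank=20, tau=0.1, n_iter=50):
    """Toy alternating RPCA split D ~ L + S. The paper solves the low-rank
    block with a Newton step; here a rank-r truncated SVD stands in for it,
    while the sparse block uses the soft-threshold iteration as described."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank update (stand-in)
        S = soft_threshold(D - L, tau)             # sparse update
    return L, S
```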
    Multiple birth support vector machine based on Rescaled Hinge loss function
    LI Hui, YANG Zhixia
    2020, 40(11):  3139-3145.  DOI: 10.11772/j.issn.1001-9081.2020030381
As the performance of multi-classification learning models is affected by outliers, a Multiple Birth Support Vector Machine based on the Rescaled Hinge loss function (RHMBSVM) was proposed. First, the corresponding optimization problem was constructed by introducing a bounded non-convex Rescaled Hinge loss function. Then, conjugate function theory was used to perform an equivalent transformation of the optimization problem. Finally, a variable alternation strategy was used to form an iterative algorithm for solving the non-convex optimization problem. The penalty weight of each sample point was automatically adjusted during the solution process, so that the effect of outliers on the K hyperplanes was eliminated and the robustness was enhanced. The method of 5-fold cross-validation was used to complete the numerical experiments. Results show that, in the case of no outliers in the datasets, the accuracy of the proposed method is 1.11 percentage points higher than that of Multiple Birth Support Vector Machine (MBSVM) and 0.74 percentage points higher than that of the Robust Support Vector Machine based on the Rescaled Hinge loss function (RSVM-RHHQ); in the case of outliers in the datasets, the accuracy of the proposed method is 2.10 percentage points higher than that of MBSVM and 1.47 percentage points higher than that of RSVM-RHHQ. Experimental results verify the robustness of the proposed method in solving multi-classification problems with outliers.
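The bounded non-convex loss at the heart of the method can be written compactly; the sketch below follows the commonly used rescaled hinge form (a negative exponential of the hinge loss), with the parameter `eta` chosen arbitrarily, and does not reproduce the paper's full iterative solver.

```python
import numpy as np

def hinge(margin):
    """Classical hinge loss max(0, 1 - y*f(x)) given the margin y*f(x)."""
    return np.maximum(0.0, 1.0 - margin)

def rescaled_hinge(margin, eta=0.5):
    """Bounded, non-convex rescaled hinge loss:
    beta * (1 - exp(-eta * hinge(margin))), with beta = 1 / (1 - exp(-eta)).
    Large negative margins (outliers) saturate instead of growing linearly,
    which is what limits the influence of outliers on the K hyperplanes."""
    beta = 1.0 / (1.0 - np.exp(-eta))
    return beta * (1.0 - np.exp(-eta * hinge(margin)))

print(rescaled_hinge(np.array([2.0, 0.0, -5.0])))   # outlier loss is bounded
```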
    Convolution neural network model compression method based on pruning and tensor decomposition
    GONG Kaiqiang, ZHANG Chunmei, ZENG Guanghua
    2020, 40(11):  3146-3151.  DOI: 10.11772/j.issn.1001-9081.2020030362
Focused on the problem that the huge number of parameters and computations of Convolutional Neural Networks (CNNs) limits their application on resource-constrained devices such as embedded systems, a neural network compression method combining statistics-based network pruning with tensor decomposition was proposed. The core idea was to use the mean and variance as the basis for evaluating the weight contribution. Firstly, LeNet-5 was used as the pruning model: the mean and variance distributions of each convolutional layer of the network were clustered to separate out filters with weaker extracted features, and the retained filters were used to reconstruct the next convolutional layer. Secondly, the pruning method was combined with tensor decomposition to compress the Faster Region-based Convolutional Neural Network (Faster R-CNN): the pruning method was adopted for the low-dimensional convolutional layers, and the high-dimensional convolutional layers were decomposed into three cascaded convolutional layers. Finally, the compressed model was fine-tuned so that it reached the convergence state once again on the training set. Experimental results on the PASCAL VOC test set show that the proposed method reduces the storage space of the Faster R-CNN model by 54% while the decrease in accuracy is only 0.58%, and at the same time it achieves a 1.4 times speedup of forward computing on the Raspberry Pi 4B system, which is helpful for deploying deep CNN models on resource-constrained embedded devices.
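A toy illustration of the statistics-based pruning step described above might look as follows; the clustering criterion and the rule for choosing which cluster of filters to keep are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_filters(conv_weights, n_clusters=2):
    """Score each filter of a conv layer by the mean and variance of its
    weights, cluster the scores, and keep the cluster whose filters carry
    the stronger statistics.
    conv_weights: array of shape (out_channels, in_channels, kH, kW)."""
    flat = conv_weights.reshape(len(conv_weights), -1)
    stats = np.stack([flat.mean(axis=1), flat.var(axis=1)], axis=1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(stats)
    # keep the cluster with the larger average |mean| + variance score
    scores = np.abs(stats).sum(axis=1)
    keep_label = max(range(n_clusters), key=lambda c: scores[labels == c].mean())
    keep_idx = np.where(labels == keep_label)[0]
    return keep_idx   # indices of retained filters, used to rebuild the next layer
```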
    Multi-attribute group decision making method for probabilistic linguistic term set based on regret theory and distance from average solution method
    TONG Yuzhen, WANG Yingming
    2020, 40(11):  3152-3158.  DOI: 10.11772/j.issn.1001-9081.2020010131
For the group decision making problem with unknown attribute weights, a multi-attribute group decision making method combining Evaluation based on Distance from Average Solution (EDAS) and Probabilistic Linguistic Term Set (PLTS) was proposed, considering the decision maker's psychological behavior of regret avoidance. First, the entropy and cross entropy of PLTS were defined according to the properties of PLTS, and the attribute weight model was established. Second, the group satisfaction formula was extended to the PLTS environment and used for the calculation of the utility values in regret theory. Third, with the attribute weight model and the group satisfaction formula determined for PLTS, regret theory and the EDAS method were combined to propose a new multi-attribute decision making method, and the alternatives were ranked and selected. Finally, taking the ranking and selection of real Internet public opinion emergencies as an example, the proposed method was verified, and its effectiveness was demonstrated through comparative analysis.
    Activity semantic recognition method based on joint features and XGBoost
    GUO Maozu, ZHANG Bin, ZHAO Lingling, ZHANG Yu
    2020, 40(11):  3159-3165.  DOI: 10.11772/j.issn.1001-9081.2020030301
Current research on activity semantic recognition only extracts sequence features and periodic features in the time dimension, and lacks deep mining of spatial information. To solve these problems, an activity semantic recognition method based on joint features and eXtreme Gradient Boosting (XGBoost) was proposed. Firstly, the activity periodic features in the temporal information as well as the latitude and longitude features in the spatial information were extracted. Then the latitude and longitude information was used to extract the heat features of the spatial region based on the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The user activity semantics was represented by feature vectors combining these features. Finally, the activity semantic recognition model was established with the XGBoost algorithm from ensemble learning. On two public FourSquare check-in datasets, the model based on joint features achieves a 28 percentage point improvement in recognition accuracy compared to the model with only temporal features, and compared with the Context-Aware Hybrid (CAH) method and the Spatial Temporal Activity Preference (STAP) method, the proposed method improves the recognition accuracy by 30 percentage points and 5 percentage points respectively. Experimental results show that the proposed method is more accurate and effective on the problem of activity semantic recognition than the comparison methods.
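A minimal sketch of the joint-feature pipeline is given below, assuming check-in records come as temporal feature vectors plus latitude/longitude pairs; the DBSCAN parameters, the "heat" definition (cluster size) and the XGBoost hyper-parameters are illustrative rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from xgboost import XGBClassifier

def region_heat_features(lat_lon, eps=0.01, min_samples=5):
    """Cluster check-in coordinates with DBSCAN and attach a simple 'heat'
    feature: the size of the spatial cluster each record falls into
    (noise points get heat 0)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(lat_lon)
    sizes = {c: int(np.sum(labels == c)) for c in set(labels) if c != -1}
    heat = np.array([sizes.get(c, 0) for c in labels], dtype=float)
    return np.column_stack([lat_lon, heat])

def train_activity_model(temporal_feats, lat_lon, activity_labels):
    """Joint feature vector = temporal features + spatial heat features,
    fed to an XGBoost classifier."""
    X = np.hstack([temporal_feats, region_heat_features(lat_lon)])
    model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
    model.fit(X, activity_labels)
    return model
```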
    No-reference image quality assessment algorithm with enhanced adversarial learning
    CAO Yudong, CAI Xibiao
    2020, 40(11):  3166-3171.  DOI: 10.11772/j.issn.1001-9081.2020010012
To improve the performance of current No-Reference Image Quality Assessment (NR-IQA) methods, a no-reference image quality assessment algorithm with enhanced adversarial learning was proposed based on recent deep Generative Adversarial Network (GAN) technology. In the proposed algorithm, the adversarial learning was strengthened by improving the loss function and the structure of the network model, so as to output more reliable simulated "reference images" that mimic the human visual comparison process in the way a Full-Reference Image Quality Assessment (FR-IQA) method does. First, distorted images and undistorted original images were input to train the network model based on the enhanced adversarial learning. Then, a simulated reference image of the image to be tested was output by the trained model, and its deep convolution features were extracted. Finally, the deep convolution features of the simulated reference image and of the distorted image to be tested were merged and input into the trained quality assessment regression network, which output the assessment score of the image. The LIVE, TID2008 and TID2013 datasets were used for the experiments. Experimental results show that the overall performance of the proposed algorithm on image quality assessment is superior to that of the existing mainstream algorithms and is consistent with human subjective assessment.
    Environment sound recognition based on lightweight deep neural network
    YANG Lei, ZHAO Hongdong
    2020, 40(11):  3172-3177.  DOI: 10.11772/j.issn.1001-9081.2020030433
Existing Convolutional Neural Network (CNN) models contain a large number of redundant parameters. To address this problem, two lightweight network models named Fnet1 and Fnet2, based on the Fire module at the core of SqueezeNet, were proposed. Then, in view of the distributed data collection and processing characteristics of mobile terminals, a new network model named FnetDNN, which integrates Fnet2 with a Deep Neural Network (DNN) according to Dempster-Shafer (D-S) evidence theory, was proposed on the basis of Fnet2. Firstly, a neural network named Cnet with four convolutional layers was used as the benchmark, with Mel Frequency Cepstral Coefficient (MFCC) as the input feature, and Fnet1, Fnet2 and Cnet were analyzed in terms of network structure characteristics, computational cost, number of convolution kernel parameters and recognition accuracy. Results showed that Fnet1 used only 10.3% of the parameters of Cnet and reached a recognition accuracy of 86.7%. Secondly, MFCC and the global feature vector were input into the FnetDNN model, which improved the recognition accuracy to 94.4%. Experimental results indicate that the proposed Fnet network models can compress redundant parameters and be integrated with other networks, giving the models the ability to be extended.
    Human action recognition model based on tightly coupled spatiotemporal two-stream convolution neural network
    LI Qian, YANG Wenzhu, CHEN Xiangyang, YUAN Tongtong, WANG Yuxia
    2020, 40(11):  3178-3183.  DOI: 10.11772/j.issn.1001-9081.2020030399
    In consideration of the problems of low utilization rate of action information and insufficient attention of temporal information in video human action recognition, a human action recognition model based on tightly coupled spatiotemporal two-stream convolutional neural network was proposed. Firstly, two 2D convolutional neural networks were used to separately extract the spatial and temporal features in the video. Then, the forget gate module in the Long Short-Term Memory (LSTM) network was used to establish the feature-level tightly coupled connections between different sampled segments to achieve the transfer of information flow. After that, the Bi-directional Long Short-Term Memory (Bi-LSTM) network was used to evaluate the importance of each sampled segment and assign adaptive weight to it. Finally, the spatiotemporal two-stream features were combined to complete the human action recognition. The accuracy rates of this model on the datasets UCF101 and HMDB51 selected for the experiment and verification were 94.2% and 70.1% respectively. Experimental results show that the proposed model can effectively improve the utilization rate of temporal information and the ability of overall action representation, thus significantly improving the accuracy of human action recognition.
    Data science and technology
    Survey of large-scale resource description framework data partitioning methods in distributed environment
    YANG Cheng, LU Jiamin, FENG Jun
    2020, 40(11):  3184-3191.  DOI: 10.11772/j.issn.1001-9081.2020040539
With the rapid development of knowledge graphs and their wide usage in various vertical domains, the requirement for efficient processing of Resource Description Framework (RDF) data has increasingly become a new topic in the field of modern big data management. RDF is a data model proposed by W3C to describe knowledge graph entities and inter-entity relationships. In order to cope effectively with the storage and querying of large-scale RDF data, many scholars consider managing RDF data in a distributed environment. The key problem faced by distributed storage of RDF data is data partitioning, and the performance of SPARQL (Simple Protocol and RDF Query Language) queries is largely determined by the partitioning results. From the perspective of data partitioning, two types of methods, graph structure-based RDF data partitioning methods and semantics-based RDF data partitioning methods, were mainly focused on and described in depth. The former include multi-granularity hierarchical partitioning, template partitioning and clustering partitioning, and are suitable for general-domain query scenarios with wide semantic categories, while the latter include hash partitioning, vertical partitioning and pattern partitioning, and are more suitable for vertical-domain query environments with relatively fixed semantic categories. In addition, several typical partitioning methods were compared and analyzed to provide insights for future research on RDF data partitioning methods. Finally, future research directions of RDF data partitioning methods were summarized.
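To make the simplest of the listed strategies concrete, the following sketch hash-partitions triples by subject, which is the usual baseline form of hash partitioning; the worker count and URIs are made up for illustration.

```python
import hashlib

def hash_partition(triples, n_workers):
    """Minimal subject-hash partitioning of RDF triples: every (s, p, o) triple
    is routed to the worker selected by a hash of its subject, so that all
    triples sharing a subject are colocated for star-shaped SPARQL queries."""
    partitions = [[] for _ in range(n_workers)]
    for s, p, o in triples:
        worker = int(hashlib.md5(s.encode()).hexdigest(), 16) % n_workers
        partitions[worker].append((s, p, o))
    return partitions

triples = [("ex:Alice", "ex:knows", "ex:Bob"),
           ("ex:Bob", "ex:livesIn", "ex:Paris"),
           ("ex:Alice", "ex:livesIn", "ex:London")]
print([len(p) for p in hash_partition(triples, 4)])
```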
    Query extension based on deep semantic information
    LIU Gaojun, FANG Xiao, DUAN Jianyong
    2020, 40(11):  3192-3197.  DOI: 10.11772/j.issn.1001-9081.2020040473
    With the advent of the Internet era, search engines begin to be widely used. In the case of unpopular data, the search engine is unable to retrieve the required data due to the small range of the user's search term. At this time, the query extension system can effectively assist the search engine to provide the reliable services. Based on the query extension method of global document analysis, a semantic relevance model which combines the neural network model with the corpus containing semantic information was proposed to extract semantic information between words in a deeper level. This deep semantic information can provide more comprehensive and effective feature support for the query extension system, so as to analyze the extensible relationship between words. The local extensible word distribution was extracted from the semantic data such as thesaurus and language knowledge base "HowNet" sememe annotation information, and the local extensible word distribution of each word in corpus space was fitted to the global extensible word distribution by using the deep mining ability of the neural network model. In the comparison experiment with the query extension methods based on language model and thesaurus respectively, the query extension method based on semantic relevance model has a higher query extension efficiency; especially for the unpopular search data, the recall rate of semantic relevance model increases by 11.1 percentage points and 5.29 percentage points compared to those of the comparison methods respectively.
    Research team mining algorithm based on teacher-student relationship
    LI Shasha, LIANG Dongyang, YU Jie, JI Bin, MA Jun, TAN Yusong, WU Qingbo
    2020, 40(11):  3198-3202.  DOI: 10.11772/j.issn.1001-9081.2020040516
To mine research teams more rationally, a research team mining algorithm based on teacher-student relationships was proposed. First, the BiLSTM-CRF neural network model was used to extract teacher and classmate named entities from the acknowledgement sections of academic dissertations. Secondly, the guidance and cooperation network between teachers and students was constructed. Thirdly, the Louvain algorithm was improved, and a teacher-student relationship based Louvain algorithm was proposed to mine the research teams. Its performance was compared with the label propagation algorithm, the clustering coefficient algorithm and the original Louvain algorithm on datasets such as the American college football dataset. Moreover, the running efficiency of the teacher-student relationship based Louvain algorithm was compared with that of the original Louvain algorithm on three academic dissertation datasets of different scales. Experimental results show that the larger the data size, the more obvious the performance improvement of the teacher-student relationship based Louvain algorithm. Finally, the performance of the teacher-student relationship based Louvain algorithm was validated on the academic dissertation dataset of the National University of Defense Technology. Experimental results show that the research teams mined by the proposed algorithm are more reasonable than those obtained by the academic paper cooperation network based mining method in terms of team cooperation closeness, team scale, team internal relationships and team stability.
    Overlapping community detection method based on improved symmetric binary nonnegative matrix factorization
    CHENG Qiwei, CHEN Qimai, HE Chaobo, LIU Hai
    2020, 40(11):  3203-3210.  DOI: 10.11772/j.issn.1001-9081.2020020260
    To solve the problem of overlapping community detection in complex networks, many types of methods have been proposed, and Symmetric Binary Nonnegative Matrix Factorization (SBNMF) based overlapping community detection method is one of the most representative methods. However, SBNMF performs poorly when dealing with complex networks with sparse links within communities. In view of this, an Improved SBNMF (ISBNMF) based overlapping community detection method was proposed. Firstly, the factor matrix obtained by the symmetric nonnegative matrix factorization was used to construct a new network with dense links within communities. Then, the SBNMF model based on Frobenius norm was used to factorize the adjacency matrix of the new network. Finally, a binary matrix that can explicitly indicate the community membership of nodes was obtained by means of grid search method or gradient descent method. Extensive experiments were conducted on synthetic and real network datasets. The results show that ISBNMF performs better than SBNMF and other representative methods.
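For context, a minimal multiplicative-update implementation of symmetric NMF followed by a crude thresholding step is sketched below; the thresholding merely stands in for the grid search or gradient descent that ISBNMF uses to obtain the binary membership matrix, and the update rule shown is the standard one rather than the paper's Frobenius-norm SBNMF model.

```python
import numpy as np

def symmetric_nmf(A, k, n_iter=200, beta=0.5, eps=1e-9):
    """Multiplicative-update symmetric NMF: approximate the adjacency matrix A
    (n x n, nonnegative, symmetric) by H @ H.T with H >= 0 of shape (n, k)."""
    n = A.shape[0]
    H = np.abs(np.random.rand(n, k))
    for _ in range(n_iter):
        AH = A @ H
        HHtH = H @ (H.T @ H)
        H *= (1.0 - beta) + beta * AH / (HHtH + eps)
    return H

def binarize(H, threshold=0.5):
    """Crude stand-in for the grid-search/gradient step: threshold the
    column-normalized factor to get explicit community memberships."""
    Hn = H / (H.max(axis=0, keepdims=True) + 1e-12)
    return (Hn >= threshold).astype(int)
```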
    Multi-view spectral clustering algorithm based on shared nearest neighbor
    SONG Yan, YIN Jun
    2020, 40(11):  3211-3216.  DOI: 10.11772/j.issn.1001-9081.2020020228
In order to solve the problem that the similarity matrix constructed in spectral clustering algorithms cannot guarantee higher similarity between data points within the same cluster, a Multi-View spectral clustering algorithm based on Shared Nearest Neighbor (MV-SNN) was proposed. Firstly, the similarity between two data points with a large number of shared neighbors was increased, making the similarity between data points in the same cluster higher. Then, the improved similarity matrices of multiple views were integrated to obtain a global similarity matrix. Finally, considering that general spectral clustering methods still need the k-means clustering algorithm to divide the data points at a later stage, a rank constraint method for the Laplacian matrix was proposed to obtain the final cluster structure directly from the global similarity matrix. Experimental results show that compared with other multi-view spectral algorithms, MV-SNN improves the three clustering measures of accuracy, purity and normalized mutual information by 1%-20%, and reduces the clustering time by about 50%. It can be seen that MV-SNN can improve the clustering performance and reduce the clustering time.
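The shared-nearest-neighbour similarity on which the method builds can be computed as in the following sketch (brute-force distances, single view); how MV-SNN then fuses views and imposes the rank constraint is not shown, and the normalisation by k is an assumption.

```python
import numpy as np

def snn_similarity(X, k=10):
    """Shared-nearest-neighbour similarity: S[i, j] counts how many of the k
    nearest neighbours of i and j are shared, so points inside the same cluster
    get boosted relative to points that are merely close across clusters."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    knn = np.argsort(d, axis=1)[:, 1:k + 1]        # k nearest neighbours (self excluded)
    neighbour_sets = [set(row) for row in knn]
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            S[i, j] = S[j, i] = len(neighbour_sets[i] & neighbour_sets[j])
    return S / k    # normalise to [0, 1]
```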
    Gaussian mixture clustering algorithm combining elbow method and expectation-maximization for power system customer segmentation
    CHEN Yu, TIAN Bojin, PENG Yunzhu, LIAO Yong
    2020, 40(11):  3217-3223.  DOI: 10.11772/j.issn.1001-9081.2020050672
In order to further improve the user experience of power system customers, and aiming at the problems of poor optimization ability, lack of compactness and difficulty in determining the optimal number of clusters, a Gaussian mixture clustering algorithm combining the elbow method and Expectation-Maximization (EM) was proposed, which can mine the potential information in large amounts of customer data. Good clustering results were obtained through EM iterations. To overcome the shortcoming of the traditional Gaussian mixture clustering algorithm that the number of user clusters needs to be known in advance, the number of customer clusters was found reasonably by the elbow method. The case study shows that compared with the hierarchical clustering algorithm and the K-Means algorithm, the proposed algorithm increases both the FM (Fowlkes-Mallows) and AR (Adjusted-Rand) indexes by more than 10%, and decreases the Compactness Index (CI) and Degree of Separation (DS) by less than 15% and 25% respectively. It can be seen that the performance of the algorithm is greatly improved.
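A minimal scikit-learn sketch of the combination described above is given below; the elbow criterion used here (largest second difference of the negative log-likelihood curve) is one common choice and may differ from the paper's exact criterion.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def elbow_gmm(X, k_max=10, random_state=0):
    """Fit Gaussian mixtures by EM for K = 1..k_max and pick the elbow of the
    cost curve as the number of customer clusters."""
    costs = []
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=random_state).fit(X)
        costs.append(-gm.score(X) * len(X))          # total negative log-likelihood
    # elbow = point of largest curvature (second difference of the cost curve)
    second_diff = np.diff(costs, 2)
    best_k = int(np.argmax(second_diff)) + 2
    gm = GaussianMixture(n_components=best_k, random_state=random_state).fit(X)
    return best_k, gm.predict(X)
```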
    Cyber security
    In-vehicle CAN bus-off attack and its intrusion detection algorithm
    LI Zhongwei, TAN Kai, GUAN Yadong, JIANG Wenqi, YE Lin
    2020, 40(11):  3224-3228.  DOI: 10.11772/j.issn.1001-9081.2020040534
    As a new type of attack, the CAN (Controller Area Network) bus-off attack can force the node to generate communication errors continuously and disconnect from the CAN bus through the error handling mechanism of the CAN bus communication. Aiming at the security problem of in-vehicle CAN bus communication caused by the bus-off attack, an intrusion detection algorithm for the in-vehicle CAN bus-off attack was proposed. Firstly, the conditions and characteristics of the CAN bus-off attack were summarized. It was pointed out that the synchronous transmission of normal message and malicious message is the difficulty of realizing the bus-off attack. And the front-end message satisfying the condition of synchronous transmission was used to realize the bus-off attack. Secondly, the characteristics of the CAN bus-off attack were extracted. By accumulating the transmission number of error frames and according to the change of message transmission frequency, the detection of the CAN bus-off attack was realized. Finally, the CAN communication node based on STM32F407ZGT6 was used to simulate the Electronic Control Unit (ECU) in the vehicle, and the synchronous transmission of the malicious message and the attacked message was realized. The experiment of CAN bus-off attack and the verification of intrusion detection algorithm were carried out. Experimental results show that the detection rate of the algorithm for high priority malicious messages is more than 95%, so the algorithm can effectively protect the security of the in-vehicle CAN bus communication network.
    Detection method of physical-layer impersonation attack based on deep Q-network in edge computing
    YANG Jianxi, ZHANG Yuanli, JIANG Hua, ZHU Xiaochen
    2020, 40(11):  3229-3235.  DOI: 10.11772/j.issn.1001-9081.2020020179
In edge computing, the communication between edge computing nodes and terminal devices is vulnerable to impersonation attacks, so a physical-layer impersonation attack detection algorithm based on Deep Q-Network (DQN) was proposed. Firstly, an impersonation attack model was built in the edge computing network, a hypothesis test based on the physical-layer Channel State Information (CSI) was established by the receiver, and the Euclidean distance between the currently measured CSI and the last recorded CSI was taken as the test statistic. Secondly, for the dynamic environment of edge computing, the DQN algorithm was used to adaptively select the optimal test threshold with the goal of maximizing the gain of the receiver. Finally, whether the current sender was an impersonation attacker was determined by comparing the statistic with the test threshold. The simulation results show that the Signal-to-Interference plus Noise Ratio (SINR) and the channel gain ratio have a certain effect on the performance of the detection algorithm, but when the relative change of the channel gain is lower than 0.2, the false alarm rate, miss rate and average error rate of the algorithm are all less than 5%. Therefore, the detection algorithm is adaptive to the dynamic environment of edge computing.
    Identity-based dynamic clustering authentication algorithm for wireless sensor networks
    YUAN Chi
    2020, 40(11):  3236-3241.  DOI: 10.11772/j.issn.1001-9081.2020030400
    Aiming at the problem that the Wireless Sensor Network (WSN) is vulnerable to malicious attacks and the private key escrow problem caused by the existing identity-based cryptosystem, an Identity-based Dynamic Clustering authentication (IDC) algorithm was proposed. Firstly, PRivate Key Generator (PRKG) was avoided in the algorithm, the applicants' public key was generated by the PUblic Key Generator (PUKG), and the private key was chosen by the user himself separately, so that the key escrow problem in the identity-based cryptosystem was resolved completely. At the same time, the pseudo-secret matrix was generated dynamically by the algorithm, which could avoid collusion attacks so as to guarantee the security of the algorithm. Finally, in view of the differences in the resources owned by different nodes, the layered and hierarchical processing was used to complete the (un) signcryption at once, therefore reducing the node load of calculation and storage. The time consumption and energy consumption of the newly proposed IDC algorithm are reduced by more than 20% compared to those of the other three algorithms of the same type. In the term of algorithm robustness, when the network data packet increases rapidly, IDC algorithm performs more stably, which means the energy consumption is between 1 mJ and 10 mJ, with the span not more than 1.3 mJ. The time consumption of the algorithm is between 0.002 s and 0.006 s. Simulation experiments show that the newly proposed IDC algorithm is more suitable for the WSN with strict requirements on safety and energy consumption.
    High-precision histogram publishing method based on differential privacy
    LI Kunming, WANG Chaoqian, NI Weiwei, BAO Xiaohan
    2020, 40(11):  3242-3248.  DOI: 10.11772/j.issn.1001-9081.2020030379
    Aiming at the problem that the existing privacy protection histogram publishing methods based on grouping to suppress differential noise errors cannot effectively balance the group approximation error and the Differential Privacy (DP) Laplacian error, resulting in the lack of histogram availability, a High-Precision Histogram Publishing method (HPHP) was proposed. First, the constraint inference method was used to achieve the histogram ordering under the premise of satisfying the DP constraints. Then, based on the ordered histogram, the dynamic programming grouping method was used to generate groups with the smallest total error on the noise-added histogram. Finally, the Laplacian noise was added to each group mean. For the convenience of comparative analysis, the privacy protection histogram publishing method with the theoretical minimum error (Optimal) was proposed. Experimental analysis results between HPHP, DP method with noise added directly, AHP (Accurate Histogram Publication) method and Optimal show that the Kullback-Leibler Divergence (KLD) of the histogram published by HPHP is reduced by 90% compared to that of AHP method and is close to the effect of Optimal. In conclusion, under the same pre-conditions, HPHP can publish higher-precision histograms on the premise of ensuring DP.
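A toy version of the grouped Laplace mechanism underlying such methods is sketched below; the grouping is taken as given, and the noise scale 1/epsilon is the basic Laplace-mechanism choice (count sensitivity 1) rather than HPHP's exact allocation across group means.

```python
import numpy as np

def publish_histogram(counts, epsilon, groups):
    """Grouped Laplace mechanism: replace each group of (sorted) histogram bins
    by its mean and add Laplace noise to every group mean.
    `groups` is a list of index lists produced by a grouping step."""
    noisy = np.empty(len(counts), dtype=float)
    for g in groups:
        mean = np.mean([counts[i] for i in g])
        noisy_mean = mean + np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        for i in g:
            noisy[i] = noisy_mean
    return noisy

# e.g. a sorted 6-bin histogram split into two groups
print(publish_histogram([3, 4, 5, 11, 12, 13], epsilon=1.0, groups=[[0, 1, 2], [3, 4, 5]]))
```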
    Privacy preservation algorithm of original data in mobile crowd sensing
    JIN Xin, WAN Taochun, LYU Chengmei, WANG Chengtian, CHEN Fulong, ZHAO Chuanxin
    2020, 40(11):  3249-3254.  DOI: 10.11772/j.issn.1001-9081.2020020236
With the popularity of mobile smart devices, Mobile Crowd Sensing (MCS) has been widely used while facing serious privacy leakage. Focusing on the issue that existing original-data privacy protection schemes are unable to resist collusion attacks and reduce the availability of the sensed data, a Data Privacy Protection algorithm based on Mobile Node (DPPMN) was proposed. Firstly, the node manager in DPPMN was used to establish an online node list and send it to the source node, and an anonymous path for data transmission was built by the source node using the list. Then, the data was encrypted with the Paillier encryption scheme, and the ciphertext was uploaded to the application server along the path. Finally, the required sensing data was obtained by the server through ciphertext decryption. The data was encrypted and decrypted during transmission, ensuring that an attacker can neither eavesdrop on the content of the sensing data nor trace its source along the path. DPPMN ensures that the application server can access the original data without invading the privacy of the nodes. Theoretical analysis and experimental results show that DPPMN achieves higher data security at the cost of a modest increase in communication, and can resist collusion attacks without affecting the availability of the data.
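The encrypt-before-forwarding step can be illustrated with the python-paillier (`phe`) library as below; the node-list management and anonymous routing of DPPMN are out of scope here, and the sample value is made up.

```python
# A minimal sketch of the encrypt-in-transit idea using python-paillier
# (`pip install phe`); relays only ever see ciphertext.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

sensed_value = 37                               # e.g. a reading on the source node
ciphertext = public_key.encrypt(sensed_value)   # source node encrypts before forwarding

# the application server holds the private key and decrypts on arrival
recovered = private_key.decrypt(ciphertext)
assert recovered == sensed_value
```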
    Design and implementation of fingerprint authentication terminal APP in mobile cloud environment based on TrustZone
    WANG Zhiheng, XU Yanyan
    2020, 40(11):  3255-3260.  DOI: 10.11772/j.issn.1001-9081.2020020273
Focused on the potential safety hazard of leakage of fingerprints and other biometrics in the cloud environment, as well as the lack of security or convenience in existing biometric authentication schemes, a trusted fingerprint authentication terminal APP based on orthogonal decomposition and TrustZone was designed and implemented. Sensitive operations such as fingerprint feature extraction and fingerprint template generation were executed in the trusted execution environment provided by the hardware isolation mechanism of TrustZone, isolating them from applications in the general execution environment so as to resist attacks by malicious programs and ensure the security of the authentication process. The fingerprint template generated on the basis of the orthogonal decomposition algorithm integrates random noise while retaining the matching ability, so that it can resist attacks against the feature template to a certain extent. As a result, the fingerprint template can be stored and transmitted in the cloud environment, the user and the device are unbound, and the convenience of biometric authentication is improved. Experiments and theoretical analysis show that the correlation and randomness of the fingerprint template of the proposed algorithm are higher than those of the original feature and random projection algorithms, so the algorithm has stronger security. In addition, the experimental results of time and storage overheads as well as recognition accuracy show that this APP takes both convenience and security into account, meeting the requirements of secure authentication in the mobile cloud environment.
    Computer software technology
    Software safety requirement analysis and verification method based on system theoretic process analysis
    QIN Nan, MA Liang, HUANG Rui
    2020, 40(11):  3261-3266.  DOI: 10.11772/j.issn.1001-9081.2020040548
There are two problems to be solved in the traditional System Theoretic Process Analysis (STPA) method: one is the lack of automated means of realization, and the other is the ambiguity caused by analyzing the results in natural language. To solve these problems, a software safety requirement analysis and verification method based on STPA was proposed. Firstly, the software safety requirements were extracted and converted into formal expressions by the algorithm. Secondly, a state diagram model was built to describe the logic of software safety control behaviors, and the logic was converted into a readable formal language. Finally, formal verification was carried out by model checking technology. The effectiveness of the method was verified by the case of a weapon launch control system. The results show that the proposed method can generate the safety requirements automatically and formally verify them, avoiding dependence on manual intervention and solving the natural language description problems of traditional methods.
    QoS verification of microservice composition platform based on model checking
    MAO Xinyi, NIU Jun, DING Xueer, ZHANG Kaile
    2020, 40(11):  3267-3272.  DOI: 10.11772/j.issn.1001-9081.2020030387
Concerning the problem that microservice composition platforms lack analysis and verification of Quality of Service (QoS) indicators, a formal verification method based on model checking was proposed to analyze and evaluate the factors that affect the performance of a microservice composition platform. First, the service resource configuration process of microservice composition was divided into three phases: service request, configuration and service execution. These three phases were implemented by three modules: the service request queue, the resource configurator of service requests and the virtual machine providing service resources. After that, the implementation processes of the three modules were modeled as Labelled Markov Reward Models (LMRM), and the global model of the microservice composition process was obtained by using a synchronization concept similar to that of process algebra. Then, logic formulas of continuous stochastic reward logic were used to describe the expected QoS indicators. Last, the formal model and logic formulas were used as the input of the model checking tool PRISM to obtain the verification results. Experimental results prove that LMRM can realize the QoS verification and analysis as well as the construction of the microservice composition platform.
    Data preprocessing method in software defect prediction
    PAN Chunxia, YANG Qiuhui, TAN Wukun, DENG Huixin, WU Jia
    2020, 40(11):  3273-3279.  DOI: 10.11772/j.issn.1001-9081.2020040464
Software defect prediction is a hot research topic in the field of software quality assurance, and the quality of defect prediction models is closely related to the training data. The datasets used for defect prediction mainly suffer from the problems of data feature selection and data class imbalance. For the data feature selection problem, common software development process features and the newly proposed extended process features were used, and a feature selection algorithm based on clustering analysis was then applied to perform feature selection. For the class imbalance problem, an improved Borderline-SMOTE (Borderline-Synthetic Minority Oversampling Technique) method was proposed to make the numbers of positive and negative samples in the training dataset relatively balanced and to make the characteristics of the synthesized samples more consistent with those of the actual samples. Experiments were performed on the open source datasets of projects such as bugzilla and jUnit. The results show that the used feature selection algorithm can reduce the model training time by 57.94% while keeping a high F-measure value; compared with the defect prediction model obtained with the original sample processing method, the model obtained with the improved Borderline-SMOTE method increases the Precision, Recall, F-measure and AUC (Area Under the Curve) by 2.36, 1.8, 2.13 and 2.36 percentage points on average respectively; the defect prediction model obtained by introducing the extended process features has an average improvement of 3.79% in F-measure value compared to the model without them; and compared with the models obtained by methods in the literature, the model obtained by the proposed method has an average increase of 15.79% in F-measure value. The experimental results prove that the proposed method can effectively improve the quality of defect prediction models.
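For reference, the off-the-shelf Borderline-SMOTE that the paper improves upon can be applied as in the following sketch, using imbalanced-learn and a synthetic dataset; the paper's modification to how synthetic samples are generated is not reproduced.

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import BorderlineSMOTE

# Toy imbalanced "defect" dataset: roughly 90% clean modules, 10% defective ones.
X, y = make_classification(n_samples=500, n_features=10, weights=[0.9, 0.1],
                           random_state=42)

# Standard Borderline-SMOTE oversamples the minority class near the class border.
X_res, y_res = BorderlineSMOTE(random_state=42).fit_resample(X, y)
print(Counter(y), Counter(y_res))   # minority class is oversampled towards balance
```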
    Virtual reality and multimedia computing
    Review of image edge detection algorithms based on deep learning
    LI Cuijin, QU Zhong
    2020, 40(11):  3280-3288.  DOI: 10.11772/j.issn.1001-9081.2020030314
Edge detection is the process of extracting the important information of mutations in an image. It is a research hotspot in the field of computer vision and the basis of many middle- and high-level vision tasks such as image segmentation, target detection and recognition. In recent years, in view of the problems of thick edge contour lines and low detection accuracy, edge detection algorithms based on deep learning, such as spectral clustering, multi-scale fusion and cross-layer fusion approaches, have been proposed by the community. In order to help more researchers understand the research status of edge detection, firstly, the theory and methods of traditional edge detection were introduced. Then, the main edge detection methods based on deep learning in recent years were summarized and classified according to their implementation technologies. The analysis of the key technologies of these methods shows that multi-scale multi-level fusion and the selection of the loss function are important research directions. Various methods were compared with each other through evaluation indicators: the Optimal Dataset Scale (ODS) of edge detection algorithms on the Berkeley Segmentation Data Set and benchmark 500 (BSDS500) has increased from 0.598 to 0.828, which is close to the level of human vision. Finally, the development directions of edge detection algorithm research were forecast.
    Fast mismatch elimination algorithm and map-building based on ORB-SLAM2 system
    XI Zhihong, WANG Hongxu, HAN Shuangquan
    2020, 40(11):  3289-3294.  DOI: 10.11772/j.issn.1001-9081.2020010092
To address the problems that the RANdom SAmple Consensus (RANSAC) algorithm in the ORB-SLAM2 system eliminates mismatches inefficiently because of its randomness, and that the ORB-SLAM2 system cannot build a dense point cloud map, the PROgressive SAmple Consensus (PROSAC) algorithm was adopted to improve mismatch elimination in ORB-SLAM2, and dense point cloud map and octree map building threads were added to the system. Firstly, unlike the RANSAC algorithm, in the PROSAC algorithm the feature points were pre-ordered according to an evaluation function, and the feature points with high evaluation quality were selected to solve the homography matrix; according to the solution of the homography matrix and the matching error threshold, the mismatches were eliminated. Secondly, the pose estimation and relocation of the camera were carried out according to the ORB-SLAM2 system. Finally, the dense point cloud map and the octree map were constructed from the selected key frames. According to the experimental results on the TUM dataset, the PROSAC algorithm takes about 50% of the time needed by the RANSAC algorithm to perform mismatch elimination on the same images, and the proposed system has absolute trajectory error and relative pose error basically consistent with those of ORB-SLAM2, showing good robustness. Besides, unlike the sparse point cloud map, the new maps can be directly used for robot navigation and path planning.
    Optimizing webcam-based eye tracking system via head pose analysis
    ZHAO Xinchen, YANG Nan
    2020, 40(11):  3295-3299.  DOI: 10.11772/j.issn.1001-9081.2020010008
    Real-time eye tracking technology is the key technology of intelligent eye movement operating system. Compared to the technology based on eye tracker, the technology based on webcam has the advantages of low cost and high universality. Aiming at the low accuracy problem of the existing webcam based algorithms only with the eye image features considered, an optimization technology for eye tracking algorithm with head pose analysis introduced was proposed. Firstly, the head pose features were constructed based on the results of facial feature point tracking to provide head pose context for the calibration data. Secondly, a new similarity algorithm was studied to calculate the similarity of the head pose context. Finally, during the eye tracking, the head pose similarity was used to filter the calibration data, and the data with higher head pose similarity to the current input frame was selected from the calibration dataset for prediction. A large number of experiments were carried out on the data of populations with different characteristics. The comparison experimental results show that compared with WebGazer, the proposed algorithm has the average error reduced by 58 to 63 px. The proposed algorithm can effectively improve the accuracy and stability of the tracking results, and expand the application scenarios of webcam in the field of eye tracking.
    Multi-level feature enhancement for real-time visual tracking
    FEI Dasheng, SONG Huihui, ZHANG Kaihua
    2020, 40(11):  3300-3305.  DOI: 10.11772/j.issn.1001-9081.2020040514
In order to solve the problem that the Fully-Convolutional Siamese visual tracking network (SiamFC) drifts and fails when interferers with similar semantic information appear, a Multi-level Feature Enhanced Siamese network (MFESiam) was designed to improve the robustness of the tracker by enhancing the representation capabilities of the high-level and shallow-level features respectively. Firstly, a lightweight and effective feature fusion strategy was adopted for the shallow-level features, and a data augmentation technique was utilized to simulate changes in complex scenes, such as occlusion, similarity interference and fast motion, so as to enhance the texture characteristics of the shallow features. Secondly, for the high-level features, a Pixel-aware global Contextual Attention Module (PCAM) was proposed to improve the localization ability of capturing long-range dependence. Finally, experiments were conducted on three challenging tracking benchmarks: OTB2015, GOT-10K and Visual Object Tracking 2018 (VOT2018). Experimental results show that the proposed algorithm outperforms the baseline SiamFC in the success rate index on OTB2015 and GOT-10K by 6.3 percentage points and 4.1 percentage points respectively, and runs at 45 frames per second, achieving real-time tracking. The expected average overlap index of the proposed algorithm surpasses that of the champion of the VOT2018 real-time challenge, the high-performance Siamese network with Region Proposal Network (SiamRPN), which verifies the effectiveness of the proposed algorithm.
    3D face reconstruction and dense face alignment method based on improved 3D morphable model
    ZHOU Jian, HUANG Zhangjin
    2020, 40(11):  3306-3313.  DOI: 10.11772/j.issn.1001-9081.2020030420
    In order to solve the problem that the currently widely used 3D morphable model has insufficient expression ability, resulting in poor generalization performance of the reconstructed 3D face model, a novel method for 3D face reconstruction and dense face alignment based on a single face image under unknown pose, expression and illumination was proposed. First, the existing 3D morphable model was improved by convolutional neural network to improve the expression ability of the 3D face model. Then, based on the smoothness of the face and the similarity of the image, a new loss function was proposed at the feature point and pixel level, and the weakly-supervised learning was used to train the convolutional neural network model. Finally, the trained network model was used to perform the 3D face reconstruction and dense face alignment. Experimental results show that, for 3D face reconstruction, the proposed model has the normalized mean error on AFLW2000-3D reduced to 2.25, and for dense face alignment, the proposed model has the normalized mean errors on AFLW2000-3D and AFLW-LFPA reduced to 3.80 and 3.34 respectively. Compared with the original method using 3D morphable model, the proposed model has the normalized mean errors reduced by 7.4% and 7.8% respectively in 3D face reconstruction and dense face alignment. Therefore, for face images with different lighting environments and angles, this network model is accurate in reconstruction and robust, and has high 3D face reconstruction and dense face alignment quality.
    Person re-identification algorithm based on low-pass filter model
    HUA Chao, WANG Gengrun, CHEN Lei
    2020, 40(11):  3314-3319.  DOI: 10.11772/j.issn.1001-9081.2020030351
    Because a large number of useless features exist in the image of person re-identification due to occlusion and background interference, a person re-identification method based on low-pass filtering model was proposed. First, the person images were divided into blocks. Then the similar number of small blocks in each image were calculated. Among them, the blocks with higher similarity number were marked as high-frequency noise features and the blocks with smaller similarity number were the beneficial features. Finally, different from the low-pass filter which filtered the mutation features and maintained the smooth features in the common image processing, the low-pass filter in the communication system was used to achieve the goal of suppressing high-frequency noise features and gain beneficial features in the proposed method. Experimental results show that the identification rate of the proposed method on ETHZ dataset is nearly 20% higher than that of the classic Symmetry-Driven Accumulation of Local Features (SDALF) method, and at the same time, this method achieves similar results on VIPeR (Viewpoint Invariant Pedestrian Recognition) and I-LIDS (Imagery Library for Intelligent Detection Systems) datasets.
    Virtual reality arbitrary shape selection model based on Fitts' law
    WANG Yi, LYU Jian, YOU Qian, ZHAO Zeyu, YAN Baoming, ZHU Shuman
    2020, 40(11):  3320-3326.  DOI: 10.11772/j.issn.1001-9081.2020030404
    Asbtract ( )   PDF (1218KB) ( )  
    References | Related Articles | Metrics
    To evaluate the click efficiency of different graphic designs in Virtual Reality (VR) interactive interfaces, a method for predicting the completion time of pointing tasks in virtual scenarios was proposed based on probabilistic Fitts' law, and an arbitrary-shape selection model for VR was constructed. First, according to the actual needs of VR interface design, the influence of target shape on the completion time of pointing tasks was incorporated: a relation function between hit probability and the index of difficulty was constructed, the target centroid was set as the center point of the function to be integrated, and the probabilistic Fitts' model for the virtual scenario was thus defined. Then, a first experiment was designed to obtain the constant terms of the probability function in the improved probabilistic Fitts' model, and on this basis a second experiment was designed to calculate the constant terms of the prediction function, completing the construction of the improved probabilistic Fitts' model. Finally, the model was verified and evaluated on actual click tasks in a VR tobacco sorting system. Experimental results show that the model can predict task completion time in the virtual scenario.
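    For reference, the classic Fitts' law prediction and a rough stand-in for averaging it over an arbitrary target shape are sketched below in Python. The constants a and b, the effective-width choice and the cell-wise averaging are illustrative assumptions; the paper's probabilistic model instead integrates a hit-probability function about the target centroid.

import numpy as np

def fitts_time(distance, width, a=0.3, b=0.15):
    """Classic Fitts' law: T = a + b * log2(D / W + 1). a and b are hypothetical."""
    return a + b * np.log2(distance / width + 1.0)

def expected_time_arbitrary_shape(cursor, shape_mask, cell=1.0, a=0.3, b=0.15):
    """Average the Fitts prediction over every cell of an arbitrary target mask,
    using the square root of the target area as a crude effective width. This is
    a simplification of the centroid-anchored integral in the probabilistic model."""
    ys, xs = np.nonzero(shape_mask)
    centers = np.stack([xs, ys], axis=1) * cell
    d = np.linalg.norm(centers - np.asarray(cursor, float), axis=1)
    widths = np.full_like(d, np.sqrt(shape_mask.sum()) * cell)
    return fitts_time(d, widths, a, b).mean()

mask = np.zeros((20, 20), int); mask[5:15, 8:12] = 1
print(expected_time_arbitrary_shape((0.0, 0.0), mask))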
    Matrix completion algorithm based on nonlocal self-similarity and low-rank matrix approximation
    ZHANG Li, KONG Xu, SUN Zhonggui
    2020, 40(11):  3327-3331.  DOI: 10.11772/j.issn.1001-9081.2020030419
    Asbtract ( )   PDF (11540KB) ( )  
    References | Related Articles | Metrics
    Aiming at the shortcomings of traditional matrix completion algorithms in image reconstruction, a completion algorithm based on NonLocal self-similarity and Low-Rank Matrix Approximation (NL-LRMA) was proposed. Firstly, the nonlocal patches similar to each local patch in the image were found through similarity measurement, and the corresponding grayscale matrices were vectorized to construct a nonlocal similar-patch matrix. Secondly, exploiting the low-rank property of the obtained similarity matrix, Low-Rank Matrix Approximation (LRMA) was carried out. Finally, the completion results were recombined to restore the original image. Reconstruction experiments were performed on grayscale and RGB images. The results show that, on a classic dataset, the average Peak Signal-to-Noise Ratio (PSNR) of the NL-LRMA algorithm is 4 dB to 7 dB higher than that of the original LRMA algorithm, and that NL-LRMA outperforms IRNN (Iteratively Reweighted Nuclear Norm), WNNM (Weighted Nuclear Norm Minimization), LRMA and other traditional algorithms in terms of both visual effect and PSNR. In short, the NL-LRMA algorithm effectively makes up for the shortcomings of traditional algorithms in natural image reconstruction and thus provides an effective solution for image reconstruction.
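    The core LRMA step on a stacked group of nonlocal similar patches can be sketched as singular-value soft-thresholding, as below; the patch size, search stride, group size and threshold tau are illustrative choices, and the grouping shown is a simplified version of the similarity search described above.

import numpy as np

def low_rank_approx(M, tau):
    """Soft-threshold the singular values of M (a standard LRMA step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def nl_patch_matrix(img, ref_yx, patch=8, k=10):
    """Collect the k patches most similar to the reference patch and stack their
    vectorized grayscale values column-wise."""
    h, w = img.shape
    ry, rx = ref_yx
    ref = img[ry:ry + patch, rx:rx + patch].ravel()
    cands = []
    for y in range(0, h - patch + 1, 4):
        for x in range(0, w - patch + 1, 4):
            p = img[y:y + patch, x:x + patch].ravel()
            cands.append((np.sum((p - ref) ** 2), p))
    cands.sort(key=lambda t: t[0])
    return np.stack([p for _, p in cands[:k]], axis=1)   # (patch*patch) x k

img = np.random.rand(64, 64)
group = nl_patch_matrix(img, (10, 10))
restored_group = low_rank_approx(group, tau=0.5)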
    Chromosome image segmentation framework based on improved Mask R-CNN
    FENG Tao, CHEN Bin, ZHANG Yuefei
    2020, 40(11):  3332-3339.  DOI: 10.11772/j.issn.1001-9081.2020030355
    Asbtract ( )   PDF (2168KB) ( )  
    References | Related Articles | Metrics
    The manual segmentation of chromosome images is time-consuming and laborious, and the accuracy of current automatic segmentation methods is not high. Therefore, a chromosome image segmentation framework named Mask Oriented R-CNN (Mask Oriented Region-based Convolutional Neural Network) was proposed based on an improved Mask R-CNN (Mask Region-based Convolutional Neural Network); it introduces orientation information to perform instance segmentation of chromosome images. Firstly, a regression branch for oriented bounding boxes was added to predict compact bounding boxes and obtain orientation information. Secondly, a novel Intersection-over-Union (IoU) metric called AwIoU (Angle-weighted Intersection-over-Union) was proposed to improve the criterion for removing redundant bounding boxes by combining the orientation information with the relationship between box edges. Finally, an oriented convolutional path structure was realized to reduce interference in mask prediction by duplicating the path of the mask branch and selecting the training path according to the orientation information of each instance. Experimental results show that, compared with the baseline Mask R-CNN, Mask Oriented R-CNN improves the mean average precision by 10.22 percentage points at an IoU threshold of 0.5, and by 4.91 percentage points when averaged over IoU thresholds from 0.5 to 0.95. The framework therefore achieves better segmentation results than the baseline on chromosome images, which is helpful for realizing automatic chromosome image segmentation.
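    The abstract does not give the exact AwIoU formula, so the following Python sketch shows one plausible angle-weighted form: an axis-aligned IoU scaled by how well two predicted orientations agree. Treat the weighting term as an assumption for illustration only.

import numpy as np

def iou(box_a, box_b):
    """Axis-aligned IoU; boxes are (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def aw_iou(box_a, theta_a, box_b, theta_b):
    """One plausible angle-weighted IoU: scale the overlap by how well the two
    predicted orientations agree (identical angles keep the IoU unchanged)."""
    angle_term = abs(np.cos(theta_a - theta_b))
    return iou(box_a, box_b) * angle_term

print(aw_iou((0, 0, 4, 2), 0.1, (1, 0, 5, 2), 0.3))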
    Magnetic resonance image segmentation of articular synovium based on improved U-Net
    WEI Xiaona, XING Jiaqi, WANG Zhenyu, WANG Yingshan, SHI Jie, ZHAO Di, WANG Hongzhi
    2020, 40(11):  3340-3345.  DOI: 10.11772/j.issn.1001-9081.2020030390
    Asbtract ( )   PDF (901KB) ( )  
    References | Related Articles | Metrics
    To accurately assess a synovitis patient's condition, doctors mainly rely on manually labeling and outlining synovial hyperplasia areas in Magnetic Resonance Images (MRI). This procedure is time-consuming and inefficient, involves a degree of subjectivity, and makes poor use of the image information. To solve these problems, a new articular synovium segmentation algorithm, named the 2D ResU-net segmentation algorithm, was proposed. Firstly, the two-layer residual block of the Residual Network (ResNet) was integrated into U-Net to construct 2D ResU-net. Secondly, the sample dataset was divided into a training set and a testing set, and data augmentation was applied to the training set. Finally, all augmented training samples were used to train the network model. To test the segmentation performance of the model, tomographic images containing synovitis in the testing set were selected for segmentation. The final average segmentation accuracy indexes are as follows: a Dice Similarity Coefficient (DSC) of 69.98%, an Intersection over Union (IoU) of 79.90% and a Volumetric Overlap Error (VOE) of 12.11%. Compared with the U-Net algorithm, the 2D ResU-net algorithm increases the DSC by 10.72%, increases the IoU by 4.24% and decreases the VOE by 11.57%. Experimental results show that the algorithm achieves better segmentation of synovial hyperplasia areas in MRI images and can assist doctors in making timely diagnoses.
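    The three reported indexes have standard definitions for binary masks, sketched below in Python (with VOE taken as 1 minus IoU); this is a generic metric implementation, not the paper's evaluation code.

import numpy as np

def seg_metrics(pred, gt):
    """Dice Similarity Coefficient, IoU and Volumetric Overlap Error for
    binary masks (1 = synovium, 0 = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum() + 1e-9)
    iou = inter / (union + 1e-9)
    voe = 1.0 - iou
    return dsc, iou, voe

p = np.zeros((8, 8), int); p[2:6, 2:6] = 1
g = np.zeros((8, 8), int); g[3:7, 2:6] = 1
print(seg_metrics(p, g))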
    Frontier & interdisciplinary applications
    Review of gaze tracking and its application in intelligent education
    ZHANG Junjie, SUN Guangmin, ZHENG Kun
    2020, 40(11):  3346-3356.  DOI: 10.11772/j.issn.1001-9081.2020040443
    Asbtract ( )   PDF (1506KB) ( )  
    References | Related Articles | Metrics
    Combining artificial intelligence with education is one of the hottest topics in artificial intelligence research, and obtaining information about learning states is important in intelligent education. Changes in gaze can directly or indirectly reflect changes in mental and learning states, so gaze tracking plays an important role in the field of intelligent education. Firstly, the development of intelligent education was introduced. Then, the development, current research work and research status of gaze tracking technology were summarized and analyzed, and the applications and research work of gaze tracking technology in the education field over the past three years were reviewed. Finally, a summary and outlook on the development trend of gaze tracking in the education field were given.
    Design and implementation of electronic file circulation based on blockchain
    HAN Yanyan, ZHANG Qi, YAN Xiaoxuan, LIU Peihe, XU Pengge
    2020, 40(11):  3357-3365.  DOI: 10.11772/j.issn.1001-9081.2020040526
    Asbtract ( )   PDF (2881KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems that, in the circulation of electronic files under the Internet ecology, files are not registered in a unified way, their whereabouts are not tracked, and the circulation process is not standardized, a blockchain-based electronic file circulation scheme was proposed. Firstly, the design goals and architecture of a blockchain-based electronic file circulation system were proposed on the basis of the multi-centralized structure of a consortium blockchain. Secondly, the system was implemented by using a cloud storage platform to upload and store electronic files and by time-stamping the ownership-transfer data of the files, so that the circulation process becomes continuous, correlated, traceable and trustworthy; data synchronization and tracing in the system were achieved by accessing the data through database calls. Finally, a smart contract for electronic file ownership transfer and query was designed to verify and protect the contents of the files by reading the file identification. Security analysis and performance tests show that, compared with the original scheme, the proposed scheme is more secure and enhances the credibility of the circulation information; at the same time, the short execution time of the smart contract gives the system good reliability and traceability.
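    As a rough stand-in for the ownership-transfer and query logic of the smart contract, the Python sketch below keeps a hash-chained, time-stamped list of transfer records; the record fields and hashing scheme are assumptions, and a real deployment would run on the consortium chain rather than an in-memory list.

import hashlib, json, time

def add_transfer(chain, file_id, new_owner):
    """Append a time-stamped ownership-transfer record whose hash covers the
    previous record, so the circulation history is continuous and tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"file_id": file_id, "owner": new_owner,
              "timestamp": time.time(), "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

def query_owner(chain, file_id):
    """Return the latest owner recorded for a file identifier."""
    for rec in reversed(chain):
        if rec["file_id"] == file_id:
            return rec["owner"]
    return None

ledger = []
add_transfer(ledger, "FILE-001", "Department A")
add_transfer(ledger, "FILE-001", "Department B")
print(query_owner(ledger, "FILE-001"))   # Department B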
    Robot path planning based on improved ant colony and pigeon inspired optimization algorithm
    LIU Ang, JIANG Jin, XU Kefeng
    2020, 40(11):  3366-3372.  DOI: 10.11772/j.issn.1001-9081.2020040538
    Asbtract ( )   PDF (1570KB) ( )  
    References | Related Articles | Metrics
    To address the slow convergence and poor path quality of mobile robot path planning in complex environments, a method combining global and local path planning algorithms was proposed. Firstly, a synchronous bidirectional A* algorithm was used to optimize the pheromone of the ant colony algorithm, and the transition probability and pheromone update mechanism of the ant colony algorithm were improved, so that the global optimization converges faster and the path length of the mobile robot is shortened; the resulting static path was then used to initialize the pigeon-inspired optimization algorithm. Secondly, the improved pigeon-inspired optimization algorithm was used for local path planning of the mobile robot: simulated annealing criteria were introduced to escape local optima, and a logarithmic S-shaped transfer function was used to optimize the step size of the pigeon population, so that collisions with dynamic obstacles are better avoided. Finally, a cubic B-spline curve was used to smooth and replan the route. Simulation results indicate that the algorithm generates smooth paths with short length and small evaluation value in both the global static and local dynamic phases, and converges quickly, making it suitable for mobile robots travelling in dynamic and complex environments.
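    For context, the baseline ant-colony transition rule that the paper improves can be written as follows in Python; the pheromone matrix, heuristic values and the exponents alpha and beta are illustrative, and the paper's modified rule and bidirectional-A*-seeded pheromone are not reproduced here.

import numpy as np

def transition_probabilities(pheromone, heuristic, current, allowed,
                             alpha=1.0, beta=2.0):
    """Standard ant-colony transition rule: p(j) ~ tau(i,j)^alpha * eta(i,j)^beta
    over the feasible neighbor cells `allowed` of the current cell."""
    tau = np.array([pheromone[current, j] for j in allowed])
    eta = np.array([heuristic[current, j] for j in allowed])
    scores = (tau ** alpha) * (eta ** beta)
    return scores / scores.sum()

n = 5
pher = np.ones((n, n)); heur = 1.0 / (np.random.rand(n, n) + 0.1)
p = transition_probabilities(pher, heur, current=0, allowed=[1, 2, 3])
next_node = np.random.choice([1, 2, 3], p=p)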
    Multi-directional path planning algorithm for unmanned surface vehicle
    TONG Xinchi, ZHANG Huajun, GUO Hang
    2020, 40(11):  3373-3378.  DOI: 10.11772/j.issn.1001-9081.2020030422
    Asbtract ( )   PDF (1060KB) ( )  
    References | Related Articles | Metrics
    Aiming at the safety and smoothness problems of path planning for Unmanned Surface Vehicles (USV) in complex marine environments, a multi-directional A* path planning algorithm was proposed to obtain a globally optimal path. Firstly, rasterized environment information was built by combining the electronic chart, and a safe-area model of the USV was established according to the safe navigation distance constraint; an A* heuristic function with a safety distance constraint was then designed on the basis of the traditional A* algorithm to ensure the safety of the generated path nodes. Secondly, a multi-directional search mode was proposed by improving the eight-directional search of the traditional A* algorithm, so as to remove redundant points and inflection points from the generated path. Finally, a path smoothing algorithm was applied to the inflection points to obtain a continuous smooth path that meets actual navigation requirements. In the simulation experiment, the path planned by the improved A* algorithm is 7 043 m long, which is 9.7%, 26.6% and 7.9% shorter than the paths of the Dijkstra algorithm, the traditional four-directional A* search and the traditional eight-directional A* search respectively. The simulation results show that the improved multi-directional A* search algorithm effectively reduces the path length and is better suited to USV path planning.
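    A minimal Python sketch of grid A* with an enlarged move set and a heuristic that adds a safety penalty term; the 16-direction move set, the safety dictionary and the penalty form are assumptions standing in for the paper's safe-area model and multi-directional search.

import heapq, itertools, math

def astar_multi(grid, start, goal, safety):
    """A* on a 0/1 occupancy grid with a 16-direction move set and a heuristic
    that penalizes cells listed in the `safety` map (assumed obstacle proximity)."""
    moves = [(dx, dy) for dx in (-2, -1, 0, 1, 2) for dy in (-2, -1, 0, 1, 2)
             if (dx, dy) != (0, 0) and math.gcd(abs(dx), abs(dy)) == 1]
    rows, cols = len(grid), len(grid[0])

    def h(p):
        return math.hypot(p[0] - goal[0], p[1] - goal[1]) + safety.get(p, 0.0)

    tie = itertools.count()
    open_set = [(h(start), next(tie), 0.0, start, None)]
    came, g = {}, {start: 0.0}
    while open_set:
        _, _, gc, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue
        came[cur] = parent
        if cur == goal:
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dx, dy in moves:
            nxt = (cur[0] + dx, cur[1] + dy)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = gc + math.hypot(dx, dy)
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None

grid = [[0] * 10 for _ in range(10)]
print(astar_multi(grid, (0, 0), (9, 7), safety={}))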
    Path planning method for spraying robot based on discrete grey wolf optimizer algorithm
    MEI Wei, ZHAO Yuntao, MAO Xuesong, LI Weigang
    2020, 40(11):  3379-3384.  DOI: 10.11772/j.issn.1001-9081.2020040448
    Asbtract ( )   PDF (3282KB) ( )  
    References | Related Articles | Metrics
    To solve the problems that current path planning methods for robots spraying entities with complex structures are inefficient, ignore collisions and have poor applicability, a discrete grey wolf optimizer algorithm for multilayer decision problems was proposed and applied to this path planning problem. To turn the continuous-domain grey wolf optimizer into a discrete grey wolf optimizer for multilayer decision problems, a matrix coding method was used to encode the multilayer decision problem, a hybrid initialization method based on prior knowledge and random selection was proposed to improve the solving efficiency and precision, and a crossover operator together with a two-level mutation operator was used to define the population update strategy of the discrete algorithm. In addition, the path planning problem of the spraying robot was reduced to a generalized traveling salesman problem via graph theory, and the shortest path model and path collision model of this problem were established. In the path planning experiments, compared with the particle swarm optimization, genetic and ant colony optimization algorithms, the proposed algorithm decreases the average planned path length by 5.0%, 5.5% and 6.6% respectively, reduces the number of collisions to 0, and produces smoother paths. Experimental results show that the proposed algorithm can effectively improve the spraying efficiency of the spraying robot as well as the safety and applicability of the spraying path.
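    The sketch below illustrates one way a discrete, permutation-coded grey wolf step can work: rank wolves by path length, then move each wolf toward the alpha via order crossover plus a swap mutation. The paper's matrix coding, two-level mutation and generalized-TSP collision model are not reproduced; everything here is a simplified assumption.

import random

def path_length(order, dist):
    return sum(dist[order[i]][order[i + 1]] for i in range(len(order) - 1))

def order_crossover(parent, leader):
    """Order crossover: keep a slice of the leader, fill the rest in the
    parent's relative order (a common way to discretize position updates)."""
    n = len(parent)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = leader[a:b]
    rest = [g for g in parent if g not in child[a:b]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = rest[idx]; idx += 1
    return child

def discrete_gwo_step(wolves, dist):
    """One iteration: rank wolves, then move every wolf toward the alpha wolf
    by crossover plus a small swap mutation."""
    wolves.sort(key=lambda w: path_length(w, dist))
    alpha = wolves[0]
    new = [alpha[:]]
    for w in wolves[1:]:
        child = order_crossover(w, alpha)
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]      # swap mutation
        new.append(child)
    return new

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
wolves = [random.sample(range(4), 4) for _ in range(5)]
wolves = discrete_gwo_step(wolves, dist)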
    Spatio-temporal hybrid prediction model for air quality
    HUANG Weijian, LI Danyang, HUANG Yuan
    2020, 40(11):  3385-3392.  DOI: 10.11772/j.issn.1001-9081.2020040471
    Asbtract ( )   PDF (902KB) ( )  
    References | Related Articles | Metrics
    Because the air quality of different regions of a city is correlated in both time and space, while traditional deep learning model structures are relatively simple and have difficulty modeling the temporal and spatial perspectives simultaneously, a Spatio-Temporal Air Quality Index (STAQI) model that can extract the complex spatial and temporal relationships among air quality states at the same time was proposed for air quality prediction. The model was composed of local components and global components, which were used to describe the influences of local pollutant concentrations and of the air quality states of adjacent sites on the prediction for the target site respectively, and the prediction results were obtained by weighted fusion of the component outputs. In the global component, a graph convolutional network was used to improve the input part of the gated recurrent unit network, so as to extract the spatial characteristics of the input data. Finally, the STAQI model was compared with various baseline models and variant models. The Root Mean Square Error (RMSE) of the STAQI model is decreased by about 19% and 16% compared with those of the gated recurrent unit model and the global-component variant model respectively. The results show that the STAQI model has the best prediction performance for any time window, and the prediction results at different target sites verify the strong generalization ability of the model.
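    A toy Python sketch of the global-component idea: a graph convolution over the monitoring-site adjacency produces spatial features that are then fed to a standard GRU cell. The adjacency, dimensions and random weights are illustrative; the actual STAQI architecture and training are not shown.

import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: symmetrically normalized adjacency times X times W."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.tanh(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """Standard GRU cell update with the graph-convolved features as input x."""
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sig(x @ Wz + h @ Uz)
    r = sig(x @ Wr + h @ Ur)
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

# toy dimensions: 4 monitoring sites, 3 raw features, hidden size 5
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
X = np.random.rand(4, 3); Wg = np.random.rand(3, 5)
x_spatial = gcn_layer(A, X, Wg)                  # spatial features per site
h = np.zeros((4, 5))
params = [np.random.rand(5, 5) for _ in range(6)]
h = gru_step(x_spatial, h, *params)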
    Prediction of protein subcellular localization based on deep learning
    WANG Yihao, DING Hongwei, LI Bo, BAO Liyong, ZHANG Yingjie
    2020, 40(11):  3393-3399.  DOI: 10.11772/j.issn.1001-9081.2020040510
    Asbtract ( )   PDF (678KB) ( )  
    References | Related Articles | Metrics
    Focusing on the issue that traditional machine learning algorithms still require manually designed feature representations, a protein subcellular localization algorithm based on a deep Stacked Denoising AutoEncoder (SDAE) network was proposed. Firstly, the improved Pseudo-Amino Acid Composition (PseAAC), the Pseudo Position Specific Scoring Matrix (PsePSSM) and the Conjoint Triad (CT) were used to extract features of the protein sequence respectively, and the feature vectors obtained by these three methods were fused to obtain a new feature representation of the protein sequence. Secondly, the fused feature vector was fed into the SDAE deep network to automatically learn a more effective feature representation. Thirdly, a Softmax regression classifier was adopted to classify and predict the subcellular locations, with leave-one-out cross validation performed on the Viral proteins and Plant proteins datasets. Finally, the results of the proposed algorithm were compared with those of existing algorithms such as mGOASVM (multi-label protein subcellular localization based on Gene Ontology and Support Vector Machine) and HybridGO-Loc (mining Hybrid features on Gene Ontology for predicting subcellular Localization of multi-location proteins). Experimental results show that the new algorithm achieves 98.24% accuracy on the Viral proteins dataset, which is 9.35 percentage points higher than that of mGOASVM, and 97.63% accuracy on the Plant proteins dataset, which is 10.21 and 4.07 percentage points higher than those of mGOASVM and HybridGO-Loc respectively. These results show that the proposed algorithm can effectively improve the accuracy of protein subcellular localization prediction.
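    As a small illustration of the feature side, the sketch below computes plain amino acid composition (the base that PseAAC extends with sequence-order terms) and applies the masking corruption used to train a denoising autoencoder layer; the corruption rate and the example sequence are arbitrary.

import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac_features(seq):
    """Amino Acid Composition: frequency of each of the 20 residues.
    (PseAAC additionally appends sequence-order correlation terms.)"""
    seq = seq.upper()
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

def add_noise(x, rate=0.2, rng=np.random.default_rng(0)):
    """Masking noise of the kind used to train a denoising autoencoder layer."""
    mask = rng.random(x.shape) > rate
    return x * mask

x = aac_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
x_noisy = add_noise(x)          # input to the first SDAE layer during training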
    Data driven time delay identification and main steam temperature prediction in thermal power units
    GUI Ning, HUA Jingyun
    2020, 40(11):  3400-3406.  DOI: 10.11772/j.issn.1001-9081.2020030291
    Asbtract ( )   PDF (904KB) ( )  
    References | Related Articles | Metrics
    With massive candidate features and long delays in the unit, it is very difficult to select the most appropriate features and their corresponding delays when modeling the main steam temperature of a thermal power unit. Therefore, a fusion-model-based method that jointly considers feature selection and delay selection was proposed. To deal with the high dimensionality of thermal power unit features, the features highly associated with the main steam temperature were selected through correlation coefficients and the feature selection of a gradient boosting machine. For delay identification, a Temporal Correlation Coefficient-based Time Delay (TD-CORT) calculation algorithm was designed to estimate the time delay between each parameter and the prediction target (the main steam temperature), and the sliding window size was matched automatically according to the prediction target and the computational complexity. Finally, a fusion model of a Deep Neural Network (DNN) and a Long Short-Term Memory (LSTM) network was used to predict the main steam temperature of the thermal power unit. Deployment results on a 1 000 MW ultra-supercritical coal-fired unit in China show that the proposed method achieves a prediction Mean Absolute Error (MAE) of 0.101 6, and its prediction accuracy is 57.42% higher than that of a neural network that does not consider delays.
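    The temporal correlation coefficient (CORT) compares the first differences of two series, and a delay can be estimated by scanning candidate lags for the best CORT score, as sketched below in Python; the exact TD-CORT procedure and its automatic window matching are more involved than this illustration.

import numpy as np

def cort(x, y):
    """Temporal correlation coefficient: cosine similarity of first differences,
    so it compares the shapes (rise/fall behaviour) of two series."""
    dx, dy = np.diff(x), np.diff(y)
    return np.dot(dx, dy) / (np.linalg.norm(dx) * np.linalg.norm(dy) + 1e-12)

def best_delay(feature, target, max_lag):
    """Shift the feature by each candidate lag and keep the lag whose shifted
    series is most CORT-similar to the target."""
    scores = []
    for lag in range(max_lag + 1):
        n = len(target) - lag
        scores.append(cort(feature[:n], target[lag:lag + n]))
    return int(np.argmax(scores)), max(scores)

t = np.arange(300)
target = np.sin(0.05 * t)
feature = np.sin(0.05 * (t + 12))          # feature leads the target by 12 steps
print(best_delay(feature, target, max_lag=30))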
    Fast convergence average TimeSynch algorithm for apron sensor network
    CHEN Weixing, LIU Qingtao, SUN Xixi, CHEN Bin
    2020, 40(11):  3407-3412.  DOI: 10.11772/j.issn.1001-9081.2020030290
    Asbtract ( )   PDF (665KB) ( )  
    References | Related Articles | Metrics
    The traditional Average TimeSynch (ATS) algorithm for the APron Sensor Network (APSN) converges slowly and is inefficient because of its distributed iterative nature. Based on the principle that algebraic connectivity affects the convergence speed of consensus algorithms, a Fast Convergence Average TimeSynch (FCATS) algorithm was proposed. Firstly, virtual links were added between two-hop neighbor nodes in the APSN to increase the network connectivity. Then, the relative clock skew, logical clock skew and offset of each node were updated based on the information of its one-hop and two-hop neighbor nodes. Finally, consensus iterations were performed according to this clock parameter update process. Simulation results show that FCATS converges after the consensus iterations; compared with ATS, its convergence speed is increased by about 50%, and under different topology conditions the convergence speed can be increased by more than 20%. The convergence speed is therefore significantly improved.
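    A toy Python sketch of the underlying idea: extend the adjacency with virtual two-hop links and run an averaging consensus iteration on the clock skews. The uniform update weights, and the fact that only the skew (not the offset) is shown, are simplifications of FCATS.

import numpy as np

def two_hop_extend(adjacency):
    """Add virtual links to two-hop neighbors to raise algebraic connectivity."""
    reach2 = (adjacency @ adjacency > 0).astype(int)
    ext = ((adjacency + reach2) > 0).astype(int)
    np.fill_diagonal(ext, 0)
    return ext

def consensus_step(skews, adjacency):
    """One averaging iteration of logical clock skew: each node moves toward
    the mean skew of itself and its (virtually extended) neighborhood."""
    new = skews.copy()
    for i in range(len(skews)):
        nbrs = np.nonzero(adjacency[i])[0]
        new[i] = (skews[i] + skews[nbrs].sum()) / (1 + len(nbrs))
    return new

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
skews = np.array([1.0, 0.96, 1.05, 1.02])
A2 = two_hop_extend(A)
for _ in range(20):
    skews = consensus_step(skews, A2)
print(skews)        # values move toward a common logical skew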