
Table of Contents

    10 July 2020, Volume 40 Issue 7
    Artificial intelligence
    Overview of content and semantic based 3D model retrieval
    PEI Yandong, GU Kejiang
    2020, 40(7):  1863-1872.  DOI: 10.11772/j.issn.1001-9081.2019112034
    Retrieval of multimedia data is one of the most important issues in information reuse. As a key step of 3D modeling, 3D model retrieval has been studied in depth in recent years due to the widespread use of 3D modeling. Regarding the current progress of 3D model retrieval technology, content-based retrieval technologies were first introduced. According to the extracted features, these technologies were divided into four categories: based on statistical data, based on geometric shape, based on topological structure and based on visual features. The main achievements, advantages and disadvantages of each technology were presented respectively. Then the semantic-based retrieval technologies, which consider semantic information to bridge the "semantic gap", were introduced and divided into three categories: relevance feedback, active learning and ontology technology. After that, the relationship and characteristics of these technologies were discussed. Finally, future research directions of 3D model retrieval were summarized and proposed.
    Knowledge base question answering system based on multi-feature semantic matching
    ZHAO Xiaohu, ZHAO Chenglong
    2020, 40(7):  1873-1878.  DOI: 10.11772/j.issn.1001-9081.2019111895
    The task of Question Answering over Knowledge Base (KBQA) mainly aims at accurately matching a natural language question with triples in the Knowledge Base (KB). However, traditional KBQA methods usually focus on entity recognition and predicate matching, and errors in entity recognition may propagate and prevent the system from getting the right answer. To solve this problem, an end-to-end solution was proposed to directly match the question and the triples. The system consists of two parts: candidate triple generation and candidate triple ranking. Firstly, the candidate triples were generated by the BM25 algorithm, which calculates the correlation between the question and the triples in the knowledge base. Then, a Multi-Feature Semantic Matching Model (MFSMM) was used to rank the triples: the semantic similarity and character similarity were calculated by MFSMM through a Bi-directional Long Short-Term Memory network (Bi-LSTM) and a Convolutional Neural Network (CNN) respectively, and the triples were ranked by fusing the two. With NLPCC-ICCPOL 2016 KBQA as the dataset, the average F1 of the proposed system is 80.35%, which is close to the existing best performance.
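    As an illustration of the candidate-generation step described above, the sketch below scores candidate triples against a question with a generic BM25 implementation; the tokenisation, the k1/b parameters and the example data are assumptions for illustration, not details taken from the paper.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each candidate triple (as a token list) against the question."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # document frequency of each query term over the candidate triples
    df = Counter({t: sum(1 for d in docs_tokens if t in d) for t in set(query_tokens)})
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for term in query_tokens:
            if df[term] == 0:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            denom = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[term] * (k1 + 1) / denom
        scores.append(s)
    return scores

# usage: triples flattened into "subject predicate object" token lists
question = "who wrote hamlet".split()
triples = [["hamlet", "author", "shakespeare"], ["hamlet", "genre", "tragedy"]]
print(bm25_scores(question, triples))
```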
    News named entity recognition and sentiment classification based on attention-based bi-directional long short-term memory neural network and conditional random field
    HU Tiantian, DAN Yabo, HU Jie, LI Xiang, LI Shaobo
    2020, 40(7):  1879-1883.  DOI: 10.11772/j.issn.1001-9081.2019111965
    An Attention-based Bi-directional Long Short-Term Memory neural network and Conditional Random Field (AttBi-LSTM-CRF) model was proposed for the core entity recognition and core entity sentiment analysis task on the Sohu coreEntityEmotion_train corpus. Firstly, the text was pre-trained and each word was mapped into a low-dimensional vector of the same dimension. Then, these vectors were input into the Attention-based Bi-directional Long Short-Term Memory neural network (AttBi-LSTM) to obtain long-range context information and focus on the information highly related to the output label. Finally, the optimal label sequence was obtained through the Conditional Random Field (CRF) layer. Comparison experiments were conducted among the AttBi-LSTM-CRF model, the Bi-directional Long Short-Term Memory neural network (Bi-LSTM), AttBi-LSTM, and the Bi-LSTM with Conditional Random Field (Bi-LSTM-CRF) model. The experimental results show that the accuracy of the AttBi-LSTM-CRF model is 0.78, the recall is 0.667, and the F1 value is 0.553, which are better than those of the comparison models, verifying the superiority of AttBi-LSTM-CRF.
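    The encoder part of such a model can be sketched in PyTorch as below: a Bi-LSTM whose hidden states are re-weighted by an additive attention layer before per-token emission scores are produced. The CRF decoding layer is omitted and all sizes are illustrative assumptions, so this is only a skeleton, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttBiLSTM(nn.Module):
    """Bi-LSTM encoder with attention over its hidden states (CRF layer omitted)."""
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=128, num_tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)          # scores each time step
        self.out = nn.Linear(4 * hidden, num_tags)   # token state + attended context

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))            # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.att(h), dim=1)   # attention over the sequence
        context = (weights * h).sum(dim=1, keepdim=True)    # (batch, 1, 2*hidden)
        context = context.expand(-1, h.size(1), -1)   # broadcast to every step
        return self.out(torch.cat([h, context], dim=-1))    # emission scores per tag

scores = AttBiLSTM()(torch.randint(0, 5000, (2, 20)))  # shape (2, 20, 9)
```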
    Sequence generation model with dynamic routing for multi-label text classification
    WANG Minrui, GAO Shu, YUAN Ziyong, YUAN Lei
    2020, 40(7):  1884-1890.  DOI: 10.11772/j.issn.1001-9081.2019112027
    In the real world, multi-label text has wider application scenarios than single-label text; at the same time, its huge output space brings many challenges to the classification task. The multi-label text classification problem was regarded as a label sequence generation problem, and the Sequence Generation Model (SGM) was applied to the multi-label text classification field. Aiming at problems such as the cumulative error easily produced by the sequential structure of the model, an SGM based on Dynamic Routing (DR-SGM) was proposed. The model follows the encoder-decoder architecture. In the encoder, a Bi-directional Long Short-Term Memory (Bi-LSTM) network with attention was used to encode the semantic information. In the decoder, a structure with a dynamic routing aggregation layer was designed, which reduces the influence of the cumulative error introduced after the hidden layer. At the same time, the part-part and part-global position information in the text was captured by dynamic routing, and the semantic clustering effect was further improved by optimizing the dynamic routing algorithm. DR-SGM was applied to the classification of multi-label texts. The experimental results show that DR-SGM improves multi-label text classification results on the RCV1-V2, AAPD and Slashdot datasets.
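    For readers unfamiliar with dynamic routing, the NumPy sketch below shows the generic routing-by-agreement procedure that such an aggregation layer builds on; the paper's optimized variant and its integration into the decoder are not reproduced, and the shapes are illustrative.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Non-linearity that shrinks short vectors and preserves direction."""
    norm2 = (v ** 2).sum(axis=axis, keepdims=True)
    return norm2 / (1.0 + norm2) * v / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    """Route n_in prediction vectors ("votes") to n_out output capsules.

    u_hat: array of shape (n_in, n_out, dim).
    """
    n_in, n_out, dim = u_hat.shape
    b = np.zeros((n_in, n_out))                                  # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)     # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                   # weighted sum, (n_out, dim)
        v = squash(s)                                            # output capsules
        b += (u_hat * v[None, :, :]).sum(axis=-1)                # agreement update
    return v

print(dynamic_routing(np.random.randn(6, 3, 8)).shape)  # (3, 8)
```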
    Non-autoregressive method for Uyghur-Chinese neural machine translation
    ZHU Xiangrong, WANG Lei, YANG Yating, DONG Rui, ZHANG Jun
    2020, 40(7):  1891-1895.  DOI: 10.11772/j.issn.1001-9081.2019111974
    Although the existing autoregressive translation models based on recurrent neural networks, convolutional neural networks or Transformer have good translation performance, they suffer from low translation speed because of the low parallelism of decoding. Therefore, a learning rate optimization strategy for non-autoregressive models was proposed. On the basis of a non-autoregressive sequence model with iterative optimization, the learning rate adjustment method was changed: warm up was replaced with linear annealing. Firstly, linear annealing was evaluated and shown to be better than warm up; then linear annealing was applied to the non-autoregressive sequence model in order to obtain the optimal balance between translation quality and decoding speed; finally, a comparison between this method and the autoregressive model was carried out. Experimental results show that, compared with the autoregressive Transformer model, this method achieves a BiLingual Evaluation Understudy (BLEU) score of 41.31, reaching 95.34% of the Transformer's score, while increasing the decoding speed by 2.74 times. It can be seen that the non-autoregressive sequence model with linear annealing can effectively improve the decoding speed with only a small loss of translation quality, which makes it suitable for platforms with urgent requirements on translation speed.
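    The learning-rate change at the core of the method can be illustrated as follows: a Transformer-style warm-up schedule versus a simple linear annealing schedule. The constants (model width, peak rate, minimum rate, total steps) are placeholders rather than values from the paper.

```python
def warmup_lr(step, d_model=512, warmup_steps=4000):
    """Transformer-style warm up (step >= 1): linear rise, then inverse-sqrt decay."""
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

def linear_annealing_lr(step, lr_max=5e-4, lr_min=1e-5, total_steps=100000):
    """Linearly decay the learning rate from lr_max to lr_min over training."""
    frac = min(step / total_steps, 1.0)
    return lr_max - (lr_max - lr_min) * frac

for s in (1, 4000, 50000, 100000):
    print(s, round(warmup_lr(s), 6), round(linear_annealing_lr(s), 6))
```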
    Unsupervised feature selection method based on regularized mutual representation
    WANG Zhiyuan, JIANG Ailian, MUHAMMAD Osman
    2020, 40(7):  1896-1900.  DOI: 10.11772/j.issn.1001-9081.2019122075
    The redundant features of high-dimensional data affect the training efficiency and generalization ability of machine learning. In order to improve the accuracy of pattern recognition and reduce the computational complexity, an unsupervised feature selection method based on the Regularized Mutual Representation (RMR) property was proposed. Firstly, the correlations between features were utilized to establish a Frobenius-norm-constrained mathematical model for unsupervised feature selection. Then, a divide-and-conquer ridge regression optimization algorithm was designed to quickly optimize the model. Finally, the importance of the features was jointly evaluated according to the optimal solution of the model, and a representative feature subset was selected from the original data. In terms of clustering accuracy, the RMR method is improved by 7 percentage points compared with the Laplacian method, 7 percentage points compared with the Nonnegative Discriminative Feature Selection (NDFS) method, 6 percentage points compared with the Regularized Self-Representation (RSR) method, and 3 percentage points compared with the Self-Representation Feature Selection (SR_FS) method. In terms of redundancy rate, the RMR method is reduced by 10 percentage points compared with the Laplacian method, 7 percentage points compared with the NDFS method, 3 percentage points compared with the RSR method, and 2 percentage points compared with the SR_FS method. The experimental results show that the RMR method can effectively select important features, reduce the redundancy rate of data and improve the clustering accuracy of samples.
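    A minimal sketch of the mutual-representation idea, under the assumption that features are scored by how strongly they help reconstruct all features through a Frobenius-regularized ridge model; the paper's divide-and-conquer solver is replaced here by the closed-form ridge solution.

```python
import numpy as np

def rmr_feature_scores(X, lam=1.0):
    """Solve min_W ||X - X W||_F^2 + lam ||W||_F^2 in closed form and use the
    row norms of W as feature importance scores (illustrative sketch only)."""
    d = X.shape[1]
    G = X.T @ X
    W = np.linalg.solve(G + lam * np.eye(d), G)   # (d, d) representation matrix
    return np.linalg.norm(W, axis=1)              # importance of each feature

X = np.random.randn(200, 30)                      # 200 samples, 30 features
top10 = np.argsort(-rmr_feature_scores(X))[:10]   # indices of the selected subset
print(top10)
```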
    Hybrid recommendation algorithm by fusion of topic information and convolution neural network
    TIAN Baojun, LIU Shuang, FANG Jiandong
    2020, 40(7):  1901-1907.  DOI: 10.11772/j.issn.1001-9081.2019122067
    Aiming at the problems of data sparsity and inaccurate recommendation results in traditional collaborative filtering algorithms, a probability matrix factorization recommendation model based on Latent Dirichlet Allocation (LDA) and Convolutional Neural Network (CNN), named LCPMF, was proposed, which comprehensively considers the topic information and deep semantic information of item review documents. Firstly, the LDA topic model and a text CNN were used to model the item review documents respectively. Then, the salient low-dimensional latent topic information and the global deep semantic information of the item review documents were obtained in order to capture a multi-level feature representation of the item documents. Finally, the obtained user features and multi-level item features were integrated into the Probability Matrix Factorization (PMF) model to generate the prediction scores for recommendation. LCPMF was compared with the classical PMF, Collaborative Deep Learning (CDL) and Convolutional Matrix Factorization (ConvMF) models on the real datasets Movielens 1M, Movielens 10M and Amazon. The experimental results show that, compared to the PMF, CDL and ConvMF models, the proposed LCPMF model reduces the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) by 6.03% and 5.38%, 5.12% and 4.03%, 1.46% and 2.00% respectively on the Movielens 1M dataset; by 5.35% and 5.67%, 2.50% and 3.64%, 1.75% and 1.74% respectively on the Movielens 10M dataset; and by 17.71% and 23.63%, 14.92% and 17.47%, 3.51% and 4.87% respectively on the Amazon dataset. The feasibility and effectiveness of the proposed model in recommendation systems are verified.
    Improved hybrid cuckoo search-based quantum-behaved particle swarm optimization algorithm for bi-level programming
    ZENG Minghua, QUAN Ke
    2020, 40(7):  1908-1912.  DOI: 10.11772/j.issn.1001-9081.2019122237
    Because the Particle Swarm Optimization (PSO) algorithm is easily trapped in local optimal solutions when solving bi-level programming problems, an Improved hybrid Cuckoo Search-based Quantum-behaved Particle Swarm Optimization (ICSQPSO) algorithm based on the Simulated Annealing (SA) Metropolis criterion was proposed. Firstly, the Metropolis criterion of the SA algorithm was introduced into the hybrid algorithm to enhance the global optimization ability by accepting not only good solutions but also, with a certain probability, bad solutions during the solving process. Secondly, a Lévy flight with dynamic step size was designed for the cuckoo search algorithm in order to maintain high diversity of the particle swarm during optimization and thus guarantee the search range. Finally, the preference random walk mechanism of the cuckoo search algorithm was used to help particles jump out of local optimal solutions. The numerical results on 13 bi-level programming cases, including nonlinear ones, fractional ones and those with multiple lower levels, show that the optimal objective function values obtained by the ICSQPSO algorithm on 12 cases are significantly better than those of the comparison algorithms in the literature, only the result of 1 case is slightly worse, and the results of half of the 13 cases are 50% better than those of the comparison algorithms. Therefore, the ICSQPSO algorithm is superior to the comparison algorithms in optimization ability for bi-level programming.
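    The Metropolis acceptance rule used in the first step can be sketched as follows for a minimization problem; the temperature value and variable names are illustrative assumptions, not the paper's parameter settings.

```python
import math
import random

def metropolis_accept(f_old, f_new, temperature):
    """Always accept a better solution; accept a worse one with
    probability exp(-(f_new - f_old) / T), as in simulated annealing."""
    if f_new <= f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / temperature)

# usage inside an iterative search: accept the candidate or keep the old value
current, candidate = 3.2, 3.5
if metropolis_accept(current, candidate, temperature=0.5):
    current = candidate
print(current)
```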
    Hybrid particle swarm optimization algorithm with topological time-varying and search disturbance
    ZHOU Wenfeng, LIANG Xiaolei, TANG Kexin, LI Zhanghong, FU Xiuwen
    2020, 40(7):  1913-1918.  DOI: 10.11772/j.issn.1001-9081.2019112022
    The Particle Swarm Optimization (PSO) algorithm is prone to premature convergence and easily falls into local optima that it cannot escape when solving complex multimodal functions. Related research shows that changing the topological structure among particles and adjusting the updating mechanism help to improve the diversity of the population and the optimization ability of the algorithm. Therefore, a Hybrid PSO with Topological time-varying and Search disturbance (HPSO-TS) was proposed. In the algorithm, a K-medoids clustering algorithm was adopted to cluster the particle swarm dynamically into several heterogeneous subgroups, so as to facilitate the information flow among the particles in the subgroups. In the velocity update, the guidance of the best particle of the swarm was added and a disturbance based on a nonlinearly changing extremum was introduced, enabling the particles to search more areas. Then, the transformation probability of the Flower Pollination Algorithm (FPA) was introduced into the position update process, so that the particles were able to switch between global search and local search. In the global search, a lioness foraging mechanism from the lion swarm optimization algorithm was introduced to update the positions of the particles; in the local search, a sinusoidal disturbance factor was applied to help particles jump out of local optima. The experimental results show that the proposed algorithm is superior to FPA, PSO, the Improved PSO (IPSO) algorithm and the PSO algorithm with Topology (PSO-T) in accuracy and robustness, and these advantages become more obvious as the testing dimension and the number of runs increase. The topological time-varying strategy and search disturbance mechanism introduced by this algorithm can effectively improve the diversity of the population and the activity of the particles, so as to improve the optimization ability.
    Motion planning for autonomous driving with directional navigation based on deep spatio-temporal Q-network
    HU Xuemin, CHENG Yu, CHEN Guowen, ZHANG Ruohan, TONG Xiuchi
    2020, 40(7):  1919-1925.  DOI: 10.11772/j.issn.1001-9081.2019101798
    To solve the problems of requiring a large number of samples, not being associated with temporal information, and not using global navigation information in machine learning based motion planning for autonomous driving, a motion planning method for autonomous driving with directional navigation based on a deep spatio-temporal Q-network was proposed. Firstly, in order to extract the spatial features in images and the temporal information between continuous frames for autonomous driving, a new deep spatio-temporal Q-network was proposed based on the original deep Q-network combined with a long short-term memory network. Then, to make full use of the global navigation information of autonomous driving, directional navigation was realized by adding a guide signal into the images used for extracting environment information. Finally, based on the proposed deep spatio-temporal Q-network, a learning strategy oriented to the autonomous driving motion planning model was designed to achieve end-to-end motion planning, where the steering wheel angle, accelerator and brake data were predicted from the input sequential images. The training and testing results in the driving simulator Carla show that on the four test roads the average deviation of this algorithm is less than 0.7 m, and its stability is better than that of the four comparison algorithms. It is proved that the proposed method has better learning performance, stability and real-time performance for realizing motion planning for autonomous driving with a global navigation route.
    End-to-end autonomous driving model based on deep visual attention neural network
    HU Xuemin, TONG Xiuchi, GUO Lin, ZHANG Ruohan, KONG Li
    2020, 40(7):  1926-1931.  DOI: 10.11772/j.issn.1001-9081.2019112054
    Aiming at the problems of low accuracy of driving command prediction, bulky model structure and a large amount of information redundancy in existing end-to-end autonomous driving methods, a new end-to-end autonomous driving model based on a deep visual attention neural network was proposed. In order to effectively extract features of autonomous driving scenes, a deep visual attention neural network composed of a convolutional neural network, a visual attention layer and a long short-term memory network was proposed by introducing a visual attention mechanism into the end-to-end autonomous driving model. The proposed model is able to effectively extract spatial and temporal features of driving scene images, focus on important information and reduce information redundancy, realizing end-to-end autonomous driving that predicts driving commands from the sequential images input by a front-facing camera. The data from a simulated driving environment were used for training and testing. The root mean square errors of the proposed model for steering angle prediction in four scenes, namely country road, highway, tunnel and mountain road, are 0.00914, 0.00948, 0.00289 and 0.01078 respectively, all lower than the results of the method proposed by NVIDIA and the method based on the deep cascaded neural network. Moreover, the proposed model has fewer network layers compared with the networks without the visual attention mechanism.
    Instance segmentation based lane line detection and adaptive fitting algorithm
    TIAN Jin, YUAN Jiazheng, LIU Hongzhe
    2020, 40(7):  1932-1937.  DOI: 10.11772/j.issn.1001-9081.2019112030
    Lane line detection is an important part of intelligent driving systems. Traditional lane line detection methods rely heavily on manually selected features, which requires a large amount of work and yields low accuracy under interference from complex scenes such as object occlusion, illumination change and road abrasion, so designing a robust detection algorithm faces many challenges. In order to overcome these shortcomings, a lane line detection model based on a deep learning instance segmentation method was proposed. The model is based on an improved Mask R-CNN. Firstly, the instance segmentation model was used to segment the lane line image, so as to improve the ability to detect lane line feature information. Then, a clustering model was used to extract the discrete feature points of the lane lines. Finally, an adaptive fitting method was proposed, in which linear and polynomial fitting were used for the feature points in different fields of view, generating the optimal lane line parameter equations. The experimental results show that the method improves the detection speed, achieves better detection accuracy in different scenes, and can robustly extract lane line information under various complex practical conditions.
    Orthodontic path planning based on improved particle swarm optimization algorithm
    XU Xiaoqiang, QIN Pinle, ZENG Jianchao
    2020, 40(7):  1938-1943.  DOI: 10.11772/j.issn.1001-9081.2019112055
    Concerning the problem of tooth movement path planning in virtual orthodontic treatment systems, a tooth movement path planning method based on a simplified mean particle swarm with normal distribution was proposed. Firstly, mathematical models of a single tooth and the whole dentition were established, and according to the characteristics of tooth movement, the orthodontic path planning problem was transformed into a constrained optimization problem. Secondly, based on the simplified particle swarm optimization algorithm, a Simplified Mean Particle Swarm Optimization based on the Normal distribution (NSMPSO) algorithm was proposed by introducing the ideas of normal distribution and mean particle swarm optimization. Finally, a fitness function with high safety was constructed from five aspects: translation path length, rotation angle, collision detection, single-stage tooth movement amount and single-stage rotation amount, so as to realize orthodontic movement path planning. NSMPSO was compared with the basic Particle Swarm Optimization (PSO) algorithm, the Mean Particle Swarm Optimization (MPSO) algorithm and the Simplified Mean Particle Swarm Optimization with Dynamic adjustment of inertia weight (DSMPSO) algorithm. The results show that on the three benchmark test functions Sphere, Griewank and Ackley, the improved algorithm becomes stable and converges within 50 iterations, with the fastest convergence speed and the highest convergence precision. Simulation experiments in Matlab verify that the optimal path obtained by the mathematical models and the improved algorithm is safe and reliable, which can provide assisted diagnosis for doctors.
    Data science and technology
    Reverse influence maximization algorithm in social networks
    YANG Shuxin, LIANG Wen, ZHU Kaili
    2020, 40(7):  1944-1949.  DOI: 10.11772/j.issn.1001-9081.2019091695
    Existing research on influence in social networks mainly focuses on the propagation of single-source information and rarely considers the reverse form of propagation. Aiming at the problem of reverse influence maximization, the heat diffusion model was extended to a multi-source heat diffusion model, and a Pre-Selected Greedy Approximation (PSGA) algorithm was designed. In order to verify the validity of the algorithm, seven representative seed mining methods were selected, and experiments were carried out on different kinds of social network datasets with the propagation revenue of reverse influence maximization, the running time of the algorithm and the seed enrichment degree as evaluation indexes. The results show that the seeds selected by the PSGA algorithm have stronger propagation ability, low intensity and high stability, and have an advantage in the early stage of propagation, indicating that the PSGA algorithm can solve the problem of reverse influence maximization.
    Spatial crowdsourcing task allocation algorithm for global optimization
    NIE Xichan, ZHANG Yang, YU Dunhui, ZHANG Xingsheng
    2020, 40(7):  1950-1958.  DOI: 10.11772/j.issn.1001-9081.2019112025
    In research on spatial crowdsourcing task allocation, the benefits of multiple participants and the global optimization of continuous task allocation are often not considered, which leads to poor allocation results. To address this, an online task allocation algorithm for the global optimization of the tripartite comprehensive benefit was proposed. Firstly, the distribution of crowdsourcing objects (crowdsourcing tasks and workers) in the next time stamp was predicted based on an online random forest and a gated recurrent unit network. Then, a bipartite graph model was constructed based on the crowdsourcing objects in the current time stamp. Finally, the optimal matching algorithm for weighted bipartite graphs was used to complete the task allocation. The experimental results show that the proposed algorithm realizes the global optimization of continuous task allocation. Compared with the greedy algorithm, it improves the task allocation success rate by 25.7%, the average comprehensive benefit by 32.2% and the workers' average opportunity cost by 37.8%; compared with the random threshold algorithm, it improves the task allocation success rate by 27.4%, the average comprehensive benefit by 34.7% and the workers' average opportunity cost by 40.2%.
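    The final matching step can be illustrated with an off-the-shelf optimal assignment on a weighted bipartite graph; the benefit matrix below is invented for illustration, whereas in the paper it would come from the predicted crowdsourcing-object distribution of the next time stamp.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# benefit[i, j]: estimated comprehensive benefit of assigning task i to worker j
benefit = np.array([[4.0, 1.0, 3.0],
                    [2.0, 0.0, 5.0],
                    [3.0, 2.0, 2.0]])

# linear_sum_assignment minimises cost, so negate the benefit to maximise it
tasks, workers = linear_sum_assignment(-benefit)
total = benefit[tasks, workers].sum()
print(list(zip(tasks, workers)), total)   # optimal task -> worker pairs and total benefit
```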
    Cyber security
    Decryption structure of multi-key homomorphic encryption scheme based on NTRU
    CHE Xiaoliang, ZHOU Haonan, ZHOU Tanping, LI Ningbo, YANG Xiaoyuan
    2020, 40(7):  1959-1964.  DOI: 10.11772/j.issn.1001-9081.2020010051
    In order to further improve the security and efficiency of Number Theory Research Unit (NTRU)-type Multi-Key Fully Homomorphic Encryption (MKFHE) schemes, based on the prime power cyclotomic rings, the properties of the original decryption structure of NTRU-type multi-key fully homomorphic encryption were studied, and two optimization methods of multi-key homomorphic decryption structures were proposed. Firstly, by reducing the polynomial's coefficients, the "Regev-Style" multi-key decryption structure was designed. Secondly, the "Ciphertext-Expansion" multi-key decryption structure was designed by expanding the dimension of ciphertexts. Compared with the original decryption structure of NTRU-type multi-key homomorphic encryption scheme, the "Regev-Style" multi-key decryption structure reduced the magnitude of error, which was able to reduce the number of key-switching and modulo-switching when it was used in the design of NTRU-type multi-key homomorphic encryption scheme; the "Ciphertext-Expansion" multi-key decryption structure eliminated the key-switching operation, reduced the magnitude of error, and was able to process the ciphertext product of repeated users more effectively. The security of the optimized multi-key decryption structures was based on the Learning With Errors (LWE) problem and Decisional Small Polynomial Ratio (DSPR) assumption on the prime power cyclotomic rings, so these structures were able to resist subfield attacks well. Therefore, they can be used to design a more secure and efficient NTRU-type multi-key fully homomorphic encryption scheme by selecting appropriate parameters.
    MinRank analysis of cubic multivariate public key cryptosystem
    ZHANG Qi, NIE Xuyun
    2020, 40(7):  1965-1969.  DOI: 10.11772/j.issn.1001-9081.2019112052
    The cubic cryptosystem is an improvement of the classical multivariate cryptosystem Square. By increasing the degree of the central map from a square mapping to a cubic mapping, the public key polynomials are promoted from quadratic to cubic in order to resist the MinRank attack on quadratic multivariate public key cryptosystems. Aiming at this system, a MinRank attack combined with differentials was proposed to recover its private key. Firstly, the differential of the central map of the system was analyzed, and its rank was determined according to the structure after differentiation. Then, the differential of the public key was computed and the coefficient matrices of the quadratic terms were extracted. After that, a MinRank problem was constructed from the coefficient matrices and the determined rank. Finally, the extended Kipnis-Shamir method was applied to solve the problem. The experimental results show that the private key of the cubic cryptosystem can be recovered by the proposed MinRank attack.
    WeChat payment behavior recognition model based on division of large and small burst blocks
    LIANG Denggao, ZHOU Anmin, ZHENG Rongfeng, LIU Liang, DING Jianwei
    2020, 40(7):  1970-1976.  DOI: 10.11772/j.issn.1001-9081.2019122063
    WeChat red packet and fund transfer functions are exploited for illegal activities such as red packet gambling and illicit transactions, while existing research in this field has difficulty identifying the specific numbers of red packets sent or received and of fund transfers in WeChat, and suffers from low recognition rates and high resource consumption. Therefore, a method that divides traffic into large and small burst blocks was proposed to extract traffic characteristics and effectively identify red packet and fund transfer behaviors. Firstly, taking advantage of the burstiness of sending and receiving red packets and fund transfers, a large burst time threshold was set to delimit the burst blocks of such behaviors. Then, based on the fact that sending and receiving red packets and fund transfers consist of several consecutive user operations, a small burst threshold was set to further divide each traffic block into small bursts. Finally, the features of the small burst blocks within each large burst block were synthesized to obtain the final features. The experimental results show that the proposed method is generally better than existing research on WeChat payment behavior recognition in time efficiency, space occupancy, recognition accuracy and algorithm universality, with an average accuracy of up to 97.58%. Tests in a real environment show that the proposed method can accurately identify the numbers of red packets sent and received and of fund transfers made by a user over a period of time.
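    A minimal sketch of the two-level burst division, assuming packet timestamps in seconds and made-up gap thresholds; the feature synthesis and classification stages of the paper are not shown.

```python
def split_bursts(timestamps, gap):
    """Group packet timestamps into bursts: a new burst starts whenever the
    inter-packet gap exceeds `gap` seconds."""
    bursts, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] > gap:
            bursts.append(current)
            current = []
        current.append(t)
    bursts.append(current)
    return bursts

pkts = [0.00, 0.05, 0.09, 2.10, 2.12, 2.20, 9.00]
large = split_bursts(pkts, gap=1.0)                    # large burst blocks
small = [split_bursts(b, gap=0.02) for b in large]     # small bursts inside each block
print(large)
print(small)
```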
    Stepwise correlation power analysis of SM4 cryptographic algorithm
    CONG Jing, WEI Yongzhuang, LIU Zhenghong
    2020, 40(7):  1977-1982.  DOI: 10.11772/j.issn.1001-9081.2019122209
    Focusing on the low analysis efficiency of Correlation Power Analysis (CPA) under noise interference, a stepwise CPA scheme was proposed. Firstly, the utilization of information in CPA was improved by constructing a new stepwise scheme. Secondly, the problem that the accuracy of earlier analysis steps was not guaranteed was solved by introducing a confidence index to improve the accuracy of each step. Finally, a stepwise CPA scheme was instantiated based on the structure of the SM4 cryptographic algorithm. Simulation results show that, at a success rate of 90%, stepwise CPA reduces the number of required power traces by 25% compared to classic CPA. Field Programmable Gate Array (FPGA) based experiments indicate that the ability of stepwise CPA to recover the whole round key is very close to the limit achieved by expanding the search space to the maximum. Stepwise CPA can reduce the interference of noise and improve the analysis efficiency with only a small amount of extra computation.
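    The correlation core that stepwise CPA builds on can be sketched as below: for every key guess, the hypothetical power values are correlated with the measured traces at each time sample. This is generic CPA, not the stepwise scheme or the confidence index of the paper, and the data are random placeholders.

```python
import numpy as np

def cpa_correlations(traces, hypotheses):
    """Pearson correlation between a power model and measured traces.

    traces:     (n_traces, n_samples) measured power
    hypotheses: (n_guesses, n_traces), e.g. Hamming weight of an S-box output
    Returns an (n_guesses, n_samples) correlation matrix.
    """
    t = traces - traces.mean(axis=0)
    h = hypotheses - hypotheses.mean(axis=1, keepdims=True)
    num = h @ t                                           # (n_guesses, n_samples)
    den = np.outer(np.linalg.norm(h, axis=1), np.linalg.norm(t, axis=0))
    return num / den

# the key guess whose correlation peak is highest is the best candidate
corr = cpa_correlations(np.random.randn(500, 100), np.random.randn(16, 500))
best_guess = np.abs(corr).max(axis=1).argmax()
print(best_guess)
```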
    Privacy-preserving determination of integer point-interval relationship
    MA Minyao, WU Lian, LIU Zhuo, XU Yi
    2020, 40(7):  1983-1988.  DOI: 10.11772/j.issn.1001-9081.2020010091
    Determining the relationship between an integer point and an integer interval in a privacy-preserving manner is an important secure multi-party computation problem, but existing solutions have defects such as low efficiency, privacy disclosure and even possible wrong determinations. Aiming at these defects, an improved secure two-party computation protocol for this determination problem was constructed. Firstly, the existing protocols were analyzed and their shortcomings were pointed out. Secondly, a new 0-1 coding rule for integer points and integer intervals was defined, and based on it a necessary and sufficient condition for an integer point belonging to an integer interval was proved. Finally, using this necessary and sufficient condition as the determination criterion, a secure two-party computation protocol for determining whether an integer point belongs to an integer interval was proposed based on the Goldwasser-Micali encryption system, and its correctness and its security under the semi-honest model were proved. Analysis shows that, compared with the existing solutions, the proposed protocol has better privacy preservation and never outputs wrong results; in addition, both the computation complexity and the communication complexity of the protocol are reduced by about half while the round complexity remains the same.
    Secure electronic voting scheme based on blockchain
    WU Zhihan, CUI Zhe, LIU Ting, PU Hongquan
    2020, 40(7):  1989-1995.  DOI: 10.11772/j.issn.1001-9081.2019122171
    There are two main contradictions in existing electronic voting schemes: one is ensuring the legality and compliance of election behavior while preserving the anonymity of the election process, and the other is ensuring the privacy of ballot information while keeping the election results publicly verifiable. Focusing on these contradictions, a decentralized electronic voting scheme based on the Ethereum blockchain and zero-knowledge proofs was proposed. In the proposed scheme, a non-interactive zero-knowledge proof algorithm and the decentralized blockchain architecture were combined to build zero-knowledge proofs of voter identity and of ballot legality, and smart contracts together with the Paillier algorithm were used to realize self-tallying without a trusted third-party counting authority. Theoretical analysis and simulation results show that the scheme achieves the security requirements of electronic voting and can be applied to small-scale community elections.
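    The self-tallying idea relies on the additive homomorphism of the Paillier cryptosystem, which the toy sketch below demonstrates with insecure small primes; the zero-knowledge proofs and the smart-contract logic of the scheme are not reproduced here.

```python
import math
import random

# Toy Paillier parameters (far too small to be secure, for illustration only)
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)                      # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# each ballot encrypts 1 (for) or 0 (against); multiplying ciphertexts adds the votes
ballots = [encrypt(v) for v in [1, 0, 1, 1, 0]]
tally_cipher = 1
for c in ballots:
    tally_cipher = (tally_cipher * c) % n2
print(decrypt(tally_cipher))   # 3
```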
    Network intrusion detection method based on improved rough set attribute reduction and K-means clustering
    WANG Lei
    2020, 40(7):  1996-2002.  DOI: 10.11772/j.issn.1001-9081.2019111915
    In increasingly complex network environments, traditional intrusion detection methods have high false alarm rates and low detection efficiency, and face a contradiction between accuracy and interpretability during optimization. Therefore, an Improved Rough Set Attribute Reduction and optimized K-means Clustering Approach for Network Intrusion Detection (IRSAR-KCANID) was proposed. Firstly, the dataset was preprocessed with attribute reduction based on fuzzy rough sets in order to optimize the features for anomalous intrusion detection. Then, the threshold of the intrusion range was estimated by an improved K-means clustering algorithm, and the network features were classified. After that, based on the linear canonical correlation used for feature optimization, the feature association impact scale was derived from the selected optimal features to form a feature association impact scale table, and the detection of anomalous network intrusions was completed. The experimental results show that the minimal feature association impact scale table obtained after feature optimization clustering can minimize the complexity of the intrusion detection process and shorten the completion time while guaranteeing maximum prediction accuracy.
    Simulation and effectiveness evaluation of network warfare based on LightGBM algorithm
    CHEN Xiaonan, HU Jianmin, CHEN Xi, ZHANG Wei
    2020, 40(7):  2003-2008.  DOI: 10.11772/j.issn.1001-9081.2019122129
    In order to address the high level of abstraction of network warfare and the lack of means for its simulation and effectiveness evaluation under information-based conditions, a network warfare simulation and effectiveness evaluation method integrating multiple indexes of both the attacking and defending sides was proposed. Firstly, for the attacker, four kinds of attack methods were introduced to attack the network; for the defender, the network node structure, content importance and emergency response capability were introduced as defense indicators of the network. Then, a network warfare effectiveness evaluation model was established by integrating the PageRank algorithm and the fuzzy comprehensive evaluation method into the Light Gradient Boosting Machine (LightGBM) algorithm. Finally, by defining the node damage effectiveness curve, the evaluation results of residual effectiveness and damage effectiveness of the whole network warfare attack and defense system were obtained. The simulation results show that the effectiveness evaluation model can effectively evaluate the operational effectiveness of both the attacking and defending sides, which verifies the rationality and feasibility of the proposed method.
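    One ingredient of the defense-side modelling, node importance via PageRank, can be illustrated with networkx as below; the example topology is invented, and the fuzzy comprehensive evaluation and LightGBM parts of the model are not shown.

```python
import networkx as nx

# A toy directed network whose node names are placeholders for illustration
G = nx.DiGraph()
G.add_edges_from([("client1", "gateway"), ("client2", "gateway"),
                  ("gateway", "server"), ("server", "db"), ("db", "server")])

# PageRank scores as one node-importance indicator
importance = nx.pagerank(G, alpha=0.85)
print(sorted(importance.items(), key=lambda kv: -kv[1]))
```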
    Advanced computing
    Automatic generation algorithm of orthogonal grid based on recurrent neural network
    HUANG Zhongzhan, XU Shiming
    2020, 40(7):  2009-2015.  DOI: 10.11772/j.issn.1001-9081.2019112062
    With the rapid development of computer graphics, industrial design, natural science and other fields, the demand for high-quality scientific computing methods has increased, and these methods are inseparable from high-quality grid generation algorithms. For the commonly used orthogonal grid generation algorithms, reducing the amount of computation and reducing manual intervention remain the main challenges. Aiming at these challenges, for simply connected target regions, an automatic orthogonal grid generation algorithm was proposed based on the Long Short-Term Memory network (LSTM), a kind of recurrent neural network, and Schwarz-Christoffel conformal mapping (SC mapping). Firstly, following the basic setup of the SC-mapping-based Gridgen-c tool, the grid generation problem was transformed into an integer programming problem with linear constraints. Next, a classifier capable of estimating the probability of the corner type of each vertex of the target polygonal region was obtained by training an LSTM on the pre-processed GADM dataset; this classifier greatly reduces the time complexity of the integer programming problem, allowing it to be solved quickly and automatically. Finally, grid generation experiments were conducted on simple graphic areas, animated graphic areas and geographical boundary areas. The results show that for simple graphic areas the proposed algorithm reaches the optimal solution on all examples, and for animated graphic areas and geographical boundary areas with complex boundaries, it reduces the amount of computation by 88.42% and 91.16% respectively and automatically generates better orthogonal grids.
    Solution space dynamic compression strategy for permutation and combination problems
    LI Zhanghong, LIANG Xiaolei, TIAN Mengdan, ZHOU Wenfeng
    2020, 40(7):  2016-2020.  DOI: 10.11772/j.issn.1001-9081.2019112006
    The performance of swarm intelligence algorithms in solving large-scale permutation and combinatorial optimization problems is limited by the large search space, so a Solution Space Dynamic Compression (SSDC) strategy was proposed to dynamically cut down the search space of the algorithms. In the proposed strategy, two initial solutions of the permutation and combination optimization problem were first obtained by the intelligent algorithm. Then the repetitive segments of the two solutions were recognized and merged, and the new points after merging were put back into the original solution space to compress and update the solution space. In the subsequent solving process, the search was carried out in the compressed feasible space, so as to improve the search ability of the individuals in the limited space and reduce the search time cost. Based on five high-dimensional benchmark Traveling Salesman Problems (TSP) and two Vehicle Routing Problems (VRP), the performance of several swarm intelligence algorithms combined with the solution space dynamic compression strategy was tested. The results show that the swarm intelligence algorithms combined with the proposed strategy are superior to the corresponding original algorithms in search accuracy and stability, which proves that the solution space dynamic compression strategy can effectively improve the performance of swarm algorithms.
    Path planning for unmanned vehicle based on improved A* algorithm
    QI Xuanxuan, HUANG Jiajun, CAO Jian'an
    2020, 40(7):  2021-2027.  DOI: 10.11772/j.issn.1001-9081.2019112016
    The traditional A* algorithm has the disadvantages of long planning time and a large search range in unmanned vehicle path planning. After a comprehensive analysis of the calculation process of the A* algorithm, the algorithm was improved in four aspects. Firstly, targeted expansion: different quadrants were selected for node expansion according to the relative position of the node to be expanded and the target node. Secondly, target visibility judgment: it was determined whether there were obstacles between the node to be expanded and the target point, and if not, the A* algorithm jumped out of the exploration process to reduce redundant searches. Thirdly, the heuristic function of the A* algorithm was changed: the cost estimate from the n-th generation parent of the node to be expanded to the target point was added, thereby reducing local optima in the cost estimation. Fourthly, the selection strategy for expanded nodes was changed: instead of the traditional selection by minimizing the heuristic function, the simulated annealing method was introduced to optimize the selection of expanded nodes, so that the search proceeds as close to the target point as possible. Matlab simulation results show that, in the simulated map environment, the improved A* algorithm reduces the running time by 67.06% and the number of traversed grids by 73.53%, while the length of the optimized path fluctuates within ±0.6%.
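    For reference, a plain grid A* baseline of the kind being improved is sketched below; the four modifications described above (targeted quadrant expansion, visibility check, modified heuristic and simulated-annealing node selection) are not included, and the grid is a made-up example.

```python
import heapq

def a_star(grid, start, goal):
    """Plain grid A* with 4-neighbour moves and a Manhattan heuristic.
    grid[y][x] == 1 marks an obstacle; start and goal are (x, y) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    g_cost, parent, closed = {start: 0}, {start: None}, set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                       # reconstruct the path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            cx, cy = nxt
            if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt], parent[nxt] = ng, cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (0, 2)))
```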
    Network and communications
    Low complexity offset min-sum algorithm for 5G low density parity check codes
    CHEN Fatang, ZHANG Youshou, DU Zheng
    2020, 40(7):  2028-2032.  DOI: 10.11772/j.issn.1001-9081.2019111897
    In order to improve the error-rate performance of the Offset Min-Sum (OMS) algorithm for Low Density Parity Check (LDPC) codes, a low-complexity OMS algorithm for 5G LDPC codes was proposed based on the 5G NR standard. Aiming at the problem that the offset factor in the traditional algorithm is not calculated accurately enough, density evolution was used to obtain a more accurate offset factor, which was applied in the check node update to enhance the performance of the OMS algorithm; the obtained offset factor was then approximated by a linear approximation method, so as to reduce the complexity of the algorithm while maintaining decoding performance. To counter the influence of variable node oscillation on decoding, the Log-Likelihood Ratio (LLR) message values before and after the node update were weighted, which reduces the oscillation of the variable nodes and improves the convergence speed of the decoder. The simulation results show that, compared with the Normalized Min-Sum (NMS) algorithm and the OMS algorithm, the proposed algorithm improves the decoding performance by 0.3-0.5 dB at a Bit Error Rate (BER) of 10^-5, and reduces the average number of iterations by 48.1% and 24.3% respectively. At the same time, the gap between the proposed algorithm and the LLR-BP (Log-Likelihood Ratio Belief Propagation) algorithm is only about 0.1 dB.
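    The check-node update that the offset factor enters can be sketched as follows; the fixed offset used here is a placeholder, whereas the paper derives it by density evolution and approximates it piecewise linearly.

```python
import numpy as np

def oms_check_node(llrs, offset=0.5):
    """Offset min-sum update at one check node.

    llrs: incoming variable-to-check LLR messages on the connected edges.
    Returns the outgoing check-to-variable messages.
    """
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)                       # exclude the target edge
        sign = np.prod(np.sign(others))
        mag = max(np.min(np.abs(others)) - offset, 0.0)   # offset correction
        out[i] = sign * mag
    return out

print(oms_check_node([2.1, -0.8, 3.4, -1.5]))
```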
    Wireless sensor network deployment algorithm based on basic architecture
    SHI Jiaqi, TAN Li, TANG Xiaojiang, LIAN Xiaofeng, WANG Haoyu
    2020, 40(7):  2033-2037.  DOI: 10.11772/j.issn.1001-9081.2019122211
    At present, the deployment of nodes in wireless sensor networks mainly adopts algorithms based on the Voronoi diagram. In the deployment process using the Voronoi algorithm, the large number of nodes involved and the high complexity of the algorithm lead to long iteration times. In order to solve this problem, a Deployment Algorithm based on Basic Architecture (DABA) was proposed. Firstly, the nodes were combined into basic architectures, then the center position coordinates of each basic architecture were calculated, and finally the node deployment was performed using the Voronoi diagram. The algorithm can still deploy effectively when there are obstacles in the deployment area. The experimental results show that DABA can reduce the deployment time by two thirds compared with the Voronoi algorithm, significantly reducing the iteration time and the complexity of the algorithm.
    Analysis of three-time-slot P-persistent CSMA protocol with variable collision duration in wireless sensor network
    LI Mingliang, DING Hongwei, LI Bo, WANG Liqing, BAO Liyong
    2020, 40(7):  2038-2045.  DOI: 10.11772/j.issn.1001-9081.2019112028
    Random multiple access communication is an indispensable part of computer communication research. A three-time-slot P-Persistent Carrier Sense Multiple Access (P-CSMA) protocol with variable collision duration in Wireless Sensor Networks (WSN) was proposed to solve the problems of the traditional P-CSMA protocol in transmission control and system energy consumption in WSN. In this protocol, a collision duration was added to the traditional two-time-slot P-CSMA protocol, changing the system model into a three-time-slot model consisting of the duration of a successfully transmitted packet, the duration of a packet collision and the idle duration of the system. Through modeling, the throughput, collision rate and idle rate of the system under this model were analyzed, and it was found that changing the collision duration reduces the loss of the system. Compared with the traditional P-CSMA protocol, this protocol improves the system performance and obviously extends the lifetime of system nodes obtained from the battery model. Through the analysis, the system simulation flowchart of this protocol is obtained. Finally, by comparing the theoretical values and the simulation values of different indexes, the correctness of the theoretical derivation is proved.
    Virtual reality and multimedia computing
    Low-resolution image recognition algorithm with edge learning
    LIU Ying, LIU Yuxia, BI Ping
    2020, 40(7):  2046-2052.  DOI: 10.11772/j.issn.1001-9081.2019112041
    Due to the influence of lighting conditions, shooting angles, transmission equipment and the surrounding environment, target objects in criminal investigation video images often have low resolution and are difficult to recognize. In order to improve the recognition rate of low-resolution images, a low-resolution image recognition algorithm based on adversarial edge learning was proposed on the basis of the classic LeNet-5 recognition network. Firstly, an adversarial edge learning network was used to generate the fantasy edge of a low-resolution image, which is similar to the edge of the corresponding high-resolution image. Secondly, this edge information was fused into the recognition network as prior information for recognizing the low-resolution image. Experiments were performed on three datasets: MNIST, EMNIST and Fashion-mnist. The results show that fusing the fantasy edges of low-resolution images into the recognition network can effectively increase the recognition rate of low-resolution images.
    Cross-layer fusion feature based on richer convolutional features for edge detection
    SONG Jie, YU Yu, LUO Qifeng
    2020, 40(7):  2053-2058.  DOI: 10.11772/j.issn.1001-9081.2019112057
    Aiming at problems such as chaotic and fuzzy edge lines produced by current deep learning based edge detection technology, an end-to-end Cross-layer Fusion Feature model for edge detection (CFF) based on RCF (Richer Convolutional Features) was proposed. In this model, RCF was used as the baseline, the Convolutional Block Attention Module (CBAM) was added to the backbone network, translation-invariant downsampling was adopted, and some downsampling operations in the backbone network were removed in order to preserve image detail information, while dilated convolution was used to enlarge the receptive field of the model. In addition, feature maps were fused across layers so that high-level and low-level features could be fully combined. In order to balance the relationship between the loss of each stage and the fusion loss, and to avoid excessive loss of low-level details after multi-scale feature fusion, weight parameters were added to the losses. The model was trained on the Berkeley Segmentation Data Set (BSDS500) and the PASCAL VOC Context dataset, and the image pyramid technique was used at test time to improve the quality of edge images. Experimental results show that the contours extracted by the CFF model are clearer than those extracted by the baseline network and that the model alleviates edge blurring. The evaluation on the BSDS500 benchmark shows that the Optimal Dataset Scale (ODS) and the Optimal Image Scale (OIS) are improved to 0.818 and 0.839 respectively by this model.
    Point cloud compression method combining density threshold and triangle group approximation
    ZHONG Wenbin, SUN Si, LI Xurui, LIU Guangshuai
    2020, 40(7):  2059-2068.  DOI: 10.11772/j.issn.1001-9081.2019111909
    To address the difficulty of balancing compression precision and compression time when compressing non-uniformly collected point cloud data, a compression method combining a density threshold with triangle group approximation was proposed: a density threshold is set on the non-empty voxels obtained by octree division, and triangle groups are constructed to approximate the point cloud surface. Firstly, the vertices of the triangles were determined according to the distribution of the points in each voxel. Secondly, the vertices were sorted to generate each triangle. Finally, the density threshold was introduced to construct rays parallel to the coordinate axes, and subdivision points in regions of different density were generated according to the intersections of the triangles and the rays. Using point cloud data of dragon, horse, skull, radome, dog and PCB models, the proposed method was compared with the improved regional center of gravity method, the curvature-based compression method, the improved curvature-grading-based compression method and the K-neighborhood cuboid method. The experimental results show that, under the same voxel size, the feature expression of the proposed method is better than that of the improved regional center of gravity method; at close compression ratios, the proposed method is superior to the curvature-based, curvature-grading-based and K-neighborhood cuboid methods in time cost; and in terms of compression accuracy, the maximum deviation, standard deviation and surface area change rate of the model built by the proposed method are all better than those of the models built by the four comparison methods. These results indicate that the proposed method can effectively compress point clouds in a short time while retaining feature information well.
    Image super-resolution reconstruction based on hybrid deep convolutional network
    HU Xueying, GUO Hairu, ZHU Rong
    2020, 40(7):  2069-2076.  DOI: 10.11772/j.issn.1001-9081.2019122149
    Aiming at the problems of blurred images, large noise and poor visual perception in traditional image super-resolution reconstruction methods, an image super-resolution reconstruction method based on a hybrid deep convolutional network was proposed. Firstly, the low-resolution image was enlarged to the specified size in the up-sampling phase. Secondly, the initial features of the low-resolution image were extracted in the feature extraction phase. Thirdly, the extracted initial features were sent to a convolutional encoding-decoding structure for image feature denoising. Finally, dilated convolution was used in the reconstruction layer for high-dimensional feature extraction and computation in order to reconstruct the high-resolution image, and residual learning was used to quickly optimize the network, reducing noise and giving the reconstructed image better definition and visual quality. On the Set14 dataset at a scale factor of 4, the proposed method was compared with Bicubic interpolation (Bicubic), Anchored neighborhood regression (A+), the Super-Resolution Convolutional Neural Network (SRCNN), the Very Deep Super-Resolution network (VDSR) and the Restoration Encoder-Decoder Network (REDNet). In the super-resolution experiments, compared with the above methods, the proposed method increases the Peak Signal-to-Noise Ratio (PSNR) by 2.73 dB, 1.41 dB, 1.24 dB, 0.72 dB and 1.15 dB respectively, and improves the Structural SIMilarity (SSIM) by 0.0673, 0.0209, 0.0197, 0.0026 and 0.0046 respectively. The experimental results show that the hybrid deep convolutional network can effectively perform super-resolution reconstruction of images.
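    The PSNR metric used in this comparison can be computed as below; the example images are random arrays for illustration only.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(round(psnr(ref, noisy), 2))
```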
    Image super-resolution reconstruction based on deep progressive back-projection attention network
    HU Gaopeng, CHEN Ziliu, WANG Xiaoming, ZHANG Kaifang
    2020, 40(7):  2077-2083.  DOI: 10.11772/j.issn.1001-9081.2019122155
    Abstract   PDF (1931KB)
    References | Related Articles | Metrics
    Focusing on the problems of Single Image Super-Resolution (SISR) reconstruction methods, such as the loss of high-frequency information during reconstruction, the noise introduced during upsampling, and the difficulty of modelling the interdependence between the channels of the feature maps, a deep progressive back-projection attention network was proposed. Firstly, a progressive upsampling strategy was used to gradually scale the Low Resolution (LR) image to the given magnification, alleviating problems such as the loss of high-frequency information caused by upsampling. Then, at each stage of the progressive upsampling, the idea of iterative back-projection was incorporated to learn the mapping relationship between High Resolution (HR) and LR feature maps and to reduce the noise introduced during upsampling. Finally, an attention mechanism was used to dynamically allocate attention resources to the feature maps generated at the different stages of the progressive back-projection network, so that the interdependence between the feature maps was learned by the network model. Experimental results show that the proposed method can increase the Peak Signal-to-Noise Ratio (PSNR) by up to 3.16 dB and the structural similarity by up to 0.218 4.
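    A hedged PyTorch sketch of a single progressive stage, assuming an up-projection/back-projection pair followed by a squeeze-and-excitation style channel attention; the full multi-stage network and its training details are not reproduced, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style weighting, used here as a stand-in for the
    attention that models the interdependence between feature-map channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(x)

class BackProjectionStage(nn.Module):
    """One 2x progressive stage: upsample LR features, project back down,
    correct with the back-projection residual, then re-weight channels."""
    def __init__(self, channels=64):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.down = nn.Conv2d(channels, channels, 4, stride=2, padding=1)
        self.up_err = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.attn = ChannelAttention(channels)
    def forward(self, lr_feat):
        hr = self.up(lr_feat)
        back = self.down(hr)                      # project the HR estimate back to LR space
        hr = hr + self.up_err(lr_feat - back)     # correct with the back-projection residual
        return self.attn(hr)

if __name__ == "__main__":
    print(BackProjectionStage()(torch.randn(1, 64, 32, 32)).shape)   # (1, 64, 64, 64)
```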
    Infrared image data augmentation based on generative adversarial network
    CHEN Foji, ZHU Feng, WU Qingxiao, HAO Yingming, WANG Ende
    2020, 40(7):  2084-2088.  DOI: 10.11772/j.issn.1001-9081.2019122253
    Abstract   PDF (1753KB)
    References | Related Articles | Metrics
    The great performance of deep learning in many visual tasks largely depends on large data volumes and growing computing power, but in many practical projects it is difficult to provide enough data. Concerning the problem that infrared images are few in number and hard to collect, a method of generating infrared images from color images was proposed to obtain more infrared image data. Firstly, the existing color and infrared image data were used to construct paired datasets. Secondly, the generator and the discriminator of a Generative Adversarial Network (GAN) model were built from convolutional and transposed convolutional neural networks. Thirdly, the GAN model was trained on the paired datasets until a Nash equilibrium between the generator and the discriminator was reached. Finally, the trained generator was used to translate color images from the color domain to the infrared domain. The generated results were evaluated with quantitative metrics, and the evaluation shows that the proposed method can generate high-quality infrared images. In addition, after an L1 or L2 regularization constraint was added to the loss function, the FID (Fréchet Inception Distance) score was reduced on average by 23.95 and 20.89 respectively compared with the loss function without the constraint. As an unsupervised data augmentation method, the approach can also be applied to many other visual tasks that lack training data, such as target recognition, target detection and data imbalance problems.
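    A hedged PyTorch sketch of one generator update with the added L1 regularization term for paired color-to-infrared translation; the real generator and discriminator architectures are not given in the abstract, so toy stand-in modules are used and lambda_l1 is an assumed weight.

```python
import torch
import torch.nn as nn

def generator_step(G, D, color, infrared, opt_g, lambda_l1=100.0):
    """One generator update for paired color->infrared translation (sketch).
    The adversarial term pushes G(color) toward the infrared domain; the extra
    L1 term (the regularization constraint discussed above) keeps the generated
    image close to the paired ground truth."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake_ir = G(color)
    pred = D(torch.cat([color, fake_ir], dim=1))            # conditional discriminator input
    loss = bce(pred, torch.ones_like(pred)) + lambda_l1 * l1(fake_ir, infrared)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

if __name__ == "__main__":
    G = nn.Conv2d(3, 3, 3, padding=1)                       # toy stand-ins for the real networks
    D = nn.Conv2d(6, 1, 3, padding=1)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    color, ir = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
    print(generator_step(G, D, color, ir, opt_g))
```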
    Interactive liveness detection combining with head pose and facial expression
    HUANG Jun, ZHANG Nana, ZHANG Hui
    2020, 40(7):  2089-2095.  DOI: 10.11772/j.issn.1001-9081.2019112059
    Abstract   PDF (1450KB)
    References | Related Articles | Metrics
    In order to resist photo and video attacks on face recognition systems, an interactive liveness detection algorithm combining head pose and facial expression was proposed. Firstly, the number of convolution kernels, the number of network layers and the regularization of VGGNet were adjusted and optimized to construct a multi-layer convolutional head pose estimation network. Secondly, global average pooling, local response normalization and the replacement of pooling by convolution were introduced to improve VGGNet and build an expression recognition network. Finally, the two networks were fused into an interactive liveness detection system, which sends random instructions to users in real time to complete liveness detection. The experimental results show that the head pose estimation network and the expression recognition network achieve 99.87% and 99.60% accuracy on the CAS-PEAL-R1 and CK+ datasets respectively, and the liveness detection system reaches a comprehensive accuracy of 96.70% at a running speed of 20-28 frames per second, giving the system outstanding generalization ability in practical applications.
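    A hedged Python sketch of the interactive protocol: a random pose or expression instruction is issued, and the per-frame outputs of the two recognition networks (assumed to be available as callables) must match the instruction within a time window. The instruction set, timeout and round count are illustrative assumptions.

```python
import random
import time

POSES = ["left", "right", "up", "down"]
EXPRESSIONS = ["smile", "surprise", "neutral"]

def interactive_liveness(capture_frame, predict_pose, predict_expression,
                         n_rounds=3, timeout_s=1.0):
    """Issue random pose/expression instructions and verify that the response,
    as classified by the two networks, matches within the timeout.
    capture_frame / predict_* are assumed callables standing in for the camera
    and the head-pose / expression recognition networks described above."""
    for _ in range(n_rounds):
        use_pose = random.random() < 0.5
        target = random.choice(POSES if use_pose else EXPRESSIONS)
        print("Instruction:", "turn head" if use_pose else "make expression", target)
        deadline, passed = time.time() + timeout_s, False
        while time.time() < deadline:
            frame = capture_frame()
            label = predict_pose(frame) if use_pose else predict_expression(frame)
            if label == target:
                passed = True
                break
        if not passed:
            return False           # treated as a photo/video attack
    return True

if __name__ == "__main__":
    # toy demo with stub classifiers that answer randomly
    print(interactive_liveness(lambda: None,
                               lambda f: random.choice(POSES),
                               lambda f: random.choice(EXPRESSIONS)))
```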
    Face liveness detection method based on near-infrared and visible binocular vision
    DENG Xiwen, FENG Ziliang, QIU Chenpeng
    2020, 40(7):  2096-2103.  DOI: 10.11772/j.issn.1001-9081.2019122184
    Abstract   PDF (1703KB)
    References | Related Articles | Metrics
    Aiming at the problem that face recognition systems are susceptible to forgery attacks, a face liveness detection method based on near-infrared and visible binocular vision was proposed. Firstly, a binocular device was used to capture near-infrared and visible-light face images synchronously; the facial feature points of the two images were extracted and matched using the binocular geometry to obtain their depth information, which was used for three-dimensional point cloud reconstruction. Secondly, the facial feature points were divided into four regions, and the average variance of the feature points in the depth direction within each region was calculated. Thirdly, key facial feature points were selected, and, with the nasal tip as the reference point, the spatial distances between the nasal tip and the key feature points were calculated. Finally, feature vectors were constructed from the depth variances and spatial distances of the facial feature points, and a Support Vector Machine (SVM) was used to discriminate real faces. The experimental results show that the proposed method can detect real faces accurately and resist fake face attacks effectively, achieves a recognition rate of 99.0% in the experiments, and is superior in accuracy and robustness to a similar algorithm that uses the depth information of facial feature points for detection.
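    A hedged NumPy/scikit-learn sketch of the hand-crafted feature vector (per-region depth variances plus nose-tip distances) and the SVM decision, assuming the three-dimensional landmarks have already been reconstructed from the binocular pair; the landmark indices, region split and toy training data are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def liveness_features(landmarks_3d, regions, nose_idx, key_idx):
    """landmarks_3d: (N, 3) reconstructed facial feature points.
    regions: list of index arrays partitioning the landmarks into 4 areas.
    Returns per-region depth variances + nose-tip-to-keypoint distances."""
    depth_var = [np.var(landmarks_3d[r, 2]) for r in regions]             # variance along depth axis
    nose = landmarks_3d[nose_idx]
    dists = [np.linalg.norm(landmarks_3d[k] - nose) for k in key_idx]     # spatial distances
    return np.array(depth_var + dists)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    regions = [np.arange(i, i + 17) for i in (0, 17, 34, 51)]             # toy 4-way split of 68 landmarks
    # real faces have varying depth; planar photo attacks are nearly flat
    real = [liveness_features(rng.normal(size=(68, 3)), regions, 30, [8, 36, 45]) for _ in range(50)]
    fake = [liveness_features(np.c_[rng.normal(size=(68, 2)), rng.normal(scale=1e-3, size=68)],
                              regions, 30, [8, 36, 45]) for _ in range(50)]
    X, y = np.vstack(real + fake), np.array([1] * 50 + [0] * 50)
    clf = SVC(kernel="rbf").fit(X, y)
    print("train accuracy:", clf.score(X, y))
```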
    Multi-modal brain tumor segmentation method under same feature space
    CHEN Hao, QIN Zhiguang, DING Yi
    2020, 40(7):  2104-2109.  DOI: 10.11772/j.issn.1001-9081.2019122233
    Abstract   PDF (874KB)
    References | Related Articles | Metrics
    Glioma segmentation depends on multi-modal Magnetic Resonance Imaging (MRI) images. Segmentation algorithms based on Convolutional Neural Network (CNN) are usually trained and tested on a fixed set of modalities, ignoring the situations where modal images are missing or added. To solve this problem, a method was proposed that maps images of different modalities into the same feature space through a CNN and uses the features in that space to segment tumors. Firstly, the features of the different modalities were extracted by the same deep CNN. Then, the features of the different modalities were concatenated and passed through a fully connected layer to realize feature fusion. Finally, the fused features were used to segment the brain tumor. The proposed model was trained and tested on the BRATS2015 dataset and evaluated with the Dice coefficient. The experimental results show that the proposed model can effectively alleviate the data missing problem; at the same time, compared with the multi-modal joint method, the model is more flexible and can handle the case of added modal data.
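    A hedged PyTorch sketch of the shared-feature-space idea: every modality passes through the same encoder, the features are concatenated and fused (here a 1x1 convolution acts as the per-voxel fully connected fusion layer), and a small head predicts the mask. A 2D toy encoder is used instead of the full segmentation network; sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class SharedSpaceSegmenter(nn.Module):
    """Toy 2D sketch: every modality goes through the SAME encoder so its
    features live in one common space; features are concatenated, fused by a
    1x1 convolution, and a small head predicts the tumor mask."""
    def __init__(self, n_modalities=4, feat=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(n_modalities * feat, feat, 1)
        self.head = nn.Conv2d(feat, 1, 1)
    def forward(self, modalities):                           # list of (B, 1, H, W) tensors
        feats = [self.encoder(m) for m in modalities]        # same weights for every modality
        fused = torch.relu(self.fuse(torch.cat(feats, dim=1)))
        return torch.sigmoid(self.head(fused))

if __name__ == "__main__":
    mods = [torch.randn(1, 1, 64, 64) for _ in range(4)]     # e.g. T1, T1c, T2, FLAIR slices
    print(SharedSpaceSegmenter()(mods).shape)                # (1, 1, 64, 64)
```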
    Detection method of pulmonary nodules based on improved residual structure
    SHI Lukui, MA Hongqi, ZHANG Chaozong, FAN Shiyan
    2020, 40(7):  2110-2116.  DOI: 10.11772/j.issn.1001-9081.2019122095
    Abstract   PDF (2429KB)
    References | Related Articles | Metrics
    In order to solve the problems of high computing cost and model over-fitting caused by complicated network structures in pulmonary nodule detection, an improved residual network structure combining depthwise separable convolution and pre-activation was proposed and applied to a pulmonary nodule detection model. Based on the object detection network Faster R-CNN and a U-Net encoder-decoder structure, the model uses depthwise separable convolution and pre-activation operations to improve the 3D residual network structure. Firstly, depthwise separable convolution was used to reduce the complexity and computing cost of the model. Then, the pre-activation operation was introduced to improve regularization and alleviate over-fitting. Finally, rectangular convolution kernels were used to expand the receptive field of the convolution operations at only a slight increase in computing cost, so that both the global and the local characteristics of pulmonary nodules were taken into account. On the LUNA16 dataset, the proposed method achieves a sensitivity of 96.04% and a Free-response Receiver Operating Characteristic (FROC) score of 83.23%. The experimental results show that the method improves the sensitivity of pulmonary nodule detection, effectively reduces the average number of false positives in the detection results, and improves detection efficiency, which can effectively assist radiologists in detecting pulmonary nodules.
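    A hedged PyTorch sketch of the improved residual unit: pre-activation (batch normalization and ReLU before each convolution) combined with depthwise separable 3D convolution; the surrounding Faster R-CNN / U-Net detection framework and the rectangular kernels are not reproduced, and channel sizes are illustrative.

```python
import torch
import torch.nn as nn

class SepConv3d(nn.Module):
    """Depthwise separable 3D convolution: per-channel spatial convolution
    followed by a 1x1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel, padding=padding, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1)
    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class PreActSepResBlock(nn.Module):
    """Pre-activation residual block: BN and ReLU come before each convolution."""
    def __init__(self, channels):
        super().__init__()
        self.bn1, self.bn2 = nn.BatchNorm3d(channels), nn.BatchNorm3d(channels)
        self.conv1 = SepConv3d(channels, channels)
        self.conv2 = SepConv3d(channels, channels)
    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return x + out

if __name__ == "__main__":
    print(PreActSepResBlock(8)(torch.randn(1, 8, 16, 32, 32)).shape)
```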
    Pulmonary nodule segmentation method based on deep transfer learning
    MA Jinlin, WEI Meng, MA Ziping
    2020, 40(7):  2117-2125.  DOI: 10.11772/j.issn.1001-9081.2019112012
    Abstract   PDF (1631KB)
    References | Related Articles | Metrics
    Focusing on the poor segmentation performance of U-Net for small-volume pulmonary nodules, a segmentation method based on deep transfer learning was proposed, in which a Block Superimposed Fine-Tuning (BSFT) strategy was used to assist the segmentation of pulmonary nodules. Firstly, a convolutional neural network was used to learn feature information from a large natural image dataset. Then, the learned features were transferred to the network for segmenting the small pulmonary nodule image dataset: starting from the last sampling layer, the network was released (unfrozen) and fine-tuned block by block until the superimposition of the last block was completed. Finally, the Dice similarity coefficient was analyzed quantitatively to determine the optimal segmentation network. The experimental results show that BSFT achieves a Dice value of 0.917 9 on the LUNA16 public pulmonary nodule dataset, which is obviously better than that of mainstream pulmonary nodule segmentation algorithms.
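    A hedged PyTorch sketch of the block-superimposed fine-tuning loop: all pretrained parameters start frozen, blocks are unfrozen cumulatively from the end of the network, the model is fine-tuned after each step, and the checkpoint with the best Dice value is kept. The U-Net, data loader and Dice evaluation are assumed to exist; the toy demo at the bottom only exercises the control flow.

```python
import copy
import torch

def bsft_finetune(model, blocks, train_one_epoch, eval_dice, epochs_per_block=2):
    """Block Superimposed Fine-Tuning (sketch). `blocks` lists the model's
    sub-modules ordered from the last sampling block back toward the input;
    they are unfrozen cumulatively and the model is fine-tuned after each step."""
    for p in model.parameters():
        p.requires_grad = False                   # start from frozen pretrained weights
    best_dice, best_state = -1.0, copy.deepcopy(model.state_dict())
    for block in blocks:                          # release block by block
        for p in block.parameters():
            p.requires_grad = True
        opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
        for _ in range(epochs_per_block):
            train_one_epoch(model, opt)
        dice = eval_dice(model)
        if dice > best_dice:
            best_dice, best_state = dice, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)             # keep the best superimposition level
    return best_dice

if __name__ == "__main__":
    net = torch.nn.Sequential(torch.nn.Conv2d(1, 4, 3, padding=1), torch.nn.Conv2d(4, 1, 3, padding=1))
    data = torch.randn(2, 1, 32, 32)
    def train_one_epoch(m, opt):
        opt.zero_grad(); m(data).mean().backward(); opt.step()
    def eval_dice(m):
        return float(torch.rand(1))               # stand-in for the real Dice evaluation
    print(bsft_finetune(net, blocks=[net[1], net[0]],
                        train_one_epoch=train_one_epoch, eval_dice=eval_dice))
```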
    Thick cloud removal algorithm for multi-temporal remote sensing images based on total variation model
    WANG Rui, HUANG Wei, HU Nanqiang
    2020, 40(7):  2126-2130.  DOI: 10.11772/j.issn.1001-9081.2019111902
    Abstract   PDF (1436KB)
    References | Related Articles | Metrics
    Brightness inconsistency and obvious boundaries affect the reconstruction results of multi-temporal remote sensing images. To solve this problem, an improved thick cloud removal algorithm for multi-temporal remote sensing images was proposed by combining a total variation model with the Poisson equation. Firstly, a brightness correction coefficient was calculated from the brightness information of the area common to the multi-temporal images in order to correct their brightness and reduce the effect of brightness differences on the cloud removal results. Then, the brightness-corrected multi-temporal images were reconstructed with a selective multi-source total variation model, improving the spatial smoothness of the fusion results and their similarity to the original images. Finally, local areas of the reconstructed image were optimized using the Poisson equation. The experimental results show that the proposed method can effectively solve the problems of inconsistent brightness and obvious boundaries.
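    A hedged NumPy sketch of the brightness correction step only: a gain is estimated over the area that is clear in both acquisitions and applied to the auxiliary image before fusion. The selective multi-source total variation reconstruction and the Poisson optimization are not reproduced; the mean-ratio gain is an assumed form of the correction coefficient.

```python
import numpy as np

def brightness_correct(reference, auxiliary, common_mask):
    """Scale the auxiliary (cloud-free) image so that its mean brightness over
    the commonly clear area matches the reference image, reducing seams before
    the total-variation fusion step. The mean-ratio gain is an assumption."""
    gain = reference[common_mask].mean() / (auxiliary[common_mask].mean() + 1e-8)
    return auxiliary * gain

if __name__ == "__main__":
    ref = np.full((100, 100), 0.6)
    aux = np.full((100, 100), 0.4)
    mask = np.zeros((100, 100), dtype=bool)
    mask[:50] = True                               # area clear in both acquisitions
    corrected = brightness_correct(ref, aux, mask)
    print(corrected[mask].mean())                  # ~0.6 after correction
```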
    Single image shadow detection method based on entropy driven domain adaptive learning
    YUAN Yuan, WU Wen, WAN Yi
    2020, 40(7):  2131-2136.  DOI: 10.11772/j.issn.1001-9081.2019122068
    Abstract   PDF (1610KB)
    References | Related Articles | Metrics
    Cross-domain discrepancy frequently prevents deep neural networks from generalizing to different datasets. In order to improve the robustness of shadow detection, a novel unsupervised domain-adaptive shadow detection framework was proposed. Firstly, to reduce the data bias between domains, a multi-level domain adaptation model was introduced to align the feature distributions of the source and target domains from low level to high level. Secondly, to improve the model's ability to detect soft shadows, a boundary-driven adversarial branch was proposed so that the model could also produce structured shadow boundaries on the target dataset. Thirdly, an entropy adversarial branch was combined to further suppress the high uncertainty at shadow boundaries in the prediction, yielding accurate and smooth shadow masks. Compared with existing deep learning-based shadow detection methods, the proposed method has the Balance Error Rate (BER) reduced by 10.5% and 18.75% on average on the ISTD and SBU datasets respectively. The experimental results demonstrate that the shadow detection results of the proposed algorithm have better boundary structure.
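    A hedged PyTorch sketch of the entropy-driven part: the per-pixel Shannon entropy of the shadow probability map is computed and fed to a discriminator so that target-domain predictions are pushed toward the low-uncertainty behaviour of the source domain. The shadow detector, the boundary-driven branch and the discriminator architecture are assumed; the toy discriminator below is only a stand-in.

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits, eps=1e-8):
    """Per-pixel Shannon entropy of a binary shadow probability map.
    High values concentrate along uncertain (soft) shadow boundaries."""
    p = torch.sigmoid(logits)
    return -(p * torch.log(p + eps) + (1 - p) * torch.log(1 - p + eps))

def entropy_adversarial_loss(target_logits, discriminator):
    """Generator-side loss (sketch): fool the discriminator into treating the
    target-domain entropy map as if it came from the confident source domain."""
    ent = prediction_entropy(target_logits)
    pred = discriminator(ent)
    return F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))

if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)                        # shadow detector output on target images
    disc = torch.nn.Conv2d(1, 1, 3, padding=1)                # toy stand-in discriminator
    print(entropy_adversarial_loss(logits, disc).item())
```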
    Speech separation algorithm based on convolutional encoder decoder and gated recurrent unit
    CHEN Xiukai, LU Zhihua, ZHOU Yu
    2020, 40(7):  2137-2141.  DOI: 10.11772/j.issn.1001-9081.2019111968
    Abstract   PDF (830KB)
    References | Related Articles | Metrics
    In most deep learning-based speech separation and speech enhancement algorithms, the spectral features obtained by the Fourier transform are used as the input of the neural network, without considering the phase information in the speech signal. However, previous studies show that phase information is essential for improving speech quality, especially at low Signal-to-Noise Ratio (SNR). To solve this problem, a speech separation algorithm based on a Convolutional Encoder-Decoder network and Gated Recurrent Unit (CED-GRU) network was proposed. Firstly, because the original waveform contains both amplitude and phase information, the raw waveform of the mixed speech signal was used as the input feature. Secondly, the temporal dependencies in the speech signal were effectively modelled by combining the Convolutional Encoder-Decoder (CED) network with the Gated Recurrent Unit (GRU) network. Compared with the Permutation Invariant Training (PIT) algorithm, the Deep Clustering (DC) algorithm and the Deep Attractor Network (DAN) algorithm, the improved algorithm increases the Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI) for male-male, male-female and female-female mixtures by 1.16 and 0.29, 1.37 and 0.27, 1.08 and 0.3; 0.87 and 0.21, 1.11 and 0.22, 0.81 and 0.24; 0.64 and 0.24, 1.01 and 0.34, 0.73 and 0.29 percentage points respectively. The experimental results show that the speech separation system based on CED-GRU has great value in practical applications.
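    A hedged PyTorch sketch of a convolutional encoder-decoder with a GRU bottleneck operating directly on the raw mixture waveform and emitting two estimated source waveforms; the filter sizes, the masking scheme and the module name CEDGRU are illustrative assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CEDGRU(nn.Module):
    """Sketch of a convolutional encoder-decoder with a GRU bottleneck that
    maps a raw mixture waveform to two estimated source waveforms."""
    def __init__(self, feat=64, hidden=128, n_src=2):
        super().__init__()
        self.encoder = nn.Conv1d(1, feat, kernel_size=16, stride=8, padding=4)
        self.gru = nn.GRU(feat, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, feat * n_src)
        self.decoder = nn.ConvTranspose1d(feat, 1, kernel_size=16, stride=8, padding=4)
        self.n_src, self.feat = n_src, feat
    def forward(self, mix):                        # mix: (B, 1, T) raw waveform
        z = torch.relu(self.encoder(mix))          # (B, F, L) learned frames
        h, _ = self.gru(z.transpose(1, 2))         # temporal modelling over frames
        m = self.proj(h).transpose(1, 2)           # (B, F * n_src, L) mask logits
        srcs = []
        for i in range(self.n_src):
            masked = z * torch.sigmoid(m[:, i * self.feat:(i + 1) * self.feat])
            srcs.append(self.decoder(masked))      # back to waveform domain
        return torch.stack(srcs, dim=1)            # (B, n_src, 1, T)

if __name__ == "__main__":
    print(CEDGRU()(torch.randn(1, 1, 8000)).shape)  # torch.Size([1, 2, 1, 8000])
```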
    Frontier & interdisciplinary applications
    Overview of modeling method of emergency organization decision in disaster operations management
    CAO Cejun, LIU Ju
    2020, 40(7):  2142-2149.  DOI: 10.11772/j.issn.1001-9081.2019112015
    Asbtract ( )   PDF (1082KB) ( )  
    References | Related Articles | Metrics
    To improve the utilization of human resources, reduce the losses caused by disasters, and contribute to sustainable development, applying efficient approaches to model emergency organization decisions is a critical and urgent issue. Firstly, the concepts and connotations of disaster operations management and emergency organization were given. Secondly, the current state of application and research of Semantic Bill of X (S-BOX), fractal theory, organizational theory, mathematical programming, evolutionary game theory, multi-agent simulation and other methods in emergency organization decision modeling was presented. Finally, potential research directions for emergency organization decision modeling were proposed based on bi-level optimization theory, multi-population evolutionary games, big data, digital twins and blockchain technology.
    H-Algorand:public blockchain consensus mechanism based on multi-block output
    WANG Bo, REN Yingqi, HUANG Dongyan
    2020, 40(7):  2150-2154.  DOI: 10.11772/j.issn.1001-9081.2019111916
    Asbtract ( )   PDF (925KB) ( )  
    References | Related Articles | Metrics
    The public blockchain, which is open to the whole network and has no user authorization mechanism, has received widespread attention from industry. The Algorand mechanism, with good scalability and low fork probability, is widely used in public blockchains; however, its consensus efficiency is low and cannot satisfy high-frequency trading scenarios. To solve these problems, a Multi-Block Algorand (MB-Algorand) mechanism was first proposed to improve block consensus efficiency; then a Hybrid-Algorand (H-Algorand) mechanism combining the Algorand and MB-Algorand mechanisms was proposed to guarantee both consensus efficiency and security. The simulation results show that the H-Algorand mechanism obtains a significant improvement in the consensus efficiency of the blockchain network at the expense of a small loss in security when the committee is under a Distributed Denial of Service (DDoS) attack. When the probability of a block consensus failure is 1%, the proposed mechanism improves the consensus efficiency of the blockchain network by 37.87% with only a 4.9% loss in security performance.
    Path planning for automated guided vehicles based on tempo-spatial network at automated container terminal
    GAO Yilu, HU Zhihua
    2020, 40(7):  2155-2163.  DOI: 10.11772/j.issn.1001-9081.2019122117
    Asbtract ( )   PDF (1282KB) ( )  
    References | Related Articles | Metrics
    In order to solve the path conflict problem of automated guided vehicles in the horizontal handling operations of automated container terminals, a path optimization method based on a tempo-spatial network was proposed. For a single transportation demand, the road network was first discretized into a grid network and a tempo-spatial network that can be updated over time was designed; then, with the minimization of the completion time as the objective, a vehicle path optimization model was established on the set of available road segments of the tempo-spatial network; finally, a shortest path algorithm was run on the tempo-spatial network to obtain the shortest path. For multiple transportation demands, to avoid conflicts between paths, the tempo-spatial network of the next transportation demand was updated according to the path planning result of the current demand, and path plans satisfying the collision avoidance and congestion easing conditions were obtained through iteration. In the computational experiments, compared with the basic shortest path strategy (algorithm P), the proposed method reduces the number of collisions to 0 and keeps the minimum relative distance always greater than the safety distance; compared with the parking-and-waiting strategy (algorithm SP), the proposed method reduces the total task delay time to 24 s and significantly reduces the proportion of delayed tasks and the average congestion rate of the road network, with maximum reductions of 2.25% and 0.68% respectively. The experimental results show that the proposed method can effectively solve large-scale conflict-free path planning problems and significantly improve the operating efficiency of automated guided vehicles.
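    A hedged Python sketch of planning on a time-expanded (tempo-spatial) grid: each search state is (x, y, t), cells reserved by previously planned vehicles are forbidden, waiting in place is allowed, and an earliest-arrival search returns the path; the reservation set is then updated for the next transportation demand. The terminal layout, safety-distance checks and congestion measures of the paper are not modelled.

```python
from heapq import heappush, heappop

def plan_path(grid, start, goal, reserved, max_t=200):
    """Earliest-arrival path on a time-expanded grid.
    grid[y][x] == 1 marks a static obstacle; `reserved` is a set of (x, y, t)
    cells occupied by already-planned vehicles. Moves: wait or 4-neighbour step."""
    h, w = len(grid), len(grid[0])
    frontier = [(0, start[0], start[1])]
    came = {(start[0], start[1], 0): None}
    while frontier:
        t, x, y = heappop(frontier)
        if (x, y) == goal:                                   # reconstruct (x, y, t) states
            path, node = [], (x, y, t)
            while node:
                path.append(node)
                node = came[node]
            return path[::-1]
        if t >= max_t:
            continue
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):   # wait or move
            nx, ny, nt = x + dx, y + dy, t + 1
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 \
                    and (nx, ny, nt) not in reserved and (nx, ny, nt) not in came:
                came[(nx, ny, nt)] = (x, y, t)
                heappush(frontier, (nt, nx, ny))
    return None

if __name__ == "__main__":
    grid = [[0] * 6 for _ in range(4)]
    first = plan_path(grid, (0, 0), (5, 0), reserved=set())
    reserved = set(first)                                     # update the network for the next demand
    second = plan_path(grid, (5, 3), (0, 0), reserved=reserved)
    print(first)
    print(second)
```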
    Intelligent layout optimization algorithm for 3D pipelines of ships
    XIONG Yong, ZHANG Jia, YU Jiajun, ZHANG Benren, LIANG Xuanzhuo, ZHU Qige
    2020, 40(7):  2164-2170.  DOI: 10.11772/j.issn.1001-9081.2020010075
    Asbtract ( )   PDF (1094KB) ( )  
    References | Related Articles | Metrics
    For ship pipeline layout in a three-dimensional environment, where there are many constraints, the engineering rules are difficult to quantify and an appropriate optimization evaluation function is hard to determine, a new automatic ship pipeline layout method was proposed. Firstly, the hull and shipboard equipment were simplified by the Axis-Aligned Bounding Box (AABB) method and discretized into space nodes, the initial pheromone and energy values of the nodes were assigned, the obstacles in the space were marked, and specific quantitative forms of the main pipe-laying rules were given. Secondly, combining the Rapidly-exploring Random Tree (RRT) algorithm with the Ant Colony Optimization (ACO) algorithm, a direction selection strategy, an obstacle avoidance strategy and a variable-step strategy were introduced to improve the search efficiency and success rate, and the ACO algorithm was then used to iteratively optimize the path through an optimization evaluation function, so as to obtain a comprehensive optimal solution that satisfies the engineering rules. Finally, automatic pipe-laying simulation experiments were carried out in a computer-simulated cabin layout environment, which verified the effectiveness and practicability of the proposed method.
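    A hedged NumPy sketch of the space pre-processing step only: the hull volume is discretized into grid nodes, nodes covered by equipment AABBs are marked as obstacles, and uniform initial pheromone and energy fields are created. The quantified pipe-laying rules and the RRT/ACO search itself are not reproduced, and all dimensions are illustrative.

```python
import numpy as np

def build_layout_space(hull_min, hull_max, step, equipment_boxes):
    """Discretize the hull volume into grid nodes and mark nodes that fall
    inside any equipment axis-aligned bounding box (AABB) as obstacles.
    Returns the node coordinates, the obstacle mask, and uniform initial
    pheromone and energy fields (stand-ins for the values described above)."""
    axes = [np.arange(lo, hi + step, step) for lo, hi in zip(hull_min, hull_max)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    nodes = np.stack([X, Y, Z], axis=-1)                     # (nx, ny, nz, 3) node coordinates
    obstacle = np.zeros(nodes.shape[:3], dtype=bool)
    for box_min, box_max in equipment_boxes:
        inside = np.all((nodes >= box_min) & (nodes <= box_max), axis=-1)
        obstacle |= inside
    pheromone = np.ones(obstacle.shape)                      # uniform initial pheromone
    energy = np.ones(obstacle.shape)                         # uniform initial energy values
    return nodes, obstacle, pheromone, energy

if __name__ == "__main__":
    boxes = [((1.0, 1.0, 0.0), (2.0, 2.0, 1.0))]             # one piece of equipment
    nodes, obstacle, pher, energy = build_layout_space((0, 0, 0), (5, 5, 3), 0.5, boxes)
    print(obstacle.shape, int(obstacle.sum()), "obstacle nodes")
```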