Table of Content
10 March 2020, Volume 40 Issue 3
Artificial intelligence
Accelerated compression method for convolutional neural network combining pruning and stream merging
XIE Binhong, ZHONG Rixin, PAN Lihu, ZHANG Yingjun
2020, 40(3): 621-625. DOI: 10.11772/j.issn.1001-9081.2019081363
Deep convolutional neural networks are generally large in scale and computationally complex, which limits their application in highly real-time and resource-constrained environments, so the existing structures of convolutional neural networks need to be compressed and accelerated. To solve this problem, a hybrid compression method combining pruning and stream merging was proposed. In the method, the model was compressed from different angles, further reducing the memory and time consumption caused by parameter redundancy and structural redundancy. Firstly, the redundant parameters in each layer were pruned from inside the model. Then, the non-essential layers were merged with the important layers at the structural level. Finally, the accuracy of the model was restored by retraining. Experimental results on the MNIST dataset show that the proposed hybrid compression method compresses LeNet-5 to 1/20 of its original size and speeds up its running by 8 times without reducing its accuracy.
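As a rough illustration of the pruning half of such a hybrid scheme, the sketch below zeroes out the smallest-magnitude weights of a layer; the threshold rule, the sparsity level and the omitted stream-merging step are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical illustration of magnitude-based weight pruning, one ingredient of a
# pruning + retraining compression pipeline (not the paper's exact criterion).
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that roughly `sparsity` of them become 0."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 80% of a layer's weights; the model would then be retrained
# with the pruning mask kept fixed to recover accuracy.
layer_w = np.random.randn(120, 84)
pruned_w = prune_by_magnitude(layer_w, sparsity=0.8)
print("kept weights:", np.count_nonzero(pruned_w), "/", layer_w.size)
```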
Interference entropy feature selection method for two-class distinguishing ability
ZENG Yuanpeng, WANG Kaijun, LIN Song
2020, 40(3): 626-630. DOI: 10.11772/j.issn.1001-9081.2019071200
Since existing feature selection methods lack the ability to measure how much different classes of data overlap or separate, an Interference Entropy of Two-Class Distinguishing (IET-CD) method was proposed to evaluate the two-class distinguishing ability of features. For a feature with two classes (positive and negative), firstly, the mixed conditional probability of the negative-class samples falling within the range of the positive-class data and the probability of the negative-class samples belonging to the positive class were calculated; then, the confusion probability was computed from the mixed conditional probability and the attribution probability, and used to calculate the positive interference entropy. The negative interference entropy was calculated in a similar way. Finally, the sum of the positive and negative interference entropies was taken as the two-class interference entropy of the feature and used to evaluate the feature's ability to distinguish the two classes of samples: the smaller the interference entropy of a feature, the stronger its two-class distinguishing ability. On three UCI datasets and one simulated gene expression dataset, five optimal features were selected by each method and their two-class distinguishing abilities were compared, so as to compare the performance of the methods. The experimental results show that the proposed method is comparable to or better than the Neighborhood Entropy Feature Selection (NEFS) method, and better in most cases than the Single-indexed Neighborhood Entropy Feature Selection (SNEFS) method, feature selection based on Max-Relevance and Min-Redundancy (MRMR), Joint Mutual Information (JMI) and the Relief method. The IET-CD method can effectively select features with better two-class distinguishing ability.
Novel bidirectional aggregation degree feature extraction method for patent new word discovery
CHEN Meijie, XIE Zhenping, CHEN Xiaoqi, XU Peng
2020, 40(3): 631-637. DOI: 10.11772/j.issn.1001-9081.2019071193
Aiming at the poor performance of general new word discovery methods on recognizing long patent words, the low flexibility of part-of-speech collocation templates for patent terminology, and the lack of unsupervised methods for recognizing long Chinese patent words, a novel bidirectional aggregation degree feature extraction method for patent new word discovery was proposed. Firstly, a bidirectional conditional probability was introduced based on the statistical information between the first and last words of a double-word term. Secondly, a word boundary filtering rule was derived by extending the above feature. Finally, new patent words were extracted by combining the aggregation degree feature and the word boundary filtering rule. Experimental analysis shows that the new method improves the overall F-score by 6.7 percentage points compared with a new word discovery method for the general field, improves the overall F-score by 19.2 and 17.2 percentage points respectively compared with two latest patent terminology collocation template methods, and significantly increases the F-score for discovering new words of 4 to 8 characters. In summary, the proposed method greatly improves the performance of patent new word discovery and can extract long compound words in patent documents more effectively, while reducing the reliance on pre-training processes and extra complex rule bases, showing better practicality.
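The bidirectional conditional probability idea can be illustrated on raw character counts; the toy corpus, the counting granularity and the min-combination rule below are illustrative assumptions rather than the paper's exact formula.

```python
# A toy sketch of bidirectional conditional probabilities for a two-character
# candidate term: P(right | left) and P(left | right) estimated from corpus counts.
from collections import Counter

def bidirectional_scores(corpus: str):
    unigrams = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    scores = {}
    for pair, n_ab in bigrams.items():
        a, b = pair[0], pair[1]
        p_b_given_a = n_ab / unigrams[a]              # forward conditional probability
        p_a_given_b = n_ab / unigrams[b]              # backward conditional probability
        scores[pair] = min(p_b_given_a, p_a_given_b)  # strongly bound pairs score high both ways
    return scores

scores = bidirectional_scores("专利新词发现专利新词专利文本")
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])
```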
Image style transfer network based on texture feature analysis
YU Yingdong, YANG Yi, LIN Lan
2020, 40(3): 638-644. DOI: 10.11772/j.issn.1001-9081.2019081461
Focusing on the low efficiency and poor effect of image style transfer, a feedforward residual image style transfer algorithm based on a pre-trained network and combined with image texture feature analysis was proposed. In the algorithm, the pre-trained deep network was applied to extract the deep features of the style image, and a residual network was used for deep training to realize image transfer. Meanwhile, by analyzing the influence of the texture of the input style image and content image on the transfer effect, corresponding measures were adopted for different input images to improve the transfer result. Experimental results show that the algorithm achieves better output visual effect, lower normalized style loss and less time consumption. Besides, the information entropy and moment invariants of the input image were calculated to guide the setting and adjustment of the network parameters, so that the network was optimized in a targeted way with good effect.
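The abstract mentions using the information entropy of the input image to guide parameter settings; a standard gray-level histogram entropy, which is one plausible reading of that quantity, can be computed as follows.

```python
# Shannon entropy of an 8-bit grayscale image's intensity histogram (an assumed
# definition; the paper's exact entropy and moment-invariant measures are not shown).
import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

img = np.random.randint(0, 256, size=(224, 224), dtype=np.uint8)
print(f"entropy = {image_entropy(img):.2f} bits")
```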
Image classification algorithm based on lightweight group-wise attention module
ZHANG Panpan, LI Qishen, YANG Cihui
2020, 40(3): 645-650. DOI: 10.11772/j.issn.1001-9081.2019081425
Aiming at the problem that existing neural network models have insufficient ability to characterize the features of classification objects in image classification tasks and thus cannot achieve high recognition accuracy, an image classification algorithm based on a Lightweight Group-wise Attention Module (LGAM) was proposed. The proposed module reconstructed the feature maps along the channel and spatial dimensions of the input feature maps. Firstly, the input feature maps were grouped along the channel direction and a channel attention weight was generated for each group, while a ladder-type structure was used to allow information to circulate between the groups. Secondly, a global spatial attention weight was generated based on the new feature maps concatenated from the groups, and the reconstructed feature maps were obtained by applying the two attention weights. Finally, the reconstructed feature maps were merged with the input feature maps to generate the enhanced feature maps. Experiments were performed on the Cifar10 and Cifar100 datasets and part of the ImageNet2012 dataset, using the classification Top-1 error rate as the evaluation indicator, to compare ResNet, Wide-ResNet and ResNeXt enhanced by LGAM with their original versions. Experimental results show that the Top-1 error rates of the neural network models enhanced by LGAM are 1 to 2 percentage points lower than those of the models before enhancement. LGAM can improve the feature characterization ability of existing neural network models and thus their recognition accuracy in image classification.
Sentiment analysis using embedding from language model and multi-scale convolutional neural network
ZHAO Ya'ou, ZHANG Jiachong, LI Yibin, FU Xianrui, SHENG Wei
2020, 40(3): 651-657. DOI: 10.11772/j.issn.1001-9081.2019071210
Word-embedding techniques such as Word2vec or GloVe generate only one semantic vector for a polysemous word. To solve this problem, a sentiment analysis model based on ELMo (Embedding from Language Model) and a Multi-Scale Convolutional Neural Network (MSCNN) was proposed. Firstly, the ELMo model was used to learn from the pre-training corpus and generate context-dependent word vectors. Compared with traditional word-embedding techniques, the ELMo model combines word features and context features through a bidirectional LSTM (Long Short-Term Memory) network to accurately express the different senses of a polysemous word. Besides, because the number of Chinese characters is much larger than that of English characters, the ELMo model is difficult to train on a Chinese corpus, so pre-trained Chinese character embeddings were used to initialize the embedding layer of the ELMo model; compared with random initialization, this made model training faster and more accurate. Then, the multi-scale convolutional neural network was applied to further extract and fuse the features of the word vectors and generate a semantic representation of the whole sentence. Experiments were carried out on the hotel review dataset and the NLPCC2014 task2 dataset. The results show that, compared with the attention-based bidirectional LSTM model, the proposed model obtains a 1.08 percentage point improvement in accuracy on the hotel review dataset, and on the NLPCC2014 task2 dataset it gains a 2.16 percentage point improvement in accuracy compared with the hybrid model based on LSTM and CNN.
Spatiotemporal crowdsourcing online task allocation algorithm based on dynamic threshold
YU Dunhui, YUAN Xu, ZHANG Wanshan, WANG Chenxu
2020, 40(3): 658-664. DOI: 10.11772/j.issn.1001-9081.2019071282
In order to improve the total utility of task allocation in dynamic real-world spatiotemporal crowdsourcing, a Dynamic Threshold algorithm based on online Random Forest (DTRF) was proposed. Firstly, the online random forest was initialized based on the historical matching data of workers and tasks on the crowdsourcing platform. Then, the online random forest was used to predict the expected task return rate of each worker as the threshold, and a candidate matching set was selected for each worker according to the threshold. Finally, the matching with the highest sum of current utility was selected from the candidate matching set, and the online random forest was updated based on the allocation result. Experiments show that the algorithm can improve the average income of workers while increasing the total utility. Compared with the greedy algorithm, the proposed algorithm increases the task assignment rate by 4.1%, the total utility by 18.2%, and the average worker income by 11.2%. Compared with the random threshold algorithm, it achieves better task allocation rate, total utility and average worker income, with better stability.
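A minimal sketch of the threshold-then-greedy matching loop described above, with the online-random-forest prediction stubbed out as a fixed per-worker threshold; the names and the toy utility table are assumptions.

```python
# Simplified threshold-then-greedy allocation: each worker keeps only tasks whose
# utility exceeds the worker's predicted return-rate threshold, then (worker, task)
# pairs are assigned greedily by utility.
def allocate(workers, tasks, predict_return_rate, utility):
    candidates = {w: [t for t in tasks if utility(w, t) >= predict_return_rate(w)]
                  for w in workers}
    assigned, used_tasks = {}, set()
    pairs = sorted(((utility(w, t), w, t) for w in workers for t in candidates[w]),
                   reverse=True)
    for u, w, t in pairs:
        if w not in assigned and t not in used_tasks:
            assigned[w] = t
            used_tasks.add(t)
    return assigned

util = {("w1", "t1"): 0.9, ("w1", "t2"): 0.4, ("w1", "t3"): 0.2,
        ("w2", "t1"): 0.5, ("w2", "t2"): 0.8, ("w2", "t3"): 0.1}
print(allocate(["w1", "w2"], ["t1", "t2", "t3"],
               predict_return_rate=lambda w: 0.3,
               utility=lambda w, t: util[(w, t)]))
```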
Human activity recognition based on improved particle swarm optimization-support vector machine and context-awareness
WANG Yang, ZHAO Hongdong
2020, 40(3): 665-671. DOI: 10.11772/j.issn.1001-9081.2019091551
Concerning the problem of low accuracy of human activity recognition, a recognition method combining Support Vector Machine (SVM) with context-awareness (actual logic or statistical model of human motion state transition) was proposed to identify six types of human activities (walking, going upstairs, going downstairs, sitting, standing, lying). Logical relationships existing between human activity samples were used by the method. Firstly, the SVM model was optimized by using the Improved Particle Swarm Optimization (IPSO) algorithm. Then, the optimized SVM was used to classify the human activities. Finally, the context-awareness was used to correct the error recognition results. Experimental results show that the classification accuracy of the proposed method reaches 94.2% on the Human Activity Recognition Using Smartphones (HARUS) dataset of University of California, Irvine (UCI), which is higher than that of traditional classification method based on pattern recognition.
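The context-awareness correction can be pictured as rejecting predictions that violate a state-transition table; the table and correction rule below are illustrative assumptions, not the paper's actual transition model.

```python
# Toy context-aware correction: an SVM prediction that is impossible given the
# previous state (e.g. jumping from "lying" directly to "upstairs") is rejected
# and the previous state is kept instead.
ALLOWED = {
    "lying":      {"lying", "sitting"},
    "sitting":    {"sitting", "lying", "standing"},
    "standing":   {"standing", "sitting", "walking"},
    "walking":    {"walking", "standing", "upstairs", "downstairs"},
    "upstairs":   {"upstairs", "walking"},
    "downstairs": {"downstairs", "walking"},
}

def correct(previous: str, predicted: str) -> str:
    return predicted if predicted in ALLOWED[previous] else previous

print(correct("lying", "upstairs"))    # -> "lying": implausible transition rejected
print(correct("walking", "upstairs"))  # -> "upstairs": transition accepted
```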
Pedestrian re-identification feature extraction method based on attention mechanism
LIU Ziyan, WAN Peipei
2020, 40(3): 672-676. DOI: 10.11772/j.issn.1001-9081.2019081356
Aiming at the low pedestrian re-identification accuracy across disjoint multiple cameras in real environments, caused by differences in camera scenes, viewpoints, illumination and other factors, a pedestrian re-identification feature extraction method based on an attention mechanism was proposed. Firstly, random erasing was used to augment the input pedestrian images in order to improve the robustness of the network. Then, a top-down attention mechanism network was constructed to enhance the saliency of spatial pixel features, and the attention network was embedded into the ResNet50 network to extract the salient features of the whole pedestrian. Finally, similarity measurement and ranking were performed on the salient pedestrian features to obtain the pedestrian re-identification accuracy. The proposed method achieves a Rank1 of 88.53% and an mAP (mean Average Precision) of 70.70% on the Market1501 dataset, and a Rank1 of 77.33% and an mAP of 59.47% on the DukeMTMC-reID dataset. The proposed method significantly improves performance on the two major pedestrian re-identification datasets and has certain application value.
Next location recommendation based on spatiotemporal-aware GRU and attention
LI Quan, XU Xinhua, LIU Xinghong, CHEN Qi
2020, 40(3): 677-682. DOI: 10.11772/j.issn.1001-9081.2019071289
Aiming at the problem that the influence of time and space information of the location was not considered when making the location recommendation by Gated Recurrent Unit (GRU) of recurrent neural network, the spatiotemporal-aware GRU model was proposed. In addition, aiming at the noise problem generated by the unrelated check-in data in check-in sequence, the next location recommendation method of SpatioTemporal-aware GRU and Attention (ST-GRU+Attention) was proposed. Firstly, time gate and distance gate were added in the GRU model by counting the time slot and distance gap between two locations. The influence of time and space information on recommending next location was controlled by setting the weight matrices. Secondly, the attention mechanism was introduced. The attention weight coefficients of the user were obtained by calculating the attention weight scores of the user preferences, and the personalized preference of the user was obtained. Finally, the objective function was constructed and the model parameters were learned by Bayesian Personalized Ranking (BPR) algorithm. The experimental results show that the accuracy of ST-GRU+Attention is improved significantly compared to the recommendation methods of Factorizing Personalized Markov Chain and Localized Region (FPMC-LR), Personalized Ranking Metric Embedding (PRME) and Spatial Temporal Recurrent Neural Network (ST-RNN), and the precision and recall of ST-GRU+Attention are increased by 15.4% and 17.1% respectively compared to those of ST-RNN which is the best of the three methods. The recommendation method of ST-GRU+Attention can effectively improve the effect of next location recommendation.
Target tracking algorithm based on kernelized correlation filter with block-based model
XU Xiaochao, YAN Hua
2020, 40(3): 683-688. DOI: 10.11772/j.issn.1001-9081.2019071173
To reduce the influence of factors such as illumination variation, scale variation, partial occlusion in target tracking, a target tracking algorithm based on Kernelized Correlation Filter (KCF) with block-based model was proposed. Firstly, the feature of histogram of oriented gradients and the feature of color name were combined to better characterize the target. Secondly, the method of scale pyramid was adopted to estimate the target scale. Finally, the peak to sidelobe ratio of the feature response map was used to detect occlusion, and the partial occlusion problem was solved by introducing a high-confidence block relocation module and a dynamic strategy for model adaptive updating. To verify the effectiveness of the proposed algorithm, comparative experiments with several mainstream algorithms on various datasets were conducted. Experimental results show that the proposed algorithm has the highest precision and success rate which are respectively 11.89% and 15.24% higher than those of KCF algorithm, indicating that the proposed algorithm has stronger robustness in dealing with factors like illumination variation, scale variation and partial occlusion.
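The occlusion test relies on the peak-to-sidelobe ratio of the correlation response map; a common way to compute it is sketched below, with the sidelobe exclusion window size as an assumption.

```python
# Peak-to-Sidelobe Ratio (PSR) of a correlation response map, often used to flag
# occlusion in correlation-filter trackers (window size is an illustrative choice).
import numpy as np

def psr(response: np.ndarray, exclude: int = 5) -> float:
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[mask]
    return float((peak - sidelobe.mean()) / (sidelobe.std() + 1e-8))

resp = np.random.rand(50, 50) * 0.1
resp[25, 25] = 1.0                  # a sharp, confident peak
print(f"PSR = {psr(resp):.1f}")     # a low PSR would indicate possible occlusion
```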
Parallel machine scheduling optimization based on improved discrete artificial bee colony algorithm
ZHANG Jiapeng, NI Zhiwei, NI Liping, ZHU Xuhui, WU Zhangjun
2020, 40(3): 689-697. DOI: 10.11772/j.issn.1001-9081.2019071203
For the parallel machine scheduling problem of minimizing the maximum completion time, an Improved Discrete Artificial Bee Colony algorithm (IDABC) was proposed, taking the processing efficiency of machines and the delivery time of products into account and introducing a mathematical model of the problem. Firstly, a population initialization strategy was adopted to obtain a uniformly distributed population and a generation strategy for the parameters to be optimized, improving the convergence speed of the population. Secondly, the mutation operator of the differential evolution algorithm and the idea of the simulated annealing algorithm were used to improve the local search strategy of the employed bees and the onlooker bees, and the scout bees were improved by exploiting the high-quality information of the optimal solution, increasing the population diversity and avoiding being trapped in local optima. Finally, the proposed algorithm was applied to the parallel machine scheduling problem to analyze its performance and parameters. The experimental results on 15 instances show that, compared with the Hybrid Discrete Artificial Bee Colony algorithm (HDABC), IDABC improves the accuracy and stability by 4.1% and 26.9% respectively and has better convergence, which indicates that IDABC can effectively solve parallel machine scheduling problems in real scenarios.
Denoising autoencoder deep convolution process neural network and its application in time-varying signal classification
ZHU Zhe, XU Shaohua
2020, 40(3): 698-703. DOI: 10.11772/j.issn.1001-9081.2019081435
To solve the problem of nonlinear time-varying signal classification, a Denoising AutoEncoder Deep Convolution Process Neural Network (DAE-DCPNN) was proposed, which combines the information processing mechanism of Process Neural Network (PNN) with convolution operation. The model consists of a time-varying signal input layer, a Convolution Process Neuron (CPN) hidden layer, a deep Denoising AutoEncoder (DAE) network structure and a softmax classifier. The inputs of CPN were time-series signals, and the convolution kernel was taken as a five-order array with gradient property. And convolution operation was carried out based on sliding window to realize the spatio-temporal aggregation of time-series signals and the extraction of process features. After the CPN hidden layer, the DAE deep network and the softmax classifier were stacked to realize the high-level extraction and classification of features of time-varying signals. The properties of DAE-DCPNN were analyzed, and the comprehensive training algorithm of the initial value assignment training based on each information unit and the overall optimization of model parameters was given. Taking 7 kinds of cardiovascular disease classification diagnosis based on 12-lead ElectroCardioGram (ECG) signals as an example, the experimental results verify the effectiveness of the proposed model and algorithm.
Lightweight and multi-pose face recognition method based on deep learning
GONG Rui, DING Sheng, ZHANG Chaohua, SU Hao
2020, 40(3): 704-709. DOI: 10.11772/j.issn.1001-9081.2019071272
At present, the face recognition methods based on deep learning have the problems of large model parameter size and slow feature extraction speed, and the existing face datasets have the problem of single pose, which cannot achieve good recognition effect in the actual face recognition task. Aiming at this problem, a multi-pose face dataset was established, and a lightweight multi-pose face recognition method was proposed. Firstly, the MTCNN (Multi-Task cascaded Convolutional Neural Network) algorithm was used by the method for face detection, and the high-level features included in the last network of MTCNN were used for face tracking. Then, the face pose was judged according to the positions of the detected face key points, the current face features were extracted by the neural network with ArcFace as loss function, and the current face features were compared with the face features of the corresponding pose in the face database to obtain the face recognition result. The experimental results show that the accuracy of the proposed method is 96.25% on the multi-pose face dataset, which is 2.67% higher than that on the face dataset with single pose. It shows that the proposed multi-pose face recognition method can effectively improve the recognition accuracy.
Face hallucination algorithm via combined learning
XU Ruobo, LU Tao, WANG Yu, ZHANG Yanduo
2020, 40(3): 710-716. DOI: 10.11772/j.issn.1001-9081.2019071178
Most existing deep learning based face hallucination algorithms only use a single network without partitioning to reconstruct high-resolution output images and do not consider the structural information in face images, resulting in a lack of sufficient detail when reconstructing vital facial organs. Therefore, a face hallucination algorithm based on combined learning was proposed to tackle this problem. In the algorithm, the regions of interest were reconstructed independently by exploiting the advantages of different deep learning models, so the data distributions of the face regions were handled separately during network training and different sub-networks were able to obtain more accurate prior information. Firstly, for the face image, a superpixel segmentation algorithm was used to generate the facial component parts and the facial background image. Secondly, the facial component image patches were independently reconstructed by the Component-Generative Adversarial Network (C-GAN), and the facial background reconstruction network was used to generate the facial background image. Thirdly, the facial component fusion network was used to adaptively fuse the facial component image patches reconstructed by the two different models. Finally, the generated facial component image patches were merged into the facial background image to reconstruct the final face image. The experimental results on the FEI dataset show that the Peak Signal-to-Noise Ratio (PSNR) of the proposed algorithm is 1.23 dB and 1.11 dB higher than those of the face hallucination algorithms LCGE (Learning to hallucinate face images via Component Generation and Enhancement) and Enhanced Discriminative Generative Adversarial Network (EDGAN) respectively. The proposed algorithm can combine the advantages of different deep learning models to learn and reconstruct more accurate face images, and expands the sources of prior information for image reconstruction.
Reweighted sparse principal component analysis algorithm and its application in face recognition
LI Dongbo, HUANG Lyuwen
2020, 40(3): 717-722. DOI: 10.11772/j.issn.1001-9081.2019071270
For the problem that the principal component vectors obtained by the Principal Component Analysis (PCA) algorithm are not sparse enough and contain many non-zero elements, the PCA algorithm was optimized by a reweighting method, and a new method for extracting high-dimensional data features, namely the Reweighted Sparse Principal Component Analysis (RSPCA) algorithm, was proposed. Firstly, the reweighted l1 optimization framework and the LASSO (Least Absolute Shrinkage and Selection Operator) regression model were introduced into the PCA algorithm to establish a new dimensionality reduction model. Then, the model was solved by using an alternating minimization algorithm, the singular value decomposition algorithm and the least angle regression algorithm. Finally, face recognition experiments were carried out to verify the effectiveness of the algorithm: with K-fold cross-validation, recognition on the ORL face dataset was performed using the PCA algorithm and the RSPCA algorithm. The experimental results show that the RSPCA algorithm obtains sparser vectors while performing as well as PCA, achieving an average recognition accuracy of 95.1%, which is 6.2 percentage points higher than that of the best competing algorithm, sPCA-rSVD (sparse PCA via regularized SVD). In a real-world practical application, handwritten digit recognition, the RSPCA algorithm achieves an average recognition accuracy of 96.4%. The superiority of the proposed algorithm in face recognition and handwritten digit recognition is thus demonstrated.
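As a loose illustration of combining a reweighted l1 penalty with alternating minimization for a sparse leading component (in the spirit of sPCA-rSVD, not the paper's exact solver), one can iterate soft-thresholding with weights inversely proportional to the current loadings:

```python
# Illustrative sketch only: sparse leading-component estimation by alternating
# minimization with a reweighted l1 (soft-thresholding) step.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_sparse_pc(X, lam=0.5, iters=50, eps=1e-3):
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    v = Vt[0]                                   # initialize with the leading PC
    for _ in range(iters):
        u = X @ v
        u /= np.linalg.norm(u) + 1e-12
        w = 1.0 / (np.abs(v) + eps)             # reweighting: small entries get a large penalty
        v = soft(X.T @ u, lam * w)
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
    return v

X = np.random.randn(100, 20)
X[:, :3] += 3 * np.random.randn(100, 1)         # three correlated informative features
v = reweighted_sparse_pc(X)
print("non-zero loadings:", np.flatnonzero(np.abs(v) > 1e-8))
```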
Octave convolution method for lymph node metastases detection
WEI Zhe, WANG Xiaohua
2020, 40(3): 723-727. DOI: 10.11772/j.issn.1001-9081.2019071315
Focused on the low accuracy and long time cost of manually detecting breast cancer lymph node metastasis, a neural network detection model based on the residual network structure, with convolution layers designed using the Octave convolution method, was proposed. Firstly, based on the convolution layers of the residual network, the input and output feature maps of each convolution layer were divided into a high-frequency part and a low-frequency part, with the spatial width and height of the low-frequency part reduced to half of those of the high-frequency part. Then, the convolution from the low-frequency part to the high-frequency part was realized by up-sampling the halved low-frequency maps, and the convolution from the high-frequency part to the low-frequency part was realized by average pooling the high-frequency maps. Finally, the high-to-high and low-to-high convolutions were added to obtain the high-frequency output, and the low-to-low and high-to-low convolutions were added to obtain the low-frequency output. In this way, an Octave convolution layer was constructed, and all convolution layers in the residual network were replaced by Octave convolution layers to build the detection model. In theory, the computation of convolution in an Octave convolution layer is reduced by 75%, effectively speeding up model training. On a cloud server with a maximum memory of 13 GB and a free disk size of 4.9 GB, the PCam (PatchCamelyon) dataset was used for testing. The results show that the model achieves a recognition accuracy of 95.1%, occupies 8.7 GB of memory and 356.4 MB of disk, and has an average single training time of 4 minutes 42 seconds. Compared with ResNet50, the model has its accuracy reduced by 0.6%, memory usage reduced by 0.6 GB, disk usage reduced by 105.9 MB, and single training time shortened by 1 minute. The experimental results demonstrate that the proposed model has high recognition accuracy, short training time and small memory consumption, which reduces the requirement for computing resources in the big data era and gives the model application value.
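The four-path structure of an Octave convolution layer described above can be sketched compactly in PyTorch; the channel split ratio, kernel size and up/down-sampling choices are illustrative assumptions.

```python
# A compact PyTorch sketch of an Octave convolution layer: feature maps are split into
# a high-frequency part and a low-frequency part at half spatial resolution, with four
# convolution paths (H->H, H->L, L->H, L->L).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.25, kernel_size=3, padding=1):
        super().__init__()
        in_lo, out_lo = int(alpha * in_ch), int(alpha * out_ch)
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        self.h2h = nn.Conv2d(in_hi, out_hi, kernel_size, padding=padding)
        self.h2l = nn.Conv2d(in_hi, out_lo, kernel_size, padding=padding)
        self.l2h = nn.Conv2d(in_lo, out_hi, kernel_size, padding=padding)
        self.l2l = nn.Conv2d(in_lo, out_lo, kernel_size, padding=padding)

    def forward(self, x_hi, x_lo):
        h2h = self.h2h(x_hi)
        h2l = self.h2l(F.avg_pool2d(x_hi, 2))                # high -> low: pool then conv
        l2l = self.l2l(x_lo)
        l2h = F.interpolate(self.l2h(x_lo), scale_factor=2,   # low -> high: conv then upsample
                            mode="nearest")
        return h2h + l2h, l2l + h2l                            # high- and low-frequency outputs

x_hi = torch.randn(1, 48, 32, 32)   # high-frequency maps at full resolution
x_lo = torch.randn(1, 16, 16, 16)   # low-frequency maps at half resolution
y_hi, y_lo = OctConv(64, 64)(x_hi, x_lo)
print(y_hi.shape, y_lo.shape)       # torch.Size([1, 48, 32, 32]) torch.Size([1, 16, 16, 16])
```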
Improved migration operator biogeography-based optimization algorithm and its application in PID parameter tuning
PEI Pei, LI Caiwei, LYU Bote
2020, 40(3): 728-734. DOI: 10.11772/j.issn.1001-9081.2019081337
To solve the problems of insufficient search ability and low convergence accuracy of the Biogeography-Based Optimization (BBO) algorithm during optimization, an Improved Migration Operator BBO (IMO-BBO) algorithm was proposed. On the basis of the BBO algorithm and combined with the evolutionary idea of “survival of the fittest”, the migration operator was improved by taking the migration distance into consideration, and a differential strategy was used to replace individuals unsuitable for migration, so as to increase the local exploration ability of the algorithm. At the same time, the concept of multiple populations was introduced to enrich species diversity. The IMO-BBO algorithm was tested on 13 benchmark functions. The results show that, compared with the Covariance Matrix based Migration BBO hybrid with Differential Evolution (CMM-DE/BBO) algorithm and the original BBO algorithm, the improved algorithm enhances the search ability for global optimal solutions and significantly improves both convergence speed and accuracy. IMO-BBO was also applied to PID parameter tuning, and the results show that the controller optimized by the algorithm has faster response speed and more stable accuracy.
Data science and technology
Bad information diffusion modeling and optimal control strategy influenced by social networks
FENG Liping, HAN Qi, ZHOU Zhigang, BAI Zengliang
2020, 40(3): 735-739. DOI: 10.11772/j.issn.1001-9081.2019081384
In view of the fact that existing bad information diffusion models do not consider information diffusion between different social networks, the connectivity principle in graph theory was used to establish a dynamic model of bad information diffusion among multiple social networks, and optimal control theory was applied to the model. Through the optimal control principle, the existence of an optimal control strategy was proved, and the optimal control model of bad information diffusion was obtained. The experimental results show that introducing optimal control measures can effectively suppress the diffusion scale of bad information, and that the strength of the control strategy can be dynamically adjusted as needed. In addition, by simulating the cases with and without information transmission between different social networks, it is found that information transmission between social networks increases the scale and speed of bad information diffusion.
Cyber security
Heterogenous cross-domain identity authentication scheme based on signcryption in cloud environment
JIANG Zetao, XU Juanjuan
2020, 40(3): 740-746. DOI: 10.11772/j.issn.1001-9081.2019071185
For the problem that secure and efficient cross-domain authentication between different cryptosystems, namely Public Key Infrastructure (PKI) and Certificateless Public Key Cryptography (CLC), cannot be achieved between existing, frequently interacting systems, a signcryption-based heterogeneous cross-domain identity authentication method in the cloud environment was proposed. Cross-domain identity authentication was redesigned for heterogeneous systems: an authentication scheme between Users (U) and Cloud Service Providers (CSP) under the two different cryptosystems (PKI↔CLC) was designed, the cross-domain authentication computation of the domain management centers was removed, and a third-party inter-cloud Certification Authority (CA) was introduced to complete mutual information authentication between U and CSP. A signcryption algorithm was adopted to signcrypt messages for users in different security domains, so that bidirectional entity cross-domain identity authentication between heterogeneous systems was realized and the computing overhead of U was reduced. Compared with anonymous authentication and proxy re-signature, the efficiency of the proposed cross-domain authentication is improved by 53.5% and 23.2% respectively. In addition, the method guarantees the legality, authenticity and security of U identities in different cryptosystems and can resist replay attacks, substitution attacks and man-in-the-middle attacks.
Virtual field programmable gate array placement strategy based on ant colony optimization algorithm
XU Yingxin, SUN Lei, ZHAO Jiancheng, GUO Songhui
2020, 40(3): 747-752. DOI: 10.11772/j.issn.1001-9081.2019081359
To find the optimal deployment that allocates the maximum number of virtual Field Programmable Gate Arrays (vFPGAs) to the minimum number of Field Programmable Gate Arrays (FPGAs) in a reconfigurable cryptographic resource pool, the traditional Ant Colony Optimization (ACO) algorithm was optimized, and a vFPGA deployment strategy based on the optimized ACO algorithm, considering FPGA characteristics and actual requirements, was proposed. Firstly, load balancing among FPGAs was achieved by giving ants the ability to perceive resource status, while avoiding frequent migration of vFPGAs. Secondly, free space was reserved to effectively reduce the Service Level Agreement (SLA) conflicts caused by dynamic changes in tenant demand. Finally, the CloudSim toolkit was extended to evaluate the performance of the proposed strategy through simulations on synthetic workflows. Simulation results show that the proposed strategy can reduce the number of FPGAs used by improving resource utilization, under the premise of guaranteeing system service quality.
Distributed denial of service attack detection method based on software defined Internet of things
LIU Xiangju, LIU Pengcheng, XU Hui, ZHU Xiaojuan
2020, 40(3): 753-759. DOI: 10.11772/j.issn.1001-9081.2019091611
Due to the large number, wide distribution and complex environments of Internet of Things (IoT) devices, the IoT is more vulnerable to Distributed Denial of Service (DDoS) attacks than traditional networks. Concerning this problem, a DDoS attack detection method based on the Equal Length of Value Range K-means (ELVR-Kmeans) algorithm in the Software Defined IoT (SD-IoT) architecture was proposed. Firstly, the centralized control characteristic of the SD-IoT controller was used to extract the flow tables of the OpenFlow switches, analyze the DDoS attack traffic characteristics in the SD-IoT environment and extract the seven-tuple features related to DDoS attack traffic. Secondly, the obtained flow tables were classified by the ELVR-Kmeans algorithm to detect whether a DDoS attack had occurred. Finally, a simulation experiment environment was built to test the detection rate, accuracy and error rate of the method. The simulation results show that the proposed method can effectively detect DDoS attacks in the SD-IoT environment, with a detection rate of 96.43%, an accuracy of 98.71% and an error rate of 1.29%.
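For intuition, flow-table seven-tuple features can be clustered into two groups with a plain k-means whose initial centroids are spaced evenly over each feature's value range, which is one reading of the "equal length of value range" idea; the features and data below are synthetic assumptions.

```python
# Toy clustering of per-flow feature vectors into "normal" vs "attack-like" traffic
# with k=2 k-means and range-based centroid initialization.
import numpy as np

def elvr_init(X, k):
    lo, hi = X.min(axis=0), X.max(axis=0)
    # k centroids placed at evenly spaced fractions of each feature's value range
    return np.array([lo + (i + 0.5) / k * (hi - lo) for i in range(k)])

def kmeans(X, k=2, iters=20):
    C = elvr_init(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels, C

# rows: per-flow features, e.g. packet rate, byte rate, flow duration, ...
X = np.vstack([np.random.normal(10, 2, (50, 3)),      # normal traffic
               np.random.normal(100, 10, (10, 3))])   # flood-like traffic
labels, _ = kmeans(X)
print(np.bincount(labels))
```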
Location perturbation algorithm based on geo-indistinguishability of user’s region of interest
LUO Huiwen, LONG Shigong
2020, 40(3): 760-764. DOI: 10.11772/j.issn.1001-9081.2019071313
To address personal location privacy leakage amid the rapid development of Internet of Things (IoT) technology, a location perturbation algorithm of Geo-indistinguishability based on the Region Of Interest (GROI) was proposed. Firstly, random noise following a planar Laplacian distribution was added to the real location of the user. Secondly, the approximate location was obtained by a discretization operation. Thirdly, the query results were sanitized based on the given Region Of Interest (ROI), further reducing the query error while keeping the availability of the mechanism unchanged. Finally, experiments were carried out on Google Maps queries to compare the proposed algorithm with the geo-indistinguishable location privacy protection algorithm. The results show that, within a 6.0 km retrieval range, the proposed algorithm reduces the average query error by at least 2% compared with the geo-indistinguishable algorithm, and the accuracy of its query results is better than that of the geo-indistinguishable algorithm without degrading the privacy level. Especially for close-range retrieval, the proposed algorithm can reduce the query error.
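Planar Laplacian noise is conveniently drawn in polar form: a uniform angle and a Gamma(2, 1/ε)-distributed radius. The sketch below shows only this perturbation step, with a rough metres-to-degrees conversion; the discretization and ROI-based sanitization steps are not shown.

```python
# Drawing planar-Laplace noise for geo-indistinguishability (perturbation step only).
import numpy as np

def perturb(lat, lon, epsilon, rng=np.random.default_rng()):
    theta = rng.uniform(0, 2 * np.pi)
    r = rng.gamma(shape=2.0, scale=1.0 / epsilon)      # radius in the same unit as 1/epsilon
    # convert the metric offset to degrees (rough small-displacement approximation)
    dlat = (r * np.sin(theta)) / 111_320
    dlon = (r * np.cos(theta)) / (111_320 * np.cos(np.radians(lat)))
    return lat + dlat, lon + dlon

print(perturb(26.65, 106.63, epsilon=0.01))  # epsilon per metre -> noise of a few hundred metres
```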
Advanced computing
Revenue maximization strategy for mobile-edge computing server with limited computing resources
HUANG Dongyan, FU Zhongwei, WANG Bo
2020, 40(3): 765-769. DOI: 10.11772/j.issn.1001-9081.2019081351
Mobile Edge Computing (MEC) servers receive revenue by leasing computing resources to users. Improving revenue with limited computing resources is critical for MEC servers. Therefore, a strategy of improving MEC server revenue by the optimization of computing task execution order was proposed. Firstly, the revenue maximization problem of servers was modeled to an optimization problem with task execution order as optimization parameter. Then, an algorithm based on the branch and bound approach was proposed to find the optimal task execution order. Simulation results show that the average revenue of MEC server of the proposed algorithm is 11%, 14% and 21% higher than those of Large Task First (LTF) algorithm, Low-Latency Task First (LLTF) algorithm, and First Come First Served (FCFS) algorithm respectively. The proposed strategy can significantly improve the servers’ revenue while guaranteeing offloading users’ Quality of Service (QoS).
Network and communications
Survivable virtual network embedding guarantee mechanism based on software defined network
ZHAO Jihong, WU Doudou, QU Hua, YIN Zhenyu
2020, 40(3): 770-776. DOI: 10.11772/j.issn.1001-9081.2019071244
For virtual network embedding in Software Defined Network (SDN), existing studies mainly consider the acceptance rate but ignore the problem of underlying resource failures in SDN. Aiming at the Survivable Virtual Network Embedding (SVNE) problem in SDN, a virtual network embedding guarantee mechanism combining a priori protection and a posteriori recovery was proposed. Firstly, the regional resources of the SDN physical network were perceived before a virtual request was accepted. Secondly, backup physical resources were reserved by the a priori protection mechanism for the virtual network elements whose remaining resources in the mapping domain were relatively reduced, and the extended virtual network was embedded into the physical network by the D-ViNE (Deterministic Virtual Network Embedding) algorithm. Finally, when a network element without reserved backup resources failed, the fault was recovered by the a posteriori recovery algorithm, in which the node and the link were recovered by remapping and rerouting respectively. Experimental results show that, compared with the SDN Survivability Virtual Network Embedding algorithm (SDN-SVNE), the proposed mechanism increases the virtual request acceptance rate by 21.9%, and has advantages in metrics such as virtual-level fault recovery rate and physical-level fault recovery rate.
P2P transmission scheduling optimization based on software defined network
XIANG Xiong, TIAN Jian
2020, 40(3): 777-782. DOI: 10.11772/j.issn.1001-9081.2019071267
To solve the traffic optimization problem of Application Layer Multicast (ALM) in Peer-to-Peer (P2P) systems, a real-time flow scheduling system based on Software Defined Network (SDN) was designed. Firstly, network measurement technologies were used to obtain the traffic matrix of the network, which was then abstracted into a weighted network state graph and provided to the Terminal First Steiner Tree (TFST) generation algorithm. The TFST generation algorithm was divided into two stages. When generating the multicast tree in the first stage, terminal nodes were given higher priority by setting their distance to zero. In the second stage, the number of branch nodes was adjusted according to a preset weighting factor so that the computed multicast tree balanced traffic cost and implementation cost. Finally, to prevent the degradation of network performance caused by too frequent flow-table updates when deploying the multicast tree into the network, a recurrent neural network based module was designed to automatically adjust the update cycle according to network performance. The simulation results indicate that the congestion index of the network using the ALM real-time flow scheduling system is reduced by 47% compared with that of the original network. In addition, under medium load, the average congestion index of the method that uses the neural network module to automatically adjust the update cycle is reduced by 17.6% and 25% respectively compared with those of the immediate-update and fixed 5-second-interval update methods. This shows that introducing machine learning into SDN to realize an intelligent network has great practical application value.
Outlier node detection algorithm in wireless sensor networks based on graph signal processing
LU Guangyue, ZHOU Liang, LYU Shaoqing, SHI Cong, SU Keke
2020, 40(3): 783-787. DOI: 10.11772/j.issn.1001-9081.2019071224
Since the low security of sensors, poor detection areas and resource limitations in Wireless Sensor Networks (WSNs) cause outliers in the data collected by nodes, an outlier node detection algorithm in WSN based on graph signal processing was proposed. Firstly, according to the sensor position features, a K-Nearest Neighbors (KNN) graph signal model was established. Secondly, a statistical test quantity was built based on the smoothness ratio of the graph signal before and after low-pass filtering. Finally, the existence of outlier nodes was judged through the statistical test quantity and a decision threshold. Experiments on a public temperature dataset and a PM2.5 dataset demonstrate that, compared with an outlier node detection algorithm based on the graph frequency domain, the proposed algorithm increases the detection rate by 7% in the case of a single outlier node, achieves a detection rate of 98% in the case of multiple outlier nodes, and keeps a high detection rate for outlier nodes with small deviation values.
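The building blocks mentioned above (a KNN graph over sensor positions, Laplacian smoothness and spectral low-pass filtering) can be sketched as follows; the cut-off and toy data are assumptions, and the paper's exact test statistic and threshold are not reproduced.

```python
# KNN graph over sensor positions, graph-Laplacian smoothness of the readings,
# and a simple spectral low-pass filter.
import numpy as np

def knn_adjacency(pos, k=3):
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    A = np.zeros_like(d)
    for i in range(len(pos)):
        for j in np.argsort(d[i])[1:k + 1]:   # skip the node itself
            A[i, j] = A[j, i] = 1.0
    return A

def smoothness(x, L):
    return float(x @ L @ x) / (float(x @ x) + 1e-12)

pos = np.random.rand(30, 2)                   # sensor coordinates
x = np.sin(4 * pos[:, 0])                     # smooth field sampled by the sensors
x[5] += 3.0                                   # one outlier node

A = knn_adjacency(pos)
L = np.diag(A.sum(1)) - A                     # combinatorial graph Laplacian
w, V = np.linalg.eigh(L)
x_lp = V[:, :10] @ (V[:, :10].T @ x)          # keep the 10 lowest graph frequencies

print("smoothness before/after low-pass:", smoothness(x, L), smoothness(x_lp, L))
```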
User grouping and power allocation strategy based on NOMA system
JIN Yong, LUO Ming, DONG Mingyang
2020, 40(3): 788-792. DOI: 10.11772/j.issn.1001-9081.2019071217
An improved user grouping and power allocation strategy was proposed for the high complexity of optimal user grouping and power allocation schemes in Non-Orthogonal Multiple Access (NOMA) systems. Firstly, the users were grouped: the first user of each subchannel was determined by the channel gain value, and the remaining users were allocated by a greedy matching method. Then, user power was allocated, with the power allocation problem divided into an inter-subchannel part and an intra-subchannel part: power was allocated across subchannels by the linear water-filling algorithm, and within subchannels by the proposed iterative power allocation algorithm. Finally, a Lagrangian function was constructed to maximize the system throughput under the constraints of maximum transmit power and a guaranteed minimum data rate for each user. The simulation results show that, in the case of multiple users, the proposed strategy increases system throughput by 8% and 20% respectively compared with the LWF-FTPA (Linear WaterFilling-Fractional Transmit Power Allocation) algorithm and the EQ-FTPA (EQual-Fractional Transmit Power Allocation) algorithm, indicating that the strategy is better than the traditional algorithms.
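Inter-subchannel water-filling can be sketched with a simple bisection on the water level; the bisection itself is a standard implementation choice rather than necessarily the paper's.

```python
# Linear water-filling: power is poured up to a common water level above each
# subchannel's inverse gain.
import numpy as np

def waterfilling(gains, total_power, tol=1e-9):
    inv = 1.0 / gains                      # "floor height" of each subchannel
    lo, hi = inv.min(), inv.max() + total_power
    while hi - lo > tol:
        level = (lo + hi) / 2
        used = np.maximum(level - inv, 0.0).sum()
        lo, hi = (level, hi) if used < total_power else (lo, level)
    return np.maximum((lo + hi) / 2 - inv, 0.0)

gains = np.array([1.0, 0.5, 0.1, 0.05])    # normalized subchannel gains
p = waterfilling(gains, total_power=10.0)
print(p, p.sum())                          # weaker subchannels get less (or no) power
```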
Wireless sensor deployment optimization based on improved IHACA-CpSPIEL algorithm
DUAN Yujun, WANG Yaoli, CHANG Qing, LIU Xing
2020, 40(3): 793-798. DOI: 10.11772/j.issn.1001-9081.2019071201
Aiming at the problems of low coverage and high communication cost in wireless sensor deployment, a sensor deployment method combining an Improved Heuristic Ant Colony Algorithm (IHACA) with a Chaos-optimized padded Sensor Placements at Informative and cost-Effective Locations algorithm (IHACA-CpSPIEL) was proposed. Firstly, the correlation between observation points and unobserved points was established by mutual information, and the communication cost was described in graph-theoretic form to establish a mathematical model with submodularity. Secondly, a chaos operator was introduced to improve the global search ability of the pSPIEL (padded Sensor Placements at Informative and cost-Effective Locations) algorithm for local parameters, and then the optimal number of clusters was found. Then, the factors of the colony distance heuristic function and the pheromone updating mechanism were changed to jump out of locally optimal communication costs. Finally, the Chaos-optimized pSPIEL algorithm (CpSPIEL) was integrated with IHACA to determine the shortest path, so as to achieve low-cost deployment. The experimental results show that the proposed algorithm can escape local optima well, reduces the communication cost by 6.5% to 24.0% compared with the pSPIEL algorithm, and has a faster search speed.
Computer software technology
Service composition partitioning method based on process partitioning technology
LIU Huijian, LIU Junsong, WANG Jiawei, XUE Gang
2020, 40(3): 799-805. DOI: 10.11772/j.issn.1001-9081.2019071290
In order to solve the bottleneck of the central controller in centralized service composition, a method of constructing decentralized service composition based on process partitioning was proposed. Firstly, the business process was modeled by a typed directed graph. Then, a grouping algorithm based on the graph transformation method was proposed, and the process model was partitioned according to the grouping algorithm. Finally, the decentralized service composition was constructed according to the partitioning results. Test results show that, compared with the single-thread algorithm, the grouping algorithm reduces the time consumption for model 1 by 21.4%, and the constructed decentralized service composition has lower response time and higher throughput. The experimental results show that the proposed method can effectively partition the business processes in the service composition, and the constructed decentralized service composition can improve service performance.
Virtual reality and multimedia computing
Progressive mesh simplification algorithm for mobile devices
CHU Surong, NIU Zhixian, SONG Chunhua, NIU Baoning
2020, 40(3): 806-811. DOI: 10.11772/j.issn.1001-9081.2019071163
To solve the problems existing Progressive Mesh (PM) simplification algorithms face, such as losing key features when meshes are highly simplified, low simplification speed and limited applicability to various models, an edge-collapse mesh simplification algorithm combining the Quadric Error Metric (QEM) and a curvature-like Feature value with a Variable Parameter (QFVP) was proposed to build progressive meshes for mobile devices. Firstly, a variable parameter w was set to control the relative magnitude of the quadric error and the curvature-like value in the edge-collapse error, improving the simplification quality and making the algorithm more widely applicable. Secondly, an error Back Propagation (BP) neural network was trained to determine the w value of a model. Thirdly, a normal vector linear estimation method for the edge-collapse process was proposed, which shortens the mesh simplification time by 23.7% on average compared with the Gouraud estimation method. In comparison experiments, the PM base meshes generated by QFVP have smaller global error (measured by the Hausdorff distance) than those generated by the QEM algorithm or the Melax algorithm, and QFVP has a simplification time about 7.3% longer than the QEM algorithm and 54.7% shorter than the Melax algorithm.
Improved ViBe algorithm based on color layout descriptor
WANG Tong, WANG Wei, CUI Yihao, ZHU Tianyu
2020, 40(3): 812-818. DOI: 10.11772/j.issn.1001-9081.2019071208
In view of the “ghost” regions that the ViBe (Visual Background extractor) algorithm sometimes produces when detecting moving targets, and the misdetection of moving targets caused by interference when detecting targets against a dynamic background, an improved ViBe algorithm was proposed based on three-frame differencing and morphological post-processing applied to key frames extracted by the Color Layout Descriptor (CLD). Firstly, the video key frame images were extracted by CLD. Secondly, three-frame differencing was performed on the selected key frame images, and the background model containing the moving target was filled with the difference results to obtain the real background image. Thirdly, the moving target was detected, eliminating the “ghost”. Finally, a morphological processing technique with an adaptive threshold was added in the background model updating stage to eliminate interference information in the dynamic background model. The experimental results show that the proposed algorithm is superior in avoiding ghosts and resisting dynamic background interference in moving target detection, and when the similarity measurement threshold is selected between 0.67 and 0.72, the accuracy of the algorithm can reach 99.4%, indicating that the algorithm can reliably detect the position information of the moving target.
Blurred video frame interpolation method based on deep voxel flow
LIN Chuanjian, DENG Wei, TONG Tong, GAO Qinquan
2020, 40(3): 819-824. DOI: 10.11772/j.issn.1001-9081.2019081474
Motion blur has an extremely negative effect on video frame interpolation. To handle this problem, a novel blurred video frame interpolation method was proposed. Firstly, a multi-task fusion convolutional neural network consisting of a deblurring module and a frame interpolation module was proposed. In the deblurring module, based on a deep Convolutional Neural Network (CNN) with a stack of ResBlocks, motion blur in the two input frames was removed by extracting and learning deep blur features. The frame interpolation module was used to estimate the voxel flow between the two consecutive deblurred frames, and the obtained voxel flow was used to guide trilinear interpolation of the pixels to synthesize the intermediate frame. Secondly, a large simulated blurred video dataset was built, and a “first separate and then combine”, “from coarse to fine” training strategy was proposed; experimental results show that this strategy promotes effective convergence of the multi-task fusion network. Finally, compared with a simple combination of state-of-the-art deblurring and frame interpolation algorithms, experimental metrics show that the intermediate frames synthesized by the proposed method have the peak signal-to-noise ratio increased by at least 1.41 dB, the structural similarity improved by at least 0.020, and the interpolation error decreased by at least 1.99. Visual comparison and the reconstructed sequences show that the proposed model achieves good frame-rate up-conversion for blurred videos; in other words, two blurred consecutive frames can be reconstructed end-to-end into three sharp and visually smooth frames by the model.
Image inpainting based on dilated convolution
FENG Lang, ZHANG Ling, ZHANG Xiaolong
2020, 40(3): 825-831. DOI: 10.11772/j.issn.1001-9081.2019081471
Although the existing image inpainting methods can recover the content of the missing area of the image, there are still some problems, such as structure distortion, texture blurring and content discontinuity, so that the inpainted images cannot meet people’s visual requirements. To solve these problems, an image inpainting method based on dilated convolution was proposed. By introducing the idea of dilated convolution to increase the receptive field, the quality of image inpainting was improved. This method was based on the idea of Generative Adversarial Network (GAN), which was divided into generative network and adversarial network. The generative network included global content inpainting network and local detail inpainting network, and gated convolution was used to realize the dynamical learning of the image features, solving the problem that the traditional convolution neural network method was not able to complete the large irregular missing areas well. Firstly, the global content inpainting network was used to obtain an initial content completion result, and then the local texture details were repaired by the local detail inpainting network. The adversarial network was composed of SN-PatchGAN discriminator, and was used to evaluate the image inpainting effect. Experimental results show that compared with the current image inpainting methods, the proposed method has great improvement in Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM) and inception score. Moreover, the method effectively solves the problem of texture blurring in traditional inpainting methods, and meets people’s visual requirements better, verifying the validity and feasibility of the proposed method.
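The gated convolution used in the generative network can be sketched in a few lines of PyTorch: a feature branch multiplied by a learned sigmoid gate, optionally with dilation to enlarge the receptive field; channel sizes here are arbitrary assumptions.

```python
# Minimal gated convolution: one branch produces features, a parallel branch produces
# a soft gate, and the output is their element-wise product, letting the network learn
# which spatial positions are valid.
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, dilation=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size,
                                 padding=padding, dilation=dilation)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, dilation=dilation)

    def forward(self, x):
        return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))

# dilation=2 enlarges the receptive field without extra parameters
x = torch.randn(1, 4, 64, 64)            # e.g. RGB image + mask channel
y = GatedConv2d(4, 32, padding=2, dilation=2)(x)
print(y.shape)                            # torch.Size([1, 32, 64, 64])
```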
Image reconstruction based on neuron spike signals in pigeon optic tectum
WANG Zhizhong, PANG Chen
2020, 40(3): 832-836. DOI: 10.11772/j.issn.1001-9081.2019071257
Focused on the issue of decoding visual input from neural response signals, a method to reconstruct visual input using neuronal action potential (spike) signals was proposed. Firstly, spike signals from pigeon Optic Tectum (OT) neurons were recorded and their firing rate characteristics were extracted. Then, a linear inverse filter reconstruction model and a convolutional neural network reconstruction model were constructed to reconstruct the visual input. Finally, the number of channels, the time bin, the data time length and the delay time were optimized. Under the same parameter conditions, the cross-correlation coefficient of images reconstructed with the linear inverse filter model reached 0.9107±0.0219, and that of images reconstructed with the convolutional neural network model reached 0.9271±0.0176. The results show that the visual input can be effectively reconstructed by extracting the firing rate characteristics of neuron spikes and using the linear inverse filter and convolutional neural network reconstruction models.
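A linear inverse-filter decoder amounts to a (ridge-regularized) least-squares mapping from population firing rates to pixel intensities; the toy dimensions, simulated data and regularization constant below are assumptions for illustration.

```python
# Ridge-regularized linear decoding of stimuli from firing-rate features.
import numpy as np

def fit_inverse_filter(R, S, lam=1e-2):
    """R: (trials, neurons) firing rates; S: (trials, pixels) stimuli. Returns W."""
    return np.linalg.solve(R.T @ R + lam * np.eye(R.shape[1]), R.T @ S)

rng = np.random.default_rng(0)
W_true = rng.normal(size=(32, 16 * 16))               # hidden encoding for the toy example
S = rng.normal(size=(500, 16 * 16))                   # 500 stimulus images, 16x16 pixels
R = S @ W_true.T + 0.1 * rng.normal(size=(500, 32))   # simulated firing rates

W = fit_inverse_filter(R, S)
S_hat = R @ W                                         # reconstructed stimuli
corr = np.corrcoef(S_hat.ravel(), S.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.3f}")
```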
Light-weight image fusion method based on SqueezeNet
WANG Jixiao, LI Yang, WANG Jiabao, MIAO Zhuang, ZHANG Yangshuo
2020, 40(3): 837-841. DOI: 10.11772/j.issn.1001-9081.2019081378
The existing deep learning based infrared and visible image fusion methods have too many parameters and require large amounts of computing resources and memory, so they cannot meet the deployment demands of resource-constrained edge devices such as cell phones and embedded devices. To address these problems, a light-weight image fusion method based on SqueezeNet was proposed. SqueezeNet was used to extract image features, a weight map was obtained from these features, weighted fusion was performed, and finally the fused image was generated. Comparison with the ResNet50-based method shows that the proposed method compresses the model size and the number of network parameters to 1/21 and 1/204 respectively, and increases the running speed to 5 times the original while maintaining the quality of the fused images. The experimental results demonstrate that the proposed method achieves better fusion effect than existing traditional methods, while reducing the size of the fusion model and accelerating the fusion speed.
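A minimal PyTorch/torchvision sketch of the fusion pipeline described above (SqueezeNet features, weight maps, weighted fusion). The truncation depth, the l1 activity measure and `weights=None` (to keep the example offline) are assumptions, since the abstract does not give the exact weighting rule.

```python
import torch
import torch.nn.functional as F
from torchvision.models import squeezenet1_1

def fuse_pair(ir, vis, layers=5):
    """Weighted fusion of an infrared and a visible image: run both through the
    first few SqueezeNet layers, turn per-pixel feature activity into softmax
    weight maps, and blend the inputs. ir, vis: (1, 3, H, W) tensors in [0, 1]."""
    # weights=None keeps this example offline; pretrained features would normally be used.
    backbone = squeezenet1_1(weights=None).features[:layers].eval()
    with torch.no_grad():
        acts = [backbone(x) for x in (ir, vis)]
    # l1 activity across channels as a saliency proxy, upsampled back to input size.
    maps = [F.interpolate(a.abs().sum(1, keepdim=True), size=ir.shape[-2:],
                          mode="bilinear", align_corners=False) for a in acts]
    w = torch.softmax(torch.cat(maps, dim=1), dim=1)              # (1, 2, H, W)
    return w[:, :1] * ir + w[:, 1:] * vis

fused = fuse_pair(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
```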
Portrait inpainting based on generative adversarial networks
YUAN Linjun, JIANG Min, LUO Dunlang, JIANG Jiajun, GUO Jia
2020, 40(3): 842-846. DOI: 10.11772/j.issn.1001-9081.2019071283
Portrait inpainting is widely used in photo editing based on image rendering and computational photography. Many factors, including the variety of clothing, different body shapes (tall, short, fat or thin) and the high degree of freedom of human poses, make portrait inpainting difficult. Therefore, an efficient portrait inpainting method based on Generative Adversarial Network (GAN) was proposed. The algorithm consists of two stages. In the first stage, the image was roughly inpainted by an encoder-decoder network, and then the body pose in the image was estimated. In the second stage, the portrait was accurately inpainted based on the pose information and GAN. Besides, the key points of the portrait pose were connected by using the pose information to form a pose skeleton, a dilation operation was performed, and a portrait pose mask was obtained; thereby a portrait pose loss function was constructed for network training. The experimental results show that, compared with the Contextual Attention inpainting method, the proposed method has the SSIM (Structural SIMilarity index) increased by one percentage point. By adding portrait pose information into the inpainting process, the method effectively constrains the solution space of the portrait data in the region to be inpainted and strengthens the network's attention to the portrait pose.
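A minimal PyTorch sketch of a pose-mask weighted reconstruction loss in the spirit of the portrait pose loss described above; the dilation-by-max-pooling trick and the weighting factor are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pose_weighted_l1(pred, target, pose_mask, alpha=5.0):
    """L1 reconstruction loss that up-weights pixels inside a dilated pose mask,
    so errors along the body skeleton are penalized more heavily.
    pred, target: (B, 3, H, W); pose_mask: (B, 1, H, W) binary skeleton mask."""
    # Dilate the skeleton with a max-pool (a cheap stand-in for a morphological dilation).
    dilated = F.max_pool2d(pose_mask, kernel_size=9, stride=1, padding=4)
    weight = 1.0 + alpha * dilated
    return (weight * (pred - target).abs()).mean()

loss = pose_weighted_l1(torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128),
                        (torch.rand(2, 1, 128, 128) > 0.95).float())
```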
Face image inpainting method based on circular fields of feature parts
WANG Xiao, WEI Jiawang, YUAN Yubo
2020, 40(3): 847-853. DOI: 10.11772/j.issn.1001-9081.2019071212
To solve the problems of unreasonable structure and low efficiency in exemplar-based image inpainting methods, a face image inpainting method based on circular fields of feature parts was proposed. Firstly, according to the distribution of feature points obtained by feature point localization, the face image was segmented into four circular fields to determine the feature search domains. Then, in the priority model, the confidence term was made to attenuate in the form of an exponential function and was combined with a structural gradient term, so that the priority was constrained by local gradient information to improve the structural connectivity of the inpainting result. In the matching-patch search stage, the search domain of the matching patch was determined according to the relative position between the target patch and each circular field, improving the search efficiency. Finally, under the criterion of structural similarity, the face image inpainting with structural connectivity was completed by choosing the best matching patch. Compared with four state-of-the-art inpainting methods, the proposed method has the Peak Signal-to-Noise Ratio (PSNR) of the inpainted images increased by 1.219 to 2.663 dB on average, and the time consumption reduced by 34.7% to 69.6% on average. The experimental results show that the proposed method is effective in maintaining the structural connectivity and visual rationality of face images, and performs excellently in inpainting accuracy and time consumption.
Vehicle information detection based on improved RetinaNet
LIU Ge, ZHENG Yelong, ZHAO Meirong
2020, 40(3): 854-858. DOI: 10.11772/j.issn.1001-9081.2019071262
The lack of computational power and the limited storage of mobile terminals lead to low accuracy and slow speed of vehicle information detection models. Therefore, an improved vehicle information detection algorithm based on RetinaNet was proposed. Firstly, a new detection framework was developed in which the deep feature information of the FPN (Feature Pyramid Network) module was merged into the shallow feature layers, and MobileNet V3 was used as the basic feature extraction network. Secondly, GIoU (Generalized Intersection over Union), a direct evaluation index of the detection task, was introduced to guide the localization task. Finally, a dimension clustering algorithm was used to find better anchor sizes and match them to the corresponding feature layers. Compared with the original RetinaNet detection algorithm, the proposed algorithm has the accuracy improved by 10.2 percentage points on the vehicle information detection dataset. With MobileNet V3 as the basic network, the mAP (mean Average Precision) reaches 97.2% and the forward inference time of a single frame reaches 100 ms on ARM v7 devices. The experimental results show that the proposed method can effectively improve the performance of mobile vehicle information detection algorithms.
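A minimal PyTorch implementation of the GIoU measure that the improved RetinaNet uses to guide localization; the box format and the final loss form (1 - GIoU) follow common practice rather than the paper's code.

```python
import torch

def giou(boxes1, boxes2):
    """Generalized IoU for axis-aligned boxes in (x1, y1, x2, y2) format,
    matched row by row. boxes1, boxes2: (N, 4) tensors."""
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    lt = torch.max(boxes1[:, :2], boxes2[:, :2])     # intersection top-left
    rb = torch.min(boxes1[:, 2:], boxes2[:, 2:])     # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    union = area1 + area2 - inter
    iou = inter / union.clamp(min=1e-7)
    # Smallest enclosing box C; GIoU subtracts the "wasted" fraction of C.
    lt_c = torch.min(boxes1[:, :2], boxes2[:, :2])
    rb_c = torch.max(boxes1[:, 2:], boxes2[:, 2:])
    area_c = (rb_c - lt_c).clamp(min=0).prod(dim=1)
    return iou - (area_c - union) / area_c.clamp(min=1e-7)

giou_loss = 1 - giou(torch.tensor([[0., 0., 10., 10.]]),
                     torch.tensor([[2., 2., 12., 12.]]))
```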
Joint super-resolution and deblurring method based on generative adversarial network for text images
CHEN Saijian, ZHU Yuanping
2020, 40(3): 859-864. DOI: 10.11772/j.issn.1001-9081.2019071205
Aiming at the difficulty of reconstructing clear high-resolution images from blurred low-resolution images with existing super-resolution methods, a joint super-resolution and deblurring method for text images based on Generative Adversarial Network (GAN) was proposed. Firstly, focusing on severely blurred low-resolution text images, the generator network was constructed from a down-sampling module and a deblurring module. Secondly, the input images were down-sampled by the down-sampling module to generate blurred super-resolution images. Thirdly, the deblurring module was used to reconstruct clear super-resolution images. Finally, in order to recover the text images better, a joint training loss including a super-resolution pixel loss, a deblurring pixel loss, a semantic-layer feature matching loss and an adversarial loss was introduced. Extensive experiments on synthetic and real-world images demonstrate that, compared with the existing advanced method SCGAN (Single-Class GAN), the proposed method has the Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM) and OCR (Optical Character Recognition) accuracy improved by 1.52 dB, 0.011 5 and 13.2 percentage points respectively. The proposed method can better deal with degraded text images in real scenes with low computational cost.
MP-CGAN: night single image dehazing algorithm based on Msmall-Patch training
WANG Yunfei, WANG Yuanyu
2020, 40(3): 865-871. DOI: 10.11772/j.issn.1001-9081.2019071219
Aiming at the color distortion and noise of night image dehazing based on Dark Channel Prior (DCP) and the atmospheric scattering model, a Conditional Generative Adversarial Network (CGAN) dehazing algorithm based on Msmall-Patch training (MP-CGAN) was proposed. Firstly, UNet and the Densely connected convolutional Network (DenseNet) were combined into UDNet (U Densely connected convolutional Network) as the generator network structure. Secondly, Msmall-Patch training was applied to the generator and discriminator networks: multiple small penalty regions, which are degraded or easily misjudged, were extracted from the final patch output of the discriminator by Min-Pool or Max-Pool, and a severe penalty loss was proposed for these regions, that is, multiple maximum loss values in the discriminator output were selected as the loss. Finally, a new composite loss function was proposed by combining the severe penalty loss, the perceptual loss and the adversarial perceptual loss. On the test set, compared with the Haze Density Prediction Network (HDP-Net) algorithm, the proposed algorithm has the PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural SIMilarity index) increased by 59% and 37% respectively; compared with the super-pixel algorithm, PSNR and SSIM are increased by 59% and 48% respectively. The experimental results show that the proposed algorithm can reduce the noise artifacts generated during CGAN training and improve the quality of night image dehazing.
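A minimal PyTorch sketch of the Msmall-Patch "severe penalty" idea: pool the discriminator's patch scores into small regions and keep only the largest per-region losses. The pooling size, hinge form and value of k are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def severe_patch_loss(disc_patch_scores, k=16, pool=4, use_max_pool=True):
    """'Msmall-Patch' style penalty: pool the PatchGAN score map into small regions,
    then average only the k largest per-region losses so that training focuses on
    the worst (most easily misjudged) areas. disc_patch_scores: (B, 1, H, W),
    interpreted here as discriminator scores for a generated (fake) sample."""
    pooled = (F.max_pool2d(disc_patch_scores, pool) if use_max_pool
              else -F.max_pool2d(-disc_patch_scores, pool))       # Max-Pool or Min-Pool
    per_region = torch.relu(1.0 + pooled).flatten(1)              # hinge-style loss per region
    worst, _ = torch.topk(per_region, k=min(k, per_region.shape[1]), dim=1)
    return worst.mean()

loss = severe_patch_loss(torch.randn(2, 1, 32, 32))
```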
Remote sensing image scene classification based on scale-attention network
BIAN Xiaoyong, FEI Xiongjun, MU Nan
2020, 40(3): 872-877. DOI: 10.11772/j.issn.1001-9081.2019071314
The Convolutional Neural Network (CNN) treats potential object information and background information equally in the input image, whereas remote sensing scene images contain many small objects and complex backgrounds. To solve this problem, a scale-attention network was proposed based on an attention mechanism and multi-scale feature transformation. Firstly, a fast and effective attention module was developed, in which the attention map was generated based on optimal feature selection. Then, the scale-attention network was built on ResNet50 by embedding the attention map, adding a multi-scale feature fusion layer and redesigning the fully connected layer. Secondly, the pre-trained model was used to initialize the scale-attention network, and the training set was employed to fine-tune the network. Finally, the fine-tuned scale-attention network was used to predict the classification of the test set. The classification accuracy of the proposed method on the AID scene dataset is 95.72%, which is 2.62 percentage points higher than that of ArcNet. On the NWPU-RESISC scene dataset, the method achieves a classification accuracy of 92.25%, 0.95 percentage points higher than that of IORN (Improved Oriented Response Network). The experimental results demonstrate that the proposed method can improve the classification accuracy of remote sensing scene images.
Speech enhancement algorithm based on MMSE spectral subtraction with Laplacian distribution
WANG Yongbiao, ZHANG Wenxi, WANG Yahui, KONG Xinxin, LYU Tong
2020, 40(3): 878-882. DOI: 10.11772/j.issn.1001-9081.2019071152
A Minimum Mean Square Error (MMSE) spectral subtraction speech enhancement algorithm based on the Laplacian distribution was proposed to solve the noise residual and speech distortion of spectral subtraction based on the Gaussian distribution. Firstly, the original noisy speech signal was framed and windowed, and the Fourier transform was applied to each frame to obtain the short-time Discrete Fourier Transform (DFT) coefficients. Secondly, noise frames were detected by calculating the log-spectral energy and spectral flatness of each frame, so as to update the noise estimate. Thirdly, under the assumption that the speech DFT coefficients follow a Laplacian distribution, the optimal spectral subtraction coefficient was derived under the MMSE criterion, and spectral subtraction with this coefficient was performed to obtain the enhanced signal spectrum. Finally, the enhanced spectrum was transformed back by the inverse Fourier transform and the frames were recombined to obtain the enhanced speech. The experimental results show that the Signal-to-Noise Ratio (SNR) of the speech enhanced by the proposed algorithm is increased by 4.3 dB on average, a 2 dB improvement over the over-subtraction method. In terms of the Perceptual Evaluation of Speech Quality (PESQ) score, the average score of the proposed algorithm is 10% higher than that of the over-subtraction method. The proposed algorithm achieves better noise suppression and less speech distortion, with significant improvement in both SNR and PESQ.
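A minimal NumPy sketch of the overall spectral-subtraction pipeline (framing, windowing, FFT, subtraction, inverse FFT, overlap-add). It uses a constant subtraction coefficient and a crude leading-frames noise estimate as stand-ins; the paper instead derives the coefficient under an MMSE criterion with a Laplacian prior and detects noise frames via log-spectral energy and spectral flatness.

```python
import numpy as np

def spectral_subtraction(noisy, fs, frame_len=0.025, hop=0.5, alpha=2.0, beta=0.01):
    """Frame/window -> FFT -> subtract a noise estimate -> inverse FFT -> overlap-add.
    alpha plays the role of the subtraction coefficient; the paper derives it per frame
    under an MMSE criterion with a Laplacian prior, here it is simply a constant."""
    n, step = int(frame_len * fs), int(frame_len * fs * hop)
    win = np.hanning(n)
    frames = np.array([noisy[i:i + n] * win for i in range(0, len(noisy) - n, step)])
    spectra = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    # Noise magnitude estimated from the first frames (assumed speech-free); the paper
    # instead detects noise frames via log-spectral energy and spectral flatness.
    noise_mag = mag[:6].mean(axis=0)
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * noise_mag)   # spectral floor
    enhanced = np.fft.irfft(clean_mag * np.exp(1j * phase), n=n, axis=1)
    out = np.zeros(step * len(frames) + n)
    for k, fr in enumerate(enhanced):                                   # overlap-add synthesis
        out[k * step:k * step + n] += fr * win
    return out

enhanced_speech = spectral_subtraction(np.random.randn(16000), fs=16000)
```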
Frontier & interdisciplinary applications
Green vehicle routing problem optimization for multi-type vehicles considering traffic congestion areas
ZHAO Zhixue, LI Xiamiao, ZHOU Xiancheng
2020, 40(3): 883-890. DOI: 10.11772/j.issn.1001-9081.2019071306
In order to reduce the carbon emission of vehicles during logistics distribution, from the perspective of green environmental protection, a Green Vehicle Routing Problem (GVRP) with multiple vehicle types considering traffic congestion areas was studied. Firstly, the effect of multiple vehicle types and different traffic congestion situations on route planning was investigated. Secondly, a metric function of the carbon emission rate was introduced on the basis of vehicle speed and load. Thirdly, a dual-objective green vehicle routing model was established with the minimization of the vehicle management cost and of the fuel consumption and carbon emission cost as optimization objectives. Finally, a hybrid differential evolution algorithm combined with simulated annealing was designed to solve the problem. Simulation results verify that the model and algorithm can effectively avoid congestion areas. Compared with distribution using only 4 t vehicles, the proposed model reduces the total cost by 1.5% and the fuel consumption and carbon emission cost by 4.3%. Compared with the model whose optimization objective is the shortest driving distance, the proposed model reduces the total distribution cost by 8.1%, demonstrating that it can improve the economic benefits of logistics enterprises and promote energy saving and emission reduction. Meanwhile, compared with the basic differential evolution algorithm, the hybrid differential evolution algorithm with simulated annealing reduces the total transportation cost by 3% to 6%; compared with the genetic algorithm, the optimization effect is more obvious, with the total transportation cost reduced by 4% to 11%, proving the superiority of the algorithm. In summary, the proposed model and algorithm can provide effective advice for the urban distribution routing decisions of logistics enterprises.
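The abstract does not spell out its carbon-emission metric, so the sketch below uses a comprehensive-modal-emission-model style fuel rate that depends on speed and load, a common choice in GVRP studies; every coefficient, function name and the cost weighting are illustrative assumptions, not the paper's calibrated values.

```python
def fuel_litres_per_second(speed_kmh, curb_kg, load_kg, k=0.2, engine_rps=33.0,
                           displ_l=5.0, cd=0.7, area=3.912, cr=0.01,
                           eff=0.45, eta=0.9, kappa=44.0, psi=737.0):
    """CMEM-style instantaneous fuel rate as a function of speed and payload;
    carbon emission is taken as proportional to fuel burned. All coefficients
    are illustrative defaults, not the paper's values."""
    v = speed_kmh / 3.6                                   # m/s
    mass = curb_kg + load_kg
    # Tractive power demand on a flat road: rolling resistance + aerodynamic drag.
    power_kw = (mass * 9.81 * cr + 0.5 * cd * 1.2041 * area * v * v) * v / 1000.0
    grams_per_s = (k * engine_rps * displ_l + power_kw / (eff * eta)) / kappa
    return grams_per_s / psi                              # grams/s -> litres/s

def arc_fuel_and_carbon_cost(dist_km, speed_kmh, curb_kg, load_kg,
                             fuel_price=1.1, co2_per_litre=2.63, carbon_price=0.03):
    """Fuel plus carbon cost of one arc, the kind of quantity the routing model minimizes."""
    travel_s = dist_km / speed_kmh * 3600.0
    litres = fuel_litres_per_second(speed_kmh, curb_kg, load_kg) * travel_s
    return litres * fuel_price + litres * co2_per_litre * carbon_price

cost = arc_fuel_and_carbon_cost(dist_km=12.0, speed_kmh=30.0, curb_kg=4000.0, load_kg=1500.0)
```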
Delivery truck strategy under uncertain interference constraints
ZHOU Leilei, LIANG Chengji, HU Xiaoyuan
2020, 40(3): 891-896. DOI: 10.11772/j.issn.1001-9081.2019071311
In order to improve the operation efficiency of container terminals and reduce the influence of uncertain interference factors on the operation of delivery trucks, a method of handling interference factors with a rolling-window strategy was proposed, a mixed integer model minimizing the operation delay penalty cost and the yard crane movement cost was established, and a Genetic Algorithm (GA) was used to solve the model. Firstly, the rolling-window strategy was used to obtain the scheduling scheme of the delivery trucks when no interference factor occurs. Secondly, when an interference factor occurred, the rolling-window rescheduling mechanism was triggered to reschedule the operation order of the delivery trucks. Finally, the optimal scheduling scheme in each window was calculated, and the optimal operation plan over the total planning horizon was obtained. By comparing and analyzing the results of cases under different scenarios, the experimental results show that the minimum operation cost under the rolling-window strategy is 9% lower than that under the traditional operation mode in the case without interference, and 15% lower in the case with interference, which verifies the effectiveness of the algorithm and the superiority of the rolling-window strategy for delivery truck operation.
Berth joint scheduling based on quantum genetic hybrid algorithm
CAI Yun, LIU Pengqing, XIONG Hegen
2020, 40(3): 897-901. DOI: 10.11772/j.issn.1001-9081.2019071242
In order to improve the service efficiency of container ports and reduce the tardiness costs of ship services, a new mathematical model was established for the tug-berth joint scheduling problem under given port hardware conditions (berths, tugboats, quay cranes), with the objective of minimizing the sojourn time of the ships and the total tardiness, and a hybrid algorithm was designed to solve it. Firstly, a serial hybrid strategy of Quantum Genetic Algorithm (QGA) and Tabu Search (TS) was analyzed and determined. Secondly, according to the characteristics of the joint scheduling problem, a dynamic quantum rotation gate update strategy was adopted when solving the key steps of the hybrid algorithm (chromosome structure design and measurement, genetic manipulation, population regeneration, etc.). Finally, the feasibility and effectiveness of the algorithm were verified by production examples. The experimental results show that, compared with manual scheduling, the hybrid algorithm reduces the sojourn time of the ships and the total tardiness by 24% and 42.7% respectively; compared with the genetic algorithm, they are reduced by 10.9% and 22.5% respectively. The proposed model and algorithm can not only provide optimized operation schemes for berthing, unberthing, loading and unloading operations of port ships, but also increase the competitiveness of the port.
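A minimal NumPy sketch of the quantum rotation-gate mechanics at the heart of a QGA: qubit angles encode bit probabilities, individuals are obtained by measurement, and angles are rotated toward the best solution found so far. The fixed rotation step stands in for the paper's dynamic rotation-angle strategy and lookup table, which are not reproduced here.

```python
import numpy as np

def measure(theta, rng):
    """Collapse qubit angles into a classical bit string; P(bit = 1) = sin(theta)^2."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def rotate_qubits(theta, best, measured, delta=0.05 * np.pi):
    """One rotation-gate update: rotate each qubit angle toward the corresponding
    bit of the best solution found so far (fixed step instead of a dynamic one)."""
    direction = np.where(best > measured, 1.0, np.where(best < measured, -1.0, 0.0))
    return theta + direction * delta

rng = np.random.default_rng(1)
theta = np.full(20, np.pi / 4)              # equal superposition for every qubit
best = rng.integers(0, 2, 20)               # stand-in for the best chromosome found so far
for _ in range(50):                         # angles drift toward the best bits
    theta = rotate_qubits(theta, best, measure(theta, rng))
```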
Constrained multi-objective weapon-target assignment problem
ZHANG Kai, ZHOU Deyun, YANG Zhen, PAN Qian
2020, 40(3): 902-911. DOI: 10.11772/j.issn.1001-9081.2019071274
The traditional point-to-point saturation attack is not an ideal choice against high-density, multi-azimuth swarm intelligence targets. By selecting appropriate weapon types and aiming-point locations to realize fire coverage, the maximum killing effect can be achieved with fewer weapons than targets. Considering the operational requirements of security targets, damage thresholds and preference assignment, a Constrained Multi-objective Weapon-Target Assignment (CMWTA) mathematical model was established first. Then, the calculation method of the constraint violation value was designed, and individual coding, detection and repair as well as constraint domination were combined to handle the multiple constraints. Finally, a convergence metric for the multi-objective weapon-target assignment model was designed, and the approaches were verified within several Multi-Objective Evolutionary Algorithm (MOEA) frameworks. In the comparison of three MOEA frameworks, the size of the Pareto set of SPEA2 (Strength Pareto Evolutionary Algorithm Ⅱ) is mainly distributed in [21,25], that of NSGA-Ⅱ (Non-dominated Sorting Genetic Algorithm Ⅱ) is mainly distributed in [16,20], and that of MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition) is less than 16. In the verification of the repair algorithm, the algorithm increases the convergence metrics of the three MOEA frameworks by 20%, and keeps the proportion of infeasible non-dominated solutions in the Pareto set at 0%. The experimental results show that SPEA2 outperforms NSGA-Ⅱ and MOEA/D in distribution and convergence metrics when solving the CMWTA model, and the proposed repair algorithm improves the efficiency of finding feasible non-dominated solutions.
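A minimal NumPy sketch of how a constraint-violation value and constraint domination can be wired into an MOEA framework; the concrete constraints (weapon budget, protected targets, damage threshold) are an illustrative reading of the abstract, and the domination rule is the standard Deb-style comparison rather than the authors' exact operator.

```python
import numpy as np

def constraint_violation(assignment, weapon_budget, protected_mask, hit_prob, threshold):
    """Total violation of CMWTA-style constraints for one candidate assignment.
    assignment: (n_weapons, n_targets) 0/1 matrix; protected_mask marks no-strike
    (security) targets; threshold is the required damage on ordinary targets."""
    v = max(assignment.sum() - weapon_budget, 0)                        # resource limit
    v += assignment[:, protected_mask == 1].sum()                       # hits on protected targets
    damage = 1 - np.prod(1 - hit_prob * assignment, axis=0)             # expected damage per target
    v += np.maximum(threshold - damage, 0)[protected_mask == 0].sum()   # damage shortfall
    return v

def constrained_dominates(f_a, cv_a, f_b, cv_b):
    """Deb-style constraint domination: feasible beats infeasible; between infeasible
    solutions the smaller violation wins; between feasible ones ordinary Pareto
    dominance decides (minimization of both objectives)."""
    if cv_a == 0 and cv_b > 0:
        return True
    if cv_a > 0:
        return cv_b > 0 and cv_a < cv_b
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

A = np.array([[1, 0, 0], [0, 1, 0]])
cv = constraint_violation(A, weapon_budget=2, protected_mask=np.array([0, 0, 1]),
                          hit_prob=np.full((2, 3), 0.7), threshold=0.5)
better = constrained_dominates(np.array([1.0, 2.0]), cv, np.array([1.5, 2.5]), 1.0)
```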
Hybrid gradient based hard thresholding pursuit algorithm
YANG Libo, JIANG Tiegang, XU Zhiqiang
2020, 40(3): 912-916. DOI: 10.11772/j.issn.1001-9081.2019071296
Aiming at the large number of iterations and long reconstruction time of iterative hard thresholding algorithms in Compressed Sensing (CS), a Hybrid Gradient based Hard Thresholding Pursuit (HGHTP) algorithm was proposed. Firstly, in each iteration, the gradient and the conjugate gradient at the current iterate were calculated, and the union of the support sets in the gradient domain and the conjugate gradient domain was taken as the candidate support set for the next iteration, so that the useful information of the conjugate gradient was fully utilized and the support-set selection strategy was optimized. Secondly, the least squares method was used to perform a secondary screening on the candidate support set, so as to quickly and accurately locate the correct support and update the sparse coefficients. The experimental results of one-dimensional random signal reconstruction show that HGHTP needs fewer iterations than similar iterative hard thresholding algorithms while guaranteeing the reconstruction success rate. The two-dimensional image reconstruction experiments show that the reconstruction accuracy and anti-noise performance of HGHTP are better than those of similar iterative thresholding algorithms, and under the condition of ensuring reconstruction accuracy, HGHTP reduces the reconstruction time by more than 32% compared with similar algorithms.
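A minimal NumPy sketch of one HGHTP-style iteration as described above: build the candidate support from the union of the gradient-domain and conjugate-gradient-domain supports, then screen it by least squares. The conjugate-direction update, step handling and toy example are illustrative assumptions, not the authors' exact code.

```python
import numpy as np

def hghtp(y, A, s, iters=50):
    """One possible reading of a hybrid-gradient hard thresholding pursuit: the
    candidate support is the union of the largest entries along the gradient and
    along a conjugate-gradient direction, then a least-squares fit on that support
    keeps the s strongest coefficients."""
    _, n = A.shape
    top = lambda v: np.argsort(np.abs(v))[-s:]
    x, d, g_prev = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        g = A.T @ (y - A @ x)                                # (negative) gradient
        beta = (g @ g) / (g_prev @ g_prev) if g_prev.any() else 0.0
        d = g + beta * d                                     # Fletcher-Reeves style direction
        support = np.union1d(top(x + g), top(x + d))         # hybrid candidate support
        z = np.zeros(n)
        z[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)   # LS screening
        keep = top(z)                                        # keep the s largest after the fit
        x = np.zeros(n)
        x[keep], *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        g_prev = g
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / 8.0
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
x_hat = hghtp(A @ x_true, A, s=8)
```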
Fault detection for turboshaft engine based on local density weighted one-class SVM algorithm
HUANG Gong, ZHAO Yongping, XIE Yunlong
2020, 40(3): 917-924. DOI: 10.11772/j.issn.1001-9081.2019071309
An improved Weighted One-Class Support Vector Machine (WOCSVM) algorithm, Local Density WOCSVM (LD-WOCSVM), was proposed to solve the poor classification performance and weak robustness of data-based turboshaft engine fault detection algorithms. Firstly, for each training sample, the k nearest neighbor samples contained in a ball centered on this sample were selected, with the radius of the ball set to 2% of the Mahalanobis distance from the center of all training samples to the farthest sample. Secondly, the distance from this sample to the center of the selected k training samples was used to evaluate the probability that the sample is a fault sample, and the normalized distance was used to calculate the weight of the corresponding sample. To deal with the problem that current algorithms cannot reflect the characteristics of the sample distribution well, a weight calculation algorithm based on fast clustering, namely FCLD-WOCSVM, was further proposed. In this algorithm, the distribution position of each training sample was determined by two parameters, its local density and its distance to samples of higher local density, and the weight of the sample was calculated from these two parameters. The classification performance of both algorithms was improved by assigning small weights to possible fault samples. In order to verify the effectiveness of the two algorithms, simulation experiments were carried out on 4 UCI datasets and a T700 turboshaft engine respectively. The experimental results show that, compared with the Adaptive WOCSVM (A-WOCSVM) algorithm, LD-WOCSVM improves the AUC (Area Under the Curve) by 0.5%, and FCLD-WOCSVM improves the G-mean (Geometric mean) by 12.1%. The two algorithms can be used as candidate algorithms for turboshaft engine fault detection.
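A minimal NumPy/scikit-learn sketch of the LD-WOCSVM weighting step: each sample's distance to the centre of its in-ball nearest neighbours is normalized into a weight that can be passed to a weighted one-class SVM. The ball test, fallback and normalization are a simplified reading of the abstract, not the authors' exact formulas.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_density_weights(X, k=10, radius_ratio=0.02):
    """Weights for a weighted one-class SVM in the spirit of LD-WOCSVM: for each
    sample, look at its k nearest neighbours that fall inside a ball whose radius
    is a small fraction of the largest Mahalanobis distance to the data centre,
    and turn the distance to those neighbours' centre into a weight."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    maha = np.sqrt(np.einsum("ij,jk,ik->i", X - mu, cov_inv, X - mu))
    radius = radius_ratio * maha.max()

    dist, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    d_center = np.empty(len(X))
    for i in range(len(X)):
        inside = idx[i, 1:][dist[i, 1:] <= radius]        # neighbours inside the ball (self excluded)
        neigh = inside if len(inside) > 0 else idx[i, 1:]  # fall back to all k neighbours
        d_center[i] = np.linalg.norm(X[i] - X[neigh].mean(axis=0))
    # A larger distance to the local neighbourhood centre gives a smaller weight.
    return 1.0 - d_center / (d_center.max() + 1e-12)

X = np.random.default_rng(0).normal(size=(300, 6))
w = local_density_weights(X)    # can be passed as sample_weight to sklearn's OneClassSVM.fit
```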
Pulmonary nodule detection method with semantic feature score
ZHANG Zhancheng, ZHANG Dalong, LUO Xiaoqing
2020, 40(3): 925-930. DOI: 10.11772/j.issn.1001-9081.2019081335
Since existing intelligent pulmonary nodule detection algorithms only predict the positions of nodules and cannot give the semantic interpretations used by doctors in clinical diagnosis, such as lobulation, texture and spiculation, a pulmonary nodule detection method with semantic feature scores was proposed. Eight semantic features (subtlety, internal structure, lobulation, spiculation, margin, calcification, sphericity and texture) were embedded into the Region Proposal Network (RPN) of Faster R-CNN, a new anchor box mechanism was designed, and a fully connected network was added to realize the regression learning of the semantic features; the semantic scores were used as auxiliary information to realize joint learning of pulmonary nodule detection and semantic prediction within the Faster R-CNN training framework. The proposed method was evaluated on the LIDC/IDRI dataset. The results show that the accuracy of pulmonary nodule localization is 91.2%, and the accuracy, sensitivity and specificity of benign/malignant classification are 81%, 91.2% and 70.8% respectively. On the 8 semantic feature scores, the difference between doctors is 0.58±0.78 (mean absolute error±standard deviation), while the difference between the proposed method and the doctors is 0.62±1.03, which is close to the inter-doctor difference. These results demonstrate that the modified network achieves good detection accuracy and semantic feature prediction, and helps doctors understand and clinically interpret the machine prediction results.