
Table of Contents

    10 June 2021, Volume 41 Issue 6
    National Open Distributed and Parallel Computing Conference 2020 (DPCS 2020)
    Relay selection strategy for cache-aided full-duplex simultaneous wireless information and power transfer system
    SHI Anni, LI Taoshen, WANG Zhe, HE Lu
    2021, 41(6):  1539-1545.  DOI: 10.11772/j.issn.1001-9081.2020121930
    In order to improve the performance of the Simultaneous Wireless Information and Power Transfer (SWIPT) system, a new cache-aided full-duplex relay collaborative system model was constructed, and free Energy Access Points (EAPs) were considered as an extra energy supplement for the relay nodes in the system. For the system throughput optimization problem, a new SWIPT relay selection strategy based on power allocation cooperation was proposed. Firstly, a problem model was established on the basis of constraints such as communication service quality and source node transmit power. Secondly, the original nonlinear mixed integer programming problem was transformed into a pair of coupled optimization problems through mathematical transformation. Finally, the Karush-Kuhn-Tucker (KKT) condition was used to solve the inner optimization problem with the help of the Lagrange function, so that closed-form solutions of the power allocation factor and the relay transmit power were obtained, and the outer optimization problem was solved based on this result, so as to select the best relay for cooperative communication. The simulation results show that the free EAPs and the configuration of a cache for the relay are feasible and effective, and the proposed system is significantly better than traditional relay cooperative communication systems in terms of throughput gain.
    Clustered wireless federated learning algorithm in high-speed internet of vehicles scenes
    WANG Jiarui, TAN Guoping, ZHOU Siyuan
    2021, 41(6):  1546-1550.  DOI: 10.11772/j.issn.1001-9081.2020121912
    Existing wireless federated learning frameworks lack effective support for realistic distributed high-speed Internet of Vehicles (IoV) scenes. Aiming at the distributed learning problem in such scenes, a distributed training algorithm based on a random network topology model, named Clustered-Wireless Federated Learning Algorithm (C-WFLA), was proposed. In this algorithm, firstly, a network model was designed on the basis of the distribution of vehicles in the highway scene. Secondly, factors such as path fading and Rayleigh fading during the uplink data transmission of the users were considered. Finally, a wireless federated learning method based on clustered training was designed. The proposed algorithm was used to train and test a handwriting recognition model. The simulation results show that when the channel state is good and the user transmit power is loosely limited, the loss functions of the traditional wireless federated learning algorithm and C-WFLA converge to similar values under the same training conditions, but C-WFLA converges faster; when the channel state is poor and the user transmit power is strictly limited, C-WFLA can reduce the convergence value of the loss function by 10% to 50% compared with the traditional centralized algorithm. It can be seen that C-WFLA is more helpful for model training in high-speed IoV scenes.
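    The clustered aggregation idea behind C-WFLA can be illustrated with a small sketch: vehicles are grouped into clusters, models are averaged within each cluster first and then across clusters. The toy code below only approximates that two-stage averaging; the cluster assignment and equal weighting are assumptions, not the authors' implementation.

```python
import numpy as np

def cluster_average(client_weights, cluster_ids):
    """Average model parameters within each cluster, then across clusters.

    client_weights: list of 1-D numpy arrays (flattened model parameters);
    cluster_ids: one cluster label per client. Illustrative sketch only.
    """
    clusters = {}
    for w, c in zip(client_weights, cluster_ids):
        clusters.setdefault(c, []).append(w)
    # Stage 1: intra-cluster averaging (e.g. at a cluster-head vehicle).
    cluster_models = [np.mean(ws, axis=0) for ws in clusters.values()]
    # Stage 2: global averaging of the cluster models at the server.
    return np.mean(cluster_models, axis=0)

# Toy usage: 4 vehicles with 3-parameter models, grouped into 2 clusters.
weights = [np.array([1.0, 2.0, 3.0]), np.array([1.2, 2.2, 3.2]),
           np.array([0.8, 1.8, 2.8]), np.array([1.0, 2.0, 3.0])]
print(cluster_average(weights, cluster_ids=[0, 0, 1, 1]))
```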
    Online short video content distribution strategy based on federated learning
    DONG Wentao, LI Zhuo, CHEN Xin
    2021, 41(6):  1551-1556.  DOI: 10.11772/j.issn.1001-9081.2020121936
    To improve the accuracy of short video content distribution, the interest tendencies and personalized demands for short video content of the social groups that users belong to were analyzed, and in short video application scenarios based on active recommendation, a short video content distribution strategy was designed with the goal of maximizing the profit of video content providers. Firstly, based on federated learning, the interest prediction model was trained by using the local album data of the user group, a user group interest vector prediction algorithm was proposed, and the interest vector representation of the user group was obtained. Secondly, using the interest vector as input, the corresponding short video content distribution strategy was designed in real time based on the Combinatorial Upper Confidence Bound (CUCB) algorithm, so that the long-term profit obtained by the video content providers was maximized. The average profit obtained by the proposed strategy is relatively stable and significantly better than that obtained by the short video distribution strategy based only on CUCB; in terms of the total profit of video providers, the proposed strategy achieves increases of 12% and 30% over the Upper Confidence Bound (UCB) strategy and the random strategy respectively. Experimental results show that the proposed short video content distribution strategy can effectively improve the accuracy of short video distribution, so as to further increase the profit obtained by video content providers.
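    A minimal sketch of the combinatorial upper confidence bound selection step used to pick a set of short videos is shown below; the confidence-bound constant and the Bernoulli reward model are assumptions, and the paper's CUCB-with-interest-vector strategy is more elaborate.

```python
import numpy as np

def cucb_select(means, counts, t, k):
    """Select the k arms (short videos) with the largest UCB indices."""
    ucb = means + np.sqrt(3.0 * np.log(max(t, 2)) / (2.0 * np.maximum(counts, 1)))
    ucb[counts == 0] = np.inf            # force each unplayed arm to be tried once
    return np.argsort(-ucb)[:k]

rng = np.random.default_rng(0)
true_ctr = rng.uniform(0.1, 0.9, size=20)      # hypothetical per-video reward rates
means, counts = np.zeros(20), np.zeros(20)
for t in range(1, 1001):
    chosen = cucb_select(means, counts, t, k=3)
    rewards = rng.binomial(1, true_ctr[chosen])          # observed clicks
    counts[chosen] += 1
    means[chosen] += (rewards - means[chosen]) / counts[chosen]
print("top videos found:", np.argsort(-means)[:3], "true best:", np.argsort(-true_ctr)[:3])
```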
    Traffic mode recognition algorithm based on residual temporal attention neural network
    LIU Shize, ZHU Yida, CHEN Runze, LUO Haiyong, ZHAO Fang, SUN Yi, WANG Baohui
    2021, 41(6):  1557-1565.  DOI: 10.11772/j.issn.1001-9081.2020121953
    Traffic mode recognition is an important branch of user behavior recognition, the purpose of which is to identify the user's current traffic mode. Aiming at the demand of the modern intelligent urban transportation system to accurately perceive the user's traffic mode in the mobile device environment, a traffic mode recognition algorithm based on the residual temporal attention neural network was proposed. Firstly, the local features in the sensor time series were extracted through the residual network with strong local feature extraction ability. Then, the channel-based attention mechanism was used to recalibrate the different sensor features, with the recalibration focusing on the data heterogeneity of different sensors. Finally, the Temporal Convolutional Network (TCN) with a wider receptive field was used to extract the global features in the sensor time series. The data-rich High Technology Computer (HTC) traffic mode recognition dataset was used to evaluate the existing traffic mode recognition algorithms and the residual temporal attention model. Experimental results show that the proposed residual temporal attention model achieves an accuracy as high as 96.07% with a computational overhead friendly to mobile devices, and its precision and recall for every single class reach or exceed 90%, which verifies the accuracy and robustness of the proposed model. The proposed model can be applied to intelligent transportation, smart city and other domains as a traffic mode detection component supporting mobile intelligent terminals.
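    The channel-based recalibration step (weighting each sensor channel by a learned importance score before the temporal convolution) can be sketched as a squeeze-and-excitation style operation; the numpy version below with random weights only illustrates the mechanism, not the trained model.

```python
import numpy as np

def se_recalibrate(x, w1, w2):
    """Squeeze-and-excitation style recalibration of sensor channels.
    x: (channels, time) feature map; w1, w2: weights of the two small
    fully connected layers that produce one importance score per channel."""
    z = x.mean(axis=1)                        # squeeze: one statistic per channel
    h = np.maximum(w1 @ z, 0.0)               # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # FC + sigmoid -> channel weights in (0, 1)
    return x * s[:, None]                     # rescale each sensor channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((9, 128))          # e.g. 9 sensor channels, 128 time steps
w1 = rng.standard_normal((3, 9)); w2 = rng.standard_normal((9, 3))
print(se_recalibrate(feat, w1, w2).shape)     # (9, 128)
```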
    Traffic flow prediction algorithm based on deep residual long short-term memory network
    LIU Shize, QIN Yanjun, WANG Chenxing, SU Lin, KE Qixue, LUO Haiyong, SUN Yi, WANG Baohui
    2021, 41(6):  1566-1572.  DOI: 10.11772/j.issn.1001-9081.2020121928
    In the multi-step traffic flow prediction task, spatial-temporal feature extraction is not effective and the prediction accuracy of future traffic flow is low. In order to solve these problems, a fusion model combining the Long Short-Term Memory (LSTM) network, a convolutional residual network and an attention mechanism was proposed. Firstly, an encoder-decoder based architecture was used to mine the temporal domain features of different scales by adding the LSTM network into the encoder-decoder. Secondly, a convolutional residual network based on the Squeeze-and-Excitation (SE) attention block was constructed and embedded into the LSTM network structure to mine the spatial domain features of the traffic flow data. Finally, the hidden state information obtained from the encoder was input into the decoder to realize high-precision multi-step traffic flow prediction. Real traffic data was used for experimental testing and analysis. The results show that, compared with the original graph convolution-based model, the proposed model achieves decreases of 1.622 and 0.08 in Root Mean Square Error (RMSE) on the Beijing and New York public traffic flow datasets respectively. The proposed model can predict the traffic flow efficiently and accurately.
    Transportation mode recognition algorithm based on multi-scale feature extraction
    LIU Shize, QIN Yanjun, WANG Chenxing, GAO Cunyuan, LUO Haiyong, ZHAO Fang, WANG Baohui
    2021, 41(6):  1573-1580.  DOI: 10.11772/j.issn.1001-9081.2020121915
    Aiming at the problems of high power consumption and complex scenes faced by scene perception across common transportation modes, a new transportation mode detection algorithm combining the Residual Network (ResNet) and dilated convolution was proposed. Firstly, the 1D sensor data was converted into a 2D spectral image by using the Fast Fourier Transform (FFT). Then, the Principal Component Analysis (PCA) algorithm was used to downsample the spectral image. Finally, the ResNet was used to mine the local features of transportation modes, and the global features of transportation modes were mined with dilated convolution, so as to detect eight transportation modes. Experimental evaluation results show that, compared with 8 algorithms including decision tree, random forest and AlexNet, the algorithm combining ResNet and dilated convolution has the highest accuracy on the eight transportation modes including being static, walking and running, and the proposed algorithm has good recognition accuracy and robustness.
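    The preprocessing pipeline described above (1D sensor sequence, magnitude spectrum via FFT, PCA downsampling) can be sketched as follows; the frame length, hop size and number of PCA components are assumed values, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def sensor_to_spectral_image(signal, frame_len=64, hop=32):
    """Slice the 1D sensor sequence into frames and take the magnitude
    spectrum of each frame, yielding a 2D (frames x frequency bins) image."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

rng = np.random.default_rng(0)
spec = sensor_to_spectral_image(rng.standard_normal(4096))   # e.g. accelerometer trace
spec_small = PCA(n_components=8).fit_transform(spec)         # PCA downsampling
print(spec.shape, "->", spec_small.shape)
```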
    Deployment method of dockers in cluster for dynamic workload
    YIN Fei, LONG Lingli, KONG Zheng, SHAO Han, LI Xin, QIAN Zhuzhong
    2021, 41(6):  1581-1588.  DOI: 10.11772/j.issn.1001-9081.2020121913
    Aiming at the problem of frequent container migrations triggered by dynamic changes of the cluster workload, a container deployment method based on resource reservation was proposed. Firstly, a dynamic change description mechanism of single-container resource demand based on the Markov chain model was designed to describe the resource demand of a single container. Secondly, the dynamic change of multi-container resources was analyzed based on the single-container Markov chain model to describe the container resource demand state. Thirdly, a container deployment and resource reservation algorithm for dynamic workloads was proposed based on the multi-container Markov chain. Finally, the performance of the proposed algorithm was optimized based on the analysis of container resource demand characteristics. The simulation experimental environment was constructed based on a domestic software and hardware environment, and the simulation results show that, in terms of resource conflict rate, the proposed method performs close to the optimal peak allocation strategy named Resource with Peak (RP), while requiring significantly fewer hosts and dynamic container migrations; in terms of resource utilization rate, the proposed method uses slightly more hosts than the optimal valley allocation strategy named Resource with Valley (RV), but has fewer dynamic migrations and a lower resource conflict rate; compared with the peak and valley allocation strategy named Resource with Valley and Peak (RVP), the proposed method has better comprehensive performance.
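    As a rough illustration of using a Markov model of container resource demand for reservation (not the paper's multi-container algorithm), the sketch below computes the stationary distribution of an assumed demand-level transition matrix and reserves the smallest level that covers 95% of the stationary probability mass.

```python
import numpy as np

# Assumed resource-demand levels (e.g. CPU cores) and transition matrix.
levels = np.array([1, 2, 4])
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.4, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

# Reserve the smallest level whose cumulative stationary probability >= 0.95.
reserve = levels[np.searchsorted(np.cumsum(pi), 0.95)]
print("stationary distribution:", pi.round(3), "reserved level:", reserve)
```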
    Deep neural network compression algorithm based on combined dynamic pruning
    ZHANG Mingming, LU Qingning, LI Wenzhong, SONG Hu
    2021, 41(6):  1589-1596.  DOI: 10.11772/j.issn.1001-9081.2020121914
    As a branch of model compression, network pruning algorithms reduce the computational cost by removing unimportant parameters in the deep neural network. However, permanent pruning will cause irreversible loss of model capacity. Focusing on this issue, a combined dynamic pruning algorithm was proposed to comprehensively analyze the characteristics of the convolution kernels and the input images. Part of the convolution kernels were zeroized and allowed to be updated during the training process until the network converged, after which the zeroized kernels were permanently removed. At the same time, the input images were sampled to extract their features, then a channel importance prediction network was used to analyze these features to determine the channels able to be skipped during the convolution operation. Experimental results based on M-CifarNet and VGG16 show that combined dynamic pruning provides floating-point operation compression ratios of 2.11 and 1.99 respectively, with less than 0.8 percentage points and 1.2 percentage points of accuracy loss respectively compared to the benchmark models (M-CifarNet and VGG16). Compared with the existing network pruning algorithms, the combined dynamic pruning algorithm effectively reduces the floating-point operations (FLOPs) and the parameter scale of the model, and achieves higher accuracy under the same compression ratio.
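    The "zeroize but keep updatable" part of the dynamic pruning step can be sketched as follows: after each training step the lowest-norm convolution kernels are set to zero but stay in the tensor, so gradient updates can still revive them until convergence. The pruning rate and kernel shapes below are assumptions.

```python
import numpy as np

def soft_prune(kernels, rate):
    """Zeroize the lowest-norm convolution kernels without deleting them,
    so they can still be updated (and possibly recover) in later epochs."""
    norms = np.linalg.norm(kernels.reshape(kernels.shape[0], -1), axis=1)
    k = int(rate * len(norms))
    pruned = kernels.copy()
    if k > 0:
        pruned[np.argsort(norms)[:k]] = 0.0
    return pruned

kernels = np.random.default_rng(1).standard_normal((64, 3, 3, 3))  # (out, in, h, w)
zeroized = (soft_prune(kernels, 0.3) == 0).all(axis=(1, 2, 3)).sum()
print(zeroized, "of 64 kernels zeroized")
```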
    Intelligent recommendation method for lock mechanism in concurrent program
    ZHANG Yang, DONG Shicheng
    2021, 41(6):  1597-1603.  DOI: 10.11772/j.issn.1001-9081.2020121929
    Developers face the choice among multiple Java lock mechanisms during parallel programming. To solve the problem of how to choose the appropriate lock mechanism to improve program performance, a recommendation method named LockRec that helps developers of concurrent programs choose a lock mechanism was proposed. Firstly, program static analysis technology was used to analyze the use of lock mechanisms in concurrent programs and determine the program feature attributes that affect program performance. Then, an improved random forest algorithm was used to build a recommendation model of lock mechanisms, so as to help developers choose among the synchronization lock, re-entrant lock, read-write lock, and stamped lock. Four existing machine learning datasets were selected to experiment with LockRec. The average accuracy of the proposed LockRec is 95.1%. In addition, real-world concurrent programs were used to analyze the recommendation results of LockRec. The experimental results show that LockRec can effectively improve the execution efficiency of concurrent programs.
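    The recommendation step itself is a standard supervised classification over static program features; a toy sketch with a plain (not the paper's improved) random forest and hypothetical feature columns is shown below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical static feature vector per critical section:
# [read_ops, write_ops, critical_section_length, expected_thread_contention]
X = np.array([[90, 5, 12, 2],
              [10, 80, 40, 6],
              [50, 50, 8, 3],
              [95, 2, 30, 1],
              [30, 30, 60, 8]])
y = ["read_write_lock", "synchronization_lock", "stamped_lock",
     "read_write_lock", "reentrant_lock"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[85, 10, 15, 2]]))   # recommended lock for a new program
```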
    Reverse hybrid access control scheme based on object attribute matching in cloud computing environment
    GE Lina, HU Yugu, ZHANG Guifen, CHEN Yuanyuan
    2021, 41(6):  1604-1610.  DOI: 10.11772/j.issn.1001-9081.2020121954
    Cloud computing improves the efficiency of using, analyzing and managing big data, but also brings concerns about data security and private information disclosure of cloud services to data contributors. To solve this problem, combining the role-based access control and attribute-based access control methods and using the next generation access control architecture, a reverse hybrid access control method based on object attribute matching in the cloud computing environment was proposed. Firstly, the access right level of the shared file was set by the data contributor, and the minimum weight of the access object was reversely specified. Then, the weight of each attribute was directly calculated by using the coefficient of variation weighting method, and the process of policy rule matching in attribute-centered role-based access control was cancelled. Finally, the right value set by the data contributor for the data file was used as the threshold for a data visitor to be allowed access, which not only realized data access control, but also ensured the protection of private data. Experimental results show that, with the increase of the number of visits, the judgment criteria of the proposed method for malicious behaviors and insufficient-right behaviors tend to be stable, the detection ability of the method becomes stronger, and the success rate of the method tends to a relatively stable level. Compared with traditional access control methods, the proposed method can achieve higher decision-making efficiency in environments with a large number of user visits, which verifies the effectiveness and feasibility of the proposed method.
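    The attribute weighting step uses the coefficient of variation (standard deviation divided by mean) of each attribute across objects, so attributes that discriminate more receive larger weights; a minimal sketch with made-up attribute values:

```python
import numpy as np

def cov_weights(attribute_matrix):
    """Coefficient-of-variation weighting: attributes that vary more across
    objects get larger weights. Rows are objects, columns are attributes."""
    mean = attribute_matrix.mean(axis=0)
    std = attribute_matrix.std(axis=0)
    v = std / mean                      # coefficient of variation per attribute
    return v / v.sum()                  # normalize so the weights sum to 1

attrs = np.array([[0.9, 0.2, 0.5],
                  [0.8, 0.9, 0.5],
                  [0.1, 0.4, 0.5]])
print(cov_weights(attrs).round(3))      # the constant third attribute gets weight 0
```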
    Traceable and revocable ciphertext-policy attribute-based encryption scheme based on cloud-fog computing
    CHEN Jiahao, YIN Xinchun
    2021, 41(6):  1611-1620.  DOI: 10.11772/j.issn.1001-9081.2020121955
    Focusing on the large decryption overhead of resource-limited edge devices and the lack of effective user tracing and revocation in attribute-based encryption, a traceable and revocable Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme supporting cloud-fog computing was proposed. Firstly, through the introduction of fog nodes, ciphertext storage and outsourced decryption were able to be carried out on fog nodes near the users, which not only effectively protected users' private data, but also reduced the users' computing overhead. Then, in response to behaviors such as user permission changes and users intentionally or unintentionally leaking their own keys in the attribute-based encryption system, user tracing and revocation functions were added. Finally, after the identity of a malicious user with the above behaviors was traced through the algorithm, the user would be added to the revocation list, so that the user's access right was revoked. The performance analysis shows that the decryption overhead at the user end is reduced to one multiplication and one exponentiation operation, which can save large bandwidth and decryption time for users; at the same time, the proposed scheme supports the tracing and revocation of malicious users. Therefore, the proposed scheme is suitable for data sharing of devices with limited computing resources in the cloud-fog environment.
    Lightweight anonymous mutual authentication protocol based on random operators for radio frequency identification system
    WU Kaifan, YIN Xinchun
    2021, 41(6):  1621-1630.  DOI: 10.11772/j.issn.1001-9081.2020121947
    The Radio Frequency Identification (RFID) system is vulnerable to malicious attacks in the wireless channel and the privacy of tag owners is often violated. In order to solve these problems, a lightweight RFID authentication protocol supporting anonymity was proposed. Firstly, a random number generator was used to generate an unpredictable sequence for specifying the lightweight operators participating in the protocol. Then, the seed was specified to achieve key negotiation between the reader and the tag. Finally, mutual authentication and information updating were achieved. The comparison with some representative lightweight schemes shows that the proposed scheme reduces the tag storage overhead by up to 42% compared with similar lightweight protocols, keeps the communication overhead at the low level of similar schemes, and is able to support multiple security requirements. The proposed scheme is suitable for low-cost RFID systems.
    Artificial intelligence
    Sequential multimodal sentiment analysis model based on multi-task learning
    ZHANG Sun, YIN Chunyong
    2021, 41(6):  1631-1639.  DOI: 10.11772/j.issn.1001-9081.2020091416
    Considering the issues of unimodal feature representation and cross-modal feature fusion in sequential multimodal sentiment analysis, a multi-task learning based sentiment analysis model was proposed by combining with the multi-head attention mechanism. Firstly, the Convolutional Neural Network (CNN), Bidirectional Gated Recurrent Unit (BiGRU) and Multi-Head Self-Attention (MHSA) were used to realize the sequential unimodal feature representation. Secondly, the bidirectional cross-modal information was fused by multi-head attention. Finally, based on multi-task learning, sentiment polarity classification and sentiment intensity regression were added as auxiliary tasks to improve the comprehensive performance of the main task of sentiment score regression. Experimental results demonstrate that the proposed model improves the accuracy of binary classification by 7.8 percentage points and 3.1 percentage points respectively on the CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) and CMU Multimodal Opinion level Sentiment Intensity (CMU-MOSI) datasets compared with the multimodal factorization model. Therefore, the proposed model is applicable to sentiment analysis problems in multimodal scenarios, and can provide decision support for product recommendation, stock market forecasting, public opinion monitoring and other relevant applications.
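    A minimal, single-head stand-in for the multi-head cross-modal fusion (one modality's features attend over another's) is sketched below with random features; the real model uses learned projections and multiple heads.

```python
import numpy as np

def cross_modal_attention(q_feats, kv_feats):
    """Scaled dot-product attention where one modality (queries) attends to
    another modality (keys/values); a simplified stand-in for multi-head fusion."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ kv_feats

text = np.random.default_rng(0).standard_normal((10, 32))    # 10 text time steps
audio = np.random.default_rng(1).standard_normal((20, 32))   # 20 audio time steps
print(cross_modal_attention(text, audio).shape)              # (10, 32)
```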
    Image captioning algorithm based on multi-feature extraction
    ZHAO Xiaohu, LI Xiao
    2021, 41(6):  1640-1646.  DOI: 10.11772/j.issn.1001-9081.2020091439
    In image captioning methods, image feature information is not completely extracted and the Recurrent Neural Network (RNN) suffers from vanishing gradients. In order to solve these problems, a new image captioning algorithm based on multi-feature extraction was proposed. The constructed model consisted of three parts: a Convolutional Neural Network (CNN) used for image feature extraction, an ATTribute extraction model (ATT) used for image attribute extraction, and a Bidirectional Long Short-Term Memory (Bi-LSTM) network used for word prediction. In the constructed model, image representation was enhanced by extracting image attribute information, so as to accurately describe the things in the image, and Bi-LSTM was used to capture bidirectional semantic dependency, so that long-term visual-language interaction learning was carried out. Firstly, CNN and ATT were used to extract the global image features and image attribute features respectively. Then, the two kinds of feature information were input into Bi-LSTM to generate sentences that were able to reflect the image content. Finally, the effectiveness of the proposed method was validated on the Microsoft COCO Caption, Flickr8k and Flickr30k datasets. Experimental results show that, compared with the multimodal Recurrent Neural Network (m-RNN) method, the proposed algorithm improves the description performance by 6.8-11.6 percentage points. The proposed algorithm can effectively improve the semantic description performance of the constructed model for images.
    Application of Transformer optimized by pointer generator network and coverage loss in field of abstractive text summarization
    LI Xiang, WANG Weibing, SHANG Xueda
    2021, 41(6):  1647-1651.  DOI: 10.11772/j.issn.1001-9081.2020091375
    Aiming at the application scenario of abstractive text summarization, a Transformer-based summarization model with the Pointer Generator network and Coverage Loss added to the Transformer model for optimization was proposed. First, the Transformer model was taken as the basic structure, and its attention mechanism was used to better capture the semantic information of the context. Then, the Coverage Loss was introduced into the loss function of the model to penalize the distribution and coverage of repeated words, so as to solve the problem that the attention mechanism in the Transformer model continuously generates the same word in abstractive tasks. Finally, the Pointer Generator network was added to the model, which allowed the model to copy words from the source text as generated words to solve the Out Of Vocabulary (OOV) problem. Whether the improved model reduced inaccurate expressions and whether the phenomenon of repeated occurrence of the same word was solved were explored. Compared with the original Transformer model, the improved model improved the score on the ROUGE-1 evaluation function by 1.98 percentage points, the score on the ROUGE-2 evaluation function by 0.95 percentage points, and the score on the ROUGE-L evaluation function by 2.27 percentage points, and improved the readability and accuracy of the summarization results. Experimental results show that the Transformer can be applied to the field of abstractive text summarization after adding the Coverage Loss and the Pointer Generator network.
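    The coverage penalty follows the usual pointer-generator formulation: a coverage vector accumulates past attention over the source tokens, and the loss at each decoding step sums the element-wise minimum of the current attention and the coverage. A small sketch with a toy attention matrix (not the trained model):

```python
import numpy as np

def coverage_loss(attn):
    """attn: (decoder_steps, source_len) attention distributions.
    Penalises re-attending to source positions that are already covered."""
    coverage = np.zeros(attn.shape[1])
    loss = 0.0
    for a_t in attn:
        loss += np.minimum(a_t, coverage).sum()   # overlap with past attention
        coverage += a_t                           # update coverage
    return loss

repetitive = np.tile([0.7, 0.2, 0.1, 0.0], (4, 1))   # keeps attending to token 0
spread = np.eye(4)                                    # attends to a new token each step
print(coverage_loss(repetitive), ">", coverage_loss(spread))
```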
    Chinese-Vietnamese pseudo-parallel corpus generation based on monolingual language model
    JIA Chengxun, LAI Hua, YU Zhengtao, WEN Yonghua, YU Zhiqiang
    2021, 41(6):  1652-1658.  DOI: 10.11772/j.issn.1001-9081.2020071017
    Neural machine translation achieves good translation results on resource-rich languages, but due to data scarcity, it performs poorly on low-resource language pairs such as Chinese-Vietnamese. At present, one of the most effective ways to alleviate this problem is to use existing resources to generate pseudo-parallel data. Considering the availability of monolingual data, based on the back-translation method, firstly the language model trained by a large amount of monolingual data was fused with the neural machine translation model. Then, the language features were integrated into the language model in the back-translation process to generate more standardized and better quality pseudo-parallel data. Finally, the generated corpus was added to the original small-scale corpus to train the final translation model. Experimental results on the Chinese-Vietnamese translation tasks show that compared with the ordinary back-translation methods, the Chinese-Vietnamese neural machine translation has the BiLingual Evaluation Understudy (BLEU) value improved by 1.41 percentage points by fusing the pseudo-parallel data generated by the language model.
    Multi-face foreground extraction method based on skin color learning
    DAI Yanran, DAI Guoqing, YUAN Yubo
    2021, 41(6):  1659-1666.  DOI: 10.11772/j.issn.1001-9081.2020091397
    To solve the problem of quickly and accurately extracting face content in multi-face scenes, a multi-face foreground extraction method based on skin color learning was proposed. Firstly, a skin color foreground segmentation model based on skin color learning was given. According to the published results of skin color experts, 1 200 faces from the well-known SPA database were collected for skin color sampling. The learning model was established to obtain the skin color parameters of each race in the color space, and the skin color image was segmented according to these parameters to obtain the skin color foreground. Secondly, the face seed areas were segmented by using a facial feature point learning algorithm and the skin color foreground information, with 68 common facial feature points as the target. The centers of the faces were then calculated to construct the elliptical boundary model of the faces and determine the genetic range. Finally, an effective extraction algorithm was established, and the genetic mechanism was used within the elliptical boundaries of the faces to regenerate the faces, so that the effective face areas were extracted. Based on three different databases, 100 representative multi-face images were collected. Experimental results show that the accuracy of the multi-face extraction results of the proposed method is up to 98.4%, and the proposed method has a significant effect on the face content extraction of medium-density crowds and provides a basis for the accuracy and usability of face recognition algorithms.
    Multi-angle head pose estimation method based on optimized LeNet-5 network
    ZHANG Hui, ZHANG Nana, HUANG Jun
    2021, 41(6):  1667-1672.  DOI: 10.11772/j.issn.1001-9081.2020091427
    In order to solve the problems that the accuracy is low or the head pose estimation cannot be performed by traditional head pose estimation methods when the key feature points of the face cannot be located due to partial occlusion or too large angle, a multi-angle head pose estimation method based on optimized LeNet-5 network was proposed. Firstly, the depth, the size of the convolution kernel and other parameters of the Convolutional Neural Network (CNN) were optimized to better capture the global features of the image. Then, the pooling layers were improved, and a convolutional operation was used to replace the pooling operation to increase the nonlinear ability of the network. Finally, the AdaBound optimizer was introduced, and the Softmax regression model was used to perform the pose classification training. During the training, hair occlusion, exaggerated expressions and wearing glasses were added to the self-built dataset to increase the generalization ability of the network. Experimental results show that, the proposed method can realize the head pose estimation under multi-angle rotations, such as head up, head down and head tilting without locating key facial feature points, under the occlusion of light, shadow and hair, with the accuracy of 98.7% on Pointing04 public dataset and CAS-PEAL-R1 public dataset, and the average running speed of 22-29 frames per second.
    Wind turbine fault sampling algorithm based on improved BSMOTE and sequential characteristics
    YANG Xian, ZHAO Jisheng, QIANG Baohua, MI Luzhong, PENG Bo, TANG Chenghua, LI Baolian
    2021, 41(6):  1673-1678.  DOI: 10.11772/j.issn.1001-9081.2020091384
    To solve the imbalance problem of wind turbine dataset, a Borderline Synthetic Minority Oversampling Technique-Sequence (BSMOTE-Sequence) sampling algorithm was proposed. In the algorithm, when synthesizing new samples, the space and time characteristics were considered comprehensively, and the new samples were cleaned, so as to effectively reduce the generation of noise points. Firstly, the minority class samples were divided into security class samples, boundary class samples and noise class samples according to the class proportion of the nearest neighbor samples of each minority class sample. Secondly, for each boundary class sample, the minority class sample set with the closest spatial distance and time span was selected, the new samples were synthesized by linear interpolation method, and the noise class samples and the overlapping samples between classes were filtered out. Finally, Support Vector Machine (SVM), Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) were used as the fault detection models of wind turbine gear box, and F1-Score, Area Under Curve (AUC) and G-mean were used as performance evaluation indices of the models, and the proposed algorithm was compared with other sampling algorithms on real wind turbine datasets. Experimental results show that, compared with those of the existing algorithms, the classification effect of the samples generated by BSMOTE-Sequence algorithm is better with an average increase of 3% in F1-Score, AUC and G-mean of the detection models. The proposed algorithm can be effectively applicable to the field of wind turbine fault detection where the data with sequential rule is imbalanced.
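    The core synthesis step (pick, for a borderline minority sample, the minority neighbour that is close in both feature space and time, then linearly interpolate) can be sketched as below; the distance/time trade-off weight is an assumption.

```python
import numpy as np

def synthesize(x_border, t_border, X_min, T_min, alpha=0.7, rng=None):
    """Generate one synthetic minority sample from a borderline sample.
    Candidate neighbours are ranked by a weighted mix of spatial distance
    and time span; the chosen one is used for SMOTE-style interpolation."""
    rng = rng if rng is not None else np.random.default_rng()
    d_space = np.linalg.norm(X_min - x_border, axis=1)
    d_time = np.abs(T_min - t_border)
    score = (alpha * d_space / (d_space.max() + 1e-9)
             + (1 - alpha) * d_time / (d_time.max() + 1e-9))
    neighbor = X_min[np.argmin(score)]
    lam = rng.uniform()                              # linear interpolation factor
    return x_border + lam * (neighbor - x_border)

rng = np.random.default_rng(0)
X_min, T_min = rng.standard_normal((30, 5)), np.sort(rng.uniform(0, 100, 30))
print(synthesize(X_min[0] + 0.1, T_min[0], X_min[1:], T_min[1:], rng=rng))
```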
    Data science and technology
    Extended isolation forest algorithm based on random subspace
    XIE Yu, JIANG Yu, LONG Chaoqi
    2021, 41(6):  1679-1685.  DOI: 10.11772/j.issn.1001-9081.2020091436
    Aiming at the problem of the excessive time overhead of the Extended Isolation Forest (EIF) algorithm, a new algorithm named Extended Isolation Forest based on Random Subspace (RS-EIF) was proposed. Firstly, multiple random subspaces were determined in the original data space. Then, in each random subspace, the extended isolation tree was constructed by calculating the intercept vector and slope of each node, and multiple extended isolation trees were integrated into a subspace extended isolation forest. Finally, the average traversal depth of each data point in the extended isolation forest was calculated to determine whether the data point was abnormal. Experimental results on 9 real datasets from the Outlier Detection DataSets (ODDS) and 7 synthetic datasets with multivariate distributions show that the RS-EIF algorithm is sensitive to local anomalies and reduces the time overhead by about 60% compared with the EIF algorithm; on the ODDS datasets with many samples, its recognition accuracy is 2 percentage points to 12 percentage points higher than those of the isolation Forest (iForest) algorithm, Lightweight On-line Detection of Anomalies (LODA) algorithm and COPula-based Outlier Detection (COPOD) algorithm. The RS-EIF algorithm has higher recognition efficiency on datasets with a large number of samples.
    Bichromatic reverse k nearest neighbor query method based on distance-keyword similarity constraint
    ZHANG Hao, ZHU Rui, SONG Fuyao, FANG Peng, XIA Xiufeng
    2021, 41(6):  1686-1693.  DOI: 10.11772/j.issn.1001-9081.2020091453
    In order to solve the problem of low quality of results returned by spatial keyword bichromatic reverse k nearest neighbor query, a bichromatic reverse k nearest neighbor query method based on distance-keyword similarity constraint was proposed. Firstly, a threshold was set to filter out the low-quality users in the query results, so that the existence of users with relatively long spatial distance in the query results was avoided and the quality of the query results was ensured. Then, in order to support this query, an index of Keyword Multiresolution Grid rectangle-tree (KMG-tree) was proposed to manage the data. Finally, the Six-region-optimize algorithm based on Six-region algorithm was proposed to improve the query processing efficiency. The query efficiency of the Six-region-optimize algorithm was about 85.71% and 23.45% on average higher than those of the baseline and Six-region algorithms respectively. Experimental test and analysis were carried out based on real spatio-temporal data. The experimental results verify the effectiveness and high efficiency of the Six-region-optimize algorithm.
    Parameter independent weighted local mean-based pseudo nearest neighbor classification algorithm
    CAI Ruiguang, ZHANG Desheng, XIAO Yanting
    2021, 41(6):  1694-1700.  DOI: 10.11772/j.issn.1001-9081.2020091370
    Aiming at the problem that the Local Mean-based Pseudo Nearest Neighbor (LMPNN) algorithm is sensitive to the value of k and ignores the different influences of different attributes on the classification results, a Parameter Independent Weighted Local Mean-based Pseudo Nearest Neighbor classification (PIW-LMPNN) algorithm was proposed. Firstly, the Success-History based parameter Adaptation for Differential Evolution (SHADE) algorithm, the latest variant of the differential evolution algorithm, was used to optimize on the training set samples to obtain the best k value and a set of best weights related to the classes. Secondly, when calculating the distance between samples, different weights were assigned to the different attributes of different classes, and the test set samples were classified. Finally, simulations were performed on 15 real datasets and the proposed algorithm was compared with eight other classification algorithms. The results show that the proposed algorithm has the classification accuracy and F1 value increased by about 28 percentage points and 23.1 percentage points respectively. At the same time, the comparison results of the Wilcoxon signed-rank test, the Friedman rank variance test and the Hollander-Wolfe pairwise comparison show that the proposed improved algorithm outperforms the other eight classification algorithms in terms of classification accuracy and k value selection.
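    A simplified version of the weighted local mean-based pseudo nearest neighbour rule (class-specific attribute weights, local means of the first 1..k neighbours, distance-weighted pseudo-nearest score) is sketched below; in the paper the weights and k come from SHADE, here they are fixed by hand.

```python
import numpy as np

def piw_lmpnn_predict(x, X, y, k, attr_w):
    """attr_w maps each class label to a per-attribute weight vector."""
    best_cls, best_score = None, np.inf
    for cls in np.unique(y):
        Xc, w = X[y == cls], attr_w[int(cls)]
        d = np.sqrt((((Xc - x) ** 2) * w).sum(axis=1))       # weighted distances
        nn = Xc[np.argsort(d)[:k]]                           # k nearest of this class
        means = np.cumsum(nn, axis=0) / np.arange(1, k + 1)[:, None]  # local means
        score = sum((1.0 / (i + 1)) * np.sqrt((((m - x) ** 2) * w).sum())
                    for i, m in enumerate(means))            # pseudo nearest neighbour score
        if score < best_score:
            best_cls, best_score = cls, score
    return best_cls

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
weights = {0: np.array([1.0, 0.5, 0.2]), 1: np.array([0.8, 0.8, 0.4])}
print(piw_lmpnn_predict(np.array([2.8, 3.1, 2.9]), X, y, k=5, attr_w=weights))  # -> 1
```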
    Cyber security
    Anomaly detection method based on multi-task temporal convolutional network in cloud workflow
    YAO Jie, CHENG Chunling, HAN Jing, LIU Zheng
    2021, 41(6):  1701-1708.  DOI: 10.11772/j.issn.1001-9081.2020091383
    Numerous logs generated during the daily deployment and operation process in cloud computing platforms help system administrators perform anomaly detection. Common anomalies in cloud workflow include pathway anomalies and time delay anomalies. Traditional anomaly detection methods train the learning models corresponding to the two kinds of anomaly detection tasks respectively and ignore the correlation between these two tasks, which leads to the decline of the accuracy of anomaly detection. In order to solve the problems, an anomaly detection method based on multi-task temporal convolutional network was proposed. Firstly, the event sequence and time sequence were generated based on the event templates of log stream. Then, the deep learning model based on the multi-task temporal convolutional network was trained. In the model, the event and the time characteristics were learnt in parallel from the normal system execution processes by sharing the shallow layers of the temporal convolutional network. Finally, the anomalies in the cloud computing workflow were analyzed, and the related anomaly detection logic was designed. Experimental results on the OpenStack dataset demonstrate that the proposed method improves the anomaly detection accuracy by at least 7.7 percentage points compared to the state-of-the-art log anomaly detection algorithm DeepLog and the method based on Principal Component Analysis (PCA).
    Oversampling method for intrusion detection based on clustering and instance hardness
    WANG Yao, SUN Guozi
    2021, 41(6):  1709-1714.  DOI: 10.11772/j.issn.1001-9081.2020091378
    Aiming at the problem of low detection efficiency of intrusion detection models due to the imbalance of network traffic data, a new Clustering and instance Hardness-based Oversampling method for intrusion detection (CHO) was proposed. Firstly, the hardness values of the minority data were measured as input by calculating the proportion of the majority class samples in the neighbors of minority class samples. Secondly, the Canopy clustering approach was used to pre-cluster the minority data, and the obtained cluster values were taken as the clustering parameter of K-means++ clustering approach to cluster again. Then, the average hardness and the standard deviation of different clusters were calculated, and the former was taken as the "investigation cost" in the optimum allocation theory of statistics, and the amount of data to be generated in each cluster was determined by this theory. Finally, the "safe" regions in the clusters were further identified according to the hardness values, and the specified amount of data was generated in the safe regions in the clusters by using the interpolation method. The comparative experiment was carried out on 6 open intrusion detection datasets. The proposed method achieves the optimal values of 1.33 on both Area Under Curve (AUC) and Geometric mean (G-mean), and has the AUC increased by 1.6 percentage points on average compared to Synthetic Minority Oversampling TEchnique (SMOTE) on 4 of the 6 datasets. The experimental results show that the proposed method can be well applied to imbalance problems in intrusion detection.
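    The hardness measure used as input above, that is, the share of majority-class samples among a minority sample's nearest neighbours, can be sketched as follows:

```python
import numpy as np

def instance_hardness(X_min, X_maj, k=5):
    """Hardness of each minority sample = fraction of its k nearest
    neighbours (over the whole dataset) belonging to the majority class."""
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.r_[np.zeros(len(X_min)), np.ones(len(X_maj))]
    hardness = []
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        d[i] = np.inf                               # do not count the sample itself
        hardness.append(is_maj[np.argsort(d)[:k]].mean())
    return np.array(hardness)

rng = np.random.default_rng(0)
X_min = rng.normal(0.0, 1.0, (15, 4))               # minority (e.g. attack) samples
X_maj = rng.normal(0.5, 1.0, (200, 4))              # majority (normal traffic) samples
print(instance_hardness(X_min, X_maj).round(2))
```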
    Dynamic group based effective identity authentication and key agreement scheme in LTE-A networks
    DU Xinyu, WANG Huaqun
    2021, 41(6):  1715-1722.  DOI: 10.11772/j.issn.1001-9081.2020091428
    As one of the communication methods in future mobile communications, Machine Type Communication (MTC) is an important mobile communication method in the Internet of Things (IoT). When many MTC devices want to access the network at the same time, each MTC device needs to perform independent identity authentication, which will cause network congestion. In order to solve this problem and improve the security of key agreement of MTC devices, a dynamic group based effective identity authentication and key agreement scheme was proposed for Long Term Evolution-Advanced (LTE-A) networks. Based on symmetric bivariate polynomials, the proposed scheme was able to authenticate a large number of MTC devices at the same time and establish independent session keys between the devices and the network. In the proposed scheme, multiple group authentications were supported, and the updating of access policies was provided. Compared with the scheme based on linear polynomials, bandwidth analysis shows that the bandwidth consumptions of the proposed scheme during transmission are optimized: the transmission bandwidth between the MTC devices in the Home Network (HN) and the Service Network (SN) is reduced by 132 bits for each group authentication, and the transmission bandwidth between the MTC devices within the HN is reduced by 18.2%. Security analysis and experimental results show that the proposed scheme is safe in actual identity authentication and session key establishment, and can effectively avoid signaling congestion in the network.
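    The symmetric bivariate polynomial trick behind the group key agreement is that with f(x, y) = f(y, x), a device holding the share f(u, .) and a network node holding f(v, .) both evaluate to the same value f(u, v), which can serve as the session key. A toy sketch over a prime field (the parameters are illustrative, not the scheme's):

```python
import numpy as np

p = 2**61 - 1                      # illustrative prime modulus
t = 3                              # polynomial degree
rng = np.random.default_rng(42)
A = rng.integers(0, p, size=(t + 1, t + 1))
A = ((A + A.T) % p).tolist()       # symmetric coefficients => f(x, y) = f(y, x)

def f(x, y):
    """Evaluate f(x, y) = sum_{i,j} A[i][j] * x^i * y^j mod p."""
    return sum(A[i][j] * pow(x, i, p) * pow(y, j, p)
               for i in range(t + 1) for j in range(t + 1)) % p

device_id, network_id = 1001, 7
# The device evaluates its share at the network's id and vice versa:
print(f(device_id, network_id) == f(network_id, device_id))   # True: same session key
```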
    Optimized CKKS scheme based on learning with errors problem
    ZHENG Shangwen, LIU Yao, ZHOU Tanping, YANG Xiaoyuan
    2021, 41(6):  1723-1728.  DOI: 10.11772/j.issn.1001-9081.2020091447
    Focused on the issue that the CKKS (Cheon-Kim-Kim-Song) homomorphic encryption scheme based on the Learning With Errors (LWE) problem has large ciphertext, complicated calculation key generation and low homomorphic calculation efficiency in the encrypted data calculation, an optimized scheme of LWE type CKKS was proposed through the method of bit discarding and homomorphic calculation key reorganization. Firstly, the size of the ciphertext in the homomorphic multiplication process was reduced by discarding part of the low-order bits of the ciphertext vector and part of the low-order bits of the ciphertext tensor product in the homomorphic multiplication. Secondly, the method of bit discarding was used to reorganize and optimize the homomorphic calculation key, so as to remove the irrelevant extension items in powersof2 during the key exchange procedure and reduce the scale of the calculation key as well as the noise increase in the process of homomorphic multiplication. On the basis of ensuring the security of the original scheme, the proposed optimized scheme makes the dimension of the calculation key reduced, and the computational complexity of the homomorphic multiplication reduced. The analysis results show that the proposed optimized scheme reduces the computational complexity of the homomorphic calculation and calculation key generation process to a certain extent, so as to reduce the storage overhead and improve the efficiency of the homomorphic multiplication operation.
    Scrambling and hiding algorithm of streaming media image information based on state view
    YANG Panpan, ZHAO Jichun
    2021, 41(6):  1729-1733.  DOI: 10.11772/j.issn.1001-9081.2020091422
    Aiming at the information security risks of streaming media images, a new scrambling and hiding algorithm for streaming media image information based on state view was proposed. Firstly, the streaming media image enhancement algorithm based on Neighborhood Limited Empirical Mode Decomposition (NLEMD) was used to enhance the streaming media image and highlight the details of the streaming media image, so as to realize the effect of streaming media image enhancement. Then, the efficient encoding and decoding algorithm based on state view was used to encode and decode the streaming media image information, so that the streaming media image information was scrambled and hidden. Experimental results show that, the proposed algorithm can effectively and comprehensively scramble and hide plant and text streaming media image information, and it can significantly enhance the streaming media images. In the scrambling and hiding of streaming media image information, the scrambling and hiding degree of the proposed algorithm is higher than 95%, which indicates that the proposed algorithm can protect the security of streaming media image information.
    Advanced computing
    Parallel design and implementation of synthetic view distortion change algorithm in reconfigurable structure
    JIANG Lin, SHI Jiaqi, LI Yuancheng
    2021, 41(6):  1734-1740.  DOI: 10.11772/j.issn.1001-9081.2020091462
    Focused on the high computational time complexity of the depth map based Synthesized View Distortion Change (SVDC) algorithm in 3D High Efficiency Video Coding (3D-HEVC), a new parallelization method of SVDC algorithm based on hybrid granularity was proposed under the reconfigurable array structure. Firstly, the SVDC algorithm was divided into two parts:Virtual View Synthesis (VVS) and distortion value calculation. Secondly, the VVS part was accelerated by pipeline operation, and the distortion value calculation part was accelerated by dividing into two levels:task level, which means dividing the synthesized image according to pixels, and instruction level, that is dividing the distortion values inside the pixel by the calculation process. Finally, a reconfigurable mechanism was used to parallelize the VVS part and distortion value calculation part. Theoretical analysis and hardware simulation results show that, in terms of execution time, the proposed method has the speedup ratio of 2.11 with 4 Process Elements (PEs). Compared with the SVDC algorithms based on Low Level Virtual Machine (LLVM) and Open Multi-Processing (OpenMP), the proposed method has the calculation time reduced by 18.56% and 21.93% respectively. It can be seen that the proposed method can mine the parallelism of the SVDC algorithm, and effectively shorten the execution time of the SVDC algorithm by combining with the characteristics of the reconfigurable array structure.
    Communication coverage reduction method of parallel programs based on dominant relation
    ZHANG Chen, TIAN Tian, YANG Xiuting, GONG Dunwei
    2021, 41(6):  1741-1747.  DOI: 10.11772/j.issn.1001-9081.2020091369
    The increase of communication scale and non-deterministic communication make the communication test of Message-Passing Interface (MPI) parallel programs more difficult. In order to solve the problems, a new method of reducing communication coverage based on dominant relation was proposed. Firstly, based on the correspondence between communications and communication statements, the reduction problem of communications was converted into a reduction problem of communication statements. Then, the dominant relation of statements was used to solve the reduction set of communication statement set. Finally, the communications related to the reduction set were selected as the targets to be covered, so that the test data covering these targets covered all the communications. The proposed method was applied to 7 typical programs under test. Experimental results show that, compared with the test data generation method with all communication as coverage targets, the proposed method can reduce the generation time of test data by up to 95% without reducing the coverage rate of communications, indicating that the proposed method can improve the generation efficiency of communication coverage test data.
    Self-organized migrating algorithm for multi-task optimization with information filtering
    CHENG Meiying, QIAN Qian, NI Zhiwei, ZHU Xuhui
    2021, 41(6):  1748-1755.  DOI: 10.11772/j.issn.1001-9081.2020091390
    The Self-Organized Migrating Algorithm (SOMA) can only solve a single task, and the "implicit parallelism" of SOMA is not fully exploited. Aiming at these shortcomings, a new Self-Organized Migrating Algorithm for Multi-task optimization with Information Filtering (SOMAMIF) was proposed to solve multiple tasks concurrently. Firstly, the multi-task uniform search space was constructed, and the subpopulations were set according to the number of tasks. Secondly, the current optimal fitness of each subpopulation was judged, and an information transfer need was generated when the evolution of a task stagnated for a number of successive generations. Thirdly, the useful information was chosen from the remaining subpopulations and the useless information was filtered according to a probability, so as to ensure positive transfer and readjust the population structure at the same time. Finally, the time complexity and space complexity of SOMAMIF were analyzed. Experimental results show that SOMAMIF converges rapidly to the global optimal solution 0 when solving multiple high-dimensional function problems simultaneously; and the average classification accuracies obtained on two datasets by SOMAMIF combined with the fractal technology to extract the key home-returning constraints of college students with different census registers increase by 0.348 66 percentage points and 0.598 57 percentage points respectively compared with those on the original datasets.
    Maximum common induced subgraph algorithm based on vertex conflict learning
    WANG Yu, LIU Yanli, CHEN Shaowu
    2021, 41(6):  1756-1760.  DOI: 10.11772/j.issn.1001-9081.2020091381
    The traditional branching strategies of Maximum Common induced Subgraph (MCS) problem rely on the static properties of graphs and lack learning information about historical searches. In order to solve these problems, a branching strategy based on vertex conflict learning was proposed. Firstly, the reduction value of the upper bound was used as the reward to the branch node for completing a matching action. Secondly, when the optimal solution was updated, the optimal solution obtained actually was the result of continuous inference of the branch nodes. Therefore, the appropriate rewards were given to the branch nodes on the complete search path to strengthen the positive effect of these vertices on search. Finally, the value function of matching action was designed, and the vertices with the maximum cumulative rewards would be selected as new branch nodes. On the basis of Maximum common induced subgraph Split (McSplit) algorithm, an improved McSplit Reinforcement Learning and Routing (McSplitRLR) algorithm combined with the new branching strategy was completed. Experimental results show that, with the same computer and solution time limit, excluding the simple instances solved by all comparison algorithms within 10 seconds, compared to the state-of-the-art algorithms of McSplit and McSplit Solution-Biased Search (McSplitSBS), McSplitRLR solves 109 and 33 more hard instances respectively, and the solution rate increases by 5.6% and 1.6% respectively.
    Multimedia computing and computer simulation
    Image single distortion type judgment method based on two-channel convolutional neural network
    YAN Junhua, HOU Ping, ZHANG Yin, LYU Xiangyang, MA Yue, WANG Gaofei
    2021, 41(6):  1761-1766.  DOI: 10.11772/j.issn.1001-9081.2020091362
    In order to solve the problem of the low accuracy of some distortion type judgments by image single distortion type judgment algorithms, an image single distortion type judgment method based on a two-channel Convolutional Neural Network (CNN) was proposed. Firstly, fixed size image blocks were obtained by cropping the image, and the high-frequency information map was obtained by applying the Haar wavelet transform to each image block. Then, the image block and the corresponding high-frequency information map were respectively input into the convolutional layers of different channels to extract the deep feature maps, and the deep features were fused and input into the fully connected layer. Finally, the values of the last fully connected layer were input into the Softmax classifier to obtain the probability distribution of the single distortion type of the image. Experimental results on the LIVE database show that the proposed method has an image single distortion type judgment accuracy of up to 95.21%, and compared with five other image single distortion type judgment methods, the proposed method improves the accuracies for judging JPEG2000 and fast fading distortions by at least 6.69 percentage points and 2.46 percentage points respectively. The proposed method can accurately identify the single distortion type in an image.
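    The high-frequency input of the second channel comes from a one-level 2D Haar transform of the image block; a minimal numpy sketch (sub-band naming and scaling conventions vary) that keeps only the three detail sub-bands:

```python
import numpy as np

def haar_highfreq(block):
    """One-level 2D Haar transform of an image block; returns the three
    detail (high-frequency) sub-bands stacked as a 3-channel map."""
    a = block[0::2, 0::2]; b = block[0::2, 1::2]
    c = block[1::2, 0::2]; d = block[1::2, 1::2]
    horiz = (a - b + c - d) / 2.0
    vert  = (a + b - c - d) / 2.0
    diag  = (a - b - c + d) / 2.0
    return np.stack([horiz, vert, diag])

patch = np.random.default_rng(0).random((128, 128))   # a cropped image block
print(haar_highfreq(patch).shape)                      # (3, 64, 64)
```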
    Image double blind denoising algorithm combining with denoising convolutional neural network and conditional generative adversarial net
    JING Beibei, GUO Jia, WANG Liqing, CHEN Jing, DING Hongwei
    2021, 41(6):  1767-1774.  DOI: 10.11772/j.issn.1001-9081.2020091355
    In order to solve the problems of poor denoising effect and low computational efficiency in image denoising, a double blind denoising algorithm based on the Denoising Convolutional Neural Network (DnCNN) and Conditional Generative Adversarial Net (CGAN) was proposed. Firstly, the improved DnCNN model was used as the CGAN generator to capture the noise distribution of the noisy image. Secondly, the noisy image with the noise distribution removed and the label were sent to the discriminator to distinguish the noise-reduced image. Thirdly, the discrimination results were used to optimize the hidden layer parameters of the whole model. Finally, a balance between the generator and the discriminator was achieved in the game, and the generator's residual capture ability became optimal. Experimental results show that, on the Set12 dataset with noise levels of 15, 25 and 50 respectively, compared with the DnCNN algorithm, the proposed algorithm has the Peak Signal-to-Noise Ratio (PSNR) increased by 1.388 dB, 1.725 dB and 1.639 dB respectively based on the pixel-wise error evaluation index. Compared with existing algorithms such as Block Matching 3D (BM3D), Weighted Nuclear Norm Minimization (WNNM), DnCNN, Cascade of Shrinkage Fields (CSF) and ConSensus neural NETwork (CSNET), the proposed algorithm has the Structural SIMilarity (SSIM) index value improved by 0.000 2 to 0.104 1 on average. The above experimental results verify the superiority of the proposed algorithm.
    Dual-channel night vision image restoration method based on deep learning
    NIU Kangli, CHEN Yuzhang, SHEN Junfeng, ZENG Zhangfan, PAN Yongcai, WANG Yichong
    2021, 41(6):  1775-1784.  DOI: 10.11772/j.issn.1001-9081.2020091411
    Due to the low light level and low visibility of night scenes, night vision images suffer from many problems, such as low signal-to-noise ratio and low imaging quality. To solve these problems, a dual-channel night vision image restoration method based on deep learning was proposed. Firstly, two Convolutional Neural Networks (CNNs) based on the Fully connected Multi-scale Residual learning Block (FMRB) were used to extract multi-scale features and fuse hierarchical features of infrared night vision images and low-light-level night vision images respectively, so as to obtain the reconstructed infrared image and the enhanced low-light-level image. Then, the two processed images were fused by the adaptive weighted averaging algorithm, and the effective information of the more salient of the two images was highlighted adaptively according to the different scenes. Finally, night vision restoration images with high resolution and good visual effect were obtained. The reconstructed infrared night vision image obtained by the FMRB based deep learning network had the average Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) values 3.56 dB and 0.091 2 higher respectively than the image obtained by the Super-Resolution Convolutional Neural Network (SRCNN) reconstruction algorithm, and the enhanced low-light-level night vision image obtained by the FMRB based deep learning network had the average PSNR and SSIM values 6.82 dB and 0.132 1 higher respectively than the image obtained by Multi-Scale Retinex with Color Restoration (MSRCR). Experimental results show that, by using the proposed method, the resolution of the reconstructed image is improved obviously and the brightness of the enhanced image is also improved significantly, and the visual effect of the image fused from the above two images is better. It can be seen that the proposed algorithm can effectively restore night vision images.
    Frontier and comprehensive applications
    Knowledge sharing behavior incentive mechanism for lead users based on evolutionary game
    LI Congdong, HUANG Hao, ZHANG Fanshun
    2021, 41(6):  1785-1791.  DOI: 10.11772/j.issn.1001-9081.2020091449
    Abstract ( )   PDF (1217KB) ( )
    References | Related Articles | Metrics
    Existing research on user innovation communities does not consider the impact of enterprise incentive mechanisms on the knowledge sharing behavior of lead users. In order to solve this problem, a new knowledge sharing behavior incentive mechanism for lead users based on evolutionary game was proposed. Firstly, the enterprise and the lead users were regarded as the players of the evolutionary game, and models were constructed for the two conditions in which the enterprise did not adopt incentive measures and the enterprise adopted incentive measures. Then, to explore the dynamic evolution process and the evolutionarily stable strategy of the system, local stability analysis was performed on the two models respectively. Finally, through computer simulation, the evolution results of knowledge sharing under the two conditions were compared, and the influencing factors and the best incentive strategy for the knowledge sharing behavior of lead users were analyzed. Experimental results show that the enterprise taking incentive measures can effectively promote the knowledge sharing behavior of lead users, and when the incentive distribution coefficient is controlled within a certain range, the system reaches the best stable state; the optimal incentive distribution coefficient is determined by the knowledge sharing cost, the knowledge search cost and the additional cost; and the knowledge sharing cost, knowledge search cost and incentive distribution coefficient can significantly influence the level of knowledge sharing behavior of lead users.
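    The evolutionary dynamics described above can be illustrated numerically with two-population replicator dynamics; the sketch below is an editorial example only, and its payoff parameters (benefit, sharing cost, search cost, incentive amount and distribution coefficient) are illustrative assumptions rather than the values or payoff structure of the paper.

def replicator_step(x, y, dt=0.01, benefit=5.0, share_cost=1.0,
                    search_cost=1.0, incentive=3.0, incentive_ratio=0.6):
    """x: share of lead users who share knowledge; y: share of enterprises that incentivize."""
    # payoff advantage of "share" over "not share" given the enterprise mix y
    du_user = -share_cost + y * incentive_ratio * incentive
    # payoff advantage of "incentivize" over "not incentivize" given the user mix x
    du_firm = x * (benefit - incentive_ratio * incentive) - (1 - x) * search_cost
    x_next = x + dt * x * (1 - x) * du_user
    y_next = y + dt * y * (1 - y) * du_firm
    return x_next, y_next

x, y = 0.3, 0.6
for _ in range(20000):
    x, y = replicator_step(x, y)
print(f"evolutionary outcome: sharing users {x:.2f}, incentivizing enterprises {y:.2f}")

With these illustrative parameters both populations converge towards the (share, incentivize) state, mirroring the qualitative conclusion that incentive measures promote knowledge sharing.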
    Freight routing optimization model and algorithm of battery-swapping electric vehicle
    LI Jin, WANG Feng, YANG Shenyu
    2021, 41(6):  1792-1798.  DOI: 10.11772/j.issn.1001-9081.2020091356
    Abstract ( )   PDF (1049KB) ( )
    References | Related Articles | Metrics
    To address the electric vehicle freight routing optimization problem under the constraints of battery life and battery-swapping stations, a calculation method of electric vehicle carbon emissions considering multiple factors such as speed, load and distance was proposed. Firstly, with the goal of minimizing the power consumption and travel time cost, a mixed integer programming model was established. Then, an adaptive genetic algorithm based on hill-climbing optimization and battery-swapping neighborhood search was proposed, in which the crossover and mutation probabilities were designed to adjust adaptively with the change of the population fitness. Finally, hill-climbing search was used to enhance the local search capability of the algorithm, and a battery-swapping neighborhood search strategy for the electric vehicle was designed to further improve the optimal solution, so as to meet the constraints of battery life and battery-swapping stations and obtain the final optimal feasible solution. The experimental results show that the adaptive genetic algorithm can find a satisfactory solution more quickly and effectively than the traditional genetic algorithm; the route arrangement considering power consumption and travel time can reduce the carbon emissions and the total freight distribution cost; and compared with fixed settings of the crossover and mutation probabilities, the adaptive parameter adjustment method can more effectively avoid local optima and improve the global search ability of the algorithm.
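    A common fitness-based rule for the adaptive crossover and mutation probabilities mentioned above is sketched below; the specific formula and the probability bounds are assumptions for illustration, not the paper's exact design.

def adaptive_rates(f, f_avg, f_max,
                   pc_max=0.9, pc_min=0.6, pm_max=0.10, pm_min=0.01):
    """Return (crossover probability, mutation probability) for one individual.

    f      : fitness of the individual (for crossover, the larger parent fitness)
    f_avg  : average fitness of the current population
    f_max  : best fitness of the current population
    """
    if f_max <= f_avg:                 # nearly uniform population: keep exploring
        return pc_max, pm_max
    if f >= f_avg:                     # above-average individuals are perturbed less
        ratio = (f_max - f) / (f_max - f_avg)
        return (pc_min + (pc_max - pc_min) * ratio,
                pm_min + (pm_max - pm_min) * ratio)
    return pc_max, pm_max              # below-average individuals keep high rates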
    Motion control method of two-link manipulator based on deep reinforcement learning
    WANG Jianping, WANG Gang, MAO Xiaobin, MA Enqi
    2021, 41(6):  1799-1804.  DOI: 10.11772/j.issn.1001-9081.2020091410
    Abstract ( )   PDF (875KB) ( )
    References | Related Articles | Metrics
    Aiming at the motion control problem of the two-link manipulator, a new control method based on deep reinforcement learning was proposed. Firstly, the simulation environment of the manipulator was built, which includes the two-link manipulator, a target and an obstacle. Then, according to the target setting, the state variables and the reward and punishment mechanism of the environment model, three kinds of deep reinforcement learning models were established for training. Finally, the motion control of the two-link manipulator was realized. After comparing and analyzing the three proposed models, the Deep Deterministic Policy Gradient (DDPG) algorithm was selected for further research to improve its applicability, so as to shorten the debugging time of the manipulator model and enable the manipulator to avoid the obstacle and reach the target smoothly. Experimental results show that the proposed deep reinforcement learning method can effectively control the motion of the two-link manipulator, and the improved DDPG control model has its convergence speed increased by two times and shows enhanced stability after convergence. Compared with traditional control methods, the proposed deep reinforcement learning control method has higher efficiency and stronger applicability.
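    The DDPG components used for the two-link manipulator can be sketched in PyTorch as follows; this is an editorial example, and the 6-dimensional state, 2-dimensional torque action, hidden sizes and soft-update rate are assumptions. A deterministic actor maps the joint state to continuous torques, a critic scores state-action pairs, and target networks are updated softly.

import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: joint state -> bounded continuous joint torques."""
    def __init__(self, state_dim=6, action_dim=2, max_torque=1.0):
        super().__init__()
        self.max_torque = max_torque
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, action_dim), nn.Tanh())

    def forward(self, state):
        return self.max_torque * self.net(state)

class Critic(nn.Module):
    """Q-network: scores a (state, action) pair."""
    def __init__(self, state_dim=6, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def soft_update(target, source, tau=0.005):
    """Polyak averaging: target networks slowly track the online networks."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)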
    3D shale digital core reconstruction method based on deep convolutional generative adversarial network with gradient penalty
    WANG Xianwu, ZHANG Ting, JI Xin, DU Yi
    2021, 41(6):  1805-1811.  DOI: 10.11772/j.issn.1001-9081.2020091367
    Abstract ( )   PDF (2129KB) ( )
    References | Related Articles | Metrics
    Aiming at the problems of high cost, poor reusability and low reconstruction quality in traditional digital core reconstruction technology, a 3D shale digital core reconstruction method based on Deep Convolutional Generative Adversarial Network with Gradient Penalty (DCGAN-GP) was proposed. Firstly, the neural network parameters were used to describe the distribution probability of the shale training image and complete the feature extraction of the training image. Secondly, the trained network parameters were saved. Finally, the 3D shale digital core was reconstructed by the generator. The experimental results show that, compared with classic digital core reconstruction technologies, the proposed DCGAN-GP obtains images closer to the training image in porosity, variogram, and pore size and distribution characteristics. Moreover, DCGAN-GP requires less than half the CPU usage of the classic algorithms, has a peak memory usage of only 7.1 GB, and takes only 42 s per reconstruction, reflecting the high quality and high efficiency of the model reconstruction.
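    The gradient penalty that gives DCGAN-GP its name is the standard WGAN-GP regularizer; a sketch for 3D core volumes is given below, where the penalty weight and the 5-dimensional tensor shape are assumptions and the discriminator is passed in as a parameter rather than taken from the paper.

import torch

def gradient_penalty(discriminator, real, fake, lambda_gp=10.0):
    """real, fake: (N, C, D, H, W) volumes; returns the WGAN-GP penalty term."""
    eps = torch.rand(real.size(0), 1, 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    score = discriminator(interp)
    grads = torch.autograd.grad(outputs=score, inputs=interp,
                                grad_outputs=torch.ones_like(score),
                                create_graph=True, retain_graph=True)[0]
    grad_norm = grads.reshape(grads.size(0), -1).norm(2, dim=1)
    # push the critic's gradient norm towards 1 on interpolated samples
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()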
    Plant leaf disease recognition method based on lightweight convolutional neural network
    JIA Heming, LANG Chunbo, JIANG Zichao
    2021, 41(6):  1812-1819.  DOI: 10.11772/j.issn.1001-9081.2020091471
    Abstract ( )   PDF (1486KB) ( )
    References | Related Articles | Metrics
    Aiming at the problems of low accuracy and poor real-time performance of plant leaf disease recognition in the field of agricultural information, a plant leaf disease recognition method based on a lightweight Convolutional Neural Network (CNN) was proposed. The Depthwise Separable Convolution (DSC) and Global Average Pooling (GAP) methods were introduced into the original network to replace the standard convolution operation and the fully connected layer at the end of the network respectively. At the same time, batch normalization was applied during network training to improve the data distribution of the intermediate layers and increase the convergence speed. In order to evaluate the performance of the proposed method comprehensively and reliably, experiments were conducted on the open plant leaf disease image dataset PlantVillage, and indicators such as the loss function convergence curve, test accuracy and parameter memory demand were selected to verify the effectiveness of the improved strategy. Experimental results show that the improved network achieves higher disease recognition accuracy (99.427%) with smaller memory space occupation (6.47 MB), which is superior to other neural-network-based leaf recognition methods and shows strong engineering practicability.
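    The two substitutions described above, depthwise separable convolution in place of standard convolution and global average pooling in place of the fully connected tail, can be sketched in PyTorch as follows; the channel widths, network depth and class count are illustrative assumptions, not the paper's configuration.

import torch.nn as nn

def dsc_block(in_ch, out_ch, stride=1):
    """Depthwise separable convolution: depthwise 3x3 followed by pointwise 1x1."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class LeafDiseaseNet(nn.Module):
    def __init__(self, num_classes=38):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            dsc_block(32, 64), dsc_block(64, 128, stride=2), dsc_block(128, 128))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1),   # global average pooling head
                                  nn.Flatten(),
                                  nn.Linear(128, num_classes))

    def forward(self, x):
        return self.head(self.features(x))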
    Segmentation of ischemic stroke lesion based on long-distance dependency encoding and deep residual U-Net
    HUANG Li, LU Long
    2021, 41(6):  1820-1827.  DOI: 10.11772/j.issn.1001-9081.2020111788
    Abstract ( )   PDF (1812KB) ( )
    References | Related Articles | Metrics
    Segmenting stroke lesions automatically can provide valuable support for the clinical decision process. However, this is a challenging task due to the diversity of lesion size, shape and location, and previous works have failed to capture the global context information that helps to handle this diversity. To solve the problem of ischemic stroke lesion segmentation with a small sample size, an end-to-end neural network combining residual blocks and non-local blocks on the basis of the traditional U-Net was proposed to predict the stroke lesion from multi-modal Magnetic Resonance Imaging (MRI) images. In this method, based on the encoder-decoder architecture of U-Net, residual blocks were stacked to solve the degradation problem and avoid overfitting, and non-local blocks were added to effectively encode long-distance dependencies and provide global context information for the feature extraction process. The proposed method and its variants were evaluated on the Ischemic Stroke Lesion Segmentation (ISLES) 2017 dataset. The results show that the proposed residual U-Net (Dice=0.29±0.23, ASSD=7.66±6.41, HD=43.71±22.11) and Residual Non-local U-Net (RN-UNet) (Dice=0.29±0.23, ASSD=7.61±6.62, HD=45.36±24.75) achieve significant improvement in all metrics compared with the baseline U-Net (Dice=0.25±0.23, ASSD=9.45±7.36, HD=54.59±21.19); compared with the state-of-the-art methods on the ISLES website, the two methods both achieve better segmentation results, so they can help doctors to evaluate the condition of patients quickly and objectively in clinical practice.
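    The two building blocks named above can be sketched in PyTorch as follows; this is an editorial example, and the channel widths and the embedded-Gaussian formulation of the non-local block are assumptions. The residual block eases optimization of the deep encoder, while the non-local block encodes long-distance dependencies by attending over all spatial positions.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Pre-activation residual block for one encoder/decoder stage."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True), nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True), nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block: attends over all spatial positions."""
    def __init__(self, ch):
        super().__init__()
        self.theta = nn.Conv2d(ch, ch // 2, 1)
        self.phi = nn.Conv2d(ch, ch // 2, 1)
        self.g = nn.Conv2d(ch, ch // 2, 1)
        self.out = nn.Conv2d(ch // 2, ch, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.theta(x).reshape(n, c // 2, h * w).transpose(1, 2)   # (n, hw, c/2)
        k = self.phi(x).reshape(n, c // 2, h * w)                     # (n, c/2, hw)
        v = self.g(x).reshape(n, c // 2, h * w).transpose(1, 2)       # (n, hw, c/2)
        attn = torch.softmax(q @ k, dim=-1)                           # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(n, c // 2, h, w)
        return x + self.out(y)                                        # residual connection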
    Spatial frequency divided attention network for ultrasound image segmentation
    SHEN Xuewen, WANG Xiaodong, YAO Yu
    2021, 41(6):  1828-1835.  DOI: 10.11772/j.issn.1001-9081.2020091470
    Abstract ( )   PDF (1917KB) ( )
    References | Related Articles | Metrics
    Aiming at the problems of medical ultrasound images such as numerous noise points, fuzzy boundaries and difficulty in defining cardiac contours, a new Spatial Frequency Divided Attention Network for ultrasound image segmentation (SFDA-Net) was proposed. Firstly, with the help of Octave convolution, high- and low-frequency components of the image were processed in parallel throughout the network to obtain more diverse information. Then, the Convolutional Block Attention Module (CBAM) was added to pay more attention to the effective information during image feature recovery, so as to reduce the loss in segmenting the entire target area. Finally, Focal Tversky Loss was adopted as the objective function to reduce the weights of simple samples, pay more attention to difficult samples, and decrease the errors introduced by pixel misjudgment between categories. Multiple sets of comparative experiments show that, with a parameter number lower than that of the original UNet++, SFDA-Net has the segmentation accuracy increased by 6.2 percentage points, the Dice score risen by 8.76 percentage points, the mean Pixel Accuracy (mPA) improved to 84.09%, and the mean Intersection Over Union (mIoU) increased to 75.79%. SFDA-Net steadily improves the network performance while reducing parameters, and makes echocardiographic segmentation more accurate.
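    The Focal Tversky Loss adopted above can be sketched for binary segmentation as follows; the alpha, beta and gamma values shown are commonly used defaults and are assumptions here rather than the paper's settings.

import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """pred: probabilities in [0, 1]; target: binary mask of the same shape."""
    pred, target = pred.reshape(-1), target.reshape(-1)
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    # the gamma exponent applies the "focal" reweighting between easy and hard samples
    return (1 - tversky) ** gamma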
    Classification of steel surface defects based on lightweight network
    SHI Yangxiao, ZHANG Jun, CHEN Peng, WANG Bing
    2021, 41(6):  1836-1841.  DOI: 10.11772/j.issn.1001-9081.2020081244
    Abstract ( )   PDF (981KB) ( )
    References | Related Articles | Metrics
    Defect classification is an important part of steel surface defect detection. Although Convolutional Neural Networks (CNN) have achieved good results, the increasing number of network parameters consumes a lot of computing cost, which brings great challenges to deploying defect classification tasks on personal computers or low computing power devices. Focusing on this problem, a novel lightweight network model named Mix-Fusion was proposed. Firstly, group convolution and channel shuffle were used to reduce the computational cost while maintaining the accuracy. Secondly, a narrow feature mapping was used to fuse and encode the information between the groups, and the generated features were combined with the original network, so as to effectively solve the problem that the "sparse connection" of group convolution hinders information exchange between the groups. Finally, a new Mixed depthwise Convolution (MixConv) was used to replace the traditional DepthWise Convolution (DWConv) to further improve the performance of the model. Experimental results on the NEU-CLS dataset show that the number of floating-point operations and the classification accuracy of the Mix-Fusion network in the defect classification task are 43.4 Million FLoating-point OPerations (MFLOPs) and 98.61% respectively. Compared with ShuffleNetV2 and MobileNetV2, the proposed Mix-Fusion network reduces the model parameters and compresses the model size effectively, while obtaining better classification accuracy.
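    The group convolution and channel shuffle pair described above can be sketched in PyTorch as follows; the group count, channel width and block layout are assumptions, and the shuffle re-interleaves channels so that information can flow between otherwise sparsely connected groups.

import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Re-interleave channels across groups after a group convolution."""
    n, c, h, w = x.shape
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))

class GroupShuffleUnit(nn.Module):
    def __init__(self, channels=64, groups=4):
        super().__init__()
        self.groups = groups
        self.conv1 = nn.Conv2d(channels, channels, 1, groups=groups, bias=False)
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 1, groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        y = torch.relu(self.conv1(x))
        y = channel_shuffle(y, self.groups)    # lets the groups exchange information
        y = self.dw(y)
        y = self.bn(self.conv2(y))
        return torch.relu(x + y)               # combine with the original features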
    Anomaly detection of oil drilling water flow based on shape flow
    LI Yanzhi, FAN Yong, GAO Lin
    2021, 41(6):  1842-1848.  DOI: 10.11772/j.issn.1001-9081.2020091429
    Abstract ( )   PDF (1537KB) ( )
    References | Related Articles | Metrics
    Intelligent monitoring technology for the water flow of oil drilling can realize automatic monitoring of gaseous pollutants from oil drilling and minimize the cost of manual monitoring. However, existing feature extraction methods cannot describe the change process of the water flow, abnormal samples are difficult to obtain and enumerate exhaustively, and the information of the fusion layer is not fully utilized. In order to solve these problems, a new water flow abnormal data detection algorithm was proposed. Firstly, a new feature representation method named shape flow was proposed. Then, the classic unsupervised anomaly detection neural network GANomaly was optimized into a residual structure. Finally, a feature fusion layer was added to GANomaly to improve the learning ability of the neural network. Experimental results show that the detection accuracy of the improved algorithm reaches 95%, which is 5 percentage points higher than that of the GANomaly algorithm. The proposed algorithm can be applied to the detection of abnormal water flow data in different scenarios, and can overcome the influence of fog on the experimental results.
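    A GANomaly-style scoring scheme of the kind optimized above can be sketched as follows; this editorial example uses a small fully connected encoder-decoder-encoder over 1-D shape-flow features (the layer sizes and feature dimension are assumptions), trained on normal data only, with the anomaly score taken as the distance between the two latent codes.

import torch
import torch.nn as nn

class FlowGANomaly(nn.Module):
    """Encoder-decoder-encoder over 1-D flow features (simplified sketch)."""
    def __init__(self, feat_dim=64, latent_dim=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, feat_dim))
        self.enc2 = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))

    def forward(self, x):
        z = self.enc1(x)          # latent code of the input features
        x_hat = self.dec(z)       # reconstruction
        z_hat = self.enc2(x_hat)  # latent code of the reconstruction
        return x_hat, z, z_hat

def anomaly_score(model, x):
    x_hat, z, z_hat = model(x)
    return torch.norm(z - z_hat, dim=-1)   # larger score => more likely abnormal flow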