
Table of Contents

    10 September 2020, Volume 40 Issue 9
    Artificial intelligence
    Survey of person re-identification technology based on deep learning
    WEI Wenyu, YANG Wenzhong, MA Guoxiang, HUANG Mei
    2020, 40(9):  2479-2492.  DOI: 10.11772/j.issn.1001-9081.2020010038
    As one of the intelligent video surveillance technologies, person Re-identification (Re-id) aims to retrieve a specific person across different camera views, and has great research significance for maintaining social order and stability. Because traditional hand-crafted feature methods are difficult to adapt to the complex camera environments of the person Re-id task, a large number of deep learning-based person Re-id methods have been proposed, greatly promoting the development of person Re-id technology. In order to deeply understand person Re-id technology based on deep learning, a large number of related works were collated and analyzed. First, a comprehensive introduction was given from three aspects: image, video and cross-modality. The image-based person Re-id technology was divided into two categories, supervised and unsupervised, and each category was summarized. Then, related datasets were listed, and the performance of algorithms of recent years on image and video datasets was compared and analyzed. Finally, the difficulties in the development of person Re-id technology were summarized, and possible future research directions of this technology were discussed.
    Person re-identification method based on GAN uniting with spatial-temporal pattern
    QIU Yaoru, SUN Weijun, HUANG Yonghui, TANG Yuqi, ZHANG Haochuan, WU Junpeng
    2020, 40(9):  2493-2498.  DOI: 10.11772/j.issn.1001-9081.2020010006
    Tracking a person across cameras is a technical challenge for smart city and intelligent security applications, and person re-identification is the key technology for cross-camera person tracking. Due to domain bias, applying person re-identification algorithms across scenarios leads to a dramatic drop in accuracy. To address this challenge, a method based on a Generative Adversarial Network (GAN) Uniting with Spatial-Temporal pattern (STUGAN) was proposed. First, training samples of the target scenario generated by the GAN were introduced to enhance the stability of the recognition model. Second, spatio-temporal features were used to construct the spatio-temporal pattern of the target scenario, so as to screen out low-probability matching samples. Finally, the recognition model and the spatio-temporal pattern were combined to realize the person re-identification task. On the classic datasets of this field, Market-1501 and DukeMTMC-reID, the proposed method was compared with BoW (Bag-of-Words), PUL (Progressive Unsupervised Learning), UMDL (Unsupervised Multi-task Dictionary Learning) and other advanced unsupervised algorithms. The experimental results show that the proposed method achieves 66.4%, 78.9% and 84.7% recognition accuracy on the rank-1, rank-5 and rank-10 indicators of the Market-1501 dataset respectively, which is 5.7, 5.0 and 4.4 percentage points higher than the best results of the comparison algorithms; its mean Average Precision (mAP) is higher than that of all comparison algorithms except Similarity Preserving cycle-consistent Generative Adversarial Network (SPGAN).
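The screening of low-probability matches by a spatio-temporal pattern can be sketched as below. This is a hypothetical illustration: the fusion rule (multiplying appearance similarity by the estimated transition probability) and all names are assumptions, not the exact formulation of STUGAN.

```python
def joint_score(appearance_sim, st_prob, eps=1e-6):
    """Fuse appearance similarity with the spatio-temporal transition
    probability of the candidate camera pair and time gap: a match is
    plausible only when both signals are reasonably large."""
    return appearance_sim * (st_prob + eps)

def filter_matches(candidates, threshold=0.1):
    """candidates: list of (gallery_id, appearance_sim, st_prob).
    Drop gallery samples whose fused score falls below the threshold
    and rank the rest, best match first."""
    scored = [(gid, joint_score(sim, prob)) for gid, sim, prob in candidates]
    kept = [(gid, s) for gid, s in scored if s >= threshold]
    return sorted(kept, key=lambda item: -item[1])
```

A gallery sample that looks similar but whose camera transition is nearly impossible (st_prob close to 0) is screened out before ranking.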
    Multi-source adaptation classification framework with feature selection
    HUANG Xueyu, XU Haote, TAO Jianwen
    2020, 40(9):  2499-2506.  DOI: 10.11772/j.issn.1001-9081.2020010094
    For the problem that existing multi-source adaptation learning schemes cannot effectively distinguish the useful information in multiple source domains and transfer it to the target domain, a Multi-source Adaptation Classification Framework with Feature Selection (MACFFS) was proposed. Feature selection and shared feature subspace learning were integrated into a unified framework by MACFFS for joint feature learning. Specifically, multiple source domain classification models were learned by MACFFS through mapping feature data from multiple source domains into different latent spaces, so as to realize the classification of the target domain. Then, the obtained multiple classification results were integrated for learning the target domain classification model. In addition, L2,1-norm sparse regression was used by the framework to replace the traditional least squares regression based on the L2 norm, so as to improve robustness. Finally, a variety of existing methods were experimentally compared and analyzed with MACFFS on two tasks. Experimental results show that, compared with the best-performing existing method, Domain Selection Machine (DSM), MACFFS saves nearly 1/4 of the computation time and improves the recognition rate by about 2%. In general, MACFFS combines machine learning, statistical learning and other related knowledge, and provides a new idea for multi-source adaptation methods. Furthermore, experiments prove that this method outperforms existing methods in recognition applications in real scenes.
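The L2,1 norm that replaces squared-L2 least squares can be computed, together with its row-shrinkage proximal operator, as in the sketch below. This illustrates the norm itself, not the paper's full solver.

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: the sum of the L2 norms of the rows of W. Penalizing it
    drives whole rows to zero (joint feature selection), and it grows only
    linearly with outlier magnitude, hence the robustness gain over
    squared-L2 least squares."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def row_shrink(W, tau):
    """Proximal operator of tau * L2,1: shrink each row's L2 norm by tau,
    zeroing out rows whose norm falls below tau."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale
```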
    Clustering relational network for group activity recognition
    RONG Wei, JIANG Zheyuan, XIE Zhao, WU Kewei
    2020, 40(9):  2507-2513.  DOI: 10.11772/j.issn.1001-9081.2020010019
    Current group behavior recognition methods do not make full use of group relational information, so the group recognition accuracy cannot be effectively improved. Therefore, a deep neural network model based on a hierarchical relational module using the Affinity Propagation (AP) algorithm, named Clustering Relational Network (CRN), was proposed. First, a Convolutional Neural Network (CNN) was used to extract scene features, and regional feature clustering was used to extract person features in the scene. Second, the hierarchical AP relational network module was adopted to extract group relational information. Finally, the individual feature sequences and the group relational information were fused by a Long Short-Term Memory (LSTM) network to obtain the final group recognition result. Compared with the Multi-Stream Convolutional Neural Network (MSCNN), CRN improves the recognition accuracy by 5.39 and 3.33 percentage points on the Volleyball and Collective Activity datasets, respectively. Compared with the Confidence-Energy Recurrent Network (CERN), CRN improves the recognition accuracy by 8.70 and 3.14 percentage points on the two datasets, respectively. Experimental results show that CRN achieves higher recognition accuracy in group behavior recognition tasks.
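Affinity Propagation, the clustering at the heart of CRN's hierarchical relational module, works by exchanging "responsibility" and "availability" messages until exemplars emerge. A minimal NumPy sketch of the standard algorithm (not CRN's trained module) follows:

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Minimal Affinity Propagation on a similarity matrix S (n x n),
    whose diagonal holds each point's 'preference' to be an exemplar.
    Returns the exemplar index chosen by each point."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0, keepdims=True) - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)
```

Unlike k-means, the number of clusters is not fixed in advance; it emerges from the preferences on the diagonal of S.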
    3D face recognition based on hierarchical feature network
    ZHAO Qing, YU Yuanhui
    2020, 40(9):  2514-2518.  DOI: 10.11772/j.issn.1001-9081.2020010103
    Focused on the problems of multiple expression variations, multiple pose variations and varying degrees of missing face point cloud data in Three-Dimensional (3D) faces, 3D point cloud face data was exploratively applied to the PointNet series of classification networks, and the recognition results were compared and analyzed; then a new network framework named HFN (Hierarchical Feature Network) was proposed. First, a point cloud with a fixed number of points was randomly sampled after data preprocessing. Second, the fixed-size point cloud was input into the SA (Set Abstraction) module to obtain the centroid points and neighborhood points of the local areas and extract the features of these local areas, which were then concatenated with the point cloud spatial structural features extracted by the DSA (Directional Spatial Aggregation) module based on multi-directional convolution. Finally, the fully connected layer was used to perform the classification of 3D faces, so as to realize 3D face recognition. The results on the CASIA database show that the average recognition rate of the proposed method is 96.34%, which is better than those of classification networks such as PointNet, PointNet++, PointCNN and Spatial Aggregation Net (SAN).
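Sampling a raw point cloud down to a fixed number of points, as in the preprocessing step above, is a common first stage for PointNet-style networks. A sketch (the padding policy for clouds smaller than the target size is an assumption, not necessarily HFN's):

```python
import numpy as np

def sample_fixed(points, n, seed=0):
    """Randomly sample a point cloud (m x 3) to exactly n points:
    subsample without replacement when m >= n, otherwise keep all
    points and pad by resampling with replacement."""
    rng = np.random.default_rng(seed)
    m = len(points)
    if m >= n:
        idx = rng.choice(m, size=n, replace=False)
    else:
        idx = np.concatenate([np.arange(m), rng.choice(m, size=n - m)])
    return points[idx]
```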
    Non-perception class attendance method based on student body detection
    FANG Shuya, LIU Shouyin
    2020, 40(9):  2519-2524.  DOI: 10.11772/j.issn.1001-9081.2020010067
    Concerning the missed detections and low recognition rate of class attendance systems based on face recognition, a method combining student body detection and face angle filtering was proposed, using a master-slave dual-camera device. First, the bodies of students were detected from the photograph of the master camera by the Mask R-CNN algorithm. Then, the slave camera (a PTZ (Pan/Tilt/Zoom) camera) was controlled to acquire a high-quality magnified image of each student in turn. Next, the face poses in the magnified images were detected and recognized through the MTCNN (Multi-Task Convolutional Neural Network) algorithm and the FSA (Fine-grained Structure Aggregation)-Net algorithm in order to filter out the frontal face image of every student. Finally, the FaceNet algorithm was used to extract the features of the filtered frontal face images for training or recognition of Support Vector Machine (SVM) classifiers. Experimental results show that, compared with the Tiny-face algorithm, when the Intersection over Union (IoU) was 0.75, the body detection algorithm had the Average Precision (AP) increased by about 36% and the detection time reduced by 57%; compared with the method of establishing a multi-pose face database, face angle filtering improved the recognition rate by 4%; and the accuracy of student recognition in the entire classroom was close to 100% in most cases. The proposed method simplifies the student registration process, improves the face recognition rate, and provides new ideas for solving the problem of missed face detection.
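The face angle filtering step can be sketched as a simple threshold test on the pose angles that a head-pose estimator such as FSA-Net outputs. The thresholds and names below are illustrative assumptions, not the paper's values:

```python
def is_frontal(yaw, pitch, roll, yaw_max=15.0, pitch_max=15.0, roll_max=20.0):
    """Keep a face only when the estimated pose (in degrees) is
    near-frontal; angle thresholds are illustrative."""
    return abs(yaw) <= yaw_max and abs(pitch) <= pitch_max and abs(roll) <= roll_max

def pick_frontal(images):
    """images: list of (image_id, (yaw, pitch, roll)); return the ids of
    images that pass the frontal-pose filter."""
    return [img_id for img_id, pose in images if is_frontal(*pose)]
```

Only the images that pass this filter would then be embedded with FaceNet and classified by the SVM.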
    Text classification based on improved capsule network
    YIN Chunyong, HE Miao
    2020, 40(9):  2525-2530.  DOI: 10.11772/j.issn.1001-9081.2019122153
    In order to solve the problems that the pooling operation of the Convolutional Neural Network (CNN) loses some feature information and that the classification accuracy of the Capsule Network (CapsNet) is not high, an improved CapsNet model was proposed. Firstly, two convolution layers were used to extract local features of the text. Then, the CapsNet was used to extract the overall features of the text. Finally, the softmax classifier was used to perform the classification. Compared with CNN and CapsNet, the proposed model improves the classification accuracy by 3.42 percentage points and 2.14 percentage points respectively. The experimental results show that the improved CapsNet model is more suitable for text classification.
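Capsule layers rely on the "squash" nonlinearity, which keeps a capsule vector's direction but maps its length into [0, 1) so it can act as an existence probability. A NumPy sketch of this standard CapsNet component:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity: preserves the vector's direction
    and maps its length |v| to |v|^2 / (1 + |v|^2), so short vectors
    shrink toward zero and long vectors saturate just below 1."""
    sq = np.sum(v * v, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)
```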
    Text sentiment analysis based on gated recurrent unit and capsule features
    YANG Yunlong, SUN Jianqiang, SONG Guochao
    2020, 40(9):  2531-2535.  DOI: 10.11772/j.issn.1001-9081.2020010128
    Aiming at the problems that simple Recurrent Neural Network (RNN) cannot memorize information for a long time and single Convolutional Neural Network (CNN) lacks the ability to capture the semantics of text context, in order to improve the accuracy of text classification, a sentiment analysis model G-Caps (Gated Recurrent Unit-Capsule) was proposed, which combines Gated Recurrent Unit (GRU) and capsule features. First, the contextual global features of the text were captured through GRU in order to obtain the global scalar information. Second, the captured information was iterated through the dynamic routing algorithm at the initial capsule layer to obtain the vectorized feature information representing the overall attributes of the text. Finally, the features were combined in the main capsule part to obtain more accurate text attributes, and the sentiment polarity of the text was analyzed according to the intensity of each feature. Experimental results on the benchmark dataset MR (Movie Reviews) showed that compared with the CNN + INI (Convolutional Neural Network + Initializing convolutional filters) and CL_CNN (Critic Learning_Convolutional Neural Network) methods, G-Caps had the classification accuracy increased by 3.1 percentage points and 0.5 percentage points respectively. It can be seen that the G-Caps model effectively improves the accuracy of text sentiment analysis in practice.
    End-to-end adversarial variational Bayes method for short text sentiment classification
    YIN Chunyong, ZHANG Sun
    2020, 40(9):  2536-2542.  DOI: 10.11772/j.issn.1001-9081.2020010048
    Concerning the problem of low accuracy in sentiment classification caused by the shortness of texts, an end-to-end short text sentiment classifier was proposed based on adversarial learning and variational inference. First, spectral normalization was employed to alleviate the oscillation of the discriminator during training. Second, an additional classifier was utilized to guide the updating of the inference model. Third, Adversarial Variational Bayes (AVB) was used to extract the topic features of the short text. Finally, the topic features and pre-trained word vector features were fused by applying the attention mechanism three times in order to realize the classification. Experimental results on one product review dataset and two micro-blog datasets show that the proposed model improves the accuracy by 2.9, 2.2 and 8.4 percentage points respectively compared to the Bidirectional Long Short-Term Memory network based on Self-Attention (BiLSTM-SA). It can be seen that the proposed model can be applied to mine sentiments and opinions in short social texts, which is significant for public opinion discovery, user feedback, quality supervision and other related fields.
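Spectral normalization stabilizes a discriminator by constraining each weight matrix's largest singular value to 1. The power-iteration estimate behind it can be sketched as below; frameworks persist the vector u across training steps, whereas this standalone version simply re-runs the iteration:

```python
import numpy as np

def spectral_normalize(W, iters=50, seed=0):
    """Divide W by its largest singular value, estimated by power
    iteration, making the linear map approximately 1-Lipschitz."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    v = None
    for _ in range(iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma
```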
    Sentiment analysis based on parallel hybrid network and attention mechanism
    SUN Min, LI Yang, ZHUANG Zhengfei, YU Dawei
    2020, 40(9):  2543-2548.  DOI: 10.11772/j.issn.1001-9081.2019112020
    Concerning the problems that the traditional Convolutional Neural Network (CNN) ignores the context and semantic information of words and loses much feature information in max pooling, that the traditional Recurrent Neural Network (RNN) suffers from information memory loss and vanishing gradients, and that both CNN and RNN ignore the importance of individual words to sentence meaning, a model based on a parallel hybrid network and the attention mechanism was proposed. First, the text was vectorized with GloVe. After that, through the embedding layer, the CNN and the bidirectional gated recurrent neural network were respectively used to extract text features with different characteristics. Then, the features extracted by the two networks were fused, and the attention mechanism was introduced to judge the importance of different words to the meaning of the sentence. Multiple sets of comparative experiments were performed on the IMDB English corpus. The experimental results show that the accuracy of the proposed model in text classification reaches 91.46% and its F1-measure reaches 91.36%.
    Intent recognition dataset for dialogue systems in power business
    LIAO Shenglan, YIN Shi, CHEN Xiaoping, ZHANG Bo, OUYANG Yu, ZHANG Heng
    2020, 40(9):  2549-2554.  DOI: 10.11772/j.issn.1001-9081.2020010119
    For the intelligent dialogue system of customer service robots in power supply business halls, a large-scale dataset of power business user intents was constructed. The dataset includes 9 577 user queries and their labeled categories. First, the real voice data collected from the power supply business halls were cleaned, processed and filtered. In order to enable the data to drive the study of deep learning models related to intent classification, the data were labeled and augmented with high quality by professionals according to the background knowledge of power business; in the labeling process, 35 types of service category labels were defined according to the power business. In order to test the practicability and effectiveness of the proposed dataset, several classical intent classification models were used for experiments, and the obtained intent classification models were deployed in the dialogue system. The classical Text Recurrent Convolutional Neural Network (Text-RCNN) classification model was able to achieve 87.1% accuracy on this dataset. Experimental results show that the proposed dataset can effectively drive the research on power business related dialogue systems and improve user satisfaction.
    Improved redundant point filtering-based 3D object detection method
    SONG Yifan, ZHANG Peng, ZONG Libo, MA Bo, LIU Libo
    2020, 40(9):  2555-2560.  DOI: 10.11772/j.issn.1001-9081.2019122092
    VoxelNet is the first end-to-end object detection model based on point clouds; taking only point cloud data as input, it achieves good results. However, in VoxelNet, taking the point cloud of the full scene as input spends much computation on background point cloud data, and false detections and missed detections easily occur in complex scenes, because a point cloud carrying only geometric information provides low recognition granularity of the targets. In order to solve these problems, an improved VoxelNet model with a view frustum added was proposed. Firstly, the targets of interest were located in the RGB front-view image. Then, the 2D targets were lifted into spatial view frustums, and the view frustum candidate regions were extracted in the point cloud to filter out the redundant points; only the point cloud within a view frustum candidate region was processed to obtain the detection results. Compared with VoxelNet, the improved algorithm reduces the computational complexity of the point cloud and avoids the calculation on background point cloud data, so as to increase the efficiency of detection. At the same time, it avoids the disturbance of redundant background points and decreases the false detection rate and missed detection rate. The experimental results on the KITTI dataset show that the improved algorithm outperforms VoxelNet in 3D detection with 67.92%, 59.98% and 53.95% average precision at the easy, moderate and hard levels.
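The frustum filtering idea (keep only the 3D points whose projection lands inside a 2D detection box) can be sketched with pinhole intrinsics as below; depth limits and the coordinate-frame details of the actual pipeline are omitted:

```python
import numpy as np

def in_frustum(points, K, box):
    """Keep 3D points (camera coordinates, n x 3) whose pinhole
    projection through intrinsics K falls inside the 2D detection box
    (x1, y1, x2, y2). Points behind the camera are discarded."""
    x1, y1, x2, y2 = box
    z = points[:, 2]
    uvw = points @ K.T  # homogeneous projection
    with np.errstate(divide="ignore", invalid="ignore"):
        u = uvw[:, 0] / z
        v = uvw[:, 1] / z
    mask = (z > 0) & (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return points[mask]
```

Only the surviving points would then be voxelized and fed to the VoxelNet backbone.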
    Ship detection based on enhanced YOLOv3 under complex environments
    NIE Xin, LIU Wen, WU Wei
    2020, 40(9):  2561-2570.  DOI: 10.11772/j.issn.1001-9081.2020010097
    In order to improve the intelligence level of waterway traffic safety supervision, and to further improve the positioning precision and detection accuracy of deep learning-based ship detection algorithms, an enhanced YOLOv3 algorithm for ship detection was proposed on the basis of traditional YOLOv3. First, uncertainty regression of the prediction box was introduced into the network prediction layer in order to predict the uncertainty information of the bounding box. Second, the negative log likelihood function and an improved binary cross entropy function were used to redesign the loss function. Then, the K-means clustering algorithm was used to redesign the scales of the prior anchor boxes according to ship shapes, and the prior anchor boxes were evenly distributed to the corresponding prediction scales. During the training phase, a data augmentation strategy was used to expand the number of training samples. Finally, the Non-Maximum Suppression (NMS) algorithm with a Gaussian soft threshold function was used to post-process the prediction boxes. Comparison experiments on the various improvements and on different object detection algorithms were conducted on a real maritime video surveillance dataset. Experimental results show that, compared to the traditional YOLOv3 algorithm, the YOLOv3 algorithm with prediction box uncertainty information has the number of False Positives (FP) reduced by 35.42% and the number of True Positives (TP) increased by 1.83%, thus improving the accuracy. The mean Average Precision (mAP) of the enhanced YOLOv3 algorithm on ship images reaches 87.74%, which is improved by 24.12% and 23.53% respectively compared to those of the traditional YOLOv3 algorithm and the Faster R-CNN algorithm. The proposed algorithm detects 30.70 images per second, meeting the requirement of real-time detection. Experimental results indicate that the proposed algorithm can achieve high-precision, robust and real-time detection of ships under adverse conditions such as foggy weather and low light, as well as in complex navigation backgrounds.
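The Gaussian soft-threshold NMS used for post-processing decays the scores of boxes that overlap the current best one by exp(-IoU²/σ) instead of deleting them outright, which helps in crowded scenes. A sketch of the standard soft-NMS procedure:

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def soft_nms_gaussian(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay the score of every box overlapping the
    currently selected one by exp(-IoU^2 / sigma); drop boxes whose
    score falls below score_thresh. Returns surviving indices in
    selection order."""
    scores = list(scores)
    remaining = list(range(len(boxes)))
    keep = []
    while remaining:
        best = max(remaining, key=lambda i: scores[i])
        keep.append(best)
        remaining.remove(best)
        for i in remaining:
            scores[i] *= np.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
        remaining = [i for i in remaining if scores[i] >= score_thresh]
    return keep
```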
    Pulmonary nodule detection based on feature pyramid networks
    GAO Zhiyong, HUANG Jinzhen, DU Chenggang
    2020, 40(9):  2571-2576.  DOI: 10.11772/j.issn.1001-9081.2019122122
    Pulmonary nodules in Computerized Tomography (CT) images vary greatly in size and are often small and irregular, which leads to low detection sensitivity. In order to solve this problem, a method based on the Feature Pyramid Network (FPN) was proposed. First, the FPN was used to extract multi-scale features of nodules and strengthen the features of small objects and object boundary details. Second, a semantic segmentation network (named Mask FPN) was designed based on the FPN to segment and extract the pulmonary parenchyma quickly and accurately, so that the pulmonary parenchyma area could be used as a location map for object proposals. At the same time, a deconvolution layer was added on the top layer of the FPN and a multi-scale prediction strategy was used to optimize the Faster Region-based Convolutional Neural Network (Faster R-CNN) in order to improve the performance of pulmonary nodule detection. Finally, to address the imbalance of positive and negative samples in the pulmonary nodule dataset, the focal loss function was used in the Region Proposal Network (RPN) module in order to increase the detection rate of nodules. The proposed algorithm was tested on the public dataset LUNA16. Experimental results show that the improved network with the FPN and the deconvolution layer is helpful to the detection of pulmonary nodules, as is the focal loss function. Combining these improvements, when the average number of candidate nodules per scan was 46.7, the sensitivity of the presented method was 95.7%, indicating that the method is more sensitive than other convolutional networks such as Faster R-CNN and UNet. The proposed method can extract nodule features of different scales effectively and improve the detection sensitivity of pulmonary nodules in CT images. Meanwhile, the method can also detect small nodules effectively, which is beneficial to the diagnosis and treatment of lung cancer.
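The focal loss used in the RPN down-weights easy examples so that the abundant easy negatives do not swamp training. In its standard binary form:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    examples, focusing the gradient on hard ones."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)           # prob of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)   # class-balance weight
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))
```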
    Data science and technology
    RUFS: a pure userspace network file system
    DONG Haoyu, CHEN Kang
    2020, 40(9):  2577-2585.  DOI: 10.11772/j.issn.1001-9081.2020010077
    The overall performance of a traditional network file system is limited by software overhead when using high-speed storage devices. Therefore, a method of constructing a file system using SPDK (Storage Performance Development Kit) was proposed, and on this basis a prototype network file system, RUFS (Remote Userspace File System), was realized. In this system, key-value storage was used to simulate the directory tree structure of the file system and manage the file system metadata, while the file contents were stored using SPDK. Besides, RDMA (Remote Direct Memory Access) technology was used to provide the file system service to clients. Compared with NFS+ext4, on 4 KB random access, RUFS had the read and write bandwidth increased by 202.2% and 738.9% respectively, and the average read and write latency decreased by 74.4% and 97.2% respectively; on 4 MB sequential access, RUFS had the read and write bandwidth increased by 153.1% and 44.0% respectively. RUFS also had significant advantages over NFS+ext4 on most metadata operations; on folder creation in particular, RUFS had the performance increased by about 5 693.8%. By making full use of the performance advantages of high-speed networks and high-speed storage devices, this system provides a file system service with lower latency and higher bandwidth.
    Log analysis and workload characteristic extraction in distributed storage system
    GOU Zi'an, ZHANG Xiao, WU Dongnan, WANG Yanqiu
    2020, 40(9):  2586-2593.  DOI: 10.11772/j.issn.1001-9081.2020010121
    Analyzing the workload running on a file system is helpful for optimizing the performance of distributed file systems and is crucial to the construction of new storage systems. Due to the complexity of workloads and their increasing diversity of scale, intuition-based analysis can no longer fully capture the characteristics of workload traces. To solve this problem, a distributed log analysis and workload characteristic extraction model was proposed. First, read- and write-related information was extracted from distributed file system logs according to keywords. Second, the workload characteristics were described from two aspects: statistics and timing. Finally, the possibility of system optimization based on workload characteristics was analyzed. Experimental results show that the proposed model is feasible and accurate, and can give workload statistical and timing characteristics in detail. It has the advantages of low overhead, high timeliness and ease of analysis, and can be used to guide the synthesis of workloads with the same characteristics, hot-spot data monitoring, and cache prefetching optimization of the system.
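Keyword-based extraction of read/write records from logs can be sketched as below. The log line format, field names and regex are purely hypothetical illustrations; real distributed file system logs differ:

```python
import re

# Hypothetical log format: "<timestamp> <op> <path> off=<offset> len=<length>"
LINE = re.compile(r'(?P<ts>\d+\.\d+) (?P<op>read|write) (?P<path>\S+) '
                  r'off=(?P<off>\d+) len=(?P<len>\d+)')

def extract_ops(lines):
    """Pull read/write records out of raw log lines by keyword matching;
    non-matching lines (heartbeats, etc.) are skipped."""
    records = []
    for line in lines:
        m = LINE.search(line)
        if m:
            records.append((float(m['ts']), m['op'], m['path'],
                            int(m['off']), int(m['len'])))
    return records

def op_stats(records):
    """Statistical characteristics: per-operation count and bytes moved."""
    summary = {}
    for _, op, _, _, length in records:
        count, nbytes = summary.get(op, (0, 0))
        summary[op] = (count + 1, nbytes + length)
    return summary
```

Sorting the extracted records by timestamp would similarly expose the timing characteristics mentioned above.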
    Influential scholar recommendation model in academic social network
    LI Chunying, TANG Yong, XIAO Zhenghong, LI Tiansong
    2020, 40(9):  2594-2599.  DOI: 10.11772/j.issn.1001-9081.2020010110
    At present, academic social network platforms suffer from problems such as information overload and information asymmetry, which make it difficult for scholars, especially those with low influence, to find the contents they are interested in. At the same time, the scholars with high influence in an academic social network promote the formation of academic communities and guide the scientific research of the scholars with low influence. Therefore, an Influential Scholar Recommendation Model based on Academic Community Detection (ISRMACD) was proposed to provide a recommendation service for the scholars with low influence in academic social networks. First, the influential scholar group was used as the core community structure to detect academic communities in the complex network topology generated by the friendship relations among the scholars in the academic social network. Then the influence of each scholar in the academic social network was calculated, and the recommendation service of influential scholars in the community was implemented. Experimental results on the SCHOLAT dataset show that the proposed model achieves high recommendation quality under different numbers of recommended influential scholars, and obtains the best recommendation accuracy when recommending 10 influential scholars each time, reaching 70% and above.
    POI recommendation algorithm combining spatiotemporal information and POI importance
    LI Hanlu, XIE Qing, TANG Lingli, LIU Yongjian
    2020, 40(9):  2600-2605.  DOI: 10.11772/j.issn.1001-9081.2020010060
    Aiming at the data noise filtering problem and the problem of the differing importance of POIs (Points-Of-Interest) in POI recommendation research, a POI recommendation algorithm named RecSI (Recommendation by Spatiotemporal information and POI Importance) was proposed. First, geographic information and the mutual attraction between POIs were used to filter out data noise, so as to narrow the range of the candidate set. Second, the user's preference score was calculated by combining the user's preference for each POI category at different times of the day with the popularities of the POIs. Then, the importance of each POI was calculated by combining social information with a weighted PageRank algorithm. Finally, the user's preference score and the POI importances were linearly combined in order to recommend the top-K POIs to the user. Experimental results on a real Foursquare check-in dataset show that the precision and recall of the RecSI algorithm are higher than those of the baseline GCSR (Geography-Category-Social sentiment fusion Recommendation) algorithm by 12.5% and 6% respectively, which verifies the effectiveness of the RecSI algorithm.
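The weighted PageRank used for POI importance can be sketched as a power iteration over a row-normalized weighted adjacency matrix. The graph construction from social information is the paper's; this is only the generic iteration:

```python
import numpy as np

def weighted_pagerank(W, d=0.85, iters=100):
    """Power-iteration PageRank on a weighted adjacency matrix
    (W[i, j] = weight of the edge i -> j). Rows are normalized into
    transition probabilities; dangling nodes jump uniformly."""
    n = W.shape[0]
    rowsum = W.sum(axis=1, keepdims=True)
    P = np.where(rowsum > 0, W / np.maximum(rowsum, 1e-12), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1.0 - d) / n + d * (r @ P)
    return r
```

Nodes that receive more (heavily weighted) inbound edges accumulate higher rank, which is the sense in which a POI is "important".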
    Recommendation algorithm based on modularity and label propagation
    SHENG Jun, LI Bin, CHEN Ling
    2020, 40(9):  2606-2612.  DOI: 10.11772/j.issn.1001-9081.2020010095
    To solve the problem of commodity recommendation based on network information, a recommendation algorithm based on community mining and label propagation on a bipartite network was proposed. Firstly, a weighted bipartite graph was used to represent the user-item scoring matrix, and label propagation was adopted to perform community mining on the bipartite network. Then, the items which the users might be interested in were mined based on the community structure information of the bipartite network, by making full use of the similarity between the communities that the users belong to, the similarity between items, and the similarity between users. Finally, item recommendations were made to the users. The experimental results on real-world networks show that, compared with the Collaborative Filtering recommendation algorithm based on item rating prediction using Bidirectional Association Rules (BAR-CF), the Collaborative Filtering recommendation algorithm based on Item Rating prediction (IR-CF), the user Preferences prediction method based on network Link Prediction (PLP) and the Modified User-based Collaborative Filtering (MU-CF), the proposed algorithm has a Mean Absolute Error (MAE) 0.1 to 0.3 lower and a precision 0.2 higher. Therefore, the proposed algorithm can obtain recommendation results of higher quality than similar methods.
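Community mining by label propagation repeatedly lets each node adopt the most frequent label among its neighbors until labels stabilize. A minimal sketch on a plain adjacency-list graph (the bipartite weighting of the actual algorithm is omitted):

```python
import random

def label_propagation(adj, iters=50, seed=1):
    """Label propagation community detection. adj: dict mapping each
    node to a list of neighbors. Every node starts with its own label
    and repeatedly adopts the majority label of its neighbors (random
    tie-breaking); the final label groups are the communities."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(iters):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            top = max(counts.values())
            best = rng.choice(sorted(l for l, c in counts.items() if c == top))
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels
```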
    Commodity recommendation model based on improved deep Q network structure
    FU Kui, LIANG Shaoqing, LI Bing
    2020, 40(9):  2613-2621.  DOI: 10.11772/j.issn.1001-9081.2019112002
    Traditional recommendation methods suffer from problems such as data sparsity and poor feature recognition. To solve these problems, positive and negative feedback datasets with time-series properties were constructed from implicit feedback. Since the positive and negative feedback datasets and commodity purchases have strong time-series features, the Long Short-Term Memory (LSTM) network was introduced as a component of the model. Considering that the user's own characteristics and the returns of action selection are determined by different input data, the deep Q network based on the competitive (dueling) architecture was improved: integrating the user's positive and negative feedback with the time-series features of commodity purchases, a commodity recommendation model based on the improved deep Q network structure was designed. In the model, the positive and negative feedback data were trained differently, and the time-series features of commodity purchases were extracted. On the Retailrocket dataset, compared with the best performance among the Factorization Machine (FM), W&D (Wide & Deep learning) and Collaborative Filtering (CF) models, the proposed model has the precision, recall, Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG) increased by 158.42%, 89.81%, 95.00% and 65.67% respectively. At the same time, Dueling Bandit Gradient Descent (DBGD) was used as the exploration method, so as to alleviate the low diversity of recommended commodities.
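The competitive (dueling) architecture splits the Q network into a state-value stream and an advantage stream, recombined as Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a'). The aggregation step itself is tiny:

```python
import numpy as np

def dueling_q(value, advantages):
    """Aggregate the two streams of a dueling Q network:
    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    Subtracting the mean advantage keeps V and A identifiable, since
    adding a constant to A and subtracting it from V would otherwise
    leave Q unchanged."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()
```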
    Evaluation metrics of outlier detection algorithms
    NING Jin, CHEN Leiting, LUO Zijuan, ZHOU Chuan, ZENG Huiru
    2020, 40(9):  2622-2627.  DOI: 10.11772/j.issn.1001-9081.2020010126
    With the in-depth research and extensive application of outlier detection technology, more and more excellent algorithms have been proposed. However, the existing outlier detection algorithms still use the evaluation metrics of traditional classification, which leads to the problems of singleness and poor adaptability of the evaluation metrics. To solve these problems, the first type of High True positive rate-Area Under Curve (HT_AUC) and the second type of Low False positive rate-Area Under Curve (LF_AUC) were proposed. First, the commonly used outlier detection evaluation metrics were analyzed to illustrate their advantages and disadvantages as well as applicable scenarios. Then, based on the existing Area Under Curve (AUC) method, HT_AUC and LF_AUC were proposed aiming at the high True Positive Rate (TPR) demand and the low False Positive Rate (FPR) demand respectively, so as to provide more suitable metrics for performance evaluation as well as quantization and integration of outlier detection algorithms. Experimental results on real-world datasets show that the proposed method is able to better satisfy the demands of the first type of high true positive rate and the second type of low false positive rate than the traditional evaluation metrics.
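    The idea of weighting the high-TPR region of the ROC curve can be illustrated with a partial-AUC sketch (the exact HT_AUC and LF_AUC definitions are given in the paper; restricting the area to steps above a TPR floor is an illustrative assumption):

```python
def roc_points(scores, labels):
    # sweep the decision threshold from high to low, tracing the ROC step curve
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))
    return pts

def high_tpr_auc(scores, labels, tpr_min=0.8):
    # area under the ROC step curve counted only where TPR >= tpr_min,
    # a partial-AUC stand-in for the HT_AUC idea of emphasizing the
    # high-true-positive-rate regime
    area = 0.0
    pts = roc_points(scores, labels)
    for (x0, y0), (x1, _) in zip(pts, pts[1:]):
        if x1 > x0 and y0 >= tpr_min:   # horizontal step above the TPR floor
            area += (x1 - x0) * y0
    return area
```

    An LF_AUC-style metric would symmetrically restrict attention to the low-FPR part of the curve.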
    Application of fractal interpolation in wind speed time series
    GUO Xiuting, ZHU Changsheng, ZHANG Shengcai, ZHAO Kuipeng
    2020, 40(9):  2628-2633.  DOI: 10.11772/j.issn.1001-9081.2020010130
    A fractal interpolation algorithm based on adaptive mutation Particle Swarm Optimization (PSO) was proposed aiming at the interpolation problem of a large number of continuous missing data in wind speed data of wind farms. First, the mutation factor was introduced into the particle swarm optimization algorithm to enhance the diversity of particles and the search accuracy of the algorithm. Second, the optimal value of the vertical scaling factor in the fractal interpolation algorithm was obtained by the adaptive mutation particle swarm optimization algorithm. Finally, two datasets with different trends and change characteristics were analyzed by fractal interpolation, and the proposed algorithm was compared with Lagrange interpolation and cubic spline interpolation. The results show that fractal interpolation is not only able to maintain the overall fluctuation characteristics and local characteristics of the wind speed curve, but is also more accurate than the traditional interpolation methods. In the experiment based on Dataset A, the Root Mean Square Error (RMSE) of fractal interpolation was reduced by 66.52% and 58.57% respectively compared with those of Lagrange interpolation and cubic spline interpolation. In the experiment based on Dataset B, the RMSE of fractal interpolation was decreased by 76.72% and 67.33% respectively compared with those of Lagrange interpolation and cubic spline interpolation. It is verified that fractal interpolation is more suitable for the interpolation of wind speed time series with strong fluctuation and continuous missing data.
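    The role of the vertical scaling factor that the adaptive-mutation PSO tunes can be seen in a standard fractal-interpolation sketch (the affine-map coefficients follow the usual iterated-function-system construction; `d` holds the vertical scaling factors, and the chaos-game sampling is one common way to draw the interpolant):

```python
import random

def fif_maps(xs, ys, d):
    # affine maps w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i) of the
    # fractal interpolation function; d[i] is the vertical scaling factor
    x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
    L = xN - x0
    maps = []
    for i in range(1, len(xs)):
        a = (xs[i] - xs[i - 1]) / L
        e = (xN * xs[i - 1] - x0 * xs[i]) / L
        c = (ys[i] - ys[i - 1] - d[i - 1] * (yN - y0)) / L
        f = (xN * ys[i - 1] - x0 * ys[i] - d[i - 1] * (xN * y0 - x0 * yN)) / L
        maps.append((a, e, c, d[i - 1], f))
    return maps

def chaos_game(maps, n=5000, seed=1):
    # generate points on the fractal interpolant by random iteration
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        a, e, c, dv, f = rng.choice(maps)
        x, y = a * x + e, c * x + dv * y + f
        pts.append((x, y))
    return pts
```

    Each map w_i sends the endpoints of the whole curve onto data points i-1 and i, so the interpolant passes through every sample; larger |d_i| makes the reconstructed curve rougher, which is why tuning d_i matters for strongly fluctuating wind speed series.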
    Cyber security
    Two-way synchronous quantum identity authentication protocol based on single photon
    ZHANG Xinglan, ZHAO Yijing
    2020, 40(9):  2634-2638.  DOI: 10.11772/j.issn.1001-9081.2020010069
    Aiming at the needs of high efficiency, two-way authentication and synchronization in Quantum Identity Authentication (QIA), a two-party quantum identity authentication protocol based on single photons was proposed. First, a new type of single-photon two-way measurement base coding method was used in the protocol. Then an authentication process based on quantum tickets was proposed by combining with the idea of the classic Kerberos cryptographic protocol. On this basis, the strategy of two-way and synchronous authentication was adopted in the authentication process. Finally, the probability calculation and security analysis of various attack methods in quantum communication and authentication were carried out, and at the same time, an attempt was made to extend the protocol from two parties to multiple parties. Compared with the preparation-measurement based quantum identity authentication protocol with a new coding strategy, a complete two-way synchronous identity authentication protocol which can prevent user repudiation was proposed, and reference principles for expanding the protocol to multi-party communication were given. In conclusion, the proposed method improves the efficiency of quantum authentication theory, supports the new possibility of combining quantum communication protocols with classic protocols, and realizes a theoretically undeniable synchronous authentication process between two parties.
    Auditable signature scheme for blockchain based on secure multi-party
    WANG Yunye, CHENG Yage, JIA Zhijuan, FU Junjun, YANG Yanyan, HE Yuchu, MA Wei
    2020, 40(9):  2639-2645.  DOI: 10.11772/j.issn.1001-9081.2020010096
    Aiming at the credibility problem, a secure multi-party auditable signature scheme for blockchain was proposed. In the proposed scheme, a trust vector with timestamp was introduced, and a trust matrix composed of multi-dimensional vector groups was constructed to regularly record the trustworthy behavior of participants, so that a credible evaluation mechanism for the participants was established. The evaluation results were stored in the blockchain as a basis for verification. On the premise of ensuring that the participants are trusted, a secure and trusted signature scheme was constructed through secret sharing technology. Security analysis shows that the proposed scheme can effectively reduce the damage caused by malicious participants, detect the credibility of participants, and resist mobile attacks. Performance analysis shows that the proposed scheme has lower computational complexity and higher execution efficiency.
    Optimal bitcoin transaction fee payment strategy based on queuing game
    HUANG Dongyan, LI Lang
    2020, 40(9):  2646-2649.  DOI: 10.11772/j.issn.1001-9081.2020010132
    At the peak of bitcoin transactions, users need to increase the transaction fee to compete for the limited block space in order to have their transactions packed into a block as soon as possible. An optimal transaction fee payment strategy was proposed to solve the problem of how to choose an appropriate transaction fee. First, the process in which transactions queue and compete to be recorded on the blockchain was modeled as a non-preemptive priority queueing model by adopting queuing game theory. Then, the impact of the transaction fee on transaction time was analyzed, so as to obtain the functional relation between transaction time and transaction fee, and the Nash equilibrium payment strategy for the user was derived. Simulation results showed that the user total cost (weighted sum of the waiting time and the transaction fee) was able to be effectively reduced when the optimal payment strategy was adopted. Compared with the strategy of not paying transaction fees and the strategy of linearly increasing transaction fees according to the congestion, the proposed strategy had the user total cost decreased by 97% and 72% respectively in the system with high load. The proposed payment strategy can effectively reduce the cost of transaction fees while ensuring that the transactions are processed as quickly as possible.
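    The effect of fee-based priority on waiting time can be illustrated with the textbook non-preemptive priority queue (exponential service is an assumption made here for the sketch; the paper's own model may differ in detail):

```python
def npp_wait(lambdas, mu):
    # mean waiting time per priority class (index 0 = highest priority,
    # i.e. the largest fee) in a non-preemptive M/M/1 priority queue:
    # W_k = W0 / ((1 - sigma_{k-1}) * (1 - sigma_k)), sigma_k = rho_1+...+rho_k
    rhos = [lam / mu for lam in lambdas]
    w0 = sum(lam / mu ** 2 for lam in lambdas)   # mean residual service time
    waits, sigma_prev = [], 0.0
    for rho in rhos:
        sigma = sigma_prev + rho
        waits.append(w0 / ((1 - sigma_prev) * (1 - sigma)))
        sigma_prev = sigma
    return waits
```

    Raising a transaction's fee moves it into a higher class with a smaller sigma, shrinking its expected wait; the Nash equilibrium strategy balances that gain against the fee itself.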
    Design and implementation of high-interaction programmable logic controller honeypot system based on industrial control business simulation
    ZHAO Guoxin, DING Ruofan, YOU Jianzhou, LYU Shichao, PENG Feng, LI Fei, SUN Limin
    2020, 40(9):  2650-2656.  DOI: 10.11772/j.issn.1001-9081.2019122214
    The capability of entrapment is significantly influenced by the degree of simulation in industrial control honeypots. In view of the lack of business logic simulation in existing industrial control honeypots, a high-interaction Programmable Logic Controller (PLC) honeypot design framework and implementation method based on industrial control business simulation were proposed. First, based on the interaction level of industrial control systems, a new classification method of Industrial Control System (ICS) honeypots was proposed. Then, according to different simulation dimensions of ICS devices, the entrapment process in the honeypot was divided into a process simulation cycle and a service simulation cycle. Finally, in order to realize the real-time response to business logic data, the process data was transferred to the service simulation cycle through a customized data transfer module. Combining the typical ICS honeypot software Conpot and the modeling simulation tool Matlab/Simulink, the experiments were carried out with a Siemens S7-300 PLC device as the reference, so as to realize the collaborative work of information service simulation and control process simulation. The experimental results show that compared with Conpot, the proposed PLC honeypot system newly adds 11 private functions of Siemens S7 devices. In particular, the read (function code 04 Read) and write (function code 05 Write) operations among the new functions realize 7-channel monitoring of I-area data and 1-channel control of Q-area data in the PLC. This new honeypot system breaks through the limitations of existing interaction levels and methods and finds new directions for ICS honeypot design.
    Privacy-preserving determination problem of integer-interval positional relationship
    MA Minyao, LIU Zhuo, XU Yi, WU Lian
    2020, 40(9):  2657-2664.  DOI: 10.11772/j.issn.1001-9081.2020020149
    An integer-interval is the set consisting of the left and right endpoints of the interval (which are integers) and all integers between them. The positional relationship between integer-intervals is the relation between the positions of two integer-intervals. Aiming at this relationship, a secure two-party computation problem, namely the privacy-preserving determination problem of integer-interval positional relationship, was proposed. In this problem, two users, each holding a private integer-interval, were helped to correctly determine the positional relationship between their two integer-intervals while keeping the intervals private. Six positional relationships between two integer-intervals were defined, the 0-1 coding scheme of integer-intervals was given, and a determination rule for the integer-interval positional relationship was proved. Then, based on the Goldwasser-Micali cryptosystem and the semi-honest attacker model, a secure two-party computation protocol for solving the privacy-preserving determination problem of integer-interval positional relationship was designed. The protocol was proved to be both correct and secure, and the performance of the protocol was analyzed and explained.
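    A plaintext reference for the determination rule helps fix ideas; the six labels below are an assumed taxonomy (the paper's own six relations may be named differently), and the protocol computes the same answer without revealing a, b, c, d:

```python
def interval_relation(a, b, c, d):
    # relation of integer-interval [a, b] to [c, d] (endpoints included)
    assert a <= b and c <= d
    if b < c:
        return 'left-disjoint'      # [a, b] lies entirely before [c, d]
    if d < a:
        return 'right-disjoint'     # [a, b] lies entirely after [c, d]
    if a == c and b == d:
        return 'equal'
    if c <= a and b <= d:
        return 'contained'          # [a, b] inside [c, d]
    if a <= c and d <= b:
        return 'contains'           # [c, d] inside [a, b]
    return 'overlap'                # partial intersection
```

    The secure protocol would evaluate comparisons like these over the 0-1 encodings under Goldwasser-Micali encryption instead of on plaintext endpoints.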
    Advanced computing
    Design space exploration method for floating-point expression based on heuristic search
    LI Zhao, DONG Xiaoxiao, HUANG Chengcheng, REN Chongguang
    2020, 40(9):  2665-2669.  DOI: 10.11772/j.issn.1001-9081.2020010011
    In order to improve the exploration efficiency of the design space for floating-point expression, a design space exploration method based on heuristic search was proposed. The design space of non-dominated expression was explored firstly during each iteration. At the same time, the non-dominated expression and the dominated expression were added to the non-dominated list and the dominated list respectively. Then the expression in the dominated list was explored after the iteration, the non-dominated expression in the dominated list was selected, and the neighborhood of the non-dominated expression in the dominated list was explored. And the new non-dominated expression was added to the non-dominated list, effectively improving the diversity and randomness of the non-dominated expression. Finally, the non-dominated list was explored again to obtain the final equivalent expression and further improve the performance of optimal expression. Compared with the existing design space exploration methods for floating-point expression, the proposed method has the calculation accuracy increased by 2% to 9%, the calculation time reduced by 5% to 19% and the resource consumption reduced by 4% to 7%. Experimental results show that the proposed method can effectively improve the efficiency of design space exploration.
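    The split into non-dominated and dominated lists relies on Pareto dominance over the expressions' cost vectors (e.g. accuracy error, time, resource use); a minimal sketch assuming minimization of every objective:

```python
def dominates(p, q):
    # p dominates q when p is no worse in every objective and strictly
    # better in at least one (all objectives minimized)
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def split_nondominated(points):
    # partition the candidate expressions' cost vectors into the
    # non-dominated list and the dominated list, as in each iteration
    nd, dom = [], []
    for p in points:
        if any(dominates(q, p) for q in points if q != p):
            dom.append(p)
        else:
            nd.append(p)
    return nd, dom
```

    The neighborhood exploration of entries in the dominated list then feeds newly found non-dominated vectors back into the first list.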
    Hybrid multi-objective grasshopper optimization algorithm based on fusion of multiple strategies
    WANG Bo, LIU Liansheng, HAN Shaocheng, ZHU Shixing
    2020, 40(9):  2670-2676.  DOI: 10.11772/j.issn.1001-9081.2020030315
    In order to improve the performance of the Grasshopper Optimization Algorithm (GOA) in solving multi-objective problems, a Hybrid Multi-objective Grasshopper Optimization Algorithm (HMOGOA) based on the fusion of multiple strategies was proposed. First, the Halton sequence was used to establish the initial population to ensure that the population had a uniform distribution and high diversity in the initial stage. Then, the differential mutation operator was applied to guide the population mutation, so as to promote the population to move toward the elite individuals and extend the search range of the optimization. Finally, the adaptive weight factor was used to adjust the global exploration ability and local optimization ability of the algorithm dynamically according to the status of population optimization, so as to improve the optimization efficiency and the solution set quality. With seven typical functions selected for experiments and tests, HMOGOA was compared with algorithms such as the multi-objective grasshopper optimization algorithm, Multi-Objective Particle Swarm Optimization (MOPSO), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) and Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA Ⅱ). Experimental results indicate that compared with the above algorithms, HMOGOA avoids falling into local optima, makes the distribution of the solution set significantly more uniform and broader, and has greater convergence accuracy and stability.
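    The Halton initialization can be sketched directly; one prime base per decision variable is the usual convention (the exact population-seeding details of HMOGOA are assumptions here):

```python
def halton(i, base):
    # i-th term (1-indexed) of the radical-inverse (van der Corput)
    # sequence in the given base
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_population(n, lower, upper, bases):
    # low-discrepancy initial population inside the search box;
    # one prime base per decision variable, e.g. bases = [2, 3, 5, ...]
    dim = len(lower)
    return [[lower[d] + (upper[d] - lower[d]) * halton(i, bases[d])
             for d in range(dim)]
            for i in range(1, n + 1)]
```

    Unlike pseudo-random seeding, consecutive Halton points fill the box evenly, which is exactly the initial uniformity the abstract refers to.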
    Improved teaching & learning based optimization with brain storming
    LI Lirong, YANG Kun, WANG Peichong
    2020, 40(9):  2677-2682.  DOI: 10.11772/j.issn.1001-9081.2020010087
    Concerning the problems that the Teaching & Learning Based Optimization (TLBO) algorithm has a slow convergence rate and low accuracy, and that it is easily trapped into local optima when solving high-dimensional problems, an Improved TLBO algorithm with Brain Storming Optimization (ITLBOBSO) was proposed. In this algorithm, a new “learning” operator was designed and applied to replace the original “learning” operator in TLBO. In the iteration process of the population, the “teaching” operator was executed by the current individual. Then, two individuals were selected randomly from the population, and brain storming learning was executed by the better one of the two and the current individual to improve the state of the current individual. Cauchy mutation and a random parameter associated with the iterations were introduced into the formula of this operator to improve the exploration ability in the early stage and the exploitation ability for new solutions in the later stage of the algorithm. In a series of simulation experiments, compared with TLBO, the proposed algorithm achieves large improvements in solution accuracy, robustness and convergence speed on 11 benchmark functions. The experimental results on two constrained engineering optimization problems show that compared to TLBO, ITLBOBSO reduces the total cost by 4 percentage points, which proves the effectiveness of the proposed mechanism in overcoming the weaknesses of TLBO. The proposed algorithm is suitable for solving high-dimensional continuous optimization problems.
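    The Cauchy mutation ingredient of the new “learning” operator can be sketched via inverse-CDF sampling (the operator's full formula, including the iteration-linked random parameter, is in the paper; the scale used below is illustrative):

```python
import math
import random

def cauchy_mutation(x, scale, rng):
    # add a heavy-tailed Cauchy perturbation to each component:
    # the inverse CDF of the standard Cauchy is tan(pi * (u - 1/2)),
    # so occasional large jumps help the search escape local optima
    return [xi + scale * math.tan(math.pi * (rng.random() - 0.5))
            for xi in x]
```

    Compared with Gaussian mutation, the Cauchy tails produce rare long jumps early in the run while most steps stay small, matching the exploration/exploitation trade-off described above.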
    Computing power trading and pricing in mobile edge computing based on Stackelberg game
    WU Yuxin, CAI Ting, ZHANG Dabin
    2020, 40(9):  2683-2690.  DOI: 10.11772/j.issn.1001-9081.2020010112
    Concerning the problem of the limited computing and storage capacity of lightweight smart devices in mobile edge computing, a computation offloading solution based on the Stackelberg game was proposed. First, combining with blockchain technology, a computing power trading model based on a cloud mining mechanism, named CPTP-BSG (Computing Power Trading and Pricing with Blockchain and Stackelberg Game), was built, which allows mobile smart devices (miners) to offload intensive and complex computing tasks to edge servers. Second, the computing power trading between miners and Edge computing Service Providers (ESPs) was modeled as a two-stage Stackelberg game process, and the expected profit functions for miners and the ESP were formulated. Then, the existence and uniqueness of the Nash equilibrium solution were analyzed under uniform pricing and discriminatory pricing strategies respectively by backward induction. Finally, a low-complexity gradient iterative algorithm was proposed to maximize the profits of miners and the ESP. Experimental results show the effectiveness of the proposed algorithm, and that discriminatory pricing fits the personalized computing power demands of miners better than uniform pricing, achieving a higher total demand of computing power and a higher ESP profit.
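    Backward induction under uniform pricing can be sketched with an assumed linear follower response (the demand model, cost and price grid below are illustrative stand-ins, not the paper's exact profit functions):

```python
def followers_demand(price, a_list, b):
    # stage 2: each miner i buys max(0, (a_i - price) / b) units of
    # computing power, the best response of an assumed quadratic-utility
    # follower with valuation a_i
    return sum(max(0.0, (a - price) / b) for a in a_list)

def leader_best_price(a_list, b, unit_cost, price_grid):
    # stage 1: the ESP anticipates the miners' responses and picks the
    # uniform unit price maximizing its own profit over a price grid
    return max(price_grid,
               key=lambda p: (p - unit_cost) * followers_demand(p, a_list, b))
```

    Discriminatory pricing would instead solve this per miner, which is why it can extract more total demand when the valuations a_i differ.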
    Network and communications
    Clustering algorithm of energy harvesting wireless sensor network based on fuzzy control
    HU Runyan, LI Cuiran
    2020, 40(9):  2691-2697.  DOI: 10.11772/j.issn.1001-9081.2020010120
    The existing energy harvesting Wireless Sensor Network (WSN) clustering algorithms rarely consider the optimal number of clusters of the network, which leads to excessive network energy consumption and uneven energy consumption across the entire network. To solve this problem, a fuzzy control based energy harvesting WSN clustering algorithm was proposed, namely Energy Harvesting-Fuzzy Logic Clustering (EH-FLC). First, a solar energy replenishment model was introduced into the network energy consumption model, and the function relationship between total energy consumption of the network and the number of network clusters was obtained for each round. The function was derived to obtain the optimal number of clusters of the network. Then, the two-level fuzzy decision system was utilized to assess whether the nodes of the network can become cluster head nodes. The residual energy of the node and the number of adjacent nodes were input into the first level (capability level) as the judgment indexes to filter all the nodes in order to obtain the candidate cluster head nodes. And the centrality parameter and proximity parameter were input into the second level (collaboration level) as the judgment indexes to filter the candidate nodes in order to obtain the cluster head nodes. Finally, the performance indexes of the proposed algorithm such as network life cycle, network energy consumption and network throughput were analyzed through Matlab simulation. Compared with the algorithms of Low Energy Adaptive Clustering Hierarchy (LEACH), Wireless sensor networks non-Uniform Clustering Hierarchy (WUCH) and Cluster head selection using Two-Level Fuzzy Logic (CTLFL), the proposed algorithm has the network working life improved by about 1.4 times, 0.4 times and 0.6 times respectively, and the network throughput increased by about 20 times, 1.5 times and 1.28 times respectively. Simulation results show that the proposed algorithm has better performance in network life cycle and network throughput.
    Lightweight coverage hole detection algorithm based on relative position of link intersections
    HAN Yulao, FANG Dingyi
    2020, 40(9):  2698-2705.  DOI: 10.11772/j.issn.1001-9081.2019122115
    Coverage holes in Wireless Sensor Network (WSN) cause poor network performance and low network service quality. To solve these problems, a Coverage Hole Detection Algorithm based on Relative Position of Intersections (CHDARPI) was proposed. First, the hole boundary nodes were defined and the Relative Position of Intersections (RPI) of the link between adjacent boundary nodes was calculated. Then, the starting node of hole detection was selected based on the policy of Number of Incomplete Coverage Intersections (NICI) priority, which guaranteed the concurrent detection of connected coverage holes. Finally, in the process of coverage hole detection, the hole detection message was limited within the hole boundary nodes, and the forwarding strategies under different scenarios were formulated according to the sizes of the direction angles of the forwarding nodes, which ensured the efficiency of coverage hole detection. The simulation results show that, compared with the existing Distributed Coverage Hole Detection algorithm (DCHD) and Distributed Least Polar Angle algorithm (DLPA), the proposed CHDARPI decreases the average detection time and detection energy consumption by at least 15.2% and 16.7% respectively.
    Handover access scheme in software-defined wireless local area network
    WANG Mingfen
    2020, 40(9):  2706-2711.  DOI: 10.11772/j.issn.1001-9081.2020010023
    Software-defined Wireless Local Area Network (WLAN) is a trend for managing wireless networks. Aiming at the problems of frequent handover and access failure in AP (Access Point)-intensive environments, an access control method based on global association memory retention of penalty factors was proposed. First, the OpenFlow protocol was extended. Second, the extended data messages were used to report network quality, load, throughput and utilization indicators to the controller through the AP. Then, the network indicator parameters determined by introducing the variation coefficient method were used to construct AP access weights. Finally, a global penalty factor was introduced to record frequent back-and-forth handovers in the network, and the AP access weights and transmitting powers were modified based on the penalty factor retained by the global memory. Experimental comparison with the strongest signal access method and the load balancing access method shows that when the STAtion (STA) moves in a complex network environment, the proposed method can effectively reduce the “ping-pong effect” of handover, and reduce the number of network handovers and the handover delay, so as to improve the success rate of network handover. Compared with the traditional strongest signal access method, the proposed method has the number of handover requests reduced by 21.7%, which enhances the stability of network access performance.
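    The variation-coefficient weighting used to build the AP access weights can be sketched as follows (rows are candidate APs, columns are the reported indicators; which indicators enter is per the abstract, while the normalization is the standard method):

```python
import statistics

def cv_weights(indicator_matrix):
    # coefficient-of-variation weights: an indicator that spreads more
    # (relative to its mean) across APs is more discriminative and
    # therefore receives a larger weight
    cols = list(zip(*indicator_matrix))
    cvs = []
    for col in cols:
        mean = statistics.fmean(col)
        cvs.append(statistics.pstdev(col) / mean if mean else 0.0)
    total = sum(cvs)
    if total == 0:
        return [1.0 / len(cvs)] * len(cvs)
    return [cv / total for cv in cvs]
```

    An AP's access weight is then a weighted sum of its normalized indicators with these weights (the penalty factor of the method would further discount APs involved in recent ping-pong handovers).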
    Computer software technology
    Cyclic iterative ontology construction method based on demand assessment and response
    DAI Tingting, ZHOU Le, YU Qinyong, HUANG Xifeng, XIE Jun, SONG Minghui, LIU Qiao
    2020, 40(9):  2712-2718.  DOI: 10.11772/j.issn.1001-9081.2020010039
    Aiming at the problem that the METHONTOLOGY method and the seven-step method, which are more mature than the IEEE 1074-1995 software development standard, do not consider ontology quality assessment and its response, a new cyclic iterative ontology construction method based on demand assessment and response was proposed. First, based on the software development V-model and an ontology testing framework, demand analysis for the constructed ontology was conducted, so as to define a set of ontology test design documents that emphasize meeting the demands rather than knowledge richness. Second, the core architecture and the architecture knowledge system were refined, and the test documents were updated. Finally, the expressions of knowledge satisfiability on the core architecture, the architecture knowledge system and the demand analysis were respectively evaluated by using the test documents, and the ontology was updated locally or globally when the expressions of knowledge were not satisfied. Compared with the common ontology construction methods, the proposed method can realize evaluation and iterative evolution in the ontology construction process. Furthermore, the government ontology established by this method not only provides a knowledge representation framework for the relevant knowledge of item transaction, but also provides a new idea for the calculation of government knowledge. The government affair process optimization program developed based on the proposed method has been successfully applied in a provincial government affair big data analysis field, which confirms the rationality and effectiveness of the method to a certain extent.
    Virtual reality and multimedia computing
    Visual analysis system for exploring spatio-temporal exhibition data
    LIU Li, HU Haibo, YANG Tao
    2020, 40(9):  2719-2727.  DOI: 10.11772/j.issn.1001-9081.2019111976
    The spatio-temporal data in the exhibition environment is complex, with high discreteness, discontinuity and incomplete records. In most cases, the spatio-temporal data itself not only contains time, longitude and latitude, but also contains additional attributes such as speed, acceleration and direction, which makes the analysis of such data challenging. Therefore, an interactive visual analysis system, Visual Analysis system for Spatio-Temporal Exhibition Data (VASTED), was proposed, which combines multiple interactions to analyze participants’ types and movement patterns, as well as possible abnormal events, at both the overview and detail levels. The system utilizes and further improves the 3D map and Gantt chart to effectively represent the various attributes of the data. The dataset of Challenge 1 of ChinaVis 2019 was used for a case study to prove the feasibility of the system.
    Robust texture representation by combining differential feature and Haar wavelet decomposition
    LIU Wanghua, LIU Guangshuai, CHEN Xiaowen, LI Xurui
    2020, 40(9):  2728-2736.  DOI: 10.11772/j.issn.1001-9081.2020010032
    Aiming at the problem that traditional local binary pattern operators lack deep-level correlation information between pixels and have poor robustness to common blurring and rotation changes in images, a robust texture expression operator combining differential features and Haar wavelet decomposition was proposed. In the differential feature channel, the first-order and second-order differential features in the image were extracted by isotropic differential operators, so that the differential features of the image were essentially invariant to rotation and robust to image blur. In the wavelet decomposition feature extraction channel, based on the characteristic that the wavelet transform has good localization in the time domain and frequency domain at the same time, multi-scale two-dimensional Haar wavelet decomposition was used to extract blurring-robust features. Finally, the feature histograms of the two channels were concatenated to construct the texture description of the image. In the feature discrimination experiments, the accuracy of the proposed operator on the complex UMD, UIUC and KTH-TIPS texture databases reaches 98.86%, 98.2% and 99.05% respectively, and compared with that of the MRELBP (Median Robust Extended Local Binary Pattern) operator, the accuracy increases by 0.26%, 1.32% and 1.12% respectively. In the robustness analysis experiments on rotation change and image blurring, the classification accuracy of the proposed operator on the TC10 texture database with only rotation changes reaches 99.87%, and the decrease of classification accuracy on the TC11 texture database with different levels of Gaussian blur is only 6%. In the computational complexity experiments, the feature dimension of the proposed operator is only 324, and the average feature extraction time on the TC10 texture database is 30.9 ms. Experimental results show that the method combining differential features and Haar wavelet decomposition has strong feature discriminability and strong robustness to rotation and blurring, as well as low computational complexity, and it has good applicability in situations with small databases.
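    One level of the 2D Haar decomposition used in the wavelet channel can be sketched on raw pixel lists (the averaging normalization below is one common convention; orthonormal Haar would scale by 1/2 instead of 1/4, and the sub-band naming is convention-dependent):

```python
def haar2d(img):
    # one-level 2D Haar decomposition into an approximation sub-band (LL)
    # and horizontal (LH), vertical (HL) and diagonal (HH) detail sub-bands
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 4
            LH[i][j] = (a - b + c - d) / 4
            HL[i][j] = (a + b - c - d) / 4
            HH[i][j] = (a - b - c + d) / 4
    return LL, LH, HL, HH
```

    Applying haar2d repeatedly to the LL sub-band yields the multi-scale decomposition from which the operator's feature histograms are built.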
    Background subtraction based on tensor nuclear norm and 3D total variation
    CHEN Lixia, BAN Ying, WANG Xuewen
    2020, 40(9):  2737-2742.  DOI: 10.11772/j.issn.1001-9081.2020010005
    Concerning the fact that common background subtraction methods ignore the spatio-temporal continuity of the foreground and suffer from the disturbance of dynamic background on foreground extraction, an improved background subtraction model based on Tensor Robust Principal Component Analysis (TRPCA) was proposed. The improved tensor nuclear norm was used to constrain the background, which enhanced the low rank of the background and retained the spatial information of videos. Then regularization was performed on the foreground by 3D Total Variation (3D-TV), so as to consider the spatio-temporal continuity of the object and effectively suppress the interference of dynamic background and target movement on foreground extraction. Experimental results show that the proposed model can effectively separate the foreground and background of videos. Compared with High-order Robust Principal Component Analysis (HoRPCA), Tensor Robust Principal Component Analysis with Tensor Nuclear Norm (TRPCA-TNN) and Kronecker-Basis-Representation based Robust Principal Component Analysis (KBR-RPCA), the proposed algorithm achieves optimal or sub-optimal F-measure values. It can be seen that the proposed model effectively improves the accuracy of foreground and background separation, and suppresses the interference of complex weather and target movement on foreground extraction.
    Fast algorithm for distance regularized level set evolution model
    YUAN Quan, WANG Yan, LI Yuxian
    2020, 40(9):  2743-2747.  DOI: 10.11772/j.issn.1001-9081.2020010106
    The gradient descent method has poor convergence and is sensitive to local minima. Therefore, an improved NAG (Nesterov’s Accelerated Gradient) algorithm was proposed to replace the gradient descent algorithm in the Distance Regularized Level Set Evolution (DRLSE) model, so as to obtain a fast image segmentation algorithm based on the NAG algorithm. First, the initial level set evolution equation was given. Second, the gradient was calculated by using the NAG algorithm. Finally, the level set function was updated continuously, avoiding the level set function falling into local minima. Experimental results show that compared with the original algorithm in the DRLSE model, the proposed algorithm has the number of iterations reduced by about 30%, and the CPU running time reduced by more than 30%. The algorithm is simple to implement, and can be applied to segment images with high real-time requirements such as infrared images and medical images.
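    The NAG update that replaces plain gradient descent evaluates the gradient at a look-ahead point; a one-dimensional sketch (the step size, momentum and test function are illustrative, not the DRLSE settings):

```python
def nag_minimize(grad, x0, lr=0.05, momentum=0.9, steps=300):
    # Nesterov's accelerated gradient: take the gradient at the
    # look-ahead point x + momentum * v, then update velocity and x;
    # the look-ahead is what distinguishes NAG from plain momentum
    x, v = x0, 0.0
    for _ in range(steps):
        g = grad(x + momentum * v)
        v = momentum * v - lr * g
        x += v
    return x
```

    In the segmentation algorithm the scalar x is replaced by the level set function and grad by the DRLSE evolution term, but the update scheme is the same.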
    Image quality evaluation model for X-ray circumferential welds
    WANG Siyu, GAO Weixin, LI Lu
    2020, 40(9):  2748-2753.  DOI: 10.11772/j.issn.1001-9081.2019122252
    Abstract ( )   PDF (1188KB) ( )  
    References | Related Articles | Metrics
    Automatic evaluation of X-ray weld image quality is an important foundation for automatic evaluation of weld image defects. A digital blackness meter model was proposed to realize the automatic evaluation of X-ray weld image quality. Firstly, in order to obtain the physical blackness by numerical calculation, the physical illumination model and the weld blackness model were combined in the digital blackness meter model. Then, through analysis of the correlation between the physical blackness values of the sample images and the corresponding grayscale values, a method for obtaining the parameters of the digital blackness meter model was given. Finally, an automatic evaluation algorithm for X-ray weld film blackness was proposed. The experiments on actual X-ray weld images show that the accuracy of the proposed algorithm can reach 99% without manual intervention. The cross-validation experiments show that the sensitivity of the proposed method is 98.5% and its specificity can reach 100%. The digital blackness meter model based on the illumination model and the blackness model, together with the solving algorithm, can replace the commonly used physical blackness meter and realize the automation of weld image quality evaluation.
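Film blackness (optical density) is logarithmic in transmitted intensity, so a grayscale-to-blackness calibration is naturally fitted in the log domain. The linear-in-log form and all sample values below are hypothetical assumptions for illustration, not the paper's actual model or parameters:

```python
import math

def fit_blackness_model(samples):
    """Closed-form least-squares fit of D = a*log10(g) + b
    from (grayscale, density) calibration pairs.

    A stand-in for calibrating a 'digital blackness meter' against a
    physical one; the paper's model combines illumination and blackness
    models and is more elaborate than this sketch.
    """
    xs = [math.log10(g) for g, _ in samples]
    ys = [d for _, d in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Synthetic calibration pairs generated from D = 4.0 - 1.5*log10(g).
samples = [(g, 4.0 - 1.5 * math.log10(g)) for g in (20, 60, 120, 200)]
a, b = fit_blackness_model(samples)
```

Once `a` and `b` are known, any pixel's grayscale value maps directly to an estimated blackness, which is what makes a purely digital reading possible.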
    Frontier & interdisciplinary applications
    Interactive dynamic optimization of dual-channel supply chain inventory under stochastic demand
    ZHAO Chuan, MIAO Liye, YANG Haoxiong, HE Mingke
    2020, 40(9):  2754-2761.  DOI: 10.11772/j.issn.1001-9081.2019122225
    Abstract ( )   PDF (1530KB) ( )  
    References | Related Articles | Metrics
    Considering the out-of-stock and inventory overstock problems in dual-channel supply chain inventory systems, dynamic optimization models were established for three control modes of dual-channel inventory, namely single control, centralized control and cross-replenishment control, under the condition that both online and offline channels face stochastic demand. Firstly, based on the dynamic differential equation of inventory, creatively guided by control theory, and by means of Taylor expansion and Laplace transformation, the feedback transfer function of the dual-channel inventory system was obtained. Secondly, considering the periodic interactions, upstream-downstream interactions and inter-channel interactions in the purchase-sale-stock process of cross-replenishment, delay control, feedback control and Proportion-Integral-Derivative (PID) control were used to construct a complex interactive system with two inputs and two outputs, so as to explore the dynamic balance between supply and demand within the dual-channel inventory system and among channels, optimize the dual-channel inventory holdings, reduce the times and amount of out-of-stock, and keep the system in dynamic equilibrium. Finally, through numerical simulation experiments, the three dual-channel inventory control strategies were compared. The simulation results show that when online and offline channels face different distributions of stochastic demand, the residual stock of cross-replenishment control decreased by 4.9% compared with that of single control, and the out-of-stock rate of cross-replenishment control decreased by 66.7% and 60% respectively compared with those of single control and centralized control. It can be seen that the cross-replenishment strategy can effectively reduce inventory holdings as well as the times and amount of out-of-stock, and thus save inventory costs.
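The PID replenishment idea can be illustrated on a single-channel toy inventory loop, where the controller drives inventory back to a target level under steady demand. The gains, demand, target and horizon below are hypothetical, and this scalar sketch omits the paper's two-input two-output structure, delays and stochastic demand:

```python
def simulate_pid_inventory(kp=0.5, ki=0.2, kd=0.1, target=100.0,
                           demand=10.0, steps=200):
    """Toy single-channel inventory under PID replenishment.

    Constant demand, no lead time; a negative order is allowed here and
    would mean returning excess stock (kept linear for clarity).
    """
    inventory, integral, prev_err = target, 0.0, 0.0
    for _ in range(steps):
        err = target - inventory          # below target -> positive error
        integral += err                   # integral term absorbs steady demand
        order = kp * err + ki * integral + kd * (err - prev_err)
        prev_err = err
        inventory += order - demand
    return inventory

print(simulate_pid_inventory())  # settles near the target level of 100
```

The integral term is what removes the steady-state error: at equilibrium it supplies exactly the demand rate, so the inventory holds at the target instead of drifting below it.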
    Intelligent house price evaluation model based on ensemble LightGBM and Bayesian optimization strategy
    GU Tong, XU Guoliang, LI Wanlin, LI Jiahao, WANG Zhiyuan, LUO Jiangtao
    2020, 40(9):  2762-2767.  DOI: 10.11772/j.issn.1001-9081.2019122249
    Abstract ( )   PDF (902KB) ( )  
    References | Related Articles | Metrics
    Concerning the problems of traditional house price evaluation methods, such as single data source, over-reliance on subjective experience, and idealized assumptions, an intelligent evaluation method based on multi-source data and ensemble learning was proposed. First, a feature set was constructed from multi-source data, and the optimal feature subset was extracted using the Pearson correlation coefficient and the sequential forward selection method. Then, with the Bagging ensemble strategy used as the combination method, multiple Light Gradient Boosting Machines (LightGBMs) were integrated based on the constructed features, and the model was optimized by using the Bayesian optimization algorithm. Finally, this method was applied to the problem of house price evaluation, and the intelligent evaluation of house prices was realized. Experimental results on a real house price dataset show that, compared with traditional models such as Support Vector Machine (SVM) and random forest, the new model with ensemble learning and Bayesian optimization improves the evaluation accuracy by 3.15%, and its evaluation results with a percent error within 10% account for 84.09%. It can be seen that the proposed model can be well applied to the field of intelligent house price evaluation, and produces more accurate evaluation results.
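The Pearson-correlation step of the feature selection can be sketched as a simple filter that ranks candidate features by the absolute strength of their linear relation to price. The toy features and values below are invented for illustration and are not from the paper's dataset:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_features(features, target):
    """Order feature columns by |r| against the target, strongest first."""
    scores = [(abs(pearson(col, target)), i) for i, col in enumerate(features)]
    return [i for _, i in sorted(scores, reverse=True)]

# Toy data: 'area' tracks price exactly, 'rooms' only weakly.
area  = [50, 80, 100, 120, 150]
rooms = [3, 1, 4, 1, 5]
price = [100, 160, 200, 240, 300]   # price = 2 * area
ranked = rank_features([area, rooms], price)
```

In the full method this filter only pre-screens candidates; sequential forward selection then adds features one at a time based on model performance.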
    Drug abuse epidemiologic model based on prevention and therapy strategy
    LIU Feng
    2020, 40(9):  2768-2773.  DOI: 10.11772/j.issn.1001-9081.2020010108
    Abstract ( )   PDF (874KB) ( )  
    References | Related Articles | Metrics
    Aiming at the shortcoming that existing drug abuse epidemiologic research lacks consideration of prevention measures, a Susceptible-Infected-Treated-Recovered-Susceptible (SITRS) drug abuse epidemiologic model based on a prevention and therapy strategy was proposed by introducing a prevention mechanism. Firstly, through analyzing the evolution process of the populations correlated with drug abusers, an autonomous dynamical system was constructed by using ordinary differential equations. Secondly, the existence and local asymptotic stability of the drug-free equilibrium point of the system were proved. Thirdly, the unique existence of the endemic equilibrium point was analyzed, and sufficient conditions for its global asymptotic stability were obtained. Finally, the necessary conditions for the existence of backward bifurcation were calculated, and the basic reproduction number under the comprehensive prevention and therapy strategy was compared with that under the single therapy strategy. The possible existence of backward bifurcation and the stability of the equilibrium points were verified by numerical simulations. The results indicate that, compared with the single therapy strategy, the comprehensive prevention and therapy strategy can further reduce the basic reproduction number of drug abuse by increasing the publicity coverage rate and education efficiency, and thus more effectively prevent the spread of drug abuse.
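A generic SITRS-style compartment system can be sketched with forward-Euler integration. The compartment flows, parameter values and the basic reproduction number expression below are illustrative assumptions (prevention modeled as a fraction p of contacts blocked), not the paper's exact equations:

```python
def basic_reproduction_number(beta, recruit, mu, gamma, p):
    """R0 for a hypothetical SITRS-type model in which prevention blocks a
    fraction p of infectious contacts; the paper's expression differs."""
    return (1 - p) * beta * recruit / (mu * (mu + gamma))

def simulate_sitrs(beta=0.4, recruit=0.02, mu=0.02, gamma=0.1,
                   eta=0.2, sigma=0.05, p=0.3, dt=0.1, steps=2000):
    """Forward-Euler integration of a toy SITRS system.

    Flows: S -> I (contact, damped by prevention p), I -> T (therapy),
    T -> R (recovery), R -> S (relapse of susceptibility); all
    compartments share natural turnover mu, with recruitment into S.
    """
    S, I, T, R = 0.9, 0.1, 0.0, 0.0
    for _ in range(steps):
        new_inf = (1 - p) * beta * S * I
        dS = recruit - new_inf - mu * S + sigma * R
        dI = new_inf - (mu + gamma) * I
        dT = gamma * I - (mu + eta) * T
        dR = eta * T - (mu + sigma) * R
        S, I, T, R = S + dt * dS, I + dt * dI, T + dt * dT, R + dt * dR
    return S, I, T, R
```

Raising p lowers this R0, mirroring the abstract's conclusion that adding prevention to therapy further suppresses the spread.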
    Strategies of parameter fault detection for rocket engines based on transfer learning
    ZHANG Chenxi, TANG Shu, TANG Ke
    2020, 40(9):  2774-2780.  DOI: 10.11772/j.issn.1001-9081.2020010114
    Abstract ( )   PDF (1319KB) ( )  
    References | Related Articles | Metrics
    In parameter fault detection during rocket flights, the traditional red line method has high missing alarm and false alarm rates, the expert system method has high maintenance cost, and machine learning methods are constrained by the dataset size, which makes models hard to train. Therefore, two transfer learning strategies, based on instances and on models respectively, were proposed. In order to realize real-time detection of the oxygen pump speed, a key parameter of the new YF-77 engine, the parameters and data characteristics of the LOX/LH2 engines YF-75 and YF-77, which share the same construction principle, were analyzed; then the domain differences were resolved, the feature space was built, and the feature vectors were filtered. In experiments on instance transfer and model transfer from YF-75 to YF-77, compared with methods without transfer learning such as k-Nearest Neighbor (kNN) and Support Vector Machine (SVM), the models after transfer learning reduce the missing alarm rate from 58.33% (the highest) to 12.25% (the lowest), and the false alarm rate from 60.83% (the highest) to 13.53% (the lowest), thereby verifying the transferability of information between the two kinds of engines and the feasibility of applying transfer learning in aerospace engineering practice.
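One simple form of instance transfer is to let source-domain samples vote in a target-domain k-NN classifier with reduced weight. Everything below, the 1-D "pump-speed residual" readings, the labels and the fixed weights, is a toy sketch; real instance-transfer methods (e.g. TrAdaBoost) learn such weights, and the paper's feature space is multi-dimensional:

```python
def weighted_knn(query, samples, k=3):
    """k-NN vote where each training instance carries a transfer weight.

    `samples` holds (feature, label, weight) triples: source-engine
    instances get a weight below 1, target-engine instances weight 1.
    """
    ranked = sorted(samples, key=lambda s: abs(s[0] - query))[:k]
    votes = {}
    for _, label, w in ranked:
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Hypothetical readings: label 1 = fault, label 0 = normal.
source = [(0.1, 0, 0.5), (0.2, 0, 0.5), (1.0, 1, 0.5), (1.1, 1, 0.5)]
target = [(0.3, 0, 1.0), (0.9, 1, 1.0)]
print(weighted_knn(0.95, source + target))  # classified as fault (1)
```

Down-weighting the source instances lets the few genuine target samples dominate where the domains disagree, while the abundant source data still fills in sparsely covered regions.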
    Rainfall cloud segmentation method in Tibet based on DeepLab v3
    ZHANG Yonghong, LIU Hao, TIAN Wei, WANG Jiangeng
    2020, 40(9):  2781-2788.  DOI: 10.11772/j.issn.1001-9081.2019122131
    Abstract ( )   PDF (2718KB) ( )  
    References | Related Articles | Metrics
    Concerning the problems that the numerical prediction method is complex in modeling, the radar echo extrapolation method easily accumulates error, and the model parameters are difficult to set in plateau areas, a method for segmenting rainfall clouds in Tibet was proposed based on an improved DeepLab v3. Firstly, the convolutional layers and residual modules in the encoding network were used for down-sampling. Then, a multi-scale sampling module was constructed by using dilated convolution, and an attention mechanism module was added to extract deep high-dimensional features. Finally, the deconvolutional layers in the decoding network were used to restore the feature map resolution. The proposed method was compared with Google's semantic segmentation network DeepLab v3 and other models on the validation set. The experimental results show that the method has better segmentation performance and generalization ability, segments rainfall clouds more accurately, and reaches a Mean Intersection over Union (MIoU) of 0.95, which is 15.54 percentage points higher than that of the original DeepLab v3. On small targets and unbalanced datasets, rainfall clouds can be segmented more accurately by this method, so the proposed method can provide a reference for rain cloud monitoring and early warning.
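The dilated (atrous) convolution behind DeepLab's multi-scale sampling module enlarges the receptive field without adding parameters by spacing the kernel taps apart. A one-dimensional pure-Python illustration (real implementations operate on 2-D feature maps with learned kernels):

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1-D convolution with a dilation (hole) factor.

    A k-tap kernel with dilation d covers (k-1)*d + 1 input samples per
    output, which is how DeepLab samples context at multiple scales.
    """
    span = (len(kernel) - 1) * dilation + 1   # receptive field of one output
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation]
                       for j in range(len(kernel))))
    return out

# A 3-tap averaging-style kernel with dilation 2 covers 5 input samples.
print(dilated_conv1d([1, 2, 3, 4, 5, 6, 7], [1, 1, 1], 2))  # [9, 12, 15]
```

Running several such convolutions with different dilation rates in parallel and concatenating the results gives the multi-scale sampling used in the paper's encoder.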
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803, 028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn