Table of Contents

    10 November 2019, Volume 39 Issue 11
    The 2019 China Conference on Granular Computing and Knowledge Discovery (CGCKD2019)
    Measure method and properties of weighted hypernetwork
    LIU Shengjiu, LI Tianrui, YANG Zonglin, ZHU Jie
    2019, 39(11):  3107-3113.  DOI: 10.11772/j.issn.1001-9081.2019050806
    A hypernetwork is a kind of network more complex than an ordinary complex network. Since each of its hyperedges can connect any number of nodes, a hypernetwork can describe complex systems in the real world more appropriately than a complex network. To address the shortcomings and deficiencies of existing hypernetwork measures, a new measure, the Hypernetwork Dimension (HD), was proposed. The hypernetwork dimension was expressed as twice the ratio of the logarithm of the sum, over all hyperedges, of the product of each hyperedge's weight and the sum of its nodes' weights, to the logarithm of the product of the sum of the hyperedges' weights and the sum of the nodes' weights. The hypernetwork dimension can be applied to weighted hypernetworks whose node weights and hyperedge weights take many different numerical types, such as positive real numbers, negative real numbers, pure imaginary numbers, and even complex numbers. Finally, several important properties of the proposed hypernetwork dimension were discussed.
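The dimension formula above is stated only in prose; the toy sketch below encodes one plausible reading of it. The pairing of each hyperedge's weight with the sum of its nodes' weights is an assumption, as are all names and the example weights; `cmath.log` is used so that complex-valued weights are also accepted.

```python
import cmath

def hypernetwork_dimension(node_weights, hyperedges, edge_weights):
    """One reading of the HD formula:
    HD = 2 * log(sum over hyperedges e of w_e * sum of node weights in e)
           / log((sum of edge weights) * (sum of node weights))."""
    numerator_arg = sum(
        edge_weights[i] * sum(node_weights[v] for v in edge)
        for i, edge in enumerate(hyperedges)
    )
    denominator_arg = sum(edge_weights) * sum(node_weights.values())
    # cmath.log handles negative, imaginary and complex weights
    return 2 * cmath.log(numerator_arg) / cmath.log(denominator_arg)

# Hypothetical toy hypernetwork: 4 weighted nodes, 2 weighted hyperedges
nodes = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0}
edges = [("a", "b", "c"), ("b", "c", "d")]
hd = hypernetwork_dimension(nodes, edges, [1.0, 2.0])
```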
    Evaluation method of granular performance indexes for fuzzy rule-based models
    HU Xingchen, SHEN Yinghua, WU Keyu, CHENG Guangquan, LIU Zhong
    2019, 39(11):  3114-3119.  DOI: 10.11772/j.issn.1001-9081.2019050791
    Fuzzy rule-based models are widely used in many fields. The existing performance indexes for these models are mainly numeric and ignore the characteristics of the fuzzy sets in the models. To address this problem, a new method for evaluating the performance of fuzzy rule-based models was proposed, to effectively evaluate the non-numeric (granular) nature of the results formed by fuzzy models. In this method, unlike the commonly used numeric performance indexes (such as Mean Squared Error (MSE)), the characteristics of information granules were used to represent the quality of the granular results output by the model, and the proposed index was applied to the performance optimization of the fuzzy model. The performance of an information granule was quantified by two basic indexes, coverage (of data) and specificity (of the information granule itself), and the output granular quality (expressed as the product of coverage and specificity) was maximized by particle swarm optimization. Moreover, the distribution of the information granules formed through fuzzy clustering was optimized. The experimental results show the effectiveness of the proposed method in evaluating the performance of fuzzy rule-based models.
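The coverage-specificity product used as the granular quality index can be sketched for one-dimensional interval granules as follows. The linear specificity form, the integer data, and the exhaustive search (standing in for the particle swarm optimizer of the paper) are all illustrative assumptions.

```python
def coverage(interval, data):
    """Fraction of data points covered by the interval granule."""
    lo, hi = interval
    return sum(lo <= x <= hi for x in data) / len(data)

def specificity(interval, data_range):
    """1 minus the normalized interval length: narrower is more specific."""
    lo, hi = interval
    return 1.0 - (hi - lo) / data_range

def granule_quality(interval, data):
    """Quality = coverage * specificity, the product maximized in the paper."""
    return coverage(interval, data) * specificity(interval, max(data) - min(data))

# Exhaustive search over integer intervals stands in for PSO here.
data = [1, 2, 2, 3, 3, 3, 4, 4, 5, 9]
candidates = [(lo, hi) for lo in range(0, 10) for hi in range(lo + 1, 11)]
best = max(candidates, key=lambda iv: granule_quality(iv, data))
```

The product trades the two indexes off: widening an interval raises coverage but lowers specificity, so the optimum is a granule that is both representative and tight.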
    Three-way screening method of basic clustering for ensemble clustering
    XU Jianfeng, ZOU Weikang, LIANG Wei, CHENG Gaojie, ZHANG Yuanjian
    2019, 39(11):  3120-3126.  DOI: 10.11772/j.issn.1001-9081.2019050864
    At present, research on ensemble clustering mainly focuses on the optimization of the ensemble strategy, while the measurement and optimization of the quality of the basic clusterings are rarely studied. On the basis of information entropy theory, a quality measurement index for basic clusterings was proposed, and a three-way screening method for basic clusterings was constructed based on three-way decision. Firstly, α and β were preset as the thresholds of the three-way decision for basic clustering screening. Secondly, the average cluster quality of each basic clustering was calculated and used as its quality measurement index. Finally, the three-way decision was implemented. For one three-way screening, the decision strategy is: 1) delete the basic clustering if its quality measurement index is less than the threshold β; 2) keep the basic clustering if its quality measurement index is greater than or equal to the threshold α; 3) recalculate the quality of the basic clustering if its quality measurement index is greater than or equal to β and less than α. For the third option, the decision process continues until no basic clustering is deleted or the iteration limit is reached. The comparative experiments show that the three-way screening method for basic clusterings can effectively improve the ensemble clustering results.
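The screening loop above can be sketched as below. The quality index itself (entropy-based in the paper) and the `reassess` callback are hypothetical stand-ins, as are all names.

```python
def three_way_screen(qualities, alpha, beta, reassess, max_iter=10):
    """Iteratively screen basic clusterings by their quality index q:
    q >= alpha -> keep; q < beta -> delete; beta <= q < alpha -> reassess.
    Stops when nothing was deleted in a pass or max_iter is reached."""
    kept, pool = {}, dict(qualities)
    for _ in range(max_iter):
        boundary, deleted = {}, False
        for name, q in pool.items():
            if q >= alpha:
                kept[name] = q                 # positive region: accept
            elif q < beta:
                deleted = True                 # negative region: drop it
            else:
                boundary[name] = reassess(q)   # boundary region: recompute
        pool = boundary
        if not deleted or not pool:
            break
    kept.update(pool)  # clusterings still undecided at the end survive
    return kept

result = three_way_screen(
    {"c1": 0.9, "c2": 0.5, "c3": 0.2},
    alpha=0.8, beta=0.3,
    reassess=lambda q: q + 0.4,  # hypothetical quality recomputation
)
```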
    Feature selection method for imbalanced text sentiment classification based on three-way decisions
    WAN Zhichao, HU Feng, DENG Weibin
    2019, 39(11):  3127-3133.  DOI: 10.11772/j.issn.1001-9081.2019050822
    Traditional feature selection methods have great limitations in imbalanced text sentiment classification, mainly reflected in the high feature dimension, sparse features and imbalanced feature distribution, which reduce classification accuracy. According to the distribution of sentiment features in imbalanced texts, a Three-Way Decisions Feature Selection algorithm (TWD-FS) was proposed for imbalanced text sentiment classification based on three-way decisions. To reduce the number of feature words and the feature dimension, two supervised feature selection methods were combined, and the selected feature words were further filtered so that they satisfy the characteristics of maximum between-class scatter and minimum within-class scatter. In addition, the imbalance of sentiment features was decreased and the classification accuracy of the minority sentiment was effectively improved by combining positive and negative sentiment features. The experimental results on the COAE2013 Chinese microblog imbalanced dataset and other datasets show that the proposed feature selection algorithm TWD-FS can effectively improve the accuracy of imbalanced text sentiment classification.
    Support vector data description method based on probability
    YANG Chen, WANG Jieting, LI Feijiang, QIAN Yuhua
    2019, 39(11):  3134-3139.  DOI: 10.11772/j.issn.1001-9081.2019050823
    In view of the high complexity of current probabilistic machine learning methods in solving probability problems, and the fact that the traditional Support Vector Data Description (SVDD), as a kernel density estimation method, can only estimate whether a test sample belongs to a class, a probability-based SVDD method was proposed. Firstly, the traditional SVDD method was used to obtain the data descriptions of two classes of data, and the distance between the test sample and each hypersphere was calculated. Then, a function was constructed to convert the distance into a probability, yielding an SVDD method based on probability. At the same time, the Bagging algorithm was used for ensemble learning to further improve the data description performance. With reference to classification scenarios, the proposed method was compared with the traditional SVDD method on 13 benchmark datasets of Gunnar Raetsch. The experimental results show that the proposed method is better than the traditional SVDD method in accuracy and F1-value, and that its data description performance is improved.
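The abstract does not give the distance-to-probability function. A logistic mapping around the hypersphere radius, shown below with invented names, is one simple way such a conversion could look; it is purely illustrative, not the paper's function.

```python
import math

def distance_to_probability(distance, radius, scale=1.0):
    """Map a test sample's distance to the SVDD hypersphere center into
    a membership probability: well inside the sphere -> near 1, far
    outside -> near 0. The logistic form is an illustrative assumption."""
    return 1.0 / (1.0 + math.exp(scale * (distance - radius)))
```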
    Matrix-based algorithm for updating approximations in variable precision multi-granulation rough sets
    ZHENG Wenbin, LI Jinjin, YU Peiqiu, LIN Yidong
    2019, 39(11):  3140-3145.  DOI: 10.11772/j.issn.1001-9081.2019050836
    In the era of information explosion, the large scale and structural complexity of datasets make approximation computation difficult, and dynamic computing is an efficient approach to these problems. Building on existing updating methods for dynamic approximations in multi-granulation rough sets, a vector-matrix-based method for computing and updating approximations in Variable Precision Multi-Granulation Rough Sets (VPMGRS) was proposed. Firstly, a static vector-matrix-based algorithm for computing approximations in VPMGRS was presented. Secondly, the search area for updating approximations in VPMGRS was reconsidered and shrunk according to the properties of VPMGRS, effectively improving the time efficiency of the approximation updating algorithm. Thirdly, according to the new search area, a vector-matrix-based algorithm for updating approximations in VPMGRS was proposed on the basis of the static algorithm. Finally, the effectiveness of the designed algorithm was verified by experiments.
    Protein-ATP binding site prediction based on 1D-convolutional neural network
    ZHANG Yu, YU Dongjun
    2019, 39(11):  3146-3150.  DOI: 10.11772/j.issn.1001-9081.2019050865
    To improve the accuracy of protein-ATP (Adenosine TriPhosphate) binding site prediction, an algorithm using a One-Dimensional Convolutional Neural Network (1D-CNN) was proposed. Firstly, based on the protein sequence information, the position-specific scoring matrix information, secondary structure information and water solubility information were combined, and random under-sampling was used to eliminate the impact of data imbalance. Then, the missing features were completed by recoding, and the training features were obtained. Finally, a 1D-CNN was trained to predict protein-ATP binding sites, the network structure was optimized, and experiments were carried out to compare the proposed method with other machine learning methods. Experimental results show that the proposed method is effective and achieves better AUC (Area Under Curve) performance than the traditional Support Vector Machine (SVM).
    The 2019 CCF Conference on Artificial Intelligence (CCFAI2019)
    Overlapping community detection algorithm for attributed networks
    DU Hangyuan, PEI Xiya, WANG Wenjian
    2019, 39(11):  3151-3157.  DOI: 10.11772/j.issn.1001-9081.2019051177
    Real-world network nodes contain a large amount of attribute information, and communities often overlap. Aiming at these problems, an overlapping community detection algorithm for attributed networks was proposed. The network topology and node attributes were fused to define the intensity degree and interval degree of network nodes, designed to describe the two characteristics of a community: dense internal connections and sparse external connections, respectively. Based on the idea of density peak clustering, local density centers were selected as community centers. On this basis, an iterative method for calculating the membership of non-central nodes in each community was proposed, and the division into overlapping communities was realized. Simulation experiments were carried out on real datasets. The experimental results show that the proposed algorithm performs better in community detection than the LINK algorithm, the COPRA algorithm and DPSCD (Density Peaks-based Clustering Method).
    Image feature point matching method based on distance fusion
    XIU Chunbo, MA Yunfei, PAN Xiaonan
    2019, 39(11):  3158-3162.  DOI: 10.11772/j.issn.1001-9081.2019051180
    In order to reduce the matching error rate of the ORB (Oriented FAST and Rotated BRIEF) method caused by the lack of scale invariance of its feature points, and to enhance the robustness to noise of the descriptors of the Binary Robust Independent Elementary Features (BRIEF) algorithm, an improved feature point matching method was proposed. The Speeded-Up Robust Features (SURF) algorithm was used to extract feature points, and the BRIEF algorithm with direction information was used to describe them. Random pixel pairs in the neighborhood of each feature point were selected, the grayscale comparison results and the similarities of the pixel pairs were encoded separately, and the Hamming distance was used to calculate the differences between the two codes. The similarity between feature points was measured by an adaptive weighted fusion method. Experimental results show that the improved method adapts better to scale, illumination and blur variations of images, achieves a higher correct matching rate of feature points than the conventional ORB method, and can be used to improve the performance of image stitching.
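The Hamming-distance comparison of the two binary codes, followed by the weighted fusion, can be sketched as below. The fixed fusion weight here is a placeholder for the adaptive weight of the paper, and the bit-string encoding is an assumption.

```python
def hamming(code_a, code_b):
    """Hamming distance between two equal-length binary descriptors,
    each stored as an int bit string: XOR, then count set bits."""
    return bin(code_a ^ code_b).count("1")

def fused_distance(gray_a, gray_b, sim_a, sim_b, weight):
    """Weighted fusion of the two Hamming distances computed from the
    grayscale-comparison code and the pixel-pair-similarity code;
    'weight' in [0, 1] stands in for the adaptive weight."""
    return weight * hamming(gray_a, gray_b) + (1 - weight) * hamming(sim_a, sim_b)
```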
    Three-way group decisions model based on cloud aggregation
    LI Shuai, WANG Guoyin, YANG Jie
    2019, 39(11):  3163-3171.  DOI: 10.11772/j.issn.1001-9081.2019051050
    Group decision making by domain experts is the most direct approach to determining the loss function in three-way decision problems. Different from the linguistic variable model and the fuzzy set model with single uncertainty, expert evaluations described by the cloud model can reflect the complex uncertainty of the cognitive process, and a synthetic evaluation function can be obtained by cloud aggregation. However, in current cloud aggregation methods the numerical characteristics only undergo simple linear combination, leading to a lack of description of concept semantic differences and difficulty in obtaining convincing results. Therefore, firstly, the weighted distance sum was proved to be a convex function in the distance space of the cloud model, and the aggregated cloud model was defined as the minimum point of that function. Then, this definition was generalized to the multi-cloud scenario, and a cloud aggregation method, namely the density-center-based cloud aggregation method, was proposed. In group decision making, the proposed method obtains the most accurate synthetic evaluations, with the highest similarity between the synthetic evaluation and the basic evaluations, providing a novel semantic interpretation of the determination of the loss function. The experimental results show that the misclassification rate of the three-way decision whose loss function is determined by the proposed method is the lowest compared with the simple linear combination and rational granularity methods.
    Data enhancement algorithm based on feature extraction preference and background color correlation
    YU Ying, WANG Lewei, ZHANG Yinglong
    2019, 39(11):  3172-3177.  DOI: 10.11772/j.issn.1001-9081.2019051140
    Deep neural networks have a powerful feature self-learning ability and can obtain granular features of different levels by multi-layer stepwise feature extraction. However, when the target subject of an image is strongly correlated with the background color, feature extraction becomes "lazy": the extracted features are hard to discriminate and have a low abstraction level. To solve this problem, the intrinsic law of feature extraction in deep neural networks was studied through experiments. It was found that there is a correlation between the feature extraction preference and the background color of the image, and that eliminating this correlation can help a deep neural network ignore background interference and extract the features of the target subject directly. Therefore, a data enhancement algorithm was proposed, and experiments were carried out on a self-built dataset. The experimental results show that the proposed algorithm can reduce the interference of background color on target feature extraction, reduce over-fitting and improve the classification effect.
    Point-of-Interest recommendation algorithm combining location influence
    XU Chao, MENG Fanrong, YUAN Guan, LI Yuee, LIU Xiao
    2019, 39(11):  3178-3183.  DOI: 10.11772/j.issn.1001-9081.2019051087
    Focused on the low accuracy and efficiency of Point-Of-Interest (POI) recommendation, and with a deep analysis of the influence of social and geographical factors in POI recommendation, a POI recommendation algorithm combining location influence was presented. Firstly, to address the sparseness of check-in data, 2-degree friends were introduced into the collaborative filtering algorithm to construct a social influence model, and the social influence of 2-degree friends on users was obtained by calculating experience and friend similarity. Secondly, with deep consideration of the influence of geographical factors on POIs, a location influence model was constructed based on social network analysis: users' influences were discovered through the PageRank algorithm, and location influences were calculated from POI check-in frequencies, yielding the overall geographical preference. Moreover, the kernel density estimation method was used to model users' check-in behaviors and obtain personalized geographical features. Finally, the social model and the geographical model were combined to improve recommendation accuracy, and recommendation efficiency was improved by constructing a candidate POI recommendation set. Experiments on the Gowalla and Yelp check-in datasets show that the proposed algorithm can quickly recommend POIs for users, and has higher accuracy and recall than the LRT (Location Recommendation with Temporal effects) algorithm and the iGSLR (Personalized Geo-Social Location Recommendation) algorithm.
    Repairing of missing bus arrival data based on DBSCAN algorithm and multi-source data
    WANG Cheng, CUI Ziwei, DU Zilin, GAO Yueer
    2019, 39(11):  3184-3190.  DOI: 10.11772/j.issn.1001-9081.2019051033
    In order to solve the problem that existing repair methods for missing bus arrival information consider few factors and have low accuracy and poor robustness, a method for repairing missing bus arrival data based on the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm and multi-source data was proposed. Bus GPS (Global Positioning System) data, IC (Integrated Circuit) card data and other source data were used to repair the missing arrival information. The name, longitude and latitude of a missing arrival station were repaired by association analysis of the complete arrival data and static line information. The missing arrival time data were repaired by the following steps. Firstly, for every station with missing data and its nearest station without missing data, the travel times and schedules between the two stations in the historical complete arrival data were clustered based on the DBSCAN algorithm. Secondly, it was judged whether the two adjacent runs of the studied bus with complete data belonged to the same cluster; if so, the cluster was left unchanged, otherwise the two clusters were merged. Finally, the maximum travel time corresponding to the cluster midpoint was used as the missing travel time to determine whether a passenger swiped a card to board the bus at this station; if so, the arrival time was calculated from the card-swiping time, and if not, the mean of the maximum and minimum travel times corresponding to the cluster midpoint was used as the missing travel time to calculate the arrival time. Taking Xiamen bus arrival data as an example, in the repair of the name, longitude and latitude of missing arrival stations, the GPS-data-based clustering method, the maximum probability estimation method and the proposed method all repaired 100.00% of the data. In the repair of missing arrival times, the mean relative error of the proposed method is 0.0301% and 0.0004% lower than those of the two comparison methods respectively, and its correlation coefficient is 0.005 and 0.0075 higher respectively. The simulation results show that the proposed method can effectively improve the accuracy of repairing missing bus arrival data and reduce the impact of the number of missing stations on accuracy.
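As a sketch of the clustering step, a minimal 1-D DBSCAN over travel times is shown below; the `eps` and `min_pts` values, like all names and the sample data, are illustrative rather than the paper's settings.

```python
def dbscan_1d(values, eps, min_pts):
    """Minimal 1-D DBSCAN over travel times (e.g. seconds). Returns
    cluster labels aligned with `values`; -1 marks noise."""
    labels = [-1] * len(values)
    visited = [False] * len(values)
    cluster = 0

    def neighbors(i):
        return [j for j, v in enumerate(values) if abs(v - values[i]) <= eps]

    for i in range(len(values)):
        if visited[i]:
            continue
        visited[i] = True
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            continue                      # noise (may be absorbed later)
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:                      # expand the cluster
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                more = neighbors(j)
                if len(more) >= min_pts:  # j is a core point too
                    queue.extend(m for m in more if not visited[m])
        cluster += 1
    return labels

# Two dense groups of travel times plus one outlier
labels = dbscan_1d([100, 102, 101, 300, 301, 299, 1000], eps=5, min_pts=2)
```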
    Pareto distribution based processing approach of deceptive behaviors of crowdsourcing workers
    PAN Qingxian, JIANG Shan, DONG Hongbin, WANG Yingjie, PAN Tingwei, YIN Zengxuan
    2019, 39(11):  3191-3197.  DOI: 10.11772/j.issn.1001-9081.2019051067
    Due to the loose organization of crowdsourcing, crowdsourcing workers may behave deceptively in the process of completing tasks. How to identify the deceptive behaviors of workers and reduce their impact, thus ensuring the completion quality of crowdsourcing tasks, has become one of the research hotspots in the crowdsourcing field. Based on the evaluation and analysis of task results, a Weight Setting Algorithm Based on Generalized Pareto Distribution (GPD) (WSABG) was proposed for the unified type of deceptive behaviors of crowdsourcing workers. In the algorithm, maximum likelihood estimation of the GPD was performed, and the bisection method was used to approximate the zero point of the likelihood equation in order to calculate the scale parameter σ and the shape parameter ε. A new weight formula was defined, an absolute influence weight was given to each worker according to the feedback data of the crowdsourcing workers completing the current task, and finally the GPD-based crowdsourcing worker weight setting framework was designed. The proposed algorithm solves the problem that the differences among task result data are small and the data tend to concentrate at the two extremes. Taking the data of Yantai University students' evaluation of teaching as the experimental dataset, and with the proposed concept of the interval transfer matrix, the effectiveness and superiority of the WSABG algorithm are demonstrated.
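The bisection step described above follows the standard root-bracketing pattern sketched here; it is demonstrated on a generic function rather than the actual GPD likelihood equation, whose form is in the paper.

```python
def bisect_root(f, lo, hi, tol=1e-10, max_iter=200):
    """Bisection root finder of the kind the abstract describes:
    f(lo) and f(hi) must bracket a sign change; halve the bracket
    until the residual or the interval is below tol."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        fmid = f(mid)
        if abs(fmid) < tol or (hi - lo) / 2 < tol:
            return mid
        if (flo < 0) != (fmid < 0):   # root lies in [lo, mid]
            hi = mid
        else:                         # root lies in [mid, hi]
            lo, flo = mid, fmid
    return (lo + hi) / 2
```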
    Cross-social network user alignment algorithm based on knowledge graph embedding
    TENG Lei, LI Yuan, LI Zhixing, HU Feng
    2019, 39(11):  3198-3203.  DOI: 10.11772/j.issn.1001-9081.2019051143
    Aiming at the poor network embedding performance of cross-social-network user alignment algorithms and the inability of negative sampling methods to guarantee the quality of the generated negative samples, a cross-social-network KGEUA (Knowledge Graph Embedding User Alignment) algorithm was proposed. In the embedding stage, some known anchor user pairs were used for positive sample expansion, a Near_K negative sampling method was proposed to generate negative examples, and the two social networks were embedded into a unified low-dimensional vector space with the knowledge graph embedding method. In the alignment stage, the existing user similarity measurement was improved: the proposed structural similarity was combined with the traditional cosine similarity to jointly measure user similarity, and an adaptive-threshold greedy matching method was proposed to align users. Finally, the newly aligned user pairs were added to the training set to continuously optimize the vector space. The experimental results show that the proposed algorithm achieves a hits@30 of 67.7% on the Twitter-Foursquare dataset, 3.3 to 34.8 percentage points higher than the state-of-the-art algorithms, improving user alignment performance effectively.
    Text-to-image synthesis method based on multi-level structure generative adversarial networks
    SUN Yu, LI Linyan, YE Zihan, HU Fuyuan, XI Xuefeng
    2019, 39(11):  3204-3209.  DOI: 10.11772/j.issn.1001-9081.2019051077
    In recent years, the Generative Adversarial Network (GAN) has achieved remarkable success in text-to-image synthesis, but problems remain such as blurred image edges, unclear local textures and small sample variance. In view of these shortcomings, based on the Stacked Generative Adversarial Network model (StackGAN++), a Multi-Level structure Generative Adversarial Network (MLGAN) model was proposed, composed of multiple generators and discriminators in a hierarchical structure. Firstly, a hierarchical structure coding method and word vector constraints were introduced to change the condition vector of the generator at each level of the network, so that the edge details and local textures of the image became clearer and more vivid. Then, the generators and discriminators were jointly trained to approximate the real image distribution by using the generated image distributions of multiple levels, so that the variance of the generated samples became larger and the diversity of the generated samples increased. Finally, images of the corresponding text at different scales were generated by generators at different levels. The experimental results show that the Inception scores of the MLGAN model reached 4.22 and 3.88 on the CUB and Oxford-102 datasets respectively, 4.45% and 3.74% higher than those of StackGAN++. The MLGAN model improves on the edge blurring and unclear local textures of generated images, and the images it generates are closer to real images.
    Fine-grained pedestrian detection algorithm based on improved Mask R-CNN
    ZHU Fan, WANG Hongyuan, ZHANG Ji
    2019, 39(11):  3210-3215.  DOI: 10.11772/j.issn.1001-9081.2019051051
    Aiming at the problem of poor pedestrian detection in complex scenes, a pedestrian detection algorithm based on an improved Mask R-CNN framework was proposed, drawing on leading research results in deep-learning-based object detection. Firstly, the K-means algorithm was used to cluster the object boxes of the pedestrian datasets to obtain appropriate aspect ratios; by adding the aspect ratio (2:5), 12 anchors were adapted to the sizes of pedestrians in the images. Secondly, combined with fine-grained image recognition technology, highly accurate pedestrian localization was realized. Thirdly, the foreground objects were segmented by a Fully Convolutional Network (FCN), and pixel-wise prediction was performed to obtain local masks (upper body, lower body) of pedestrians, achieving fine-grained pedestrian detection. Finally, the overall mask of each pedestrian was obtained by learning the local features. To verify the effectiveness of the improved algorithm, it was compared with current representative object detection methods (Faster Region-based Convolutional Neural Network (Faster R-CNN), YOLOv2 and R-FCN (Region-based Fully Convolutional Network)) on the same dataset. The experimental results show that the improved algorithm increases the speed and accuracy of pedestrian detection and reduces the false positive rate.
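The anchor-ratio clustering step can be sketched with a plain 1-D k-means over width/height ratios; the ratios below and all names are hypothetical, and real anchor pipelines typically cluster full box shapes rather than scalar ratios.

```python
def kmeans_1d(values, k, iters=100):
    """Plain 1-D Lloyd's k-means, here used to cluster bounding-box
    aspect ratios (width/height); initial centers are spread over the
    sorted values."""
    vals = sorted(values)
    centers = [vals[int(i * (len(vals) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vals:
            # assign each ratio to its nearest center
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers

# Hypothetical width/height ratios of pedestrian ground-truth boxes
ratios = [0.38, 0.40, 0.42, 0.41, 0.39, 0.52, 0.55, 0.54, 0.30, 0.31]
centers = sorted(kmeans_1d(ratios, 3))
```

The resulting cluster centers would then be rounded to convenient anchor ratios for the detector.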
    Person re-identification in video sequence based on spatial-temporal regularization
    LIU Baocheng, PIAO Yan, TANG Yue
    2019, 39(11):  3216-3220.  DOI: 10.11772/j.issn.1001-9081.2019051084
    Due to the interference of various factors in complex real-world situations, errors may occur in person re-identification. To improve the accuracy of person re-identification, a person re-identification algorithm based on spatial-temporal regularization was proposed. Firstly, the ResNet-50 network was used to extract features of the input video sequence frame by frame, and the series of frame-level features was input into the spatial-temporal regularization network to generate corresponding weight scores. Then the frame-level features were averaged with these weights to obtain sequence-level features. To prevent the weight scores from concentrating on a single frame, frame-level regularization was used to limit the difference between frames. Finally, the optimal results were obtained by minimizing the loss. Extensive tests were performed on the MARS and DukeMTMC-ReID datasets. The experimental results show that the mean Average Precision (mAP) and accuracy are effectively improved by the proposed algorithm compared with the Triplet algorithm, and that the proposed algorithm performs well under human posture variation, viewing angle changes and interference from targets with similar appearance.
    Improved attribute reduction algorithm and its application to prediction of microvascular invasion in hepatocellular carcinoma
    TAN Yongqi, FAN Jiancong, REN Yande, ZHOU Xiaoming
    2019, 39(11):  3221-3226.  DOI: 10.11772/j.issn.1001-9081.2019051108
    Focused on the issue that attribute reduction algorithms based on neighborhood rough sets only consider the influence of a single attribute on the decision attribute and fail to consider the correlation among different attributes, a Neighborhood Rough Set attribute reduction algorithm based on the Chi-square test (ChiS-NRS) was proposed. Firstly, the Chi-square test was used to calculate correlation, and the influence between related attributes was considered when selecting important attributes, reducing time complexity and improving classification accuracy. Then, the improved algorithm was combined with the Gradient Boosting Decision Tree (GBDT) algorithm to establish a classification model, and the model was verified on UCI datasets. Finally, the proposed model was applied to predict the occurrence of microvascular invasion in hepatocellular carcinoma. The experimental results show that the proposed algorithm achieves the highest classification accuracy on some UCI datasets compared with the algorithm without reduction and the neighborhood rough set reduction algorithm. In the prediction of microvascular invasion in hepatocellular carcinoma, compared with Convolutional Neural Network (CNN), Support Vector Machine (SVM) and Random Forest (RF) prediction models, the proposed model achieves the best results: a prediction accuracy of 88.13% on the test set, and sensitivity, specificity and Area Under Curve (AUC) of the Receiver Operating Characteristic (ROC) curve of 88.89%, 87.5% and 0.90 respectively. Therefore, the proposed model can better predict the occurrence of microvascular invasion in hepatocellular carcinoma and assist doctors in making more accurate diagnoses.
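The Chi-square correlation test at the core of ChiS-NRS is the standard Pearson statistic over a contingency table of two categorical attributes, sketched below; threshold selection and the neighborhood rough set machinery are omitted, and the table values are invented.

```python
def chi_square(table):
    """Pearson chi-square statistic of a contingency table (list of
    rows) between two categorical attributes; a large value suggests
    the attributes are correlated."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat
```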
    Remote sensing image classification via semi-supervised fuzzy C-means algorithm
    FENG Guozheng, XU Jindong, FAN Baode, ZHAO Tianyu, ZHU Meng, SUN Xiao
    2019, 39(11):  3227-3232.  DOI: 10.11772/j.issn.1001-9081.2019051043
    Because of the uncertainty and complexity of remote sensing image data, it is difficult for traditional unsupervised algorithms to create an accurate classification model for them. Pattern recognition methods based on fuzzy set theory can express the fuzziness of data effectively; among them, type-2 fuzzy sets can better describe inter-class hybrid uncertainty. Furthermore, semi-supervised methods can use prior knowledge to deal with the algorithm's generalization to data. Therefore, a remote sensing image classification method based on Semi-Supervised Adaptive Interval Type-2 Fuzzy C-Means (SS-AIT2FCM) was proposed. Firstly, by integrating semi-supervision and evolution theory, a novel fuzzy weight index selection method was proposed to improve the robustness and generalization of the adaptive interval type-2 fuzzy C-means clustering algorithm, making it more suitable for classifying remote sensing data with severe spectral aliasing, large coverage areas and abundant features. In addition, soft constrained supervision with a small number of labeled samples was performed to optimize and guide the iterative process of the algorithm and obtain the best expression of the data. In the experiments, SPOT5 multi-spectral remote sensing image data of the Summer Palace in Beijing and Landsat TM multi-spectral remote sensing image data of Hengqin Island in Guangdong were used to compare existing fuzzy classification algorithms with SS-AIT2FCM. The experimental results show that the proposed method obtains more accurate classification results and clearer class boundaries, and has good data generalization ability.
    Artificial intelligence
    Many-objective particle swarm optimization algorithm based on hyper-spherical fuzzy dominance
    TAN Yang, TANG Dequan, CAO Shoufu
    2019, 39(11):  3233-3241.  DOI: 10.11772/j.issn.1001-9081.2019040710
    With the increase of the dimension of the problem to be optimized, a Many-objective Optimization Problem (MAOP) forms a huge objective space, resulting in a sharp increase in the proportion of non-dominated solutions, which weakens the selection pressure of evolutionary algorithms and reduces their efficiency in solving MAOPs. To solve this problem, a Particle Swarm Optimization (PSO) algorithm using a hyper-spherical dominance relationship to reduce the number of non-dominated solutions was proposed. A fuzzy dominance strategy was used to maintain the selection pressure of the population on the MAOP, and the distribution of individuals in the objective space was maintained by the selection of global extrema and the maintenance of an external archive. The simulation results on the standard test suites DTLZ and WFG show that the proposed algorithm has better convergence and distribution when solving MAOPs.
    Design of experience-replay module with high performance
    CHEN Bo, WANG Jinyan
    2019, 39(11):  3242-3249.  DOI: 10.11772/j.issn.1001-9081.2019050810
    Concerning the problem that a straightforward implementation of the experience-replay procedure based on Python data structures may become a performance bottleneck in Deep Q Network (DQN) applications, a design scheme of a universal high-performance experience-replay module was proposed. The proposed module consists of two software layers. One of them, the "kernel", was written in C++ to implement the fundamental experience-replay functions with high execution efficiency. The other layer, the "wrapper", written in Python, encapsulates the module functions and provides a call interface in an object-oriented style, guaranteeing usability. The software structure and algorithms for the critical operations of experience replay were carefully designed: the priority-replay mechanism was implemented as a logically separated accessory of the main module, the sample verification of the "get_batch" operation was brought forward to the "record" operation, and efficient strategies and algorithms were used for eliminating samples. With these measures, the proposed module is universal and extensible. The experimental results show that the execution efficiency of the experience-replay process is well optimized by the proposed module, and the two critical operations, "record" and "get_batch", can be executed efficiently; "get_batch" runs about 100 times faster than the straightforward implementation based on Python data structures. Therefore, the experience-replay process is no longer a performance bottleneck in the system, meeting the requirements of various DQN applications.
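    The two operations named in the abstract, "record" and "get_batch", can be illustrated with a minimal pure-Python ring buffer. The internals below (5-tuple transitions, overwrite-oldest eviction, validation moved into "record" as the abstract describes) are illustrative assumptions, not the paper's C++ implementation:

```python
import random

class ReplayBuffer:
    """Minimal experience-replay buffer: fixed capacity, overwrite-oldest eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.storage = []
        self.next_idx = 0  # ring-buffer write position

    def record(self, transition):
        # Validate at record time, so get_batch needs no per-sample checks.
        if len(transition) != 5:  # (state, action, reward, next_state, done)
            raise ValueError("transition must be a 5-tuple")
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.next_idx] = transition  # evict the oldest sample
        self.next_idx = (self.next_idx + 1) % self.capacity

    def get_batch(self, batch_size):
        # Uniform sampling without replacement from the stored transitions.
        return random.sample(self.storage, min(batch_size, len(self.storage)))
```

A real high-throughput module would, as the paper argues, push this logic into native code; the ring-buffer structure itself carries over.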
    Quantum-inspired migrating birds co-optimization algorithm for lot-streaming flow shop scheduling problem
    CHEN Linfeng, QI Xuemei, CHEN Junwen, HUANG Cheng, CHEN Fulong
    2019, 39(11):  3250-3256.  DOI: 10.11772/j.issn.1001-9081.2019040700
    A Quantum-inspired Migrating Birds Co-Optimization (QMBCO) algorithm was proposed for minimizing the makespan in the Lot-streaming Flow shop Scheduling Problem (LFSP). Firstly, quantum coding based on Bloch coordinates was applied to expand the solution space. Secondly, an initial-solution improvement scheme based on the Framinan-Leisten (FL) algorithm was used to make up for the shortcomings of traditional initial solutions and construct a high-quality random initial population. Finally, the Migrating Birds Optimization (MBO) and Variable Neighborhood Search (VNS) algorithms were applied iteratively to exchange information between inferior and superior individuals and improve the global search ability. A set of instances of different scales was generated randomly, on which QMBCO was compared with the Discrete Particle Swarm Optimization (DPSO), MBO and Quantum-inspired Cuckoo Co-Search (QCCS) algorithms. Experimental results show that compared with DPSO, MBO and QCCS, QMBCO reduces the Average Relative Percentage Deviation (ARPD) by 65%, 34% and 24% on average respectively under two types of running time, verifying the effectiveness and efficiency of the proposed algorithm.
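    The makespan objective being minimized can be illustrated for a plain permutation flow shop; ignoring the lot-streaming sublots is a simplification of the LFSP made here only for illustration:

```python
def makespan(jobs, proc):
    """Makespan of a permutation flow shop schedule.

    jobs: job order (a permutation of job indices).
    proc[j][m]: processing time of job j on machine m.
    """
    n_machines = len(proc[0])
    finish = [0.0] * n_machines  # running completion times per machine
    for j in jobs:
        for m in range(n_machines):
            # A job starts on machine m when both the machine is free
            # (finish[m]) and the job has left machine m-1 (finish[m-1]).
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + proc[j][m]
    return finish[-1]
```

Metaheuristics such as QMBCO search over the permutation `jobs` to minimize this value.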
    Firefly fuzzy clustering algorithm based on Levy flight
    LIU Xiaoming, SHEN Mingyu, HOU Zhengfeng
    2019, 39(11):  3257-3262.  DOI: 10.11772/j.issn.1001-9081.2019040634
    The Fuzzy C-Means (FCM) clustering algorithm is sensitive to the initial clustering centers and easily falls into local optima. Therefore, a firefly Fuzzy C-Means clustering Algorithm based on Levy Flight (LFAFCM) was proposed. In LFAFCM, the random movement strategy of the firefly algorithm was changed to balance the local and global search capabilities of the algorithm, a Levy flight mechanism was introduced into the firefly position update process to improve the global optimization ability, and the scale coefficient of each firefly was dynamically adjusted according to the number of iterations and the firefly position, to limit the search range of the Levy flight and speed up convergence. The algorithm was validated on five UCI datasets. The experimental results show that the algorithm avoids local optima and converges quickly.
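    Levy-flight steps like those used in the position update are commonly generated with Mantegna's algorithm; a sketch follows, where the stability index beta = 1.5 is a typical choice and not necessarily the paper's setting:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-flight step via Mantegna's algorithm (beta: stability index)."""
    # Scale of the numerator Gaussian; the denominator Gaussian has unit scale.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

Most steps are small (local search) but occasional large jumps occur, which is exactly the heavy-tailed behavior that helps a firefly escape local optima.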
    Microblog bursty events detection algorithm based on multi-feature
    WANG Xueying, YANG Wenzhong, ZHANG Zhihao, LI Donghao, QIN Xu
    2019, 39(11):  3263-3267.  DOI: 10.11772/j.issn.1001-9081.2019040647
    In order to reduce the harm caused by bursty events in social media, a multi-feature microblog bursty event detection algorithm was proposed, which combines text emotion filtering and user influence calculation. Firstly, microblog texts with negative emotion were obtained through noise filtering and emotion filtering. Then, the proposed user influence calculation method was combined with a burst word extraction algorithm to extract burst word features. Finally, an agglomerative hierarchical clustering algorithm was introduced to cluster the burst word sets and extract bursty events from them. In the experimental test, the accuracy reaches 66.84%, which shows that the proposed method can effectively detect bursty events.
    Speech recognition method based on dual micro-array and convolutional neural network
    LIU Weibo, ZENG Qingning, BU Yuting, ZHENG Zhanheng
    2019, 39(11):  3268-3273.  DOI: 10.11772/j.issn.1001-9081.2019050878
    In order to solve the problems of low speech recognition rate in noisy environments and the difficulty of traditional beamforming algorithms in dealing with spatial noise, an improved Minimum Variance Distortionless Response (MVDR) beamforming method based on a dual microphone array was proposed. Firstly, the gain of the microphone array was increased by diagonal loading, and the computational complexity was reduced by recursive matrix inversion. Then, further processing by modulation-domain spectral subtraction solved the problem that ordinary spectral subtraction easily produces musical noise, effectively reducing speech distortion and suppressing noise well. Finally, a Convolutional Neural Network (CNN) was used to train the speech model and extract deep speech features, effectively addressing the diversity of speech signals. The experimental results show that the proposed method achieves a good recognition effect in the CNN-trained speech recognition system, with a speech recognition accuracy of 92.3% in an F16 noise environment at 10 dB signal-to-noise ratio, indicating good robustness.
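    The diagonally loaded MVDR weights mentioned above can be sketched as follows: w = (R + λI)⁻¹a / (aᴴ(R + λI)⁻¹a), which keeps unit gain toward the look direction while suppressing other directions. The loading level and the steering vector in the usage example are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mvdr_weights(R, a, loading=1e-2):
    """MVDR beamformer weights with diagonal loading.

    R: (n, n) Hermitian sample covariance of the array snapshots.
    a: (n,) steering vector of the look direction.
    """
    Rl = R + loading * np.eye(R.shape[0])      # diagonal loading for robustness
    Ria = np.linalg.solve(Rl, a)               # (R + loading*I)^-1 a
    return Ria / (a.conj() @ Ria)              # normalize: w^H a = 1
```

The defining "distortionless" property is that the weighted response to the steering vector is exactly one.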
    Segmentation algorithm of ischemic stroke lesion based on 3D deep residual network and cascade U-Net
    WANG Ping, GAO Chen, ZHU Li, ZHAO Jun, ZHANG Jing, KONG Weiming
    2019, 39(11):  3274-3279.  DOI: 10.11772/j.issn.1001-9081.2019040717
    Artificial identification of ischemic stroke lesions is time-consuming, laborious and prone to subjective differences. To solve this problem, an automatic segmentation algorithm based on a 3D deep residual network and cascade U-Net was proposed. Firstly, in order to efficiently utilize the 3D contextual information of the image and solve the class imbalance issue, patches were extracted from the stroke Magnetic Resonance Image (MRI) and fed into the network. Then, a segmentation model based on a 3D deep residual network and cascade U-Net was used to extract features of the image patches and obtain a coarse segmentation result. Finally, a fine segmentation process was used to optimize the coarse segmentation result. The experimental results show that, on the Ischemic Stroke LEsion Segmentation (ISLES) dataset, the proposed algorithm reached a Dice similarity coefficient of 0.81, a recall of 0.81 and a precision of 0.81, with the distance coefficients Average Symmetric Surface Distance (ASSD) reaching 1.32 and Hausdorff Distance (HD) reaching 22.67. Compared with the 3D U-Net algorithm, the level set algorithm, the Fuzzy C-Means (FCM) algorithm and the Convolutional Neural Network (CNN) algorithm, the proposed algorithm has better segmentation performance.
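    The reported overlap metrics (Dice, recall, precision) are standard and can be computed from binary masks as follows; representing the voxel masks as sets of indices is a simplification for brevity:

```python
def dice_recall_precision(pred, truth):
    """Overlap metrics for binary segmentation masks given as sets of voxel indices.

    Dice      = 2*TP / (|pred| + |truth|)
    recall    = TP / |truth|   (sensitivity to the true lesion)
    precision = TP / |pred|    (how much of the prediction is lesion)
    """
    tp = len(pred & truth)
    dice = 2 * tp / (len(pred) + len(truth)) if (pred or truth) else 1.0
    recall = tp / len(truth) if truth else 1.0
    precision = tp / len(pred) if pred else 1.0
    return dice, recall, precision
```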
    Data science and technology
    Subspace clustering algorithm for high dimensional uncertain data
    WAN Jing, ZHENG Longjun, HE Yunbin, LI Song
    2019, 39(11):  3280-3287.  DOI: 10.11772/j.issn.1001-9081.2019050928
    How to reduce the impact of uncertain data on high dimensional data clustering is a difficulty of current research. Aiming at the problem of low clustering accuracy caused by uncertain data and the curse of dimensionality, the approach of first determining the uncertain data and then clustering the resulting certain data was adopted. In the process of determining the uncertain data, the uncertain data were divided into value-uncertain data and dimension-uncertain data and processed separately to improve efficiency. K-Nearest Neighbor (KNN) query combined with expected distance was used to obtain the approximate value of uncertain data with the least impact on the clustering results, so as to improve the clustering accuracy. After determining the uncertain data, subspace clustering was adopted to avoid the impact of the curse of dimensionality. The experimental results show that the high-dimensional uncertain data clustering algorithm based on Clique for Uncertain data (UClique) performs well on UCI datasets, has good anti-noise performance and scalability, obtains better clustering results on high dimensional data, and achieves higher accuracy on different uncertain datasets, showing that the algorithm is robust and can effectively cluster high dimensional uncertain data.
    User relevance measure method combining latent Dirichlet allocation and meta-path analysis
    XU Hongyan, WANG Dan, WANG Fuhai, WANG Rongbing
    2019, 39(11):  3288-3292.  DOI: 10.11772/j.issn.1001-9081.2019040728
    User relevance measure is the foundation and core of heterogeneous information network research. The existing user relevance measure methods still have room for improvement due to insufficient multi-dimensional analysis and link analysis. Aiming at this, a user relevance measure method combining Latent Dirichlet Allocation (LDA) and meta-path analysis was proposed. Firstly, LDA was used for topic modeling, and the relevance of nodes was analyzed from the node contents in the network. Secondly, meta-paths were introduced to describe the relationship types between nodes, and the relevance of users in the heterogeneous information network was measured by the proposed relevance measure method (DPRel). Thirdly, the relevance of nodes was incorporated into the calculation of the user relevance measure. Finally, experiments were carried out on the real IMDB movie dataset, and the proposed method was compared with ULR-CF (Unifying LDA and Ratings Collaborative Filtering), a collaborative filtering recommendation method embedded in the LDA topic model, and the meta-path based similarity method PathSim. The experimental results show that the proposed method can overcome the drawback of data sparsity and improve the accuracy of user relevance measure.
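    PathSim, used above as a baseline, scores two nodes under a symmetric meta-path as s(x, y) = 2·|P(x→y)| / (|P(x→x)| + |P(y→y)|), where |P(·)| counts meta-path instances. A minimal sketch over a precomputed commuting matrix of path counts (the matrix in the usage example is illustrative):

```python
def pathsim(M, x, y):
    """PathSim similarity from a commuting matrix M of meta-path instance counts.

    M[x][y] is the number of path instances between nodes x and y under the
    chosen symmetric meta-path; s(x, y) = 2*M[x][y] / (M[x][x] + M[y][y]).
    """
    denom = M[x][x] + M[y][y]
    return 2 * M[x][y] / denom if denom else 0.0
```

The normalization by the self-path counts is what distinguishes PathSim from raw path counting: a node is always maximally similar to itself.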
    Parameter independent clustering of air traffic trajectory based on silhouette coefficient
    SUN Shilei, WANG Chao, ZHAO Yuandi
    2019, 39(11):  3293-3297.  DOI: 10.11772/j.issn.1001-9081.2019040738
    In order to eliminate the subjectivity of expert experience, get rid of the dependence on trajectory characteristics and reduce the burden of experimental parameter tuning, a Parameter Independent Clustering BAsed on SIlhouette Coefficient (PICBASIC) algorithm was proposed. Firstly, existing Euclidean-distance-based trajectory pairing methods were compared, and a trajectory similarity calculation model based on the Dynamic Time Warping (DTW) distance and a Gaussian kernel function was established. Secondly, the air traffic trajectories were partitioned and clustered by spectral clustering. Finally, a cluster number optimization method based on the silhouette coefficient was proposed, which also quantitatively evaluates the clustering results. Experiments were carried out on real arrival trajectories to verify the validity of the proposed algorithm. PICBASIC judged that the clustering quality was optimal when the 365 trajectories of runway 28L were clustered into 5 clusters and the 530 trajectories of runway 28R were clustered into 6 clusters, with average silhouette coefficients of 0.8099 and 0.8056 respectively. Under the same experimental conditions, the difference rates of the average silhouette coefficient between PICBASIC and MeanShift clustering were -1.23% and 0.19% respectively. The experimental results demonstrate that PICBASIC can tolerate the speed and length differences of trajectories, dispenses with manual guidance and experimental parameter tuning, and filters out the adverse impact of abnormal trajectories on the clustering quality.
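    The DTW-plus-Gaussian-kernel similarity model described above can be sketched as follows; scalar sequences and the kernel width sigma are illustrative simplifications (real trajectories are multi-dimensional):

```python
import math

def dtw(a, b):
    """Dynamic Time Warping distance between two numeric sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match of the previous cells.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def trajectory_similarity(a, b, sigma=1.0):
    """Gaussian-kernel affinity on the DTW distance, as in the similarity model."""
    d = dtw(a, b)
    return math.exp(-d * d / (2 * sigma * sigma))
```

Because DTW aligns sequences elastically, trajectories flown at different speeds or with different lengths can still receive a high affinity, which is what the spectral clustering step consumes.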
    Cyber security
    Security verification method of safety critical software based on system theoretic process analysis
    WANG Peng, WU Kang, YAN Fang, WANG Kenian, ZHANG Xiaochen
    2019, 39(11):  3298-3303.  DOI: 10.11772/j.issn.1001-9081.2019040688
    The functional implementation of modern safety critical systems is increasingly dependent on software. As a result, software security is vital to system security, and the complexity of software makes it difficult for traditional security analysis methods to capture the dangers of component interactions. In order to ensure the security of safety critical systems, a software security verification method based on System Theoretic Process Analysis (STPA) was proposed. On the basis of the security control structure, a process model with software process model variables was constructed, the system context information of the occurrence of dangerous behaviors was specified and analyzed, and the software security requirements were generated. Then, taking the software design of a landing gear control system as an example, software security verification was carried out by model checking. The results show that the proposed method can effectively identify potentially dangerous control paths in software at the system level and reduce the dependence on manual analysis.
    Cloud system security and performance modeling based on Markov model
    XU Han, LUO Liang, SUN Peng, MENG Sa
    2019, 39(11):  3304-3309.  DOI: 10.11772/j.issn.1001-9081.2019020257
    Aiming at the lack of security assessment in cloud environments, a cloud security modeling method was proposed, and a Security-Performance (S-P) association model for the cloud environment was established. Firstly, a model was constructed for the virtual machine, the most important component of the cloud system, to evaluate its security; the model fully reflects the impact of security mechanisms and malicious attacks on virtual machines. Secondly, based on the relationship between virtual machines and the cloud system, an indicator for assessing the security of the cloud system was proposed. Thirdly, a hierarchical modeling method was proposed to establish the S-P association model: queuing theory was used to model the performance of the cloud computing system, the relationship between security and performance was established based on Bayesian theory and association analysis, and a new index for evaluating the complex S-P association was proposed. Experimental results verify the correctness of the theoretical model and reveal the dynamic variation of performance caused by security factors.
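    A basic building block of such Markov-based security models is the stationary distribution of a finite Markov chain. Below is a generic power-iteration sketch; the two-state (e.g. healthy/compromised) transition matrix in the test is purely illustrative, not the paper's model:

```python
def steady_state(P, iters=200):
    """Stationary distribution of a finite Markov chain.

    P: row-stochastic transition matrix (list of lists; rows sum to 1).
    Repeatedly applies P to a uniform start; assumes the chain is ergodic,
    so the iteration converges to the unique stationary distribution.
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

The stationary probabilities of "secure" states can then feed an availability-style security indicator, while queueing formulas supply the performance side of the S-P association.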
    Intrusion detection method based on ensemble transfer learning via weighted mutual information
    HU Jian, SU Yongdong, HUANG Wenzai, XIAO Peng, LIU Yuting, YANG Benfu
    2019, 39(11):  3310-3315.  DOI: 10.11772/j.issn.1001-9081.2019040730
    Intrusion Detection Systems (IDSs) have become an essential part of network security systems, but the practicability and durability of existing intrusion detection methods still have room for improvement, such as detecting intrusion threats earlier and improving the detection accuracy. Therefore, an intrusion detection method based on Ensemble Transfer Learning (ETL) via weighted mutual information was proposed. Firstly, a transfer strategy was used to model multiple feature sets; then, mutual information was used to measure the data attribution of the feature sets under the transfer models in different domains; finally, the multiple transfer models were ensembled with weights according to these measures, obtaining the ensemble transfer model. By learning the knowledge of a few labeled samples in the new environment together with many labeled samples in the prior environment, the method can construct a better intrusion detection model than traditional models without ensemble or transfer learning. The benchmark NSL-KDD dataset was used to evaluate the proposed method, and the results show that it has good convergence performance and improves the accuracy of intrusion detection.
    Android permission management and control scheme based on access control list mechanism
    CAO Zhenhuan, CAI Xiaohai, GU Menghe, GU Xiaozhuo, LI Xiaowei
    2019, 39(11):  3316-3322.  DOI: 10.11772/j.issn.1001-9081.2019040685
    Android uses a permission-based access control method to protect system resources, which suffers from coarse-grained management. At the same time, some malicious applications can secretly access resources in privacy scenarios without the user's permission, bringing threats to user privacy and system resources. On the basis of the original permission management and control, an Android fine-grained permission management and control system based on the Access Control List (ACL) mechanism was designed and implemented. The proposed system can dynamically set the access rights of applications according to the user's policy, preventing malicious code from accessing and thus protecting system resources. Tests of compatibility and effectiveness show that the system provides a stable environment for applications.
    Advanced computing
    Reliability assessment of k-ary n-cube networks
    FENG Kai, LI Jing
    2019, 39(11):  3323-3327.  DOI: 10.11772/j.issn.1001-9081.2019040714
    The functions of a parallel computer system heavily rely on the performance of the interconnection network of the system. In order to measure the fault tolerance of parallel computer systems with k-ary n-cubes as underlying topologies, the reliability of the k-ary (n-1)-cube subnetworks of a k-ary n-cube under the node fault model was studied. For odd k ≥ 3, the mean time to failure for maintaining different numbers of fault-free k-ary (n-1)-cubes in a k-ary n-cube was analyzed under the fixed partition pattern and the flexible partition pattern respectively, and calculation formulas for this subnetwork reliability evaluation parameter were obtained. The results indicate that, under the node fault model, a parallel computer system built on k-ary n-cubes with odd k has better fault tolerance under the flexible partition pattern when subnetworks in the system are assigned for user task execution.
    Task scheduling of variance-directional variation genetic algorithm in cloud environment
    SUN Min, YE Qiaonan, CHEN Zhongxiong
    2019, 39(11):  3328-3332.  DOI: 10.11772/j.issn.1001-9081.2019040635
    Task scheduling by Genetic Algorithm (GA) in cloud environments suffers from problems such as poor optimization ability and unstable results. For these problems, a Variance-Directional Variation GA (V-DVGA) was proposed. In the selection part, multiple selections were made in each iteration, and the mathematical variance was used to ensure the diversity of the population and expand the search range of better solutions. In the crossover part, a new crossover mechanism was established to enrich the diversity of the population and improve its overall fitness. In the mutation part, the mutation method was improved: directional mutation was used on the basis of traditional mutation to increase the optimization ability of the algorithm. Cloud environment simulation experiments were carried out on the WorkflowSim platform, and the proposed algorithm was compared with the classical GA and the current workflow scheduling algorithm based on genetic algorithm (CWTS-GA). The experimental results show that, under the same settings, the proposed algorithm is superior to the other two algorithms in execution efficiency, optimization ability and stability, and is an effective task scheduling algorithm for cloud computing environments.
    Greedy algorithm-based virtual machine migration strategies in cloud data center
    LIU Kainan
    2019, 39(11):  3333-3338.  DOI: 10.11772/j.issn.1001-9081.2019040598
    In order to save energy in cloud data centers, several greedy-algorithm-based Virtual Machine (VM) migration strategies were proposed. In these strategies, the migration process was divided into physical host status detection, virtual machine selection and virtual machine placement, and a greedy algorithm was adopted in the selection and placement stages respectively. The three proposed migration strategies were: Minimum Host Utilization selection, Maximum Host Utilization placement (MinMax_Host_Utilization); Maximum Host Power Usage selection, Minimum Host Power Usage placement (MaxMin_Host_Power_Usage); Minimum Host MIPS selection, Maximum Host MIPS placement (MinMax_Host_MIPS). Maximum or minimum thresholds were set for the processor utilization efficiency, the energy consumption and the processor computing power of the physical hosts; according to the principle of the greedy algorithm, the virtual machines on hosts with indicators above or below the thresholds were migrated. With CloudSim as the simulated cloud data center, the test results show that compared with the static threshold and median absolute deviation migration strategies provided in CloudSim, the proposed strategies reduce the total energy consumption by 15%, decrease the number of VM migrations by 60%, and lower the average SLA violation rate by about 5%.
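    The select-then-place step of the MinMax_Host_Utilization strategy might look like the following sketch; the thresholds, the host representation and the tie-breaking are assumptions based only on the strategy name, not the paper's CloudSim implementation:

```python
def minmax_host_utilization(hosts, low, high):
    """Greedy MinMax_Host_Utilization sketch.

    hosts: {host_name: cpu_utilization in [0, 1]}.
    Source: the least-utilized host below the low threshold (vacate it so the
    nearly idle machine can be switched off).
    Target: the most-utilized other host still below the high threshold
    (consolidate load without overloading).
    Returns (source, target); either may be None if no host qualifies.
    """
    under = {h: u for h, u in hosts.items() if u < low}
    source = min(under, key=under.get) if under else None
    candidates = {h: u for h, u in hosts.items() if h != source and u < high}
    target = max(candidates, key=candidates.get) if candidates else None
    return source, target
```

Packing load onto the busiest acceptable host is the classic consolidation heuristic behind the reported energy savings: emptied hosts can be powered down.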
    Network and communications
    Carrier parameter decoupling technique based on autocorrelation increment
    WANG Sixiu, ZHANG Lei, REN Yan, FENG Changzheng
    2019, 39(11):  3339-3342.  DOI: 10.11772/j.issn.1001-9081.2019040682
    In high-speed mobile communications, transceivers always face large Doppler shifts and limited pilot overhead, which severely affect the overall performance of the Traditional Carrier Synchronization Pattern (TCSP). Thus, an autocorrelation-increment-based Carrier Parameter Estimation Decoupling Technique (CPEDT) was proposed and applied to the TCSP (CPEDT-TCSP). Firstly, a pilot signal of certain length was selected at the receiving end for modulation removal, and then a correlation operation with an effective delay length α was performed on the modulation-removed signal. The frequency offset was estimated from the result of the correlation operation, and the conjugate form of the correlation result, with α set to half of the pilot length, was used together with the modulation-removed signal to make the maximum likelihood phase offset estimation. Theoretical analysis and simulation results show that with a pilot starting location of zero, the CPEDT-TCSP can decouple the frequency offset estimation from the phase offset estimation in the TCSP, and reduces the computational complexity in complex multiplications of the maximum likelihood phase offset estimation from L to 1, making it more suitable for high-speed mobile communications.
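    The frequency-offset step can be illustrated by reading the offset out of the phase of the lag-α autocorrelation of the modulation-removed pilot: for x[n] = exp(j2πfn/fs), the sum Σ x[n+α]·x*[n] has phase 2πfα/fs. The sampling rate, lag and pilot length below are illustrative assumptions:

```python
import cmath
import math

def estimate_freq_offset(x, alpha, fs):
    """Frequency offset from the lag-alpha autocorrelation of a pilot.

    x: modulation-removed complex pilot samples; fs: sample rate in Hz.
    f_hat = angle(sum_n x[n+alpha] * conj(x[n])) * fs / (2*pi*alpha),
    valid while |f| < fs / (2*alpha) (no phase wrapping).
    """
    r = sum(x[n + alpha] * x[n].conjugate() for n in range(len(x) - alpha))
    return cmath.phase(r) * fs / (2 * math.pi * alpha)
```

Larger lags give finer frequency resolution but a smaller unambiguous range, which is the trade-off behind choosing the effective delay length α.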
    Multi-objective automatic identification and localization system in mobile cellular networks
    MIAO Sheng, DONG Liang, DONG Jian'e, ZHONG Lihui
    2019, 39(11):  3343-3348.  DOI: 10.11772/j.issn.1001-9081.2019040672
    Aiming at the difficulty of multi-target identification and the low localization accuracy in mobile cellular networks, a multi-objective automatic identification and localization method based on the cellular network structure was presented to improve the detection of the number of targets and the localization accuracy of each target. Firstly, the existence of multiple targets was detected by analyzing the variance of the results of multiple positionings in the monitored area. Secondly, cluster analysis of the located points was conducted by k-means unsupervised learning. As it is difficult to find an optimal cluster number for the k-means algorithm, a k-value fission algorithm based on beam resolution was proposed to determine the k value, after which the cluster centers were determined. Finally, to enhance the signal-to-noise ratio of the received signals, the beam directions were determined according to the cluster centers, and each target was positioned by the Time Difference Of Arrival (TDOA) algorithm using the signals of different beam directions received by the linearly constrained narrow-band beamformer. The simulation results show that, compared to recent TDOA and Probability Hypothesis Density (PHD) filter algorithms, the presented method can improve the signal-to-noise ratio of the received signals by about 10 dB, reduce the Cramér-Rao lower bound of the delay estimation error by 67%, and increase the relative positioning accuracy by more than 10 percentage points. Meanwhile, the proposed algorithm is simple and effective, each positioning is relatively independent, the time complexity is linear, and the performance is relatively stable.
    Virtual reality and multimedia computing
    Human interaction recognition based on RGB and skeleton data fusion model
    JI Xiaofei, QIN Linlin, WANG Yangyang
    2019, 39(11):  3349-3354.  DOI: 10.11772/j.issn.1001-9081.2019040633
    In recent years, significant progress has been made in human interaction recognition based on RGB video sequences, but due to the lack of depth information, accurate recognition results cannot be obtained for complex interactions. Depth sensors (such as Microsoft Kinect) can effectively improve the tracking accuracy of the joint points of the whole body and obtain three-dimensional data that accurately track the movements and changes of the human body. According to the respective characteristics of RGB and joint point data, a convolutional neural network structure model based on dual-stream fusion of RGB and joint point data was proposed. Firstly, the region of interest of the RGB video in the time domain was obtained by the ViBe algorithm, and the key frames were extracted and mapped to the RGB space to obtain a spatial-temporal map representing the video information, which was fed into the convolutional neural network to extract features. Then, a vector was constructed in each frame of the joint point sequence to extract the Cosine Distance (CD) and Normalized Magnitude (NM) features; the features of each frame were concatenated in the time order of the joint point sequence and fed into the convolutional neural network to learn more advanced temporal features. Finally, the softmax recognition probability matrices of the two information sources were fused to obtain the final recognition result. The experimental results show that combining RGB video information with joint point information can effectively improve the recognition of human interaction behavior, achieving recognition rates of 92.55% and 80.09% on the public SBU Kinect interaction database and NTU RGB+D database respectively, which verifies the effectiveness of the proposed model for recognizing interactions between two people.
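    The Cosine Distance (CD) feature between two joint vectors can be sketched as below; the abstract does not give the exact definition of the Normalized Magnitude (NM) feature, so only CD is shown, and treating joint vectors as plain 3D displacement vectors is an assumption:

```python
import math

def cosine_distance(u, v):
    """Cosine distance between two joint vectors: 1 - cos(angle between them).

    0 means the vectors point the same way; 1 means they are orthogonal;
    2 means they point in opposite directions.
    """
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)
```

Because it depends only on direction, not magnitude, the feature is invariant to body size, which is useful when comparing skeletons of different subjects.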
    Low coverage point cloud registration algorithm based on region segmentation
    TANG Hui, ZHOU Mingquan, GENG Guohua
    2019, 39(11):  3355-3360.  DOI: 10.11772/j.issn.1001-9081.2019040727
    Aiming at the problems of high time complexity, slow convergence and error-prone matching in low-coverage point cloud registration, a point cloud registration algorithm based on region segmentation was proposed. Firstly, the volume integral invariant was used to calculate the concavity and convexity of points on the point cloud, and the concave and convex feature point sets were extracted. Secondly, the regions of the feature points were partitioned by a segmentation algorithm based on mixed manifold spectral clustering, and the regions were registered by the Iterative Closest Point (ICP) algorithm based on Singular Value Decomposition (SVD), so that accurate registration of the point clouds was achieved. The experimental results show that the proposed algorithm can greatly improve the coverage of point clouds by region segmentation, and the optimal rotation matrix of the rigid body transformation can be calculated without iteration. The algorithm increases the registration accuracy by more than 10% and reduces the registration time by more than 20%. Therefore, the proposed algorithm can achieve fast and accurate registration of point clouds with low coverage.
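    The "optimal rotation without iteration" inside SVD-based ICP is the classical Kabsch step: center both point sets, take the SVD of the cross-covariance, and read off the rotation and translation in closed form. A sketch of that single alignment step (the correspondence search of full ICP is omitted):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q: (n, d) arrays of corresponding points. Uses the SVD of the
    cross-covariance (Kabsch); the sign correction guards against reflections.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # +1 rotation, -1 reflection
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Full ICP alternates this closed-form step with nearest-neighbor correspondence search; the region segmentation in the paper improves the overlap those correspondences rely on.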
    Automatic method for left atrial appendage segmentation from ultrasound images based on deep learning
    HAN Luyi, HUANG Yunzhi, DOU Haoran, BAI Wenjuan, LIU Qi
    2019, 39(11):  3361-3365.  DOI: 10.11772/j.issn.1001-9081.2019040771
    Segmenting the Left Atrial Appendage (LAA) from ultrasound images is an essential step in obtaining clinical indicators, and accurately locating the target is both the prerequisite for and the difficulty of automatic, accurate segmentation. Therefore, a method combining deep-learning-based automatic localization with model-based segmentation was proposed to accomplish automatic segmentation of the LAA from ultrasound images. Firstly, a You Only Look Once (YOLO) model was trained as the network structure for automatic localization of the LAA. Secondly, the optimal weight files were determined on the validation set and the bounding box of the LAA was predicted. Finally, based on the correct location, the bounding box was magnified 1.5 times to serve as the initial contour, and the Chan-Vese (C-V) model was utilized to realize automatic segmentation of the LAA. Segmentation performance was evaluated by five metrics: accuracy, sensitivity, specificity, positive predictive value, and negative predictive value. The experimental results show that the proposed method achieves good automatic segmentation across different resolutions and visual modes; on small-sample data, localization performance is optimal at 1000 iterations with a correct location rate of 72.25%, and the C-V model reaches an accuracy of 98.09% given a correct location. Therefore, deep learning is a rather promising technique for automatic segmentation of the LAA from ultrasound images, as it can provide a good initial contour for contour-based segmentation algorithms.
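    Turning the predicted bounding box into the C-V initial contour only requires magnifying the box about its center by a factor of 1.5; a minimal sketch, assuming an (x, y, w, h) box convention with (x, y) as the top-left corner:

```python
def expand_box(x, y, w, h, scale=1.5):
    # Magnify a detected (x, y, w, h) box about its center; the scaled
    # box is then used as the initial contour of the C-V model.
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * scale, h * scale
    return cx - nw / 2.0, cy - nh / 2.0, nw, nh
```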
    Liver CT images segmentation based on fuzzy C-means clustering with spatial constraints
    WANG Rongmiao, ZHANG Fengfeng, ZHAN Wei, CHEN Jun, WU Hao
    2019, 39(11):  3366-3369.  DOI: 10.11772/j.issn.1001-9081.2019040611
    When applied to liver CT image segmentation, the traditional Fuzzy C-Means (FCM) clustering algorithm considers only the characteristics of individual pixels, and it cannot overcome the influence of uneven gray scale or the boundary leakage caused by blurred liver boundaries. To solve these problems, a Spatial Fuzzy C-Means (SFCM) clustering segmentation algorithm combined with spatial constraints was proposed. Firstly, a convolution kernel was constructed from the two-dimensional Gaussian distribution function, and the feature matrix was obtained by applying the kernel to extract the spatial information of the source image. Then, a spatial-constraint penalty term was introduced to update and optimize the objective function, yielding a new iteration equation. Finally, liver CT images were segmented by the new algorithm. The results show that the liver contour segmented by SFCM is more regular when segmenting liver CT images with gray unevenness and boundary leakage. The accuracy of SFCM reaches 92.8%, which is 2.3 and 4.3 percentage points higher than that of FCM and Intuitionistic Fuzzy C-Means (IFCM) respectively, and the over-segmentation rate of SFCM is 4.9 and 5.3 percentage points lower than that of FCM and IFCM respectively.
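    To illustrate the idea of spatially constrained FCM, the sketch below regularizes each pixel's memberships toward a neighbourhood average between FCM updates. Note this is a simplified stand-in: the paper introduces a Gaussian-kernel penalty term into the objective function itself, so the blending rule and all parameter names here are our assumptions, not the authors' iteration equation:

```python
import numpy as np

def neighbour_avg(u2d):
    # Mean membership over the 3x3 neighbourhood (wrap-around borders).
    return sum(np.roll(np.roll(u2d, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def sfcm(img, c=2, m=2.0, alpha=0.5, iters=30):
    # alpha weighs the spatial term; parameter names are illustrative.
    x = img.ravel().astype(float)
    v = np.linspace(x.min(), x.max(), c)            # initial cluster centres
    for _ in range(iters):
        d = np.abs(x[None, :] - v[:, None]) + 1e-9  # pixel-centre distances
        u = 1.0 / d ** (2.0 / (m - 1.0))            # standard FCM memberships
        u /= u.sum(axis=0)
        # Spatial constraint: blend each membership map with its
        # neighbourhood average so isolated noisy pixels are suppressed.
        s = np.stack([neighbour_avg(ui.reshape(img.shape)).ravel() for ui in u])
        u = (1.0 - alpha) * u + alpha * s
        u /= u.sum(axis=0)
        um = u ** m
        v = um @ x / um.sum(axis=1)                 # update cluster centres
    return u.argmax(axis=0).reshape(img.shape)
```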
    Frontier & interdisciplinary applications
    Multi-view deep anomaly detection framework for vehicle refueling behaviors based on spatio-temporal data fusion
    DING Jingquan, MA Bo, LI Xiao
    2019, 39(11):  3370-3375.  DOI: 10.11772/j.issn.1001-9081.2019040670
    The multi-source heterogeneity and complicated relationships of vehicle refueling spatio-temporal data pose great challenges to existing anomaly detection approaches. Aiming at this problem, a multi-view deep anomaly detection framework for vehicle refueling based on spatio-temporal data fusion was proposed. Firstly, static information and dynamic activity data were correlated, fused and managed based on a Unified Conceptual Model (UCM). Secondly, the spatio-temporal data were encoded and converted according to a spatial view, a temporal view and a semantic view. Finally, a deep anomaly detection framework was constructed on top of these views. The experimental results on a vehicle refueling spatio-temporal dataset show that the tested anomaly detection approaches achieve an average decrease of 10.73% in Root Mean Square Error (RMSE), and the proposed multi-view spatio-temporal anomaly detection framework decreases RMSE by 19.36% compared with LSTM (Long Short-Term Memory), the best of the state-of-the-art methods. On the credit card fraud dataset, the Matthews Correlation Coefficient (MCC) of the proposed method is 32.78% higher than that of the Logistic Regression model. All experimental results demonstrate the effectiveness of the proposed anomaly detection framework.
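    The two evaluation measures cited above (RMSE and MCC) are standard; for reference, minimal implementations:

```python
import math

def rmse(y_true, y_pred):
    # Root Mean Square Error over paired observations.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

def mcc(tp, tn, fp, fn):
    # Matthews Correlation Coefficient from binary confusion-matrix counts.
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```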
    Procurement-production-distribution joint scheduling model in job shop environment
    ZHANG Weicun, GAO Rui, ZHANG Man
    2019, 39(11):  3383-3390.  DOI: 10.11772/j.issn.1001-9081.2019040712
    Aiming at the issue that Integrated Production and Distribution Scheduling (IPDS) models rarely consider complex production environments or procurement, a model of Integrated Purchase, Production and Distribution Scheduling (IPPDS) that minimizes the order completion time in a job shop environment was established, and an improved Dynamic Artificial Bee Colony (DABC) algorithm was used to solve it. Based on the characteristics of IPPDS, firstly, a two-dimensional real-number matrix coding method was adopted to realize the matching between tasks (processing and transportation) and resources (equipment and vehicles). Secondly, a process-based decoding method was adopted, and methods to satisfy the constraints of different tasks were designed into the decoding process to ensure its feasibility. Finally, a dynamic coordination mechanism and local heuristic information for the leading and following bees were designed within the algorithm. Appropriate parameter intervals for DABC were obtained by experiments, and the experimental results show that, compared with piecewise scheduling and IPDS, the IPPDS strategy reduces the scheduling time by 35.59% and 30.95% respectively. The DABC algorithm improves the solution quality by 2.54% on average compared with the Artificial Bee Colony (ABC) algorithm, and by 6.99% on average compared with the Adapted Genetic Algorithm (AGA). Therefore, the IPPDS strategy can meet customer requirements more quickly, and the DABC algorithm not only reduces the number of parameters to be set but also has good exploration and exploitation ability.
    Visual decision support platform for air pollution exposure risk prevention and control
    XIE Jing, ZOU Bin, LI Shenxin, ZHAO Xiuge, QIU Yonghong
    2019, 39(11):  3391-3397.  DOI: 10.11772/j.issn.1001-9081.2019040693
    China's air pollution control policy has gradually shifted from pollution control to risk prevention and control, while existing air quality monitoring equipment and platform services are limited to environmental monitoring rather than exposure monitoring. Aiming at this problem, a comprehensive visual analysis and decision support platform based on B/S architecture, the Air Pollution Exposure Risk Measurement System (APERMS), was designed and developed. Firstly, based on air pollution concentration monitoring data and spatio-temporal exposure behavior patterns, a complete air pollution exposure risk measurement technology route was researched and integrated, covering pollution concentration mapping, individual exposure measurement, population exposure measurement and exposure risk assessment. Secondly, following the principles of high availability and reliability, the overall system architecture, database and functional modules were designed. Finally, GIS and J2EE Web technologies were used to complete the development of APERMS, realizing high spatio-temporal resolution simulation of air pollution concentration distributions, accurate assessment of individual and population exposure to air pollution, and comprehensive evaluation of air pollution exposure risk levels. APERMS is mainly intended for the air pollution monitoring and environmental health management industries, providing effective technical support for risk aversion as well as pollution prevention and control.
    Housing recommendation method based on user network embedding
    LIU Tong, ZENG Cheng, HE Peng
    2019, 39(11):  3398-3402.  DOI: 10.11772/j.issn.1001-9081.2019040721
    With the rapid development of the hotel industry, online hotel reservation systems have become popular, and helping users quickly find the housing they need from massive housing information is a key problem for such systems. Aiming at user cold-start and data sparsity in housing recommendation, a User Network Embedding Recommendation (UNER) method based on network embedding was proposed. Firstly, two user networks were constructed from the users' historical behavior data and tag information in the system. Then, the networks were mapped into a low-dimensional vector space by the network embedding method to obtain vector representations of the user nodes, and the user similarity matrix was computed from the user vectors. Finally, housing recommendations were made for each user according to this matrix. The experimental data come from the hotel reservation system of "Shuidongxiangshe" in Guizhou. The experimental results show that, compared with the user-based collaborative filtering algorithm, the proposed method improves the comprehensive evaluation index (F1) by 20 percentage points and the Mean Average Precision (MAP) by 11 percentage points, reflecting the superiority of the method.
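    The final step, recommending housing from the embedding-derived user similarity matrix, can be sketched as similarity-weighted voting over other users' bookings. The cosine similarity matrix follows the abstract; the voting aggregation and all names below are our illustrative assumptions:

```python
import numpy as np

def user_similarity(emb):
    # Cosine similarity matrix between user embedding vectors (one per row).
    n = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return n @ n.T

def recommend(emb, interactions, user, k=2):
    # Score each housing item by similarity-weighted votes of other users.
    # interactions: binary user-by-item matrix (1 = user booked the item).
    sim = user_similarity(emb)[user].copy()
    sim[user] = 0.0                             # exclude the user themself
    scores = sim @ interactions
    scores[interactions[user] > 0] = -np.inf    # drop already-booked items
    return np.argsort(scores)[::-1][:k]
```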
    RMB exchange rate forecast embedded with Internet public opinion intensity
    WANG Jixiang, GUO Yi, QI Tianmei, WANG Zhihong, LI Zhen, TANG Minwei
    2019, 39(11):  3403-3408.  DOI: 10.11772/j.issn.1001-9081.2019040726
    Aiming at the poor prediction performance caused by a single data source in current RMB exchange rate forecasting research, a forecasting technique based on Internet public opinion intensity was proposed; by comparing and analyzing multiple data sources, the forecast error of the RMB exchange rate was effectively reduced. Firstly, Internet foreign exchange news data and historical market data were fused, and the multi-source text data were converted into computable vectors. Secondly, five feature combinations based on sentiment feature vectors were constructed and compared, and the feature combination embedding the intensity of Internet public opinion was taken as the input of the forecast models. Finally, a temporal sliding window over the foreign exchange public opinion data was designed, and an exchange rate forecast model based on machine learning was built. Experimental results show that the feature combination embedding Internet public opinion outperforms the combination without public opinion by 9.8% in Root Mean Squared Error (RMSE) and 16.2% in Mean Absolute Error (MAE). At the same time, the forecast model based on Long Short-Term Memory network (LSTM) is better than those based on Support Vector Regression (SVR), Decision Tree regression (DT) and Deep Neural Network (DNN).
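    The temporal sliding window can be sketched as a generic supervised-pair construction: each window of past (sentiment, rate) observations predicts the next exchange rate. The field layout and window length are illustrative; the paper's exact feature set may differ:

```python
def sliding_windows(series, window):
    # series: list of per-day observation tuples, e.g. (sentiment, rate),
    # with the exchange rate assumed to be the last field of each tuple.
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])     # the past `window` observations
        y.append(series[i + window][-1])   # the next-step exchange rate
    return X, y
```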
    Rumor propagation model based on edge-based compartmental theory
    LUO Jingyu, TANG Ningjiu
    2019, 39(11):  3409-3414.  DOI: 10.11772/j.issn.1001-9081.2019040739
    Aiming at the problem that spreader nodes are influenced by their neighbors during recovery in the rumor propagation process, a rumor propagation model based on the edge-based compartmental theory was proposed. Firstly, a set of dynamic equations was established using the improved edge-based compartmental theory, and the propagation range and outbreak threshold were analyzed theoretically. Then the influences of factors including the network structure, the propagation probability and the basic recovery probability were analyzed through numerical simulation. Finally, on this basis, an effective immunization strategy for controlling the rumor propagation range and outbreak threshold was presented. The results of theoretical analysis and numerical simulation show that, compared with the classical SIR (Susceptible-Infected-Recovered) rumor propagation model, the presented model shortens the period of rumor propagation and slightly increases the peak proportion of spreader nodes. Comparison experiments with the random immunization strategy show that, in the proposed strategy, preferentially immunizing the edges with a higher product of endpoint degrees works better when the rumor has a small propagation probability, while preferentially immunizing the edges with a lower product of endpoint degrees works better when the propagation probability is large. The study indicates that the presented model conforms to the characteristics of the rumor fading phase and can provide theoretical and numerical support for rumor prediction and control.
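    The classical SIR baseline that the proposed model is compared against can be simulated directly; a simple Euler integration of the mean-field SIR equations, where the rates, initial conditions and step size below are illustrative choices:

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, steps=2000, dt=0.01):
    # Euler integration of ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i,
    # dr/dt = gamma*i.  Returns the final state and the spreader peak.
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(steps):
        ds = -beta * s * i
        dr = gamma * i
        di = -ds - dr                      # ensures s + i + r is conserved
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak = max(peak, i)
    return s, i, r, peak
```

With `beta` above the `gamma`-determined threshold, the spreader fraction rises to a peak and then fades, matching the rumor fading phase discussed above.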
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn