
Table of Contents

    10 August 2021, Volume 41 Issue 8
    Artificial intelligence
    Knowledge graph survey: representation, construction, reasoning and knowledge hypergraph theory
    TIAN Ling, ZHANG Jinchuan, ZHANG Jinhao, ZHOU Wangtao, ZHOU Xue
    2021, 41(8):  2161-2186.  DOI: 10.11772/j.issn.1001-9081.2021040662
Knowledge Graphs (KGs) strongly support research on knowledge-driven artificial intelligence. With this in mind, the existing technologies of knowledge graphs and knowledge hypergraphs were analyzed and summarized. First, starting from the definition and development history of knowledge graphs, the classification and architecture of knowledge graphs were introduced. Second, the existing knowledge representation and storage methods were explained. Then, based on the construction process of knowledge graphs, several construction techniques were analyzed. In particular, for knowledge reasoning, an important part of knowledge graphs, three typical approaches were analyzed: logic rule-based, embedding representation-based, and neural network-based. Furthermore, the research progress of knowledge hypergraphs was introduced together with heterogeneous hypergraphs. To effectively present and extract hyper-relational characteristics and realize the modeling of hyper-relational data as well as fast knowledge reasoning, a three-layer architecture of knowledge hypergraphs was proposed. Finally, the typical application scenarios of knowledge graphs and knowledge hypergraphs were summarized, and future research directions were discussed.
    Cross-modal retrieval algorithm based on multi-level semantic discriminative guided hashing
    LIU Fangming, ZHANG Hong
    2021, 41(8):  2187-2192.  DOI: 10.11772/j.issn.1001-9081.2020101607
Most cross-modal hashing methods use a binary matrix to represent the degree of correlation, so they cannot capture the high-level semantic information in multi-label data, and they ignore maintaining the semantic structure and the discriminability of data features. Therefore, a cross-modal retrieval algorithm named ML-SDH (Multi-Level Semantics Discriminative guided Hashing) was proposed. In the algorithm, a multi-level semantic similarity matrix was used to discover deeply correlated information in the cross-modal data, and equally guided cross-modal hashing was used to express the correlations in semantic structure and discriminative classification. As a result, not only was the goal of encoding the high-level semantic information of multi-label data achieved, but the constructed multi-level semantic structure also ensured the distinguishability and semantic similarity of the finally learned hash codes. On the NUS-WIDE dataset with a hash code length of 32 bit, the mean Average Precision (mAP) of the proposed algorithm in two retrieval tasks is 19.48, 14.50 and 1.95 percentage points and 16.32, 11.82 and 2.08 percentage points higher than those of the DCMH (Deep Cross-Modal Hashing), PRDH (Pairwise Relationship guided Deep Hashing) and EGDH (Equally-Guided Discriminative Hashing) algorithms respectively.
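The contrast with a plain binary similarity matrix is easy to illustrate. The sketch below grades similarity by the proportion of shared labels; the construction and names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multilevel_similarity(labels: np.ndarray) -> np.ndarray:
    """Multi-level semantic similarity from multi-label annotations.

    labels: (n_samples, n_classes) binary matrix. A binary scheme only
    records whether two samples share any label; grading each entry by
    the proportion of shared labels preserves high-level semantics.
    """
    inter = labels @ labels.T                         # shared labels
    counts = labels.sum(axis=1, keepdims=True)
    union = counts + counts.T - inter                 # label union size
    return np.where(union > 0, inter / union, 0.0)    # Jaccard-graded

# Two samples sharing 1 of 3 labels get similarity 1/3 instead of just 1.
L = np.array([[1, 1, 0], [1, 0, 1], [0, 0, 1]])
print(multilevel_similarity(L))
```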
    Disambiguation method of multi-feature fusion based on HowNet sememe and Word2vec word embedding representation
    WANG Wei, ZHAO Erping, CUI Zhiyuan, SUN Hao
    2021, 41(8):  2193-2198.  DOI: 10.11772/j.issn.1001-9081.2020101625
Aiming at the problems that low-frequency words are poorly represented by existing word vectors, that the semantic information they express is easily confused, and that existing disambiguation models cannot distinguish polysemous words accurately, a multi-feature fusion disambiguation method based on word vector fusion was proposed. In the method, the word vectors expressed by HowNet sememes and the word vectors generated by Word2vec (Word to vector) were fused to complement the polysemy information of words and improve the representation quality of low-frequency words. Firstly, the cosine similarity between the entity to be disambiguated and each candidate entity was calculated to obtain the context feature similarity. After that, a clustering algorithm and the HowNet knowledge base were used to obtain the entity category feature similarity. Then, an improved Latent Dirichlet Allocation (LDA) topic model was used to extract topic keywords and calculate the entity topic feature similarity. Finally, word sense disambiguation of polysemous words was realized by the weighted fusion of the above three types of feature similarity. Experimental results on a test set from the Tibetan animal husbandry field show that the accuracy of the proposed method (90.1%) is 7.6 percentage points higher than that of a typical graph model disambiguation method.
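A minimal sketch of the final fusion step: three per-candidate feature similarities are weighted and summed, and the best-scoring sense wins. The weights, candidate names and similarity values are illustrative assumptions.

```python
def fuse_similarities(candidates, weights=(0.4, 0.3, 0.3)):
    """candidates: {entity: (sim_context, sim_category, sim_topic)}.
    Returns the candidate entity with the highest weighted similarity."""
    def score(sims):
        return sum(w * s for w, s in zip(weights, sims))
    return max(candidates, key=lambda e: score(candidates[e]))

# Hypothetical similarities for one mention against two candidate senses.
print(fuse_similarities({"yak_animal": (0.82, 0.90, 0.75),
                         "yak_brand":  (0.41, 0.20, 0.30)}))  # yak_animal
```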
    Reasoning method based on linear error assertion
    WU Peng, WU Jinzhao
    2021, 41(8):  2199-2204.  DOI: 10.11772/j.issn.1001-9081.2021030390
Errors are common in systems, and in safety-critical systems a quantitative analysis of errors is necessary; however, previous reasoning and verification methods rarely consider errors. Since errors are usually described with interval numbers, the linear assertion was extended and the concept of a linear error assertion was given. Furthermore, combined with the properties of convex sets, a method to solve for the vertices of a linear error assertion was proposed, and its correctness was proved. By analyzing the related concepts and theorems, the problem of judging whether an implication relationship holds between linear error assertions was converted into the problem of judging whether the vertices of the precursor assertion are contained in the zero set of the successor assertion, yielding easy-to-program steps for judging the implication relationship between linear error assertions. Finally, an application of this method to train acceleration was given, and the correctness of the method was tested with large-scale random examples. Compared with reasoning methods without error semantics, this method has advantages in the reasoning and verification of systems with error parameters.
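The reduction at the heart of the abstract can be stated compactly. The notation below ($V$ for the vertex set, $Z$ for the zero set) is illustrative rather than the paper's own:

```latex
% Implication between linear error assertions, reduced to a finite check:
\[
  \varphi_1 \Rightarrow \varphi_2
  \quad\Longleftrightarrow\quad
  V(\varphi_1) \subseteq Z(\varphi_2),
\]
% where V(phi_1) is the finite vertex set of the convex polytope determined
% by the precursor assertion and Z(phi_2) is the zero set of the successor
% assertion; convexity is what makes checking only the vertices sufficient.
```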
    Automated English essay scoring method based on multi-level semantic features
    ZHOU Xianbing, FAN Xiaochao, REN Ge, YANG Yong
    2021, 41(8):  2205-2211.  DOI: 10.11772/j.issn.1001-9081.2020101572
Automated Essay Scoring (AES) technology automatically analyzes and scores essays, and has become one of the hot research problems in applying natural language processing to education. Aiming at current AES methods that separate deep and shallow semantic features and ignore the impact of multi-level semantic fusion on essay scoring, a neural network model based on Multi-Level Semantic Features (MLSF) was proposed for AES. Firstly, a Convolutional Neural Network (CNN) was used to capture local semantic features and a hybrid neural network was used to capture global semantic features, so that the semantic features of the essay were obtained at a deep level. Secondly, the topic-layer feature was obtained from the text-level topic vector of the essay. At the same time, since grammatical errors and language richness are difficult for deep learning models to mine, a small number of handcrafted features were constructed to obtain linguistic features of the essay at a shallow level. Finally, the essay was automatically scored through feature fusion. Experimental results show that the proposed model improves performance significantly on all subsets of the public dataset of the Kaggle ASAP (Automated Student Assessment Prize) competition, with an average Quadratic Weighted Kappa (QWK) of 79.17%, validating the effectiveness of the model in AES tasks.
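A minimal sketch of the final fusion stage, assuming PyTorch and that the deep, topic and handcrafted features have already been extracted; all layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MLSFFusionHead(nn.Module):
    """Concatenate deep semantic, topic-level and shallow linguistic
    features, then regress a score in [0, 1] (rescaled per prompt)."""
    def __init__(self, d_deep=256, d_topic=64, d_shallow=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_deep + d_topic + d_shallow, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid())

    def forward(self, deep, topic, shallow):
        return self.mlp(torch.cat([deep, topic, shallow], dim=-1))

head = MLSFFusionHead()
score = head(torch.randn(2, 256), torch.randn(2, 64), torch.randn(2, 8))
print(score.shape)  # torch.Size([2, 1])
```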
    Distribution entropy penalized support vector data description
    HU Tianjie, HU Wenjun, WANG Shitong
    2021, 41(8):  2212-2218.  DOI: 10.11772/j.issn.1001-9081.2020101542
To address the problem that the traditional Support Vector Data Description (SVDD) is quite sensitive to the penalty parameter, a new detection method called Distribution Entropy Penalized SVDD (DEP-SVDD) was proposed. First, the normal samples were taken as the global distribution of the data, and a distance measure between each sample point and the center of the normal-sample distribution was defined in the Gaussian kernel space. Then, a probability was defined for every data point to estimate how likely the point belongs to the normal or the abnormal samples. Finally, this probability was used to construct a distribution-entropy-based penalty degree to punish the corresponding samples. On 9 real-world datasets, the proposed method was compared with SVDD, Density Weighted SVDD (DW-SVDD), Position regularized SVDD (P-SVDD), K-Nearest Neighbor (KNN) and isolation Forest (iForest). The results show that DEP-SVDD achieves the highest classification precision on 6 datasets, which demonstrates that DEP-SVDD has performance advantages over many anomaly detection methods.
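One plausible reading of the first two steps, sketched under stated assumptions: the kernel-space distance to the distribution centre follows from the standard RBF-kernel expansion, while the probability and entropy terms are an illustrative interpretation of the abstract, not the paper's exact definitions.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dist_to_center(X, gamma=0.5):
    """Squared kernel-space distance of each sample to the distribution
    centre of X (for an RBF kernel, k(x, x) = 1)."""
    K = rbf(X, X, gamma)
    return 1.0 - 2.0 * K.mean(axis=1) + K.mean()

def entropy_penalty(X, gamma=0.5):
    """Hypothetical penalty: samples close to the centre get a high
    'normal' probability; the per-sample entropy terms then weight how
    strongly each point is punished in the SVDD objective."""
    p = np.exp(-dist_to_center(X, gamma))
    p /= p.sum()
    return -p * np.log(p + 1e-12)

X = np.random.default_rng(0).normal(size=(50, 3))
print(entropy_penalty(X)[:5])
```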
    Feature construction algorithm for multi-target regression via radial basis function
    YAN Haisheng, MA Xinqiang
    2021, 41(8):  2219-2224.  DOI: 10.11772/j.issn.1001-9081.2020101578
Multi-Target Regression (MTR) is a regression problem in which a single sample has multiple continuous outputs. Existing multi-target regression algorithms learn all regression models in the same feature space and ignore the specific characteristics of each output target. To solve this problem, a feature construction algorithm for multi-target regression via radial basis functions was proposed. Firstly, clustering was applied for each output target, with the output of that target as an additional feature, and the bases of the target-specific feature space were constructed in the original feature space according to the cluster centers. Secondly, radial basis functions were utilized to map the original feature space into the target-specific feature space to construct target-specific features, and then a base regression model was built for each target on these features. Finally, a low-rank learning method was applied to explore and utilize the correlation between the output targets in the latent space formed by the outputs of the base regression models. Experiments were conducted on 18 multi-target regression datasets, and the proposed algorithm was compared with classical regression algorithms such as Stacked Single-Target (SST), Ensemble of Regressor Chains (ERC) and Multi-layer Multi-target Regression (MMR). The results show that the proposed algorithm outperforms the comparison algorithms on 14 datasets and achieves the best average performance over the 18 datasets. It can be seen that target-specific features can improve the prediction accuracy of each output target, and combining them with low-rank learning of the inter-target correlation improves the overall prediction performance of multi-target regression.
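A minimal sketch of the first two steps for one target: cluster with the target appended as an extra feature, keep the centres as bases, then RBF-map the original features onto them. The cluster count and kernel width are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def target_specific_features(X, y_t, n_centers=10, gamma=1.0):
    """X: (n, d) inputs, y_t: (n,) one output target.
    Cluster in the augmented space [X | y_t], drop the target dimension
    from the centres, then RBF-map X onto those centres."""
    Z = np.hstack([X, y_t[:, None]])
    km = KMeans(n_clusters=n_centers, n_init=10).fit(Z)
    C = km.cluster_centers_[:, :X.shape[1]]        # bases in original space
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)                     # (n, n_centers)

X = np.random.rand(100, 5); y = np.random.rand(100)
print(target_specific_features(X, y).shape)        # (100, 10)
```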
    Improved AdaBoost algorithm based on base classifier coefficients and diversity
    ZHU Liang, XU Hua, CUI Xin
    2021, 41(8):  2225-2231.  DOI: 10.11772/j.issn.1001-9081.2020101584
Aiming at the low efficiency of the linear combination of base classifiers and the overfitting of the traditional AdaBoost algorithm, an improved algorithm based on base classifier coefficients and diversity, WD AdaBoost (AdaBoost based on Weight and Double-fault measure), was proposed. Firstly, according to the error rates of the base classifiers and the distribution of the sample weights, a new method to compute the base classifier coefficients was given to improve the combination efficiency of the base classifiers. Secondly, the double-fault measure was introduced into the base classifier selection strategy of WD AdaBoost to increase the diversity among base classifiers. On five datasets from different practical application fields, the CeffAda algorithm, which uses the new base classifier coefficient computation method, reduces the test error by 1.2 percentage points on average compared with the traditional AdaBoost algorithm; meanwhile, WD AdaBoost achieves a lower error rate than WLDF_Ada, AD_Ada (Adaptive to Detection AdaBoost), sk_AdaBoost and other algorithms. Experimental results show that WD AdaBoost can integrate base classifiers more efficiently, resist overfitting, and improve classification performance.
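The double-fault diversity measure used in the selection strategy is standard and easy to state; this generic sketch is not the paper's full selection routine.

```python
import numpy as np

def double_fault(pred_a, pred_b, y):
    """Fraction of samples misclassified by BOTH classifiers; a lower
    value means the pair of base classifiers is more diverse."""
    pred_a, pred_b, y = map(np.asarray, (pred_a, pred_b, y))
    return ((pred_a != y) & (pred_b != y)).mean()

print(double_fault([1, 1, 0, 0], [1, 0, 0, 1], [1, 0, 1, 0]))  # 0.25
```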
    Advanced computing
    Survey of research progress on crowdsourcing task assignment for evaluation of workers’ ability
    MA Hua, CHEN Yuepeng, TANG Wensheng, LOU Xiaoping, HUANG Zhuoxuan
    2021, 41(8):  2232-2241.  DOI: 10.11772/j.issn.1001-9081.2020101629
With the rapid development of Internet technology and the sharing economy, crowdsourcing, as a new crowd computing mode, has been widely applied and has recently become a research focus. Aiming at the characteristics of crowdsourcing applications and to ensure the completion quality of crowdsourcing tasks, existing studies have proposed different crowdsourcing task assignment methods from the perspective of evaluating workers' ability. Firstly, the concept and classification of crowdsourcing were introduced, and the workflow and task characteristics of crowdsourcing platforms were analyzed. On this basis, existing research on the evaluation of workers' ability was summarized. Then, crowdsourcing task assignment methods and the related challenges were reviewed from three different aspects: matching-based, planning-based and role-based collaboration. Finally, directions for future work were put forward.
    Machine breakdown rescheduling of flexible job shop based on improved imperialist competitive algorithm
    ZHANG Guohui, LU Xixi, HU Yifan, SUN Jinghe
    2021, 41(8):  2242-2248.  DOI: 10.11772/j.issn.1001-9081.2020101664
For the flexible job shop rescheduling problem with machine breakdown, an improved Imperialist Competitive Algorithm (ICA) was proposed. Firstly, a flexible job shop dynamic rescheduling model was established with the maximum completion time, machine energy consumption and total delay time as objective functions, and a linear weighting method was applied to the three objectives. Then, the ICA was improved to retain excellent information for the next generation: a roulette selection mechanism was added after the assimilation and revolution steps of the standard ICA, so that the excellent genes in the initial empires were retained and the updated empires were of better quality and closer to the optimal solution. Finally, after a machine breakdown, an event-driven rescheduling strategy was adopted to reschedule the unprocessed operations after the breakdown point. Based on production examples, simulation experiments were carried out on three hypothetical machine breakdown scenarios, and the proposed algorithm was compared with an improved Genetic Algorithm (GA) and a Genetic and Simulated Annealing Algorithm (GASA). Experimental results show that the proposed improved ICA is effective and feasible.
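A small sketch of two ingredients named above: the linear weighting of the three objectives, and a roulette selection that favours lower-cost solutions. The weights and the cost-inversion scheme are illustrative assumptions.

```python
import random

def weighted_cost(makespan, energy, tardiness, w=(0.5, 0.3, 0.2)):
    """Scalarize the three rescheduling objectives by linear weighting."""
    return w[0] * makespan + w[1] * energy + w[2] * tardiness

def roulette_select(population, costs):
    """Roulette-wheel selection for minimization: invert costs so that
    lower-cost solutions get a proportionally larger slice of the wheel."""
    weights = [1.0 / (c + 1e-9) for c in costs]
    return random.choices(population, weights=weights, k=1)[0]

solutions = ["s1", "s2", "s3"]
costs = [weighted_cost(120, 30, 10), weighted_cost(100, 45, 5),
         weighted_cost(110, 35, 20)]
print(roulette_select(solutions, costs))
```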
    Hybrid particle swarm optimization with multi-region sampling strategy to solve multi-objective flexible job-shop scheduling problem
    ZHANG Wenqiang, XING Zheng, YANG Weidong
    2021, 41(8):  2249-2257.  DOI: 10.11772/j.issn.1001-9081.2020101675
The Flexible Job-shop Scheduling Problem (FJSP) is a widely applied combinatorial optimization problem. Aiming at the problems that solving multi-objective FJSP is complex and the algorithms easily fall into local optima, a Hybrid Particle Swarm Optimization algorithm with a Multi-Region Sampling strategy (HPSO-MRS) was proposed to optimize both the makespan and the total machine delay time. The multi-region sampling strategy was able to determine the positions on the Pareto frontier that the particles belong to and, after sampling, guide corresponding moving directions for the particles in multiple regions of the Pareto frontier. Thus, the convergence ability of particles in multiple directions was adjusted, and the ability to distribute uniformly was improved to a certain extent. In addition, in encoding and decoding, a decoding strategy with an interpolation mechanism was used to eliminate potential local left shifts; in particle updating, the particle update method of traditional Particle Swarm Optimization (PSO) was combined with the crossover and mutation operators of the Genetic Algorithm (GA), which improved the diversity of the search process and prevented the algorithm from falling into local optima. The proposed algorithm was tested on the benchmark problems Mk01-Mk10 and compared with the Hybrid Particle Swarm Optimization algorithm (HPSO), Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ), Strength Pareto Evolutionary Algorithm 2 (SPEA2) and Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) in terms of effectiveness and operating efficiency. Significance analysis of the experimental results showed that HPSO-MRS was significantly better than the comparison algorithms on the convergence indexes Hyper Volume (HV) and Inverted Generational Distance (IGD) in 85% and 77.5% of the control groups respectively, that its distribution index Spacing was significantly better in 35% of the control groups, and that it was never significantly worse than the comparison algorithms on the three indexes. It can be seen that the proposed algorithm has better convergence and distribution performance than the others.
    Deep pipeline 5×5 convolution method based on two-dimensional Winograd algorithm
    HUANG Chengcheng, DONG Xiaoxiao, LI Zhao
    2021, 41(8):  2258-2264.  DOI: 10.11772/j.issn.1001-9081.2020101668
Aiming at problems of the two-dimensional Winograd convolution algorithm such as high memory bandwidth demand, high computational complexity, a long design and exploration cycle, and the inter-layer computing delay of cascaded convolutions, a double-buffered 5×5 convolutional layer design method based on the two-dimensional Winograd algorithm was proposed. Firstly, a column buffer structure was used to lay out the data, so as to reuse the overlapping data between adjacent tiles and reduce the memory bandwidth demand. Then, the repeated intermediate results in the addition process of the Winograd algorithm were precisely identified and reused to reduce the cost of additions, thereby decreasing the energy consumption and design area of the accelerator system. Finally, following the calculation process of the Winograd algorithm, a 6-stage pipeline structure was designed and efficient calculation of 5×5 convolutions was realized. Experimental results show that, with essentially no effect on the prediction accuracy of the Convolutional Neural Network (CNN), this 5×5 convolution method reduces the multiplication cost by 83% compared with traditional convolution, with a speedup of 5.82; compared with cascading 3×3 two-dimensional Winograd convolutions to build 5×5 convolutions, the proposed method reduces the multiplication cost by 12%, the memory bandwidth demand by about 24.2%, and the computing time by 20%.
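The 5×5 tiling in the paper is hardware-specific, but the transform structure that saves multiplications is easiest to see in the classic one-dimensional F(2,3) case below (4 multiplications instead of 6). The matrices are the standard Winograd ones, not the paper's.

```python
import numpy as np

# Classic 1D Winograd F(2,3): 2 outputs of a 3-tap filter with 4 multiplies.
BT = np.array([[1, 0, -1,  0],
               [0, 1,  1,  0],
               [0, -1, 1,  0],
               [0, 1,  0, -1]], float)   # input transform
G  = np.array([[1,  0,  0],
               [.5, .5, .5],
               [.5, -.5, .5],
               [0,  0,  1]], float)      # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], float)   # output transform

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs."""
    return AT @ ((G @ g) * (BT @ d))     # elementwise product: 4 multiplies

d = np.array([1., 2., 3., 4.]); g = np.array([1., 1., 1.])
print(winograd_f23(d, g))                # [6. 9.], matches
print(np.convolve(d, g[::-1], 'valid'))  # direct correlation for comparison
```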
    Chaotic elite Harris hawks optimization algorithm
TANG Andi, HAN Tong, XU Dengwu, XIE Lei
    2021, 41(8):  2265-2272.  DOI: 10.11772/j.issn.1001-9081.2020101610
Aiming at the shortcomings of the Harris Hawks Optimization (HHO) algorithm, such as low convergence accuracy, low convergence speed and easily falling into local optima, a Chaotic Elite HHO (CEHHO) algorithm was proposed. Firstly, an elite hierarchy strategy was introduced to make full use of the dominant population, enhancing the population diversity and improving the convergence speed and accuracy of the algorithm. Secondly, the Tent chaotic map was used to adjust the key parameters of the algorithm. Thirdly, a nonlinear energy factor adjustment strategy was adopted to balance exploitation and exploration. Finally, a Gaussian random walk strategy was used to disturb the optimal individual, and when the algorithm stagnated, the random walk strategy enabled it to jump out of local optima effectively. The optimization ability of the algorithm was evaluated through simulation experiments on 20 benchmark functions in different dimensions. Experimental results show that the improved algorithm outperforms the Whale Optimization Algorithm (WOA), Grey Wolf Optimization (GWO) algorithm, Particle Swarm Optimization (PSO) algorithm and Biogeography-Based Optimization (BBO) algorithm, and that its performance is significantly better than that of the original HHO algorithm, which proves the effectiveness of the improvements.
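A minimal sketch of the Tent chaotic map used to perturb key parameters; the initial value and the rounding are illustrative.

```python
def tent(x, mu=2.0):
    """Tent chaotic map on (0, 1): piecewise linear, but its iterates
    cover the interval far more evenly than independent random restarts."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def tent_sequence(x0=0.37, n=8):
    seq = []
    for _ in range(n):
        x0 = tent(x0)
        seq.append(round(x0, 4))
    return seq

print(tent_sequence())  # chaotic values that can rescale HHO parameters
```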
    Multimedia computing and computer simulation
    Review of deep learning-based medical image segmentation
    CAO Yuhong, XU Hai, LIU Sun'ao, WANG Zixiao, LI Hongliang
    2021, 41(8):  2273-2287.  DOI: 10.11772/j.issn.1001-9081.2020101638
As a fundamental and key task in computer-aided diagnosis, medical image segmentation aims to accurately recognize target regions such as organs, tissues and lesions at the pixel level. Different from natural images, medical images show high texture complexity and have boundaries that are difficult to judge because of ambiguity, largely caused by the noise that the limitations of imaging technology and equipment introduce. Furthermore, annotating medical images highly depends on the expertise and experience of experts, leading to limited available annotations for training and potential annotation errors. Because medical images suffer from ambiguous boundaries, limited annotated data and large annotation errors, it is a great challenge for auxiliary diagnosis systems based on traditional image segmentation algorithms to meet the demands of clinical applications. Recently, with the wide application of Convolutional Neural Networks (CNN) in computer vision and natural language processing, deep learning-based medical image segmentation algorithms have achieved tremendous success. Firstly, the latest research progress of deep learning-based medical image segmentation was summarized, including the basic architectures, loss functions and optimization methods of medical image segmentation algorithms. Then, for the limited annotated data of medical images, the mainstream semi-supervised research on medical image segmentation was summarized and analyzed. Besides, studies related to measuring the uncertainty of annotation errors were introduced. Finally, the characteristics of medical image segmentation were summarized and analyzed, and potential future trends were listed.
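Among the loss functions such a survey covers, the soft Dice loss is representative for medical segmentation; below is a standard PyTorch formulation, not tied to any single reviewed paper.

```python
import torch

def dice_loss(pred, target, eps=1.0):
    """Soft Dice loss for binary segmentation.
    pred: sigmoid probabilities, target: binary mask, both (N, 1, H, W).
    The overlap term makes the loss robust to foreground/background
    imbalance, which is common in lesion segmentation."""
    inter = (pred * target).sum(dim=(2, 3))
    union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

pred = torch.rand(4, 1, 64, 64)
mask = (torch.rand(4, 1, 64, 64) > 0.9).float()
print(dice_loss(pred, mask))
```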
    Classification of functional magnetic resonance imaging data based on semi-supervised feature selection by spectral clustering
    ZHU Cheng, ZHAO Xiaoqi, ZHAO Liping, JIAO Yuhong, ZHU Yafei, CHENG Jianying, ZHOU Wei, TAN Ying
    2021, 41(8):  2288-2293.  DOI: 10.11772/j.issn.1001-9081.2020101553
Aiming at the high dimensionality and small sample size of functional Magnetic Resonance Imaging (fMRI) data, a Semi-Supervised Feature Selection by Spectral Clustering (SS-FSSC) model was proposed. Firstly, a prior brain region template was used to extract time series signals. Then, the Pearson correlation coefficient and the Order Statistics Correlation Coefficient (OSCC) were selected to describe the functional connectivity features between brain regions, and spectral clustering was performed on the features. Finally, a feature importance criterion based on the Constraint score was adopted to select feature subsets, which were input into a Support Vector Machine (SVM) classifier. Through 100 repetitions of five-fold cross-validation on the COBRE (Center for Biomedical Research Excellence) schizophrenia public dataset, it is found that when 152 features are retained, the highest average accuracy of the proposed model for schizophrenia classification is about 77%, and its highest single-run accuracy is 95.83%. Analysis of the experimental results shows that by retaining only 16 functional connectivity features for classifier training, the model can stably achieve an average accuracy of more than 70%. In addition, among the 10 brain regions corresponding to the selected functional connections, the Intracalcarine Cortex occurs most frequently, which is consistent with existing research findings on schizophrenia.
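A minimal sketch of the functional-connectivity feature step: Pearson correlations between region time series, vectorized from the upper triangle. The region count and data are illustrative.

```python
import numpy as np

def fc_features(ts):
    """ts: (n_regions, n_timepoints) time series from a brain atlas.
    Returns the upper triangle of the Pearson correlation matrix as a
    functional-connectivity feature vector."""
    r = np.corrcoef(ts)
    iu = np.triu_indices_from(r, k=1)
    return r[iu]

ts = np.random.default_rng(1).normal(size=(90, 150))  # e.g. 90-region atlas
print(fc_features(ts).shape)                          # (4005,) = 90*89/2
```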
    Review of remote sensing image change detection
    REN Qiuru, YANG Wenzhong, WANG Chuanjian, WEI Wenyu, QIAN Yunyun
    2021, 41(8):  2294-2305.  DOI: 10.11772/j.issn.1001-9081.2020101632
As a key technology of land use/land cover detection, change detection aims to detect the changed parts and their types in remote sensing data of the same region acquired in different periods. In view of the problems of traditional change detection methods, such as heavy manual labor and poor detection results, a large number of change detection methods based on remote sensing images have been proposed. In order to deepen the understanding of remote sensing image change detection and to support further study of its methods, a comprehensive review was carried out by sorting, analyzing and comparing a large number of change detection studies. Firstly, the development process of change detection was described. Then, the research progress of change detection was summarized in detail from three aspects: data selection and preprocessing, change detection techniques, and post-processing and accuracy evaluation, where change detection techniques were summarized mainly in terms of the analysis unit and the comparison method. Finally, the problems in each stage of change detection were summarized and future development directions were proposed.
    Indefinite reconstruction method of spatial data based on multi-resolution generative adversarial network
    GUAN Qijie, ZHANG Ting, LI Deya, ZHOU Shaojing, DU Yi
    2021, 41(8):  2306-2311.  DOI: 10.11772/j.issn.1001-9081.2020101541
In the field of indefinite spatial data reconstruction, Multiple-Point Statistics (MPS) has been widely used, but its high computational cost limits its applicability. A spatial data reconstruction method based on a multi-resolution Generative Adversarial Network (GAN) model was proposed, using a pyramid-structured fully convolutional GAN model to learn from training images at different resolutions. In the method, detailed features were captured from high-resolution training images and large-scale features from low-resolution training images, so that the reconstructed image contained the global and local structural information of the training image while maintaining a certain degree of randomness. Compared with representative MPS algorithms and a GAN method applied to spatial data reconstruction, the proposed algorithm reduces the total time of 10 reconstructions by about 1 h, reduces the difference between the average porosity of the reconstructions and the porosity of the training image to 0.0002, and yields variogram and Multi-Point Connectivity (MPC) curves closer to those of the training image, showing better reconstruction quality.
    Noise image segmentation by adaptive wavelet transform based on artificial bee swarm and fuzzy C-means
    SHI Xuesong, LI Xianhua, SUN Qing, SONG Tao
    2021, 41(8):  2312-2317.  DOI: 10.11772/j.issn.1001-9081.2020101684
Aiming at the problem that the traditional Fuzzy C-Means (FCM) clustering algorithm is easily affected by noise when processing noisy images, a noise image segmentation method with wavelet-domain feature enhancement based on FCM was proposed. Firstly, the noisy image was decomposed by a two-dimensional wavelet transform. Secondly, edge enhancement was applied to the approximation coefficients, the Artificial Bee Colony (ABC) optimization algorithm was used to threshold the detail coefficients, and wavelet reconstruction was then carried out on the processed coefficients. Finally, the reconstructed image was segmented by the FCM algorithm. Five typical grayscale images were selected and corrupted with Gaussian noise and salt-and-pepper noise respectively, various methods were used to segment them, and the Peak Signal-to-Noise Ratio (PSNR) and Misclassification Error (ME) of the segmented images were taken as performance indicators. Experimental results show that the PSNR of the images segmented by the proposed method is at most 281% and 54% higher than that of the traditional FCM segmentation method and a Particle Swarm Optimization (PSO) based segmentation method respectively, and the ME of the proposed method's segmentations is at most 55% and 41% lower than those of the comparison methods respectively. It can be seen that the proposed method preserves edge texture information well, and its noise resistance and segmentation performance are improved.
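A sketch of the wavelet preprocessing stage using PyWavelets, under stated assumptions: the detail-coefficient threshold is fixed here, whereas the paper tunes it with ABC, and the approximation-coefficient edge enhancement is omitted.

```python
import numpy as np
import pywt

def wavelet_preprocess(img, thr=20.0, wavelet='db2'):
    """2-D DWT, soft-threshold the detail coefficients, reconstruct;
    the result is then fed to FCM segmentation."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    cH, cV, cD = (pywt.threshold(c, thr, mode='soft') for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

noisy = np.random.rand(128, 128) * 255
print(wavelet_preprocess(noisy).shape)  # (128, 128)
```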
    Nonlinear constraint based quasi-homography warps for image stitching
    WANG Huai, WANG Zhanqing
    2021, 41(8):  2318-2323.  DOI: 10.11772/j.issn.1001-9081.2020101637
In order to solve the longitudinal projection distortion in the non-overlapping regions of images caused by the quasi-homography warp algorithm for image stitching, an image stitching algorithm based on a nonlinear constraint was proposed. Firstly, the nonlinear constraint was used to smoothly transition the image regions around the dividing line. Then, the linear equation of the quasi-homography warp was replaced by a parabolic equation. Finally, a mesh-based method was used to speed up image texture mapping, and a method based on an optimal stitching line was used to fuse the images. For images of 1200 pixel×1600 pixel, the texture mapping time of the proposed algorithm ranges from 4 s to 7 s, and its average deviation degree of the diagonal structure ranges from 11 to 31. Compared with the quasi-homography warp algorithm, the proposed algorithm reduces the texture mapping time by 55% to 67% and the average deviation degree of the diagonal structure by 36% to 62%. It can be seen that the proposed algorithm not only corrects oblique diagonal structures but also improves stitching efficiency. Experimental results show that the proposed algorithm achieves better visual effects for stitched images.
    Detection of left and right railway tracks based on deep convolutional neural network and clustering
    ZENG Xiangyin, ZHENG Bochuan, LIU Dan
    2021, 41(8):  2324-2329.  DOI: 10.11772/j.issn.1001-9081.2021030385
In order to improve the accuracy and speed of railway track detection, a new method of detecting the left and right railway tracks based on a deep Convolutional Neural Network (CNN) and clustering was proposed. Firstly, the labeled images in the dataset were processed: each original labeled image was divided uniformly into grids, and the railway track information in each grid region was represented by one pixel, thereby constructing reduced versions of the track label images. Secondly, based on the reduced label images, a new deep CNN for railway track detection was proposed. Finally, a clustering method was proposed to distinguish the left and right tracks. The proposed method reaches an accuracy of 96% and a speed of 155 frame/s on images of 1000 pixel×1000 pixel. Experimental results demonstrate that the proposed method achieves both high detection accuracy and fast detection speed.
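A minimal sketch of the label-image reduction step: each grid cell becomes one pixel, marked as track if the cell contains any track pixel. The grid size and mask are illustrative assumptions.

```python
import numpy as np

def reduce_label_image(mask, grid=8):
    """mask: (H, W) binary track label image, H and W divisible by grid.
    Returns an (H/grid, W/grid) image with one pixel per grid cell."""
    h, w = mask.shape
    cells = mask.reshape(h // grid, grid, w // grid, grid)
    return cells.max(axis=(1, 3))

mask = np.zeros((1000, 1000), dtype=np.uint8)
mask[:, 480:484] = 1                             # a hypothetical track
print(reduce_label_image(mask, grid=8).shape)    # (125, 125)
```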
    Frontier and comprehensive applications
    Research progress on driver distracted driving detection
    QIN Binbin, PENG Liangkang, LU Xiangming, QIAN Jiangbo
    2021, 41(8):  2330-2337.  DOI: 10.11772/j.issn.1001-9081.2020101691
With the rapid development of the vehicle industry and the world economy, the number of private cars continues to increase, resulting in more and more traffic accidents, and traffic safety has become a global hotspot. Research on driver distracted driving detection falls mainly into two types: traditional Computer Vision (CV) algorithms and deep learning algorithms. In driver distraction detection based on traditional CV algorithms, image features are extracted by feature operators such as the Scale-Invariant Feature Transform (SIFT) and Histogram of Oriented Gradients (HOG), and a Support Vector Machine (SVM) is then used to build the model and classify the images. However, traditional CV algorithms have the disadvantages of demanding environmental requirements, a narrow application range, large numbers of parameters and high computational complexity. In recent years, deep learning has shown excellent performance, such as high speed and high precision, in extracting data features, so researchers have begun to introduce deep learning into driver distracted driving detection; methods based on deep learning can realize end-to-end distracted driving detection networks with high accuracy. The research status of traditional CV algorithms and deep learning algorithms in driver distracted driving detection was introduced. Firstly, the use of traditional CV algorithms in the image field and in research on driver distracted driving detection was elaborated. Secondly, research on driver distracted driving detection based on deep learning was introduced. Thirdly, the accuracies and model parameters of different detection methods were compared and analyzed. Finally, the existing research was summarized, and three problems that driver distracted driving detection needs to solve in the future were put forward: the standards for dividing the driver's distraction state and distraction degree need to be further improved, the three aspects of person-car-road need to be considered comprehensively, and neural network parameters need to be reduced more effectively.
    Multi-person collaborative creation system of building information modeling drawings based on blockchain
    SHEN Yumin, WANG Jinlong, HU Diankai, LIU Xingyu
    2021, 41(8):  2338-2345.  DOI: 10.11772/j.issn.1001-9081.2020101549
Multi-person collaborative creation of Building Information Modeling (BIM) drawings is very important in large construction projects. However, existing methods for multi-person collaborative creation of BIM drawings based on Revit and other modeling software or cloud services suffer from confused BIM drawing versions, difficult traceability, data security risks and other problems. To solve these problems, a blockchain-based multi-person collaborative creation system for BIM drawings was designed. Using an on-chain and off-chain collaborative storage method, the blockchain stores the BIM drawing information produced after each creation step, while a database stores the complete BIM drawings. The decentralization, traceability and tamper-resistance of the blockchain were used to keep BIM drawing versions clear, provide a basis for future copyright division, and enhance the data security of BIM drawing information. Experimental results show that the average block generation time of the proposed system under multi-user concurrency is 0.46785 s and the maximum processing rate of the system is 1568 transactions per second, which proves the reliability of the system and that it can meet the needs of actual application scenarios.
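A hedged sketch of the on-chain/off-chain split: the full drawing goes into the off-chain store and only a tamper-evident fingerprint plus version metadata goes on chain. The `db` dict and `chain` list are stand-ins for the real database and ledger, and the record layout is an assumption.

```python
import hashlib, json, time

def record_version(db, chain, drawing_id, drawing_bytes, author):
    """Store the drawing off-chain; append its hash and metadata on-chain."""
    digest = hashlib.sha256(drawing_bytes).hexdigest()
    db[digest] = drawing_bytes                          # off-chain: drawing
    prev = hashlib.sha256(chain[-1].encode()).hexdigest() if chain else None
    chain.append(json.dumps({                           # on-chain: metadata
        "drawing": drawing_id, "author": author,
        "sha256": digest, "prev": prev, "ts": time.time()}))

db, chain = {}, []
record_version(db, chain, "BIM-001", b"v1 geometry", "alice")
record_version(db, chain, "BIM-001", b"v2 geometry", "bob")
print(len(chain), chain[1][:60])   # two linked version records
```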
    Real-time remaining life prediction method of Web software system based on self-attention-long short-term memory network
    DANG Weichao, LI Tao, BAI Shangwang, GAO Gaimei, LIU Chunxia
    2021, 41(8):  2346-2351.  DOI: 10.11772/j.issn.1001-9081.2020091486
In order to predict the Remaining Useful Life (RUL) of a Web software system accurately and in real time, taking into consideration the time sequence characteristics of the health status performance indicators of the Web system and the interdependence between the indicators, a real-time RUL prediction method for Web software systems based on a Self-Attention-Long Short-Term Memory (Self-Attention-LSTM) network was proposed. Firstly, an accelerated life test platform was built to collect performance indicator data reflecting the aging trend of the Web software system. Then, according to the time sequence characteristics of these data, a Long Short-Term Memory (LSTM) recurrent neural network was constructed to extract hidden-layer features of the performance indicators, and a self-attention mechanism was used to model the dependency relationships between the features. Finally, the real-time RUL prediction value of the Web system was obtained. On three test sets, the proposed model was compared with the Back Propagation (BP) network and the conventional Recurrent Neural Network (RNN). Experimental results show that the Mean Absolute Error (MAE) of the model is 16.92% lower than that of LSTM on average, and its relative accuracy (Accuracy) is 5.53% higher than that of LSTM on average, which verifies the effectiveness of the Self-Attention-LSTM RUL model. The proposed method can provide technical support for optimizing software rejuvenation decisions for Web systems.
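A minimal PyTorch sketch of the Self-Attention-LSTM idea: an LSTM encodes the indicator sequence, self-attention models dependencies among the hidden states, and a linear head regresses the RUL. All sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SelfAttnLSTM(nn.Module):
    """LSTM encoder + self-attention + regression head for RUL."""
    def __init__(self, n_indicators=8, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_indicators, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, time, n_indicators)
        h, _ = self.lstm(x)
        a, _ = self.attn(h, h, h)       # dependencies between time steps
        return self.head(a[:, -1])      # RUL predicted from the last step

model = SelfAttnLSTM()
print(model(torch.randn(2, 30, 8)).shape)  # torch.Size([2, 1])
```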
    Binary classification to multiple classification progressive detection network for aero-engine damage images
    FAN Wei, LI Chenxuan, XING Yan, HUANG Rui, PENG Hongjian
    2021, 41(8):  2352-2357.  DOI: 10.11772/j.issn.1001-9081.2020101575
Aero-engine damage is an important factor affecting flight safety. There are two main problems in current computer vision-based damage detection on engine borescope images: first, the complex background of borescope images makes models detect damage with low accuracy; second, the limited sources of borescope image data lead to few detectable classes for the models. To solve these two problems, a Mask R-CNN (Mask Region-based Convolutional Neural Network) based progressive detection network from binary to multiple classification was proposed for aero-engine damage images. By adding a binary classification detection branch to Mask R-CNN, the damage in the image was first detected in a binary manner and its localization coordinates were optimized by regression. Then, the original detection branch progressively performed multiple classification detection, further optimizing the damage detection results by regression and determining the damage class. Finally, instance segmentation of the damage was performed by the Mask branch according to the multiple classification detection results. In order to increase the detection classes of the model and verify the effectiveness of the method, a dataset of 1315 borescope images with 8 damage classes was constructed. Training and testing results on this set show that the Average Precision (AP) and AP75 (Average Precision under an Intersection over Union (IoU) of 75%) of multiple classification detection are improved by 3.34% and 9.71% respectively compared with those of Mask R-CNN. It can be seen that the proposed method can effectively improve the multiple classification detection accuracy for damage in borescope images.
    Hydraulic tunnel defect recognition method based on dynamic feature distillation
    HUANG Jishuang, ZHANG Hua, LI Yonglong, ZHAO Hao, WANG Haoran, FENG Chuncheng
    2021, 41(8):  2358-2365.  DOI: 10.11772/j.issn.1001-9081.2020101596
Aiming at the problems that existing Deep Convolutional Neural Networks (DCNN) have insufficient ability to extract defect image features, recognize few defect types and require long reasoning time in hydraulic tunnel defect recognition tasks, an autonomous defect recognition method based on dynamic feature distillation was proposed. Firstly, a deep curve estimation network was used to optimize the images, improving image quality in low-illumination environments. Secondly, a dynamic convolution module with an attention mechanism was constructed to replace traditional static convolution, and the resulting dynamic features were used to train the teacher network for better feature extraction ability. Finally, a dynamic feature distillation loss was constructed by fusing a discriminator structure into the knowledge distillation framework, and dynamic feature knowledge was transferred from the teacher network to the student network through the discriminator, achieving high-precision recognition of six types of defects while significantly reducing the model reasoning time. In experiments on a hydraulic tunnel defect dataset of a hydropower station in Sichuan Province, the proposed method was compared with the original residual network. The results show that the method reaches a recognition accuracy of 96.15%, with the model parameters and reasoning time reduced to 1/2 and 1/6 of the original ones respectively. The experimental results show that fusing the dynamic feature distillation information of defect images into the recognition network can improve the efficiency of hydraulic tunnel defect recognition.
    Dam defect object detection method based on improved single shot multibox detector
    CHEN Jing, MAO Yingchi, CHEN Hao, WANG Longbao, WANG Zicheng
    2021, 41(8):  2366-2372.  DOI: 10.11772/j.issn.1001-9081.2020101603
In order to improve the efficiency of dam safety operation and maintenance, dam defect object detection models can assist inspectors in defect detection. Dam defects vary in geometric shape, and the Single Shot MultiBox Detector (SSD) model, which uses traditional convolutions for feature extraction, cannot adapt to the geometric transformations of defects. Focusing on this problem, a DeFormable convolution Single Shot multi-box Detector (DFSSD) was proposed. Firstly, in the backbone network of the original SSD, VGG16 (Visual Geometry Group), standard convolutions were replaced by deformable convolutions to handle the geometric transformations of defects, and the model's spatial information modeling ability was increased by learning convolution offsets. Secondly, according to the sizes of different features, the aspect ratios of the prior bounding boxes were improved to raise the model's detection accuracy for bar-shaped features and its generalization ability. Finally, to solve the imbalance of positive and negative samples in the training set, an improved Non-Maximum Suppression (NMS) algorithm was adopted to optimize the learning effect. Experimental results show that the average detection accuracy of DFSSD on dam defect images is improved by 5.98% compared with the benchmark SSD model. Comparison with the Faster Region-based Convolutional Neural Network (Faster R-CNN) and SSD models shows that the DFSSD model is more effective in improving the detection accuracy of dam defect objects.
    Prediction method of capacity data in telecom industry based on recurrent neural network
    DING Yin, SANG Nan, LI Xiaoyu, WU Feizhou
    2021, 41(8):  2373-2378.  DOI: 10.11772/j.issn.1001-9081.2020101677
In the capacity prediction process of telecom operation and maintenance, there are too many capacity indicators and deployed business classes. Most existing research does not consider the differences between indicator data types and uses the same prediction method for all types of data, resulting in uneven prediction quality. In order to improve the efficiency of indicator prediction, a data type classification method was proposed that divides the data into trend, periodic and irregular types. For the prediction of periodic data, a periodic capacity indicator prediction model based on a Bi-directional Recurrent Neural Network (BiRNN), called BiRNN-BiLSTM-BI, was proposed. Firstly, to analyze the periodic characteristics of capacity data, a busy/idle distribution analysis algorithm was proposed. Secondly, a Recurrent Neural Network (RNN) model was built, comprising one BiRNN layer and one Bi-directional Long Short-Term Memory (BiLSTM) layer. Finally, the output of the BiRNN was optimized using the system's busy/idle distribution information. Experimental results show that, compared with the best among the Holt-Winters, AutoRegressive Integrated Moving Average (ARIMA) and Back Propagation (BP) neural network models, the proposed BiRNN-BiLSTM-BI model reduces the Mean Square Error (MSE) by 15.16% and 45.67% on a unified log dataset and a distributed cache service dataset respectively, greatly improving prediction accuracy.
    CCF Bigdata 2020
    Review of spatio-temporal trajectory sequence pattern mining methods
    KANG Jun, HUANG Shan, DUAN Zongtao, LI Yixiu
    2021, 41(8):  2379-2385.  DOI: 10.11772/j.issn.1001-9081.2020101571
With the rapid development of global positioning technology and mobile communication technology, huge amounts of trajectory data have appeared. These data truly reflect the movement patterns and behavioral characteristics of moving objects in the spatio-temporal environment, and they contain a wealth of information with important application value for fields such as urban planning, traffic management, service recommendation and location prediction. Applications of spatio-temporal trajectory data in these fields usually rely on sequence pattern mining of the data. Spatio-temporal trajectory sequence pattern mining aims to find frequently occurring sequence patterns in spatio-temporal trajectory datasets, such as location patterns (frequent trajectories, hot spots), activity periodicity patterns and semantic behavior patterns, so as to uncover the hidden information in the data. The research progress of spatio-temporal trajectory sequence pattern mining in recent years was summarized. Firstly, the data characteristics and applications of spatio-temporal trajectory sequences were introduced. Then, the mining process of spatio-temporal trajectory patterns was described, and the state of research in this field was introduced from the perspectives of mining location patterns, periodic patterns and semantic patterns from spatio-temporal trajectory sequences. Finally, the problems of current spatio-temporal trajectory sequence pattern mining methods were elaborated, and the future development trends of these methods were discussed.
    Algorithm for mining top-k high utility itemsets with negative items
    SUN Rui, HAN Meng, ZHANG Chunyan, SHEN Mingyao, DU Shiyu
    2021, 41(8):  2386-2395.  DOI: 10.11772/j.issn.1001-9081.2020101561
Mining High Utility Itemsets (HUI) with negative items is one of the emerging itemset mining tasks. In order to mine a result set of HUIs with negative items that meets user needs, a Top-k High utility itemsets with Negative items (THN) mining algorithm was proposed. To improve its temporal and spatial performance, a strategy to automatically raise the minimum utility threshold was proposed, and the pattern growth method was used for depth-first search; the search space was pruned using the redefined subtree utility and the redefined local utility; transaction merging and dataset projection were employed to avoid scanning the database multiple times; and to speed up utility counting, the utility of itemsets was calculated with a utility-array counting technique. Experimental results show that the memory usage of the THN algorithm is about 1/60 of that of the HUINIV (High Utility Itemsets with Negative Item Values)-Mine algorithm and about 1/2 of that of the FHN (Faster High utility itemset miner with Negative unit profits) algorithm, that THN takes 1/10 of the runtime of FHN, and that THN achieves better performance on dense datasets.
    Low-latency cluster scheduling framework for large-scale short-time tasks
    ZHAO Quan, TANG Xiaochun, ZHU Ziyu, MAO Anqi, LI Zhanhuai
    2021, 41(8):  2396-2405.  DOI: 10.11772/j.issn.1001-9081.2020101566
There are always tasks with short durations and high concurrency in large-scale data analysis environments, and how to schedule these concurrent jobs with low-latency requirements is a hot research topic. In some existing cluster resource management frameworks, centralized schedulers cannot meet the low-latency requirement because the master node becomes a bottleneck, while some distributed schedulers achieve low-latency task scheduling but fall short in optimal resource allocation and suffer from resource allocation conflicts. Considering the needs of large-scale real-time jobs, a distributed cluster resource scheduling framework was designed and implemented to meet the low-latency requirement of large-scale data processing. Firstly, a two-stage scheduling framework and an optimized two-stage multi-path scheduling framework were proposed. Secondly, to address the resource conflict problems in two-stage multi-path scheduling, a task transfer mechanism based on load balancing was proposed to solve load imbalance among computing nodes. Finally, the task scheduling framework for large-scale clusters was simulated and verified using a real workload and a simulated scheduler. On the real workload, the scheduling delay of the proposed framework is kept within 12% of that of ideal scheduling. In the simulated environment, the framework reduces the delay of short-duration tasks by more than 40% compared with a centralized scheduler.
    Time-incorporated point-of-interest collaborative recommendation algorithm
    BAO Xuan, CHEN Hongmei, XIAO Qing
    2021, 41(8):  2406-2411.  DOI: 10.11772/j.issn.1001-9081.2020101565
Point-Of-Interest (POI) recommendation, one of the important location-based services, aims to recommend places that users have not visited but may be interested in. Time is an important factor in POI recommendation, but it is not well considered in existing POI recommendation models. Therefore, the Time-incorporated User-based Collaborative Filtering POI recommendation (TUCF) algorithm was proposed to improve POI recommendation by taking the time factor into account. Firstly, the users' check-in data from a Location-Based Social Network (LBSN) was analyzed to explore the temporal relationships of users' check-ins. Then, these temporal relationships were used to smooth the check-in data, thereby incorporating the time factor and alleviating data sparsity. Finally, following the user-based collaborative filtering approach, different POIs were recommended to users at different times. Experimental results on real check-in datasets show that, compared with the User-based collaborative filtering (U) algorithm, the TUCF algorithm increases precision and recall by 63% and 69% respectively; compared with the U with Temporal preference with smoothing Enhancement (UTE) algorithm, it increases precision and recall by 8% and 12% respectively; and it reduces the Mean Absolute Error (MAE) by 1.4% and 0.5% compared with the U and UTE algorithms respectively.
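A hedged sketch of time-aware user-based collaborative filtering, assuming a (users, time slots, POIs) check-in tensor that has already been smoothed across neighbouring time slots; the tensor shape and scoring scheme are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def recommend(C, u, t, k=5):
    """Score POIs for user u at time slot t by similarity-weighted
    check-ins of the other users, excluding places u already visited."""
    profiles = C.reshape(C.shape[0], -1)
    norms = np.linalg.norm(profiles, axis=1) * np.linalg.norm(profiles[u])
    sims = profiles @ profiles[u] / (norms + 1e-9)   # cosine similarity
    sims[u] = 0.0                                    # exclude the user
    scores = sims @ C[:, t, :]
    scores[C[u].sum(axis=0) > 0] = -np.inf           # drop visited POIs
    return np.argsort(scores)[::-1][:k]

C = np.random.default_rng(2).poisson(0.05, size=(40, 24, 200)).astype(float)
print(recommend(C, u=3, t=18))
```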
    Hybrid ant colony optimization algorithm with brain storm optimization
    LI Mengmeng, QIN Wei, LIU Yi, DIAO Xingchun
    2021, 41(8):  2412-2417.  DOI: 10.11772/j.issn.1001-9081.2020101562
Feature selection can effectively improve the performance of data classification. In order to further improve the ability of Ant Colony Optimization (ACO) to solve feature selection, a hybrid Ant colony optimization with Brain storm Optimization (ABO) algorithm was proposed. In the algorithm, an information communication archive was used to maintain the historical better solutions, and a longest-time-first method based on a relaxation factor was adopted to update the archive dynamically. When the global optimal solution of the ACO was not updated for several iterations, a route-idea transformation operator based on the Fuch chaotic map was used to transform the route solutions in the archive into idea solutions. With the obtained solutions as the initial population, Brain Storm Optimization (BSO) was adopted to search for better solutions in a wider space. On six typical binary datasets, experiments were conducted to analyze the parameter sensitivity of the proposed algorithm, and the algorithm was compared with three typical evolutionary algorithms: the Hybrid Firefly and Particle Swarm Optimization (HFPSO) algorithm, Particle Swarm Optimization and Gravitational Search Algorithm (PSOGSA) and Genetic Algorithm (GA). Experimental results show that, compared with these algorithms, the proposed algorithm improves the classification accuracy by at least 2.88% to 5.35% and the F1-measure by at least 0.02 to 0.05, which verifies its effectiveness and superiority.
    Matching method for academic expertise of research project peer review experts
    WANG Zisen, LIANG Ying, LIU Zhengjun, XIE Xiaojie, ZHANG Wei, SHI Hongzhou
    2021, 41(8):  2418-2426.  DOI: 10.11772/j.issn.1001-9081.2020101564
    Abstract ( )   PDF (1602KB) ( )  
    References | Related Articles | Metrics
    Most existing expert recommendation processes rely on manual matching, which leads to low recommendation accuracy because they cannot fully capture the semantic association between the project subject and experts' research interests. To solve this problem, a matching method for the academic expertise of project peer review experts was proposed. In this method, an academic network was constructed to connect academic entities, and meta-paths were designed to capture the semantic associations between different nodes in the academic network. By using a random walk strategy, node sequences reflecting the co-occurrence associations between the project subject and expert research interests were obtained. Then, through the training of a network representation learning model, vector representations of the project subject and expert research interests carrying their semantic association were obtained. On this basis, the semantic similarity was calculated layer by layer along the hierarchical structure of the project subject tree to realize multi-granularity matching of peer review academic expertise. Experimental results on crawled datasets of HowNet and Wanfang papers, an expert review dataset, and the Baidu Baike word vector dataset show that this method enhances the semantic association between the project subject and expert research interests, and can be effectively applied to the academic expertise matching of project review experts.
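    The meta-path random walk plus representation learning pipeline can be sketched as follows; the typed-adjacency layout, the example meta-path, and the use of gensim's skip-gram Word2Vec as the representation learner are assumptions standing in for the paper's own model:

        import random
        from gensim.models import Word2Vec

        def metapath_walk(graph, start, pattern, length=20):
            # graph: {node: {node_type: [neighbors]}} -- assumed typed adjacency.
            # Follow the node-type pattern, e.g. expert -> paper -> keyword -> paper.
            walk, node = [start], start
            for i in range(length):
                want = pattern[(i + 1) % len(pattern)]
                nbrs = graph.get(node, {}).get(want, [])
                if not nbrs:
                    break
                node = random.choice(nbrs)
                walk.append(node)
            return walk

        # walks = [metapath_walk(g, n, ("expert", "paper", "keyword", "paper"))
        #          for n in nodes for _ in range(10)]
        # model = Word2Vec(walks, vector_size=128, window=5, min_count=1, sg=1)
        # model.wv.similarity("expert:Wang", "subject:knowledge_graph")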
    Case reading comprehension method combining syntactic guidance and character attention mechanism
    HE Zhenghai, XIAN Yantuan, WANG Meng, YU Zhengtao
    2021, 41(8):  2427-2431.  DOI: 10.11772/j.issn.1001-9081.2020101568
    Abstract ( )   PDF (813KB) ( )  
    References | Related Articles | Metrics
    Case reading comprehension is a specific application of machine reading comprehension in the judicial field and one of the important applications of judicial intelligence: a computer reads judgment documents and answers related questions. At present, the mainstream approach to machine reading comprehension uses deep learning models to encode text words and obtain vector representations of the text, and the core problems in model construction are how to obtain the semantic representation of the text and how to match questions with the context. Considering that syntactic information helps the model learn sentence skeleton information and that Chinese characters carry latent semantic information, a case reading comprehension method combining syntactic guidance and a character attention mechanism was proposed. By fusing syntactic information and Chinese character information, the method improves the model's ability to encode case text. Experimental results on the reading comprehension dataset of the Law Research Cup 2019 show that compared with the baseline model, the proposed method increases the Exact Match (EM) value by 0.816 and the F1 value by 1.809%.
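    A minimal PyTorch sketch of a character attention layer in this spirit is given below; the dimensions, names and the fusion-by-concatenation choice are assumptions, not the paper's architecture:

        import torch
        import torch.nn as nn

        class CharAttnFusion(nn.Module):
            # Fuse a word vector with an attention-weighted summary of its
            # characters, letting the word query its own character embeddings.
            def __init__(self, word_dim, char_dim, n_chars):
                super().__init__()
                self.char_emb = nn.Embedding(n_chars, char_dim)
                self.query = nn.Linear(word_dim, char_dim)

            def forward(self, word_vec, char_ids):
                # word_vec: [B, word_dim]; char_ids: [B, L]
                chars = self.char_emb(char_ids)            # [B, L, char_dim]
                q = self.query(word_vec).unsqueeze(2)      # [B, char_dim, 1]
                attn = torch.softmax(chars @ q, dim=1)     # [B, L, 1]
                summary = (attn * chars).sum(dim=1)        # [B, char_dim]
                return torch.cat([word_vec, summary], dim=-1)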
    Analysis of hypernetwork characteristics in Tang poems and Song lyrics
    WANG Gaojie, YE Zhonglin, ZHAO Haixing, ZHU Yu, MENG Lei
    2021, 41(8):  2432-2439.  DOI: 10.11772/j.issn.1001-9081.2020101569
    Abstract ( )   PDF (1147KB) ( )  
    References | Related Articles | Metrics
    At present, there are many research results on Tang poems and Song lyrics from the literary perspective, but few studies examine them with the hypergraph-based hypernetwork method, and the existing ones are limited to Chinese character frequency and word frequency. Analyzing Tang poems and Song lyrics with hypernetwork methods helps explore territory beyond the reach of the traditional literary perspective, and helps discover the laws of word composition in these literatures and the historical backgrounds they reflect. Therefore, based on two ancient text corpora, Quan Tang Shi and Quan Song Ci, hypernetworks of Tang poems and Song lyrics were established respectively. In constructing the hypernetworks, each Tang poem or Song lyric was taken as a hyperedge, and the characters in it were taken as the nodes within that hyperedge. Then, topological indexes and network characteristics of the two hypernetworks, such as node hyperdegree, node hyperdegree distribution, hyperedge node degree, and hyperedge node degree distribution, were analyzed experimentally to reveal the character use, word use and aesthetic tendencies of Tang dynasty poets and Song dynasty lyricists. Finally, work hypernetworks were constructed from the works of Li Bai, Du Fu, Su Shi and Xin Qiji, and the relevant network parameters were calculated. The analysis results show that the maximum and minimum hyperdegrees of the two hypernetworks differ greatly, and the hyperdegree distributions approximately follow power laws, indicating the scale-free property of both hypernetworks. The hyperedge node degrees also show clear distribution characteristics: those of the Tang poem hypernetwork mostly fall between 20 and 100, while those of the Song lyric hypernetwork mostly fall between 30 and 130. Moreover, the work hypernetworks have small average path lengths and large clustering coefficients, reflecting their small-world characteristics.
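    The two basic indexes are easy to state in code: a work is a hyperedge over its distinct characters, a node's hyperdegree counts the works containing that character, and a hyperedge's node degree counts the distinct characters in the work. A toy Python sketch (the two example lines stand in for the full corpora):

        from collections import Counter

        poems = ["床前明月光", "明月几时有"]          # each work = one hyperedge
        hyperedges = [set(p) for p in poems]          # nodes = distinct characters

        # node hyperdegree: number of works (hyperedges) containing the character
        hyperdegree = Counter(ch for e in hyperedges for ch in e)

        # hyperedge node degree: number of distinct characters in a work
        edge_degrees = [len(e) for e in hyperedges]

        print(hyperdegree["明"])   # 2 -- appears in both works
        print(edge_degrees)        # [5, 5]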
    Remote sensing image dehazing method based on cascaded generative adversarial network
    SUN Xiao, XU Jindong
    2021, 41(8):  2440-2444.  DOI: 10.11772/j.issn.1001-9081.2020101563
    Abstract ( )   PDF (2363KB) ( )  
    References | Related Articles | Metrics
    Dehazing algorithms trained on image pairs have difficulty coping with the shortage of paired training samples in remote sensing imagery, and the resulting models generalize poorly. Therefore, a remote sensing image dehazing method based on a cascaded Generative Adversarial Network (GAN) was proposed. To address the lack of paired remote sensing datasets, a U-Net GAN (UGAN) for learning haze generation and a Pixel Attention GAN (PAGAN) for learning dehazing were proposed. In this method, UGAN learned from unpaired clear and hazy image sets how to add haze to haze-free remote sensing images while retaining image details, and then guided PAGAN in learning how to correctly dehaze such images. To reduce the discrepancy between the synthetic hazy remote sensing images and the dehazed remote sensing images, a self-attention mechanism was added to PAGAN: the generator produces high-resolution detail features by using cues from all feature locations in the low-resolution image, while the discriminator checks whether detail features in distant parts of the image are consistent with each other. Compared with dehazing methods such as Feature Fusion Attention Network (FFANet), Gated Context Aggregation Network (GCANet) and Dark Channel Prior (DCP), this cascaded GAN method does not require a large amount of paired data for repeated network training. Experimental results show that the method removes haze and thin cloud effectively, and outperforms the comparison methods in both visual effect and quantitative indices.
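    The self-attention described here (the generator attending over all feature locations, the discriminator checking long-range consistency) matches the widely used SAGAN-style block; a PyTorch sketch follows, where the C//8 query/key width is the common convention and wiring it into PAGAN is an assumption:

        import torch
        import torch.nn as nn

        class SelfAttention(nn.Module):
            def __init__(self, c):
                super().__init__()
                self.f = nn.Conv2d(c, c // 8, 1)   # query
                self.g = nn.Conv2d(c, c // 8, 1)   # key
                self.h = nn.Conv2d(c, c, 1)        # value
                self.gamma = nn.Parameter(torch.zeros(1))  # learned residual gate

            def forward(self, x):
                b, c, hgt, wid = x.shape
                q = self.f(x).view(b, -1, hgt * wid)              # [B, C/8, N]
                k = self.g(x).view(b, -1, hgt * wid)              # [B, C/8, N]
                v = self.h(x).view(b, -1, hgt * wid)              # [B, C, N]
                attn = torch.softmax(q.transpose(1, 2) @ k, -1)   # [B, N, N]
                out = (v @ attn.transpose(1, 2)).view(b, c, hgt, wid)
                return self.gamma * out + x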
    Hybrid aerial image segmentation algorithm based on multi-region feature fusion for natural scene
    YANG Rui, QIAN Xiaojun, SUN Zhenqiang, XU Zhen
    2021, 41(8):  2445-2452.  DOI: 10.11772/j.issn.1001-9081.2020101567
    Abstract ( )   PDF (1689KB) ( )  
    References | Related Articles | Metrics
    In the two components of hybrid image segmentation algorithms, initial segmentation cannot produce over-segmentation region sets with a low mis-segmentation rate, while region merging lacks a label selection mechanism for merged regions, and its usual ways of determining when to stop merging do not meet scenario requirements. To solve these problems, a Multi-level Region Information fusion based Hybrid image Segmentation algorithm (MRIHS) was proposed. Firstly, an improved Markov model was used to smooth superpixel blocks into initial segmentation regions. Then, after measuring the similarity of the initial segmentation regions and selecting the region pairs to be merged, the designed region label selection mechanism was used to choose labels for the merged regions. Finally, an optimal merging state was defined to determine when region merging stops. To verify MRIHS performance, it was compared with the Multi-dimensional Feature fusion based Hybrid image Segmentation algorithm (MFHS), the Improved FCM image segmentation algorithm based on Region Merging (IFRM), the Inter-segment and Boundary Homogeneities based Hybrid image Segmentation algorithm (IBHHS), and the Multi-dimensional Color transform and Consensus based Hybrid image Segmentation algorithm (MCCHS) on the Visual Object Classes (VOC) dataset, the Cambridge-driving labeled Video database (CamVid) and a self-built river and lake inspection (rli) dataset. The results show that on the VOC and rli datasets, the Boundary Recall (BR), Achievable Segmentation Accuracy (ASA), recall and dice of MRIHS are at least 0.43, 0.35, 0.41 and 0.84 percentage points higher, respectively, than those of the other algorithms, and its Under-segmentation Error (UE) is at least 0.65 percentage points lower; on the CamVid dataset, the recall and dice of MRIHS are at least 1.11 and 2.48 percentage points higher, respectively, than those of the other algorithms.
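    A hedged sketch of the greedy region-merging loop follows; cosine similarity, the larger-region-keeps-its-label rule and the threshold tau are illustrative stand-ins for the paper's similarity measure, label selection mechanism and optimal merging state:

        import numpy as np

        def merge_regions(features, adjacency, tau=0.9):
            # features: {region_id: feature vector}; adjacency: set of (a, b) pairs.
            sizes = {r: 1 for r in features}
            while True:
                best, pair = tau, None
                for a, b in adjacency:
                    sim = features[a] @ features[b] / (
                        np.linalg.norm(features[a]) * np.linalg.norm(features[b]) + 1e-9)
                    if sim > best:
                        best, pair = sim, (a, b)
                if pair is None:                 # stopping state: no pair above tau
                    return features
                a, b = pair
                keep, drop = (a, b) if sizes[a] >= sizes[b] else (b, a)
                w = sizes[keep] / (sizes[keep] + sizes[drop])
                features[keep] = w * features[keep] + (1 - w) * features[drop]
                sizes[keep] += sizes.pop(drop)
                del features[drop]
                # rewire adjacency onto the kept label, dropping self-pairs
                adjacency = {(keep if x == drop else x, keep if y == drop else y)
                             for (x, y) in adjacency}
                adjacency = {(x, y) for (x, y) in adjacency if x != y}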
    Multi-granularity temporal structure representation based outlier detection method for prediction of oil reservoir
    MENG Fan, CHEN Guang, WANG Yong, GAO Yang, GAO Dequn, JIA Wenlong
    2021, 41(8):  2453-2459.  DOI: 10.11772/j.issn.1001-9081.2020101867
    Abstract ( )   PDF (1265KB) ( )  
    References | Related Articles | Metrics
    Traditional oil reservoir prediction methods combine the seismic attributes generated as seismic waves pass through the stratum with geological drilling data and traditional geophysical methods to make a comprehensive judgment. However, such methods are costly in research and judgment, and their accuracy strongly depends on expert prior knowledge. To address these issues, based on seismic data from the Subei Basin of Jiangsu Oilfield, and considering the sparseness and randomness of oil-labeled samples, a multi-granularity temporal structure representation based outlier detection algorithm was proposed to perform prediction on post-stack seismic trace data. Firstly, multi-granularity temporal structures were extracted from single seismic traces to form independent feature representations. Secondly, on the basis of these multi-granularity temporal structure representations, feature fusion was carried out to form a fused representation of the seismic trace data. Finally, a cost-sensitive method was used for joint training and judgment on the fused features to obtain the oil reservoir prediction results for the seismic data. Experiments and simulations were performed on actual seismic data of Jiangsu Oilfield. Experimental results show that the proposed algorithm improves the Area Under Curve (AUC) by 10% compared with both the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) algorithms.
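    An illustrative sketch of the multi-granularity idea, with scikit-learn's IsolationForest swapped in as a generic outlier detector in place of the paper's cost-sensitive model; the window sizes, statistics and random traces are assumptions:

        import numpy as np
        from sklearn.ensemble import IsolationForest

        def multi_granularity_features(trace, windows=(8, 32, 128)):
            # Summarize one trace at several temporal granularities,
            # then concatenate the per-granularity statistics.
            feats = []
            for w in windows:
                n = len(trace) // w
                segs = trace[:n * w].reshape(n, w)
                feats += [segs.mean(1).mean(), segs.std(1).mean(), segs.max(1).mean()]
            return np.array(feats)

        traces = np.random.randn(200, 1024)               # toy post-stack traces
        X = np.stack([multi_granularity_features(t) for t in traces])
        scores = IsolationForest(random_state=0).fit(X).score_samples(X)
        candidates = np.argsort(scores)[:10]              # most anomalous traces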
    ChinaVR 2020
    Analysis and improvement of panic concept in social force model
    DING Nanzhe, LIU Tingting, LIU Zhen, WANG Yuanyi, CHAI Yanjie, JIANG Lan
    2021, 41(8):  2460-2465.  DOI: 10.11772/j.issn.1001-9081.2020101550
    Abstract ( )   PDF (1782KB) ( )  
    References | Related Articles | Metrics
    The social force model is a classical model in crowd simulation. Since it was proposed in 1995, the model has been widely used and modified; in 2000, the concept of panic degree was added to it in an improved version. Although many studies focus on the social force model, few study this concept. Therefore, some key parameters and the concept of panic degree in the social force model were analyzed, and changes in panic degree were used to explain the "fast is slow" and "herd behavior" phenomena in crowd evacuation. Because the original model's description of pedestrian perception is not detailed enough, in some conditions a few pedestrians may fail to follow others or fail to evacuate at the exit; to overcome this, the model was optimized by adding a description of the pedestrian's visual field range and redefining the pedestrian's self-motion state, among other methods. Experimental results show that the improved model simulates the herding phenomenon of crowds well and helps in understanding the concept of panic degree in the social force model.
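    In the well-known 2000 formulation, the panic degree enters through the desired direction: an agent with panic p blends its own preferred direction with the mean direction of its neighbors, e_i = Norm[(1 - p) * e_i0 + p * <e_j0>], so p = 0 is purely individual motion and p = 1 is pure herding. A small Python sketch of that blending (vectors and the example neighbors are illustrative):

        import numpy as np

        def desired_direction(e_own, e_neighbors, panic):
            # panic = 0: follow own preference; panic = 1: pure herding.
            mix = (1 - panic) * e_own + panic * np.mean(e_neighbors, axis=0)
            return mix / (np.linalg.norm(mix) + 1e-9)

        e = desired_direction(np.array([1.0, 0.0]),
                              [np.array([0.0, 1.0]), np.array([1.0, 1.0])], 0.4)
        print(e)   # unit vector tilted toward the neighbors' mean heading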
    Modeling technology for maintenance posture of virtual human in narrow space of ship
    LUO Mingyu, LUO Xiaomeng, ZHU Wenmin, ZHANG Lei, FAN Xiumin
    2021, 41(8):  2466-2472.  DOI: 10.11772/j.issn.1001-9081.2020101551
    Abstract ( )   PDF (1390KB) ( )  
    References | Related Articles | Metrics
    To solve the problems of existing virtual human simulation technology in maintenance operations in narrow ship spaces, such as inefficiency, the need for many manual interventions, and high simulation cost, a hybrid virtual human posture modeling and simulation technology was proposed. According to the characteristics of human maintenance operations in narrow spaces, virtual human posture modeling was divided into two parts: modeling of the torso and lower limb posture, and modeling of the arm posture. Firstly, an automatic posture matching algorithm for narrow spaces based on a posture library was proposed to determine the operation position and posture of the virtual human. On this basis, a multi-objective optimization model was established to solve for the arm posture and generate the maintenance simulation posture. Experimental results on the maintenance of a tank valve in the engine room of a certain type of ship show that the proposed method realizes automatic positioning and generation of virtual human postures and effectively improves the efficiency of maintenance simulation.
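    For the arm, a weighted-sum stand-in for the multi-objective model can be sketched with a planar three-link arm that reaches a hand target while staying near a comfortable reference posture; the link lengths, weights, target and reference posture are illustrative assumptions, not the paper's model:

        import numpy as np
        from scipy.optimize import minimize

        L = np.array([0.30, 0.25, 0.20])        # illustrative link lengths (m)
        rest = np.radians([30, 20, 10])         # comfortable reference posture

        def hand(q):                            # planar 3-link forward kinematics
            a = np.cumsum(q)
            return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

        def cost(q, target, w=(1.0, 0.2)):
            # objective 1: reach the target; objective 2: stay near `rest`
            return (w[0] * np.sum((hand(q) - target) ** 2)
                    + w[1] * np.sum((q - rest) ** 2))

        target = np.array([0.45, 0.35])
        res = minimize(cost, rest, args=(target,),
                       bounds=[(-np.pi, np.pi)] * 3, method="L-BFGS-B")
        print(res.x, hand(res.x))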