
Table of Contents

    10 June 2019, Volume 39 Issue 6
    2018 National Annual Conference on High Performance Computing (HPC China 2018)
    Single precision floating general matrix multiply optimization for machine translation based on ARMv8 architecture
    GONG Mingqing, YE Huang, ZHANG Jian, LU Xingjing, CHEN Wei
    2019, 39(6):  1557-1562.  DOI: 10.11772/j.issn.1001-9081.2018122608
    Aiming at the inefficiency of neural network inference on mobile intelligent devices using ARM processors, an optimization scheme for the Single precision floating GEneral Matrix Multiply (SGEMM) algorithm based on the ARMv8 architecture was proposed. Firstly, it was determined that the computational efficiency of an ARMv8-based processor executing the SGEMM algorithm is limited by the usage scheme of the vectorized computation units, the instruction pipeline, and the probability of cache misses. Secondly, the three optimization techniques of vector instruction inline assembly, data rearrangement and data prefetching were implemented to address these three limiting factors. Finally, test experiments were designed based on three matrix patterns commonly used in speech-oriented neural networks and run on the RK3399 hardware platform. The experimental results show that the single-core computing speed is 10.23 GFLOPS in square matrix mode, reaching 78.2% of the measured floating-point peak; 6.35 GFLOPS in slender matrix mode, reaching 48.1% of the measured peak; and 2.53 GFLOPS in continuous small matrix mode, reaching 19.2% of the measured peak. With the optimized SGEMM algorithm deployed in a speech recognition neural network program, the actual speech recognition speed of the program is significantly improved.
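    The following is an illustrative sketch only (not code from the paper): a NumPy version of the cache-blocking and panel-packing idea behind an optimized SGEMM. The block sizes MC, KC and NC are hypothetical tuning parameters, and the NEON inline-assembly micro-kernel is represented here by a plain matrix product.

        import numpy as np

        def sgemm_blocked(A, B, MC=64, KC=128, NC=256):
            """Blocked single-precision GEMM: C = A @ B.

            Panels of A and B are copied (packed) into contiguous buffers before the
            inner product, mimicking the data-rearrangement step that improves cache
            behaviour on ARMv8-class cores."""
            M, K = A.shape
            K2, N = B.shape
            assert K == K2
            C = np.zeros((M, N), dtype=np.float32)
            for jc in range(0, N, NC):
                for pc in range(0, K, KC):
                    # pack a KC x NC panel of B so the inner kernel reads it sequentially
                    Bp = np.ascontiguousarray(B[pc:pc + KC, jc:jc + NC])
                    for ic in range(0, M, MC):
                        # pack an MC x KC panel of A
                        Ap = np.ascontiguousarray(A[ic:ic + MC, pc:pc + KC])
                        # the "micro-kernel": in the paper this level is NEON inline assembly
                        C[ic:ic + MC, jc:jc + NC] += Ap @ Bp
            return C

        if __name__ == "__main__":
            A = np.random.rand(300, 200).astype(np.float32)
            B = np.random.rand(200, 400).astype(np.float32)
            print(np.allclose(sgemm_blocked(A, B), A @ B, atol=1e-3))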
    Real-time processing of space science satellite data based on stream computing
    SUN Xiaojuan, SHI Tao, HU Yuxin, TONG Jizhou, LI Bing, SONG Yao
    2019, 39(6):  1563-1568.  DOI: 10.11772/j.issn.1001-9081.2018122602
    Concerning the increasingly demanding real-time processing requirements of space science satellite observation data, a real-time processing method for space science satellite data based on a stream computing framework was proposed. Firstly, the data stream was abstracted and analyzed according to the data processing characteristics of space science satellites. Then, the input and output data structures of each processing unit were redefined. Finally, a parallel data stream processing structure was designed based on the stream computing framework Storm to meet the requirements of parallel processing and distributed computing of large-scale data. The data processing system developed for space science satellites with this method was tested and analyzed. The results show that the data processing time is half that of the original system under the same conditions, and the data localization strategy achieves higher throughput than the round-robin strategy, with the data tuple throughput increased by 29% on average. It can be seen that the use of a stream computing framework can greatly shorten the data processing delay and improve the real-time performance of the space science satellite data processing system.
    HSWAP: numerical simulation workflow management platform suitable for high performance computing environment
    ZHAO Shicao, XIAO Yonghao, DUAN Bowen, LI Yufeng
    2019, 39(6):  1569-1576.  DOI: 10.11772/j.issn.1001-9081.2018122606
    Concerning the construction of integrated "modeling, computation, analysis, optimization" workflow applications in a High Performance Computing (HPC) environment, the HPC Simulation Workflow Application Platform (HSWAP), which supports the encapsulation of numerical simulation software and the interactive design of numerical simulation workflows, was developed. Firstly, the component model was built by modeling the runtime characteristics of numerical simulation activities. Then, the control and data dependencies between simulation activities were represented by a workflow, forming a formal numerical simulation workflow model. The resulting workflow model can be automatically parsed by the platform and adapted to HPC resources. Therefore, the HSWAP platform can automatically generate and schedule batches of related numerical simulation tasks, shielding the technical details of HPC resources from domain users. The platform provides Web Portal services and supports pushing the interactive interfaces of graphical numerical simulation programs. The platform has already been deployed and applied at a supercomputing center, where the integration of numerical simulation workflows with up to 10 numerical simulation software packages and 20 computing task nodes can be completed within 2 person-months.
    Research and analysis of supercomputer network boot technology
    GONG Daoyong, SONG Changming, LIU Sha, QI Fengbin
    2019, 39(6):  1577-1582.  DOI: 10.11772/j.issn.1001-9081.2018122605
    Since the time overhead of network booting is high in supercomputer systems, it was proposed that the network boot distribution algorithm is one of the main factors affecting network boot performance and the main direction for optimizing it. Firstly, the main factors affecting large-scale network boot performance were analyzed. Secondly, combined with a typical supercomputer system, the network boot data flow topologies of the Supernode Cyclic Distribution Algorithm (SCDA) and the Board Cyclic Distribution Algorithm (BCDA) were analyzed. Finally, the pressure imposed by the two algorithms on each network path branch and the available network performance were quantitatively analyzed. It can be seen that the bandwidth performance of BCDA is 1 to 20 times that of SCDA. Theoretical analysis and model deduction show that a finer-grained mapping algorithm between compute nodes and boot servers allows as many boot servers as possible to be used when booting part of the resources, reducing premature competition for part of the network resources and improving network boot performance.
    PaaS platform resource allocation method based on demand forecasting
    XU Yabin, PENG Hong'en
    2019, 39(6):  1583-1588.  DOI: 10.11772/j.issn.1001-9081.2018122613
    In view of the lack of effective resource demand forecasting and optimal allocation in Platform-as-a-Service (PaaS) platforms, a resource demand forecasting model and an allocation method were proposed. Firstly, according to the periodicity of applications' demand for resources in a PaaS platform, the resource sequence was segmented; on the basis of short-term prediction and combined with the multi-periodicity characteristics of the application, a comprehensive prediction model was established by using a multiple regression algorithm. Then, a PaaS platform resource allocation system based on the Master-Slave mode was designed and implemented on the MapReduce architecture. Finally, resources were allocated based on the current task requests and the resource demand prediction results. The experimental results show that, compared with the autoregressive model and the exponential smoothing algorithm, the proposed resource demand forecasting model and allocation method reduces the mean absolute percentage error by 8.71 percentage points and 2.07 percentage points respectively, and the root mean square error by 2.01 percentage points and 0.46 percentage points respectively. It can be seen that the prediction model has a small error and fits the real values well, and its high accuracy is achieved with little time cost. Besides, the average waiting time for resource requests in the PaaS platform using the proposed prediction model decreases significantly.
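    As an illustration of the forecasting step only (not the paper's model), a minimal scikit-learn sketch that regresses resource demand on a short-term lag and a periodic lag; the synthetic demand series and the chosen period are assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def make_features(series, period):
            """Build regression features from a resource-usage series:
            the previous observation (short-term term) and the observation
            one period earlier (periodic term)."""
            X, y = [], []
            for t in range(period, len(series)):
                X.append([series[t - 1], series[t - period]])
                y.append(series[t])
            return np.array(X), np.array(y)

        # synthetic daily-periodic CPU demand, purely for illustration
        rng = np.random.default_rng(0)
        t = np.arange(24 * 30)
        demand = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

        period = 24
        X, y = make_features(demand, period)
        model = LinearRegression().fit(X[:-24], y[:-24])   # train on history
        pred = model.predict(X[-24:])                      # forecast the next day
        mape = np.mean(np.abs((y[-24:] - pred) / y[-24:])) * 100
        print(f"MAPE on held-out day: {mape:.2f}%")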
    Microoperation-based parameter auto-optimization method of Hadoop
    LI Yunshu, TENG Fei, LI Tianrui
    2019, 39(6):  1589-1594.  DOI: 10.11772/j.issn.1001-9081.2018122592
    As a large-scale distributed data processing framework, Hadoop has been widely used in industry in recent years. Because of its complex running process and large parameter space, manual parameter optimization and experience-based parameter optimization are currently ineffective. In order to solve this problem, a method and an analytical framework for automatic Hadoop parameter optimization were proposed. Firstly, the operation process of a job was broken down into several microoperations, determined at the finer granularity directly affected by tunable parameters, so that the relationship between parameters and the execution time of a single microoperation could be analyzed. Then, by reconstructing the job operation process from these microoperations, a model of the relationship between parameters and the execution time of the whole job was established. Finally, various search optimization algorithms were applied to this model to obtain optimized system parameters efficiently and quickly. Experiments were conducted with two types of jobs, terasort and wordcount. The experimental results show that, compared with the default parameter configuration, the proposed method reduces the job execution time by at least 41% and 30% respectively. The proposed method can effectively improve the job execution efficiency of Hadoop and shorten the job execution time.
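    A minimal sketch of the search step described above, under the assumption of a toy cost model: the per-microoperation time formulas and the parameter names are placeholders, not measured Hadoop behaviour.

        import itertools

        def predicted_job_time(params, micro_costs):
            """Toy cost model: total job time is the sum of per-microoperation times,
            each of which depends on some of the tunable parameters.
            The formulas here are placeholders, not Hadoop's real behaviour."""
            spill = micro_costs["spill"] / params["io.sort.mb"]
            shuffle = micro_costs["shuffle"] / params["reduce.parallel.copies"]
            merge = micro_costs["merge"] * params["io.sort.factor"] ** 0.5
            return spill + shuffle + merge

        # illustrative parameter grid; names loosely follow Hadoop conventions
        search_space = {
            "io.sort.mb": [100, 200, 400],
            "reduce.parallel.copies": [5, 10, 20],
            "io.sort.factor": [10, 50, 100],
        }
        micro_costs = {"spill": 3000.0, "shuffle": 1500.0, "merge": 4.0}

        best = min(
            (dict(zip(search_space, combo)) for combo in itertools.product(*search_space.values())),
            key=lambda p: predicted_job_time(p, micro_costs),
        )
        print("best parameters under the toy model:", best)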
    Weighted reviewer graph based spammer group detection and characteristic analysis
    ZHANG Qi, JI Shujuan, FU Qiang, ZHANG Chunjin
    2019, 39(6):  1595-1600.  DOI: 10.11772/j.issn.1001-9081.2018122611
    Concerning the problem of how to detect spammer groups that write fake reviews on e-commerce platforms, a Weighted reviewer Graph based Spammer group detection Algorithm (WGSA) was proposed. Firstly, a weighted reviewer graph was built based on the co-reviewing feature, with the edge weights calculated from a series of group spam indicators. Then, a threshold was set on the edge weight to filter out suspicious subgraphs. Finally, considering the community structure of the graph, a community discovery algorithm was used to generate the spammer groups. Compared with the K-Means clustering algorithm, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and hierarchical clustering on the large Yelp dataset, WGSA achieves higher accuracy. The characteristics of the detected spammer groups and the differences among them were also analyzed, which shows that spammer groups with different levels of activeness cause different harm: highly active groups are more harmful and deserve more attention.
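    An illustrative NetworkX sketch of the pipeline described above (weighted graph, edge-weight threshold, community discovery); the edge weights, the threshold value and the use of greedy modularity as the community algorithm are assumptions, not the paper's exact choices.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # toy reviewer pairs with a pre-computed co-review spam weight in [0, 1]
        weighted_pairs = [
            ("u1", "u2", 0.9), ("u2", "u3", 0.8), ("u1", "u3", 0.7),
            ("u4", "u5", 0.2), ("u5", "u6", 0.1), ("u3", "u4", 0.15),
        ]

        G = nx.Graph()
        for u, v, w in weighted_pairs:
            G.add_edge(u, v, weight=w)

        # keep only suspicious edges whose weight exceeds a threshold
        THRESHOLD = 0.5
        suspicious = nx.Graph()
        suspicious.add_edges_from(
            (u, v, d) for u, v, d in G.edges(data=True) if d["weight"] > THRESHOLD
        )

        # community discovery on the filtered subgraph yields candidate spammer groups
        groups = [set(c) for c in greedy_modularity_communities(suspicious, weight="weight")]
        print(groups)   # e.g. [{'u1', 'u2', 'u3'}]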
    Artificial intelligence
    Improved convolution neural network model averaging method based on Dropout
    CHENG Junhua, ZENG Guohui, LU Dunke, HUANG Bo
    2019, 39(6):  1601-1606.  DOI: 10.11772/j.issn.1001-9081.2018122501
    In order to effectively solve the overfitting problem in deep Convolutional Neural Networks (CNN), a model prediction averaging method based on a Dropout-improved CNN was proposed. Firstly, Dropout was employed in the pooling layers to sparsify the unit values of the pooling layers in the training phase. Then, in the testing phase, the probability of selecting a unit value according to the pooling-layer Dropout was multiplied by the probability of each unit value in the pooling region to form a double probability. Finally, the proposed double-probability weighted model averaging method was applied in the testing phase, so that the sparsifying effect of the pooling-layer Dropout in the training phase could be better reflected in the pooling layer during testing, thereby achieving a testing error as low as the training result. The testing error rates of the proposed method with the given network size on the MNIST and CIFAR-10 datasets were 0.31% and 11.23% respectively. The experimental results show that, when only the impact of the pooling layer on the results is considered, the improved method has a lower error rate than the Prob. weighted pooling and Stochastic Pooling methods. It can be seen that the pooling-layer Dropout makes the model more generalized, and the probability-weighted pooling unit values help model generalization and effectively avoid overfitting.
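    The snippet below is one plausible reading of the double-probability weighting, shown as a NumPy sketch for a single pooling region; the exact combination formula used in the paper may differ.

        import numpy as np

        def double_prob_pool(region, retain_p=0.5):
            """Test-time pooling sketch: each activation in the pooling region gets a
            value-based selection probability, which is multiplied by the Dropout
            retain probability (the "double probability") before weighted summation.
            This is only an illustration, not the paper's exact formula."""
            region = np.asarray(region, dtype=np.float64)
            s = region.sum()
            if s == 0:
                return 0.0
            value_prob = region / s                 # probability of each unit by its value
            double_prob = retain_p * value_prob     # combined with pooling-layer Dropout
            return float((double_prob * region).sum())

        print(double_prob_pool([[1.0, 2.0], [3.0, 6.0]]))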
    Model compression method of convolution neural network based on feature-reuse
    JI Shuwei, YANG Xiwang, HUANG Jinying, YIN Ning
    2019, 39(6):  1607-1613.  DOI: 10.11772/j.issn.1001-9081.2018091992
    In order to reduce the volume and computational complexity of convolutional neural network models without reducing accuracy, a compression method for convolutional neural network models based on a feature-reuse unit, called FR-unit (Feature-Reuse unit), was proposed. Firstly, different optimization methods were proposed for different types of convolutional neural network structures. Then, after convolving the input feature map, the input features were combined with the output features. Finally, the combined features were transferred to the next layer. Through the reuse of low-level features, the total number of extracted features does not change, so the accuracy of the optimized network is guaranteed not to change. The experimental results on the CIFAR10 dataset show that, after optimization, the volume of the Visual Geometry Group (VGG) model is reduced to 75.4% and its prediction time to 43.5%, and the volume of the ResNet model is reduced to 53.1% and its prediction time to 60.9%, without reducing the accuracy on the test set.
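    A minimal PyTorch sketch of a feature-reuse unit as described above: the input feature map is concatenated with the convolution output so that low-level features are passed on; the channel counts and kernel size are illustrative, not the paper's configuration.

        import torch
        import torch.nn as nn

        class FRUnit(nn.Module):
            """Feature-reuse unit sketch: convolve the input, then concatenate the
            input with the convolution output so low-level features are reused."""
            def __init__(self, in_channels, growth_channels):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv2d(in_channels, growth_channels, kernel_size=3, padding=1, bias=False),
                    nn.BatchNorm2d(growth_channels),
                    nn.ReLU(inplace=True),
                )

            def forward(self, x):
                out = self.conv(x)
                return torch.cat([x, out], dim=1)   # reuse input features alongside new ones

        x = torch.randn(1, 16, 32, 32)
        y = FRUnit(16, 8)(x)
        print(y.shape)   # torch.Size([1, 24, 32, 32])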
    Intelligent trigger mechanism for model aggregation and disaggregation
    NING Jin, CHEN Leiting, ZHOU Chuan, ZHANG Lei
    2019, 39(6):  1614-1618.  DOI: 10.11772/j.issn.1001-9081.2018112281
    Aiming at the high manual dependence and the frequent Aggregation and Disaggregation (AD) of existing model AD trigger mechanisms, an intelligent trigger mechanism based on a focus-area multi-entity temporal outlier detection algorithm was proposed. Firstly, focus-areas were divided based on attention neighbors. Secondly, the outlier score of a focus-area was obtained by calculating the k-distance outlier scores of the entities in the focus-area. Finally, a trigger mechanism for AD was constructed based on a strongest-focus-area threshold decision method. The experimental results on a real dataset show that, compared with traditional single-entity temporal outlier detection algorithms, the proposed algorithm improves Precision, Recall and F1-score by more than 10 percentage points. The proposed algorithm can not only judge the trigger time of AD operations in time, but also enable the simulation system to intelligently detect simulation entities in emergency situations and meet the requirements of multi-resolution modeling.
    Denoising autoencoder based extreme learning machine
    LAI Jie, WANG Xiaodan, LI Rui, ZHAO Zhenchong
    2019, 39(6):  1619-1625.  DOI: 10.11772/j.issn.1001-9081.2018112246
    In order to solve the problems that the random assignment of parameters in the Extreme Learning Machine (ELM) reduces the robustness of the algorithm and that its performance is significantly affected by noise, a Denoising AutoEncoder based ELM (DAE-ELM) algorithm was proposed by combining the Denoising AutoEncoder (DAE) with the ELM algorithm. Firstly, a denoising autoencoder was used to generate the input data, input weights and hidden layer parameters of the ELM. Then, the hidden layer output was obtained through the ELM to complete the training of the classifier. On the one hand, the algorithm inherits the advantages of the DAE, which means the automatically extracted features are more representative, more robust and insensitive to noise. On the other hand, the randomness of the ELM parameter assignment is overcome and the robustness of the algorithm is improved. The experimental results show that, compared with ELM, Principal Component Analysis ELM (PCA-ELM) and SAA-2, the classification error rate of DAE-ELM decreases by at least 5.6% on MNIST, 3.0% on Fashion-MNIST, 2.0% on Rectangles and 12.7% on Convex.
    Temporal evidence fusion method with consideration of time sequence preference of decision maker
    LI Xufeng, SONG Yafei, LI Xiaonan
    2019, 39(6):  1626-1631.  DOI: 10.11772/j.issn.1001-9081.2018102218
    Aiming at the fusion problem of temporal uncertain information, and in order to fully reflect the dynamic characteristics of temporal information fusion and the influence of the time factor on it, a temporal evidence fusion method considering the decision maker's preference for time sequence was proposed based on evidence theory. Firstly, the decision maker's time sequence preference was incorporated into temporal evidence fusion: through analysis of the characteristics of the temporal evidence sequence, this preference was measured based on the definition of a temporal memory factor. Then, the evidence sources were revised by a time sequence weight vector obtained by constructing an optimization model, together with the idea of evidence credibility. Finally, the revised evidences were fused by the Dempster combination rule. Numerical examples show that, compared with other fusion methods that do not consider the time factor, the proposed method can effectively handle conflicting information in a temporal information sequence and obtain a reasonable fusion result. Meanwhile, by considering the credibility of the temporal evidence sequence and the subjective preference of the decision maker, the proposed method reflects the influence of the decision maker's subjective factors on temporal evidence fusion and expresses the dynamic characteristics of temporal evidence fusion well.
    Link prediction model based on densely connected convolutional network
    WANG Wentao, WU Lintao, HUANG Ye, ZHU Rongbo
    2019, 39(6):  1632-1638.  DOI: 10.11772/j.issn.1001-9081.2018112279
    Current link prediction algorithms based on network representation learning mainly construct feature vectors by capturing the neighborhood topology information of network nodes for link prediction. However, these algorithms usually only learn from the neighborhood topology of a single network node and ignore the similarity between multiple nodes within a link structure. Aiming at these problems, a new Link Prediction model based on a Densely connected convolutional Network (DenseNet-LP) was proposed. Firstly, node representation vectors were generated by the network representation learning algorithm node2vec, and the structure information of the network nodes was mapped into three-dimensional feature information by these vectors. Then, a DenseNet was used to capture the features of the link structure, and a binary classification model was established to realize link prediction. The experimental results on four public datasets show that the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of the proposed model is increased by up to 18 percentage points compared with that of the network representation learning algorithm.
    Chinese medical question answer matching method based on attention mechanism and character embedding
    CHEN Zhihao, YU Xiang, LIU Zichen, QIU Dawei, GU Bengang
    2019, 39(6):  1639-1645.  DOI: 10.11772/j.issn.1001-9081.2018102184
    Aiming at the problems that current word segmentation tools cannot effectively distinguish all medical terms in the Chinese medical field and that feature engineering has a high labor cost, a multi-scale Convolutional Neural Network (CNN) modeling method based on an attention mechanism and character embedding was proposed. In the proposed method, character embedding was combined with a multi-scale CNN to extract the context information of question and answer sentences at different scales, and an attention mechanism was introduced to emphasize the interaction between question sentences and answer sentences, so that the semantic relationship between a question sentence and its correct answer sentence could be learned effectively. Since the question-answer matching task in the Chinese medical field has no standard evaluation dataset, the proposed method was evaluated on the publicly available Chinese Medical Question and Answer dataset (cMedQA). The experimental results show that the proposed method is superior to word matching, character matching and the Bi-directional Long Short-Term Memory network (BiLSTM) modeling method, with a Top-1 accuracy of 65.43%.
    Evolution relationship extraction of emergency based on attention-based bidirectional long short-term memory network model
    WEN Chang, LIU Yu, GU Jinguang
    2019, 39(6):  1646-1651.  DOI: 10.11772/j.issn.1001-9081.2018122533
    Concerning the problem that existing studies on emergency relationship extraction mostly focus on causality extraction and neglect other evolution relationships, and in order to improve the completeness of the information extracted for emergency decision-making, a method based on an attention-based bidirectional Long Short-Term Memory (LSTM) model was used to extract evolution relationships. Firstly, combined with the concept of evolution relationships in emergencies, an evolution relationship model was constructed and formally defined, and the emergency corpus was labeled according to the model. Then, a bidirectional LSTM network was built and an attention mechanism was introduced to calculate the attention probability and highlight the importance of key words in the text. Finally, the built network model was used to extract the evolution relationships. In the evolution relationship extraction experiments, compared with existing causality extraction methods, the proposed method can extract more comprehensive evolution relationships for emergency decision-making. At the same time, the average precision, recall and F1-score are increased by 7.3%, 6.7% and 7.0% respectively, which effectively improves the accuracy of evolution relationship extraction for emergencies.
    Real-time visual tracking based on dual attention siamese network
    YANG Kang, SONG Huihui, ZHANG Kaihua
    2019, 39(6):  1652-1656.  DOI: 10.11772/j.issn.1001-9081.2018112419
    In order to solve the problem that the Fully-Convolutional Siamese network (SiamFC) tracking algorithm is prone to model drift and tracking failure when the tracked target undergoes dramatic appearance changes, a new Dual Attention Siamese network (DASiam) was proposed to adapt the network model without online updating. Firstly, a modified Visual Geometry Group (VGG) network, which is more expressive and better suited to the target tracking task, was used as the backbone network. Then, a novel dual attention mechanism, consisting of a channel attention mechanism and a spatial attention mechanism, was added to the middle layer of the network to dynamically extract features: the channel dimension and the spatial dimension of the feature maps were transformed to obtain the dual attention feature maps. Finally, the feature representation of the model was further improved by fusing the feature maps of the two attention mechanisms. Experiments were conducted on three challenging tracking benchmarks: OTB2013, OTB100 and the 2017 Visual-Object-Tracking challenge (VOT2017) real-time challenge. The experimental results show that, running at 40 frames per second, the proposed algorithm achieves success rates higher than the baseline SiamFC by 3.5 percentage points on OTB2013 and 3 percentage points on OTB100, and surpasses the 2017 champion SiamFC in the VOT2017 real-time challenge, verifying the effectiveness of the proposed algorithm.
    Video frame prediction based on deep convolutional long short-term memory neural network
    ZHANG Dezheng, WENG Liguo, XIA Min, CAO Hui
    2019, 39(6):  1657-1662.  DOI: 10.11772/j.issn.1001-9081.2018122551
    Concerning the difficulty of accurately predicting the details of spatial structure information in video frame prediction, a deep convolutional Long Short-Term Memory (LSTM) neural network method was proposed by improving the convolutional LSTM neural network. Firstly, the input sequence images were fed into a coding network composed of two deep convolutional LSTMs of different channels, and the coding network learned the change features of both the position information and the spatial structure information of the input sequence. Then, the learned change features were input into the decoding network corresponding to the coding network channel, and the decoding network output the next predicted frame. Finally, the predicted frame was fed back into the decoding network to predict the subsequent frame, and all the predicted frames were output after a preset number of loops. In experiments on the Moving-MNIST dataset, compared with the convolutional LSTM neural network, the proposed method preserved the accuracy of position information prediction and showed a stronger ability to represent the details of spatial structure information under the same number of training steps. When the convolutional layers of the convolutional Gated Recurrent Unit (GRU) were deepened in the same way, the details of the spatial structure information also improved, verifying the generality of the idea behind the proposed method.
    Ship tracking and recognition based on Darknet network and YOLOv3 algorithm
    LIU Bo, WANG Shengzheng, ZHAO Jiansen, LI Mingfeng
    2019, 39(6):  1663-1668.  DOI: 10.11772/j.issn.1001-9081.2018102190
    Aiming at the problems of low utilization, high error rates, lack of recognition capability and heavy manual participation in video surveillance processing in the coastal and inland waters of China, a new ship tracking and recognition method based on the Darknet network model and the YOLOv3 algorithm was proposed to realize ship tracking together with real-time detection and recognition of ship types, solving the problem of ship tracking and recognition in important monitored waters. In the Darknet network of the proposed method, the idea of residual networks was introduced and cross-layer jump connections were used to increase the depth of the network, and a ship deep feature matrix was constructed to extract high-level ship features for combined learning and obtaining the ship feature maps. On this basis, the YOLOv3 algorithm was introduced to realize target prediction based on global image information, and target region prediction and target class prediction were integrated into a single neural network model. A punishment mechanism was added to enlarge the difference of ship features between frames. By using a logistic regression layer for binary classification prediction, target tracking and recognition can be realized quickly and with high accuracy. The experimental results show that the proposed algorithm achieves an average recognition accuracy of 89.5% at a speed of 30 frames per second; compared with traditional and deep learning algorithms, it not only has better real-time performance and accuracy, but is also more robust to various environmental changes, and can recognize the types and important parts of various ships.
    Laboratory personnel statistics and management system based on Faster R-CNN and IoU optimization
    SHENG Heng, HUANG Ming, YANG Jingjing
    2019, 39(6):  1669-1674.  DOI: 10.11772/j.issn.1001-9081.2018102182
    Aiming at the management requirement of real-time personnel statistics in office scenes where personnel positions are relatively fixed, a laboratory personnel statistics and management system based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) and Intersection over Union (IoU) optimization was designed and implemented, taking an ordinary university laboratory as the example. Firstly, the Faster R-CNN model was used to detect the heads of the people in the laboratory. Then, according to the output of the model, repeatedly detected targets were filtered out by using the IoU algorithm. Finally, a coordinate-based method was used to determine whether each workbench in the laboratory was occupied, and the corresponding data were stored in the database. The main functions of the system are as follows: ① real-time video surveillance and remote management of the laboratory; ② timed automatic photographing, detection and data acquisition, providing data support for the quantitative management of the laboratory; ③ query and visualization of laboratory personnel change data. The experimental results show that the proposed laboratory personnel statistics and management system based on Faster R-CNN and IoU optimization can be used for real-time personnel statistics and remote management in office scenes.
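    An illustrative sketch of the IoU-based duplicate filtering step (not the system's actual code); the boxes, scores and threshold are made-up examples.

        def iou(box_a, box_b):
            """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
            xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
            xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
            inter = max(0, xb - xa) * max(0, yb - ya)
            area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
            area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
            return inter / float(area_a + area_b - inter)

        def filter_duplicates(detections, iou_threshold=0.5):
            """Keep the higher-scoring box whenever two head detections overlap more
            than the threshold (the duplicate-filtering step after Faster R-CNN)."""
            kept = []
            for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
                if all(iou(box, k) < iou_threshold for k, _ in kept):
                    kept.append((box, score))
            return kept

        dets = [((10, 10, 50, 50), 0.95), ((12, 11, 52, 49), 0.90), ((100, 80, 140, 120), 0.88)]
        print(filter_duplicates(dets))   # the second, heavily overlapping box is removed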
    YOLO network character recognition method with variable candidate box density for international phonetic alphabet
    ZHENG Yi, QI Donglian, WANG Zhenyu
    2019, 39(6):  1675-1679.  DOI: 10.11772/j.issn.1001-9081.2018112361
    Aiming at the low recognition accuracy and poor practicality of traditional character feature extraction methods for the International Phonetic Alphabet (IPA), a You Only Look Once (YOLO) network character recognition method with variable candidate box density was proposed for the IPA. Firstly, based on the YOLO network and combined with three characteristics of the IPA, namely that its characters are closely arranged in the X-axis direction and have various types and forms, the distribution density of candidate boxes in the YOLO network was changed. Then, by increasing the distribution density of candidate boxes along the X-axis and reducing it along the Y-axis, the YOLO-IPA network was constructed. The proposed method was tested on an IPA dataset collected from Chinese Dialect Vocabulary, containing 1360 images of 72 categories. The experimental results show that the proposed method achieves a recognition rate of 93.72% for large characters and 89.31% for small characters. Compared with traditional character recognition algorithms, the proposed method greatly improves the recognition accuracy, and its detection time is reduced to less than 1 s in the experimental environment. Therefore, the proposed method can meet the needs of real-time applications.
    Pneumonia image recognition model based on deep neural network
    HE Xinyu, ZHANG Xiaolong
    2019, 39(6):  1680-1684.  DOI: 10.11772/j.issn.1001-9081.2018102112
    Current pneumonia image recognition algorithms face two problems. First, the extracted features cannot fit the pneumonia images well, because the transfer learning model used as the pneumonia feature extractor is trained on a source dataset whose images differ greatly from the pneumonia dataset. Second, the softmax classifier used by these algorithms cannot handle high-dimensional features well, leaving room for improvement in recognition accuracy. Aiming at these two problems, a pneumonia image recognition algorithm based on a Deep Convolutional Neural Network (DCNN) was proposed. Firstly, the GoogLeNet Inception V3 network model trained on the ImageNet dataset was used to extract features. Then, a feature fusion layer was added and a random forest classifier was used for classification and prediction. Experiments were carried out on the Chest X-Ray Images pneumonia standard dataset. The experimental results show that the recognition accuracy, sensitivity and specificity of the proposed model reach 96.77%, 97.56% and 94.26% respectively. The proposed model is 1.26 percentage points and 1.46 percentage points higher than the classic GoogLeNet Inception V3+Data Augmentation (GIV+DA) algorithm in recognition accuracy and sensitivity respectively, and is close to the optimal result of GIV+DA in specificity.
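    A hedged sketch of the feature-extraction-plus-random-forest idea using the Keras Inception V3 model and scikit-learn; it omits the feature fusion layer described above, and the placeholder arrays stand in for real chest X-ray images.

        import numpy as np
        from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score

        # Inception V3 pretrained on ImageNet as a fixed feature extractor (2048-d pooled features)
        extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

        def extract_features(images):
            """images: float array of shape (n, 299, 299, 3) with values in [0, 255]."""
            return extractor.predict(preprocess_input(images.copy()), verbose=0)

        # placeholders for chest X-ray data; in practice load and resize the real images
        x_train = np.random.rand(8, 299, 299, 3) * 255
        y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])
        x_test = np.random.rand(4, 299, 299, 3) * 255
        y_test = np.array([0, 1, 0, 1])

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(extract_features(x_train), y_train)
        print(accuracy_score(y_test, clf.predict(extract_features(x_test))))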
    Expression-insensitive three-dimensional face recognition algorithm based on multi-region fusion
    SANG Gaoli, YAN Chao, ZHU Rong
    2019, 39(6):  1685-1689.  DOI: 10.11772/j.issn.1001-9081.2018112301
    In order to make three-Dimensional (3D) face recognition robust to expression variations, a multi-region template fusion 3D face recognition algorithm based on semantic alignment was proposed. Firstly, to guarantee the semantic alignment of 3D faces, all 3D face models were densely aligned with a pre-defined standard reference 3D face model. Then, considering that expressions affect faces regionally, and in order to be robust to the region division, a similarity prediction method based on multi-region templates was proposed. Finally, the prediction results of the multiple classifiers were fused by the majority voting method. The experimental results show that the proposed algorithm achieves a rank-1 recognition rate of 98.69% on the FRGC (Face Recognition Grand Challenge) v2.0 expression 3D face database and a rank-1 recognition rate of 84.36% on the Bosphorus database with occlusion changes.
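    A minimal sketch of the majority-voting fusion step; the region classifiers' outputs below are hypothetical.

        from collections import Counter

        def majority_vote(predictions):
            """Fuse the identity predictions of the per-region classifiers by majority
            voting; ties are broken by the order of first appearance."""
            return Counter(predictions).most_common(1)[0][0]

        # e.g. identities predicted by classifiers built on different face regions
        region_predictions = ["subject_07", "subject_07", "subject_12", "subject_07", "subject_03"]
        print(majority_vote(region_predictions))   # subject_07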
    Three dimensional palmprint recognition based on neighbor ternary pattern and collaborative representation
    LIU Yuzhen, JIANG Zhengquan, ZHAO Na
    2019, 39(6):  1690-1695.  DOI: 10.11772/j.issn.1001-9081.2018102124
    Concerning the problem that two-Dimensional (2D) palmprint images are easily forged and affected by noise, a three-Dimensional (3D) palmprint recognition method based on the Neighbor Ternary Pattern (NTP) and collaborative representation was proposed. Firstly, a shape index was used to map the surface geometric information of the 3D palmprint into 2D data, avoiding the inaccurate description of 3D palmprint features given by common mean curvature or Gaussian curvature mapping. Secondly, the shape index image was divided into several blocks, and the NTP algorithm was used to extract texture features from the divided shape index images. Finally, collaborative representation was used to classify the features. Experiments on a 3D palmprint database show that, compared with classical algorithms, the proposed method achieves the best recognition performance, with a recognition rate of 99.52% and a recognition time of 0.6738 s. The proposed method improves the recognition rate by 7.77%, 6.02%, 5.12% and 3.97% respectively compared with the Local Binary Pattern (LBP), Local Ternary Pattern (LTP), CompCode and Mean Curvature Image (MCI) methods, and reduces the recognition time by 6.7 s, 15.9 s and 61 s compared with the Homotopy, Dual Augmented Lagrangian Algorithm (DALM) and SpaRSA methods. The experimental results show that the proposed algorithm has good feature extraction and classification ability, which can effectively improve the recognition accuracy and reduce the recognition time.
    Data science and technology
    Semantic-driven learning and classification method of judicial documents
    MA Jiangang, MA Yinglong
    2019, 39(6):  1696-1700.  DOI: 10.11772/j.issn.1001-9081.2018109193
    Efficient document classification techniques for large-scale judicial documents are crucial to current intelligent judicial applications such as similar case pushing, legal document retrieval, judgment prediction and sentencing assistance. General-domain document classification methods lack efficiency because they do not consider the complex structure and knowledge semantics of judicial documents. To solve this problem, a semantic-driven method for learning and classifying judicial documents was proposed. Firstly, a domain knowledge model oriented to the judicial domain was proposed and constructed to express document-level semantics clearly. Then, domain knowledge was extracted from the judicial documents based on this model. Finally, the judicial documents were trained and classified by using the Graph Long Short-Term Memory (Graph LSTM) model. The experimental results show that the proposed method is superior to the Long Short-Term Memory (LSTM) model, Multinomial Logistic Regression (MLR) and Support Vector Machine (SVM) in accuracy and recall.
    Improvement of Web search result clustering performance based on Word2Vec model feature extension
    YANG Nan, LI Yaping
    2019, 39(6):  1701-1706.  DOI: 10.11772/j.issn.1001-9081.2018102106
    For generalized or fuzzy queries, the content of the result list returned by a Web search engine is clustered to help users find the desired information quickly. Generally, the returned list consists of short texts called snippets, which carry little information, so the traditional Term Frequency-Inverse Document Frequency (TF-IDF) feature selection model is not suitable for them and the clustering performance is very low. An effective way to solve this problem is to extend the snippets with an external knowledge base. Inspired by neural network based word representation methods, a new snippet extension approach based on the Word2Vec model was proposed. In this approach, the TopN most similar words in the Word2Vec model were used to extend the snippets, and the extended text improved the clustering performance of TF-IDF feature selection. Meanwhile, in order to reduce the impact of noise caused by some commonly used terms, the term frequency weights in the TF-IDF matrix of the extended text were modified. Experiments were conducted on two open datasets, OPD239 and SearchSnippets, to compare the proposed method with pure snippets and with Wordnet based and Wikipedia based feature extensions. The experimental results show that the proposed method significantly outperforms the other methods in terms of clustering effect.
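    An illustrative gensim sketch (assuming gensim 4.x) of the TopN-similar-word snippet expansion; the toy corpus and the topn value are assumptions.

        from gensim.models import Word2Vec

        # tiny illustrative corpus; in practice the model is trained on a large background corpus
        corpus = [
            ["apple", "fruit", "juice"],
            ["apple", "iphone", "device"],
            ["banana", "fruit", "smoothie"],
            ["android", "device", "phone"],
        ]
        model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=200, seed=1)

        def expand_snippet(tokens, topn=3):
            """Append the topn most similar words of every token to the snippet, so the
            extended text gives TF-IDF more evidence to work with."""
            extended = list(tokens)
            for tok in tokens:
                if tok in model.wv:
                    extended += [w for w, _ in model.wv.most_similar(tok, topn=topn)]
            return extended

        print(expand_snippet(["apple", "juice"]))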
    Credit assessment method based on majority weight minority oversampling technique and random forest
    TIAN Chen, ZHOU Lijuan
    2019, 39(6):  1707-1712.  DOI: 10.11772/j.issn.1001-9081.2018102180
    In order to solve the problems of unbalanced datasets in credit assessment and the limited classification performance of a single classifier on unbalanced data, a Majority Weighted Minority Oversampling TEchnique-Random Forest (MWMOTE-RF) credit assessment method was proposed. Firstly, the MWMOTE technique was applied in the preprocessing stage to increase the number of minority class samples. Then, on the preprocessed balanced dataset, the random forest algorithm, a supervised machine learning algorithm, was used to classify and predict the data. With the Area Under the Curve (AUC) used to evaluate classifier performance, experiments were conducted on the German credit card dataset from the UCI database and on a company's car loan default dataset. The results show that the AUC value of the MWMOTE-RF method increases by 18% and 20% respectively compared with the random forest method and the Naive Bayes method on the same datasets. Meanwhile, when the random forest method is combined with the Synthetic Minority Over-sampling TEchnique (SMOTE) and ADAptive SYNthetic over-sampling (ADASYN) respectively, the AUC value of the MWMOTE-RF method is still 1.47% and 2.34% higher. The results prove the effectiveness of the proposed method and its improvement of classifier performance.
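    A sketch of the oversample-then-classify pipeline with AUC evaluation; imbalanced-learn does not ship MWMOTE, so SMOTE is used here as a stand-in oversampler, and the synthetic data stands in for the credit datasets.

        import numpy as np
        from imblearn.over_sampling import SMOTE          # stand-in; MWMOTE is not in imbalanced-learn
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # synthetic imbalanced "credit" data standing in for the real datasets
        X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample the minority class
        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)

        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"AUC = {auc:.3f}")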
    Cyber security
    Analysis method of passwords under Chinese context
    ZENG Jianping, CHEN Qile, WU Chengrong, FANG Xi
    2019, 39(6):  1713-1718.  DOI: 10.11772/j.issn.1001-9081.2018109122
    Concerning the problem that current research on password semantics is mainly based on English datasets and restricted to units such as common words or surnames, a password analysis method for the Chinese context based on known-password elements was proposed, using data analysis techniques over password strings and a pattern library built from Chinese poems and idioms. Firstly, the known-password elements were identified. Then, each element was treated as a single degree of freedom of the password. Finally, the attack cost in terms of degrees of freedom under a given attack success rate was calculated, giving a quantitative measure of password security. Quantitative analysis of a large number of plaintext passwords in the designed experiments shows that, in the Chinese context, 80% of user passwords have low security and can be easily broken by dictionary attacks.
    Network security measurement based on dependency relationship graph and common vulnerability scoring system
    WANG Jiaxin, FENG Yi, YOU Rui
    2019, 39(6):  1719-1727.  DOI: 10.11772/j.issn.1001-9081.2018102199
    Administrators usually take network security metrics as important bases for measuring network security. The Common Vulnerability Scoring System (CVSS) is one of the generally accepted network measurement methods. Aiming at the problem that existing network security measurements based on CVSS cannot accurately measure both the probability and the impact of network attacks at the same time, an improved base metric algorithm based on a dependency relationship graph and CVSS was proposed. Firstly, the dependency relationships of the vulnerability nodes in an attack graph were explored to build the dependency relationship graph. Then, the base metric algorithm for vulnerabilities in CVSS was modified according to the dependency relationships. Finally, the vulnerability scores in the whole attack graph were aggregated to obtain the probability and the impact of network attacks. Simulation results with a simulated attacker show that the proposed algorithm is superior to simple aggregation of CVSS scores in terms of accuracy and credibility, and produces measurement results closer to the actual simulation results.
    Malicious code detection method based on icon similarity analysis
    YANG Ping, ZHAO Bing, SHU Hui
    2019, 39(6):  1728-1734.  DOI: 10.11772/j.issn.1001-9081.2018112259
    According to statistics, a large proportion of the huge amount of malicious code is deceptive malicious code, which uses icons similar to those of commonly used software to disguise itself and deceive users into clicking, so as to spread and attack. Aiming at the low efficiency and high cost of traditional malicious code detection methods based on code and behavior characteristics when dealing with deceptive malicious code, a new malicious code detection method was proposed. Firstly, the icon resource information of Portable Executable (PE) files was extracted and icon similarity analysis was performed with an image hash algorithm. Then, the import table information of the PE files was extracted and a fuzzy hash algorithm was used for behavior similarity analysis. Finally, clustering and locality-sensitive hash algorithms were adopted to realize icon matching, and a lightweight and fast malicious code detection tool was designed and implemented. The experimental results show that the designed tool has a good detection effect on malicious code.
    Risk analysis of cyber-physical system based on dynamic fault trees
    XU Bingfeng, ZHONG Zhicheng, HE Gaofeng
    2019, 39(6):  1735-1741.  DOI: 10.11772/j.issn.1001-9081.2018122601
    In order to solve the problem that network security attacks against a Cyber-Physical System (CPS) can cause system failures, a CPS risk modeling and analysis method based on dynamic fault trees was proposed. Firstly, dynamic fault trees and dynamic attack trees were integrated to build the Attack-Dynamic Fault Trees (Attack-DFTs) model. Then, the formal models of the static and dynamic subtrees in Attack-DFTs were given by binary decision diagrams and input-output Markov chains respectively. On this basis, a qualitative analysis method for Attack-DFTs was given to analyze the basic event paths by which network security attacks cause system failures. Finally, the effectiveness of the proposed method was verified by a typical case study of a pollution system. The case analysis results show that the proposed method can analyze the event sequences in which a network security attack causes a CPS failure, and can effectively realize the formal safety assessment of CPS.
    Intrusion detection approach for IoT based on practical Byzantine fault tolerance
    PAN Jianguo, LI Hao
    2019, 39(6):  1742-1746.  DOI: 10.11772/j.issn.1001-9081.2018102096
    Current Internet of Things (IoT) intrusion detection achieves a high detection rate for known types of attacks, but the energy consumption of network nodes is high. Aiming at this problem, an intrusion detection approach based on the Practical Byzantine Fault Tolerance (PBFT) algorithm was proposed. Firstly, a Support Vector Machine (SVM) was used for pre-training to obtain the intrusion detection decision rules, and the trained rules were applied to each node in the IoT. Then, some nodes were elected by voting to perform active intrusion detection on the other nodes in the network and announce their detection results to them. Finally, each node judged the state of the other nodes according to the PBFT algorithm, so that the detection results reached consistency across the system. Simulation results on the NSL-KDD dataset with TinyOS show that the proposed approach reduces energy consumption by 12.2% and 7.6% on average compared with the Integrated Intrusion Detection System (IIDS) and the Two-layer Dimension reduction and Two-tier Classification (TDTC) approach respectively, effectively reducing the energy consumption of the IoT.
    Gait feature identification method based on motion sensor in smartphone
    KONG Jing, GUO Yuanbo, LIU Chunhui, WANG Yifeng
    2019, 39(6):  1747-1752.  DOI: 10.11772/j.issn.1001-9081.2018102161
    Identification based on behavioral features is a leading biometric recognition technology. In order to optimize the data processing and recognition procedures in existing studies on gait-based identification, a method that extracts gait features from smartphone motion sensor data for identification was proposed. Firstly, a spatial transformation algorithm was used to solve the problem of sensor coordinate system drift, so that the data describe the behavioral features completely and accurately. Then, the Support Vector Machine (SVM) algorithm was used to classify and identify the changes of gait features caused by a change of user. The experimental results show that the identification accuracy of the proposed method is 95.5%, and it can effectively identify a change of user while reducing space cost and implementation difficulty.
    Advanced computing
    Machine learning based online mapping approach for heterogeneous multi-core processor system
    AN Xin, ZHANG Ying, KANG An, CHEN Tian, LI Jianhua
    2019, 39(6):  1753-1759.  DOI: 10.11772/j.issn.1001-9081.2018112311
    Heterogeneous Multi-core Processor (HMP) platforms have become the mainstream solution for modern embedded system design, and online mapping or scheduling plays a vital role in making full use of their advantages of high performance and low power consumption. Aiming at the dynamic mapping problem of application tasks on HMPs, a mapping and scheduling approach based on a machine learning prediction model was proposed. On the one hand, a machine learning model was constructed to predict and evaluate the performance of different mapping strategies rapidly and efficiently, providing support for online scheduling. On the other hand, the machine learning model was integrated with a genetic algorithm to find the optimal resource allocation strategy efficiently. Finally, a Motion-Joint Photographic Experts Group (M-JPEG) decoder was used to verify the effectiveness of the proposed approach. The experimental results show that, compared with the Round Robin Scheduler (RRS) and the sampling scheduling approach, the proposed online mapping/scheduling approach reduces the average execution time by about 19% and 28% respectively.
    Automated course arrangement algorithm based on multi-class iterated local search
    SONG Ting, CHEN Mao, WU Chao, ZHANG Gongzhao
    2019, 39(6):  1760-1765.  DOI: 10.11772/j.issn.1001-9081.2018102183
    Focusing on the issues that local search algorithms are prone to fall into local optima and do not adapt well to course arrangement under multiple constraints, an automated course arrangement algorithm based on multi-class iterated local search was proposed. Firstly, course arrangement problems were classified by a multi-class classifier according to their characteristics, which guided the neighborhood selection and parameter setting of the iterated local search. Then, in the iterated local search process, a sequence-based greedy algorithm was used to obtain feasible solutions. Finally, a simulated annealing algorithm with two temperature controls oriented to the problem characteristics was used to search for a locally optimal solution in the neighborhood, and the current optimal solution was perturbed by a specific strategy and used as the new initial solution for the next iteration, so as to achieve global optimization. The proposed algorithm was tested on two internationally known datasets, the second international timetabling competition dataset and the Lewis 60 dataset. The experimental results show that, compared with existing efficient algorithms in the literature, the proposed algorithm achieves higher efficiency and better solution quality.
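    A generic iterated-local-search skeleton for illustration only; the toy one-dimensional objective stands in for a real timetable cost function, and the classifier-guided neighborhood selection described above is not shown.

        import random

        def iterated_local_search(initial, local_search, perturb, cost, iterations=100, seed=0):
            """Generic iterated-local-search skeleton: repeatedly improve a solution with
            local search, perturb the best solution found, and restart the local search
            from the perturbed solution."""
            rng = random.Random(seed)
            best = local_search(initial)
            for _ in range(iterations):
                candidate = local_search(perturb(best, rng))
                if cost(candidate) < cost(best):
                    best = candidate
            return best

        # toy 1-D problem standing in for a timetable: minimise (x - 42)^2
        result = iterated_local_search(
            initial=0,
            local_search=lambda x: min((x - 1, x, x + 1), key=lambda v: (v - 42) ** 2),
            perturb=lambda x, rng: x + rng.randint(-10, 10),
            cost=lambda x: (x - 42) ** 2,
            iterations=500,
        )
        print(result)   # converges to 42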
    Network and communications
    Byzantine fault tolerance consensus algorithm based on voting mechanism
    WANG Haiyong, GUO Kaixuan, PAN Qiqing
    2019, 39(6):  1766-1771.  DOI: 10.11772/j.issn.1001-9081.2018102049
    Focusing on the problems of high energy consumption, low efficiency and poor scalability of the Practical Byzantine Fault Tolerance (PBFT) consensus algorithm, the Dynamic authorized Byzantine Fault Tolerance (DDBFT) consensus algorithm and the Consortium Byzantine Fault Tolerance (CBFT) consensus algorithm in blockchains, a Practical Byzantine Fault Tolerant consensus algorithm based on Voting (VPBFT) was proposed by introducing a voting mechanism. Firstly, based on the PBFT algorithm, the nodes in the network were divided into four types with different responsibilities. Secondly, the voting nodes had voting and scoring rights to supervise the production nodes so that they produce data blocks honestly and reliably; production nodes that produced valid data blocks had priority to be selected for the next round, candidate nodes could be voted in as production nodes, and ordinary nodes could be voted in as production nodes or candidate nodes. Finally, because the numbers of the different node types satisfy a certain quantitative relationship, the parameters can be adjusted dynamically when the number of nodes of a certain type or the total number of nodes in the network changes, so that the algorithm adapts to dynamic networks. Performance simulation and analysis show that the proposed VPBFT algorithm has lower energy consumption, shorter delay, higher fault tolerance and higher dynamicity than consensus algorithms such as PBFT, DDBFT and CBFT.
    Software defined network based fault tolerant routing mechanism for satellite networks
    JIA Mengyao, WANG Xingwei, ZHANG Shuang, YI Bo, HUANG Min
    2019, 39(6):  1772-1779.  DOI: 10.11772/j.issn.1001-9081.2018122615
    Because satellite networks have high requirements for security and fault-handling ability, Software Defined Network (SDN) technology was introduced and a central controller was set up in the network to enhance its fault-handling ability. Firstly, a satellite network model was designed based on the SDN idea, the operating parameters of the satellites in a three-layer orbit were calculated, and the constellations were built. Then, a hierarchical routing method was used to design a fault tolerant routing mechanism for the satellite network. Finally, simulation experiments were carried out on the Mininet platform, and the results of the Fault-Tolerant Routing algorithm (FTR) were compared with those of the inter-Satellite Routing algorithm based on Link Recognizing (LRSR) and the Multi-Layered Satellite Routing algorithm (MLSR). The comparison shows that, when there are no damaged nodes or links in the network, the total routing delay of FTR is 6.06% lower on average than that of LRSR, which demonstrates the effectiveness of introducing SDN centralized control, and the packet loss rate of FTR is 25.79% lower than that of MLSR, which also targets minimum delay, demonstrating the effectiveness of the temporary storage routing mechanism designed for Medium Earth Orbit (MEO) satellites. When node and link failures in the network are severe, the total routing delay of FTR is 3.99% lower than that of LRSR and 19.19% lower than that of MLSR, and its packet loss rate is 16.94% lower than that of LRSR and 37.95% lower than that of MLSR, which demonstrates the fault tolerance of FTR. The experimental results prove that the SDN-based fault tolerant routing mechanism for satellite networks has better fault tolerance capability.
    Quantitative analysis of physical secret key in OFDM channel based on universal software radio peripheral
    DING Ning, GUAN Xinrong, YANG Weiwei
    2019, 39(6):  1780-1785.  DOI: 10.11772/j.issn.1001-9081.2018102120
    In order to compare and analyze the performance of the single threshold quantization algorithm and the double threshold quantization algorithm on measured data, and to improve the performance of physical-layer secret keys by optimizing the quantization parameters, an Orthogonal Frequency Division Multiplexing (OFDM) system was built with Universal Software Radio Peripherals (USRP). The channel amplitude feature obtained through channel estimation was extracted as the key source, and the performance of the two quantization algorithms was analyzed on the measured data in terms of key consistency, randomness and residual key length. The results show that the single threshold quantization algorithm has an optimal quantization threshold that minimizes the key inconsistency rate under a given key randomness constraint, that the double threshold quantization algorithm has an optimal quantization factor that maximizes the effective secret key length, and that, when the Cascade key negotiation algorithm is used for negotiation, there is a trade-off between key consistency and key generation rate for the different quantization algorithms.
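    An illustrative NumPy sketch of single- and double-threshold quantization of channel amplitudes; the simulated Rayleigh amplitudes and the guard-band factor are assumptions, not the USRP measurements.

        import numpy as np

        def single_threshold_bits(h):
            # one bit per sample: above / below the median of the amplitude sequence
            return (h > np.median(h)).astype(int)

        def double_threshold_bits(h, alpha=0.3):
            # samples inside the guard band [mean - alpha*std, mean + alpha*std] are dropped,
            # trading key length for a lower key disagreement rate
            lo, hi = h.mean() - alpha * h.std(), h.mean() + alpha * h.std()
            keep = (h < lo) | (h > hi)
            bits = (h > hi).astype(int)
            return bits, keep

        rng = np.random.default_rng(1)
        h_alice = rng.rayleigh(scale=1.0, size=1000)        # channel amplitude estimates at Alice
        h_bob = h_alice + rng.normal(0.0, 0.05, size=1000)  # reciprocal but noisy estimates at Bob

        ka, kb = single_threshold_bits(h_alice), single_threshold_bits(h_bob)
        print("single-threshold disagreement rate:", np.mean(ka != kb))

        ba, keep_a = double_threshold_bits(h_alice)
        bb, keep_b = double_threshold_bits(h_bob)
        common = keep_a & keep_b                            # positions both sides keep
        print("double-threshold disagreement rate:", np.mean(ba[common] != bb[common]),
              "effective key length:", int(common.sum()))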
    Design of optimization algorithm for selfish misbehavior in medium access control protocol of mobile Ad Hoc network
    GAO Shijuan, WANG Xijun, ZHU Qingchao
    2019, 39(6):  1786-1791.  DOI: 10.11772/j.issn.1001-9081.2018102152
    To address problems such as static behavior, unfairness and complexity in the Selfish Misbehavior (SM) processing mechanism of the Medium Access Control (MAC) protocol of Mobile Ad Hoc NETwork (MANET), an optimization algorithm for SM was proposed. By using optimization theory and feedback theory, the Optimal Access Probability (OAP) was derived from historical samples, so that parameters change dynamically and the static nature is improved. Then, all nodes in the network were set to use the OAP in the given period, which promoted the fairness index of the network. Finally, a linear iteration mechanism was adopted to avoid an increase in complexity. On the basis of the above, the stability and effectiveness of the proposed algorithm were proved theoretically through a Lyapunov method and the global stable point. Experimental results show that, with the proposed algorithm, the number of SM occurrences decreases by 30%-50%, the end-to-end delay decreases by 8-10 ms, the throughput increases by about 0.5 Mb/s, and the fairness index rises by 0.05, while the control overhead remains unchanged, all of which indicates that the performance of the SM processing mechanism is improved.
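    A hypothetical sketch of a linearly iterated, feedback-driven access-probability update in the spirit described above; the step size, target and clipping bounds are invented, and the paper's actual OAP derivation is not reproduced here.

import numpy as np

def update_access_probability(p, collision_rate, target=0.1, step=0.05,
                              p_min=0.05, p_max=1.0):
    """Lower the access probability when collisions exceed the target, raise it otherwise."""
    p = p + step * (target - collision_rate)
    return float(np.clip(p, p_min, p_max))

p = 0.5
for observed in [0.4, 0.3, 0.15, 0.08]:   # surrogate per-period collision measurements
    p = update_access_probability(p, observed)
    print(round(p, 3))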
    Radio wave propagation characteristics and analysis software in troposphere based on split step wavelet method
    WEI Shanshan, HU Shengbo, YAN Tingting, MO Jinrong
    2019, 39(6):  1792-1798.  DOI: 10.11772/j.issn.1001-9081.2018102119
    In order to meet the needs of tropospheric wireless communication system design and optimization, the tropospheric radio wave propagation characteristics were studied based on the parabolic wave equation and the Split Step Wavelet Method (SSWM), and analysis software for radio wave propagation characteristics was developed. Firstly, a method for analyzing tropospheric propagation characteristics based on the split step wavelet method was presented by establishing a computation scene for the numerical solution. Then, the tropospheric radio wave propagation characteristics analysis software was developed based on the proposed analysis method and Matlab. The numerical results show that the convergence of the proposed SSWM is better than that of the Split Step Fourier Method (SSFM); tropospheric propagation loss is closely related to antenna height and elevation: the smaller the antenna elevation angle, the smaller the propagation loss; the larger the antenna height, the smaller the propagation loss; and the propagation loss in an evaporation duct environment is smaller than that in the standard atmospheric environment. In addition, the developed analysis software has a user-friendly graphical user interface and is simple and flexible to operate.
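    A minimal single-step sketch of the split-step idea, written here with the Fourier variant (SSFM) mentioned above as the baseline rather than the wavelet-based SSWM; the frequency, grid sizes and refractivity profile are toy assumptions, not the paper's computation scene.

import numpy as np

k0 = 2 * np.pi * 3e9 / 3e8           # free-space wavenumber at 3 GHz
dz, dx = 1.0, 50.0                    # vertical step (m), range step (m)
z = np.arange(0, 512) * dz
u = np.exp(-((z - 50.0) / 10.0) ** 2).astype(complex)    # Gaussian source field
n = 1.0 + 1e-6 * (300 - 0.13 * z)     # toy modified refractive index profile

p = 2 * np.pi * np.fft.fftfreq(z.size, d=dz)              # vertical wavenumbers
diffraction = np.exp(-1j * p ** 2 * dx / (2 * k0))         # free-space half of the operator
refraction = np.exp(1j * k0 * (n - 1.0) * dx)              # environment half of the operator

u = refraction * np.fft.ifft(diffraction * np.fft.fft(u))  # one range step of propagation
loss = -20 * np.log10(np.abs(u) + 1e-12)                    # relative field attenuation per height bin
print(loss[45:50].round(2))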
    Virtual reality and multimedia computing
    Group formation control method based on Voronoi diagram
    HUANG Dongjin, DUAN Siwen, LEI Xue, LIANG Jingkun
    2019, 39(6):  1799-1803.  DOI: 10.11772/j.issn.1001-9081.2018102210
    Group formation control technologies are often used for the formation scenes of a large number of characters in film and television works, but many group formation technologies tend to focus on freely moving individual characters without considering overall control of the formation, which leaves the scene lacking beauty, integrity and organization. To solve these problems, a group formation control method based on the Voronoi diagram was proposed. Firstly, the group formation was divided into Voronoi diagram spaces to create a formation grid containing all the agents. Then, a new group formation deformation algorithm was proposed, in which an artificial potential energy field and the relative velocity obstacle method were used to avoid obstacles reasonably, and a spring system was combined to keep the formation as stable as possible during deformation. Finally, the Lloyd algorithm was used to quickly restore the target formation. The experimental results show that the proposed method can simulate group formation transformation well, is suitable for various complex scenes, and produces an aesthetic, coherent and organized formation transformation effect.
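    A small sketch of the Lloyd iteration used to restore a target formation, assuming a discrete approximation in which Voronoi regions are obtained by nearest-agent assignment over sampled points; the positions, domain and step rate are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
agents = rng.uniform(0, 10, size=(20, 2))       # current agent positions
samples = rng.uniform(0, 10, size=(5000, 2))    # dense samples of the formation area

def lloyd_step(agents, samples, rate=0.5):
    d = np.linalg.norm(samples[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)                    # nearest-agent (discrete Voronoi) assignment
    new_agents = agents.copy()
    for i in range(len(agents)):
        cell = samples[owner == i]
        if len(cell):
            new_agents[i] += rate * (cell.mean(axis=0) - agents[i])  # move toward the cell centroid
    return new_agents

for _ in range(10):
    agents = lloyd_step(agents, samples)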
    Low-light image enhancement method based on simulating multi-exposure fusion
    SIMA Ziling, HU Feng
    2019, 39(6):  1804-1809.  DOI: 10.11772/j.issn.1001-9081.2018112284
    Aiming at the problems of low luminance, low contrast and poor visual information in low-light images, a low-light image enhancement method based on simulating multi-exposure fusion was proposed. Firstly, an improved variational Retinex model and morphology were combined to generate a reference map, ensuring the subject information in the exposed image set. Then, a new illumination compensation normalization function was constructed by combining the Sigmoid function and gamma correction; at the same time, an unsharp masking algorithm based on Gaussian guided filtering was proposed to adjust the details of the reference map. Finally, weights for the exposed image set were designed from luminance, chromatic information and exposure rate respectively, and the final enhancement result was obtained through multi-scale fusion, effectively avoiding the halo phenomenon and color distortion. The experimental results on different public datasets show that, compared with traditional low-light image enhancement methods, the proposed method reduces the lightness distortion rate and increases the visual information fidelity. The proposed method can effectively preserve visual information, which is conducive to real-time application of low-light image enhancement.
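    A hypothetical illumination-compensation curve combining gamma correction with a Sigmoid, in the spirit of the normalization function described above; the exact function and constants used in the paper are not reproduced, so the parameters here are assumptions.

import numpy as np

def illumination_compensation(L, gamma=0.6, k=8.0, x0=0.5):
    """L: luminance in [0, 1]. Gamma lifts dark regions; the Sigmoid renormalizes contrast."""
    g = np.clip(L, 0.0, 1.0) ** gamma
    s = 1.0 / (1.0 + np.exp(-k * (g - x0)))
    # rescale so the curve still maps 0 -> 0 and 1 -> 1
    s0 = 1.0 / (1.0 + np.exp(k * x0))
    s1 = 1.0 / (1.0 + np.exp(-k * (1.0 - x0)))
    return (s - s0) / (s1 - s0)

L = np.linspace(0.0, 1.0, 5)
print(illumination_compensation(L))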
    Slope unit extraction algorithm based on texture watershed
    CHENG Lu, ZHOU Bo
    2019, 39(6):  1810-1815.  DOI: 10.11772/j.issn.1001-9081.2018102164
    Slope units are widely used in the prevention and evaluation of landslide-type geological hazards; their extraction and division are the primary goal and an important foundation for landslide risk assessment. Considering the parallel-boundary and incorrect-segmentation problems of slope units extracted by the traditional Geographic Information System (GIS) method, a slope unit extraction algorithm based on texture watershed was proposed, in which slope units are extracted by segmenting terrain images. Firstly, a Digital Elevation Model (DEM) image was obtained by pre-processing the terrain data, and DEM texture features were extracted with the gray level co-occurrence matrix. Then, a gradient image fusing gray level and texture features was calculated and segmented by marker-based watershed segmentation to accurately obtain mountain boundaries and watershed boundaries. Finally, combined with positive and negative terrains, the mountain objects were segmented by watershed segmentation to extract slope units. The experimental results show that the proposed method is effective in segmenting DEM images of different landform types and resolutions. Compared with the traditional GIS method, horizontal and inclined planes can be segmented correctly, and the parallel-boundary problem caused by depression filling can be effectively avoided.
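    A pure-NumPy sketch of a gray level co-occurrence matrix for a single horizontal offset and the contrast feature derived from it; the paper's actual offsets, quantization levels and the fusion with the gray-level gradient are not reproduced here.

import numpy as np

def glcm_contrast(img, levels=16):
    """img: 2-D array of gray values in [0, 255]; returns the contrast of the (0, 1) offset GLCM."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(int)    # quantize to `levels` bins
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()             # horizontally adjacent pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1)
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    return ((i - j) ** 2 * glcm).sum()

dem = np.random.default_rng(2).random((128, 128)) * 255           # surrogate DEM grayscale image
print(round(glcm_contrast(dem), 3))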
    Grayscale image colorization algorithm based on dense neural network
    ZHANG Na, QIN Pinle, ZENG Jianchao, LI Qi
    2019, 39(6):  1816-1823.  DOI: 10.11772/j.issn.1001-9081.2018102100
    Aiming at the low information extraction rate of traditional methods and the unsatisfactory coloring effect in grayscale image colorization, a grayscale image colorization algorithm based on a dense neural network was proposed to improve the colorization effect and make the image information easier for human eyes to observe. Making full use of the high information extraction efficiency of the dense neural network, an end-to-end deep learning model was built and trained to extract multiple types of information and features from the image. During training, the loss of the network output (such as information loss and classification loss) was gradually reduced by comparison with the original image. After training, with only a grayscale image fed into the trained network, a full and vibrant color image could be obtained. The experimental results show that introducing the dense network can effectively alleviate problems such as color leakage, loss of detail information and low contrast during colorization. The coloring effect is significantly improved compared with current advanced coloring methods based on Visual Geometry Group (VGG)-net, U-Net, dual-stream network structures, Residual Network (ResNet), etc.
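    A minimal PyTorch sketch of the dense connectivity such a network builds on: each layer receives the concatenated feature maps of all preceding layers. The growth rate, layer count and the rest of the colorization architecture are placeholders, not the paper's network.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate, 3, padding=1)))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # dense connectivity: concatenate all earlier features
        return x

gray = torch.randn(1, 1, 64, 64)                  # a single-channel grayscale input
print(DenseBlock(in_channels=1)(gray).shape)      # torch.Size([1, 49, 64, 64])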
    Frontier & interdisciplinary applications
    Evolutionary game model under synergistic effect of time scale and selection preference
    WANG Xilong, WANG Jicheng, LUO Cheng, TIAN Xiuxia
    2019, 39(6):  1824-1828.  DOI: 10.11772/j.issn.1001-9081.2018102196
    Considering the emergence and maintenance of cooperative behavior, an evolutionary game model that can promote cooperation was proposed based on evolutionary game theory and network theory. In the proposed model, time scale and selection preference were introduced into the evolutionary game simultaneously. In the initialization phase, players were divided into two categories according to the time scales of their strategy updates: players in one category updated their strategies in each round, while players in the other category decided whether to update their strategies with a certain probability after every round of the game. In the strategy updating phase, the reputation of a player was determined by his distribution to his neighbors, and all players preferred to learn the strategies of neighbors with good reputation. The simulation results show that, in the proposed evolutionary game model under the synergistic effect of time scale and selection preference, cooperative behavior can be maintained in the group; the players with inertia hinder the emergence of cooperation, but the irrational behavior of players can promote cooperation.
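    A schematic sketch of one update round in the spirit of the model above: "inertial" players update only with probability q, imitation targets are drawn in proportion to neighbor reputation, and strategies are adopted with a Fermi rule. The payoff matrix, random neighbor graph, reputation values and all parameters are toy assumptions, not the paper's model.

import numpy as np

rng = np.random.default_rng(3)
N = 100
strategy = rng.integers(0, 2, N)                 # 1 = cooperate, 0 = defect
inertial = rng.random(N) < 0.5                   # class that updates only with probability q
reputation = rng.random(N)                       # stand-in for a reputation score
neighbors = [rng.choice(np.delete(np.arange(N), i), 4, replace=False) for i in range(N)]

def payoff(s_i, s_j, b=1.3):
    """Weak prisoner's dilemma with toy parameters: R=1, T=b, S=0, P=0."""
    return [[0.0, b], [0.0, 1.0]][s_i][s_j]

def update_round(strategy, q=0.3, K=0.5):
    new = strategy.copy()
    for i in range(N):
        if inertial[i] and rng.random() > q:
            continue                              # inertial players often keep their strategy
        w = reputation[neighbors[i]]
        j = rng.choice(neighbors[i], p=w / w.sum())   # prefer reputable neighbors
        pi_i = sum(payoff(strategy[i], strategy[k]) for k in neighbors[i])
        pi_j = sum(payoff(strategy[j], strategy[k]) for k in neighbors[j])
        if rng.random() < 1.0 / (1.0 + np.exp((pi_i - pi_j) / K)):   # Fermi imitation rule
            new[i] = strategy[j]
    return new

for _ in range(20):
    strategy = update_round(strategy)
print("cooperation level:", strategy.mean())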
    Cascading failure model of carbon emission spatial correlation system considering load overload
    HUANG Guangqiu, XIE Rong
    2019, 39(6):  1829-1835.  DOI: 10.11772/j.issn.1001-9081.2018112294
    In order to increase the credibility of evaluating the damage caused by cascading failures triggered by emergencies in a carbon emission correlation system, an overload failure probability was proposed on the basis of the traditional complex-network "load-capacity" cascading failure model, taking into account the redundancy of individual members with respect to load, and a cascading failure model considering load overload was constructed. Then, based on the characteristics of nodes, six load allocation strategies for overloaded nodes were proposed. The simulation results show that, among the load allocation strategies for overloaded nodes, the integrated allocation strategy is superior in general, as it can effectively control the scale of cascading failures and increase the robustness of the network; increasing the overload parameters within a certain range helps to reduce the impact of cascading failures, while the improvement is not significant when the parameters are too large; and under different load allocation strategies, the residual coefficient has an optimal value and the capacity adjustment parameters have optimal ranges that keep the carbon emission correlation network robust at low construction cost, whereas the tight allocation strategy implies high construction cost.
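    A sketch of a load-capacity cascade with an overload band, assuming degree-based initial loads, a random scale-free network and a single even redistribution rule; the failure-probability curve and all parameters below are illustrative and do not reproduce the paper's six allocation strategies.

import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.barabasi_albert_graph(200, 2, seed=4)
load = {v: float(d) ** 1.2 for v, d in G.degree()}         # initial load from node degree
capacity = {v: 1.5 * load[v] for v in G}                    # tolerance parameter 1.5
overload = {v: 1.3 * capacity[v] for v in G}                # upper limit of the overload band

def overload_failure_prob(l, c, o):
    """0 below capacity, rising linearly to 1 at the overload limit."""
    return 0.0 if l <= c else min(1.0, (l - c) / (o - c))

failed = {0}                                                # trigger: node 0 fails
frontier = [0]
while frontier:
    nxt = []
    for v in frontier:
        alive = [u for u in G.neighbors(v) if u not in failed]
        for u in alive:
            load[u] += load[v] / len(alive)                 # even redistribution of the lost load
            if rng.random() < overload_failure_prob(load[u], capacity[u], overload[u]):
                failed.add(u)
                nxt.append(u)
    frontier = nxt
print("cascade size:", len(failed))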
    Order allocation problem of vehicle logistics service supply chain considering multiple modes of transportation
    LI Liying, FU Hanmei
    2019, 39(6):  1836-1841.  DOI: 10.11772/j.issn.1001-9081.2018122461
    Focusing on order allocation in the vehicle logistics service supply chain, a bi-level programming model considering multiple modes of transportation was proposed. Firstly, considering that different transportation modes affect the transportation cost and the customer's on-time delivery requirement, a bi-level programming model aiming at punctual delivery and minimization of purchasing cost was established. Secondly, a Heuristic Algorithm (HA) was designed to determine the tasks of each transportation mode. Thirdly, the Shuffled Frog Leaping Algorithm (SFLA) was used to solve the task allocation of each transportation mode among functional logistics service providers. Finally, the solution of the proposed model was compared with those of the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) on examples of different scales. The results show that, compared with the original purchasing cost of 4.38 million yuan, the proposed model gives a significantly optimized result of 4.21 million yuan, which shows that its order allocation scheme solves the order allocation problem of vehicle logistics more effectively. The experimental results also show that HA-SFLA obtains the significantly optimized result more quickly than GA, PSO and ACO, illustrating that HA-SFLA can solve the bi-level model considering transportation modes more efficiently. The bi-level order allocation model and algorithm considering transportation modes can reduce logistics costs while meeting customers' on-time delivery requirements, enabling logistics suppliers to take transportation modes into account in the order allocation phase and achieve more benefits.
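    A minimal sketch of the shuffled frog leaping step used in the search: frogs are sorted by fitness, dealt into memeplexes, and the worst frog in each memeplex leaps toward the local best. A continuous toy objective stands in for the paper's discrete order-allocation model, and all parameters are assumptions.

import numpy as np

rng = np.random.default_rng(5)
fitness = lambda x: np.sum(x ** 2)          # placeholder objective to minimize
frogs = rng.uniform(-5, 5, size=(30, 4))    # 30 frogs, 4 decision variables

def sfla_iteration(frogs, memeplexes=5, local_steps=5, step_max=2.0):
    order = np.argsort([fitness(f) for f in frogs])
    frogs = frogs[order]                                      # best frogs first
    for m in range(memeplexes):
        idx = np.arange(m, len(frogs), memeplexes)            # deal frogs into memeplexes
        for _ in range(local_steps):
            ranked = idx[np.argsort([fitness(frogs[i]) for i in idx])]
            best, worst = ranked[0], ranked[-1]
            step = np.clip(rng.random() * (frogs[best] - frogs[worst]), -step_max, step_max)
            candidate = frogs[worst] + step                   # leap toward the local best
            if fitness(candidate) < fitness(frogs[worst]):
                frogs[worst] = candidate
            else:
                frogs[worst] = rng.uniform(-5, 5, size=frogs.shape[1])  # random reset
    return frogs

for _ in range(50):
    frogs = sfla_iteration(frogs)
print("best cost:", min(fitness(f) for f in frogs))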
    XML-based component modeling and simulation of cyber physical system
    ZHANG Cheng, CHEN Fulong, LIU Chao, QI Xuemei
    2019, 39(6):  1842-1848.  DOI: 10.11772/j.issn.1001-9081.2018102207
    Cyber Physical System (CPS) involves the integration and collaboration of various computing models. Concerning the problems of inconsistent CPS design methods, poor plasticity, high complexity and the difficulty of collaborative modeling and verification, a structured and descriptive heterogeneous component model was proposed. Firstly, the model was constructed with a unified component modeling method to solve the problem that the model was not open. Then, eXtensible Markup Language (XML) was used to give a standard description of all kinds of components, resolving the inconsistency and non-extensibility of the description languages of different computing models. Finally, the collaborative simulation verification method of the multi-level open component model was used to carry out simulation verification and solve the non-collaboration problem of verification. A medical thermostat was modeled, described and simulated with the general component modeling method, the XML component standard description language and the verification tool platform XModel. The case of the medical thermostat shows that the proposed model-driven process of building reconfigurable heterogeneous components and confirming the correctness of their design supports the collaborative design of the cyber and physical parts and correction during construction, avoiding repeated modifications when problems are found during system implementation.
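    A hypothetical XML description of a single component, parsed with Python's standard library; the tag names and attributes are invented for illustration and do not follow the platform's actual schema.

import xml.etree.ElementTree as ET

component_xml = """
<component name="Thermostat" type="physical">
  <port name="temp_in"  direction="in"  datatype="float"/>
  <port name="heat_cmd" direction="out" datatype="bool"/>
  <behavior model="continuous">
    <equation>dT/dt = k * (heat_cmd - T)</equation>
  </behavior>
</component>
"""

root = ET.fromstring(component_xml)
print(root.get("name"), root.get("type"))
for port in root.findall("port"):
    print(" port:", port.get("name"), port.get("direction"), port.get("datatype"))
print(" behavior:", root.find("behavior/equation").text.strip())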
    Autonomous localization and obstacle detection method of robot based on vision
    DING Doujian, ZHAO Xiaolin, WANG Changgen, GAO Guangen, KOU Lei
    2019, 39(6):  1849-1854.  DOI: 10.11772/j.issn.1001-9081.2018102187
    Aiming at the obstacle detection problem caused by the loss of environmental information in sparse Simultaneous Localization And Mapping (SLAM) algorithms, an autonomous localization and obstacle detection method of robot based on vision was proposed. Firstly, the disparity map of the observed scene was obtained by a binocular camera. Secondly, under the framework of the Robot Operating System (ROS), a localization-and-mapping node and an obstacle detection node were run simultaneously. The localization-and-mapping node performed pose estimation and map building based on ORB-SLAM2. In the obstacle detection node, a depth threshold was introduced to binarize the disparity map, a contour extraction algorithm was used to obtain the contour information of obstacles and calculate their convex hull areas, and an area threshold was then introduced to eliminate false detection areas, so that the coordinates of obstacles could be obtained accurately in real time. Finally, the detected obstacle information was inserted into the sparse feature map of the environment. Experimental results show that this method can quickly detect obstacles in the environment while realizing autonomous localization of the robot, and the detection accuracy can ensure that the robot avoids obstacles smoothly.
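    A sketch of the detection step described above using OpenCV: binarize the disparity map with a depth threshold, extract external contours, and discard detections whose convex-hull area falls below an area threshold. The synthetic disparity map and both threshold values are placeholders; in the real system the map comes from the stereo pair and ORB-SLAM2 provides the pose.

import cv2
import numpy as np

disparity = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(disparity, (60, 80), (120, 180), 200, -1)   # synthetic near obstacle
cv2.circle(disparity, (250, 60), 4, 180, -1)              # small blob that should be rejected

depth_threshold, area_threshold = 150, 300
_, mask = cv2.threshold(disparity, depth_threshold, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

obstacles = []
for c in contours:
    hull = cv2.convexHull(c)
    if cv2.contourArea(hull) >= area_threshold:            # area threshold removes false detections
        x, y, w, h = cv2.boundingRect(hull)
        obstacles.append((x, y, w, h))
print("obstacles:", obstacles)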
    Matrix LED high-beam intelligent assistant control system
    TAN Xitang, LIU Sha, ZHU Qinyue, FAN Qingwen, WANG Chen
    2019, 39(6):  1855-1862.  DOI: 10.11772/j.issn.1001-9081.2018102098
    Focusing on the problem that existing car high-beams require the driver to switch the headlamp manually based on his own judgment of the road conditions, which may result in traffic accidents due to illegal use of the high-beam, a matrix LED high-beam intelligent assistant control system that automatically adjusts the radiation pattern of the high-beam according to road conditions and environment was designed and implemented. Firstly, according to the driving characteristics of vehicles and the related traffic regulations, an intelligent control strategy for the matrix LED high-beam assistant system was proposed for different road conditions. Then the hardware and software of the system were designed and implemented. In the hardware part, the device selection and circuit design of modules such as the main controller, LED power driver and matrix switch controller were given; the software part consisted of function modules such as driving circuit control, matrix switch control and the intelligent control strategy. Finally, a complete experimental system was built under laboratory conditions for functional testing. The test results indicate that the proposed system gives accurate results and is stable, reliable, good in real-time performance and easy to realize, achieving the expected goal.
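    A hypothetical sketch of the matrix-switch decision logic: LED columns whose angular sector contains a detected oncoming vehicle are switched off while the others stay lit. The sector layout, field of view and detection input are invented for illustration and are not the paper's control strategy.

def led_pattern(vehicle_angles_deg, columns=8, fov_deg=40.0):
    """Return on/off states for each LED column of the high-beam matrix."""
    width = fov_deg / columns
    pattern = [1] * columns                       # 1 = column on
    for a in vehicle_angles_deg:
        col = int((a + fov_deg / 2) // width)     # map angle (-20..20 deg) to a column index
        if 0 <= col < columns:
            pattern[col] = 0                      # dim the column that would glare the other driver
    return pattern

print(led_pattern([-12.0, 3.5]))   # [1, 0, 1, 1, 0, 1, 1, 1]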
    Test data compatible compression method based on tri-state signal
    CHEN Tian, ZUO Yongsheng, AN Xin, REN Fuji
    2019, 39(6):  1863-1868.  DOI: 10.11772/j.issn.1001-9081.2018112334
    Focusing on the increasing amount of test data in the development of Very Large Scale Integration (VLSI), a test data compression method based on tri-state signals was proposed. Firstly, the test set was optimized and pre-processed by performing partial input reduction and test vector reordering, improving the compatibility among test patterns while increasing the proportion of don't-care bits X in the test set. Then, coding compression with tri-state signals was applied to the pre-processed test set: the test set was divided into multiple scan slices using the characteristics of the tri-state signal, and the tri-state signal was used to perform compatible coding compression on the scan slices. With various test rules considered, the compression ratio of the test set was improved. The experimental results show that, compared with similar compression methods, the proposed method achieves a higher compression ratio, with an average test compression ratio of 76.17%, without significant increase in test power or area overhead.
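    A toy sketch of the compatibility notion the pre-processing exploits: two test cubes over {0, 1, X} are compatible if they never specify conflicting care bits, and merging keeps the more specific value. The actual tri-state scan-slice encoding of the paper is not reproduced here.

def compatible(a, b):
    """Two cubes are compatible if no position has conflicting care bits."""
    return all(x == y or x == 'X' or y == 'X' for x, y in zip(a, b))

def merge(a, b):
    """Merge compatible cubes, keeping the specified bit wherever one exists."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

cubes = ["1X0X", "1X01", "0XX1", "XX11"]
merged = []
for cube in cubes:                       # greedy first-fit merging of compatible cubes
    for i, m in enumerate(merged):
        if compatible(m, cube):
            merged[i] = merge(m, cube)
            break
    else:
        merged.append(cube)
print(merged)                            # ['1X01', '0X11']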