Table of Contents

    10 July 2019, Volume 39 Issue 7
    Artificial intelligence
    Review of clustering algorithms
    ZHANG Yonglai, ZHOU Yaojian
    2019, 39(7):  1869-1882.  DOI: 10.11772/j.issn.1001-9081.2019010174

    Clustering, as an unsupervised learning method, is very important in the age of big data, and considerable progress has recently been made in the analysis of clustering algorithms. Firstly, the whole clustering process, similarity measurement, a new classification of clustering algorithms and the evaluation of clustering results were summarized. Clustering algorithms were divided into two categories: big data clustering and small data clustering, with a particularly systematic analysis and summary of big data clustering. Moreover, the research progress and applications of various clustering algorithms were summarized and analyzed, and the development trend of clustering algorithms was discussed in combination with current research topics.

    Integration of cost-sensitive algorithms based on average distance of K-nearest neighbor samples
    YANG Hao, WANG Yu, ZHANG Zhongyuan
    2019, 39(7):  1883-1887.  DOI: 10.11772/j.issn.1001-9081.2018122483

    To solve the problems that unbalanced data sets are difficult to classify and that general cost-sensitive learning algorithms cannot be applied to multi-class settings, an integration method for cost-sensitive algorithms based on the average distance of K-Nearest Neighbor (KNN) samples was proposed. Firstly, following the idea of maximizing the minimum margin, a resampling method for reducing the density of decision boundary samples was proposed. Then, the average distance of each class of samples was used as the basis for judging classification results, and a learning algorithm based on Bayesian decision theory was proposed, which made the improved algorithm cost-sensitive. Finally, the improved cost-sensitive algorithms were integrated according to the K value, and the weight of each base learner was adjusted according to the minimum-cost principle, yielding a cost-sensitive AdaBoost algorithm that minimizes the total misclassification cost. The experimental results show that compared with the traditional KNN algorithm, the improved algorithm reduces the average misclassification cost by 31.4 percentage points and has better cost sensitivity.
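    As a rough illustration of the decision step described above, the sketch below scores each class by the average distance to its members among the K nearest neighbours and then makes a minimum-expected-cost Bayesian decision; it is our simplified reading of the abstract, not the paper's exact algorithm, and all names and the normalization are assumptions.

```python
import numpy as np

def knn_min_cost(X_train, y_train, x, k, cost):
    """Toy sketch: score each class by the average distance from x to its
    members among the k nearest neighbours, turn the scores into a crude
    posterior, and pick the class with minimum expected misclassification
    cost.  cost[i, j] is the cost of predicting class j when the true
    class is i."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    classes = np.unique(y_train)
    score = np.zeros(len(classes))
    for i, c in enumerate(classes):
        dc = d[nn][y_train[nn] == c]
        if dc.size:
            score[i] = 1.0 / (dc.mean() + 1e-12)  # closer class, higher score
    p = score / score.sum()
    risk = cost.T @ p          # expected cost of predicting each class
    return classes[int(np.argmin(risk))]

X = np.array([[0.0, 0], [0.1, 0], [1.0, 1], [1.1, 1]])
y = np.array([0, 0, 1, 1])
C = np.array([[0.0, 1.0], [5.0, 0.0]])  # missing class 1 is 5x costlier
print(knn_min_cost(X, y, np.array([0.5, 0.5]), k=4, cost=C))  # -> 1
```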

    Deep learning neural network model for consumer preference prediction
    KIM Chungsong, LI Dong
    2019, 39(7):  1888-1893.  DOI: 10.11772/j.issn.1001-9081.2019010061

    Neuromarketing, in which consumer responses to advertisements and products are analyzed through research on human brain activity, is receiving new attention. Aiming at neuromarketing based on ElectroEncephaloGraphy (EEG), a consumer preference prediction method based on deep learning neural networks was proposed. Firstly, in order to extract features of consumers' EEG, EEG topographic videos of five different frequency bands were obtained from multi-channel EEG signals by using the Short-Time Fourier Transform (STFT) and biharmonic spline interpolation. Then, a prediction model combining five three-Dimensional Convolutional Neural Networks (3D CNNs) and multi-layer Long Short-Term Memory (LSTM) neural networks was proposed for predicting consumer preference from the EEG topographic videos. Compared with the Convolutional Neural Network (CNN) model and the LSTM neural network model, the average accuracy of the consumer-dependent model was increased by 15.05 percentage points and 19.44 percentage points respectively, and the average accuracy of the consumer-independent model was increased by 16.34 percentage points and 17.88 percentage points respectively. Theoretical analysis and experimental results show that the proposed consumer preference prediction system can support effective marketing strategy development and marketing management at low cost.
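    A minimal PyTorch sketch of one frequency band's branch is given below: a small 3D CNN over a topographic EEG video followed by an LSTM over time. All layer sizes and depths are placeholder assumptions of ours, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class Band3DCNNLSTM(nn.Module):
    """One band's branch: 3D CNN over a topographic video, then an LSTM
    over the per-frame features, then a preference classifier."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # (B,1,T,H,W)
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                     # pool space, keep time
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),          # keep T, shrink to 4x4
        )
        self.lstm = nn.LSTM(16 * 4 * 4, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 1, T, H, W)
        f = self.cnn(x)                         # (batch, 16, T, 4, 4)
        f = f.permute(0, 2, 1, 3, 4).flatten(2) # (batch, T, 256)
        out, _ = self.lstm(f)
        return self.fc(out[:, -1])              # classify from last step

x = torch.randn(2, 1, 10, 32, 32)   # batch of two 10-frame band videos
print(Band3DCNNLSTM()(x).shape)     # torch.Size([2, 2])
```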

    Vehicle behavior dynamic recognition network based on long short-term memory
    WEI Xing, LE Yue, HAN Jianghong, LU Yang
    2019, 39(7):  1894-1898.  DOI: 10.11772/j.issn.1001-9081.2018122448

    In advanced driver assistance devices, machine vision technology is used to process videos of the vehicles ahead in real time, so as to dynamically recognize and predict vehicle postures and behaviors. Concerning the low precision and large delay of this kind of recognition algorithm, a deep learning algorithm for vehicle behavior dynamic recognition based on Long Short-Term Memory (LSTM) was proposed. Firstly, the key frames in a vehicle behavior video were extracted. Secondly, a dual convolutional network was introduced to analyze the feature information of the key frames in parallel, and an LSTM network was then used to model the extracted feature information as a sequence. Finally, the output prediction score was used to determine the behavior type of the vehicle. The experimental results show that the proposed algorithm has an accuracy of 95.6%, and the recognition time of a single video is only 1.72 s. On a self-built dataset, the improved dual convolutional network algorithm improves the accuracy by 8.02% compared with an ordinary convolutional network and by 6.36% compared with the traditional vehicle behavior recognition algorithm.

    Bee colony double inhibition labor division algorithm and its application in traffic signal timing
    HU Liang, XIAO Renbin, LI Hao
    2019, 39(7):  1899-1904.  DOI: 10.11772/j.issn.1001-9081.2018112337

    Swarm intelligence labor division refers to algorithms and distributed problem-solving methods inspired by the collective behaviors of social insects and other animal groups, and it can be widely used in real-life task assignment. Focusing on task assignment problems such as traffic signal timing, the labor division theory describing the interaction mode between bee individuals was introduced, and a Bee colony Double Inhibition Labor Division Algorithm (BDILDA) based on swarm intelligence was proposed, in which dynamic adjustment of the colony's labor division is achieved through the interaction between internal and external inhibitors of each individual. In order to verify the validity of BDILDA, the traffic signal timing problem was selected for simulation experiments. BDILDA was used to solve an actual traffic signal timing case, and the result was compared with the results of the Webster algorithm, the Multi-Colony Ant Algorithm (MCAA), the Transfer Bees Optimizer (TBO) algorithm and the Backward FireWorks Algorithm (BFWA). The experimental results show that the average delay time of BDILDA is reduced by 14.3-20.1 percentage points, the average number of stops is reduced by 3.7-4.5 percentage points, and the maximum traffic capacity is increased by 5.2-23.6 percentage points. The results indicate that the proposed algorithm is suitable for solving dynamic assignment problems in uncertain environments.

    Exoskeleton robot gait detection based on improved whale optimization algorithm
    HE Hailin, ZHENG Jianbin, YU Fangli, YU Lie, ZHAN Enqi
    2019, 39(7):  1905-1911.  DOI: 10.11772/j.issn.1001-9081.2018122474

    In order to solve problems in traditional gait detection algorithms such as over-simplified information, low accuracy and a tendency to fall into local optima, a gait detection algorithm for exoskeleton robots called Support Vector Machine optimized by Improved Whale Optimization Algorithm (IWOA-SVM) was proposed. The selection, crossover and mutation operations of the Genetic Algorithm (GA) were introduced into the Whale Optimization Algorithm (WOA) to optimize the penalty factor and kernel parameters of the Support Vector Machine (SVM), and classification models were then established by the SVM with optimized parameters, expanding the search scope and reducing the probability of falling into local optima. Firstly, gait data was collected by using hybrid sensing technology: combining plantar pressure sensors with knee and hip joint angle sensors, the motion data of the exoskeleton robot was acquired as the input of the gait detection system. Then, the gait phases were divided and tagged according to a threshold method. Finally, the plantar pressure signal was integrated with the hip and knee angle signals as input, and gait detection was realized by the IWOA-SVM algorithm. Simulation experiments on six standard test functions demonstrate that the Improved Whale Optimization Algorithm (IWOA) is superior to GA, the Particle Swarm Optimization (PSO) algorithm and WOA in robustness, optimization accuracy and convergence speed. Analysis of the gait detection results of different wearers shows an accuracy of up to 98.8%, which verifies the feasibility and practicability of the proposed algorithm for a new generation of exoskeleton robots. Compared with Support Vector Machine optimized by Genetic Algorithm (GA-SVM), Support Vector Machine optimized by Particle Swarm Optimization (PSO-SVM) and Support Vector Machine optimized by Whale Optimization Algorithm (WOA-SVM), the proposed algorithm improves the gait detection accuracy by 5.33%, 2.70% and 1.44% respectively. The experimental results show that the proposed algorithm can effectively detect the gait of the exoskeleton robot and realize its precise control and stable walking.
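    The sketch below shows the general pattern of tuning SVM hyperparameters with a whale-style search. It implements plain WOA only (the paper's IWOA additionally applies GA selection, crossover and mutation), and the parameter ranges and fitness definition are our assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris

def fitness(pos, X, y):
    """Cross-validated accuracy of an SVM whose parameters come from a
    search position (log2 C, log2 gamma)."""
    clf = SVC(C=2.0 ** pos[0], gamma=2.0 ** pos[1])
    return cross_val_score(clf, X, y, cv=3).mean()

def woa_svm(X, y, n_whales=10, n_iter=20, lo=-5.0, hi=5.0, seed=0):
    """Greatly simplified Whale Optimization Algorithm over SVM params."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(n_whales, 2))
    fit = np.array([fitness(p, X, y) for p in pop])
    best = pop[fit.argmax()].copy()
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter          # control parameter: 2 -> 0
        for i in range(n_whales):
            r, l = rng.random(2), rng.uniform(-1, 1)
            A, C = 2 * a * r[0] - a, 2 * r[1]
            if rng.random() < 0.5:           # shrinking encircling move
                pop[i] = best - A * np.abs(C * best - pop[i])
            else:                            # spiral move around the best
                D = np.abs(best - pop[i])
                pop[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            pop[i] = np.clip(pop[i], lo, hi)
        fit = np.array([fitness(p, X, y) for p in pop])
        if fit.max() > fitness(best, X, y):
            best = pop[fit.argmax()].copy()
    return best

X, y = load_iris(return_X_y=True)
print(woa_svm(X, y))   # best (log2 C, log2 gamma) found
```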

    Greedy core acceleration dynamic programming algorithm for solving discounted {0-1} knapsack problem
    SHI Wenxu, YANG Yang, BAO Shengli
    2019, 39(7):  1912-1917.  DOI: 10.11772/j.issn.1001-9081.2018112393

    As the existing dynamic programming algorithm cannot quickly solve the Discounted {0-1} Knapsack Problem (D{0-1}KP), a Greedy Core Acceleration Dynamic Programming (GCADP) algorithm was proposed, based on the idea of dynamic programming combined with the New Greedy Repair Optimization Algorithm (NGROA) and the core algorithm, which accelerates solving by reducing the problem scale. Firstly, the incomplete item was obtained from the greedy solution of the problem by NGROA. Then, the radius and range of the fuzzy core interval were found by calculation. Finally, the Basic Dynamic Programming (BDP) algorithm was used to solve for the items in the fuzzy core interval and the items in the same item set. The experimental results show that the GCADP algorithm is suitable for solving D{0-1}KP, and its average solution speed improves by 76.24% and 75.07% compared with that of the BDP algorithm and FirEGA (First Elitist reservation strategy Genetic Algorithm) respectively.
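    For concreteness, a minimal Basic Dynamic Programming (BDP) routine for D{0-1}KP is sketched below: each group offers item A, item B or the discounted bundle A+B, of which at most one may be taken. GCADP runs such a DP only inside the greedily located core interval; the full-capacity version here is just the baseline it accelerates.

```python
def bdp_dkp(groups, capacity):
    """Basic DP for the Discounted {0-1} Knapsack Problem.  Each group is
    a list of three (weight, profit) options - item A, item B, and the
    discounted bundle A+B - and at most one option per group may be
    chosen.  Runs in O(n * capacity)."""
    dp = [0] * (capacity + 1)
    for options in groups:
        new = dp[:]                    # taking nothing from this group
        for w, p in options:           # each option evaluated against the
            for c in range(capacity, w - 1, -1):   # pre-group table dp
                if dp[c - w] + p > new[c]:
                    new[c] = dp[c - w] + p
        dp = new
    return dp[capacity]

# One group where the bundle is cheaper than buying A and B separately.
print(bdp_dkp([[(3, 4), (4, 5), (6, 9)]], 7))   # -> 9 (take the bundle)
```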

    Joint entity and relation extraction model based on reinforcement learning
    CHEN Jiafeng, TENG Chong
    2019, 39(7):  1918-1924.  DOI: 10.11772/j.issn.1001-9081.2019010182

    Existing entity and relation extraction methods that rely on distant supervision suffer from noisy labels. To reduce the impact of noisy data, a model for joint entity and relation extraction from noisy data based on reinforcement learning was proposed. The model consists of two modules: a sentence selector module and a sequence labeling module. Firstly, high-quality sentences without labeling noise were selected by the sentence selector module and input into the sequence labeling module. Secondly, predictions were made by the sequence labeling module, which provided rewards to the sentence selector module to help it select high-quality sentences. Finally, the two modules were trained jointly to optimize the sentence selection and sequence labeling processes. The experimental results show that the F1 value of the proposed model is 47.3% on joint entity and relation extraction, which is 1% higher than that of joint extraction models represented by CoType and 14% higher than that of serial models represented by LINE (Large-scale Information Network Embedding). The results show that combining joint entity and relation extraction with reinforcement learning can effectively improve the F1 value of the sequence labeling model, with the sentence selector effectively handling the noise in the data.

    Deceptive review detection via hierarchical neural network model with attention mechanism
    YAN Mengxiang, JI Donghong, REN Yafeng
    2019, 39(7):  1925-1930.  DOI: 10.11772/j.issn.1001-9081.2018112340

    Concerning the problem that traditional discrete models fail to capture the global semantic information of the whole comment text in deceptive review detection, a hierarchical neural network model with attention mechanism was proposed. Firstly, different neural network models were adopted to model the structure of the text, and the model that obtained the best semantic representation was identified. Then, the review was modeled by two attention mechanisms, based on the user view and the product view respectively: the user view focuses on the user's preferences in the comment text, and the product view focuses on the product features in the comment text. Finally, the two representations learned from the user and product views were combined as the final semantic representation for deceptive review detection. Experiments were carried out on the Yelp dataset with accuracy as the evaluation indicator. The experimental results show that the proposed hierarchical neural network model with attention mechanism performs the best, with an accuracy 1 to 4 percentage points higher than those of traditional discrete methods and existing neural benchmark models.

    Sentiment analysis method combining sentiment and semantic information
    MENG Shilin, ZHAO Yunlong, GUAN Donghai, ZHAI Xiangping
    2019, 39(7):  1931-1935.  DOI: 10.11772/j.issn.1001-9081.2018112375

    When word embedding methods are used to convert words into vectors, two antonyms may be mapped to similar vectors. If they are sentiment words, this leads to the loss of sentiment information, which is unreasonable in sentiment analysis tasks. To solve this problem, a method of adding sentiment vectors to word embeddings to recover sentiment information was proposed. Firstly, a sentiment vector was constructed by using a sentiment lexicon and combined with the word vector obtained by the word embedding method. Then, a Bidirectional Long Short-Term Memory (BiLSTM) network was used to extract the characteristics of the text. Finally, the sentiment of the text was classified. Experiments comparing the proposed method with the method without the fused sentiment vector were carried out on four datasets. The experimental results show that the classification accuracy and F1 score of the proposed method are higher than those of the method without fusion, which indicates that adding sentiment vectors is beneficial to the performance of sentiment analysis.
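    A minimal sketch of the idea of augmenting word vectors with sentiment information follows; the one-dimensional polarity lexicon is a toy stand-in, and the paper's sentiment vector construction and lexicon may differ.

```python
import numpy as np

# Toy lexicon; a real system would use a full sentiment lexicon
# (the exact resource used in the paper is not specified here).
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "terrible": -1.0}

def with_sentiment(word, word_vec):
    """Append a sentiment polarity to a pre-trained word embedding so
    that antonyms like 'good'/'bad' no longer share nearly identical
    representations; the combined vector is what the BiLSTM would see."""
    polarity = LEXICON.get(word, 0.0)
    return np.concatenate([word_vec, [polarity]])

vec = np.random.randn(50)               # stand-in for a word2vec vector
print(with_sentiment("good", vec)[-1])  # ->  1.0
print(with_sentiment("bad", vec)[-1])   # -> -1.0
```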

    Text sentiment classification based on 1D convolutional hybrid neural network
    CHEN Zhenghao, FENG Ao, HE Jia
    2019, 39(7):  1936-1941.  DOI: 10.11772/j.issn.1001-9081.2018122477

    Traditional 2D convolutional models suffer from loss of semantic information and lack of sequential feature expression ability in sentiment classification. Aiming at these problems, a hybrid model based on a 1D Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) was proposed. Firstly, 2D convolution was replaced by 1D convolution to retain richer local semantic features. Then, a pooling layer was used to reduce the data dimension, and the output was fed into the recurrent neural network layer to extract sequential information between the features. Finally, a softmax layer was used to realize the sentiment classification. The experimental results on multiple standard English datasets show that the proposed model improves classification accuracy by 1-3 percentage points compared with traditional statistical methods and end-to-end deep learning methods. Analysis of each network component verifies the value of introducing 1D convolution and the recurrent neural network for better classification accuracy.

    Text sentiment classification algorithm based on feature selection and deep belief network
    XIANG Jinyong, YANG Wenzhong, SILAMU·Wushouer
    2019, 39(7):  1942-1947.  DOI: 10.11772/j.issn.1001-9081.2018112363

    Because of the complexity of human language, text sentiment classification algorithms mostly suffer from an excessively large vocabulary caused by redundancy. A Deep Belief Network (DBN) can solve this problem by learning useful information from the input corpus through its hidden layers, but DBN is a time-consuming and computationally expensive algorithm for large applications. Aiming at this problem, a semi-supervised sentiment classification algorithm, text sentiment classification based on Feature Selection and Deep Belief Network (FSDBN), was proposed. Firstly, feature selection methods including Document Frequency (DF), Information Gain (IG), CHI-square statistics (CHI) and Mutual Information (MI) were used to filter out irrelevant features and reduce the complexity of the vocabulary. Then, the feature selection results were input into the DBN to make its learning phase more efficient. The proposed algorithm was applied to Chinese and Uygur language data. The experimental results on a hotel review dataset show that the accuracy of FSDBN is 1.6% higher than that of DBN and the training time of FSDBN is half that of DBN.
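    As an illustration of the filtering stage, the snippet below applies one of the four listed filters (CHI-square) with scikit-learn to shrink a bag-of-words vocabulary before it would be fed to the DBN; the toy data and the choice of k are our assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["great hotel clean room", "terrible hotel dirty room",
        "friendly staff great view", "rude staff awful food"]
labels = [1, 0, 1, 0]

X = CountVectorizer().fit_transform(docs)
# CHI-square scoring keeps only the k features most associated with the
# class labels, shrinking the vocabulary the deep model must handle.
X_reduced = SelectKBest(chi2, k=4).fit_transform(X, labels)
print(X_reduced.shape)   # (4 documents, 4 surviving features)
```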

    Brain function network feature selection and classification based on multi-level template
    WU Hao, WANG Xincan, LI Xinyun, LIU Zhifen, CHEN Junjie, GUO Hao
    2019, 39(7):  1948-1953.  DOI: 10.11772/j.issn.1001-9081.2018112421

    The feature representation extracted from a functional connection network based on a single brain map template is not sufficient to reveal the complex topological differences between the patient group and the Normal Control (NC) group, while traditional multi-template definitions of functional brain networks mostly use independent templates, ignoring the potential topological association information among the functional brain networks built with different templates. Aiming at these problems, a multi-level brain map template and a Relationship Induced Sparse (RIS) feature selection model were proposed. Firstly, an associated multi-level brain map template was defined, and the potential relationships between templates and the network structure differences between groups were mined. Then, the RIS feature selection model was used to optimize the parameters and extract the differences between groups. Finally, the Support Vector Machine (SVM) method was used to construct a classification model, which was applied to the diagnosis of patients with depression. The experimental results on the clinical depression diagnosis database of the First Hospital of Shanxi University show that the functional brain network based on the multi-level template achieves 91.7% classification accuracy with the RIS feature selection method, which is 3 percentage points higher than that of the traditional multi-template method.

    Cyber security
    Cloud outsourcing data secure auditing protocol throughout whole lifecycle
    LIU Yudong, WANG Xu'an, TU Guangsheng, WANG Han
    2019, 39(7):  1954-1958.  DOI: 10.11772/j.issn.1001-9081.2018122438

    The generation of massive data imposes a huge storage and computational burden on users, and the emergence of cloud servers solves this problem well. However, while data outsourcing brings convenience to users, it also raises security problems. In order to secure data during the outsourcing process, a simpler and more efficient cloud outsourcing data auditing protocol covering the whole data lifecycle was designed and implemented by combining the classical distributed string equality checking protocol with the Rank-based Merkle Hash Tree (RMHT) algorithm. The protocol not only protects the integrity of outsourced data, allowing users to audit it periodically, but also guarantees the secure transfer of cloud data. Besides, copies of the transferred data cannot be retained by malicious cloud servers, which protects users' privacy well. The analyses of security and efficiency show that the proposed protocol is sufficiently secure and comparatively efficient, and that the security of outsourced data is well protected throughout its whole lifecycle.
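    For orientation, a plain Merkle Hash Tree root computation over outsourced data blocks is sketched below. The paper's Rank-based Merkle Hash Tree (RMHT) additionally stores rank information in each node to support positional verification; that extension is omitted here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash of a binary Merkle tree over the data blocks: leaves are
    block hashes, and each internal node hashes its two children.  Any
    change to any block changes the root, which is what the auditor
    checks."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"block-1", b"block-2", b"block-3"])
print(root.hex())   # compared against the stored root during an audit
```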

    Research on vulnerability mining technique for smart contracts
    FU Menglin, WU Lifa, HONG Zheng, FENG Wenbo
    2019, 39(7):  1959-1966.  DOI: 10.11772/j.issn.1001-9081.2019010082

    The second generation of blockchain, represented by smart contracts, has seen explosive growth of its platforms and applications in recent years. However, frequent smart contract vulnerability incidents pose a serious risk to blockchain ecosystem security. Since code auditing based on expert experience is inefficient for smart contract vulnerability mining, the significance of developing universal automated tools for mining smart contract vulnerabilities was highlighted. Firstly, the security threats faced by smart contracts were investigated and analyzed, and the top 10 vulnerabilities, including code reentrancy, access control and integer overflow, as well as the corresponding attack modes, were summarized. Secondly, mainstream detection methods for smart contract vulnerabilities and related works were discussed. Thirdly, the performance of three existing tools based on symbolic execution was verified through experiments: for a single type of vulnerability, the highest false negative rate was 0.48 and the highest false positive rate was 0.38. The experimental results indicate that existing tools support only an incomplete set of vulnerability types, produce many false negatives and false positives, and depend on manual review. Finally, future research directions were forecast in view of these limitations, and a fuzz testing framework based on symbolic execution was proposed. The framework can alleviate the problems of insufficient code coverage in fuzz testing and path explosion in symbolic execution, thus improving vulnerability mining efficiency for large and medium-sized smart contracts.

    Network abnormal behavior detection model based on adversarially learned inference
    YANG Hongyu, LI Bochao
    2019, 39(7):  1967-1972.  DOI: 10.11772/j.issn.1001-9081.2018112302

    In order to solve the problem of low recall caused by data imbalance in network abnormal behavior detection, a network abnormal behavior detection model based on Adversarially Learned Inference (ALI) was proposed. Firstly, the feature items represented by discrete data in the dataset were removed, and the processed dataset was normalized to improve the convergence speed and accuracy of the model. Then, an improved ALI model was proposed and trained by the ALI training algorithm on a dataset consisting only of positive samples, and the trained model was used to process the detection data to generate a processed detection dataset. Finally, the distance between the detection data and the processed detection data was calculated by an abnormality detection function to determine whether the data was abnormal. The experimental results show that compared with the One-Class Support Vector Machine (OC-SVM), the Deep Structured Energy Based Model (DSEBM), the Deep Autoencoding Gaussian Mixture Model (DAGMM) and the anomaly detection model with Generative Adversarial Network (AnoGAN), the accuracy of the proposed model is improved by 5.8-17.4 percentage points, the recall is increased by 1.4-31.4 percentage points, and the F1 value is increased by 14.18-19.7 percentage points. It can be seen that the ALI-based network abnormal behavior detection model achieves high recall and detection accuracy when the data is unbalanced.

    Hidden semi-Markov model-based approach to detect DDoS attacks in application layer of SWIM system
    MA Lan, CUI Bohua, LIU Xuan, YUE Meng, WU Zhijun
    2019, 39(7):  1973-1978.  DOI: 10.11772/j.issn.1001-9081.2019010017

    Aiming at the problem that the System Wide Information Management (SWIM) system suffers from Distributed Denial of Service (DDoS) attacks in the application layer, a detection approach for SWIM application-layer DDoS attacks based on the Hidden Semi-Markov Model (HSMM) was proposed. Firstly, an improved forward-backward algorithm was adopted, and an HSMM-based dynamic anomaly detection model was established to dynamically track the browsing behaviors of normal SWIM users. Then, the normal detection interval was obtained by learning and predicting normal SWIM user behaviors. Finally, the access packet size and request time interval were extracted as features for modeling, and the model was trained to realize anomaly detection. The experimental results show that the detection rate of the proposed approach is 99.95% and 91.89% under attack scenario 1 and attack scenario 2 respectively, which is 0.9% higher than that of the HSMM constructed with the fast forward-backward algorithm. It can be seen that the proposed approach can effectively detect application-layer DDoS attacks on the SWIM system.

    Efficient semi-supervised multi-level intrusion detection algorithm
    CAO Weidong, XU Zhixiang
    2019, 39(7):  1979-1984.  DOI: 10.11772/j.issn.1001-9081.2019010018

    An efficient semi-supervised multi-level intrusion detection algorithm was proposed to solve the problems of present intrusion detection algorithms: supervised learning-based algorithms need large amounts of tagged data that are difficult to collect, unsupervised learning-based algorithms have low accuracy, and both types have low detection rates on R2L (Remote to Local) and U2R (User to Root) attacks. Firstly, based on the Kd-tree (K-dimension tree) index structure, weighted density was used to select the initial clustering centers of the K-means algorithm in high-density sample regions. Secondly, the clustered data were divided into three types of clusters. Then, a weighted voting rule was utilized to expand the labeled dataset by means of Tri-training from the unlabeled and mixed clusters. Finally, a hierarchical classification model with a binary tree structure was designed, and experimental verification was performed on the NSL-KDD dataset. The results show that the semi-supervised multi-level intrusion detection model can effectively improve the detection rates of R2L and U2R attacks by using a small amount of tagged data: the detection rates of R2L and U2R attacks reach 49.38% and 81.14% respectively, thus reducing the system's false negative rate.

    Privacy protection based on local differential privacy for numerical sensitive data of wearable devices
    MA Fangfang, LIU Shubo, XIONG Xingxing, NIU Xiaoguang
    2019, 39(7):  1985-1990.  DOI: 10.11772/j.issn.1001-9081.2018122466

    Focusing on the issue that collecting multi-dimensional numerical sensitive data directly from wearable devices may leak users' private information when the data server is untrusted, a personalized local privacy protection scheme for the numerical sensitive data of wearable devices was proposed by introducing a local differential privacy model. Firstly, by setting a privacy budget threshold interval, each user's privacy budget within the interval was set to meet individual privacy needs while satisfying the definition of personalized local differential privacy. Then, a security domain was used to normalize the sensitive data. Finally, the Bernoulli distribution was used to perturb the multi-dimensional numerical data in groups, and the attribute security domain was used to restore the perturbed results. Theoretical analysis shows that the proposed algorithm satisfies personalized local differential privacy. The experimental results demonstrate that the proposed algorithm has a lower Max Relative Error (MRE) than the Harmony algorithm, thus effectively improving the utility of aggregated data collected from wearable devices under an untrusted data server while protecting users' privacy.
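    The perturbation step can be illustrated with the classic one-dimensional mechanism of Duchi et al., which the Harmony baseline builds on; whether the paper uses exactly this form is our assumption. Each individual report is randomized, yet the population mean remains recoverable:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(x, eps):
    """Duchi-style eps-LDP perturbation of a value x normalized to
    [-1, 1]: report +c or -c with a Bernoulli probability chosen so that
    the output is an unbiased estimate of x."""
    c = (np.exp(eps) + 1) / (np.exp(eps) - 1)
    p = (x * (np.exp(eps) - 1) + np.exp(eps) + 1) / (2 * (np.exp(eps) + 1))
    return c if rng.random() < p else -c

# Aggregating many perturbed reports recovers the true mean.
data = rng.uniform(-1, 1, 10000)
est = np.mean([perturb(x, eps=1.0) for x in data])
print(abs(est - data.mean()))   # small estimation error
```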

    Threat modeling and application of server management control system
    SU Zhenyu, SONG Guixiang, LIU Yanming, ZHAO Yuan
    2019, 39(7):  1991-1996.  DOI: 10.11772/j.issn.1001-9081.2018122475

    The Baseboard Management Controller (BMC), a large embedded system, is responsible for the control and management of a server. Concerning the vulnerabilities and security threats of BMC, a threat model of the server management control system was proposed. Firstly, in order to discover threats, a Data Flow Diagram (DFD) was established according to the hardware and software architecture of BMC. Secondly, a comprehensive threat list was obtained by using the Spoofing-Tampering-Repudiation-Information disclosure-Denial of service-Elevation of privilege (STRIDE) method. Thirdly, a threat tree model was constructed to describe the threats in detail, from which the specific attack modes were obtained and the threats were quantified. Finally, response strategies were formulated for the threats classified by STRIDE, and specific protection methods for BMC were obtained that meet security objectives such as confidentiality, integrity and availability. The analysis results show that the proposed model can fully identify the security threats of BMC, and the model-based protection methods have been used as security baselines in the design process, improving the overall security of the server.

    Dual public-key cryptographic scheme based on improved Niederreiter cryptosystem
    WANG Zhong, HAN Yiliang
    2019, 39(7):  1997-2000.  DOI: 10.11772/j.issn.1001-9081.2018122429

    Code-based cryptosystems can effectively resist quantum computing attacks, offer good operability and data compression capability, and are among the reliable candidates for post-quantum cryptographic schemes. Aiming at the security and confidentiality of computer data in the quantum era, an improved Niederreiter cryptographic scheme in code-based cryptography was studied in depth, and a cryptographic scheme combining it with a dual public-key encryption method was proposed. The security of the proposed scheme is improved compared with the improved Niederreiter scheme and the Niederreiter dual public-key encryption scheme based on Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes. The amount of keys in the scheme is at least 32% lower than that of the traditional Niederreiter scheme, and is also effectively reduced compared with that of the Niederreiter dual public-key encryption scheme based on QC-LDPC codes, which shows strong reliability for ensuring computer data security in the quantum age.

    Advanced computing
    Gaming@Edge: low latency cloud gaming system based on edge nodes
    LIN Li, XIONG Jinbo, XIAO Ruliang, LIN Mingwei, CHEN Xiuhua
    2019, 39(7):  2001-2007.  DOI: 10.11772/j.issn.1001-9081.2019010163

    As a "killer" application of cloud computing, cloud gaming is leading a revolution in the way games are played. However, the high latency between the cloud and end devices hurts user experience. Aiming at this problem, a low-latency cloud gaming system deployed on edge nodes, called Gaming@Edge, was proposed based on the edge computing concept. To reduce the overhead of edge nodes and improve concurrency, a cloud gaming running mechanism based on compressed graphics streaming, named GSGOD (Graphics Stream based Game-on-Demand), was implemented in the Gaming@Edge system. In GSGOD, the logic computation and rendering of a running game are separated, building a computing fusion of edge nodes and end devices. Moreover, network data transmission and latency are optimized through mechanisms such as data caching, instruction pipelining and lazy object updating. The experimental results show that Gaming@Edge reduces the average network latency by 74% and increases the concurrency of game instances by 4.3 times compared with traditional cloud gaming systems.

    GPU-based morphological reconstruction system
    HE Xi, WU Yantao, DI Zhenwei, CHEN Jia
    2019, 39(7):  2008-2013.  DOI: 10.11772/j.issn.1001-9081.2018122549

    Morphological reconstruction is a fundamental and critical operation in medical image processing, in which dilation operations are repeatedly carried out on the marker image under the constraint of the mask image until the pixels of the marker image no longer change. Concerning the low computational efficiency of the traditional CPU-based morphological reconstruction system, using a Graphics Processing Unit (GPU) to accelerate morphological reconstruction was proposed. Firstly, a GPU-friendly data structure, the parallel heap cluster, was proposed. Then, based on the parallel heap cluster, a GPU-based morphological reconstruction system was designed and implemented. The experimental results show that compared with the traditional CPU-based morphological reconstruction system, the proposed GPU-based system achieves a speedup of more than 20 times. The proposed system demonstrates how to efficiently port software systems based on complex data structures onto the GPU.
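    A CPU reference of the operation being accelerated, morphological reconstruction by dilation (the marker is repeatedly dilated but never allowed to exceed the mask), can be written with scikit-image; the GPU parallel-heap system replaces this baseline:

```python
import numpy as np
from skimage.morphology import reconstruction

# The marker seed propagates by repeated dilation, clipped by the mask,
# until a fixed point is reached.
mask = np.array([[0, 0, 0, 0, 0],
                 [0, 9, 9, 9, 0],
                 [0, 9, 9, 9, 0],
                 [0, 0, 0, 0, 0]], dtype=float)
marker = np.zeros_like(mask)
marker[2, 2] = 5.0                       # a single seed inside the region
print(reconstruction(marker, mask, method='dilation'))
# The seed value 5 fills the connected 9-region; everything else stays 0.
```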

    Network and communications
    Correlation delay-DCSK chaotic communication scheme without inter-signal interference
    HE Lifang, CHEN Jun, ZHANG Tianqi
    2019, 39(7):  2014-2018.  DOI: 10.11772/j.issn.1001-9081.2019010036

    The major drawback of the existing Differential Chaos Shift Keying (DCSK) communication system is its low transmission rate. To solve this problem, a Correlation Delay-Differential Chaos Shift Keying (CD-DCSK) communication scheme without inter-signal interference was proposed. At the transmitting side, two orthogonal chaotic signals were generated by an orthogonal signal generator and normalized by the sign function to keep the energy of the transmitted signal constant. Then, the two chaotic signals and their delayed copies with different delay intervals were respectively modulated by 1-bit data information to form one frame of the transmitted signal. At the demodulation side, correlation demodulation was used to extract the data information, and the information bits were recovered by detecting the sign of the correlator output. The theoretical Bit Error Rate (BER) performance of the system over an Additive White Gaussian Noise (AWGN) channel was analyzed using the Gaussian Approximation (GA) method and compared with classical chaotic communication systems. The performance analysis and experimental results indicate that, compared with the DCSK system, the transmission rate of the CD-DCSK system without inter-signal interference increases by 50 percentage points, and its BER performance is better than that of the Correlation Delay Shift Keying (CDSK) system.

    Performance research of continuous-time two-level polling system with exhaustive service and gated service
    YANG Zhijun, LIU Zheng, DING Hongwei
    2019, 39(7):  2019-2023.  DOI: 10.11772/j.issn.1001-9081.2019010063

    Given that information packets arrive at the system in continuous time, a two-level polling service model with different priorities was proposed for the problem of serving businesses of different priorities in a polling system. Firstly, gated service was used for low-priority sites, and exhaustive service was used for high-priority sites. Then, when the server switched from high-priority to low-priority service, the transmission service and the transfer query were processed in parallel to reduce the time cost of the server during query conversion, improving the efficiency of the polling system. Finally, the mathematical model of the system was established by using a Markov chain and the probability generating function. By accurately analyzing the mathematical model, the expressions of the average queue length and average waiting time at each site of the continuous-time two-level service system were obtained. The simulation results show that the theoretical values are approximately equal to the simulation values, indicating that the theoretical analysis is correct and reasonable. The model provides high-quality service for high-priority sites while maintaining the quality of service of low-priority sites.

    Community dividing algorithm based on similarity of common neighbor nodes
    FU Lidong, HAO Wei, LI Dan, LI Fan
    2019, 39(7):  2024-2029.  DOI: 10.11772/j.issn.1001-9081.2019010183

    The community structure in complex networks can help people recognize the basic structure and functions of a network. Aiming at the low accuracy and high complexity of most community division algorithms, a community division algorithm based on the similarity of common neighbor nodes was proposed. Firstly, a similarity model was proposed to calculate the similarity between nodes; in this model, the accuracy of similarity measurement was improved by considering the tested node pair together with its neighbor nodes. Secondly, the local influence values of nodes were calculated, objectively reflecting the importance of nodes in the network. Thirdly, the nodes were hierarchically clustered according to similarity and local influence values, completing the preliminary division of the network community structure. Finally, the preliminarily divided sub-communities were clustered until the optimal modularity value was obtained. The simulation results show that compared with the recent Community Detection Algorithm based on Local Similarity (CDALS), the proposed algorithm improves the accuracy by 14%, which proves that it can divide the community structure of complex networks accurately and effectively.
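    The flavor of such a common-neighbour similarity can be sketched as the Jaccard overlap of two closed neighbourhoods (the node pair plus its neighbours); this is a simplified stand-in for the paper's measure, not its exact definition:

```python
import networkx as nx

def cn_similarity(G, u, v):
    """Score a node pair by the pair itself plus its common neighbours:
    Jaccard overlap of the two closed neighbourhoods."""
    nu = set(G[u]) | {u}
    nv = set(G[v]) | {v}
    return len(nu & nv) / len(nu | nv)

G = nx.karate_club_graph()
print(cn_similarity(G, 0, 1))    # nodes sharing a dense neighbourhood
print(cn_similarity(G, 0, 33))   # hubs of the two opposite communities
```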

    Partial interference alignment scheme with limited antenna resource in heterogeneous network
    LI Shibao, WANG Yixin, ZHAO Dayin, YE Wei, GUO Lin, LIU Jianhang
    2019, 39(7):  2030-2034.  DOI: 10.11772/j.issn.1001-9081.2018122456

    To solve the problem that limited antenna resources in heterogeneous networks make Interference Alignment (IA) unrealizable, a partial IA scheme maximizing the utilization of antenna resources was proposed based on the characteristics of heterogeneous networks. Firstly, a system model based on partial connectivity in heterogeneous networks was built, and the feasibility conditions for the entire system to achieve IA were analyzed. Then, based on the heterogeneity of the network (differences in transmit power and user stability), users were assigned different priorities and allocated different antenna resources accordingly. Finally, with the goal of maximizing the total system rate and the utilization of antenna resources, a partial IA scheme was proposed, in which interference at high-priority users is fully aligned and the maximum amount of interference is removed for low-priority users. In Matlab simulations with limited antenna resources, the proposed scheme increases the total system rate by 10% compared with the traditional IA algorithm, and the received rate of high-priority users is 40% higher than that of low-priority users. The experimental results show that the proposed algorithm can make full use of limited antenna resources and achieve the maximum total system rate while satisfying the different requirements of users.

    Enhanced sine cosine algorithm based node deployment optimization of wireless sensor network
    HE Qing, XU Qinshuai, WEI Kangyuan
    2019, 39(7):  2035-2043.  DOI: 10.11772/j.issn.1001-9081.2018112282

    In order to improve the performance of Wireless Sensor Networks (WSN), a node deployment optimization method based on an Enhanced Sine Cosine Algorithm (ESCA) was proposed. Firstly, a hyperbolic sine regulatory factor and a dynamic cosine wave weight coefficient were introduced to balance the global exploration and local exploitation capabilities of the algorithm. Then, a mutation strategy based on the Laplace and Gaussian distributions was proposed to prevent the algorithm from falling into local optima. Benchmark function optimization experiments show that compared with the gravitational search algorithm, the whale optimization algorithm, the basic Sine Cosine Algorithm (SCA) and improved variants, ESCA has better convergence accuracy and convergence speed. Finally, ESCA was applied to WSN node deployment optimization. The results show that compared with the enhanced particle swarm optimization algorithm, the extrapolation artificial bee colony algorithm, the improved grey wolf optimization algorithm and the self-adaptive chaotic quantum particle swarm algorithm, ESCA improves the coverage rate by 1.55, 7.72, 2.99 and 7.63 percentage points respectively, and achieves the same target precision with fewer nodes.
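    For reference, the basic SCA position update that ESCA enhances (shown here without the hyperbolic-sine factor and the Laplace/Gauss mutation) looks like this:

```python
import numpy as np

def sca(f, dim, n=30, iters=200, lo=-10, hi=10, seed=0):
    """Basic Sine Cosine Algorithm minimizing f: each candidate moves
    toward (or around) the best-so-far position along a sine or cosine
    trajectory whose amplitude r1 shrinks over time."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        r1 = 2.0 * (1 - t / iters)          # exploration -> exploitation
        for i in range(n):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.random(dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

sphere = lambda x: float(np.sum(x ** 2))
print(sca(sphere, dim=5))   # converges toward the zero vector
```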

    Online-hot video cache replacement policy based on cooperative small base stations and popularity prediction
    ZHANG Chao, LI Ke, FAN Pingzhi
    2019, 39(7):  2044-2050.  DOI: 10.11772/j.issn.1001-9081.2018122465

    The exponential growth in the number of wireless mobile devices causes heterogeneous cooperative Small Base Stations (SBS) to carry large-scale traffic loads. Aiming at this problem, an Online-hot Video Cache Replacement Policy (OVCRP) based on cooperative SBS and popularity prediction was proposed. Firstly, the short-term popularity changes of online-hot videos were analyzed; then a k-nearest-neighbor model was constructed to predict the popularity of online-hot videos; finally, the locations for cache replacement of online-hot videos were determined. In order to select appropriate locations to cache the online-hot videos, a mathematical model was built with the minimization of the overall transmission delay as the goal, and an integer programming optimization algorithm was designed. The simulation results show that compared with schemes such as RANDOM cache (RANDOM), Least Recently Used (LRU) and Least Frequently Used (LFU), the proposed OVCRP has obvious advantages in average cache hit rate and average access delay, reducing the network burden of cooperative SBS.

    Device-to-device caching algorithm based on user preference and replica threshold
    WEN Kai, TAN Xiao
    2019, 39(7):  2051-2055.  DOI: 10.11772/j.issn.1001-9081.2018122462

    In Device-to-Device (D2D) cache networks, the cache space of a mobile terminal is relatively small while multimedia contents are numerous. In order to make efficient use of the cache space of mobile terminals, a D2D cache deployment algorithm based on user preference and replica threshold was proposed. Firstly, based on user preference, a cache revenue function was designed to measure the value of caching each file. Then, with the goal of maximizing the system cache hit ratio, the cache replica threshold was derived based on convex programming theory to determine the number of replicas of each file deployed in the system. Finally, combining the cache revenue function with the replica threshold, a heuristic algorithm was proposed to implement the file cache deployment. Compared with existing cache deployment algorithms, the proposed algorithm significantly improves the cache hit rate and offloading gain while reducing service delay.

    Lifetime estimation for human motion with WiFi channel state information
    LIU Lishuang, WEI Zhongcheng, ZHANG Chunhua, WANG Wei, ZHAO Jijun
    2019, 39(7):  2056-2060.  DOI: 10.11772/j.issn.1001-9081.2018122431

    Concerning the poor privacy and flexibility of traditional human motion lifetime estimation, a lifetime estimation system for human motion based on analyzing the amplitude variation of WiFi Channel State Information (CSI) was proposed. In this system, the continuous and complex lifetime estimation problem was transformed into a discrete and simple human motion detection problem. Firstly, CSI was collected while filtering out outliers and noise. Secondly, Principal Component Analysis (PCA) was used to reduce the dimension of the subcarriers, obtaining the principal components and the corresponding eigenvectors. Thirdly, the variance of the principal components and the mean of the first difference of the eigenvectors were calculated, and a Back Propagation Neural Network (BPNN) model was trained with the ratio of these two parameters as the feature value. Fourthly, human motion detection was performed by the trained BPNN model; when human motion was detected, the CSI data were divided into segments of equal width. Finally, after performing human motion detection on all CSI segments, the human motion lifetime was estimated according to the number of segments in which motion was detected. In a real indoor environment, the average accuracy of human motion detection reaches 97% and the error rate of the estimated human motion lifetime is less than 10%. The experimental results show that the proposed system can effectively estimate the lifetime of human motion.

    Sampling awareness weighted round robin scheduling algorithm in power grid
    TAN Xin, LI Xiaohui, LIU Zhenxing, DING Yuemin, ZHAO Min, WANG Qi
    2019, 39(7):  2061-2064.  DOI: 10.11772/j.issn.1001-9081.2018112339

    When smart grid phasor measurement equipment competes for limited network communication resources, data packets may be delayed or lost due to uneven resource allocation, which affects the accuracy of power system state estimation. To solve this problem, a Sampling Awareness Weighted Round Robin (SAWRR) scheduling algorithm was proposed. Firstly, according to the characteristics of Phasor Measurement Unit (PMU) sampling frequency and packet size, a weight definition method based on the mean squared deviation of PMU traffic flow was proposed. Secondly, a corresponding iterative loop scheduling algorithm was designed for PMU sampling awareness. Finally, the algorithm was applied to the PMU sampling transmission model. The proposed algorithm is able to adaptively sense the sampling changes of PMUs and adjust the transmission of data packets in time. The simulation results show that compared with the original weighted round robin scheduling algorithm, the SAWRR algorithm reduces the scheduling delay of PMU sampling data packets by 95%, halves the packet loss rate and increases the throughput twofold. Applying SAWRR to PMU data transmission is beneficial to ensuring the stability of the smart grid.
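    A toy weighted round robin loop is sketched below; in SAWRR the weights would be recomputed online from the mean squared deviation of each PMU's traffic, whereas here they are fixed for illustration, and all names are ours:

```python
from collections import deque

def weighted_round_robin(queues, weights, budget):
    """Each pass through the PMU queues serves queue i up to weights[i]
    packets, so higher-weight (higher-rate) PMUs get proportionally more
    transmission slots per round."""
    served = []
    queues = [deque(q) for q in queues]
    while budget > 0 and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(min(w, len(q), budget)):
                served.append(q.popleft())
                budget -= 1
    return served

pmu_a = [f"A{i}" for i in range(5)]   # high-rate PMU
pmu_b = [f"B{i}" for i in range(5)]   # low-rate PMU
print(weighted_round_robin([pmu_a, pmu_b], weights=[3, 1], budget=8))
# -> ['A0', 'A1', 'A2', 'B0', 'A3', 'A4', 'B1', 'B2']
```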

    Computer software technology
    Polynomial ranking function detection method based on Dixon resultant and successive difference substitution
    YUAN Yue, LI Yi
    2019, 39(7):  2065-2073.  DOI: 10.11772/j.issn.1001-9081.2019010199

    Ranking function detection is one of the most important methods to analyze the termination of loop programs. Some tools have been developed to detect linear ranking functions for linear loop programs. However, for polynomial loops with polynomial loop conditions and polynomial assignments, existing methods for detecting ranking functions are mostly incomplete or of high time complexity. To address these weaknesses, a method for detecting polynomial ranking functions of polynomial loop programs was proposed, based on the extended Dixon resultant (the KSY (Kapur-Saxena-Yang) method) and the Successive Difference Substitution (SDS) method. Firstly, the ranking functions to be detected were treated as polynomials with parametric coefficients, so that the detection of ranking functions was transformed into the problem of finding parametric coefficients satisfying the conditions. Secondly, this problem was further transformed into the problem of determining whether the corresponding equations have solutions. Based on the extended Dixon resultant of the KSY method, the problem was reduced to the decision problem of whether the polynomials with symbolic coefficients (the resultants) are strictly positive. Thirdly, a sufficient condition making the obtained resultants strictly positive was found by the SDS method. In this way, coefficients satisfying the conditions were obtained and a ranking function was found. The effectiveness of the method was proved by experiments. The experimental results show that polynomial ranking functions, including d-depth multi-stage polynomial ranking functions, can be detected for polynomial loop programs, and the method finds polynomial ranking functions more efficiently than existing methods. For loops whose ranking functions cannot be detected by the method based on Cylindrical Algebraic Decomposition (CAD) due to its high time complexity, ranking functions can be found within a few seconds by the proposed method.
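    For orientation, the standard conditions that make a function f a ranking function for a loop "while C(x) do x := T(x)" are shown below; the paper searches for a polynomial f with parametric coefficients satisfying exactly such constraints, since their validity implies termination.

```latex
\begin{align*}
  &\forall x:\ C(x) \;\Rightarrow\; f(x) \ge 0
     && \text{($f$ is bounded below on the loop condition)} \\
  &\forall x:\ C(x) \;\Rightarrow\; f(x) - f(T(x)) \ge 1
     && \text{($f$ strictly decreases in each iteration)}
\end{align*}
```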

    Clone code detection based on image similarity
    WANG Yafang, LIU Dongsheng, HOU Min
    2019, 39(7):  2074-2080.  DOI: 10.11772/j.issn.1001-9081.2019010083

    At present, research on clone code detection mainly focuses on four perspectives: text, vocabulary, grammar and semantics. However, little progress has been made in the effectiveness of clone code detection for a long time. In view of this problem, a new method called Clone Code detection based on Image Similarity (CCIS) was proposed. Firstly, the source code was preprocessed by removing comments, whitespace, etc., yielding "clean" function fragments in which identifiers, keywords, etc. were highlighted. Then the processed source code was converted into images, and these images were normalized. Finally, the Jaccard distance and a perceptual hash algorithm were used for detection, obtaining clone code information from these images. In order to verify the validity of this method, six open source software projects were used to constitute the evaluation dataset. The experimental results show that CCIS can detect 100% of type-1 clones, 88% of type-2 clones and 60% of type-3 clones, which proves the good effect of CCIS on clone code detection.
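    The hashing stage can be illustrated with an average hash, a simpler relative of the DCT-based perceptual hash used by CCIS; the synthetic images and the clone threshold below are our assumptions:

```python
from PIL import Image
import numpy as np

def ahash(img, size=8):
    """Average hash of a rendered code image: shrink, grey-scale, and
    threshold each pixel against the mean."""
    small = img.convert("L").resize((size, size))
    px = np.asarray(small, dtype=float)
    return (px > px.mean()).flatten()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

# Two images standing in for rendered function fragments; a small hash
# distance flags a clone-pair candidate (threshold 5 is an assumption).
a = Image.fromarray(np.random.default_rng(0).integers(
        0, 255, (64, 64), dtype=np.uint8))
b = a.point(lambda v: min(255, v + 10))       # near-duplicate of a
print(hamming(ahash(a), ahash(b)) <= 5)       # likely True
```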

    Virtual reality and multimedia computing
    Two-stream CNN for action recognition based on video segmentation
    WANG Ping, PANG Wenhao
    2019, 39(7):  2081-2086.  DOI: 10.11772/j.issn.1001-9081.2019010156

    Aiming at the issue that the original spatial-temporal two-stream Convolutional Neural Network (CNN) model has low accuracy for action recognition in long and complex videos, a two-stream CNN for action recognition based on video segmentation was proposed. Firstly, a video was split into multiple non-overlapping segments of equal length. For each segment, one frame was randomly sampled to represent its static features, and stacked optical flow images were computed to represent its motion features. Secondly, these two kinds of images were input into the spatial CNN and the temporal CNN respectively for feature extraction, and the classification prediction features of the spatial and temporal domains were obtained by merging the segment features within each stream. Finally, the two-stream predictive features were integrated to obtain the action recognition result for the video. In a series of experiments, data augmentation techniques and transfer learning methods were discussed to alleviate the over-fitting caused by the lack of training samples, and the effects of the number of segments, the network architecture, segmentation-based feature fusion schemes and two-stream integration strategies on recognition performance were analyzed. The experimental results show that the accuracy of the proposed model reaches 91.80% on the UCF101 dataset, which is 3.8% higher than that of the original two-stream CNN model, and improves to 61.39% on the HMDB51 dataset, also higher than that of the original model. This shows that the proposed model can better learn and express action features in long and complex videos.

    Multi-exposure image fusion algorithm based on Retinex theory
    WANG Keqiang, ZHANG Yushuai, WANG Baoqun
    2019, 39(7):  2087-2092.  DOI: 10.11772/j.issn.1001-9081.2018112382

    Multi-exposure image fusion technology directly combines a sequence of images of the same scene at different exposure levels into a single high-quality image with more scene details. Aiming at the problems of poor local contrast and color distortion in existing algorithms, a new multi-exposure image fusion algorithm based on the Retinex theoretical model was proposed. Firstly, based on the Retinex model, an illumination estimation algorithm was used to divide the exposure sequence into an illumination component sequence and a reflection component sequence, and the two sequences were then processed with different fusion methods. For the illumination component, the global brightness variation of the scene was preserved and the effects of over-exposed and under-exposed regions were weakened, while for the reflection component, moderate-exposure evaluation parameters were used to better preserve the color and detail information of the scene. The proposed algorithm was analyzed both subjectively and objectively. The experimental results show that compared with the traditional algorithm based on image-domain synthesis, the proposed algorithm improves the Structural SIMilarity (SSIM) by 1.7% on average and handles image color and local details better.
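    The decomposition that both fusion branches operate on is the Retinex model I = R * L. A single-scale sketch, with a Gaussian blur as an assumed illumination estimator (the paper's estimator may be more elaborate), follows:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=30.0):
    """Split an exposure I into illumination L and reflectance R with
    I = R * L: L is estimated by a large Gaussian blur and R is recovered
    in the log domain."""
    img = img.astype(float) + 1.0            # avoid log(0)
    L = gaussian_filter(img, sigma)          # smooth estimate of lighting
    R = np.exp(np.log(img) - np.log(L))      # reflectance = I / L
    return R, L

exposure = np.random.rand(64, 64) * 255      # stand-in for one exposure
R, L = retinex_decompose(exposure)
print(np.allclose(R * L, exposure + 1.0))    # True: I = R * L holds
```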

    Accelerated KAZE-SIFT feature extraction algorithm for oblique images
    BO Dan, LI Zongchun, WANG Xiaonan, QIAO Hanwen
    2019, 39(7):  2093-2097.  DOI: 10.11772/j.issn.1001-9081.2018122564

    Concerning the poor performance of traditional vertical image feature extraction algorithms on oblique image matching, a feature extraction algorithm based on the Accelerated-KAZE (AKAZE) and Scale Invariant Feature Transform (SIFT) algorithms, called AKAZE-SIFT, was proposed. Firstly, in order to guarantee the accuracy and distinctiveness of image feature detection, the AKAZE operator, which fully preserves the contour information of the image, was utilized for feature detection. Secondly, the robust SIFT operator was used to improve the stability of feature description. Thirdly, the rough matching point pairs were determined by the Euclidean distance between the target feature point vector and the candidate feature point vectors. Finally, a homography constraint was applied via the random sample consensus algorithm to improve matching purity. To evaluate the performance of the feature extraction algorithm, blur, rotation, brightness, viewpoint and scale changes under oblique photography conditions were simulated. The experimental results show that compared with the SIFT and AKAZE algorithms, the recall of AKAZE-SIFT is improved by 12.8% and 5.3% respectively, the precision is increased by 6.5% and 6.1% respectively, and the F1 measure is raised by 13.8% and 5.6% respectively; the efficiency of the proposed algorithm is higher than that of SIFT and slightly lower than that of AKAZE. Given its excellent detection and description performance, the AKAZE-SIFT algorithm is more suitable for oblique image feature extraction.
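    The AKAZE-detect/SIFT-describe pairing maps directly onto the stock OpenCV API (opencv-python >= 4.4 for SIFT); the synthetic test images and the ratio-test threshold below are placeholders of ours:

```python
import cv2
import numpy as np

# Synthetic stand-ins for two oblique views of the same scene.
img1 = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(img1, (100, 100), (300, 260), 255, -1)
cv2.circle(img1, (450, 300), 80, 180, -1)
img2 = cv2.warpAffine(img1, cv2.getRotationMatrix2D((320, 240), 15, 0.9),
                      (640, 480))

akaze, sift = cv2.AKAZE_create(), cv2.SIFT_create()
kp1 = akaze.detect(img1, None)     # AKAZE finds the keypoints
kp2 = akaze.detect(img2, None)
kp1, des1 = sift.compute(img1, kp1)  # SIFT describes them
kp2, des2 = sift.compute(img2, kp2)

# Euclidean matching with Lowe's ratio test; a RANSAC homography
# (cv2.findHomography) would follow as the purification step.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
print(len(good), "putative matches")
```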

    Human fatigue detection based on eye information characteristics
    LUO Yuan, YUN Mingjing, WANG Yi, ZHAO Liming
    2019, 39(7):  2098-2102.  DOI: 10.11772/j.issn.1001-9081.2018122441
    Abstract ( )   PDF (799KB) ( )  
    References | Related Articles | Metrics

    The eye state is an important indicator of the degree of fatigue, but changes in head posture and lighting strongly affect eye localization, which in turn affects the accuracy of eye state recognition and fatigue detection. A cascade Convolutional Neural Network (CNN) was therefore proposed to identify the eye state, and thus human fatigue, by detecting six feature points of the eye. Firstly, grayscale integral projection and a regional convolutional neural network were used as the first-level network to locate and detect the eyes. Then, the second-level network was used to segment the eye image, and a parallel sub-convolution system performed regression of the eye feature points. Finally, the feature points were used to calculate the degree of eye opening and closing to identify the current eye state, and the fatigue state was judged according to the PERcentage of eyelid CLOSure over the pupil over time (PERCLOS) criterion. The experimental results show that the average detection accuracy of the six eye feature points reaches 95.8% at a normalization error of 0.05, and the effectiveness of the proposed method is verified by identifying the fatigue state from the PERCLOS values of simulated video.
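
    One standard way to turn six eye feature points into an opening-closing degree is the eye aspect ratio; a sketch (the landmark ordering, the 0.2 closed-eye threshold and the 0.4 fatigue threshold are assumptions, not the paper's calibration):

```python
import numpy as np

def eye_aspect_ratio(pts):
    """pts: (6, 2) eye landmarks ordered corner, top-left, top-right,
    corner, bottom-right, bottom-left; the ratio falls toward 0 as the
    eye closes."""
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h)

def is_fatigued(ear_series, closed_thresh=0.2, perclos_thresh=0.4):
    """PERCLOS: the fraction of frames with the eye judged closed over a
    time window; a large fraction is read as fatigue."""
    ears = np.asarray(ear_series)
    return float((ears < closed_thresh).mean()) >= perclos_thresh
```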

    Surface scratch recognition method based on deep neural network
    LI Wenjun, CHEN Bin, LI Jianming, QIAN Jide
    2019, 39(7):  2103-2108.  DOI: 10.11772/j.issn.1001-9081.2018112247
    Abstract ( )   PDF (997KB) ( )  
    References | Related Articles | Metrics

    In order to achieve robust, accurate and real-time recognition of surface scratches under complex texture backgrounds with uneven brightness, a surface scratch recognition method based on a deep neural network was proposed. The network consisted of a style transfer network and a focus Convolutional Neural Network (CNN). The style transfer network, comprising a feedforward conversion network and a loss network, was used to preprocess surface scratches under complex backgrounds with uneven brightness. Firstly, the style features of a uniform-brightness template and the perceptual features of the detected image were extracted through the loss network, and the feedforward conversion network was trained offline to obtain the optimal network parameters. Then, images with uniform brightness and uniform style were generated by the style transfer network. Finally, the proposed focus CNN, built on a focus structure, was used to extract and recognize scratch features in the generated images. A scratch recognition experiment was carried out on metal surfaces under varying illumination. The experimental results show that, compared with traditional image processing methods requiring hand-designed features and with the traditional deep convolutional neural network, the false negative rate of scratch detection is as low as 8.54%, with faster convergence speed and a smoother convergence curve, and better detection results are obtained under different depth models, with accuracy increased by about 2%. The style transfer network retains complete scratch features while solving the problem of uneven brightness, thus improving recognition accuracy, and the focus CNN achieves robust, accurate and real-time recognition of scratches, greatly reducing both the false negative and the false positive rate.
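
    The loss network can be sketched with frozen VGG16 slices: a perceptual (content) term against the detected image and a Gram-matrix style term against the uniform-brightness template (PyTorch; the layer indices and the 10x style weight are illustrative assumptions):

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

def gram(feat):
    """Gram matrix of a feature map, the usual style statistic."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class TransferLoss(torch.nn.Module):
    """Loss network for offline training of the feedforward conversion
    network: frozen VGG16 features supply both loss terms."""
    def __init__(self, content_layer=9, style_layer=16):
        super().__init__()
        feats = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
        for p in feats.parameters():
            p.requires_grad_(False)
        self.content_net = feats[:content_layer]
        self.style_net = feats[:style_layer]

    def forward(self, generated, detected_img, template_img):
        c = F.mse_loss(self.content_net(generated),
                       self.content_net(detected_img))
        s = F.mse_loss(gram(self.style_net(generated)),
                       gram(self.style_net(template_img)))
        return c + 10.0 * s
```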

    Pulmonary nodule detection algorithm based on deep convolutional neural network
    DENG Zhonghao, CHEN Xiaodong
    2019, 39(7):  2109-2115.  DOI: 10.11772/j.issn.1001-9081.2019010056
    Abstract ( )   PDF (1207KB) ( )  
    References | Related Articles | Metrics

    Traditional pulmonary nodule detection algorithms suffer from low detection sensitivity and a large number of false positives. To solve these problems, a pulmonary nodule detection algorithm based on a deep Convolutional Neural Network (CNN) was proposed. Firstly, the traditional fully convolutional segmentation network was deliberately simplified. Then, in order to obtain high-quality candidate pulmonary nodules while keeping sensitivity high, deep supervision of some CNN layers was added and an improved weighted loss function was used. Thirdly, three-dimensional deep CNNs based on multi-scale contextual information were designed to enhance image feature extraction. Finally, a trained fusion classification model was used to classify the candidate nodules and thereby reduce the false positive rate. The algorithm was verified through comparison experiments on the LUNA16 dataset. In the detection stage, when the average number of candidate nodules per CT (Computed Tomography) scan is 50.2, the sensitivity of the algorithm is 94.3%, which is 4.2 percentage points higher than that of the traditional fully convolutional segmentation network. In the classification stage, the competition performance metric of the algorithm reaches 0.874. The experimental results show that the proposed algorithm can effectively improve detection sensitivity and reduce the false positive rate.
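
    The deep supervision plus weighted loss idea can be sketched as a sum of positively weighted binary cross-entropy terms over the side outputs (PyTorch; the side weights and the positive-class weight of 8 are illustrative assumptions, not the paper's values):

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(side_outputs, target, pos_weight=8.0):
    """side_outputs: list of logit maps from the supervised intermediate
    layers plus the final layer, each upsampled to the mask size.
    Weighting the rare nodule pixels keeps sensitivity high; intermediate
    outputs get smaller loss weights than the final one."""
    side_weights = [0.5] * (len(side_outputs) - 1) + [1.0]
    pw = torch.tensor([pos_weight], device=target.device)
    total = torch.zeros((), device=target.device)
    for out, w in zip(side_outputs, side_weights):
        total = total + w * F.binary_cross_entropy_with_logits(
            out, target, pos_weight=pw)
    return total
```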

    Crack detection for aircraft skin based on image analysis
    XUE Qian, LUO Qijun, WANG Yue
    2019, 39(7):  2116-2120.  DOI: 10.11772/j.issn.1001-9081.2019010092
    Abstract ( )   PDF (795KB) ( )  
    References | Related Articles | Metrics

    To realize automatic crack detection for aircraft skin, image processing and parameter estimation methods were studied based on scanning images obtained by a pan-and-tilt long-focus camera. Firstly, considering the characteristics of aircraft skin images, light compensation, adaptive grayscale stretching and local OTSU segmentation were carried out to obtain binary crack images. Then, characteristics of the connected regions such as area and rectangularity were calculated to remove block noise from the images. After that, thinning and deburring were performed on the cracks in the denoised binary images, and the branches of each crack were separated by deleting the crack nodes. Finally, using the branch pixels as indexes, information on each crack branch such as length, average width, maximum width, starting point, end point, midpoint, orientation and number of branches was calculated by tracing pixels, and a report was output by the crack detection software. The experimental results demonstrate that cracks wider than 1 mm can be detected effectively by the proposed method, which provides a feasible means for automatic detection of aircraft skin cracks on fuselages and wings.
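
    A sketch of the local OTSU segmentation and the area/rectangularity noise filter (OpenCV; the 64-pixel block size and the thresholds are illustrative assumptions — a thin crack covers only a small fraction of its bounding box, so a high fill ratio marks block noise):

```python
import cv2
import numpy as np

def crack_binary_map(gray, block=64, min_area=30, max_rectangularity=0.4):
    """gray: 8-bit grayscale image after light compensation and adaptive
    stretching. Applies OTSU per block (local OTSU), then removes
    connected components that are too small or too blocky to be cracks."""
    h, w = gray.shape
    binary = np.zeros_like(gray)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray[y:y + block, x:x + block]
            _, bw = cv2.threshold(tile, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            binary[y:y + block, x:x + block] = bw
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    out = np.zeros_like(binary)
    for i in range(1, n):                   # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        box = stats[i, cv2.CC_STAT_WIDTH] * stats[i, cv2.CC_STAT_HEIGHT]
        if area >= min_area and area / float(box) <= max_rectangularity:
            out[labels == i] = 255
    return out
```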

    Left ventricular segmentation method of ultrasound image based on convolutional neural network
    ZHU Kai, FU Zhongliang, CHEN Xiaoqing
    2019, 39(7):  2121-2124.  DOI: 10.11772/j.issn.1001-9081.2018112321
    Abstract ( )   PDF (690KB) ( )  
    References | Related Articles | Metrics

    Ultrasound image segmentation of the left ventricle is very important for doctors in clinical practice. As ultrasound images contain much noise and the contour features are not obvious, current Convolutional Neural Network (CNN) methods tend to include unnecessary regions in left ventricular segmentation, and the segmented regions are often incomplete. To solve these problems, keypoint location and an image convex hull method were used to optimize segmentation results based on a Fully Convolutional neural Network (FCN). Firstly, the FCN was used to obtain preliminary segmentation results. Then, in order to remove erroneous regions from the results, a CNN was proposed to locate three keypoints of the left ventricle, by which the erroneous regions were filtered out. Finally, to ensure that the remaining regions formed a complete ventricle, an image convex hull algorithm was used to merge all effective regions together. The experimental results show that the proposed method greatly improves FCN-based left ventricular segmentation of ultrasound images: under the adopted evaluation standard, its accuracy is nearly 15% higher than that of the traditional CNN method.
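
    A sketch of the keypoint filtering and convex hull merging steps applied to the FCN output (OpenCV; assumes the keypoints arrive as (x, y) pixel coordinates):

```python
import cv2
import numpy as np

def refine_lv_mask(fcn_mask, keypoints):
    """fcn_mask: binary (H, W) preliminary FCN segmentation; keypoints:
    (x, y) coordinates of the three predicted left-ventricle keypoints.
    Regions containing no keypoint are discarded as erroneous, and the
    survivors are merged into one complete region via their convex hull."""
    n, labels = cv2.connectedComponents(fcn_mask.astype(np.uint8))
    keep = {labels[int(y), int(x)] for x, y in keypoints
            if labels[int(y), int(x)] != 0}
    kept = np.isin(labels, list(keep)).astype(np.uint8)
    pts = cv2.findNonZero(kept)
    if pts is None:
        return kept
    out = np.zeros_like(kept)
    cv2.fillConvexPoly(out, cv2.convexHull(pts), 1)
    return out
```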

    On-line fabric defect recognition algorithm based on deep learning
    WANG Lishun, ZHONG Yong, LI Zhendong, HE Yilong
    2019, 39(7):  2125-2128.  DOI: 10.11772/j.issn.1001-9081.2019010110
    Abstract ( )   PDF (681KB) ( )  
    References | Related Articles | Metrics

    On-line detection of fabric defects is a major problem faced by the textile industry. Aiming at the high false positive rate, high false negative rate and poor real-time performance of existing fabric defect detection, an on-line detection algorithm for fabric defects based on deep learning was proposed. Firstly, based on the GoogLeNet architecture and drawing on other classical classification models, a fabric defect classification model suitable for the actual production environment was constructed. Secondly, a fabric defect database was built from pictures of different fabric types labeled by quality inspectors, and the database was used to train the classification model. Finally, the images collected by the high-definition camera on the fabric inspection machine were segmented, and the resulting small images were sent to the trained classification model in batches to classify each small image, thereby detecting defects and determining their positions. The model was validated on the fabric defect database. The experimental results show that the average test time per small image is 0.37 ms with the proposed model, which is 67% lower than with GoogLeNet and 93% lower than with ResNet-50, while the accuracy reaches 99.99% on the test set, showing that its accuracy and real-time performance meet actual industrial demands.
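
    A sketch of the segment-and-classify-in-batches loop (PyTorch; `model` stands for the trained GoogLeNet-style classifier, and class 0 meaning "defect-free" is an assumption):

```python
import torch

def classify_tiles(model, frame, tile=224, batch_size=64, device="cpu"):
    """frame: (H, W, 3) uint8 numpy image from the inspection camera. The
    frame is cut into non-overlapping tiles, the tiles are classified in
    batches, and the position of every non-normal tile is returned."""
    h, w, _ = frame.shape
    tiles, positions = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = torch.from_numpy(frame[y:y + tile, x:x + tile].copy())
            tiles.append(patch.permute(2, 0, 1).float() / 255.0)
            positions.append((y, x))
    defects = []
    model.eval()
    with torch.no_grad():
        for i in range(0, len(tiles), batch_size):
            batch = torch.stack(tiles[i:i + batch_size]).to(device)
            for pred, pos in zip(model(batch).argmax(dim=1).cpu(),
                                 positions[i:i + batch_size]):
                if pred.item() != 0:        # class 0 assumed: defect-free
                    defects.append((pos, pred.item()))
    return defects
```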

    Automatic image annotation based on generative adversarial network
    SHUI Liucheng, LIU Weizhong, FENG Zhuoming
    2019, 39(7):  2129-2133.  DOI: 10.11772/j.issn.1001-9081.2018112400
    Abstract ( )   PDF (875KB) ( )  
    References | Related Articles | Metrics

    In a deep-learning-based image annotation model, the number of output neurons is directly proportional to the size of the labeled vocabulary, so any change of vocabulary forces a change of model structure. To solve this problem, a new annotation model combining Generative Adversarial Network (GAN) and Word2vec was proposed. Firstly, the labeled vocabulary was mapped to fixed multi-dimensional word vectors through Word2vec. Secondly, a neural network model called GAN-W (GAN-Word2vec annotation) was established based on GAN, making the number of neurons in the model output layer equal to the dimension of the word vectors and no longer related to the vocabulary size. Finally, the annotation result was determined by sorting the multiple outputs of the model. Experiments were conducted on the image annotation datasets Corel 5K and IAPRTC-12. The experimental results show that on the Corel 5K dataset, the accuracy, recall and F1 value of the proposed model are increased by 5, 14 and 9 percentage points respectively compared with those of Convolutional Neural Network Regression (CNN-R); on the IAPRTC-12 dataset, the accuracy, recall and F1 value are 2, 6 and 3 percentage points higher than those of Two-Pass K-Nearest Neighbor (2PKNN). The results show that the GAN-W model solves the problem of the output layer changing with the vocabulary; meanwhile, the number of labels per image is adaptive, making the annotation results better match actual annotation practice.
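
    The fixed-dimension outputs can be decoded back to labels by nearest-neighbor search in the Word2vec space; a sketch (numpy; the cosine-similarity threshold that makes the label count adaptive is an assumption):

```python
import numpy as np

def decode_annotations(output_vecs, vocab_vecs, vocab_words, sim_thresh=0.5):
    """output_vecs: (k, d) vectors from the model's fixed-size output
    layer; vocab_vecs: (V, d) Word2vec embeddings of the label words.
    Each output maps to its nearest word by cosine similarity; weak
    matches are dropped, so the number of labels per image is adaptive."""
    out = output_vecs / np.linalg.norm(output_vecs, axis=1, keepdims=True)
    voc = vocab_vecs / np.linalg.norm(vocab_vecs, axis=1, keepdims=True)
    sims = out @ voc.T                      # (k, V) cosine similarities
    labels = []
    for row in sims:
        j = int(row.argmax())
        if row[j] >= sim_thresh and vocab_words[j] not in labels:
            labels.append(vocab_words[j])
    return labels
```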

    Detection method of hard exudates in fundus images by combining local entropy and robust principal components analysis
    CHEN Li, CHEN Xiaoyun
    2019, 39(7):  2134-2140.  DOI: 10.11772/j.issn.1001-9081.2019010208
    Abstract ( )   PDF (1062KB) ( )  
    References | Related Articles | Metrics

    To address the time-consuming and error-prone diagnosis of fundus images by ophthalmologists, an unsupervised automatic detection method for hard exudates in fundus images was proposed. Firstly, the blood vessels, dark lesion regions and optic disc were removed by morphological background estimation in the preprocessing phase. Then, with the luminosity channel of the image taken as the initial image, a low-rank matrix and a sparse matrix were obtained by combining local entropy with Robust Principal Component Analysis (RPCA), exploiting the locality and sparsity of hard exudates in fundus images. Finally, the hard exudate regions were obtained from the normalized sparse matrix. The method was tested on the fundus image databases e-ophtha EX and DIARETDB1. The experimental results show that the proposed method achieves 91.13% sensitivity and 90% specificity at the lesion level, 99.03% accuracy at the image level, and an average running time of 0.5 s, giving higher sensitivity and shorter running time than the Support Vector Machine (SVM) and K-means methods.
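
    The low-rank/sparse split can be sketched with the standard inexact-ALM Principal Component Pursuit, where D is the preprocessed luminosity matrix, L the smooth background and S the sparse exudate candidates (numpy; the parameter defaults follow common practice, not the paper):

```python
import numpy as np

def shrink(M, tau):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, lam=None, tol=1e-6, max_iter=300):
    """Principal Component Pursuit by inexact ALM: D ~ L + S with L low
    rank (background) and S sparse (candidate exudates)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, "fro")
    mu = m * n / (4.0 * np.abs(D).sum() + 1e-12)
    L, S, Y = (np.zeros_like(D) for _ in range(3))
    for _ in range(max_iter):
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt    # singular value thresholding
        S = shrink(D - L + Y / mu, lam / mu)
        resid = D - L - S
        Y += mu * resid
        if np.linalg.norm(resid, "fro") / norm_D < tol:
            break
    return L, S
```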

    Frontier & interdisciplinary applications
    Modeling and simulation of container transportation process in blockchain based container sharing mode
    LIU Weirong, ZHEN Hong
    2019, 39(7):  2141-2147.  DOI: 10.11772/j.issn.1001-9081.2018122440
    Asbtract ( )   PDF (1154KB) ( )  
    References | Related Articles | Metrics

    To solve the problem that the sharing of container stock and increment cannot be effectively implemented, a container sharing model based on blockchain principles was proposed. Firstly, the operation mechanism of the blockchain-based container sharing mode was elaborated. Secondly, the changes this mode brings to the container transportation process were analyzed. Thirdly, based on Petri net theory, Colored Timed Petri Net (CTPN) models of the traditional mode and the blockchain-based container sharing mode were established with CPN Tools. Finally, the models were simulated and four indicators were compared across the modes: the time from receipt of an order to pick-up of an empty container, the ratio of empty driving time on the road, the order loss rate, and the proportion of unloaded containers. The experimental results show that, compared with the traditional mode, under the blockchain-based container sharing mode the shipper's pick-up time is shortened, the empty driving proportion is reduced by 5.28%, no orders are lost due to mismatch between the shipping time and the order time window, and the proportion of unloaded containers is reduced by 6.99%. The simulation results show that the blockchain-based container sharing mode not only makes up for the shortcomings of traditional stock and increment sharing of containers, but also optimizes the container transportation process, offering an effective way to reduce costs and increase efficiency in the container transportation industry.
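
    The four compared indicators reduce to simple ratios over the simulated order records; a sketch of that bookkeeping (the record fields are hypothetical names for quantities logged by the CTPN simulation):

```python
def transport_indicators(orders):
    """orders: records from one simulation run; each has 'wait' (receipt-
    to-pickup time), 'empty_time' and 'total_time' (driving times), and
    boolean 'lost' and 'unloaded' flags. Returns the four indicators."""
    n = float(len(orders))
    return {
        "avg_pickup_wait": sum(o["wait"] for o in orders) / n,
        "empty_driving_ratio": sum(o["empty_time"] for o in orders)
                               / sum(o["total_time"] for o in orders),
        "order_loss_rate": sum(o["lost"] for o in orders) / n,
        "unloaded_ratio": sum(o["unloaded"] for o in orders) / n,
    }
```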

    Improvement of blockchain practical Byzantine fault tolerance consensus algorithm
    GAN Jun, LI Qiang, CHEN Zihao, ZHANG Chao
    2019, 39(7):  2148-2155.  DOI: 10.11772/j.issn.1001-9081.2018112343
    Asbtract ( )   PDF (1409KB) ( )  
    References | Related Articles | Metrics

    Since the Practical Byzantine Fault Tolerance (PBFT) consensus algorithm applied to alliance chains suffers from a static network structure, random selection of the master node and large communication overhead, an Evolution of Practical Byzantine Fault Tolerance (EPBFT) consensus algorithm was proposed. Firstly, a set of activity states was defined for consensus nodes so that, through state transitions, nodes have a complete life cycle in the system and can join and exit dynamically, giving the system a dynamic network structure. Secondly, the master node selection of PBFT was improved by adding an election process that takes the longest chain as the election principle; after the election, data synchronization and a master node verification process further ensure the reliability of the master node. Finally, the consensus process of the PBFT algorithm was optimized to improve consensus efficiency, reducing the communication overhead of EPBFT to half that of PBFT when view changes are rare. The experimental results show that the EPBFT algorithm has good effectiveness and practicability.
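
    The longest-chain election principle can be sketched in a few lines; breaking ties on node id is an added assumption so that every replica elects the same master deterministically:

```python
from dataclasses import dataclass

@dataclass
class ConsensusNode:
    node_id: str
    chain_height: int          # length of the chain this node holds
    state: str = "active"      # life-cycle state from the state machine

def elect_master(nodes):
    """Longest-chain election: among active nodes the greatest chain
    height wins; ties fall back to node_id so every replica reaches the
    same result before data synchronization and master verification."""
    active = [n for n in nodes if n.state == "active"]
    return max(active, key=lambda n: (n.chain_height, n.node_id))
```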

    Public welfare time bank system based on blockchain technology
    XIAO Kai, WANG Meng, TANG Xinyu, JIANG Tonghai
    2019, 39(7):  2156-2161.  DOI: 10.11772/j.issn.1001-9081.2018122503
    Asbtract ( )   PDF (1072KB) ( )  
    References | Related Articles | Metrics

    In existing time bank systems, the issuance and settlement of time dollars are completely centralized on a central node. This centralization not only suffers from security problems such as single point of failure and data tampering, but also lacks transparency in time dollar issuance and circulation and depends on a centralized settlement agency. To solve these problems, a public welfare time bank system based on blockchain was proposed. Firstly, the issuance and settlement functions of time dollars were separated from the central node. Then, exploiting the blockchain's distributed decentralization, collective maintenance and tamper resistance, the separated issuance function was gradually decentralized and the separated settlement function was directly decentralized, forming the Public Welfare Time Blockchain (PWTB). Finally, PWTB used the blockchain to decentralize the time bank from a single node maintaining the ledger to collective maintenance of a distributed shared ledger, so the issuance and circulation of time dollars became open and transparent, and settlement no longer relied on a central node. The security analysis shows that PWTB achieves secure information transmission and storage as well as secure data sharing.
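
    The tamper resistance that PWTB relies on comes from hash-chained blocks; a minimal sketch (Python standard library only; the transaction fields are hypothetical):

```python
import hashlib
import json
import time

def make_block(prev_hash, transactions):
    """Each block commits to its predecessor's hash, so altering a settled
    time-dollar transfer invalidates every later block in the shared ledger."""
    block = {"prev": prev_hash, "time": time.time(), "txs": transactions}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block("0" * 64, [{"issue_hours": 10, "to": "volunteer_a"}])
settle = make_block(genesis["hash"],
                    [{"from": "volunteer_a", "to": "elder_b", "hours": 2}])
```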

    Improved scheme of delegated proof of stake consensus mechanism
    HUANG Jiacheng, XU Xinhua, WANG Shichun
    2019, 39(7):  2162-2167.  DOI: 10.11772/j.issn.1001-9081.2018122527
    Asbtract ( )   PDF (916KB) ( )  
    References | Related Articles | Metrics

    To solve the problem that the Delegated Proof of Stake (DPoS) consensus mechanism fails to eliminate malicious nodes in time because of inactive voting and long voting cycles, an improved DPoS scheme based on a fusing mechanism, a credit mechanism and standby witness nodes was proposed. Firstly, the fusing mechanism was introduced to provide negative votes, speeding up the removal of malicious nodes. Secondly, the credit mechanism was introduced to assign credit scores and credit grades to nodes; the scores and grades were adjusted dynamically by monitoring node behavior, making it harder for malicious nodes to obtain votes. Finally, a standby witness node list was added to fill the vacancy promptly after a malicious node's witness right is revoked. A test blockchain system based on the improved scheme was built, and its availability and effectiveness were verified by experiments. The experimental results show that a blockchain based on the improved DPoS consensus mechanism can eliminate malicious nodes in time and is suitable for most scenarios.
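
    A sketch of how the credit and fusing mechanisms interact with vote tallying (the score increments, the grade scaling and the 30% fuse threshold are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Witness:
    name: str
    credit: float = 100.0      # credit score; grade = credit / 100
    votes: float = 0.0

def record_behavior(w, produced_valid_block):
    """Good behavior restores credit slowly; missing or faking a block
    costs far more, so misbehaving witnesses sink in credit grade."""
    w.credit = min(100.0, w.credit + 1.0) if produced_valid_block \
        else max(0.0, w.credit - 10.0)

def tally(w, positive, negative, fuse_ratio=0.3):
    """Negative votes subtract directly and the credit grade scales the
    result; returns True when the fuse trips and w is ejected at once."""
    w.votes = (positive - negative) * (w.credit / 100.0)
    return negative >= fuse_ratio * max(positive, 1.0)
```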

    Multi-period multi-decision closed-loop logistics network for fresh products with fuzzy variables
    YANG Xiaohua, GUO Jianquan
    2019, 39(7):  2168-2174.  DOI: 10.11772/j.issn.1001-9081.2018122434
    Asbtract ( )   PDF (1059KB) ( )  
    References | Related Articles | Metrics

    Fresh products require high-frequency logistics distribution because of their perishability and vulnerability, while both demand and return are uncertain. To address this, a multi-period closed-loop logistics network for fresh products with fuzzy variables was constructed to jointly decide the minimum system cost, the optimal facility locations and the optimal delivery routes. To solve the corresponding Fuzzy Mixed Integer Linear Programming (FMILP) model, firstly, the demand and return quantities were defined as triangular fuzzy parameters; secondly, the fuzzy constraints were transformed into crisp formulas by the fuzzy chance-constrained programming method; finally, the optimal solution of the case was obtained by the Genetic Algorithm (GA) and the Particle Swarm Optimization (PSO) algorithm. The experimental results show that the multi-period closed-loop system outperforms the single-period system in multi-decision planning, and that the confidence levels of the triangular fuzzy parameters significantly influence the optimal operation of the enterprise, providing a reference for relevant decision makers.
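
    For a triangular fuzzy demand, the chance constraint has a well-known crisp equivalent under credibility theory; a sketch of the pessimistic branch used when the confidence level is at least 0.5 (one standard transformation, offered as an assumption about the model's exact form):

```python
def crisp_min_supply(demand_tri, alpha):
    """demand_tri = (r1, r2, r3): triangular fuzzy demand. Returns the
    smallest crisp supply x satisfying Cr{demand <= x} >= alpha, i.e.
    x = (2 - 2*alpha) * r2 + (2*alpha - 1) * r3 for alpha >= 0.5."""
    r1, r2, r3 = demand_tri
    if alpha < 0.5:
        raise ValueError("only the pessimistic branch is sketched here")
    return (2 - 2 * alpha) * r2 + (2 * alpha - 1) * r3

# e.g. demand (80, 100, 130) tonnes at confidence 0.9 -> plan for 124.0
print(crisp_min_supply((80, 100, 130), 0.9))
```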

    Train fault identification based on compressed sensing and deep wavelet neural network
    DU Xiaolei, CHEN Zhigang, ZHANG Nan, XU Xu
    2019, 39(7):  2175-2180.  DOI: 10.11772/j.issn.1001-9081.2018112278
    Asbtract ( )   PDF (981KB) ( )  
    References | Related Articles | Metrics

    Aiming at the difficulty of unsupervised feature learning on defect vibration data of train running parts, a method based on Compressed Sensing and a Deep Wavelet Neural Network (CS-DWNN) was proposed. Firstly, the collected vibration data of the train running part were compressed and sampled with a Gaussian random matrix. Secondly, a DWNN based on an improved Wavelet Auto-Encoder (WAE) was constructed, and the compressed data were fed directly into the network for layer-by-layer automatic feature extraction. Finally, the multi-layer features learned by the DWNN were used to train multiple Deep Support Vector Machine (DSVM) and Deep Forest (DF) classifiers, and their recognition results were integrated. In this method the DWNN automatically mines hidden fault information from the compressed data, is little affected by prior knowledge and subjective factors, and avoids a complicated manual feature extraction process. The experimental results show that the CS-DWNN method achieves an average diagnostic accuracy of 99.16% and can effectively identify three common faults in train running parts, outperforming traditional methods such as Artificial Neural Network (ANN) and Support Vector Machine (SVM) as well as deep learning models such as Deep Belief Network (DBN) and Stacked Denoising Auto-Encoder (SDAE).
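
    The compressed sampling step is a single Gaussian random projection; a sketch (numpy; the 25% sampling ratio is an illustrative assumption):

```python
import numpy as np

def compress(signal, ratio=0.25, seed=0):
    """signal: length-N vibration frame. Projects it onto M = ratio * N
    Gaussian random measurements, y = Phi @ x; the compressed vector y is
    what the deep wavelet network consumes for feature learning."""
    rng = np.random.default_rng(seed)
    n = signal.shape[0]
    m = int(ratio * n)
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    return phi @ signal
```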
