
Table of Contents

    10 January 2020, Volume 40 Issue 1
    Artificial intelligence
    Review of speech segmentation and endpoint detection
    YANG Jian, LI Zhenpeng, SU Peng
    2020, 40(1):  1-7.  DOI: 10.11772/j.issn.1001-9081.2019061071
    Speech segmentation is an indispensable basic task in speech recognition and speech synthesis, and its quality has a great impact on downstream systems. Although manual segmentation and labeling is highly accurate, it is time-consuming and laborious and requires domain experts. As a result, automatic speech segmentation has become a research hotspot in speech processing. Firstly, regarding the current progress of automatic speech segmentation, several different classification schemes for speech segmentation methods were explained. The alignment-based methods and boundary-detection-based methods were introduced respectively, and neural network speech segmentation methods, which can be applied within both frameworks, were expounded in detail. Then, some new speech segmentation technologies based on bio-inspired signals, game theory and other methods were introduced, the performance evaluation metrics widely used in the speech segmentation field were given, and these evaluation metrics were compared and analyzed. Finally, the above contents were summarized and important future research directions of speech segmentation were put forward.
    Review of facial action unit detection
    YAN Jingwei, LI Qiang, WANG Chunmao, XIE Di, WANG Baoqing, DAI Jun
    2020, 40(1):  8-15.  DOI: 10.11772/j.issn.1001-9081.2019061043
    Facial action unit detection aims at enabling computers to automatically detect action units from given facial images or videos. Thanks to a great amount of research over the past 20 years, especially the construction of more and more facial action unit databases and the rise of deep-learning-based methods, facial action unit detection technology has developed rapidly. Firstly, the concept of facial action unit and commonly used facial action unit databases were introduced, and the traditional methods, including steps such as pre-processing, feature extraction and classifier learning, were summarized. Then, several important research areas, such as region learning, facial action unit correlation learning and weakly supervised learning, were systematically reviewed and analyzed. Finally, the shortcomings of existing research and potential development trends of facial action unit detection were discussed.
    Text sentiment analysis based on serial hybrid model of bi-directional long short-term memory and convolutional neural network
    ZHAO Hong, WANG Le, WANG Weijie
    2020, 40(1):  16-22.  DOI: 10.11772/j.issn.1001-9081.2019060968
    Aiming at the problems of low accuracy, poor real-time performance and insufficient feature extraction in existing text sentiment analysis methods, a serial hybrid model based on Bi-directional Long Short-Term Memory neural network and Convolutional Neural Network (BiLSTM-CNN) was constructed. Firstly, context information was extracted from the text by a Bi-directional Long Short-Term Memory (BiLSTM) neural network. Then, local semantic features were extracted from the context information by a Convolutional Neural Network (CNN). Finally, the emotional tendency of the text was obtained by Softmax. Compared with single models such as CNN, Long Short-Term Memory (LSTM) and BiLSTM, the proposed model increases the comprehensive evaluation index F1 by 2.02, 1.18 and 0.85 percentage points respectively; compared with hybrid models such as the serial LSTM-CNN model and the parallel feature-fusion BiLSTM-CNN model, it improves F1 by 1.86 and 0.76 percentage points respectively. The experimental results show that the serial BiLSTM-CNN hybrid model has great value in practical applications.
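    As a minimal illustration of the serial architecture described above (a sketch, not the authors' implementation; the vocabulary size and layer widths are assumed values), the BiLSTM-CNN pipeline can be written in Keras as follows:

        # Serial BiLSTM-CNN sentiment classifier (illustrative hyperparameters).
        from tensorflow.keras import layers, models

        model = models.Sequential([
            layers.Embedding(20000, 128),                                   # assumed vocabulary/embedding size
            layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # context features
            layers.Conv1D(128, 3, activation='relu'),                      # local semantic features
            layers.GlobalMaxPooling1D(),
            layers.Dense(2, activation='softmax'),                         # emotional tendency
        ])
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])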
    Image caption generation model with convolutional attention mechanism
    HUANG Youwen, YOU Yadong, ZHAO Peng
    2020, 40(1):  23-27.  DOI: 10.11772/j.issn.1001-9081.2019050943
    An image caption model needs to extract features from an image and then express them in sentences by Natural Language Processing (NLP) techniques. Existing image caption models based on Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) have the problems of low precision and slow training speed when extracting key information from images. To solve these problems, an image caption generation model based on a convolutional attention mechanism and a Long Short-Term Memory (LSTM) network was proposed. Inception-ResNet-V2 was used as the feature extraction network, and full convolution was introduced into the attention mechanism to replace the traditional fully connected operation, reducing the number of model parameters. The image features and text features were effectively fused and sent to the LSTM unit for training in order to generate the semantic information for captioning image content. The model was trained on the MSCOCO dataset and validated with a variety of evaluation metrics (BLEU-1, BLEU-4, METEOR, CIDEr, etc.). The experimental results show that the proposed model can caption image content accurately and performs better than methods based on the traditional attention mechanism on the various evaluation metrics.
    Link prediction method fusing clustering coefficients
    LIU Yuyang, LI Longjie, SHAN Na, CHEN Xiaoyun
    2020, 40(1):  28-35.  DOI: 10.11772/j.issn.1001-9081.2019061008
    Many link prediction algorithms based on network structure information estimate the similarity between nodes, and thus perform link prediction, by using the clustering degree of nodes. However, these algorithms only focus on the clustering coefficient of nodes in the network, and do not consider the influence of the link clustering coefficients between the predicted nodes and their common neighbor nodes on inter-node similarity. Aiming at this problem, a link prediction algorithm combining the node clustering coefficient and the asymmetric link clustering coefficient was proposed. Firstly, the clustering coefficient of each common neighbor node was calculated, and the average link clustering coefficient of the predicted node pair was obtained from the two asymmetric link clustering coefficients of each common neighbor node. Then, a comprehensive measurement index was obtained by fusing these two clustering coefficients based on Dempster-Shafer (DS) theory, and by applying this index to the Intermediate Probability Model (IMP), a new node similarity index, named IMP_DS, was designed. The experimental results on nine network datasets show that the proposed algorithm achieves better performance in terms of Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and Precision in comparison with the Common Neighbor (CN), Adamic-Adar (AA) and Resource Allocation (RA) indexes and the InterMediate Probability model based on Common Neighbor (IMP_CN).
    Residents' travel origin and destination identification method based on naive Bayes classification
    ZHAO Guanghua, LAI Jianhui, CHEN Yanyan, SUN Haodong, ZHANG Ye
    2020, 40(1):  36-42.  DOI: 10.11772/j.issn.1001-9081.2019061076
    Mobile signaling data are characterized by low positioning accuracy, large time intervals and "ping-pong switching" of signals. In order to identify residents' travel Origins and Destinations (OD) from mobile location data, a method based on Naive Bayesian Classification (NBC) was proposed. Firstly, according to the distance between places of residence and work, the travel log data collected by 80 volunteers over one month were classified statistically, and the conditional probability distributions of the moving and staying states were obtained. Then, feature parameters representing the user's moving and staying states were established, including the angular separation and the minimum covering circle diameter. Finally, the conditional probabilities of the moving and staying states were calculated according to NBC theory, and sequences with more than two consecutive moving states were clustered into travel OD. The analysis results on Xiamen mobile location data indicate that the per-capita travel time obtained by the proposed method has a Mean Absolute Percentage Error (MAPE) of 7.79%, showing high precision, and the travel OD analysis results reflect real travel patterns well.
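    The classification step can be sketched as below; the priors and discretized conditional probability tables are hypothetical stand-ins for the distributions trained on the volunteers' travel logs:

        # Naive Bayes decision between 'moving' and 'staying' (hypothetical probability tables).
        PRIOR   = {'moving': 0.3, 'staying': 0.7}
        P_ANGLE = {'moving': [0.1, 0.3, 0.6], 'staying': [0.6, 0.3, 0.1]}  # P(angle bin | state)
        P_DIAM  = {'moving': [0.1, 0.2, 0.7], 'staying': [0.7, 0.2, 0.1]}  # P(diameter bin | state)

        def classify(angle_bin, diameter_bin):
            posterior = {s: PRIOR[s] * P_ANGLE[s][angle_bin] * P_DIAM[s][diameter_bin] for s in PRIOR}
            return max(posterior, key=posterior.get)

        print(classify(angle_bin=2, diameter_bin=2))  # spread-out positions -> 'moving'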
    Flexible job-shop green scheduling algorithm considering machine tool depreciation
    WANG Jianhua, PAN Yujie, SUN Rui
    2020, 40(1):  43-49.  DOI: 10.11772/j.issn.1001-9081.2019061058
    For the Flexible Job-shop Scheduling Problem (FJSP) with machine flexibility and machine tool depreciation, in order to reduce the energy consumption of the production process, a mathematical model minimizing the weighted sum of the maximum completion time and the total energy consumption was established, and an Improved Genetic Algorithm (IGA) was proposed. Firstly, to counter the strong randomness of the Genetic Algorithm (GA), the balanced-dispersion principle of orthogonal experimental design was introduced to generate the initial population, improving the global search performance. Secondly, in order to avoid gene conflicts after the crossover operation, three-dimensional real-number encoding and two-individual arithmetic crossover were used for chromosome crossover, which reduced the conflict detection steps and improved the solving speed. Finally, a dynamic step length was adopted for genetic mutation in the mutation stage, which guaranteed local search ability across the global range. Tests on the eight Brandimarte benchmark instances and comparisons with three improved heuristic algorithms from recent years show that the proposed algorithm is effective and feasible for solving the FJSP.
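    In the usual notation (the paper's own symbols are not reproduced in the abstract), the scheduling objective described above is the weighted sum

        \[ \min F = w_1 C_{\max} + w_2 E_{\mathrm{total}}, \qquad w_1 + w_2 = 1, \]

    where \(C_{\max}\) is the maximum completion time (makespan) and \(E_{\mathrm{total}}\) the total energy consumption of the schedule.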
    Collaborative routing method for operation vehicle in inland port based on game theory
    FAN Jiajia, LIU Hongxing, LI Yonghua, YANG Lijin
    2020, 40(1):  50-55.  DOI: 10.11772/j.issn.1001-9081.2019060988
    Focusing on traffic congestion in inland ports with vehicle-based transportation and large throughput, a collaborative routing method for operation vehicles in inland ports based on game theory was proposed. Firstly, the interaction between operation vehicles that simultaneously request route planning was modeled as a game with incomplete information, and the idea of Satisfaction Equilibrium (SE) was applied to analyze the proposed game: every vehicle was assumed to have an expected utility for its routing result, and when all vehicles were satisfied the game reached an equilibrium. Then, a collaborative routing algorithm was proposed, in which every vehicle first selected its route according to a greedy strategy, after which all vehicles were divided into groups by a grouping rule and the vehicles in each group performed adaptive learning based on historical routing results to complete the game. The experimental results show that, when the number of simultaneously working vehicles in the port is 286, the collaborative routing algorithm reduces the average driving time of vehicles by up to 50.8% and 16.3% and improves the system profit by up to 51.7% and 24.5% compared with the Dijkstra algorithm and the Self-Adaptive Learning Algorithm (SALA) respectively. The proposed algorithm can effectively reduce the average driving time of vehicles, improve system profit, and is well suited to the routing problem of vehicles in inland ports.
    Crowd counting method based on pixel-level attention mechanism
    CHEN Meiyun, WANG Bisheng, CAO Guo, LIANG Yongbo
    2020, 40(1):  56-61.  DOI: 10.11772/j.issn.1001-9081.2019050920
    In order to address the uneven distribution of crowds and the massive number of network learning parameters, an accurate high-density crowd counting method composed of a Pixel-level Attention Mechanism (PAM) and an improved single-column crowd density estimation network was proposed. First of all, the PAM was used to generate high-quality local crowd density maps by classifying crowd images at the pixel level: a Fully Convolutional Network (FCN) was used to generate the density mask of each image, dividing the pixels into different density levels. Then, using the generated density masks as labels, the single-column crowd density estimation network was used to learn more representative features with fewer parameters. Among prior methods, the Network for Congested Scene Recognition (CSRNet) had the smallest counting error on Part_B of the Shanghaitech dataset, the UCF_CC_50 dataset and the WorldExpo'10 dataset. Compared with CSRNet, the proposed method reduces the Mean Absolute Error (MAE) and Mean Squared Error (MSE) on Part_B of the Shanghaitech dataset by 8.49% and 4.37% respectively, reduces the MAE and MSE on the UCF_CC_50 dataset by 58.38% and 51.98% respectively, a significant improvement, and reduces the overall average MAE on the WorldExpo'10 dataset by 1.16%. The experimental results show that, when counting unevenly distributed high-density crowds, combining the PAM with the single-column crowd density estimation network can effectively improve the accuracy and training efficiency of high-density crowd counting.
    Construction of brain functional hypernetwork and feature fusion analysis based on sparse group Lasso method
    LI Yao, ZHAO Yunpeng, LI Xinyun, LIU Zhifen, CHEN Junjie, GUO Hao
    2020, 40(1):  62-70.  DOI: 10.11772/j.issn.1001-9081.2019061026
    Functional hyper-networks are widely used in brain disease diagnosis and classification studies. However, existing research on hyper-network construction either cannot interpret the grouping effect or only considers group-level information of brain regions, so the constructed hyper-network may lose some useful connections or contain false information. Therefore, considering the group structure of brain regions, the sparse group Lasso (Least absolute shrinkage and selection operator) (sgLasso) method was introduced to further improve hyper-network construction. Firstly, the hyper-network was constructed by the sgLasso method. Then, two groups of attribute indicators specific to hyper-networks were introduced for feature extraction and feature selection: the clustering coefficient based on a single node and the clustering coefficient based on a pair of nodes. Finally, the two groups of features with significant differences obtained after feature selection were fed into multi-kernel learning for feature fusion and classification. The experimental results show that the proposed method achieves 87.88% classification accuracy with multi-feature fusion, which indicates that, to improve brain functional hyper-network construction, group information should be considered, but the whole-group information should not be enforced, and the group structure can be appropriately expanded.
    Optimized convolutional neural network method for classification of pneumonia images
    DENG Qi, LEI Yinjie, TIAN Feng
    2020, 40(1):  71-76.  DOI: 10.11772/j.issn.1001-9081.2019061039
    Currently, Convolutional Neural Networks (CNN) are applied in the field of pneumonia classification. Aiming at the difficulty of improving pneumonia recognition accuracy with shallow, structurally simple convolutional networks, a deep learning method was adopted; and concerning the problem that deep learning methods often consume large amounts of system resources, making deep convolutional networks difficult to deploy at the user end, a classification method based on an optimized convolutional neural network was proposed. Firstly, according to the features of pneumonia images, the AlexNet and Inception V3 models with good image classification performance were selected. Then, the characteristics of medical images were used to re-train the deeper and structurally more complex Inception V3 model. Finally, through knowledge distillation, the trained "knowledge" (effective information) was transferred into the AlexNet model, so as to reduce system resource occupancy and improve accuracy. The experimental data show that, after knowledge distillation, the AlexNet model has its accuracy, specificity and sensitivity improved by 4.1, 7.45 and 1.97 percentage points respectively, and its Graphics Processing Unit (GPU) occupancy reduced by 51 percentage points compared with the Inception V3 model.
    Benign and malignant diagnosis of thyroid nodules based on different ultrasound imaging
    WU Kuan, QIN Pinle, CHAI Rui, ZENG Jianchao
    2020, 40(1):  77-82.  DOI: 10.11772/j.issn.1001-9081.2019061113
    In order to diagnose benign and malignant thyroid nodules in ultrasound images more accurately and avoid unnecessary puncture or biopsy surgery, a method combining features of conventional ultrasound imaging and ultrasound elastography based on Convolutional Neural Network (CNN) was proposed to improve the accuracy of benign-malignant classification of thyroid nodules. Firstly, the convolutional network model was pre-trained on large-scale natural image datasets, and the feature parameters were transferred to the ultrasound image domain by transfer learning to generate deep features and handle small samples. Then, the deep feature maps of conventional ultrasound imaging and ultrasound elastography were combined to form a hybrid feature space. Finally, the classification task was completed in the hybrid feature space, forming an end-to-end convolutional network model. In experiments on 1156 images, the proposed method achieved an accuracy of 0.924, higher than that of single-data-source methods. The experimental results show that the edge and texture features of images are shared by the shallow convolutions while the abstract features of the high-level convolutions are related to the specific classification task, and that transfer learning can solve the problem of insufficient data samples. Moreover, elastic ultrasound images objectively quantify the lesion hardness of thyroid nodules, and combined with the texture and contour features of conventional ultrasound images, the mixed features describe the differences between lesions more fully. Therefore, this method can classify thyroid nodules effectively and accurately, reduce the pain of patients, and provide doctors with more accurate auxiliary diagnostic information.
    Data science and technology
    Under-sampling method based on sample density peaks for imbalanced data
    SU Junning, YE Dongyi
    2020, 40(1):  83-89.  DOI: 10.11772/j.issn.1001-9081.2019060962
    Imbalanced data classification is an important problem in data mining and machine learning, and the way data are re-sampled is crucial to classification accuracy. Concerning the problem that existing under-sampling methods for imbalanced data cannot keep the distribution of the sampled data consistent with that of the original data, an under-sampling method based on sample density peaks was proposed. Firstly, the density peak clustering algorithm was applied to cluster the majority-class samples and to estimate the central and boundary regions of the resulting clusters, so that the weight of each sample was determined by the local density and the density peak distribution of the cluster region containing the sample. Then, the majority-class samples were under-sampled according to these weights, so that the number of extracted majority-class samples gradually decreased from the central region to the boundary region of each cluster; in this way, the extracted samples reflect the original distribution well while suppressing noise. Finally, a balanced dataset was constructed from the sampled majority samples and all minority samples for classifier training. The experimental results on multiple datasets show that the proposed method improves F1-measure and G-mean compared with existing methods such as RBBag (Roughly Balanced Bagging), uNBBag (under-sampling NeighBorhood Bagging) and KAcBag (K-means AdaCost bagging), proving that it is an effective and feasible sampling method.
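    A condensed sketch of the weighted sampling step follows; the Gaussian-kernel density estimate stands in for the density-peak clustering quantities used in the paper, and the bandwidth is an assumed parameter:

        # Weighted under-sampling of the majority class (illustrative weighting scheme).
        import numpy as np

        def undersample(X_maj, n_keep, bandwidth=1.0, rng=np.random.default_rng(0)):
            d2 = ((X_maj[:, None, :] - X_maj[None, :, :]) ** 2).sum(-1)
            density = np.exp(-d2 / bandwidth ** 2).sum(axis=1)   # local density per sample
            w = density / density.sum()                          # central (denser) samples kept more often
            idx = rng.choice(len(X_maj), size=n_keep, replace=False, p=w)
            return X_maj[idx]

        X = np.random.default_rng(1).normal(size=(200, 2))
        print(undersample(X, 50).shape)  # (50, 2)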
    Discovery of functional dependencies in university data based on affinity propagation clustering and TANE algorithms
    HUANG Yongxin, TANG Xuefei
    2020, 40(1):  90-95.  DOI: 10.11772/j.issn.1001-9081.2019061050
    In the actual data quality inspection process of universities, datasets contain missing values and the functional dependencies discovered are few and inaccurate. In view of this, a functional dependency discovery method for university data combining Affinity Propagation (AP) clustering and the TANE algorithm (APTANE) was proposed. Firstly, the Chinese fields in the dataset were parsed row by row and their values were represented by corresponding numerical values. Then, the AP clustering algorithm was used to fill the missing values in the dataset. Finally, the TANE algorithm was used to automatically discover the functional dependencies satisfying the non-trivial and minimal requirements from the processed dataset. The experimental results show that, after repairing a real university dataset with the AP clustering algorithm, the number of functional dependencies discovered increases to 80 compared with directly applying the automatic discovery algorithm. The functional dependencies found after filling the missing values represent the relationships between fields more accurately, reducing the workload of domain experts and improving the quality of data held by universities.
    Influence maximization algorithm based on reverse PageRank
    ZHANG Xianli, TANG Jianxin, CAO Laicheng
    2020, 40(1):  96-102.  DOI: 10.11772/j.issn.1001-9081.2019061066
    Concerning the problem that existing influence maximization algorithms for social networks have difficulty simultaneously meeting the requirements of propagation range, time cost and memory usage on large-scale networks, a heuristic algorithm of Mixed PageRank and Degree (MPRD) was proposed. Firstly, the idea of reverse PageRank was introduced to evaluate node influence based on PageRank. Secondly, a mixed index based on reverse PageRank and degree centrality was designed to evaluate the final influence of nodes. Finally, the seed node set was selected by a similarity-based method that filters out nodes with seriously overlapping influence. Experiments were conducted on six datasets and two propagation models. The experimental results show that the proposed MPRD is superior to existing heuristic algorithms in terms of propagation range, runs four to five orders of magnitude faster than the greedy algorithm, and needs less memory than the reverse-sampling-based Influence Maximization based on Martingale (IMM) algorithm. The proposed MPRD achieves a balance among propagation range, time cost and memory usage when solving the influence maximization problem on large-scale networks.
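    The mixed index can be sketched with networkx by running PageRank on the reversed graph and blending it with degree centrality; the blending weight alpha is a hypothetical parameter, and the similarity-based filtering of overlapping seeds is omitted:

        # Mixed reverse-PageRank / degree influence score (illustrative blend).
        import networkx as nx

        def mprd_scores(G, alpha=0.5):
            rpr = nx.pagerank(G.reverse())       # PageRank computed on the reversed edges
            deg = nx.degree_centrality(G)
            return {v: alpha * rpr[v] + (1 - alpha) * deg[v] for v in G}

        G = nx.gnp_random_graph(100, 0.05, directed=True, seed=42)
        scores = mprd_scores(G)
        seeds = sorted(scores, key=scores.get, reverse=True)[:10]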
    Cyber security
    Survey on application of binary reverse analysis in detecting software supply chain pollution
    WU Zhenhua, ZHANG Chao, SUN He, YAN Xuexiong
    2020, 40(1):  103-115.  DOI: 10.11772/j.issn.1001-9081.2019071245
    In recent years, Software Supply Chain (SSC) security incidents have occurred frequently, bringing great challenges to software security research. Since millions of new software packages are released every day, it is essential to detect SSC pollution automatically. The problem of SSC pollution was first analyzed and discussed. Then, focusing on the requirements of pollution detection in the downstream of the SSC, automatic program reverse analysis methods and their applications in SSC pollution detection were introduced. Finally, the shortcomings of and challenges faced by existing technologies in solving the SSC pollution problem were summarized and analyzed, and some research directions worth studying to overcome these challenges were pointed out.
    Design and implementation of intrusion detection model for software defined network architecture
    CHI Yaping, MO Chongwei, YANG Yintan, CHEN Chunxia
    2020, 40(1):  116-122.  DOI: 10.11772/j.issn.1001-9081.2019061125
    Concerning the problem that traditional intrusion detection methods cannot detect attacks specific to the Software Defined Network (SDN) architecture, an intrusion detection model based on Convolutional Neural Network (CNN) was proposed. Firstly, a feature extraction method was designed based on SDN flow table entries, and SDN-specific attack samples were collected to form an attack flow table dataset. Then, the CNN was used for training and detection; and focusing on the low recognition rate caused by the small sample size of SDN attacks, a probability-based reinforcement learning method was proposed. The experimental results show that the proposed intrusion detection model can effectively detect attacks specific to the SDN architecture with high accuracy, and the proposed reinforcement learning method can effectively improve the recognition rate of small-probability attacks.
    Analysis of attack events based on multi-source alerts
    WANG Chunying, ZHANG Xun, ZHAO Jinxiong, YUAN Hui, LI Fangjun, ZHAO Bo, ZHU Xiaoqin, YANG Fan, LYU Shichao
    2020, 40(1):  123-128.  DOI: 10.11772/j.issn.1001-9081.2019071229
    In order to overcome the difficulty of discovering multi-stage attacks from multi-source alerts, an algorithm for mining attack sequence patterns was proposed. The multi-source alerts were normalized into a unified format by matching them with regular expressions. Redundant alert information was compressed, and the alerts of the same stage were clustered according to an association rule set trained from strong association rules, efficiently removing redundant alerts and reducing the number of alerts. Then, the clustered alerts were divided into a candidate attack event dataset by a sliding window, and the attack pattern mining algorithm PrefixSpan was used to find the attack sequence patterns of multi-stage attack events. The experimental results show that the proposed algorithm can analyze alert correlation accurately and efficiently and extract the attack steps of attack events without expert knowledge. Compared with the traditional PrefixSpan algorithm, it increases attack pattern mining efficiency by 48.05%.
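    A toy illustration of the sliding-window step: candidate attack sequences are cut from the clustered alert stream, and frequent ordered pairs stand in for the patterns PrefixSpan would mine (the window size and alert labels are invented):

        # Sliding-window candidate sequences from a clustered alert stream (toy data).
        from collections import Counter

        alerts = ['scan', 'scan', 'exploit', 'privesc', 'exfil', 'scan', 'exploit', 'exfil']
        W = 4  # assumed window size
        windows = [tuple(alerts[i:i + W]) for i in range(len(alerts) - W + 1)]

        # Count ordered pairs inside each window (a stand-in for PrefixSpan's frequent patterns).
        pairs = Counter((w[i], w[j]) for w in windows
                        for i in range(len(w)) for j in range(i + 1, len(w)))
        print(pairs.most_common(3))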
    Virus propagation suppression model in mobile wireless sensor networks
    WU Sanzhu, LI Peng, WU Sanbin
    2020, 40(1):  129-135.  DOI: 10.11772/j.issn.1001-9081.2019040736
    To better control virus propagation in mobile wireless sensor networks, an improved virus propagation dynamics model was established according to infectious disease theory. Dead nodes were introduced into the network, and the communication radius and the moving and staying states of virus-infected nodes during propagation were also incorporated. Differential equations were then established for the model, the existence and stability of its equilibrium points were analyzed, and the conditions for controlling and extinguishing virus propagation were obtained. Furthermore, the effects of the following factors on virus propagation in mobile wireless sensor networks were analyzed: node communication radius, moving velocity, node density, the immunization rate of susceptible nodes, the virus detection rate of infected nodes, and node mortality. Finally, the simulation results show that adjusting the parameters of the model can effectively suppress virus propagation in mobile wireless sensor networks.
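    The abstract gives no formulas; purely as a hedged illustration, a compartment system of the kind described (susceptible S, infected I, recovered/immune R, dead D, with an infection rate \(\beta(r)\) that grows with the communication radius r and node density, immunization rate \(\omega\), virus detection rate \(\gamma\) and node mortality \(\mu\)) might take the form

        \[ \frac{dS}{dt} = -\beta(r)SI - \omega S, \qquad \frac{dI}{dt} = \beta(r)SI - \gamma I - \mu I, \qquad \frac{dR}{dt} = \omega S + \gamma I, \qquad \frac{dD}{dt} = \mu I. \]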
    Efficient genetic comparison scheme for user privacy protection
    LI Gongli, LI Yu, ZHANG En, YIN Tianyu
    2020, 40(1):  136-142.  DOI: 10.11772/j.issn.1001-9081.2019061080
    Concerning the problem that current genetic comparison protocols generally require a trusted third party, which may result in the leakage of a wide range of private data, a genetic comparison scheme based on linear scanning was proposed. The gene sequences of the two parties were first encoded based on Garbled Circuit (GC); then the genome database was scanned linearly and the garbled circuit was used to compare the user's gene sequence with every gene sequence in the database. This scheme achieves genetic comparison while protecting the privacy of both parties; however, it needs to scan the whole database, with time complexity O(n), and is inefficient when the genome database is large. In order to improve efficiency, a genetic comparison scheme based on Oblivious Random Access Memory (ORAM) was further proposed, in which genetic data are stored in an ORAM and only the data blocks on the target path are picked out for comparison by the garbled circuit. The number of comparisons in this scheme is sub-linear in the size of the database, with time complexity O(log n). The experimental results show that the ORAM-based genetic comparison scheme reduces the number of comparisons from O(n) to O(log n) while preserving privacy, significantly decreasing the time complexity of the comparison operation. It can be used for disease diagnosis, especially with large genome databases.
    Performance analysis of wireless key generation with multi-bit quantization under imperfect channel estimation condition
    DING Ning, GUAN Xinrong, YANG Weiwei, LI Tongkai, WANG Jianshe
    2020, 40(1):  143-147.  DOI: 10.11772/j.issn.1001-9081.2019061004
    Since channel estimation error seriously affects the key consistency of the two communicating parties in wireless key generation, a multi-bit quantization wireless key generation scheme under imperfect channel estimation was proposed. Firstly, in order to investigate the influence of imperfect channel estimation on wireless key generation, a channel estimation error model was established. Then, a multi-bit key quantizer with guard bands was designed, and the wireless key performance could be improved by optimizing the quantization parameters. Closed-form expressions for the Key Disagreement Rate (KDR) and the Effective Key Generation Rate (EKGR) were derived, revealing the relationships between the pilot signal power, quantization order and guard bands and these two key generation performance indicators. The simulation results show that increasing the transmit pilot power can effectively reduce the KDR; increasing the quantization order improves the key generation rate but also increases the KDR; and increasing the quantization order while choosing an appropriate guard band size can effectively reduce the KDR.
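    A minimal sketch of a multi-bit quantizer with guard bands: channel estimates that fall inside a guard interval around any quantization threshold are discarded rather than mapped to key bits. The equiprobable thresholds and the guard width are assumptions:

        # Multi-bit key quantization with guard bands (illustrative parameters).
        import numpy as np

        def quantize(samples, bits=2, guard=0.1):
            levels = 2 ** bits
            thresholds = np.quantile(samples, np.linspace(0, 1, levels + 1)[1:-1])
            key = []
            for x in samples:
                if any(abs(x - t) < guard for t in thresholds):  # inside a guard band: drop sample
                    continue
                region = int(np.searchsorted(thresholds, x))     # quantization region index
                key.append(format(region, f'0{bits}b'))          # 'bits' key bits per kept sample
            return ''.join(key)

        h = np.random.default_rng(0).normal(size=1000)           # stand-in channel estimates
        print(quantize(h)[:16])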
    Adaptive hierarchical searchable encryption scheme based on learning with errors
    ZHANG En, HOU Yingying, LI Gongli, LI Huimin, LI Yu
    2020, 40(1):  148-156.  DOI: 10.11772/j.issn.1001-9081.2019060961
    To solve the problem that existing hierarchical searchable encryption schemes cannot effectively resist quantum attacks or flexibly add and delete levels, a scheme of Adaptive Hierarchical Searchable Encryption based on learning with errors (AHSE) was proposed. Firstly, the scheme was made to effectively resist quantum attacks by utilizing the multidimensional characteristics of lattices and building on the Learning With Errors (LWE) problem on lattices. Secondly, condition keys were constructed to divide users into distinct levels, so that a user can only search the files at his own level, achieving effective hierarchical access control. At the same time, a segmented index structure with good adaptability was designed, whose levels can be added and deleted flexibly, meeting access control requirements of different granularities. Moreover, all users can search by sharing a single segmented index table, which effectively improves search efficiency. Finally, theoretical analysis shows that the update, deletion and level changes of users and files in this scheme are simple and easy to perform, making it suitable for dynamic encrypted databases, cloud medical systems and other dynamic environments.
    Blockchain-based electronic health record sharing scheme
    LUO Wenjun, WEN Shenglian, CHENG Yu
    2020, 40(1):  157-161.  DOI: 10.11772/j.issn.1001-9081.2019060994
    To solve problems such as the difficulty of data sharing between medical institutions and the risk of data privacy disclosure during sharing, a blockchain-based Electronic Health Record (EHR) sharing scheme was proposed. Firstly, based on the blockchain characteristics of tamper-resistance, decentralization and distributed storage, a blockchain-based EHR data sharing model was designed: the blockchain network and a distributed database jointly store the encrypted EHR and the related access control policies, preventing modification and leakage of EHR data. Secondly, Distributed Key Generation (DKG) and Identity-Based Proxy Re-Encryption (IBPRE) were combined to design a secure data sharing protocol, in which the Delegated Proof of Stake (DPOS) algorithm selects the proxy node that re-encrypts the EHR, achieving data sharing between a single pair of users. Security analyses show that the proposed scheme can resist identity forgery and replay attacks. Simulation experiments and comparative analyses show that the DPOS algorithm is more efficient than the Proof of Work (POW) algorithm and slightly less efficient than the Practical Byzantine Fault Tolerance (PBFT) algorithm, while the proposed scheme is more decentralized and consumes less computing power.
    Classification of malicious code variants based on VGGNet
    WANG Bo, CAI Honghao, SU Yang
    2020, 40(1):  162-167.  DOI: 10.11772/j.issn.1001-9081.2019050953
    Aiming at the phenomenon that code reuse is common within the same malicious code family, a malicious sample classification method using code reuse features was proposed. Firstly, the binary sequence of a file was split into the values of the RGB three-color channels, converting malicious samples into color images. Then, these images were used to train a malicious sample classification model based on the VGG convolutional neural network. Finally, during model training, the random dropout algorithm was utilized to counter overfitting, gradient vanishing and high computational overhead. The method achieves 96.16% average classification accuracy on the 9342 samples from 25 families in the Malimg dataset and can effectively classify malicious code samples. Experimental results show that, compared with grayscale images, converting binary files into color images emphasizes image features more significantly, especially for files with repetitive short data segments in their binary sequences, and that neural networks trained on a set with more distinctive features yield classification models with better performance. Since the preprocessing is simple and classification responds quickly, the method is suitable for scenarios with high real-time requirements such as rapid classification of large-scale malicious samples.
    Advanced computing
    Spark framework based optimized large-scale spectral clustering parallel algorithm
    CUI Yixin, CHEN Xiaodong
    2020, 40(1):  168-172.  DOI: 10.11772/j.issn.1001-9081.2019061061
    To overcome performance bottlenecks of spectral clustering on large-scale datasets, such as time-consuming computation and the inability to complete clustering, a spectral clustering parallelization algorithm suitable for large-scale datasets was proposed based on Spark technology. Firstly, the similarity matrix was constructed through one-way loop iteration to avoid double counting. Then, the construction and normalization of the Laplacian matrix were optimized by position transformation and scalar multiplication replacement in order to reduce storage requirements. Finally, approximate eigenvector calculation was used to further reduce the computational cost. The experimental results on different test datasets show that, as the size of the test dataset increases, the running time of the one-way loop iteration and the approximate eigenvector calculation in the proposed algorithm grows linearly and slowly, the clustering results of approximate eigenvector calculation are similar to those of exact eigenvector calculation, and the algorithm shows good scalability on large-scale datasets. While achieving good spectral clustering performance, the improved algorithm increases operation efficiency and effectively alleviates the high computational cost and the inability to cluster large-scale datasets.
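    The one-way loop idea, computing each pairwise similarity only once over the upper triangle and mirroring it, can be sketched in plain Python as follows (distributing the loop over Spark partitions and the approximate eigenvector step are omitted):

        # Upper-triangular similarity construction: each pair is computed once.
        import numpy as np

        def similarity_matrix(X, sigma=1.0):
            n = len(X)
            S = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):          # one-way loop: only j > i
                    s = np.exp(-np.sum((X[i] - X[j]) ** 2) / (2 * sigma ** 2))
                    S[i, j] = S[j, i] = s          # mirror instead of recomputing
            return S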
    Design of distributed computing framework for foreign exchange market monitoring
    CHENG Wenliang, WANG Zhihong, ZHOU Yu, GUO Yi, ZHAO Junfeng
    2020, 40(1):  173-180.  DOI: 10.11772/j.issn.1001-9081.2019061002
    In order to solve the index calculation problems of high complexity, strong completeness requirements and low efficiency in the field of financial foreign exchange market monitoring, a novel distributed computing framework for foreign exchange market monitoring based on the Spark big data architecture was proposed. Firstly, the business characteristics and existing technical frameworks of foreign exchange market monitoring were analyzed and summarized. Secondly, the foreign exchange business features of single-market multi-indicator and multi-market multi-indicator calculation were considered. Finally, based on Spark's Directed Acyclic Graph (DAG) job scheduling mechanism and the resource scheduling pool isolation mechanism of YARN (Yet Another Resource Negotiator), the Market-level DAG (M-DAG) model and a market-level resource allocation strategy named M-YARN (Market-level YARN) were proposed. The experimental results show that the proposed distributed computing framework for foreign exchange market monitoring improves performance by more than 80% compared with the traditional technical framework, and can effectively guarantee the completeness, accuracy and timeliness of foreign exchange market monitoring indicator calculation in a big data context.
    Network and communications
    Bandwidth resource prediction and management of Web applications hosted on cloud
    SUN Tianqi, HU Jianpeng, HUANG Juan, FAN Ying
    2020, 40(1):  181-187.  DOI: 10.11772/j.issn.1001-9081.2019050903
    To address the problem of bandwidth resource management for Web applications, a prediction method for the bandwidth requirement and Quality of Service (QoS) of Web applications based on network simulation was proposed. A modeling framework and formal specification were presented for Web services, a simplified parallel workload model was adopted, the model parameters were extracted from Web application access logs by automated data mining, and the complex network transmission process was simulated with a network simulation tool. As a result, the bandwidth requirement and changes in QoS can be predicted under different workload intensities. The classic TPC-W benchmark system was used to evaluate the accuracy of the prediction results. Theoretical analysis and simulation results show that, compared with traditional linear regression prediction, network simulation can stably simulate the real system, with predicted average relative errors of 4.6% and 3.3% for the total request number and total byte number respectively. Finally, different bandwidth scaling schemes were simulated and evaluated on the TPC-W benchmark system; the results can provide decision support for the resource management of Web applications.
    Node classification method in social network based on graph encoder network
    HAO Zhifeng, KE Yanrong, LI Shuo, CAI Ruichu, WEN Wen, WANG Lijuan
    2020, 40(1):  188-195.  DOI: 10.11772/j.issn.1001-9081.2019061116
    Aiming at merging node attributes and network structure information to classify social network nodes, a social network node classification algorithm based on a graph encoder network was proposed. Firstly, the information of each node was propagated to its neighbors. Secondly, for each node, the possible implicit relationships between itself and its neighbor nodes were mined through a neural network and merged together. Finally, higher-level features of each node were extracted based on the node's own information and its relationships with neighboring nodes, used as the representation of the node, and the node was classified according to this representation. On the Weibo dataset, the proposed algorithm improves classification accuracy by more than 8% compared with the DeepWalk model, the logistic regression algorithm and the recently proposed graph convolutional network; on the DBLP dataset, its classification accuracy is increased by 4.83% compared with the multilayer perceptron and by 0.91% compared with the graph convolutional network.
    Routing protocol optimized for data transmission delay in wireless sensor networks
    REN Xiuli, CHEN Yang
    2020, 40(1):  196-201.  DOI: 10.11772/j.issn.1001-9081.2019060987
    Concerning serious packet loss and high end-to-end delay in wireless sensor networks, a Routing Protocol Optimized for Data Transmission Delay (RPODTD) was proposed. Firstly, channel detection conditions were classified according to the data transmission result, and the effective detection ratio and transmission efficiency were introduced as evaluation indexes of nodes. Then, the queuing delay of a data packet was estimated as the difference between the actual delay and the theoretical delay. Finally, maximum and minimum queuing delay thresholds were given to decide whether to change the transmission path according to the interval in which the queuing delay falls. In simulation experiments on OMNeT++, compared with the link quality and delay based Composite Load Balancing routing protocol (ComLoB) and the Congestion Avoidance multipath routing protocol based on the Routing Protocol for Low-power and lossy networks (CA-RPL), RPODTD reduces the average end-to-end delay by 78.87% and 51.81% respectively, the packet loss rate by 40.71% and 68.43% respectively, and the node mortality rate by 25.42% and 44.62% respectively. The simulation results show that the proposed RPODTD can effectively reduce the end-to-end delay, decrease the packet loss rate and extend the network life cycle.
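    The path-switching rule can be sketched as a threshold test on the estimated queuing delay (actual minus theoretical delay); the threshold values below are hypothetical:

        # Queuing-delay-based path switching decision (assumed thresholds, in seconds).
        T_MIN, T_MAX = 0.02, 0.10

        def path_action(actual_delay, theoretical_delay):
            queuing = actual_delay - theoretical_delay
            if queuing <= T_MIN:
                return 'keep'    # path is healthy
            if queuing >= T_MAX:
                return 'switch'  # congested: change transmission path
            return 'observe'     # in between: keep monitoring detection metrics

        print(path_action(0.15, 0.03))  # -> 'switch'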
    Efficient communication receiver design for Internet of things environment
    ZHOU Zhen, YUAN Zhengdao
    2020, 40(1):  202-206.  DOI: 10.11772/j.issn.1001-9081.2019060989
    Internet of Things (IoT) communication systems are characterized by a small number of active users and short data frames, while the pilots and user identification codes required by channel estimation and multi-user detection greatly reduce the communication efficiency and response speed of IoT systems. To solve these problems, a blind channel estimation and multi-user detection algorithm based on Non-Orthogonal Multiple Access (NOMA) was proposed. Firstly, the spreading matrix of the Code Division Multiple Access (CDMA) system was used to allocate carriers to each user, and the constellation rotation problem caused by blind estimation was solved by differential coding. Secondly, according to the sparsity of the carriers allocated to users, the Bernoulli-Gaussian (B-G) distribution was introduced as the prior distribution, the hidden Markov property between the variables was used for factor decomposition and modeling, and multi-user identification was carried out based on the sparse features of user data. Finally, the above model was solved by a message passing algorithm to handle the multi-user interference caused by NOMA, yielding a joint channel estimation and detection receiver algorithm for the IoT environment. The simulation results show that, compared with the Block Sparse Single Measurement Vector (BS-SMV) algorithm and the Block Sparse Adaptive Subspace Pursuit (BSASP) algorithm, the proposed algorithm can achieve a performance gain of about 1 dB without increasing complexity.
    Unambiguous tracking method based on combined correlation function for CosBOC (10, 5) signal
    YUAN Zhixin, ZHOU Yanling
    2020, 40(1):  207-211.  DOI: 10.11772/j.issn.1001-9081.2019060993
    Binary Offset Carrier (BOC) modulation signals are a new type of ingeniously designed satellite navigation signal, and eliminating their tracking ambiguity is a prerequisite for exploiting their potential. Concerning the problem that the autocorrelation function of the Cosine-phased BOC (CosBOC) signal is relatively complex and difficult to express concisely and uniformly, as well as the lack of research on its tracking ambiguity, an unambiguous tracking method based on a combined correlation function was proposed for the CosBOC (10,5) signal in the E6 band of the Galileo system. Firstly, a local reference signal was designed as a linear combination of as few locally replicated signals as possible and their shifted, weighted variants; then, in the code tracking loop, the reference signal was correlated with the received signal to obtain a correlation function without side peaks, eliminating the tracking ambiguity of the code discriminator. Finally, the code tracking error of the method was analyzed. The experimental results show that, compared with the Binary Phase Shift Keying Like (BPSK-Like) method, the Bump Jump method and the Pseudo Correlation Function (PCF) method, the proposed method has a simple tracking loop structure, suppresses tracking ambiguity completely, and has better overall code tracking performance.
    Computer software technology
    Test case generation method for Web applications based on page object
    WANG Shuyan, ZHENG Jiani, SUN Jiaze
    2020, 40(1):  212-217.  DOI: 10.11772/j.issn.1001-9081.2019060969
    To reduce the navigation graph size and the number of redundant test paths in Web application test case generation, a Web application test case generation method based on the Selenium page object design pattern and graph traversal algorithms was proposed. Firstly, by classifying the original page objects, a page object navigation graph was created with navigation page object classes as nodes and navigation methods as transition edges. Secondly, based on the shortest-path algorithm for graphs, a Page Object Graph Algorithm (POGA) was proposed to traverse the navigation graph and generate the test path set. Finally, the test paths were extracted, Faker was used to generate simulated data, and directly executable test cases were produced. The experimental results show that the proposed method reduces the navigation graph size by about 89% compared with graphs generated by crawling Web applications, reduces the number of redundant and infeasible paths compared with the state transition method for generating Web application test cases, and further improves the reuse rate of page objects and the maintainability of test cases.
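    The traversal step can be sketched with networkx: navigation page object classes are nodes, navigation methods are edges, and shortest paths from the entry page yield the test path set. The page and method names are invented for illustration:

        # Shortest-path test paths over a page object navigation graph (hypothetical pages).
        import networkx as nx

        G = nx.DiGraph()
        G.add_edges_from([('LoginPage', 'HomePage', {'method': 'login'}),
                          ('HomePage', 'SearchPage', {'method': 'open_search'}),
                          ('HomePage', 'ProfilePage', {'method': 'open_profile'}),
                          ('SearchPage', 'ResultPage', {'method': 'submit_query'})])

        for target, path in nx.single_source_shortest_path(G, 'LoginPage').items():
            methods = [G.edges[u, v]['method'] for u, v in zip(path, path[1:])]
            print(target, '->', methods)  # each method list is one executable test path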
    Evaluation model of software quality based on group decision-making and projection measure
    YUE Chuan, ZHANG Jian
    2020, 40(1):  218-226.  DOI: 10.11772/j.issn.1001-9081.2019060984
    Traditional software evaluation methods lack consideration of user requirements. To address this problem, a software quality evaluation model based on user group decision-making was proposed. Firstly, it was found that the existing projection measure is not always reasonable in real-number and interval vector spaces; therefore, a new normalized projection measure was proposed to comprehensively measure the proximity between two vectors or matrices, allowing evaluation matrices with hybrid decision-making information. Secondly, the new projection measure was fused into an improved Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). On this basis, a new group decision-making model with hybrid real-number and interval information was developed, and the pseudocode of the algorithm was provided. Finally, the new model was applied to software quality evaluation, focusing on the requirements of software users and synthesizing the evaluation information of the user group. The effectiveness and feasibility of the proposed method were illustrated by a practical example of comprehensive software quality evaluation and by experimental analysis.
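    For reference, the classical projection of an alternative vector A onto an ideal solution B is

        \[ \mathrm{Prj}_{B}(A) = \lVert A \rVert \cos(A, B) = \frac{A \cdot B}{\lVert B \rVert}; \]

    the paper's contribution is a normalized variant of this measure, whose exact form is not given in the abstract.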
    Virtual reality and multimedia computing
    Binocular vision target positioning method based on coarse-fine stereo matching
    MA Weiping, LI Wenxin, SUN Jinchuan, CAO Pengxia
    2020, 40(1):  227-232.  DOI: 10.11772/j.issn.1001-9081.2019071010
    In order to solve the problem of low positioning accuracy of binocular vision systems, a binocular vision target positioning method based on coarse-fine stereo matching was proposed. A coarse-to-fine matching strategy was adopted: firstly, at the coarse matching stage, the random fern algorithm based on Canny-Harris feature points was used to identify the target in the left and right images, and the center points of the target rectangular regions were extracted to achieve center matching. Then, at the fine matching stage, a binary feature descriptor based on image gradient information was established, and the right center point obtained by center matching was used as an estimate to set a pixel search range, within which the best matching point of the left center point was found. Finally, the matched center points were substituted into the mathematical model of parallel binocular vision to position the target. The experimental results show that the proposed method keeps the positioning error within 7 mm at a distance of 500 mm, with an average relative positioning error of 2.53%. Compared with other methods, the proposed method has the advantages of high positioning accuracy and short running time.
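    Once the center points are matched, depth follows from the standard parallel binocular triangulation model; a minimal sketch with placeholder calibration values:

        # Parallel binocular triangulation: depth from disparity (placeholder calibration).
        def depth_mm(u_left, u_right, focal_px=800.0, baseline_mm=60.0):
            disparity = u_left - u_right           # pixel disparity of the matched centers
            return focal_px * baseline_mm / disparity

        print(depth_mm(412.0, 316.0))  # -> 500.0 (mm)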
    High dynamic range imaging algorithm based on luminance partition fuzzy fusion
    LIU Ying, WANG Fengwei, LIU Weihua, AI Da, LI Yun, YANG Fanchao
    2020, 40(1):  233-238.  DOI: 10.11772/j.issn.1001-9081.2019061032
    To solve the color distortion and loss of local detail caused by histogram expansion in High Dynamic Range (HDR) images generated from a single image, an HDR imaging algorithm based on luminance partition fusion was proposed. Firstly, the luminance component of a normally exposed color image was extracted, and the luminance was divided into two intervals according to a luminance threshold. Then, the luminance ranges of the images of the two intervals were extended by an improved exponential function, so that the luminance of the low-luminance area was increased, the luminance of the high-luminance area was decreased, and both ranges were expanded, increasing the overall contrast of the image while preserving color and detail information. Finally, the extended images and the original normally exposed image were fused into a high dynamic range image based on fuzzy logic. The algorithm was analyzed from both subjective and objective aspects, and the experimental results show that it can effectively expand the luminance range of an image, keep the color and detail information of the scene, and produce images with better visual effects.
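    A hedged sketch of the partitioned luminance adjustment: pixels below the threshold are brightened and pixels above it are darkened. The gamma-style curves below are stand-ins for the paper's improved exponential function, whose exact form is not given in the abstract:

        # Two-interval luminance adjustment (stand-in mapping functions).
        import numpy as np

        def expand_luminance(Y, thresh=0.5, gamma=2.0):
            Y = np.clip(Y, 0.0, 1.0)
            low = Y < thresh
            out = np.empty_like(Y)
            out[low] = thresh * (Y[low] / thresh) ** (1.0 / gamma)  # brighten low-luminance area
            out[~low] = 1.0 - (1.0 - thresh) * ((1.0 - Y[~low]) / (1.0 - thresh)) ** (1.0 / gamma)  # darken highs
            return out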
    Fast stitching method for dense repetitive structure images based on grid-based motion statistics algorithm and optimal seam
    MU Qi, TANG Yang, LI Zhanli, LI Hong'an
    2020, 40(1):  239-244.  DOI: 10.11772/j.issn.1001-9081.2019061045
    For images with dense repetitive structures, common algorithms produce a large number of false matches, resulting in obvious ghosting in the final image and high time consumption. To solve these problems, a fast stitching method for dense repetitive structure images was proposed based on the Grid-based Motion Statistics (GMS) algorithm and an optimal seam algorithm. Firstly, a large number of coarse matching points were extracted from the overlapping regions. Then, the GMS algorithm was used for precise matching, and the transformation model was estimated on this basis. Finally, a dynamic-programming-based optimal seam algorithm was adopted to complete the stitching. The experimental results show that the proposed method can effectively stitch images with dense repetitive structures: ghosting is effectively suppressed, and the stitching time is significantly reduced, with an average stitching speed 7.4 times and 3.2 times that of the traditional Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) algorithms respectively, 4.1 times that of the area-blocking-based SIFT algorithm, and 1.4 times that of the area-blocking-based SURF algorithm. The proposed algorithm can effectively eliminate ghosting when stitching dense repetitive structures and shortens the stitching time.
    Constraint iterative image reconstruction algorithm of adaptive step size non-local total variation
    WANG Wenjie, QIAO Zhiwei, NIU Lei, XI Yarui
    2020, 40(1):  245-251.  DOI: 10.11772/j.issn.1001-9081.2019061129
    In order to solve the problem that the Total Variation (TV) iterative constraint model tends to cause staircase artifacts and cannot preserve details in Computed Tomography (CT) images, an adaptive step size Non-Local Total Variation (NLTV) constrained iterative reconstruction algorithm was proposed. Since the NLTV model is able to preserve and restore the details and textures of an image, firstly, the CT reconstruction was formulated as a constrained optimization model that searches, within the solution set satisfying the projection data fidelity term, for solutions satisfying a specific regular term, namely NLTV minimization. Then, the Algebraic Reconstruction Technique (ART) and the Split Bregman (SB) algorithm were used to ensure that the reconstructed results were constrained by both the data fidelity term and the regularization term. Finally, the Adaptive Steepest Descent-Projection Onto Convex Sets (ASD-POCS) algorithm was used as the basic iterative framework to reconstruct images. The experimental results show that the proposed algorithm achieves accurate results from the projection data of 30 views under the noise-free sparse reconstruction condition. In the noisy sparse-data reconstruction experiment, the algorithm obtains a result close to final convergence, with a Root Mean Squared Error (RMSE) as large as 2.5 times that of the ASD-POCS algorithm. The proposed algorithm can reconstruct accurate images from sparse projection data and suppress noise while improving the detail reconstruction ability of the TV iterative model.
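    The overall iterative structure can be sketched as follows, assuming a dense system matrix A, projection data b and a callable reg_grad returning the NLTV subgradient; the adaptive step size coupling and the Split Bregman details of the paper are simplified away.

        import numpy as np

        def asd_pocs_iteration(x, A, b, beta, reg_grad, n_descent=20):
            """One outer iteration of an ASD-POCS-style loop: an ART
            (Kaczmarz) pass enforces fidelity to A x = b, then a few
            normalized steepest-descent steps reduce the regular term."""
            for i in range(A.shape[0]):            # ART pass, row by row
                a = A[i]
                denom = a @ a
                if denom > 0:
                    x = x + ((b[i] - a @ x) / denom) * a
            x = np.clip(x, 0, None)                # non-negativity projection
            for _ in range(n_descent):             # descent on the regularizer
                g = reg_grad(x)
                norm = np.linalg.norm(g)
                if norm == 0:
                    break
                x = x - beta * g / norm
            return x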
    Multi-exposure image fusion algorithm based on adaptive segmentation
    WANG Shupeng, ZHAO Yao
    2020, 40(1):  252-257.  DOI: 10.11772/j.issn.1001-9081.2019061114
    Aiming at the insufficient preservation of color and details in traditional multi-exposure image fusion, a novel multi-exposure image fusion algorithm based on adaptive segmentation was proposed. Firstly, the input image was divided into blocks of uniform color by super-pixel segmentation, and structural decomposition was conducted on the image blocks to obtain three individual components, with different fusion rules designed according to the characteristics of each component so as to preserve the color and details of the original images. Then, the weight map of each component, the signal strength component and the brightness component were smoothed by guided filtering, effectively overcoming the block effect, retaining the edge information of the source images and reducing artifacts. Finally, the fused image was obtained by reconstructing the three fused components. The experimental results show that, compared with traditional fusion algorithms, the proposed algorithm achieves average increases of 53.6% in Mutual Information (MI) and 24.0% in Standard Deviation (SD). The proposed image fusion algorithm can effectively preserve the color and texture details of the input images.
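    The structural decomposition can be sketched as below; it follows the common decomposition of a patch into signal strength, signal structure and mean intensity, which is assumed here to correspond to the three components used in the paper.

        import numpy as np

        def structure_decompose(block):
            """Decompose a block x into mean intensity l, signal strength c
            and unit-norm structure s, so that x = c * s + l; fusion rules
            can then treat each component separately (e.g. keep the largest
            c across exposures, average s by weight, fuse l by exposedness)."""
            l = block.mean()
            residual = block - l
            c = float(np.linalg.norm(residual))
            s = residual / c if c > 0 else np.zeros_like(residual)
            return c, s, l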
    Hyperspectral band selection algorithm based on kernelized fuzzy rough set
    ZHANG Wu, CHEN Hongmei
    2020, 40(1):  258-263.  DOI: 10.11772/j.issn.1001-9081.2019071211
    In order to reduce the redundancy between hyperspectral band images, decrease the computing time and facilitate the subsequent classification task, a hyperspectral band selection algorithm based on kernelized fuzzy rough set was proposed. Because adjacent bands of hyperspectral images are strongly similar, kernelized fuzzy rough set theory was introduced to measure the importance of bands more effectively. Considering the distribution characteristics of categories in the bands, the correlation between bands was defined according to the distribution of the lower approximation set of bands, and the importance of bands was then defined by combining the information entropy of bands. A search strategy of maximum correlation and maximum importance was used to realize the band selection of hyperspectral images. Finally, experiments were conducted on the commonly used hyperspectral dataset of the Indian Pines agricultural area with the J48 and KNN classifiers. Compared with other hyperspectral band selection algorithms, the proposed algorithm improves the overall average classification accuracy by 4.5 and 6.6 percentage points respectively with the two classifiers. The experimental results show that the proposed algorithm has advantages in hyperspectral band selection.
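    The maximum-correlation, maximum-importance search can be sketched as a greedy forward selection; importance() and correlation() below are placeholders for the kernelized fuzzy rough set measures defined in the paper, and the equal weighting of the two terms is illustrative.

        def select_bands(n_bands, k, importance, correlation):
            """Greedily pick k bands, each time adding the unselected band
            with the best combined importance/correlation score given the
            bands already selected."""
            selected = []
            candidates = set(range(n_bands))
            while candidates and len(selected) < k:
                best = max(candidates, key=lambda b: importance(b, selected)
                                                     + correlation(b, selected))
                selected.append(best)
                candidates.remove(best)
            return selected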
    Frontier & interdisciplinary applications
    Noise type recognition and intensity estimation based on K-nearest neighbors algorithm
    WU Xiaoli, ZHENG Yifeng
    2020, 40(1):  264-270.  DOI: 10.11772/j.issn.1001-9081.2019061109
    Existing methods for noise type recognition and intensity estimation all focus on single noises and cannot estimate the intensity of the source noises in mixed noises. To address this, a K-Nearest Neighbors (KNN) algorithm with a distance threshold was proposed to recognize single and mixed noises, and to estimate the intensity of the source noises in mixed noises by combining the recognition results with the reconstruction of noise bases. Firstly, the data distribution in the frequency domain was used as the feature vector. Then the signals were identified by the noise type recognition algorithm, and the frequency-domain cosine distance between the reconstructed noise and the real noise was adopted as the optimization criterion in the reconstruction of noise bases. Finally, the intensity of the source noises was estimated. The experimental results on two test databases indicate that the proposed algorithm achieves an average noise type recognition accuracy of 98.135% and a mixed noise intensity estimation error rate of 20.96%. The results verify the accuracy and generalization of the noise type recognition algorithm as well as the feasibility of the mixed noise intensity estimation algorithm, providing a new idea for mixed noise intensity estimation. The noise type and intensity information obtained by this method helps determine denoising methods and parameters, improving denoising efficiency.
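    The recognition step can be sketched as a KNN with a distance threshold over frequency-domain features; k and tau below are illustrative, not the tuned values of the paper, and the noise-base reconstruction stage is omitted.

        import numpy as np

        def cosine_distance(a, b):
            return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

        def knn_with_threshold(query, feats, labels, k=5, tau=0.2):
            """If even the nearest training sample is farther than tau, the
            query is treated as a mixed noise instead of being forced into
            a single-noise class; otherwise the k nearest neighbors vote."""
            d = np.array([cosine_distance(query, f) for f in feats])
            order = np.argsort(d)[:k]
            if d[order[0]] > tau:
                return "mixed"
            votes = [labels[i] for i in order]
            return max(set(votes), key=votes.count)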
    Multi-extended target tracking algorithm based on improved K-means++ clustering
    YU Haofang, SUN Lifan, FU Zhumu
    2020, 40(1):  271-277.  DOI: 10.11772/j.issn.1001-9081.2019061057
    In order to solve the problems of low partition accuracy of the measurement set and high computational complexity, a multi-extended target tracking algorithm based on the Gaussian-mixture Probability Hypothesis Density (PHD) filter with an improved K-means++ clustering algorithm was proposed. Firstly, the traversal range of the K value was narrowed according to the ways in which the targets may change at the next moment. Secondly, the predicted states of the targets were used to select the initial clustering centers, providing a basis for the correct partition of the measurement set and improving the accuracy of the clustering algorithm. Finally, the improved K-means++ clustering algorithm was applied to the Gaussian-mixture probability hypothesis density filter to jointly estimate the number and states of multiple targets. The simulation results show that the average tracking time of the proposed algorithm is reduced by 59.16% and 53.25% respectively compared with multi-extended target tracking algorithms based on distance partition and standard K-means++, while its Optimal Sub-Pattern Assignment (OSPA) distance is much lower than that of the two algorithms above. In summary, the algorithm can greatly reduce the computational complexity and achieves better tracking performance than existing measurement set partition methods.
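    The improved partition step can be sketched as follows, assuming predicted target positions as an array of shape (n_pred, d) and measurements Z of shape (m, d); the rule for choosing among the candidate K values is omitted, and birth_margin is illustrative.

        import numpy as np
        from sklearn.cluster import KMeans

        def partition_measurements(Z, predicted_states, birth_margin=1):
            """Try only K values near the predicted target number, and seed
            the clustering with predicted positions instead of random
            K-means++ centers; return one candidate partition per K."""
            n_pred = len(predicted_states)
            partitions = {}
            for k in range(max(1, n_pred - birth_margin), n_pred + birth_margin + 1):
                if k <= n_pred:
                    init = predicted_states[:k]
                else:   # pad with random measurements for possible births
                    extra = Z[np.random.choice(len(Z), k - n_pred, replace=False)]
                    init = np.vstack([predicted_states, extra])
                km = KMeans(n_clusters=k, init=init, n_init=1).fit(Z)
                partitions[k] = km.labels_
            return partitions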
    High-speed railway fare adjustment strategy based on passenger flow assignment
    YIN Shengnan, LI Yinzhen, ZHANG Changze
    2020, 40(1):  278-283.  DOI: 10.11772/j.issn.1001-9081.2019061088
    Concerning the problems of a single fare, low passenger transport revenue and unbalanced passenger flow in different sections of high-speed railway, a high-speed railway fare adjustment strategy based on passenger flow assignment was proposed. Firstly, the factors affecting passenger travel choice behavior were analyzed, and a generalized travel cost function covering four indicators, economy, rapidity, convenience and comfort, was constructed. Secondly, a bilevel programming model was established that considers both the revenue maximization of the railway passenger transport management department and the minimization of passenger travel cost: the upper-level program maximizes high-speed railway passenger revenue by formulating the fare adjustment strategy, while the lower-level program takes the minimum generalized travel cost of passengers as its goal and uses the competition and cooperation relationships between trains in a section to construct a Stochastic User Equilibrium (SUE) model, which was solved by the Method of Successive Averages (MSA) based on an improved Logit assignment model. Finally, a case study shows that the proposed fare adjustment strategy can effectively balance section passenger flow, reduce passenger travel cost and improve passenger transport revenue to a certain extent. The results show that the fare adjustment strategy can provide decision support and methodological guidance for railway passenger transport management departments in optimizing the fare system and formulating fare adjustment schemes.
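    The lower-level SUE solution can be sketched with a plain MSA/logit loop; cost_fn stands in for the generalized travel cost function of the paper, and the logit scale theta and iteration count are illustrative.

        import numpy as np

        def msa_logit_sue(cost_fn, demand, n_trains, theta=0.1, iters=200):
            """Method of Successive Averages: flows move toward the logit
            split of current generalized costs with step size 1/n, which
            converges to a stochastic user equilibrium."""
            flow = np.full(n_trains, demand / n_trains)
            for n in range(1, iters + 1):
                c = cost_fn(flow)                      # generalized costs
                p = np.exp(-theta * (c - c.min()))     # stabilized logit weights
                p /= p.sum()
                aux = demand * p                       # auxiliary flows
                flow += (aux - flow) / n               # MSA step
            return flow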
    Storage location assignment optimization of stereoscopic warehouse based on genetic simulated annealing algorithm
    ZHU Jie, ZHANG Wenyi, XUE Fei
    2020, 40(1):  284-291.  DOI: 10.11772/j.issn.1001-9081.2019061035
    Concerning the problem of storage location assignment in automated warehouses, a multi-objective model for automated stereoscopic warehouse storage location assignment was constructed in combination with the operational characteristics and security requirements of the warehouse, and an adaptive improved Simulated Annealing Genetic Algorithm (SAGA) based on the Sigmoid curve was proposed to solve it. Firstly, a storage location optimization model was established aiming at reducing the loading and unloading time of items, the distance between items of the same group, and the gravity center of the shelf. Then, to overcome the poor local search ability of the Genetic Algorithm (GA) and its tendency to fall into local optima, an adaptive crossover and mutation operation based on the Sigmoid curve and a reversal operation were introduced and fused with simulated annealing. Finally, the optimization ability, stability and convergence of the improved SAGA were tested. The experimental results show that, compared with the Simulated Annealing (SA) algorithm, the proposed algorithm improves the optimization degree of the loading and unloading time of items by 37.7949 percentage points, the optimization degree of the distance between items of the same group by 58.4630 percentage points, and the optimization degree of the gravity center of the shelf by 25.9275 percentage points, with better stability and convergence, proving the effectiveness of the improved SAGA in solving the problem. The algorithm can provide a decision method for storage location assignment optimization in automated warehouses.
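    The Sigmoid-based adaptation and the annealing acceptance can be sketched as below; the curve constants and the fitness convention (larger is better) are illustrative assumptions, not the paper's tuned values.

        import math, random

        def adaptive_prob(fit, f_avg, f_max, p_max=0.9, p_min=0.4, steep=9.9):
            """Sigmoid-shaped adaptive crossover/mutation probability:
            below-average individuals keep the maximum probability, while
            better individuals are disturbed less and less."""
            if fit <= f_avg or f_max == f_avg:
                return p_max
            x = (fit - f_avg) / (f_max - f_avg)        # position in [0, 1]
            return p_min + (p_max - p_min) / (1 + math.exp(steep * (x - 0.5)))

        def sa_accept(delta, T):
            """Metropolis acceptance: a worse offspring (delta > 0 for a
            minimization objective) survives with probability exp(-delta/T),
            which counteracts premature convergence of the plain GA."""
            return delta < 0 or random.random() < math.exp(-delta / T)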
    Modeling of dyeing vat scheduling and slide time window scheduling heuristic algorithm
    WEI Qianqian, DONG Xingye, WANG Huanzheng
    2020, 40(1):  292-298.  DOI: 10.11772/j.issn.1001-9081.2019060981
    Considering the characteristics of the dyeing vat scheduling problem, such as complex constraints, large task scales and high efficiency requirements, an incremental dyeing vat scheduling model was established and a Slide Time Window Scheduling heuristic (STWS) algorithm was proposed to improve the applicability of the model and algorithm in real scenarios. To meet the optimization targets of minimizing the delay cost, the washing cost and the vat switching cost, heuristic scheduling rules were applied to schedule the products in priority order. For each product, a dynamic batch combination algorithm and a batch splitting algorithm were used to divide batches, and a batch optimal sorting algorithm was then used to schedule the batches. Simulated scheduling on actual production data provided by a dyeing enterprise shows that the algorithm can complete scheduling of a monthly plan within 10 s. Compared with manual scheduling, the proposed algorithm improves the scheduling efficiency and significantly optimizes all three objectives. Additionally, experiments on incremental scheduling show that the algorithm clearly reduces the washing cost and the vat switching cost. All the results indicate that the proposed algorithm has excellent scheduling ability.
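    The dynamic batch combination and splitting can be sketched as below; the order fields (colour, process, weight, due, id) and the single-capacity vat model are illustrative assumptions, and the batch sorting stage is omitted.

        def combine_batches(orders, vat_capacity):
            """Pack orders of the same colour and process into vat-sized
            batches in due-date order, splitting any order that exceeds the
            remaining capacity of the current batch."""
            groups = {}
            for o in sorted(orders, key=lambda o: o["due"]):
                groups.setdefault((o["colour"], o["process"]), []).append(o)
            batches = []
            for key, group in groups.items():
                current, load = [], 0.0
                for o in group:
                    w = o["weight"]
                    while w > 0:
                        take = min(w, vat_capacity - load)
                        current.append((o["id"], take))   # (order, quantity)
                        load += take
                        w -= take
                        if load >= vat_capacity:          # batch full: flush it
                            batches.append((key, current))
                            current, load = [], 0.0
                if current:
                    batches.append((key, current))
            return batches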
    Solution method to anomalous smoothing problem in particle probability hypothesis density smoother
    HE Xiangyu, YU Bin, XIA Yujie
    2020, 40(1):  299-303.  DOI: 10.11772/j.issn.1001-9081.2019061128
    To solve the anomalous smoothing caused by missed detections or target disappearance in the particle Probability Hypothesis Density (PHD) smoother, an improved method based on a modified target survival probability was proposed. Firstly, the prediction and update formulas of the forward filtering were modified to obtain the target intensity function and to estimate the number of surviving targets during filtering. On this basis, the changes in the forward-filtered estimate of the number of surviving targets were used to judge whether target disappearance or missed detection had occurred, and the survival probability used in the backward smoothing calculation was defined accordingly. Then, the iterative formula of the backward smoothing was improved with the obtained survival probability, and the particle weights were computed on this basis. The simulation results show that the proposed method can effectively solve the anomalous smoothing problems of the PHD smoother; its time-averaged Optimal SubPattern Assignment (OSPA) distance error decreases from 7.75 m to 1.05 m compared with the standard algorithm, indicating a significant improvement in tracking performance.
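    The idea of the survival probability modification can be sketched as below; the proportional rule is only a hedged illustration of using the change in the filtered estimate of the surviving target number, and may differ from the exact definition in the paper.

        def smoothing_survival_prob(n_prev, n_curr, p_s_nominal=0.99):
            """Reduce the survival probability used in backward smoothing
            when the forward-filtered target number drops, i.e. when target
            disappearance or missed detection is indicated."""
            if n_prev <= 0:
                return p_s_nominal
            return p_s_nominal * min(1.0, n_curr / n_prev)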
    QRS complex detection algorithm of electrocardiograph based on Shannon energy and adaptive threshold
    WANG Zhizhong, LI Hongyi, HAN Chuang
    2020, 40(1):  304-310.  DOI: 10.11772/j.issn.1001-9081.2019050818
    In view of the problem that existing QRS complex detection algorithms for the electrocardiograph are still not ideal for some abnormal signals, a QRS complex detection method combining Shannon energy with an adaptive threshold was proposed to address the low accuracy of QRS complex detection. Firstly, the Shannon energy envelope was extracted from the pre-processed signal. Then, QRS complexes were detected by the improved adaptive threshold method. Finally, the location of each detected QRS complex was determined from its enhanced signal. The MIT-BIH arrhythmia database was employed to evaluate the performance of the proposed algorithm. The results show that the algorithm can accurately locate QRS complexes even when tall P waves, T waves, irregular rhythm and serious noise interference exist in the signal, with the sensitivity, positive predictivity and accuracy over the whole database reaching 99.88%, 99.85% and 99.73% respectively, and that the algorithm completes the QRS complex detection task quickly while maintaining this accuracy.
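    The Shannon energy envelope stage can be sketched as below, assuming a band-pass filtered ECG segment; the smoothing window length is illustrative. Peaks of this envelope serve as QRS candidates for the adaptive threshold stage.

        import numpy as np

        def shannon_energy_envelope(ecg, win=25):
            """Shannon energy SE = -x^2 * log(x^2) of a normalized signal,
            smoothed with a moving average; it emphasizes medium-amplitude
            activity (QRS) over tall P/T waves and small noise."""
            x = ecg / (np.max(np.abs(ecg)) + 1e-12)       # normalize to [-1, 1]
            e = x ** 2
            se = -e * np.log(e + 1e-12)                   # Shannon energy
            kernel = np.ones(win) / win
            return np.convolve(se, kernel, mode="same")   # smoothed envelope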