Table of Contents

    10 August 2019, Volume 39 Issue 8
    Artificial intelligence
    Visual sentiment analysis by combining global and local regions of image
    CAI Guoyong, HE Xinhao, CHU Yangyang
    2019, 39(8):  2181-2185.  DOI: 10.11772/j.issn.1001-9081.2018122452
    Abstract | PDF (901KB)
    Most existing visual sentiment analysis methods construct the visual sentiment feature representation from the whole image, although the local regions containing objects often convey the sentiment more strongly. Concerning the neglect of local-region sentiment representation in visual sentiment analysis, a visual sentiment analysis method combining the global and local regions of an image was proposed, in which image sentiment representation was mined by combining the whole image with its local regions. Firstly, an object detection model was used to locate the local regions containing objects. Secondly, the sentiment features of these local regions were extracted by a deep neural network. Finally, the deep features extracted from the whole image and the local-region features were used to jointly train the image sentiment classifier and predict the sentiment polarity of the image. Experimental results show that the classification accuracy of the proposed method reaches 75.81% and 78.90% on the real datasets Twitter Ⅰ and Twitter Ⅱ respectively, higher than that of sentiment analysis methods based on features extracted from either the whole image or the local regions alone.
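The fusion step described above can be sketched minimally: pool the detected-region features and concatenate them with the whole-image feature before classification. This is an illustrative sketch, not the paper's implementation; max-pooling over regions is an assumed aggregation choice, and the toy 4-dimensional vectors are hypothetical.

```python
import numpy as np

def fuse_features(global_feat, region_feats):
    """Concatenate the whole-image feature with a pooled local-region feature.

    region_feats: list of per-region feature vectors; max-pooling keeps the
    strongest local response per dimension (one simple aggregation choice).
    """
    if region_feats:
        local = np.max(np.stack(region_feats), axis=0)
    else:
        local = np.zeros_like(global_feat)
    return np.concatenate([global_feat, local])

# toy example: a 4-dim global feature and two detected object regions
g = np.array([0.2, 0.5, 0.1, 0.9])
r1 = np.array([0.7, 0.1, 0.3, 0.2])
r2 = np.array([0.4, 0.6, 0.2, 0.1])
fused = fuse_features(g, [r1, r2])   # 8-dim joint representation
```

The fused vector would then be fed to whatever classifier is trained jointly on both views.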
    Cross-domain sentiment classification method of convolution-bi-directional long short-term memory based on attention mechanism
    GONG Qin, LEI Man, WANG Jichao, WANG Baoqun
    2019, 39(8):  2186-2191.  DOI: 10.11772/j.issn.1001-9081.2019010096
    Abstract | PDF (873KB)
    Concerning the problems that the text representation features in existing cross-domain sentiment classification methods ignore the sentiment information of important words and that negative transfer occurs during the transfer process, a Convolution-Bi-directional Long Short-Term Memory based on Attention mechanism (AC-BiLSTM) model was proposed to realize knowledge transfer. Firstly, the vector representation of text was obtained from low-dimensional dense word vectors. Secondly, after local context features were obtained by convolution operation, the long-range dependence between features was fully captured by a Bi-directional Long Short-Term Memory (BiLSTM) network. Then, the contribution degrees of different words to the text were modeled by introducing an attention mechanism, and a regularization term was added to the objective function to avoid negative transfer during the transfer process. Finally, the model parameters trained on source-domain product reviews were transferred to target-domain product reviews, and the model was fine-tuned with a small amount of labeled target-domain data. Experimental results show that, compared with the AE-SCL-SR (AutoEncoder Structural Correspondence Learning with Similarity Regularization) method and the Adversarial Memory Network (AMN) method, the AC-BiLSTM method improves average accuracy by 6.5% and 2.2% respectively, which demonstrates that AC-BiLSTM can effectively improve cross-domain sentiment classification performance.
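The attention step over the BiLSTM hidden states can be illustrated with a minimal numpy sketch: score each time step against a learned query vector, softmax-normalize, and take the weighted sum as the text representation. This is a generic attention-pooling sketch under assumed shapes, not the paper's exact layer; the query vector `w` stands in for learned parameters.

```python
import numpy as np

def attention_pool(H, w):
    """Attention pooling over hidden states.

    H: (T, d) hidden states from a (Bi)LSTM; w: (d,) attention query vector.
    Returns the context vector (weighted sum) and the attention weights.
    """
    scores = H @ w                         # (T,) one score per time step
    scores = scores - scores.max()         # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ H, alpha

# toy example: 3 time steps, 2-dim hidden states
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.array([2.0, 0.0])                   # hypothetical query vector
context, alpha = attention_pool(H, w)
```

Words aligned with the query direction receive larger weights and dominate the pooled representation.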
    Short text sentiment analysis based on parallel hybrid neural network model
    CHEN Jie, SHAO Zhiqing, ZHANG Huanhuan, FEI Jiahui
    2019, 39(8):  2192-2197.  DOI: 10.11772/j.issn.1001-9081.2018122552
    Abstract | PDF (884KB)
    Concerning the problems that the traditional Convolutional Neural Network (CNN) ignores the contextual semantics of words when performing sentiment analysis and loses much feature information in the max-pooling operation at the pooling layer, both of which limit the model's text classification performance, a parallel hybrid neural network model, namely CA-BGA (Convolutional Neural Network Attention and Bidirectional Gated Recurrent Unit Attention), was proposed. Firstly, a feature fusion method was adopted to integrate a Bidirectional Gated Recurrent Unit (BiGRU) into the output of CNN, so that semantic learning was enhanced by the global semantic features of sentences. Then, the attention mechanism was introduced between the convolutional layer and the pooling layer of CNN and at the output of BiGRU, to reduce noise interference while retaining more feature information. Finally, the parallel hybrid neural network model was constructed from these two improvement strategies. Experimental results show that the proposed hybrid model converges quickly and effectively improves the F1 value of text classification, performing well on Chinese short text sentiment analysis tasks.
    Aspect level sentiment classification model with location weight and long-short term memory based on attention-over-attention
    WU Ting, CAO Chunping
    2019, 39(8):  2198-2203.  DOI: 10.11772/j.issn.1001-9081.2018122565
    Abstract | PDF (847KB)
    The traditional attention-based neural network model cannot effectively attend to aspect features and sentiment information, and context words at different distances or in different directions contribute differently to the sentiment polarity of an aspect word. Aiming at these problems, a Location Weight and Attention-Over-Attention Long Short-Term Memory (LWAOA-LSTM) model was proposed. Firstly, location weight information was added to the word vectors. Then a Long Short-Term Memory (LSTM) network was used to model aspects and sentences simultaneously to generate aspect and sentence representations, which were learned jointly through an attention-over-attention module to obtain the interactions from the aspect to the text and from the text to the aspect, so that the important parts of the sentence were attended to automatically. Finally, experiments were carried out on datasets of different themes covering attractions, catering and accommodation to verify the accuracy of the model's aspect-level sentiment analysis. Experimental results show that the accuracy of the model on the attractions, catering and accommodation datasets is 78.3%, 80.6% and 82.1% respectively, and that LWAOA-LSTM outperforms the traditional LSTM network model.
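The location-weighting idea can be sketched as scaling each word embedding by how close the word is to the aspect term. The linear decay scheme below is an assumption for illustration (the paper does not specify its exact form here); the decay rate and the toy embedding matrix are hypothetical.

```python
import numpy as np

def location_weights(n_words, aspect_idx, decay=0.1):
    """One common scheme: w_i = max(0, 1 - decay * |i - aspect_idx|)."""
    idx = np.arange(n_words)
    return np.maximum(0.0, 1.0 - decay * np.abs(idx - aspect_idx))

def apply_weights(E, aspect_idx):
    """Scale each word embedding (a row of E) by its location weight."""
    w = location_weights(E.shape[0], aspect_idx)
    return E * w[:, None]

# toy sentence of 5 words with 3-dim embeddings; aspect word at position 2
E = np.ones((5, 3))
weighted = apply_weights(E, aspect_idx=2)
```

Words far from the aspect are down-weighted before the LSTM sees them, encoding the distance prior directly in the input.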
    Semi-supervised ensemble learning for video semantic detection based on pseudo-label confidence selection
    YIN Yu, ZHAN Yongzhao, JIANG Zhen
    2019, 39(8):  2204-2209.  DOI: 10.11772/j.issn.1001-9081.2019010129
    Abstract | PDF (1074KB)
    Focusing on the problems in video semantic detection that the insufficiency of labeled samples seriously affects detection performance and that noise in the pseudo-label samples limits the performance gain of the base classifiers in ensemble learning, a semi-supervised ensemble learning algorithm based on pseudo-label confidence selection was proposed. Firstly, three base classifiers were trained in three different feature spaces to obtain their label vectors. Secondly, the gap between the largest and second-largest class probabilities of the weighted-fusion samples, and the gap between the largest class probability and the average probability of the other classes, were introduced as the label confidences of the base classifiers, and the pseudo-label and integrated confidence of each sample were obtained by fusing label vectors and label confidences. Thirdly, samples with high integrated confidence were added to the labeled sample set, and the base classifiers were trained iteratively. Finally, the trained base classifiers were integrated to detect video semantic concepts collaboratively. The average accuracy of the algorithm on the experimental dataset UCF11 reaches 83.48%; compared with the Co-KNN-SVM algorithm, the average accuracy is increased by 3.48 percentage points. The pseudo-labels selected by the algorithm reflect both the overall variation between a sample's class and the other classes and the uniqueness of that class, which reduces the risk of using pseudo-label samples and effectively improves the accuracy of video semantic concept detection.
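The two confidence margins described above can be sketched directly on a class-probability vector: the top-1/top-2 gap and the gap between the top class and the mean of the rest. This is an illustrative sketch under assumed inputs; the fusion across three base classifiers and the exact threshold are simplified away.

```python
import numpy as np

def label_confidence(p):
    """Two margins used as confidence signals for a pseudo-label.

    p: class-probability vector for one sample.
    Returns (top1 - top2, top1 - mean of the remaining classes).
    """
    s = np.sort(p)[::-1]
    return s[0] - s[1], s[0] - s[1:].mean()

def select_confident(probas, thresh):
    """Keep the indices of samples whose top-1/top-2 margin meets thresh."""
    keep = []
    for i, p in enumerate(probas):
        m12, _ = label_confidence(p)
        if m12 >= thresh:
            keep.append(i)
    return keep

# a confident sample and an ambiguous one (hypothetical probabilities)
probas = [np.array([0.7, 0.2, 0.1]), np.array([0.4, 0.35, 0.25])]
kept = select_confident(probas, thresh=0.3)
```

Only the confidently pseudo-labeled sample would be moved into the labeled set for the next training round.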
    Face recognition combining weighted information entropy with enhanced local binary pattern
    DING Lianjing, LIU Guangshuai, LI Xurui, CHEN Xiaowen
    2019, 39(8):  2210-2216.  DOI: 10.11772/j.issn.1001-9081.2019010181
    Abstract | PDF (1131KB)
    Because the recognition rate of faces is low under the influence of illumination, pose, expression, occlusion and noise, a method combining weighted Information Entropy (IEw) with Adaptive-Threshold Ring Local Binary Pattern (ATRLBP), denoted IEwATR-LBP, was proposed. Firstly, the information entropy was extracted from the sub-blocks of the original face image to obtain the IEw of each sub-block. Secondly, a probability histogram was obtained by using the ATRLBP operator to extract features from the face sub-blocks. Finally, the final feature histogram of the original face image was obtained by concatenating the products of each IEw with the corresponding probability histogram, and the recognition result was computed with a Support Vector Machine (SVM). In comparison experiments on the illumination, pose, expression and occlusion subsets of the AR face database, the proposed method achieved recognition rates of 98.37%, 94.17%, 98.20% and 99.34% respectively; meanwhile, it achieved a maximum recognition rate of 99.85% on the ORL face database. Comparing the average recognition rates over 5 experiments with different numbers of training samples shows that the recognition rate on samples with Gaussian noise was 14.04 percentage points lower than that on noise-free samples, while the recognition rate on samples with salt-and-pepper noise was only 2.95 percentage points lower. Experimental results show that the proposed method can effectively improve the recognition rate of faces under the influence of illumination, pose, occlusion, expression and impulse noise.
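The entropy-weighting step can be sketched as: compute the Shannon entropy of each sub-block as its weight, then concatenate the per-block histograms scaled by normalized weights. This is a sketch under assumptions (16 gray-level bins, placeholder histograms standing in for the actual ATRLBP descriptors).

```python
import numpy as np

def block_entropy(block, bins=16):
    """Shannon entropy of a gray-level block, used as its information weight."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def weighted_histogram(blocks, block_hists):
    """Concatenate per-block histograms, each scaled by its entropy weight."""
    weights = np.array([block_entropy(b) for b in blocks])
    weights = weights / weights.sum()          # normalize across blocks
    return np.concatenate([w * h for w, h in zip(weights, block_hists)])

rng = np.random.default_rng(0)
blocks = [rng.integers(0, 256, (8, 8)) for _ in range(4)]  # 4 image sub-blocks
hists = [np.ones(10) for _ in range(4)]        # placeholder LBP histograms
feat = weighted_histogram(blocks, hists)
```

Blocks rich in texture (high entropy) contribute more to the final descriptor than flat, uninformative blocks.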
    Real-time face recognition on ARM platform based on deep learning
    FANG Guokang, LI Jun, WANG Yaoru
    2019, 39(8):  2217-2222.  DOI: 10.11772/j.issn.1001-9081.2019010164
    Abstract | PDF (958KB)
    Aiming at the problem of low real-time performance of face recognition and low face recognition rate on ARM platform, a real-time face recognition method based on deep learning was proposed. Firstly, an algorithm for detecting and tracking faces in real time was designed based on MTCNN face detection algorithm. Then, a face feature extraction network was designed based on Residual Neural Network (ResNet) on ARM platform. Finally, according to the characteristics of ARM platform, Mali-GPU was used to accelerate the operation of face feature extraction network, sharing the CPU load and improving the overall running efficiency of the system. The algorithm was deployed on ARM-based Rockchip development board, and the running speed reaches 22 frames per second. Experimental results show that the recognition rate of this method is 11 percentage points higher than that of MobileFaceNet on MegaFace.
    Person re-identification based on deep multi-view feature distance learning
    DENG Xuan, LIAO Kaiyang, ZHENG Yuanlin, YUAN Hui, LEI Hao, CHEN Bing
    2019, 39(8):  2223-2229.  DOI: 10.11772/j.issn.1001-9081.2018122505
    Abstract | PDF (1190KB)
    Traditional handcrafted features rely heavily on the appearance characteristics of pedestrians, while the deep convolution feature is high-dimensional, so matching images directly with it consumes a lot of time and memory; moreover, features from higher layers are easily affected by human pose or background clutter. Aiming at these problems, a method based on deep multi-view feature distance learning was proposed. Firstly, a new feature improving and integrating the deep regional convolution features was proposed: the convolution feature was processed by a sliding-frame technique to obtain a low-dimensional deep regional integration feature whose dimension equals the number of convolution channels. Secondly, from the perspectives of the deep regional integration feature and the handcrafted feature, a multi-view feature distance learning algorithm was proposed by utilizing the cross-view quadratic discriminant analysis method. Finally, a weighted fusion strategy was used to accomplish the collaboration between handcrafted features and deep convolution features. Experimental results show that the Rank1 value of the proposed method reaches 80.17% and 75.32% on the Market-1501 and VIPeR datasets respectively; under the new classification rules of the CUHK03 dataset, the Rank1 value of the proposed method reaches 33.5%. The results show that the accuracy of pedestrian re-identification after distance-weighted fusion is significantly higher than that of either feature distance metric alone, proving the effectiveness of the proposed deep regional features and algorithm model.
    Pedestrian detection method based on Movidius neural computing stick
    ZHANG Yangshuo, MIAO Zhuang, WANG Jiabao, LI Yang
    2019, 39(8):  2230-2234.  DOI: 10.11772/j.issn.1001-9081.2018122595
    Abstract | PDF (729KB)
    Movidius neural computing stick is a USB-based deep learning inference tool and a stand-alone artificial intelligence accelerator that provides dedicated deep neural network acceleration for a wide range of mobile and embedded vision devices. For the embedded application of deep learning, a near real-time pedestrian target detection method based on Movidius neural computing stick was realized. Firstly, the model size and calculation were adapted to the requirements of the embedded device by improving the RefineDet target detection network structure. Then, the model was retrained on the pedestrian detection dataset and deployed on the Raspberry Pi equipped with Movidius neural computing stick. Finally, the model was tested in the actual environment, and the algorithm achieved an average processing speed of 4 frames per second. Experimental results show that based on Movidius neural computing stick, the near real-time pedestrian detection task can be completed on the Raspberry Pi with limited computing resources.
    Aggressive behavior recognition based on human joint point data
    CHEN Hao, XIAO Lixue, LI Guang, PAN Yuekai, XIA Yu
    2019, 39(8):  2235-2241.  DOI: 10.11772/j.issn.1001-9081.2019010084
    Abstract | PDF (974KB)
    In order to solve the problem of human aggressive behavior recognition, an aggressive behavior recognition method based on human joint points was proposed. Firstly, OpenPose was used to obtain the human joint point data of each single frame, and the nearest-neighbor-frame feature weighting method and piecewise polynomial regression were used to complete missing values caused by body self-occlusion and environmental factors. Then, a dynamic "safe distance" threshold was defined for each human body; if the true distance between two people was less than the threshold, the behavior feature vector was constructed, including the inter-frame displacement of the human barycenter, the angular velocity of joint rotation and the minimum attack distance during the interaction. Finally, the improved LightGBM (Light Gradient Boosting Machine) algorithm, namely w-LightGBM (weight LightGBM), was used to classify and recognize aggressive behaviors. The proposed method was verified on the public dataset UT-interaction, achieving an accuracy of 95.45%. The results show that this method can effectively identify aggressive behaviors from various angles.
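The "safe distance" gate above can be sketched as a simple check on the distance between two people's barycenters, computed from their joint coordinates. This is an illustrative sketch only: the barycenter as a plain mean of joints and the fixed threshold are assumptions, whereas the paper's threshold is dynamic per person.

```python
import numpy as np

def barycenter(joints):
    """Mean of the visible joint coordinates, as a simple body barycenter."""
    return joints.mean(axis=0)

def within_safe_distance(joints_a, joints_b, threshold):
    """Interaction features are only built when two people are closer than
    the threshold; returns the decision and the barycenter distance."""
    d = np.linalg.norm(barycenter(joints_a) - barycenter(joints_b))
    return d < threshold, d

# toy 2-D joints for two people (hypothetical coordinates)
a = np.array([[0.0, 0.0], [0.0, 2.0]])
b = np.array([[3.0, 0.0], [3.0, 2.0]])
close, dist = within_safe_distance(a, b, threshold=4.0)
```

Only pairs passing this gate would have the full feature vector (displacement, angular velocity, attack distance) computed and classified.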
    Video object segmentation method based on dual pyramid network
    JIANG Sihao, SONG Huihui, ZHANG Kaihua, TANG Runfa
    2019, 39(8):  2242-2246.  DOI: 10.11772/j.issn.1001-9081.2018122566
    Abstract | PDF (787KB)
    Focusing on the issue that it is difficult to segment a specific object in a complex video scene, a video object segmentation method based on Dual Pyramid Network (DPN) was proposed. Firstly, the one-way transmission of modulating network was used to make the segmentation model adapt to the appearance of a specific object, which means, a modulator was learned based on visual and spatial information of target object to modulate the intermediate layers of segmentation network to make the network adapt to the appearance changes of specific object. Secondly, global context information was aggregated in the last layer of segmentation network by different-region-based context aggregation method. Finally, a left-to-right architecture with lateral connections was developed for building high-level semantic feature maps at all scales. The proposed video object segmentation method is a network which is able to be trained end-to-end. Extensive experimental results show that the proposed method achieves results which can be competitive to the results of the state-of-the-art methods using online fine-tuning on DAVIS2016 dataset, and outperforms other methods on DAVIS2017 dataset.
    Object tracking algorithm combining re-detection mechanism and convolutional regression network
    JIA Yongchao, HE Xiaowei, ZHENG Zhonglong
    2019, 39(8):  2247-2251.  DOI: 10.11772/j.issn.1001-9081.2018122593
    Abstract | PDF (868KB)
    The Context-Aware Correlation Filter (CACF) algorithm based on handcrafted features tracks poorly under deformation, motion blur and low resolution, and under conditions such as severe occlusion it easily falls into a local optimum and fails. To address these problems, a new object tracking algorithm combining a re-detection mechanism with a Convolutional Regression Network (CRN) was proposed. In the training phase, the correlation filter was integrated into the deep neural network as a CRN layer, so that the network could be trained end-to-end as a whole. In the tracking phase, different network layers and their response values were merged through residual connections. At the same time, a re-detection mechanism was introduced to let the tracking algorithm recover from potential failures, with the re-detector activated when the response value fell below a given threshold. Experimental results on the OTB-2013 dataset show that the proposed algorithm achieves 88.1% accuracy on 50 video sequences, 9.7 percentage points higher than the original CACF algorithm, and obtains better results than the original algorithm on video sequences with attributes such as deformation and motion blur.
    Best action identification of tree structure based on ternary multi-arm bandit
    LIU Guoqing, WANG Jieting, HU Zhiguo, QIAN Yuhua
    2019, 39(8):  2252-2260.  DOI: 10.11772/j.issn.1001-9081.2018112394
    Abstract | PDF (1397KB)
    Monte Carlo Tree Search (MCTS) shows excellent performance in chess game problems. Most existing studies only consider win and loss feedback and assume that the results follow a Bernoulli distribution. However, this setting ignores the common result of a draw, causing inaccurate assessment of the board state and missing the optimal action. To solve this problem, a Ternary Multi-Arm Bandit (TMAB) model was constructed and a Best Arm identification algorithm for TMAB (TBBA) was proposed; TBBA was then applied to the Ternary Minimax Sampling Tree (TMST). Finally, the TBBA_tree algorithm, based on simple iteration of TBBA, and the Best Action identification algorithm for TMST (TTBA), based on transforming the tree structure into a TMAB, were proposed. In the experiments, two arm spaces with different precision were established, and several comparative TMABs and TMSTs were constructed from them. Experimental results show that, compared with the accuracy of the uniform sampling algorithm, the accuracy of the TBBA algorithm rises steadily and in some cases reaches 100%; the accuracy of the TBBA algorithm is basically above 80%, with good generalization and stability and without outliers or large fluctuations.
    Trajectory prediction based on Gauss mixture time series model
    GAO Jian, MAO Yingchi, LI Zhitao
    2019, 39(8):  2261-2270.  DOI: 10.11772/j.issn.1001-9081.2019010030
    Abstract | PDF (1517KB)
    Considering the large change of trajectory prediction error caused by changes of road traffic flow at different times, a Gauss Mixture Time Series Model (GMTSM) based on a probability distribution model was proposed; model regression over massive historical vehicle trajectories and analysis of road traffic flow were carried out to realize vehicle trajectory prediction. Firstly, aiming at the problem that the uniform grid partition method easily splits related trajectory points, an iterative grid partition method was proposed to balance the number of trajectory points per grid. Secondly, the Gaussian Mixture Model (GMM) and the AutoRegressive Integrated Moving Average (ARIMA) model of time series analysis were trained and combined. Thirdly, in order to prevent the instability of GMTSM's sub-models from interfering with the prediction results, the weights of the sub-models were dynamically calculated by analyzing their prediction errors. Finally, the sub-models were combined with the dynamic weights to realize trajectory prediction. Experimental results show that the average prediction accuracy of GMTSM is 92.3% in the case of sudden changes of road traffic flow. Compared with the Gaussian mixture model and the Markov model under the same parameters, GMTSM improves prediction accuracy by about 55%. GMTSM can not only accurately predict vehicle trajectories under normal circumstances, but also effectively improve the prediction accuracy under changing road traffic flow, making it applicable to real road environments.
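The dynamic weighting step can be sketched as weighting each sub-model inversely to its recent prediction error and combining the predictions with those weights. This is an illustrative sketch with assumed inputs; the actual GMM and ARIMA sub-models are replaced by placeholder prediction vectors.

```python
import numpy as np

def dynamic_weights(errors, eps=1e-9):
    """Weight each sub-model inversely to its recent prediction error."""
    inv = 1.0 / (np.asarray(errors, dtype=float) + eps)
    return inv / inv.sum()

def combine(preds, errors):
    """Weighted combination of sub-model predictions (e.g. GMM and ARIMA)."""
    w = dynamic_weights(errors)
    return (w[:, None] * np.asarray(preds)).sum(axis=0)

# placeholder predictions from two sub-models for two trajectory coordinates
preds = [np.array([10.0, 20.0]), np.array([14.0, 24.0])]
errors = [1.0, 3.0]        # sub-model 1 has recently been more accurate
combined = combine(preds, errors)
```

The more reliable sub-model dominates the combination, so a temporarily unstable sub-model cannot drag the prediction far off.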
    Two-input stream deep deconvolution neural network for interpolation and recognition
    ZHANG Qiang, YANG Jian, FU Lizhen
    2019, 39(8):  2271-2275.  DOI: 10.11772/j.issn.1001-9081.2018122555
    Abstract | PDF (822KB)
    It is impractical to have a large training dataset for neural network training in real work, so a two-input-stream generative neural network which can generate a new image with given parameters was proposed to augment the training dataset. The framework consists of a two-input-stream convolution network and a deconvolution network: the two-input-stream part uses two convolution networks to extract features, and the deconvolution network is connected to their end. Two images taken from different angles were input into the convolution networks to obtain a high-level description, then an interpolated target image from a new perspective was generated by the deconvolution network from this high-level description and the set parameters. Experimental results on ShapeNetCore show that, on the same dataset, the recognition rate with the proposed network is 20% higher than with the common network framework. This method can enlarge the training dataset and is useful for multi-angle recognition.
    Data science and technology
    Hyperspectral unmixing based on sparse and orthogonal constrained non-negative matrix factorization
    CHEN Shanxue, CHU Chengquan
    2019, 39(8):  2276-2280.  DOI: 10.11772/j.issn.1001-9081.2019010105
    Abstract | PDF (773KB)
    Aiming at the problem that hyperspectral unmixing based on Non-negative Matrix Factorization (NMF) easily falls into local minima and is greatly affected by initial values, a linear unmixing algorithm based on Sparse and Orthogonal constrained Non-negative Matrix Factorization (SONMF) was proposed. Firstly, based on the traditional NMF hyperspectral linear unmixing method, the physical and chemical properties of hyperspectral data were analyzed. Then, combining the sparsity of abundances with the independence of endmembers, the two methods of Sparse Non-negative Matrix Factorization (SNMF) and Orthogonal Non-negative Matrix Factorization (ONMF) were combined and applied to hyperspectral unmixing. Experiments on simulated and real data show that, compared with the three reference unmixing algorithms of Vertex Component Analysis (VCA), SNMF and ONMF, the proposed algorithm improves linear unmixing performance, reducing the Spectral Angle Distance (SAD) by 0.012 to 0.145. SONMF combines the advantages of the two constraints to compensate for the limited ability of traditional NMF-based linear unmixing methods to express hyperspectral data, and achieves good results.
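To make the NMF machinery concrete, here is a minimal sketch of NMF with an L1 sparsity penalty on the abundances, using standard multiplicative updates. It illustrates only the sparse half of the idea (the orthogonality constraint on endmembers is omitted), and the penalty placement, data sizes and iteration count are assumptions, not the paper's algorithm.

```python
import numpy as np

def sparse_nmf(V, r, lam=0.1, iters=200, seed=0):
    """Multiplicative-update NMF with an L1 penalty on the abundances H.

    V ≈ W H, with V: (bands, pixels), W: endmember signatures, H: abundances.
    The lam term in H's denominator shrinks H toward sparsity.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-9)   # sparse abundance update
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)         # endmember update
    return W, H

# toy non-negative "hyperspectral" matrix: 6 bands x 20 pixels
V = np.abs(np.random.default_rng(1).random((6, 20)))
W, H = sparse_nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)   # relative fit error
```

The multiplicative form keeps W and H non-negative throughout, which is why it is the usual workhorse for constrained NMF variants.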
    Learning sample extraction method based on convex boundary
    GU Yiyi, TAN Xuntao, YUAN Yubo
    2019, 39(8):  2281-2287.  DOI: 10.11772/j.issn.1001-9081.2019010162
    Abstract | PDF (1258KB)
    The quality and quantity of learning samples are very important for intelligent data classification systems, but there is no generally good method for finding meaningful samples. For this reason, the concept of the convex boundary of a dataset was proposed, and a fast method for discovering a meaningful sample set was given. Firstly, abnormal and incomplete samples in the learning sample set were cleaned by the box-plot function. Secondly, the concept of the data cone was proposed to divide the normalized learning samples into cones. Finally, the sample subset of each cone was centralized and, based on the convex boundary, samples with very small difference from the convex boundary were extracted to form the convex boundary sample set. In the experiments, 6 classical data classification algorithms, including Gaussian Naive Bayes (GNB), Classification And Regression Tree (CART), Linear Discriminant Analysis (LDA), Adaptive Boosting (AdaBoost), Random Forest (RF) and Logistic Regression (LR), were tested on 12 UCI datasets. The results show that convex boundary sample sets can significantly shorten the training time of each algorithm while maintaining classification performance. In particular, for datasets with much noise, such as the caesarian section, electrical grid and car evaluation datasets, the convex boundary sample set can improve classification performance. To better evaluate its efficiency, the sample cleaning efficiency was defined as the quotient of the sample-size change rate and the classification-performance change rate, and the significance of convex boundary samples was evaluated objectively with this index. A cleaning efficiency greater than 1 proves that the method is effective, and the higher the value, the better the effect of using convex boundary samples as learning samples. For example, on the HTRU2 dataset, the cleaning efficiency of the proposed method for the GNB algorithm exceeds 68, which proves the strong performance of the method.
    Incremental attribute reduction algorithm of positive region in interval-valued decision tables
    BAO Di, ZHANG Nan, TONG Xiangrong, YUE Xiaodong
    2019, 39(8):  2288-2296.  DOI: 10.11772/j.issn.1001-9081.2018122518
    Abstract | PDF (1293KB)
    There are a large number of dynamically-increasing interval data in practical applications. If the classic non-incremental attribute reduction of positive region is used for reduction, it is necessary to recalculate the positive region reduction of the updated interval-valued datasets, which greatly reduces the computational efficiency of attribute reduction. In order to solve the problem, incremental attribute reduction methods of positive region in interval-valued decision tables were proposed. Firstly, the related concepts of positive region reduction in interval-valued decision tables were defined. Then, the single and group incremental mechanisms of positive region were discussed and proved, and the single and group incremental attribute reduction algorithms of positive region in interval-valued decision tables were proposed. Finally, 8 UCI datasets were used to carry out experiments. When the incremental size of 8 datasets increases from 60% to 100%, the reduction time of classic non-incremental attribute reduction algorithm in the 8 datasets is 36.59 s, 72.35 s, 69.83 s, 154.29 s, 80.66 s, 1498.11 s, 4124.14 s and 809.65 s, the reduction time of single incremental attribute reduction algorithm is 19.05 s, 46.54 s, 26.98 s, 26.12 s, 34.02 s, 1270.87 s, 1598.78 s and 408.65 s, the reduction time of group incremental attribute reduction algorithm is 6.39 s, 15.66 s, 3.44 s, 15.06 s, 8.02 s, 167.12 s, 180.88 s and 61.04 s. Experimental results show that the proposed incremental attribute reduction algorithm of positive region in interval-valued decision tables is efficient.
    Co-training algorithm with combination of active learning and density peak clustering
    GONG Yanlu, LYU Jia
    2019, 39(8):  2297-2301.  DOI: 10.11772/j.issn.1001-9081.2019010075
    Abstract | PDF (770KB)
    High-ambiguity samples are easily mislabeled by the co-training algorithm, which decreases classifier accuracy, and the unlabeled data added in each iteration carry too little of the useful information hidden in the unlabeled set. To solve these problems, a co-training algorithm combining active learning and density peak clustering was proposed. Before each iteration, the unlabeled samples with high ambiguity were selected, actively labeled, and added to the labeled sample set; then density peak clustering was applied to the unlabeled samples to obtain the density and relative distance of each unlabeled sample. During each iteration, the unlabeled samples with higher density and larger relative distance were selected for training with the Naive Bayes (NB) classification algorithm. This process was repeated until the termination condition was satisfied. Actively labeling high-ambiguity samples mitigates the mislabeling problem, and density peak clustering selects samples that reflect the spatial structure of the data well. Experimental results on 8 UCI datasets and the pima dataset of Kaggle show that, compared with the SSLNBCA (Semi-Supervised Learning combining NB Co-training with Active learning) algorithm, the accuracy of the proposed algorithm is up to 6.67 percentage points higher, with an average improvement of 1.46 percentage points.
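The two density-peak quantities used for selection — local density rho and relative distance delta — can be sketched with the standard cutoff-kernel definitions: rho counts neighbors within a cutoff distance, and delta is the distance to the nearest point of higher density (or the maximum distance for the densest point). The toy points and cutoff value below are assumptions for illustration.

```python
import numpy as np

def density_peaks(X, dc):
    """Local density rho (cutoff kernel) and relative distance delta.

    X: (n, d) points; dc: cutoff distance. Points with both high rho and
    high delta are the ones that best reflect the data's spatial structure.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (D < dc).sum(axis=1) - 1            # exclude the point itself
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = rho > rho[i]
        delta[i] = D[i, higher].min() if higher.any() else D[i].max()
    return rho, delta

# three clustered points and one far outlier (hypothetical data)
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
rho, delta = density_peaks(X, dc=0.5)
```

In the proposed algorithm, the samples scoring high on both quantities are the ones handed to the NB classifiers each iteration.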
    Cyber security
    Survey on taint analysis technology
    REN Yuzhu, ZHANG Youwei, AI Chengwei
    2019, 39(8):  2302-2309.  DOI: 10.11772/j.issn.1001-9081.2019020238
    Abstract | PDF (1432KB)
    Taint analysis is an important technique for protecting private data and detecting vulnerabilities, and a hot topic in information security research. The research status and development of taint analysis in recent years were surveyed. The theoretical basis of taint analysis and the basic concepts, key techniques and research progress of static and dynamic taint analysis were introduced. From the implementation perspective, the implementation methods, core ideas, and advantages and disadvantages of four kinds of taint analysis techniques, based on hardware, software, virtual environment and code respectively, were expounded. From the perspective of taint data flow, two typical applications in related fields, private data leakage detection and vulnerability detection, were outlined. Finally, the shortcomings of taint analysis were briefly analyzed, and the research prospects and development trends of the technology were discussed.
    Task requirement-oriented user selection incentive mechanism in mobile crowdsensing
    CHEN Xiuhua, LIU Hui, XIONG Jinbo, MA Rong
    2019, 39(8):  2310-2317.  DOI: 10.11772/j.issn.1001-9081.2019010226
    Abstract | PDF (1328KB)
    Most existing incentive mechanisms in mobile crowdsensing are platform-centered or user-centered designs that do not consider sensing task requirements in multiple dimensions, so they can neither select users effectively according to the sensing tasks nor satisfy the maximization and diversification of task requirements. To solve these problems, a Task Requirement-oriented user selection Incentive Mechanism (TRIM), a task-centered design, was proposed. Firstly, sensing tasks were published by the sensing platform according to the task requirements, and task vectors were constructed from multiple dimensions such as task type, spatio-temporal characteristics and sensing reward to optimally express the task requirements. To achieve personalized sensing participation, user vectors were constructed by the sensing users based on their preferences, individual contribution values and expected rewards. Then, the Privacy-preserving Cosine Similarity Computation (PCSC) protocol was introduced to calculate the similarities between sensing tasks and sensing users, and the sensing platform performed user selection based on the similarity comparison results to obtain the target user set, so that the sensing task requirements were better met and user privacy was protected. Finally, simulation results show that, in the matching process between sensing tasks and sensing users, TRIM reduces the exponentially growing computational time overhead and improves computational efficiency compared with an incentive mechanism using the Paillier encryption protocol; compared with an incentive mechanism using PCSC directly, TRIM guarantees the privacy of the sensing users while achieving 98% matching accuracy.
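    The matching score that PCSC computes privately is ordinary cosine similarity between a task vector and a user vector. A plaintext sketch of that score (the protocol itself, which hides the raw vectors from both sides, is not reproduced here) is:

```python
import math

def cosine_similarity(task_vec, user_vec):
    """Cosine similarity between a task vector and a user vector in [-1, 1].
    TRIM computes this same quantity under the privacy-preserving PCSC
    protocol so neither party reveals its raw vector."""
    dot = sum(a * b for a, b in zip(task_vec, user_vec))
    norm = (math.sqrt(sum(a * a for a in task_vec))
            * math.sqrt(sum(b * b for b in user_vec)))
    return dot / norm if norm else 0.0
```

    Identical vectors score 1.0 and orthogonal vectors score 0.0, so the platform can rank users per task by this value.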
    Virtual trajectory filling algorithm for location privacy protection
    FU Yu, WANG Hong
    2019, 39(8):  2318-2325.  DOI: 10.11772/j.issn.1001-9081.2018122585
    Abstract | PDF (1176KB)
    In view of the different constraints on moving objects in road network and Euclidean space environments, a virtual trajectory filling algorithm applicable to both was proposed. The algorithm takes over the interaction between the user and the Location-Based Service (LBS) provider and constructs virtual user trajectories to confuse and fill the real trajectory, thereby hiding and protecting it. Firstly, the target region was partitioned and convergence points were extracted. Then, trajectories were segmented and virtual trajectories were generated based on the convergence points. Finally, reasonable distribution of the virtual trajectories was achieved by a timing preset algorithm and a trajectory confusion filling algorithm, which increases the difficulty of associating trajectory information with a specific target object. Experimental results show that after filling fewer than 15 virtual trajectories per user, the location privacy disclosure probability of the target object drops from 60% to and stabilizes at around 10%, and the trajectory privacy disclosure probability decreases from 50% to and stabilizes at about 6%, achieving a good effect of location privacy protection.
    Multi-hop multi-policy attribute-based fully homomorphic encryption scheme
    YU Qingfei, TU Guangsheng, LI Ningbo, ZHOU Tanping
    2019, 39(8):  2326-2332.  DOI: 10.11772/j.issn.1001-9081.2019010188
    Abstract | PDF (989KB)
    A single-policy attribute-based fully homomorphic encryption scheme cannot perform homomorphic operations and access control on ciphertexts under different attribute vectors corresponding to different policy functions, and ciphertexts of new participants cannot dynamically join the homomorphic operations. To solve these problems, an efficient multi-hop multi-policy attribute-based fully homomorphic encryption scheme based on the Learning With Errors (LWE) problem was proposed. Firstly, the single-policy attribute-based fully homomorphic encryption scheme was appropriately modified. Secondly, the scheme was extended to multi-user scenarios. Finally, a multi-hop multi-policy fully homomorphic transformation mechanism was used to realize homomorphic operations after ciphertexts of new participants join. The proposed scheme is proved to be INDistinguishable under Chosen Plaintext Attack (IND-CPA) in the selective attribute model, and combines the advantages of attribute-based encryption and multi-hop multi-key fully homomorphic encryption. Compared with a multi-policy attribute-based fully homomorphic encryption scheme constructed from a set of target policy functions, the ciphertext/plaintext ratio of the proposed scheme is significantly reduced without changing the size of each participant's secret key.
    Malicious code classification algorithm based on multi-feature fusion
    LANG Dapeng, DING Wei, JIANG Haocheng, CHEN Zhiyuang
    2019, 39(8):  2333-2338.  DOI: 10.11772/j.issn.1001-9081.2019010116
    Abstract | PDF (902KB)
    Most malicious code classification research focuses on family classification or on distinguishing malicious from benign code, while classification into malware categories has been studied relatively little. Therefore, a malicious code classification algorithm based on multi-feature fusion was proposed, in which three groups of features extracted from texture maps and disassembly files were fused for classification. Firstly, gray-level co-occurrence matrix features were extracted from the source files and the disassembly files, and operation code sequences were extracted with the n-gram algorithm. Secondly, an improved Information Gain (IG) algorithm was used to select the operation code features. Thirdly, Random Forest (RF) was used as the classifier to learn the multiple groups of features after normalization. Finally, a random forest classifier based on multi-feature fusion was realized. The proposed algorithm achieves 85% accuracy in training and testing on nine categories of malicious code, which is higher than that of random forest with a single feature, a multi-layer perceptron with multiple features, and a Logistic regression classifier.
    Improved RC4 algorithm based on elliptic curve
    CHEN Hong, LIU Yumeng, XIAO Chenglong, GUO Pengfei, XIAO Zhenjiu
    2019, 39(8):  2339-2345.  DOI: 10.11772/j.issn.1001-9081.2018122459
    Abstract | PDF (1134KB)
    For the problems that the Rivest Cipher 4 (RC4) algorithm has invariant weak keys, that the randomness of its key stream sequence is not high, and that the initial state of the algorithm can be cracked, an improved RC4 algorithm based on elliptic curves was proposed. In the algorithm, the initial key was generated by using an elliptic curve, a Hash function and a pseudo-random number generator, and a nonlinear transformation was performed under the action of the S-box and pointers to finally generate a key stream sequence with high randomness. Randomness tests with the National Institute of Standards and Technology (NIST) suite show that the results of the frequency test, the runs test and Maurer's universal test are 0.13893, 0.13081 and 0.232050 higher respectively than those of the original RC4 algorithm, so the improved algorithm can effectively prevent the generation of invariant weak keys and resist the "sentence" attack. The initial key is a uniformly distributed random number without bias, which can effectively resist distinguishing attacks. The elliptic curve and the Hash function are one-way and irreversible, and the pseudo-random number generator has high cryptographic strength, so the initial key is difficult to guess and not easy to crack, which enables resistance to state guessing attacks. Theoretical and experimental results show that the improved RC4 algorithm has better randomness and security than the original RC4 algorithm.
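    For reference, the unmodified baseline that the paper improves on, standard RC4 with its key-scheduling algorithm (KSA) and pseudo-random generation algorithm (PRGA), can be sketched as:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Standard RC4 stream cipher (the baseline, not the improved scheme).
    Encryption and decryption are the same operation."""
    # Key-Scheduling Algorithm (KSA): permute S under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-Random Generation Algorithm (PRGA): XOR keystream with data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

    The weak-key and state-guessing problems the paper targets stem from this KSA: the first keystream bytes leak key information, which is what replacing the key setup with an elliptic-curve-derived initial key is meant to prevent.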
    Network and communications
    Successful offloading probability analysis in device-to-device caching network based on stochastic geometry
    LONG Yanshan, FU Qinxue, GUO Jibin, ZHANG Mengqi, CAI Yueming
    2019, 39(8):  2346-2353.  DOI: 10.11772/j.issn.1001-9081.2019010141
    Abstract | PDF (1192KB)
    In the Device-to-Device (D2D) caching network where all mobile user terminals are cache-enabled, the spatial distribution of users was modeled as a Homogeneous Poisson Point Process (HPPP). On this basis, combined with the randomness of content caching and requesting, the network interference was analyzed exactly, and then approximately under specific scenarios. Considering that D2D caching has the dual characteristics of caching at terminals and D2D communication, so that content offloading includes both self-offloading and D2D-offloading, and that content transmission must simultaneously satisfy the constraints of the received Signal-to-Interference Ratio (SIR) and the maximal D2D communication distance, closed-form expressions of the Successful Offloading Probability (SOP) of the random D2D caching network were derived. Simulation results show that the proposed SOP is a general metric that reduces to existing research results in special cases. For example, when users are densely distributed and the maximal D2D communication distance is relatively large, the SOP reduces to the Successful Transmission Probability (STP) without the D2D distance constraint.
    Detection method for network-wide persistent flow based on sketch data structure
    ZHOU Aiping, ZHU Chengang
    2019, 39(8):  2354-2358.  DOI: 10.11772/j.issn.1001-9081.2019010203
    Abstract | PDF (790KB)
    Persistent flows are an important feature of stealthy network attacks: they do not generate large volumes of traffic but recur regularly over a long period, which poses a major challenge to traditional detection methods. Moreover, network attacks are hard to observe, and a single monitor carries a heavy load and has only limited information. To address these problems, a method for detecting network-wide persistent flows was proposed. Firstly, a sketch data structure was designed and deployed on each monitor. Secondly, when a flow arrived at a monitor, its summary information was extracted from the network data stream and one bit in the sketch was updated. Thirdly, at the end of the measurement period, the summary information from the other monitors was aggregated by the main monitor. Finally, an approximate estimate of flow persistence was computed: a bit vector was constructed for each flow by simple computation, flow persistence was estimated by a probabilistic statistical method, and persistent flows were detected based on the revised persistence estimate. Experiments conducted on real network traffic show that, compared with the algorithm of Tracing Long duration Flows (TLF), the proposed method increases accuracy by 50% and reduces the false positive rate and false negative rate by 22% and 20% respectively, illustrating that the method can effectively monitor network traffic in high-speed networks.
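    The one-bit-per-flow-per-period idea can be sketched as follows. This is a minimal single-monitor sketch under assumed parameters (one hash function, width 1024); the paper's structure and its statistical correction of the estimate are more elaborate.

```python
import hashlib

class PersistenceSketch:
    """One bit array per measurement period; each observed flow sets one
    bit per period. Persistence is estimated as the number of periods in
    which the flow's bit is set. Hash collisions can only inflate the
    count, which is why the paper revises the raw estimate statistically."""

    def __init__(self, width=1024):
        self.width = width
        self.periods = []   # one bytearray of 0/1 flags per period

    def _bit(self, flow_id: str) -> int:
        digest = hashlib.sha1(flow_id.encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.width

    def new_period(self):
        self.periods.append(bytearray(self.width))

    def record(self, flow_id: str):
        self.periods[-1][self._bit(flow_id)] = 1

    def persistence(self, flow_id: str) -> int:
        b = self._bit(flow_id)
        return sum(p[b] for p in self.periods)
```

    A flow seen in every one of three periods reports persistence 3; a flow seen once reports at least 1 (more only if another flow collides on its bit).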
    WSN clustering routing algorithm based on genetic algorithm and fuzzy C-means clustering
    DONG Fazhi, DING Hongwei, YANG Zhijun, XIONG Chengbiao, ZHANG Yingjie
    2019, 39(8):  2359-2365.  DOI: 10.11772/j.issn.1001-9081.2019010134
    Abstract | PDF (963KB)
    Aiming at the problems of limited node energy, short life cycle and low throughput in Wireless Sensor Networks (WSN), a WSN Clustering Routing algorithm based on Genetic Algorithm (GA) and Fuzzy C-Means (FCM) clustering (GAFCMCR) was proposed, which adopts centralized clustering and distributed cluster head election. During network initialization, network clustering was performed by the base station using an FCM clustering algorithm optimized by GA. The cluster head of the first round was the node closest to the cluster center. From the second round onward, the cluster head election was carried out by the cluster head of the previous round, with the residual energy of each candidate node, its distance to the base station, and its mean distance to the other nodes in the cluster considered in the election, and the weights of these three factors adjusted in real time according to network status. In the data transfer phase, a polling mechanism was introduced into intra-cluster communication. Simulation results show that, compared with the LEACH (Low Energy Adaptive Clustering Hierarchy) algorithm and the K-means-based Uniform Clustering Routing (KUCR) algorithm, GAFCMCR prolongs the network life cycle by 105% and 20% respectively, with good clustering effect, good energy balance and higher throughput.
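    The three-factor election can be sketched as a weighted score per candidate. The linear combination, the sign conventions and the weight values below are illustrative assumptions (the paper adjusts the weights online and does not publish this exact formula in the abstract); node tuples are assumed to be `(x, y, residual_energy)`.

```python
import math

def elect_cluster_head(nodes, base_station, w=(0.5, 0.3, 0.2)):
    """Pick the index of the best cluster-head candidate: higher residual
    energy is rewarded; distance to the base station and mean distance to
    the other cluster members are penalized."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def score(i):
        x, y, energy = nodes[i]
        d_bs = dist((x, y), base_station)
        others = [nodes[j] for j in range(len(nodes)) if j != i]
        d_intra = sum(dist((x, y), (ox, oy)) for ox, oy, _ in others) / len(others)
        return w[0] * energy - w[1] * d_bs - w[2] * d_intra

    return max(range(len(nodes)), key=score)
```

    With comparable positions, the node with markedly higher residual energy wins the election, matching the energy-balancing intent.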
    Link prediction algorithm based on high-order proximity approximation
    YANG Yanlin, YE Zhonglin, ZHAO Haixing, MENG Lei
    2019, 39(8):  2366-2373.  DOI: 10.11772/j.issn.1001-9081.2019010213
    Abstract | PDF (1295KB)
    Most existing link prediction algorithms only study the first-order similarity between nodes and their neighbors, without considering the high-order similarity between nodes and the neighbors of their neighbors. In order to solve this problem, a Link Prediction algorithm based on High-Order Proximity Approximation (LP-HOPA) was proposed. Firstly, the normalized adjacency matrix and the similarity matrix of the network were computed. Secondly, the similarity matrix was factorized by matrix decomposition to obtain the representation vectors of the network nodes and their contexts. Thirdly, the original similarity matrix was optimized to high order by the Network Embedding Update (NEU) algorithm of high-order network representation learning, and the high-order similarity matrix representation was calculated by using the normalized adjacency matrix. Finally, extensive experiments were carried out on four real datasets. Experimental results show that, compared with the original link prediction algorithms, the accuracy of most link prediction algorithms optimized by LP-HOPA is improved by 4% to 50%. In addition, LP-HOPA can transform link prediction algorithms based on low-order local structural information into algorithms based on high-order node characteristics, which confirms the validity and feasibility of link prediction based on high-order proximity approximation to a certain extent.
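    The difference between first-order and high-order proximity can be sketched with powers of the adjacency matrix. This is an illustrative sketch, not the NEU update itself: the raw adjacency matrix is used instead of the normalized one for clarity, and the damping weight `lam` is an assumption.

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def high_order_scores(adj, lam=0.5):
    """Pair scores S = A^2 + lam * A^3. The A^2 term counts common
    neighbours (the first-order evidence classic predictors use); the A^3
    term adds neighbour-of-neighbour evidence, i.e. high-order proximity."""
    a2 = matmul(adj, adj)
    a3 = matmul(a2, adj)
    n = len(adj)
    return [[a2[i][j] + lam * a3[i][j] for j in range(n)] for i in range(n)]
```

    On the path graph 0-1-2-3, nodes 0 and 3 share no common neighbor, so a first-order predictor scores them 0; the high-order term still gives them a positive score through the length-3 walk.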
    FIR correction filter design and FPGA implementation for array mutual coupling error
    YAO Zhicheng, WU Zhihui, YANG Jian, ZHANG Shengkui
    2019, 39(8):  2374-2380.  DOI: 10.11772/j.issn.1001-9081.2019010131
    Abstract | PDF (1001KB)
    Focusing on the issue that a traditional Finite Impulse Response (FIR) filter runs slowly and consumes more resources at high orders, a high-speed high-order FIR filter design method based on piecewise convolution was proposed, in which faster data processing is achieved by parallel processing in the frequency domain. Firstly, the design order M of the filter was determined and used as the reference sequence length, and the input digital signal was delayed by M periods. Secondly, the original sequence and the delayed sequence were each transformed by Fast Fourier Transform (FFT). Thirdly, the transformed sequences were multiplied by the filter response and then transformed back by inverse FFT. Finally, the two data paths were merged by the overlap-save method. Theoretical analysis and simulation tests show that, compared with the traditional distributed method based on Look-Up Tables (LUT), the proposed method saves more than 30% of register resources at the same order. On this basis, measured data from the experimental platform were used for verification. Experimental results show that, compared with the result with uncorrected mutual coupling error, the square root of the corrected amplitude mismatch is less than 1 dB and the root mean square of the phase mismatch is less than 0.1 rad, which fully demonstrates the effectiveness of the method for mutual coupling error correction.
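    The segment-and-discard (overlap-save) partitioning behind the piecewise convolution can be sketched as follows. For brevity this sketch uses direct convolution in place of the FFT/multiply/inverse-FFT stage, which leaves the block bookkeeping, the part being illustrated, unchanged; the block length is an arbitrary choice.

```python
def direct_fir(x, h):
    """Reference FIR: y[n] = sum_k h[k] * x[n-k], zero initial history."""
    M = len(h)
    return [sum(h[k] * (x[n - k] if n - k >= 0 else 0.0) for k in range(M))
            for n in range(len(x))]

def overlap_save_fir(x, h, block=8):
    """Filter x in overlapping segments of length block + M - 1, keeping
    only the last `block` outputs of each segment (overlap-save)."""
    M = len(h)
    xx = [0.0] * (M - 1) + list(x)   # M-1 samples of leading history
    y = []
    for start in range(0, len(x), block):
        seg = xx[start:start + block + M - 1]
        # per-segment convolution (an FFT product in the real design),
        # then discard the first M-1 circularly-contaminated samples
        full = [sum(h[k] * (seg[n - k] if 0 <= n - k < len(seg) else 0.0)
                    for k in range(M)) for n in range(len(seg))]
        y.extend(full[M - 1:M - 1 + block])
    return y[:len(x)]
```

    The blockwise output matches the direct convolution sample for sample, which is why the segments can be processed in parallel without boundary artifacts.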
    Dual-antenna attitude determination algorithm based on low-cost receiver
    WANG Shouhua, LI Yunke, SUN Xiyan, JI Yuanfa
    2019, 39(8):  2381-2385.  DOI: 10.11772/j.issn.1001-9081.2018122554
    Abstract | PDF (723KB)
    Concerning the problem that a low-cost Dual-antenna Attitude determination System (DAS) using the direct solution has low accuracy and gross errors, an improved algorithm based on carrier-phase and pseudo-range double-difference Real-Time Kinematic (RTK) Kalman filtering was proposed. Firstly, the baseline length was employed as an observation, with the precise baseline length obtained in advance taken as the observation error. Secondly, the position of the master antenna was corrected according to the epoch interval of the slave antenna receiver, and the integer ambiguity was solved by the MLAMBDA (Modified LAMBDA) algorithm. Experimental results in static and dynamic modes show that, with a 1.1 m baseline and the combined GPS and BeiDou systems, the heading angle accuracy of the proposed algorithm is about 1 degree and the pitch angle accuracy is about 2-3 degrees. The proposed algorithm greatly improves the robustness and accuracy of the system compared with traditional dual-antenna attitude determination by direct solution.
    Improved data rate change algorithm based on adaptive frame length in short-wave communication
    WANG Ye, HUANG Guoce, DONG Shufu
    2019, 39(8):  2386-2390.  DOI: 10.11772/j.issn.1001-9081.2019010128
    Abstract | PDF (657KB)
    To reduce the high Bit Error Rate (BER) caused by rate oscillation in the traditional Data Rate Change (DRC) algorithm, an improved DRC algorithm based on Adaptive Frame Length (AFL) was proposed for short-wave communication. Firstly, in the initialization phase, the frame length and transmission rate of the initial transmission were determined from the current channel parameters and previous empirical values, and data transmission was started. Then, if two frames of the same length were transmitted successfully in succession, the frame length was increased accordingly; if retransmission failed twice in a row, the frame length was halved for the next transmission. Finally, the frame error rate was calculated for the current frame length, and the data rate was increased if the value was below the preset threshold. Compared with the RapidM DRC, the average link BER of the proposed algorithm is decreased by 1.8 percentage points and the link availability is increased by 11 percentage points. Experimental results show that the proposed algorithm can eliminate rate oscillation and improve the communication capability of short-wave communication systems.
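    The frame-length rule can be sketched as a small state update. The halving rule is stated in the abstract; doubling on two consecutive successes and the length bounds are illustrative assumptions (the abstract only says the length is "increased").

```python
def next_frame_length(length, history, min_len=64, max_len=1024):
    """Adapt the frame length from the two most recent transmission
    results. history is a list of booleans: True = frame delivered,
    False = retransmission needed."""
    if len(history) >= 2 and history[-1] and history[-2]:
        return min(length * 2, max_len)      # grow after two successes
    if len(history) >= 2 and not history[-1] and not history[-2]:
        return max(length // 2, min_len)     # halve after two failures
    return length                            # otherwise hold steady
```

    Holding the length steady on mixed outcomes is what damps the oscillation between adjacent rates that the traditional DRC suffers from.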
    Virtual reality and multimedia computing
    Self-attention network based image super-resolution
    OUYANG Ning, LIANG Ting, LIN Leping
    2019, 39(8):  2391-2395.  DOI: 10.11772/j.issn.1001-9081.2019010158
    Abstract | PDF (798KB)
    Concerning the recovery of high-frequency information such as texture details in image super-resolution reconstruction, an image super-resolution reconstruction method based on a self-attention network was proposed, which restores image accuracy from coarse to fine in two reconstruction stages. In the first stage, a Low-Resolution (LR) image was taken as input to a Convolutional Neural Network (CNN), which output a High-Resolution (HR) image of coarse precision; the coarse HR image was then used as input to produce a finer HR image. In the second stage, the correlations between all position pairs of the features were calculated by the self-attention module, and the global dependencies of the features were captured to enhance texture details. Experimental results on the benchmark datasets show that, compared with state-of-the-art super-resolution algorithms based on deep neural networks, the proposed algorithm not only has the best visual effect, but also improves the Peak Signal-to-Noise Ratio (PSNR) by an average of 0.1 dB and 0.15 dB on Set5 and BSD100 respectively, indicating that the network can enhance the global representation ability of the features to reconstruct high-quality images.
    Image matching algorithm based on improved RANSAC-GMS
    ZHU Chengde, LI Zhiwei, WANG Kai, GAO Yan, GUO Hengchang
    2019, 39(8):  2396-2401.  DOI: 10.11772/j.issn.1001-9081.2018122590
    Abstract | PDF (1003KB)
    To solve the problems of low matching accuracy and long running time of the Scale-Invariant Feature Transform (SIFT) algorithm in image matching, an improved image matching algorithm based on grid motion statistics, namely RANSAC-GMS, was proposed. Firstly, the images were pre-matched by the Oriented FAST and Rotated BRIEF (ORB) algorithm, and Grid-based Motion Statistics (GMS) was used to support the estimator in distinguishing correct matches from false ones. Then, an improved RANdom SAmple Consensus (RANSAC) algorithm was used to filter the feature points according to the distance similarity between matching points, and an evaluation function was used to reorganize the filtered datasets to eliminate mismatched points. Experiments carried out on the Oxford standard image library and on photographs taken in practice show that the average matching accuracy of the proposed algorithm exceeds 91%; compared with algorithms such as GMS, SIFT and ORB, its near-scene and far-scene matching accuracies are improved by 16.15 and 3.56 percentage points respectively. The proposed algorithm can effectively eliminate mismatched points and further improve image matching accuracy.
    Fine-grained vehicle recognition under multiple angles based on multi-scale bilinear convolutional neural network
    LIU Hu, ZHOU Ye, YUAN Jiabin
    2019, 39(8):  2402-2407.  DOI: 10.11772/j.issn.1001-9081.2019010133
    Abstract | PDF (936KB)
    In view of the difficulty of accurately recognizing vehicle types under multiple viewing angles due to scale change and deformation, a fine-grained vehicle recognition model based on Multi-Scale Bilinear Convolutional Neural Network (MS-B-CNN) was proposed. Firstly, B-CNN was improved into MS-B-CNN to fuse the features of different convolutional layers at multiple scales and improve feature expression ability. In addition, a joint learning strategy based on center loss and Softmax loss was adopted: on the basis of Softmax loss, a class center was maintained in feature space for each category of the training set, and as new samples were added during training, their distances to the class centers were constrained, improving vehicle recognition under multiple angles. Experimental results show that the proposed model achieves 93.63% accuracy on the CompCars dataset, verifying its accuracy and robustness under multiple viewing angles.
    Weakly supervised action localization based on action template matching
    SHI Xiangbin, ZHOU Jincheng, LIU Cuiwei
    2019, 39(8):  2408-2413.  DOI: 10.11772/j.issn.1001-9081.2019010139
    Abstract | PDF (964KB)
    To solve the problem of action localization in video, a weakly supervised method based on template matching was proposed. Firstly, several candidate bounding boxes of the action subject were generated on each frame of the video, and these candidate boxes were connected in chronological order to form action proposals. Secondly, action templates were obtained from some frames of the training videos. Finally, the optimal model parameters were obtained by training the model with the action proposals and action templates. In experiments on the UCF-Sports dataset, the method improves action classification accuracy by 0.3 percentage points compared with the TLSVM (Transfer Latent Support Vector Machine) method, and with an overlap threshold of 0.2, improves action localization accuracy by 28.21 percentage points compared with the CRANE method. Experimental results show that the proposed method not only reduces the workload of dataset annotation but also improves the accuracy of action classification and action localization.
    Desktop dust detection algorithm based on gray gradient co-occurrence matrix
    ZHANG Yubo, ZHANG Yadong, ZHANG Bin
    2019, 39(8):  2414-2419.  DOI: 10.11772/j.issn.1001-9081.2019010081
    Abstract | PDF (1004KB)
    An image similarity algorithm based on the Lance-Williams distance was proposed to solve the problem that, in desktop dust detection, the similarity boundary between dust and dust-free images is not obvious when illumination changes. In the algorithm, the Lance-Williams distance between the template image and the image with or without dust was converted to a similarity value in (0, 1], and the differences between similarity values were expanded using the properties of the exponential function. To enhance the dust texture feature information, the gray image was convolved with the Laplacian operator, after which feature parameters were obtained with a co-occurrence matrix feature extraction algorithm and combined into a one-dimensional vector. The similarity between the feature parameter vectors of the template image and the image to be detected was calculated by the improved similarity algorithm to determine whether the desktop is dusty. Experimental results show that, in the illumination range of 300-900 lux, the similarity between dust-free images is above 90.01% and the similarity between dust and dust-free images is below 62.57%, so the average of the two similarities can be used as the threshold for judging whether the desktop is dusty when illumination changes.
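    The distance-to-similarity conversion can be sketched as follows. Mapping through `exp(-d)` is one natural reading of "converted to a similarity value of (0, 1]" with exponential expansion; the exact mapping used in the paper may differ.

```python
import math

def lance_williams(u, v):
    """Lance-Williams (Canberra-type) distance between feature vectors:
    sum of |a-b| over sum of (|a|+|b|), giving a value in [0, 1]."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(abs(a) + abs(b) for a, b in zip(u, v))
    return num / den if den else 0.0

def similarity(u, v):
    """Map the distance into (0, 1]; exp(-d) both bounds the score and
    stretches small differences between nearby distances."""
    return math.exp(-lance_williams(u, v))
```

    Identical feature vectors score exactly 1.0, and any difference pushes the score strictly below 1, which is what makes a fixed threshold workable across illumination levels.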
    Automatic segmentation algorithm for single organ of CT images based on cascaded Vnet-S network
    XU Baoquan, LING Tonghui
    2019, 39(8):  2420-2425.  DOI: 10.11772/j.issn.1001-9081.2018122445
    Abstract | PDF (1098KB)
    In order to realize fast and accurate segmentation of organs in Computed Tomography (CT) images, an automatic single-organ segmentation algorithm based on a cascaded Vnet-S network was proposed. Firstly, the organ in the CT image was coarsely segmented by the first Vnet-S network. Then, the largest connected component in the segmentation result was selected and dilated twice, and the organ boundary was determined and the organ region was extracted according to the dilated component. Finally, the organ was finely segmented by the second Vnet-S network. To verify the performance of the proposed algorithm, a liver segmentation experiment was carried out on the MICCAI 2017 Liver Tumor Segmentation Challenge (LiTS) dataset, and a lung segmentation experiment on the ISBI LUng Nodule Analysis 2016 (LUNA16) dataset. The cascaded Vnet-S algorithm achieves a Dice coefficient of 0.9600 on the 70 online test cases of LiTS and 0.9810 on the 288 cases of LUNA16, both higher than those of the single Vnet-S network and the Vnet network. Experimental results show that, compared with the Vnet and Unet networks, the cascaded Vnet-S single-organ segmentation algorithm can accurately segment organs with lower computational complexity.
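    The largest-connected-component selection between the coarse and fine networks can be sketched as follows. For brevity this sketch works on a 2D binary mask with 4-connectivity; CT volumes are 3D, so the real step would use 3D connectivity.

```python
from collections import deque

def largest_component(mask):
    """Return a mask keeping only the largest 4-connected foreground
    region of a 2D binary mask, discarding smaller false-positive blobs
    from the coarse segmentation."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])   # BFS over one blob
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```

    The surviving region would then be dilated to give the second network a slightly enlarged crop around the organ.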
    Dual mini micro-array speech enhancement algorithm under multi-noise environment
    LUO Ying, ZENG Qingning, LONG Chao
    2019, 39(8):  2426-2430.  DOI: 10.11772/j.issn.1001-9081.2018122494
    Abstract | PDF (772KB)
    In order to improve the denoising performance of a dual mini micro-array speech enhancement system in multi-noise environments, an improved generalized sidelobe canceller speech enhancement algorithm for the dual mini micro-array was proposed. According to the structural characteristics of the dual mini micro-array, firstly, an improved coherent filtering algorithm based on noise cross-power spectrum estimation was used to eliminate the weakly correlated noise between widely spaced microphones. Secondly, the strongly correlated noise between closely spaced microphones was eliminated by a generalized sidelobe cancelling algorithm. Finally, minima-controlled recursive averaging based sub-band spectral subtraction was used to eliminate the residual noise in each sub-band in a targeted way. Experimental results show that the proposed algorithm achieves better perceptual evaluation of speech quality scores than existing dual mini micro-array speech enhancement algorithms in multi-noise environments, and improves the suppression of complex noise by the dual mini micro-array speech enhancement system to a certain extent.
    Frontier & interdisciplinary applications
    Online task scheduling algorithm for big data analytics based on cumulative running work
    LI Yefei, XU Chao, XU Daoqiang, ZOU Yunfeng, ZHANG Xiaoda, QIAN Zhuzhong
    2019, 39(8):  2431-2437.  DOI: 10.11772/j.issn.1001-9081.2019010073
    Abstract | PDF (1056KB)
    A Cumulative Running Work (CRW) based task scheduler, CRWScheduler, was proposed to effectively schedule jobs without any prior knowledge on big data analytics platforms such as Hadoop and Spark. A running job is moved from a low-weight queue to a high-weight one based on its CRW. When resources are allocated to a job, both the queue of the job and its instantaneous resource utilization are considered, significantly improving overall system performance without prior knowledge. A prototype of CRWScheduler was implemented on Apache Hadoop YARN. Experimental results on a 28-node benchmark cluster show that CRWScheduler reduces the average Job Flow Time (JFT) by 21% and the 95th-percentile JFT by up to 35% compared with the YARN fair scheduler, and further improvements are obtained when it cooperates with task-level schedulers.
    Efficient traceability system for quality and safety of agricultural products based on consortium blockchain
    WANG Keke, CHEN Zhide, XU Jian
    2019, 39(8):  2438-2443.  DOI: 10.11772/j.issn.1001-9081.2019020235
    Abstract | PDF (952KB)
    Concerning the security and efficiency problems of agricultural product traceability systems, an efficient solution based on consortium blockchain was proposed, building on the decentralized security of blockchain. Firstly, the agricultural product data was hashed through the Inter-Planetary File System (IPFS) to reduce the data size of a single transaction in the block, and an initial integrity guarantee was obtained from the irreversibility of IPFS data. Secondly, a consortium blockchain model for data verification was established, with the Practical Byzantine Fault Tolerance (PBFT) algorithm as the consensus algorithm, to reduce the network-wide consensus time. Finally, the verification-transaction time curve was fitted according to the number of participating nodes, block size and network bandwidth in the simulation experiments, and the blockchain transaction efficiency under different bandwidths was calculated; a comparison with a blockchain double-chain structure was then carried out for a realistic agricultural product traceability scenario with tens of thousands of participating sensors. Experimental results show that with fewer than 1000 verification nodes, the maximum consensus time of the blockchain is 32 min, and the consortium blockchain system can support 350000 to 400000 sensor data records, so it can be applied to large-scale, data-intensive agricultural product traceability.
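The idea of putting only a digest of the off-chain record into the block can be sketched with standard SHA-256 content addressing (the function names are illustrative; this is not the system's actual code):

```python
import hashlib
import json

def content_address(record: dict) -> str:
    """Hash a sensor record deterministically so that only the digest
    goes into the block, keeping a single transaction small."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def verify(record: dict, digest: str) -> bool:
    """Re-hash the off-chain record and compare with the on-chain digest;
    any tampering with the record changes the hash."""
    return content_address(record) == digest
```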
    Modeling and optimization of disaster relief vehicle routing problem considering urgency
    ZHANG Yuzhou, XU Tingzheng, ZHENG Junshuai, RAO Shun
    2019, 39(8):  2444-2449.  DOI: 10.11772/j.issn.1001-9081.2018122516
    Abstract | PDF (962KB)
    In order to reduce the delay time of disaster relief material distribution and the total transportation time of disaster relief vehicles, the concept of urgency was introduced to establish an urgency-based vehicle routing problem model for disaster relief vehicles, and an improved Genetic Algorithm (GA) was designed to solve it. Firstly, multiple strategies were used to generate the initial population. Then, an urgency-based task redistribution algorithm was proposed as the local search operator, optimizing both the delay time and the total transportation time according to urgency: the delay time was reduced by rescheduling vehicles or adjusting the delivery sequence for delayed sites, and the routes of vehicles without delay were optimized to reduce the total transportation time. In the experiments, the proposed algorithm was compared with the First-Come-First-Served (FCFS) algorithm, Sorting by URGency (URGS) and GA on 17 datasets. Results show that the Genetic Algorithm with Task Redistribution strategy based on Urgency Degree (TRUD-GA) reduces the average delay time by 25.0% and the average transportation time by 1.9% compared with GA, with even larger improvements over the FCFS and URGS algorithms.
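An urgency-weighted delay objective that such a model could minimize might look like the following sketch; the linear weighting is an assumption for illustration, and the paper's exact objective may differ:

```python
def weighted_delay(arrival_times, deadlines, urgencies):
    """Urgency-weighted total delay: a late delivery at a more urgent
    site costs proportionally more, so a search minimizing this sum
    prefers to serve urgent sites on time first."""
    return sum(u * max(0.0, t - d)
               for t, d, u in zip(arrival_times, deadlines, urgencies))
```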
    Regional bullying recognition based on joint hierarchical attentional network and independent recurrent neural network
    MENG Zhao, TIAN Shengwei, YU Long, WANG Ruijin
    2019, 39(8):  2450-2455.  DOI: 10.11772/j.issn.1001-9081.2019010033
    Abstract | PDF (983KB)
    In order to improve the utilization of deep contextual information in text, a regional bullying semantic recognition model called HACBI (HAN_CNN_BiLSTM_IndRNN) was proposed based on the Hierarchical Attention Network (HAN) and the Independent Recurrent Neural Network (IndRNN). Firstly, manually annotated regional bullying texts were mapped into a low-dimensional vector space by word embedding. Secondly, the local and global semantic information of the bullying texts was extracted by a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network, and the internal structure information of the texts was captured by HAN. Finally, to avoid the loss of text hierarchy information and alleviate the vanishing gradient problem, IndRNN was introduced to enhance the descriptive ability of the model and integrate the information flow. Experimental results show that the model achieves Accuracy (Acc), Precision (P), Recall (R), F1-measure (F1) and Area Under Curve (AUC) values of 99.57%, 98.54%, 99.02%, 98.78% and 99.35% respectively, indicating that HACBI significantly outperforms text classification models such as Support Vector Machine (SVM) and CNN.
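The attention step, which weights each word's contribution to the text representation, can be sketched in plain Python as generic softmax attention pooling (an illustration, not the exact HACBI layer):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_pool(hidden, scores):
    """Weight each time step's hidden vector by its attention weight and
    sum, so words with higher scores contribute more to the text vector."""
    alphas = softmax(scores)
    dim = len(hidden[0])
    return [sum(a * h[j] for a, h in zip(alphas, hidden)) for j in range(dim)]
```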
    Gamification design and effect analysis of color education
    LYU Ruimin, YANG Fan, LU Jing, CHEN Wei
    2019, 39(8):  2456-2461.  DOI: 10.11772/j.issn.1001-9081.2019010106
    Abstract | PDF (875KB)
    Current research generally focuses on applying gamification to improve learning engagement. However, research on gamification in specific fields such as color education is insufficient, and analysis of gamification elements and of the factors influencing learning effects is lacking. To address these problems, a game model for training color recognition was designed. Firstly, two ways of playing were designed with the same core gameplay but different interaction modes. Then, the same virtual reward was added to both. Finally, the learning effects of the two ways of playing were compared with and without the virtual reward, and the effect of the virtual reward within the same way of playing was compared. The results show that the gameplay design mainly affects learning efficiency, while the virtual reward mainly affects engagement.
    Semi-exponential gradient strategy and empirical analysis for online portfolio selection
    WU Wanting, ZHU Yan, HUANG Dingjiang
    2019, 39(8):  2462-2467.  DOI: 10.11772/j.issn.1001-9081.2018122588
    Abstract | PDF (935KB)
    Since the high-frequency asset allocation adjustment of traditional portfolio strategies in every investment period results in high transaction costs and poor final returns, a Semi-Exponential Gradient portfolio (SEG) strategy based on machine learning and online learning was proposed. Firstly, the SEG strategy model was established by adjusting the portfolio only in the initial period of each segment of the investment horizon and not trading in the remaining periods, and an objective function was constructed by combining return and loss. Secondly, the closed-form solution of the iterative portfolio update was derived by the factor graph algorithm, and a theorem on the upper bound of the cumulative loss, together with its proof, was given, theoretically guaranteeing the return performance of the strategy. Experiments were performed on several datasets such as the New York Stock Exchange dataset. Experimental results show that the proposed strategy maintains a high return even in the presence of transaction costs, confirming its insensitivity to transaction costs.
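A single exponential-gradient update, which a semi-exponential strategy would apply only at the start of each segment, can be sketched as below. This is the standard EG step with a learning rate `eta`; the paper's closed-form solution derived via the factor graph algorithm may differ:

```python
import math

def eg_update(weights, price_relatives, eta=0.05):
    """One exponential-gradient step: tilt the portfolio toward assets
    that just performed well relative to the portfolio return, then
    renormalize so the weights again sum to one."""
    ret = sum(w * x for w, x in zip(weights, price_relatives))
    raw = [w * math.exp(eta * x / ret)
           for w, x in zip(weights, price_relatives)]
    s = sum(raw)
    return [r / s for r in raw]
```

Between segment boundaries the semi-exponential variant would simply keep the current weights, avoiding per-period transaction costs.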
    Bayesian network-based floor localization algorithm
    ZHANG Bang, ZHU Jinxin, XU Zhengyi, LIU Pan, WEI Jianming
    2019, 39(8):  2468-2474.  DOI: 10.11772/j.issn.1001-9081.2019010119
    Abstract | PDF (1037KB)
    For the problem of large floor localization errors in indoor positioning and navigation when only the pedestrian's height displacement is considered, a Bayesian network-based floor localization algorithm was proposed. Firstly, an Extended Kalman Filter (EKF) was adopted to calculate the pedestrian's vertical displacement by fusing inertial sensor data and barometer data. Then, error-compensated acceleration integral features were used to detect the corners when the pedestrian went upstairs or downstairs. Finally, a Bayesian network was introduced to place the pedestrian on the most likely floor by fusing the walking height and corner information. Experimental results show that, compared with a floor localization algorithm based on height displacement alone, the proposed algorithm improves the floor localization accuracy by 6.81%; compared with a platform-based detection algorithm, it improves the accuracy by 14.51%. In addition, the proposed algorithm achieves a floor localization accuracy of 99.36% over a total of 1247 floor-changing experiments.
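The Bayesian fusion step can be sketched as a discrete posterior update over candidate floors, applied once per evidence source (height, then corners); the likelihood values below are illustrative assumptions:

```python
def bayes_update(prior, likelihood):
    """Multiply a prior distribution over floors by an observation
    likelihood and renormalize; repeating this for each evidence source
    (height displacement, corner count) fuses them into one posterior."""
    post = [p * l for p, l in zip(prior, likelihood)]
    s = sum(post)
    return [p / s for p in post]
```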
    Target recognition algorithm for urban management cases by mobile devices based on MobileNet
    YANG Huihua, ZHANG Tianyu, LI Lingqiao, PAN Xipeng
    2019, 39(8):  2475-2479.  DOI: 10.11772/j.issn.1001-9081.2019010232
    Abstract | PDF (819KB)
    To address the monitoring blind spots of fixed surveillance cameras installed in large numbers and the low hardware performance of mobile devices, an urban management case target recognition algorithm that can run on low-performance iOS mobile devices was proposed. Firstly, the numbers of channels of the input and output images and the number of feature maps generated by each channel were optimized by adding new hyperparameters to MobileNet. Secondly, a new recognition algorithm was formed by combining the improved MobileNet with the SSD detection framework and was ported to iOS mobile devices. Finally, accurate detection of 8 common urban management case targets was achieved by the proposed algorithm on scene video captured with the device camera. The mean Average Precision (mAP) of the proposed algorithm was 15.5 percentage points and 10.4 percentage points higher than that of the original YOLO and the original SSD, respectively. Experimental results show that the proposed algorithm runs smoothly on low-performance iOS mobile devices, reduces monitoring blind spots, and provides technical support for urban management teams to speed up the classification and processing of cases.
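The efficiency motivation for MobileNet's depthwise separable convolutions can be checked with a small cost model using standard multiply-add counts (a generic illustration, not the paper's modified network):

```python
def conv_cost(k, cin, cout, h, w):
    """Multiply-adds of a standard k x k convolution on an h x w map."""
    return k * k * cin * cout * h * w

def dw_separable_cost(k, cin, cout, h, w):
    """Depthwise k x k convolution plus 1 x 1 pointwise convolution,
    the factorization used by MobileNet blocks."""
    return k * k * cin * h * w + cin * cout * h * w
```

The ratio of the two costs is 1/cout + 1/k^2, which is what makes the network light enough for low-performance mobile devices.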
    Motor imagery electroencephalogram signal recognition method based on convolutional neural network in time-frequency domain
    HU Zhangfang, ZHANG Li, HUANG Lijia, LUO Yuan
    2019, 39(8):  2480-2483.  DOI: 10.11772/j.issn.1001-9081.2018122553
    Abstract | PDF (643KB)
    To solve the problem of the low recognition rate of motor imagery ElectroEncephaloGram (EEG) signals, and considering that EEG signals contain abundant time-frequency information, a recognition method based on a Convolutional Neural Network (CNN) in the time-frequency domain was proposed. Firstly, the Short-Time Fourier Transform (STFT) was applied to the relevant frequency bands of the EEG signals to construct a two-dimensional time-frequency map composed of the time-frequency maps of multiple electrodes, which was used as the input of the CNN. Secondly, focusing on the time-frequency characteristics of this map, a novel CNN structure was designed using one-dimensional convolution. Finally, the features extracted by the CNN were classified by a Support Vector Machine (SVM). Experimental results on a BCI dataset show that the average recognition rate of the proposed method is 86.5%, higher than that of traditional motor imagery EEG signal recognition methods; the method has also been applied to an intelligent wheelchair, demonstrating its effectiveness.
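The STFT preprocessing that builds the time-frequency map can be sketched with NumPy; the window length and hop size here are illustrative, not the paper's settings:

```python
import numpy as np

def stft_map(x, win=64, hop=32):
    """Magnitude STFT: slide a Hann window over the signal and take an
    FFT per frame, yielding the 2-D time-frequency map (frames x bins)
    that a CNN can consume as an image-like input."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, win//2 + 1)
```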
    Application of improved GoogLeNet based on weak supervision in DR detection
    DING Yingzi, DING Xiangqian, GUO Baoqi
    2019, 39(8):  2484-2488.  DOI: 10.11772/j.issn.1001-9081.2019010225
    Abstract | PDF (750KB)
    To handle the issues of small sample size and multi-target detection in the graded detection of diabetic retinopathy, a weakly supervised target detection network based on an improved GoogLeNet was proposed. Firstly, the GoogLeNet network was improved: the last fully-connected layer was removed so that the position information of the detection targets was retained, a global max pooling layer was added, and the sigmoid cross entropy was used as the training objective, yielding feature maps carrying the position information of multiple features. Secondly, following the weakly supervised approach, only category labels were used to train the network. Thirdly, a connected region algorithm was designed to compute the boundary coordinate sets of the feature connected regions. Finally, bounding boxes were used to locate the lesions in the test images. Experimental results show that under the small-sample condition, the accuracy of the improved model reaches 94%, which is improved by 10% compared with the SSD (Single Shot MultiBox Detector) algorithm. The improved model realizes end-to-end lesion recognition under small-sample conditions, and its high accuracy supports its application in fundus screening.
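The connected-region step that turns a thresholded feature map into lesion bounding boxes can be sketched as a standard 4-connected component search (an illustration, not the paper's algorithm):

```python
def connected_regions(mask):
    """Return (row0, col0, row1, col1) bounding boxes of the 4-connected
    components of a binary feature map, via iterative flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                stack, (r0, c0, r1, c1) = [(i, j)], (i, j, i, j)
                seen[i][j] = True
                while stack:
                    r, c = stack.pop()
                    r0, c0 = min(r0, r), min(c0, c)
                    r1, c1 = max(r1, r), max(c1, c)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and mask[nr][nc] and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                boxes.append((r0, c0, r1, c1))
    return boxes
```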
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn