Table of Contents

    10 October 2017, Volume 37 Issue 10
    High-speed mobile time-varying channel modeling under U-shaped groove
    LIAO Yong, HU Yi
    2017, 37(10):  2735-2741.  DOI: 10.11772/j.issn.1001-9081.2017.10.2735
    With the rapid development of domestic high-speed railway construction, passenger demand for mobile office and entertainment on high-speed trains is growing rapidly. However, neither the existing cellular mobile communication networks nor the proprietary Global System for Mobile communication-Railway (GSM-R) network can satisfy this demand for broadband wireless Quality of Service (QoS). A high-speed train passes through many complex scenarios during actual operation, and the U-shaped groove is a common one; however, time-varying channel modeling for the U-shaped groove scenario in a high-speed mobile environment has not been fully studied. Therefore, a U-shaped groove time-varying channel modeling method for the high-speed mobile environment was proposed and simulated. Firstly, geometric random distribution theory was used to establish a geometric distribution model for the high-speed railway scenario in a U-shaped groove, and the variation of scatterers was analyzed. Closed-form mathematical expressions for parameters such as the line-of-sight distribution, time-varying angular spread and time-varying Doppler spread were derived, and a closed-form solution of the channel impulse response was given. Secondly, the time-variant space-time cross-correlation function, time-variant auto-correlation function and time-variant space-Doppler power spectral density were analyzed. Finally, simulations of the statistical properties were carried out to verify the proposed model. The simulation results show that the proposed model exhibits time-varying behavior and high correlation, which verifies the non-stationarity of the high-speed wireless channel and matches its characteristics.
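The time-varying Doppler behavior such a model captures can be illustrated with a small sketch: as the train passes a fixed scatterer, the angle of arrival, and hence the Doppler shift, changes sign. The geometry below (train on a straight track, one static scatterer) is an illustrative assumption, not the paper's exact model.

```python
import math

def doppler_shift(v, fc, theta):
    """Instantaneous Doppler shift (Hz) for speed v (m/s), carrier fc (Hz),
    and angle theta (rad) between the velocity vector and the arrival path."""
    c = 3e8  # speed of light, m/s
    return (v * fc / c) * math.cos(theta)

def time_varying_doppler(v, fc, d_perp, x0, t):
    """Doppler at time t for a train moving along the x-axis past a static
    scatterer offset d_perp metres from the track, starting x0 metres away."""
    x = x0 - v * t                 # remaining along-track distance
    theta = math.atan2(d_perp, x)  # time-varying angle of arrival
    return doppler_shift(v, fc, theta)
```

At theta = 0 the shift is the classical maximum v*fc/c; after the train passes the scatterer the shift turns negative, which is the non-stationary behavior the abstract refers to.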
    Secure data storage and sharing system based on consortium blockchain in smart grid
    WU Zhenquan, LIANG Yuhui, KANG Jiawen, YU Rong, HE Zhaoshui
    2017, 37(10):  2742-2747.  DOI: 10.11772/j.issn.1001-9081.2017.10.2742
    In order to realize reliable, safe and efficient power grids, Wireless Sensor Networks (WSNs) are widely deployed in smart grids to monitor the grid and deal with emergencies in time. In existing smart grids, the sensing data of the WSNs need to be uploaded to a trusted central node for storage and sharing. However, this centralized approach suffers from many security problems, including single point of failure and data tampering. To address these problems, the emerging consortium blockchain technology was exploited to form a Data Storage Consortium Blockchain (DSCB) consisting of pre-selected data collection base stations in the smart grid. In DSCB, data sharing was accomplished by smart contracts: the constraints on data sharing were set by data owners, and computer code was used in place of legal terms to regulate data visitors, thus achieving a decentralized, safe and reliable data storage database. Security analysis shows that DSCB can achieve safe and effective data storage and sharing.
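The idea of owner-defined constraints enforced in code rather than legal terms can be sketched as a minimal access policy. All names and fields here (`SharingContract`, `may_read`, the expiry rule) are illustrative assumptions, not the DSCB contract interface.

```python
import time

class SharingContract:
    """Toy analogue of a data-sharing smart contract: the owner registers
    the allowed readers and an expiry, and a read is served only while
    every constraint holds."""

    def __init__(self, owner, allowed_readers, expires_at):
        self.owner = owner
        self.allowed_readers = set(allowed_readers)
        self.expires_at = expires_at  # Unix timestamp

    def may_read(self, reader, now=None):
        now = time.time() if now is None else now
        return reader in self.allowed_readers and now < self.expires_at

contract = SharingContract("station_7", {"grid_operator"}, expires_at=2_000_000_000)
```

A real consortium chain would evaluate such a predicate inside the contract runtime so that no single node can bypass it.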
    High efficiency medium access control protocol based on cooperative network coding
    YAO Yukun, LI Xiaoyong, REN Zhi, LIU Jiangbing
    2017, 37(10):  2748-2753.  DOI: 10.11772/j.issn.1001-9081.2017.10.2748
    The existing Network Coding Aware Cooperative MAC (NCAC-MAC) protocol for ad hoc networks does not consider the transmission energy consumption of nodes, and the control message sent by a candidate cooperative relay node cannot make other candidate nodes outside its communication range give up the competition, thus causing collisions. To deal with these problems, a High Efficiency Medium Access Control (MAC) protocol based on Cooperative Network Coding (HECNC-MAC) was proposed. The protocol applies three optimizations. Firstly, a candidate cooperative relay node prejudges whether the destination node can decode the packet, so as to reduce the number of competing relay nodes and ensure that the destination node can decode successfully. Secondly, the transmission energy consumption of nodes is synthetically considered when selecting the cooperative relay node. Finally, the Eager To Help (ETH) message is canceled, and the destination node sends confirmation messages through pseudo-broadcast. Theoretical analysis and simulation results show that, in comparison experiments with the Carrier Sense Multiple Access (CSMA), Phoenix and NCAC-MAC protocols, HECNC-MAC can effectively reduce the transmission energy consumption of nodes and the end-to-end delay of data packets, and improve the network throughput.
    Multi-constraints deadline-aware task scheduling heuristic in virtual clouds
    ZHANG Yi, CHENG Xiaohui, CHEN Liuhua
    2017, 37(10):  2754-2759.  DOI: 10.11772/j.issn.1001-9081.2017.10.2754
    Many existing scheduling approaches in cloud data centers try to consolidate Virtual Machines (VMs) by VM live migration to minimize the number of Physical Machines (PMs) and hence the energy consumption; however, live migration introduces high overhead, and the cost factor that leads to high payment for cloud users is usually not taken into account. Aiming at energy reduction for cloud providers and payment saving for cloud users, as well as guaranteeing the deadlines of user tasks, a heuristic task scheduling algorithm called Energy and Deadline-Aware with Non-Migration Scheduling (EDA-NMS) was proposed. The execution of tasks with loose deadlines is postponed to avoid waking up new PMs and incurring migration overhead, thus reducing energy consumption. The results of extensive experiments show that, compared with the Proactive and Reactive Scheduling (PRS) algorithm, EDA-NMS reduces static energy consumption by selecting a smart VM combination scheme and ensures the lowest payment while meeting the deadline requirements of key user tasks.
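The core postponement decision can be sketched as a slack test: a task waits instead of waking a new PM while its remaining time comfortably exceeds its estimated runtime. The `slack_factor` safety margin is an illustrative assumption, not a value from the paper.

```python
def should_postpone(now, deadline, est_runtime, slack_factor=1.2):
    """Postpone a task (avoiding a PM wake-up) while the time left until
    its deadline exceeds the estimated runtime by a safety margin."""
    return (deadline - now) > slack_factor * est_runtime
```

A scheduler would re-evaluate this test each scheduling round, so a postponed task is eventually forced onto a machine once its slack runs out.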
    Dynamic data stream load balancing strategy based on load awareness
    LI Ziyang, YU Jiong, BIAN Chen, WANG Yuefei, LU Liang
    2017, 37(10):  2760-2766.  DOI: 10.11772/j.issn.1001-9081.2017.10.2760
    Concerning the problems of unbalanced load and incomplete node evaluation in big data stream processing platforms, a dynamic load balancing strategy based on a load awareness algorithm was proposed and applied to the data stream processing platform Apache Flink. Firstly, the computational delay of the nodes was obtained by applying depth-first search to the Directed Acyclic Graph (DAG) of the topology and used as the basis for evaluating node performance, and the load balancing strategy was created accordingly. Secondly, load migration for data streams was implemented based on a data block management strategy, and both global and local load optimization were achieved through feedback. Finally, the feasibility of the algorithm was proved by evaluating its time and space complexity, and the influence of important parameters on its execution was discussed. The experimental results show that the proposed algorithm increases task execution efficiency by optimizing load sharing between nodes, and shortens task execution time by 6.51% on average compared with the traditional load balancing strategy of Apache Flink.
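Obtaining a node's computational delay by depth-first search over the DAG can be sketched as a longest-delay-path computation. The topology and delay figures are made up for illustration; a Flink job graph would supply the real operators and measured delays.

```python
def max_path_delay(dag, delay, node, memo=None):
    """Longest cumulative processing delay from `node` to any sink of the
    DAG, found by depth-first search with memoisation. `dag` maps a node
    to its downstream neighbours; `delay` maps a node to its own delay."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    tail = max((max_path_delay(dag, delay, n, memo) for n in dag.get(node, [])),
               default=0)
    memo[node] = delay[node] + tail
    return memo[node]

# Hypothetical four-operator topology: src fans out to op1/op2, both feed sink.
topology = {"src": ["op1", "op2"], "op1": ["sink"], "op2": ["sink"], "sink": []}
delays = {"src": 1, "op1": 5, "op2": 2, "sink": 1}
```

The critical path (src, op1, sink) dominates, so op1 would be the first candidate for load migration.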
    Improved particle swarm optimization algorithm based on Hamming distance for traveling salesman problem
    QIAO Shen, LYU Zhimin, ZHANG Nan
    2017, 37(10):  2767-2772.  DOI: 10.11772/j.issn.1001-9081.2017.10.2767
    An improved Particle Swarm Optimization (PSO) algorithm based on Hamming distance was proposed to solve discrete problems. The basic idea and process of traditional PSO were retained, and a new velocity representation based on Hamming distance was defined. Meanwhile, to make the algorithm more efficient and keep the iterative process from falling into local optima, new 2-opt and 3-opt operators were designed, and a random greedy rule was used to improve solution quality and speed up convergence. In the later stage of the algorithm, to increase the global search ability of the particles over the whole solution space, some particles were regenerated to re-explore the solution space. Finally, a number of standard TSP instances were used to verify the effectiveness of the proposed algorithm. The experimental results show that for small-scale TSP instances the proposed algorithm can find the best known solutions; for large-scale instances, for example with more than 100 cities, satisfactory solutions can also be found, and the deviations from the best known solutions are small, usually within 5%.
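The two building blocks named in the abstract are standard and easy to sketch: the Hamming distance between two tours (used here as the discrete "velocity" magnitude) and the classic 2-opt move. How the paper combines them into a velocity update is not reproduced; this only shows the operators themselves.

```python
def hamming(tour_a, tour_b):
    """Hamming distance between two tours of equal length: the number of
    positions at which the visited cities differ."""
    return sum(a != b for a, b in zip(tour_a, tour_b))

def two_opt(tour, i, k):
    """Classic 2-opt move: reverse the segment tour[i..k], reconnecting
    two edges of the tour in the opposite orientation."""
    return tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]
```

A 3-opt operator generalises this by removing three edges and reconnecting the resulting segments.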
    Multi-population artificial bee colony algorithm based on hybrid search
    CHEN Hao, ZHANG Jie, YANG Qingping, DONG Yaya, XIAO Lixue, JI Minjie
    2017, 37(10):  2773-2779.  DOI: 10.11772/j.issn.1001-9081.2017.10.2773
    Aiming at the problems of the Artificial Bee Colony (ABC) algorithm, namely its single search mechanism and the high coupling between global search and local search, a Multi-Population ABC (MPABC) algorithm based on hybrid search was proposed. Firstly, the population was sorted by fitness value to obtain an ordered queue, which was divided into three sorted subgroups: a random subgroup, a core subgroup and a balanced subgroup. Secondly, for each subgroup, different difference vectors were constructed according to the corresponding individual selection mechanism and search strategy. Finally, during the group search, individuals with different fitness values were effectively controlled through the three subgroups, thus improving the balance between global search and local search. The simulation results on 16 benchmark functions show that, compared with the ABC algorithm with Variable Search Strategy (ABCVSS), the Modified ABC algorithm based on selection probability (MABC), the Particle Swarm-inspired Multi-Elitist ABC (PS-MEABC) algorithm, the Multi-Search Strategy ABC (MSSABC) and the Improved ABC algorithm for optimizing high-dimensional complex functions (IABC), MPABC achieves a better optimization effect; on high-dimensional (100-dimension) problems, MPABC improves convergence speed by about 23% over ABC and achieves better search accuracy.
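The first step, sorting by fitness and splitting the ordered queue into the three named subgroups, can be sketched directly. The split ratios below are illustrative assumptions; the paper's actual proportions may differ.

```python
def split_subgroups(population, fitness, ratios=(0.2, 0.4, 0.4)):
    """Sort a population by fitness (best first) and cut the ordered queue
    into the three subgroups the MPABC abstract names: core (best),
    balanced (middle) and random (rest). `ratios` is an assumed split."""
    order = sorted(population, key=fitness, reverse=True)
    n = len(order)
    a = int(n * ratios[0])
    b = a + int(n * ratios[1])
    return order[:a], order[a:b], order[b:]  # core, balanced, random
```

Each subgroup would then apply its own difference-vector construction, which is where the hybrid search behavior comes from.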
    Real-time task threshold scheduling method for cryptography cloud based on rolling optimization
    WANG Zewu, SUN Lei, GUO Songhui
    2017, 37(10):  2780-2786.  DOI: 10.11772/j.issn.1001-9081.2017.10.2780
    Since current cloud task scheduling algorithms in the cryptography cloud environment cannot process tasks in real time, a real-time threshold scheduling method based on a rolling optimization window was proposed. Firstly, a cryptography cloud service architecture was given by integrating key invocation into the processing of cryptographic tasks. Secondly, to realize real-time scheduling, a cryptographic task scheduler model based on the rolling window and a throughput analysis model for obtaining real-time throughput data were established. Finally, to meet the need of cloud tenants for high-speed cryptographic services, a throughput threshold scheduling algorithm was proposed, which migrates virtual cipher machines in real time according to the change of real-time throughput relative to the throughput threshold. The simulation results show that, compared with methods without the rolling optimization window or without virtual machine migration, the proposed method achieves shorter task completion time and lower CPU utilization, while keeping the real-time throughput continuously within 70%-85% of the network bandwidth, thus verifying its effectiveness and real-time performance in the cryptography cloud environment.
    Mechanism of personal privacy protection based on blockchain
    ZHANG Ning, ZHONG Shan
    2017, 37(10):  2787-2793.  DOI: 10.11772/j.issn.1001-9081.2017.10.2787
    Aiming at the problem of personal privacy protection in the Internet car rental scenario, a personal privacy protection mechanism based on blockchain was proposed. Firstly, a blockchain-based framework for personal privacy protection was proposed to solve the personal privacy issues exposed in Internet car rental. Secondly, the design and definition of the model were given through participant profiles, database design and performance analysis, and the framework and implementation of the model were expounded in terms of granting authority, writing data, reading data and revoking authority. Finally, the realizability of the mechanism was demonstrated through the development of a blockchain-based system.
    Distributed neural network for classification of attack behavior to social security events
    XIAO Shenglong, CHEN Xin, LI Zhuo
    2017, 37(10):  2794-2798.  DOI: 10.11772/j.issn.1001-9081.2017.10.2794
    In the era of big data, social security data becomes more diverse and grows rapidly in volume, which significantly challenges the analysis of, and decision making on, social security events. How to accurately categorize attack behavior in a short time, so as to support such analysis and decision making, has become an urgent problem in the field of national and cyberspace security. Aiming at attack behavior in social security events, a new Distributed Neural Network Classification (DNNC) algorithm was proposed based on the Spark platform. The DNNC algorithm was used to analyze features related to the attack behavior categories, and these features were used as the input of the neural network. Then the functional relationship between the individual features and attack categories was established, and a neural network classification model was generated to classify the attack categories of social security events. Experimental results on data from the Global Terrorism Database show that the proposed algorithm improves average accuracy by 15.90 percentage points compared with decision tree classification, and by 8.60 percentage points compared with ensemble decision tree classification, with only a slight decrease in accuracy on some attack types.
    Rumor detection method based on burst topic detection and domain expert discovery
    YANG Wentai, LIANG Gang, XIE Kai, YANG Jin, XU Chun
    2017, 37(10):  2799-2805.  DOI: 10.11772/j.issn.1001-9081.2017.10.2799
    Existing rumor detection methods find it difficult to overcome the disadvantages of delayed data collection and detection. To resolve this problem, a rumor detection method based on burst topic detection, inspired by a momentum model, and domain expert discovery was proposed. Dynamics theory from physics was introduced to model how topic features spread on the Weibo platform, and dynamic physical quantities of the topic features were used to describe the burst characteristics and development tendency of topics. Emergent topics were then extracted after feature clustering. Next, according to the domain relevance between a topic and an expert, domain experts responsible for judging the credibility of each emergent topic were selected from an expert pool. The experimental results show that the proposed method achieves a 13-percentage-point improvement in accuracy compared with a Weibo rumor identification method based merely on supervised machine learning, and reduces the detection time to 20 hours compared with prevailing manual methods, which means that the proposed method is applicable to real rumor detection situations.
    Video information hiding algorithm based on diamond coding
    CHEN Yongna, ZHOU Yu, WANG Xiaodong, GUO Lei
    2017, 37(10):  2806-2812.  DOI: 10.11772/j.issn.1001-9081.2017.10.2806
    Aiming at the problems of limited hiding capacity and obvious bit-rate increase in existing hiding solutions, an intra-frame video information hiding algorithm based on diamond coding was proposed. Firstly, based on High Efficiency Video Coding (HEVC), the intra prediction modes of two adjacent 4×4 blocks were combined into a mode pair, and the improved diamond coding algorithm was used to guide mode modulation and information embedding. Next, the hidden information was embedded a second time while keeping the optimal coding division, thus guaranteeing the embedding capacity and eliminating intra-frame distortion drift. The experimental results show that with the proposed algorithm the Peak Signal-to-Noise Ratio (PSNR) is reduced by less than 0.03 dB and the bit rate is increased by less than 0.53%, while the embedding capacity is greatly improved, and both the subjective and objective quality of the video are well preserved.
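Diamond coding in its simplest form (neighbourhood radius k = 1) maps a pair of values to a base-5 digit through the characteristic function f(x, y) = (x + 2y) mod 5; changing one value of the pair by at most 1 can reach any digit. The sketch below shows that underlying coding only; the paper's mapping onto HEVC mode pairs is not reproduced.

```python
def f(x, y):
    """Diamond characteristic value of a pair for k = 1 (modulus 5)."""
    return (x + 2 * y) % 5

def embed(x, y, digit):
    """Return the modified pair whose characteristic value equals `digit`;
    at most one component changes, and only by 1."""
    for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        if f(x + dx, y + dy) == digit:
            return x + dx, y + dy
    raise AssertionError("unreachable: the 5 neighbours cover all residues")

def extract(x, y):
    """Recover the hidden base-5 digit from the (possibly modified) pair."""
    return f(x, y)
```

Because the five candidate positions produce the five distinct residues mod 5, every digit is reachable with distortion of at most 1, which is why the PSNR impact stays small.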
    Dialog generation based on hierarchical encoding and deep reinforcement learning
    ZHAO Yuqing, XIANG Yang
    2017, 37(10):  2813-2818.  DOI: 10.11772/j.issn.1001-9081.2017.10.2813
    Aiming at the dialog generation problem, a model based on hierarchical encoding and deep reinforcement learning, namely the Enhanced Hierarchical Recurrent Encoder-Decoder (EHRED), was proposed to solve the problem that standard sequence-to-sequence (seq2seq) architectures tend to produce highly generic responses due to the Maximum Likelihood Estimation (MLE) loss function. A multi-round dialog model was built with a hierarchical structure: a hierarchical layer was added to the standard seq2seq architecture to enhance the memory of dialog history, and then a language model was used to build the reward function, replacing the traditional MLE loss with policy gradient training from deep reinforcement learning. Experimental results show that EHRED can generate responses with richer semantic information, improving by 5.7-11.1 percentage points in standard manual evaluation compared with the widely used standard seq2seq Recurrent Neural Network (RNN) dialog generation model.
    Probabilistic distribution model based on Wasserstein distance for nonlinear dimensionality reduction
    CAO Xiaolu, XIN Yunhong
    2017, 37(10):  2819-2822.  DOI: 10.11772/j.issn.1001-9081.2017.10.2819
    Dimensionality reduction plays an important role in big data analysis and visualization. Many dimensionality reduction techniques with probabilistic distribution models rely on optimizing a cost function between the low-dimensional model distribution and the high-dimensional real distribution. The key issue for this type of technique is to efficiently construct a probabilistic distribution model that best represents the features of the original high-dimensional dataset. Wasserstein distance was introduced into dimensionality reduction, and a novel method named Wasserstein Embedded Map (W-map) was presented for high-dimensional data reduction and visualization. W-map converts the dimensionality reduction problem into an optimal transportation problem by constructing similar Wasserstein flows in the high-dimensional dataset and its corresponding low-dimensional representation, and the best matched low-dimensional visualization is found by solving the optimal transportation problem under Wasserstein distance. Experimental results demonstrate that the presented method performs well in dimensionality reduction and visualization for high-dimensional data.
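The Wasserstein distance underlying W-map has a closed form in one dimension that makes the optimal-transport intuition concrete: for two equal-size empirical samples on the line, the optimal plan simply pairs the sorted values. This is a textbook special case, not the paper's high-dimensional construction.

```python
def wasserstein_1d(xs, ys):
    """First-order Wasserstein distance between two equal-size empirical
    samples on the real line: the optimal transport pairs sorted values,
    so the distance is the mean absolute difference of the order statistics."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

In higher dimensions no such sorting shortcut exists, which is why methods like W-map must solve an optimal transportation problem explicitly.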
    Many-objective optimization algorithm based on linear weighted minimal/maximal dominance
    ZHU Zhanlei, LI Zheng, ZHAO Ruilian
    2017, 37(10):  2823-2827.  DOI: 10.11772/j.issn.1001-9081.2017.10.2823
    In Many-objective Optimization Problems (MaOPs), as the number of objectives increases, the number of Pareto non-dominated solutions grows exponentially and the selection pressure of Pareto dominance decreases. To solve these issues, a new type of dominance, namely Linear Weighted Minimal/Maximal dominance (LWM-dominance), was proposed based on the idea of comparing multi-objective solutions by combining linear weighted aggregation with Pareto dominance. It is theoretically proved that the LWM non-dominated solution set is a subset of the Pareto non-dominated solution set in which the important corner solutions are preserved. Furthermore, an MaOP algorithm based on LWM dominance was presented. Empirical studies confirmed the corollaries of the proposed LWM dominance: experimental results in a random objective space show that LWM dominance is suitable for MaOPs with 5-15 objectives, and an experiment comparing the numbers of LWM and Pareto non-dominated solutions on DTLZ1-DTLZ7 shows that the proportion of non-dominated solutions decreases by about 17% on average when the number of objectives is 10 or 15.
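One plausible reading of LWM-dominance, consistent with the stated subset property, is to compare the minimum and maximum of the linearly weighted objectives: any solution that Pareto-dominates another then also LWM-dominates it, so the LWM non-dominated set can only shrink. This is an assumed reconstruction for illustration, not the paper's verified definition.

```python
def lwm_dominates(a, b, w):
    """Assumed LWM-dominance check (minimisation): a dominates b when both
    the minimum and the maximum of the weighted objectives are no worse,
    and at least one is strictly better."""
    fa = [wi * ai for wi, ai in zip(w, a)]
    fb = [wi * bi for wi, bi in zip(w, b)]
    lo_a, hi_a, lo_b, hi_b = min(fa), max(fa), min(fb), max(fb)
    return lo_a <= lo_b and hi_a <= hi_b and (lo_a < lo_b or hi_a < hi_b)
```

Because only two scalars are compared regardless of the number of objectives, this relation ranks far more solution pairs than Pareto dominance, restoring selection pressure in high dimensions.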
    Video recommendation algorithm based on clustering and hierarchical model
    JIN Liang, YU Jiong, YANG Xingyao, LU Liang, WANG Yuefei, GUO Binglei, Liao Bin
    2017, 37(10):  2828-2833.  DOI: 10.11772/j.issn.1001-9081.2017.10.2828
    Concerning the problems of data sparseness, cold start and poor user experience in recommendation systems, a video recommendation algorithm based on clustering and a hierarchical model was proposed to improve recommendation performance and user experience. Focusing on the user, similar users were obtained by Affinity Propagation (AP) clustering, then the online video history data of similar users was collected and a candidate set of videos was generated. Secondly, the user's preference degree for a video was calculated and mapped into the tag weights of the video. Finally, a recommendation list of videos was generated by using an analytic hierarchy model to rank the user's preference for the videos. The experimental results on the MovieLens Latest dataset and a YouTube video review text dataset show that the proposed algorithm performs well in terms of Root-Mean-Square Error (RMSE) and recommendation accuracy.
    Multi-constraint nonnegative matrix factorization algorithm based on feature fusion
    SUN Jing, CAI Xibiao, SUN Fuming
    2017, 37(10):  2834-2840.  DOI: 10.11772/j.issn.1001-9081.2017.10.2834
    Focusing on the issues that the sparseness of data is reduced after factorization and that a single image feature cannot describe the image content well, a multi-constraint nonnegative matrix factorization algorithm based on feature fusion was proposed. The information provided by a few known labeled samples and a sparseness constraint were considered, graph regularization was applied, and the decomposed image features with different sparseness were fused, which improved clustering performance and effectiveness. Extensive experiments were conducted on the Yale-32 and COIL20 datasets, and comparisons with four state-of-the-art algorithms demonstrate that the proposed method is superior in both clustering accuracy and sparseness.
    Image labeling based on fully-connected conditional random field
    LIU Tong, HUANG Xiutian, MA Jianshe, SU Ping
    2017, 37(10):  2841-2846.  DOI: 10.11772/j.issn.1001-9081.2017.10.2841
    Traditional image labeling models often have two deficiencies: they can only model short-range contextual information at the pixel level of an image, and their inference is complicated. To improve the precision of image labeling, the fully-connected Conditional Random Field (CRF) model was used; to simplify the inference of the model, mean field approximation based on a Gaussian kd-tree was proposed for inference. To verify the effectiveness of the proposed algorithm, the experimental image datasets contained not only the standard picture library MSRC-9 but also MyDataset_1 (machine parts) and MyDataset_2 (office tables) made by the authors. The precisions of the proposed method on these three datasets are 77.96%, 97.15% and 95.35% respectively, and the mean processing time for each picture is 2 s. The results indicate that the fully-connected CRF model can improve the precision of image labeling by considering the contextual information of the image, and that mean field approximation using a Gaussian kd-tree can raise the efficiency of inference.
    Interval-valued hesitant fuzzy grey compromise relation analysis method for security of cloud computing evaluation
    GAO Zhifang, LAI Yuqing, PENG Dinghong
    2017, 37(10):  2847-2853.  DOI: 10.11772/j.issn.1001-9081.2017.10.2847
    In order to solve the dynamic evaluation problem of cloud computing security, a method named interval-valued hesitant fuzzy grey compromise relation analysis was proposed for accurately evaluating the security of cloud computing. Firstly, a new interval-valued hesitant fuzzy distance formula was defined to measure the distance between two interval-valued hesitant fuzzy sets. Then, two new interval-valued hesitant fuzzy normalization formulas were constructed to handle the different dimensions of attributes. Meanwhile, the concept of grey compromise relation degree was put forward to take all expert opinions into account and handle situations where attributes conflict. On this basis, a new interval-valued hesitant fuzzy grey compromise relation analysis method was presented to evaluate the security of cloud computing. The analysis results show that the proposed method is feasible, and its scientific validity and effectiveness are demonstrated by comparison with existing work on interval-valued hesitant fuzzy multiple attribute decision making.
    Neural network model for PM2.5 concentration prediction by grey wolf optimizer algorithm
    SHI Feng, LOU Wengao, ZHANG Bo
    2017, 37(10):  2854-2860.  DOI: 10.11772/j.issn.1001-9081.2017.10.2854
    Focusing on the high cost and complicated process of fine particulate matter (PM2.5) measurement systems, a neural network model based on the grey wolf optimizer algorithm was established. From the perspective of non-mechanistic models, the daily PM2.5 concentration in Shanghai was forecast from meteorological factors and air pollutants, and the important factors were analyzed by mean impact value. To avoid over-training and ensure generalization ability, validation datasets were used to monitor the training process. The experimental results show that the most significant factor affecting the PM2.5 concentration is PM10, followed by CO and the previous day's PM2.5. On the datasets obtained from November 1 to November 12, 2016, the relative average error of the proposed model is 13.46% and the absolute average error is 8 μg/m3; its relative average error is decreased by about 3 percentage points, 5 percentage points and 1 percentage point compared with prediction models based on Particle Swarm Optimization (PSO), BP neural network and Support Vector Regression (SVR) respectively. The neural network model based on the grey wolf optimizer algorithm is thus more suitable for forecasting PM2.5 concentration and air quality in Shanghai.
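The grey wolf optimizer itself (here used to train the network's weights) follows a simple social-hierarchy update: every wolf moves toward the three best wolves (alpha, beta, delta) while an exploration coefficient decays from 2 to 0. The sketch below is a bare-bones GWO on a generic box-constrained function, not the paper's network-training setup; population size and iteration count are illustrative.

```python
import random

def gwo_minimize(f, dim, bounds, wolves=20, iters=200, seed=1):
    """Minimal Grey Wolf Optimizer: keep the three best wolves as leaders
    and pull every other wolf toward them with decaying random steps."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2 - 2 * t / iters  # exploration coefficient, decays 2 -> 0
        for i in range(3, wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A, C = a * (2 * r1 - 1), 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - pack[i][d])
                new.append(min(hi, max(lo, x / 3)))  # average pull, clipped
            pack[i] = new
    return min(pack, key=f)
```

For network training, `f` would evaluate the validation loss of a network whose weight vector is the wolf's position.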
    Question answer matching method based on deep learning
    RONG Guanghui, HUANG Zhenhua
    2017, 37(10):  2861-2865.  DOI: 10.11772/j.issn.1001-9081.2017.10.2861
    For Chinese question-answer matching tasks, a method based on deep learning was proposed to solve the problems of feature scarcity and the low accuracy caused by manually constructed features in machine learning. The method comprises three different models. The first is a combination of a Recurrent Neural Network (RNN) and a Convolutional Neural Network (CNN), which learns the deep semantic features of sentences and calculates the similarity distance between feature vectors. The other two add different attention mechanisms to this model, constructing the feature representation of the answer according to the question so as to learn their detailed semantic matching relation. Experimental results show that the combined deep neural network model is superior to machine learning methods based on feature construction, and that the attention-based hybrid models further improve matching accuracy, with the best results reaching 80.05% in Mean Reciprocal Rank (MRR) and 68.73% in Top-1 accuracy in standard evaluation.
    Click through rate prediction algorithm based on user's real-time feedback
    YANG Cheng
    2017, 37(10):  2866-2870.  DOI: 10.11772/j.issn.1001-9081.2017.10.2866
    At present, most Click-Through Rate (CTR) prediction algorithms for online advertising focus on mining the correlation between users and advertisements from large-scale log data by machine learning, without considering the impact of the user's real-time feedback. Analysis of a large amount of real-world online advertising log data shows that the dynamic change of CTR is highly correlated with the user's previous feedback: different user behaviors typically have different effects on real-time CTR. On the basis of this analysis, an algorithm based on the user's real-time feedback was proposed. Firstly, the correlation between user feedback and real-time CTR was quantitatively analyzed on large-scale real-world online advertising logs. Secondly, based on the analysis results, the user's feedback was turned into features and fed into a machine learning model to model user behavior. Finally, ad impressions were dynamically adjusted according to user feedback, which improves the precision of CTR prediction. The experimental results on real-world online advertising datasets show that the proposed algorithm improves the precision of CTR prediction significantly: compared with the contrast models, the Area Under the ROC Curve (AUC) and Relative Information Gain (RIG) are increased by 0.83% and 6.68% respectively.
    Improvement of sub-pixel morphological anti-aliasing algorithm
    LIU Jingrong, DU Huimin, DU Qinqin
    2017, 37(10):  2871-2874.  DOI: 10.11772/j.issn.1001-9081.2017.10.2871
    Since the Sub-pixel Morphological Anti-Aliasing (SMAA) algorithm extracts relatively few boundaries and requires large storage, an improved SMAA algorithm was presented. In the improved algorithm, the product of a pixel's luminance and an adjustable factor was used as a dynamic threshold to decide whether the pixel is a boundary pixel. Compared with the fixed boundary-decision threshold in SMAA, the dynamic threshold is stricter, so the presented algorithm can extract more boundaries. Based on an analysis of the different morphological patterns and the storage they use, redundant storage was merged so as to reduce the memory size. The algorithm was implemented with Microsoft DirectX SDK and HLSL under Windows 7. The experimental results show that the proposed algorithm can extract clearer boundaries while reducing the memory size by 51.93%.
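The dynamic-threshold idea can be illustrated in a few lines: the boundary test scales with the pixel's own luminance instead of using SMAA's fixed constant, so darker regions get a proportionally smaller threshold. The factor value below is an arbitrary placeholder, not the paper's tuned setting.

```python
def is_boundary(lum_center, lum_neighbors, factor=0.1):
    """Flag a pixel as a boundary pixel if its luminance differs from
    any neighbor by more than factor * lum_center (dynamic threshold),
    rather than by more than a fixed constant as in standard SMAA."""
    threshold = factor * lum_center
    return any(abs(lum_center - n) > threshold for n in lum_neighbors)
```

Note that a contrast of 3 gray levels passes the test at luminance 10 but not at luminance 100, which is why the dynamic threshold extracts more boundaries in dark areas.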
    Unified algorithm for scattered point cloud denoising and simplification
    ZHAO Jingdong, YANG Fenghua, GUO Yingxin
    2017, 37(10):  2879-2883.  DOI: 10.11772/j.issn.1001-9081.2017.10.2879
    Since it is difficult to denoise and simplify three-dimensional point cloud data with the same parameter, a unified algorithm based on the Extended Surface Variation based Local Outlier Factor (ESVLOF) for denoising and simplification of scattered point clouds was proposed. Through analysis of the definition of ESVLOF, its properties were given. Using the surface variability computed during denoising and a default similarity coefficient, a parameter γ that decreases as surface variation increases was constructed and used as the local threshold for both denoising and simplifying the point cloud. The simulation results show that this method preserves the geometric characteristics of the original data; compared with traditional 3D point-cloud preprocessing, its efficiency is nearly doubled.
    3D simultaneous localization and mapping for mobile robot based on VSLAM
    LIN Huican, LYU Qiang, WANG Guosheng, ZHANG Yang, LIANG Bing
    2017, 37(10):  2884-2887.  DOI: 10.11772/j.issn.1001-9081.2017.10.2884
    Simultaneous Localization And Mapping (SLAM) is an essential capability for mobile robots exploring unknown environments without external referencing systems. Since the sparse map constructed by the feature-based Visual SLAM (VSLAM) algorithm is not suitable for robot applications, an efficient and compact map construction algorithm based on the octree structure was proposed. First, according to the pose and depth data of the keyframes, the point cloud map of the scene corresponding to the image was constructed; the map was then processed with the octree mapping technique to obtain a map suitable for robot applications. The proposed algorithm was compared with the RGB-Depth SLAM (RGB-D SLAM), ElasticFusion and Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM) algorithms on publicly available benchmark datasets, and the results show that it has high validity, accuracy and robustness. Finally, an autonomous mobile robot was built and the improved VSLAM system was applied to it; the robot can complete autonomous obstacle avoidance and 3D map construction in real time, solving the problem that a sparse map cannot be used for obstacle avoidance and navigation.
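The compaction step can be sketched by hashing keyframe point-cloud points to voxel indices at a fixed resolution, so that duplicates inside one cell collapse. This is a simplification of the octree map, which additionally stores the occupied cells hierarchically; the resolution value is an assumption.

```python
def voxel_key(point, resolution=0.05):
    """Map a 3D point (metres) to the integer index of the voxel /
    octree leaf containing it at the given resolution."""
    x, y, z = point
    return (int(x // resolution), int(y // resolution), int(z // resolution))

def build_voxel_map(points, resolution=0.05):
    """Set of occupied voxels: all points falling into the same cell
    are represented once, which is what makes the map compact."""
    return {voxel_key(p, resolution) for p in points}
```

A real octree (e.g. OctoMap-style) would also prune uniform subtrees and store occupancy probabilities per cell.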
    Acquisition of camera dynamic extrinsic parameters in free binocular stereo vision system
    LI Xiao, GE Baozhen, LUO Qijun, LI Yunpeng, TIAN Qingguo
    2017, 37(10):  2888-2894.  DOI: 10.11772/j.issn.1001-9081.2017.10.2888
    To handle the change of the extrinsic parameters between the two cameras of a free binocular stereo vision system caused by camera rotation, a method for acquiring the dynamic extrinsic parameters based on rotation-axis calibration was proposed. Multiple rotation and translation matrices were obtained by calibration at different positions, and the parameters of the rotation axis were then calculated by the least square method. Combined with the intrinsic and extrinsic parameters at the initial position and the rotation angle, the dynamic extrinsic parameters between the two cameras can be calculated in real time. The chessboard corners were reconstructed with the dynamic extrinsic parameters calculated by the proposed method; the results show that the average error was 0.241 mm and the standard deviation was 0.156 mm. Compared with the calibration method based on a multiple-plane calibration target, the proposed method is easier to implement and has higher precision, and the dynamic extrinsic parameters can be acquired without real-time calibration.
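Once the rotation axis has been calibrated, the updated extrinsics follow from an axis-angle rotation by the measured angle. A pure-Python sketch using Rodrigues' rotation formula (a unit axis is assumed; composing this rotation with the initial extrinsics is omitted):

```python
import math

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: 3x3 rotation matrix for a rotation of
    `angle` radians about the unit vector `axis`. With the axis
    calibrated offline, the camera's dynamic pose follows from the
    rotation angle alone."""
    ux, uy, uz = axis
    c, s = math.cos(angle), math.sin(angle)
    t = 1.0 - c
    return [
        [c + ux * ux * t,      ux * uy * t - uz * s, ux * uz * t + uy * s],
        [uy * ux * t + uz * s, c + uy * uy * t,      uy * uz * t - ux * s],
        [uz * ux * t - uy * s, uz * uy * t + ux * s, c + uz * uz * t],
    ]

def apply(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
```

For example, rotating the x unit vector by 90° about the z axis yields the y unit vector, which is a quick sanity check of the matrix layout.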
    On-line light source position calculation based on inter-frame gray-scale variation analysis
    SHENTU Lifeng, XI Jiaqi
    2017, 37(10):  2895-2898.  DOI: 10.11772/j.issn.1001-9081.2017.10.2895
    Aiming at the problem that the position of the light source cannot be determined in practical production, an on-line method for calculating the light source position was proposed, which fully analyzes the gray-scale variation of a feature region across consecutive frames. In this approach, a feature region was first selected based on the local gray-scale distribution and used as a reference point, and its position was then tracked in the following consecutive frames via block matching. Combining this with the analytic calculation from the light transport model, the relationship between gray-scale value and geometric position was established. Finally, the set of equations was solved by linear regression to find the position of the light source. The experimental results show that the errors between the calculated results and the measured results are within 5%. The proposed approach has been successfully applied in real-time manufacturing with good estimation accuracy.
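The regression step can be illustrated with a simplified inverse-square light transport model, gray ≈ k/d² + b: substituting x = 1/d² reduces it to ordinary linear least squares. The actual model and geometry in the paper are richer; the parameters k, b and the sample distances below are made up for illustration.

```python
def fit_light_model(distances, grays):
    """Least-squares fit of gray = k / d**2 + b, a simplified
    point-light transport model. The substitution x = 1/d**2 makes
    the model linear, so ordinary slope/intercept formulas apply."""
    xs = [1.0 / d ** 2 for d in distances]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(grays) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(xs, grays))
         / sum((x - mx) ** 2 for x in xs))
    b = my - k * mx
    return k, b
```

With noise-free synthetic data generated from k = 200, b = 10, the fit recovers the parameters exactly.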
    Distortion analysis of digital video transcoding
    SU Jianjun, MU Shiyou, YANG Bo, SUN Xiaobin, ZHAO Haiwu, GU Xiao
    2017, 37(10):  2899-2902.  DOI: 10.11772/j.issn.1001-9081.2017.10.2899
    Video transcoding is widely applied in Internet video coding. When the original video is transcoded multiple times, only the distortion between the input video and the output video of each stage can be calculated, and the distortion between the output video and the original video cannot be obtained directly. An algorithm for estimating the distortion between the output video and the original video was therefore proposed to control the quality of the output program. Firstly, the superposition of distortion caused by multiple lossy transcodings was analyzed to derive the lower limit of the total distortion. Then a probabilistic method was exploited to estimate the distortion between the original video and the final output video. Finally, least-squares fitting was used to correct the estimation according to the prediction error. Experimental results demonstrate that, after correction, the proposed algorithm can accurately estimate the distortion, with average prediction errors of 0.02 dB, 0.05 dB and 0.06 dB for the Y, U and V components respectively.
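Why per-stage distortions only bound the end-to-end distortion can be seen from the triangle inequality on RMSE (distortion is a distance between signals). The bound below is an illustrative consequence of that inequality, not the paper's probabilistic estimator or its exact lower limit.

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(m, peak=255.0):
    """PSNR in dB from an MSE value."""
    return 10.0 * math.log10(peak * peak / m)

def rmse_bounds(stage_mses):
    """Triangle-inequality bounds on the end-to-end RMSE of a chain of
    lossy transcodes, given only each stage's own MSE. The true total
    lies between |largest - sum of the rest| and the sum of all stage
    RMSEs, which is why an estimator (plus correction) is needed."""
    r = [math.sqrt(m) for m in stage_mses]
    upper = sum(r)
    lower = max(0.0, 2 * max(r) - upper)
    return lower, upper
```

For two stages with MSEs 9 and 4 (RMSEs 3 and 2), the total RMSE can be anywhere from 1 to 5 depending on whether the errors cancel or add.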
    Image splicing detection method based on correlation between color components
    ZHENG Jiming, SU Huijia
    2017, 37(10):  2903-2906.  DOI: 10.11772/j.issn.1001-9081.2017.10.2903
    When natural images are captured with a digital camera, the Color Filter Array (CFA) interpolation effect introduces strong correlation between the color components of the images. A new method was proposed to detect splicing operations by using the characteristics of the CFA interpolation process. Firstly, the prediction error was obtained by CFA interpolation of the image color components. Then, the local weighted variance of the block prediction error was calculated to obtain the CFA characteristics of the natural image. Finally, a Gaussian Mixture Model (GMM) was applied to the extracted features to classify blocks and derive the local splicing area. Experimental results on a standard spliced-image tampering dataset demonstrate that the proposed method can effectively detect the exact location of the tampered area of the image.
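The CFA consistency cue can be sketched on the green channel: demosaicing predicts each green sample as the mean of its four neighbours, and the prediction-error statistics differ between camera-native and spliced regions. This is a 3x3 toy version; real detectors work on the full Bayer mosaic and all channels.

```python
def green_prediction_error(green, r, c):
    """Predict the green sample at (r, c) from its 4-neighbours (the
    usual bilinear CFA demosaicing kernel) and return the prediction
    error. Interior pixels only: r and c must not be on the border."""
    pred = (green[r - 1][c] + green[r + 1][c]
            + green[r][c - 1] + green[r][c + 1]) / 4.0
    return green[r][c] - pred
```

In a genuinely interpolated region the error is near zero; pasted content breaks the interpolation correlation and produces larger, differently distributed errors.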
    Hybrid control rate algorithm based on dynamic adaptive streaming over HTTP protocol
    JIN Yanxia, MA Guangyuan, LEI Haiwei
    2017, 37(10):  2907-2911.  DOI: 10.11772/j.issn.1001-9081.2017.10.2907
    Concerning the problems that the Smooth Flow (SF) algorithm suffers from a flash phenomenon in bandwidth prediction and that bandwidth prediction without cache control causes frequent play stagnation, a dynamic adaptive hybrid rate control algorithm was proposed. First of all, the fluctuation parameter in the original SF algorithm was replaced by the standard deviation, which eliminates the flash phenomenon. Secondly, since the original SF algorithm stalls frequently because its bandwidth prediction does not consider the cache state, and the traditional cache control method is difficult to level properly, a new cache control strategy was introduced to solve these problems. Finally, the improved SF algorithm was combined with the new cache control strategy to form a hybrid algorithm for selecting the video bitrate. The experimental results show that the hybrid algorithm not only eliminates the flash phenomenon of the SF algorithm in bandwidth prediction, but also overcomes the shortcoming of selecting the bitrate with a single algorithm; the selected video both reduces the frequency of play stagnation (by about 43% under a bad network environment) and matches the actual network conditions, improving the users' viewing experience.
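The two ingredients above can be sketched together: a conservative bandwidth estimate (mean recent throughput minus its standard deviation, the std-dev playing the role of SF's fluctuation parameter) combined with a simple buffer guard. The bitrate ladder, the low-buffer threshold and the discount factor are placeholders, not the paper's tuned strategy.

```python
import statistics

def select_bitrate(throughputs, buffer_s, ladder=(300, 750, 1500, 3000)):
    """Pick the highest rung of the bitrate ladder (kbit/s) that does
    not exceed a conservative bandwidth estimate; halve the estimate
    when the playout buffer is low to avoid stalls."""
    est = statistics.mean(throughputs)
    if len(throughputs) > 1:
        est -= statistics.stdev(throughputs)   # penalize fluctuation
    if buffer_s < 5.0:                         # hypothetical low-buffer guard
        est *= 0.5
    chosen = ladder[0]                         # fall back to lowest rung
    for rate in ladder:
        if rate <= est:
            chosen = rate
    return chosen
```

With steady 2000 kbit/s throughput, the algorithm selects 1500 kbit/s when the buffer is healthy but drops to 750 kbit/s when the buffer runs low.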
    Image classification algorithm based on fast low rank coding and local constraint
    GAN Ling, ZUO Yongqiang
    2017, 37(10):  2912-2915.  DOI: 10.11772/j.issn.1001-9081.2017.10.2912
    Aiming at the problems of large feature reconstruction error and loss of local constraints between features in the fast low rank coding algorithm, an enhanced locally constrained fast low rank coding algorithm was put forward. Firstly, a clustering algorithm was used to cluster the features in the image, obtaining locally similar feature sets and the corresponding clustering centers. Secondly, for each clustering center, K visual words were found in the visual dictionary by the K Nearest Neighbors (KNN) strategy and combined into a corresponding sub-dictionary. Finally, the codes of each locally similar feature set were obtained by the fast low rank coding algorithm. On the Scene-15 and Caltech-101 image datasets, the classification accuracy of the modified algorithm was improved by 4% to 8% compared with the original fast low rank coding algorithm, and the coding efficiency was improved by 5 to 6 times compared with sparse coding. The experimental results demonstrate that the modified algorithm makes locally similar features have similar codes, so that the image content is expressed more accurately, improving both classification accuracy and coding efficiency.
    Single image dehazing algorithm based on sky segmentation
    MAO Xiangyu, LI Weixiang, DING Xuemei
    2017, 37(10):  2916-2920.  DOI: 10.11772/j.issn.1001-9081.2017.10.2916
    To address the problems that the dark channel prior algorithm is invalid for the sky region and that the color of the restored image becomes darker, a single image dehazing algorithm based on sky segmentation was presented. Firstly, a segmentation algorithm based on edge detection was used to divide the original image into sky and non-sky regions. Then, based on the dark channel prior method, the estimation of atmospheric light and transmittance was improved to dehaze the non-sky region. Finally, the sky region was processed by an optimized contrast enhancement algorithm based on a cost function. The experimental results demonstrate that, compared with the dark channel prior algorithm, technical specifications of the restored images such as variance, average gradient and entropy are greatly improved. The proposed algorithm can effectively avoid the halo effect in the sky region and restore the true scene color while maintaining high operating efficiency.
    Switching kernel regression fitting algorithm for salt-and-pepper noise removal
    YU Yinghuai, XIE Shiyi
    2017, 37(10):  2921-2925.  DOI: 10.11772/j.issn.1001-9081.2017.10.2921
    Concerning salt-and-pepper noise removal and detail protection, an image denoising algorithm based on switching kernel regression fitting was proposed. Firstly, the pixels corrupted by salt-and-pepper noise were identified exactly by an efficient impulse detector. Secondly, the corrupted pixels were taken as missing data, and a kernel regression function was used to fit the non-noise pixels in the neighborhood of each noisy pixel, so as to obtain a kernel regression fitting surface that matches the local structure of the image. Finally, each noisy pixel was restored by resampling the kernel regression fitting surface at its spatial coordinates. In comparison experiments at different noise densities with state-of-the-art algorithms such as Standard Median Filter (SMF), Adaptive Median Filter (AMF), Modified Directional-Weighted-Median Filter (MDWMF), Fast Switching based Median-Mean Filter (FSMMF) and Image Inpainting (II), the proposed scheme had better subjective visual quality of the restored images. At low, medium and high noise density levels, the average Peak Signal-to-Noise Ratio (PSNR) of different images using the proposed scheme was increased by 6.02 dB, 6.33 dB and 5.58 dB, respectively, and the average Mean Absolute Error (MAE) was decreased by 0.90, 5.84 and 25.29, respectively. Experimental results show that the proposed scheme outperforms all the compared techniques in removing salt-and-pepper noise and preserving details at various noise density levels.
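The switching structure (detect, then restore only detected pixels from clean neighbours) can be sketched as follows. For brevity the restoration here uses the median of the non-noise neighbours; the paper instead fits a kernel-regression surface to those neighbours and resamples it at the pixel's coordinates.

```python
def is_impulse(pixel, low=0, high=255):
    """Salt-and-pepper candidates sit at the extremes of the range."""
    return pixel == low or pixel == high

def restore(noisy_pixel, neighbors):
    """Restore a detected noisy pixel from the non-noise pixels in its
    neighbourhood (simplified: their median). Uncorrupted pixels never
    reach this function -- that is the 'switching' part."""
    clean = sorted(p for p in neighbors if not is_impulse(p))
    if not clean:
        return noisy_pixel  # no reliable neighbour; leave unchanged
    mid = len(clean) // 2
    return clean[mid] if len(clean) % 2 else (clean[mid - 1] + clean[mid]) // 2
```

Because only detected impulses are rewritten, uncorrupted detail is left untouched, unlike a plain median filter applied everywhere.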
    Objective quality assessment for color-to-gray images based on visual similarity
    WANG Man, YAN Jia, WU Minyuan
    2017, 37(10):  2926-2931.  DOI: 10.11772/j.issn.1001-9081.2017.10.2926
    The Color-to-Gray (C2G) image quality evaluation algorithm based on structural similarity does not make full use of the gradient features of the image, and its contrast similarity feature ignores the consistency of continuous color blocks, leading to a large difference between the algorithm and the subjective judgment of human vision. A C2G image quality evaluation algorithm named C2G Visual Similarity Index Measurement (C2G-VSIM) was proposed based on the Human Visual System (HVS). In this algorithm, the color image was regarded as the reference image, and the corresponding decolorized images obtained by different algorithms were regarded as test images. Color space conversion and Gaussian filtering were applied to the reference and test images, taking full account of luminance similarity and structural similarity; a new color consistency contrast feature was introduced to help C2G-VSIM capture the global color contrast, and the gradient amplitude feature was also introduced to improve the sensitivity of the algorithm to image gradients. Finally, by combining the above features, a new image quality evaluation operator named C2G-VSIM was obtained. Experimental results on Cadík's dataset show that, in terms of accuracy and preference evaluation, the Spearman Rank Order Correlation Coefficient (SROCC) between C2G-VSIM and subjective human visual assessment is 0.8155 and 0.7634 respectively; compared with C2G Structure Similarity Index Measurement (C2G-SSIM), the accuracy is improved significantly without increasing computation time. The proposed algorithm has high consistency with human vision, is simple to compute, and can effectively and automatically evaluate decolorized images in large-scale practical projects.
    Fast outlier detection algorithm based on local density
    ZOU Yunfeng, ZHANG Xin, SONG Shiyuan, NI Weiwei
    2017, 37(10):  2932-2937.  DOI: 10.11772/j.issn.1001-9081.2017.10.2932
    Mining outliers is to find exceptional objects that deviate from most of the rest of the dataset. Density-based outlier detection has attracted much attention, but the density-based algorithm named Local Outlier Factor (LOF) is not suitable for datasets with abnormal distributions, and the algorithm named INFLuenced Outlierness (INFLO) solves this problem by analyzing both the k nearest neighbors and the reverse k nearest neighbors of each data point, at the cost of inferior efficiency. To solve this problem, a local density-based algorithm named Local Density Based Outlier detection (LDBO) was proposed, which improves outlier detection efficiency and effectiveness simultaneously. LDBO introduces the definitions of strong k nearest neighbors and weak k nearest neighbors to realize outlier relation analysis of nearby data points. Furthermore, to improve outlier detection efficiency, prejudgement is applied to avoid unnecessary reverse k nearest neighbor analysis as far as possible. Theoretical analysis and experimental results indicate that LDBO outperforms INFLO in efficiency, and it is effective and feasible.
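The local-density idea behind this family of detectors can be sketched as a ratio of neighbourhood densities. This is the generic k-NN density ratio in the spirit of LOF; LDBO's strong/weak neighbour split and prejudgement step are omitted.

```python
def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def kth_distance(points, p, k):
    """Distance from p to its k-th nearest neighbour."""
    return sorted(dist(p, q) for q in points if q != p)[k - 1]

def outlier_score(points, p, k):
    """Average density of p's k nearest neighbours divided by p's own
    density, where density = 1 / k-distance. Scores well above 1 flag
    p as lying in a sparser region than its neighbours, i.e. an
    outlier; points inside a uniform cluster score about 1."""
    neighbors = sorted((q for q in points if q != p),
                       key=lambda q: dist(p, q))[:k]
    own = 1.0 / kth_distance(points, p, k)
    avg = sum(1.0 / kth_distance(points, q, k) for q in neighbors) / k
    return avg / own
```

On a tight unit square of points plus one point far away, the far point scores an order of magnitude above the cluster members.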
    Trajectory pattern mining with differential privacy
    JIN Kaizhong, PENG Huili, ZHANG Xiaojian
    2017, 37(10):  2938-2945.  DOI: 10.11772/j.issn.1001-9081.2017.10.2938
    To address the problems of high global query sensitivity and low utility of mining results in existing works, a Lattice-Trajectory Pattern Mining (LTPM) algorithm based on a prefix sequence lattice and trajectory truncation was proposed for mining sequential patterns with differential privacy. An adaptive method was employed to obtain the optimal truncation length, and a dynamic programming strategy was used to truncate the original database. Based on the truncated database, the equivalence relation was used to construct the prefix sequence lattice for mining trajectory patterns. Theoretical analysis shows that LTPM satisfies ε-differential privacy. The experimental results show that the True Positive Rate (TPR) and Average Relative Error (ARE) of LTPM are better than those of the N-gram and Prefix algorithms, which verifies that LTPM can effectively improve the utility of the mining results.
    Optimization of density-based K-means algorithm in trajectory data clustering
    HAO Meiwei, DAI Hualin, HAO Kun
    2017, 37(10):  2946-2951.  DOI: 10.11772/j.issn.1001-9081.2017.10.2946
    Since the traditional K-means algorithm can hardly predefine the number of clusters and is sensitive to the initial clustering centers and outliers, which may result in unstable and inaccurate results, an improved density-based K-means algorithm was proposed. Firstly, high-density trajectory data points were selected as the initial clustering centers for K-means clustering, considering the density of the trajectory data distribution and increasing the weight of the density of important points. Secondly, the clustering results were evaluated by the Between-Within Proportion (BWP) index of the cluster validity function. Finally, the optimal number of clusters and the optimal clustering were determined according to the evaluation. Theoretical research and experimental results show that the improved algorithm can better extract trajectory key points and keep key path information. The accuracy of the clustering results is 28 percentage points higher than that of the traditional K-means algorithm and 17 percentage points higher than that of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The proposed algorithm has better stability and higher accuracy in trajectory data clustering.
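The density-based seeding step can be sketched as: rank points by how many neighbours fall within a radius, then greedily take high-density points that are not too close to already chosen seeds. The radius and the exact density weighting used in the paper are assumptions here.

```python
def density(points, p, radius):
    """Number of other points within `radius` of p (a simple density)."""
    return sum(1 for q in points if q != p and
               sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2)

def initial_centers(points, k, radius):
    """Pick the k highest-density points as K-means seeds, skipping
    any candidate within `radius` of an already chosen centre so that
    seeds do not collapse onto one dense cluster."""
    ranked = sorted(points, key=lambda p: density(points, p, radius),
                    reverse=True)
    centers = []
    for p in ranked:
        if all(sum((a - b) ** 2 for a, b in zip(p, c)) > radius ** 2
               for c in centers):
            centers.append(p)
        if len(centers) == k:
            break
    return centers
```

Seeding from dense, mutually distant points is what removes the usual sensitivity of K-means to random initialization.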
    Soft subspace clustering algorithm for imbalanced data
    CHENG Lingfang, YANG Tianpeng, CHEN Lifei
    2017, 37(10):  2952-2957.  DOI: 10.11772/j.issn.1001-9081.2017.10.2952
    Aiming at the problem that the current K-means-type soft-subspace algorithms cannot effectively cluster imbalanced data due to uniform effect, a new partition-based algorithm was proposed for soft subspace clustering on imbalanced data. First, a bi-weighting method was proposed, where each attribute was assigned a feature-weight and each cluster was assigned a cluster-weight to measure its importance for clustering. Second, in order to make a trade-off between attributes with different types or those categorical attributes having various numbers of categories, a new distance measurement was then proposed for mixed-type data. Third, an objective function was defined for the subspace clustering algorithm on imbalanced data based on the bi-weighting method, and the expressions for optimizing both the cluster-weights and feature-weights were derived. A series of experiments were conducted on some real-world data sets and the results demonstrated that the bi-weighting method used in the new algorithm can learn more accurate soft-subspace for the clusters hidden in the imbalanced data. Compared with the existing K-means-type soft-subspace clustering algorithms, the proposed algorithm yields higher clustering accuracy on imbalanced data, achieving about 50% improvements on the bioinformatic data used in the experiments.
    Accurate search method for source code by combining syntactic and semantic queries
    GU Yisheng, ZENG Guosun
    2017, 37(10):  2958-2963.  DOI: 10.11772/j.issn.1001-9081.2017.10.2958
    In the process of programming and source code reuse, since simple keyword-based code search often leads to inaccurate results, an accurate search method for source code was proposed. Firstly, according to the objectivity and uniqueness of syntax and semantics, the syntactic structure and semantics of I/O of a function in source code were considered as part of a query. Such query should be submitted following a regularized format. Secondly, the syntactic structure, semantics of I/O, keyword-compatible match algorithms along with the reliability calculation algorithm were designed. Finally, the accurate search method by combining syntactic and semantic queries was realized by using the above algorithms. The test result shows that the proposed method can improve Mean Reciprocal Rank (MRR) by more than 62% compared with the common keyword-based search method, and it is effective in improving the accuracy of source code search.
    Cache replacement strategy based on access mechanism of ciphertext policy attribute based encryption
    CHEN Jian, SHEN Xiaojun, YAO Yiyang, XING Yafei, JU Xiaoming
    2017, 37(10):  2964-2967.  DOI: 10.11772/j.issn.1001-9081.2017.10.2964
    In order to improve cache performance for encrypted data based on Ciphertext Policy Attribute Based Encryption (CP-ABE), an effective replacement algorithm named Minimum Attribute Value (MAV) algorithm was proposed. Combining the ciphertext access mechanism of CP-ABE and counting high-frequency attribute values, the attribute similarity was calculated by the cosine similarity method and the table of high-frequency attribute values; meanwhile, the attribute value of each cached file was calculated according to the attribute similarity and the size of the encrypted file, and the file with the minimum attribute value was replaced. The experimental results prove that the MAV algorithm performs better in byte hit rate and file-request hit rate than the Least-Recently-Used (LRU), Least-Frequently-Used (LFU) and Size algorithms for encrypted data based on CP-ABE.
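The eviction rule can be sketched as follows: score each cached ciphertext by its cosine similarity to the high-frequency attribute profile, weighted against file size, and evict the minimum. The exact combination of similarity and size is not given in the abstract, so the similarity/size ratio below is an assumption.

```python
def cosine(a, b):
    """Cosine similarity of two attribute vectors (e.g. 0/1-encoded)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def evict(cache, hot_attrs):
    """MAV-style choice: attribute value = similarity to the
    high-frequency attribute profile divided by file size (assumed
    weighting); the file with the minimum value is replaced."""
    return min(cache, key=lambda name:
               cosine(cache[name]['attrs'], hot_attrs) / cache[name]['size'])
```

Files whose access-policy attributes match the currently popular attributes, and which are cheap to keep, survive longest in the cache.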
    False positive recognition method based on classification for null pointer dereference defects
    WANG Shuyan, QUAN Yafei, SUN Jiaze
    2017, 37(10):  2968-2972.  DOI: 10.11772/j.issn.1001-9081.2017.10.2968
    Focusing on the false positive problem of Null Pointer Dereference (NPD) defects in static testing, a new false positive recognition method for NPD defects based on classification was proposed. Knowledge of NPD defects was mined and preprocessed to generate a defect dataset. The dataset was then classified via the ID3 classification algorithm based on rough set theory into two classes: false positive NPD defect instances and real NPD defect instances. The real NPD defects were confirmed according to the classification results by recognizing the false positive NPD defects. The method was tested on ten benchmark programs and compared with the NPD defect detection method based on the mainstream static testing tool FindBugs: the false positive rate was reduced by 25%, and the confirmation workload was reduced by 24%. The experimental results show that the proposed method can effectively reduce defect confirmation overhead and improve the detection efficiency and stability for NPD defects in static testing.
    Component retrieval method based on identification of faceted classification and cluster tree
    QIAN Xiaojie, DU Shenghao
    2017, 37(10):  2973-2977.  DOI: 10.11772/j.issn.1001-9081.2017.10.2973
    To quickly and efficiently retrieve target components from a large software component library, a component retrieval method based on faceted classification identification and a cluster tree was proposed. A component with faceted classification identification was described by the set of component identifications, which overcomes the impact of subjective factors when components are described and retrieved using faceted classification alone. By introducing a cluster tree, components were clustered by semantic similarity into a component cluster tree, thus narrowing the retrieval area, reducing the number of comparisons with the component library, and improving search efficiency. Finally, the proposed method was evaluated experimentally and compared with other common retrieval methods. The results show that the precision of the proposed method is 88.3% and the recall is 93.1%; moreover, the proposed method also retrieves effectively in a large-scale component library.
    Optimization of ordered charging strategy for large scale electric vehicles based on quadratic clustering
    ZHANG Jie, YANG Chunyu, JU Fei, XU Xiaolong
    2017, 37(10):  2978-2982.  DOI: 10.11772/j.issn.1001-9081.2017.10.2978
    Aiming at the problem of unbalanced utilization of charging stations caused by disordered charging of a large number of electric vehicles, an orderly charging strategy for electric vehicles was proposed. Firstly, the locations of the electric vehicles' charging demands were clustered, and hierarchical clustering with a quadratic division based on K-means was used to group electric vehicles with similar properties. Furthermore, the optimized path to each charging station was determined by the Dijkstra algorithm, and with the even distribution of vehicles over stations and the shortest charging distance as objective functions, a charging scheduling model based on electric vehicle clustering was constructed and solved with a genetic algorithm. The simulation results show that, compared with a charging scheduling strategy without clustering of electric vehicles, the computation time of the proposed method is reduced by more than half for large numbers of vehicles, and it has higher practicability.
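The path step can be sketched with a standard heap-based Dijkstra on a road graph; the toy graph below (vehicle node 'v', stations 's1'/'s2') is made up for illustration, and the balancing objective handled by the genetic algorithm is not shown.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path costs from src.
    graph: node -> list of (neighbor, edge_weight)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                     # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def nearest_station(graph, vehicle, stations):
    """Charging station with the smallest road distance from the vehicle."""
    dist = dijkstra(graph, vehicle)
    return min(stations, key=lambda s: dist.get(s, float('inf')))
```

In the full strategy these per-cluster distances feed the genetic algorithm's objective, which trades distance off against even station utilization.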
    Intelligent integration approach of big data for urban infrastructure management and maintenance
    LIU Jiajun, YU Gang, HU Min
    2017, 37(10):  2983-2990.  DOI: 10.11772/j.issn.1001-9081.2017.10.2983
    In order to improve the efficiency of data integration, enhance the statistical and decision analysis performance of the platform, and reduce Extract-Transform-Load (ETL) execution time and the burden on the data center, a Multilevel Task Scheduling ETL framework (MTS-ETL) was proposed for the intelligent maintenance of operation and maintenance big data, which is high-dimensional, diverse and variable. Firstly, the data warehouse was divided into a data temporary area, data storage area, data classification area and data analysis area; accordingly, the integral ETL process was divided into four levels of ETL task scheduling, and multi-frequency ETL operation scheduling together with sequential and non-sequential ETL working modes were designed. Secondly, the conceptual, logical and physical modelling of data integration were implemented based on the non-sequential mode of the MTS-ETL framework. Finally, the ETL transformation and job modules were designed with Pentaho Data Integration to realize the data integration method. In a traffic-flow data integration experiment, the method integrated 136754 records in only 28.4 seconds, and reduced the total average execution time by 6.51% compared with the traditional ETL method in a thousand-scale data integration experiment. The reliability of the ETL process was proved by the report analysis results of integrating 4 million records. The proposed method can effectively integrate operation and maintenance big data, improve the statistical analysis performance of the platform, and keep ETL execution time at a low level.
    Agent-based dynamic scheduling system for hybrid flow shop
    WANG Qianbo, ZHANG Wenxin, WANG Bailin, WU Zixuan
    2017, 37(10):  2991-2998.  DOI: 10.11772/j.issn.1001-9081.2017.10.2991
    Aiming at the uncertainty and dynamism of agile manufacturing and the features of the Hybrid Flow Shop (HFS) scheduling problem, a multi-Agent based dynamic scheduling system for hybrid flow shops was developed, which consists of a management Agent, a strategy Agent, job Agents and machine Agents. First, an HFS-oriented Interpolation Sorting (HIS) algorithm was proposed and integrated into the strategy Agent; it is suitable for static scheduling and for dynamic scheduling under a variety of dynamic events. Then the coordination mechanism between the Agents was designed. In the production process, all Agents work independently and coordinate with each other according to their behavioral logic. When a dynamic event occurs, the strategy Agent calls the HIS algorithm to generate a job sequence according to the current workshop state, and the Agents then continue to coordinate according to the generated sequence until production is finished. Finally, simulations of dynamic scheduling with machine failures, rush orders and online arrivals were carried out. The experimental results show that the HIS algorithm produces better schedules than a variety of scheduling rules in these cases; in particular, in machine-breakdown rescheduling, the consistency of makespan before and after rescheduling was up to 97.6%, which shows that the HFS dynamic scheduling system is effective and flexible.
    Automatic hyponymy extracting method based on symptom components
    WANG Ting, WANG Qi, HUANG Yueqi, YIN Yichao, GAO Ju
    2017, 37(10):  2999-3005.  DOI: 10.11772/j.issn.1001-9081.2017.10.2999
    Abstract   PDF (1095KB)
    Since the hyponymy between symptoms has strong structural features, an automatic hyponymy extracting method based on symptom components was proposed. Firstly, it was found that symptoms can be divided into eight components, such as atomic symptoms and adjunct words, and that the composition of these components satisfies certain construction rules. Then, a lexical analysis system and a Conditional Random Field (CRF) model were used to segment symptoms and label the components. Finally, hyponymy extraction was treated as a classification problem: symptom-constituent features, dictionary features and general features were selected as inputs to different classification algorithms, and the relationships between symptoms were classified as hyponymy or non-hyponymy. The experimental results show that when all these features are used together, the precision, recall and F1-measure of the Support Vector Machine (SVM) classifier reach 82.68%, 82.13% and 82.40%, respectively. On this basis, 20619 hyponymy pairs were extracted with the proposed algorithm, and a knowledge base of symptom hyponymy was built.
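The classification setup can be made concrete with a sketch of the feature construction step. The specific feature definitions and the tiny dictionary below are illustrative stand-ins for the paper's symptom-constituent, dictionary and general features, not the actual feature set.

```python
# Illustrative feature construction for a candidate (hypernym, hyponym)
# symptom pair; the resulting vectors would feed a classifier such as SVM.

ATOMIC_DICT = {"pain", "fever", "cough"}   # tiny stand-in atomic-symptom dictionary

def pair_features(hyper, hypo):
    """Feature vector for a candidate hyponymy pair (hyper, hypo)."""
    return [
        1 if hyper in hypo else 0,                          # constituent: hypernym is a substring
        len(hypo) - len(hyper),                             # general: length difference
        1 if any(a in hyper for a in ATOMIC_DICT) else 0,   # dictionary: hypernym contains an atomic symptom
        1 if hypo.startswith(hyper.split()[0]) else 0,      # general: hyponym starts with hypernym's first word
    ]
```

A more specific symptom such as "chest pain" typically contains its hypernym "pain" as a constituent, which is exactly the structural regularity the method exploits.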
    Face annotation in news images based on multi-modal information fusion
    ZHENG Cha, JI Lixin, LI Shaomei, GAO Chao
    2017, 37(10):  3006-3011.  DOI: 10.11772/j.issn.1001-9081.2017.10.3006
    Abstract   PDF (1141KB)
    Traditional face annotation methods for news images rely mainly on face similarity information, and are poor at distinguishing non-noise faces from noise faces and at annotating non-noise faces. Aiming at this issue, a face annotation method based on multi-modal information fusion was proposed. Firstly, according to the co-occurrence relations between faces and names, face-name match degrees based on face similarity were obtained by a modified K-Nearest Neighbors (KNN) algorithm. Then, face importance was characterized by the size and position of the faces extracted from the images, and name importance was characterized by the position of the names extracted from the images. Finally, a Back Propagation (BP) neural network was applied to fuse the above information and infer face labels, and an annotation-correction strategy was proposed to further improve the results. Experimental results on the Labeled Yahoo! News dataset demonstrate that the accuracy, precision and recall of the proposed method reach 77.11%, 73.58% and 78.75% respectively; compared with methods based only on face similarity, the proposed method is markedly better at distinguishing non-noise faces from noise faces and at annotating non-noise faces.
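The first step, scoring candidate names for a face, can be sketched with a plain KNN vote. The distance metric, the vote-based degree and the value of K here are simplifying assumptions; the paper uses a modified KNN rather than this vanilla version.

```python
# Sketch of the face-name match-degree idea: score each candidate name for a
# query face by the share of its K nearest labeled faces carrying that name.

def match_degrees(query, labeled_faces, k=3):
    """labeled_faces: list of (feature_vector, name). Returns {name: degree}."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(labeled_faces, key=lambda fn: dist(query, fn[0]))[:k]
    degrees = {}
    for _, name in nearest:
        degrees[name] = degrees.get(name, 0.0) + 1.0 / k
    return degrees
```

In the full method these degrees are one input among several: the BP network fuses them with face and name importance before assigning a label.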
    Multi-channel pedestrian detection algorithm based on textural and contour features
    HAN Jiandong, DENG Yifan
    2017, 37(10):  3012-3016.  DOI: 10.11772/j.issn.1001-9081.2017.10.3012
    Abstract   PDF (950KB)
    To address the low detection precision and high false-detection rate of the Aggregated Channel Feature (ACF) pedestrian detection algorithm in complex scenes, a multi-channel pedestrian detection algorithm combining texture and contour features was proposed. The algorithm consists of a classifier-training phase and a detection phase. In the training phase, the ACF, the Local Binary Pattern (LBP) texture features and the Sketch Tokens (ST) contour features were extracted and trained separately with Real AdaBoost classifiers. In the detection phase, a cascade strategy was used: the ACF classifier was applied to all candidate objects, and the more complex LBP and ST classifiers then progressively filtered the results of the previous stage. Simulations on the INRIA dataset show that the proposed algorithm achieves a Log-Average Miss Rate (LAMR) of 13.32%, 3.73 percentage points lower than that of the ACF algorithm. The experimental results verify that LBP and ST features complement ACF, so that false detections in complicated scenes can be eliminated and accuracy improved, while the cascade structure keeps the multi-feature algorithm efficient.
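The cascade structure itself is simple enough to sketch directly. The stage scorers below are stand-ins for the trained ACF, LBP and ST classifiers, and the thresholds are illustrative; the point is only that each stage sees just the survivors of the previous one.

```python
# Sketch of cascade detection: a cheap first stage scores every candidate
# window, and only survivors reach the costlier later stages.

def cascade_detect(windows, stages, thresholds):
    """Keep a window only if every stage's score clears its threshold."""
    survivors = list(windows)
    for stage, thr in zip(stages, thresholds):
        survivors = [w for w in survivors if stage(w) >= thr]
        if not survivors:
            break           # nothing left for later, costlier stages to score
    return survivors
```

Because most windows are rejected by the cheap first stage, the expensive LBP and ST evaluations run on only a small fraction of candidates, which is how the cascade preserves efficiency.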
    Automatic lane division method based on echo signal of microwave radar
    XIU Chao, CAO Lin, WANG Dongfeng, ZHANG Fan
    2017, 37(10):  3017-3023.  DOI: 10.11772/j.issn.1001-9081.2017.10.3017
    Abstract   PDF (990KB)
    When police enforce traffic law with multi-target speed-measuring radar, it is essential to determine which lane each vehicle occupies; only then can the captured pictures serve as law-enforcement evidence. The traditional way to divide lanes is to obtain a fixed threshold by manual measurement, sometimes combined with coordinate-system rotation, but this approach is error-prone and difficult to operate. A new lane-division algorithm, Kernel Clustering based on Statistical and Density Features (K-CSDF), was proposed with two steps: firstly, the vehicle data captured by the radar were processed by a feature extraction method based on statistical and density features; secondly, a dynamic clustering algorithm based on kernels and similarity was used to cluster the processed data. Simulations comparing the Gaussian Mixture Model (GMM) and Self-Organizing Map (SOM) algorithms show that the proposed algorithm and SOM can achieve a lane accuracy of more than 90% with only 100 sample points, while GMM cannot detect the lane center line. In terms of running time, with 1000 sample points the proposed algorithm and GMM each take less than one second, guaranteeing real-time performance, while SOM takes about 2.5 seconds. The proposed algorithm is also more robust than GMM and SOM when the sample points are non-uniformly distributed. With different numbers of sample points for clustering, the proposed algorithm achieves an average lane-division accuracy of more than 95%.
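The density-feature idea can be illustrated with a much simpler heuristic: count neighbors within a radius around each lateral position and pick well-separated density peaks as lane centers. This replaces K-CSDF's kernel-similarity clustering with a toy stand-in, and the radius and lane-gap values are invented for illustration.

```python
# Toy stand-in for the lane-division idea: estimate lane centers from the
# lateral (cross-road) coordinates of radar vehicle tracks.

def lane_centers(lateral_positions, radius=0.5, min_gap=2.0):
    pts = sorted(lateral_positions)
    # density feature: number of neighbors within `radius` of each point
    density = [sum(1 for q in pts if abs(q - p) <= radius) for p in pts]
    centers = []
    # greedily take density peaks that are at least one lane width apart
    for p, d in sorted(zip(pts, density), key=lambda t: -t[1]):
        if all(abs(p - c) >= min_gap for c in centers):
            centers.append(p)
    return sorted(centers)
```

Because vehicle tracks bunch around lane centers, the density feature is informative even when traffic is non-uniformly distributed across lanes, which is the setting where the paper reports its robustness advantage.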
    Missile hit prediction model based on adaptively-mutated chaotic particle swarm optimization and support vector machine
    XU Lingkai, YANG Rennong, ZHANG Binchao, ZUO Jialiang
    2017, 37(10):  3024-3028.  DOI: 10.11772/j.issn.1001-9081.2017.10.3024
    Abstract   PDF (812KB)
    Intelligent air combat is a hot research topic in the military aviation field, and missile hit prediction is an important part of it. To address the insufficient research on missile hit prediction, the poor optimization ability of existing algorithms and the low prediction accuracy of existing models, a missile hit prediction model based on Adaptively-Mutated Chaotic Particle Swarm Optimization (AMCPSO) and Support Vector Machine (SVM) was proposed. Firstly, features were extracted from air combat data to build a sample library for model training; then the improved AMCPSO algorithm was used to optimize the penalty factor C and the kernel parameter g of the SVM, and the optimized model was used to predict the samples; finally, comparisons were made with the classical PSO algorithm, the BP neural network method and a lattice-based method. The results show that the proposed algorithm has stronger global and local optimization ability and the proposed model achieves higher prediction accuracy, which can provide a reference for missile hit prediction research.
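The core loop of PSO-based hyperparameter tuning can be sketched as follows. `fake_cv_accuracy` is an invented stand-in for cross-validated SVM accuracy as a function of (C, g), and the sketch is plain PSO: AMCPSO's chaotic initialization and adaptive mutation, which the paper adds to escape local optima, are omitted.

```python
# Minimal PSO sketch for tuning the SVM penalty factor C and kernel parameter g.
import random

def pso(fitness, bounds, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # per-particle best positions
    pbest_val = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = fitness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for cross-validated accuracy of an SVM with parameters (C, g);
# a smooth test surface peaking at C = 10, g = 0.1.
def fake_cv_accuracy(params):
    C, g = params
    return 1.0 - ((C - 10.0) / 10.0) ** 2 - ((g - 0.1) / 0.1) ** 2
```

In the real model, each fitness evaluation would train and cross-validate an SVM on the air-combat sample library, so the number of particles and iterations trades optimization quality against training cost.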
    Monitoring and analysis of operation status under architecture of stream computing and memory computing
    ZHAO Yongbin, CHEN Shuo, LIU Ming, WANG Jianan, BEN Chi
    2017, 37(10):  3029-3033.  DOI: 10.11772/j.issn.1001-9081.2017.10.3029
    Abstract   PDF (798KB)
    In real-time operation state analysis of the power grid, to meet the requirements of analyzing and processing large-scale real-time data such as electricity consumption in real time and to provide fast and accurate data-analysis support for grid operation decisions, a system architecture for large-scale data analysis and processing based on stream computing and memory computing was proposed. The Discrete Fourier Transform (DFT) was applied to users' real-time electricity consumption data, windowed by time, to construct an abnormal electricity behavior evaluation index. The K-Means clustering algorithm was used to classify users' electricity behavior based on behavior characteristics constructed by sampling and statistical analysis. The accuracy of the proposed abnormal-behavior evaluation index and of the electricity behavior classification was verified with data extracted from an actual business system. Compared with a traditional data processing strategy, the architecture combining stream computing and memory computing also shows good performance in large-scale data analysis and processing.
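A DFT-based index over a consumption window can be sketched as follows. The specific index, the share of spectral energy outside the DC component, is an illustrative stand-in; the paper's actual evaluation index is not specified in the abstract.

```python
# Sketch of a DFT-based abnormal-consumption index for one time window of a
# user's electricity readings.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def high_freq_ratio(window):
    """Share of spectral energy outside the DC (k = 0) component."""
    spectrum = dft(window)
    energy = [abs(c) ** 2 for c in spectrum]
    total = sum(energy)
    return 0.0 if total == 0 else sum(energy[1:]) / total
```

A flat consumption profile scores near zero, while abrupt spikes push most of the energy into non-DC components; in the streaming architecture this computation would run per time window as records arrive.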
    Nuclear magnetic resonance logging reservoir permeability prediction method based on deep belief network and kernel extreme learning machine algorithm
    ZHU Linqi, ZHANG Chong, ZHOU Xueqing, WEI Yang, HUANG Yuyang, GAO Qiming
    2017, 37(10):  3034-3038.  DOI: 10.11772/j.issn.1001-9081.2017.10.3034
    Abstract   PDF (791KB)
    Due to the complicated pore structure of low-porosity, low-permeability reservoirs, the existing Nuclear Magnetic Resonance (NMR) logging permeability models have limited prediction accuracy for such reservoirs. To solve this problem, a permeability prediction method based on the Deep Belief Network (DBN) algorithm and the Kernel Extreme Learning Machine (KELM) algorithm was proposed. The DBN model was first pre-trained, then the KELM model was placed as a predictor on top of the trained DBN, and finally the combined Deep Belief Kernel Extreme Learning Machine Network (DBKELMN) model was trained with supervision on the training data. Since the model should make full use of the transverse relaxation time spectrum, which reflects the pore structure, the discretized transverse relaxation time spectrum of NMR logging was taken as the input and permeability as the output, the functional relationship between the two was determined, and the reservoir permeability was predicted from it. Example applications show that the method is effective: the Mean Absolute Error (MAE) of the prediction samples is 0.34 lower than that of the Schlumberger-Doll Research (SDR) model. The experimental results show that combining the DBN and KELM algorithms improves prediction accuracy for low-porosity, low-permeability reservoirs, and the method can be applied to oil and gas field exploration and development.
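The KELM predictor has a closed-form fit worth making explicit: with kernel matrix K over the n training samples, regularization C and targets t, the output weights solve (I/C + K)a = t, and a new input x is scored as the weighted kernel sum over the training samples. The sketch below shows that closed form with an RBF kernel; the tiny dense solver is for illustration only, and the DBN feature extractor that precedes KELM in the paper's model is omitted.

```python
# Minimal KELM sketch: ridge-regularized kernel regression in closed form.
import math

def rbf(a, b, gamma=1.0):
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def kelm_fit(X, t, C=100.0, gamma=1.0):
    """Solve (I/C + K) a = t for the output weights a."""
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) + (1.0 / C if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, t)

def kelm_predict(X, alpha, x, gamma=1.0):
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, X))
```

In the full DBKELMN model, X would be the DBN's learned representation of the discretized transverse relaxation time spectrum rather than the raw spectrum itself.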
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn