Table of Contents

    10 March 2018, Volume 38 Issue 3
    Network public opinion prediction by empirical mode decomposition-autoregression based on extreme gradient boosting model
    MO Zan, ZHAO Bing, HUANG Yanying
    2018, 38(3):  615-619.  DOI: 10.11772/j.issn.1001-9081.2017071846
    Abstract   PDF (731KB)
    References | Related Articles | Metrics
    With the arrival of the big data era, network public opinion data exhibits the features of massive volume and wide coverage. For such complicated data, traditional single models can hardly predict the trend of network public opinion efficiently. To address this problem, an improved combination model based on the Empirical Mode Decomposition-AutoRegression (EMD-AR) model, called EMD-ARXG (Empirical Mode Decomposition-AutoRegression based on eXtreme Gradient boosting), was proposed and applied to predicting the trend of complex network public opinion. In this model, the Empirical Mode Decomposition (EMD) algorithm was employed to decompose the time series, then an AutoRegression (AR) model was fitted to each decomposed sub-series to establish sub-models, and finally the sub-models were reconstructed to complete the modeling process. In addition, to reduce the fitting error of the AR models, the residual error was learned by eXtreme Gradient Boosting (XGBoost), and each sub-model was iteratively updated to improve its prediction accuracy. To verify the prediction performance of the EMD-ARXG model, it was compared with the wavelet neural network model and the EMD-based back propagation neural network model. The experimental results show that the EMD-ARXG model is superior to the two other models in terms of Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE) and Theil Inequality Coefficient (TIC).
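The pipeline above (decompose, fit per-component AR sub-models, learn the residual, recombine) can be sketched as follows. This is a simplified illustration only: AR(1) sub-models stand in for the paper's AR fits, and a constant-mean residual corrector stands in for XGBoost; the EMD decomposition itself is assumed to have already produced the sub-series (IMFs).

```python
# Hypothetical sketch of the EMD-ARXG pipeline: each decomposed
# sub-series (IMF) gets its own AR(1) sub-model, the fitting residual is
# corrected by a learned term (a constant mean here, standing in for
# XGBoost), and the sub-model forecasts are summed to rebuild the
# next value of the original series.

def fit_ar1(series):
    """Least-squares AR(1) coefficient: x_t ~ phi * x_{t-1}."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den if den else 0.0

def forecast_emd_ar(imfs):
    """One-step-ahead forecast: sum of residual-corrected AR(1) forecasts."""
    total = 0.0
    for imf in imfs:
        phi = fit_ar1(imf)
        # residual-learner stand-in: average one-step fitting error
        residuals = [imf[t] - phi * imf[t - 1] for t in range(1, len(imf))]
        correction = sum(residuals) / len(residuals)
        total += phi * imf[-1] + correction
    return total
```

In the full model each IMF would receive a proper AR(p) fit and an XGBoost regressor trained on its residual sequence.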
    Temporal semantic understanding for intelligent service systems
    JIA Shengbin, XIANG Yang
    2018, 38(3):  620-625.  DOI: 10.11772/j.issn.1001-9081.2017092251
    Abstract   PDF (955KB)
    Aiming at the problem that it is hard to process temporal semantic information when formulating and providing intelligent services, a temporal semantic understanding model for intelligent service systems was proposed. For service message texts in natural language, temporal information extraction, mapping, and semantic modeling were implemented, so as to provide a universal temporal semantic expression pattern for intelligent service systems. Firstly, a heuristic strategy was adopted to automatically extract temporal phrases and construct a temporal information knowledge base without any manual intervention. Then, a temporal information mapping method based on temporal units was proposed to complete the quantitative expression of absolute time and the logical reasoning of relative time. Finally, a temporal semantic model was constructed by comprehensively using temporal information and contextual information. On a service message test set, the experimental results show that the precision of temporal information extraction reaches 97.58% and the mapping precision is greater than 85%, and the semantic modeling also achieves satisfactory results.
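The mapping step (quantitative expression of absolute time and logical reasoning of relative time) can be illustrated with a toy temporal-unit resolver. The unit table and phrase patterns below are illustrative assumptions, not the paper's actual knowledge base:

```python
# A minimal sketch of temporal-unit mapping: a relative phrase (already
# extracted, e.g. by a heuristic strategy) is resolved against a
# reference date. UNIT_DAYS and the phrase patterns are assumptions.
from datetime import date, timedelta

UNIT_DAYS = {"day": 1, "week": 7}  # assumed temporal-unit table

def resolve_relative(phrase, reference):
    """Map phrases like '3 days ago' / 'in 2 weeks' to an absolute date."""
    tokens = phrase.lower().split()
    if tokens[-1] == "ago":                      # e.g. "3 days ago"
        n, unit = int(tokens[0]), tokens[1].rstrip("s")
        return reference - timedelta(days=n * UNIT_DAYS[unit])
    if tokens[0] == "in":                        # e.g. "in 2 weeks"
        n, unit = int(tokens[1]), tokens[2].rstrip("s")
        return reference + timedelta(days=n * UNIT_DAYS[unit])
    raise ValueError("unsupported phrase: " + phrase)
```

A real system would additionally combine such resolved anchors with contextual information, as the abstract describes.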
    Recognition of temporal relation in Chinese electronic medical records
    SUN Jian, GAO Daqi, RUAN Tong, YIN Yichao, GAO Ju, WANG Qi
    2018, 38(3):  626-632.  DOI: 10.11772/j.issn.1001-9081.2017082087
    Abstract   PDF (1121KB)
    The temporal relations or temporal links (denoted by the TLink tag) in Chinese electronic medical records include temporal relations within a sentence (hereafter referred to as "within-sentence TLinks") and between-sentence TLinks. Within-sentence TLinks include event/event TLinks and event/time TLinks, while between-sentence TLinks include event/event TLinks. The recognition of temporal relations in Chinese electronic medical records was transformed into a classification problem over entity pairs. Heuristic rules with high accuracy were developed, and two different classifiers with basic features, phrase syntax, dependency features and other features were trained to determine within-sentence TLinks. Apart from heuristic rules with high accuracy, basic features, phrase syntax and other features were used to train the classifier that determines between-sentence TLinks. The experimental results show that the Support Vector Machine (SVM), SVM and Random Forest (RF) algorithms achieve the best recognition performance on within-sentence event/event TLinks, within-sentence event/time TLinks and between-sentence event/event TLinks, with F1-scores of 84.0%, 85.6% and 63.5% respectively.
    Collaborative filtering recommendation algorithm based on multi-level hybrid similarity
    YUAN Zhengwu, CHEN Ran
    2018, 38(3):  633-638.  DOI: 10.11772/j.issn.1001-9081.2017071718
    Abstract   PDF (946KB)
    In view of the performance flaws of traditional collaborative filtering recommendation algorithms in the case of sparse data and the limitations of a single similarity measurement method, a collaborative filtering recommendation algorithm based on multi-level hybrid similarity was proposed to improve recommendation accuracy. The algorithm is divided into three levels. Firstly, the concept of fuzzy sets was used to fuzzify user ratings and calculate users' fuzzy preferences, and the adjusted cosine similarity of user ratings and the Jaccard similarity of user ratings were combined as the user rating similarity. Secondly, user ratings were classified to predict the degree of interest of a user in each item category, from which the user interest similarity was calculated. Thirdly, the user characteristic similarity was computed from the characteristic attributes of users. Then, the user interest similarity and user characteristic similarity were dynamically integrated according to the number of user ratings. Finally, the similarities of the three levels were fused as the final user similarity. The experimental results show that, when the number of neighbors is small, the improved hybrid algorithm decreases the Mean Absolute Error (MAE) by 5% compared to the adjusted cosine similarity algorithm. Compared with the improved MKJCF (Modified K-pow Jaccard similarity Cooperative Filtering) algorithm, the hybrid algorithm has a slight advantage, with the MAE falling by about 1% on average as the number of neighbors increases. The proposed algorithm uses a multi-level recommendation strategy to improve recommendation accuracy, effectively alleviating the impact of data sparseness and of a single measurement method.
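As a sketch of the first level, the adjusted cosine similarity on co-rated items can be combined with the Jaccard similarity of the rated-item sets. Combining the two by product is an assumption made here for illustration, and the fuzzy-preference step is omitted:

```python
# First-level rating similarity sketch: adjusted cosine over co-rated
# items, weighted by the Jaccard similarity of the rated-item sets.
# Ratings are dicts mapping item -> score.
from math import sqrt

def rating_similarity(ra, rb):
    common = set(ra) & set(rb)
    if not common:
        return 0.0
    mean_a = sum(ra.values()) / len(ra)   # each user's mean rating
    mean_b = sum(rb.values()) / len(rb)
    num = sum((ra[i] - mean_a) * (rb[i] - mean_b) for i in common)
    da = sqrt(sum((ra[i] - mean_a) ** 2 for i in common))
    db = sqrt(sum((rb[i] - mean_b) ** 2 for i in common))
    adj_cos = num / (da * db) if da and db else 0.0
    jaccard = len(common) / len(set(ra) | set(rb))
    return adj_cos * jaccard  # product combination is an assumption
```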
    Personalized test question recommendation method based on unified probabilistic matrix factorization
    LI Quan, LIU Xinghong, XU Xinhua, LIN Song
    2018, 38(3):  639-643.  DOI: 10.11772/j.issn.1001-9081.2017082071
    Abstract   PDF (923KB)
    In recent years, test question resources in online education have grown at an explosive rate, and it is difficult for students to find appropriate questions among this mass of resources. Many test question recommendation methods have been proposed to solve this problem. However, traditional test question recommendation methods based on unified probabilistic matrix factorization have several shortcomings; in particular, information about students' knowledge points is not considered, resulting in low accuracy of recommendation results. Therefore, a personalized test question recommendation method based on unified probabilistic matrix factorization was proposed. Firstly, the students' knowledge point mastery information was obtained through a cognitive diagnosis model. Secondly, unified probabilistic matrix factorization was performed by combining the information of students, test questions and knowledge points. Finally, test questions were recommended according to the difficulty range. The experimental results show that, compared to other traditional recommendation methods, the proposed method achieves the best recommendation accuracy for different ranges of difficulty, and it has a good application prospect.
    Hybrid recommendation algorithm based on probability matrix factorization
    YANG Fengrui, ZHENG Yunjun, ZHANG Chang
    2018, 38(3):  644-649.  DOI: 10.11772/j.issn.1001-9081.2017082116
    Abstract   PDF (870KB)
    Aiming at the problems of data sparseness and cold start in social network recommendation systems, a hybrid social network recommendation algorithm based on feature Transform and Probabilistic Matrix Factorization (TPMF) was proposed. Using Probabilistic Matrix Factorization (PMF) as the recommendation framework, the trust network, the relationships between recommended items, the user-item score matrix and adaptive weights were combined to balance the impact of individual and social latent characteristics on users. Trust feature transfer was introduced into the recommendation system as a valid basis for recommendation. The experimental results show that, compared to User-Based Collaborative Filtering (UBCF), TidalTrust, PMF and SoRec, the Mean Absolute Error (MAE) of TPMF is decreased by 4.1% to 20.8%, and the Root Mean Square Error (RMSE) of TPMF is decreased by 3.3% to 18.5%. For the cold start problem, compared with the above four algorithms, the MAE is decreased by 1.6% to 14.7% and the RMSE by 1.2% to 9.7%, which verifies that TPMF effectively alleviates the cold start problem and improves the robustness of the algorithm.
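The PMF framework underlying TPMF can be sketched as plain stochastic gradient descent on user/item latent factors. The latent dimension, learning rate and regularization below are illustrative choices, and the trust-transfer terms specific to TPMF are omitted:

```python
# Minimal PMF sketch: user/item latent vectors fitted to observed
# ratings by SGD with L2 regularization.
import random

def pmf(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02, epochs=200):
    """ratings: list of (user, item, score) observations."""
    rng = random.Random(0)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

def predict(U, V, u, i):
    return sum(a * b for a, b in zip(U[u], V[i]))
```

TPMF would add trust-network and item-relationship terms to the same objective, with adaptive weights balancing them.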
    Diversity analysis and improvement of AdaBoost
    WANG Lingdi, XU Hua
    2018, 38(3):  650-654.  DOI: 10.11772/j.issn.1001-9081.2017092226
    Abstract   PDF (925KB)
    To address how to measure the diversity among the weak classifiers created by AdaBoost, as well as AdaBoost's over-adaptation problem, an improved AdaBoost method based on the double-fault measure was proposed, built on an analysis of the relationship between four diversity measures and the classification accuracy of AdaBoost. Firstly, the Q statistic, correlation coefficient, disagreement measure and double-fault measure were selected for experiments on data sets from the UCI (University of California, Irvine) machine learning repository. Then, the relationship between diversity and the ensemble classifier's accuracy was evaluated with the Pearson correlation coefficient. The results show that each measure tends to a stable value in the later stage of iteration; in particular, the double-fault measure changes similarly on different data sets, increasing in the early stage and stabilizing in the later stage of iteration. Finally, a weak classifier selection strategy based on the double-fault measure was put forward. The experimental results show that, compared with other commonly used ensemble methods, the test error of the improved AdaBoost algorithm is reduced by 1.5 percentage points on average and by 4.8 percentage points at most. Therefore, the proposed algorithm can improve classification performance.
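The double-fault measure at the heart of the selection strategy is simple to state for a pair of base classifiers: the fraction of samples that both classifiers get wrong (lower means the pair is more diverse):

```python
# Double-fault diversity measure for two classifiers' predictions.
def double_fault(pred_a, pred_b, truth):
    both_wrong = sum(1 for a, b, y in zip(pred_a, pred_b, truth)
                     if a != y and b != y)
    return both_wrong / len(truth)
```

A selection strategy in the spirit of the paper would keep a candidate weak classifier only when its double-fault measure against the existing ensemble members stays low.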
    Alerting algorithm of low-level wind shear based on fuzzy C-means
    XIONG Xinglong, YANG Lixiang, MA Yuzhao, ZHUANG Zibo
    2018, 38(3):  655-660.  DOI: 10.11772/j.issn.1001-9081.2017081942
    Abstract   PDF (978KB)
    To solve the problem that China's new-generation Doppler weather radar (CINRAD) easily misses small shear in the radial or tangential direction, a new low-level wind shear alerting algorithm based on Fuzzy C-Means (FCM) was proposed for the wind shear identification of fronts and tornadoes. To warn of both high and low shear, the core idea of the algorithm is to use an 8-neighborhood system and identify shear of varying degrees according to the divergence characteristics of the wind speed. Firstly, the Total Variation (TV) model was used to denoise the radar velocity base data while maintaining its details. Secondly, the 8-neighborhood system was convolved in turn with 4-direction templates to obtain the omnidirectional velocity gradient. Then, to alert on wind shear of different intensity, the FCM algorithm was used to classify the gradient values into two categories. Tested and verified on measured data provided by the Wuhan Rainstorm Research Institute, the algorithm successfully identified small shear. The results show that the proposed algorithm is superior to wind shear recognition algorithms based on the radial or tangential direction in terms of both position accuracy and edge recognition, which is of important guiding significance for judging the position and intensity of wind shear caused by different weather and for its analysis.
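The gradient-classification step can be sketched with a one-dimensional fuzzy C-means over gradient magnitudes, partitioning them into two fuzzy classes (low shear / high shear). The fuzzifier m = 2 and the iteration count are conventional choices, not taken from the paper:

```python
# 1-D fuzzy C-means sketch: partition scalar gradient values into c
# fuzzy clusters; returns cluster centers and the membership matrix.
def fcm_1d(values, c=2, m=2.0, iters=50):
    centers = [min(values), max(values)]        # simple initialization
    membership = [[0.0] * c for _ in values]
    for _ in range(iters):
        # update memberships from current centers
        for i, x in enumerate(values):
            for j in range(c):
                d_j = abs(x - centers[j]) or 1e-12
                inv = sum((d_j / (abs(x - centers[k]) or 1e-12)) ** (2 / (m - 1))
                          for k in range(c))
                membership[i][j] = 1.0 / inv
        # update centers as membership-weighted means
        for j in range(c):
            num = sum((membership[i][j] ** m) * x for i, x in enumerate(values))
            den = sum(membership[i][j] ** m for i in range(len(values)))
            centers[j] = num / den
    return centers, membership
```

In the alerting algorithm, the high-center cluster would trigger high-shear warnings and the low-center cluster low-shear warnings.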
    Weather radar echo extrapolation method based on convolutional neural networks
    SHI En, LI Qian, GU Daquan, ZHAO Zhangming
    2018, 38(3):  661-665.  DOI: 10.11772/j.issn.1001-9081.2017082098
    Abstract   PDF (963KB)
    Weather radar echo extrapolation has wide application prospects in short-term nowcasting. Traditional radar echo extrapolation methods struggle to achieve a long effective forecast period and make low use of radar data. This problem was investigated from a deep learning perspective, and a new model named Dynamic Convolutional Neural Network based on Input (DCNN-I) was proposed. Exploiting the strong correlation between weather radar echo images at adjacent times, a dynamic sub-network and a probability prediction layer were added, creating a function that maps the convolution kernels to the input, through which the convolution kernels can be updated from the input weather radar echo images during testing. In experiments on radar data from Nanjing, Hangzhou and Xiamen, this method achieved higher accuracy of predicted images compared with traditional methods, and effectively extended the effective extrapolation period.
    Application of Faster R-CNN model in vehicle detection
    WANG Lin, ZHANG Hehe
    2018, 38(3):  666-670.  DOI: 10.11772/j.issn.1001-9081.2017082025
    Abstract   PDF (877KB)
    Since traditional machine learning methods are easily affected by illumination, target scale and image quality in vehicle detection applications, resulting in low efficiency and generalization ability, a vehicle detection method based on an improved Faster Regions with Convolutional Neural Network features (Faster R-CNN) model was proposed. On the basis of the Faster R-CNN model, vehicle features were extracted through convolution and pooling operations, and multi-scale training and a hard negative sample mining strategy were combined to reduce the influence of complex environments. The KITTI data set was used to train the deep neural network model, and images collected from actual scenes were used to test the trained model. In the simulation experiments, while the detection time was preserved, the detection accuracy of the proposed method was improved by about 8% compared to the original Faster R-CNN algorithm. The experimental results show that the proposed method can automatically extract vehicle features, avoiding the time-consuming and laborious feature engineering of traditional methods, effectively improves the accuracy of vehicle detection, and has good generalization ability and a wide range of applications.
    Modified scale dependent pooling model for traffic image recognition
    XU Zhe, FENG Changhua
    2018, 38(3):  671-676.  DOI: 10.11772/j.issn.1001-9081.2017082054
    Abstract   PDF (1033KB)
    Aiming at the problems that traffic signs occupy a small proportion of natural scene images, the extracted features are insufficient and the recognition accuracy is low, an improved Scale-Dependent Pooling (SDP) model was proposed for the recognition of small-scale traffic images. Firstly, because the deep convolution layers of a neural network carry better contour information and class characteristics, an SDP model supplemented with deep convolution layer features (SD-SDP) was used to extract features, enriching the shallow-convolution feature information used by the original SDP model. Secondly, Multi-scale Sliding window Pooling (MSP) was used to recover the edge information of the target object, replacing the single-layer spatial pyramid in the original SDP algorithm. Finally, the improved SDP model was applied to the recognition of traffic signs. The experimental results show that, compared to the SDP algorithm, the dimension of the extracted features increases and the accuracy of small-scale traffic image recognition is improved.
    Deduplication algorithm based on Winnowing fingerprint matching
    WANG Qingsong, GE Hui
    2018, 38(3):  677-681.  DOI: 10.11772/j.issn.1001-9081.2017082023
    Abstract   PDF (974KB)
    In big data scenarios, the Content-Defined Chunking (CDC) deduplication algorithm suffers from several problems: the chunk size is difficult to control, the cost of fingerprint calculation and comparison is high, and parameters need to be set in advance. Thus, a Deduplication algorithm based on Winnowing Fingerprint Matching (DWFM) was proposed. Firstly, a chunk-size prediction model was introduced before chunking, which can accurately calculate a proper chunk size for the application scenario. Then, ASCII/Unicode codes were used as data block fingerprints in the fingerprint calculation. Finally, when determining chunk boundaries, the proposed fingerprint-matching-based algorithm does not need parameters set in advance, reducing the fingerprint calculation and comparison overhead. The experimental results on a variety of datasets show that DWFM achieves a deduplication rate about 10% higher than the FSP (Fixed-Sized Partitioning) and CDC algorithms, with about 18% less fingerprint calculation and comparison overhead. As a result, the chunk sizes and boundaries of DWFM are more consistent with the data characteristics, reducing the impact of parameter settings on deduplication performance while effectively eliminating more duplicate data when dealing with different types of data.
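The content-defined chunking baseline that DWFM improves on can be sketched with a rolling fingerprint over a sliding window: a boundary is declared wherever the fingerprint hits a target pattern, so boundaries follow content rather than fixed offsets. The window size, divisor and chunk-size bounds are illustrative, and the paper's chunk-size prediction model is not reproduced:

```python
# Content-defined chunking sketch: cut where the window fingerprint
# (CRC32 here, as a simple stand-in for a rolling hash) is divisible by
# `divisor`, with min/max chunk-size guards.
import zlib

def cdc_chunks(data, window=16, divisor=64, min_size=32, max_size=256):
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        if size >= min_size and i + 1 >= window:
            h = zlib.crc32(data[i + 1 - window:i + 1])
            if h % divisor == 0 or size >= max_size:
                chunks.append(data[start:i + 1])
                start = i + 1
    if start < len(data):
        chunks.append(data[start:])   # trailing chunk
    return chunks
```

Duplicate chunks can then be detected by comparing chunk fingerprints instead of raw bytes.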
    Adaptive security mechanism for defending On-off attack based on trust in Internet of things
    ZHANG Guanghua, YANG Yaohong, PANG Shaobo, CHEN Zhenguo
    2018, 38(3):  682-687.  DOI: 10.11772/j.issn.1001-9081.2017092214
    Abstract   PDF (1034KB)
    To reduce the unnecessary overhead of data source authentication in static security mechanisms and to defend against On-off attacks on trust threshold mechanisms, an adaptive trust-based security mechanism for the Internet of Things (IoT) was proposed. Firstly, a trust evaluation model was built according to node behavior during information interaction, and a method for measuring the total trust value of nodes was given. Then, for nodes whose total trust value was higher than the trust threshold, a trust-based adaptive detection algorithm was used to detect changes in their total trust values in real time. Finally, the relay nodes determined whether to authenticate a received message according to the result returned by the adaptive detection algorithm. The simulation results and analysis show that the proposed mechanism reduces the energy overhead of relay nodes and better defends against On-off attacks in the IoT.
    Frequent location privacy-preserving algorithm based on geosocial network
    NING Xueli, LUO Yonglong, XING Kai, ZHENG Xiaoyao
    2018, 38(3):  688-692.  DOI: 10.11772/j.issn.1001-9081.2017071686
    Abstract   PDF (762KB)
    Focusing on attacks that use frequent locations as background knowledge to disclose user identities in geosocial networks, a privacy-preserving algorithm based on frequent locations was proposed. Firstly, a frequent location set was generated for each user according to the user's check-in frequency. Secondly, according to the background knowledge, hyperedges were composed from subsets of frequent locations, and hyperedges that did not meet the anonymity parameter k were re-merged, with the minimum user bias and location bias chosen as the re-merging metrics. Finally, in comparison experiments with the (k,m)-anonymity algorithm, when the background knowledge was 3, the average user bias and location bias were decreased by about 19.1% and 8.3% respectively on the Gowalla dataset, and by about 22.2% and 10.7% respectively on the Brightkite dataset. Therefore, the proposed algorithm can effectively preserve frequent location privacy while reducing user and location bias.
    Defense strategy against browser cache pollution
    DAI Chengrui, CHEN Wei
    2018, 38(3):  693-698.  DOI: 10.11772/j.issn.1001-9081.2017082139
    Abstract   PDF (1095KB)
    Browser caches are mainly used to speed up users' requests for network resources; however, an attacker can mount a cache pollution attack via a man-in-the-middle attack. General defense strategies against browser cache pollution cannot cover the different types of network attack, so a controllable browser cache pollution defense strategy was proposed. The proposed strategy is deployed between the client and the server and comprises random number judgment, request-response delay judgment, resource representation judgment, hash verification and a crowdsourcing strategy, by which browser cache pollution is effectively defended against. 200 JavaScript resource files were selected as experimental samples and 100 of them were polluted via a man-in-the-middle attack. By accessing these resources with the defense scripts enabled, the detection rate on contaminated samples and the false positive rate on normal samples were analyzed. The experimental results show that under loose conditions, the hit rate on contaminated samples reaches 87% with a false positive rate of 0% on normal samples, while under strict conditions the hit rate on contaminated samples reaches 95% with a false positive rate of 4%. Meanwhile, the request-response time differences over all experimental samples are 5277ms and 6013ms respectively, both less than the time needed to reload all the resources. The proposed strategy defends against most polluted resources and shortens user access time. It simplifies the process of cache pollution prevention, and its parameters allow a trade-off between security and usability to satisfy different users.
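The hash-verification component of such a strategy can be sketched in a few lines: a cached resource is trusted only if its digest matches a reference digest obtained over a channel assumed to be trustworthy (the delivery of that reference digest is outside this sketch):

```python
# Cache-integrity sketch: compare the SHA-256 digest of a cached
# resource against a trusted reference digest; a mismatch indicates the
# cached copy may have been polluted.
import hashlib

def verify_cached(resource_bytes, expected_sha256_hex):
    digest = hashlib.sha256(resource_bytes).hexdigest()
    return digest == expected_sha256_hex
```

This mirrors the idea behind subresource-integrity checks: any byte-level tampering by a man-in-the-middle changes the digest and is detected.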
    Task scheduling algorithm based on weight in Storm
    LU Liang, YU Jiong, BIAN Chen, YING Changtian, SHI Kangli, PU Yonglin
    2018, 38(3):  699-706.  DOI: 10.11772/j.issn.1001-9081.2017082125
    Abstract   PDF (1385KB)
    Apache Storm, a typical platform for big data stream computing, uses a round-robin scheduling algorithm as its default scheduler, which ignores the fact that differences in computational and communication cost are ubiquitous among the tasks and data streams of a topology; optimization is therefore needed in terms of load balance and communication cost. To solve this problem, a Task Scheduling Algorithm based on Weight in Storm (TSAW-Storm) was proposed. In the algorithm, CPU occupation is taken as the weight of a task in a specific topology, and the tuple rate between a pair of tasks is taken as the weight of a data stream. Tasks are then assigned to the most suitable work node gradually by maximizing the gained weight of data streams, transforming as many inter-node data streams into intra-node ones as possible while ensuring load balance, in order to reduce network overhead. Experimental results show that, in a WordCount benchmark with 8 work nodes, TSAW-Storm reduces latency and inter-node tuple rate by about 30.0% and 32.9% respectively, and the standard deviation of the CPU load of work nodes is only 25.8% of that of the Storm default scheduler. Additionally, the online scheduler was deployed in a contrast experiment: TSAW-Storm reduces latency, inter-node tuple rate and the standard deviation of CPU load by about 7.76%, 11.8% and 5.93% respectively, with only a small execution overhead compared to the online scheduler. Therefore, the proposed algorithm can reduce communication cost and improve load balance effectively, contributing to the efficient operation of Apache Storm.
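The weight-based assignment idea can be sketched as a greedy placement that maximizes the tuple rate turned intra-node under a per-node CPU-weight capacity. The inputs and the capacity model are illustrative, and the sketch assumes total capacity is sufficient for all tasks:

```python
# Greedy weight-based placement sketch: each task goes to the node that
# maximizes the tuple rate of streams made intra-node, subject to a
# CPU-weight capacity per node (a stand-in for load balancing).
def schedule(tasks, streams, nodes, capacity):
    """tasks: {task: cpu_weight}; streams: {(a, b): tuple_rate}."""
    placement, load = {}, {n: 0.0 for n in nodes}
    # place heavier tasks first so large consumers anchor their neighbours
    for task in sorted(tasks, key=tasks.get, reverse=True):
        best, best_gain = None, -1.0
        for n in nodes:
            if load[n] + tasks[task] > capacity:
                continue                      # node full: keep balance
            gain = sum(rate for (a, b), rate in streams.items()
                       if (a == task and placement.get(b) == n)
                       or (b == task and placement.get(a) == n))
            if gain > best_gain:
                best, best_gain = n, gain
        placement[task] = best
        load[best] += tasks[task]
    return placement
```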
    Cloud task scheduling strategy based on clustering and improved symbiotic organisms search algorithm
    LI Kunlun, GUAN Liwei, GUO Changlong
    2018, 38(3):  707-714.  DOI: 10.11772/j.issn.1001-9081.2017092311
    Abstract   PDF (1217KB)
    To solve the problems of some Quality of Service (QoS)-based scheduling algorithms in cloud computing environments, such as slow optimization and imbalance between scheduling cost and user satisfaction, a cloud task scheduling strategy based on clustering and an improved Symbiotic Organisms Search (SOS) algorithm was proposed. Firstly, tasks and resources were clustered by fuzzy clustering and the resources were reordered and placed; tasks were then guided and assigned according to the similarity of attributes, reducing the selection range of resources. Secondly, the SOS algorithm was improved with cross and rotation learning mechanisms to improve its search ability. Finally, a driving model was constructed by weighted summation to balance the relationship between scheduling cost and system performance. Compared with the improved global genetic algorithm, the hybrid particle swarm optimization and genetic algorithm, and the discrete SOS algorithm, the proposed algorithm can effectively reduce the number of evolution generations, reduce the scheduling cost and improve user satisfaction. Experimental results show that the proposed algorithm is a feasible and effective task scheduling algorithm.
    Firefly algorithm based on uniform local search and variable step size
    WANG Xiaojing, PENG Hu, DENG Changshou, HUANG Haiyan, ZHANG Yan, TAN Xujie
    2018, 38(3):  715-721.  DOI: 10.11772/j.issn.1001-9081.2017082039
    Abstract   PDF (1137KB)
    Since the Firefly Algorithm (FA) converges slowly and its solution accuracy is low, an improved firefly algorithm with Uniform local search and Variable step size (UVFA) was proposed. Firstly, uniform local search, built on uniform design theory, was used to accelerate convergence and enhance exploitation ability. Secondly, the search step size was dynamically tuned with a variable step size strategy to balance exploration and exploitation. Finally, the uniform local search and the variable step size were fused. Simulation results on twelve benchmark functions show that the mean objective value of UVFA is significantly better than those of FA, WSSFA (Wise Step Strategy for Firefly Algorithm), VSSFA (Variable Step Size Firefly Algorithm) and the Uniform local search Firefly Algorithm (UFA), and the time complexity is obviously reduced. UVFA is good at solving both low- and high-dimensional problems and has good robustness.
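The variable-step-size idea can be sketched for a one-dimensional firefly move: the random-step factor alpha decays with the iteration count, shifting the search from exploration toward exploitation. The linear decay schedule below is an illustrative choice, not UVFA's exact formula:

```python
# One-dimensional firefly move with a time-decaying random step.
import math, random

def firefly_step(x_i, x_j, t, t_max, beta0=1.0, gamma=1.0, alpha0=0.5):
    """Move the firefly at x_i toward the brighter firefly at x_j."""
    alpha = alpha0 * (1.0 - t / t_max)      # step shrinks over iterations
    r2 = (x_i - x_j) ** 2
    beta = beta0 * math.exp(-gamma * r2)    # attractiveness decays with distance
    return x_i + beta * (x_j - x_i) + alpha * (random.random() - 0.5)
```

At t = t_max the random term vanishes and the move becomes a pure attraction step, which is the exploitation end of the schedule.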
    Protein function prediction method based on PPI network and machine learning
    TANG Jiaqi, WU Jingli
    2018, 38(3):  722-727.  DOI: 10.11772/j.issn.1001-9081.2017082042
    Abstract   PDF (948KB)
    Aiming at the problem that current protein function prediction methods based on Protein-Protein Interaction (PPI) networks have low precision and are susceptible to data noise, a new machine learning protein function prediction method named HPMM (HC, PCA and MLP based Method) was proposed, which combines Hierarchical Clustering (HC), Principal Component Analysis (PCA) and Multi-Layer Perceptron (MLP). HPMM considers both macro and micro perspectives: it merges information about protein families, domains and important sites into the vertex attributes of PPI networks to alleviate the effect of network data noise. Firstly, the features of function modules and attribute principal components were extracted using HC and PCA. Secondly, a mapping from multiple features to multiple functions, used to predict protein functions, was constructed by training the MLP model. Three homo sapiens PPI networks, annotated with Molecular Functions (MF), Biological Processes (BP) and Cellular Components (CC) respectively, were adopted in the experiments. Comparisons were performed among the HPMM algorithm, the Cosine Iterative Algorithm (CIA) and the Diffusing GO Terms in the Directed PPI Network (GoDIN) algorithm. The experimental results indicate that HPMM achieves higher precision and F-measure than CIA and GoDIN, which are purely PPI-network-based methods.
    Access protocol for terahertz wireless personal area networks based on high efficient handover of piconet coordinator
    REN Zhi, TIAN Jieli, YOU Lei, LYU Yuhui
    2018, 38(3):  728-733.  DOI: 10.11772/j.issn.1001-9081.2017082062
    Abstract   PDF (956KB)
    To resolve the problems in current access protocols for the Terahertz Wireless Personal Area Network (THz-WPAN) during the PicoNet Coordinator (PNC) handover process, such as an imperfect handover procedure, redundant information transmission and obvious waste of timeslot resources, an access protocol with efficient PNC handover for terahertz wireless personal area networks, called PCHEH-AP (Piconet Coordinator High Efficient Handover Access Protocol), was proposed. By deleting the redundant information of specified nodes, confirming first and handing over next, and adaptively allocating timeslots for PNC handover, PCHEH-AP regulates the PNC handover process, improves channel utilization, reduces data delay, and makes PNC handover more reasonable and efficient. The simulation results show that, compared with the protocol defined by IEEE 802.15.3c and the HTLL-MAC protocol, the data delay of PCHEH-AP is reduced by at least 8.98% and the throughput of the MAC layer is increased by 3.90%.
    Delay tolerant network clustering routing algorithm based on vehicular Ad Hoc network communication terminals and motion information
    HE He, LI Linlin, LU Yunfei
    2018, 38(3):  734-740.  DOI: 10.11772/j.issn.1001-9081.2017071647
    Abstract   PDF (1142KB)
    Since complex battlefield environments lack stable end-to-end communication paths between user terminals, a Delay Tolerant Network (DTN) clustering routing algorithm based on Vehicular Ad Hoc NETwork (VANET) communication terminals and motion information, named CVCTM (Cluster based on VANET Communication Terminals and Motion information), was proposed. Firstly, a clustering algorithm based on cluster head election was studied. Secondly, a routing algorithm for intra-cluster source vehicles was studied based on hop count, relay mode and geographical location information. Then, a routing algorithm for inter-cluster source vehicles was realized by introducing waiting time, a threshold on retransmission times and downstream cluster heads. Finally, the optimal way of communicating with the upper headquarters was chosen by the VANET communication terminals. The ONE simulation results show that, compared with AODV (Ad Hoc On-demand Distance Vector routing), the message delivery ratio of CVCTM increases nearly 5%, its network overhead decreases nearly 10%, and the recombination times of the cluster structure decrease nearly 25%; compared with the CBRP (Cluster Based Routing Protocol) algorithm and the DSR (Dynamic Source Routing) protocol, the message delivery ratio of CVCTM increases nearly 10%, its network overhead decreases nearly 25%, and the recombination times of the cluster structure decrease nearly 40%. CVCTM can effectively reduce network overhead and the recombination times of the cluster structure, and increase the message delivery ratio.
    Time and frequency synchronization for OFDM/OQAM in ground air channel
    TANG Yaxin, LI Yanlong, YANG Chao, WANG Bo
    2018, 38(3):  741-745.  DOI: 10.11772/j.issn.1001-9081.2017071885
    Abstract ( )   PDF (779KB) ( )  
    References | Related Articles | Metrics
    Since the Orthogonal Frequency Division Multiplexing/Offset Quadrature Amplitude Modulation (OFDM/OQAM) system has no cyclic prefix, it is sensitive to timing error and places high demands on frequency offset estimation in the fast time-varying ground-air channel with large Doppler frequency offset. Therefore, an AutoCorrelation Estimation (ACE) time-frequency synchronization algorithm for the OFDM/OQAM system in the ground-air channel was proposed. In the algorithm, symbol timing was used to achieve fast acquisition and timing with fewer auxiliary sequences. Frequency offset estimation was carried out by optimizing the autocorrelation sequence and performing two autocorrelation operations, and the final frequency offset was obtained by weighting and averaging the frequency offsets estimated by the two operations. The simulation results show that the symbol timing correlation peak of ACE is about 3 times as high as those of the Modified Linear Square (MLS) and Training Sequence 2 (TR2) algorithms; there is a 10 dB SNR (Signal-to-Noise Ratio) gain at a BER (Bit Error Rate) of 10^-2 in the en-route state of the ground-air channel, and a 3 dB SNR gain at a BER of 10^-3 in the arrival state. The results indicate that the ACE algorithm further enhances time-frequency synchronization accuracy and bit error performance.
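    The core idea of such an estimator can be illustrated with a minimal, single-stage sketch (the paper's two-stage weighted scheme and its optimized sequence are not specified in the abstract, so all names and parameters here are assumptions): the frequency offset is recovered from the phase of the autocorrelation between two repetitions of a training sequence.

```python
import numpy as np

def estimate_cfo(r, N, fs):
    """Estimate the carrier frequency offset from a received signal r
    containing a training sequence repeated with period N samples.
    Classic autocorrelation estimator: the phase of
    sum r[n+N] * conj(r[n]) equals 2*pi*f_off*N/fs."""
    acc = np.sum(r[N:2 * N] * np.conj(r[:N]))
    return np.angle(acc) * fs / (2 * np.pi * N)

# Illustrative use: a repeated QPSK block with a 3 kHz offset applied.
rng = np.random.default_rng(0)
base = np.exp(1j * 2 * np.pi * rng.integers(0, 4, 64) / 4)
tx = np.tile(base, 2)
fs, f0 = 1e6, 3000.0
rx = tx * np.exp(1j * 2 * np.pi * f0 * np.arange(tx.size) / fs)
est = estimate_cfo(rx, 64, fs)
```

    Note that the unambiguous estimation range is limited to |f_off| < fs/(2N), which is one reason two correlation stages with different lags can be combined as the abstract describes.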
    Multi-scale network replication technology for fusion of virtualization and digital simulation
    WU Wenyan, JIANG Xin, WANG Xiaofeng, LIU Yuan
    2018, 38(3):  746-752.  DOI: 10.11772/j.issn.1001-9081.2017081956
    Abstract ( )   PDF (1193KB) ( )  
    References | Related Articles | Metrics
    Network replication technology has become the cornerstone of evaluation platforms for network security experiments and of network emulation systems. To meet the fidelity and scalability requirements of network replication, a multi-scale network replication technology based on a cloud platform, fusing lightweight virtualization, full virtualization and digital simulation, was proposed. The architecture for the seamless fusion of these three scales was introduced first, and then the network construction technology based on this architecture was studied. The emulation experimental results show that the emulation network built with the construction technology has the characteristics of flexibility, transparency and concurrency, and that the construction technology can emulate networks with high extensibility. Finally, communication tests for a variety of protocols and simple network security experiments were conducted on the large-scale emulation network to verify its availability. The extensive experimental results show that the multi-scale network replication technology fusing virtualization and digital simulation can serve as powerful support for creating large-scale emulation networks.
    Participant reputation evaluation scheme in crowd sensing
    WANG Taochun, LIU Tingting, LIU Shen, HE Guodong
    2018, 38(3):  753-757.  DOI: 10.11772/j.issn.1001-9081.2017082049
    Abstract ( )   PDF (804KB) ( )  
    References | Related Articles | Metrics
    Since a Mobile Crowd Sensing (MCS) network has a large group of participants and the acquisition and submission of tasks are almost unrestricted, data redundancy is high and data quality cannot be guaranteed. To solve this problem, a Participant Reputation Evaluation Scheme (PRES) was proposed to evaluate data quality and participant reputation. A participant's reputation was evaluated from five aspects: response time, distance, historical reputation, data correlation and the quality of submitted data. The five parameters were quantified, and a regression equation was established by using a logistic regression model to obtain the participant's reputation after data submission. The reputation of a participant lies in the interval [0.0, 1.0] and concentrates in [0.0, 0.2] and [0.8, 1.0], making it easier for the crowd sensing network to choose appropriate participants; the experimental results show that the accuracy of the reputation evaluated by PRES was more than 90%.
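    As a rough sketch of the final step, a logistic regression model maps the five quantified features to a reputation score in (0, 1). The weights and bias below are illustrative assumptions, not the fitted coefficients of PRES:

```python
import math

# Hypothetical weights for the five quantified features (response
# time, distance, historical reputation, data correlation, quality
# of submitted data); signs reflect that longer response time and
# larger distance should lower reputation. Values are illustrative.
WEIGHTS = [-0.8, -0.5, 1.2, 0.9, 1.5]
BIAS = 0.0

def reputation(features):
    """Map five normalized features in [0, 1] to a reputation score
    in (0, 1) with a logistic (sigmoid) regression model."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))
```

    With fitted coefficients, the sigmoid naturally pushes scores toward the two ends of [0, 1], matching the concentration in [0.0, 0.2] and [0.8, 1.0] reported in the abstract.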
    Energy consumption of WSN with multi-mobile sinks considering QoS
    WANG Manman, SHU Yong'an
    2018, 38(3):  758-762.  DOI: 10.11772/j.issn.1001-9081.2017082130
    Abstract ( )   PDF (811KB) ( )  
    References | Related Articles | Metrics
    Concerning the excessively high energy consumption, long transmission delay and poor data integrity of nodes in Wireless Sensor Networks (WSN), a routing algorithm named MSTSDI (Multi-Sink Time Sensitive Data Integrity), based on multiple mobile sinks and considering Quality of Service (QoS), was proposed. Firstly, the density of nodes was determined by the strength of the signal received from the base station, and the WSN was divided into autonomous areas according to K-means theory. Secondly, a mobile sink was assigned to each autonomous area, and its trajectory was determined by using Support Vector Regression (SVR). Finally, depth and queue potential fields were introduced to transmit high-sensitivity and high-integrity data packets through the Improved-IDDR (Integrity and Delay Differentiated Routing) algorithm. Theoretical analysis and simulation results showed that compared with the GLRM (Grid-based Load-balanced Routing Method) algorithm and the LEACH (Low Energy Adaptive Clustering Hierarchy) protocol, the energy consumption of the Improved-IDDR routing strategy was decreased by 21.2% and 23.7% respectively, its end-to-end delay was decreased by 15.23% and 17.93% respectively, and its data integrity was better. Experimental results showed that MSTSDI can effectively improve system performance in real networks.
    Fast virtual grid matching localization algorithm based on Pearson correlation coefficient
    HAO Dehua, GUAN Weiguo, ZOU Linjie, JIAO Meng
    2018, 38(3):  763-768.  DOI: 10.11772/j.issn.1001-9081.2017071760
    Abstract ( )   PDF (962KB) ( )  
    References | Related Articles | Metrics
    Focusing on the issue that location fingerprint matching localization algorithms require a large workload of offline database collection in indoor environments, a fast virtual grid matching algorithm based on the Pearson correlation coefficient was proposed. Firstly, the Received Signal Strength Indicator (RSSI) was preprocessed with a Gaussian filter to obtain the received signal strength vector. Then, the Bounding-Box method was used to determine the initial virtual grid region; the grid region was rapidly and iteratively subdivided, the log-distance vectors from the grid center points to the beacon nodes were calculated, and the Pearson correlation coefficients between the received signal strength vector and the log-distance vectors were computed. Finally, the k nearest neighbor coordinates whose correlation coefficients were closest to -1 were selected, and the optimal position estimate of the node to be located was determined by correlation-coefficient-weighted estimation. The simulation results show that the localization error of the proposed algorithm is less than 2 m with a probability of 94.2% under the condition of a 1 m virtual grid and an RSSI noise standard deviation of 3 dBm. The positioning accuracy is better than that of the location fingerprint matching algorithm, and no RSSI fingerprint database is needed, which greatly reduces the localization workload.
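    The matching step can be sketched as follows. Since RSSI falls off linearly with log-distance under a log-distance path-loss model, a grid point near the true position yields a Pearson coefficient close to -1 between the RSSI vector and its log-distance vector. This is a simplified illustration, not the paper's exact procedure; the grid layout and beacon handling are assumptions:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def locate(rssi, grid, k=3):
    """grid: list of (x, y, dist_logs), dist_logs being log-distances
    from the grid point to each beacon. Candidates with coefficients
    closest to -1 are kept; the estimate is their |rho|-weighted mean."""
    scored = sorted(((pearson(rssi, g[2]), g) for g in grid),
                    key=lambda t: t[0])  # most negative first
    best = scored[:k]
    wsum = sum(abs(r) for r, _ in best)
    x = sum(abs(r) * g[0] for r, g in best) / wsum
    y = sum(abs(r) * g[1] for r, g in best) / wsum
    return x, y

# Illustrative use with four assumed beacons and a noise-free RSSI model.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (7.0, 3.0)]
logs = lambda p: [math.log10(math.dist(p, b)) for b in beacons]
rssi = [-40 - 20 * l for l in logs((2.0, 2.0))]   # true node at (2, 2)
grid = [(x, y, logs((x, y))) for x in (1, 2, 3, 8) for y in (1, 2, 3, 8)]
x, y = locate(rssi, grid, k=1)
```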
    Deterministic layered construction algorithm based on network coding
    XU Guangxian, ZHAO Yue, LAI Junning
    2018, 38(3):  769-775.  DOI: 10.11772/j.issn.1001-9081.2017081982
    Abstract ( )   PDF (1214KB) ( )  
    References | Related Articles | Metrics

    To solve the problem that construction algorithms for multi-source multicast network coding require a long convergence time, a deterministic layered construction algorithm based on network coding was proposed. On the basis of existing studies, a virtual source was used for a virtual trial multicast. Firstly, the nodes whose local encoding matrices were not of full rank were determined layer by layer using a decision tree algorithm. Then, the local encoding coefficients of the corresponding upstream transform nodes were reconstructed to generate new encoding vectors. Finally, each new encoding vector was transmitted to its corresponding downstream node so that the local encoding matrix of the downstream node became full rank, thus obtaining a feasible coding scheme that realizes network coding. Moreover, when redundant data was found on some links, a branch pruning method was applied to improve bandwidth utilization. Compared with Sink Nodes Feedback Deterministic Network Coding (SNFDNC), the proposed algorithm needs only one virtual trial multicast; the simulation results show that its convergence time is shorter in medium-scale networks and that the average transmission rate of multicast communication is further improved.
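    The layer-by-layer test at the heart of the scheme is a rank check on each node's local encoding matrix. A minimal sketch, using real-valued arithmetic via NumPy as a stand-in for the finite-field arithmetic that practical network coding operates over:

```python
import numpy as np

def is_full_rank(local_matrix):
    """Return True if a node's local encoding matrix has full rank,
    i.e. the encoding vectors reaching the node are linearly
    independent and the node's outputs can carry innovative data."""
    m = np.asarray(local_matrix, dtype=float)
    return np.linalg.matrix_rank(m) == min(m.shape)

ok = is_full_rank([[1, 0], [0, 1]])        # independent vectors
bad = is_full_rank([[1, 2], [2, 4]])       # second row = 2 * first row
```

    In the algorithm, a node failing this test triggers the reconstruction of the upstream encoding coefficients described in the abstract.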

    Game theory based SDN master controller reselection mechanism
    FAN Zifu, ZHOU Kaiheng, YAO Jie
    2018, 38(3):  776-779.  DOI: 10.11772/j.issn.1001-9081.2017071688
    Abstract ( )   PDF (760KB) ( )  
    References | Related Articles | Metrics
    To address the overload problem of a single controller in Software Defined Networking (SDN), a game theory based master controller reselection mechanism, GAME-System Model (GAME-SM), was proposed. Firstly, the resource-constrained switch migration problem was translated into a revenue maximization problem of a zero-sum game, and the GAME-SM mechanism was proposed. Secondly, upper and lower thresholds of controller load were set to determine the trigger conditions of the game, and the controller whose load reached the upper limit invited neighboring controllers to participate in the game as players. Finally, the game strategy was designed based on the zero-sum game to maximize the revenue of each participant; the master controller was reselected by repeated games as the utility changed, and load balance of the whole system was eventually realized. The simulation results show that the proposed mechanism can significantly improve controller load balance, and that the controller response time is reduced by 50% compared with the Distributed-CoNTroLler (D-CNTL) scheme.
    Asynchronous hexadecimal digital secure communication system based on shift keying of coupled hyperchaotic systems
    2018, 38(3):  780-785.  DOI: 10.11772/j.issn.1001-9081.2017082018
    Abstract ( )   PDF (976KB) ( )  
    References | Related Articles | Metrics
    To improve the communication efficiency of shift keying, an asynchronous secure communication scheme based on shift keying and coupled hyperchaotic systems was proposed. At the sender, a hexadecimal digital signal was embedded into gained state variables, the original signal was masked alternately through a converter control module, and the result was sent out after adding Gaussian noise. At the receiver, the signal could be extracted successfully by adjusting the threshold dynamically. Numerical simulation verifies that the sender and the receiver can communicate securely over a noisy channel; by adaptively adjusting the gain of the chaotic signals, the Bit Error Ratio (BER) decreases smoothly as the Signal-to-Noise Ratio (SNR) increases, which ensures the stability of the proposed communication system.
    Design and implementation of carrier aggregation in LTE-A air-interface analyzer
    LI Ruying, ZHANG Zhizhong, DENG Xiangtian
    2018, 38(3):  786-790.  DOI: 10.11772/j.issn.1001-9081.2017081988
    Abstract ( )   PDF (765KB) ( )  
    References | Related Articles | Metrics
    Focusing on the difficulties in communication network testing and optimization brought by key technologies such as carrier aggregation, and on the shortage of Long Term Evolution (LTE) air-interface analyzers in the domestic market, a design scheme for a Long Term Evolution-Advanced (LTE-A) air-interface analyzer was proposed, which supports the 3GPP R10/11 protocol standards and LTE-A key technologies such as carrier aggregation. Firstly, the physical and logical architecture of the LTE-A air-interface analyzer and the relationship between the two were introduced, and the function of each module in the physical and logical architecture was illustrated; then an implementation scheme for carrier aggregation in the instrument was designed. At the same time, to meet the demands of new technologies in communication networks as well as the test requirements of users and base station equipment, a scheme supporting multiple users in the case of multiple carriers and multiple cells was proposed for the analyzer. The application of the scheme can accelerate the commercialization of carrier aggregation, speed up network deployment and shorten the network construction cycle, and it will play an indispensable role in communication network operation and maintenance.
    Software modularization optimization algorithm with eliminating isolated clusters
    MU Lifeng, WANG Fangyuan
    2018, 38(3):  791-798.  DOI: 10.11772/j.issn.1001-9081.2017081940
    Abstract ( )   PDF (1243KB) ( )  
    References | Related Articles | Metrics
    Considering the isolated cluster problem caused by traditional software modularization methods, a new metric named Improved Modularization Quality (IMQ) was proposed and used as the fitness function of an evolutionary algorithm to eliminate isolated clusters effectively. A mathematical programming model with the goal of maximizing IMQ was developed to represent the software modularization problem. In addition, an Improved Genetic Algorithm (IGA) with a similarity-based competition and selection mechanism was designed to solve this model. Firstly, a heuristic strategy based on edge contraction was used to generate high-quality solutions. Then the solutions were implanted as seeds into the initial population. Finally, the proposed IGA was employed to further improve solution quality. Comparative experimental results show that IMQ can effectively reduce the number of isolated clusters, and that IGA has stronger robustness and a better ability to find high-quality solutions than the Improved Hill Climbing algorithm (IHC) and the GA based on Group Number Encoding (GNE).
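    For orientation, the classical baseline that metrics like IMQ build on is TurboMQ-style modularization quality: each cluster's cohesion-versus-coupling ratio, summed over clusters. The abstract does not give IMQ's exact form, so only this baseline is sketched, with the isolated-cluster penalty left out:

```python
def turbo_mq(edges, assign):
    """Baseline modularization quality: sum over clusters of
    2*intra / (2*intra + inter), where intra counts edges inside a
    cluster and inter counts edges crossing its boundary.
    edges: iterable of (u, v) pairs; assign: node -> cluster id."""
    intra, inter = {}, {}
    for u, v in edges:
        cu, cv = assign[u], assign[v]
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
        else:
            inter[cu] = inter.get(cu, 0) + 1
            inter[cv] = inter.get(cv, 0) + 1
    mq = 0.0
    for c in set(assign.values()):
        i, e = intra.get(c, 0), inter.get(c, 0)
        if i or e:
            mq += 2.0 * i / (2.0 * i + e)
    return mq

edges = [(1, 2), (2, 3), (3, 1), (4, 5)]
good = turbo_mq(edges, {1: 0, 2: 0, 3: 0, 4: 1, 5: 1})  # clean split
poor = turbo_mq(edges, {1: 0, 2: 0, 3: 1, 4: 1, 5: 1})  # cuts a triangle
```

    Note that a cluster with no edges at all contributes nothing here, which is exactly the kind of degenerate "isolated cluster" that the paper's IMQ is designed to penalize explicitly.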
    Modeling and verification approach for temporal properties of self-adaptive software dynamic processes
    HAN Deshuai, XING Jianchun, YANG Qiliang, LI Juelong
    2018, 38(3):  799-805.  DOI: 10.11772/j.issn.1001-9081.2017081992
    Abstract ( )   PDF (1152KB) ( )  
    References | Related Articles | Metrics
    Current modeling and verification approaches for self-adaptive software rarely consider temporal properties. However, in time-critical application domains, the correct operation of self-adaptive software depends not only on the correctness of the self-adaptive logic but also on the temporal properties of its dynamic processes. To this end, temporal properties of self-adaptive software were explicitly defined, such as monitoring period, delay trigger time, deadline of the self-adaptive process, self-adaptive adjusting time and self-adaptive steady time. Then, Timed Automata Network (TAN) based modeling templates for the temporal properties of self-adaptive software dynamic processes were constructed. Finally, the temporal properties were formally described with Timed Computation Tree Logic (TCTL), and then analyzed and verified. The proposed approach was validated with a self-adaptive example. The results show that it can explicitly depict the temporal properties of self-adaptive software and reduce the complexity of its formal modeling.
    Software birthmark extraction algorithm based on multiple features
    WANG Shuyan, ZHAO Pengfei, SUN Jiaze
    2018, 38(3):  806-811.  DOI: 10.11772/j.issn.1001-9081.2017082068
    Abstract ( )   PDF (867KB) ( )  
    References | Related Articles | Metrics
    Concerning the low accuracy of existing software birthmark extraction algorithms in detecting code theft, a new static software birthmark extraction algorithm was proposed, in which the generated birthmark covers two kinds of software features. The source program and the suspicious program were preprocessed to obtain program metadata, which was used to generate an Application Programming Interface (API) call set and an instruction sequence as two features. These two features were combined to generate the software birthmark. Finally, the similarity between the source program and the suspicious program was calculated to determine whether code theft exists between the two programs. The experimental results verify that the birthmark combining the API call set and the instruction sequence has credibility and resilience, and has stronger resilience than the k-gram birthmark.
    Optimization of source code search based on multi-feature weight assignment
    LI Zhen, NIU Jun, WANG Kui, XIN Yuanyuan
    2018, 38(3):  812-817.  DOI: 10.11772/j.issn.1001-9081.2017082043
    Abstract ( )   PDF (968KB) ( )  
    References | Related Articles | Metrics
    Accurately searching open source code is a precondition for code reuse. Current keyword-based search methods only concern matching function signatures. Considering that source code comments semantically describe the function of a method, a keyword-based search method that takes code comments into account was proposed. Code features, such as function signatures and different types of comments, were identified from the abstract syntax tree generated from the source code; the code features and query statements were transformed into vectors respectively, and a scoring mechanism for the results, based on the cosine similarity between the vectors with multi-feature weight assignment, was created. According to the scores, an ordered list of relevant functions was obtained, reflecting the association between the code features of the functions and a query. The experimental results demonstrate that search accuracy can be improved by using multiple code features with different weights.
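    The scoring mechanism can be sketched as a weighted sum of per-feature cosine similarities between the query vector and each feature vector of a function. The feature set and the weights below are assumptions for illustration, not the paper's fitted assignment:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    da = math.sqrt(sum(x * x for x in a))
    db = math.sqrt(sum(y * y for y in b))
    return num / (da * db) if da and db else 0.0

def score(query_vec, feature_vecs, weights):
    """Multi-feature weighted score: one vector per code feature
    (e.g. function signature, leading comment, inline comments),
    combined with per-feature weights assumed to sum to 1."""
    return sum(w * cosine(query_vec, f)
               for w, f in zip(weights, feature_vecs))

# Toy 2-dimensional term vectors: the signature matches the query,
# the comment vector does not.
s = score([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [0.7, 0.3])
```

    Ranking candidate functions by this score, descending, yields the ordered result list the abstract describes.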
    Android malware detection model based on Bagging-SVM
    XIE Lixia, LI Shuang
    2018, 38(3):  818-823.  DOI: 10.11772/j.issn.1001-9081.2017082143
    Abstract ( )   PDF (1076KB) ( )  
    References | Related Articles | Metrics
    Aiming at the low detection rate caused by data imbalance in Android malware detection, an Android malware detection model based on the Bagging-SVM (Support Vector Machine) ensemble algorithm was proposed. Firstly, permission information, intent information and component information were extracted as features from the file AndroidManifest.xml. Secondly, an IG-ReliefF hybrid selection algorithm was proposed to reduce the dimension of the data sets, and multiple balanced data sets were formed by bootstrap sampling. Finally, a Bagging-based SVM ensemble classifier was trained on the multiple balanced data sets to detect Android malware. In the classification experiments, the detection rates of the Bagging-SVM and random forest algorithms were both 99.4% when the numbers of benign and malicious samples were balanced. When the ratio of benign to malicious samples was 4:1, the detection rate of the Bagging-SVM algorithm was 6.6% higher than those of the random forest and AdaBoost algorithms, without reducing the detection accuracy. The experimental results show that the proposed model maintains a high detection rate and classification accuracy and can detect the vast majority of malware even in the case of data imbalance.
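    The ensemble idea (balanced bootstrap samples plus majority voting) can be sketched as follows. To keep the sketch dependency-free, a toy nearest-centroid learner stands in for the SVM; everything else (sample counts, number of estimators) is an illustrative assumption:

```python
import random

def nearest_centroid_fit(X, y):
    """Toy base learner standing in for the SVM: classify a point
    by the nearest class centroid."""
    cents = {}
    for label in set(y):
        pts = [x for x, t in zip(X, y) if t == label]
        cents[label] = [sum(c) / len(pts) for c in zip(*pts)]
    def predict(x):
        return min(cents, key=lambda l: sum((a - b) ** 2
                                            for a, b in zip(x, cents[l])))
    return predict

def bagging_fit(X, y, n_estimators=5, seed=0):
    """Bagging over balanced bootstrap samples: each round draws
    equally many benign (0) and malicious (1) samples with
    replacement, trains a base learner, and the ensemble votes."""
    rng = random.Random(seed)
    pos = [(x, t) for x, t in zip(X, y) if t == 1]
    neg = [(x, t) for x, t in zip(X, y) if t == 0]
    m = min(len(pos), len(neg))
    models = []
    for _ in range(n_estimators):
        sample = ([rng.choice(pos) for _ in range(m)] +
                  [rng.choice(neg) for _ in range(m)])
        Xs, ys = zip(*sample)
        models.append(nearest_centroid_fit(Xs, ys))
    def predict(x):
        votes = sum(mdl(x) for mdl in models)   # majority vote
        return 1 if votes * 2 > len(models) else 0
    return predict

# Imbalanced toy data: 3 benign (label 0) vs 5 malicious (label 1).
X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5], [6, 6], [4, 5]]
y = [0, 0, 0, 1, 1, 1, 1, 1]
clf = bagging_fit(X, y)
```

    The balancing step is the point: each base learner sees a 1:1 class ratio even when the pool is skewed, which is how the model counters the imbalance the abstract targets.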
    Impact of regression algorithms on performance of defect number prediction model
    FU Zhongwang, XIAO Rong, YU Xiao, GU Yi
    2018, 38(3):  824-828.  DOI: 10.11772/j.issn.1001-9081.2017081935
    Abstract ( )   PDF (932KB) ( )  
    References | Related Articles | Metrics
    Focusing on the issue that existing studies neither consider the imbalanced data distribution in defect datasets nor employ proper performance measures to evaluate regression models for predicting the number of defects, the impact of different regression algorithms on defect number prediction models was explored by using Fault-Percentile-Average (FPA) as the performance measure. Experiments were conducted on six datasets from the PROMISE repository to analyze the impact of ten regression algorithms on defect number prediction models and the differences among them. The results show that the prediction results of models built by different regression algorithms vary, and that the gradient boosting regression algorithm and the Bayesian ridge regression algorithm achieve better performance overall.
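    FPA itself is simple to compute. A sketch consistent with its usual definition (modules ranked by predicted defect count, averaging the fraction of all actual defects captured by every top-m prefix); treat it as an illustrative reconstruction rather than the paper's exact code:

```python
def fpa(predicted, actual):
    """Fault-Percentile-Average: rank modules by predicted defect
    count (descending); for every prefix length m, take the fraction
    of all actual defects found in the top-m modules; average those
    fractions. 1.0 rewards ranking defect-dense modules first."""
    order = sorted(range(len(predicted)),
                   key=lambda i: predicted[i], reverse=True)
    total = sum(actual)
    if total == 0:
        return 1.0
    cum, acc = 0, 0.0
    for i in order:
        cum += actual[i]
        acc += cum / total
    return acc / len(order)

perfect = fpa([3, 2, 1], [3, 2, 1])   # ranking matches reality
inverted = fpa([1, 2, 3], [3, 2, 1])  # worst module ranked last
```

    Unlike accuracy-style measures, FPA is rank-based, which is why it remains meaningful on the imbalanced defect distributions the abstract discusses.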
    Color gradient filling using main skeleton in complex shape
    WANG Jiarun, REN Fei, RONG Ming, LUO Tongxin
    2018, 38(3):  829-835.  DOI: 10.11772/j.issn.1001-9081.2017082089
    Abstract ( )   PDF (1108KB) ( )  
    References | Related Articles | Metrics
    To solve the problem of color gradient filling of a complicated shape along its stretching trend, a Shape Main Skeleton Color Gradient Filling Algorithm (SMSCGFA) using the shape's main skeleton was proposed. Based on a visual salience estimation vector, the main skeleton was extracted from a shape by whole selection and local geometric optimization. Key steps of SMSCGFA were studied, including skeleton extraction using Constrained Delaunay Triangulation (CDT) and skeleton path extraction with double stacks. The gradient filling colors on the main skeleton were computed, and the gradient filling of the whole shape was completed from the color filling information of the local main skeleton. The experimental results show that the ratio of optimized skeleton paths is reduced to 5.5%, more skeletons with redundant branches are eliminated, and compared with linear color filling, SMSCGFA satisfies subjective visual perception along the shape's stretching trend.
    Performance analysis of motor imagery training based on 3D visual guidance
    HU Min, LI Chong, LU Rongrong, HUANG Hongcheng
    2018, 38(3):  836-841.  DOI: 10.11772/j.issn.1001-9081.2017082010
    Abstract ( )   PDF (992KB) ( )  
    References | Related Articles | Metrics
    To improve the training efficiency of Motor Imagery (MI) under visual guidance and the classification accuracy of Brain-Computer Interfaces (BCI), the influence of a Virtual Reality (VR) environment on MI training and the differences among ElectroEncephaloGram (EEG) classification models under different visual guidance were studied. Firstly, three kinds of 3D hand interaction animations and an EEG acquisition program were designed. Secondly, in the rendering environments of a Helmet-Mounted Display (HMD) and a planar Liquid Crystal Display (LCD), left-hand and right-hand MI training was conducted on 5 healthy subjects, including a standard experiment (a single session lasted 5 min) and a long-duration experiment (a single session lasted 15 min). Finally, through pattern classification of the EEG data, the influence of the rendering environment and content form on classification accuracy was analyzed. The experimental results show a significant difference between HMD and LCD presentation in visually guided MI training: the VR environment presented by the HMD can improve the accuracy of MI classification and prolong the duration of a single training session. In addition, the classification models under different visual guidance contents also differ; when the testing samples and training samples share the same visual guidance content, the average classification accuracy is 16.34% higher than when they differ.
    Noise image segmentation model with local intensity difference
    LI Gang, LI Haifang, SHANG Fangxin, GUO Hao
    2018, 38(3):  842-847.  DOI: 10.11772/j.issn.1001-9081.2017082134
    Abstract ( )   PDF (1173KB) ( )  
    References | Related Articles | Metrics
    It is difficult to obtain correct segmentation results for images whose noise intensity and distribution are unknown, and existing models are not robust in complex noise environments. Thus, a noise-adaptive image segmentation algorithm based on local intensity differences was proposed. Firstly, the Local Correntropy-based K-means (LCK) model and the Region-based model via Local Similarity Factor (RLSF) were analyzed to reduce the sensitivity to noisy pixels. Secondly, a correction function based on local intensity statistics was introduced to reduce the interference on the segmentation results of samples far away from the local mean. Finally, the active contour energy function and the iterative equation integrating the correction function were derived. Experimental results on synthetic and real-world noisy images show that the proposed model is more robust, with higher precision, recall and F-score, than the Local Binary Fitting (LBF), LCK and RLSF models, and that it performs well on images with intensity inhomogeneity and noise.
    Speckle suppression algorithm for ultrasound image based on Bayesian nonlocal means filtering
    FANG Hongdao, ZHOU Yingyue, LIN Maosong
    2018, 38(3):  848-853.  DOI: 10.11772/j.issn.1001-9081.2017071780
    Abstract ( )   PDF (1122KB) ( )  
    References | Related Articles | Metrics
    Ultrasound imaging is one of the most important diagnostic techniques in modern medical imaging, but the presence of multiplicative speckle noise has limited its development. To address this problem, an improved Bayesian Non-Local Means (NLM) filtering algorithm was proposed. Firstly, a Bayesian formulation was applied to derive an NLM filter adapted to a relevant ultrasound noise model, which leads to two ways of computing the distance between image blocks: the Pearson distance and the root distance. Secondly, to lighten the computational burden, an image block pre-selection process was used to accelerate the algorithm when similar image blocks were selected in the non-local area. In addition, the relationship between the filtering parameter and the noise variance was determined experimentally, making the parameter adaptive to the noise. Finally, the algorithm was implemented with VS (Visual Studio) and OpenCV (Open source Computer Vision library), greatly reducing the running time. To evaluate the denoising performance of the proposed algorithm, experiments were conducted on both phantom images and real ultrasound images. The experimental results show that, compared with some existing classical algorithms, the proposed algorithm greatly improves speckle noise removal and achieves satisfactory results in preserving edges and image details.
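    The NLM weighting with block pre-selection can be sketched as follows. Plain Euclidean block distance is used here as a stand-in for the paper's Pearson and root distances, and the mean-based pre-selection rule is an assumption; only the overall structure (pre-select, weight by block distance, average patch centers) reflects the method:

```python
import numpy as np

def nlm_pixel(patches, center_idx, h):
    """Estimate one pixel as a weighted mean of candidate patch
    centers. patches: (n, p) array of flattened image blocks; the
    weight of each block decays with its squared distance to the
    reference block. A cheap pre-selection keeps only blocks whose
    mean intensity is within 20% of the reference block's mean."""
    ref = patches[center_idx]
    means = patches.mean(axis=1)
    keep = np.abs(means - ref.mean()) <= 0.2 * (abs(ref.mean()) + 1e-12)
    cand = patches[keep]
    d2 = ((cand - ref) ** 2).sum(axis=1)       # squared block distance
    w = np.exp(-d2 / (h * h))                  # h: filtering parameter
    centers = cand[:, cand.shape[1] // 2]
    return float((w * centers).sum() / w.sum())

# Two similar blocks survive pre-selection; the dissimilar one is skipped.
patches = np.array([[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [5.0, 5.0, 5.0]])
est = nlm_pixel(patches, 0, 1.0)
```

    The pre-selection is where the speed-up comes from: the expensive block distance is only computed for candidates that pass the cheap mean test.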
    Single image super resolution combining with structural self-similarity and convolution networks
    XIANG Wen, ZHANG Ling, CHEN Yunhua, JI Qiumin
    2018, 38(3):  854-858.  DOI: 10.11772/j.issn.1001-9081.2017081920
    Abstract ( )   PDF (879KB) ( )  
    References | Related Articles | Metrics
    Aiming at the ill-posed inverse problem of single-image Super Resolution (SR) restoration, a single-image super resolution algorithm combining structural self-similarity and convolution networks was proposed. Firstly, the structural self-similarity of the samples to be reconstructed was obtained by scaling decomposition; combined with external database samples as training samples, the problem of sample over-dispersion could be alleviated. Secondly, the samples were input into a Convolutional Neural Network (CNN) for training, yielding prior knowledge for single-image super resolution. Then, the optimal dictionary was used to reconstruct the image under a non-local constraint. Finally, an iterative back-projection algorithm was used to further improve the super resolution result. The experimental results show that compared with such excellent algorithms as Bicubic, the K-SVD (Singular Value Decomposition of k iterations) algorithm and the Super-Resolution Convolutional Neural Network (SRCNN) algorithm, the proposed algorithm achieves super-resolution reconstruction with clearer edges.
    Multi-focus image fusion based on lifting stationary wavelet transform and joint structural group sparse representation
    ZOU Jiabin, SUN Wei
    2018, 38(3):  859-865.  DOI: 10.11772/j.issn.1001-9081.2017081970
    Abstract ( )   PDF (1250KB) ( )  
    References | Related Articles | Metrics
    An image fusion algorithm based on Lifting Stationary Wavelet Transform (LSWT) and joint structural group sparse representation was proposed to restrain the pseudo-Gibbs phenomenon created by conventional wavelet transforms in multi-focus image fusion, overcome the tendency of conventional sparse-representation fusion methods to smooth the textures, edges and other detail features of fused images, and improve the efficiency and quality of multi-focus image fusion. Firstly, lifting stationary wavelet transform was applied to the experimental images, and different fusion schemes were adopted according to the respective physical characteristics of the decomposed low-frequency and high-frequency coefficients: the low-frequency coefficients were selected based on joint structural group sparse representation, while the high-frequency coefficients were selected based on the Directional Region Sum Modified-Laplacian (DRSML) and matched degree. Finally, the fused image was obtained by the inverse transform. The experimental results show that the improved algorithm effectively improves image indicators such as mutual information and average gradient, keeps textures, edges and other detail features intact, and produces better fusion effects.
    Image saliency detection via adaptive fusion of local and global sparse representation
    WANG Xin, ZHOU Yun, NING Chen, SHI Aiye
    2018, 38(3):  866-872.  DOI: 10.11772/j.issn.1001-9081.2017081933
    Abstract ( )   PDF (1134KB) ( )  
    References | Related Articles | Metrics
    To solve the problems of image saliency detection methods based on local or global sparse representation, such as incompletely extracted objects, unsmooth boundaries and residual noise, an image saliency detection algorithm based on the adaptive fusion of local and global sparse representation was proposed. Firstly, the original image was divided into a set of image blocks, which were used in place of pixels to decrease the computational complexity. Secondly, the blocked image was represented via local sparse representation: for each image block, an overcomplete dictionary was generated from its surrounding blocks, and the block was sparsely reconstructed based on this dictionary, yielding an initial local saliency map that can effectively extract the edges of salient objects. Thirdly, the blocked image was represented via global sparse representation in a similar way, except that for each image block the overcomplete dictionary was constructed from the blocks on the four margins of the input image, yielding an initial global saliency map that can effectively detect the inner areas of salient objects. Finally, the initial local and global saliency maps were adaptively fused to compute the final saliency map. Experimental results demonstrate that compared with several classical saliency detection methods, the proposed algorithm significantly improves precision, recall and F-measure.
    Visual simultaneous localization and mapping based on improved closed-loop detection algorithm
    HU Zhangfang, BAO Hezhang, CHEN Xu, FAN Tingkai, ZHAO Liming
    2018, 38(3):  873-878.  DOI: 10.11772/j.issn.1001-9081.2017082004
    Aiming at the problem that maps may be inconsistent due to the accumulation of errors in visual Simultaneous Localization and Mapping (SLAM), a Visual SLAM (V-SLAM) system based on an improved closed-loop detection algorithm was proposed. To reduce the cumulative error caused by long-term operation of mobile robots, an improved closed-loop detection algorithm was introduced: by improving the similarity score function, perceptual aliasing was reduced and the closed-loop recognition rate was improved. At the same time, to reduce the computational complexity, the environment image and depth information were obtained directly by Kinect, and feature extraction and matching were carried out by using compact and robust ORB (Oriented FAST and Rotated BRIEF) features. The RANdom SAmple Consensus (RANSAC) algorithm was used to remove mismatched pairs to obtain more accurate matches, and the camera poses were then calculated by the Perspective-n-Point (PnP) method. Stable and accurate initial pose estimates are critical to back-end processing; they were obtained by using g2o to perform unstructured iterative optimization of the camera poses. Finally, in the back end, Bundle Adjustment (BA) was used as the core of the map optimization method to optimize poses and landmarks. The experimental results show that the system can meet real-time requirements and obtain more accurate pose estimates.
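The abstract does not specify the improved similarity score, so the sketch below uses a common bag-of-words normalization (raw score divided by the score against the previous frame) to illustrate how perceptual aliasing can be damped; all names and the threshold are illustrative:

```python
import math

def cosine(a, b):
    # cosine similarity between two bag-of-words histograms
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def loop_candidates(current, keyframes, prev, alpha=0.5):
    # Normalize each raw score by the score against the previous frame,
    # an "expected similarity" baseline: scenes that resemble everything
    # also resemble the last frame, so their normalized scores stay low.
    baseline = max(cosine(current, prev), 1e-6)
    return [i for i, kf in enumerate(keyframes)
            if cosine(current, kf) / baseline >= alpha]
```

Accepted candidates would then be geometrically verified (ORB matching plus RANSAC) before closing the loop.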
    Moving object removal forgery detection algorithm in video frame
    YIN Li, LIN Xinqi, CHEN Lifei
    2018, 38(3):  879-883.  DOI: 10.11772/j.issn.1001-9081.2017092198
    Aiming at tampering with objects within digital video frames, a tamper detection algorithm based on Principal Component Analysis (PCA) was proposed. Firstly, the difference frame, obtained by subtracting the reference frame from the detected video frame, was denoised by a sparse representation method, which reduced the interference of noise with subsequent feature extraction. Secondly, the denoised video frame was divided into non-overlapping blocks, and pixel features were extracted by PCA to construct the eigenvector space. Then, the k-means algorithm was used to classify the eigenvector space, and the classification result was expressed as a binary matrix. Finally, morphological operations were applied to the binary image to obtain the final detection result. The experimental results show that the precision and recall of the proposed algorithm are 91% and 100% respectively, and its F1 value is 95.3%, which are better than those of the video forgery detection algorithm based on compressed sensing. The results also show that, for videos with still backgrounds, the proposed algorithm can not only detect tampering with moving objects within the frame, but is also robust to lossy video compression.
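The block-wise PCA feature-extraction step can be sketched as follows (block size and feature dimension are illustrative; PCA is computed via SVD of the mean-centered block matrix):

```python
import numpy as np

def block_pca_features(frame, bs=8, k=4):
    # split the frame into non-overlapping bs x bs blocks, one row per block
    h, w = frame.shape
    blocks = (frame[:h - h % bs, :w - w % bs]
              .reshape(h // bs, bs, w // bs, bs)
              .swapaxes(1, 2)
              .reshape(-1, bs * bs))
    X = blocks - blocks.mean(axis=0)
    # PCA via SVD: rows of Vt are principal directions, ordered by variance
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T          # project each block onto the top-k directions
```

The resulting feature rows would then be clustered with k-means (k = 2) into "tampered" and "untampered" blocks, giving the binary matrix mentioned above.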
    Cell-phone source identification based on spectral fusion features of recorded speech
    PEI Anshan, WANG Rangding, YAN Diqun
    2018, 38(3):  884-890.  DOI: 10.11772/j.issn.1001-9081.2017071864
    With the popularity of cell-phone recording devices and the availability of powerful, easy-to-use digital media editing software, source cell-phone identification has become a hot topic in multimedia forensics. A cell-phone source identification algorithm based on spectral fusion features was proposed to address this problem. Firstly, spectrograms of the same speech recorded by different cell-phones were analyzed, and it was found that their spectral characteristics differed; the logarithmic spectrum, phase spectrum and information content of a speech signal were then studied. Secondly, the three features were concatenated to form the original fusion feature, and the sample feature space was constructed from the original fusion feature of each sample. Finally, feature selection was performed with the CfsSubsetEval evaluation function of the WEKA platform under the best-first search method, and LibSVM was used for model training and sample recognition after feature selection. Twenty-three popular cell-phone models were evaluated in the experiments. The results show that the proposed spectral fusion feature achieves higher identification accuracy for cell-phone brands than any single spectral feature, with average identification accuracies of 99.96% and 99.91% on the TIMIT and CKC-SD databases respectively. Compared with Hanilci's source identification algorithm based on Mel-Frequency Cepstral Coefficients (MFCC), the average identification accuracy was improved by 6.58 and 5.14 percentage points respectively. Therefore, the proposed algorithm can improve the average identification accuracy and effectively reduce the false positive rate of cell-phone source identification.
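A minimal sketch of building one frame's fusion feature by concatenating a log-magnitude spectrum and a phase spectrum (the paper additionally uses an information-quantity feature and per-recording statistics; the window and FFT size here are assumptions):

```python
import numpy as np

def spectral_fusion_feature(frame, n_fft=64):
    # log-magnitude spectrum + phase spectrum of one windowed speech frame,
    # concatenated into a single feature vector
    win = np.hanning(len(frame))
    spec = np.fft.rfft(frame * win, n=n_fft)
    log_mag = np.log(np.abs(spec) + 1e-10)   # epsilon avoids log(0)
    phase = np.angle(spec)
    return np.concatenate([log_mag, phase])
```

Per-frame vectors would be pooled over a recording, filtered by feature selection, and fed to an SVM classifier.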
    Musical instrument identification based on multiscale time-frequency modulation and multilinear principal component analysis
    WANG Fei, YU Fengqing
    2018, 38(3):  891-894.  DOI: 10.11772/j.issn.1001-9081.2017092175
    Aiming at the poor classification performance of time- or frequency-domain features, cepstral features, sparse features and probabilistic features for instruments of the same family and for percussion instruments, an enhanced model that extracts time-frequency information with lower redundancy was proposed. Firstly, a cochlear model was used to filter the music signal; its output, called the Auditory Spectrum (AS), contains harmonic information and is close to human perception. Secondly, time-frequency features were acquired by Multiscale Time-Frequency Modulation (MTFM). Then, dimensionality reduction was performed by Multilinear Principal Component Analysis (MPCA), which preserves the structure and intrinsic correlation of the features. Finally, classification was conducted with a Support Vector Machine (SVM). The experimental results show that the average accuracy of MTFM on the IOWA database is 92.74%, with error rates of 3% for percussion instruments and 9.12% for same-family instruments, outperforming the aforementioned features. The accuracy with MPCA was 6.43 percentage points higher than with Principal Component Analysis (PCA). The proposed model is therefore a viable option for identifying same-family and percussion instruments.
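The MPCA reduction step can be sketched as a single-pass multilinear projection that keeps the top directions of each tensor mode (the published MPCA iterates these projections to convergence; the ranks here are illustrative):

```python
import numpy as np

def unfold(tensor, mode):
    # mode-n unfolding: the chosen mode becomes rows, all others columns
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mpca_projections(tensors, ranks):
    # for each mode, eigen-decompose the scatter of mode-n unfoldings
    # across all samples and keep the top-r eigenvectors
    projs = []
    for mode, r in enumerate(ranks):
        S = sum(unfold(t, mode) @ unfold(t, mode).T for t in tensors)
        _, vecs = np.linalg.eigh(S)          # ascending eigenvalues
        projs.append(vecs[:, -r:].T)         # shape: r x dim(mode)
    return projs

def project(tensor, projs):
    # multiply the tensor by each mode's projection matrix in turn
    out = tensor
    for mode, U in enumerate(projs):
        out = np.moveaxis(np.tensordot(U, out, axes=(1, mode)), 0, mode)
    return out
```

Unlike flattening followed by PCA, each mode (e.g. scale, rate, frequency of the MTFM output) is reduced separately, so the tensor structure survives.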
    Spatio-temporal two-stream human action recognition model based on video deep learning
    YANG Tianming, CHEN Zhi, YUE Wenjing
    2018, 38(3):  895-899.  DOI: 10.11772/j.issn.1001-9081.2017071740
    Deep learning has achieved good results in human action recognition, but it has yet to make full use of the appearance and motion information in video. To recognize human actions by using both spatial and temporal information in video, a video human action recognition model based on a spatio-temporal two-stream architecture was proposed. In the proposed model, two convolutional neural networks were used to extract the spatial and temporal features of the video sequence respectively; the two networks were then merged to extract intermediate spatio-temporal features, and finally the extracted features were input into a 3D convolutional neural network to complete recognition. Experiments were carried out on the UCF101 and HMDB51 datasets. The experimental results show that the proposed spatio-temporal two-stream 3D convolutional neural network model can effectively recognize human actions in video.
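The mid-level fusion and 3D convolution stages can be illustrated with a framework-free sketch; a real model would use a deep learning library with learned kernels, and this naive loop only shows the operations involved:

```python
import numpy as np

def fuse_streams(spatial_feats, temporal_feats):
    # mid-level fusion: stack per-frame spatial and temporal feature maps
    # along the leading (time/channel) axis before the 3D convolution stage
    return np.concatenate([spatial_feats, temporal_feats], axis=0)

def conv3d_valid(vol, kernel):
    # naive "valid" 3D convolution over a (T, H, W) feature volume
    T, H, W = vol.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i + t, j:j + h, k:k + w] * kernel)
    return out
```

The 3D kernel slides across time as well as space, which is what lets the final stage model motion jointly with appearance.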
    Ship course identification model based on recursive least squares algorithm with dynamic forgetting factor
    SUN Gongwu, XIE Jirong, WANG Junxuan
    2018, 38(3):  900-904.  DOI: 10.11772/j.issn.1001-9081.2017082041
    To improve the speed and robustness of the Recursive Least Squares (RLS) algorithm with forgetting factor in identifying the parameters of the mathematical model of ship course motion, an RLS algorithm with a dynamic forgetting factor based on fuzzy control was proposed. Firstly, the residual between the theoretical and actual model outputs was calculated. Secondly, an evaluation function was constructed from the residual to assess the parameter identification error. Then, a fuzzy controller taking the evaluation function and its change rate as inputs was adopted to adjust the forgetting factor dynamically. Finally, combined with the designed fuzzy control rule table, the correction of the forgetting factor was obtained from the fuzzy controller. Simulation results show that the proposed algorithm adjusts the forgetting factor according to the parameter identification error, achieving higher precision and faster parameter identification than the RLS algorithm with a constant forgetting factor.
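The RLS update with a residual-driven forgetting factor can be sketched as follows; the linear ramp in `adjust_lambda` is a crude stand-in for the paper's fuzzy controller and rule table:

```python
import numpy as np

def rls_step(theta, P, x, y, lam):
    # one recursive-least-squares update with forgetting factor lam
    Px = P @ x
    k = Px / (lam + x @ Px)             # gain vector
    err = y - x @ theta                 # a-priori residual
    theta = theta + k * err
    P = (P - np.outer(k, Px)) / lam     # covariance update
    return theta, P, err

def adjust_lambda(err, derr, lo=0.90, hi=1.0):
    # Stand-in for the fuzzy controller: when the residual (or its growth)
    # is large, shrink the forgetting factor so old data is discounted
    # faster; a real implementation would fuzzify err/derr and apply rules.
    e = min(abs(err) + max(derr, 0.0), 1.0)
    return hi - (hi - lo) * e
```

A smaller forgetting factor tracks parameter changes faster; a factor near 1 averages more history and resists noise, which is the trade-off the controller manages.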
    Sub-health state identification method of subway door based on time series data mining
    XUE Yu, MEI Xue, ZHI Youran, XU Zhixing, SHI Xiang
    2018, 38(3):  905-910.  DOI: 10.11772/j.issn.1001-9081.2017081912
    Aiming at the problem that the sub-health state of subway doors is difficult to identify, a sub-health state identification method based on time-series data mining was proposed. First, the angle, speed and current data of the subway door motor were discretized by combining a multi-scale sliding window with the Extension of Symbolic Aggregate approXimation (ESAX) algorithm. Then, features were obtained by calculating the distances to templates of the normal door state, and Principal Component Analysis (PCA) was adopted to reduce the feature dimension. Finally, combined with basic features, a hierarchical pattern recognition model was proposed to identify the sub-health state from coarse to fine. Real subway door test data were used to verify the effectiveness of the proposed method. The experimental results show that the method recognizes sub-health states effectively, with a recognition rate of up to 99%.
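The symbolic discretization underlying ESAX can be illustrated with plain SAX (ESAX additionally symbolizes each segment's maximum and minimum; segment count and alphabet size here are illustrative):

```python
import numpy as np

def paa(series, n_seg):
    # Piecewise Aggregate Approximation: mean of each equal-length segment
    return np.array([seg.mean()
                     for seg in np.array_split(np.asarray(series, float), n_seg)])

def sax(series, n_seg=8, alphabet="abcd"):
    # z-normalize, reduce with PAA, then map segment means to symbols using
    # equiprobable N(0,1) breakpoints (quartile cuts for a 4-letter alphabet)
    s = np.asarray(series, float)
    s = (s - s.mean()) / (s.std() + 1e-12)
    breakpoints = [-0.6745, 0.0, 0.6745]
    return "".join(alphabet[np.searchsorted(breakpoints, v)]
                   for v in paa(s, n_seg))
```

Distances between the symbol strings of a monitored run and normal-state templates would then form the features fed to the hierarchical recognizer.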
    Target range and speed measurement method based on Golomb series modulation
    WANG Ruidong, CHENG Yongzhi, XIONG Ying, ZHOU Xinglin, MAO Xuesong
    2018, 38(3):  911-915.  DOI: 10.11772/j.issn.1001-9081.2017081915
    To address the low upper limit of the radiated peak power of continuous-wave laser radar, which limits the maximum measurement range in range and speed measurement applications, a modulation waveform based on a Golomb series was proposed, and the feasibility of simultaneously measuring a target's range and speed in road environments with this method was studied. Firstly, the low transmitted peak power of continuous-wave modulation was analyzed by using quasi-continuous Pseudo-random Noise (PN) code modulation as an example. The characteristics of Golomb series were then discussed, and a Golomb-series modulation method was proposed to raise the peak power of the transmitted pulses. Next, a method for analyzing the spectrum of the Doppler signal modulated by the Golomb series was discussed, together with a data accumulation method for locating the signal delay time, so that range and speed could be measured simultaneously. Finally, simulations within the range of Doppler frequencies generated by moving targets in road environments were performed to verify the correctness of the proposed method. The results show that the Fast Fourier Transform (FFT) can recover the Doppler frequency even when the sampling frequency provided by the pulse series is far below the Nyquist frequency, which greatly increases the peak power of a single pulse while the average transmitted power remains unchanged. Furthermore, the data accumulation method can locate the laser pulse flight time by exploiting the unequal-interval property of the Golomb series, enabling both range and speed measurement from the same signal.
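The delay-location idea can be sketched as follows: because a Golomb ruler has all-distinct pairwise differences, only the true delay aligns every transmitted pulse with a received one (pulse times are in arbitrary integer ticks; this is an illustration, not the paper's exact accumulator):

```python
def recover_delay(ruler, received, max_delay):
    # Accumulate, for each candidate delay d, how many transmitted pulse
    # positions m land on received positions m + d. The distinct-difference
    # property of a Golomb ruler guarantees any wrong d matches at most one
    # pulse, so the true delay stands out with a full match count.
    rx = set(received)
    return max(range(max_delay + 1),
               key=lambda d: sum((m + d) in rx for m in ruler))
```

The recovered delay gives the pulse time of flight (hence range), while the Doppler frequency estimated from the same pulse train gives the speed.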
    Non-contact heart rate measurement method based on Eulerian video magnification
    SU Peiquan, XU Liang, LIANG Yongjian
    2018, 38(3):  916-922.  DOI: 10.11772/j.issn.1001-9081.2017071808
    Aiming at the problems of inconvenient operation, strong in-band noise interference and sensitivity to ambient temperature in existing non-contact heart rate measurement, a non-contact heart rate measurement method based on Eulerian video magnification was proposed. Firstly, the tiny pulsation of the radial artery at the wrist was magnified by Eulerian video magnification. Secondly, the luminance variance of the pixels in the magnified video frames was computed in the time domain, and the skin area was segmented in the YCrCb color space. Thirdly, the pulsing region of the radial artery in the video was extracted by combining the luminance variance statistics and skin segmentation with morphological image processing. Finally, non-contact heart rate measurement was implemented by time-frequency analysis, applying the Fourier transform to the luminance signal of the radial artery extracted in the time domain. The experimental results show that the Root Mean Square Error (RMSE) is reduced by 50.5% and 32.6% compared with Independent Component Analysis (ICA) and pulse alternating-current signal analysis respectively, and the Mean Absolute Difference (MAD) is 12% lower than that of the wavelet filtering method. The proposed approach agrees well with pulse oximeter measurements and satisfies medical industry standards; it can also be used for heart rate monitoring in daily family health care and telemedicine.
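The final time-frequency step can be sketched as follows: an FFT of the mean-removed luminance trace of the artery region, with the peak restricted to a plausible heart-rate band (the band limits and sampling rate are assumptions):

```python
import numpy as np

def heart_rate_bpm(luminance, fps, lo=0.8, hi=3.0):
    # FFT of the mean-removed luminance trace; pick the dominant frequency
    # inside the plausible heart-rate band (lo..hi Hz, i.e. 48..180 BPM)
    # and convert it to beats per minute
    x = np.asarray(luminance, float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    mag = np.abs(np.fft.rfft(x))
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(mag[band])]
```

Restricting the search band is what rejects out-of-band motion and lighting noise that survives the magnification step.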
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn