
Table of Contents

    10 October 2018, Volume 38 Issue 10
    Cloud service composition method based on uncertain QoS-awareness
    WANG Sichen, TU Hui, ZHANG Yiwen
    2018, 38(10):  2753-2758.  DOI: 10.11772/j.issn.1001-9081.2018041187
    To solve the problem of uncertain Quality of Service (QoS)-aware cloud service composition optimization, an Uncertain-Long Time Series (ULST) model and a Tournament strategy based Genetic Algorithm (T-GA) were proposed. Firstly, based on the different access rules of users to services in different periods, the long-term change of QoS was modeled as an uncertain-long time series, which can accurately describe a user's actual QoS access records for a service over a period of time. Secondly, an improved genetic algorithm based on the uncertain QoS model was proposed, which used a tournament selection strategy instead of the basic roulette-wheel selection strategy. Finally, extensive experiments were carried out on real data. The uncertain-long time series model can effectively solve the problem of uncertain QoS-aware cloud service composition; the proposed T-GA is superior to the Genetic Algorithm based on Elite selection strategy (E-GA) in optimization results and stability, and its execution speed is nearly doubled, making it a feasible, highly efficient and stable algorithm.
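    The tournament selection that T-GA substitutes for roulette-wheel selection can be sketched in a few lines. A minimal illustration, assuming a population of candidate compositions with precomputed fitness values (all names here are illustrative, not from the paper):

```python
import random

def tournament_select(population, fitness, k=2):
    """Pick one parent: sample k individuals uniformly, keep the fittest.

    Unlike roulette-wheel selection, selection pressure depends only on
    fitness rank, not on the absolute scale of the fitness values.
    """
    contestants = random.sample(range(len(population)), k)
    winner = max(contestants, key=lambda i: fitness[i])
    return population[winner]

# Illustrative use: fitness stands in for an aggregated QoS utility.
population = [[0, 2, 1], [1, 0, 2], [2, 1, 0]]  # candidate service compositions
fitness = [0.61, 0.74, 0.58]
parent = tournament_select(population, fitness, k=2)
```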
    Big data active learning based on MapReduce
    ZHAI Junhai, ZHANG Sufang, WANG Cong, SHEN Chu, LIU Xiaomeng
    2018, 38(10):  2759-2763.  DOI: 10.11772/j.issn.1001-9081.2018041141
    Considering the problem that traditional active learning algorithms can only handle small and medium size data sets, a big data active learning algorithm based on MapReduce was proposed. Firstly, a classifier was trained by the Extreme Learning Machine (ELM) algorithm on an initial training set, and the outputs of the classifier were transformed into a posterior probability distribution by the softmax function. Secondly, the unlabeled big data set was partitioned into l subsets, which were deployed to a cloud computing platform with l nodes. On each node, the information entropy of each instance in the subset was calculated by the trained classifier, and the q instances with maximum information entropy were selected for labeling; the l×q labeled instances were then added to the training set. These steps were repeated until the predefined termination criterion was satisfied. Comparative tests with the ELM-based active learning algorithm were conducted on 4 data sets: Artificial, Skin, Statlog and Poker. Experimental results show that the proposed algorithm can complete active instance selection on all 4 data sets, while the ELM-based active learning algorithm can only complete active instance selection on the smallest data set, indicating that the proposed algorithm outperforms the ELM-based active learning algorithm.
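    The per-node selection step (softmax posterior, then maximum-entropy picking) can be sketched as follows; raw_outputs and q are illustrative stand-ins, not names from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_most_uncertain(raw_outputs, q):
    """Return indices of the q instances with maximum predictive entropy.

    raw_outputs: (n_instances, n_classes) classifier outputs (e.g. from an
    ELM), mapped to posterior probabilities by softmax as in the paper.
    """
    p = softmax(raw_outputs)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-q:]

# Each of the l nodes would run this on its own subset; the q instances
# selected per node are then labeled and merged into the training set.
outputs = np.random.randn(1000, 5)  # stand-in for ELM outputs on one subset
picked = select_most_uncertain(outputs, q=10)
```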
    Incremental attribute reduction method for incomplete hybrid data with variable precision
    WANG Yinglong, ZENG Qi, QIAN Wenbin, SHU Wenhao, HUANG Jintao
    2018, 38(10):  2764-2771.  DOI: 10.11772/j.issn.1001-9081.2018041293
    In order to deal with the high computational complexity of static attribute reduction when data increases dynamically in an incomplete hybrid decision system, an incremental attribute reduction method was proposed for incomplete hybrid data with variable precision. The importance degrees of attributes were measured by conditional entropy in the variable precision model. Then the incremental updating of conditional entropy and the updating mechanism of attribute reduction when data is dynamically increased were analyzed and designed in detail. An incremental attribute reduction method was constructed with a heuristic greedy strategy, which can dynamically update the attribute reduction of incomplete numeric and symbolic hybrid data. Experimental comparison and analysis were carried out on five real hybrid datasets from UCI. In terms of reduction effect, when the incremental size of Echocardiogram, Hepatitis, Autos, Credit and Dermatology increased to 90%+10%, the number of attributes was reduced from 12, 19, 25, 17, 34 to 6, 7, 10, 11, 13, accounting for 50.0%, 36.8%, 40.0%, 64.7%, 38.2% of the original attribute sets. In terms of execution time, the average time consumed by the incremental algorithm on the five datasets was 2.99, 3.13, 9.70, 274.19 and 50.87 seconds, while the average time consumed by the static algorithm was 284.92, 302.76, 1062.23, 3510.79 and 667.85 seconds. The time consumption of the incremental algorithm is related to the instance size distribution, the number of attributes, and the attribute value types of the data set. The experimental results show that the incremental attribute reduction algorithm is significantly superior to the static algorithm in time consumption, and can effectively eliminate redundant attributes.
    Representative-based ensemble learning classification with leave-one-out
    WANG Xuan, ZHANG Lin, GAO Lei, JIANG Haokun
    2018, 38(10):  2772-2777.  DOI: 10.11772/j.issn.1001-9081.2018041101
    In order to address the effect of non-uniform sampling, a Leave-One-Out Ensemble Learning Classification Algorithm (LOOELCA) for symbolic data classification was proposed based on the representative-based classification algorithm. Firstly, n small training sets were obtained through the leave-one-out method, where n is the initial training set size. Then independent representative-based classifiers were built on these training sets, and the misclassified classifiers and objects were marked out. Finally, the marked classifiers and the original classifier formed a committee to classify the test set objects. If the committee vote was unanimous, the test object was directly labeled with that class label; otherwise, the test object was classified based on the k-Nearest Neighbor (kNN) algorithm and the marked objects. The experimental results on UCI standard datasets show that the accuracy of LOOELCA improved by 0.35-2.76 percentage points on average compared with the Representative-Based Classification through Covering-Based Neighborhood Rough Set (RBC-CBNRS); compared with ID3, J48, Naïve Bayes, OneR and other methods, LOOELCA also achieves higher classification accuracy.
    Multi-center convolutional feature weighting based image retrieval
    ZHU Jie, ZHANG Junsan, WU Shufang, DONG Yukun, LYU Lin
    2018, 38(10):  2778-2781.  DOI: 10.11772/j.issn.1001-9081.2018041100
    Deep convolutional features can provide rich semantic information for image content description. In order to highlight the object content in the image representation, a multi-center convolutional feature weighting method was proposed based on the relationship between high-response positions and object regions. Firstly, a pre-trained deep network model was used to extract the deep convolutional features. Secondly, the activation map was obtained by summing the feature maps over all channels, and the positions with the few highest responses were taken as the centers of the object. Thirdly, the number of centers was treated as the scale, and the descriptors at different positions were weighted based on the distances between those positions and the centers. Finally, the image representation for retrieval was generated by merging the image features obtained with different numbers of centers. Compared with the Sum-pooled Convolutional (SPoC) algorithm and the Cross-dimensional Weighting (CroW) algorithm, the proposed method can provide scale information and highlight the object content in the image representation, and achieves excellent retrieval results on the Holiday, Oxford and Paris image retrieval datasets.
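    The center selection and distance-based weighting can be sketched directly on a C×H×W feature map. A minimal sketch using a Gaussian weighting kernel; the paper's exact weighting function may differ, and all parameter names are illustrative:

```python
import numpy as np

def multi_center_weighted_feature(fmap, num_centers=3, sigma=None):
    """Aggregate a CxHxW conv feature map with multi-center weighting.

    The positions with the highest summed activation act as object centers;
    each position's descriptor is weighted by a Gaussian of its distance to
    the nearest center, then summed into a C-dimensional image descriptor.
    """
    C, H, W = fmap.shape
    act = fmap.sum(axis=0)                           # HxW activation map
    top = np.argsort(act.ravel())[-num_centers:]     # highest-response positions
    centers = np.stack(np.unravel_index(top, (H, W)), axis=1).astype(float)

    ys, xs = np.mgrid[0:H, 0:W]
    pos = np.stack([ys, xs], axis=-1).astype(float)  # HxWx2 grid coordinates
    d = np.linalg.norm(pos[:, :, None, :] - centers[None, None, :, :], axis=-1)
    dmin = d.min(axis=2)                             # distance to nearest center
    sigma = sigma or max(H, W) / 3.0
    w = np.exp(-dmin ** 2 / (2 * sigma ** 2))        # HxW position weights
    return (fmap * w[None]).sum(axis=(1, 2))

# Descriptors for several scales (numbers of centers) would then be merged.
desc = multi_center_weighted_feature(np.random.rand(512, 14, 14), num_centers=3)
```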
    Video object segmentation via information entropy constraint
    DING Feifei, YANG Wenyuan
    2018, 38(10):  2782-2787.  DOI: 10.11772/j.issn.1001-9081.2018041099
    In most graph-based segmentation methods, prior saliency regions are obtained by analyzing motion and appearance information, and then an energy model is minimized for further segmentation. These methods often ignore refined analysis of appearance information and are not robust in complex scenarios. Since information entropy can measure sample purity, and minimizing information entropy has a goal consistent with minimizing the energy model, a video object segmentation method via information entropy constraint was proposed. Firstly, the segmentation results of the first stage were obtained by combining the optical flow vectors with the point-in-polygon principle from computational geometry. Secondly, uniform motion and appearance were obtained by taking superpixels as the basic segmentation units. Finally, video segmentation was formulated as a two-label pixel labeling optimization problem by introducing an information entropy constraint into the energy function, and more accurate segmentation results were obtained by minimizing the energy function. The experimental results on public datasets show that the proposed method can effectively improve the robustness of video object segmentation.
    Unconstrained face verification based on 3D frontalization and similarity learning
    XU Xin, LIANG Jiuzhen
    2018, 38(10):  2788-2793.  DOI: 10.11772/j.issn.1001-9081.2018041068
    Focusing on the problems of small samples, large face pose changes, occlusion and complex background under unconstrained conditions, a face verification method based on 3D frontalization and similarity learning was proposed. Firstly, the 3D frontalization process was applied to generate the frontal face from a face image. Secondly, the complex background was removed by cropping the relevant face regions. Finally, a similarity learning method based on an intra-personal subspace was applied to measure the similarity of image pairs. Experiments were conducted on several databases built by preprocessing the Labeled Faces in the Wild (LFW) database; these databases differ from the original LFW only in that their images have been preprocessed. In the experiment with the Local Ternary Pattern (LTP) descriptor as the feature extraction method and 625 training image pairs, the recognition rate of the proposed algorithm, Similarity Learning over subspace (sub-SL), was 15.6% and 8.4% higher than that of Metric Learning over subspace (sub-ML) and Similarity Metric Learning over subspace (sub-SML) respectively. Experimental results show that the proposed algorithm can effectively improve the accuracy of face verification under unconstrained conditions.
    Evidence combination rule with similarity collision reduced
    WANG Jian, ZHANG Zhiyong, QIAO Kuoyuan
    2018, 38(10):  2794-2800.  DOI: 10.11772/j.issn.1001-9081.2018030532

    Aiming at the problem of decision errors caused by similarity collision in evidence theory, a new combination rule for evidence theory was proposed. Firstly, the features of the focal-element sequence in each piece of evidence were extracted and converted into a sort matrix to reduce similarity collision. Secondly, the weight of each piece of evidence was determined based on the sort matrix and information entropy. Finally, the Modified Average Evidence (MAE) was generated from the evidence set and evidence weights, and the combination result was obtained by combining MAE n-1 times using the Dempster combination rule. The experimental results on the Iris dataset show that the F-Scores of the average-based combination rule, the similarity-based combination rule, the evidence distance-based combination rule, the evidence credit-based combination rule and the proposed method are 0.84, 0.88, 0.88, 0.88 and 0.91 respectively. Experimental results show that the proposed method has higher decision accuracy and more reliable combination results, and can provide an efficient solution for decision making based on evidence theory.
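    The Dempster combination step applied to MAE can be illustrated for two mass functions keyed by frozenset focal elements; the masses below are illustrative, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over frozenset focal
    elements: conflict mass (empty intersections) is discarded and the
    remaining mass renormalized."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two pieces of evidence over hypotheses {A, B}.
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.7, B: 0.2, A | B: 0.1}
m2 = {A: 0.6, B: 0.3, A | B: 0.1}
m = dempster_combine(m1, m2)  # the paper combines MAE n-1 times this way
```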

    Multiple attribute decision method based on improved fuzzy entropy and evidential reasoning
    XIONG Ningxin, WANG Yingming
    2018, 38(10):  2801-2806.  DOI: 10.11772/j.issn.1001-9081.2018030677
    Aiming at the problem that attribute weights are difficult to obtain under the framework of evidential reasoning, a multi-attribute decision method based on improved fuzzy entropy and evidential reasoning was proposed. Firstly, the formula of trigonometric fuzzy entropy under the belief decision matrix framework of evidential reasoning was defined, and it was proved to satisfy the four axiomatic definitions of entropy. Secondly, the proposed method can simultaneously handle the two situations in which the attribute weights are completely unknown and in which the attribute weight information is partially known. When the attribute weights were completely unknown, they were calculated based on the basic idea of fuzzy entropy and the entropy weight method under the belief framework. When partial information on the attribute weights was known, the weighted fuzzy entropy was defined, and a linear programming model minimizing the expected fuzzy entropy was established to obtain the optimal attribute weights. Finally, the evidential reasoning algorithm was used to aggregate the belief degrees of attributes, and the ranking of alternatives was obtained in combination with expected utility theory. Example calculations and comparative analysis with the traditional fuzzy entropy method verify that the proposed method reflects the original decision information more fully, and is more objective and general.
    Multi-label classification algorithm based on gravitational model
    LI Zhaoyu, WANG Jichao, LEI Man, GONG Qin
    2018, 38(10):  2807-2811.  DOI: 10.11772/j.issn.1001-9081.2018040813
    Aiming at the problem that multi-label classification algorithms cannot fully utilize the correlations between labels, a new multi-label classification algorithm based on a gravitational model, namely MLBGM, was proposed, which establishes positive and negative correlation matrices of labels to mine the different correlations among labels. Firstly, by traversing all samples in the training set, the k nearest neighbors of each training sample were obtained. Secondly, according to the distribution of labels among all neighbors of each sample, positive and negative correlation matrices were established for each training sample. Then, the neighbor density and neighbor weights of each training sample were calculated. Finally, a multi-label classification model was constructed by calculating the interaction between data particles. The experimental results show that the Hamming Loss of MLBGM is reduced by an average of 15.62% compared with 5 contrast algorithms that do not consider negative correlations between labels; MicroF1 increases by an average of 7.12%, and SubsetAccuracy by an average of 14.88%. MLBGM obtains effective experimental results and outperforms the comparison algorithms because it makes full use of the different correlations between labels.
    Adaptive differential evolution algorithm based on multiple mutation strategies
    ZHANG Qiang, ZOU Dexuan, GENG Na, SHEN Xin
    2018, 38(10):  2812-2821.  DOI: 10.11772/j.issn.1001-9081.2018030684
    In order to overcome the disadvantages of the Differential Evolution (DE) algorithm such as low optimization accuracy, slow convergence and poor stability, an Adaptive Differential Evolution algorithm based on a Multi-Mutation strategy (ADE-MM) was proposed. Firstly, two disturbance thresholds with learning functions were used in the selection among three mutation strategies to increase the diversity of the population and expand the search scope. Then, according to the successful parameters of the last iteration, the current parameters were adjusted adaptively to improve search accuracy and speed. Finally, the vector particle pool method and the central particle method were used to generate new vector particles to further improve the search effect. Tests were performed on 8 functions against 5 comparison algorithms (Random Mutation Differential Evolution (RMDE), Cross-Population Differential Evolution algorithm based on Opposition-based Learning (OLCPDE), Adaptive Differential Evolution with Optional External Archive (JADE), Self-adaptive Differential Evolution (SaDE), and Modified Differential Evolution with p-best Crossover (MDE_pBX)), with each test run 30 times independently. ADE-MM achieves a complete victory in the comparison of means and variances: in the 30-dimensional case it obtains 5 outright wins and 3 ties; in the 50-dimensional case, 6 outright wins and 2 ties; and in the 100-dimensional case it wins all outright. In the Wilcoxon rank-sum test, winning rate and time-consumption analysis, ADE-MM also achieves excellent performance. The results show that the ADE-MM algorithm has stronger global search ability, convergence and stability than the other five comparison algorithms.
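    The mutation strategies that ADE-MM switches among are standard DE variants; a minimal sketch of three such strategies (the paper's exact strategy set and switching thresholds are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(pop, best, i, F, strategy):
    """Classic DE mutation variants; an adaptive scheme like ADE-MM picks
    among strategies of this kind and tunes F from past successes."""
    n = len(pop)
    r = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
    if strategy == "rand/1":
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
    if strategy == "best/1":
        return best + F * (pop[r[0]] - pop[r[1]])
    # "current-to-best/1"
    return pop[i] + F * (best - pop[i]) + F * (pop[r[0]] - pop[r[1]])

pop = rng.random((20, 8))  # 20 individuals in 8 dimensions
trial = mutate(pop, best=pop[0], i=3, F=0.5, strategy="rand/1")
```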
    Group decision-making model based on incomplete probability information
    DAI Yiyu, CHEN Jiang
    2018, 38(10):  2822-2826.  DOI: 10.11772/j.issn.1001-9081.2018030657
    A group decision-making model based on an optimization model and a consistency adjustment algorithm was established for group decision problems with incomplete occurrence probability information of hesitant fuzzy elements. First of all, some new concepts were introduced, including Probability Incomplete Hesitant Fuzzy Preference Relations (PIHFPRs), the expected consistency of PIHFPRs, and the acceptable additive expected consistency of PIHFPRs. Secondly, taking the minimization of the deviations between the PIHFPRs and the weight vectors as the objective function, a linear optimization model was constructed to calculate the probability information of the PIHFPRs. Then, the comprehensive PIHFPR was determined by using the weighted integration operator for probability incomplete hesitant fuzzy preference relations. A group consistency adjustment algorithm was further designed, which not only makes the adjusted PIHFPRs acceptably expected consistent, but also yields the weight vectors for the alternatives. Finally, the proposed group decision-making model was applied to a numerical example concerning blockchain selection. Experimental results show that the decision results are reasonable and reliable, and reflect the actual situation.
    Adaptive backstepping sliding mode control for robotic manipulator with the improved nonlinear disturbance observer
    ZOU Sifan, WU Guoqing, MAO Jingfeng, ZHU Weinan, WANG Yurong, WANG Jian
    2018, 38(10):  2827-2832.  DOI: 10.11772/j.issn.1001-9081.2018030525
    In order to solve the problems of control input chattering in traditional sliding mode control, the required acceleration term, and the limited application models of traditional disturbance observers in manipulator joint position tracking, an adaptive backstepping sliding mode control algorithm for manipulators with an improved nonlinear disturbance observer was proposed. Firstly, an improved nonlinear disturbance observer was designed for on-line disturbance estimation; disturbance estimates were added to the sliding mode control law to compensate for observable disturbances, and appropriate design parameters were then selected to make the observation error converge exponentially. An adaptive control law was used to estimate the unobservable disturbances, further improving the tracking performance of the control system. Finally, the Lyapunov function was used to verify the asymptotic stability of the closed-loop system, and the method was applied to joint position tracking of the manipulator. The experimental results show that, compared with the traditional sliding mode algorithm, the improved control algorithm not only accelerates the response speed of the system, but also effectively suppresses system chattering, avoids measuring acceleration terms, and expands the scope of applicable models.
    Improved single shot multibox detector based on the transposed convolution
    GUO Chuanlei, HE Jia
    2018, 38(10):  2833-2838.  DOI: 10.11772/j.issn.1001-9081.2018030720
    Since the mean Average Precision (mAP) of the Single Shot multibox Detector (SSD) drops significantly when evaluated at a higher Intersection over Union (IoU), a feature aggregation method using transposed convolution as its main component was proposed. On the basis of the SSD model, a deep Residual convolutional Network (ResNet) with 101 layers was used to extract features. Firstly, abstract semantics and context information were generated by transposed convolutional layers that doubled the scale of the deeper feature maps. Secondly, fully connected convolutional layers were applied to the shallow layers to prevent unexpected bias. Finally, the shallow and deep feature maps were concatenated, and convolutional layers with kernel size 1 were used to reduce the number of channels. The feature aggregation can be repeated multiple times. The experiments were conducted on the KITTI dataset with an IoU threshold of 0.7. Experimental results show that the mAP was improved by about 5.1 and 2 percentage points compared with the original SSD model and the state-of-the-art Faster R-CNN model respectively. The feature aggregation model can effectively improve mAP and generate high-quality bounding boxes in object detection tasks.
    Short utterance speaker recognition algorithm based on multi-featured i-vector
    SUN Nian, ZHANG Yi, LIN Haibo, HUANG Chao
    2018, 38(10):  2839-2843.  DOI: 10.11772/j.issn.1001-9081.2018030598
    When the test speech is long enough, the information and discrimination of a single feature are sufficient to complete the speaker recognition task. However, when the test speech is very short, speaker recognition performance decreases significantly due to the small data size and insufficient discrimination. Aiming at the problem of insufficient speaker information under the short-speech condition, a short utterance speaker recognition algorithm based on a multi-featured i-vector was proposed. Firstly, different acoustic feature vectors were extracted and combined into a high-dimensional feature vector. Then Principal Component Analysis (PCA) was used to remove the correlation between the feature dimensions, orthogonalizing the features. Finally, the most discriminative features were selected by Linear Discriminant Analysis (LDA), thereby reducing the spatial dimension. This multi-featured system can therefore achieve better speaker recognition performance. On the TIMIT corpus under the same short-speech (2 s) condition, the Equal Error Rate (EER) of the multi-featured system decreased by 72.16%, 69.47% and 73.62% respectively compared with the single-featured i-vector systems based on Mel-Frequency Cepstrum Coefficient (MFCC), Linear Prediction Cepstrum Coefficient (LPCC) and Perceptual Log Area Ratio (PLAR). For different short-speech lengths, the proposed algorithm provided roughly 50% improvement in EER and Detection Cost Function (DCF) compared with the single-featured i-vector systems. Experimental results fully indicate that the multi-featured system can make full use of the speaker's characteristic information in short utterance speaker recognition and improves speaker recognition performance.
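    The fusion pipeline (concatenate per-feature i-vectors, decorrelate with PCA, keep the most discriminative directions with LDA) can be sketched with scikit-learn; the array names and sizes below are illustrative stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-ins for per-utterance i-vectors from MFCC, LPCC and PLAR front ends.
n, d = 600, 100
mfcc_iv, lpcc_iv, plar_iv = (np.random.randn(n, d) for _ in range(3))
speaker_labels = np.random.randint(0, 20, size=n)  # 20 speakers

# 1) concatenate into one high-dimensional feature vector
x = np.hstack([mfcc_iv, lpcc_iv, plar_iv])

# 2) PCA decorrelates (whitens) the combined features
x = PCA(whiten=True, random_state=0).fit_transform(x)

# 3) LDA keeps the most speaker-discriminative directions
lda = LinearDiscriminantAnalysis(n_components=19)  # at most n_classes - 1
x_fused = lda.fit_transform(x, speaker_labels)
```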
    Probability model-based algorithm for non-uniform data clustering
    YANG Tianpeng, CHEN Lifei
    2018, 38(10):  2844-2849.  DOI: 10.11772/j.issn.1001-9081.2018020375
    Aiming at the "uniform effect" of the traditional K-means algorithm, a new probability model-based algorithm was proposed for non-uniform data clustering. Firstly, a Gaussian mixture distribution model was proposed to describe the clusters hidden within non-uniform data, allowing a dataset to contain clusters with different densities and sizes at the same time. Secondly, the objective optimization function for non-uniform data clustering was derived from the model, and an EM (Expectation Maximization)-type clustering algorithm was defined to optimize the objective function. Theoretical analysis shows that the new algorithm is able to perform soft subspace clustering on non-uniform data. Finally, experimental results on synthetic and real datasets demonstrate that the accuracy of the proposed algorithm is increased by 5% to 50% compared with existing K-means-type algorithms and under-sampling algorithms.
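    The contrast with K-means can be reproduced on toy data: an EM-fitted Gaussian mixture lets each cluster carry its own weight and covariance, so unequal densities and sizes are not forced toward equal-volume clusters. A sketch using scikit-learn's GaussianMixture (the paper derives its own objective; this only illustrates the EM-type alternative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Non-uniform synthetic data: a dense small cluster and a sparse large one.
rng = np.random.default_rng(0)
dense = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(500, 2))
sparse = rng.normal(loc=[5.0, 5.0], scale=2.0, size=(100, 2))
X = np.vstack([dense, sparse])

# EM estimates per-cluster weights and full covariances, avoiding the
# "uniform effect" that makes K-means split the sparse cluster.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)
```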
    Improved K-means clustering algorithm based on multi-dimensional grid space
    SHAO Lun, ZHOU Xinzhi, ZHAO Chengping, ZHANG Xu
    2018, 38(10):  2850-2855.  DOI: 10.11772/j.issn.1001-9081.2018040830
    K-means is a widely used clustering algorithm, but the selection of the initial clustering centers in the traditional K-means algorithm is random, which makes the algorithm fall easily into local optima and causes instability in the clustering results. In order to solve this problem, the idea of a multi-dimensional grid space was introduced into the selection of the initial clustering centers. Firstly, the sample set was mapped into a virtual multi-dimensional grid space structure. Secondly, the sub-grids containing the largest numbers of samples and lying far away from each other were searched as the initial cluster center grids in the space structure. Finally, the mean points of the samples in the initial cluster center grids were computed as the initial clustering centers. The initial clustering centers chosen by this method are very close to the actual clustering centers, so the final clustering result can be obtained stably and efficiently. Tests on a synthetic dataset and UCI machine learning datasets show that both the number of iterations and the error rate of the improved algorithm are stable, and lower than the averages of the traditional K-means algorithm. The improved algorithm can effectively avoid falling into local optima and guarantees the stability of the clustering result.
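    A simplified sketch of this seeding rule: bin the samples into a virtual grid, rank cells by occupancy, greedily keep dense cells that are far from the ones already chosen, and seed K-means with each kept cell's sample mean. The separation heuristic below is an assumption, not the paper's exact rule:

```python
import numpy as np

def grid_initial_centers(X, k, bins=10):
    """Pick K-means seeds from dense, mutually distant grid cells."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    cell = np.minimum(((X - lo) / (hi - lo + 1e-12) * bins).astype(int),
                      bins - 1)                 # grid coordinates per sample
    keys, inv, counts = np.unique(cell, axis=0, return_inverse=True,
                                  return_counts=True)
    order = np.argsort(-counts)                 # densest cells first
    chosen, min_gap = [], bins / (2.0 * k)      # heuristic separation
    for idx in order:
        if all(np.linalg.norm(keys[idx] - keys[j]) > min_gap for j in chosen):
            chosen.append(idx)
        if len(chosen) == k:
            break
    return np.array([X[inv == j].mean(axis=0) for j in chosen])

X = np.random.rand(2000, 2)
centers = grid_initial_centers(X, k=4)  # pass as init= to a K-means routine
```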
    Unsupervised feature selection algorithm based on self-paced learning
    GONG Yonghong, ZHENG Wei, WU Lin, TAN Malong, YU Hao
    2018, 38(10):  2856-2861.  DOI: 10.11772/j.issn.1001-9081.2018020448
    Concerning that samples are treated equally and the differences between samples are ignored in conventional feature selection algorithms, and that the learning model cannot effectively avoid the influence of noise samples, an Unsupervised Feature Selection algorithm based on Self-Paced Learning (UFS-SPL) was proposed. Firstly, a sample subset containing important samples for training was selected automatically to construct the initial feature selection model; then progressively more important samples were added to improve its generalization ability, until a robust and generalized feature selection model was constructed or all samples were selected. Compared with Convex Semi-supervised multi-label Feature Selection (CSFS), Regularized Self-Representation (RSR) and Coupled Dictionary Learning method for unsupervised Feature Selection (CDLFS), the clustering accuracy, normalized mutual information and purity of UFS-SPL increased by 12.06%, 10.54% and 10.5% respectively. The experimental results show that UFS-SPL can effectively remove the effect of irrelevant information in original data sets.
    Data fusion algorithm of coupled images
    REN Xiaoxu, LYU Liangfu, CUI Guangtai
    2018, 38(10):  2862-2868.  DOI: 10.11772/j.issn.1001-9081.2018020482
    Coupled data fusion algorithms mainly use the information of one data set to improve estimation accuracy and explain related latent variables of the other, coupled data set. Aiming at the large number of coupled images existing in reality, a Coupled Images Factorization-OPTimization (CIF-OPT) algorithm was proposed, based on the Coupled Matrix and Tensor Factorization-OPTimization (CMTF-OPT) algorithm in coupled data fusion. Theoretical analysis and experimental results show that coupled image fusion by the CIF-OPT algorithm is robust under different noise levels, and better than other coupling algorithms. In particular, the CIF-OPT algorithm can, with a certain probability, accurately restore an image with missing data elements by using the image coupled to it.
    Spatio-temporal query algorithm based on Hilbert-R tree hierarchical index
    HOU Haiyao, QIAN Yurong, YING Changtian, ZHANG Han, LU Xueyuan, ZHAO Yi
    2018, 38(10):  2869-2874.  DOI: 10.11772/j.issn.1001-9081.2018040749
    Aiming at the problems of multi-path queries in tree spatial indexes and the lack of a temporal index, a Hilbert-R tree index construction scheme combining time and clustering results was proposed. Firstly, according to the periodicity of data collection, the spatio-temporal dataset was partitioned, and on this basis a time index was established; the spatial data was partitioned and encoded by the Hilbert curve, mapping the spatial coordinates to a one-dimensional interval. Secondly, according to the distribution of the feature objects in space, a clustering algorithm that dynamically determines the K value was adopted to build an efficient Hilbert-R tree spatial index. Finally, a hierarchical indexing mechanism over time attributes and clustering results was built on several common Redis key-value data structures. Compared with the Cache Conscious R+tree (CCR+), the proposed algorithm effectively reduces the time overhead; in experiments on spatio-temporal range queries and target vector object queries, the query time is shortened by about 25% on average. It adapts well to data of different densities and can better support Redis for massive spatio-temporal data queries.
    Application of weighted incremental association rule mining in communication alarm prediction
    WANG Shuai, YANG Qiuhui, ZENG Jiayan, WAN Ying, FAN Zhening, ZHANG Guanglan
    2018, 38(10):  2875-2880.  DOI: 10.11772/j.issn.1001-9081.2018020392
    Aiming at shortcomings such as low prediction accuracy and low model training efficiency in communication network alarm prediction, a communication network alarm forecasting scheme based on a Canonical-order tree (Can-tree) weighted incremental association rule mining algorithm was proposed. Firstly, the alarm data was preprocessed to determine the alarm data weights and compressed into a Can-tree structure. Secondly, the Can-tree was mined with the incremental association rule mining algorithm to generate alarm association rules. Finally, a pattern matching method was used to predict real-time alarm information, and the results were optimized. The experimental results show that the proposed method is efficient, and previously mined results can be reused to improve mining efficiency. The alarm weight assignment scheme can reasonably distinguish the importance of alarm data, helps mine alarm association rules of high importance, speeds up the elimination of outdated alarm association rules, and improves the accuracy and precision of the prediction.
    Alarm-filtering algorithm of alarm management system for telecom networks
    XU Bingke, ZHOU Yuzhe, YANG Maolin, XIE Yuanhang, LI Xiaoyu, LEI Hang
    2018, 38(10):  2881-2885.  DOI: 10.11772/j.issn.1001-9081.2018040879
    A large number of alarms considerably complicates root-cause analysis in telecom networks, so a new alarm filtering algorithm was proposed to minimize the interference with the analysis. Firstly, a quantitative analysis of the alarm data, e.g. the quantity distribution and the average duration, was conducted, and the concepts of alarm impact and high-frequency transient alarm were defined. Subsequently, the importance of each alarm instance was evaluated from four perspectives: the number of alarms, the average duration of the alarms, the alarm impact, and the average duration of the alarm instance. Accordingly, an alarm filtering algorithm with O(n) computational complexity was proposed, where n is the number of alarms under analysis. Single-factor experimental analysis shows that the compression ratio of the alarm data is positively correlated with the alarm count of a specific alarm element, the average duration of the alarms, the alarm impact, and the duration of the alarm instance; further, the accuracy of the proposed algorithm is improved by up to 18 percentage points compared with the Flexible Transient Flapping Determination (FTD) algorithm. The proposed algorithm can be used both for off-line analysis of historical alarm data and for on-line alarm filtering.
    Commodity recommendation method integrating user trust and brand recognition
    FENG Yong, HAN Xiaolong, FU Chenping, WANG Rongbing, XU Hongyan
    2018, 38(10):  2886-2891.  DOI: 10.11772/j.issn.1001-9081.2018040766
    Concerning the low recommendation accuracy of personalized commodity recommendation methods, a Commodity Recommendation Method Integrating User Trust and Brand Recognition (TBCRMI) was proposed. By analyzing users' purchase and evaluation behaviors, each user's brand recognition and activity were calculated. Then the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was used to cluster the users, the user trust relationships were fused on this basis, and the nearest neighbors were obtained by the Top-K method. Finally, the commodity recommendation list for the target user was generated from the nearest neighbors. In order to verify the effectiveness of the algorithm, two datasets (Amazon Food and Unlocked Mobile Phone) were used; the User-based Collaborative Filtering (UserCF) algorithm, the Collaborative Filtering recommendation algorithm with User trust (SPTUserCF) and the Merging Trust in Collaborative Filtering (MTUserCF) algorithm were chosen, and the accuracy, recall and F1 values were compared and analyzed. The experimental results show that TBCRMI is superior to the commonly used personalized commodity recommendation methods in both multi-brand comprehensive recommendation and single-brand recommendation.
    Public auditing scheme of data integrity for public cloud
    MIAO Junmin, FENG Chaosheng, LI Min, LIU Xia
    2018, 38(10):  2892-2898.  DOI: 10.11772/j.issn.1001-9081.2018030510
    Aiming at the problems in public auditing of privacy leakage to Third-Party Auditors (TPA) and substitution attacks initiated by the Cloud Storage Server (CSS), a new public auditing scheme of data integrity for the public cloud was proposed. Firstly, a hash value obfuscation method was used to obfuscate the evidence returned by the cloud storage server, preventing the TPA from analyzing and computing the original data. Then, during the audit process, the TPA itself computed the overlay tree of the Merkle Hash Tree (MHT) corresponding to the challenge request and matched it against the overlay tree returned by the CSS, preventing the cloud storage server from responding to audit challenges with other existing data. Experimental results show that, after solving the privacy and attack problems of the existing scheme, the performance in terms of computational, storage and communication overhead does not change by an order of magnitude.
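    The MHT at the heart of the audit can be sketched with a few lines of hashing; the block contents are illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root of a Merkle Hash Tree over data blocks (the last node is
    duplicated on odd-sized levels). An auditor can rebuild the subtree
    covering the challenged blocks and compare it with the server's reply,
    so stale or substituted data fails to reproduce the expected root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"block-0", b"block-1", b"block-2", b"block-3"])
```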
    Network virus propagation modeling considering social network user behaviors
    FENG Liping, HAN Xie, HAN Qi, ZHENG Fang
    2018, 38(10):  2899-2902.  DOI: 10.11772/j.issn.1001-9081.2018040850
    Concerning that existing network virus propagation models do not consider the influence of users' interactive behaviors across different social networks on virus propagation, a dynamic model of differential equations was established. Stability theory was used to analyze the dynamical behaviors of network virus propagation, and an exact expression of the basic reproduction number, the threshold for controlling network virus propagation, was obtained. Furthermore, using the Runge-Kutta numerical method, the correctness of the theoretical analysis was verified by simulations. The results show that the basic reproduction number is the direct decisive factor of network virus prevalence: when its value is less than or equal to one, the propagation of network viruses dies out over time. Additionally, the research reveals that distributing users across different social networks helps slow the prevalence of network viruses.
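    Integrating such a compartment model with a Runge-Kutta solver takes only a few lines. The SIR-type system below is a generic stand-in (the paper's own equations, which add cross-network interaction terms, are not reproduced):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

beta, gamma = 0.4, 0.1   # illustrative infection / recovery rates
R0 = beta / gamma        # basic reproduction number: > 1 means an outbreak
sol = solve_ivp(sir, (0, 100), [0.99, 0.01, 0.0], args=(beta, gamma),
                method="RK45", dense_output=True)  # Runge-Kutta integration
```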
    Intrusion detection model based on hybrid convolutional neural network and recurrent neural network
    FANG Yuan, LI Ming, WANG Ping, JIANG Xinghe, ZHANG Xinming
    2018, 38(10):  2903-2907.  DOI: 10.11772/j.issn.1001-9081.2018030710
    Aiming at the problem of advanced persistent threats in power information networks, a hybrid Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) intrusion detection model was proposed, in which current network states are classified according to various statistical characteristics of network traffic. Firstly, preprocessing such as feature encoding and normalization was performed on the network traffic obtained from log files. Secondly, spatial correlation features between different hosts' intrusion traffic were extracted by using deformable convolution kernels in the CNN. Finally, the processed data containing spatial correlation features were staggered in time, and the temporal correlation features of the intrusion traffic were mined by the RNN. The experimental results show that the Area Under Curve (AUC) of the model increased by 7.5% to 14.0% compared with traditional machine learning models, and the false positive rate was reduced by 52.7% to 83.7%. This indicates that the proposed model can accurately identify the type of network traffic and significantly reduce the false positive rate.
    Secure performance analysis based on opportunity relaying transmission scheme
    ZHANG Yongjian, HE Yucheng, ZHOU Lin
    2018, 38(10):  2908-2912.  DOI: 10.11772/j.issn.1001-9081.2018030665
    To solve the problem of information being intercepted by illegal users during wireless communication, a secure transmission strategy based on optimal relay selection was proposed. Firstly, pre-designed artificial noise and useful information were integrated at the source node, and the best-relay selection algorithm was used to select the best relay to forward the received information. Secondly, the secrecy capacity, outage probability and intercept probability of the system were derived. Finally, the optimal number of relays was determined from the combined security and reliability performance. Theoretical analysis and simulation results show that, compared with the traditional system model without artificial noise, the performance of the proposed system can be significantly improved by adding relay nodes.
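    How the derived metrics relate can be checked by Monte Carlo simulation. A sketch under illustrative Rayleigh-fading assumptions (not the paper's closed-form derivation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
snr_main, snr_eve, rate_s = 10.0, 3.0, 1.0  # average SNRs, target secrecy rate

# Rayleigh fading: instantaneous SNR is exponential with the average as mean.
g_main = rng.exponential(snr_main, n)
g_eve = rng.exponential(snr_eve, n)

# Secrecy capacity: legitimate-link capacity minus eavesdropper capacity.
cs = np.maximum(np.log2(1 + g_main) - np.log2(1 + g_eve), 0.0)

secrecy_outage_prob = np.mean(cs < rate_s)  # Cs falls below the target rate
intercept_prob = np.mean(cs <= 0.0)         # eavesdropper's channel is better
```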
    Physical layer security performance analysis of full-duplex wireless-powered IoT networks
    LIU Ming, MAO Yuming, LENG Supeng
    2018, 38(10):  2913-2917.  DOI: 10.11772/j.issn.1001-9081.2018030725
    In the presence of jammers and eavesdroppers, conventional secure data transmission is generally based on cryptographic methods. Enormous issues arise when cryptographic methods are applied in dynamic wireless scenarios, such as key distribution for symmetric cryptosystems and the high computational complexity of asymmetric cryptosystems. With the rapid growth of wireless traffic and the massive device access of the Internet of Things (IoT), the computational complexity and energy consumption increase, degrading the security of wireless networks. To address this issue, a secure communication scheme based on physical layer security was proposed for Full-Duplex (FD) wireless-powered IoT networks, which limits the amount of information received at an unauthorized receiver by exploiting the randomness of noise and the wireless channel. In this method, the secrecy capacity was analyzed based on information theory, and the Secrecy Outage Probability (SOP) was then derived with the secrecy capacity analysis model. In addition, considering the influence of noise, jammer interference, spatial mutual interference, and residual self-interference on secrecy capacity, a secure beamforming method was proposed to increase the mutual information between the transmitting and receiving ends and improve the secrecy capacity of the full-duplex wireless-powered IoT network by decreasing the joint interference. The derived results are verified through Monte Carlo simulation. Simulation results show that the FD wireless-powered IoT network with secure beamforming is superior to the conventional wireless-powered IoT network in terms of secrecy capacity and SOP.
    Spectral clustering algorithm based on differential privacy protection
    ZHENG Xiaoyao, CHEN Dongmei, LIU Yuqing, YOU Hao, WANG Xiangshun, SUN Liping
    2018, 38(10):  2918-2922.  DOI: 10.11772/j.issn.1001-9081.2018040888
    Aiming at the problem of privacy leakage in the application of traditional clustering algorithms, a spectral clustering algorithm based on differential privacy protection was proposed. Based on the differential privacy model, the cumulative distribution function was used to generate random noise satisfying the Laplace distribution. The noise was then added to the sample similarity function computed by the spectral clustering algorithm, which perturbs the weight values between individual samples and realizes information hiding between sample individuals for privacy protection. Experimental results on UCI datasets verify that the proposed algorithm can achieve effective clustering within a certain degree of information loss while protecting the clustered data.
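    Generating the Laplace noise from the cumulative distribution function and perturbing the similarity matrix can be sketched as follows; the sensitivity value is an assumption that depends on the similarity function actually used:

```python
import numpy as np

def laplace_noise(scale, size, rng):
    """Sample Laplace(0, scale) via the inverse CDF:
    F^-1(u) = -scale * sign(u - 1/2) * ln(1 - 2|u - 1/2|)."""
    u = rng.random(size) - 0.5
    return -scale * np.sign(u) * np.log1p(-2.0 * np.abs(u))

def privatize_similarity(W, epsilon, sensitivity=1.0, rng=None):
    """Perturb a symmetric similarity matrix W with Laplace noise of scale
    sensitivity/epsilon before it is fed to spectral clustering."""
    rng = rng or np.random.default_rng()
    noise = np.triu(laplace_noise(sensitivity / epsilon, W.shape, rng), 1)
    return W + noise + noise.T  # keep the perturbed matrix symmetric

W = np.random.rand(50, 50); W = (W + W.T) / 2  # toy similarity matrix
W_dp = privatize_similarity(W, epsilon=1.0)
```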
    Information hiding scheme based on generative adversarial network
    WANG Yaojie, NIU Ke, YANG Xiaoyuan
    2018, 38(10):  2923-2928.  DOI: 10.11772/j.issn.1001-9081.2018030666
    Focusing on the issue that information hiding leaves modification traces on the carrier, making it fundamentally difficult to resist detection by statistical steganalysis algorithms, a new secure steganography model based on Generative Adversarial Network (GAN) was proposed. In this scheme, the generator model G of the GAN was used to generate the original carrier information, driven by noise. Next, the secret message was embedded into the generated carrier information with the ±1 embedding algorithm to produce the stego information. Finally, the stego information and real image samples were used as the input of the discriminator D of the GAN for iterative optimization. At the same time, a discriminative model S was used to detect whether an image had undergone a steganographic operation and to feed back the quality of the generated images in time; G, D and S competed with each other during the iterations, continuously improving performance. The proposed strategy differs from the Steganographic GAN (SGAN) and Secure Steganography based on GAN (SSGAN) schemes mainly in that the stego information and real image samples are used as input to the discriminative model and the discriminative network D is reconstructed, so that the network can better evaluate the quality of the generated images. Compared with SGAN and SSGAN, the proposed model reduces the detection accuracy of steganalysis by 13.1% and 6.4% respectively. Experimental results show that the new information hiding scheme guarantees the security of information hiding by generating carrier information more suitable for embedding and can effectively resist detection by steganalysis algorithms; it is significantly superior to the contrast schemes in anti-steganalysis and security indicators.
    Malicious file detection method based on image texture and convolutional neural network
    JIANG Chen, HU Yupeng, SI Kai, KUANG Wenxin
    2018, 38(10):  2929-2933.  DOI: 10.11772/j.issn.1001-9081.2018030691
    In a big data environment, traditional malicious file detection methods have low detection accuracy for malicious files after code variation and obfuscation, and weak versatility across platforms. To resolve these problems, a malicious file detection method based on image texture and Convolutional Neural Network (CNN) was proposed. Firstly, a grayscale image generation algorithm was used to convert the executable files of the Android and Windows platforms, namely .dex and .exe files, into corresponding grayscale images. Then, the texture features of these grayscale images were automatically extracted and learned with a CNN to construct a malicious file detection model. Finally, a large number of unknown files were used to test the accuracy of the proposed model. Experimental results on a large number of malicious samples show that the highest accuracy of the proposed model on the Android and Windows platforms reached 79.6% and 97.6%, with average accuracies of approximately 79.3% and 96.8% respectively. Compared with the texture fingerprint-based malicious code detection method, the accuracy of the proposed method is improved by about 20%. The results indicate that the proposed method can effectively avoid the problems caused by manual feature screening, greatly improve detection accuracy and efficiency, solve the cross-platform detection problem, and achieve an end-to-end malicious file detection model.
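    The grayscale conversion step is simple to sketch: raw bytes become pixels. A fixed image width is a common convention and an assumption here; the paper's exact layout rule may differ:

```python
import numpy as np

def bytes_to_grayscale(path, width=256):
    """Read an executable (.exe or .dex) and reshape its raw bytes into a
    2-D grayscale image: one byte becomes one pixel in [0, 255]."""
    raw = np.fromfile(path, dtype=np.uint8)
    height = len(raw) // width
    return raw[: height * width].reshape(height, width)

# img = bytes_to_grayscale("sample.exe")  # image then fed to the CNN
```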
    New design of linear structure for round-reduced Keccak
    LIU Xiaoqiang, WEI Yongzhuang, LIU Zhenghong
    2018, 38(10):  2934-2939.  DOI: 10.11772/j.issn.1001-9081.2018030617
    Focusing on the linear decomposition of the S-box layer in the Keccak algorithm, a new linear structure construction method was proposed based on the algebraic properties of the S-box. Firstly, to ensure the state data remains linear after passing through this linear structure, some constraints on the input bits of the S-box need to be fixed. Then, as an application of this technique, some new zero-sum distinguishers of round-reduced Keccak were constructed by combining the idea of the meet-in-the-middle attack. The results show that a new 15-round distinguisher of Keccak is found, which extends 1 round forward and 1 round backward. This result is consistent with the best known ones, and its complexity is reduced to 2^257. The new distinguisher, which extends 1 round forward and 2 rounds backward, has the advantages of more free variables and richer distinguishing attack combinations.
    Multi-factor authentication key agreement scheme based on chaotic mapping
    WANG Songwei, CHEN Jianhua
    2018, 38(10):  2940-2944.  DOI: 10.11772/j.issn.1001-9081.2018030642
    In an open network environment, identity authentication is an important means of ensuring information security. Examining the authentication protocol proposed by Li et al. (LI X, WU F, KHAN M K, et al. A secure chaotic map-based remote authentication scheme for telecare medicine information systems. Future Generation Computer Systems, 2017, 84: 149-159), some security defects were pointed out, such as vulnerability to user impersonation attacks and denial-of-service attacks. In order to overcome these vulnerabilities, a new multi-factor protocol scheme was proposed. In this protocol, extended chaotic mapping was adopted, dynamic identities were used to protect user anonymity, and a three-way handshake was used to achieve asynchronous authentication. Security analysis shows that the new protocol can resist impersonation attacks and denial-of-service attacks while protecting user anonymity and identity uniqueness.
    Design and implementation of PDSCH de-resource mapping in LTE-A air interface analyzer
    WANG Meile, ZHANG Zhizhong, WANG Guangya
    2018, 38(10):  2945-2949.  DOI: 10.11772/j.issn.1001-9081.2018030518
    In view of the computational redundancy caused by repeatedly computing resource mapping positions in the traditional de-resource mapping method of the Long Term Evolution-Advanced (LTE-A) physical layer, a new architecture for the Physical Downlink Shared CHannel (PDSCH) de-resource mapping method was proposed, which supports the related physical layer processing of an LTE-A air interface analyzer. Firstly, prior to de-mapping the physical downlink signals and channels, the resource indexes of each signal and channel were generated for single antenna port 0 mode, transmit diversity mode, single-stream beamforming and dual-stream beamforming. Then, the time-frequency locations of the resources were located directly from the resource indexes. Finally, the PDSCH de-resource mapping module was placed in a complete LTE-A link-level simulation platform, simulations were run in the four transmission modes, and the corresponding bit error rate and throughput comparison charts were obtained, providing a theoretical reference for the final hardware implementation. Compared with the de-resource mapping module under the traditional architecture, the module under the new architecture takes 33.33% less simulation time than the traditional repeated-computation mapping, reducing both time and device resource consumption during de-resource mapping.
    Improved QRD-M detection algorithm for spatial modulation system
    ZHOU Wei, GUO Mengyu, XIANG Danlei
    2018, 38(10):  2950-2954.  DOI: 10.11772/j.issn.1001-9081.2018030721
    In a Spatial Modulation (SM) system, the Maximum Likelihood (ML) detection algorithm with the best performance has high complexity, while the complexity can be reduced by the M-algorithm based on QR Decomposition (QRD-M) of the channel matrix. However, the traditional QRD-M algorithm keeps a fixed number of M nodes at each layer, which leads to additional computation. Therefore, a Low-Complexity QR-Decomposition M-algorithm with dynamic value of M (LC-QRD-dM) was proposed. In LC-QRD-dM, by comparing a designed threshold with the cumulative branch metrics, at most M reserved nodes were adaptively selected at each layer, thus reducing the computational complexity at the cost of a small amount of performance. Then, concerning the high bit error rate of LC-QRD-dM under deep channel fading, the QR-Decomposition M-algorithm with dynamic value of M based on Channel State (CS-QRD-dM) was further proposed. Based on the principle of LC-QRD-dM, at least M reserved nodes were selected by the threshold at each layer when the Signal-to-Noise Ratio (SNR) is low, and at most M reserved nodes were selected by the threshold at each layer when the SNR is high. Theoretical analysis and simulation results show that, compared with the traditional QRD-M algorithm, CS-QRD-dM achieves about a 1.3 dB SNR advantage (at a bit error rate of 10^-2) at low SNR, significantly improving detection performance at the cost of a small complexity increase; its detection performance and complexity are the same as those of LC-QRD-dM at high SNR.
    Joint channel non-coherent network coded modulation method
    GAO Fengyue, WANG Yan, LI Mu, YU Rui
    2018, 38(10):  2955-2959.  DOI: 10.11772/j.issn.1001-9081.2018030591
    For physical-layer network coding over time-varying bi-directional relay channels, a joint channel coding and non-coherent physical-layer network coded modulation and detection scheme without channel state information was designed for the multiple-antenna environment. Firstly, the spatial modulation matrix at the sources was designed to achieve physical-layer network coding. Then, differential spatial modulation was combined with physical-layer network coding, and the maximum a posteriori probability of the superimposed signal was derived at the relay. Moreover, considering the constellation of the superimposed signal, a mapping function from the superimposed signal to the broadcast signal was designed. Lastly, taking advantage of the linear structure of the channel code and combining bit interleaving, channel decoding and a soft-input soft-output detection algorithm, an iterative detection approach for joint channel differential physical-layer network coding was obtained. Simulation results show that the proposed scheme achieves non-coherent transmission and detection for physical-layer network coding over two-way relay channels and can effectively enhance the throughput and spectral efficiency of the system.
    Fast intra algorithm based on quality scalable high efficiency video coding
    LIU Yanjun, ZHAO Zhiqiang, LIU Yan, CUI Ying, WANG Dayong, RAN Peng, GUO Yijun
    2018, 38(10):  2960-2964.  DOI: 10.11772/j.issn.1001-9081.2018010162
    To increase the coding speed of quality Scalable High efficiency Video Coding (SHVC), a new intra prediction algorithm based on quality SHVC was proposed. Firstly, the potential depths were predicted by using inter-layer correlation, and depths with low possibility were eliminated. Secondly, for a likely depth, the Inter-Layer Reference (ILR) mode was used for coding, and the residual distribution was examined by distribution fitting to determine whether the residuals follow a Laplace distribution; if so, intra prediction was skipped. Finally, the residual coefficients of depth coding were examined to determine whether the early-termination condition was satisfied; if it was, the coding process was terminated to improve coding speed. The experimental results show that the proposed algorithm can improve the coding speed by 79% with negligible coding loss.
    Energy efficiency analysis of relay assisted cellular network
    CHEN Yonghong, GUO Lili, ZHANG Shibing
    2018, 38(10):  2965-2970.  DOI: 10.11772/j.issn.1001-9081.2018030628
Abstract ( )   PDF (801KB) ( )  
    References | Related Articles | Metrics
To solve the problem of low Energy Efficiency (EE) in relay-assisted cellular networks where the Macro Base Station (MBS) is equipped with a single antenna, the downlink transmission of multi-antenna relay-assisted cellular networks was considered, and a strategic sleep scheme was proposed. Firstly, the relay's working mode was dynamically adjusted according to whether the number of users served by the relay exceeds a given threshold. Then the coverage probabilities and mean achievable rates of the MBS-to-user (UE), MBS-to-Relay Station (RS) and RS-to-UE links were derived. Finally, the energy efficiency of the system was derived from the power consumption per unit area and the achievable rate per unit area. The simulation results show that when the density of MBS is 2×10^-5 m^-2, the energy efficiency of the multi-antenna network with the strategic sleep scheme is about 5.6% higher than that of the cellular network without a sleep strategy, and the system energy efficiency with a multi-antenna MBS is about 30% higher than that of the single-antenna network without the sleep strategy. The results indicate that the multi-antenna relay-assisted cellular network with the sleep strategy has higher energy efficiency than the single-antenna relay-assisted cellular network.
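The bookkeeping behind the per-area energy efficiency metric can be sketched as follows; the rate and power constants and the sleep rule are illustrative assumptions, whereas the paper derives the coverage probabilities and rates in closed form.

    import numpy as np

    def energy_efficiency_with_sleep(relay_users, user_threshold,
                                     rate_per_relay, p_active, p_sleep,
                                     mbs_rate, mbs_power, area):
        """Toy per-area energy efficiency with the strategic sleep scheme.

        A relay serving fewer users than `user_threshold` is put to sleep
        (it then consumes p_sleep and contributes no rate); EE is the
        achievable rate per unit area over the power consumed per unit area.
        """
        relay_users = np.asarray(relay_users)
        active = relay_users >= user_threshold
        total_rate = mbs_rate + rate_per_relay * active.sum()
        total_power = (mbs_power + p_active * active.sum()
                       + p_sleep * (~active).sum())
        return (total_rate / area) / (total_power / area)   # bit/s per watt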
    HK extended model with tunable degree correlation and clustering coefficient
    ZHOU Yujiang, WANG Juan
    2018, 38(10):  2971-2975.  DOI: 10.11772/j.issn.1001-9081.2018030592
Abstract ( )   PDF (736KB) ( )  
    References | Related Articles | Metrics
Concerning the problem that most existing social network growth models have negative degree correlation, and considering the positive degree correlations and high clustering coefficients of real social networks, a new social network growth model was proposed based on the Holme-Kim (HK) model. Firstly, the topological structure of a real-world social network was analyzed to obtain its important topological parameters. Secondly, the HK model was improved by introducing a triad formation mechanism, yielding the HK extended model with Tunable Degree Correlation and Clustering coefficient (HK-TDC&C), in which both clustering coefficients and degree correlations can be adjusted; the model can be used to construct social networks with various topological properties. Finally, using mean field theory, the degree distribution of the model was analyzed, and Matlab was used for numerical simulation to calculate other topological parameters of the network. The results show that, by tuning the preferential attachment parameter and connection probabilities, the social network constructed by the HK-TDC&C model satisfies the basic characteristics of social networks, including the scale-free property, the small-world property, a high clustering coefficient and positive degree correlation, and its topology is closer to that of real social networks.
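A minimal growth-model sketch combining preferential attachment with triad formation is shown below; it reproduces the high-clustering mechanism of HK-style models, while the tunable-degree-correlation extension of HK-TDC&C is omitted, so all parameters are illustrative.

    import random

    def hk_triad_network(n, m, p_triad, seed=None):
        """Grow an HK-style network: each new node makes m links, the first
        by preferential attachment (PA); each later link is a triad
        formation step (attach to a neighbour of the previous target) with
        probability p_triad, otherwise PA again. Raising p_triad raises
        the clustering coefficient.
        """
        rng = random.Random(seed)
        adj = {i: set(range(m + 1)) - {i} for i in range(m + 1)}  # seed clique
        stubs = [i for i in adj for _ in adj[i]]   # degree-weighted list for PA
        for v in range(m + 1, n):
            adj[v] = set()
            target = rng.choice(stubs)             # preferential attachment
            adj[v].add(target); adj[target].add(v)
            last = target
            for _ in range(m - 1):
                nbrs = [u for u in adj[last] if u != v and u not in adj[v]]
                if nbrs and rng.random() < p_triad:
                    u = rng.choice(nbrs)           # triad formation step
                else:
                    u = rng.choice(stubs)          # fall back to PA
                    while u == v or u in adj[v]:
                        u = rng.choice(stubs)
                adj[v].add(u); adj[u].add(v)
                last = u
            stubs.extend([v] * m)      # new node contributes m stubs
            stubs.extend(adj[v])       # each neighbour gained one degree
        return adj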
    Dynamic algorithm of load balancing based on D-S evidence theory with improved weight
    TAI Yingying, PANG Ying, DUAN Keke, FU Yunpeng
    2018, 38(10):  2976-2981.  DOI: 10.11772/j.issn.1001-9081.2018030548
Abstract ( )   PDF (1130KB) ( )  
    References | Related Articles | Metrics
To solve the problem of load imbalance among servers in large online games, a load balancing strategy based on Dempster/Shafer (D-S) evidence theory was proposed, taking the multiple factors that influence the servers as parameters. Firstly, according to D-S evidence theory, the multiple factors affecting server performance were used as criteria, the dynamic weight was computed by comparing historical data with thresholds, and the basic belief function was set up according to the relationship between the dynamic weight and the original reliability. After that, the belief functions corresponding to the different criteria were calculated, and the results were merged by the rules of evidence synthesis. Lastly, whether the server was overloaded was evaluated by analyzing the merged results. Simulation results show that, compared with the dynamic load balancing algorithm based on negative feedback, the proposed algorithm is more accurate and more realistic, and its running time is obviously shorter than those of the negative-feedback dynamic load balancing algorithm and the weighted round-robin algorithm. Analysis indicates that the proposed algorithm can effectively reduce the delay of overload judgment and quickly infer the server load from historical parameters, and its decision results are more reliable and more consistent with the actual situation.
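The core evidence-fusion step is Dempster's rule of combination, sketched below; the hypothesis sets and masses in the example are illustrative, not the paper's criteria.

    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two mass functions.

        Masses are dicts mapping frozenset hypotheses (e.g. {'overload'},
        {'normal'}, or their union) to belief mass. Conflicting mass
        (empty intersections) is discarded and the rest renormalised.
        """
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
        if conflict >= 1.0:
            raise ValueError("total conflict, evidence cannot be combined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # e.g. evidence from CPU and memory criteria about the server state:
    m_cpu = {frozenset({'overload'}): 0.6, frozenset({'overload', 'normal'}): 0.4}
    m_mem = {frozenset({'normal'}): 0.3, frozenset({'overload', 'normal'}): 0.7}
    print(dempster_combine(m_cpu, m_mem))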
    Power amplifier modeling of X-parameter based on load-pulling and memory effect
    NAN Jingchang, FAN Shuang, GAO Mingming
    2018, 38(10):  2982-2989.  DOI: 10.11772/j.issn.1001-9081.2018010029
Abstract ( )   PDF (1140KB) ( )  
    References | Related Articles | Metrics
In order to describe Radio Frequency (RF) power amplifiers with memory effects more quickly and accurately, a new X-parameter power amplifier modeling method was proposed, combining the traditional X-parameter model with the memory effect and load-pulling of the power amplifier. Firstly, the load reflection coefficient was introduced into the new scheme. Secondly, the two-memory-path model was used to extract a nonlinear function representing the memory effect in place of the kernel function, and three variables, amplitude, load reflection coefficient and frequency, were taken as output signals to build the new Feed-Forward (FF) structure. Finally, a step signal was used instead of the original two-tone signal to simplify model extraction and improve its feasibility. Modeling the CGH40045F power amplifier with the proposed scheme, the simulation results showed that, compared with the traditional X-parameter model, the FF-structure X-parameter model and the FeedBack (FB) structure X-parameter model, the relative error was reduced; compared with the FF model and the FB model, the simulation time was reduced by 4.08 s and 1.64 s respectively. The results prove that the model built by the proposed method can characterize amplifiers with nonlinear memory effects more quickly and effectively.
    Software regression verification based on witness automata
    JIA Shangkun, HE Fei
    2018, 38(10):  2990-2995.  DOI: 10.11772/j.issn.1001-9081.2018030733
Abstract ( )   PDF (1103KB) ( )  
    References | Related Articles | Metrics
In order to utilize the information shared between adjacent versions in multi-version program verification, and to extract and reuse the loop invariants in the witness automaton of the previous version, a software regression verification method based on witness automata was proposed. Firstly, a witness file applicable to the new version of the program was generated by witness preprocessing. Then, based on auxiliary-invariant-enhanced k-induction, the regression verification process was implemented to validate the new witness file and verify the new version of the program. Finally, the performance of three kinds of verification was compared in contrast experiments: the so-called "direct" verification that uses no invariant information, and regression verification with and without data flow analysis. Compared with direct verification, the time consumption of regression verification with and without data flow analysis was reduced by 49% and 75% respectively, and the memory consumption was reduced by 18% and 50% respectively. The results show that when the program satisfies its verification specification, regression verification based on witness automata can greatly improve verification efficiency, and combining it with data flow analysis improves it further.
    Automatic tracing method from Chinese document to source code based on version control
    SHEN Li, LIU Hongxing, LI Yonghua
    2018, 38(10):  2996-3001.  DOI: 10.11772/j.issn.1001-9081.2018020302
Abstract ( )   PDF (915KB) ( )  
    References | Related Articles | Metrics
Information Retrieval (IR) technology is widely used for automatic tracing from software documents to source code, but Chinese documents and source code are written in different languages, which leads to low accuracy of IR-based automatic tracing. In view of this problem, an automatic tracing method from Chinese documents to source code based on version control was proposed. Firstly, the similarity score between a document and the source code was calculated by an information retrieval method combined with text-to-source heuristic rules. Then the score was modified by the version update information submitted to the version control software during software development and maintenance. Finally, the tracing relationship between the Chinese document and the source code was determined according to a set threshold. The experimental results show that the precision and recall of the proposed method are improved compared with the traditional IR method, and tracing links missed by the traditional IR method can be recovered.
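A hedged sketch of the scoring pipeline: plain TF-IDF cosine similarity stands in for the paper's IR component (which also uses Chinese-specific heuristic rules), and a co-change matrix mined from commits stands in for the version update information; `cochange_boost`, `alpha` and `threshold` are assumed names and parameters.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def trace_scores(doc_texts, code_texts, cochange_boost,
                     alpha=0.3, threshold=0.1):
        """IR similarity between documents and code, adjusted by history.

        `cochange_boost[i][j]` is the normalised co-change frequency of
        document i and code file j mined from version control commits;
        it shifts the raw textual similarity before thresholding.
        """
        vec = TfidfVectorizer()
        tfidf = vec.fit_transform(doc_texts + code_texts)
        sims = cosine_similarity(tfidf[:len(doc_texts)], tfidf[len(doc_texts):])
        links = []
        for i, row in enumerate(sims):
            for j, s in enumerate(row):
                score = (1 - alpha) * s + alpha * cochange_boost[i][j]
                if score >= threshold:
                    links.append((i, j, score))
        return links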
    Model of root branching based on swarm Parrondo's game
    LI Songyang, GAO Jixun, WANG Miao, LIU Xiaodong, YU Wenqi
    2018, 38(10):  3002-3005.  DOI: 10.11772/j.issn.1001-9081.2018030637
Abstract ( )   PDF (755KB) ( )  
    References | Related Articles | Metrics
To solve the problem that root branching plasticity cannot be captured by sequential models of root branching, a new root branching method based on a swarm Parrondo's game was proposed to analyze root branching plasticity in heterogeneous growth environments. Firstly, a root primordium swarm was constructed from individual root primordia. Secondly, Parrondo's game was used to model the environment-dependent interaction within the root primordium swarm. Finally, the root branching process was simulated according to auxin levels updated from the interaction results of the root primordia, and the root branching probability was predicted in four different root growth environments. The experimental results show that, compared with models such as RootMap, the proposed method can model the development of root primordia into root branches under spatial and temporal changes in the root growth environment, and provides an analysis tool for root system modeling and simulation research.
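For readers unfamiliar with Parrondo's game, the sketch below reproduces the classic paradox that drives the interaction model: two individually losing games produce a winning trend when alternated; its coupling to root primordia and auxin in the paper is only hinted at in the comments.

    import random

    def parrondo_capital(steps, eps=0.005, seed=1):
        """Classic Parrondo's paradox: games A and B both lose on their
        own, yet randomly alternating them wins on average. In the root
        model, a primordium's 'capital' plays the role of an accumulated
        growth signal; unbiased random alternation is assumed here.
        """
        rng = random.Random(seed)
        capital = 0
        for _ in range(steps):
            if rng.random() < 0.5:                  # game A
                p = 0.5 - eps
            elif capital % 3 == 0:                  # game B, bad coin
                p = 0.1 - eps
            else:                                   # game B, good coin
                p = 0.75 - eps
            capital += 1 if rng.random() < p else -1
        return capital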
Multi-focus image fusion based on phase congruency motivated pulse coupled neural network in NSCT domain
    LIU Dong, ZHOU Dongming, NIE Rencan, HOU Ruichao
    2018, 38(10):  3006-3012.  DOI: 10.11772/j.issn.1001-9081.2018040885
Abstract ( )   PDF (991KB) ( )  
    References | Related Articles | Metrics
Since traditional Pulse Coupled Neural Network (PCNN) based image fusion methods cannot extract the focused regions clearly, a multi-focus image fusion technique using Phase Congruency (PC) and Spatial Frequency (SF) combined with the PCNN model in the Non-Subsampled Contourlet Transform (NSCT) domain was proposed. Firstly, the source images were decomposed into high-frequency and low-frequency subbands by NSCT. Secondly, the values of SF and PC were calculated to motivate the PCNN neurons to fire and locate the focused regions, and the high- and low-frequency subbands were then fused respectively. Lastly, the fused image was reconstructed through inverse NSCT. The multi-focus image sets Clock, Pepsi and Lab were used as experimental data, and four classical fusion methods and three recently proposed fusion algorithms were compared with the proposed algorithm. On objective indicators including mutual information, edge intensity, entropy, standard deviation and average gradient, the values of the proposed method were greater than or very close to the best values of the comparison algorithms; meanwhile, the difference maps between the fused images and the source images show that the difference map of the proposed method contains significantly fewer traces of the clear regions of the source images. The experimental results indicate that the proposed method can better extract the focused regions and better retain details such as edges and textures of the source images, thus achieving a superior fusion effect.
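Of the two focus measures, spatial frequency is simple enough to sketch directly; the version below uses the usual row/column gradient energy form (phase congruency needs a log-Gabor filter bank and is omitted).

    import numpy as np

    def spatial_frequency(block):
        """Spatial frequency of an image block, SF = sqrt(RF^2 + CF^2),
        one of the two focus measures used to motivate the PCNN neurons;
        larger SF means a sharper, more likely in-focus block."""
        b = np.asarray(block, dtype=float)
        rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))  # row frequency
        cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))  # column frequency
        return np.hypot(rf, cf)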
    Eyeball control accuracy improvement method based on digital image processing
    YAN Desai, ZENG Cheng
    2018, 38(10):  3013-3016.  DOI: 10.11772/j.issn.1001-9081.2018040778
Abstract ( )   PDF (661KB) ( )  
    References | Related Articles | Metrics
To improve the accuracy of eyeball control of a screen and enable high-accuracy operation of mobile phones or computers, an eyeball control accuracy improvement method based on digital image processing was proposed. It is based on the principle that the focus of the human eye on the screen and the corresponding image point on the retina determine a line through the center of the pupil, and that the screen's luminous contour reflected on the eyeball forms a rectangular outline; the position of the pupil center relative to this rectangular contour gives the specific position of the eye's focus on the screen. A real-time video of the eyeball was obtained through a high-definition camera, and digital image processing was used to analyze each frame to obtain the coordinates of the eye's focus on the screen. The calculated coordinates of each frame were output to the mouse cursor to track the focus of the eyeball, and the position was sent to a controlled device with a screen via wireless technology to achieve eyeball control. Simulation results show that the average accuracy of eye control with the proposed mapping method reaches 0.7 degrees.
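A minimal stand-in for the first image processing step, locating the pupil center, is sketched below; the quantile threshold is an assumed heuristic, and the mapping through the reflected rectangular screen contour is not shown.

    import numpy as np

    def pupil_center(gray, dark_quantile=0.05):
        """Estimate the pupil centre from a grayscale eye image.

        The pupil is (almost) the darkest blob in the frame, so threshold
        at a low intensity quantile and take the centroid of the dark
        pixels. A real pipeline would also extract the reflected screen
        contour to map this centre to screen coordinates.
        """
        g = np.asarray(gray, dtype=float)
        mask = g <= np.quantile(g, dark_quantile)
        ys, xs = np.nonzero(mask)
        return float(xs.mean()), float(ys.mean())   # (x, y) in pixels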
    Data modeling and data-driven method in collaborative design of complex products
    YIN Xuemei, ZHOU Junhua, ZHU Yaoqin
    2018, 38(10):  3017-3024.  DOI: 10.11772/j.issn.1001-9081.2018030614
Abstract ( )   PDF (1249KB) ( )  
    References | Related Articles | Metrics
In traditional workflow-based collaborative design, the difficulty of communication and task coordination among designers from different disciplines leads to low product design efficiency. To solve this problem, the "A Meta-model with Three Levels" data model of complex products and a data-driven collaborative design technology for complex products were proposed. Firstly, multi-dimensional, multi-granularity data modeling and ontology description were used to complete the information modeling of complex products. Then ontology-based semantic retrieval was used to complete the data subscription of collaborative design tasks. Finally, a complex product task collaboration technology based on data subscription/publishing was implemented. The experimental results show that the data-driven collaborative design technology resolves the difficulties of communication and task coordination among designers from different disciplines in the traditional collaborative design process and achieves a spirally rising collaborative design process, thereby improving the efficiency of complex product design.
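The subscription/publishing mechanism can be illustrated with a minimal bus; ontology-based semantic matching is reduced to exact topic names here, and all names in the example are hypothetical.

    class DesignDataBus:
        """Minimal subscribe/publish bus of the kind the data-driven
        coordination relies on: a design task subscribes to the data
        items it needs and is triggered as soon as an upstream task
        publishes them."""
        def __init__(self):
            self.subscribers = {}          # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers.setdefault(topic, []).append(callback)

        def publish(self, topic, data):
            for callback in self.subscribers.get(topic, []):
                callback(data)

    bus = DesignDataBus()
    bus.subscribe("wing.geometry", lambda d: print("structure task starts with", d))
    bus.publish("wing.geometry", {"span_m": 34.5})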
    Trigger probability model of transit signal priority strategies based on signal timing
    HUANG Hainan, LI Xiaofeng, LIAN Peikun, RONG Jian
    2018, 38(10):  3025-3029.  DOI: 10.11772/j.issn.1001-9081.2018030640
Abstract ( )   PDF (741KB) ( )  
    References | Related Articles | Metrics
Aiming at the problem that existing signal control logic cannot respond to the cumulative number of buses and that the sensitivity of control parameters is poor, a trigger probability model of bus priority strategies was constructed to detect and analyze methods for improving trigger accuracy. Based on the Siemens 2070 signal controller, the triggering mechanism of Transit Signal Priority (TSP) was analyzed, and trigger probability models were constructed for the green-extension strategy and the early-green strategy. Taking an actual intersection as an example, the trigger probabilities under different signal timing plans were calculated and compared by simulation, and the trigger characteristics of the TSP strategies and possible improvements were studied. The research shows that the trigger probability of the green-extension strategy is far below that of the early-green strategy; the trigger probability of green extension is inversely proportional to the minimum green and maximum green times, while the trigger probability of the early-green strategy is mainly related to the number of buses applying for priority in the non-favored signal phases; the trigger probability of the green-extension strategy can be improved by optimizing the minimum and maximum green times and increasing the number of buses applying for priority, and the trigger probability of the early-green strategy can be improved by first optimizing the original signal timing scheme and then adding TSP.
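A Monte Carlo stand-in for the green-extension trigger probability is sketched below; Poisson bus arrivals and the arrival-window trigger condition are modelling assumptions, whereas the paper derives the probability analytically from the timing plan.

    import random

    def green_extension_trigger_prob(cycle, green, max_ext, bus_rate,
                                     trials=100000, seed=7):
        """Estimate how often a green-extension TSP strategy fires: a bus
        arriving in the window [green, green + max_ext) of the cycle can
        be served by extending the green. Poisson arrivals at `bus_rate`
        buses per second are assumed.
        """
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            t, triggered = 0.0, False
            while True:
                # exponential inter-arrival times -> Poisson arrivals
                t += rng.expovariate(bus_rate)
                if t >= cycle:
                    break
                if green <= t < green + max_ext:
                    triggered = True
            hits += triggered
        return hits / trials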
    Vehicular trajectory planning method based on improved artificial fish swarm algorithm
    YUAN Na, SHI Xin, ZHAO Xiangmo
    2018, 38(10):  3030-3035.  DOI: 10.11772/j.issn.1001-9081.2018030695
Abstract ( )   PDF (1011KB) ( )  
    References | Related Articles | Metrics
Concerning the large fluctuations in velocity and trajectory of typical vehicle trajectory planning methods in the Internet of Vehicles (IoV) environment, a new vehicle trajectory planning method based on an improved artificial fish swarm algorithm was proposed. Using the Dedicated Short Range Communications (DSRC) application scenario as the design platform and taking the optimal speed as the core of the calculation, the optimal trajectory of the vehicle was derived. Firstly, the advantages and disadvantages of the artificial fish swarm algorithm in the IoV application scenario were analyzed, and an improved artificial fish swarm algorithm was proposed by introducing a universal gravitation model and obstacle avoidance mode control. Secondly, the force constraints on the vehicle in the IoV application scenario were analyzed, and the self-organizing behavior control strategy of networked vehicles was used to derive the optimal speed. Finally, real-time trajectory guidance and obstacle-avoidance trajectory planning for vehicles were realized based on the optimal speed. The simulation results show that with the trajectory planning model, the driving speed of the vehicle is more stable, the trajectory fluctuates less, and zero collision can be achieved. In multi-vehicle encounters with 2 to 40 test vehicles, compared with the original artificial fish swarm algorithm and the firefly algorithm, the number of iterations of the trajectory planning method using the improved artificial fish swarm algorithm was reduced, and the iteration efficiency increased by 3 to 7 times and 4 to 8 times respectively; the more vehicles, the more obvious the improvement in iteration efficiency.
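One movement update of the improved fish swarm can be sketched as follows; the inverse-square attraction toward the current best and the linear repulsion inside a safety distance are illustrative forms of the gravitation and obstacle-avoidance terms, with made-up constants.

    import numpy as np

    def afsa_gravity_step(x, best, obstacle, step=0.5, g=0.8, safe_dist=5.0):
        """One movement update of the improved artificial fish swarm:
        the fish is pulled toward the current best position by a
        gravitation-like term (strength ~ g / distance^2) and pushed
        away from obstacles closer than `safe_dist`.
        """
        x, best, obstacle = map(np.asarray, (x, best, obstacle))
        d_best = np.linalg.norm(best - x) + 1e-9
        pull = g / d_best**2 * (best - x) / d_best   # gravitational attraction
        d_obs = np.linalg.norm(obstacle - x) + 1e-9
        push = np.zeros_like(pull)
        if d_obs < safe_dist:                        # obstacle avoidance mode
            push = (x - obstacle) / d_obs * (safe_dist - d_obs) / safe_dist
        direction = pull + push
        return x + step * direction / (np.linalg.norm(direction) + 1e-9)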
    Hybrid variable neighborhood search algorithm for long-term carpooling problem
    GUO Yuhan, YI Peng
    2018, 38(10):  3036-3041.  DOI: 10.11772/j.issn.1001-9081.2018020343
Abstract ( )   PDF (1021KB) ( )  
    References | Related Articles | Metrics
A Hybrid Variable Neighborhood Search Algorithm (HVNSA) was proposed for solving the Long-Term CarPooling Problem (LTCPP), which reduces the number of vehicle trips by matching users with the same destination. Firstly, a comprehensive and accurate mathematical model of LTCPP was built, all users were assigned to car pools by a composite distance preference algorithm, and the time window and vehicle capacity constraints were verified to obtain an initial carpooling scheme. Secondly, the initial carpooling scheme was optimized by a variable neighborhood search algorithm to obtain the optimal long-term carpooling scheme. The experimental results show that HVNSA can obtain high-quality carpooling schemes within 1 second for instances with 100 and 200 people, and within 2-4 seconds for larger-scale instances with 400 and 1000 people.
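The variable-neighborhood phase of HVNSA follows the standard VNS skeleton, sketched below; the neighborhood moves (e.g. moving a user between pools, swapping users) and the cost function are left as caller-supplied assumptions.

    import random

    def variable_neighborhood_search(initial, cost, neighborhoods,
                                     max_iter=1000, seed=0):
        """Standard VNS skeleton: perturb the incumbent in neighborhood k,
        accept on improvement and restart from the first neighborhood,
        otherwise widen to the next one. `neighborhoods` is a list of
        functions mapping (scheme, rng) to a random neighbour scheme.
        """
        rng = random.Random(seed)
        best, best_cost = initial, cost(initial)
        for _ in range(max_iter):
            k = 0
            while k < len(neighborhoods):
                candidate = neighborhoods[k](best, rng)
                c = cost(candidate)
                if c < best_cost:
                    best, best_cost = candidate, c
                    k = 0              # improvement: back to first neighborhood
                else:
                    k += 1             # no luck: try a wider neighborhood
        return best, best_cost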
    P2P loan default prediction model based on TF-IDF algorithm
    ZHANG Ning, CHEN Qin
    2018, 38(10):  3042-3047.  DOI: 10.11772/j.issn.1001-9081.2018030673
Abstract ( )   PDF (887KB) ( )  
    References | Related Articles | Metrics
Concerning that current P2P loan default prediction models are limited by the information asymmetry between lenders and borrowers and do not take differences among lenders into account, a P2P loan default prediction model based on the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm from information retrieval was proposed. Firstly, based on investment utility theory, a loan default prediction model was established using information such as the lender's historical investment profit rate and the loan's bid interest rate. Secondly, by analogy with the TF-IDF algorithm, an inverse investment scale factor of the lender was constructed to quantify differences among lenders, and the weight factors in the model were optimized. Experimental results show that the prediction effect of this model is better than those of other models on different data sets, and its prediction accuracy is on average 6% higher than those of other models.
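The TF-IDF analogy can be made explicit with a few lines: a lender who invests in almost every loan is treated like a common term and down-weighted; the exact weighting formula in the paper may differ from this IDF-style sketch.

    import math

    def inverse_investment_scale(total_loans, lender_loan_counts):
        """IDF-style lender weighting: a lender who funds very many loans
        carries little signal about any single loan (like a common term),
        so weight ~ log(total_loans / loans_by_lender)."""
        return {lender: math.log(total_loans / max(count, 1))
                for lender, count in lender_loan_counts.items()}

    # lenders who back almost everything get near-zero weight:
    print(inverse_investment_scale(1000, {'selective': 3, 'indiscriminate': 900}))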
    Brain node recognition method based on extended low-rank multivariate general linear model
    YANG Yaqian, TANG Shaoting
    2018, 38(10):  3048-3052.  DOI: 10.11772/j.issn.1001-9081.2018020432
Abstract ( )   PDF (764KB) ( )  
    References | Related Articles | Metrics
Identifying brain nodes that respond differently under different conditions plays an important role in human brain research. Because of the low detection accuracy of existing single-voxel models and the excessive calculation time and usage limitations of the Low-rank Multivariate General Linear Model (LRMGLM), a brain node identification method based on an Extended LRMGLM (ELRMGLM) was proposed. Firstly, an ELRMGLM that can simultaneously process the data of all nodes in two experiments was established to improve accuracy with more temporal and spatial information. Then, an optimization function with spatio-temporal smoothing penalty terms was used to introduce prior information, and the model parameters were solved by an iterative algorithm. Finally, a quick selection strategy based on K-means clustering was adopted to speed up penalty parameter selection and brain node identification. In three sample experiments, the accuracy of ELRMGLM was about 20%, 8% and 20% higher than those of the canonical Hemodynamic Response Function (HRF) method, Smooth Finite Impulse Response (SFIR) and Tikhonov-regularization with Generalized Cross-Validation (Tik-GCV) respectively, and slightly better than that of LRMGLM, while the calculation time of ELRMGLM was about 1/750 of that of LRMGLM. The experimental results show that ELRMGLM can effectively improve identification accuracy and reduce calculation time.
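The K-means shortcut for penalty selection can be sketched as follows; `fit_score` is an assumed interface standing in for whatever model-selection criterion the paper optimizes, so this is a sketch of the grouping idea only.

    import numpy as np
    from sklearn.cluster import KMeans

    def grouped_penalty_selection(voxel_features, candidate_lambdas,
                                  fit_score, k=10):
        """K-means shortcut to penalty selection: cluster the brain nodes
        by their response features, pick the best smoothing penalty once
        per cluster centroid instead of once per node, and share it
        across the cluster.
        """
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(voxel_features)
        best_per_cluster = {}
        for c, center in enumerate(km.cluster_centers_):
            best_per_cluster[c] = max(candidate_lambdas,
                                      key=lambda lam: fit_score(center, lam))
        return np.array([best_per_cluster[c] for c in km.labels_])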