
Table of Contents

    01 August 2014, Volume 34 Issue 8
    Adaptive approach for data cleansing in wireless sensor networks
    XIA Ying BI Haiyang LEI Jianjun PEI Haiying
    2014, 34(8):  2145-2147.  DOI: 10.11772/j.issn.1001-9081.2014.08.2145

    Since the data gathered in Wireless Sensor Networks (WSN) are inaccurate and unreliable, a flexible space model based on the spatial correlation of sensor data was defined, and an Adaptive Neighbor-Space Approach for data cleansing (ANSA) was proposed. The approach adjusted the neighbor space dynamically according to sensor data fluctuation and cleaned local raw data with the weighted average of neighbors' measurements. The experimental results show that the sensor data error after cleansing by the proposed approach is less than 0.5; compared with the classic Weighted Moving Average (WMA), the approach is more accurate and reduces energy consumption by about 36%.
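    The cleansing step above reduces to one weighted average per node. Below is a minimal Python sketch of that idea; the agreement-based weighting and the demo readings are illustrative assumptions, not the authors' exact formulas.

        # Hedged sketch of neighbor-space cleansing: replace a raw local
        # reading with a weighted average of neighbors' measurements.
        def clean_reading(local, neighbors, weights=None):
            if not neighbors:
                return local
            if weights is None:
                # assumption: neighbors agreeing with the local reading weigh more
                weights = [1.0 / (1.0 + abs(n - local)) for n in neighbors]
            total = sum(weights)
            return sum(w * n for w, n in zip(weights, neighbors)) / total

        print(clean_reading(25.0, [24.1, 24.3, 30.0]))  # the 30.0 outlier is down-weighted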

    Hybrid optimization algorithm of low-energy adaptive clustering hierarchy protocol for wireless sensor networks
    SHEN Mengnan GENG Shengling LIU Zhen
    2014, 34(8):  2148-2154.  DOI: 10.11772/j.issn.1001-9081.2014.08.2148

    In the research of Wireless Sensor Network (WSN) protocols, the central topics are reducing the energy consumption of sensor nodes and prolonging the life of the network. To overcome the weaknesses of Low-Energy Adaptive Clustering Hierarchy (LEACH) in its clustering mechanism and data communications for WSN, a hybrid optimization protocol called HOBDE-LEACH (Hybrid Optimization LEACH Protocol Based on Distance and Energy) was proposed. In the new protocol, all nodes were first divided into clusters and a head node was then selected in each cluster. The Coverage Radius and Seed-Scan Clustering Algorithm (CR-SSCA) was introduced for fast clustering and guaranteed that the whole area would be covered. During network operation, considering load balance together with energy and distance, different cluster head selections and communication mechanisms were adopted in different stages. The simulation results show that, compared with the LEACH protocol, HOBDE-LEACH extends the round of the first node death by 66% and the round of 50% node deaths by 20%; compared with the LEACH-EI protocol, it extends the round of the first node death by 50% and the round of 50% node deaths by 19%. The HOBDE-LEACH protocol can effectively balance the network load and the energy consumption of cluster heads, distribute cluster nodes reasonably and obviously prolong the lifetime of the network.
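    For readers unfamiliar with LEACH-style rotation, the sketch below shows one cluster-head election round: the classic LEACH threshold T(n) is standard, while the energy/distance weighting is only an illustrative stand-in for the paper's hybrid criterion.

        import random

        # Sketch: probabilistic cluster-head election. The second factor is an
        # assumed energy/distance weighting, not the exact HOBDE-LEACH rule.
        def is_cluster_head(p, r, energy, e_max, dist, d_max):
            t = p / (1 - p * (r % int(1 / p)))                      # classic LEACH threshold
            t *= 0.5 * (energy / e_max) + 0.5 * (1 - dist / d_max)  # assumed hybrid term
            return random.random() < t

        print(is_cluster_head(p=0.05, r=3, energy=0.8, e_max=1.0, dist=40, d_max=100))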

    Data driven construction and inference methodology of belief rule-base
    YU Ruiyin YANG Longjie FU Yanggeng
    2014, 34(8):  2155-2160.  DOI: 10.11772/j.issn.1001-9081.2014.08.2155

    Considering the low inference accuracy of the extended Belief Rule Base (BRB) proposed by Liu et al. (LIU J, MARTINEZ L, CALZADA A, et al. A novel belief rule base representation, generation and its inference methodology. Knowledge-Based Systems, 2013, 53: 129-141), an improved method of rule-base construction and inference was proposed. Building on Liu's rule-base construction method, a new generation method for rule antecedents and a new calculation method for rule weights were provided. Subsequently, in order to avoid activating too many unnecessary rules, the 80/20 rule was introduced to improve the rule activation strategy, forming an integrated construction and inference methodology for belief rule bases. Finally, a case study on pipeline leak detection was conducted to validate the accuracy and efficiency of the new approach. The experimental results show that the proposed approach not only keeps time consumption low, but also reduces the Mean Absolute Error (MAE) of the system to 0.17342, which proves that the new approach has high accuracy and efficiency.

    Parameter training approach based on variable particle swarm optimization for belief rule base
    SU Qun YANG Longjie FU Yanggeng WU Yingjie GONG Xiaoting
    2014, 34(8):  2161-2165.  DOI: 10.11772/j.issn.1001-9081.2014.08.2161

    To solve the optimization learning model of Belief Rule Base (BRB) parameters, a new parameter training approach based on Particle Swarm Optimization (PSO), one of the swarm intelligence algorithms, was proposed. The optimization learning model was converted into a constrained nonlinear optimization problem. During the optimization process, all particles were limited to the search space, and particles with zero velocity were given a new velocity in order to maintain the diversity of the particle population and achieve parameter training. On a practical pipeline leak detection problem, the Mean Absolute Error (MAE) of the trained system was 0.166478. The experimental results show that the proposed method has good accuracy and can be used for parameter training.
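    A minimal sketch of such a constrained PSO loop appears below: particles are clamped to the feasible box and stalled particles are re-seeded with a random velocity, as described above. The bounds, coefficients and toy objective are illustrative assumptions.

        import random

        def pso(obj, dim, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
            vs = [[0.0] * dim for _ in range(n)]
            pbest = [x[:] for x in xs]
            gbest = min(pbest, key=obj)[:]
            for _ in range(iters):
                for i in range(n):
                    for d in range(dim):
                        vs[i][d] = (w * vs[i][d]
                                    + c1 * random.random() * (pbest[i][d] - xs[i][d])
                                    + c2 * random.random() * (gbest[d] - xs[i][d]))
                    if all(abs(v) < 1e-12 for v in vs[i]):   # stalled: give it a velocity
                        vs[i] = [random.uniform(-1, 1) for _ in range(dim)]
                    xs[i] = [min(max(x + v, lo), hi)         # clamp to the search space
                             for x, v in zip(xs[i], vs[i])]
                    if obj(xs[i]) < obj(pbest[i]):
                        pbest[i] = xs[i][:]
                gbest = min(pbest, key=obj)[:]
            return gbest

        print(pso(lambda x: sum(t * t for t in x), dim=3, lo=-5, hi=5))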

    Novel validity index for fuzzy clustering
    ZHENG Hongliang XU Benqiang ZHAO Xiaohui ZOU Li
    2014, 34(8):  2166-2169.  DOI: 10.11772/j.issn.1001-9081.2014.08.2166

    A cluster number must be pre-defined in the classical Fuzzy C-Means (FCM) algorithm; otherwise FCM cannot work normally, which limits its applications. Aiming at this problem, a new fuzzy cluster validity index was presented. Firstly, the membership matrix was obtained by running the FCM algorithm. Secondly, the intra-class compactness and the inter-class overlap were computed from the membership matrix. Finally, a new cluster validity index was defined using the intra-class compactness and the inter-class overlap. The proposal overcomes the shortcoming of FCM that the cluster number must be pre-assigned, and the optimal cluster number can be effectively found by the proposed index. The experimental results on artificial and real data sets show the validity of the proposed index, and the optimal cluster number is obtained for the three fuzzy factor values of 1.8, 2.0 and 2.2 that are generally used in the FCM algorithm.
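    The sketch below shows one way to turn an FCM membership matrix into such an index; the exact combination of compactness and overlap used in the paper is not given here, so the ratio is an illustrative assumption.

        import numpy as np

        # U has shape (clusters, samples); entries are FCM memberships.
        def validity_index(U):
            c, n = U.shape
            compactness = np.sum(U ** 2) / n              # high when memberships are crisp
            overlap = 0.0
            for i in range(c):
                for j in range(i + 1, c):
                    overlap += np.sum(np.minimum(U[i], U[j])) / n
            return compactness / (1.0 + overlap)          # larger = better partition

        U = np.array([[0.9, 0.8, 0.1], [0.1, 0.2, 0.9]])
        print(validity_index(U))

    Running FCM for each candidate cluster number and picking the number that maximizes the index reproduces the selection procedure described above.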

    Sorting method and its application based on tolerance dominance relation
    CHEN Wancui LYV Yuejin WENG Shizhou
    2014, 34(8):  2170-2174.  DOI: 10.11772/j.issn.1001-9081.2014.08.2170

    Concerning the problem that the classical dominance relation is too strict in ordered information systems, which may lead to failure of the sorting method, the concept of tolerance dominance relation was proposed and its relevant properties were studied. Then, based on the tolerance dominance relation, the definition of dominance degree was obtained and a project sorting method was proposed. Finally, the sorting method was applied to the comprehensive evaluation of a smart grid. The experimental results show that, compared with the classical dominance relation, the tolerance dominance relation possesses stronger fault tolerance for the data, and the sorting results have a stronger degree of differentiation. The proposed tolerance dominance relation can effectively avoid the failure caused in the classical dominance relation by a large number of attributes and the differing merits of attribute values.

    MREclat: new algorithm for parallel mining frequent itemsets
    ZHANG Zhigang JI Genlin TANG Mengmeng
    2014, 34(8):  2175-2178.  DOI: 10.11772/j.issn.1001-9081.2014.08.2175

    Aiming at the problem that memory and computational capability are insufficient when using the Eclat algorithm to mine frequent itemsets from massive datasets, a parallel mining algorithm based on the MapReduce framework, called MREclat (MapReduce Eclat), was proposed. Firstly, the MREclat algorithm converted the horizontal database into a vertical one. Secondly, it redistributed the converted dataset according to the first item of each frequent 2-itemset, taking load balance into consideration. Then, all frequent itemsets prefixed by the same item were computed on each computing node. Finally, MREclat collected the results of all computing nodes and generated the complete set of frequent itemsets. The idea of MREclat was introduced and the performance of the algorithm was studied. The experimental results show that MREclat is twice as efficient as the PEclat algorithm, and its speedup is 64% higher than that of PEclat.
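    The serial kernel that MREclat distributes is the vertical (tid-set) representation of Eclat; the sketch below shows that kernel for 2-itemsets, with the MapReduce partitioning by first item omitted.

        from itertools import combinations

        def eclat_pairs(transactions, min_support):
            # build the vertical layout: item -> set of transaction ids
            vertical = {}
            for tid, items in enumerate(transactions):
                for item in items:
                    vertical.setdefault(item, set()).add(tid)
            frequent = {i: t for i, t in vertical.items() if len(t) >= min_support}
            result = {}
            for a, b in combinations(sorted(frequent), 2):
                tids = frequent[a] & frequent[b]          # support by tid-set intersection
                if len(tids) >= min_support:
                    result[(a, b)] = len(tids)
            return result

        print(eclat_pairs([{'a', 'b', 'c'}, {'a', 'b'}, {'b', 'c'}], min_support=2))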

    Link prediction algorithm based on link importance and data field
    CHEN Qiaoyu BAN Zhijie
    2014, 34(8):  2179-2183.  DOI: 10.11772/j.issn.1001-9081.2014.08.2179

    The existing link prediction methods based on node similarity usually ignore the link strength in the network topology, and the weight values in weighted topological path methods are difficult to set. To solve these problems, a new prediction algorithm based on link importance and data fields was proposed. Firstly, the method assigned a different weight to each link according to the topology graph. Secondly, it took into account the interaction between potentially linked nodes and pre-estimated link values for node pairs without links. Finally, it calculated the similarity between two nodes with a data field potential function. The experimental results on typical real-world network data sets show that the proposed method performs well on both classification and recommendation indexes. In comparison with the Local Path (LP) algorithm of the same complexity, the proposed algorithm raises the Area Under Curve (AUC) by 3 to 6 percentage points and the Discounted Cumulative Gain (DCG) by 1.5 to 2.5 points, improving prediction accuracy on the whole. Because its parameters are easy to determine and its time complexity is low, the new approach can be deployed simply.
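    A data field assigns every object a potential that decays with distance through a Gaussian-like kernel; a minimal sketch is given below, where the influence factor sigma and the unit masses are assumptions.

        import math

        # potential exerted at a point by objects at the given distances
        def field_potential(distances, masses=None, sigma=1.0):
            if masses is None:
                masses = [1.0] * len(distances)
            return sum(m * math.exp(-(d / sigma) ** 2)
                       for m, d in zip(masses, distances))

        # e.g. score a candidate link by the potential of nearby common neighbors
        print(field_potential([0.5, 1.0, 2.0]))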

    Unsupervised discretization algorithm based on ensemble learning
    XU Yingying ZHONG Caiming
    2014, 34(8):  2184-2187.  DOI: 10.11772/j.issn.1001-9081.2014.08.2184

    Some algorithms in pattern recognition and machine learning can only deal with discrete attribute values, while in the real world many data sets consist of continuous values. An unsupervised method was proposed to address this discretization problem. First, the K-means method was employed to partition the data set into multiple subgroups to acquire label information, and then a supervised discretization algorithm was applied to the divided data set. Repeating this process produced multiple discretization results, which were then integrated with an ensemble technique. Finally, the minimum sub-intervals were merged after priority dimensions and adjacent intervals were determined according to the neighbor relationships of the data, where the number of sub-intervals was automatically estimated by preserving correlation so that the intrinsic structure of the data set was maintained. The experimental results of applying categorical clustering algorithms such as spectral clustering demonstrate the feasibility and effectiveness of the proposed method; for example, its clustering accuracy improves by about 33% on average compared with four other methods. The obtained discrete data can be used by data mining algorithms such as the ID3 decision tree algorithm.
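    One loose reading of the pseudo-labeling and ensemble steps is sketched below: repeated K-means runs yield pseudo-labels, each run's sorted cluster centers yield candidate cut points, and cuts that recur across runs are kept. The run count, k, tolerance and majority threshold are all assumptions, and the final interval-merging step is omitted.

        import numpy as np
        from sklearn.cluster import KMeans

        def ensemble_cuts(x, n_runs=10, k=3, tol=0.3):
            x = np.asarray(x, dtype=float).reshape(-1, 1)
            flat = []
            for seed in range(n_runs):
                labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(x)
                centers = sorted(float(x[labels == c].mean()) for c in range(k))
                flat += [(a + b) / 2 for a, b in zip(centers, centers[1:])]
            flat.sort()
            merged, group = [], [flat[0]]
            for c in flat[1:]:
                if c - group[-1] <= tol:
                    group.append(c)
                else:
                    if len(group) >= n_runs // 2:          # majority of runs agree
                        merged.append(sum(group) / len(group))
                    group = [c]
            if len(group) >= n_runs // 2:
                merged.append(sum(group) / len(group))
            return merged

        data = np.concatenate([np.random.normal(m, 0.3, 50) for m in (0, 3, 6)])
        print(ensemble_cuts(data))   # roughly the two boundaries between the modes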

    Automatic annotation methods for Chinese micro-blog corpus with sentiment class
    YANG Aiming ZHOU Yongmei ZHOU Jianfeng
    2014, 34(8):  2188-2191.  DOI: 10.11772/j.issn.1001-9081.2014.08.2188

    To address the difficulty of manually annotating a large-scale micro-blog corpus, three automatic annotation methods and an integrated voting-based annotation method for Chinese micro-blog corpora were proposed. The three automatic annotation methods were a keywords-based method, a probability-summation-based method and a probability-product-based method. During automatic annotation, the corpus was first annotated by the three methods respectively to obtain three results, and the final annotation was then determined by voting under the integration strategy. The experimental results from an automatic annotation experiment system verify the feasibility and effectiveness of the proposed methods, and show that the accuracy of each single annotation method is more than 70%, while that of the voting method is more than 90%.

    Building and consistency analysis of movie ontology
    GAO Xiaolong ZHU Xinde ZHAO Jianmin CAO Cungen XU Huiying WU De
    2014, 34(8):  2192-2196.  DOI: 10.11772/j.issn.1001-9081.2014.08.2192

    To tackle the growing requirements of mobile networks on movie service systems and the lack of a description of movie domain knowledge, the necessity and feasibility of establishing a Movie Ontology (MO) were illustrated. Firstly, the objects and components of the MO were summarized, the principles and method for building the MO model were put forward, and the model was built using the Web Ontology Language (OWL) and Protege 4.1. After that, the concrete representation of classes, properties, individuals, axioms and inference rules in the MO was explained. Finally, the consistency of the MO was analyzed, including the consistency of relationships between classes and the consistency based on axioms.

    Rule-based tagging method for Chinese ambiguous words
    LI Huadong JIA Zhen YI Hongfeng YANG Yan
    2014, 34(8):  2197-2201.  DOI: 10.11772/j.issn.1001-9081.2014.08.2197

    Concerning the low accuracy of tagging Chinese ambiguous words, a tagging method combining rules and statistical models was proposed. Firstly, three traditional statistical models, namely Hidden Markov Model (HMM), Maximum Entropy (ME) and Conditional Random Field (CRF), were applied to the tagging of ambiguous words. Then, an improved mutual information algorithm was applied to learn Part Of Speech (POS) tagging rules, which were obtained by calculating the correlation between the target words and nearby word units. Finally, the rules were combined with the statistical model algorithms to tag Chinese ambiguous words. The experimental results show that after adding the rule algorithm, the average accuracy of POS tagging increases by 5%.

    Agent-based language competition model with social circle network
    WANG Chao BI Guihong ZHANG Shouming WEI Chuntao
    2014, 34(8):  2202-2208.  DOI: 10.11772/j.issn.1001-9081.2014.08.2202

    A language transmission network is a typical social network, and the structure and dynamics of language networks have a significant impact on language competition and spread. Therefore, taking language competition within a single area as the object of study, an Agent-based social circle network was proposed to build a social network closer to the actual language environment; both its whole-network parameters and the structural parameters of individual networks show good social network characteristics. Agents in the network were distributed into social circles of different sizes; they could move, be born and die, which led to the disconnection of previous links and the establishment of new contacts. Each Agent adopted one of three possible states: monolingual in language X, monolingual in language Y, or bilingual in Z; language was transmitted both horizontally and vertically. On the basis of analyzing the impact of language status, the attraction parameter, the peak rates of horizontal and vertical transmission, and the proportion of speakers on language competition, the impact of social interaction radius and social mobility was also analyzed. The simulation results indicate that, compared with a static social network model, the proposed model is closer to the actual society, can effectively increase the likelihood of coexistence between languages, and provides a better environment for the study of endangered language preservation.

    Gender identification of microblog users based on rough set
    HUANG Faliang XIONG Jinbo HUANG Tianqiang LIU Ximeng
    2014, 34(8):  2209-2211.  DOI: 10.11772/j.issn.1001-9081.2014.08.2209

    Concerning the gender tendency hidden in the messages posted by microblog users, a novel approach based on rough set theory was proposed to identify microblog user gender. In the proposed approach, a new Representation Model based on Tolerance Rough Set (TRSRM) was devised, which can effectively represent the gender characteristics of microblog messages. The experimental results on the messages of 1000 real microblog users show that the accuracy rate of the proposed approach is 7% higher than that of the frequency model approach, so the TRSRM achieves better recognition performance.

    PM2.5 concentration prediction model of least squares support vector machine based on feature vector
    LI Long MA Lei HE Jianfeng SHAO Dangguo YI Sanli XIANG Yan LIU Lifang
    2014, 34(8):  2212-2216.  DOI: 10.11772/j.issn.1001-9081.2014.08.2212

    To solve the problem of Fine Particulate Matter (PM2.5) concentration prediction, a PM2.5 concentration prediction model was proposed. First, a comprehensive meteorological index was introduced to jointly account for wind, humidity and temperature; then the feature vector was constructed by combining it with the actual concentrations of SO2, NO2, CO and PM10; finally a Least Squares Support Vector Machine (LS-SVM) prediction model was built on the feature vectors and PM2.5 concentration data. The experimental results on 2013 data from the environmental monitoring centers of city A and city B show that the forecast accuracy is improved after introducing the comprehensive meteorological index, with the error reduced by nearly 30%. The proposed model predicts PM2.5 concentration more accurately and has high generalization ability. Furthermore, the authors analyzed the relationship between PM2.5 concentration and the hospitalization rate and hospital outpatient volume, and found a high correlation between them.
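    LS-SVM training reduces to solving one linear system, [[0, 1^T], [1, K + I/gamma]][b; alpha] = [0; y], with kernel matrix K; the sketch below implements that standard formulation with an RBF kernel. The mocked five-dimensional features and the gamma and kernel-width values are assumptions.

        import numpy as np

        def rbf(A, B, s=1.0):
            d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d / (2 * s * s))

        def lssvm_fit(X, y, gamma=10.0, s=1.0):
            n = len(y)
            K = rbf(X, X, s) + np.eye(n) / gamma
            A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                          [np.ones((n, 1)), K]])
            sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
            return sol[0], sol[1:]                        # bias b, dual weights alpha

        def lssvm_predict(X_train, b, alpha, X_new, s=1.0):
            return rbf(X_new, X_train, s) @ alpha + b

        X = np.random.rand(50, 5)                         # stand-in for the 5 features
        y = X @ np.array([0.5, 1.0, 0.2, 0.8, 0.3]) + 0.1 * np.random.randn(50)
        b, alpha = lssvm_fit(X, y)
        print(lssvm_predict(X, b, alpha, X[:3]), y[:3])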

    Time series outlier detection based on sliding window prediction
    YU Yufeng ZHU Yuelong WAN Dingsheng GUAN Xingzhong
    2014, 34(8):  2217-2220.  DOI: 10.11772/j.issn.1001-9081.2014.08.2217

    To solve data quality problems in hydrological time series analysis and decision-making, a new prediction-based outlier detection algorithm was proposed. The method first split a given hydrological time series into subsequences in order to build a forecasting model that predicts future values; an outlier was then assumed wherever the difference between the predicted and observed values was above a certain threshold. The setup of the sliding window and the parameters of the detection algorithm were analyzed, and the results were validated on real data. The experimental results show that the proposed algorithm can effectively detect outliers in time series, improving sensitivity and specificity to at least 80% and 98% respectively.
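    The detection loop is easy to state in code. In the sketch below a plain moving average stands in for the paper's forecasting model; the window size and threshold rule are assumptions.

        def detect_outliers(series, window=5, k=3.0):
            outliers = []
            for i in range(window, len(series)):
                hist = series[i - window:i]
                pred = sum(hist) / window                          # stand-in forecast
                spread = (sum((h - pred) ** 2 for h in hist) / window) ** 0.5 or 1e-9
                if abs(series[i] - pred) > k * spread:             # residual too large
                    outliers.append(i)
            return outliers

        flow = [10, 11, 10, 12, 11, 10, 48, 11, 12, 10]            # 48 is an injected spike
        print(detect_outliers(flow))                               # -> [6]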

    Application of gray cumulative projection histogram in detection of tire crown crack
    HAN Yanbin WANG Jie XIA Yingjie LI Jinping
    2014, 34(8):  2221-2226.  DOI: 10.11772/j.issn.1001-9081.2014.08.2221

    For the automatic detection of tire crown cord overlap defects, a detection method based on crown X-ray images was presented. Firstly, gray cumulative projection curves were obtained by projecting the X-ray image along different angles. Secondly, the local peak energy distributions of the curves were calculated, and an energy feature vector was constructed from the n largest peak energy values. Thirdly, tire crown crack images were recognized from the maximum projection curve, distinguished through the energy feature vector by a Support Vector Machine (SVM). Lastly, the tire crown crack was located by inverse position calculation. The experimental results demonstrate that the proposed approach is effective for detecting tire crown defects caused by cord overlap; the highest correct detection rate reaches 97.7% on 1000 crown images collected from the production process.

    Face recognition via kernel-based non-negative sparse representation
    BO Chunjuan ZHANG Rubo LIU Guanqun JIANG Yuzhe
    2014, 34(8):  2227-2230.  DOI: 10.11772/j.issn.1001-9081.2014.08.2227

    A novel Kernel-based Non-negative Sparse Representation (KNSR) method was presented for face recognition. The contributions are mainly three. First, non-negative constraints on the representation coefficients were introduced into Sparse Representation (SR), and a kernel function was exploited to depict non-linear relationships among different samples, based on which the corresponding objective function was proposed. Second, a multiplicative gradient descent method was proposed to solve the objective function, which can achieve the global optimum in theory. Finally, local binary features and the Hamming kernel were used to model the non-linear relationships among face samples, thereby achieving robust face recognition. The experimental results on several challenging face databases demonstrate that the proposed algorithm has higher recognition rates than the Nearest Neighbor (NN), Support Vector Machine (SVM), Nearest Subspace (NS), SR and Collaborative Representation (CR) algorithms, achieving about 99% on both the YaleB and AR databases.

    Kinect depth image filtering algorithm based on joint bilateral filter
    LI Zhifei CHEN Yuan
    2014, 34(8):  2231-2234.  DOI: 10.11772/j.issn.1001-9081.2014.08.2231

    The depth image obtained by a Kinect camera usually contains noise and black holes, so it performs poorly when applied directly in human motion tracking and recognition systems. To solve this problem, an efficient depth image filtering algorithm based on the joint bilateral filter was proposed. Following the principle of joint bilateral filtering, the depth and color images captured simultaneously by the Kinect camera were taken as the input. The spatial distance weight of the depth image and the grayscale weight of the RGB color image were computed by Gaussian kernel functions, and the two were multiplied to form the weight of the joint bilateral filter; the filter was accelerated by replacing the Gaussian kernel function with the fast Gauss transform. Finally, the result was convolved with the noisy image to filter the Kinect depth image. The experimental results show that the proposed algorithm significantly improves robustness to noise in a human motion tracking and identification system, increasing the recognition rate by 17.3%. Its average running time is 371ms, much lower than that of other similar algorithms. The algorithm keeps the advantages of the joint bilateral filter, and since the color image is introduced, it can repair the black holes well while reducing noise. It outperforms the traditional bilateral filter and joint bilateral filter in denoising and hole repair for Kinect depth images, and has higher real-time performance.
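    The sketch below shows one joint-bilateral step on a depth map: the spatial weight comes from pixel distance and the range weight from the registered grayscale image, so color edges guide how depth is smoothed and holes (zero depth) are filled. The sigmas and window radius are assumptions, and the fast-Gauss-transform speed-up is omitted.

        import numpy as np

        def joint_bilateral(depth, gray, radius=2, sigma_s=2.0, sigma_r=10.0):
            h, w = depth.shape
            out = np.zeros_like(depth, dtype=float)
            for y in range(h):
                for x in range(w):
                    acc = norm = 0.0
                    for dy in range(-radius, radius + 1):
                        for dx in range(-radius, radius + 1):
                            yy, xx = y + dy, x + dx
                            if 0 <= yy < h and 0 <= xx < w and depth[yy, xx] > 0:
                                ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                                wr = np.exp(-((gray[yy, xx] - gray[y, x]) ** 2)
                                            / (2 * sigma_r ** 2))
                                acc += ws * wr * depth[yy, xx]
                                norm += ws * wr
                    out[y, x] = acc / norm if norm > 0 else 0.0   # 0 = unfillable hole
            return out

        depth = np.full((5, 5), 100.0); depth[2, 2] = 0           # a black hole
        gray = np.full((5, 5), 128.0)
        print(joint_bilateral(depth, gray)[2, 2])                 # hole filled from neighbors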

    Network and communications
    Identification method of network traffic flow based on evidence theory fusion
    ZHANG Jian CAO Ping SHOU Guochu
    2014, 34(8):  2235-2238.  DOI: 10.11772/j.issn.1001-9081.2014.08.2235

    In multi-classifier decision fusion, there is a large deviation when limited training data are used to estimate the probability parameters of classifiers. To deal with this problem, a multi-classifier decision fusion method based on D-S (Dempster-Shafer) Evidential Reasoning (ER) was presented, which utilizes the strength of D-S theory in describing the uncertainty of classifiers. To solve the paradox problem arising from high conflict among multiple classifiers, a reliability-weighted fusion algorithm was proposed to realize traffic identification decision fusion. The experimental results show that the accuracy rates of majority voting and the Bayes maximum posterior probability are 78.3% and 81.7% respectively, while the proposed algorithm improves the accuracy rate to 82.2%-91.6% and keeps the rejection rate between 4.1% and 6.2%.
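    At the core of such fusion is Dempster's rule of combination, sketched below over a toy traffic-class frame; the reliability weighting used against high conflict is not shown.

        from itertools import product

        def dempster_combine(m1, m2):
            # m1, m2: mass assignments, frozenset(focal set) -> mass
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb                    # mass assigned to disagreement
            if conflict >= 1.0:
                raise ValueError("total conflict: combination undefined")
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        A, B = frozenset({'http'}), frozenset({'p2p'})
        m1 = {A: 0.7, A | B: 0.3}                          # classifier 1's evidence
        m2 = {A: 0.6, B: 0.3, A | B: 0.1}                  # classifier 2's evidence
        print(dempster_combine(m1, m2))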

    Spectrum allocation based on immune multi-objective optimization in cognitive mesh networks
    LI Yalun YANG Yanan CAI Zhengyi
    2014, 34(8):  2239-2242.  DOI: 10.11772/j.issn.1001-9081.2014.08.2239

    To study the spectrum allocation problem in Cognitive Wireless Mesh Networks (CWMN), an immune-based multi-objective optimization algorithm was proposed. The problem was modeled as a multi-objective optimization problem of maximizing the total bandwidth while minimizing the total number of occupied spectra. An antibody encoding, a whole cloning operator and a non-dominated antibody selection operator suited to the problem were designed. The simulation results show that the proposed algorithm can obtain the Pareto-optimal solutions of CWMN spectrum allocation, improving the total bandwidth while minimizing the occupied spectra, and thus optimizes spectrum allocation performance.

    GOMDI: GPU OpenFlow massive data network analysis model
    ZHANG Wei XIE Zhenglong DING Yaojun ZHANG Xiaoxiao
    2014, 34(8):  2243-2247.  DOI: 10.11772/j.issn.1001-9081.2014.08.2243

    OpenFlow enhances the Quality of Service (QoS) of traditional networks, but it suffers from disadvantages such as low network session identification efficiency and poor packet forwarding paths. On the basis of current OpenFlow research, the GPU OpenFlow Massive Data Network Analysis (GOMDI) model was proposed by integrating biological sequence algorithms, GPU parallel computing and machine learning methods, and its network session matching algorithm and path selection algorithm were designed. The experimental results show that, in a real network, the speedup of the GOMDI session matching algorithm over a CPU environment exceeds 300, while the packet loss rate of its path selection algorithm is below 5% and the network delay is less than 20ms. Thus, the GOMDI model can effectively improve network performance and meet the real-time processing needs of massive information in big data environments.

    Frequency offset tracking and estimation algorithm in orthogonal frequency division multiplexing based on improved strong tracking unscented Kalman filter
    YANG Zhaoyang YANG Xiaopeng LI Teng YAO Kun ZHANG Hengyang
    2014, 34(8):  2248-2251.  DOI: 10.11772/j.issn.1001-9081.2014.08.2248

    To address the large frequency offset caused by the Doppler effect in high-speed moving environments, a dynamic state-space model of Orthogonal Frequency Division Multiplexing (OFDM) was built, and a frequency offset tracking and estimation algorithm based on an improved Strong Tracking Unscented Kalman Filter (STUKF) was proposed. Combining strong tracking filter theory with the UKF, a fading factor was introduced into the calculation of the measurement prediction covariance and cross covariance, which adjusted the frequency offset estimation error covariance, controlled the process noise covariance, and adjusted the gain matrix in real time; the tracking ability for time-varying frequency offset was thus enhanced and the estimation accuracy raised. Simulations were carried out on both time-invariant and time-varying frequency offset models. The results show that the proposed algorithm has better tracking and estimation performance than the UKF frequency offset estimation algorithm, with a Signal-to-Noise Ratio (SNR) gain of about 1dB at the same Bit Error Rate (BER).

    Virtual machine allocation method based on gray correlation degree in cloud computing
    HE Li
    2014, 34(8):  2252-2255.  DOI: 10.11772/j.issn.1001-9081.2014.08.2252

    To balance improving resource utilization against reducing energy consumption in cloud computing systems, a new virtual machine allocation method based on gray correlation degree was proposed. Using the basic theory of gray correlation, the author established an allocation model of virtual machines on the evaluation functions of Service Level Agreement (SLA) violation rate, system energy consumption and server load, constructed a virtual machine allocation algorithm based on gray correlation degree, and experimented on the CloudSim platform. The experimental results show that, compared with a traditional multi-objective optimization method based on simple linear weighting, the proposed method achieves average decreases of 6.8%, 5.2% and 15.5% in system energy consumption, SLA violation rate and the number of virtual machine migrations under different virtual machine selection strategies. The proposed method can therefore greatly reduce the number of virtual machine migrations and better meet the system optimization demands on energy consumption and SLA violation rate.
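    Gray relational analysis scores each candidate against an ideal reference sequence; a minimal sketch follows, with rho = 0.5 as the customary resolving coefficient, mocked normalized features, and the min/max deltas computed per candidate for brevity.

        def gray_correlation(reference, candidate, rho=0.5):
            deltas = [abs(r - c) for r, c in zip(reference, candidate)]
            dmin, dmax = min(deltas), max(deltas)
            coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
            return sum(coeffs) / len(coeffs)               # gray relational grade

        ideal = [0.0, 0.0, 0.0]    # normalized SLA violation, energy, load: lower is better
        hosts = {'h1': [0.1, 0.4, 0.3], 'h2': [0.2, 0.2, 0.2]}
        print(max(hosts, key=lambda h: gray_correlation(ideal, hosts[h])))  # -> h2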

    Energy-efficient strategy for dynamic management of cloud storage replica based on user visiting characteristic
    WANG Zhengying YU Jiong YING Changtian LU Liang BAN Aiqin
    2014, 34(8):  2256-2259.  DOI: 10.11772/j.issn.1001-9081.2014.08.2256

    For the problems of low server utilization and serious energy waste in cloud computing environments, an energy-efficient strategy for the dynamic management of cloud storage replicas based on user visiting characteristics was put forward. By transforming the study of user visiting characteristics into calculating the visiting temperature of each Block, a DataNode actively applies for sleep according to the global visiting temperature so as to save energy. The dormancy application and dormancy verification algorithms were given in detail, and the strategy for handling visits arriving during DataNode dormancy was described explicitly. The experimental results show that after adopting this strategy, 29%-42% of DataNodes can sleep, energy consumption is reduced by 31%, and server response time remains good. The performance analysis shows that the proposed strategy can effectively reduce energy consumption while guaranteeing data availability.

    Task scheduling and resource selection algorithm with data-dependent constraints
    LIAO Bin YU Jiong ZHANG Tao YANG Xingyao
    2014, 34(8):  2260-2266.  DOI: 10.11772/j.issn.1001-9081.2014.08.2260

    Tasks in big data environments, such as MapReduce tasks, usually come with data-dependent constraints. The resource selection strategy in distributed storage systems tends to choose the data block nearest to the requestor, ignoring the server's resource load state in terms of CPU, disk I/O, network and so on. On the basis of the distributed storage system's cluster structure, data file division mechanism and data block storage mechanism, the cluster-node matrix, CPU load matrix, disk I/O load matrix, network load matrix, file-division-block matrix, data block storage matrix and node-status data block storage matrix were defined; these matrixes model the relationship between a task and its data constraints. An Optimal Resource Selection algorithm with Data-Dependent Constraints (ORS2DC) was then proposed, in which the task scheduling node is responsible for base data maintenance, while MapReduce tasks and data block read tasks take different selection strategies under their different resource constraints. The experimental results show that the proposed algorithm can choose higher-quality resources for tasks and improve task completion quality while reducing the NameNode's load burden, thereby reducing the probability of a single point of failure.

    Energy-efficient algorithm based on data classification for cloud storage system
    ZHANG Tao LIAO Bin SHUN Hua LI Fengjun JI Jinhu
    2014, 34(8):  2267-2272.  DOI: 10.11772/j.issn.1001-9081.2014.08.2267

    The constant expansion of cloud storage systems and the neglect of energy consumption factors in their design bring high energy consumption and low efficiency, which have become a main bottleneck in the development of cloud computing and big data. Most previous studies save energy by switching entire storage nodes to a low-power mode. According to the repetition of data and access rules, a new storage model based on data classification was proposed: the storage area was divided into a HotZone, a ColdZone and a ReduplicationZone, so that data are stored separately according to the repetition and activity-factor characteristics of each data file. Based on the new storage model, an energy-efficient storage algorithm was designed. The experimental results show that the new storage model improves the energy utilization rate of the distributed storage system by nearly 25%, especially when the system load is lower than the given threshold.

    Artificial intelligence
    Introspective learning adjustment approach for attribute weights of case-based reasoning classifier
    ZHANG Chunxiao YAN Aijun WANG Pu
    2014, 34(8):  2273-2278.  DOI: 10.11772/j.issn.1001-9081.2014.08.2273

    Aiming at the optimal allocation of attribute weights in Case-Based Reasoning (CBR) classifiers, an introspective learning-based iterative adjustment approach for attribute weights was proposed, in which the weights are adjusted according to the CBR classifier's result on each training case. Under the success-driven weight learning strategy, if the current training case was classified successfully, the weights of matched attributes were increased and the weights of mismatched attributes were decreased according to the weight adjustment formulas, after which all weights were normalized as the new weights of the current iteration. The experimental results show that the accuracy of the CBR classifier with the proposed method on the UCI datasets PD, Heart and WDBC is 1.72%, 4.44% and 1.05% higher respectively than that of the traditional CBR classifier. This illustrates that the success-driven introspective learning method for weight adjustment can improve the rationality of weight allocation and thus the accuracy of the CBR classifier.
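    One adjustment-and-normalize iteration is sketched below; the learning rate and the attribute match test are assumptions, not the paper's exact formulas.

        def adjust_weights(weights, matched, eta=0.1):
            # weights: attribute -> weight; matched: attribute -> bool (did it match?)
            for a in weights:
                weights[a] *= (1 + eta) if matched[a] else (1 - eta)
            total = sum(weights.values())
            return {a: w / total for a, w in weights.items()}   # renormalize

        w = {'age': 0.25, 'bp': 0.25, 'chol': 0.25, 'ecg': 0.25}
        print(adjust_weights(w, {'age': True, 'bp': True, 'chol': False, 'ecg': True}))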

    High-dimensional data clustering algorithm with subspace optimization
    WU Tao CHEN Lifei GUO Gongde
    2014, 34(8):  2279-2284.  DOI: 10.11772/j.issn.1001-9081.2014.08.2279

    A new soft subspace clustering algorithm was proposed to address the optimization of the projected subspaces, which is generally not considered in most existing soft subspace clustering algorithms. Maximizing the deviation of feature weights was proposed as the subspace optimization goal, and a quantitative formula was presented. On this basis, a new optimization objective function was designed, aiming at minimizing the within-cluster scatter while optimizing the soft subspace associated with each cluster. A new expression for feature-weight computation was mathematically derived, with which the new clustering algorithm was defined within the framework of classical k-means. The experimental results show that the proposed method significantly reduces the probability of being trapped prematurely in a local optimum and improves the stability of clustering results; it performs well, clusters efficiently, and is suitable for high-dimensional data cluster analysis.

    Immune robust regression analysis for data set of multiple models
    XU Xuesong SHU Jian
    2014, 34(8):  2285-2290.  DOI: 10.11772/j.issn.1001-9081.2014.08.2285

    Classical regression algorithms for the analysis of multiple-model data sets suffer from long computing times and low model detection accuracy. Therefore, a heuristic robust regression analysis method was proposed. The method mimics the clustering principle of the immune system: a B-cell network serves both as the classifier of the data set and as the memory of the model set, and the conformity between data and model is used as the classification criterion, which improves the accuracy of data classification. The extraction of the model set proceeds as a parallel iterative process of clustering, regressing and clustering again, by which the solution of the model set is gradually approached. The simulation results show that the proposed algorithm needs obviously less computing time and detects models more accurately than the classical ones. In the eight-model data set analysis in this paper, the best classical algorithm, the successive extraction algorithm based on Random Sample Consensus (RANSAC), has a mean model detection accuracy of 90.37% with a computing time of 53.3947s, and the detection accuracy of the classical algorithms whose computing time is below 0.5s is below 1%. By contrast, the proposed algorithm needs only 0.5094s and its detection accuracy is 98.25%.

    Software defects prediction based on under-sampling and ensemble algorithm
    LI Yong
    2014, 34(8):  2291-2294.  DOI: 10.11772/j.issn.1001-9081.2014.08.2291

    Software defect prediction is considered a means of improving test efficiency and assuring software reliability. To improve the accuracy of software defect prediction, a model based on under-sampling and a decision tree ensemble algorithm was proposed. Firstly, taking into account the class imbalance of software defect data, random under-sampling was used to rebalance the data according to the imbalance rate. Then, several decision tree sub-classifiers were trained using Bagging's random sampling. Finally, the defect prediction model was constructed based on the majority rule. Experiments were carried out on the NASA MDP datasets. The results show that, compared with three standard methods, the proposed model reduces the Probability of False alarm (PF) by 10% while maintaining the probability of detection, and improves the comprehensive evaluation index significantly. With its low false alarm probability, it is more effective and stable in software defect prediction practice.
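    The sketch below mirrors this pipeline with scikit-learn trees: under-sample the majority class, train trees on bootstrap samples, and predict by majority vote. The tree count and the 1:1 balance ratio are assumptions.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def fit_ensemble(X, y, n_trees=11, rng=np.random.default_rng(0)):
            pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
            trees = []
            for _ in range(n_trees):
                neg_s = rng.choice(neg, size=len(pos), replace=False)  # under-sample
                idx = rng.choice(np.concatenate([pos, neg_s]),
                                 size=2 * len(pos), replace=True)      # bagging
                trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
            return trees

        def predict(trees, X):
            votes = np.mean([t.predict(X) for t in trees], axis=0)
            return (votes >= 0.5).astype(int)                          # majority rule

        X = np.random.rand(200, 5)
        y = (np.random.rand(200) < 0.15).astype(int)                   # ~15% defective
        print(predict(fit_ensemble(X, y), X[:10]))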

    Fruit fly optimization algorithm based on cellular automata
    HE Zhiming SONG Jianguo MEI Hongbiao
    2014, 34(8):  2295-2298.  DOI: 10.11772/j.issn.1001-9081.2014.08.2295

    As a new optimization search algorithm, the Fruit fly Optimization Algorithm (FOA) is widely used in all kinds of optimization problems. In order to overcome its shortcomings of low precision, easily falling into local optima and slow convergence in the later period, a novel FOA based on Cellular Automata (CAFOA) was proposed. CAFOA used cellular evolution rules to select the best individual in each fruit fly's neighborhood during evolution, then applied a random perturbation to the selected individual's location and replaced the pre-evolution location with the neighborhood's, so that the algorithm could obtain a secondary optimized value, jump out of local extrema and continue optimizing. Simulations were conducted on six classical test functions. The experimental results show that the average convergence precision of the proposed algorithm is 10% higher than that of the traditional algorithm, and the average number of iterations needed to reach a stable global optimum is reduced to 870, which demonstrates the effectiveness of the new algorithm.

    Artificial bee colony algorithm based on elite swarm search strategy
    MA Wei SUN Zhengxing
    2014, 34(8):  2299-2305.  DOI: 10.11772/j.issn.1001-9081.2014.08.2299

    The Artificial Bee Colony (ABC) algorithm suffers from slow convergence, low solution precision and a tendency to fall into local optima. In the proposed method, scout bees first explored the food sources with a random motivation; as the colony's foraging proceeded, an elite swarm was constructed to guide the colony toward better solutions. Hence, a continuous optimization algorithm based on an elite swarm search strategy was proposed, which simulates the foraging behavior of scout bees. The search mechanism of the algorithm was enhanced by constructing the elite swarm strategy, improving the scout bee search mechanism and selecting the best solution by objective function value. The numerical results show that the proposed algorithm has high search precision and success rate as well as fast convergence, and it is also suitable for solving high-dimensional optimization problems.

    Hybrid particle swarm optimization algorithm with cooperation of multiple particle roles
    WU Yiting DAI Mingyue JI Zhicheng WU Dinghui
    2014, 34(8):  2306-2310.  DOI: 10.11772/j.issn.1001-9081.2014.08.2306

    Concerning the problem that Particle Swarm Optimization (PSO) easily falls into local minima and converges slowly in the later stage, a hybrid PSO algorithm with cooperation of multiple particle roles (MPRPSO) was proposed. The concept of particle roles was introduced to divide the population into three roles: Exploring Particles (EP), Patrolling Particles (PP) and Local Exploiting Particles (LEP). In each iteration, the EPs searched the solution space by the standard PSO algorithm; the chaos-based PPs strengthened the global search capability and replaced some EPs to restore population vitality when the algorithm was trapped in a local optimum; finally, the LEPs strengthened the local search to accelerate convergence by one-dimensional asynchronous neighborhood search. In 30 independent runs, the proposed algorithm with a particle role ratio of 0.8:0.1:0.1 achieved mean values of 2.352E-72, 4.678E-29, 7.780E-14 and 2.909E-14 on Sphere, Rosenbrock, Ackley and Quadric respectively, and converged to the optimal solution of 0 on Rastrigin and Griewank, outperforming the other contrastive algorithms. The experimental results show that the proposed algorithm improves optimization performance with certain robustness.

    Data quality assessment of Web article content based on simulated annealing
    HAN Jingyu CHEN Kejia
    2014, 34(8):  2311-2316.  DOI: 10.11772/j.issn.1001-9081.2014.08.2311

    The existing Web quality assessment approaches rely on trained models and users' interactions, which cannot meet the requirements of online response, nor can they capture the semantics of Web content. Therefore, a data Quality Assessment based on Simulated Annealing (QASA) method was proposed. Firstly, the relevant space of the target article was constructed by collecting topic-relevant articles on the Web, and open information extraction was employed to extract the articles' facts. Secondly, Simulated Annealing (SA) was employed to construct the baselines of the two most important quality dimensions, namely accuracy and completeness. Finally, the data quality dimensions were quantified by comparing the facts of the target article with those of the dimension baselines. The experimental results show that QASA finds near-optimal solutions within the time window while achieving comparable, or even 10 percent higher, accuracy with regard to related works. The QASA method can precisely grasp data quality in real time, which caters to the online identification of high-quality Web articles.
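    A generic simulated-annealing skeleton of the kind used for baseline construction is sketched below: a candidate is perturbed, and worse moves are accepted with probability exp(-delta/T) under a geometric cooling schedule. The energy function, neighbor move and schedule are illustrative assumptions.

        import math, random

        def anneal(state, energy, neighbor, t0=1.0, cooling=0.95, steps=500):
            best = cur = state
            t = t0
            for _ in range(steps):
                cand = neighbor(cur)
                delta = energy(cand) - energy(cur)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    cur = cand                            # accept (always if better)
                    if energy(cur) < energy(best):
                        best = cur
                t *= cooling                              # cool down
            return best

        # toy run: find the integer x minimizing (x - 3)^2
        print(anneal(0, lambda x: (x - 3) ** 2,
                     lambda x: x + random.choice([-1, 1])))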

    Sentiment analysis for goods evaluation based on text classification
    ZHONG Jiang YANG Siyuan SUN Qigan
    2014, 34(8):  2317-2321.  DOI: 10.11772/j.issn.1001-9081.2014.08.2317

    To improve recognition efficiency while accurately determining the emotional tendency of goods evaluations, a text classification approach based on Matrix Projection (MP) and Normalized Vector (NLV) was proposed to realize sentiment analysis of goods evaluations. Firstly, the approach extracted the feature words of goods evaluations by matrix projection. It then computed the average Feature Frequency (FF) of the feature words in each category, and obtained a normalized vector by normalizing each category's feature frequencies with the Normalized Function (NLF). Finally, it predicted the sentiment tendency by comparing the similarity between the feature vector of a goods evaluation and the normalized vector of each category. The experimental results show that, compared with the k-Nearest Neighbors (kNN), Naive Bayesian (NB) and Support Vector Machine (SVM) algorithms, the proposed approach has higher prediction accuracy and classification speed. Its advantage over kNN is especially obvious: its macro-average F1 value is more than 12% higher than that of kNN and its classification time is reduced by 11/12; compared with SVM, its speed is greatly improved.

    Trustworthy sort method for shopping customer reviews based on correlation degree with product features
    HUANG Tingting ZENG Guosun XIONG Huanliang
    2014, 34(8):  2322-2327.  DOI: 10.11772/j.issn.1001-9081.2014.08.2322

    On e-commerce websites, massive disordered shopping reviews can leave consumers lost and unable to tell which reviews to trust. Therefore, a trustworthy sort method for customer reviews was proposed. Firstly, focusing on commercial advertising information on websites and on whether the contents of online customer reviews are closely related to the product's functional properties, an algorithm for extracting a product's key features from shopping websites based on the HTML script format was designed, and a method for extracting customer review features based on natural language processing was presented. Secondly, word-similarity techniques were used to analyze the correlation degree between product features and review contents, and a computational method for the trust degree of shopping customer reviews was proposed. Finally, analysis of the method on an example shows that it achieves a trustworthy sort of large numbers of online customer reviews, so customers need not browse all reviews to judge which are trustworthy or of real reference value, which decreases information search costs and improves decision-making efficiency.

    Collaborative filtering recommendation method integrating social tags and users' background information
    JIANG Sheng WANG Zhong-qun XIU Yu HUANG Subin
    2014, 34(8):  2328-2331.  DOI: 10.11772/j.issn.1001-9081.2014.08.2328

    To address the data sparsity and low recommendation precision of the traditional Collaborative Filtering (CF) recommendation algorithm, a new CF recommendation method integrating social tags and users' background information was proposed. Firstly, the similarities based on different social tags and on different items of users' background information were calculated respectively. Secondly, the similarities of users' ratings were calculated. Finally, these three similarities were integrated into an overall similarity between users, which was used to make item recommendations for target users. The experimental results show that, compared with the traditional CF recommendation algorithm, the Mean Absolute Error (MAE) of the proposed algorithm is reduced by 16% and 22.6% on the normal dataset and the cold-start dataset respectively. The new method not only improves the accuracy of the recommendation algorithm, but also alleviates the data sparsity and cold-start problems.

    Microblog bursty topic detection based on topic tree
    QIU Yunfei GUO Milun SHAO Liangshan
    2014, 34(8):  2332-2335.  DOI: 10.11772/j.issn.1001-9081.2014.08.2332

    A topic tree detection method based on the Latent Dirichlet Allocation (LDA) model was put forward to solve problems in microblog texts that traditional detection methods cannot handle: nonstandard terms, randomness, uncertain references and large numbers of network terms. Relevant microblogs were reorganized into a topic tree by increasing information entropy in Natural Language Processing (NLP), combined with the design idea that the Dirichlet prior empirical values α and β vary with the topic number; the contribution of every word in the text was then computed using the model's dual probability statistical method, so that interference information could be disposed of in advance and the influence of garbage data on topic detection excluded. Using these contributions as the parameter values of an improved Vector Space Model (VSM), bursty topics were extracted by calculating the similarity between texts, in order to improve detection precision. The proposed method was evaluated in two ways: comparison of F values and manual detection. The experimental data show that the algorithm not only detects bursty topics, but also improves precision by about 3% and 7% compared with the HowNet model and the TF-IDF (Term Frequency-Inverse Document Frequency) algorithm respectively, and its results accord better with human logical judgment than the traditional methods.

    Lightweight privacy-preserving data aggregation algorithm
    CHEN Yanli FU Chunjuan XU Jian YANG Geng
    2014, 34(8):  2336-2341.  DOI: 10.11772/j.issn.1001-9081.2014.08.2336

    Private data are vulnerable to attacks on their confidentiality, integrity and freshness. To resolve this problem, a secure data aggregation algorithm based on a homomorphic Hash function, called HPDA (High-efficiency Privacy-preserving Data Aggregation), was proposed. Firstly, it used a homomorphic encryption scheme to provide data privacy preservation. Secondly, it adopted a homomorphic Hash function to verify the integrity and freshness of the aggregated data. Finally, it reduced the communication overhead of the system through an improved ID transmission mechanism. The theoretical analyses and simulation results show that HPDA can effectively preserve data confidentiality, check data integrity, satisfy data freshness requirements, and incur low communication overhead.

    Security analysis and improvement of certificateless signature scheme
    PAN Aiwan SHEN Yuan ZHAO Weiting
    2014, 34(8):  2342-2344.  DOI: 10.11772/j.issn.1001-9081.2014.08.2342

    By analyzing the security of the certificateless signature scheme without bilinear pairing proposed by Wang et al. (WANG Y, DU W. Security analysis and improvement of certificateless signature scheme without bilinear pairing. Journal of Computer Applications, 2013, 33(8): 2250-2252), it was pointed out that the scheme cannot resist forgery attacks, and an improved scheme was proposed. The improved scheme strengthens the relationship among the parameters in the signature algorithm to resist forgery attacks. The security analysis shows that the improved scheme is existentially unforgeable against adaptive chosen message and identity attacks in the random oracle model; it is also more efficient than existing schemes because it avoids bilinear pairings and inverse operations.

    Construction method of virtual position in process of cross-domain access control based on organization based 4 levels access control model
    PENG You SONG Yan JU Hang WANG Yanzhang
    2014, 34(8):  2345-2349.  DOI: 10.11772/j.issn.1001-9081.2014.08.2345

    For the problem in the Organization Based 4 Levels Access Control (OB4LAC) model of how to build virtual positions from the permission sets requested by users in other domains, a detailed process consisting of three stages was proposed: searching the role sets matching the required permissions, determining Separation of Duty (SoD) and activation constraints, and creating and revoking the virtual position. For the role-set searching stage, three searching algorithms were given to match three different cases: complete matching, available matching and least-privilege matching. For the constraint determination stage, three kinds of matrixes were defined, namely the Separation of Duty Matrix (SODM), the Cardinality Constraint Matrix (CCM) and the Anti-connection Inherit Matrix (AIM), and the constraint problems were solved on the basis of these matrixes and the corresponding process. For the creation and revocation stage, the management functions required to complete the process were given. Through these specific processes and realization algorithms, the problem of building virtual positions for the OB4LAC model in multi-domain environments was resolved.

    Secure protection mechanism for network-coding-based data transmission combining digital watermarking, stack shuffle and message authentication code
    ZHU Xinpei KOU Yingzhan WANG Zhanyu
    2014, 34(8):  2350-2355.  DOI: 10.11772/j.issn.1001-9081.2014.08.2350
    Abstract ( )   PDF (924KB) ( )
    References | Related Articles | Metrics

    To improve the integrity, confidentiality and privacy of network-coding-based data transmission, a secure protection mechanism combining digital watermarking, stack shuffle and Message Authentication Code (MAC) was proposed. In this mechanism, confidentiality and privacy were provided by mixing up messages with exclusive-OR (XOR) encryption and stack shuffle; confidentiality was further enhanced by randomly inserting MACs into the mixed messages with a digital watermarking technique; and integrity was provided by checking the MACs on intermediate nodes during transmission. The simulation results show that the mechanism effectively limits the spread of polluted information (fewer than 1.5 hops), and the collusion probability is less than 0.1 even with 25 colluding attackers and a key pool of size 100. Both theoretical analysis and simulation demonstrate that the proposed mechanism can defend against eavesdropping attacks, flow-analysis attacks and pollution attacks at low cost.
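
    Two of the three ingredients, XOR encryption and per-packet MAC checking at intermediate nodes, can be sketched in a few lines; the watermark embedding and stack-shuffle ordering are omitted, and the toy keystream below is an assumption, not the mechanism's actual cipher.

```python
# Sketch of XOR-encrypting a message and attaching an HMAC so intermediate
# nodes can detect pollution early. Watermarking and stack shuffle omitted.
import hmac, hashlib, secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def protect(msg: bytes, enc_key: bytes, mac_key: bytes):
    pad = hashlib.sha256(enc_key).digest()[: len(msg)]  # toy keystream (<=32 bytes)
    cipher = xor_bytes(msg, pad)
    tag = hmac.new(mac_key, cipher, hashlib.sha256).digest()
    return cipher, tag

def check(cipher: bytes, tag: bytes, mac_key: bytes) -> bool:
    """Run by intermediate nodes: drop polluted packets before forwarding."""
    return hmac.compare_digest(tag, hmac.new(mac_key, cipher, hashlib.sha256).digest())

enc_key, mac_key = secrets.token_bytes(16), secrets.token_bytes(16)
cipher, tag = protect(b"coded block", enc_key, mac_key)
assert check(cipher, tag, mac_key)
assert not check(cipher[:-1] + b"!", tag, mac_key)  # pollution detected
```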

    Personalized privacy preservation against sensitivity homogeneity attack in location-based services
    WU Lei PAN Xiao PU Chunhui LI Zhanping
    2014, 34(8):  2356-2360.  DOI: 10.11772/j.issn.1001-9081.2014.08.2356
    Abstract ( )   PDF (772KB) ( )
    References | Related Articles | Metrics

    Existing privacy preservation methods in location-based services focus only on protecting user location and identity information, which leaves cloaking sets open to the sensitivity homogeneity attack when the queries in a cloaking set are all sensitive. To solve this problem, a personalized (k,p)-sensitive anonymization model was presented, and on this basis a pruning tree-based cloaking algorithm called PTreeCA was proposed. A tree-type index in a spatial database has two useful features: mobile users are roughly partitioned into groups according to their locations, and aggregate information can be stored in the intermediate nodes. By exploiting these two features, PTreeCA can find the cloaking set from the leaf node containing the query user and its sibling nodes, which improves the efficiency of the anonymization algorithm. The efficiency and effectiveness of PTreeCA were validated by a series of experiments on simulated and real data sets: the average success rate is 100%, and the average cloaking time is only about 4 ms. The experimental results show that PTreeCA is effective in terms of success rate, cloaking time and anonymization cost when the privacy requirement levels are low or medium.
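
    Assuming that k lower-bounds the cloaking-set size and p upper-bounds the fraction of any single sensitive query type (the paper's formal definition may differ in detail), the feasibility test that a cloaking algorithm like PTreeCA would run on each candidate set looks roughly as follows.

```python
# Plausible check for a (k, p)-sensitive cloaking set: at least k users,
# and no single sensitive query type exceeds fraction p of the set.
from collections import Counter

def is_kp_sensitive(users, k: int, p: float) -> bool:
    """users: list of (user_id, query_type) pairs in a candidate cloaking set."""
    if len(users) < k:
        return False
    counts = Counter(q for _, q in users)
    return max(counts.values()) / len(users) <= p

cloak = [(1, "clinic"), (2, "bar"), (3, "clinic"), (4, "cinema")]
print(is_kp_sensitive(cloak, k=4, p=0.5))  # True: no query type exceeds 50%
```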

    Quick tampering detection and recovery algorithm based on reversible watermark in medical images
    LIU Dingjun CHEN Zhigang DENG Xiaohong
    2014, 34(8):  2361-2364.  DOI: 10.11772/j.issn.1001-9081.2014.08.2361
    Abstract ( )   PDF (801KB) ( )
    References | Related Articles | Metrics

    Aiming at the low efficiency of tampering detection and the low accuracy of tamper localization, a medical image tampering detection and recovery method based on reversible watermarking and quad-tree decomposition was proposed. The algorithm achieves higher accuracy and faster tamper localization by exploiting the hierarchical structure of the quad-tree decomposition of medical images, and it uses the diagonal pixel mean of each block as the recovery feature value, which ensures the recovery quality of tampered images. The experimental results show that, compared with existing methods, the proposed algorithm reduces the number of comparisons for locating a tampered region to about 6.7 for 512×512 images and improves the tampering detection accuracy by about 5%.
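
    The pruning logic that keeps the number of comparisons low can be sketched as a recursive quad-tree descent that only enters quadrants whose feature values disagree. The sketch below uses the full block mean as the feature for brevity, whereas the paper uses the diagonal pixel mean.

```python
# Sketch of quad-tree tamper localization: recursively compare block
# feature values and only descend into quadrants that differ, which is
# what keeps the number of comparisons low. Assumes square images.
import numpy as np

def locate_tampered(orig, test, x=0, y=0, size=None, min_block=8, tol=1e-6):
    size = size or orig.shape[0]
    a = orig[y:y+size, x:x+size]
    b = test[y:y+size, x:x+size]
    if abs(a.mean() - b.mean()) <= tol:
        return []                      # whole block matches: prune subtree
    if size <= min_block:
        return [(x, y, size)]          # smallest localized tampered block
    h = size // 2
    hits = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        hits += locate_tampered(orig, test, x+dx, y+dy, h, min_block, tol)
    return hits

img = np.zeros((64, 64))
tampered = img.copy()
tampered[40:44, 8:12] = 1.0            # simulated tampering
print(locate_tampered(img, tampered))  # [(8, 40, 8)]
```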

    Best fusion method of hyperspectral and panchromatic imagery based on Earth Observing-1 satellite
    LIN Zhilei YAN Luming
    2014, 34(8):  2365-2370.  DOI: 10.11772/j.issn.1001-9081.2014.08.2365
    Abstract ( )   PDF (1090KB) ( )
    References | Related Articles | Metrics

    Limited by imaging principles, manufacturing technology and other factors, the spatial resolution of spaceborne hyperspectral remote sensing imagery is relatively low. This paper therefore investigated the fusion of hyperspectral imagery with high-spatial-resolution imagery and sought the best fusion algorithm for enhancing the spatial resolution of hyperspectral remote sensing imagery. According to the characteristics of Earth Observing-1 (EO-1) Hyperion hyperspectral imagery and Advanced Land Imager (ALI) panchromatic imagery, 4 of 9 remote sensing image fusion algorithms were selected for a comparative study on city and mountain regions: the Gram-Schmidt spectral sharpening method, the Smoothing Filter-based Intensity Modulation (SFIM) transform method, the Weighted Average Method (WAM) and the Wavelet Transformation (WT) method. The fusion results were comprehensively evaluated from three aspects, namely qualitative assessment, quantitative assessment and classification precision, to determine the best fusion method for EO-1 hyperspectral and panchromatic imagery. The experimental results show that: 1) in terms of fusion quality, the Gram-Schmidt spectral sharpening method is the best of the four methods; 2) in terms of classification, results based on the fused image are better than those based on the source image. Both the theoretical analysis and the experimental results show that the Gram-Schmidt spectral sharpening method is an ideal fusion algorithm for hyperspectral and high-spatial-resolution imagery, and it provides powerful support for improving the clarity of hyperspectral remote sensing imagery as well as the reliability and accuracy of object recognition and classification.
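
    Of the four compared methods, SFIM has a particularly compact formulation: each hyperspectral band is modulated by the ratio of the panchromatic image to its low-pass version. A minimal sketch follows, with the filter size as an assumed parameter and synthetic data standing in for EO-1 imagery.

```python
# Minimal sketch of SFIM fusion, one of the four compared methods:
# fused = hyperspectral_band * pan / lowpass(pan).
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_fuse(hs_band, pan, size=7, eps=1e-6):
    """hs_band: upsampled hyperspectral band; pan: co-registered panchromatic."""
    pan_low = uniform_filter(pan.astype(float), size=size)
    return hs_band * pan.astype(float) / (pan_low + eps)

rng = np.random.default_rng(0)
pan = rng.random((64, 64))
hs = uniform_filter(pan, size=9) + 0.05 * rng.random((64, 64))
fused = sfim_fuse(hs, pan)
print(fused.shape)  # (64, 64): spatial detail injected, spectral ratio kept
```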

    Design of 3D visual odometry based on Kinect
    WANG Yalong ZHANG Qizhi ZHOU Yali
    2014, 34(8):  2371-2374.  DOI: 10.11772/j.issn.1001-9081.2014.08.2371
    Abstract ( )   PDF (787KB) ( )
    References | Related Articles | Metrics

    Aiming at the problem of 3D trajectory estimation for mobile service robots in unknown environments, this paper proposed a framework that uses a Kinect sensor to estimate the motion trajectory of a mobile robot in real time. RGB-D information of successive frames was captured by the Kinect. Firstly, Speeded Up Robust Feature (SURF) points of the target frame and the reference frame were extracted and matched; secondly, an initial 6-Degree-Of-Freedom (DOF) pose estimate was computed by a novel solution to the classical Perspective-3-Point (P3P) problem together with an improved Random Sample Consensus (RANSAC) algorithm that incorporates depth information; lastly, the pose estimate was refined by minimizing the reprojection error of the inliers with a nonlinear least-squares solver, from which the motion trajectory of the robot was obtained. The experimental results show that the proposed approach reduces the odometry error to 3.1% in real time and can provide important prior information for simultaneous localization and mapping.
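
    The inlier-selection logic of RANSAC, which the pose estimation relies on, is independent of the specific minimal solver. The skeleton below therefore fits a 2D line rather than solving P3P; the sampling loop, inlier counting and final refit on all inliers are the transferable parts.

```python
# Generic RANSAC skeleton: sample a minimal set, fit, count inliers,
# keep the best model, then refit on all inliers. The line-fitting
# solver here stands in for the P3P pose solver.
import numpy as np

def ransac(points, fit, residual, n_min, n_iter=200, thresh=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), n_min, replace=False)]
        model = fit(sample)
        inliers = residual(model, points) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit(points[best_inliers]), best_inliers  # refit on all inliers

fit_line = lambda P: np.polyfit(P[:, 0], P[:, 1], 1)
resid = lambda m, P: np.abs(np.polyval(m, P[:, 0]) - P[:, 1])

x = np.linspace(0, 1, 100)
pts = np.column_stack([x, 2 * x + 1])
pts[::10, 1] += 5                    # gross outliers
model, inliers = ransac(pts, fit_line, resid, n_min=2)
print(np.round(model, 2))            # close to [2. 1.]
```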

    Fast intra prediction algorithm for high efficiency video coding
    XU Dongxu LIN Qiwei
    2014, 34(8):  2375-2379.  DOI: 10.11772/j.issn.1001-9081.2014.08.2375
    Abstract ( )   PDF (740KB) ( )
    References | Related Articles | Metrics

    To further reduce the high computational complexity of High Efficiency Video Coding (HEVC) intra prediction, a novel fast algorithm was proposed. First, at the Coding Unit (CU) level, the minimum Sum of Absolute Transformed Difference (SATD) of the current CU was used for early termination of the CU split at each depth level: if the minimum SATD of the CU is smaller than a given threshold, the split is terminated early. Meanwhile, based on statistical analysis, the probability of each candidate prediction mode being optimal was used to further prune candidate modes that have almost no chance of being selected as the best mode. The experimental results show that the proposed algorithm saves an average of 30.5% of encoding time with negligible loss of coding efficiency (only 0.02 dB Y-PSNR (Y-Peak Signal-to-Noise Ratio) loss) compared with the reference model HM10.1. Besides, the proposed algorithm lends itself to software and hardware implementation and can easily be combined with other methods to further reduce the computational complexity of HEVC intra coding.
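
    The early-termination test can be sketched directly: compute the SATD of a residual block with a Hadamard transform and stop splitting when it falls below a threshold. The threshold and normalization below are assumptions for illustration, not values from the paper.

```python
# Sketch of the CU-level early termination: SATD of an 8x8 residual block
# via a Hadamard transform, compared against a split threshold.
import numpy as np
from scipy.linalg import hadamard

def satd(block):
    n = block.shape[0]
    h = hadamard(n)
    return np.abs(h @ block @ h.T).sum() / n

def split_cu(residual, thresh=500.0):
    """Return False to terminate the quad-tree split early."""
    return satd(residual) >= thresh

resid = np.random.default_rng(1).integers(-3, 4, (8, 8))
print(satd(resid), "split?", split_cu(resid))
```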

    Sparse tracking algorithm based on multi-feature fusion
    HU Shaohua XU Yuwei ZHAO Xiaolei HE Jun
    2014, 34(8):  2380-2384.  DOI: 10.11772/j.issn.1001-9081.2014.08.2380
    Abstract ( )   PDF (927KB) ( )
    References | Related Articles | Metrics

    This paper proposed a novel sparse tracking method based on multi-feature fusion to compensate for the incomplete description given by a single feature. Firstly, multiple feature descriptors of the dictionary templates and candidate particles were encoded as kernel matrices to fuse the features. Secondly, every candidate particle was sparsely represented as a linear combination of all dictionary atoms, and the sparse representation model was solved efficiently by a Kernelizable Accelerated Proximal Gradient (KAPG) method. Lastly, within the particle filter framework, the particle weights were determined by the sparse-coefficient reconstruction errors to realize tracking. During tracking, a template update strategy based on incremental subspace learning was adopted. The experimental results show that, compared with related state-of-the-art methods, the algorithm improves tracking accuracy under occlusion, illumination change, pose change, background clutter and viewpoint variation.
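
    The sparse coding step solved by KAPG is, in its plain linear form, an l1-regularized least-squares problem. The sketch below implements a standard accelerated proximal gradient (FISTA-style) solver for that linear case; the kernelized variant used in the paper replaces the data-matrix products with kernel matrices.

```python
# Accelerated proximal gradient (FISTA-style) for l1 sparse coding;
# kernelization omitted, so this is the plain linear case only.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_lasso(D, y, lam=0.1, n_iter=200):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(D.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((50, 100))         # dictionary of 100 atoms
x_true = np.zeros(100)
x_true[[3, 40]] = 1.5
code = apg_lasso(D, D @ x_true, lam=0.05)
print(np.nonzero(np.abs(code) > 0.5)[0])   # expected to recover atoms 3 and 40
```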

    Automatic coding algorithm based on color structured light
    WANG Yong RAO Qinfei TANG Jing YUAN Chaoyan
    2014, 34(8):  2385-2389.  DOI: 10.11772/j.issn.1001-9081.2014.08.2385
    Abstract ( )   PDF (779KB) ( )
    References | Related Articles | Metrics

    As the objects measured by grating-projection 3D profilometry become more and more complex, the extracted thinned grating stripes contain a large number of breaks, which makes stripe encoding very difficult. An automatic coding algorithm based on color structured light was therefore proposed. This paper designed a new color structured light model, introduced its design principle, and implemented a new automatic stripe coding algorithm. First, the thinned grating stripes with color information were extracted from the color structured grating. Then, the thinned stripes of each color were encoded in order by selecting the best connected domain. Finally, the stripe coding of the whole image was obtained by combining the codes according to the periodicity of the grating model. The simulation results show that the color structured light model is simple in design, the automatic stripe coding algorithm has high accuracy, and the error is reduced to 10 percent; an ideal 3D point cloud model can be reconstructed from the stripe-coded data.

    Distortion-driven cross-layer optimization for video transmission over 802.11e
    WU Weimin TAN Juan DUAN Ping
    2014, 34(8):  2390-2393.  DOI: 10.11772/j.issn.1001-9081.2014.08.2390
    Abstract ( )   PDF (643KB) ( )
    References | Related Articles | Metrics

    Distortion of H.264 video over 802.11e is caused jointly by transmission packet loss and encoder quantization. To tackle this issue, this paper proposed a distortion-driven cross-layer video transmission optimization. The relationship between the Quantization Parameter (QP) and quantization distortion was first obtained from a rate-distortion model; then, according to the loss rates of the video data partitions, the transmission distortion and the total distortion at the receiver were estimated, and a selection algorithm for the optimal quantization parameter based on the total distortion was presented. The experimental results show that, compared with top-down cross-layer optimization using different queue priorities for video data partitions and bottom-up cross-layer optimization with adaptive quantization parameter selection, the proposed method gains 1-2 dB in average Peak Signal-to-Noise Ratio (PSNR) and yields less distortion at the receiver.
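
    The selection algorithm reduces to an argmin over candidate QP values of the estimated total distortion. In the sketch below, the quantization and transmission distortion models are crude placeholders standing in for the paper's rate-distortion model; only the selection structure is the point.

```python
# Sketch of optimal-QP selection: pick the QP minimizing the estimated
# total distortion (quantization + transmission). Models are placeholders.
def quant_distortion(qp: int) -> float:
    return 0.6 * 2 ** ((qp - 20) / 6)           # assumed monotone model

def trans_distortion(qp: int, loss_rate: float) -> float:
    rate = 4000 / 2 ** ((qp - 20) / 6)          # coarser QP -> lower bitrate
    return loss_rate * rate * 0.01              # fewer packets -> fewer losses

def best_qp(loss_rate: float, qp_range=range(20, 45)):
    return min(qp_range, key=lambda qp: quant_distortion(qp)
                                        + trans_distortion(qp, loss_rate))

for p in (0.0, 0.05, 0.2):
    print(p, best_qp(p))   # higher loss rates push the optimum to coarser QP
```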

    Gaussian weighted multiple classifiers for object tracking
    LAN Yuandong DENG Huifang CAI Zhaoquan YANG Xiong
    2014, 34(8):  2394-2398.  DOI: 10.11772/j.issn.1001-9081.2014.08.2394
    Abstract ( )   PDF (977KB) ( )
    References | Related Articles | Metrics

    When the appearance of an object changes rapidly, most weak learners cannot capture the new feature distributions, which leads to tracking failure. To deal with this issue, a Gaussian weighted online multiple-classifier boosting algorithm for object tracking was proposed. The algorithm defined, for each domain problem, a weak classifier consisting of a simple visual feature and a threshold, and introduced a Gaussian weighting function to weigh each weak classifier's contribution on a particular sample, so that tracking performance was improved through the joint learning of multiple classifiers. During tracking, the online multiple classifiers can not only determine the location and estimate the pose of the object simultaneously, but also learn multi-modal appearance models and track the object under rapid appearance changes. The experimental results show that, after a short initial training phase, the average tracking error rate of the proposed algorithm is 12.8%, demonstrating that tracking performance is enhanced significantly.
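
    The Gaussian weighting idea can be sketched as follows: each weak classifier votes with a weight that decays with the sample's distance from the region the classifier models, so each one acts as a local expert. Feature values, centers and the decision rule here are illustrative assumptions.

```python
# Sketch of Gaussian weighting: each weak classifier's vote is scaled by
# how close the sample is to the classifier's own region of expertise.
import numpy as np

class WeakClassifier:
    def __init__(self, center, threshold, sigma=1.0):
        self.center, self.threshold, self.sigma = center, threshold, sigma

    def weight(self, x):
        """Gaussian weighting function of distance to the classifier center."""
        return np.exp(-np.sum((x - self.center) ** 2) / (2 * self.sigma ** 2))

    def predict(self, x):
        return 1 if x[0] > self.threshold else -1

def ensemble_predict(classifiers, x):
    score = sum(c.weight(x) * c.predict(x) for c in classifiers)
    return 1 if score > 0 else -1

clfs = [WeakClassifier(np.array([0.0, 0.0]), 0.5),
        WeakClassifier(np.array([5.0, 5.0]), 4.0)]
print(ensemble_predict(clfs, np.array([5.2, 4.9])))  # nearby expert dominates
```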

    Semi-supervised support vector machine for image classification based on mean shift
    WANG Shuochen WANG Xili MA Junli
    2014, 34(8):  2399-2403.  DOI: 10.11772/j.issn.1001-9081.2014.08.2399
    Abstract ( )   PDF (845KB) ( )
    References | Related Articles | Metrics

    The Semi-Supervised Support Vector Machine using label mean (meanS3VM) for image classification selects a small number of unlabeled instances at random to train the classifier, which yields low classification accuracy, and the determination of its parameters often causes large oscillations in the results. To address these problems, a meanS3VM image classification method based on mean shift was proposed. The smoothed image obtained by mean shift was used as the original image for segmentation to reduce the diversity of image features; one instance in each smoothed region was randomly selected as an unlabeled instance, ensuring that it carried information useful for classification and produced a more efficient classifier. The parameter values were also investigated and improved: grid search was used for the sensitive parameters, and the parameter ep was estimated by combining the Support Vector Machine (SVM) and mean shift results, giving better and more stable results. The experimental results indicate that the classification rate of the proposed method on ordinary and noisy images can be increased by more than 1% and 5% on average respectively, with higher efficiency, while effectively avoiding oscillation of the results, so it is suitable for image classification.
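
    The sampling idea can be sketched briefly: smooth the data with mean shift, then draw one unlabeled instance per smoothed region so that every region contributes useful information to the semi-supervised SVM. The data below are synthetic stand-ins for image pixels.

```python
# Sketch: mean-shift the feature space, then pick one unlabeled
# representative per region for meanS3VM training.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(7)
pixels = np.vstack([rng.normal(0, 0.3, (50, 3)),
                    rng.normal(2, 0.3, (50, 3))])   # two smooth regions
labels = MeanShift(bandwidth=1.0).fit(pixels).labels_

unlabeled_pool = [int(rng.choice(np.flatnonzero(labels == r)))
                  for r in np.unique(labels)]
print(unlabeled_pool)   # one representative pixel index per region
```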

    Micro-blog information diffusion effect based on behavior analysis
    QI Chao CHEN Hongchang YU Yan
    2014, 34(8):  2404-2408.  DOI: 10.11772/j.issn.1001-9081.2014.08.2404
    Abstract ( )   PDF (854KB) ( )
    References | Related Articles | Metrics

    Research on the dissemination effect of micro-blog messages plays an important role in improving marketing, strengthening public-opinion monitoring and discovering hotspots accurately. Focusing on the differences between individuals, which previous work did not consider, this paper proposed a method of predicting the scale and depth of retweeting based on behavior analysis. A predictive model of retweet behavior was built with the Logistic Regression (LR) algorithm using nine features extracted from users, relationships and content. Based on this model, the proposed method followed the way information disseminates along users and applied iterative statistical analysis to adjacent users step by step. The experimental results on a Sina micro-blog dataset show that the accuracy of scale and depth prediction reaches approximately 87.1% and 81.6% respectively, which predicts the dissemination effect well.
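
    The predictive core is a standard logistic regression over the nine features. A minimal sketch with assumed feature names and synthetic data is given below; the paper's actual features and dataset differ.

```python
# Minimal sketch of the retweet-behavior predictor: logistic regression
# over nine user/relationship/content features (names are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["followers", "friends", "posts", "mutual", "interact_freq",
            "similarity", "has_url", "has_hashtag", "content_len"]

rng = np.random.default_rng(3)
X = rng.random((500, len(FEATURES)))
y = (0.8 * X[:, 0] + 0.6 * X[:, 4] + 0.3 * rng.random(500) > 0.8).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba(X[:1])[0, 1])  # P(user retweets the message)
```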

    Real-time trajectory simplification algorithm of moving objects based on minimum bounding sector
    WANG Xinran YANG Zhiying
    2014, 34(8):  2409-2414.  DOI: 10.11772/j.issn.1001-9081.2014.08.2409
    Abstract ( )   PDF (981KB) ( )
    References | Related Articles | Metrics

    To improve the efficiency of trajectory-data applications and reduce the communication cost and the computational overhead of mobile terminals, the raw trajectory data of moving objects collected by Global Positioning System (GPS) equipment must be simplified. A real-time trajectory simplification method for moving objects based on the Minimum Bounding Sector (MBS) was proposed. Unlike algorithms that approximate the original trajectory with a polygonal line, it adopts a sector to predict the moving range, which allows the original trajectory to be estimated and simplified. To control the simplification error efficiently, an identical-polar-radius error metric was proposed based on the characteristics of sector angle and distance. In addition, the effect of GPS positioning error on the simplification algorithm was discussed. The experimental results show that the simplified trajectory produced by the proposed algorithm is efficient and stable, with a small error compared with the original trajectory (no more than 20% of the error threshold) and good tolerance of GPS positioning error.
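
    The geometric heart of the method is a sector-containment test: a newly collected point that falls inside the sector predicted from the last kept point is redundant and can be dropped. A sketch with assumed angle and radius bounds follows.

```python
# Sketch of the minimum-bounding-sector test: a new GPS point inside the
# predicted sector is simplified away; angle/radius bounds are assumed.
import math

def in_sector(origin, heading, point, max_r, half_angle):
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    r = math.hypot(dx, dy)
    if r > max_r:
        return False
    # signed angular difference wrapped to [-pi, pi]
    diff = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

origin, heading = (0.0, 0.0), math.radians(45)
print(in_sector(origin, heading, (1.0, 1.2), max_r=2.0,
                half_angle=math.radians(15)))   # True: point can be dropped
```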

    Path planning algorithm based on regional-advance strategy for aircraft fuel tank inspection robot
    NIU Guochen ZHANG Weicheng LI Ziwei
    2014, 34(8):  2415-2418.  DOI: 10.11772/j.issn.1001-9081.2014.08.2415
    Abstract ( )   PDF (528KB) ( )
    References | Related Articles | Metrics

    To obtain a path for a continuum robot in an environment such as an aircraft fuel tank, a path planning algorithm based on a regional-advance strategy was proposed. Combined with the mechanical constraints of the robot, the method ensures that arbitrary points in a single cabin can be reached. Although the hyper-redundant degrees of freedom of the continuum robot provide flexible movement, they also lead to multiple path solutions in three-dimensional space and high time complexity. An approach based on dimension reduction, which transforms planning in three-dimensional space into planning in a two-dimensional plane, was therefore presented to reduce the computational complexity. The single cabin of the aircraft fuel tank was divided into two regions, and the planning strategy was determined by the region in which the target point lies. Finally, Matlab simulation experiments were carried out, verifying the practicability and effectiveness of the proposed method.

    Optimization model and algorithm for production order acceptance problem of hot-rolled bar
    BAI Liang WANG Lei
    2014, 34(8):  2419-2423.  DOI: 10.11772/j.issn.1001-9081.2014.08.2419
    Abstract ( )   PDF (779KB) ( )
    References | Related Articles | Metrics

    Considering the influence of earliness and reworking penalties, the production order acceptance problem of hot-rolled bar was studied, and a mathematical model with the objective of maximizing the gross profit of orders was proposed. A hybrid algorithm combining an improved NEH (Nawaz-Enscore-Ham) algorithm with a Modified Harmony Search (MHS) algorithm was proposed for the model. Taking the constraints of the model into account, an initial solution was generated by the improved NEH algorithm and further optimized by the MHS algorithm. Furthermore, the idea of Teaching-Learning-Based Optimization (TLBO) was introduced into the selection and updating of harmony vectors to control the acceptance of new solutions, and the parameters were adjusted dynamically to balance the breadth and depth of the search and improve the global optimization ability. Simulation experiments with practical production data show that the proposed algorithm can effectively improve the total profit and acceptance rate, validating the feasibility and effectiveness of the model and algorithm.
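
    The NEH seeding step is compact enough to sketch: order jobs by decreasing total processing time, then insert each job at the position minimizing the partial makespan. The flowshop makespan recursion and toy data below are illustrative; the paper's improved NEH adds problem-specific constraints.

```python
# Sketch of the classic NEH constructive heuristic used to seed the search.
def makespan(seq, p):
    """p[j][m]: processing time of job j on machine m (permutation flowshop)."""
    m = len(p[0])
    c = [0.0] * m                       # completion times per machine
    for j in seq:
        for k in range(m):
            c[k] = max(c[k], c[k - 1] if k else 0.0) + p[j][k]
    return c[-1]

def neh(p):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [jobs[0]]
    for j in jobs[1:]:
        # try every insertion position, keep the best partial schedule
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

p = [[3, 4, 6], [5, 2, 3], [1, 6, 4], [4, 3, 2]]   # 4 jobs x 3 machines
order = neh(p)
print(order, makespan(order, p))
```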

    Application of enhanced multi-objective evolutionary algorithm based on decomposition with differential evolution in configuration of satellite payload
    LI Hui YUAN Wenbing XIONG Muzhou
    2014, 34(8):  2424-2428.  DOI: 10.11772/j.issn.1001-9081.2014.08.2424
    Abstract ( )   PDF (770KB) ( )
    References | Related Articles | Metrics

    To solve the satellite payload configuration problem, a configuration model based on the Enhanced Multi-Objective Evolutionary Algorithm based on Decomposition with Differential Evolution (EMOEA/D-DE) was proposed. The model turns the configuration problem into a Multi-objective Optimization Problem (MOP) that takes the number of satellites and satellite redundancy as the optimization objectives, and solves it with the EMOEA/D-DE algorithm. Furthermore, to overcome the concentration of the population's distribution in objective space caused by purely random uniform initialization, a new random initialization combined with the optimization objectives was introduced. The experimental results show that the solution set obtained by this model has good stability and distribution: the average difference is less than 0.05 and the distribution value is above 0.9. Besides, the improved algorithm nearly doubles the convergence speed, and the approximation of the Pareto front obtained is comparatively better.
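
    Two building blocks of any MOEA/D-DE variant can be sketched briefly: the Tchebycheff decomposition that scalarizes the two objectives for each weight vector, and the DE rand/1 mutation operator. Population bookkeeping and the paper's enhancements are omitted.

```python
# Sketch of two MOEA/D-DE building blocks: Tchebycheff scalarization
# and the DE rand/1 mutation. Not the paper's full enhanced algorithm.
import numpy as np

def tchebycheff(f, lam, z_star):
    """Scalarized fitness of objective vector f for weight vector lam."""
    return np.max(lam * np.abs(np.asarray(f) - z_star))

def de_rand_1(x1, x2, x3, F=0.5, low=0.0, high=1.0):
    """DE rand/1 mutant, clipped to the decision-variable bounds."""
    return np.clip(x1 + F * (x2 - x3), low, high)

lam = np.array([0.7, 0.3])                   # one weight vector of the grid
z_star = np.array([0.0, 0.0])                # ideal point
print(tchebycheff([0.4, 0.9], lam, z_star))  # max(0.28, 0.27) -> 0.28

rng = np.random.default_rng(4)
print(de_rand_1(*rng.random((3, 5))))        # mutant from three parents
```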

    Diagnosis of aluminum reduction cell status based on optimized relative principal component analysis
    HUANG Di LI Taifu YI Jun TIAN Yingfu
    2014, 34(8):  2429-2433.  DOI: 10.11772/j.issn.1001-9081.2014.08.2429
    Abstract ( )   PDF (873KB) ( )
    References | Related Articles | Metrics

    Concerning the problems that the state parameters of aluminum reduction cells are multivariate and strongly coupled, that established diagnosis models require heavy computation and that their diagnostic precision is limited, this paper proposed an Optimized Relative Principal Component Analysis (ORPCA) method to diagnose the status of aluminum reduction cells. An effective principle for determining the relative weights was put forward, which exploits the dimension-reduction ability of Relative Principal Component Analysis (RPCA). In the method, a Genetic Algorithm (GA) was used to optimize a fitness function built on the false alarm rate, and the variation of the sample projections in the principal component space and the residual space was observed to obtain the best relative transformation matrix, so that the false alarm rates of Hotelling's T2 test and the Squared Prediction Error (SPE) test were minimized. Experiments on a group of operating data from a 170 kA aluminum smelter show that, at confidence levels of 95% and 97.5%, the false alarm rates of the T2 test are 16.79% and 9.77% respectively, while those of the SPE test are 4.01% and 1.75% respectively. Compared with similar algorithms, the proposed method can detect abnormal conditions of aluminum reduction cells and obviously reduces the false alarm rates of both the Hotelling's T2 and SPE tests.
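
    The two statistics whose false-alarm rates are reported, Hotelling's T2 in the principal subspace and SPE in the residual subspace, can be computed from an ordinary PCA as below; the relative weighting and GA optimization that distinguish ORPCA are omitted.

```python
# Sketch of the two PCA monitoring statistics: Hotelling's T^2 and SPE,
# fitted on normal operating data. The ORPCA weighting/GA step is omitted.
import numpy as np

def fit_pca(X, n_pc):
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pc].T                       # loading matrix
    var = (s[:n_pc] ** 2) / (len(X) - 1)  # variances of the retained PCs
    return X.mean(0), P, var

def t2_spe(x, mean, P, var):
    xc = x - mean
    t = P.T @ xc                          # scores in the principal subspace
    t2 = np.sum(t ** 2 / var)             # Hotelling's T^2
    spe = np.sum((xc - P @ t) ** 2)       # squared prediction error (residual)
    return t2, spe

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 6))  # correlated data
mean, P, var = fit_pca(X, n_pc=3)
print(t2_spe(X[0], mean, P, var))
```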

    New NAND device management solution with high storage density
    WEI Bing GUO Yutang SONG Jie ZHANG Lei
    2014, 34(8):  2434-2437.  DOI: 10.11772/j.issn.1001-9081.2014.08.2434
    Abstract ( )   PDF (619KB) ( )
    References | Related Articles | Metrics

    Focusing on the problem of low storage density in embedded systems, a new NAND device management solution with high storage density was proposed. In the solution, a generalized model of the information structure in a NAND page was designed based on a study of a large number of NAND storage structures and BCH (Bose-Chaudhuri-Hocquenghem) parity coding schemes. In this model, the data layout in the Out-Of-Band (OOB) area provides Error Correcting Code (ECC) capability while also accommodating the device management information of a partition, so the main page area can be used entirely for data storage; this serves as the basis for the device read-write scheme and the wear-leveling mechanism. The experimental results show that the proposed solution improves storage density up to 98%, superior to most current common file systems. With excellent data storage density as well as relatively stable read-write efficiency and Program/Erase (P/E) endurance, the solution has good application prospects in embedded systems.

    Bearing fault diagnosis method based on dual singular value decomposition and least squares support vector machine
    LI Kui FAN Yugang WU Jiande
    2014, 34(8):  2438-2441.  DOI: 10.11772/j.issn.1001-9081.2014.08.2438
    Abstract ( )   PDF (738KB) ( )
    References | Related Articles | Metrics

    To solve the problem that the number of singular values retained by Singular Value Decomposition (SVD) differs across signals and affects the accuracy of fault identification, a fault diagnosis method based on dual SVD and Least Squares Support Vector Machine (LS-SVM) was put forward. The proposed method adaptively chooses the effective singular values, using the curvature spectrum of the singular values, to reconstruct the signal; SVD is then applied again to obtain the same number of orthogonal components, whose energy entropy is calculated to construct the feature vector, which is finally fed into the LS-SVM classification model for fault identification. Compared with the method that uses a limited number of principal singular values as the feature vector, the results show that the proposed method improves the accuracy of bearing fault diagnosis by 13.34%, and is feasible and valid.
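
    The adaptive selection can be sketched as finding the elbow of the singular-value curve with a discrete curvature (second-difference) estimate and then computing the energy entropy of the retained components; the elbow criterion below is a simplified stand-in for the paper's curvature spectrum.

```python
# Sketch of adaptive singular-value selection via the elbow of the SV
# spectrum, plus the energy-entropy feature. Simplified stand-in only.
import numpy as np

def effective_rank(s):
    d2 = s[:-2] - 2 * s[1:-1] + s[2:]   # discrete curvature of the spectrum
    return int(np.argmax(d2)) + 1       # keep singular values up to the elbow

def energy_entropy(s):
    e = s ** 2 / np.sum(s ** 2)
    return float(-np.sum(e * np.log(e + 1e-12)))

rng = np.random.default_rng(6)
signal = np.outer(np.sin(np.linspace(0, 9, 40)), np.ones(20))  # rank-1 source
noisy = signal + 0.1 * rng.standard_normal((40, 20))
s = np.linalg.svd(noisy, compute_uv=False)
r = effective_rank(s)                    # adaptively chosen component count
print(r, energy_entropy(s[:max(r, 2)]))  # entropy feature fed to the LS-SVM
```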

    Promoting accuracy of trust bootstrapping from rating network
    LIU Bin ZHANG Renjin
    2014, 34(8):  2442-2446.  DOI: 10.11772/j.issn.1001-9081.2014.08.2442
    Abstract ( )   PDF (771KB) ( )
    References | Related Articles | Metrics

    To reduce the risk that the trust evaluation of a commodity with only a few rates on an e-commerce platform is easily distorted by unfair and malicious rates, a trust bootstrapping method based on assessing the credibility of rates was presented. The credibility of a rate is obtained by evaluating the rater's rates for other commodities, and is related to the number of rates given by the rater, the rater's transaction amount and the price of the rated commodity. The trust value of a commodity without rates is derived from the shop to which it belongs and from its declared attributes; the trust value of a commodity with rates of sufficiently high credibility is determined by those rates; otherwise, the trust value is determined partly by the rates or handled as for a commodity without rates. Calculation, analysis and experimental results show that the presented method, which evaluates the credibility of a rate through its rating network, has the smallest error compared with the conventional method and the k-means clustering method, and is insensitive to the proportion of malicious rates. The method can help users select reliable commodities newly on sale on e-commerce platforms.
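
    The final trust computation reduces to a credibility-weighted average over sufficiently credible rates, with a fallback for commodities whose rates are all of low credibility. The sketch below assumes the credibility values have already been computed from the rating network.

```python
# Sketch of credibility-weighted trust bootstrapping: each rate contributes
# in proportion to its rater's credibility; low-credibility rates are ignored.
def bootstrap_trust(rates, cred_floor=0.4, default=0.5):
    """rates: list of (score in [0,1], rater_credibility in [0,1])."""
    usable = [(s, c) for s, c in rates if c >= cred_floor]
    if not usable:
        return default       # fall back to shop/attribute-derived trust
    return sum(s * c for s, c in usable) / sum(c for _, c in usable)

rates = [(0.9, 0.8), (0.2, 0.1), (0.8, 0.7)]   # middle rate: likely malicious
print(round(bootstrap_trust(rates), 3))        # 0.853: low-credibility rate ignored
```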

    Design and implementation of large capacity radio frequency identification system based on embedded technology
    LIU Zhanjie ZHAO Yu LIU Kaihua MA Yongtao ZHANG Yan
    2014, 34(8):  2447-2450.  DOI: 10.11772/j.issn.1001-9081.2014.08.2447
    Abstract ( )   PDF (601KB) ( )
    References | Related Articles | Metrics

    Aiming at the problems of current aviation card readers, including poor portability, slow speed and small tag capacity, a design method for a large-capacity Radio Frequency Identification (RFID) system based on STM32 was proposed. Using an STM32 microprocessor as the core and the CR95HF radio chip, a new handheld RFID card reader working in the High Frequency (HF) band and supporting the ISO 15693 and ISO 18092 protocols was designed. The design of the power supply and antenna and the optimization of software speed and error rate were discussed in detail. A new large-capacity passive tag with a capacity of up to 32 KB was also designed to form a large-capacity RFID system together with the reader. The experimental results show that, compared with traditional card readers, the reading and writing speed of the reader increases by 2.2 times, the error rate is reduced by 91.7% and the tag capacity increases by 255 times, providing a better choice for the fast, accurate and data-intensive requirements of aviation logistics.
