
Table of Contents

    01 November 2013, Volume 33 Issue 11
    Multimedia processing technology
    Application on cloud visualization based on structured particle model
    WANG Chang XIE Yonghua YUAN Fuxing
    2013, 33(11):  2013-01. 
    Abstract | PDF (644KB)
    The 3D visualization of cloud data has long been a hot topic in computer graphics and meteorology. A method for data modeling and rendering based on Weather Research and Forecasting (WRF) output was proposed to realize 3D virtual simulation of real-world cloud data. Since particle-system modeling is complex and has poor real-time performance, a WRF cloud model was first set up that took into account the relationships between particles in the cloud system; illumination rendering and 3D simulation were then completed based on the illumination model and the billboard technique; meanwhile, the Imposter technique was introduced to speed up texture mapping and improve performance. The simulation results show that the proposed method enables fast modeling and rendering of cloud data while keeping good fidelity in the 3D visualization model.
    Network and communications
    Transmission and scheduling scheme based on W-learning algorithm in wireless networks
    ZHU Jiang PENG Zhenzhen ZHANG Yuping
    2013, 33(11):  3005-3009. 
    Abstract | PDF (973KB)
    To solve the transmission scheduling problem in wireless networks, a transmission and scheduling scheme based on the W-learning algorithm was proposed. The system was modeled as a Markov Decision Process (MDP), and the W-learning algorithm was used to schedule transmissions intelligently: by choosing which packet to transmit and the appropriate transmission mode, packet loss was reduced while saving energy. The curse of dimensionality was overcome by a state-aggregation method, and the number of actions was reduced by an action-set reduction scheme; the storage space compression ratio was 41% for successive approximation and 43% for the W-learning algorithm. Finally, simulation results show that the proposed scheme can transport as much data as possible while saving energy, and that the state-aggregation method and the action-set reduction scheme simplify the computation with little influence on algorithm performance.
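The scheme's MDP core builds on standard Q-learning, which W-learning then coordinates across competing policies. As a rough illustration only (the state names, reward values and single-agent form below are invented, not taken from the paper), one tabular update step can be sketched as:

```python
# Minimal tabular Q-learning update, the building block that W-learning
# coordinates across several agents. States, actions and rewards here
# are illustrative placeholders.
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Bellman backup: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]

Q = defaultdict(float)
# toy step: transmitting from a high-energy state earns reward 1
q_update(Q, 'high_energy', 'tx', 1.0, 'low_energy', ['tx', 'idle'])
```

Starting from an all-zero table, a single update moves Q('high_energy','tx') to alpha times the reward.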
    Network proxy research and implementation for Internet of things applications based on constrained application protocol
    SONG Yan FU Qian
    2013, 33(11):  3010-3015. 
    Abstract | PDF (964KB)
    To address the technical bottleneck caused by the independent development of the three layers of the Internet of Things (IoT), a solution based on the Constrained Application Protocol (CoAP) was proposed. A CoAP-HTTP network proxy was developed on top of a CoAP implementation, which enables users to access IoT node resources directly through a browser, to discover resources, query data, subscribe to resources, and so on. The test results show that the proxy mode does not affect the response rate of the system and is stable enough to support multiple users accessing node data simultaneously. The CoAP agent model helps application developers circumvent the complexity of low-level software and data-exchange development, so they can build new applications independently; it thus provides a new way to develop IoT applications.
    Calibration based DV-Hop algorithm with credible neighborhood distance estimation
    JIANG Yusheng CHEN Xian LI Ping
    2013, 33(11):  3016-3018. 
    Abstract | PDF (611KB)
    Concerning the poor localization precision of Distance Vector-Hop (DV-Hop), a calibration based DV-Hop algorithm with credible neighborhood distance estimation (CDV-Hop) was proposed. It defines a new measure that estimates neighborhood distances by relating the proximity of two neighbors to their connectivity difference, and thus calculates more accurate neighborhood distances. Exploiting the unique location relationship between unknown nodes and their nearest anchor nodes, the algorithm adds a calibration step that uses the credible neighborhood distances as the calibration standard to correct the positions of unknown nodes. The simulation results show that CDV-Hop works stably in different network environments. As the ratio of anchor nodes increases, localization precision improves by 4.57% to 10.22% compared with the DV-Hop algorithm and by 3.2% to 8.93% compared with the Improved DV-Hop (IDV-Hop) algorithm.
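The DV-Hop baseline that CDV-Hop calibrates estimates distances as hop count times a per-anchor average hop distance. A minimal sketch of that baseline correction factor, on toy coordinates and hop counts (the credible-neighborhood refinement itself is not reproduced here):

```python
import math

def avg_hop_distance(anchor_pos, hops_between):
    """Classic DV-Hop correction factor for each anchor: the sum of
    straight-line distances to all other anchors divided by the total
    hop count to them. Illustrative sketch only."""
    out = {}
    for i, (xi, yi) in anchor_pos.items():
        d = h = 0.0
        for j, (xj, yj) in anchor_pos.items():
            if i == j:
                continue
            d += math.hypot(xi - xj, yi - yj)
            h += hops_between[(i, j)]
        out[i] = d / h
    return out

# three toy anchors on a 10x10 field with assumed hop counts
anchors = {0: (0.0, 0.0), 1: (10.0, 0.0), 2: (0.0, 10.0)}
hops = {(0, 1): 2, (1, 0): 2, (0, 2): 2, (2, 0): 2, (1, 2): 3, (2, 1): 3}
factors = avg_hop_distance(anchors, hops)
```

An unknown node then multiplies its hop count to an anchor by that anchor's factor to estimate the distance.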
    New medium access control protocol of terahertz ultra-high data-rate wireless network
    ZHOU Xun CAO Yanan ZHANG Qingwei REN Zhi QI Ziming
    2013, 33(11):  3019-3023. 
    Abstract | PDF (744KB)
    To realize 10 Gbps-level wireless access at terahertz (THz) carrier frequencies, a new Medium Access Control (MAC) protocol for THz ultra-high data-rate wireless networks, MAC-T (Medium Access Control for THz), was proposed. In MAC-T, a new adaptive hybrid access mechanism combining TDMA (Time Division Multiple Access) and CSMA (Carrier Sense Multiple Access) and a new superframe structure were designed, and key parameters for terahertz communications were defined, enabling MAC-T to reach a maximum data rate of 10 Gbps or higher. The theoretical analysis and simulation results show that MAC-T operates normally in terahertz networks and reaches a data rate of 18.3 Gbps, 2.16 times the 5.78 Gbps that IEEE 802.15.3c can achieve; meanwhile, the average access delay of MAC-T is 0.0044 s, about 42.1% lower than the 0.0076 s of IEEE 802.15.3. Thus, MAC-T can provide significant support for the research and application of terahertz ultra-high data-rate wireless networks.
    Efficient node deployment algorithm based on dynamic programming in wireless sensor networks
    XU Xiulan LI Keqing HUANG Yuyue
    2013, 33(11):  3024-3027. 
    Abstract | PDF (785KB)
    To solve the node deployment problem caused by unreliable information provided by sensors, four forms of static Wireless Sensor Network (WSN) deployment were addressed and formalized as combinatorial optimization problems, which are Non-deterministic Polynomial (NP)-complete. An uncertainty-aware deployment algorithm based on dynamic programming was then proposed: first the K best placements of sensor nodes within the region of interest are found, and then the best deployment scheme is selected among them. The proposed algorithm determines the minimum number of sensors and their locations needed to achieve both coverage and connectivity. The simulation results show that, compared with state-of-the-art deployment strategies, the proposed algorithm performs better in terms of uniform coverage, preferential coverage and network connectivity.
    Construction of optimal frequency hopping sequence sets with no-hit zone based on matrix permutation
    CHEN Haoyuan KE Pinhui ZHANG Shengyuan
    2013, 33(11):  3028-3031. 
    Abstract | PDF (595KB)
    A general method for constructing optimal frequency hopping sequence sets with no-hit zone was proposed, which includes several known constructions as special cases. The general method was obtained by performing column permutations on the signal matrix. In the proposed construction, the sequence length, the number of sequences and the length of the no-hit zone can be chosen flexibly, and many concrete constructions are available; some properties of the resulting sequence sets depend on the concrete construction and its parameters. The parameters of the frequency hopping sequence sets obtained by this method reach the theoretical bound; hence they are classes of optimal frequency hopping sequence sets with no-hit zone.
    Wideband estimating signal parameters via rotational invariance technique algorithm based on spatial-temporal discrete Fourier transformation projection
    BIAN Hongyu WANG Junlin
    2013, 33(11):  3032-3034. 
    Abstract | PDF (610KB)
    The spatial-temporal Discrete Fourier Transformation (DFT) projection method was modified by using the relation between the sampling frequency and the frequency-domain data, and the impact of the sampling frequency on the performance of resolving coherent sources was analyzed. A narrowband ESPRIT (Estimating Signal Parameters via Rotational Invariance Techniques)-like algorithm was applied to estimate the Direction of Arrival (DOA) of wideband coherent signals, and a wideband ESPRIT algorithm based on spatial-temporal DFT projection was proposed. The simulation results indicate that increasing the sampling frequency appropriately can improve DOA estimation performance, and that the spatial-temporal DFT projection method outperforms the Fast Fourier Transformation (FFT) interpolation method in DOA estimation.
    Evaluation model of network service performance based on fuzzy analytic hierarchy process
    ZHAO Huaqiong TANG Xuewen
    2013, 33(11):  3035-3038. 
    Abstract | PDF (635KB)
    A service-oriented comprehensive network performance evaluation model based on the fuzzy Analytic Hierarchy Process (AHP) was put forward to take account of users' preferences and the actual network situation, and to overcome the one-sidedness of conventional index-weight calculation. Guided by application services and users' preferences, the model first sets up an evaluation hierarchy and computes the criteria weights and scheme weights separately, then normalizes the actual measurement data, and finally obtains the application performance evaluation value of the target link using fuzzy AHP. The experimental results show that the proposed method can not only evaluate the overall performance of the target link, but also effectively assess the performance of network application services in combination with users' preferences, which is favorable for achieving differentiated network services.
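For readers unfamiliar with AHP, the criteria weights come from a pairwise comparison matrix. A minimal non-fuzzy sketch, using the common normalized-column approximation to the principal eigenvector (the paper's fuzzy AHP is more elaborate, and the comparison values below are invented):

```python
def ahp_weights(M):
    """Approximate AHP priority weights: normalize each column of the
    pairwise comparison matrix, then average across columns per row.
    A common stand-in for the principal-eigenvector method."""
    n = len(M)
    col_sums = [sum(M[r][c] for r in range(n)) for c in range(n)]
    return [sum(M[r][c] / col_sums[c] for c in range(n)) / n for r in range(n)]

# toy 2-criterion matrix: criterion 0 judged 3x as important as criterion 1
M = [[1.0, 3.0],
     [1.0 / 3.0, 1.0]]
w = ahp_weights(M)
```

For a consistent matrix like this one the approximation is exact, giving weights 0.75 and 0.25.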
    Improved equalizer of nonlinear satellite channel
    GUO Yecai XU Ran
    2013, 33(11):  3039-3041. 
    Abstract | PDF (631KB)
    Concerning the large amount of computation required by the Volterra-structure equalizer, an improved adaptive equalizer for nonlinear satellite channels was proposed: by analyzing the truncated Volterra series, a new cascade of a linear equalizer and a nonlinear equalizer was obtained. The third-order nonlinear memory product terms in the Volterra equalizer expression were converted into second-order nonlinear memory product terms in the new structure, so fewer complex multiplications are needed when signals pass through the equalizer. The simulation results show that the number of complex multiplications of the improved equalizer is one ninth that of the Volterra equalizer when the channel has strong memory depth; meanwhile, a 16 Amplitude Phase Shift Keying (16APSK) signal equalized by the improved adaptive equalizer has more compact constellation points.
    Improving amount of feedback in limited feedback systems with multi-level linear prediction
    ZENG Shilun XU Jiapin
    2013, 33(11):  3042-3044. 
    Abstract | PDF (591KB)
    The codebook-based limited feedback techniques for Multiple-Input Multiple-Output (MIMO) systems in Long Term Evolution-Advanced (LTE-Advanced) applications were investigated, and a new limited feedback technique based on multi-level linear prediction was proposed. The method exploits the time correlation of the channel to predict its value through multi-level linear prediction; a quantization codebook was designed for the prediction error by minimizing the mean square error, and the code index in the codebook was used for system feedback. The simulation results show that multi-level linear prediction can effectively reduce the prediction error and hence the feedback overhead of the system, reducing the amount of system feedback by up to 15%.
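Multi-level linear prediction generalizes the familiar one-tap predictor, whose MSE-optimal coefficient is the normalized lag-1 correlation. A one-tap sketch on made-up channel samples (the multi-level scheme and the codebook design are not reproduced here):

```python
def lp1_coefficient(x):
    """First-order linear predictor x[t] ~ a * x[t-1]; the MSE-optimal
    coefficient is sum(x[t]x[t-1]) / sum(x[t-1]^2). One-tap sketch of
    the multi-level idea."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

samples = [1.0, 0.9, 0.81, 0.729]   # toy geometrically decaying channel gain
a = lp1_coefficient(samples)
pred = a * samples[-1]              # predicted next sample
err = pred - 0.6561                 # prediction error, the quantity to quantize and feed back
```

Only the (smaller) prediction error needs quantizing, which is what cuts the feedback overhead.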
    Database technology
    Frequent itemsets grouping algorithm based on Hash list
    WANG Hongmei HU Ming
    2013, 33(11):  3045-3048. 
    Abstract | PDF (700KB)
    Apriori is a classic algorithm for frequent itemset mining. In view of its shortcomings in pruning operations and in repeatedly scanning the dataset, a Hash-based Frequent itemsets Grouping algorithm named HFG was put forward. A pruning property for 2-itemsets was proved and frequent 2-itemsets were stored in a Hash list, which drops the time complexity of Apriori's pruning operation from O(k×|Lk|) to O(1); the concept of the sub-itemset of the first item was defined, and the dataset was divided into subsets taking Ii as the first item and stored in a grouping index list. Therefore, only the sub-dataset with Ii as the first item needs to be scanned to find frequent itemsets, reducing the time cost of scanning the dataset. The experimental results show that the HFG algorithm is much more efficient than Apriori in time performance, owing to the cumulative benefits of the pruning operation and of skipping invalid itemsets and records.
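The O(1) pruning claim rests on keeping frequent 2-itemsets in a hash structure, so every 2-subset of a candidate can be checked by lookup instead of scanning the frequent-itemset list. A small sketch with an invented toy dataset:

```python
# Sketch of hash-based 2-itemset pruning: store frequent 2-itemsets in
# a set, so each membership check is an O(1) hash lookup.
from itertools import combinations

def build_f2(transactions, min_sup):
    """Count all 2-itemsets and keep those meeting the support threshold."""
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {p for p, c in counts.items() if c >= min_sup}

def survives_pruning(candidate, f2):
    """A candidate k-itemset is kept only if all its 2-subsets are frequent."""
    return all(pair in f2 for pair in combinations(sorted(candidate), 2))

txns = [('A', 'B', 'C'), ('A', 'B'), ('A', 'C'), ('B', 'C')]
f2 = build_f2(txns, min_sup=2)
```

Candidates whose 2-subsets are not all in `f2` are discarded without any dataset scan.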
    Incremental maintenance of discovered spatial association rules
    DONG Lin SHU Hong
    2013, 33(11):  3049-3051. 
    Abstract | PDF (687KB)
    Spatial association rule mining often has to be executed repeatedly to obtain interesting and effective rules. Incremental maintenance algorithms can improve the efficiency of association rule mining, but currently no such algorithm can use spatial datasets directly. To solve this problem, the update strategy for discovered rules was discussed, taking both threshold changes and spatial dataset updates into consideration, and an incremental mining algorithm called Incremental Spatial Apriori (ISA) was proposed. The ISA algorithm updates frequent predicate sets and association rules after the minimum support threshold decreases or new spatial layers are added; it does not rely on the creation and update of spatial transaction tables but uses spatial layers directly as input data. In experiments with real-world data, the mining results extracted by ISA and Apriori-like algorithms are identical, but ISA saves 20.0% to 71.0% of the time; in addition, 1372722 rules were successfully updated with the filtering method in less than 0.1 seconds. These results indicate that the proposed incremental update strategy and algorithm for spatial association rules are correct, efficient and applicable.
    Interval-similarity based fuzzy time series forecasting algorithm
    LIU Fen GUO Gongde
    2013, 33(11):  3052-3056. 
    Abstract | PDF (743KB)
    Existing fuzzy time series forecasting methods have limitations in establishing fuzzy logical relationships, which makes it hard for them to adapt to the appearance of new relationships. To overcome these defects, an Interval-Similarity based Fuzzy Time Series forecasting (ISFTS) algorithm was proposed. Firstly, based on fuzzy theory, an average-based method was used to re-divide the intervals of the universe of discourse. Secondly, fuzzy sets were defined and the historical data were fuzzified; third-order fuzzy logical relationships were then established, and a formula was used to measure the similarity between logical relationships, from which the changing trend of future data was computed to obtain fuzzy forecast values. Finally, the fuzzy values were defuzzified into forecasts. Because it forecasts the changing trend of future data, the proposed algorithm makes up for the logical-relationship shortcomings of existing forecasting algorithms. The experimental results show that ISFTS is superior to other forecasting algorithms in forecasting error, measured by Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE); therefore, ISFTS is more adaptive in time series forecasting, especially on large data.
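The first step, average-based re-division of the universe of discourse, can be sketched as follows. This is a simplified reading in which the interval length is half the mean absolute first difference; the data values are illustrative, not the paper's:

```python
def average_based_intervals(series, margin=0.0):
    """Partition the universe of discourse [min, max] into equal
    intervals whose length is half the mean absolute first difference
    of the series — one common 'average-based' re-division."""
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    length = sum(diffs) / len(diffs) / 2.0
    lo, hi = min(series) - margin, max(series) + margin
    n = max(1, round((hi - lo) / length))
    return [(lo + i * length, lo + (i + 1) * length) for i in range(n)]

# toy series: mean |diff| = 4, so the interval length is 2
ivals = average_based_intervals([10.0, 14.0, 12.0, 18.0])
```

Each historical value is then fuzzified to the fuzzy set of the interval it falls into.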
    Speeding up outlier detection in large-scale datasets
    XUE Anrong WEN Dandan LIU Bin
    2013, 33(11):  3057-3061. 
    Abstract | PDF (779KB)
    Existing distance-based outlier detection algorithms suffer from low efficiency on large-scale datasets. To relieve this problem, a Distributed Outlier Detection algorithm based on Clustering and Indexing (DODCI) was presented. The algorithm first partitions the original dataset into clusters using a clustering method; the index of each cluster is then built in parallel on each distributed node; afterwards, outliers are detected iteratively on each node using two optimization strategies and two pruning rules. The experimental results on a synthetic dataset and preprocessed KDD CUP datasets show that the proposed algorithm is almost an order of magnitude faster than two existing algorithms (Orca and iDOoR) when the dataset is large enough. The theoretical and experimental analyses show that the proposed algorithm can effectively speed up outlier detection in large-scale datasets.
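The distance-based scores that DODCI accelerates are typically kNN-distance scores, as in Orca. A miniature, unpruned version on 1-D toy data (the clustering, indexing and pruning rules of DODCI are omitted here):

```python
import heapq

def knn_score(p, data, k):
    """Outlier score: average distance to the k nearest neighbours (1-D toy data)."""
    dists = sorted(abs(p - q) for q in data if q != p)
    return sum(dists[:k]) / k

def top_outliers(data, k=2, n_out=1):
    """Distance-based detection in miniature: report the n_out points with
    the largest kNN score. Real Orca-style algorithms additionally prune a
    point as soon as its running score drops below the current cutoff."""
    return heapq.nlargest(n_out, ((knn_score(p, data, k), p) for p in data))

data = [1.0, 1.1, 0.9, 10.0]
outliers = top_outliers(data)
```

The pruning cutoff is what makes the nested-loop approach fast in practice; this sketch keeps only the scoring.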
    Collaborative filtering recommendation models considering item attributes
    YANG Xingyao YU Jiong Turgun IBRAHIM QIAN Yurong SHUN Hua
    2013, 33(11):  3062-3066. 
    Abstract | PDF (1027KB)
    Traditional User-based Collaborative Filtering (UCF) models do not fully consider item attributes when measuring user similarity. In view of this drawback, two collaborative filtering recommendation models considering item attributes were proposed. The models first optimize the rating-based similarity between users, then sum each user's rating counts over items grouped by attribute to obtain an optimized attribute-based similarity between users; finally, the two similarity measures are combined by a self-adaptive balance factor to complete item prediction and recommendation. The experimental results demonstrate that the proposed models not only have reasonable time costs on different datasets, but also clearly improve the prediction accuracy of ratings, by 5% on average, confirming that the models effectively improve the accuracy of user similarity measurement.
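The final step, coordinating the two similarity measures with a balance factor, can be sketched as below. The paper adapts the factor automatically, whereas this illustration fixes it, and the per-attribute rating counts are invented:

```python
import math

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def user_similarity(rating_sim, counts_u, counts_v, beta=0.5):
    """Blend a rating-based similarity with an attribute-based one
    (cosine over each user's rating counts per item attribute) using a
    balance factor beta in [0, 1]."""
    return beta * rating_sim + (1.0 - beta) * cosine(counts_u, counts_v)

# two users' rating counts over three hypothetical item attributes
sim = user_similarity(0.6, [4, 0, 0], [2, 0, 0], beta=0.5)
```

With beta near 1 the blend trusts rating overlap; near 0 it trusts shared attribute tastes, which helps when co-rated items are sparse.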
    Collaborative recommendation algorithm under social network circumstances
    LI Hui HU Yun SHI Jun
    2013, 33(11):  3067-3070. 
    Abstract | PDF (632KB)
    Concerning the data sparsity and malicious behavior problems of traditional collaborative filtering algorithms, a new social recommendation method combining trust and matrix factorization was proposed. Firstly, non-credible nodes in the network were identified by computing their prestige and bias values, and the weight of their evaluations was weakened; then collaborative recommendation was conducted in the social network setting by combining the user-item matrix and the trust matrix. The experimental results show that the proposed algorithm reduces the importance of non-credible nodes and thereby weakens the negative influence of false or malicious ratings on the recommendation system; the data sparsity and malicious behavior problems are alleviated, and higher prediction accuracy than that of traditional collaborative filtering algorithms is achieved.
    On-line forum hot topic mining method based on topic cluster evaluation
    JIANG Hao CHEN Xingshu DU Min
    2013, 33(11):  3071-3075. 
    Abstract | PDF (795KB)
    Hot topic mining is an important technical foundation for monitoring public opinion. Since current hot topic mining methods cannot eliminate the effect of word noise and rely on a single measure of topic hotness, a new mining method based on topic cluster evaluation was proposed. After the forum data was modeled with the Latent Dirichlet Allocation (LDA) topic model and topic noise was cut off, the data was clustered by K-means++, an algorithm with improved cluster-center selection. Finally, the clusters were evaluated in three aspects: abruptness, purity and attention degree of topics. The experimental results show that both cluster quality and clustering speed improve when the topic noise threshold is set to 0.75 and the cluster number to 50. The effectiveness of ranking clusters by their probability of containing a hot topic was also demonstrated on real datasets, and a method for displaying hot topics was developed.
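K-means++ refers to the improved cluster-center seeding performed before the standard K-means iterations. A compact 1-D sketch of that seeding rule on toy points (the LDA modeling and cluster evaluation steps are not shown):

```python
import random

def kmeanspp_seeds(points, k, rng):
    """K-means++ centre initialization: the first centre is uniform;
    each later centre is drawn with probability proportional to the
    squared distance from the nearest centre chosen so far."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        total = sum(d2)
        r, acc = rng.random() * total, 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

pts = [0.0, 0.1, 0.2, 10.0, 10.1]      # two well-separated toy groups
seeds = kmeanspp_seeds(pts, 2, random.Random(7))
```

Because far-away points get proportionally higher probability, the seeds tend to land in different groups, which is what speeds up and stabilizes the subsequent K-means iterations.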
    Micro-blog hot topics detection method based on user role orientation
    YANG Wu LI Yang LU Ling
    2013, 33(11):  3076-3079. 
    Abstract | PDF (642KB)
    To improve the low efficiency of extracting hot topics from huge amounts of micro-blog data, a new topic detection method based on user role orientation was proposed. Firstly, noisy data from certain classes of users was filtered out according to user role orientation. Secondly, feature weights were calculated by the Term Frequency-Inverse Document Frequency (TF-IDF) function combined with semantic similarity, to reduce the error caused by varied semantic expression. Then an improved Single-Pass clustering algorithm was used to extract micro-blog topics. Lastly, the hotness of micro-blog topics was evaluated according to the numbers of reposts and comments, from which the hot topics were found. The results show that the average miss rate and false detection rate decrease by 12.09% and 2.37% respectively, indicating that topic detection accuracy is effectively improved and the method is feasible.
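The feature-weighting step starts from plain TF-IDF, which the method then combines with semantic similarity. The TF-IDF part alone, on an invented toy corpus:

```python
import math

def tf_idf(term, doc, corpus):
    """Plain TF-IDF weight for a term in one document: term frequency
    within the document times log inverse document frequency. The
    semantic-similarity blending is omitted in this sketch."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / df) if df else 0.0

docs = [['hot', 'topic', 'weibo'], ['weibo', 'user'], ['sports', 'news']]
w = tf_idf('hot', docs[0], docs)
```

Terms common to most documents get weights near zero, while terms concentrated in a few documents score high, which is what makes the weights useful for Single-Pass clustering.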
    Dynamic finding of authors' research interests in scientific literature
    SHI Qingwei LI Yanni GUO Pengliang
    2013, 33(11):  3080-3083. 
    Abstract | PDF (534KB)
    To mine the relationships among topics, authors and time in large-scale scientific literature corpora, an Author-Topic over Time (AToT) model was proposed according to the intra- and inter-document features of scientific literature. In AToT, a document is represented as a mixture of probabilistic topics, and each topic corresponds to a multinomial distribution over words and a beta distribution over time; the word-topic distribution is influenced not only by word co-occurrence but also by document timestamps. Each author likewise corresponds to a multinomial distribution over topics. The word-topic and author-topic distributions describe, respectively, the evolution of topics and the changes in authors' research interests over time. The parameters of AToT can be learned from the documents by Gibbs sampling. The experimental results on a collection of 1700 NIPS conference papers show that the AToT model can characterize latent topic evolution, dynamically discover authors' research interests and predict the authors related to given topics, while achieving lower perplexity than the author-topic model.
    Artificial intelligence
    Transfer learning support vector regression
    SHI Yingzhong WANG Shitong JIANG Yizhang LIU Peilin
    2013, 33(11):  3084-3089. 
    Abstract | PDF (857KB)
    Classical regression modeling methods assume that the training data are sufficient, but missing information may weaken the generalization ability of regression systems built on such data. To solve this problem, a regression system with transfer learning ability, Transfer learning Support Vector Regression (T-SVR), was proposed based on support vector regression. T-SVR makes full use of the current data and learns effectively from useful historical knowledge, thereby remedying the lack of information in the current scene. A reinforced current model is obtained by controlling the similarity between the current model and the history model in the objective function, so the current model can benefit from the history scene when information is missing or insufficient. The experiments on simulated and real data show that T-SVR adapts better than traditional regression modeling methods in scenes with missing information.
    Multi-view semi-supervised collaboration classification algorithm with combination of agreement and disagreement label rules
    YU Chongchong LIU Yu TAN Li SHANG Lili MA Meng
    2013, 33(11):  3090-3093. 
    Abstract | PDF (618KB)
    To improve the performance of the co-training algorithm and expand its range of applications, a multi-view semi-supervised collaborative classification algorithm combining agreement and disagreement labeling rules was proposed, aiming at a more effective method for classifying bridge structural health data. The algorithm labels unlabeled data by judging whether the two classifiers agree: if the label results are consistent, the sample is added to the labeled set; if they are inconsistent and the confidence exceeds a threshold, the label with the higher confidence is added to the labeled set. In this way the unlabeled data is fully used to improve classifier performance, and the classification model is updated according to the difference between the classifiers. The experimental results on bridge structural health datasets and standard UCI datasets verify the effectiveness and feasibility of the proposed model on multi-view classification problems.
    Sparse Bayesian learning for credit risk evaluation
    LI Taiyong WANG Huijun WU Jiang ZHANG Zhilin TANG Changjie
    2013, 33(11):  3094-3096. 
    Abstract | PDF (609KB)
    To address the low classification accuracy and poor interpretability of selected features in traditional credit risk evaluation, a new model using Sparse Bayesian Learning (SBL) to evaluate personal credit risk (SBLCredit) was proposed. SBLCredit exploits the ability of SBL to obtain solutions that are as sparse as possible under a prior on the feature weights, yielding both good classification performance and effective feature selection. On real-world German and Australian credit datasets, SBLCredit improves classification accuracy by 4.52%, 6.40%, 6.26% and 2.27% on average compared with the state-of-the-art K-Nearest Neighbour (KNN), Naive Bayes, decision tree and support vector machine respectively. The experimental results demonstrate that SBLCredit is a promising method for credit risk evaluation, with higher accuracy and fewer features.
    Face recognition based on orthogonal and uncorrelated marginal neighborhood preserving embedding
    CHEN Dayao CHEN Xiuhong
    2013, 33(11):  3097-3101. 
    Abstract | PDF (733KB)
    Neighborhood Preserving Embedding (NPE) is in nature an unsupervised method and does not exploit existing class information to improve classification efficiency. Therefore, two supervised manifold learning methods, Orthogonal Marginal Neighborhood Preserving Embedding (OMNPE) and Uncorrelated Marginal Neighborhood Preserving Embedding (UMNPE), were proposed. Both methods first construct a within-class graph and a between-class graph and define within-class and between-class reconstruction errors; OMNPE and UMNPE then seek projections that simultaneously minimize the within-class reconstruction error and maximize the between-class reconstruction error, under orthogonality and uncorrelatedness constraints respectively. The training and testing samples were projected onto the low-dimensional subspace and classified with the nearest neighbor classifier. Extensive experiments on the ORL and Yale face databases show that the proposed algorithms outperform subspace face recognition algorithms such as Linear Discriminant Analysis (LDA) and Marginal Fisher Analysis (MFA) by 0.5%~3% in average recognition rate, which proves their effectiveness.
    Producer pre-selection mechanism based on self-adaptive group search optimizer
    YU Changqing WANG Zhurong
    2013, 33(11):  3102-3106. 
    Abstract | PDF (768KB)
    To overcome the premature convergence of the Group Search Optimizer (GSO) and improve its convergence speed, a Producer pre-selection mechanism based Self-Adaptive Group Search Optimizer (PSAGSO) algorithm was proposed. Firstly, a reverse mutation operator and a pre-selection mechanism were employed in the producer-scrounger model to generate a new producer that guides the search directions of the scroungers and effectively maintains population diversity. Secondly, a self-adaptive method based on a linearly decreasing weight was adopted to adjust the proportion of rangers, which improves the vigor of individuals in the population and helps escape from local optima. Experiments were conducted on a set of 12 benchmark functions. For 30-dimensional function optimization, the results obtained by PSAGSO were better than those reported in the literature (HE S, WU Q H, SAUNDERS J R. Group search optimizer: an optimization algorithm inspired by animal searching behavior. IEEE Transactions on Evolutionary Computation, 2009, 13(5): 973-990); for 300-dimensional numerical optimization problems, PSAGSO also exhibited better performance. The experimental results demonstrate that PSAGSO improves the group search optimizer and, to some extent, its convergence speed and accuracy.
    Modified method for wavelet neural network model
    ZHANG Yanliang CHEN Xin LI Yadong
    2013, 33(11):  3107-3110. 
    Abstract | PDF (757KB)
    To improve the performance of the Wavelet Neural Network (WNN) model on complex nonlinear problems, and concerning the premature convergence, poor late-stage diversity and poor search accuracy of the Quantum-behaved Particle Swarm Optimization (QPSO) algorithm, a modified QPSO algorithm was proposed for WNN training: it introduces weighting coefficients and Cauchy random numbers, improves the contraction-expansion coefficient, and incorporates natural selection. The gradient descent method was then replaced with this modified algorithm to train the wavelet coefficients and network weights, and the optimized parameter combination was fed into the wavelet neural network to couple the two algorithms. The simulation results on three UCI standard datasets show that, compared with plain WNN, PSO-WNN and QPSO-WNN, the running time of the Modified Quantum-behaved Particle Swarm Optimization-Wavelet Neural Network (MQPSO-WNN) is reduced by 11%~43% while the calculation error is decreased by 8%~57%. Therefore, the MQPSO-WNN model approximates the optimal value more quickly and accurately.
    Improved ant colony genetic optimization algorithm and its application
    LIU Chuansong
    2013, 33(11):  3111-3113. 
    Abstract ( )   PDF (581KB) ( )  
    Related Articles | Metrics
    To overcome the limitations of current path planning algorithms for mobile robots, a fusion algorithm combining ant colony optimization with genetic optimization was proposed. First, the method used pheromone updating and path-node selection to find optimized paths quickly and form the initial population, and an ant performed a further local search whenever the robot moved forward and encountered random obstacles. Second, the individuals of the population were optimized in the global scope by a Genetic Algorithm (GA), which enabled the robot to move along a globally optimal path to the goal node. The simulation results indicate the feasibility and effectiveness of the proposed method.
    Agent-based multi-issue negotiation algorithm and strategy of technology innovation platform
    CHU Junfei PAN Yu ZHANG Zhenhai
    2013, 33(11):  3114-3118. 
    Abstract ( )   PDF (853KB) ( )  
    Related Articles | Metrics
    To address the technology docking negotiation problem on a technology innovation platform, an Agent-based multi-issue negotiation algorithm and strategy were analyzed and designed with reference to intelligent-sensing Agent technology. In the practical docking environment of the platform, the historical docking proposal data were fully used and the docking benefits of both trading sides were fully considered in designing the Agent-based multi-issue negotiation algorithm. Based on this algorithm, the docking strategy and the proposed solution in docking negotiation were designed, which ensured the optimality of the comprehensive benefits and led the two trading sides to a "win-win" outcome. Practical docking examples on the technology innovation platform exemplify the applicability, rationality, feasibility and effectiveness of the negotiation algorithm and strategy in their practical application environment.
    Prediction of moving object trajectory based on probabilistic suffix tree
    WANG Xing JIANG Xinhua LIN Jie XIONG Jinbo
    2013, 33(11):  3119-3122. 
    Abstract ( )   PDF (828KB) ( )  
    Related Articles | Metrics
    In the prediction of moving object trajectories, concerning the low accuracy of low-order Markov models and the state-space explosion of high-order models, a dynamic adaptive Probabilistic Suffix Tree (PST) prediction method based on a variable-length Markov model was proposed. Firstly, the trajectory paths of moving objects were serialized in time order; then the probability characteristics of sequence contexts were trained and calculated from the historical trajectory data, and a probabilistic suffix tree model of the path sequences was constructed. Combined with the actual trajectory data, future trajectory information could thus be predicted dynamically and adaptively. The experimental results show that the highest prediction accuracy was obtained by the second-order model, and as the model order increased the accuracy remained at about 82%, achieving good prediction results. Meanwhile, the space complexity decreased exponentially and storage space was greatly reduced. The proposed method makes full use of historical data and current trajectory information to predict future trajectories, and provides more flexible and efficient location-based services.
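The variable-length Markov idea behind a PST can be sketched as counting next-symbol frequencies for every short context and backing off from the longest matching context at prediction time. This is a minimal sketch only: the paper's probability thresholds, tree pruning and adaptivity are omitted, and the toy path is hypothetical.

```python
from collections import defaultdict

def train_pst(seq, max_order):
    # Count next-symbol frequencies for every context of length <= max_order.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq)):
        for k in range(max_order + 1):
            if i - k < 0:
                break
            counts[tuple(seq[i - k:i])][seq[i]] += 1
    return counts

def predict_next(counts, history, max_order):
    # Back off from the longest matching context to shorter ones.
    for k in range(min(max_order, len(history)), -1, -1):
        ctx = tuple(history[len(history) - k:])
        if ctx in counts:
            successors = counts[ctx]
            return max(successors, key=successors.get)
    return None

path = list("ABABCABABC")      # hypothetical serialized trajectory
model = train_pst(path, max_order=2)
```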
    Collision avoidance algorithm for multi-robot system based on improved artificial coordinating field
    WU Jin ZHANG Guoliang TANG Wenjun SHUN Yijie
    2013, 33(11):  3123-3128. 
    Abstract ( )   PDF (895KB) ( )  
    Related Articles | Metrics
    Concerning the collision avoidance problem of multi-robot systems, a collision avoidance algorithm based on an improved artificial coordinating field was proposed. Firstly, a method that convexified obstacles and proactively selected new temporary targets was adopted to resolve the deadlock problem of the artificial coordinating field in environments with non-convex polygonal obstacles. Secondly, a repelling force was modeled on both velocity and distance to overcome the low space utilization of the artificial coordinating field, especially when the target was near an obstacle. Lastly, a force mixer was designed and applied to suppress trembling during avoidance movement. The experimental results indicate that the algorithm resolves the collision avoidance problem of multi-robot systems effectively and reliably, and improves the adaptability of multi-robot systems to complicated environments.
    Community detection in complex networks based on immune genetic algorithm
    CAO Yongchun TIAN Shuangliang SHAO Yabin CAI Zhenqi
    2013, 33(11):  3129-3133. 
    Abstract ( )   PDF (811KB) ( )  
    Related Articles | Metrics
    As many community detection methods based on intelligent optimization algorithms suffer from degeneracy, unsatisfactory optimization ability, complex computation and the requirement of prior knowledge, a community detection method for complex networks based on an immune Genetic Algorithm (GA) was proposed. The algorithm combined an improved character encoding with the corresponding genetic operators, and automatically acquired the optimal community number and community partition without prior knowledge. The immune principle was introduced into the selection operation of the GA, which maintained the diversity of individuals and thereby alleviated the intrinsic degeneracy of the GA. By utilizing local information of the network topology in population initialization, crossover and mutation, the search space was compressed and the optimization ability improved. The simulation results on both computer-generated and real-world networks show that the algorithm acquires the optimal community number and community partition with high accuracy, which indicates that it is feasible and valid for community detection in complex networks.
    TAN model for ties prediction in social networks
    WU Jiehua
    2013, 33(11):  3134-3137. 
    Abstract ( )   PDF (711KB) ( )  
    Related Articles | Metrics
    In the research field of social tie prediction, taking common-neighbor properties as similarity-based topological measures has been widely used and has achieved good results; nevertheless, this relies on a strong independence assumption and cannot reflect the "links" and the related network structure. This paper proposed a new link prediction measure by introducing a Tree Augmented Naive Bayes (TAN) classification model, which used an information entropy measure to define the role of a node pair and assigned differentiated contributions to the neighbors involved in tie prediction; the measure was then extended to the Common Neighbor (CN), Adamic-Adar (AA) and Resource Allocation (RA) similarity-based prediction algorithms. The experimental evaluation by Area Under ROC Curve (AUC) and Receiver Operating Characteristic (ROC) curves on five real social networks proves that the proposed model can mine the latent contribution of common neighbors and alleviate the independence assumption, which enhances the accuracy of link prediction.
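The three classical neighbor-based scores that the TAN model extends can be sketched as follows (standard definitions, not the paper's extension; the toy graph is hypothetical, and AA assumes every common neighbor has degree at least 2 so its log is nonzero).

```python
import math

def neighbor_scores(graph, u, v):
    # Classical common-neighbor link-prediction scores between u and v:
    # CN counts shared neighbors; AA and RA down-weight high-degree ones.
    # `graph` maps each node to its neighbor set.
    common = graph[u] & graph[v]
    cn = len(common)
    aa = sum(1.0 / math.log(len(graph[z])) for z in common)
    ra = sum(1.0 / len(graph[z]) for z in common)
    return cn, aa, ra

g = {
    'a': {'c', 'd'},
    'b': {'c', 'd'},
    'c': {'a', 'b', 'd'},
    'd': {'a', 'b', 'c'},
}
cn, aa, ra = neighbor_scores(g, 'a', 'b')
```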
    Rapid speech keyword spotting method based on template matching
    ZHU Guoteng SUN Wei
    2013, 33(11):  3138-3140. 
    Abstract ( )   PDF (484KB) ( )  
    Related Articles | Metrics
    When dealing with keyword detection without training samples, template matching-based keyword spotting can still work, unlike traditional methods. However, template matching is time-consuming because it computes the local minimum distance with a frame-by-frame sliding scheme. The extreme points of the local minimum distance usually lie near phoneme segmentation points, so a fast template matching method can be obtained by combining these positions with an interpolation idea. By interpolating the local minimum distance between phoneme segmentation points, this method greatly reduces the computation time. On the TIMIT and CASIA corpora, the improved method is approximately 2.8 times faster than conventional template matching-based keyword spotting.
    Oil adulteration detection with multi-label learning vector quantization
    CHEN Jingbo
    2013, 33(11):  3141-3143. 
    Abstract ( )   PDF (436KB) ( )  
    Related Articles | Metrics
    To improve the detection of oil adulteration, a new algorithm called ML-LVQ (Multi-Label Learning Vector Quantization) was proposed, which adapted Learning Vector Quantization (LVQ) to the multi-label learning problem on High Performance Liquid Chromatography (HPLC) data. It minimized an upper bound of the ranking error, which benefited the ranking measures. Moreover, a meta-labeler was used to determine the number of labels, improving the bipartition measures. The experimental results on nine classes of pure oil and their mixed oil samples show that the proposed algorithm is superior to the improved AdaBoost.RMH.
    Advanced computing
    Scheduling algorithm for multi-satellite and point target task on swinging mode based on evolution algorithm
    WANG Maocai CHENG Ge DAI Guangming SONG Zhiming
    2013, 33(11):  3144-3148. 
    Abstract ( )   PDF (861KB) ( )  
    Related Articles | Metrics
    Concerning the low efficiency of earth observation satellites, a scheduling method for multi-satellite, point-target tasks in swinging mode was proposed. The angular relation between satellites and ground targets and the computation of the positive and negative swinging angles and time windows were first analyzed. On this basis, a scheduling model for multi-satellite, point-target tasks in swinging mode was developed, in which maximizing the observations and minimizing the swinging number and the total swinging angle were set as the optimization objectives. An optimal scheduling algorithm in swinging mode based on an evolution algorithm was then proposed. The algorithm adopted a single-point crossover operator and a mutation operation through time-window selection, defined a conflict-based fitness function, reduced conflicts by adjusting the actual start times of the activities, designed the selection strategy according to the order of the optimization objectives, and eliminated conflicts by defining a conflict cost. Finally, the scheduling result and the simulation of a practical example with 5 satellites and 100 point targets in swinging mode were given, and the scheduling performance was analyzed with swinging angles of 0°, 10° and 25°. The experimental results show that the observation efficiency improves by 18% when the swinging angle is 25°. The method has important application value in emergency relief and rapid wartime response.
    Single instruction multiple data vectorization of non-normalized loops
    HOU Yongsheng ZHAO Rongcai GAO Wei
    2013, 33(11):  3149-3154. 
    Abstract ( )   PDF (948KB) ( )  
    Related Articles | Metrics
    Concerning non-normalized loops whose upper bound, lower bound or stride is uncertain, a transformation method was applied to normalize cases such as loops whose conditions are logical expressions, loops with increment/decrement statements, and do-while loops. An unroll-and-jam method was proposed for loops that cannot be normalized, which exploits the unroll-and-jam results through Superword Level Parallelism (SLP) vectorization. Compared with the existing Single Instruction Multiple Data (SIMD) vectorization methods for non-normalized loops, the experimental results show that the transformation method and the unroll-and-jam method better exploit the parallelism of non-normalized loops and improve performance by more than 6%.
    Load balancing strategy of cloud computing based on multi-layer and fault-tolerant mechanism
    CHEN Bo ZHANG Xihuang
    2013, 33(11):  3155-3159. 
    Abstract ( )   PDF (816KB) ( )  
    Related Articles | Metrics
    When hybrid dynamic load balancing algorithms are applied to cloud computing, problems arise such as low processing efficiency caused by the frequent exchange of site information, and the lack of a fault-tolerance mechanism. Hence this paper proposed a load balancing algorithm based on multiple layers and a fault-tolerance mechanism, which combined the advantages of centralized and distributed methods. By organizing neighbor sites, the exchange of site information was confined to a range of neighbor sites. When a site scheduled tasks, it appended the load information of itself and its neighbors to the job transfer request. This resolved the network traffic and low server efficiency caused by frequently broadcasting load information, achieved load balancing with minimum response time in cloud computing, and introduced a fault-tolerance mechanism to enhance the scalability of the cloud system. The experimental results show that the proposed strategy is superior to traditional algorithms by more than 20% in task distribution time and response time, and also surpasses them in stability.
    Cloud computing task scheduling based on dynamically adaptive ant colony algorithm
    WANG Fang LI Meian DUAN Weijun
    2013, 33(11):  3160-3162. 
    Abstract ( )   PDF (621KB) ( )  
    Related Articles | Metrics
    A task scheduling strategy based on a dynamically adaptive ant colony algorithm was proposed for the first time to address drawbacks that have long existed in the ant colony algorithm, such as slow convergence and easily falling into local optima. Chaos disruption was introduced when selecting resource nodes, the pheromone evaporation factor was adjusted adaptively according to each node's pheromone, and the pheromone was updated dynamically according to the quality of the solutions. When the number of tasks was greater than 150, compared with the basic ant colony algorithm, the time efficiency of the proposed dynamically adaptive ant colony algorithm improved by up to 319% and the resource load was 0.51. The simulation results prove that the proposed algorithm improves the convergence rate and the global search ability.
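The adaptive-evaporation idea above can be sketched as scaling the evaporation factor with a node's pheromone level and depositing an amount tied to solution quality. All constants and the linear scaling rule are illustrative assumptions; the abstract does not specify the exact adaptation formula.

```python
def adaptive_rho(tau, tau_min, tau_max, rho_base=0.5, rho_min=0.1, rho_max=0.9):
    # Adapt the evaporation factor to a node's pheromone level: heavily
    # marked nodes evaporate faster, weakly marked ones slower.
    if tau_max == tau_min:
        return rho_base
    ratio = (tau - tau_min) / (tau_max - tau_min)
    return rho_min + ratio * (rho_max - rho_min)

def update_pheromone(tau, rho, quality):
    # Evaporate, then deposit an amount proportional to solution quality.
    return (1.0 - rho) * tau + quality
```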
    Lightweight evaluation algorithm for infix arithmetic expression
    BAI Yu GUO Xiane
    2013, 33(11):  3163-3166. 
    Abstract ( )   PDF (595KB) ( )  
    Related Articles | Metrics
    Concerning the bulkiness or complexity of current evaluation algorithms for infix arithmetic expressions, a lightweight evaluation algorithm was proposed. The algorithm was based on reverse splitting of the infix expression: by recursive analysis, it evaluated the expression in a way equivalent to constructing its binary expression tree. The experimental results show that, compared with the traditional algorithm of Reverse Polish Notation (RPN) transformation and evaluation, the proposed algorithm needed neither an RPN transformation nor a hand-managed stack, its amount of code was only 1/6 of that of the RPN approach, and its efficiency declined by only 6.9%. Compared with the W3Eval algorithm, it needed no operator transposition table while supporting the definition or redefinition of operators, and its amount of code was less than half that of W3Eval. The algorithm has a low implementation cost and is suitable for lightweight applications such as the browser side of Web applications and embedded applications.
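One way to realize the reverse-split idea is to scan from the right for the last top-level operator of lowest precedence, split there, and recurse on both halves, which evaluates the expression without any RPN conversion or explicit stack. This is a hedged sketch of the idea (binary `+ - * /` and parentheses only); the paper's operator redefinition support and other details are omitted.

```python
def evaluate(expr):
    # Reverse split: find the last top-level operator of lowest precedence,
    # split there, and recurse -- no RPN conversion, no explicit stack.
    expr = expr.strip()
    for ops in ('+-', '*/'):                  # lowest precedence first
        depth = 0
        for i in range(len(expr) - 1, -1, -1):
            ch = expr[i]
            if ch == ')':
                depth += 1
            elif ch == '(':
                depth -= 1
            elif depth == 0 and ch in ops and i > 0 and expr[i - 1] not in '+-*/(':
                left, right = evaluate(expr[:i]), evaluate(expr[i + 1:])
                return {'+': left + right, '-': left - right,
                        '*': left * right, '/': left / right}[ch]
    if expr.startswith('(') and expr.endswith(')'):
        return evaluate(expr[1:-1])           # strip outer parentheses
    return float(expr)                        # a bare number (possibly signed)
```

Scanning right to left keeps `-` and `/` left-associative: `1-2-3` splits at the last `-` into `(1-2)-3`.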
    Multimedia processing technology
    Multi-feature fusion method for mesh simplification
    WANG Hailing WANG Jian YIN Guisheng FU Qiao ZHOU Bo
    2013, 33(11):  3167-3171. 
    Abstract ( )   PDF (780KB) ( )  
    Related Articles | Metrics
    Most mesh simplification algorithms for three-dimensional (3D) models may cause oversimplification and distortion during simplification. To address this problem, an efficient multi-feature fusion method for mesh simplification was proposed. The method first measured geometric feature information with a quadric error metric weighted by normal information, then measured visual feature information with torsion weighted by the side ratio of each triangle, and finally proposed a multi-feature fusion metric to guide model simplification. The method was compared with other edge contraction algorithms on execution time and visual quality; the results show that it guarantees computational efficiency, better preserves visual shape features, and reduces the oversimplification and distortion of the simplified model.
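The quadric error metric that the geometric term builds on can be sketched in its standard form: each plane contributes a 4x4 quadric, quadrics are summed per vertex, and the error of a candidate contraction position is the quadratic form v^T Q v. This is the classic unweighted QEM, not the paper's normal-weighted variant; the planes below are hypothetical.

```python
def plane_quadric(a, b, c, d):
    # Fundamental error quadric K_p = p p^T for the plane ax+by+cz+d = 0
    # (unit normal assumed), stored as a 4x4 nested list.
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(Q1, Q2):
    # Per-vertex quadrics are the sums of the quadrics of incident planes.
    return [[Q1[i][j] + Q2[i][j] for j in range(4)] for i in range(4)]

def vertex_error(Q, v):
    # v^T Q v with homogeneous v = (x, y, z, 1): the sum of squared
    # distances to the planes accumulated in Q.
    vh = (v[0], v[1], v[2], 1.0)
    return sum(vh[i] * Q[i][j] * vh[j] for i in range(4) for j in range(4))

# A vertex on the plane z = 0 has zero error; one unit away has error 1.
Q = plane_quadric(0.0, 0.0, 1.0, 0.0)
```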
    Best viewpoints selection based on feature points detection
    ZHU Fan YANG Fenglei
    2013, 33(11):  3172-3175. 
    Abstract ( )   PDF (902KB) ( )  
    Related Articles | Metrics
    This paper proposed a new approach that selects best viewpoints for 3D models based on feature point detection. First, a new saliency measure was defined to compute the saliency of 3D mesh vertices, assuming that the saliency of a given vertex on a 3D model can be described by its average difference of distances within a local space. Then, effective feature points were extracted according to vertex saliency. Finally, a simple selection strategy was adopted to determine the best viewpoints for 3D mesh models, where the quality of a viewpoint combined the geometric distribution and the saliency of the visible feature points. The experimental results validate the effectiveness of the proposed approach, which measures viewpoint quality objectively and obtains best viewpoints with good visual effect.
    Image memorability model based on visual saliency entropy and Object Bank feature
    CHEN Changyuan HAN Junwei HU Xintao CHENG Gong GUO Lei
    2013, 33(11):  3176-3178. 
    Abstract ( )   PDF (674KB) ( )  
    Related Articles | Metrics
    To improve the prediction of image memorability, a method for automatically predicting the memorability of an image was proposed by using visual saliency entropy and an improved Object Bank feature. The proposed method improved the traditional Object Bank feature and extracted the visual saliency entropy feature, and then a prediction model of image memorability was constructed using Support Vector Regression (SVR). The experimental results show that the correlation coefficient of the proposed method is three percentage points higher than that of the state-of-the-art method. The proposed model can be used in image memorability prediction, image retrieval ranking and advertisement assessment analysis.
    Mean Shift tracking for video moving objects in combination with scale invariant feature transform and Kalman filter
    ZHU Zhiling RUAN Qiuqi
    2013, 33(11):  3179-3182. 
    Abstract ( )   PDF (825KB) ( )  
    Related Articles | Metrics
    To solve the problem of poor tracking performance when a moving target undergoes relatively large scale change, rotation, fast motion or occlusion, an object tracking method combining Scale Invariant Feature Transform (SIFT) matching and Kalman filtering with the Mean Shift algorithm was put forward. First, the Kalman filter was used to predict the motion state of the target, and its estimate was taken as the initial position of the Mean Shift tracker. Then, when the similarity coefficient between the candidate target model and the initial target model fell below a certain threshold, SIFT feature matching was used to find the possible position of the target; a new candidate target model was built there and its similarity with the initial target model was measured. Finally, by comparing the two matching coefficients, the position with the larger one was selected as the target's final position. The experimental results show that the average tracking error of this algorithm is about twenty percent lower than that of tracking algorithms combining only SIFT features or only the Kalman filter with Mean Shift.
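The first step, seeding Mean Shift with a Kalman prediction, can be sketched with a constant-velocity model applied per image axis. This is a generic textbook prediction step under assumed dynamics and an assumed process-noise level `q`, not the paper's exact filter design.

```python
def kalman_predict(state, P, dt=1.0, q=1e-2):
    # Constant-velocity Kalman prediction for one image axis.
    # state = (position, velocity); P is the 2x2 covariance as a flat tuple
    # (p00, p01, p10, p11); q is an assumed process-noise level.
    pos, vel = state
    pred = (pos + vel * dt, vel)
    p00, p01, p10, p11 = P
    # P' = F P F^T + Q with F = [[1, dt], [0, 1]] and diagonal Q.
    Pp = (p00 + dt * (p01 + p10) + dt * dt * p11 + q,
          p01 + dt * p11,
          p10 + dt * p11,
          p11 + q)
    return pred, Pp

# The predicted position seeds the Mean Shift search window.
pred, cov = kalman_predict((10.0, 2.0), (1.0, 0.0, 0.0, 1.0))
```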
    Belly shape modeling with new combined invariant moment based on stereo vision
    LIU Huan ZHU Ping XIAO Rong TANG Weidong
    2013, 33(11):  3183-3186. 
    Abstract ( )   PDF (642KB) ( )  
    Related Articles | Metrics
    To overcome the influence of illumination change and blurring in actual shooting on stereo vision-based three-dimensional reconstruction, new illumination-robust combined invariant moments were put forward. Meanwhile, to improve image feature matching that solely depends on similarity, dual constraints of slope and distance were incorporated into the similarity measurement, and matching was carried out under their combined action. Finally, the three-dimensional reconstruction of the whole belly contour was built automatically. The belly shape parameters obtained by the proposed method achieve the same accuracy as a 3D scanner, and the measurement error with respect to the actual value is less than 0.5 cm. The experimental results show that the hardware of the system is simple and low-cost, and its information collection is fast and reliable, making the system suitable for apparel design.
    Behavior recognition in rehabilitation training based on modified naive Bayes classifier
    ZHANG Yi HUANG Cong LUO Yuan
    2013, 33(11):  3187-3189. 
    Abstract ( )   PDF (638KB) ( )  
    Related Articles | Metrics
    This paper proposed a modified behavior recognition method to improve the recognition rate in rehabilitation training. First, a Kinect sensor was adopted to detect human skeleton positions, the motion features in rehabilitation training were defined, and the Bayes classifier was designed. Second, the threshold selection process was improved to increase the recognition rate. The comparative experimental results with the unmodified classifier show that the modified naive Bayes classifier is simple and fast, and achieves better recognition results in rehabilitation training.
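The underlying classification step can be sketched as a Gaussian naive Bayes decision over motion features. This is a generic sketch, not the paper's classifier or its threshold modification; the one-feature model (mean/variance of an elbow angle under two hypothetical actions) is purely illustrative.

```python
import math

def gaussian_log_pdf(x, mean, var):
    # Log density of a univariate Gaussian.
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def nb_classify(features, model, priors):
    # Naive Bayes over continuous motion features: assume the features are
    # independent given the action class and pick the maximum posterior.
    best, best_score = None, float('-inf')
    for cls, params in model.items():
        score = math.log(priors[cls])
        for f, (mean, var) in zip(features, params):
            score += gaussian_log_pdf(f, mean, var)
        if score > best_score:
            best, best_score = cls, score
    return best

# Hypothetical one-feature model: (mean, variance) of an elbow angle.
model = {
    'raise_arm': [(90.0, 25.0)],
    'lower_arm': [(10.0, 25.0)],
}
priors = {'raise_arm': 0.5, 'lower_arm': 0.5}
```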
    Multi-scale analysis for remote sensing target recognition
    BO Shukui JING Yongju
    2013, 33(11):  3190-3192. 
    Abstract ( )   PDF (566KB) ( )  
    Related Articles | Metrics
    To address the problem that the features of a target change with image resolution in target recognition, a multi-scale analysis of target recognition was provided. First, the multi-scale features of a target were analyzed on the basis of mixed pixels in multi-resolution images: at different scales, the features of a target behave differently, which is related to the proportion of mixed pixels in the target. Second, the structural differences of the target were analyzed and illustrated through multi-scale target recognition experiments. Last, the concept of "dominating class" was proposed based on the multi-scale analysis. The experimental results show that the shape and structural features change with the image scale, and larger scales lead to bigger changes. This paper studies target features in multi-scale images and provides guidance for image target recognition.
    Color image registration algorithm based on hypercomplex bispectrum slices
    LIAN Wei ZUO Junyi
    2013, 33(11):  3193-3196. 
    Abstract ( )   PDF (646KB) ( )  
    Related Articles | Metrics
    To solve the color image registration problem where similarity transformations may exist between two images in both the spatial and color spaces, a new bispectrum slice formulation suitable for the hypercomplex domain was presented. The formulation can be derived by applying the hypercomplex Fourier transform to the time-domain form of the complex bispectrum slices. The resulting hypercomplex bispectrum slices are translation invariant and can be used to solve rotation and scale changes between two color images. The simulation results show that, compared with the method of complex bispectrum slices, the proposed method is more robust against disturbances; its error is generally only half that of the former method.
    Adaptive weighted mean filtering algorithm based on city block distance
    CAO Meng ZHANG Youhui WANG Zhiwei DONG Rui ZHEN Yingjuan
    2013, 33(11):  3197-3200. 
    Abstract ( )   PDF (700KB) ( )  
    Related Articles | Metrics
    Concerning the defects that the traditional filtering window cannot be extended adaptively and the standard mean filter easily blurs edges, a new adaptive weighted mean filtering algorithm based on the city block distance was proposed. First, noise points were detected with the switching filter idea. Then, for each noise point, the window was extended according to the city block distance, and the window size was adjusted adaptively based on the number of signal points within it. Last, the weighted mean of the signal points in the window was taken as the gray value of the noise point to effectively restore it. The experimental results show that the algorithm effectively filters out salt-and-pepper noise, especially for images with large noise density, where the denoising effect is more significant.
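The window-growing and weighted-restoration steps can be sketched as follows. The 1/distance weighting and the growth thresholds are illustrative assumptions; the abstract only states that a weighted mean of the signal points is used.

```python
def city_block_window(img, y, x, noise, min_signal=3, max_radius=5):
    # Grow a diamond-shaped (city-block) neighbourhood around the noise
    # pixel until it contains at least `min_signal` signal pixels.
    h, w = len(img), len(img[0])
    pts = []
    for r in range(1, max_radius + 1):
        pts = [(i, j) for i in range(h) for j in range(w)
               if 0 < abs(i - y) + abs(j - x) <= r and not noise[i][j]]
        if len(pts) >= min_signal:
            break
    return pts

def restore_pixel(img, y, x, noise):
    # Replace the noise pixel by the distance-weighted mean of the signal
    # pixels in the adaptive window (weight 1/d is an illustrative choice).
    pts = city_block_window(img, y, x, noise)
    if not pts:
        return img[y][x]
    wsum = vsum = 0.0
    for i, j in pts:
        wgt = 1.0 / (abs(i - y) + abs(j - x))
        wsum += wgt
        vsum += wgt * img[i][j]
    return vsum / wsum

img = [[10.0] * 3 for _ in range(3)]
img[1][1] = 255.0                       # salt noise at the centre
noise = [[False] * 3 for _ in range(3)]
noise[1][1] = True
restored = restore_pixel(img, 1, 1, noise)
```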
    Ultrasound image anisotropic diffusion de-speckling method based on Mallat-Zhong discrete wavelet transform
    WU Shibin CHEN Bo DONG Wanli GAO Xiaoming
    2013, 33(11):  3201-3203. 
    Abstract ( )   PDF (545KB) ( )  
    Related Articles | Metrics
    For speckle noise in ultrasound images, traditional anisotropic diffusion methods have disadvantages such as insufficient noise suppression and poor preservation of edge details. A de-speckling method based on the Mallat-Zhong Discrete Wavelet Transform (MZ-DWT) was therefore proposed. The method used the MZ-DWT and the Expectation Maximization (EM) algorithm as the discrimination factor between homogeneous and edge regions, so as to control the diffusion intensity and rate more accurately and achieve both noise suppression and detail preservation. The experimental results show that the proposed algorithm de-speckles better while preserving image details, and outperforms the traditional anisotropic diffusion methods.
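For context, the classic anisotropic diffusion step that such methods build on (Perona-Malik) can be sketched as below; the paper replaces the gradient-based conductance with an MZ-DWT/EM discrimination factor, which this sketch does not implement. The constants `k` and `lam` are illustrative.

```python
def diffusion_coefficient(grad, k):
    # Perona-Malik conductance: near 1 in homogeneous regions (strong
    # smoothing), near 0 at edges (details preserved).
    return 1.0 / (1.0 + (grad / k) ** 2)

def diffuse_step(img, k=10.0, lam=0.2):
    # One explicit anisotropic diffusion iteration on a 2D list-of-lists
    # image (4-neighbour differences, clamped borders).
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny = min(max(y + dy, 0), h - 1)
                nx = min(max(x + dx, 0), w - 1)
                d = img[ny][nx] - img[y][x]
                total += diffusion_coefficient(abs(d), k) * d
            out[y][x] = img[y][x] + lam * total
    return out
```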
    Iterative filtering algorithm for color image based on visual sensitivity and improved directional distance
    LI Gaoxi CAO Jun ZHANG Fuyuan LI Hua
    2013, 33(11):  3204-3208. 
    Abstract ( )   PDF (840KB) ( )  
    Related Articles | Metrics
    To eliminate the influence of visual differences on color image filtering, a method to compute the visual sensitivity values of the three primary colors was given. The filtering algorithm first detected noise pixels by rough set theory, then used visual sensitivity to correct the detection results, and finally filtered the noise pixels by an improved Directional Distance Filter (DDF). The experimental results show that, compared with several other representative vector filters, the new filter performs better in color preservation, edge detail preservation and noise filtering rate. In addition, under different noise densities, the normalized mean square error of the proposed method is the smallest and its Peak Signal-to-Noise Ratio (PSNR) is the largest.
    Tilt correction algorithm based on aggregation of grating projection sequences
    LIU Xu WU Ling CHEN Niannian FAN Yong DUAN Jingjing REN Xinyu XIA Jingjing
    2013, 33(11):  3209-3212. 
    Abstract ( )   PDF (612KB) ( )  
    Related Articles | Metrics
    In view of the correction errors caused by factors such as dithering, a new optical tilt correction method based on grating projection was presented. The method analyzed each pixel of the data array in a sequence of fringe patterns with multiple frequencies, and set up a model of pixel coordinates versus pixel slope. The skew angles of the fringes were then calculated by trigonometry from the relationship between the tilt angle and the pixel slope, and tilt correction was finally realized. The experimental results show that the algorithm accurately detects angles within the range [-90°, 90°] with an accuracy of 99%. Compared with algorithms such as the Hough transform, the proposed algorithm improves precision and accuracy significantly.
    Texture clustering matting algorithm
    YANG Wei GAN Tao LAN Gang
    2013, 33(11):  3213-3216. 
    Abstract ( )   PDF (660KB) ( )  
    Related Articles | Metrics
    To solve the problem that traditional matting methods do not perform well in highly textured regions, a Texture Clustering Matting (TCM) algorithm based on K-Nearest Neighbor (KNN) matting was proposed. First, texture features were extracted. Second, a new feature space containing color, position and texture information was constructed. Third, the matting Laplacian matrix was built by clustering neighbors in the new feature space. Last, the opacity was obtained using the closed-form solution. The experiments on benchmark datasets indicate that the overall ranking of the proposed method is significantly improved, and it achieves a comparatively leading matting effect on highly textured images.
    Simulation of ink diffusion on Xuan paper
    FAN Dongyun LI Haisheng
    2013, 33(11):  3220-3223. 
    Abstract ( )   PDF (663KB) ( )  
    Related Articles | Metrics
    Ink diffusion is a complex physical phenomenon. Concerning the simulation of ink diffusion on Xuan paper, this paper proposed a simulation method based on a diffusion equation with a variable coefficient, whose diffusion coefficient depends on the Xuan paper structure and on the ink residue, which decreases with time. The simulation consists of two steps: simulating the Xuan paper structure and simulating the dynamic procedure of diffusion. To simulate the paper structure, a weighted fiber structure was proposed, consisting of uniformly distributed line segments with different weights and random directions. The dynamic procedure of ink diffusion was described by the diffusion equation; to generate the diffusion image efficiently, the Crank-Nicolson method was used to solve the equation, the fiber structure was pre-computed, and the diffusion image was updated dynamically. Compared with previous similar simulation methods, this method renders a more natural diffusion boundary and overcomes the problem of an excessively smooth boundary. The experimental results demonstrate that the approach can realistically simulate the effects of ink diffusion on different Xuan papers.
    Information security
    Network security situational assessment based on link performance analysis
    HUANG Zhengxing SU Yang
    2013, 33(11):  3224-3227. 
    Abstract ( )   PDF (650KB) ( )  
    Related Articles | Metrics
    Concerning the fusion essence of network situational assessment and the inability of the existing Analytic Hierarchy Process (AHP) to perceive unknown attacks, a network security evaluation method based on link security situations was proposed. With the help of network performance analysis theory, a network security situational assessment model based on link performance analysis was built. Firstly, each link's security situational value was calculated and the values were arranged in a matrix; secondly, each link's weight and security situational value were used to obtain the network security situational value, expressed as a vector. The experimental results show that the proposed method not only reflects changes of both the partial and the entire security situation but also perceives unknown attacks, which greatly facilitates the administrator's work.
    Subjective trust metric based on weighted multi-attribute cloud
    FAN Tao ZHANG Mingqing LIU Xiaohu CHENG Jian
    2013, 33(11):  3228-3231. 
    Abstract ( )   PDF (737KB) ( )  
    Related Articles | Metrics
    The existing trust metrics based on the cloud model lack consideration of multi-granularity and timeliness. For this reason, a trust metric algorithm based on weighted multi-attribute cloud was proposed. First of all, a multi-attribute trust cloud was used to refine the granularity of the trust metric, and a time decay function was introduced into the entity trust computation; secondly, multi-attribute synthesis and multi-path merging were used to obtain the entity's ultimate trust cloud. Finally, the trust level of the entity was obtained by comparison with the basis trust cloud using a cloud similarity comparison algorithm. The simulation results under a grid computing environment show that when the node interactions reached 100 times, the interaction success rate of the weighted multi-attribute cloud metric was 80%, significantly higher than the 65% of the traditional method, indicating that the weighted multi-attribute trust cloud metric can improve the accuracy of trust measurement.
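The time decay function mentioned above can be sketched with an exponential half-life weighting; the half-life value and the interaction history are assumptions for illustration, not the paper's parameters:

```python
import math

def time_decay(t_now, t_interaction, half_life=10.0):
    """Decay weight for an interaction observed at t_interaction.
    half_life is an assumed tuning parameter, not taken from the paper."""
    return 0.5 ** ((t_now - t_interaction) / half_life)

def decayed_trust(interactions, t_now, half_life=10.0):
    """Weighted mean of trust ratings, recent interactions counting more.
    interactions: list of (timestamp, rating in [0, 1])."""
    weights = [time_decay(t_now, t, half_life) for t, _ in interactions]
    num = sum(w * r for w, (_, r) in zip(weights, interactions))
    return num / sum(weights)

# Old high-trust evidence followed by a recent low-trust interaction.
history = [(0, 0.9), (5, 0.8), (10, 0.2)]
trust = decayed_trust(history, t_now=10)
```

Because the recent poor interaction is weighted most heavily, the decayed trust falls below the plain average of the ratings, capturing the timeliness the abstract argues for.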
    Secret information sharing algorithm based on CL multi-wavelet and combination bit plane for confidential communication
    ZHANG Tao REN Shuai JU Yongfeng LING Rao YANG Zhaohui
    2013, 33(11):  3232-3234. 
    Abstract ( )   PDF (490KB) ( )  
    Related Articles | Metrics
    In view of the contradiction between the capacity, invisibility and robustness of existing information hiding algorithms, a preprocessing algorithm for digital images based on CL (Chui-Lian) multi-wavelet transform and Combination Bit Plane (CBP) was proposed. The digital image preprocessed by CL multi-wavelet and CBP was then taken as the cover image to embed secret information for confidential communication and image sharing. The CL multi-wavelet transform could divide the cover image into four lowest-resolution sub-images with different energy levels, and the CBP method could decompose these four sub-images into different bit planes as the final embedding regions. During the hiding procedure, robust information, secret information and fragile information could be embedded according to the energy and robustness characteristics of the embedding regions. The experimental results show that the robustness against several common attacks described in this paper is enhanced compared with two other methods when the embedding rate is 25%. The Peak Signal-to-Noise Ratio (PSNR) is increased by 37.16% and 20.00% respectively compared with the Discrete Cosine Transform-Least Significant Bit (DCT-LSB) and Discrete Wavelet Transform-Least Significant Bit (DWT-LSB) algorithms.
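The bit-plane decomposition underlying the CBP step can be sketched as follows; this shows only the generic split of an 8-bit image into planes and its lossless recombination, not the paper's CL multi-wavelet preprocessing or its embedding rules:

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into its 8 bit planes (plane 0 = LSB)."""
    return [(img >> k) & 1 for k in range(8)]

def recombine(planes):
    """Rebuild the image from its bit planes."""
    return sum(p.astype(np.uint8) << k for k, p in enumerate(planes))

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy cover image
planes = bit_planes(cover)
restored = recombine(planes)
```

An LSB-style scheme would then overwrite selected low-order planes with secret bits; the higher-order planes are the more robust embedding regions because they carry more of the image energy.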
    Identity-based on-the-fly encryption and decryption scheme for controlled documents
    JIN Biao XIONG Jinbo YAO Zhiqiang LIU Ximeng
    2013, 33(11):  3235-3238. 
    Abstract ( )   PDF (658KB) ( )  
    Related Articles | Metrics
    To deal with the increasingly serious situation of document security and better protect controlled documents, an identity-based On-The-Fly Encryption (OTFE) and decryption scheme for controlled documents was proposed, which combined an Identity-Based Encryption (IBE) algorithm with on-the-fly encryption technology. In the scheme, file system filter driver technology was used to monitor programs' behaviors on the controlled documents; meanwhile, the IBE algorithm was used to encrypt and decrypt the controlled documents. Specifically, a new algorithm was proposed that associated the original ciphertext and divided the associated ciphertext into two parts stored in different locations; therefore, it is impossible for an adversary to obtain the whole ciphertext and further recover the original plaintext. Finally, the scheme was described in detail at both the system level and the algorithm level. The security analysis indicates that the proposed scheme can effectively protect controlled documents.
    Identification of encrypted function in malicious software
    CAI Jianzhang WEI Qiang ZHU Yuefei
    2013, 33(11):  3239-3243. 
    Abstract ( )   PDF (773KB) ( )  
    Related Articles | Metrics
    To address the fact that malware (malicious software) usually evades security detection and flow analysis through encryption functions, this paper proposed a scheme that can identify the encryption functions in malware. The scheme generated a dynamic loop data flow graph by identifying the loops and the input/output of each loop in the dynamic trace. Then the input/output sets were abstracted according to the loop data flow graph, reference implementations of known encryption functions were prepared, and each reference was computed with elements of the input sets as parameters. If the result matched any element of the output sets, the scheme could conclude that the malware encrypts information with that known encryption function. The experimental results prove that the proposed scheme can analyze the encryption functions applied to payloads in obfuscated malware.
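The reference-computation check described above can be sketched as follows; MD5 stands in here as the known primitive, and the captured loop input/output sets are invented for illustration, not taken from a real trace:

```python
import hashlib

def reference_md5(data: bytes) -> bytes:
    """Reference implementation of a known primitive (MD5 as a stand-in)."""
    return hashlib.md5(data).digest()

def identify_primitive(loop_inputs, loop_outputs, references):
    """Run each known reference over every captured loop input; if a result
    appears among the captured loop outputs, report the matching primitive."""
    matches = []
    for name, ref in references.items():
        if any(ref(x) in loop_outputs for x in loop_inputs):
            matches.append(name)
    return matches

# Pretend the dynamic trace captured these I/O sets from a loop in the malware.
captured_in = [b"secret payload", b"other buffer"]
captured_out = {hashlib.md5(b"secret payload").digest(), b"\x00" * 16}
found = identify_primitive(captured_in, captured_out, {"MD5": reference_md5})
```

A real deployment would register references for common ciphers and hashes (RC4, AES, MD5, ...) and apply the same input-to-output matching over the loop data flow graph.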
    Further study on algebraic structure of RSA algorithm
    PEI Donglin LI Xu
    2013, 33(11):  3244-3246. 
    Abstract ( )   PDF (602KB) ( )  
    References | Related Articles | Metrics
    By making use of the theory of quadratic residues under the condition of strong primes, a method for studying the algebraic structure of Z*φ(n) in the RSA (Rivest-Shamir-Adleman) algorithm was established in this work. A formula to determine the order of an element in Z*φ(n) and an expression for the maximal order were proposed; in addition, the numbers of quadratic residues and non-residues in the group Z*φ(n) were calculated. This work gave the estimate that the upper bound of the maximal order is φ(φ(n))/4 and obtained a necessary and sufficient condition for the maximal order being equal to φ(φ(n))/4. Furthermore, a sufficient condition for A1 being a cyclic group was presented, where A1 is the subgroup composed of all quadratic residues in Z*φ(n), and a method for the decomposition of Z*φ(n) was also established. Finally, it was proved that the group Z*φ(n) can be generated by seven quadratic non-residues and that the quotient group Z*φ(n)/A1 is a Klein group of order 8.
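The quantities the abstract studies can be checked by brute force on a toy instance; the primes p = 11 and q = 23 below are an illustrative choice (both of the safe-prime form 2m+1), not an example from the paper:

```python
from math import gcd

def units(m):
    """Elements of the multiplicative group Z*_m."""
    return [a for a in range(1, m) if gcd(a, m) == 1]

def order(a, m):
    """Multiplicative order of a modulo m (a must be coprime to m)."""
    x, k = a % m, 1
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

def quadratic_residues(m):
    """Distinct squares of units modulo m."""
    return {(a * a) % m for a in units(m)}

p, q = 11, 23                 # toy "strong" primes: 11 = 2*5+1, 23 = 2*11+1
phi_n = (p - 1) * (q - 1)     # phi(n) = 220 for n = p*q
group = units(phi_n)          # Z*_phi(n), of size phi(phi(n)) = 80
max_order = max(order(a, phi_n) for a in group)
qr = quadratic_residues(phi_n)
```

In this toy case the maximal order is 20 = φ(φ(n))/4, matching the upper bound the abstract reports, and there are 10 quadratic residues among the 80 units.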
    About the secret sharing scheme applied in the Pi calculus
    XU Jun
    2013, 33(11):  3247-3251. 
    Abstract ( )   PDF (796KB) ( )  
    Related Articles | Metrics
    In this paper, an abstraction of secret sharing schemes that is accessible to fully mechanized analysis was given. This abstraction was formalized within the applied Pi-calculus by using an equational theory that characterized the cryptographic semantics of secret shares. On this basis, an encoding from the equational theory into a convergent rewriting system was presented, which is suitable for the automated protocol verifier ProVerif. Finally, the first general soundness result for verifiable multi-secret sharing schemes was concluded: for multi-secret sharing schemes satisfying the specified security criterion in ProVerif, the realistic adversaries modeled on multi-secret sharing schemes in the Pi-calculus can simulate the ideal adversaries in the verifier ProVerif, which means that realistic adversaries and ideal adversaries are indistinguishable.
    Computer software technology
    Score distribution method for Web service composition
    WANG Wei FU Xiaodong XIA Yongying TIAN Qiang LI Changzhi
    2013, 33(11):  3252-3256. 
    Abstract ( )   PDF (858KB) ( )  
    Related Articles | Metrics
    To distribute the score of a composite service obtained from the customer to each component service based on the actual and historical performance of the component services, Analytic Hierarchy Process (AHP) was used to calculate the distribution weight of each component service, in which a method was presented to convert the Web service process into a structure tree, and the weight matrix was used to calculate the weight of each node in the tree structure. The relationship between the actual Quality of Service (QoS) of a component service and its advertised QoS utility interval was taken into consideration, and through a deviation function, the deviation proportion between the actual QoS utility value of a component service and the average actual QoS utility value of all component services was calculated; meanwhile, the influence of the historical performance of each component service on score distribution was considered. The experimental results show that the actual QoS and historical performance of component services have some influence on the distributed score, and demonstrate that the proposed approach can achieve a reasonable and fair score distribution.
    Detection of array bound overflow by interval set based on Cppcheck
    ZHANG Shijin SHANG Zhaowei
    2013, 33(11):  3257-3261. 
    Abstract ( )   PDF (685KB) ( )  
    Related Articles | Metrics
    The false positive rate and the false negative rate of the open source software Cppcheck are too high, and defects cannot be detected during program running. An interval set algorithm was put forward on the basis of Cppcheck and was used for detecting array bound overflow. An integer interval set and an array interval set were established by introducing the concept of the interval set. Interval values of program variables and expressions were constructed under the framework of Cppcheck, and contradictions were detected to locate defects. Compared with Cppcheck, the precision rate of the algorithm increased by 18.5%, the false negative rate decreased by 22.5% and the false positive rate increased by 3.5%. The experimental results show that the proposed algorithm can effectively detect the defects of a running program and the detection performance is improved.
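The interval-set idea behind this detector can be sketched with a tiny abstract domain; the `Interval` class and the loop example are illustrative, not Cppcheck's actual representation:

```python
class Interval:
    """Closed integer interval [lo, hi] used as an abstract value."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def index_may_overflow(idx: Interval, length: int) -> bool:
    """Flag a[idx] if any value in idx falls outside [0, length)."""
    return idx.lo < 0 or idx.hi >= length

# int a[10]; for (i = 0; i <= 10; ++i) a[i] = 0;  -> i ranges over [0, 10]
i = Interval(0, 10)
overflow = index_may_overflow(i, 10)            # a[10] is out of bounds
safe = index_may_overflow(Interval(0, 9), 10)   # all accesses in bounds
```

The analyzer propagates such intervals through assignments and expressions and reports a defect wherever an access interval contradicts the declared array bounds.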
    Test case design algorithm for basic path test
    WANG Min CHEN Shaomin CHEN Yaguang
    2013, 33(11):  3262-3266. 
    Abstract ( )   PDF (736KB) ( )  
    Related Articles | Metrics
    McCabe's basic path testing method (McCABE T J. A complexity measure. IEEE Transactions on Software Engineering, 1976, SE-2(4): 308-320) is one of the more rigorous dynamic white-box software testing techniques, but its efficiency is low. To solve this problem, this paper proposed an algorithm for designing basic path test cases according to the basic program structures. The algorithm first created a basic unit graph based on Z-path coverage. Next, the rules for combining basic units into a control flow graph were established. On this basis, the combination algorithm for the basic paths was constructed. The algorithm collected the path set of the basic program structures by scanning a program only once, and then generated the basic path set from those paths by combination. This method is more concise than McCabe's method, and it can improve the efficiency of basic path test case design.
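The quantities involved can be illustrated on a toy control flow graph: McCabe's V(G) = E - N + 2 bounds the number of basis paths, and for a loop-free graph simple-path enumeration lists them. The graph below (an if/else merging at one node) is an invented example, not from the paper:

```python
def cyclomatic_complexity(edges, nodes):
    """McCabe's V(G) = E - N + 2 for a single connected control flow graph."""
    return len(edges) - len(nodes) + 2

def simple_paths(graph, start, end, path=None):
    """Enumerate simple paths from start to end; for loop-free graphs these
    are exactly the candidate basic paths."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    result = []
    for nxt in graph.get(start, []):
        if nxt not in path:
            result += simple_paths(graph, nxt, end, path)
    return result

# Toy CFG: entry -> a; a branches to b or c; both rejoin at d.
graph = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"]}
nodes = {"entry", "a", "b", "c", "d"}
edges = [(u, v) for u, vs in graph.items() for v in vs]
vg = cyclomatic_complexity(edges, nodes)
paths = simple_paths(graph, "entry", "d")
```

Here V(G) = 5 - 5 + 2 = 2 and exactly two independent paths exist, one per branch, which is the set a basic path test suite must cover.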
    Research on cache model in mobile database
    WENG Changling YANG Qing
    2013, 33(11):  3267-3270. 
    Abstract ( )   PDF (667KB) ( )  
    Related Articles | Metrics
    To improve the performance of mobile database systems, a cache model was proposed for mobile databases. A synchronization algorithm based on message digests was used in this model. By comparing the message digest values on the mobile client and the server, the algorithm completed cache synchronization and maintained the consistency between the mobile client cache and the data on the server. The timeliness of the data and the priority of the transaction were considered in this model, and a cache replacement algorithm based on a cost function was designed. The experimental results show that the cache hit rate of the proposed algorithm is higher than that of the Least Recently Used (LRU) and Least Access-to-Update Ratio (LA2U) algorithms as the number of cached data items increases, while the transaction restart rate is lower than that of LRU and LA2U as the access frequency increases. The cache performance of the mobile database is thus improved.
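A cost-function replacement policy of the kind described can be sketched as follows; the cost formula, its weights and the sample entries are illustrative assumptions, not the paper's actual function:

```python
def replacement_cost(entry, now):
    """Cost of evicting a cache entry: frequently accessed, high-priority,
    recently touched data is expensive to lose. Weights are illustrative."""
    age = now - entry["last_access"]
    return (2.0 * entry["access_count"] + 1.5 * entry["priority"]) / (1.0 + age)

def evict(cache, now):
    """Evict the entry whose loss costs the least."""
    victim = min(cache, key=lambda key: replacement_cost(cache[key], now))
    del cache[victim]
    return victim

cache = {
    "hot":  {"access_count": 9, "priority": 2, "last_access": 9.0},
    "cold": {"access_count": 1, "priority": 1, "last_access": 1.0},
}
victim = evict(cache, now=10.0)
```

Unlike LRU, which looks only at recency, the cost function lets transaction priority and data timeliness keep valuable entries cached even when they are not the most recently used.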
    Typical applications
    Logistics service supply chain coordination based on forecast-commitment contract
    HE Chan LIU Wei
    2013, 33(11):  3271-3275. 
    Abstract ( )   PDF (810KB) ( )  
    Related Articles | Metrics
    To coordinate a logistics service supply chain composed of a single-function sub-contractor and an integrator, a forecast-commitment contract was proposed. In this contract, the logistics service integrator provided a forecast for a future order and a guarantee to purchase a portion of it. Based on the information from the integrator, the logistics service sub-contractor made a decision on logistics capability investment. The contract provided an optimal strategy for the logistics service sub-contractor and gave the optimal forecast for the logistics service integrator. Then a buyback parameter was introduced into the forecast-commitment contract. The experimental results show that, with reasonable parameters, the proposed contract can motivate the logistics service sub-contractor to invest, and that it can coordinate the whole system by achieving Pareto improvement for the logistics service supply chain and increasing the revenue of both the supply chain system and the integrator. The buyback parameter can improve the logistics capability investment of the sub-contractor under the same forecast. Finally, a numerical experiment was carried out to illustrate the forecast-commitment contract, and the results verified the theoretical analysis.
    Cyclic network model of optimal allocation for aircraft aviation security resources
    ZHU Qidan LYU Kaidong LI Xinfei
    2013, 33(11):  3276-3279. 
    Abstract ( )   PDF (601KB) ( )  
    Related Articles | Metrics
    To optimize the aviation security resources of carrier-based aircraft, including traction equipment, security devices, equipped ammunition and personnel allocation, the cyclic operation characteristic of the aviation security process was studied. The types of working procedures of aviation security, their logical relationships, and the resources required by each working procedure were analyzed. Cyclic network technology was introduced and an optimization control method for aircraft aviation security resources was obtained. This method improved the utilization rate of equipment and kept aviation security personnel in a working state that combines exertion with rest. The experimental results show that the optimal allocation method for aircraft aviation security resources based on the cyclic network is effective.
    Sound localization based on improved interaural time difference of cochlear nucleus model
    ZHANG Yi XING Wuchao LUO Yuan HE Chunjiang
    2013, 33(11):  3280-3283. 
    Abstract ( )   PDF (621KB) ( )  
    Related Articles | Metrics
    Sound can be accurately located by the human auditory system in noisy environments, and the main cue for this localization is the interaural time difference. However, the results are unsatisfactory when the interaural time difference is used directly for localization in noisy environments. To resolve this problem, this paper put forward a sound source localization system based on a cochlear nucleus model, which simulated how the cochlea processes auditory information. The process could extract synchronization information and firing rates from the responses of auditory nerve fibers to sound, thus suppressing noise and localizing the sound source in a noisy environment. The localization error of the system in a noisy environment was 1.297 degrees. The experimental results show that the improved sound localization system can accomplish localization in noisy environments.
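The interaural time difference cue itself can be estimated by cross-correlating the two ear signals; the broadband test burst, sampling rate and 5-sample delay below are invented for illustration and carry none of the paper's cochlear modeling:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Interaural time difference: the lag (in seconds) that maximizes the
    cross-correlation of the ear signals. Positive means the sound reached
    the left ear first."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / fs

fs = 8000
rng = np.random.default_rng(2)
sig = rng.standard_normal(200)              # broadband burst (illustrative)
delay = 5                                   # right ear lags by 5 samples
left = np.concatenate([sig, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), sig])
itd = estimate_itd(left, right, fs)
```

With clean signals the peak lag recovers the delay exactly; the paper's contribution is that the cochlear nucleus front end keeps this estimate usable when noise corrupts the raw correlation.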
    Real-time monitoring and warning system of tunnel strain based on improved principal component analysis method
    YANG Tongyao WANG Bin LI Chuan HE Bi XIONG Xin
    2013, 33(11):  3284-3287. 
    Abstract ( )   PDF (823KB) ( )  
    Related Articles | Metrics
    An improved Principal Component Analysis (PCA) method was proposed with synchronous multi-dimensional data stream anomaly analysis techniques. In this method, the variation tendency of the original data stream was mapped into the eigenvector space and the steady-state eigenvector was solved; then the abnormal changes of the synchronous multi-dimensional data stream could be diagnosed from the relationship between the instantaneous eigenvector and the steady-state eigenvector. This method was applied to the abnormality diagnosis of tunnel strain monitoring data streams, and a real-time monitoring and warning system for tunnel strain was realized using VC++. The experimental results show that the proposed method can reflect the changes of aperiodic variables in a timely manner and effectively realize anomaly monitoring and early warning for multi-dimensional data streams.
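The eigenvector comparison at the heart of the method can be sketched as follows; the synthetic two-dimensional stream, the window sizes and the angle-based score are assumptions for illustration:

```python
import numpy as np

def principal_eigvec(window):
    """First eigenvector of the sample covariance of a (samples x dims) window."""
    cov = np.cov(window, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)        # eigh sorts eigenvalues ascending
    return vecs[:, -1]

def anomaly_score(steady_vec, inst_vec):
    """1 - |cos(angle)| between steady-state and instantaneous eigenvectors;
    near 0 when the stream keeps its usual correlation structure."""
    return 1.0 - abs(steady_vec @ inst_vec)

rng = np.random.default_rng(3)
base = rng.standard_normal((200, 1))
# Steady regime: the two strain channels move together (dim2 ~ 2 * dim1).
steady = np.hstack([base, base * 2]) + 0.01 * rng.standard_normal((200, 2))
normal_win = steady[:100]
# Abnormal regime: the correlation between the channels flips sign.
abnormal = np.hstack([base[:100], -base[:100] * 2])
score_normal = anomaly_score(principal_eigvec(steady), principal_eigvec(normal_win))
score_abnormal = anomaly_score(principal_eigvec(steady), principal_eigvec(abnormal))
```

A window that keeps the steady correlation structure scores near zero, while a flipped correlation pushes the instantaneous eigenvector away from the steady-state one and raises the score, triggering the warning.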
    Clutter-map constant false alarm rate detection for foreign object debris on runways
    WU Jing WANG Hong WANG Xuegang
    2013, 33(11):  3288-3290. 
    Abstract ( )   PDF (592KB) ( )  
    Related Articles | Metrics
    Heavy land clutter with antenna scanning is the main interference for Foreign Object Debris (FOD) detection, and traditional space-domain Constant False Alarm Rate (CFAR) detection is ineffective for targets on runways. To solve this problem, a cell-averaging clutter-map CFAR was proposed. First of all, an echo model was built based on the characteristics of the FOD surveillance radar system. Then, range-bearing two-dimensional CFAR detection was realized by clutter-map cell division, cell averaging and recursive filtering. Finally, the main factors affecting the detection performance of this method were analyzed. The simulation results show that the proposed algorithm can effectively detect weak targets and obtain a high detection probability at a low signal-to-clutter ratio.
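The recursive clutter-map update and thresholding can be sketched as follows; the forgetting factor, threshold factor and the toy scan values are illustrative assumptions:

```python
def update_clutter_map(clutter_map, new_scan, alpha=0.875):
    """Recursive (exponential) per-cell clutter estimate:
    q[k] = alpha * q[k-1] + (1 - alpha) * x[k]."""
    return [alpha * q + (1 - alpha) * x for q, x in zip(clutter_map, new_scan)]

def detect(scan, clutter_map, threshold_factor=4.0):
    """Declare a detection where the echo exceeds the scaled clutter estimate."""
    return [x > threshold_factor * q for x, q in zip(scan, clutter_map)]

# A clutter floor near 1.0 in each range-bearing cell; let the map converge
# on clutter-only scans before the target appears.
clutter_map = [1.0] * 8
for _ in range(20):
    clutter_map = update_clutter_map(clutter_map, [1.0] * 8)

scan = [1.1, 0.9, 1.0, 6.0, 1.0, 0.8, 1.2, 1.0]   # weak target in cell 3
hits = detect(scan, clutter_map)
```

Because each cell carries its own running clutter estimate, a stationary target that would be masked by a spatial cell-averaging window still stands out against its cell's temporal average.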
    Aircraft optimal target aiming control based on Gauss pseudospectral method
    CHENG Jianfeng DONG Xinmin XUE Jianping TAN Xueqin
    2013, 33(11):  3291-3295. 
    Abstract ( )   PDF (692KB) ( )  
    Related Articles | Metrics
    In order to realize optimal target aiming of an aircraft in a combat duel, a control method based on the Gauss Pseudospectral Method (GPM) was proposed. Taking agility and multiple constraints into consideration, the dynamic equation of the aircraft was modeled, the two-stage target aiming condition expression was deduced, and the optimality index was designed. Afterwards, the optimal aiming control of the aircraft was described as a multi-stage optimal control problem with constraints and unknown final time. The GPM was used to equivalently convert the continuous optimal boundary value problem into a discrete Nonlinear Programming (NLP) problem, the initial solution was preprocessed through a Genetic Algorithm (GA), and then the Sequential Quadratic Programming (SQP) algorithm was applied to solve it. The simulation results show that the method can realize target aiming effectively and satisfy the weapon launch condition.
    Applications of gravitational search algorithm in parameters estimation of penicillin fermentation process model
    WANG Lei CHEN Jindong PAN Feng
    2013, 33(11):  3296-3299. 
    Abstract ( )   PDF (737KB) ( )  
    Related Articles | Metrics
    Concerning the identification of accurate model parameters of biological fermentation processes, a parameter estimation method for the non-structural dynamical model of penicillin fermentation using the Gravitational Search Algorithm (GSA) was proposed. Based on the fermentation mechanism, appropriate state equations of the non-structural dynamical model were chosen; then, by virtue of the global searching ability of GSA, the parameters of the state equations were estimated and an accurate fermentation model was obtained. The simulation results show that GSA accurately estimated the model parameters of the penicillin fermentation process, and the accuracy of the obtained model can meet the requirements of state estimation and condition control in penicillin fermentation. Therefore, GSA can be applied effectively to model parameter estimation.
    New medical image classification approach based on hypersphere multi-class support vector data description
    XIE Guocheng JIANG Yun CHEN Na
    2013, 33(11):  3300-3304. 
    Abstract ( )   PDF (800KB) ( )  
    Related Articles | Metrics
    Concerning the low training speed of mammography multi-classification, the Hypersphere Multi-Class Support Vector Data Description (HSMC-SVDD) algorithm was proposed, in which the Hypersphere One-Class SVDD (HSOC-SVDD) was extended to a HSMC-SVDD as a kind of direct multi-classification. Gray-level co-occurrence matrix features of the mammograms were extracted, Kernel Principal Component Analysis (KPCA) was used to reduce the dimensionality, and finally HSMC-SVDD was used for classification. As each category trains only one HSOC-SVDD, the training speed is higher than that of existing multi-class classifiers. The experimental results show that, compared with the combined classifier proposed by Wei (WEI L Y, YANG Y Y, NISHIKAWA R M, et al. A study on several machine-learning methods for classification of malignant and benign clustered micro-calcifications. IEEE Transactions on Medical Imaging, 2005, 24(3): 371-380), whose average training time is 40.2 seconds, the training time of the HSMC-SVDD classifier is 21.369 seconds and its accuracy reaches 76.6929%, so it is suitable for solving classification problems with many categories.
    Tire preset value inflation or deflation control based on fuzzy and genetic approximation strategy
    PAN Xiaobo
    2013, 33(11):  3305-3308. 
    Abstract ( )   PDF (646KB) ( )  
    Related Articles | Metrics
    The influencing factors of tire inflation and deflation are very complicated. In order to inflate or deflate a tire conveniently and accurately to a preset pressure value, a tire inflation and deflation control method based on fuzzy reasoning and genetic approximation was proposed in this paper. The entire process was divided into two steps. Firstly, the inflation or deflation time was obtained by fuzzy reasoning according to the preset value and the difference between the current tire pressure and the preset value, and the tire was preliminarily inflated or deflated for this duration. Then the tire pressure gradually approached the preset value by inheriting the last inflation or deflation rate. The experimental results show that a tire can be conveniently and accurately inflated or deflated to the preset pressure value in this way under different conditions, with an accuracy of ±0.04 bar. The method is suitable for various automatic inflators, and it is convenient, highly effective and practical.
    Calculation of submarine distribution probability in call searching submarine for several typical cases
    SHAN Zhichao JU Jianbo QU Xiaohui WEN Wei
    2013, 33(11):  3309-3312. 
    Abstract ( )   PDF (521KB) ( )  
    Related Articles | Metrics
    Concerning the lack of a formula for calculating the submarine distribution probability in call searching for submarines, this paper gave formulas for calculating the submarine distribution probability over time for several typical cases. These typical cases included the initial position of the submarine obeying a normal distribution, the submarine heading obeying a uniform distribution in the two-dimensional plane, and the submarine velocity being known, obeying a uniform distribution, or obeying a Rayleigh distribution. The conclusions were then verified by Monte Carlo simulation. The joint probability density curves and the marginal probability density curves of the submarine distribution at several specific moments were also given. The change of the submarine distribution probability in call searching can be seen clearly through these curves, which is valuable for making the right search strategy in call searching for submarines.
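The Monte Carlo verification step for one of the typical cases can be sketched as follows; the standard deviation of the initial position, the known speed and the elapsed time are illustrative assumptions:

```python
import math
import random

def simulate_positions(t, n=20000, sigma=1.0, v=3.0, seed=4):
    """Submarine positions at time t for the case: initial position with each
    coordinate ~ N(0, sigma^2), heading ~ Uniform[0, 2*pi), speed v known.
    All parameter values are assumed for illustration."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        x0, y0 = rng.gauss(0, sigma), rng.gauss(0, sigma)
        theta = rng.uniform(0, 2 * math.pi)
        points.append((x0 + v * t * math.cos(theta),
                       y0 + v * t * math.sin(theta)))
    return points

pts = simulate_positions(t=2.0)
mean_r = sum(math.hypot(x, y) for x, y in pts) / len(pts)
```

With a known speed the probability mass spreads into an annulus of radius v*t around the datum, blurred by the initial position uncertainty, so the mean distance from the datum clusters near v*t = 6; binning the sample positions reproduces the joint and marginal density curves the paper derives analytically.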
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn