Table of Content
01 February 2012, Volume 32 Issue 02
Database technology
Survey on change detection over data stream
SONG Qin-bao DU Lei
2012, 32(02): 299-303. DOI: 10.3724/SP.J.1087.2012.00299
A data stream is a type of dynamic data that is driven by hidden contexts and may change over time. Generally speaking, such a change implies some event in the real world. Detecting changes over data streams in a timely and accurate manner has a wide range of practical applications and has become one of the hot topics in data stream mining. This paper presented a comprehensive survey of change detection over data streams and introduced the key tasks of change detection. An inductive, unifying framework of the change detection process was also given. In addition, a variety of existing change detection approaches and their features were reviewed and evaluated in detail. Finally, the research outlook of data stream change detection was discussed.
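The surveyed detectors share the pattern of comparing recent data against a baseline. As a hedged toy illustration of the simplest member of that family (not an algorithm from the survey), the sketch below freezes a reference window and flags a change when the mean of a sliding current window drifts beyond a threshold; window size and threshold are illustrative choices.

```python
from collections import deque

class MeanShiftDetector:
    """Flags a change when the mean of the current sliding window drifts
    from the mean of a frozen reference window by more than `threshold`."""

    def __init__(self, window_size=50, threshold=1.0):
        self.window_size = window_size
        self.threshold = threshold
        self.reference = []                      # first full window, frozen as baseline
        self.current = deque(maxlen=window_size) # sliding window over recent items

    def update(self, x):
        """Feed one stream element; return True if a change is detected."""
        if len(self.reference) < self.window_size:
            self.reference.append(x)             # still filling the baseline
            return False
        self.current.append(x)
        if len(self.current) < self.window_size:
            return False                         # not enough recent data yet
        ref_mean = sum(self.reference) / len(self.reference)
        cur_mean = sum(self.current) / len(self.current)
        return abs(cur_mean - ref_mean) > self.threshold
```

Real stream detectors replace the naive mean test with statistical bounds (e.g. Hoeffding-style) and adapt the reference window, which is exactly the design space the survey reviews.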
Survey on emerging pattern based contrast mining and applications
DUAN Lei TANG Chang-jie Guozhu DONG YANG Ning GOU Chi
2012, 32(02): 304-308. DOI: 10.3724/SP.J.1087.2012.00304
Contrast mining is a fairly new and active data mining topic. It focuses on knowledge that describes differences between classes and conditions, or changes over time, and aims at developing techniques to discover patterns or models that contrast and characterize multiple datasets associated with different classes or conditions. Contrast mining has wide real-world applications, owing to its ability to simplify problems and classify accurately. Research on the mining and application of emerging patterns represents a major direction of contrast mining, and this paper provided a survey of it. More specifically, after introducing the background, basic concepts and principles of emerging patterns, the paper analyzed methods for mining emerging patterns, discussed extended definitions of emerging patterns and their mining, described methods for constructing emerging pattern based classifiers, and illustrated applications of emerging patterns in several real-world fields. Finally, the paper suggested some topics for future research on emerging pattern based contrast mining.
Improved data distribution strategy for cloud storage system
ZHOU Jing-li ZHOU Zheng-da
2012, 32(02): 309-312. DOI: 10.3724/SP.J.1087.2012.00309
Given the massive scale of cloud storage solutions, traditional data distribution strategies face challenges in scalability and flexibility. This paper proposed an efficient data distribution strategy. Based on the consistent hashing algorithm, the strategy introduced virtualization technology and employed virtual nodes to improve load balance. Moreover, the strategy used a new capacity-aware method to improve the performance of the cloud storage system. The evaluation experiments demonstrate that the proposed data distribution strategy improves system performance in both homogeneous and heterogeneous distributed storage architectures.
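The building block the strategy extends is consistent hashing with virtual nodes. A minimal sketch of that building block (hash function, node names and virtual-node count are illustrative assumptions, not details from the paper):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent hashing ring where each physical node owns many virtual
    nodes; more virtual nodes per node gives a smoother load balance."""

    def __init__(self, nodes, vnodes_per_node=100):
        self._ring = []                           # sorted (hash, node) pairs
        for node in nodes:
            for i in range(vnodes_per_node):
                h = self._hash(f"{node}#{i}")     # one hash per virtual node
                bisect.insort(self._ring, (h, node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def locate(self, key):
        """Return the node owning `key`: first virtual node clockwise."""
        h = self._hash(key)
        idx = bisect.bisect_right(self._ring, (h, chr(0x10FFFF)))
        if idx == len(self._ring):
            idx = 0                               # wrap around the ring
        return self._ring[idx][1]
```

A crude capacity-aware variant, in the spirit of the paper's improvement, would simply assign each node a `vnodes_per_node` proportional to its capacity.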
Time decay mode based on degressive cosine ramp
FAN Hai-kuan LIU Qi-zhi
2012, 32(02): 313-316. DOI: 10.3724/SP.J.1087.2012.00313
Unlimited growth is one of the main characteristics of data streams. Because of limited memory and space, current computing systems can only process a portion of the data online rather than the full data set. In order to obtain reasonable results, data stream systems often use decay functions to map timestamps to data weights. Monotonic decay functions such as exponential and polynomial functions are widely used, but they decay either too quickly or too slowly. This paper proposed a new decay mode based on the cosine function, whose decay speed lies between those of the exponential and polynomial functions. The experimental results show that, compared with exponential and polynomial decay modes, the degressive cosine ramp decays more reasonably, its parameter is easy to set, and it is also applicable to out-of-order data streams.
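The general idea of a cosine-shaped decay can be sketched as follows; this is a hedged illustration of a weight that ramps from 1 down to 0 over half a cosine period, and the paper's exact formula and parameterization may differ.

```python
import math

def cosine_decay(age, horizon):
    """Weight of an item that is `age` time units old, decaying smoothly
    from 1 (fresh) to 0 (older than `horizon`) along half a cosine period."""
    if age <= 0:
        return 1.0
    if age >= horizon:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * age / horizon))
```

Because the weight depends only on the item's age, not on arrival order, such a function can also be applied to out-of-order streams, matching the property claimed in the abstract.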
Community structure division in complex networks based on gene expression programming algorithm
LUO Jin-kun YUAN Chang-an YANG Wen HU Hui-ying YUAN Hui
2012, 32(02): 317-321. DOI: 10.3724/SP.J.1087.2012.00317
Due to the uncertainty of complex networks, traditional community structure division algorithms easily suffer from premature convergence and reduced accuracy, and their large amount of computation leads to high time complexity. To overcome these shortcomings, this paper exploited the global search ability, adaptability and parallel computation of Gene Expression Programming (GEP) to optimize the division of community structure, proposed a GEP-based community structure division algorithm for complex networks, and verified the validity of the new algorithm by experiments. The new algorithm divides communities of complex networks more accurately when no prior information is available.
User behavior similarity analysis of location based social network
YUAN Shu-han CHEN Wei-bin FU Shun-kai
2012, 32(02): 322-325.
Location-based social networks allow users to share location information. The complete geographical records about users kept by the social network serve as the basis for analyzing user behaviors along geographical tracks. Since Location-Based Service (LBS) platforms do not take the geographical distribution of users' locations into consideration, this paper proposed a new hierarchical density-based clustering approach. It determined the similarity among users at different scales with the classical Vector Space Model (VSM), using vectors composed of users' visiting frequencies over different cluster areas. Overlaying the similarity values at different scales with different weights yields the geospatial similarity of user behaviors. The experiments based on user data from a large LBS social network site demonstrate that the proposed approach can effectively identify similar users.
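The VSM step amounts to cosine similarity between visit-frequency vectors. A minimal sketch, with cluster areas and frequency counts entirely made up for illustration:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length frequency vectors,
    e.g. users' visit counts over the same list of cluster areas."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:
        return 0.0                       # a user with no visits matches nobody
    return dot / (nu * nv)
```

In the paper's setting this similarity would be computed once per clustering scale and the per-scale values combined with weights; the combination weights are a tuning choice.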
Mining algorithm for maximal frequent itemsets based on improved FP-tree
MA Li-sheng YAO Guang-shun YANG Chuan-jian
2012, 32(02): 326-329. DOI: 10.3724/SP.J.1087.2012.00326
In the existing algorithms, the conditional pattern bases of all frequent 1-itemsets in the FP-tree need to be saved in order to reduce the number of repeated traversals of paths in the FP-tree. Concerning this problem, the new algorithm improved the data structure of the FP-tree so that only the conditional pattern bases constituted by the items on the paths from each leaf node's parent to the root were saved, which reduced the storage space of the conditional pattern bases. After studying the search space and the method of data representation in algorithms for mining maximal frequent itemsets, pruning and compression strategies were developed through theoretical analysis and verification, which could decrease the search space and the scale of the FP-tree. Finally, the new algorithm was compared with the NHTFPG and FpMAX algorithms in terms of accuracy and efficiency. The experimental results show that the new algorithm reduces the storage space required for conditional pattern bases by more than 50% compared with the NHTFPG algorithm, and improves efficiency by 2 to 3 times compared with the FpMAX algorithm.
Network and communications
Topological evolution on synchronization of dynamic complex networks
ZHU Liang HAN Ding-ding
2012, 32(02): 330-339. DOI: 10.3724/SP.J.1087.2012.00330
After a qualitative discussion of the synchronization performance of complex network models, simulation analysis of networks with relatively large size was presented. Through data analysis, network topology visualization, and topology evolution with a simulated annealing algorithm, some rules of synchronization optimization were found: making the degree distribution and average distance uniform and centralized, together with a proper clustering coefficient, can reduce network connections without degrading synchronization. Considering the situation of the future power grid, optimization strategies for the stability of synchronization were developed and tested on data from an actual power grid, exploring the value of optimizing practical networks from the angle of topology while satisfying the requirements of real-time quality, stability and distribution. The optimization proves to be effective.
Fast handover based route optimization approach in proxy mobile IPv6
ZHANG Yi-fang ZHANG Qi-zhi
2012, 32(02): 335-339. DOI: 10.3724/SP.J.1087.2012.00335
Concerning the long recovery time to re-establish the optimized path after mobile node's handover, this paper proposed a fast handover based route optimization approach in proxy mobile IPv6. The new approach, which was initiated by previous mobile access gateway in advance in mobile node's fast handover procedure, achieved fast recovery of optimized path. Performance analysis shows that the new intra-domain route optimization method can reduce average route optimization cost by 21.7% and the setup time of route optimization state by 45.4%, compared with the local mobility anchor initiated route optimization protocol. In addition, the new inter-domain route optimization method can decrease the setup time of route optimization state by 72.2% in comparison with the existing inter-domain handover and route optimization approach.
Online prediction of network traffic by integrating lifting wavelet de-noising and LSSVM
LI Ming-xun MENG Xiang-ru YUAN Rong-kun WEN Xiang-xi CHEN Xin-fu
2012, 32(02): 340-346. DOI: 10.3724/SP.J.1087.2012.00340
Concerning the problem that the network traffic data has been polluted by noise so that accurate modeling and predicting cannot be achieved, an integrated network traffic online predicting method based on lifting wavelet de-noising and online Least Squares Support Vector Machines (LSSVM) was proposed. First, the Lifting Wavelet De-noising (LWD) was used to pre-process network traffic data, then the phase space reconstruction theory was introduced to calculate the delay time and embedded dimension. On this basis, the training samples were formed and the online LSSVM prediction model was constructed to predict the network traffic. The experimental results show that this prediction model can eliminate the noise effectively and predict the network traffic.
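The phase space reconstruction step turns a scalar traffic series into supervised training pairs via delay embedding. A hedged sketch of that step only; the delay time and embedding dimension would come from the paper's estimation procedure, and here they are plain parameters:

```python
def delay_embed(series, dim, tau):
    """Delay embedding for one-step-ahead prediction.
    Each row of X is [x[t], x[t+tau], ..., x[t+(dim-1)*tau]] and the
    corresponding target y is the next value x[t + dim*tau]."""
    X, y = [], []
    last = len(series) - dim * tau        # last t with a full window + target
    for t in range(last):
        X.append([series[t + k * tau] for k in range(dim)])
        y.append(series[t + dim * tau])
    return X, y
```

The resulting (X, y) pairs are what an online LSSVM (or any regressor) would be trained on after the wavelet de-noising stage.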
Faulty link identification based on factor graph and sum product algorithm
LV Xiang-ling ZHANG Zhi-yong HU Guang-min
2012, 32(02): 343-346. DOI: 10.3724/SP.J.1087.2012.00343
In order to estimate the prior probability of link failure, this paper proposed a new method for estimating the link state distribution. The new scheme adopted a factor graph to describe the joint probability distribution between links and paths, used the sum-product algorithm to obtain the maximum a posteriori estimate of the link states, and then used the failure probability of each link together with the current measurement data to infer the current state of the link. The simulation results show that, when the network reaches 400 nodes, the computation time of the new scheme is more than two orders of magnitude lower than that of the linear-equations method, and the scheme has better scalability.
Congested link diagnosis algorithm based on Bayesian model in IP network
DU Yan-ming HAN Bing XIAO Jian-hua
2012, 32(02): 347-351. DOI: 10.3724/SP.J.1087.2012.00347
In IP network, tomography method can perform fault diagnosis by analyzing the end-to-end properties with low costs. However, most existing tomography based techniques have the following problems: 1) the end-to-end detected number is not sufficient to determine the state of each link; 2) as the scale of the network goes up, the diagnosis time may become unacceptable. To address these problems, a new congested link diagnosis algorithm based on Bayesian model was proposed in this paper. This method firstly modeled the problem as a Bayesian network, and then simplified the network by two steps and limited the number of multiple congested links. Therefore, the proposed method could greatly reduce the computational complexity and guarantee the diagnostic accuracy. Compared with the existing diagnosis algorithm which is called Clink, the proposed algorithm has higher diagnostic accuracy and shorter diagnosis time.
Topology control algorithm for wireless sensor network based on mean RSSI
WANG Chu-hang
2012, 32(02): 352-358. DOI: 10.3724/SP.J.1087.2012.00352
With regard to the problem that the Received Signal Strength Indicator (RSSI) has errors over spatial and temporal extension, a new distributed topology control algorithm for real wireless environments, named RTC, was proposed. RTC used the mean RSSI value to compute the bi-directional path loss, and searched for a two-hop path between two nodes in which the link cost of each hop was less than that of the direct link between the two nodes. The communication complexity and network connectivity of RTC were analyzed, and a simulation-based analysis of its energy property was presented. The simulation results show that RTC has such advantages as balanced energy consumption and long network lifetime.
Weighted correction model in wireless sensor network localization
DANG Xiao-chao LI Xiao-yan
2012, 32(02): 355-358. DOI: 10.3724/SP.J.1087.2012.00355
In order to reduce the loss of positioning accuracy caused by the localization algorithm itself and by the Received Signal Strength Indicator (RSSI), a new method based on a weighted correction model was proposed, which introduced a correction mechanism into previous localization algorithms such as the Gauss-Markov mobility model and combined mobile anchors with fixed anchors. The simulation results show that the precision of node localization is effectively improved; compared with the Gauss-Markov mobility model, the positioning accuracy of the proposed method increases by 36.2%.
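The paper's weighted correction model is more elaborate, but the underlying idea of weighting anchors by received signal strength can be sketched as follows; the dBm-to-milliwatt weighting and the anchor layout are illustrative assumptions.

```python
def weighted_centroid(anchors, rssi):
    """RSSI-weighted centroid localization sketch.
    anchors: list of (x, y) anchor positions; rssi: received signal
    strengths in dBm (higher = closer). Each anchor is weighted by its
    linearized signal power, pulling the estimate toward strong anchors."""
    weights = [10 ** (r / 10.0) for r in rssi]   # dBm -> milliwatts
    total = sum(weights)
    x = sum(w * ax for w, (ax, _) in zip(weights, anchors)) / total
    y = sum(w * ay for w, (_, ay) in zip(weights, anchors)) / total
    return x, y
```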
Routing scheme for vehicle Ad Hoc network
LIU Jing WANG Xin-hua WANG Zhen WANG Shuo
2012, 32(02): 359-366. DOI: 10.3724/SP.J.1087.2012.00359
By analyzing the application status of Vehicle Ad Hoc NETworks (VANET) in the road transportation field, and considering the characteristics of VANET, the challenges in the message transmission process, and the problems that previous algorithms can hardly establish an accurate spatial model or account for the regularity of social behavior, a routing scheme named HBSR was proposed based on the historical behavior statistics of vehicles. It includes a node connectivity algorithm that calculates the connectivity between vehicles, a topological overlap algorithm that calculates the number of common periods between the source node and the destination node, a path selection algorithm that chooses message forwarding paths, and a message-dropping strategy. Compared with several typical routing algorithms on the ONE simulation platform, the simulation results prove that HBSR finds message forwarding paths more effectively, reduces message delivery delay obviously while the delivery rate increases significantly, and performs relatively stably in VANET.
TDMA scheduling algorithm for multi-sink wireless sensor networks
LI Hai-ping MAO Jian-lin ZHANG Bin CHEN Bo
2012, 32(02): 363-366. DOI: 10.3724/SP.J.1087.2012.00363
Concerning the high packet delay and frequent transmission bottlenecks in a Wireless Sensor Network (WSN) with a single sink node, a multi-sink wireless sensor network model and its Time Division Multiple Access (TDMA) scheduling algorithm based on Genetic Algorithm (GA) were proposed. The algorithm divided the whole sensor network into several small sensor networks according to the number and positions of the sink nodes, and adopted GA to optimize the slot allocation result. The simulation results show that the GA-based TDMA slot allocation method outperforms the graph coloring algorithm in the length of the slot allocation frame, average packet delay and average energy consumption.
Spray and Wait routing based on ACK-mechanism in disruption tolerant networks
ZHENG En LUO Qiu-xia
2012, 32(02): 367-369. DOI: 10.3724/SP.J.1087.2012.00367
Disruption Tolerant Networks (DTN) attempt to transfer messages via intermittently connected nodes. The difficulty lies in that connectivity is opportunistic and network topology information between nodes is lacking. Spray and Wait is an efficient routing scheme in such an environment: it "sprays" a number of copies into the network, and then "waits" until one of these nodes meets the destination. Compared with epidemic routing, Spray and Wait consumes fewer network resources, but still suffers from severe contention under high network load. Therefore, based on an analysis of Spray and Wait routing, an ACK mechanism was used to remove redundant messages, which reduces network resource consumption; meanwhile, token forwarding was adopted to improve bandwidth utilization. The improved routing scheme was simulated on the ONE platform, and the simulation results show that it achieves a higher average delivery rate and consumes fewer network resources, at the cost of a slightly higher average delivery delay. The scheme works well in intermittently connected mobile networks without any prior connectivity knowledge.
Real-time analysis and improvement of KNX/EIB communication protocol
ZHANG Guo-dong ZHANG Xi-huang
2012, 32(02): 370-384. DOI: 10.3724/SP.J.1087.2012.00370
The KNX/EIB communication protocol, which uses the CSMA/CA mechanism in its link layer to resolve communication conflicts, has improved the conflict prevention capacity of KNX/EIB networks. However, it causes considerable jitter in the transmission delay among command frames of the same priority, so the real-time performance of transmission is severely degraded. This paper proposed an improved method named KNX/EIB-A that enhances the real-time performance of KNX/EIB without altering the original protocol: it applies a scheduling program between the application layer and the user application program of the KNX/EIB communication protocol, changes the original distributed peer structure into a master-slave structure, and dispatches the sending of data frames. Finally, the analysis and realization of a KNX/EIB-A prototype prove that it improves the real-time performance of the protocol by reducing the transmission delay jitter.
Improved linear fitting algorithm of modulated signal carrier frequency estimation
TIAN Ke-chun WEI Li DING Meng
2012, 32(02): 374-380. DOI: 10.3724/SP.J.1087.2012.00374
To enhance the noise robustness of linear-fitting-based carrier frequency estimation for modulated signals, and in accordance with the time-domain features of modulating signals, an estimation algorithm combining RANdom SAmple Consensus (RANSAC) with the least squares method was proposed. Taking single-carrier modulating signals obtained by Amplitude-Shift Keying (ASK) and Phase-Shift Keying (PSK) as research subjects, simulation experiments were conducted in Matlab. The experimental results indicate that the proposed method outperforms the plain least squares method in error rate and noise robustness.
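The core robust-fitting idea can be sketched in a few lines: sample two-point candidate lines, keep the one with the most inliers, then refit by least squares on those inliers (for carrier frequency estimation, the points would be instantaneous phase vs. time and the slope gives the frequency). The trial count and inlier tolerance here are illustrative, and this is a generic RANSAC sketch rather than the paper's exact procedure.

```python
import random

def fit_line_lsq(pts):
    """Ordinary least-squares line y = a*x + b through (x, y) points."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def ransac_line(pts, n_trials=200, tol=0.5, seed=0):
    """Robust line fit: random 2-point hypotheses, keep the largest inlier
    set, then refine with least squares on those inliers only."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        if x1 == x2:
            continue                      # vertical hypothesis, skip
        a = (y2 - y1) / (x2 - x1); b = y1 - a * x1
        inliers = [(x, y) for x, y in pts if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit_line_lsq(best_inliers)
```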
Design and implementation of LDPC code encoder in LTE-Advanced standard
FANG Jian-wei XIONG Cheng-yi ZHOU Cheng
2012, 32(02): 377-380. DOI: 10.3724/SP.J.1087.2012.00377
By analyzing the structure of the Low-Density Parity-Check (LDPC) code check matrix in the LTE-Advanced standard, this paper proposed a low-cost encoder with high input packet throughput for Quasi-Cyclic LDPC (QC-LDPC) codes. By exploiting the number of null matrices in the mother parity check matrix, the whole parity check matrix could be partitioned into an array of block matrices, where each block was either a null sub-matrix or a cyclic shift of an identity sub-matrix, and encoding then proceeded serially. The experimental results show that the proposed encoder's coding time is 32% of the ideal time and its resource consumption is 33% of the ideal situation among analogous methods. This achieves a balance between coding time and resource consumption, which means the designed encoder meets the LTE-Advanced standard: low cost with high throughput. In addition, by changing the parameters in the ROM that stores the check matrix, the proposed encoder can flexibly implement the encoding of LDPC codes with different code lengths or rates.
New precoding algorithm with limited feedback and its performance analysis in multi-user MIMO systems
ZENG Hao YUAN Ang-fei LI Zheng-zhou
2012, 32(02): 381-384. DOI: 10.3724/SP.J.1087.2012.00381
In a multi-user MIMO (Multiple-Input Multiple-Output) system, the users transmit the Channel State Information (CSI) to the base station through feedback, and the system can then achieve multiplexing gain through precoding or user scheduling with the CSI received at the base station. Since the traditional way of feedback takes up too many uplink resources, a limited-feedback MIMO precoding algorithm was proposed based on the Grassmannian codebook. Only a few bits need to be fed back to the base station, because the new approach quantizes the channel matrix and transmits the sequence number of the quantized channel matrix according to the codebook, which reduces the feedback load considerably. The simulation results on Bit Error Rate (BER) and throughput show that the new approach maintains the performance of the system. Furthermore, the system throughput is illustrated under the condition that estimation error and quantization error of the channel matrix exist. The analytic expression relating throughput, error variance and transmitted power will benefit the channel estimation process.
Improved sphere decoding detection algorithm combined with minimum mean square error
LI Shi-ping WANG Long
2012, 32(02): 385-387. DOI: 10.3724/SP.J.1087.2012.00385
Among the signal detection algorithms for multiple-input multiple-output systems, the performance of the sphere decoding algorithm comes closest to that of the maximum-likelihood algorithm, but its computational complexity is still very high. To decrease the computational complexity, a new sphere decoding algorithm was proposed, which combines an improved fast sphere decoding algorithm with the Minimum Mean Square Error (MMSE) algorithm. The improved fast sphere decoding algorithm increases the shrinking rate of the sphere radius by multiplying the radius contraction by a constant factor, thereby reducing the number of signal points visited during the search and thus the computational complexity. Meanwhile, the MMSE algorithm reduces the interference caused by noise, which decreases the complexity of searching noise points. The channel matrix of the MMSE algorithm was applied to the improved fast sphere decoding algorithm, so the two algorithms combine efficiently and further reduce the computational complexity. The simulation results show that, when the Signal-to-Noise Ratio (SNR) is less than 10dB, the proposed algorithm improves average performance by 9% compared with the original sphere decoding algorithm.
Key words: Multiple-Input Multiple-Output (MIMO); signal detection; calculation complexity; Sphere Decoding (SD) algorithm; Minimum Mean Square Error (MMSE) algorithm
New collaborative spectrum sensing algorithm based on dual-threshold
ZENG Juan ZHANG Cui-fang WANG Yu-zhou
2012, 32(02): 388-391. DOI: 10.3724/SP.J.1087.2012.00388
Concerning the shortcomings of conventional dual-threshold energy sensing for cognitive radio in reliability and limited bandwidth, this paper introduced a new dual-threshold collaborative spectrum sensing algorithm based on two-bit hard combination. The algorithm made use of two kinds of information, one-bit local decisions and two-bit local decisions of secondary users, to eliminate sensing failure. The fusion center then made a final decision based on these two kinds of decisions to determine whether the primary user was present. Compared with the conventional dual-threshold algorithm, the simulation results indicate that the new algorithm not only eliminates sensing failure, but also improves the detection performance significantly (by up to about 21% at low false alarm probabilities), at the cost of slightly increasing the average number of sensing bits (by about 1% on average at low sensing failure probabilities).
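The local dual-threshold decision with a two-bit report can be sketched as below. This is a deliberately simplified illustration: the thresholds, the grading of the uncertain region, and the toy voting fusion rule are all assumptions, not the paper's actual combination scheme.

```python
def local_decision(energy, lo, hi):
    """Two-bit report for the fusion center from one secondary user:
    energy < lo -> 0 (confident: primary absent),
    energy > hi -> 3 (confident: primary present),
    otherwise a soft grade 1 or 2 inside the uncertain region [lo, hi]."""
    if energy < lo:
        return 0
    if energy > hi:
        return 3
    mid = (lo + hi) / 2.0
    return 1 if energy < mid else 2

def fuse(reports, vote=6):
    """Toy fusion rule: weighted sum of grades; declare the primary user
    present when the sum reaches `vote`."""
    return sum(reports) >= vote
```

The point of the two-bit scheme is that users falling between the thresholds still contribute graded evidence instead of being discarded, which is what eliminates the "sensing failure" case.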
Artificial intelligence
Multiple-layer classification of Web emergency news based on rules and statistics
XIA Hua-Lin ZHANG Yang-sen
2012, 32(02): 392-415. DOI: 10.3724/SP.J.1087.2012.00392
Web news grows exponentially and disseminates rapidly, and Web emergency news spreads widely on the Internet. Since traditional text classification has low accuracy and efficiency, it is difficult to locate emergency news and information on specific topics. This paper proposed a multiple-layer classification method for Web emergency news based on rules and statistics. First, category keywords were extracted to form a rule library. Second, emergencies were classified into four major categories by the rules, and these major categories were then divided into small categories by the Bayesian classification method, establishing a two-tier classification model based on rules and statistics. The experimental results show that the classification accuracy and recall both reach over 90%, and the classification efficiency is generally higher than that of traditional classification methods.
Collaborative filtering and recommendation algorithm based on matrix factorization and user nearest neighbor model
YANG Yang XIANG Yang XIONG Lei
2012, 32(02): 395-398. DOI: 10.3724/SP.J.1087.2012.00395
Concerning the data sparsity and new-user problems in many collaborative recommendation algorithms, a new collaborative recommendation algorithm based on matrix factorization and user nearest neighbors was proposed. To guarantee prediction accuracy for new users, a user nearest-neighbor model based on user data and profile information was used. Meanwhile, large data sets and matrix sparsity significantly increase the time and space complexity, so matrix factorization was introduced to alleviate the effect of these data problems and improve prediction accuracy. The experimental results show that the new algorithm can improve recommendation accuracy effectively, and alleviates the data sparsity and new-user problems.
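The matrix factorization component can be sketched as plain SGD on observed ratings; latent dimension, learning rate, regularization and the toy ratings are illustrative choices, and the paper's algorithm additionally blends in a user nearest-neighbor model.

```python
import random

def train_mf(ratings, n_users, n_items, k=4, lr=0.01, reg=0.05,
             epochs=500, seed=0):
    """Factorize a sparse rating matrix: ratings is a list of
    (user, item, value). Returns latent factors (P, Q) such that the
    predicted rating is dot(P[u], Q[i])."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):               # SGD step with L2 regularization
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    return sum(a * b for a, b in zip(P[u], Q[i]))
```

Because only observed (user, item, value) triples are touched, the cost scales with the number of ratings rather than the full matrix, which is what makes factorization attractive on sparse data.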
Synergistic cellular automata model for dissemination of Internet public opinion
FANG Wei HE Liu-jin SONG Liang-tu
2012, 32(02): 399-402. DOI: 10.3724/SP.J.1087.2012.00399
In current research on the dissemination of Internet Public Opinion (IPO), some studies use mathematical statistics or machine learning to analyze the growth or decline of topic-related text, and some use cellular automata or Hidden Markov Model (HMM) to find the tendency propagation of the subject of IPO; however, all of them lack analyses of how the subject attributes of IPO affect its tendency propagation. Based on the systematic synergy of the IPO space, the synergistic transition probability between states over the whole cell space of the discussed IPO was computed first, and then compared with the local state probability in the 9 neighbors of a central cell to decide whether the state of the central cell should be converted. After several iterations, the degree (magnetizability) expressing the tendency propagation toward "+" or "-" was obtained; by observing the magnetizability-time curve, one can clearly track its evolution. On this basis, a new extensive synergistic cellular automata model and its algorithm were presented. The simulation results show that the order-variable parameter of social adaptability can express the subject's group psychology, which moves toward the majority opinion; similarly, the order-variable parameter of preference quickly drives the tendency propagation toward the preferred direction, i.e., "+" or "-". The model is relatively close to the real situation of IPO dissemination.
Label propagation algorithm based on LDA model
LIU Pei-qi SUN Jie-han
2012, 32(02): 403-410. DOI: 10.3724/SP.J.1087.2012.00403
Label Propagation (LP) is a semi-supervised learning method, but its performance in text classification is not good enough, because LP requires the manifold assumption and computing the similarity of high-dimensional data has high computational complexity. After analyzing their principles and complexities, a new method was proposed that combines the Latent Dirichlet Allocation (LDA) model with the LP algorithm to solve these problems: documents are represented by the latent topics of LDA. On one hand, this reduces the dimension of the matrices; on the other hand, it helps the data better satisfy the manifold assumption. The experimental results show that, when labeled data is scarcer than unlabeled data, the new method performs better on test sets than traditional supervised text classification methods.
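The propagation step itself is simple to sketch: iteratively average neighbor scores on a similarity graph while clamping the labeled nodes. In the paper's setting the similarity matrix would be built from LDA topic vectors rather than raw term vectors; the toy graph and labels below are made up for illustration.

```python
def label_propagation(W, labels, n_iter=100):
    """W: symmetric similarity matrix (list of lists); labels: list with
    0/1 for labeled nodes and None for unlabeled ones. Repeatedly set each
    unlabeled node's score to the similarity-weighted average of its
    neighbors, keeping labeled nodes clamped to their given labels."""
    n = len(W)
    f = [l if l is not None else 0.5 for l in labels]
    for _ in range(n_iter):
        g = f[:]
        for i in range(n):
            if labels[i] is not None:
                continue                       # clamp labeled nodes
            s = sum(W[i][j] for j in range(n) if j != i)
            if s > 0:
                g[i] = sum(W[i][j] * f[j] for j in range(n) if j != i) / s
        f = g
    return [1 if v >= 0.5 else 0 for v in f]   # threshold soft scores
```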
Automatic identification of Uyghur domain term in Web text
ZHONG Jun TIAN Sheng-wei YU Long
2012, 32(02): 407-410. DOI: 10.3724/SP.J.1087.2012.00407
Since Uyghur domain terms are difficult to obtain, and manually expanding a domain term list is laborious and inefficient, this paper used Conditional Random Fields (CRF) to identify Uyghur domain terms in Web texts, and expanded the term list using conjunction words and the Mutual Information (MI) between words based on term co-occurrence. Experiments on the collected Web texts show that for short Uyghur domain terms the algorithm achieves a precision as high as 97.59% with a recall of 93.38%, and for long Uyghur domain terms it achieves a precision of 55.72%.
Optimal design for adaptive associative memory cellular neural networks
YE Bo LI Chuan-dong
2012, 32(02): 411-415. DOI:
10.3724/SP.J.1087.2012.00411
Abstract | PDF (774KB) | References | Related Articles | Metrics
In order to speed up the convergence of self-training AM-CNN (Associative Memory Cellular Neural Network) and enhance the performance of the resulting AM-CNN, an algorithm for obtaining the space-invariant cloning templates of AM-CNN was proposed. It takes the output error of the objective CNN as the objective function, and switches between local search and global search in two intervals separated by a given objective-function threshold, combining ideas from ant colony optimization and Particle Swarm Optimization (PSO). The numerical simulation results show that the proposed algorithm converges quickly to the objective AM-CNN, and that the achieved AM-CNN performs better and more stably than those of previous methods. The achieved AM-CNN is also robust to Gaussian noise of N(0,0.8), with a recall rate of about 90%.
Minimum variance support vector data description
WANG Xiao-ming WANG Shi-tong PENG Hong
2012, 32(02): 416-424. DOI:
10.3724/SP.J.1087.2012.00416
Abstract | PDF (586KB) | References | Related Articles | Metrics
Support Vector Data Description (SVDD), one of the widely applied kernel methods, does not take the information of data distribution fully into consideration. Concerning this issue, the optimization problem of SVDD was first reformulated equivalently, and then the distance in the optimization was redefined. Finally, a new algorithm called Minimum Variance Support Vector Data Description (MVSVDD) was presented, which exploits the information of data distribution. The experimental results show that, in contrast to SVDD, MVSVDD achieves a clear improvement in generalization performance and describes the data better.
Two-stage fast training method based on core vector machine and support vector machine
PU Jun-yi LEI Xiu-ren
2012, 32(02): 419-424. DOI:
10.3724/SP.J.1087.2012.00419
Abstract | PDF (862KB) | References | Related Articles | Metrics
Support Vector Machine (SVM) is a widely used classification technique, but its scalability to large data sets still needs much exploration. Core Vector Machine (CVM) is a technique for scaling up a two-class SVM to handle large data sets. However, it is computationally infeasible to use CVM on data sets with a large number of Support Vectors (SV), as its training time grows with the number of SV. In this paper, a two-stage training algorithm combining CVM with SVM (CCS) was proposed. It first employs the Minimum Enclosing Ball (MEB) based CVM algorithm to determine the potential core vectors, and then uses a labeling method to rapidly reconstruct the training set, with the aim of reducing its scale. After obtaining the new training samples, SVM is applied to them. The experimental results indicate that the proposed approach reduces the training time by 30% without losing classification accuracy, and is an efficient method for large-scale classification.
New strategy for improving performance of chained Lin-Kernighan algorithm
WANG Dong LI Ya WU Chen LIN Dong-mei
2012, 32(02): 425-431. DOI:
10.3724/SP.J.1087.2012.00425
Abstract | PDF (610KB) | References | Related Articles | Metrics
Through analyzing the characteristics of the edge sets of the optimal solutions of the Traveling Salesman Problem (TSP), a new model was proposed to produce the reference optimization edge sets for the Lin-Kernighan algorithm, on the basis of the authors' previous research (WANG DONG, WU XIANG-BIN. Strategy for improving the performance of chained Lin-Kernighan algorithm. Journal of Computer Applications, 2007, 27(11): 2826-2829). The edge sets produced by the new model contain fewer edges than those produced by normal algorithms or the previous research, while including more edges that belong to the global optimal solution. Applying the new model to the Lin-Kernighan algorithm further reduces the execution time without losing accuracy for a single call, and also improves the solution quality. Combined with the previous research, the performance of all hybrid algorithms using the Lin-Kernighan algorithm as the local search procedure can be improved as well.
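Lin-Kernighan search is built from k-opt edge-exchange moves over exactly such candidate edge sets; as a minimal illustration of the move family (a generic 2-opt local search, not the paper's edge-set model), one might write:

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse a tour segment whenever doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges are adjacent via the wraparound
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b) and (c,d) by (a,c) and (b,d) if shorter
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Restricting the edges (a,c) considered here to a small, well-chosen candidate set is exactly where a better reference edge set pays off in running time.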
Swarm intelligence algorithm based on combination of shuffled frog leaping algorithm and particle swarm optimization
SUN Hui LONG Teng ZHAO Jia
2012, 32(02): 428-431. DOI:
10.3724/SP.J.1087.2012.00428
Abstract | PDF (631KB) | References | Related Articles | Metrics
Concerning the premature convergence of the Particle Swarm Optimization (PSO) algorithm and the Shuffled Frog Leaping Algorithm (SFLA), this paper proposed a swarm intelligence optimization algorithm combining SFLA and PSO. The whole swarm is divided into two equal groups, one evolved by SFLA and one by PSO, with an information replacement strategy applied during iteration: comparing the fitness of the PSO group with that of the SFLA group, the worst individual in each subgroup of SFLA replaces some better individuals in PSO when SFLA is better; otherwise, some better individuals in PSO replace the best individual in each subgroup of SFLA. A collaborative approach between the two groups was also designed: since the information replacement strategy can be affected by premature convergence in PSO, a random disturbance is applied to each particle's best position. The simulation results show that the proposed algorithm efficiently improves the global search ability and convergence speed, and has very good stability on complex high-dimensional functions.
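The PSO half of such a hybrid follows the canonical velocity and position update; a minimal global-best PSO sketch (generic, with assumed parameter values, not the paper's SFLA/PSO replacement scheme) looks like:

```python
import random

def pso(f, dim, n_particles=20, n_iter=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best PSO minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

The disturbance of pbest described in the abstract would be applied to the `pbest` list between iterations; that step is omitted here.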
Fault diagnosis based on new particle swarm optimization particle filter
CHEN Zhi-min BO Yu-ming WU Pan-long TIAN Meng-chu LI Shao-xin ZHAO Wen-ke
2012, 32(02): 432-439. DOI:
10.3724/SP.J.1087.2012.00432
Abstract | PDF (720KB) | References | Related Articles | Metrics
The Particle Filter based on Particle Swarm Optimization (PSO-PF) algorithm is imprecise and easily trapped in local optima, and can hardly satisfy the requirements of fault diagnosis for the temperature control system of a power plant. To solve these problems, a new particle swarm optimization particle filter named NPSO-PF, suitable for fault diagnosis, was proposed. This algorithm introduces a rule by which individuals learn from the group to optimize the particle update method, and improves the velocity update strategy so that the velocity of superior particles can mutate with a small probability, improving the search ability. Meanwhile, the random initialization of inferior particles ensures the diversity of the samples. The simulation results show that NPSO-PF improves precision and robustness compared with PSO-PF, and is suitable for fault diagnosis of temperature control systems.
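Any particle filter, including PSO-PF variants, relies on a resampling step to fight sample degeneracy; a hedged sketch of the common systematic (low-variance) resampling scheme, which is standard practice rather than taken from the paper, is:

```python
import random

def systematic_resample(weights, seed=None):
    """Systematic resampling: return the indices of the particles to keep.

    One uniform draw u0 in [0, 1/n) is shifted by k/n for each of the n
    output slots, so particle i is selected roughly n * w_i times."""
    n = len(weights)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    rng = random.Random(seed)
    u0 = rng.random() / n
    indices, j = [], 0
    for k in range(n):
        u = u0 + k / n
        while cdf[j] < u:     # advance to the particle covering u
            j += 1
        indices.append(j)
    return indices
```

After resampling, an NPSO-PF-style scheme would then move the surviving particles with PSO-like updates before the next measurement step.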
Online learning algorithm for learner knowledge model
LI Wei-shi MAO Xiao-guang XIE Jian-wen
2012, 32(02): 436-439. DOI:
10.3724/SP.J.1087.2012.00436
Abstract | PDF (641KB) | References | Related Articles | Metrics
The learner knowledge model is the foundation of the teaching process and strategies in Intelligent Tutoring Systems (ITS). Because of the uncertainty in recognizing a learner's knowledge level and its real-time changes, it is very difficult to construct a model that correctly reflects the learner's knowledge level and its evolution. This paper used a Bayesian network for learner knowledge modeling. According to the changes of knowledge level during the learning process, problem nodes were introduced into the knowledge model, and the Voting EM algorithm was used for online learning and updating of the model's parameters. Finally, a confidence factor and a time-update mark were introduced to improve the efficiency of online parameter learning and revise the results. The experimental results indicate that the model reflects the learner's knowledge status better and quickly keeps up with changes of knowledge level, helping ITS evaluate learning effects better.
Bilevel programming model and algorithm for logistics network with interval constraints
LI Li-hua FU Zhuo HU Zheng-dong
2012, 32(02): 440-443. DOI:
10.3724/SP.J.1087.2012.00440
Abstract | PDF (606KB) | References | Related Articles | Metrics
Considering the uncertainty of demand in logistics networks, interval numbers were used to express the uncertain variables and parameters. A bilevel programming model of the logistics network under interval demand was established, and a hierarchical interval optimization genetic algorithm with interval variables and parameters was designed. The risk coefficient and the maximum decision-making deviation were defined to solve the problem, and rules for the logistics network structure were given to transform the model into a deterministic one. The initial population was defined by interval slack variables and 0-1 decision variables, with a two-stage genetic operation to obtain the interval optimal solution and the node decision-making scheme of the bilevel programming objectives under different scenarios. The results of a tested example show that the algorithm is highly operable, and the solution is superior in both interval optimal solution and scenario decision.
Ant colony optimization algorithm based on chaotic disturbance and neighborhood exchange for vehicle routing problem
LI Ya WANG Dong
2012, 32(02): 444-447. DOI:
10.3724/SP.J.1087.2012.00444
Abstract | PDF (656KB) | References | Related Articles | Metrics
A new ant colony algorithm based on chaotic disturbance and neighborhood exchange was proposed to solve the Vehicle Routing Problem (VRP). Concerning the shortcomings of the standard ant colony algorithm, such as long search time, premature convergence and non-optimal solutions, the new algorithm uses the randomness, ergodicity and regularity of chaos to adjust the pheromone on a small portion of the routes through a chaotic disturbance strategy when the algorithm falls into prematurity. Because the standard ant colony algorithm follows a randomized greedy rule, the new algorithm also applies a neighborhood exchange strategy to refine the best solution. The simulation results show that the new algorithm outperforms the standard ant colony algorithm and the genetic algorithm on VRPs of different sizes.
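Chaotic disturbance in such algorithms is typically driven by a simple chaotic map; as an illustrative sketch (the abstract does not specify the map, so the logistic map and the scaling factor below are assumptions):

```python
def logistic_sequence(x0, n, mu=4.0):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k); fully chaotic at mu = 4.
    For x0 in (0, 1) (and x0 != 0.5) the orbit stays in (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

def disturb_pheromone(tau, x0=0.3, scale=0.1):
    """Perturb each pheromone value with a chaotic increment (illustrative):
    multiply tau_i by (1 + scale * (c_i - 0.5)) with c_i from the chaotic orbit."""
    chaos = logistic_sequence(x0, len(tau))
    return [t * (1 + scale * (c - 0.5)) for t, c in zip(tau, chaos)]
```

The ergodicity of the orbit means the perturbations sweep the whole admissible range over time, which is what lets a prematurely converged pheromone matrix escape its current basin.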
Multi-port and multi-berth integrated scheduling based on container port cluster
BI Ya LI Wen-feng
2012, 32(02): 448-451. DOI:
10.3724/SP.J.1087.2012.00448
Abstract | PDF (614KB) | References | Related Articles | Metrics
Current port and berth scheduling remains at the level of a single port with multiple berths, while multi-port multi-berth integrated scheduling based on a container port cluster can achieve optimal allocation of port resources, reduce time in port and improve resource utilization, while also taking the shipping company's transportation cost into account. Therefore, a multi-objective non-linear programming model for the multi-port multi-berth integrated scheduling system was constructed under certain assumptions. Then, a heuristic algorithm for the model was designed according to the specific structure of its decision space. Finally, numerical experiments were performed to verify the effectiveness and stability of the algorithm. The results show that the decision model is practical and the algorithm is valid.
Improved global optimization algorithm of intelligence control system with filled function
YUAN Liang LV Bo-quan ZHANG Chen LIANG Wei
2012, 32(02): 452-464. DOI:
10.3724/SP.J.1087.2012.00452
Abstract | PDF (705KB) | References | Related Articles | Metrics
In order to improve the speed of global optimization, a global optimization algorithm based on an intelligent control system was presented. The feedback idea of a closed-loop control system is applied, making the value of the objective function gradually approach the reference input during iteration until the global optimum is reached. The key of the algorithm lies in the design of the control strategy and the initial setting of the parameters. To reduce the difficulty of parameter initialization and ensure accuracy, a filled function was used to improve the algorithm. Verified on twelve standard test functions, the improved algorithm is faster than the filled function method, and more accurate than the original global optimization algorithm of the intelligent control system.
Multi-objective particle swarm optimization algorithm of multi-swarm co-evolution
PENG Hu HUANG Wei DENG Chang-shou
2012, 32(02): 456-460. DOI:
10.3724/SP.J.1087.2012.00456
Abstract | PDF (658KB) | References | Related Articles | Metrics
Particle Swarm Optimization (PSO) is a very competitive swarm intelligence algorithm for multi-objective optimization problems. However, it easily falls into local optima, and the convergence and accuracy of its Pareto set are not satisfactory, so this paper proposed a decomposition-based multi-objective particle swarm optimization algorithm with multi-swarm co-evolution (MOPSO_MC). In the proposed algorithm, each sub-swarm corresponds to a sub-problem produced by multi-objective decomposition, and a new velocity update strategy was constructed: each particle follows its own previous best position, its sub-swarm's best position and the best position in the neighboring sub-swarms, which enhances local search ability and draws evolutionary information from neighboring sub-swarms. Finally, simulation results on the ZDT test functions verify the convergence of the proposed algorithm, and the uniformity and correctness of its solution distribution, in comparison with state-of-the-art multi-objective particle swarm algorithms.
Information security
α-hulls based localization for Jamming attack in wireless sensor network
ZHANG Jing XU Li ZHANG Shun-miao
2012, 32(02): 461-464. DOI:
10.3724/SP.J.1087.2012.00461
Abstract | PDF (608KB) | References | Related Articles | Metrics
The special nature of sensor networks makes them vulnerable to Radio Frequency (RF) jamming and other attacks. To support the deployment of subsequent security mechanisms by locating the jamming attacker (jammer) in a Wireless Sensor Network (WSN), the α-hull was applied to calculate the Minimum Circumscribed Circle (MCC) of a point set. An effective and accurate MCC detection method was established by finding the least-squares circle of the point set and iteratively approaching the MCC with recursive subdivision. All vertices of the α-hull lie on the same circle if 1/α equals the radius of the points' MCC. On the basis of these rules, an MCC detection algorithm named α-MCC was developed. The simulation results show that, compared with the existing incremental algorithm, α-MCC achieves higher accuracy in most cases, and its time consumption grows only slightly and linearly, not exponentially, with network node density.
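For context, the MCC of a point set can also be approximated very simply; the sketch below uses the Badoiu-Clarkson farthest-point iteration as a stand-in illustration, not the paper's least-squares-plus-subdivision α-MCC algorithm:

```python
import math

def approx_min_enclosing_circle(points, n_iter=1000):
    """Badoiu-Clarkson iteration: starting from the centroid, repeatedly step
    the center toward the current farthest point with a shrinking step
    1/(k+1); the center converges to that of the minimum enclosing circle."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    for k in range(1, n_iter + 1):
        far = max(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        cx += (far[0] - cx) / (k + 1)
        cy += (far[1] - cy) / (k + 1)
    # radius = distance from the final center to the farthest point
    r = max(math.dist((cx, cy), p) for p in points)
    return (cx, cy), r
```

A jammer-localization scheme would feed the jammed nodes' coordinates in as `points` and report the circle's center as the estimated jammer position.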
ID-based non-interactive deniable authentication protocol
LI Zhi-min XU Xin LI Cun-hua
2012, 32(02): 465-471.
Abstract | PDF (675KB) | References | Related Articles | Metrics
A non-interactive deniable authentication protocol enables the receiver to identify the source of a received message while preventing any third party from doing so, which makes it well suited to e-commerce and e-government. Based on the computational Diffie-Hellman assumption and using bilinear pairings, a new identity-based deniable authentication protocol was constructed, and its security was proved in the random oracle model. The analytical results show that the proposed protocol resists forgery, spoofing, man-in-the-middle and replay attacks. The protocol is identity-based, so it needs no certificates and has simple key management. It is also efficient in communication and computation and simple to implement, so it can be deployed on mobile devices with low power and small processors.
Cryptanalysis and improvement of TAKASIP protocol
TANG Hong-bin LIU Xin-song
2012, 32(02): 468-471. DOI:
10.3724/SP.J.1087.2012.00468
Abstract | PDF (680KB) | References | Related Articles | Metrics
The Session Initiation Protocol (SIP) provides authentication and session key agreement to ensure the security of the subsequent session. In 2010, Yoon et al. (YOON E-J, YOO K-Y. A three-factor authenticated key agreement scheme for SIP on elliptic curves. NSS '10: 4th International Conference on Network and System Security. Piscataway: IEEE, 2010: 334-339) proposed a three-factor authenticated key agreement scheme for SIP named TAKASIP. However, the scheme is vulnerable to insider attack, server-spoofing attack, off-line password attack and lost-token attack, and it does not provide mutual authentication. To overcome these flaws, a new three-factor authentication scheme named ETAKASIP, based on the Elliptic Curve Cryptosystem (ECC), was proposed. Relying on the elliptic curve discrete logarithm problem, ETAKASIP provides higher security than TAKASIP, and it is efficient, requiring only 7 elliptic curve scalar multiplications, 1 addition operation and at most 6 hash operations.
Cloud-model based decision-making for network risk assessment
CHEN Liang PAN Hui-yong
2012, 32(02): 472-479. DOI:
10.3724/SP.J.1087.2012.00472
Abstract | PDF (661KB) | References | Related Articles | Metrics
In order to assess network security risk more reasonably, a cloud-model based method for network risk assessment was proposed, taking advantage of the cloud model's combination of randomness and fuzziness. Firstly, standard clouds were constructed by sampling the normal system status. During risk assessment, the current risk state was sampled to calculate its cloud characteristics; then a cloud similarity algorithm based on the distance measurement of cloud droplets was used to compute the similarity to each standard cloud, and the largest similarity gave the final output. Finally, the KDD Cup 99 data set was used for simulated attack and performance sampling tests. The experimental results show that the proposed method retains the maximum uncertainty of network intrusion assessment and improves the credibility of the results.
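The cloud characteristics mentioned above are conventionally the triple (Ex, En, He): expectation, entropy and hyper-entropy. A minimal sketch of the standard backward cloud generator without certainty degrees, assuming that is the estimator intended, is:

```python
import math

def backward_cloud(samples):
    """Backward cloud generator (no certainty degrees): estimate the cloud
    model's numeric characteristics (Ex, En, He) from observed cloud drops."""
    n = len(samples)
    ex = sum(samples) / n
    # En from the first absolute central moment of a normal cloud:
    # E|x - Ex| = En * sqrt(2/pi), hence the sqrt(pi/2) factor
    en = math.sqrt(math.pi / 2) * sum(abs(x - ex) for x in samples) / n
    s2 = sum((x - ex) ** 2 for x in samples) / (n - 1)   # sample variance
    # He absorbs the variance not explained by En
    he = math.sqrt(abs(s2 - en ** 2))
    return ex, en, he
```

Comparing the (Ex, En, He) triple of the current state against those of the standard clouds is what the similarity step in the abstract operates on.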
E-commerce security certification based on rolling fingerprint digital signature
LIU Chao-fan ZHANG Yong-liang XIAO Gang
2012, 32(02): 475-479.
Abstract | PDF (863KB) | References | Related Articles | Metrics
The security of e-commerce determines its development, but traditional means of security certification cannot meet the demand for reliability. To ensure secure e-commerce communication, a new security certification method was proposed based on rolled fingerprint reconstruction and digital signature. Firstly, the image sequence of a rolling fingerprint is reconstructed into a high-quality complete fingerprint image, and the extracted features serve as the key of the digital signature. Then the message embedded with the digital signature is transferred, while the correctness and completeness of the message are checked and the validity of the sender is verified. The experimental results show that the proposed algorithm reconstructs rolling fingerprints in any direction into a high-quality fingerprint image, and runs in real-time thanks to its low complexity. Using the rolling fingerprint as the certification medium ensures the security of e-commerce.
Network security evaluation model based on multi-person analytic network process in commercial banks
SHEN Li-xiang CAO Guo
2012, 32(02): 480-484. DOI:
10.3724/SP.J.1087.2012.00480
Abstract | PDF (773KB) | References | Related Articles | Metrics
Considering the correlation and dependence among indicators, a multi-person decision model based on analytic network process was designed. In this model, the network security evaluation index was elicited by means of analytic network process, and then, a bi-level programming based on weighted Euclidean distance was used to synthesize the individual decision-making results. An illustrative example was given to demonstrate the feasibility and validity of the proposed method. The result shows that the proposed model has higher credibility to evaluate the network security in commercial banks.
Randomization method for social networks based on spectral and significant-feature constraints
XU Li-ming JIANG Xiao-qiang SONG Zhuan
2012, 32(02): 485-488. DOI:
10.3724/SP.J.1087.2012.00485
Abstract | PDF (597KB) | References | Related Articles | Metrics
To protect the security of social networks and ensure their availability after perturbation, this paper proposed a perturbation method for social networks based on the signless Laplacian matrix and social network non-randomness. During perturbation, the method constrains the spectral radius and the non-randomness of the social network, thus ensuring usability while improving the degree of privacy protection. The paper analyzed the security of this method in theory and provided the corresponding algorithm. Finally, experimental comparisons of the harmonic mean of the shortest distance, subgraph centrality and the change of non-randomness show that the proposed method effectively preserves the structural features of the social network and improves its availability.
Context-based usage control model for pervasive computing
WU Hai-ying
2012, 32(02): 489-492. DOI:
10.3724/SP.J.1087.2012.00489
Abstract | PDF (660KB) | References | Related Articles | Metrics
At present, access control in pervasive computing mostly adopts the Role-Based Access Control (RBAC) model. The Usage Control (UCON) model possesses mutability and continuity and fits pervasive computing better, but it does not sufficiently consider context information. Based on UCON, a context-based usage control model for pervasive computing (Con_UCON) was proposed. The model takes context information into account, and divides the decisive factors of obligations and conditions into static and dynamic ones, with dynamic obligations and conditions used as decisive factors during usage. The core rules of the model were established and the model was specified in a description language. It meets the needs of access control in pervasive computing, and results on three examples from a pervasive intelligent office system demonstrate its efficiency, flexibility and security.
Graphics and image technology
MRF exemplar inpainting algorithm based on dual-tree complex wavelet domain
WANG Shuang CHEN Guang-qiu SONG Ya-ya SUN Jun-xi
2012, 32(02): 493-503. DOI:
10.3724/SP.J.1087.2012.00493
Abstract | PDF (717KB) | Related Articles | Metrics
To eliminate the mosaic and "bell" effects caused by cumulative errors during the inpainting of large image regions, a Markov Random Field (MRF) exemplar-based inpainting method in the dual-tree complex wavelet domain was proposed. The image is converted to the complex-frequency domain by the Dual-Tree Complex Wavelet Transform (DTCWT), the exemplar inpainting order is computed from a rational confidence term and data term, and the unknown region is inpainted across multiple scales and bands. The inpainted images are then reconstructed by the inverse dual-tree complex wavelet transform. The experimental results show that, compared with classical discrete wavelet methods, the mosaic and "bell" effects are avoided and more of the textural and structural information is preserved.
Sub-pixel discrete method of point spread function from blurred images
LIANG Min ZHU Hong OUYANG Guang-zheng LIU Wei
2012, 32(02): 496-498. DOI:
10.3724/SP.J.1087.2012.00496
Abstract | PDF (534KB) | Related Articles | Metrics
Fast and accurate Point Spread Function (PSF) estimation is the premise of good blurred-image restoration. To remedy the deficiencies of discretizing the PSF of defocus-blurred and motion-blurred images, a discretization method was proposed that combines the geometric properties of the degradation model with sub-pixel estimation. Specifically, a weight allocation principle related to the distances to neighboring pixels was defined, realizing the discretization of the PSF. The experimental results illustrate that the proposed method improves precision and outperforms the traditional one in visual quality, sharpness evaluation function, Peak Signal-to-Noise Ratio (PSNR) and Improved Signal-to-Noise Ratio (ISNR).
Robust image sequence auto-mosaic method
WU Qing-shuang FU Zhong-liang
2012, 32(02): 499-503. DOI:
10.3724/SP.J.1087.2012.00499
Abstract | PDF (912KB) | Related Articles | Metrics
Considering the low robustness, large computation cost and low automation of traditional image mosaic methods, a robust automatic image sequence mosaic method was proposed. First, the method uses the Harris and Forstner operators to extract feature points after enhancing the images with a Wallis filter. The gray-level cross-correlation of the neighborhoods of the feature points is then used to obtain rough matches, and the RANSAC algorithm is used to obtain accurate matches. After that, the matches are used to calculate the fundamental matrix and epipolar lines, and the epipolar constraint guides the matching to obtain the final high-precision matching points, with a bidirectional relaxation matching algorithm eliminating erroneous matches lying on the epipolar lines. Finally, the accurate matching points are used to calculate the affine transforms between images in the sequence, completing the automatic mosaic. The experimental results show that the proposed method is effective for image sequence mosaic, and the fully automatic process has high practical value.
Improved object tracking method based on mean shift and particle filter
LI Ke XU Ke-hu HUANG Da-shan
2012, 32(02): 504-506. DOI:
10.3724/SP.J.1087.2012.00504
Abstract | PDF (490KB) | References | Related Articles | Metrics
To improve the accuracy and real-time performance of the particle filter for visual object tracking, an improved algorithm combining mean shift and the particle filter was proposed. Similar particles are clustered, and a representative particle in each cluster is iterated by the mean shift algorithm, so that the computational complexity is reduced by running mean shift on fewer particles. The particle number and the process noise distribution are adjusted adaptively according to the tracking conditions to improve accuracy and further reduce computation. The experimental results prove the superiority of the proposed method: its average per-frame running time is less than half that of the classic hybrid algorithm, and its computational complexity is also lower.
Source camera identification based on nonsubsampled Contourlet transform denoising filter design
CHEN Zong-ming ZHOU Zhi-ping
2012, 32(02): 507-513. DOI:
10.3724/SP.J.1087.2012.00507
Abstract | PDF (665KB) | References | Related Articles | Metrics
Because obvious scene noise remains when wavelet filters extract the residual noise from an image for source camera identification, a new pattern noise extraction method based on the Nonsubsampled Contourlet Transform (NSCT) was proposed. Following the process of source camera identification, the deficiencies of wavelet-based filters for pattern noise extraction were discussed first, and then an NSCT-based filter for extracting pattern noise was designed. The experimental results show that the NSCT-based filter not only suppresses scene noise markedly, but also improves the average identification rate by 3.7% over the wavelet-based filter when identifying images from three different cameras.
Color image edge detection based on quaternion and self-organizing map neural network
WANG Zheng LI Xing-ming
2012, 32(02): 510-513. DOI:
10.3724/SP.J.1087.2012.00510
Abstract | PDF (731KB) | References | Related Articles | Metrics
As traditional edge detection algorithms neither consider the correlation between the color components of a color image nor avoid sensitivity to the threshold value, a new edge detection method for color images was proposed combining quaternions with the Self-Organizing Map (SOM). Based on the quaternionic Cauchy integral formula and the vector product, an edge characteristic vector is constructed; the SOM is then trained on these vectors and used for edge detection. The experimental results show that the proposed method retains details better.
Edge thinning based on mathematical morphology thinning algorithm
LI Jie PENG Yue-ying YUAN Chang-an LIN Mo WANG Ren-min
2012, 32(02): 514-520. DOI:
10.3724/SP.J.1087.2012.00514
Abstract | Related Articles | Metrics
Given an improper threshold, the Sobel operator easily loses image edges or produces broad pseudo-edges. To solve these problems, an appropriate threshold is first selected using a variance constraint, and then a mathematical morphology thinning algorithm is applied to thin the edge images detected by the Sobel operator. The experimental results show that this method achieves satisfactory thinning while retaining the original edge information.
One-step singular correlation and most relevant vector filtering for color images
CAI Jian-chao LIU Chao
2012, 32(02): 517-520. DOI:
10.3724/SP.J.1087.2012.00517
Abstract | PDF (720KB) | Related Articles | Metrics
Concerning the blurred details after color image denoising, the correlation between adjacent pixels as well as among the color channels was analyzed. First, a one-step singular correlation detection algorithm is used to detect the noisy pixels in each layer of the pre-processed color image, and then the most relevant vector median is used to fill in their values, realizing color image filtering. The experimental results show that this method not only accurately detects salt-and-pepper noise, but also restores and protects the original information such as edge details, further improving filtering accuracy and criteria such as the Peak Signal-to-Noise Ratio (PSNR).
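The vector-median idea underlying this filter treats each RGB pixel as one vector and picks the window pixel closest (in summed distance) to all others. A plain vector median filter sketch, using the L1 norm and applied to every interior pixel rather than only to detected noise as the paper does, is:

```python
def vector_median(window):
    """Vector median: the pixel whose summed L1 distance to all other
    pixels in the window is smallest (never a fabricated color)."""
    def cost(p):
        return sum(sum(abs(a - b) for a, b in zip(p, q)) for q in window)
    return min(window, key=cost)

def vmf_filter(img):
    """Apply the vector median filter over 3x3 windows of an RGB image
    given as a list of rows of (r, g, b) tuples; borders are kept as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = vector_median(window)
    return out
```

Because the output is always one of the window's actual pixels, edges and fine color details survive better than with channel-by-channel scalar medians, which is the motivation behind the paper's variant.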
Improved algorithm for point cloud data simplification
ZHU Yu KANG Bao-sheng LI Hong-an SHI Fang-ling
2012, 32(02): 521-544. DOI:
10.3724/SP.J.1087.2012.00521
Abstract | PDF (670KB) | Related Articles | Metrics
Since geometrical features are excessively lost in Kim's simplification process for scattered point clouds, an improved simplification method was proposed. First, the principal curvatures of points in the point cloud were estimated by least-squares parabolic fitting. Then an error metric based on the Hausdorff distance of principal curvature was used to keep and extract the feature points. Finally, tests and analyses of measured data with different features show that the proposed method simplifies the point cloud data to a large extent, produces more uniform simplification results, and fully retains the original point cloud geometry without breaking small features, guaranteeing both quality and efficiency. The method can provide effective data for three-dimensional reconstruction, saving processing time and hardware resources.
Decision tree classification of hyperspectral remote sensing imagery based on independent component analysis
LIN Zhi-lei YAN Lu-ming
2012, 32(02): 524-527. DOI:
10.3724/SP.J.1087.2012.00524
Abstract | PDF (698KB) | Related Articles | Metrics
Hyperspectral remote sensing imagery contains abundant spectral information because of its numerous bands, but this also causes the curse of dimensionality. How to resolve this conflict and improve the classification accuracy of hyperspectral remote sensing imagery is the major concern. Therefore, this paper proposed the ICA-DTC model, which combines Independent Component Analysis (ICA) with a Decision Tree Classifier (DTC), and studied hyperspectral imagery classification based on EO-1 Hyperion data. First, ICA was applied to feature extraction on the hyperspectral remote sensing imagery. On this basis, the characteristic components and other geographic auxiliary elements were selected as test variables, appropriate thresholds were chosen to set the discrimination rules, and the DTC model was established to classify the hyperspectral remote sensing imagery. The results were then compared with those of traditional maximum likelihood classification. The experimental results show that ICA can extract nonlinear characteristics of surface features well, and the ICA-DTC model can effectively improve the classification accuracy of surface features under complex terrain. In terms of total classification accuracy, the former reaches 89.34%, 18.8% higher than the latter. In terms of the classification accuracy of individual surface features, the former is only slightly lower than the latter on water, while the other 11 surface features are classified more accurately.
Face recognition based on improved locality preserving projection
GONG Qu HUA Tao-tao
2012, 32(02): 528-534. DOI:
10.3724/SP.J.1087.2012.00528
Abstract | PDF (601KB) | Related Articles | Metrics
Locality Preserving Projection (LPP) is a manifold learning method, but its application to face recognition suffers from the singular value problem, so a solution using Singular Value Decomposition (SVD) was proposed. In this model, the sample data were projected onto a non-singular orthogonal matrix to solve the singular value problem. The low-dimensional projection subspace of the sample space was then obtained by the LPP method. The training samples and testing samples were projected onto the low-dimensional subspace respectively, and finally the nearest neighbor classifier was used for classification. A series of experiments comparing the proposed algorithm with the traditional locality preserving projection algorithm and Principal Component Analysis (PCA) was conducted on the ORL face database. The experimental results demonstrate the efficacy of the improved LPP approach for face recognition.
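The SVD preprocessing step that removes the singularity can be sketched as a generic SVD projection, under an assumed 99% energy-retention criterion; this is only the preprocessing stage, not the paper's full LPP pipeline.

```python
import numpy as np

def svd_preprocess(X, energy=0.99):
    """Project samples onto the leading right-singular vectors so that the
    subsequent LPP eigenproblem is non-singular. The 99% energy cutoff is
    an illustrative assumption."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return Xc @ Vt[:r].T, Vt[:r]

# 10 samples in a 50-dimensional space: the sample covariance is singular
# in the original space (rank at most 9 after centering).
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))
Z, basis = svd_preprocess(X)
```

After projection the reduced data matrix has full column rank, so scatter-type matrices built from it are invertible.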
Supervised locality preserving projection based on class information
LI Xiao-man WANG Jing
2012, 32(02): 531-534. DOI:
10.3724/SP.J.1087.2012.00531
Abstract | PDF (649KB) | Related Articles | Metrics
Locality Preserving Projection (LPP) is an approximation of Laplacian Eigenmap (LE), but it is an unsupervised method and does not take advantage of existing class information to improve classification efficiency. Therefore, a supervised locality preserving projection method based on class information, named SLPP-LI, was proposed. When learning the projection matrix, SLPP-LI makes comprehensive use of the geometrical structure of the manifold and the class information of the training set. It can effectively exploit the known low-dimensional information by adjusting a control parameter, and obtains the low-dimensional model of the high-dimensional data by directly solving a linear equation. Comparative experiments on several face databases and handwritten digit databases show that SLPP-LI is sensitive neither to the original dimensionality of the high-dimensional data nor to the number of training samples. On the same problems, SLPP-LI achieves a higher recognition rate than PCA, LPP, OLPP and SLPP, and it can effectively deal with classification issues.
New method for extracting region of interest of dynamic PET images based on curve clustering
TIAN Ping-ping LIU Li CHEN Yu-ting
2012, 32(02): 535-550. DOI:
10.3724/SP.J.1087.2012.00535
Abstract | PDF (581KB) | Related Articles | Metrics
Concerning the problem that many current clustering methods based on kinetic characteristics ignore the continuous temporal information of the Time Activity Curve (TAC), a method for Region Of Interest (ROI) extraction based on curve clustering was proposed. The proposed method contains three steps. First, the K-Means algorithm was used to remove the background and obtain a coarse mask of the heart. Second, curve clustering was used to extract the myocardium from the heart region obtained in the first step. Finally, the blood cavity was delineated based on the spatial relationship between pixels. The method was applied to extract the ROI from fourteen mouse PET images. The experimental results indicate that the proposed method delineates the blood cavity of the fourteen mice more accurately and stably than K-Means and the Hybrid Clustering Method (HCM).
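The idea of clustering whole curves rather than scalar features can be illustrated with plain K-means applied to entire time-activity curves. The rising and decaying synthetic curves below are invented for illustration and do not reflect real PET kinetics.

```python
import numpy as np

def kmeans_curves(curves, init_centers, iters=50):
    """Plain K-means treating each whole time-activity curve as one point
    in R^T, a simplified stand-in for the paper's curve clustering step."""
    centers = init_centers.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(curves[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([curves[labels == c].mean(axis=0)
                        for c in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# Synthetic TACs: 30 rising "tissue-like" curves vs 30 decaying
# "blood-like" curves, each with small noise.
t = np.linspace(0, 1, 20)
rng = np.random.default_rng(1)
curves = np.vstack([1 - np.exp(-5 * t) + 0.05 * rng.standard_normal((30, 20)),
                    np.exp(-5 * t) + 0.05 * rng.standard_normal((30, 20))])
labels = kmeans_curves(curves, init_centers=curves[[0, -1]])
```

With one initial center drawn from each family, the two curve shapes separate cleanly.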
Interslice interpolation method for medical image based on voxel similarity
MA Wei CHEN Jia-xin PAN Wei-wei
2012, 32(02): 538-553. DOI:
10.3724/SP.J.1087.2012.00538
Abstract | PDF (617KB) | Related Articles | Metrics
Cross-sectional interpolation is one of the key steps in the 3D reconstruction of medical images. Considering the shortcomings of current interpolation methods, such as blurring of object boundaries and low computational efficiency, an interslice interpolation method for medical images based on voxel similarity was proposed. According to the similarity of voxels in cross-sectional images and their structural features, this method calculates voxel similarity to classify the pixels and then interpolates them accordingly. The experimental results illustrate that the new method has lower computational complexity and improves image quality more than previous interpolation methods.
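A toy version of similarity-weighted interslice interpolation might look like the following. Both the reciprocal similarity measure and the blending rule are invented for illustration; the paper's voxel-similarity criterion differs.

```python
import numpy as np

def similarity_interp(slice_a, slice_b):
    """Interpolate a middle slice between two adjacent cross-sections,
    weighting each pixel pair by intensity similarity instead of plain
    averaging. The similarity measure and blending rule here are
    illustrative assumptions, not the paper's method."""
    a = slice_a.astype(float)
    b = slice_b.astype(float)
    sim = 1.0 / (1.0 + np.abs(a - b))   # 1 for identical pixels, -> 0 otherwise
    # Similar pixels are averaged; dissimilar ones stay close to slice_a,
    # which avoids the boundary blurring of a plain 50/50 average.
    return sim * (a + b) / 2.0 + (1.0 - sim) * a

a = np.full((4, 4), 10.0)
b = a.copy()
b[2, 2] = 200.0                          # a sharp structure only in slice_b
mid = similarity_interp(a, b)
```

Where the slices agree, the result equals the plain average; where they disagree sharply, the interpolated pixel stays near one slice instead of smearing the boundary.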
Compressed sensing-adaptive regularization for reconstruction of magnetic resonance image
LI Qing YANG Xiao-mei LI Hong
2012, 32(02): 541-544. DOI:
10.3724/SP.J.1087.2012.00541
Abstract | PDF (569KB) | Related Articles | Metrics
Current Magnetic Resonance (MR) image reconstruction algorithms based on Compressed Sensing (CS-MR) commonly use a global regularization parameter, which results in inferior reconstructions that cannot restore image edges and smooth noise at the same time. To solve this problem, a reconstruction method based on adaptive regularization and compressed sensing was proposed that jointly uses the sparse priors and the local smoothness priors of MR images. The nonlinear conjugate gradient method was used to solve the optimization problem, and the local regularization parameter was adapted during the iterative process. The regularization parameter can recover image edges while smoothing noise, and keeps the cost function convex within its domain. Prior information is incorporated into the regularization parameter to enhance the high-frequency components of the image. The experimental results show that the proposed method can effectively restore image edges and smooth noise.
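For context, the fixed-parameter baseline that adaptive regularization improves on can be sketched with plain ISTA on a toy sparse-recovery problem. This is a generic l1-regularized CS solver with one global weight `lam`, not the paper's nonlinear conjugate gradient method.

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Plain ISTA for l1-regularized least squares, a common workhorse in
    CS reconstruction, using a single global regularization weight lam."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L                            # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)    # soft threshold
    return x

# A toy sparse-recovery instance standing in for the MR problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

The global `lam` trades off data fidelity against sparsity everywhere at once; the paper's point is precisely that a locally varying parameter handles edges and smooth regions differently.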
Computer software technology
Formal description and analysis of conformance of composite Web service behavior
LI Jin ZHANG Hua WU Hao-xiong XIANG Jun
2012, 32(02): 545-550. DOI:
10.3724/SP.J.1087.2012.00545
Abstract | PDF (931KB) | Related Articles | Metrics
Web service choreography and orchestration define the global interaction of a composite Web service and the local behavior of each participant from the global and local perspectives, respectively. The conformance of each participant's local behavior to the global interaction guarantees the correctness of the Web service composition. This paper first presented a set of definitions, based on process algebra, to formally describe the global interaction of a composite Web service, the local behavior of each participant, and the mapping rules between them. Accordingly, two formal judgment rules for the conformance of each participant's local behavior to the global interaction were proposed, based on the relationship between the transitions of the global interaction and the local processes and on bisimulation theory. Finally, the formal conformance checking approach was described through a case study. The result shows that the proposed conformance definition and checking approach can formally and effectively check the conformance of a Web service composition, so that its correctness can be guaranteed.
XML-based standards compliance testing scheme
WU Jie-ming FAN Guo-mei
2012, 32(02): 551-553. DOI:
10.3724/SP.J.1087.2012.00551
Abstract | PDF (445KB) | Related Articles | Metrics
To improve the efficiency of standards compliance testing, the common features of information technology standards were studied and an XML-based standards compliance testing program was proposed. Specific test strategies for the four stages of the program were given, including pre-test preparation, test case generation, test case execution, and compliance analysis of the execution results. Using XML technology, the data types of the standards were formally described. The boundary value method and the equivalence class partitioning method were applied to generate test cases, and the results of the executed test cases were analyzed with the proposed compliance analysis method. The experimental results verify the efficiency of the standards compliance testing program.
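The boundary-value generation stage can be sketched as below. The XML element and attribute names (`standard`, `field`, `min`, `max`) are invented for illustration, since the paper's schema is not specified.

```python
import xml.etree.ElementTree as ET

def boundary_cases(xml_spec):
    """Generate boundary-value test cases for integer fields described in a
    small ad-hoc XML schema. The element/attribute names are hypothetical;
    only the boundary-value technique itself is standard."""
    root = ET.fromstring(xml_spec)
    cases = {}
    for f in root.findall('field'):
        lo, hi = int(f.get('min')), int(f.get('max'))
        # Classic boundary values: just outside, on, and just inside each bound.
        cases[f.get('name')] = sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})
    return cases

spec = '<standard><field name="port" min="1" max="65535"/></standard>'
cases = boundary_cases(spec)
```

Equivalence-class partitioning would add one representative value per valid and invalid class on top of these boundary points.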
Research and development of automated performance test tool for Android smartphone
YANG Yi-jun HUANG Da-qing
2012, 32(02): 554-556. DOI:
10.3724/SP.J.1087.2012.00554
Abstract | PDF (514KB) | Related Articles | Metrics
In order to improve the efficiency of smartphone performance testing, an automated testing methodology was introduced. Following this method, an Android smartphone performance test tool called FLEX-ANDROID was developed. The components of the test tool and the test scripts were described in detail, and the calculation and generation of test results were analyzed. The FLEX-ANDROID tool was then used for automated testing, and the time costs of automated and manual testing were compared. The result shows that automated testing is about three times as fast as manual testing, indicating that the tool can effectively improve performance testing efficiency and greatly reduce testing time and duplicated tests.
Improved quantum genetic algorithm and its application in test data generation
ZHOU Qi JIANG Shu-juan ZHAO Xue-feng
2012, 32(02): 557-560. DOI:
10.3724/SP.J.1087.2012.00557
Abstract | PDF (630KB) | Related Articles | Metrics
This paper proposed an Improved Quantum Genetic Algorithm (IQGA) for the problem of slow convergence in test data generation. There are two main improvements: first, every bit of every individual is reversed to conduct the evolution; second, the binary individuals are mutated after measurement, instead of the traditional exchange of the probability amplitudes of quantum bits. IQGA was applied to test data generation. Experiments on three basic programs show that IQGA is better than QGA in terms of coverage rate and the number of iterations. IQGA not only ensures the right direction of population evolution and avoids premature convergence, but also reaches a solution at a faster convergence speed.
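A minimal quantum-inspired GA on the OneMax toy problem illustrates the measure-evaluate-rotate loop that IQGA modifies. The rotation step size and amplitude clipping bounds below are arbitrary choices, and this is the generic QGA baseline, not the paper's IQGA.

```python
import numpy as np

def qga_onemax(n_bits=16, pop=10, gens=60, seed=0):
    """Minimal quantum-inspired GA maximizing the number of 1-bits (OneMax).
    Each gene is a qubit with amplitudes (cos t, sin t); measuring samples a
    binary string, and a rotation gate nudges angles toward the best string
    found so far. A generic QGA sketch, not the paper's IQGA."""
    rng = np.random.default_rng(seed)
    theta = np.full((pop, n_bits), np.pi / 4)         # P(bit = 1) starts at 0.5
    best_bits, best_fit = np.zeros(n_bits, int), -1
    for _ in range(gens):
        # Measurement: sample a binary population from the qubit amplitudes.
        bits = (rng.random((pop, n_bits)) < np.sin(theta) ** 2).astype(int)
        fits = bits.sum(axis=1)
        i = int(fits.argmax())
        if fits[i] > best_fit:
            best_fit, best_bits = int(fits[i]), bits[i].copy()
        # Rotation gate: move every qubit toward the best individual's bits.
        delta = 0.05 * np.pi * np.where(best_bits == 1, 1.0, -1.0)
        theta = np.clip(theta + delta, 0.05, np.pi / 2 - 0.05)
    return best_bits, best_fit

best_bits, best_fit = qga_onemax()
```

The paper's IQGA replaces the amplitude-exchange operator with per-bit reversal and post-measurement binary mutation; the loop structure stays the same.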
Strategy of SaaS addressing and interrupt for software generation based on partitioning algorithm
ZHOU Xiang-bing YANG Xing-jiang MA Hong-jiang
2012, 32(02): 561-565. DOI:
10.3724/SP.J.1087.2012.00561
Abstract | PDF (717KB) | Related Articles | Metrics
In the software generation process of SaaS, there are problems with the recognition of Web service and REST (Representational State Transfer) interfaces. Therefore, an approach based on a partitioning algorithm was proposed, which employs the algorithm to partition SaaS functions and to define difference nodes for different functions. The similarity between nodes was defined to accomplish the partition, which improves the efficiency of SaaS functions. Second, according to the changing requirements, an addressing and interrupt approach was presented to realize the software generation of SaaS. Finally, a SaaS online sales application on Amazon cloud computing was analyzed, which shows that the approach is feasible and effective.
A remote file synchronization method
HE Qian ZHUO Bi-hua
2012, 32(02): 566-568. DOI:
10.3724/SP.J.1087.2012.00566
Abstract | PDF (479KB) | Related Articles | Metrics
To reduce the large amount of delta data transferred between server and client in remote file synchronization with the rsync algorithm, a new remote file synchronization method was proposed. On the basis of the rsync algorithm, the proposed method adopts delta compression technology, using block-move and the KMP algorithm to find the differences and matches between client and server, and using a sliding-window compression algorithm to compress the delta data, which effectively reduces the delta data in network traffic. The experimental results show that the delta data can be reduced by more than 97%, thus reducing network bandwidth consumption and increasing the efficiency of remote file synchronization.
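The block-matching idea behind such delta synchronization can be sketched as follows: fixed-size block hashing of the old file plus zlib-compressed literals for unmatched bytes. The paper's KMP matching, block-move handling, and sliding-window compressor are simplified away.

```python
import hashlib
import zlib

def make_delta(old: bytes, new: bytes, block: int = 8):
    """Build a delta turning `old` into `new` as ('copy', offset) and
    ('data', compressed-literal) commands. A simplified rsync-style sketch,
    not the paper's exact KMP/block-move variant."""
    index = {hashlib.md5(old[i:i + block]).digest(): i
             for i in range(0, len(old) - block + 1, block)}
    cmds, lit, i = [], b"", 0
    while i < len(new):
        if len(new) - i >= block:
            h = hashlib.md5(new[i:i + block]).digest()
            if h in index:                       # block already on the server
                if lit:
                    cmds.append(("data", zlib.compress(lit)))
                    lit = b""
                cmds.append(("copy", index[h]))
                i += block
                continue
        lit += new[i:i + 1]                      # unmatched byte -> literal run
        i += 1
    if lit:
        cmds.append(("data", zlib.compress(lit)))
    return cmds

def apply_delta(old: bytes, cmds, block: int = 8) -> bytes:
    return b"".join(old[arg:arg + block] if op == "copy" else zlib.decompress(arg)
                    for op, arg in cmds)

old = b"The quick brown fox jumps over the lazy dog. " * 4
new = old[:30] + b"XYZ" + old[30:]
delta = make_delta(old, new)
```

Only the short compressed literal and a list of block offsets cross the network, rather than the whole file.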
Typical applications
Survey on finding the periodic orbits in chaotic systems
YAO Shang-ping LI Qing-du
2012, 32(02): 569-594. DOI:
10.3724/SP.J.1087.2012.00569
Abstract | PDF (788KB) | Related Articles | Metrics
Periodic orbits provide a skeleton for the organization of complex chaotic systems, since many important characteristics and dynamic properties of these systems can be determined by solving for the periodic orbits, such as the accurate calculation of Lyapunov exponents, the estimation of topological entropy and the description of a chaotic invariant set. First, the paper reviewed the four methods currently in common use for finding periodic orbits, namely the NR, Broyden, SD and DL algorithms, and analyzed their characteristics and mutual relations. Second, the paper discussed the advantages, disadvantages and scope of each method in detail with specific examples. Finally, the paper pointed out that the DL algorithm is the most suitable among the four, and suggested future research directions.
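The Newton-Raphson approach mentioned in the survey can be shown in its simplest one-dimensional form: Newton iteration on g(x) = f^p(x) - x, here finding the nontrivial fixed point (period-1 orbit) of the logistic map.

```python
def newton_periodic_orbit(f, df, x0, period, tol=1e-12, iters=50):
    """Newton-Raphson search for a period-p orbit of a 1-D map f, i.e. a
    root of g(x) = f^p(x) - x, shown in its simplest one-dimensional form."""
    x = x0
    for _ in range(iters):
        # Evaluate f^p and its derivative via the chain rule.
        y, dy = x, 1.0
        for _ in range(period):
            dy *= df(y)
            y = f(y)
        g, dg = y - x, dy - 1.0
        if abs(g) < tol:
            break
        x -= g / dg                 # Newton update on g(x) = f^p(x) - x
    return x

# Logistic map x -> 4x(1-x): its nontrivial fixed point is x = 3/4.
f = lambda x: 4 * x * (1 - x)
df = lambda x: 4 - 8 * x
x_star = newton_periodic_orbit(f, df, 0.6, period=1)
```

For higher-dimensional maps the scalar derivative becomes a Jacobian product, which is where the Broyden, SD and DL variants differ in how they build or damp that linear solve.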
Elastic scheduling in real-time systems
YANG Zhi-bang XU Cheng ZHOU Xu ZHU Xue-qing
2012, 32(02): 573-577. DOI:
10.3724/SP.J.1087.2012.00573
Abstract | PDF (919KB) | Related Articles | Metrics
Elastic scheduling is designed for real-time systems with variable load. It adjusts task attributes dynamically to meet the flexibility requirements of the system, and it is an effective task scheduling strategy. Concerning the research results and open problems of elastic scheduling, an overview was given, and the research progress on the elastic periodic task model, the scheduling model and elastic scheduling algorithms was analyzed. The existing problems were investigated, and possible future research directions were suggested.
Verification method of vehicle driving tendency recognition model under free flow
ZHANG Yuan-yuan WANG Xiao-yuan ZHANG Jing-lei
2012, 32(02): 578-580. DOI:
10.3724/SP.J.1087.2012.00578
Abstract | PDF (475KB) | Related Articles | Metrics
A lane change decision-making model was established based on fuzzy multi-objective theory. The predictions of lane change models with and without consideration of driving tendency differences were calculated from road experiment data. The predicted macroscopic traffic flow parameter (lane change rate) was compared with the road experiment results to verify the inferences. The experimental results show that the accuracy of the method for identifying vehicle driving tendency can be improved significantly.
Spatial data model for underground mine 3D visual production management and control system
XIONG Shu-min WANG Li-guan CHEN Zhong-qiang CHEN Jian-hong
2012, 32(02): 581-588. DOI:
10.3724/SP.J.1087.2012.00581
Abstract | PDF (835KB) | Related Articles | Metrics
In order to realize visualization, spatial analysis, automatic modeling and dynamic model updating, the mine spatial environment and traditional 3D spatial data models were analyzed, and an entity-oriented spatio-temporal hybrid data model was designed. It integrates the 3D parametric skeleton model, the Topology-concerned and Entity-oriented 3D Vector Data Model (TEVDM) and the Octree-Block Model (OBM). The TEVDM and OBM are used to express the boundaries and internal properties of orebodies. The Parametric Entity-Network Data Model (PENDM) was introduced to describe the skeleton of the roadway engineering and production systems. In particular, a behavior model was introduced to describe personnel and devices with behavioral characteristics. The model also includes complete spatial features that can describe sets of semantically related entities. Finally, examples of using this model to describe mine spatial phenomena and carry out spatial analysis were given. The examples show that this model is more practical for mining than traditional models.
Virtual remote laboratory based on Modelica
GE Jia-huan ZHU Shan-an
2012, 32(02): 585-588. DOI:
10.3724/SP.J.1087.2012.00585
Abstract | PDF (633KB) | Related Articles | Metrics
To make up for the deficiencies of traditional labs, an Internet-based electrical and electronic virtual remote laboratory was introduced, in which the unified object-oriented language Modelica is used for modeling and simulation. The architecture and the internal operating mechanism of the virtual remote lab were elaborated. Two subsystems, an analog circuit system and a plate angle control experiment system, were introduced as examples. The modeling of the analog circuit system was achieved by calling the Modelica standard library. The plate angle control experiment system was divided into modules, and the simulation model of each module was built with Modelica. A scalable model library was established, on the basis of which the plate angle control experiment system was constructed. The two subsystems were simulated on the virtual platform, and the results were consistent with theoretical calculations.
Approximation model of piecewise stationary stochastic process autocorrelation function
CHENG Hao LIU Guo-qing CHENG Xiao-gang
2012, 32(02): 589-591. DOI:
10.3724/SP.J.1087.2012.00589
Abstract | PDF (427KB) | Related Articles | Metrics
Non-stationary random signals frequently encountered in signal processing can be divided into piecewise stationary random signals, whose essential characteristics can be reflected by the autocorrelation function. The computation of the autocorrelation function of a piecewise stationary stochastic process was discussed. In order to reduce the amount of calculation and the errors of existing function models, a new model to approximate the autocorrelation function of a piecewise stationary stochastic process was proposed. Computer simulation shows that the model can effectively approximate the autocorrelation function: the computing speed is faster, and the errors are much smaller and smoother. Applying the model to the restoration of blurred digital images yields a very good restoration effect.
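A baseline segment-wise autocorrelation estimate, the straightforward computation that an approximation model aims to speed up, can be sketched as follows; the white-noise and AR(1) segments are synthetic illustrations.

```python
import numpy as np

def segment_autocorr(x, seg_len, max_lag):
    """Estimate a biased, normalized autocorrelation function separately on
    each stationary segment of a piecewise-stationary signal. This is the
    direct computation, not the paper's approximation model."""
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    acfs = []
    for s in segs:
        s = s - s.mean()
        r = np.array([np.dot(s[:len(s) - k], s[k:]) / len(s)
                      for k in range(max_lag + 1)])
        acfs.append(r / r[0])                 # normalize so acf(0) = 1
    return np.array(acfs)

# Two stationary pieces: white noise, then a strongly correlated AR(1).
rng = np.random.default_rng(0)
white = rng.standard_normal(500)
ar = np.zeros(500)
e = rng.standard_normal(500)
for t in range(1, 500):
    ar[t] = 0.9 * ar[t - 1] + e[t]
x = np.concatenate([white, ar])
acfs = segment_autocorr(x, seg_len=500, max_lag=10)
```

The two segments yield clearly different autocorrelation shapes (near zero at lag 1 for white noise, near 0.9 for the AR(1) piece), which is the piecewise structure the model must capture.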
2-D direction of arrival estimation based on signal correlation and modified MUSIC algorithm
LIU Kang XI You-bao LI Zhi
2012, 32(02): 592-594. DOI:
10.3724/SP.J.1087.2012.00592
Abstract | PDF (445KB) | Related Articles | Metrics
Coherent signals are widespread in real environments. Concerning the problem that the traditional 2-D Direction Of Arrival (DOA) estimation algorithm based on Multiple Signal Classification (MUSIC) cannot process coherent signals, the Modified MUSIC (MMUSIC) algorithm was used to realize 2-D DOA estimation for coherent signals. The applicable range of the MMUSIC algorithm was extended from the 1-D Uniform Linear Array (ULA) to 2-D centro-symmetric arrays, and it was deduced theoretically that the direction-finding performance of the MMUSIC algorithm is inversely proportional to the cosine of the phase difference. For two coherent signals separated by more than 4 degrees, the success probability of the 2-D MMUSIC algorithm exceeds 90% in simulation experiments.
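For reference, the basic incoherent 1-D MUSIC spectrum that MMUSIC builds on can be sketched as follows; the array geometry, source angles and noise level below are arbitrary illustrative choices.

```python
import numpy as np

def music_spectrum(X, n_src, d=0.5):
    """Textbook 1-D MUSIC pseudo-spectrum for a uniform linear array with
    element spacing d (in wavelengths). The paper's MMUSIC extends the idea
    to coherent sources and 2-D centro-symmetric arrays; this sketch covers
    only the basic incoherent 1-D case."""
    M, N = X.shape
    R = X @ X.conj().T / N                       # sample covariance
    w, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :M - n_src]                        # noise-subspace eigenvectors
    grid = np.arange(-90.0, 90.5, 0.5)
    m = np.arange(M)[:, None]
    A = np.exp(-2j * np.pi * d * m * np.sin(np.deg2rad(grid)))  # steering matrix
    P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2
    return grid, P

# Two uncorrelated sources at -20 and +30 degrees, 8-element half-wave ULA.
rng = np.random.default_rng(0)
M, N = 8, 400
doas = np.deg2rad(np.array([-20.0, 30.0]))
A = np.exp(-2j * np.pi * 0.5 * np.arange(M)[:, None] * np.sin(doas))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
grid, P = music_spectrum(X, n_src=2)
```

With coherent sources the covariance loses rank and these peaks collapse, which is exactly the failure mode MMUSIC addresses.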
Superintended by:
Sichuan Associations for Science and Technology
Sponsored by:
Sichuan Computer Federation
Chengdu Branch, Chinese Academy of Sciences
Honorary Editor-in-Chief:
ZHANG Jingzhong
Editor-in-Chief:
XU Zongben
Associate Editor:
SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code:
62-110
Foreign Distribution Code:
M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel:
028-85224283-803
028-85222239-803
Website:
www.joca.cn
E-mail:
bjb@joca.cn