
Table of Contents

    01 June 2013, Volume 33 Issue 06
    Network and communications
    Coverage problems in visual sensor networks
    CHEN Wenping YANG Meng HONG Yi LI Deying
    2013, 33(06):  1489-1522.  DOI: 10.3724/SP.J.1087.2013.01489
    Video monitoring has been applied to various occasions to provide efficient information for safeguarding. In this paper, the coverage problems in Visual Sensor Networks (VSN) were surveyed. A VSN has the characteristics of a directional sensor network; however, once the facing direction of the monitored target is taken into account, it differs from an ordinary directional sensor network. According to whether the facing direction of the monitored target is considered in the sensor model, the works on VSN coverage, including target coverage, area coverage and barrier coverage, were introduced respectively. Finally, the current problems and future research directions were discussed.
    Network mobility and fast handover scheme within PMIPv6
    KONG Fanjie ZHANG Qizhi RAO Liang CHEN Yuan
    2013, 33(06):  1495-1504.  DOI: 10.3724/SP.J.1087.2013.01495
    To solve the long handover latency of mobile networks in NEtwork MObility (NEMO) Basic Support (NBS), this paper proposed a scheme for NEMO within PMIPv6, together with an improved handover procedure. The proposed scheme decreased the number of handover messages transmitted on the wireless link and set up the forwarding tunnel in advance, thereby achieving fast handover of the mobile network. The analytical results show that, compared with the NBS handover procedure, the standard handover procedure and the fast handover procedure of the proposed scheme decrease handover latency by 56.55% and 58.63% respectively.
    Two-hop incentive compatible routing protocol in disruption-tolerant networks
    WEN Ding CAI Ying LI Zhuo
    2013, 33(06):  1500-1504.  DOI: 10.3724/SP.J.1087.2013.01500
    A Two-hop Incentive Compatible (TIC) routing protocol was proposed for Disruption-Tolerant Networks (DTN) to counter the degradation of communication performance caused by selfish nodes. TIC selected the optimal relay node by taking both the encounter probability and the transmission cost into account, and ensured that nodes maximized their profit only when they reported their encounter probability and transmission cost honestly. Meanwhile, a signature technique based on bilinear maps was introduced to ensure that the selected relay nodes receive their payment securely, which effectively prevents malicious nodes from tampering with the messages.
    Opportunistic routing strategy based on local information in mobile sensor networks
    DONG Ting
    2013, 33(06):  1505-1518.  DOI: 10.3724/SP.J.1087.2013.01505
    An opportunistic routing strategy was proposed based on a combined analysis of a node's channel quality, movement speed and energy cost. The received signal strength indicator was used to build the opportunity probability, the movement speed was used to reflect node mobility, and the energy cost was used to express the node's residual life. The opportunistic route was built from these three factors, and the preferred number of a candidate node was defined to determine its listening and forwarding time, which also solved the problem of packet retransmission. The analysis and simulation show that, compared with Extremely Opportunistic Routing (ExOR) and Opportunistic with Backtracking (OB), the new strategy is more suitable for opportunistic networks, with higher validity and lower energy cost.
    Sparsity adaptive matching pursuit algorithm based on adaptive threshold for OFDM sparse channel estimation
    JIANG Shan QIU Hongbing HAN Xu
    2013, 33(06):  1508-1514.  DOI: 10.3724/SP.J.1087.2013.01508
    In order to reduce the complexity of the reconstruction algorithm and improve the precision of estimation, the authors proposed a new Sparsity Adaptive Matching Pursuit (SAMP) algorithm using an adaptive threshold, applied to OFDM (Orthogonal Frequency Division Multiplexing) sparse channel estimation. The Monte Carlo simulation results show that, compared with the traditional method, the CPU run time decreased by 44.7%, and at lower SNR (Signal-to-Noise Ratio) the performance achieved obvious improvements. Besides, for OFDM sparse channel estimation, a new pilot pattern design was presented based on the mutual coherence of the measurement matrix in Compressive Sensing (CS) theory. The Monte Carlo simulation results show that the precision of channel estimation is increased by 2-4 dB with the new pilot pattern.
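    The following minimal Python sketch is not the paper's algorithm (the stage-wise SAMP rule is more refined); it only illustrates greedy sparse reconstruction with an adaptive residual-based stopping threshold. The matrix sizes, threshold fraction and sparsity level are illustrative assumptions.

```python
# A plain matching-pursuit reconstruction with an adaptive stopping threshold
# (a deliberate simplification of SAMP; all names and parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def adaptive_omp(Phi, y, max_iter=20):
    residual, support = y.copy(), []
    thresh = 0.1 * np.linalg.norm(y)                  # adaptive threshold: fraction of ||y||
    for _ in range(max_iter):
        if np.linalg.norm(residual) < thresh:
            break
        k = int(np.argmax(np.abs(Phi.T @ residual)))  # most correlated atom
        if k not in support:
            support.append(k)
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

n, m, s = 64, 24, 3                                   # toy sparse channel with 3 taps
Phi = rng.normal(size=(m, n)) / np.sqrt(m)            # random measurement (pilot) matrix
h = np.zeros(n); h[rng.choice(n, s, replace=False)] = rng.normal(size=s)
x_hat = adaptive_omp(Phi, Phi @ h)
print(np.linalg.norm(x_hat - h))                      # essentially zero in this noiseless toy
```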
    Initial quantization parameter selection algorithm of rate control for H.264
    YANG Kaifang GONG Yanchao
    2013, 33(06):  1511-1514.  DOI: 10.3724/SP.J.1087.2013.01511
    As the initial Quantization Parameter (QP) selection of rate control in JVT-H017 has nothing to do with the video content, this paper analyzed the existing improved algorithms and put forward a fast and effective initial QP selection algorithm. After obtaining the Mean Absolute Difference (MAD) of the first P frame and the average bits per pixel, the proposed algorithm calculates an appropriate initial QP. The experimental results show that, compared with JVT-H017 under constant bit rate, the predicted initial QP is more accurate and greatly improves the performance of rate control. The maximum gain in ΔPSNR is up to 1.1 dB, and the algorithm also works well when B frames are included.
    Semi-supervised network traffic classification method based on support vector machine
    LI Pinghong WANG Yong TAO Xiaoling
    2013, 33(06):  1515-1518.  DOI: 10.3724/SP.J.1087.2013.01515
    In order to solve the low accuracy, large time consumption and limited application range of traditional network traffic classification, a semi-supervised network traffic classification method based on Support Vector Machine (SVM) was proposed. During SVM training, it determined the support vectors from the initial and new sample sets by using incremental learning, avoided unnecessary repeated training, and alleviated the low accuracy and long training time of the original classifiers caused by newly appearing samples. This paper also proposed an improved Tri-training method to train multiple classifiers, in which a large number of unlabeled samples and a small amount of labeled samples were used to modify the classifiers; this reduced the auxiliary classifiers' noise data and overcame the strict limitation on sample types of traditional co-verification classification methods. The experimental results show that the proposed algorithm has excellent accuracy and speed in traffic classification.
    One base-band equivalent echo simulation method for radio proximity detector
    LU Zhaogan YIN Yingzeng
    2013, 33(06):  1519-1522.  DOI: 10.3724/SP.J.1087.2013.01519
    The base-band equivalent model between the transmitted signal and its corresponding received signals was established by analyzing the transceiver of a radio proximity detector. Considering the multipath loss of the received signal, it can be used as a general echo simulation model for proximity detectors, and signals at different carrier frequencies can be simulated by this base-band approach. The echo signal obtained by this method can describe the encounter process between targets and detectors, and its waveforms for various detecting systems with various Doppler frequency variations can also be simulated. Furthermore, the method has the advantages of low computational complexity and simple implementation. Finally, the proposed base-band echo simulation method was tested in a radio proximity detector working scenario. The numerical results show that the simulated time signal can reflect the encounter process between projectile and target with relative movement.
    LTE MAC layer downlink scheduling and resource allocation with low calculation amount
    CUI Ya'nan SU Hansong LIU Gaohua
    2013, 33(06):  1523-1526.  DOI: 10.3724/SP.J.1087.2013.01523
    This paper proposed a new downlink scheduling algorithm based on Quality of Service (QoS) for Long Term Evolution (LTE), because the existing scheduling algorithms cannot satisfy the needs of multi-user real-time and non-real-time services and have a large amount of calculation. The algorithm introduced a balance factor on the basis of the Modified Largest Weighted Delay First (M-LWDF) algorithm, and the users' reported Channel Quality Indicator (CQI) directly replaced the instantaneous rate. The simulation results show that the proposed algorithm reduces the computational complexity; when the number of users increased to 45, the packet loss rate fell by 6.71% and the overall system throughput increased by 12.91% on the premise of fairness.
    Femtocell downlink power control algorithm in long term evolution
    ZHU Shibing JIN Jie SU Hansong CUI Ya'nan
    2013, 33(06):  1527-1530.  DOI: 10.3724/SP.J.1087.2013.01527
    When femtocells are deployed densely in Long Term Evolution (LTE) systems, there is strong interference between femtocells. Concerning the downlink interference of femtocells, an adaptive power control algorithm based on path loss was proposed. According to the threshold of Signal to Interference plus Noise Ratio (SINR), the femtocell user was given adjustment flags, which were then sent to the femtocell through the uplink channel. According to the user's path loss and allocated resource blocks, the femtocell adaptively adjusted its transmission power to control the user's SINR on the basis of the received adjustment flags. The simulation results show that the algorithm controls the SINR of femtocell users better and improves the average femtocell throughput by at least 1.7 Mbps compared with the situation of no power control. They also indicate that the denser the femtocells, the more apparent the control effect.
    Network and distributed technology
    Research on error accumulative sum of single precision floating point
    CHEN Tianchao FENG Baiming
    2013, 33(06):  1531-1539.  DOI: 10.3724/SP.J.1087.2013.01531
    Alignment and normalization are needed in the process of floating-point summation in a computer. The normalization operation performs rounding, which generates errors. Accumulation of floating-point numbers therefore accumulates these errors, causing a lack of precision in the calculations and even wrong results. Through experiments, this paper discussed the influence of different binding orders of floating-point numbers on the error of the accumulative sum in single-precision floating-point accumulation. It aimed to explore the law of the calculation errors caused by the binding order, provide a basis for selecting the binding order in computing paradigms and structures such as multi-core computation, GPU computation and multi-processor computation, and facilitate the advantages of parallel computing.
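    As a concrete illustration of the effect studied above, the following Python sketch (my example, not from the paper) shows how the binding order of single-precision additions changes the accumulated result:

```python
# A minimal demonstration that the binding order of float32 additions matters.
import numpy as np

big = np.float32(1e8)
small = np.full(10000, 1.0, dtype=np.float32)   # ten thousand 1.0 terms

# Order 1: start with the large term, then add each small term.
acc1 = big
for v in small:
    acc1 = np.float32(acc1 + v)   # each 1.0 is below ulp(1e8)/2 and is rounded away

# Order 2: accumulate the small terms first, then add the large term.
acc2 = np.float32(0.0)
for v in small:
    acc2 = np.float32(acc2 + v)
acc2 = np.float32(acc2 + big)

print(acc1)   # 1.0e+08     -- all small contributions are lost
print(acc2)   # 1.0001e+08  -- small terms are preserved
```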
    Cloud application classification and fine-grained resource provision based on prediction
    XIONG Hui WANG Chuan
    2013, 33(06):  1534-1539.  DOI: 10.3724/SP.J.1087.2013.01534
    Considering that the applications deployed in the cloud are rather complicated and that different applications exhibit different sensitivity to specific resources, an architecture-based main-mode method was proposed to classify applications precisely into CPU-intensive, memory-intensive, network-intensive and I/O-intensive, enabling better scheduling of resources in the cloud. An ARIMA (AutoRegressive Integrated Moving Average) model-based prediction algorithm was also implemented, which lowers the average prediction error (a high average forecast error of 7.59% and a low average forecast error of 6.06%) when forecasting resource consumption. Appropriate modifications were made to the traditional virtualization-based application cloud architecture to solve the inflexibility and inefficiency of the virtual-machine-based architecture.
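    For the prediction part, a minimal sketch of ARIMA-based resource forecasting with statsmodels follows; the series, the (p, d, q) order and the forecast horizon are illustrative assumptions, not the paper's configuration.

```python
# A small sketch of ARIMA-based resource-usage forecasting (illustrative only).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical CPU-utilisation samples (%) collected every minute.
cpu = np.array([35, 38, 41, 40, 44, 47, 45, 49, 52, 50, 54, 57, 55, 59, 62, 60], dtype=float)

model = ARIMA(cpu, order=(1, 1, 1))   # AR(1), first difference, MA(1) -- assumed order
fitted = model.fit()

forecast = fitted.forecast(steps=5)   # predicted utilisation for the next 5 intervals
print(forecast)
```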
    First-principle nonlocal projector potential calculation on GPU cluster
    FU Jiyun JIA Weile CAO Zongyan WANG Long YE Huang CHI Xuebin
    2013, 33(06):  1540-1552.  DOI: 10.3724/SP.J.1087.2013.01540
    Plane Wave Pseudopotential (PWP) Density Functional Theory (DFT) calculation is the most widely used method for material calculation. The projector calculation plays an important part in the self-consistent iteration of PWP-DFT calculation, but it often becomes a hindrance to the speed-up of the software. Therefore, according to the features of the Graphics Processing Unit (GPU), a speed-up algorithm was proposed: 1) using a new parallel mechanism to solve the potential energy of the nonlocal projector; 2) redesigning the data distribution structure; 3) reducing the use of memory; 4) proposing a solution to the related data problems of the algorithm. An 18-57 times acceleration was eventually obtained, reaching 12 seconds per step of molecular dynamics simulation. The testing time of running this model on the GPU platform was analyzed in detail, and the calculation bottleneck of implementing this method on GPU clusters was discussed.
    Parallel cost model for heterogeneous multi-core processors
    HUANG Pinfeng ZHAO Rongcai YAO Yuan ZHAO Jie
    2013, 33(06):  1544-1547.  DOI: 10.3724/SP.J.1087.2013.01544
    The existing parallel cost models are mostly devised for shared-memory or distributed-memory architectures, and thus are not suitable for heterogeneous multi-core processors. To solve this problem, a new parallel cost model for heterogeneous multi-cores was proposed. It quantitatively described the impact of computing capacity, memory access delay and data transfer cost on the parallel execution time of loops, thus improving the accuracy of accelerated parallel loop recognition. The experimental results show that the proposed model can effectively recognize accelerated parallel loops, and using its recognition results to generate parallel code can significantly improve the performance of parallel programs on heterogeneous multi-core processors.
    Chip layer assignment method for analytical placement of 3D ICs
    GAO Wenchao ZHOU Qiang QIAN Xu CAI Yici
    2013, 33(06):  1548-1552.  DOI: 10.3724/SP.J.1087.2013.01548
    Chip layer assignment is a key step in the analytical placement of 3D Integrated Circuits (ICs). Through layer assignment, analytical placement converts the continuous 3D space in the z-direction into several connected 2D chip layers; however, layer assignment may destroy the previous optimal solution found in the continuous 3D space. To realize a smooth transition from an optimal 3D placement to a legalized, layer-assigned placement, a layer assignment method based on minimum cost flow was proposed, which protected the solution space and inherited the optimal wirelength as much as possible. The method was embedded in a multilevel non-linear 3D IC placer that minimizes the weighted sum of total wirelength and Through-Silicon Via (TSV) number subject to area density constraints. The proposed placement algorithm achieves better wirelength, TSV number and run time in comparison with recent 3D placement methods.
    Algorithm for discovering influential nodes in weighted social networks
    HAN Zhongming YUAN Liling YANG Weijie WAN Yueliang
    2013, 33(06):  1553-1562.  DOI: 10.3724/SP.J.1087.2013.01553
    Key node discovery is very important for social networks. Most existing key node discovery methods do not take the relationship strength between nodes into account; however, social networks are in essence weighted networks because relationship strengths differ. In this paper, a new method to compute the relationship strength of nodes based on node interactions was proposed, which combined local features with global features. A node activity degree based on user behavior features was also defined; as a result, social networks were represented as dual-weighted networks, with relationship strength as edge weight and node activity as node weight. Based on the PageRank algorithm, two improved algorithms were proposed, in which the node weights were used as the damping coefficient and the edge weights were used to compute the importance of nodes during iteration. Two datasets from different sources were selected and comprehensive experiments were conducted. The experimental results show that the proposed algorithms can effectively discover key nodes in real social networks.
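    A rough sketch of the dual-weighted PageRank idea described above is given below (my own simplification, not the paper's algorithm): node activity replaces the global damping factor and edge strengths steer the walk. The function name, toy matrix and activity values are illustrative assumptions.

```python
# A PageRank variant with per-node damping (node activity) and weighted edges.
import numpy as np

def dual_weighted_pagerank(W, activity, iters=100, tol=1e-8):
    """W[i, j]: relationship strength of edge i -> j; activity[i] in (0, 1)."""
    n = W.shape[0]
    out_strength = W.sum(axis=1, keepdims=True)
    out_strength[out_strength == 0] = 1.0            # avoid division by zero for sinks
    P = W / out_strength                             # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        # per-node damping: a node's activity controls how much rank it keeps
        # from its in-links versus the uniform teleport term
        r_new = (1.0 - activity) / n + activity * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r / r.sum()

W = np.array([[0, 3, 1], [2, 0, 0], [0, 4, 0]], dtype=float)   # toy interaction strengths
activity = np.array([0.9, 0.6, 0.8])                            # toy node activity degrees
print(dual_weighted_pagerank(W, activity))
```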
    Pollution detection model in microblogging
    SHI Lei DAI Linna WEI Lin TAO Yongcai CAO Yangjie
    2013, 33(06):  1558-1562.  DOI: 10.3724/SP.J.1087.2013.01558
    The high speed of information propagation exacerbates the diffusion of rumors and other network pollution in microblogging. As the number of microbloggers and the amount of information in microblogging sub-networks are enormous, studying the propagation mechanism of microblogging pollution and its detection is very significant. Based on a rumor spreading model for microblogging established on the influence of users, an ant colony algorithm was used in this paper to search for the rumor spreading route. Using data obtained from Twitter and Sina microblogging, the feasibility of the model was verified by comparison and analysis. The results show that, by searching from the affected individuals, this model narrows down the pollution detection range and improves the efficiency and accuracy of pollution management in microblogging.
    Discrete free search algorithm
    GUO Xin SUN Lijie LI Guangming JIANG Kaizhong
    2013, 33(06):  1563-1570.  DOI: 10.3724/SP.J.1087.2013.01563
    A free search algorithm was proposed for the discrete optimization problem. However, solutions obtained directly from the free search algorithm often exhibit the crossover phenomenon. Therefore, a free search algorithm combined with cross elimination was put forward, which not only greatly improved the convergence rate of the search process but also enhanced the quality of the results. The experimental results on Traveling Salesman Problem (TSP) benchmark data show that the performance of the proposed algorithm is about 1.6% better than that of the genetic algorithm.
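    The cross-elimination step can be illustrated by a standard 2-opt pass (a generic sketch under my own assumptions, not the paper's exact operator): whenever two tour edges cross, reversing the segment between them shortens the tour.

```python
# An illustrative 2-opt "cross elimination" pass for a TSP tour.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, pts):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]]) for i in range(len(tour)))

def eliminate_crossings(tour, pts):
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):
                a, b = pts[tour[i]], pts[tour[i + 1]]
                c, d = pts[tour[j]], pts[tour[(j + 1) % n]]
                # if swapping the two edges shortens the tour, the edges crossed
                if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d) - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

pts = [(0, 0), (1, 1), (1, 0), (0, 1)]        # tour 0-1-2-3 crosses itself
tour = eliminate_crossings([0, 1, 2, 3], pts)
print(tour, tour_length(tour, pts))           # [0, 2, 1, 3], length 4.0
```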
    Binary cuckoo search algorithm
    FENG Dengke RUAN Qi DU Limin
    2013, 33(06):  1566-1570.  DOI: 10.3724/SP.J.1087.2013.01566
    In order to find a new algorithm for NP-complete problems, the efficient Cuckoo Search (CS) algorithm was extended into a Binary Cuckoo Search (BCS) algorithm. Binary coding strings were used to express the position of a bird's nest; the Lévy-flight path along which cuckoos search for new nests was transformed into binary coding according to Kennedy and Eberhart's formula and Liu Jianghua's formula respectively; a binary coding control factor was then introduced for the hybrid update of the transformed binary coding, and the elimination mechanism of cuckoo eggs was retained. The BCS algorithm performs better than the genetic algorithm and some hybrid genetic algorithms in solving the knapsack problem, and also better than the genetic algorithm, ant colony optimization and particle swarm optimization in solving the traveling salesman problem, but slightly worse than an improved particle swarm optimization that adjusts the inertia weights adaptively. BCS is thus a new efficient algorithm for NP-complete problems.
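    A hedged sketch of the binarization step is shown below: it uses a common sigmoid mapping, whereas the paper's hybrid update with a binary coding control factor is more elaborate, and the Cauchy step here is only a stand-in for a true Lévy flight.

```python
# Mapping a real-valued search position to a 0/1 nest via a sigmoid (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def binarize(position):
    """Turn a real-valued flight position into a binary nest string."""
    prob = 1.0 / (1.0 + np.exp(-position))      # per-bit probability of a 1
    return (rng.random(position.shape) < prob).astype(int)

levy_step = rng.standard_cauchy(10)             # heavy-tailed step as a Lévy-like proxy
print(binarize(levy_step))
```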
    Convergence analysis of general evolutionary algorithms
    PENG Fuming YAO Min BAI Shunke
    2013, 33(06):  1571-1573.  DOI: 10.3724/SP.J.1087.2013.01571
    Traditional research on the convergence of Evolutionary Algorithms (EA) focuses on specific algorithms, so the conclusions are only suitable for those algorithms. In order to study the convergence of all EAs, this paper presented a general EA covering all operator types. A probability space was set up to study the algorithm's convergence, all terms of the algorithm were strictly defined in mathematical language, and seven theorems related to the algorithm's convergence were completely proved in the probability space. One of the theorems gives the sufficient and necessary conditions for the algorithm's convergence in probability. More importantly, these theorems apply to all types of EAs. A system composed of these theorems was established, which can be used to guide EA design and judge the theoretical correctness of an EA.
    Artificial intelligence
    Survey of text sentiment analysis
    YANG Ligong ZHU Jian TANG Shiping
    2013, 33(06):  1574-1607.  DOI: 10.3724/SP.J.1087.2013.01574
    This survey summarized the studies on text sentiment analysis from the view of granularity in the following five aspects: sentiment word extraction, sentiment corpus and dictionary construction, entity and opinion holder analysis, document-level sentiment analysis, and applications of text sentiment analysis. It pointed out that current sentiment analysis systems cannot attain high precision. Further research should focus on: widely and appropriately applying achievements of natural language processing to text sentiment analysis; finding and choosing suitable features and algorithms for text sentiment classification; and utilizing existing language tools and relevant resources to quickly build standard language tools and resources and apply them.
    Improved model for similarity computation of ontology concept
    YAO Jiamin YANG Sichun
    2013, 33(06):  1579-1586.  DOI: 10.3724/SP.J.1087.2013.01579
    Ontology alignment is a good solution to the problem of ontology heterogeneity on the semantic Web, and its core is calculating the similarity of ontology concepts. Since the accuracy and precision of existing similarity computation methods are not high, an improved model for similarity computation of ontology concepts was proposed. First, a formal context and its corresponding concept lattice were established by using the partial order relation between ontology features; then the meet-irreducible elements of concepts were obtained at the structural level, and the similarity of concepts was calculated through the quantified semantic relationships of the elements in the set. The case study and analytical results show that the improved model achieves a higher F-score than other methods.
    Classification of Chinese time expressions based on dependency parsing
    XIAO Sheng HE Yanxiang LI Yongfan
    2013, 33(06):  1582-1586.  DOI: 10.3724/SP.J.1087.2013.01582
    Some Chinese time expressions consisting of "cardinal + time unit word" may be time point expressions or time slot expressions in different contexts. An approach for classifying Chinese time expressions based on dependency parsing was proposed for the purpose of automatic classification. First, some syntactic constraints of Chinese time expressions in sentences were found with the help of dependency parsing; then computable dependency rules were extracted from those syntactic constraints; finally, the classification of Chinese time expressions was executed using the dependency rules. The experimental results show that the precision, recall and F-measure of recognition are 82.3%, 88.1% and 85.1% respectively, and those of classification are 77.1%, 82.5% and 79.7%.
    Short text classification using latent Dirichlet allocation
    ZHANG Zhifei MIAO Duoqian GAO Can
    2013, 33(06):  1587-1590.  DOI: 10.3724/SP.J.1087.2013.01587
    In order to solve the two key problems of short text classification, very sparse features and strong context dependency, a new method based on Latent Dirichlet Allocation (LDA) was proposed. The generated topics not only discriminate the contexts of common words and decrease their weights, but also reduce sparsity by connecting distinguishing words and increasing their weights. In addition, a short text dataset was constructed by crawling titles of Netease pages, and experiments were conducted by classifying these short titles with K-nearest neighbors. The proposed method outperforms the vector space model and topic-based similarity.
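    A minimal scikit-learn sketch of such a pipeline, classifying short titles by their LDA topic distributions with K-nearest neighbors, is given below; the toy titles, labels and parameter values are illustrative assumptions, not the paper's data or settings.

```python
# Short-text classification on LDA topic features (illustrative sketch only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

titles = ["stock market falls sharply", "team wins championship final",
          "central bank cuts interest rate", "star striker signs new contract"]
labels = ["finance", "sports", "finance", "sports"]     # toy labels

counts = CountVectorizer().fit_transform(titles)        # sparse word counts
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

knn = KNeighborsClassifier(n_neighbors=1).fit(topics, labels)
print(knn.predict(topics[:1]))                          # classify a title by its topic mixture
```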
    Chinese comparative sentences recognition based on associated feature vocabulary
    DU Wentao LIU Peiyu FEI Shaodong ZHANG Zhen
    2013, 33(06):  1591-1594.  DOI: 10.3724/SP.J.1087.2013.01591
    Chinese comparative sentences have received much attention in linguistics, but using machine learning methods to identify them has only just started. Based on the principle of association rule mining, a method for recognizing comparative sentences based on an associated feature vocabulary was proposed. The method regarded words and parts of speech as basic elements, defined the connection between comparative core words and their dependent words, and used a Support Vector Machine (SVM) classifier to identify comparative sentences. The experimental results show that this method can effectively identify Chinese comparative sentences and achieves good precision, recall and F-measure.
    Bayesian network structure learning algorithm based on topological order and quantum genetic algorithm
    ZHAO Xuewu LIU Guangliang CHENG Xindang JI Junzhong
    2013, 33(06):  1595-1603.  DOI: 10.3724/SP.J.1087.2013.01595
    Bayesian networks are one of the most important theoretical models for representing and reasoning about uncertainty, and their structure learning has become a research focus. In this paper, a Bayesian network structure learning algorithm based on topological order and a quantum genetic algorithm was developed. Exploiting the richness of quantum information and the parallelism of quantum computation, a topological order generation strategy based on quantum chromosomes was designed to improve not only the efficiency of the search but also the quality of the learned Bayesian network structure. Then, by using a self-adaptive quantum mutation strategy with upper and lower limits, the diversity of the population was increased, so that the search performance of the new algorithm was improved. The experimental results show that, compared with some existing algorithms, the new algorithm not only finds higher quality Bayesian network structures but also converges more quickly.
    Model selection of extreme learning machine based on latent feature space
    MAO Wentao ZHAO Zhongtang HE Huanhuan
    2013, 33(06):  1600-1603.  DOI: 10.3724/SP.J.1087.2013.01600
    Recently, the Extreme Learning Machine (ELM) has been a promising tool for a wide range of classification and regression problems. However, the generalization performance of ELM decreases when redundant hidden neurons exist. To solve this problem, this paper introduced a new regularizer, the Frobenius norm of the mapping matrix from the hidden space to a new latent feature space. Furthermore, an alternating optimization strategy was adopted to learn the above regularization problem and the latent feature space. The proposed algorithm was tested empirically on classical UCI data sets as well as a load identification engineering data set. The experimental results show that the proposed algorithm obviously outperforms classical ELM in terms of predictive precision and numerical stability, and needs much less computational cost than existing ELM model selection algorithms.
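    For reference, a bare-bones ELM regressor with a simple ridge-style penalty on the output weights is sketched below (my own simplification; the paper's latent-feature-space regularizer and alternating optimization are more elaborate, and all names and parameters here are illustrative):

```python
# A minimal ELM: random hidden layer + regularised least squares for output weights.
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, hidden=50, lam=1e-2):
    W = rng.normal(size=(X.shape[1], hidden))      # random input weights (never trained)
    b = rng.normal(size=hidden)                    # random hidden biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + lam * np.eye(hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))   # training MSE of the sketch
```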
    Trajectory data generalization based on local multi-hierarchy grid
    YANG Guang ZHANG Lei LI Fan
    2013, 33(06):  1604-1607.  DOI: 10.3724/SP.J.1087.2013.01604
    In current trajectory data generalization methods, the scope of the generalized regions cannot be controlled effectively and the grid parameters can hardly be selected reasonably. This paper proposed a Local Multi-hierarchy Grid (LMG) method, in which regions with dense trajectory points are divided iteratively. A trajectory data generalization method named TRAGenLMG was then proposed based on LMG: a time constraint was used to merge adjoining grids, and the generalized trajectory was finally obtained. The experiments on a real open dataset show that the generalized trajectories generated by TRAGenLMG maintain the temporal features of the trajectory data well and can be efficiently applied to further data analysis.
    Information security
    Privacy protection disturbance method for social networks based on spectrum constraint and sensitive area division
    WANG Xiaohao GENG Hui CHEN Tieming
    2013, 33(06):  1608-1614.  DOI: 10.3724/SP.J.1087.2013.01608
    To resist attacks that identify sensitive edges using the neighborhood information of social individuals as background knowledge, a random disturbance method based on spectrum constraint and sensitive area division was proposed. The main idea is to divide the network into a sensitive zone and a non-sensitive zone. The method compared the spectrum of the disturbed social network graph with that of the original graph, and chose suitable edges to add, delete or switch based on the comparison results and spectrum constraints, thus improving the usability of the social network data. The method also improves the degree of privacy protection by eliminating invalid disturbances. The experimental results show that it preserves the structural characteristics of the social network better.
    Trust model based on weight factor in P2P network
    CHEN Shanshan
    2013, 33(06):  1612-1614.  DOI: 10.3724/SP.J.1087.2013.01612
    Concerning the inherent security issues of Peer-to-Peer (P2P) networks, a trust model based on direct transactions and recommendations was proposed. Parameters of direct transaction information, the rating confidence degree of recommendation information and a dynamic balance weight were applied in the model. The model described a peer's integrated trust simply and accurately and established a trust relationship with the target peer before a transaction, which can restrain malicious peers from malicious behavior and rating cheating against other peers, and improve the security of network trading.
    null
    ZHANG Sijie BAI Qing SU Yang
    2013, 33(06):  1615-1618.  DOI: 10.3724/SP.J.1087.2013.01615
    null
    Security improvement on LAOR routing protocol
    ZHOU Xing LIU Jun DONG Chundong ZHANG Yujing
    2013, 33(06):  1619-1629.  DOI: 10.3724/SP.J.1087.2013.01619
    This paper examined the common routing protocol threats in MANET and analyzed the properties of satellite networks to obtain the possible security threats to Location-Assisted On-demand Routing (LAOR) and the measures needed to make the protocol safer. Identity-based cryptography was used mainly to realize mutual authentication between nodes and to protect the integrity of routing control packets through signatures using each node's private key. Finally, the strand space model was used to analyze the improved routing protocol and prove that it satisfies plausible routing and is secure.
    Multilevel reversible information hiding algorithm for multispectral images
    FANG Hai ZHOU Quan
    2013, 33(06):  1622-1645.  DOI: 10.3724/SP.J.1087.2013.01622
    Because the existing reversible information hiding algorithms have low embedding capacity and are not suitable for multispectral images, a multilevel reversible information hiding method for multispectral images based on histogram shifting of prediction errors and band re-ordering was proposed. For each hiding level, the bands were first re-ordered by the proposed approach; then an adaptive predictor was used to exploit the spatial and spectral correlation of the multispectral image; finally, the secret information was embedded by a histogram shifting mechanism based on prediction errors. The experimental results on multispectral images acquired by Landsat show that the proposed method achieves better visual quality and higher hiding capacity than representative methods.
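    A toy one-dimensional histogram-shifting embedder on prediction errors is sketched below; it is a deliberate simplification of the multi-band, multi-level scheme described above, and the predictor, peak bin and payload handling are illustrative assumptions.

```python
# Reversible embedding in the peak bin (e == 0) of left-neighbour prediction errors.
import numpy as np

def embed(row, bits):
    row = row.astype(np.int32)
    out, k = row.copy(), 0
    for i in range(1, len(row)):
        e = row[i] - row[i - 1]                 # predict each sample by its left neighbour
        if e > 0:
            e += 1                              # shift the positive half of the histogram
        elif e == 0 and k < len(bits):
            e += bits[k]                        # embed one bit in the peak bin (e == 0)
            k += 1
        out[i] = row[i - 1] + e
    return out

def extract(stego):
    stego = stego.astype(np.int32)
    rec, bits = stego.copy(), []
    for i in range(1, len(stego)):
        e = stego[i] - rec[i - 1]
        if e in (0, 1):
            bits.append(e)                      # recovered payload bit
            e = 0
        elif e > 1:
            e -= 1                              # undo the shift
        rec[i] = rec[i - 1] + e
    # in practice the payload length is transmitted so extraction stops in time
    return rec, bits

row = np.array([100, 100, 101, 101, 101, 103, 100])
stego = embed(row, [1, 0, 1])
rec, bits = extract(stego)
print(stego, rec, bits)      # rec equals the original row; bits == [1, 0, 1]
```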
    Image watermarking algorithm based on quaternion and singular value decomposition
    CHEN Shanxue FENG Yinbo
    2013, 33(06):  1626-1629.  DOI: 10.3724/SP.J.1087.2013.01626
    A new method for embedding watermarks in color images combining Quaternion Discrete Cosine Transform (QDCT) and Singular Value Decomposition (SVD) was proposed. First, the binary watermark was preprocessed with Arnold scrambling, and the color image was processed with block QDCT and SVD using quaternion theory. Then, a number of image blocks were selected at random by Logistic mapping to embed the watermark. The experimental results show that this method is strongly resistant to JPEG compression and has good robustness against various kinds of noise and filtering.
    Improved mandatory access control model for Android
    JIANG Shaolin WANG Jinshuang YU Han ZHANG Tao CHEN Rong
    2013, 33(06):  1630-1636.  DOI: 10.3724/SP.J.1087.2013.01630
    In order to protect Android platforms from application-level privilege escalation attacks, this paper analyzed the XManDroid access control model, which defends well against these attacks, especially collusion attacks over covert channels. To address the problem that XManDroid cannot detect collusion attacks involving multiple applications and multiple permissions, an improved mandatory access control model was proposed, which records the communication history of applications by building a colored IPC link diagram. The test results on a prototype system show that the new model solves this problem of XManDroid well.
    Security scheme of XML database service using improved polyphonic splitting
    YANG Gang CHEN Yue HUANG Huixin YU Zhe
    2013, 33(06):  1637-1641.  DOI: 10.3724/SP.J.1087.2013.01637
    Securely outsourcing a data owner's data to a Database Service Provider (DSP) provides XML database services for companies and organizations, which is an important data service form in cloud computing. This paper proposed an improved polyphonic splitting scheme for XML database services (IPSS-XML). IPSS-XML overcomes the low verification efficiency of existing schemes by adding Assistant Verifying Data (AVD) to each non-leaf node at low cost. The improvement enhances query execution efficiency without breaking the confidentiality constraints.
    Value-at-risk quantification method for cryptographic chips under differential power analysis attacks
    XU Kaiyong FANG Ming YANG Tianchi MENG Fanwei HUANG Huixin
    2013, 33(06):  1642-1645.  DOI: 10.3724/SP.J.1087.2013.01642
    Based on the principle and characteristics of the Differential Power Analysis (DPA) attack, a kernel function was used to estimate the probability density of the power consumption leaked while the cryptographic chip works. By calculating the mutual information between the attack model and the power leakage when the guessed key is correct, this paper quantified the value at risk of the cryptographic chip in the face of DPA attacks. The experiments show that the proposed method gives a good estimate of the correlation between the attack model and the power leakage when the guessed key is correct, and thus provides an important indicator for the risk evaluation of cryptographic chips.
    Research and design of trusted cryptography module driver based on unified extensible firmware interface
    ZHU Hexin WANG Zhengpeng LIU Yehui FANG Shuiping
    2013, 33(06):  1646-1649.  DOI: 10.3724/SP.J.1087.2013.01646
    To extend the application range of the Trusted Cryptography Module (TCM) and promote the safety and credibility of terminal machines and cloud platforms, this paper analyzed the status quo and trend of TCM firmware, proposed a TCM firmware driver framework based on the Unified Extensible Firmware Interface (UEFI), and designed the low-level driver interface and core protocol based on this framework. The TCM driver adopts a modular design and layered implementation, packages the TCM protocol and registers it with the UEFI firmware system, and completes low-level data sending and receiving as well as protocol encapsulation. The conformance, functional and pressure tests of the TCM firmware driver indicate the high accuracy and effectiveness of this design, and its industrial application also illustrates the feasibility of the driver.
    Network and distributed technology
    Interface design of heterogeneous workflow interconnection based on Web service
    TANG Di SUN Ruizhi XIANG Yong YUAN Gang
    2013, 33(06):  1650-1712.  DOI: 10.3724/SP.J.1087.2013.01650
    In order to achieve complementary advantages and information sharing among heterogeneous workflow systems of different enterprises, and considering that workflows are heterogeneous and distributed, an interface design for interconnecting heterogeneous workflow processes based on Web services was proposed. For the interconnection of heterogeneous processes, the solution was described in terms of the call interface, call mode and call return respectively. Taking as an example a SynchroFlow workflow process described by XPDL and an ODE (Open Dynamic Engine) workflow process described by BPEL, process calls between the workflows were achieved.
    Middleware design for high-speed railway integrated dispatching system based on SCA and SDO
    LUO Qiang WANG Qian LIU Fanglin FAN Ruijuan
    2013, 33(06):  1654-1669.  DOI: 10.3724/SP.J.1087.2013.01654
    In order to solve the system integration problems of the high-speed railway integrated dispatching system in a highly distributed, highly heterogeneous environment, a system integration framework based on Service-Oriented Architecture (SOA) was proposed. The structure of the high-speed railway integrated dispatching system and its distributed SOA application were constructed based on Service Component Architecture (SCA) and Service Data Object (SDO) technology. The integration of the power dispatching subsystem with other scheduling subsystems was achieved using SCA and SDO on a Java EE platform. The method fully embodies the openness and cross-platform features of SOA and is easy to implement.
    Research on metamorphic testing of slope and aspect calculating programs
    HUANG SONG DING Ruihao LI Hui YAO Yi
    2013, 33(06):  1657-1745.  DOI: 10.3724/SP.J.1087.2013.01657
    Slope and aspect calculation is a basic function of Geographic Information Systems (GIS). However, due to rounding and truncation errors, it is difficult to obtain test oracles for its implementation. In order to provide test oracles, this paper applied metamorphic testing to slope and aspect calculating programs, and designed several metamorphic relations by analyzing their geometric and numerical features and algorithms. It also analyzed the applicable range of these metamorphic relations, and formed a metamorphic testing method for slope and aspect calculating programs whose usefulness was demonstrated by mutation testing. This research can serve as a reference for alleviating the oracle problem of other kinds of programs in GIS, and also helps to promote the development of metamorphic testing.
    Curvature estimation for scattered point cloud data
    ZHANG Fan KANG Baosheng ZHAO Jiandong LI Juan
    2013, 33(06):  1662-1681.  DOI: 10.3724/SP.J.1087.2013.01662
    To resolve the problem of curvature calculation for scattered point cloud data with strong noise, a robust statistical approach to curvature estimation was presented. First, the local shape at a sample point in 3D space was fitted by a quadratic surface. The fitting was performed multiple times with randomly sampled subsets of points, and the best fitting result, evaluated by a variable-bandwidth maximum kernel density estimator, was obtained. Finally, the sample point was projected onto the best fitted surface and the curvatures of the projected point were estimated. The experimental results demonstrate that the proposed method is robust to noise and outliers; especially as the noise variance increases, the proposed method is significantly better than the traditional parabolic fitting method.
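    A simplified robust fit is sketched below (my own approximation: candidate quadric fits on random neighbour subsets are scored by their median residual rather than by the paper's variable-bandwidth kernel density estimator; all names and parameters are illustrative):

```python
# Fit z = ax^2 + bxy + cy^2 + dx + ey + f on random subsets, keep the best fit,
# and read Gaussian (K) and mean (H) curvature at the local origin.
import numpy as np

rng = np.random.default_rng(0)

def robust_quadric_curvature(nbrs, trials=50, subset=8):
    A = np.column_stack([nbrs[:, 0]**2, nbrs[:, 0]*nbrs[:, 1], nbrs[:, 1]**2,
                         nbrs[:, 0], nbrs[:, 1], np.ones(len(nbrs))])
    best, best_score = None, np.inf
    for _ in range(trials):
        idx = rng.choice(len(nbrs), subset, replace=False)
        coef, *_ = np.linalg.lstsq(A[idx], nbrs[idx, 2], rcond=None)
        score = np.median(np.abs(A @ coef - nbrs[:, 2]))   # residuals on all neighbours
        if score < best_score:
            best, best_score = coef, score
    a, b, c, d, e, _ = best
    denom = 1.0 + d**2 + e**2
    K = (4*a*c - b**2) / denom**2
    H = (a*(1 + e**2) - b*d*e + c*(1 + d**2)) / denom**1.5
    return K, H

# Noisy samples of a unit-sphere cap: expect K close to 1, |H| close to 1.
xy = rng.uniform(-0.3, 0.3, size=(200, 2))
z = np.sqrt(1 - xy[:, 0]**2 - xy[:, 1]**2) + 0.005 * rng.normal(size=200)
print(robust_quadric_curvature(np.column_stack([xy, z])))
```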
    Multi-feature suitability analysis of matching area based on D-S theory
    CHEN Xueling ZHAO Chunhui LI Yaojun CHENG Yongmei
    2013, 33(06):  1665-1669.  DOI: 10.3724/SP.J.1087.2013.01665
    The suitability analysis of matching areas plays a significant role in vision-based navigation, but many feature indexes can only unilaterally describe the suitability of a matching area. An algorithm was proposed to integrate several feature indexes, solve conflicts among them, and provide a method to measure the suitable confidence and unsuitable confidence of a feature. The confidences were then fused by using the Dempster-Shafer (D-S) rules. Finally, the algorithm was verified by simulation experiments.
    Image object detection based on local feature and sparse representation
    TIAN Yuanrong TIAN Song XU Yuelei ZHA Yufei
    2013, 33(06):  1670-1673.  DOI: 10.3724/SP.J.1087.2013.01670
    Traditional image object detection algorithms based on local features are sensitive to rotation and occlusion, and they also obtain low detection precision and speed in many cases. In order to improve performance, a new image object detection method applying objects' local features to sparse representation theory was introduced. A dictionary was formed by learning local features of sample images with a supervised random tree method. Combining sub-image blocks of the test image with the trained dictionary predicts the location of the object in the test image, yielding a sparse representation of the test image and accomplishing object detection. The experimental results demonstrate that the proposed method achieves robust detection under rotation, occlusion and intricate backgrounds, and obtains higher detection precision and speed.
    Image retrieval based on color and motif characteristics
    YU Sheng XIE Li CHENG Yun
    2013, 33(06):  1674-1708.  DOI: 10.3724/SP.J.1087.2013.01674
    In order to improve image retrieval performance, this paper proposed a new image retrieval algorithm based on motif and color features. The edge gradient of the color image was detected, and a motif image was obtained through an edge gradient image transform. Taking the gravity center of the motif image as the datum point, the distances from all points to the datum point were calculated to obtain the motif center distance histogram. All motifs of the motif image were projected in four different directions to obtain the motif projection histogram. The color image was uniformly quantized from RGB space into a 64-color space to obtain the color histogram. These three histograms describe the image features used for retrieval. The experimental results show that the algorithm achieves high precision and recall.
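    The 64-color histogram feature can be sketched as follows (a minimal illustration assuming uniform quantization to 4 levels per RGB channel; the motif histograms are omitted and the function name is mine):

```python
# Uniform quantisation of an RGB image into 64 colours and its normalised histogram.
import numpy as np

def colour_histogram_64(img):
    """img: H x W x 3 uint8 array; returns a normalised 64-bin colour histogram."""
    q = img // 64                                        # 4 levels (0..3) per channel
    codes = q[..., 0] * 16 + q[..., 1] * 4 + q[..., 2]   # one code in 0..63 per pixel
    hist = np.bincount(codes.ravel(), minlength=64).astype(float)
    return hist / hist.sum()

img = np.random.default_rng(0).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(colour_histogram_64(img).round(3))
```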
    Feature extraction based on collaborative representation and fuzzy progressive maximal marginal embedding
    SU Baoli
    2013, 33(06):  1677-1681.  DOI: 10.3724/SP.J.1087.2013.01677
    In constructing the neighborhood graph, traditional graph-embedding algorithms adopt a simple two-value hard classification criterion. Concerning this problem, and drawing on fuzzy mathematics, a new fuzzy progressive neighbor graph was proposed in this paper. Furthermore, since collaborative representation classifies patterns by employing all the training images to represent the query image collaboratively, collaborative representation was introduced into the classifier. On this basis, a feature extraction algorithm based on collaborative representation and fuzzy progressive maximal marginal embedding was proposed for face recognition. The experimental results on the ORL and AR face databases and the USPS handwritten digit database show that the proposed algorithm outperforms Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projections (LPP) and Marginal Fisher Analysis (MFA).
    Improved foreground detection based on statistical model
    QIANG Zhenping LIU Hui SHANG Zhenhong CHEN Xu
    2013, 33(06):  1682-1694.  DOI: 10.3724/SP.J.1087.2013.01682
    The main idea of this paper is to improve the foreground detection method based on a statistical model. On one hand, the historical maximum probability that a feature vector belongs to the background is recorded in the background model, which improves the updating speed of matched vectors and blends them into the background quickly. On the other hand, a spatial feature matching method is proposed to reduce the shadow effect in foreground detection. The experimental results show that, compared with the MoG method and Li's statistical model method, the proposed method achieves obvious improvement in shadow removal and in restoring background occluded by large target objects.
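    For orientation, a generic per-pixel Gaussian background model is sketched below; it is much simpler than the paper's model and only shows the basic statistical-matching idea, with thresholds, learning rate and class names as illustrative assumptions.

```python
# Per-pixel Gaussian background model: pixels far from the running mean are foreground.
import numpy as np

class GaussianBackground:
    def __init__(self, first_frame, alpha=0.05):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 25.0)   # initial variance guess
        self.alpha = alpha                            # learning rate

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > 9.0 * self.var                      # beyond 3 sigma -> foreground
        upd = ~fg                                     # update only background-matched pixels
        self.mean[upd] += self.alpha * (frame - self.mean)[upd]
        self.var[upd] += self.alpha * (d2 - self.var)[upd]
        return fg

rng = np.random.default_rng(0)
bg = rng.integers(90, 110, size=(4, 4))               # a toy static background
model = GaussianBackground(bg)
frame = bg.copy(); frame[1, 1] = 200                  # one "moving object" pixel
print(model.apply(frame).astype(int))                 # only position (1, 1) is flagged
```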
    Algorithm of point pattern matching based on quasi Laplacian spectrum and point pair topological characteristics
    ZHANG Guanliang ZOU Huanxin LU Chunyan ZHAO Jian
    2013, 33(06):  1686-1690.  DOI: 10.3724/SP.J.1087.2013.01686
    Concerning the poor robustness of state-of-the-art spectrum-based algorithms when outliers and noise exist, a new and robust point pattern matching algorithm based on a Quasi-Laplacian spectrum and Point Pair Topological Characteristics (QL-PPTC) was proposed. First, a signless Laplacian matrix was constructed by using the minimal spanning tree of the weighted graph, and the eigenvalues and eigenvectors obtained from its spectral decomposition were used to represent each point's features, making it possible to calculate the matching probability. Second, the similarity measurement of point pair topological characteristics was computed to define the local compatibility between point pairs, and correct matching results were achieved by probabilistic relaxation. The comparative experimental results show that the proposed algorithm is robust when outliers and noise exist in point matching.
    Straight line extraction via phase-grouping method based on adaptive partitioning
    HAN Dan SONG Weidong WANG Jingxue
    2013, 33(06):  1691-1694.  DOI: 10.3724/SP.J.1087.2013.01691
    Concerning the poor anti-noise performance and the limitations of the fixed partitioning of the phase-grouping method, an adaptive phase partitioning method was proposed. First, adaptive smoothing and Wallis filtering were used for image pre-processing and the Canny method was adopted for edge detection; second, the optimal partitioning of the image was selected based on the adaptive phase partitioning method to create the line support regions; finally, the lines were fitted and merged by the least squares method. The experimental results show that the proposed method can effectively solve the line breakage and mistaken extraction caused by noise and fixed partitioning, and extracts lines completely and precisely.
    Algorithm of edge extraction in intensively noisy log-polar space
    WEN Pengcheng ZHANG Yadi WANG Xiangjun
    2013, 33(06):  1695-1700.  DOI: 10.3724/SP.J.1087.2013.01695
    Accurate extraction of a target's edge in log-polar space is a precondition and key point for successfully applying the visual invariance of the log-polar transformation. Since traditional algorithms cannot extract the single-pixel edge in an intensively noisy environment, an edge extraction algorithm based on the active contour model and the level set method was designed. After overall noise removal via a Canny-operator-based level set method, the energy-driven active contour model was used to iteratively approach the potential edges. By clearing out false edges with an improved tracing method, the true target edge was finally extracted. The experimental results demonstrate the effectiveness of the proposed algorithm, with edge feature similarity up to 96%.
    Lossless image coding method with resolution scalable code-stream
    LI Shigao QIN Qianqing
    2013, 33(06):  1697-1700.  DOI: 10.3724/SP.J.1087.2013.01697
    This paper proposed a new decomposition scheme for lossless image compression by incorporating edge-directed adaptive prediction into the wavelet lifting scheme. A vertical one-dimensional Discrete Wavelet Transform (1D-DWT) was first applied to the image by means of the lifting scheme. Second, an edge-directed adaptive prediction procedure was applied to the high-frequency sub-band coefficients generated by the previous DWT. Then a similar horizontal decomposition was performed on the low-frequency sub-band generated by the vertical decomposition. A multi-resolution representation was thus acquired by iterating on the produced low-resolution approximation. Unlike the well-known coders CALIC and JPEG-LS, this scheme provides a resolution-scalable code-stream thanks to the DWT. In addition, the experimental results indicate that, due to the edge-directed prediction, this decomposition scheme achieves noticeably better lossless compression performance than JPEG2000, which also supports resolution scalability.
    Human behavior recognition algorithm with space-time topological feature and sparse expression
    HUANG Wenli FAN Yong
    2013, 33(06):  1701-1710.  DOI: 10.3724/SP.J.1087.2013.01701
    Vision-based behavior analysis is one of the important research topics in image processing, pattern recognition, etc., and it has wide application prospects in public security and the military field. For the problems of a fixed camera, such as the insufficiency of a single feature description, motion occlusion, holes and shadows, this paper proposed a behavior recognition algorithm combining space-time topological features with sparse expression. It used random projection to obtain a space-time topological feature of strong cohesion, high distinctiveness and low dimension, which fuses topological structure, geometric invariants and space-time Poisson information. A noise-adding sparse mechanism that solves problems by simulating human perception was combined with it to identify human behaviors in a close-range monitoring scene. The experimental results show that the recognition rate of the space-time topological feature is 12.79% higher than that of a single feature; the recognition rate of the proposed algorithm drops by only 6.15% in a noisy scene, and reaches 87.78% for multiple behaviors. The algorithm has strong space-time feature description ability, high robustness against noise and high efficiency for behavior recognition.
    Fast depth video coding algorithm based on region division
    TIAN Tao PENG Zongju
    2013, 33(06):  1706-1710.  DOI: 10.3724/SP.J.1087.2013.01706
    As the main scheme for 3D scene representation, multiview video plus depth attracts more and more attention. Depth video reflects the geometric information of the scene, so it is important to design fast depth video encoding algorithms. A fast depth video coding algorithm based on region division was proposed. First, the depth video was divided into four regions according to edge and motion features. Then, the macroblock distribution proportion and multi-reference-frame selection characteristics of the different regions were analyzed. Consequently, different macroblock mode decision and reference frame selection methods were utilized to speed up depth video encoding. Finally, experiments were conducted to evaluate the proposed algorithm in terms of encoding time, bit rate and virtual view quality. The experimental results show that the proposed algorithm saves 85.73% to 91.06% of encoding time while maintaining virtual view quality and bit rate.
    Research and implementation of realistic dynamic tree scene
    CUI Xiang JIANG Xiaofeng
    2013, 33(06):  1711-1714.  DOI: 10.3724/SP.J.1087.2013.01711
    Dynamic tree rendering plays an important role in natural scenery simulation. In this paper, the scattering and translucency of leaves were rendered by using the Cook-Torrance lighting model and a pre-computed translucency texture. A polynomial fitted to the tapered circular beam model expression, together with a length-correction method, accelerated the deformation calculation. By introducing an indexed hierarchical branch texture, branch deformation could be computed on the Graphics Processing Unit (GPU). Pre-computation and GPU processing help balance realism and real-time performance in the simulation. The experiments show that the proposed method can render dynamic tree scenes vividly and rapidly.
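    For reference, the sketch below evaluates a scalar Cook-Torrance specular term. The Beckmann microfacet distribution, the classic geometric attenuation term and Schlick's Fresnel approximation are common choices assumed here; the abstract does not say which variants the paper uses, and a real renderer would evaluate this per pixel in a shader.
```python
import numpy as np

def cook_torrance_specular(n, l, v, roughness=0.3, f0=0.04):
    """Scalar Cook-Torrance specular term for unit vectors n (surface normal),
    l (towards the light) and v (towards the viewer)."""
    h = (l + v) / np.linalg.norm(l + v)                  # half vector
    nh = max(float(np.dot(n, h)), 1e-4)
    nl = max(float(np.dot(n, l)), 1e-4)
    nv = max(float(np.dot(n, v)), 1e-4)
    vh = max(float(np.dot(v, h)), 1e-4)
    m2 = roughness ** 2
    d = np.exp((nh * nh - 1.0) / (m2 * nh * nh)) / (np.pi * m2 * nh ** 4)  # Beckmann D
    g = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)                   # geometry term
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                                  # Schlick Fresnel
    return d * g * f / (4.0 * nl * nv)
```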
    Color image quality assessment algorithm based on color structural similarity
    ZHAO Xiuzhi XIE Dehong PAN Kangjun
    2013, 33(06):  1715-1718.  DOI: 10.3724/SP.J.1087.2013.01715
    Asbtract ( )   PDF (651KB) ( )  
    References | Related Articles | Metrics
    Concerning the disadvantages of existing quality assessment algorithms for color images, a new algorithm based on visual structural similarity was proposed. Firstly, the test images were transformed into the selected uniform color space LAB2000HL. Secondly, one luminance Contrast Sensitivity Function (CSF) and two chromatic CSFs were used to filter the images respectively. Thirdly, three structural similarity indexes were computed by the multi-scale structural similarity index measurement (M-SSIM). Lastly, the proposed index was obtained by weighting the three structural similarity indexes according to the different visual sensitivities to luminance and chromaticity. In the experiment, the results on the TID2008 database were compared with subjective visual assessments using the Spearman and Kendall rank-order correlation coefficients. The experimental results show that the proposed algorithm is more consistent with visual assessment and outperforms several other popular image quality assessment algorithms.
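    The sketch below shows the basic structural-similarity computation and the weighted combination over the three channels. It uses global image statistics instead of the windowed, multi-scale version for brevity, and the channel weights are illustrative assumptions rather than the values derived in the paper.
```python
import numpy as np

def global_ssim(a, b, data_range=1.0):
    """Structural similarity from global image statistics: a simplified,
    single-window stand-in for the windowed, multi-scale index."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def color_quality(l_pair, a_pair, b_pair, weights=(0.6, 0.2, 0.2)):
    """Weight the per-channel similarities of the (already CSF-filtered)
    lightness and two chromatic channel pairs; the weights are illustrative."""
    sims = [global_ssim(x, y) for x, y in (l_pair, a_pair, b_pair)]
    return float(np.dot(weights, sims))
```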
    Simplification method of appearance preserved CAD model
    YIN Mingqiang LI Shiqi
    2013, 33(06):  1719-1722.  DOI: 10.3724/SP.J.1087.2013.01719
    Asbtract ( )   PDF (685KB) ( )  
    References | Related Articles | Metrics
    With the development of CAD/CAM technology, product design, virtual manufacturing and digital prototyping can all be done in the computer, which makes the design of large and complex assemblies an essential part of product design. As these assembly models tend to contain a huge amount of data, they are extremely inconvenient to process on ordinary PCs. To speed up processing, large-scale assembly models need to be simplified. On the premise of maintaining the style and facade of the assembly, two simplification methods were proposed: (1) removing invisible parts from the assembly, and (2) removing invisible features from the assembly. The proposed methods were based on an algorithm that directly detects invisible parts or features by pre-rendering the model from multiple view directions and reading the rendered results from the frame buffer. The experimental results show that the proposed methods can correctly remove invisible parts or features from the assembly for simplification.
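    A minimal sketch of the visibility test described above, assuming each candidate part has already been rendered with a unique integer ID from several view directions and the resulting ID buffers have been read back from the frame buffer as arrays; the rendering pass itself is omitted.
```python
import numpy as np

def invisible_parts(id_buffers, all_part_ids, background=0):
    """Given per-view ID buffers (2-D arrays where each pixel stores the id of
    the front-most part, as produced by an ID-coded render pass), return the
    ids of parts never seen from any view direction."""
    visible = set()
    for buf in id_buffers:
        visible.update(np.unique(buf).tolist())
    visible.discard(background)
    return set(all_part_ids) - visible
```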
    Regularized marginal Fisher analysis and sparse representation for face recognition
    HUANG Kekun
    2013, 33(06):  1723-1726.  DOI: 10.3724/SP.J.1087.2013.01723
    Asbtract ( )   PDF (632KB) ( )  
    References | Related Articles | Metrics
    When Marginal Fisher Analysis (MFA) is applied to face recognition, it suffers from the small sample size problem. If principal component analysis is used to deal with this problem, some components useful for classification are lost; if the objective function of MFA is replaced with the maximum margin criterion, it is difficult to find the optimal parameter. Therefore, a regularized MFA method was proposed in this paper. It constructed a regularization term by multiplying the identity matrix by a small number and added it to the within-class scatter matrix, so that the resulting matrix was no longer singular. This method loses no components useful for classification, and its parameter is easy to determine. Because a sample can usually be linearly represented by a few neighbors of the same class, sparse representation classification was applied after the regularized MFA to further improve recognition accuracy. Experiments were carried out on the FERET and AR databases, and the results show that the proposed method significantly improves recognition accuracy compared with several classic dimensionality reduction methods.
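    The regularization step can be summarized by the sketch below, which adds a small multiple of the identity matrix to the within-class (intrinsic graph) scatter and solves the resulting generalized eigenproblem. The matrix names, the regularization constant and the number of components are assumptions for illustration; the scatter matrices stand in for MFA's intrinsic and penalty graph matrices.
```python
import numpy as np
from scipy.linalg import eigh

def regularized_projection(S_within, S_between, reg=1e-3, n_components=50):
    """Solve S_between w = lambda (S_within + reg*I) w and keep the leading
    eigenvectors, so the within-class matrix is never singular."""
    d = S_within.shape[0]
    S_w = S_within + reg * np.eye(d)        # regularization term: small number times identity
    vals, vecs = eigh(S_between, S_w)       # generalized symmetric eigenproblem
    order = np.argsort(vals)[::-1]          # largest eigenvalues first
    return vecs[:, order[:n_components]]
```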
    Medical image fusion based on nonsubsampled Contourlet transform and regional feature
    LI Chao LI Guangyao TAN Yunlan XU Xianglong
    2013, 33(06):  1727-1731.  DOI: 10.3724/SP.J.1087.2013.01727
    Asbtract ( )   PDF (787KB) ( )  
    References | Related Articles | Metrics
    With reference to the multiscale and shift-invariance properties of the Nonsubsampled Contourlet Transform (NSCT), and considering the characteristics of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images, a medical image fusion method was proposed. The proposed method fused the low-frequency sub-band and the high-frequency sub-bands of the medical images separately with a regional-feature strategy. The paper introduced the judgment criteria of image fusion and elaborated on the principle and implementation of the NSCT, and evaluated the fused images both subjectively, by visual effect, and numerically, by information indexes. To evaluate the performance of the proposed algorithm, the results were compared with those of wavelet-transform and Contourlet-transform based algorithms. Simulation results on CT and MRI images of the mandibular system indicate that the proposed method outperforms the others in terms of both visual quality and objective evaluation criteria, and that it integrates and maintains much more effective and detailed information.
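    The regional-feature fusion rule can be illustrated as below, assuming the NSCT sub-bands of the two source images are already available as arrays. The 3x3 regional window, the maximum-regional-energy rule for high-frequency sub-bands and the averaging rule for the low-frequency sub-band are common choices, not necessarily the exact rules of the paper.
```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highpass(sub_a, sub_b, win=3):
    """Choose, pixel by pixel, the high-frequency coefficient whose local
    regional energy is larger (one common regional-feature rule)."""
    e_a = uniform_filter(sub_a ** 2, size=win)
    e_b = uniform_filter(sub_b ** 2, size=win)
    return np.where(e_a >= e_b, sub_a, sub_b)

def fuse_lowpass(sub_a, sub_b):
    """Average the low-frequency approximations (a simple low-band rule)."""
    return 0.5 * (sub_a + sub_b)
```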
    Iterative image reconstruction for differential phase contrast CT based on compressive sensing
    QIN Feng SUN Fengrong SONG Shangling ZHANG Xinping LI Xincai
    2013, 33(06):  1732-1736.  DOI: 10.3724/SP.J.1087.2013.01732
    Asbtract ( )   PDF (823KB) ( )  
    References | Related Articles | Metrics
    X-ray phase contrast Computed Tomography (CT) produces high-contrast images from the alteration of the X-ray phase information that arises after the X-ray passes through the sample; it is highly favorable for imaging light elements and achieves much higher contrast resolution than absorption-contrast CT. Grating-based Differential Phase Contrast CT (DPC-CT) shows great clinical prospects because it can use a conventional X-ray source, but the X-ray radiation dose limits its clinical applications. To address this issue, an image reconstruction method for DPC-CT named DD-L1 was proposed. The algorithm combined Compressive Sensing (CS) theory with an iterative CT reconstruction technique and introduced a distance-driven forward and backward projection strategy. The experimental results show that the DD-L1 algorithm can generate tomographic images of higher quality even when the projection data are incomplete.
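    As a compact stand-in for the DD-L1 iteration, the sketch below runs a proximal-gradient (ISTA) loop that combines a gradient step with soft-thresholding. A dense matrix A plays the role of the distance-driven forward projector and the sparsifying transform is taken as the identity; both are simplifying assumptions, not the paper's exact formulation.
```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def cs_iterate(A, b, x0, n_iter=50, step=None, lam=0.01):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient (ISTA).
    A stands in for the (distance-driven) forward projector of DPC-CT."""
    x = x0.copy()
    if step is None:
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # backprojection of the residual
        x = soft_threshold(x - step * grad, step * lam)
    return x
```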
    New method for multiple sclerosis white matter lesions segmentation
    XIANG Yan HE Jianfeng MA Lei YI Sanli XU Jiaping
    2013, 33(06):  1737-1741.  DOI: 10.3724/SP.J.1087.2013.01737
    Asbtract ( )   PDF (509KB) ( )  
    References | Related Articles | Metrics
    Multiple Sclerosis (MS) is a chronic disease that affects the central nervous system, and MS lesions are visible in conventional Magnetic Resonance Imaging (cMRI). A new method for the automatic segmentation of MS White Matter Lesions (WML) on cMRI was presented, which enables efficient processing of the images. Firstly, Kernel Fuzzy C-Means (KFCM) clustering was applied to the preprocessed T1-weighted (T1-w) image to extract the white matter. Then a region growing algorithm was applied to the white matter image to produce a binary mask. This mask was superimposed on the corresponding T2-weighted (T2-w) image to yield a masked image containing only white matter, lesions and background. KFCM was then reapplied to the masked image to obtain the WML. The test results show that the proposed method segments WML on simulated images with low noise quickly and effectively, with an average Dice similarity coefficient above 80%.
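    A minimal sketch of kernel fuzzy c-means on 1-D intensities with a Gaussian kernel, which is one common KFCM formulation; the kernel width, the fuzzifier m and the random initialization are assumptions, and the paper's exact variant may differ. The kernel width sigma should be set on the same scale as the image intensities.
```python
import numpy as np

def kfcm(data, n_clusters=3, m=2.0, sigma=1.0, n_iter=100, seed=0):
    """Kernel fuzzy c-means with a Gaussian kernel on a 1-D intensity array.
    Returns hard labels and the cluster centres."""
    rng = np.random.default_rng(seed)
    x = data.reshape(-1, 1).astype(np.float64)
    v = x[rng.choice(len(x), n_clusters, replace=False)]      # initial centres
    for _ in range(n_iter):
        k = np.exp(-((x - v.T) ** 2) / (sigma ** 2))          # kernel values, n x c
        dist = np.clip(1.0 - k, 1e-12, None)                  # kernel-induced distance
        u = (1.0 / dist) ** (1.0 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)                     # fuzzy memberships
        w = (u ** m) * k
        v = (w * x).sum(axis=0).reshape(-1, 1) / w.sum(axis=0).reshape(-1, 1)
    return u.argmax(axis=1), v.ravel()
```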
    Fast volume measurement algorithm based on image edge detection
    FENG Yangqin CHEN Fei
    2013, 33(06):  1739-1741.  DOI: 10.3724/SP.J.1087.2013.01739
    Asbtract ( )   PDF (494KB) ( )  
    References | Related Articles | Metrics
    A volume measurement algorithm based on edge detection was introduced in this paper and applied to a portable system for measuring bladder volume. In the system, the three-dimensional ultrasound probe performed the scan over an axial angle of 180 degrees. Twelve axial sections of the bladder with the same scanning interval were obtained and used for edge extraction. The group of bladder edges formed a three-dimensional bladder body, and the integral method of spatial volume was used to calculate the bladder volume. The average error of the algorithm in the clinical diagnosis experiment was within 12%.
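    One simple way to carry out such a spatial volume integral, assuming the twelve sections share the probe's rotation axis and each section contributes a left and a right radius from that axis at every depth, is the wedge summation below. The geometry and discretization are assumptions for illustration; the paper's exact integral formulation is not given in the abstract.
```python
import numpy as np

def wedge_volume(left_radii, right_radii, dz, n_sections=12, sweep=np.pi):
    """Approximate organ volume from n rotationally scanned sections.
    left_radii/right_radii: arrays of shape (n_sections, n_depths) giving the
    distance from the rotation axis to the contour on each side of the axis;
    dz is the axial sampling step."""
    dtheta = sweep / n_sections                 # angular width of each wedge
    l = np.asarray(left_radii, dtype=float)
    r = np.asarray(right_radii, dtype=float)
    # each half-plane sweeps a wedge of volume  integral(r^2 / 2) * dtheta * dz
    return 0.5 * dtheta * dz * float(np.sum(l ** 2 + r ** 2))
```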
    Improved syllable-based acoustic modeling for continuous Chinese speech recognition
    CHAO Hao YANG Zhanlei LIU Wenju
    2013, 33(06):  1742-1745.  DOI: 10.3724/SP.J.1087.2013.01742
    Asbtract ( )   PDF (691KB) ( )  
    References | Related Articles | Metrics
    Concerning the variability of the speech signal caused by the co-articulation phenomenon in Chinese speech recognition, a syllable-based acoustic modeling method was proposed. Firstly, context-independent syllable-based acoustic models were trained, and the models were initialized with intra-syllable Initial/Final (IF) based diphones to alleviate the problem of training data sparsity. Secondly, the inter-syllable co-articulation effect was captured by incorporating inter-syllable transition models into the recognition system. The experiments conducted on the “863-test” dataset show that the character error rate is reduced by 12.13% relative. This indicates that the syllable-based acoustic model and the inter-syllable transition model are effective in handling the co-articulation effect.
    Modified speech enhancement algorithm based on de-correlation variable step size
    WANG Yulin TIAN Xuelong GAO Xueli
    2013, 33(06):  1746-1749.  DOI: 10.3724/SP.J.1087.2013.01746
    Asbtract ( )   PDF (604KB) ( )  
    References | Related Articles | Metrics
    In complex environments, noise seriously degrades the quality of the speech signal and prevents the semantics from being conveyed correctly, so speech enhancement becomes necessary. Traditional techniques suffer from poor adaptability and slow convergence when the input signals are heavily correlated. Therefore, an improved algorithm that unifies the advantages of the Variable Step Size Least Mean Square (VSSLMS) algorithm and de-correlation was proposed to increase the convergence speed by optimizing the step size and the update direction of the adaptive filter's weight vector. To improve the stability of the implementation in embedded systems, a continuous block processing principle was introduced to normalize the weight vector. Simulation tests show that the new algorithm has fast convergence and good tracking of time-varying signals. Noise can be removed effectively from heavily corrupted speech, and speech clarity and intelligibility are improved significantly.
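    A minimal sketch of a variable-step-size, normalized LMS noise canceller in the spirit of the algorithm above. The particular step-size recursion (mu <- alpha*mu + gamma*e^2), the filter order and the two-input (primary/reference) setup are common textbook choices assumed for illustration; the de-correlation and block-processing refinements of the paper are not shown.
```python
import numpy as np

def vss_nlms(d, x, order=32, mu_max=1.0, mu_min=0.01, alpha=0.97, gamma=0.1):
    """Variable-step-size NLMS noise canceller: d is the noisy speech (primary
    input), x the reference noise. Returns the enhanced signal (the error)."""
    w = np.zeros(order)
    mu = mu_max
    e = np.zeros(len(d))
    e[:order] = d[:order]                        # pass the first samples through
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]                 # reference tap vector, newest first
        y = w @ u                                # noise estimate
        e[n] = d[n] - y                          # error = enhanced speech sample
        mu = np.clip(alpha * mu + gamma * e[n] ** 2, mu_min, mu_max)
        w += mu * e[n] * u / (u @ u + 1e-8)      # normalized weight update
    return e
```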
    Typical applications
    Consensus control for a class of heterogeneous multi-Agent systems
    FENG Yuanzhen TU Xiaoming LI Jianzhen
    2013, 33(06):  1750-1758.  DOI: 10.3724/SP.J.1087.2013.01750
    Asbtract ( )   PDF (414KB) ( )  
    References | Related Articles | Metrics
    The consensus problem for a class of heterogeneous multi-Agent systems composed of first-order Agents and second-order Agents was investigated in this paper. First, consensus protocols were proposed for the first-order and second-order Agents respectively. Then, a necessary and sufficient condition was presented for heterogeneous multi-Agent systems with fixed and directed communication topology to reach consensus, using tools from graph theory and matrix theory. Moreover, for the case of fixed communication topology, the final consensus states reached were specified. Furthermore, a sufficient condition was derived for heterogeneous multi-Agent systems with switching and directed communication topologies to reach consensus. Finally, a simulation example was given to demonstrate the effectiveness of the theoretical results.
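    The sketch below runs a simple Euler simulation of a consensus-type protocol on a mixed group of first-order and second-order agents. The coupling law and damping gain are illustrative textbook choices, not the protocols derived in the paper, and convergence only holds under appropriate topology conditions.
```python
import numpy as np

def simulate(adj, first_order, x0, v0, k=1.0, dt=0.01, steps=3000):
    """Euler simulation of a mixed group: first-order agents use
    u_i = sum_j a_ij (x_j - x_i); second-order agents add velocity damping.
    adj: weighted adjacency matrix; first_order: boolean array per agent."""
    x, v = x0.astype(float).copy(), v0.astype(float).copy()
    for _ in range(steps):
        u = adj @ x - adj.sum(axis=1) * x        # consensus coupling term
        for i in range(len(x)):
            if first_order[i]:
                x[i] += dt * u[i]
            else:
                x[i] += dt * v[i]
                v[i] += dt * (u[i] - k * v[i])
    return x
```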
    Global path planning based reciprocal velocity obstacles method for crowd evacuation
    HUANG Yangyu HU Wei YUAN Guodong
    2013, 33(06):  1753-1758.  DOI: 10.3724/SP.J.1087.2013.01753
    Asbtract ( )   PDF (912KB) ( )  
    References | Related Articles | Metrics
    Reciprocal Velocity Obstacles (RVO) can handle collision avoidance among large numbers of agents and is used in many crowd simulation engines. However, due to the lack of optimized path planning, it is difficult for RVO to simulate crowd evacuation in complicated environments. In this paper, based on the RVO mechanism, a new globally optimal path planning method comprising path preprocessing and dynamic computation was proposed for crowd evacuation simulation in complicated environments. The Shortest Path Faster Algorithm (SPFA) was first used to pre-calculate the Scene Shortest Path (SSP), and the SSP was then utilized to compute an optimized evacuation path for each Agent in complicated scenes in real time. A k-d tree (k-dimensional tree) was also used to further improve processing performance. Examples demonstrate that the method performs well in global path planning for large-scale crowd evacuation in complicated scenes, especially in multi-floor, multi-obstacle, multi-stair and multi-outlet scenes.
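    For reference, the Shortest Path Faster Algorithm mentioned above is the queue-optimized form of Bellman-Ford; a minimal sketch on an adjacency-list graph is given below. The dictionary-based graph representation is an assumption for illustration.
```python
from collections import deque

def spfa(graph, src):
    """Shortest Path Faster Algorithm (queue-optimised Bellman-Ford).
    graph: {node: [(neighbour, weight), ...]}. Returns distances from src."""
    dist = {v: float("inf") for v in graph}
    dist[src] = 0.0
    in_queue = {v: False for v in graph}
    q = deque([src])
    in_queue[src] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:            # relaxation
                dist[v] = dist[u] + w
                if not in_queue[v]:
                    q.append(v)
                    in_queue[v] = True
    return dist
```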
    Dynamic identification of one-way road state based on floating car data
    JIANG Xinhua ZHU Dandan LIAO Lyuchao ZOU Fumin LAI Hongtu
    2013, 33(06):  1759-1766.  DOI: 10.3724/SP.J.1087.2013.01759
    Asbtract ( )   PDF (853KB) ( )  
    References | Related Articles | Metrics
    The identification of one-way road states can provide relevant road network information to the public in a timely and accurate manner, improve travel efficiency and enhance the service level of dynamic traffic information. This paper presented a dynamic identification algorithm for one-way road states based on Floating Car Data (FCD). Firstly, the line feature information of the map was extracted, and the spatial information grid was pre-matched with the road network to achieve fast matching of massive FCD. Then the statistical characteristics of the FCD heading information were analyzed with dual-threshold and direction filtering. Finally, the one-way road state information was obtained dynamically. Tests on an actual road network show that the algorithm can identify one-way road states effectively.
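    One way to turn the heading statistics of floating-car records on a road segment into a one-way/two-way decision is sketched below. The alignment tolerance, minimum sample count and dominance threshold are illustrative assumptions, not the paper's calibrated values.
```python
import numpy as np

def one_way_state(headings_deg, road_bearing_deg, align_tol=45.0,
                  min_samples=30, dominance=0.9):
    """Decide the one-way state of a road segment from floating-car headings.
    headings_deg: GPS headings matched to the segment; road_bearing_deg: the
    segment's digitized direction."""
    h = np.asarray(headings_deg) % 360.0
    # signed angular difference folded into [0, 180]
    diff = np.abs((h - road_bearing_deg + 180.0) % 360.0 - 180.0)
    forward = np.sum(diff <= align_tol)
    backward = np.sum(np.abs(diff - 180.0) <= align_tol)
    total = forward + backward
    if total < min_samples:
        return "unknown"
    if forward / total >= dominance:
        return "one-way (forward)"
    if backward / total >= dominance:
        return "one-way (reverse)"
    return "two-way"
```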
    Path planning algorithm based on hierarchical road network
    LUO Ya’nan FU Yongqing
    2013, 33(06):  1763-1766.  DOI: 10.3724/SP.J.1087.2013.01763
    Asbtract ( )   PDF (639KB) ( )  
    References | Related Articles | Metrics
    In order to improve the efficiency of path planning, a heuristic search algorithm was presented, which was based on a hierarchical road network and used a binary heap to manage the open list. According to the characteristics of road network classification, a hierarchical map database was established. The heuristic A* algorithm was used as the main search method, with a binary heap managing the open list, to realize path planning. Statistics on the average time consumption of different algorithms show that the A* algorithm is about four times as efficient as the Dijkstra algorithm, the binary heap reduces the time consumption by a further 5%, and the hierarchical strategy raises the proportion of fast road sections to 90% or more, with path planning completed within 3 seconds. The experimental results show that the algorithm is highly efficient and also meets drivers' psychological expectations.
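    A minimal sketch of A* search with a binary heap (Python's heapq) as the open list, matching the search structure described above; the adjacency-list graph format and the heuristic callback are assumptions for illustration, and the heuristic should be admissible for optimal paths.
```python
import heapq

def a_star(graph, heuristic, start, goal):
    """A* search with a binary heap as the open list.
    graph: {node: [(neighbour, cost), ...]}; heuristic(n) estimates the
    remaining cost from n to goal. Returns the node path or None."""
    open_heap = [(heuristic(start), 0.0, start)]
    g = {start: 0.0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, g_u, u = heapq.heappop(open_heap)
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        if u in closed:
            continue
        closed.add(u)
        for v, c in graph[u]:
            new_g = g_u + c
            if new_g < g.get(v, float("inf")):
                g[v] = new_g
                parent[v] = u
                heapq.heappush(open_heap, (new_g + heuristic(v), new_g, v))
    return None
```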
    Financial failure prediction using support vector machine with Q-Gaussian kernel
    LIU Zunxiong HUANG Zhiqiang YAN Feng ZHANG Heng
    2013, 33(06):  1767-1770.  DOI: 10.3724/SP.J.1087.2013.01767
    Asbtract ( )   PDF (601KB) ( )  
    References | Related Articles | Metrics
    For classification problems with complex data distributions arising in scientific practice, economic life and many other fields, the correlation between variables cannot be well described by the traditional Support Vector Machine (SVM), which degrades classification performance. For this situation, the Q-Gaussian function, a parametric generalization of the Gaussian function, was adopted as the kernel function of the SVM, and a financial early-warning model based on the SVM with a Q-Gaussian kernel was presented. Based on the financial data of A-share manufacturing companies listed on the Shanghai and Shenzhen stock markets, T-2 and T-3 financial early-warning models were constructed in the experiments; significance tests were used to select suitable indicators, and Cross Validation (CV) was used to determine the model parameters. Compared with the Gaussian-kernel SVM, the forecasting accuracies of the T-2 and T-3 models built with the Q-Gaussian kernel SVM were improved by about 3%, and the high-cost Type I errors were reduced by up to 14.29%.
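    The sketch below implements one common parameterization of the q-Gaussian (Tsallis) kernel and plugs it into scikit-learn's SVC as a callable kernel. The exact form and parameter ranges used in the paper may differ; q -> 1 recovers the ordinary Gaussian kernel.
```python
import numpy as np
from sklearn.svm import SVC

def q_gaussian_kernel(X, Y, q=1.5, sigma=1.0):
    """q-Gaussian kernel exp_q(-d^2 / (2 sigma^2)) with
    exp_q(t) = [1 + (1-q) t]_+^(1/(1-q)); one common parameterization."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)      # squared distances
    base = np.maximum(1.0 - (1.0 - q) * d2 / (2.0 * sigma ** 2), 0.0)
    return base ** (1.0 / (1.0 - q))

# usage sketch with a callable kernel (hypothetical training arrays):
# clf = SVC(kernel=lambda A, B: q_gaussian_kernel(A, B, q=1.5, sigma=2.0))
# clf.fit(X_train, y_train)
```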
    Boiler combustion efficiency optimization based on improved radial basis neural network
    JIN Yuping DANG Jie
    2013, 33(06):  1771-1779.  DOI: 10.3724/SP.J.1087.2013.01771
    Asbtract ( )   PDF (624KB) ( )  
    References | Related Articles | Metrics
    In order to improve the training accuracy of the radial basis function neural network, this paper proposed a hybrid optimization algorithm. The algorithm used the strong global search ability of the Particle Swarm Optimization (PSO) algorithm to avoid the adverse effect of initial point selection in the K-means algorithm, thereby improving the search speed for network centers. Meanwhile, a dynamic weight algorithm was used to avoid the ill-posed problem and to further improve the network's approximation ability. The boiler combustion example indicates that the improved algorithm is efficient and practical.
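    A plain particle swarm optimizer is sketched below; in the setting above it could search for the RBF centre coordinates by minimizing the network's training error supplied as the objective. The inertia and acceleration coefficients, bounds and swarm size are conventional values assumed for illustration.
```python
import numpy as np

def pso(objective, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Plain particle swarm optimisation of objective(vector) -> float."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```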
    Application of glowworm-PID algorithm to motor actuator suspension
    XIAO Ping GAO Hong SHI Peicheng
    2013, 33(06):  1774-1779.  DOI: 10.3724/SP.J.1087.2013.01774
    Asbtract ( )   PDF (825KB) ( )  
    References | Related Articles | Metrics
    In order to enhance the performance of automobile suspension, a glowworm-PID (Proportional-Integral-Derivative) algorithm was put forward. Firstly, on the basis of analyzing the basic principle of the glowworm-PID algorithm, the glowworm-PID algorithm for motor actuator suspension was developed, and the steps and flow diagram of the algorithm were given. Secondly, the traditional motor actuator was improved, and mathematical and simulation models of a 4-degree-of-freedom motor actuator suspension were built. A Hardware-In-the-Loop Simulation (HILS) test system for the active suspension algorithm was developed with dSPACE as the carrier of the simulation model. Lastly, simulation experiments on the glowworm-PID algorithm were carried out on the test system with different vehicle data and road inputs. The simulation results indicate that the glowworm-PID algorithm designed in this paper can reduce vehicle body acceleration, suspension working space and tire displacement.
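    The controller side of such a scheme can be sketched as a discrete PID loop whose three gains would be tuned off-line by the glowworm swarm search; the tuning loop and the suspension model are omitted, and the class interface is an assumption for illustration.
```python
class PID:
    """Discrete PID controller; kp, ki, kd would be selected by the glowworm
    swarm optimisation (tuning loop not shown)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """Return the control force for one sampling step of length dt."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```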
    Set-membership normalized least mean P-norm algorithm for second-order Volterra filter
    LI Feixiang ZHAO Zhijin ZHAO Zhidong
    2013, 33(06):  1780-1786.  DOI: 10.3724/SP.J.1087.2013.01780
    Asbtract ( )   PDF (585KB) ( )  
    References | Related Articles | Metrics
    To address the problem that the computational complexity of the Volterra nonlinear adaptive filtering algorithm grows as a power series, a second-order Volterra adaptive filtering algorithm based on Set-Membership Filtering (SMF) under α-stable distribution noise was proposed. Since the objective function of SMF involves all input-output signal pairs, the weight vector of the Volterra filter was updated only when the p-th power of the output error amplitude exceeded a threshold, which not only reduced the complexity of the filtering algorithm but also improved its robustness to input signal correlation. The update formula of the weight vector was also derived. The simulation results show that the proposed algorithm has lower computational complexity, faster convergence and better robustness against the noise and the input signal correlation.
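    An illustrative set-membership style update is sketched below: the weight vector is adapted only when the p-th power of the error magnitude exceeds the bound, with the step chosen so that the a-posteriori error just meets the bound. This selective-update rule is a simplified stand-in for the paper's derivation, and the expanded input vector u is assumed to already contain the linear and quadratic Volterra terms.
```python
import numpy as np

def smf_update(w, u, d, gamma, p=1.2, eps=1e-8):
    """One selective (set-membership style) update: w is the filter weight
    vector, u the expanded (Volterra) input vector, d the desired sample and
    gamma the bound on |e|**p. Returns the updated weights and the error."""
    e = d - w @ u
    if np.abs(e) ** p <= gamma:
        return w, e                                  # inside the constraint set: no update
    mu = 1.0 - gamma ** (1.0 / p) / np.abs(e)        # shrink |e| down to gamma**(1/p)
    w = w + mu * e * u / (u @ u + eps)               # normalized correction
    return w, e
```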
    Mine gas monitoring by multi-source information clustering fusion
    SUN Yanbo LIU Zongzhu MENG Ke TANG Yang
    2013, 33(06):  1783-1786.  DOI: 10.3724/SP.J.1087.2013.01783
    Asbtract ( )   PDF (627KB) ( )  
    References | Related Articles | Metrics
    Due to the complexity and dynamic changes of the coal mine environment, the concentrations of harmful gases are difficult to monitor accurately. Traditional monitoring methods use a single sensor to collect information, so the collected data have a simple form, low reliability and large errors. To address these problems, a new method was proposed in this paper: sampling a variety of heterogeneous gas sources, filtering the data with a strong classification algorithm, and finally fusing the filtered information. Experiments show that the new method significantly improves the reliability of the mine monitoring system.
    Spot-color matching system of flexo printing based on radial basis function neural network
    YANG Qiujuan ZHOU Shisheng LUO Rubai
    2013, 33(06):  1787-1789.  DOI: 10.3724/SP.J.1087.2013.01787
    Asbtract ( )   PDF (443KB) ( )  
    References | Related Articles | Metrics
    To solve the problem that manual color matching is strongly affected by human physiology and psychology, making product quality difficult to guarantee, a computer color matching method for flexography based on a Radial Basis Function (RBF) neural network was presented. The method obtained sample data experimentally, determined the hidden-layer centers and output-layer weights of the RBF neural network with the K-means clustering algorithm and the pseudo-inverse technique, and thus produced a spot-color matching system for flexo printing based on the RBF neural network. The spot-color matching system can match flexographic colors with high accuracy and high speed.
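    The training procedure named in the abstract (K-means for the hidden-layer centres, pseudo-inverse for the output weights) can be sketched as below. The Gaussian basis, the heuristic width and the use of scikit-learn's KMeans are assumptions for illustration.
```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, Y, n_centers=10, seed=0):
    """RBF network: hidden centres from K-means, output weights by pseudo-inverse.
    X: n x d ink-recipe inputs, Y: n x k colour targets (or vice versa)."""
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=seed).fit(X).cluster_centers_
    # a common heuristic width: maximum centre distance / sqrt(2 * n_centers)
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    sigma = d_max / np.sqrt(2.0 * n_centers)
    G = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2 / (2 * sigma ** 2))
    W = np.linalg.pinv(G) @ Y                      # least-squares output weights
    return centers, sigma, W

def predict_rbf(X, centers, sigma, W):
    G = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2 / (2 * sigma ** 2))
    return G @ W
```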
    Application of Android in remote medical information system
    LAN Kun ZHANG Yue
    2013, 33(06):  1790-1792.  DOI: 10.3724/SP.J.1087.2013.01790
    Asbtract ( )   PDF (511KB) ( )  
    References | Related Articles | Metrics
    To meet the application requirements of a Remote Medical Information System (RMIS), an approach to designing and developing Android applications for RMIS was proposed, combined with the characteristics of the Android system. Firstly, the framework of RMIS and the architecture of the Android Operating System (OS) were introduced. Then the development of serial port, Bluetooth, socket, HTTP and other communication methods was described, their application to medical data acquisition and mobile data processing was analyzed, and the realization of a Representational State Transfer (REST) Web service was introduced as well. Finally, an Android-based data acquisition application and a mobile information management application were implemented. The results show that Android can be used in RMIS in many aspects.