
Table of Contents

    01 September 2013, Volume 33 Issue 09
    Database technology
    K-medoids algorithm based on improved manifold distance
    QIU Xingxing CHENG Xiao
    2013, 33(09):  1001-9081.  DOI: 10.11772/j.issn.1001-9081.2013.09.2482
    Abstract   PDF (741KB)
    In this paper, an improved manifold distance based dissimilarity measure was designed to identify clusters in data sets with complex distributions and unknown real-world structure. This dissimilarity measure can mine the spatial distribution information of unlabeled data sets by exploiting the global consistency between all data points. Using this measure, a K-medoids algorithm based on the improved manifold distance was proposed. The experimental results on eight artificial data sets with different structures and the USPS handwritten digit data set indicate that the new algorithm outperforms or performs comparably to two other K-medoids algorithms based on the existing manifold distance and the Euclidean distance, and can identify clusters with simple or complex, convex or non-convex distributions.
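A sketch of the general idea (the abstract does not give the paper's exact dissimilarity formula; the exponential edge-length rule, the parameter rho, and the medoid initialization below are illustrative assumptions):

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manifold_distance(points, rho=2.0):
    # A common manifold-distance construction: edge length rho**d - 1
    # grows exponentially with Euclidean length d, so shortest paths
    # prefer many short hops that follow the data manifold.
    n = len(points)
    d = [[rho ** euclid(points[i], points[j]) - 1 for j in range(n)]
         for i in range(n)]
    for k in range(n):                       # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def k_medoids(dist, medoids, iters=20):
    labels = []
    for _ in range(iters):
        # assign every point to its nearest medoid
        labels = [min(range(len(medoids)), key=lambda c: dist[i][medoids[c]])
                  for i in range(len(dist))]
        # re-pick each medoid as the member minimising intra-cluster cost
        new = []
        for c in range(len(medoids)):
            members = [i for i, l in enumerate(labels) if l == c]
            new.append(min(members, key=lambda m: sum(dist[m][j] for j in members)))
        if new == medoids:
            break
        medoids = new
    return labels, medoids
```

With two well-separated blobs, the geodesic gap between blobs dwarfs all within-blob paths, so the assignment splits them cleanly.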
    Network and distributed technology
    Recursive algorithm for k-cycle preclusion problem in k-ary n-cubes
    YANG Yuxing WANG Shiying
    2013, 33(09):  2401-2403.  DOI: 10.11772/j.issn.1001-9081.2013.09.2419
    Abstract   PDF (586KB)
    In order to measure the fault tolerance of parallel computers that take the k-ary n-cube as their underlying topology, a recursive algorithm for finding a k-ary 1-cube subnet preclusion node cut of the k-ary n-cube was proposed, by constructing a minimum node set whose removal makes every k-ary 1-cube in the k-ary n-cube faulty. It is proved that at least k^(n-1) nodes need to be damaged if a rival wants to destroy all k-ary 1-cubes in the k-ary n-cube. The result indicates that undamaged k-ary 1-cubes still exist in such parallel computers as long as the number of faulty nodes does not exceed k^(n-1)-1.
    Low-power oriented cache design for multi-core processor
    FANG Juan GUO Mei DU Wenjuan LEI Ding
    2013, 33(09):  2404-2409.  DOI: 10.11772/j.issn.1001-9081.2013.09.2423
    Abstract   PDF (880KB)
    This paper proposed a Low-Power oriented cache Design (LPD) of the Level 2 (L2) cache for multi-core processors. LPD combined three ways to reduce power consumption while preserving performance: a Low-Power oriented Hybrid cache Partition algorithm (LPHP), a Cache Reconfiguration Algorithm (CRA), and Way-Prediction based on L2 cache Partition (WPP-L2). LPHP and CRA dynamically shut down cache columns that were not in use. WPP-L2 predicted one appropriate way before each cache access, which saved access time and thus power. Compared with the traditional Least Recently Used (LRU) strategy, the three methods saved power consumption by 20.5%, 17% and 64.6% on average, with improved throughput and little influence on program runtime. The experimental results show that this approach can significantly reduce the power of multi-core processors while maintaining system performance.
    Energy efficient scheduling for multiple directed acyclic graph in cloud computing
    LIU Danqi YU Jiong Ying Changtian
    2013, 33(09):  2410-2415.  DOI: 10.11772/j.issn.1001-9081.2013.09.2428
    Abstract   PDF (846KB)
    Existing energy-efficient scheduling algorithms based on multiple Directed Acyclic Graphs (DAG) fail to save energy efficiently, have a narrow application scope, and do not take performance optimization into account. To solve these problems, Multiple Relation Energy Optimizing (MREO) was proposed for multiple DAG workflows. Based on an analysis of the characteristics of computation-intensive and communication-intensive tasks, MREO integrated independent tasks to reduce the number of processors used. Backtracking and branch-and-bound algorithms were employed to select the best integration path dynamically and to reduce the complexity of the algorithm. The experimental results demonstrate that MREO can reduce the computation and communication energy cost efficiently while guaranteeing the performance of multiple DAG workflows.
    Fourth-order high-resolution entropy consistent algorithm
    ZHENG Supei FENG Jianhu
    2013, 33(09):  2416-2418.  DOI: 10.11772/j.issn.1001-9081.2013.09.2416
    Abstract   PDF (531KB)
    Fourth-order entropy consistent schemes were proposed for the one-dimensional Burgers equation and the one-dimensional Euler system. A semi-discrete method was used in time, the fourth-order Central Weighted Essentially Non-Oscillatory (CWENO) reconstruction was utilized in space, and Ismail's numerical flux function was introduced for the new algorithm. The new scheme was applied to the static shock wave, the Sod shock tube and strong rarefaction wave problems, and the numerical results were compared with the corresponding exact solutions and with existing algorithms. The results show that the new method has higher resolution than Roe's algorithm, the central upwind scheme and Ismail's method, and can accurately capture shock waves and rarefaction waves without non-physical oscillations. It is thus a feasible and accurate numerical method for the one-dimensional Burgers equation and the one-dimensional Euler system.
    Density biased sampling algorithm based on variable grid division
    SHENG Kaiyuan QIAN Xuezhong WU Qin
    2013, 33(09):  2419-2422.  DOI: 10.11772/j.issn.1001-9081.2013.09.2419
    Abstract   PDF (640KB)
    As the most commonly used method for reducing large-scale data sets, simple random sampling tends to lose small clusters when the data set is unevenly distributed. Grid-based density biased sampling can overcome this defect, but both the efficiency and the effect of sampling are affected by the granularity of the grid division. To address this shortcoming, a density biased sampling algorithm based on variable grid division was proposed. Every dimension of the original data set was divided according to its own distribution, so that the structure of the constructed grid matched the distribution of the original data set. The experimental results show that, compared with density biased sampling based on fixed grid division, the proposed algorithm achieves higher sample quality and shorter sampling time.
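The core of the approach can be sketched in a few lines: cut each dimension at its empirical quantiles (so the grid tracks the data distribution), then under-sample dense cells. The bin count, the exponent e, and the keep-rule below are illustrative assumptions, not the paper's parameters:

```python
import random

def quantile_edges(values, bins):
    # Variable division: cut each dimension at its empirical quantiles,
    # so dense regions get finer cells than a fixed uniform grid would.
    s = sorted(values)
    return [s[int(q * (len(s) - 1) / bins)] for q in range(1, bins)]

def cell_of(point, edges_per_dim):
    # index of the grid cell a point falls into, per dimension
    return tuple(sum(1 for e in edges if x > e)
                 for x, edges in zip(point, edges_per_dim))

def density_biased_sample(points, bins=4, e=0.5, seed=0):
    rng = random.Random(seed)
    dims = len(points[0])
    edges = [quantile_edges([p[d] for p in points], bins) for d in range(dims)]
    cells = {}
    for p in points:
        cells.setdefault(cell_of(p, edges), []).append(p)
    sample = []
    for members in cells.values():
        # keep roughly |cell|**e points per cell: dense cells are
        # under-sampled, sparse cells (small clusters) are preserved
        keep = max(1, round(len(members) ** e))
        sample.extend(rng.sample(members, keep))
    return sample
```

Because every non-empty cell keeps at least one point, isolated small clusters survive the reduction.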
    HDF5 based parallel I/O technology for structured grid CFD applications
    YANG Lipeng CHE Yonggang
    2013, 33(09):  2423-2427.  DOI: 10.11772/j.issn.1001-9081.2013.09.2423
    Abstract   PDF (791KB)
    Large-scale parallel Computational Fluid Dynamics (CFD) simulation places high demands on data I/O capabilities. Hierarchical Data Format version 5 (HDF5) can effectively manage large-scale scientific data and has good support for parallel I/O. An HDF5 data storage model for a parallel CFD application was designed, and parallel I/O for the application's data was implemented with the HDF5 parallel I/O API. Performance experiments were carried out on a parallel computer system. The results show that the data writing speed of the HDF5 based method is 6.9 to 16.1 times that of the baseline parallel I/O method (each process writing an ordinary data file independently) when 4 to 32 processes are used. Its data reading speed is slower, at 20% to 70% of the baseline. However, since the data reading time is much smaller than the data writing time, this has only a minor effect on overall performance.
    Parallel computing and numerical analysis of laminar diffusion combustion on GPU
    WEI Haoyang ZENG Guosun DING Chunling
    2013, 33(09):  2428-2431.  DOI: 10.11772/j.issn.1001-9081.2013.09.2428
    Abstract   PDF (712KB)
    In practical engineering applications, traditional CPU serial computation can hardly meet the speed requirements of combustion numerical simulation. This paper exploited the greater computing capability of the GPU by discretizing the combustion physical equations on a staggered grid, solving the discrete equations with the preconditioned bi-conjugate gradient stabilized (PBiCGSTAB) method, and designing GPU-oriented parallel algorithms for matrix-vector multiplication and inverse matrix-vector multiplication. A feasible method for the numerical calculation of laminar diffusion combustion on the GPU was thus obtained. The experimental results indicate that the parallel program on the GPU achieves more than 10 times speedup over the serial program on the CPU, and the calculated results agree with the actual situation, so the method is feasible and efficient.
    Numerical simulation of one-dimensional Burgers' equation based on lattice Boltzmann method
    LAN Zhongzhou LE Lihua GAO Yun
    2013, 33(09):  2432-2435.  DOI: 10.11772/j.issn.1001-9081.2013.09.2432
    Abstract   PDF (482KB)
    For the numerical simulation of the one-dimensional Burgers' equation based on the lattice Boltzmann method, two- and four-velocity models already existed. In this paper, an equilibrium distribution function was constructed by choosing a proper discrete velocity model. Then, using the Lattice Bhatnagar-Gross-Krook (LBGK) model, the Chapman-Enskog expansion and the multiscale technique, a three-velocity Lattice Boltzmann Method (LBM) called the D1Q3 model was proposed for the one-dimensional Burgers equation. Numerical experiments were carried out, and the numerical results agree well with the analytical solutions, which verifies the effectiveness of the new method.
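A minimal sketch of what a D1Q3 LBGK step for u_t + (u^2/2)_x = nu*u_xx can look like. The equilibrium moments used here (sum f = u, sum c·f = u^2/2, sum c^2·f = gamma·u) and the parameters tau and gamma are generic textbook choices, not necessarily the paper's construction; under Chapman-Enskog the viscosity is tied to gamma·(tau - 1/2):

```python
def init_f(u0, gamma=2.0 / 3.0):
    # start from equilibrium for the initial field u0
    return [[0.5 * (gamma * u - 0.5 * u * u) for u in u0],   # c = -1
            [u - gamma * u for u in u0],                      # c =  0
            [0.5 * (gamma * u + 0.5 * u * u) for u in u0]]    # c = +1

def lbm_burgers_step(f, tau=1.0, gamma=2.0 / 3.0):
    n = len(f[0])
    u = [f[0][i] + f[1][i] + f[2][i] for i in range(n)]  # zeroth moment
    for i in range(n):
        flux = 0.5 * u[i] * u[i]                  # Burgers flux u^2/2
        feq_p = 0.5 * (gamma * u[i] + flux)
        feq_m = 0.5 * (gamma * u[i] - flux)
        feq_0 = u[i] - gamma * u[i]
        # BGK collision: relax toward equilibrium
        f[2][i] += (feq_p - f[2][i]) / tau
        f[0][i] += (feq_m - f[0][i]) / tau
        f[1][i] += (feq_0 - f[1][i]) / tau
    # streaming with periodic boundaries (velocities +1 and -1)
    f[2] = [f[2][(i - 1) % n] for i in range(n)]
    f[0] = [f[0][(i + 1) % n] for i in range(n)]
    return f
```

Since collision conserves the zeroth moment at every site and streaming merely permutes populations, the total "mass" sum(u) is conserved exactly, which is a quick sanity check for any such scheme.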
    Network and communications
    Influence optimization model based on community structure
    GUO Jinshi TANG Hongbo WU Kai YANG Sen
    2013, 33(09):  2436-2439.  DOI: 10.11772/j.issn.1001-9081.2013.09.2436
    Abstract   PDF (782KB)
    The relatively large time cost of existing influence algorithms does not fit social networks whose scale keeps expanding. An influence optimization model based on the network community structure was proposed to solve this problem. Node influence was first evaluated within each community to identify core members; then a small subset of nodes was selected from the set composed of core nodes and community-linking nodes, so as to maximize diffusion at minimum cost. The experimental results demonstrate that the model obtains a subset with broader influence diffusion and reduces running time compared with traditional methods; its influence coverage reaches up to 90%.
    Mobile Agent routing algorithm for WSN based on Q learning hybrid with ant colony optimization
    DANG Xiaochao YAO Haohao HAO Zhanjun
    2013, 33(09):  2440-2443.  DOI: 10.11772/j.issn.1001-9081.2013.09.2440
    Abstract   PDF (754KB)
    Concerning the mobile Agent routing problem in Wireless Sensor Networks (WSN), a mobile Agent routing algorithm based on Q-learning hybridized with ant colony optimization was proposed. A new path-choosing probability model was introduced, and the optimal path was efficiently maintained by the algorithm. The simulation results show that the mobile Agent routing efficiency is greatly improved, delay requirements in multiple tasks are fulfilled, the reliability of the optimal path is enhanced, and network energy consumption is reduced.
    Multi-objective particle swarm optimization with decomposition for network community discovery
    YING Jiawei CHEN Yuzhong
    2013, 33(09):  2444-2449.  DOI: 10.11772/j.issn.1001-9081.2013.09.2444
    Abstract   PDF (821KB)
    A multi-objective particle swarm optimization algorithm with decomposition for network community discovery was proposed, and the multi-objective optimization model of community discovery was constructed by comparing the optimization objectives of different community discovery algorithms in social networks. The proposed algorithm adopted the Chebyshev (Tchebycheff) approach to decompose the multi-objective optimization problem into a number of single-objective sub-problems and used Particle Swarm Optimization (PSO) to discover the community structure. Moreover, a new local-search-based mutation strategy was put forward to improve search efficiency and speed up convergence. The algorithm overcame the defects of single-objective optimization methods. The experimental results on synthetic and real-world networks show that the proposed algorithm can discover the community structure rapidly and accurately and reveal the hierarchical community structure.
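The Chebyshev decomposition referred to above turns the multi-objective problem into scalar sub-problems of the form g(x | w, z*) = max_i w_i·|f_i(x) - z*_i|. A minimal sketch (weight vectors and the reference point z* are supplied per sub-problem, as usual in decomposition-based methods):

```python
def tchebycheff(fx, weights, z_star):
    # scalarized value g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|
    return max(w * abs(f - z) for w, f, z in zip(weights, fx, z_star))

def best_for_subproblem(population, objectives, weights, z_star):
    # each weight vector defines one single-objective sub-problem;
    # the candidate minimising the scalarized value wins it
    return min(population,
               key=lambda x: tchebycheff(objectives(x), weights, z_star))
```

For a two-objective trade-off f(x) = (x, 1-x), equal weights select the balanced solution x = 0.5, while a weight vector concentrated on one objective selects its extreme.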
    Multiple decision-tree packet classification algorithm based on rule set partitioning
    MA Teng CHEN Shuqiao ZHANG Xiaaohui TIAN Le
    2013, 33(09):  2450-2454.  DOI: 10.11772/j.issn.1001-9081.2013.09.2450
    Abstract   PDF (736KB)
    To address the excessive memory usage of decision-tree algorithms when coping with packet classification under high-rate networks and large rule sets, a multiple decision-tree algorithm based on rule set partitioning was proposed. While controlling the number of subsets, heuristics were used to partition the rule set into a limited number of subsets in which the overlapping rules were separated. A cascading decision-tree structure was proposed to lower tree depth and reduce search time. Theoretical analysis shows that the space complexity is greatly reduced compared with the traditional single decision-tree algorithm. The simulation results demonstrate that the algorithm reduces memory usage by about 30% and has better dimension scalability compared with EffiCuts, which so far has the best memory usage performance.
    Joint call admission control algorithm based on reputation model
    LI Zhen ZHU Lei CHEN Xushan JIANG Haixia
    2013, 33(09):  2455-2459.  DOI: 10.11772/j.issn.1001-9081.2013.09.2455
    Abstract   PDF (721KB)
    In order to overcome the limited research scenario of call admission control in heterogeneous wireless networks and reduce the blindness of access network selection, the scenario was extended from an integrated system with two networks to one with multiple networks, and a joint call admission control algorithm based on a reputation model was proposed. The reputation model was applied to network selection and the feedback mechanism. On the user side, terminals chose the access network according to the networks' reputation; on the network side, the networks made decisions through adaptive bandwidth management and a buffer queuing policy to enhance the probability of successful admission. Simulation results show that the proposed algorithm can effectively reduce the new call blocking probability and the handoff call dropping probability.
    Signal decomposition algorithm for reducing peak-to-average power ratio of OFDM
    TONG Yinghua GENG Shengling
    2013, 33(09):  2460-2462.  DOI: 10.11772/j.issn.1001-9081.2013.09.2460
    Abstract   PDF (393KB)
    In the Orthogonal Frequency Division Multiplexing (OFDM) system, a comparatively high signal Peak-to-Average Power Ratio (PAPR) leads to distortion of the transmitted signal. Therefore, this paper put forward a signal decomposition method to reduce PAPR: the baseband OFDM symbol was decomposed into two signals, and the improvement in PAPR was verified under different comparison thresholds and decomposition thresholds. The simulation results show that, under reasonable comparison and decomposition thresholds, the signal decomposition method can reduce the baseband PAPR by 3dB to 4dB.
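For reference, the PAPR metric itself is easy to compute. The sketch below builds a time-domain OFDM symbol with a pure-Python inverse DFT and measures peak-to-average power in dB; it illustrates the quantity being reduced, not the paper's decomposition method:

```python
import cmath
import math

def idft(X):
    # inverse DFT: time-domain OFDM symbol from subcarrier symbols
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    # PAPR = peak instantaneous power / average power, in dB
    powers = [abs(v) ** 2 for v in x]
    peak, avg = max(powers), sum(powers) / len(powers)
    return 10.0 * math.log10(peak / avg)
```

Identical symbols on all subcarriers add coherently into a single peak (worst case, 10·log10(N) dB for N subcarriers), while a single active subcarrier gives a constant envelope and 0 dB.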
    Normalized cumulant blind equalization algorithm based on oversampling technology
    ZHANG Xiaoqin HU Yongsheng ZHANG Liyi
    2013, 33(09):  2463-2466.  DOI: 10.11772/j.issn.1001-9081.2013.09.2463
    Abstract   PDF (573KB)
    The traditional baud-spaced equalizer can compensate the aliased frequency response characteristics of the received signal, but it cannot compensate the channel distortion. Concerning this problem, a normalized cumulant blind equalization algorithm based on oversampling technology was proposed. The received signal was first oversampled, and then a variable step-size was used to adaptively adjust the weight coefficients of the equalizer. The algorithm not only avoids falling into local optima, but also compensates the channel distortion effectively. The simulation results show that it can effectively speed up convergence and reduce the steady-state residual error.
    Porting of protocol SAE J1939 to Android system
    LI Jia QI Yanyan ZHU Weijie
    2013, 33(09):  2467-2469.  DOI: 10.11772/j.issn.1001-9081.2013.09.2467
    Abstract   PDF (581KB)
    Considering the lack of a Controller Area Network (CAN) bus application-layer driver in the Android system, a method was developed to port the CAN bus application-layer driver from Linux to Android. SAE J1939 was selected as the Android CAN bus application-layer protocol, and the linux-can-j1939 project developed by Kurt Van Dijck and Pieter Beyens was ported to Android. First, the authors analyzed the project structure and merged the corresponding files into the Android kernel; then they modified the header files and protocol implementation sources and added the missing structures and functions to the kernel. Finally, the Android kernel was compiled after modifying the Makefile and Kbuild files. The experimental results show that the new kernel realizes SAE J1939 functions such as address claiming, data packing and unpacking, and network management. Based on this port, a variety of CAN-based Android applications can be developed with the help of the Android application-layer interface.
    Three dimensional localization algorithm for wireless sensor networks based on projection and grid scan
    TANG Jie HUANG Hongguang
    2013, 33(09):  2470-2473.  DOI: 10.11772/j.issn.1001-9081.2013.09.2470
    Abstract   PDF (561KB)
    The paper proposed a method to address the shortcomings of current Wireless Sensor Network (WSN) three-dimensional localization algorithms in terms of accuracy and complexity. A grid scan was used to resolve the projected intersection domains of the neighboring anchor nodes on two coordinate planes, yielding the corresponding positions of the unknown nodes on those planes, from which the three-dimensional positions were finally estimated. The simulation results show that when 200 sensor nodes were deployed randomly in a 100m×100m×100m space, the coverage ratio of unknown nodes reached 99.1% and the relative error decreased to 0.5533. The use of projection efficiently reduced the complexity of the algorithm.
    Database technology
    Database index optimization algorithm based on cluster B+ tree
    HU Tingbo ZHONG Jun
    2013, 33(09):  2474-2476.  DOI: 10.11772/j.issn.1001-9081.2013.09.2474
    Abstract   PDF (604KB)
    Currently, the most popular index structure used in databases is the B+ tree, which supports efficient random search but is inefficient in cases where keywords arrive sequentially. To solve this problem, this paper proposed the Cluster B+ tree (CB+ tree) on the basis of the B+ tree. The CB+ tree exploits the order between keywords to a large extent and reduces the tree height to improve search efficiency. Simulations show that with one million records, the CB+ tree had the same efficiency as the B+ tree. With five million records, insertion took the CB+ tree 6.7s, 8% less than the B+ tree's 7.6s, and querying took 9.9s, 10% less than the B+ tree's 11.1s. The CB+ tree also reduced deletion time by 10% compared with the B+ tree, from 11.2s to 10.1s. According to the simulations, when the keywords are in order and the number of records exceeds one million, the CB+ tree is the more efficient index structure, and its advantage grows as the number of records increases.
    Distributed data stream clustering algorithm based on affinity propagation
    ZHANG Jianpeng JIN Xin CHEN Fucai CHEN Hongchang HOU Ying
    2013, 33(09):  2477-2481.  DOI: 10.11772/j.issn.1001-9081.2013.09.2477
    Abstract   PDF (839KB)
    To address the low clustering quality and high communication cost of existing distributed clustering algorithms, a distributed data stream clustering algorithm (DAPDC) combining density with the idea of representative-point clustering was proposed. At the local sites, affinity propagation clustering was used and the concept of the cluster representative point was introduced to describe the local distribution of the data streams, while the global site obtained the global model by merging the summary data structures uploaded from the local sites with an improved density clustering algorithm. The simulation results show that DAPDC can significantly improve the clustering quality of data streams in distributed environments, find clusters of different shapes, and markedly reduce the amount of data transferred by using cluster representative points.
    Programming model based on MapReduce for importing big table into HDFS
    CHEN Jirong LE Jiajin
    2013, 33(09):  2486-2489.  DOI: 10.11772/j.issn.1001-9081.2013.09.2486
    Abstract   PDF (715KB)
    To solve the instability and inefficiency of transferring data from a relational database system into the Hadoop Distributed File System (HDFS) using Sqoop, the authors proposed and implemented a new programming model based on the MapReduce framework. The algorithm for splitting a big table in this model was as follows: first a step was calculated by dividing the total number of lines by the number of mappers; then a SQL statement was constructed for each split with a start line index and a span equal to that step, so that each mapper task would issue an identical SQL workload. In the map phase, a mapper called the map function only once, with a single key-value pair whose key was the SQL statement corresponding to its split and whose value was null. The comparison experiments show that, for two different big tables with the same number of records, the importing times were approximately identical regardless of the record distribution, and for two different splitting fields in one big table the importing times were also the same. Moreover, the importing efficiency of the model was much higher than that of Sqoop.
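The splitting rule described above can be sketched directly. The table and column names are hypothetical, and the LIMIT/OFFSET form is one way to express "start line index plus span" in SQL:

```python
def build_split_queries(table, key_column, total_rows, num_mappers):
    # Each mapper gets one SQL statement covering an equal-sized line
    # range, so every task issues an essentially identical workload.
    step = total_rows // num_mappers
    queries = []
    for m in range(num_mappers):
        start = m * step
        # the last split absorbs the remainder rows
        span = total_rows - start if m == num_mappers - 1 else step
        queries.append(
            "SELECT * FROM {t} ORDER BY {k} LIMIT {n} OFFSET {s}"
            .format(t=table, k=key_column, n=span, s=start))
    return queries
```

Each generated statement would then become the key of the single key-value pair handed to one mapper.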
    Approach for cleaning uncertain data based on information entropy theory
    QIN Yuanxing DUAN Liang YUE Kun
    2013, 33(09):  2490-2492.  DOI: 10.11772/j.issn.1001-9081.2013.09.2490
    Abstract   PDF (610KB)
    In response to the issue that data anomalies in uncertain databases often hamper the efficient and effective use of data, an uncertain data cleaning method based on information entropy theory was proposed to reduce abnormal data. First, the uncertainty degree of uncertain data was defined using information entropy. Then, the confidence interval of the uncertain data was obtained by a statistical method based on the degree of uncertainty, and the uncertain databases were cleaned by means of this confidence interval. The experimental results show the effectiveness and efficiency of the proposed method.
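A minimal sketch of the two ingredients named above: Shannon entropy as the uncertainty degree of a tuple's probability distribution, and a normal-approximation confidence interval used to drop abnormal values. The z threshold is an illustrative choice, not the paper's:

```python
import math

def uncertainty_entropy(probs):
    # Shannon entropy of one uncertain tuple's probability distribution;
    # higher entropy = more uncertain tuple
    return -sum(p * math.log2(p) for p in probs if p > 0)

def confidence_interval(values, z=1.96):
    # normal-approximation interval mean +/- z * std (about 95% for z=1.96)
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return mean - z * sd, mean + z * sd

def clean(values, z=1.96):
    # keep only values inside the confidence interval
    lo, hi = confidence_interval(values, z)
    return [v for v in values if lo <= v <= hi]
```

A gross outlier falls outside the interval and is removed, while typical values survive.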
    Data deduplication in Web information integration
    LIU Xueqiong WU Gang DENG Houping
    2013, 33(09):  2493-2496.  DOI: 10.11772/j.issn.1001-9081.2013.09.2493
    Abstract   PDF (645KB)
    Since traditional data deduplication methods have low time efficiency and detection accuracy, a Stepwise Clustering Data Elimination (SCDE) method was presented based on the features of Web information integration. First, the whole record set was divided into sub-sets using both key-attribute division and the Canopy clustering technique; then the similar records in each sub-set were eliminated precisely. A fuzzy entity matching strategy based on dynamic weights was proposed to accurately eliminate duplicate records, reducing the influence of missing attributes on record similarity calculation, and company names received special treatment to improve matching accuracy. The results show that the method is superior to traditional algorithms in time efficiency and detection accuracy, with precision improved by 12.6%. The method is applied in a forestry yellow-page system and performs well.
    Information security
    Image scrambling algorithm based on cloud model
    FAN Tiesheng ZHANG Zhongqing SUN Jing LUO Xuechun LU Guiqiang ZHANG Pu
    2013, 33(09):  2497-2500.  DOI: 10.11772/j.issn.1001-9081.2013.09.2497
    Abstract   PDF (704KB)
    Concerning the deficiency of digital image scrambling algorithms in double scrambling, an image scrambling algorithm based on the cloud model was proposed. The algorithm used the function values generated by a three-dimensional cloud model to change both the positions and the values of the image pixels, achieving a double scrambling. Experimental verification, together with quantitative and qualitative analysis, shows that the scrambled image resembles white noise, and there is no security issue of periodic recovery. The algorithm quickly achieves the desired effect and is resistant to shearing, noise, filtering and scaling attacks. This proves that the algorithm is effective and reasonable and can be well applied to image scrambling.
    Fast image scrambling based on position interchange
    CAO Guanghui JIA Dan ZHANG Yizhi
    2013, 33(09):  2501-2504.  DOI: 10.11772/j.issn.1001-9081.2013.09.2501
    Abstract   PDF (569KB)
    In order to scramble images efficiently, a fast random permutation procedure based on the skew tent map was first presented, and a fast image scrambling algorithm was then designed on top of it. The main idea behind the fast random permutation is position interchange: based on the geometric meaning of probability, the non-uniformly distributed chaotic sequence generated by the skew tent map is transformed into a uniform random sequence, which then drives image elements to interchange positions. Theory and experimental results demonstrate that the fast random permutation is more efficient than sorting-based random permutation, and that the proposed image scrambling method has a larger key space and higher running efficiency than sorting-based image scrambling.
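A sketch of the position-interchange idea: the skew tent orbit drives Fisher-Yates-style swaps, so no sorting is needed, and the map's initial value and parameter play the role of the key. The warm-up length and parameter values below are illustrative, not the paper's:

```python
def skew_tent(x, p):
    # skew tent map on (0,1); p is the control parameter (part of the key)
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def chaotic_permutation(n, x0=0.37, p=0.6, warmup=100):
    # Position interchange: Fisher-Yates-style swaps driven by the
    # chaotic orbit instead of a PRNG.  By the geometric meaning of
    # probability, mapping x into [i, n-1] gives a near-uniform index.
    x = x0
    for _ in range(warmup):          # discard the transient
        x = skew_tent(x, p)
    perm = list(range(n))
    for i in range(n - 1):
        x = skew_tent(x, p)
        j = i + int(x * (n - i))     # index in [i, n-1]
        if j >= n:                   # guard the x == 1.0 edge case
            j = n - 1
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def scramble(pixels, key=(0.37, 0.6)):
    perm = chaotic_permutation(len(pixels), x0=key[0], p=key[1])
    return [pixels[perm[i]] for i in range(len(pixels))], perm

def unscramble(scrambled, perm):
    out = [0] * len(scrambled)
    for i, src in enumerate(perm):
        out[src] = scrambled[i]
    return out
```

Because the swap sequence is a bijection, the same key reproduces the same permutation and descrambling is exact.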
    Energy-saving data aggregation algorithm for protecting privacy and integrity
    LI Wei YANG Geng
    2013, 33(09):  2505-2510.  DOI: 10.11772/j.issn.1001-9081.2013.09.2505
    Abstract   PDF (1094KB)
    How to protect data privacy and data integrity is a major challenge in data aggregation for Wireless Sensor Networks (WSN). This paper proposed an integrity-protecting Energy-Saving Privacy-preserving Aggregation algorithm (iESPART), based on Energy-Saving Privacy-preserving Aggregation (ESPART). By using a homomorphic message authentication code scheme, iESPART is able not only to protect data integrity, but also to determine which nodes have been attacked. The simulation results show that iESPART achieves the same privacy-preserving effect as integrity-protecting Private Data Aggregation (iPDA), with a more comprehensive integrity detection mechanism and less communication overhead.
    Cross-domain authorization management model based on two-tier role mapping
    REN Zhiyu CHEN Xingyuan SHAN Dibin
    2013, 33(09):  2511-2515.  DOI: 10.11772/j.issn.1001-9081.2013.09.2511
    Abstract   PDF (785KB)
    With regard to the single role establishment method of traditional cross-domain authorization management models, and problems such as implicit promotion of privilege and separation-of-duty conflicts, a new cross-domain authorization management model based on two-tier role mapping was proposed. The two-tier role architecture meets the practical needs of role establishment and management, and on this basis unidirectional role mapping avoids role mapping rings. By introducing attributes and conditions, dynamic adjustment of permissions was realized. The model was formalized with dynamic description logic, including its concepts, relations and management actions. Finally, the security of the model was analyzed.
    Privacy protection method in vehicular networks
    CUI Liqun ZHANG Mingjie
    2013, 33(09):  2516-2519.  DOI: 10.11772/j.issn.1001-9081.2013.09.2516
    Abstract   PDF (734KB)
    Concerning the privacy protection problem in Vehicular Ad Hoc NETworks (VANET), a scheme based on K-anonymous chain privacy protection was proposed. The scheme builds a k-anonymous area near the query node and forwards the minimum bounding rectangle containing the k vehicles as the location data. The forwarding process constructs an anonymous chain to confuse the correspondence between identity information and location information, thus greatly reducing the probability of a successful attack. Security analysis and simulation results show that this scheme can well protect the privacy of mobile vehicles and improve the security and privacy of vehicular network communication.
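The location-cloaking step can be sketched as: gather the query node and its k-1 nearest neighbours, then report only their minimum bounding rectangle (2D positions are assumed here for brevity):

```python
def k_anonymous_mbr(query_pos, neighbor_positions, k):
    # Report the minimum bounding rectangle of the query node plus its
    # k-1 nearest neighbours instead of the node's exact position, so
    # the node is indistinguishable within a group of k vehicles.
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = sorted(neighbor_positions, key=lambda p: d2(p, query_pos))[:k - 1]
    group = [query_pos] + nearest
    xs = [p[0] for p in group]
    ys = [p[1] for p in group]
    return (min(xs), min(ys)), (max(xs), max(ys))   # lower-left, upper-right
```

The returned rectangle necessarily contains all k positions, so an observer cannot single out the query vehicle inside it.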
    Software tamper resistance based on function-level control-flow monitoring
    ZHANG Guimin LI Qingbao WANG Wei ZHU Yi
    2013, 33(09):  2520-2524.  DOI: 10.11772/j.issn.1001-9081.2013.09.2520
    Abstract   PDF (798KB)
    Software tamper-proofing is an important method for software protection. Concerning control-flow tampering caused by buffer overflow and other software attacks, a software tamper-proofing method based on Function-Level Control-Flow (FLCF) monitoring was proposed. This method described the software's normal behavior by its FLCF and instrumented one guard at every function entrance using binary rewriting technology. The monitoring module decided whether the software had been tampered with by comparing the running status reported by the guards with the expected condition. A prototype system was realized and its performance was analyzed. The experimental results show that this method can effectively detect control-flow tampering with low overhead and no false positives. It can be easily deployed and ported, as its implementation needs neither source code nor modifications of underlying devices, and system security is strengthened by isolating the monitoring module from the protected software.
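In spirit, the guard-per-function-entrance idea resembles the following sketch, where a decorator stands in for binary rewriting and a whitelist of (caller, callee) edges stands in for the expected FLCF; all names here are illustrative, not the paper's implementation:

```python
import functools

# the expected function-level control flow: allowed (caller, callee) edges
ALLOWED_CALLS = {("main", "load"), ("main", "process"), ("process", "save")}
call_stack = ["main"]
violations = []

def guard(func):
    # instrumentation at every function entrance; the monitor checks the
    # observed (caller, callee) edge against the expected control flow
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        edge = (call_stack[-1], func.__name__)
        if edge not in ALLOWED_CALLS:
            violations.append(edge)        # report tampering evidence
        call_stack.append(func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            call_stack.pop()
    return wrapper

@guard
def load():
    return "data"

@guard
def save(x):
    return True

@guard
def process():
    # 'process' -> 'load' is NOT in the whitelist: this simulates a
    # control-flow path the monitor should flag
    return save(load())
```

Running `process()` triggers exactly one report, for the unexpected edge, while whitelisted edges pass silently.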
    Software protection game model based on divided-storage strategy
    WANG Rui YANG Qiuxiang CHEN Gouxi MA Qiaomei
    2013, 33(09):  2525-2528.  DOI: 10.11772/j.issn.1001-9081.2013.09.2525
    Abstract | PDF (641KB)
    Current software protection technologies generally achieve protection by improving the code and applying encryption schemes. To address the question of whether the static anti-attack capability of authorized software code and the strength of software encryption can sufficiently resist attacks, the authors proposed a software protection game model based on a divided-storage strategy. The model used the divided-storage strategy to split the secret key into many segments, yielding multiple verification functions used to detect and resist a cracker's attack. After these functions were hidden in the program, the program was protected by multiple different verification functions while the software was running. The model was analyzed and demonstrated from the perspective of game theory, and applied to instances of software registration code verification; the security of the software code was improved. The experimental results and analysis show that the proposed model is correct and effective.
    Digital watermarking protocol based on El Gamal algorithm
    YAN Lixia XIAO Mingbo
    2013, 33(09):  2529-2531.  DOI: 10.11772/j.issn.1001-9081.2013.09.2529
    Abstract | PDF (623KB)
    In light of the drawbacks of current digital watermarking protocols, such as requiring frequent involvement of buyers, assuming that buyers have knowledge of signatures or watermarks, and not considering appropriate usage control of digital products, a secure, practical and extensible watermarking protocol was proposed by utilizing the homomorphic and commutative El Gamal encryption algorithm and a machine-fingerprint-based copyright control scheme. Besides the basic functions of a digital watermarking protocol, this protocol also considered the interests of both buyer and seller to some extent, and improved the user's experience with a transaction model similar to the traditional one.
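    The multiplicative homomorphism of El Gamal that such protocols rely on can be demonstrated with a toy sketch over a small prime group (illustration only; real use needs large safe primes, and the function names are assumptions):

    ```python
    import random

    # Toy parameters: prime modulus P and base G for the group Z_P*.
    P, G = 467, 2

    def keygen():
        x = random.randrange(2, P - 1)      # private key
        return x, pow(G, x, P)              # (private, public)

    def encrypt(y, m):
        k = random.randrange(2, P - 1)      # fresh ephemeral randomness
        return pow(G, k, P), (m * pow(y, k, P)) % P

    def decrypt(x, c):
        a, b = c
        return (b * pow(a, P - 1 - x, P)) % P   # b / a^x mod P (Fermat)

    def mul(c1, c2):
        # Homomorphism: Enc(m1) * Enc(m2) decrypts to m1 * m2 mod P,
        # which lets a seller embed a watermark into an encrypted work.
        return (c1[0] * c2[0] % P, c1[1] * c2[1] % P)
    ```

    Componentwise multiplication of two ciphertexts yields a valid encryption of the product of the plaintexts, so the watermark can be inserted without the seller ever seeing the buyer's secret.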
    Application of secret sharing and network coding to wiretap networks
    CAO Zhanghua JI Xiaodong LIU Min
    2013, 33(09):  2532-2535.  DOI: 10.11772/j.issn.1001-9081.2013.09.2532
    Abstract | PDF (655KB)
    Inspired by cryptographic methods of confidential communication and the idea of secret sharing, the authors proposed a secure communication scheme suitable for wiretap networks that employ network coding for data transmission. The proposed scheme used original random bits to generate a new random bit string for encrypting the source messages, and then mixed the original random bit string with a secret string generated from the obtained ciphertexts. Lastly, efficient use of network capacity was achieved with random linear network coding. Moreover, the scheme neither resorted to a secret channel transmitting the secret key to the sinks, nor constructed linear network coding meeting special conditions that depend on the network topology.
    Nonlinear one-time password (t,n) threshold secret sharing scheme
    FAN Chang RU Peng
    2013, 33(09):  2536-2539.  DOI: 10.11772/j.issn.1001-9081.2013.09.2536
    Abstract | PDF (727KB)
    To address the security vulnerabilities of secret sharing schemes constructed with linear algorithms, and the single point of failure and unreliability introduced by a trusted party, this paper proposed a nonlinear threshold secret sharing scheme combining nonlinear algorithms and cryptography. The scheme was based on two nonlinear structures, a chaos algorithm and finite state automata, so it can generate random and dynamic shares. Participants can control the shares in each round to achieve the security level of a one-time or N-time password. The secret was recovered by the Lagrange interpolation formula. Secure multiparty computation restricted every participant so that the scheme satisfied resilient equilibrium and could withstand cheating and conspiracy attacks.
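    The Lagrange recovery step is the classical Shamir construction; a minimal sketch over a small prime field (the field size and helper names are illustrative, and the paper's nonlinear share generation is not reproduced here):

    ```python
    import random

    PRIME = 2 ** 13 - 1  # 8191, a small Mersenne prime field for illustration

    def make_shares(secret, t, n):
        """Evaluate a random degree-(t-1) polynomial with constant term
        `secret` at x = 1..n; any t points determine the polynomial."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
                for i in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 recovers the constant term."""
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % PRIME
                    den = den * (xj - xm) % PRIME
            # Modular inverse of den via Fermat's little theorem.
            secret = (secret + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret
    ```

    Any t of the n shares reconstruct the secret, while t-1 shares reveal nothing, which is the (t,n) threshold property the scheme builds on.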
    Artificial intelligence
    Establishment and application of consumption sentiment ontology library based on three-dimensional coordinate
    QIU Yunfei LIN Mingming SHAO Liangshan
    2013, 33(09):  2540-2545.  DOI: 10.11772/j.issn.1001-9081.2013.09.2540
    Abstract | PDF (925KB)
    Since positive comments may contain comments that are not truly satisfied, a method that can truly reflect the sentiment of consumers was constructed to filter such comments out of the positive ones. Research oriented to consumption sentiment shows that the consumption sentiment vocabulary should be extracted first. According to the consumption sentiment features, consumption sentiment was classified into seven classes and twenty-five subclasses, and a three-dimensional coordinate model was established. Afterwards, Protégé was used to build a consumption sentiment ontology library so that consumption sentiment can be automatically classified by the three-dimensional coordinate vocabulary classification algorithm. Moreover, a consumption sentiment judging algorithm was applied to automatically judge consumer comments based on the completed library. Finally, compared with the positive comment ratio of Taobao, the F-measure can reach more than 95%. The method can eliminate non-truly satisfied comments from positive comments and reflect the consumer's real emotion.
    Improved algorithm of thematic term extraction based on increment term-set frequency from Chinese document
    LIU Xinglin
    2013, 33(09):  2546-2549.  DOI: 10.11772/j.issn.1001-9081.2013.09.2546
    Abstract | PDF (607KB)
    In order to solve the problem that the thematic term extraction algorithm based on incremental term-set frequency cannot extract compound words, this paper added a text preprocessing step, compound-word recognition, to the original algorithm. Compound-word recognition was based on part-of-speech detection and a word co-occurrence directed graph, and corrected the results of segmentation. When generating the thematic term candidate set, the position of each word occurrence was examined to determine its weight. Then the total weight of each word was accumulated, and the candidate set of thematic terms was generated in order of weight from high to low. Each time the algorithm took a term from the candidate set, the increment of term-set frequency was calculated: if the increment was less than a given threshold, the algorithm stopped; otherwise, the candidate was added to the thematic term set. The experimental results show that this algorithm achieves sound effects: the thematic terms it acquires reflect the main contents of an article more aptly, and satisfaction with the thematic terms increased by 5% over the original algorithm.
    Real-coded quantum evolutionary algorithm based on cloud model
    LI Guozhu
    2013, 33(09):  2550-2552.  DOI: 10.11772/j.issn.1001-9081.2013.09.2550
    Abstract | PDF (632KB)
    To deal with the problems of easily falling into local optima and low accuracy in the quantum evolutionary algorithm, a real-coded quantum evolutionary algorithm based on the cloud model (CRCQEA) was proposed, exploiting the randomness and stable tendency of the cloud model. The algorithm used single-dimensional cloud mutation for rapid global search, and multi-dimensional cloud evolution to enhance local search ability and explore the global optimal solution. Dynamically adjusting the search range and resetting chromosomes according to the evolutionary progress can speed up convergence and prevent falling into local optima. The simulation results show that the algorithm improves search accuracy and efficiency, and is well suited to complex function optimization.
    Optimal hyperplane modification of support vector machine based on Fisher within-class scatter
    YANG Ting MENG Xiangru WEN Xiangxi WU Wen
    2013, 33(09):  2553-2556.  DOI: 10.11772/j.issn.1001-9081.2013.09.2553
    Abstract
    The generalization of Support Vector Machines (SVM) declines when the training data sets have an imbalanced distribution. A modification method of the optimal hyperplane based on the average scatter ratio, derived from the Fisher within-class scatter, was proposed to solve this problem. The normal vector of the optimal hyperplane was obtained after SVM training. The Fisher within-class scatter was introduced to evaluate the distributions of the two classes. On this basis, the optimal hyperplane was modified by the ratio of the average within-class scatter obtained according to the number of samples. The experimental results on benchmark data sets show that the proposed method improves the classification accuracy of the class with less training data, and thus improves the SVM's generalization.
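    The within-class scatter along the trained normal can be computed as below; `scatter_along_normal` is an assumed helper name, and the paper's exact hyperplane-shift formula is not reproduced:

    ```python
    def scatter_along_normal(points, w):
        """Fisher within-class scatter of one class, measured along the
        (unnormalized) SVM hyperplane normal w via a 1-D projection."""
        proj = [sum(pi * wi for pi, wi in zip(p, w)) for p in points]
        mu = sum(proj) / len(proj)
        return sum((x - mu) ** 2 for x in proj) / len(proj)
    ```

    The ratio of the two classes' scatter values, averaged over their sample counts, then indicates how far the offset should move toward the more tightly clustered class.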
    Solving knapsack problem based on greedy discrete electromagnetism-like mechanism algorithm
    WANG Jianlong SUN Heming
    2013, 33(09):  2557-2561.  DOI: 10.11772/j.issn.1001-9081.2013.09.2557
    Abstract | PDF (678KB)
    The basic Electromagnetism-like Mechanism (EM) algorithm cannot solve discrete problems such as the knapsack problem. A new algorithm called the Greedy Discrete Electromagnetism-like Mechanism (GDEM) algorithm was proposed. Firstly, a crossover operation was proposed; then this operation was used to modify the force calculation and particle movement of the basic EM algorithm; at last, a greedy algorithm was introduced to handle the constraint condition. The results on three classical test knapsack problems show that the GDEM algorithm can solve knapsack problems efficiently.
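    The greedy constraint-handling step can be sketched with the classic value-per-weight heuristic; this is a generic greedy repair under assumed inputs, not the paper's exact procedure:

    ```python
    def greedy_knapsack(values, weights, capacity):
        """Greedy knapsack repair: take items in decreasing value/weight
        ratio while the remaining capacity allows; returns the achieved
        value and the sorted indices of the chosen items."""
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i], reverse=True)
        total_v = total_w = 0
        chosen = []
        for i in order:
            if total_w + weights[i] <= capacity:
                chosen.append(i)
                total_w += weights[i]
                total_v += values[i]
        return total_v, sorted(chosen)
    ```

    In the GDEM setting, an infeasible particle would be repaired by dropping and re-adding items with this ratio ordering until the weight constraint holds.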
    Lane recognition and departure warning based on hyperbolic model
    CHEN Benzhi
    2013, 33(09):  2562-2565.  DOI: 10.11772/j.issn.1001-9081.2013.09.2562
    Abstract | PDF (668KB)
    To improve the accuracy, reliability and computational efficiency of lane recognition and departure warning algorithms, a new lane detection and departure warning framework based on the hyperbolic model was proposed. Firstly, lane edge points were obtained from the preprocessed image by searching for feature points, and a least squares fitting method was used to identify the hyperbolic model of the lane. The confidence factor of the identified lane model was evaluated by a confidence function according to the characteristics of the lane model and the points adjacent to the lane, and the confidence was compared with a threshold value to extract the lane line. Then, in view of the continuously changing characteristics of the lane between consecutive frames, a particle filter algorithm was used to search for lane edge points near the lane model obtained in the previous frame, and an updated lane was identified and evaluated by the confidence factor. Finally, based on the hyperbolic lane model established in the aforementioned procedure, a spatial and temporal warning model of lane departure was proposed in the image coordinate system. The proposed algorithm was implemented on a PC platform and lane experiments were conducted. The experimental results show that the method achieves good recognition accuracy (92%) and average processing speed (40 ms/frame), which can meet the application requirements.
    Hotbox level detection of railway vehicle using fuzzy neural networks
    CUI Zhuanling LI Guoning LIN Sen
    2013, 33(09):  2566-2569.  DOI: 10.11772/j.issn.1001-9081.2013.09.2566
    Abstract | PDF (615KB)
    Concerning the low accuracy, simplistic algorithm, and many parameters that are difficult to modify in the hotbox detection of the infrared Train Hotbox Detecting System (THDS), a new hotbox detection model based on fuzzy neural networks was proposed. The model selected three variables as inputs, namely temperature difference, train temperature difference and vehicle temperature difference, and four hotbox grades as outputs. One hundred and twenty-five fuzzy rules and a learning algorithm were used to train the fuzzy neural network, which can serve as an expert system to detect hotboxes. The practical simulation results show that the hotbox detection model using fuzzy neural networks can reduce the number of detection parameters, and the discrete concordance rate reaches 95%.
    Network and distributed technology
    Estimation of fundamental matrix by using layer-by-layer iterative approach
    YANG Lei LI Guiju
    2013, 33(09):  2570-2572.  DOI: 10.11772/j.issn.1001-9081.2013.09.2570
    Abstract | PDF (449KB)
    Concerning the estimation of the fundamental matrix in motion sequences under unknown environments, a layer-by-layer iterative approach was proposed. This approach was based on an optimal robust estimation method, and added the constraints of motion continuity and multi-scale correspondence to reduce false correspondences. Then the data inliers of the model from a higher level were added into the data set of the lower level in order to update the data set and estimate the homographic model simultaneously. Finally, global optimization was done on the bottom level to rectify the model. The experiments show that the geometric transform error is no more than 2.891821 pixels and the variance of the error fluctuation range is no more than 0.295172 pixels; both the mean and the fluctuation variance of the error are reduced to some degree when the scene surface has relatively plentiful depth levels and good continuity across the motion sequences.
    Codebook design algorithm for image vector quantization based on improved artificial bee colony
    GUO Yanju CHEN Lei CHEN Guoying
    2013, 33(09):  2573-2576.  DOI: 10.11772/j.issn.1001-9081.2013.09.2573
    Abstract | PDF (678KB)
    A new vector quantization image compression algorithm based on an improved artificial bee colony was proposed to improve the quality of the codebook. In this method, the Mean Squared Error (MSE) was used as the fitness function and the improved artificial bee colony algorithm was used to optimize it. The self-organization and convergence of the algorithm were improved, while the possibility of falling into local convergence was reduced. In order to reduce the computational cost of the algorithm, a fast codebook search idea based on the sum of vectors was introduced into the fitness function calculation. The simulation results show that the algorithm converges rapidly with low computation time, and the quality and robustness of the codebook generated by this algorithm are good.
    Compressed video sensing method based on motion estimation and backtracking-based adaptive orthogonal matching pursuit
    ZHUANG Yanbin GUI Yuan XIAO Xianjian
    2013, 33(09):  2577-2579.  DOI: 10.11772/j.issn.1001-9081.2013.09.2577
    Abstract | PDF (649KB)
    In order to remove the image blurring caused by reconstructing video frames independently frame by frame in traditional compressed video sensing methods, this paper proposed a new approach to video compressed sensing based on motion estimation and motion compensation, combining compressed sensing theory with techniques from MPEG standard video coding to remove the spatial and temporal redundancy of video signals. The method took full account of the temporal correlation of video sequences: it first compensated video frames using forward, backward and bidirectional prediction, then adopted the Backtracking-based Adaptive Orthogonal Matching Pursuit (BAOMP) algorithm to reconstruct the motion prediction residuals, and finally reconstructed the current frames. The experimental results indicate that the proposed method gains better video image quality and a higher Peak Signal-to-Noise Ratio (PSNR) than the frame-by-frame reconstruction method.
    Mathematical model for reconstructing luminance distribution of the projector-screen system
    ZHANG Jun LI Hui
    2013, 33(09):  2580-2583.  DOI: 10.11772/j.issn.1001-9081.2013.09.2580
    Abstract | PDF (649KB)
    A new practical mathematical model was presented to characterize the luminance distribution of a projector-screen system. By integrating projector lens vignetting, screen diffuse reflection and a parametric surface model, the proposed mathematical model provided an accurate description of the luminance distribution function with 28 adjustable parameters. These parameters can be determined by the nonlinear least squares method from a small number of measured luminance values at sparse positions on the projection screen. Practical applications show that the proposed model can be widely applied to describe the luminance distribution of various types of projector-screen systems. Therefore, it can be used to reduce the complexity of building a large multi-projector display system.
    Adaptive tracking algorithm based on multi-criteria feature fusion
    ZHAO Qian ZHOU Yong ZENG Zhaohua HOU Yuanbin LIU Shulin
    2013, 33(09):  2584-2587.  DOI: 10.11772/j.issn.1001-9081.2013.09.2584
    Abstract | PDF (643KB)
    Tracking based on multiple feature fusion is one of the most active research topics in the tracking field, but tracking accuracy still needs improvement in complex environments, and most existing methods use a single fusion rule. In this paper, a new adaptive fusion strategy was proposed for multi-feature fusion. First, local background information was introduced to strengthen the description of the target, and then the feature weights were calculated by a variety of criteria during the fusion process. In addition, target tracking was realized within the mean shift framework. Extensive comparative experimental results show that the proposed algorithm is more stable and robust than those using a single fusion rule.
    Color image denoising based on Gaussian weighting and manifold for high fidelity
    CHEN Zhongqiu SHI Rui LIU Jingmiao
    2013, 33(09):  2588-2591.  DOI: 10.11772/j.issn.1001-9081.2013.09.2588
    Abstract | PDF (822KB)
    When vector methods are used for color image denoising, the complexity of the algorithm is high and real-time performance cannot be achieved. A high-fidelity color image denoising method based on Gaussian weighting and adaptive manifolds was proposed. Firstly, the non-local means algorithm was used to obtain high-dimensional data, and an improved Gaussian kernel was used to calculate the weights of the color image. Secondly, a splatting method was used to deal with the high-dimensional data, and a Gaussian distance-weighted projection of the colors of all pixels was performed onto each adaptive manifold. Thirdly, smooth dimensionality reduction was performed on the manifolds, and an iterative method was used for image smoothing. Finally, the final filter response for each pixel was computed by interpolating the blurred values gathered from all adaptive manifolds. The experimental results show that the algorithm has better denoising performance than the original one and also improves real-time performance; details are preserved well, the Peak Signal-to-Noise Ratio (PSNR) is improved by nearly 2.0dB, and the Structural Similarity Index Measurement (SSIM) is improved by more than 1%.
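    The Gaussian weighting at the core of non-local means compares pixel patches; a minimal sketch, with `gaussian_weight` and the flattened-patch representation as assumptions:

    ```python
    import math

    def gaussian_weight(patch_a, patch_b, h):
        """Non-local-means style weight: a Gaussian of the squared distance
        between two flattened pixel patches; h controls the decay, so
        similar patches contribute more to the denoised value."""
        d2 = sum((a - b) ** 2 for a, b in zip(patch_a, patch_b))
        return math.exp(-d2 / (h * h))
    ```

    Identical patches get weight 1, and the weight decays smoothly as patches differ, which is what lets the filter average over self-similar image regions without blurring across distinct structures.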
    Multiplicative noise removal via adaptive kernel regression and total variation
    WU Yulian FENG Xiangchu
    2013, 33(09):  2592-259.  DOI: 10.11772/j.issn.1001-9081.2013.09.2592
    Abstract | PDF (606KB)
    To better remove multiplicative noise, a new three-stage method for multiplicative noise removal was proposed. In the first stage, the log-image was processed by adaptive Steer Kernel Regression (SKR). In the second stage, the Total Variation (TV) regularization method was used to amend the obtained image. At last, via an exponential function and bias correction, the result was transformed back from the log domain to the real one. The new method combined the advantages of steer kernel regression and the total variation method. The experimental results show that the new method filters out multiplicative noise more effectively.
    Improved wavelet denoising with dual-threshold and dual-factor function
    REN Zhong LIU Ying LIU Guodong HUANG Zhen
    2013, 33(09):  2595-2598.  DOI: 10.11772/j.issn.1001-9081.2013.09.2595
    Abstract | PDF (632KB)
    Traditional wavelet threshold functions have drawbacks such as discontinuity at the threshold points and large deviation of the estimated wavelet coefficients, so the Gibbs phenomenon and distortion are generated and the Signal-to-Noise Ratio (SNR) of the denoised signal can hardly be improved. To overcome these drawbacks, an improved wavelet threshold function was proposed. Compared with the soft, hard, semi-soft and other threshold functions, this function is not only continuous at the threshold points and more convenient to process, but is also compatible with the performance of the traditional functions, and its practical flexibility is greatly improved by adjusting the dual threshold parameters and dual variable factors. To verify the improved function, a series of simulation experiments were performed, and the SNR and Root-Mean-Square Error (RMSE) values of different denoising methods were compared. The experimental results demonstrate that the smoothness is greatly enhanced and the distortion is reduced. Compared with the soft function, the SNR increases by 22.2% and the RMSE decreases by 42.6%.
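    The two traditional rules being improved upon can be stated in a few lines; this sketch shows the standard hard and soft rules only, not the paper's dual-threshold function:

    ```python
    def hard_threshold(w, t):
        """Hard rule: keep the coefficient unchanged or zero it; the jump
        at |w| = t is the discontinuity criticized above."""
        return w if abs(w) >= t else 0.0

    def soft_threshold(w, t):
        """Soft rule: continuous at |w| = t, but shrinks every surviving
        coefficient by t, the estimation bias the improved
        function aims to reduce."""
        if abs(w) < t:
            return 0.0
        return (abs(w) - t) * (1.0 if w > 0 else -1.0)
    ```

    An improved function typically interpolates between these two behaviors, staying continuous like the soft rule while approaching the unbiased hard rule for large coefficients.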
    Local variance based anisotropic diffusion denoising method for ultrasonic image
    LIU Wanzhen FU Zhongliang
    2013, 33(09):  2599-2602.  DOI: 10.11772/j.issn.1001-9081.2013.09.2599
    Abstract | PDF (734KB)
    Since anisotropic diffusion methods cannot effectively distinguish between strong noise and weak edges, the authors proposed an improved anisotropic diffusion denoising method based on local statistical characteristics. While denoising images with the anisotropic diffusion method, points with large gray-level variations were found using a gradient threshold, each such point was judged to be a noise point or not by calculating the local variance and local deleted variance, and mean filtering was then applied to the noise points. Experiments on simulated images and clinical ultrasonic images show that this method preserves features and edges more effectively than traditional anisotropic diffusion methods while denoising.
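    The underlying anisotropic diffusion can be illustrated in one dimension with the classic Perona-Malik scheme; the parameter values are illustrative and the paper's local-variance test is not included:

    ```python
    def perona_malik_1d(signal, iters=10, k=2.0, lam=0.2):
        """1-D Perona-Malik diffusion: the conduction coefficient
        g = 1 / (1 + (grad/k)^2) lets smoothing act in flat regions while
        nearly stopping across strong gradients (edges)."""
        s = list(signal)
        for _ in range(iters):
            out = s[:]
            for i in range(1, len(s) - 1):
                de = s[i + 1] - s[i]    # eastward gradient
                dw = s[i - 1] - s[i]    # westward gradient
                ge = 1.0 / (1.0 + (de / k) ** 2)
                gw = 1.0 / (1.0 + (dw / k) ** 2)
                out[i] = s[i] + lam * (ge * de + gw * dw)
            s = out
        return s
    ```

    A strong noise spike produces the same large gradient as a weak edge, which is exactly why the improved method adds local variance statistics to tell the two apart.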
    Destriping method based on transform domain
    LIU Haizhao YANG Wenzhu ZHANG Chen
    2013, 33(09):  2603-2605.  DOI: 10.11772/j.issn.1001-9081.2013.09.2603
    Abstract | PDF (503KB)
    To remove stripe noise from line-scan images, a transform-domain destriping method combining the Fourier transform and wavelet decomposition was proposed. Firstly, the image was decomposed using multi-resolution wavelet decomposition to separate the subband containing the stripe noise from the other subbands. Then the subband containing the stripe noise was transformed into Fourier coefficients, which were processed by a band-stop filter to remove the stripe noise. Live collected images of foreign fibers in cotton with stripe noise were used in the simulation experiment. The experimental results indicate that the proposed approach can effectively remove stripe noise from the image while preserving the characteristics of the original image, and achieves a better destriping effect than using the Fourier transform or wavelet decomposition alone.
    Graph-based semi-supervised method for image classification in combination with mean shift
    BAI Yina WANG Xili
    2013, 33(09):  2606-2609.  DOI: 10.11772/j.issn.1001-9081.2013.09.2606
    Abstract | PDF (617KB)
    Graph-based semi-supervised manifold regularization image classification algorithms have disadvantages such as high space and time complexity, and all labeled and unlabeled samples are involved in training, so it is hard to classify large-scale images; moreover, a high error rate often occurs for images with complex backgrounds or targets. To deal with these problems, a graph-based semi-supervised algorithm combined with mean shift was proposed for image classification. The improvement lies in two aspects: firstly, the mean shift method was used to smooth the image, and the result replaced the original image as the image to be classified; secondly, only a small number of unlabeled samples were used instead of all of them. The experimental results indicate that the proposed method improves classification accuracy and largely reduces complexity, making it possible for graph-based semi-supervised classification algorithms to classify large-scale images.
    Improved object detection method of adaptive Gaussian mixture model
    LI Hongsheng XUE Yueju HUANG Xiaolin HUANG Ke HE Jinhui
    2013, 33(09):  2610-2613.  DOI: 10.11772/j.issn.1001-9081.2013.09.2610
    Abstract | PDF (659KB)
    The Gaussian Mixture Model (GMM) suffers from high computation cost and cannot deal with shadow and ghosting. An improved foreground detection algorithm based on GMM was proposed in this paper. By analyzing the stability of the background, intermittent or continuous frame updating was chosen to update the parameters of the GMM, which efficiently reduces the runtime of the algorithm. In the background updating, the updating rate was associated with the weight so that it changed with the weight, and background pixels exposed after objects move were given a larger updating rate; this improves the stability of the background and solves the problems of the ghosting phenomenon and the transformation between background and foreground. After object detection, the algorithm eliminated shadows based on the RGB color space distortion model and treated the result with Gaussian pyramid filtering and morphological filtering, obtaining a better contour. The experimental results show that this algorithm improves the calculation efficiency and accurately segments the foreground objects.
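    The per-pixel GMM update that such methods build on can be sketched as below; this is a simplified Stauffer-Grimson style update with illustrative parameter values, not the paper's adaptive scheme:

    ```python
    def update_gmm(pixel, gaussians, alpha=0.05, match_t=2.5):
        """Online update of one pixel's mixture. `gaussians` is a list of
        mutable [weight, mean, variance] components; returns True when the
        pixel matches an existing component (i.e. looks like background)."""
        matched = False
        for g in gaussians:
            w, mu, var = g
            if not matched and abs(pixel - mu) <= match_t * var ** 0.5:
                matched = True
                g[1] = (1 - alpha) * mu + alpha * pixel           # mean
                g[2] = (1 - alpha) * var + alpha * (pixel - g[1]) ** 2
                g[0] = (1 - alpha) * w + alpha                    # weight up
            else:
                g[0] = (1 - alpha) * w                            # weight down
        if not matched:
            # Replace the least probable component with a new one
            # centered on the unexplained pixel.
            gaussians.sort(key=lambda g: g[0])
            gaussians[0] = [alpha, float(pixel), 30.0]
        total = sum(g[0] for g in gaussians)
        for g in gaussians:
            g[0] /= total
        return matched
    ```

    The improved algorithm's idea of tying the updating rate to the component weight would replace the fixed `alpha` here with a weight-dependent rate.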
    Collision detection algorithm based on changeable direction hull in virtual surgery
    SHI Lingling WANG Weidong YAN Zhiyuan
    2013, 33(09):  2614-2616.  DOI: 10.11772/j.issn.1001-9081.2013.09.2614
    Abstract | PDF (642KB)
    In order to achieve fast collision detection in robot-assisted virtual surgery, an algorithm based on a changeable direction hull was proposed. It combined the fixed direction hull algorithm with two characteristics of the virtual scene: the complex motion of the surgical instrument end and the continuous deformation of soft tissues. The action mode between instruments and soft tissue was analyzed and the deformation of the soft tissue was predicted. Then the set of box directions was changed to improve the tightness of the bounding volume trees. Accordingly, collision detection was accelerated by decreasing the number of intersection tests. The simulation results show that collision information can be obtained through the proposed algorithm, and the new method runs faster than the fixed direction hull algorithm.
    Adaptive approach for point cloud based CAD model reconstruction
    LIU Jin
    2013, 33(09):  2617-2622.  DOI: 10.11772/j.issn.1001-9081.2013.09.2617
    Abstract | PDF (1026KB)
    The basic RANdom SAmple Consensus (RANSAC) approach cannot set segmentation parameters adaptively according to the noise of the point cloud, and has no efficient way to determine whether the segmentation results are reasonable. To solve these problems, an adaptive approach for point-cloud-based CAD model reconstruction was presented. First, the approach extracted primitive shapes from point clouds with the RANSAC algorithm, then it analyzed the deviations of points from the fitted primitive shapes with histograms. For unreasonably segmented point cloud patches, the approach updated the segmentation parameters and repeated the primitive shape detection process. After several rounds of iteration, the approach detected primitive shapes from the point clouds reasonably. By calibrating the positions and orientations of the primitive shapes and trimming them according to the intersection curves, the approach reconstructed the CAD model. Deviations from the points to the surface of the CAD model were analyzed by error distribution graph and histogram, which showed that for 70.71% of the points the projection distance was no more than 1% of the bounding box height. The experimental results show that, by setting segmentation parameters adaptively, the approach can extract small primitive shapes from experimental point cloud data distorted by noise with scale equal to 1% of the bounding box height.
    Distortion correction technique for airborne large-field-of-view lens
    XU Fang LIU Jinghong
    2013, 33(09):  2623-2626.  DOI: 10.11772/j.issn.1001-9081.2013.09.2623
    Abstract | PDF (806KB)
    To correct the distortion of the large-field-of-view lenses used on aerial cameras, this paper proposed a method using the Matlab calib_toolbox. By calibrating captured images over a range of angles of view and distances, the internal parameters and distortion coefficients of the camera were obtained, and a correct mathematical model of distortion correction was built. The proposed method improved on the Bouguet method and extended its application to distortion correction of color images for airborne cameras via subsequent programming. Furthermore, a new and efficient backstepping-reconstruction pattern matching method for image distortion rate analysis was proposed, which quantified the level of distortion. The simulation results show that the proposed method reduces the distortion rate of color images by about 10% on average. All the experimental results indicate that the proposed method is simple and efficient, and can be readily transplanted to a hardware platform for real-time image distortion correction.
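    The radial distortion model that such calibration estimates is standard; a minimal forward-model sketch with assumed coefficient values (correction inverts this mapping, usually iteratively):

    ```python
    def radial_distort(x, y, k1, k2):
        """Classic two-coefficient radial distortion model from camera
        calibration: (x, y) are normalized image coordinates, and k1, k2
        scale the r^2 and r^4 radial terms."""
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        return x * f, y * f
    ```

    Points at the image center are unchanged while points far from the center are displaced most, which is why wide-field-of-view lenses show strong barrel or pincushion distortion at the image borders.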
    Fuzzy diffusion PET reconstruction algorithm based on anatomical non-local means prior
    SHANG Guanhong LIU Yi ZHANG Quan GUI Zhiguo
    2013, 33(09):  2627-2630.  DOI: 10.11772/j.issn.1001-9081.2013.09.2627
    Abstract | PDF (608KB)
    A fuzzy diffusion Positron Emission Tomography (PET) reconstruction algorithm based on an anatomical non-local means prior was proposed to solve the problems of the traditional Maximum A Posteriori (MAP) algorithm, namely that details at low gradient values of the reconstructed image cannot be maintained effectively and staircase artifacts appear. Firstly, the median prior distribution MAP reconstruction algorithm was improved: an anisotropic diffusion filter combined with a fuzzy function was introduced before each median filtering. Secondly, the fuzzy membership function was used as the diffusion coefficient in the anisotropic diffusion process, and the details of the image were preserved with anatomical non-local prior information. The simulation results show that, compared with traditional algorithms, the new algorithm improves the Signal-to-Noise Ratio (SNR) and anti-noise capability, and has good visual effects and clear edges. Thus the algorithm achieves a good balance between noise reduction and edge preservation.
    Video key frame extraction method based on image dominant color
    WANG Song HAN Yongguo WU Yadong ZHANG Sainan
    2013, 33(09):  2631-2635.  DOI: 10.11772/j.issn.1001-9081.2013.09.2631
    Video key frames reflect the main content of a video sequence, and key frame extraction is one of the key steps in video content retrieval. Although there are some effective key frame extraction algorithms, they still suffer from problems such as a heavy computational load, difficulty in choosing a suitable threshold for different types of sequences, and a limited range of applicable videos. In this paper, a video key frame extraction method based on the frame dominant color was proposed. Firstly, every frame was simplified to its dominant color, which was obtained by an octree-based color quantization algorithm. Secondly, shot boundaries were detected according to the color similarity between adjacent frames. Finally, key frames were selected from the candidate frames by the K-means clustering algorithm. The experimental results show that the proposed method is simpler in computation and requires lower time and space complexity than other key frame extraction methods.
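    The per-frame dominant colour and adjacent-frame comparison can be sketched as below. A uniform 3-bit-per-channel quantizer stands in for the octree quantizer, and the similarity threshold is an illustrative value:

```python
import numpy as np

def dominant_color(frame, bits=3):
    # Uniform quantization (a stand-in for the paper's octree quantizer):
    # find the most frequent quantized colour and return its bin centre.
    q = (frame.astype(np.int64) >> (8 - bits)).reshape(-1, 3)
    keys = (q[:, 0] << (2 * bits)) | (q[:, 1] << bits) | q[:, 2]
    top = np.bincount(keys).argmax()
    shift = 8 - bits
    r = ((top >> (2 * bits)) << shift) + (1 << (shift - 1))
    g = (((top >> bits) & ((1 << bits) - 1)) << shift) + (1 << (shift - 1))
    b = ((top & ((1 << bits) - 1)) << shift) + (1 << (shift - 1))
    return np.array([r, g, b], dtype=float)

def shot_boundaries(frames, thresh=60.0):
    # A boundary is declared where the dominant colour of adjacent
    # frames differs by more than `thresh` in RGB space.
    colors = [dominant_color(f) for f in frames]
    return [i for i in range(1, len(colors))
            if np.linalg.norm(colors[i] - colors[i - 1]) > thresh]
```

    On a toy sequence of three red frames followed by three blue frames, the only boundary detected is at the colour change.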
    Complementary panoramic image fusion based on wavelet multi-scale decomposition
    LOU Jingtao LI Yongle WANG Wei ZHANG Maojun
    2013, 33(09):  2636-2639.  DOI: 10.11772/j.issn.1001-9081.2013.09.2636
    In order to solve the problem of low and non-uniform resolution in catadioptric omnidirectional imaging, a new image fusion method based on wavelet multi-scale decomposition was proposed according to the characteristics of complementary panoramic images. Using the wavelet transform, the two complementary source images were first decomposed into components at different resolutions and in different directions. Then, based on specific fusion rules, the low-frequency components were fused with an averaging operator, while the high-frequency components were fused according to a scale-exchange principle. Finally, the fused image was reconstructed by the inverse wavelet transform. The experimental results show that the fusion algorithm is simple and effective for complementary panoramic images and produces good fusion results.
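    The decompose-fuse-reconstruct pipeline can be illustrated with a single-level 2-D Haar transform. The high-frequency rule here is a common max-magnitude selection, standing in for the paper's scale-exchange rule (an assumption):

```python
import numpy as np

def haar2(x):
    # single-level 2-D Haar decomposition (even-sized input assumed)
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    return ((a + b + c + d) / 4,   # LL: low frequency
            (a + b - c - d) / 4,   # LH
            (a - b + c - d) / 4,   # HL
            (a - b - c + d) / 4)   # HH

def ihaar2(ll, lh, hl, hh):
    # exact inverse of haar2
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img1, img2):
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2                   # low frequency: average
    highs = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)  # high: max magnitude
             for h1, h2 in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *highs)
```

    A sanity check: fusing an image with itself reconstructs the image exactly, since the average and max-magnitude rules are both identities in that case.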
    Typical applications
    Harmony search algorithm for solving selection of multimodal transportation scheme with several time windows
    LAI Zhizhu
    2013, 33(09):  2640-2642.  DOI: 10.11772/j.issn.1001-9081.2013.09.2640
    To address the selection of multimodal transportation schemes, a scheme selection model with several soft time windows in the transportation network was put forward, and a Harmony Search (HS) algorithm based on character encoding was developed for it. The proposed algorithm adopted a new harmony generation method and a new fine-tuning mode. Numerical examples demonstrate that, compared with a greedy algorithm, the HS algorithm can find a better transportation scheme with less total cost and less delay time.
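    A generic harmony search over a discrete mode-per-leg encoding is sketched below. The toy cost matrix and the HS parameters (HMCR, PAR) are illustrative assumptions; the paper's time-window penalties are not modelled:

```python
import random

def harmony_search(cost, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    # cost[leg][mode] = cost of using `mode` on `leg` (toy stand-in for
    # the paper's total-cost-plus-delay objective)
    rng = random.Random(seed)
    legs, modes = len(cost), len(cost[0])
    total = lambda s: sum(cost[i][m] for i, m in enumerate(s))
    memory = [[rng.randrange(modes) for _ in range(legs)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for i in range(legs):
            if rng.random() < hmcr:              # draw from harmony memory
                v = rng.choice(memory)[i]
                if rng.random() < par:           # pitch adjustment
                    v = (v + rng.choice([-1, 1])) % modes
            else:                                # random consideration
                v = rng.randrange(modes)
            new.append(v)
        worst = max(range(hms), key=lambda j: total(memory[j]))
        if total(new) < total(memory[worst]):    # replace worst harmony
            memory[worst] = new
    return min(memory, key=total)
```

    On a small cost matrix the search quickly settles on a low-cost mode assignment.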
    Algorithm analysis of adaptive active vibration control based on recursive least squares
    HUANG Quanzhen YI Jincong LI Hengyu WANG Xiaohua
    2013, 33(09):  2643-2646.  DOI: 10.11772/j.issn.1001-9081.2013.09.2643
    An adaptive filter control method based on Recursive Least Squares (RLS) was proposed to address the low convergence speed of the Filtered-X Least Mean Square (FXLMS) and Filtered-U Least Mean Square (FULMS) algorithms. It consists of two main parts: an Infinite Impulse Response (IIR) filter and the RLS algorithm. The IIR filter serves as the main frame of the whole algorithm and adjusts the filter weights in real-time to realize adaptive filter control. Analysis and comparison show that the algorithm has a higher convergence speed, and the overall vibration response of the controlled object drops by about 65%, which fully proves the validity and feasibility of the algorithm.
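    The core RLS weight update can be sketched as follows, on a generic system-identification example in NumPy; the IIR controller structure and the vibration plant of the paper are not reproduced:

```python
import numpy as np

def rls(x, d, order=4, lam=0.99, delta=100.0):
    # Recursive least squares: track the weights w mapping the most
    # recent input samples onto the desired signal d.
    w = np.zeros(order)
    P = np.eye(order) * delta             # inverse correlation estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]  # newest sample first
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e = d[n] - w @ u                  # a-priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
h = np.array([0.5, -0.3, 0.2, 0.1])       # "unknown" FIR plant
d = np.convolve(x, h)[:len(x)]
w = rls(x, d)
```

    In the noiseless case the estimated weights converge to the true plant coefficients, which is what gives RLS its convergence-speed advantage over LMS-family updates.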
    Effect of call distance on detecting probability in call magnetic anomaly searching submarine
    SHAN Zhichao QU Xiaohui ZHOU Zheng
    2013, 33(09):  2647-2649.  DOI: 10.11772/j.issn.1001-9081.2013.09.2647
    To analyze the effect of the call distance on the detection probability in call magnetic anomaly submarine search, a model for calculating the submarine distribution probability was derived, and the relation between the call distance and the search probability was established. Finally, calculation results were given for some typical cases. The results show that the search probability descends rapidly as the call distance increases; only with a short call distance, a small initial distribution radius and a low velocity does the call magnetic anomaly search achieve a high detection probability. This shows that the call distance has a serious effect on the detection probability, and magnetic anomaly detection is not suitable for submarine search at long call distances.
    Simulator design and its simulation of meteorological satellite channel
    GUO Yecai YUAN Tao ZHANG Tao
    2013, 33(09):  2650-2652.  DOI: 10.11772/j.issn.1001-9081.2013.09.2650
    In order to study the influence of multipath, shadowing and weather variations on the meteorological satellite channel, the Suzuki and extended Suzuki channel models were studied according to analyses of different weather conditions. Then a two-state Markov model was introduced into the satellite channel model to describe the transitions between the two channel-state models caused by changes of weather. Finally, the meteorological satellite channel simulator was designed and simulated based on the shaping filter method. The results show that the proposed simulator can describe the propagation characteristics of an actual meteorological satellite channel.
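    The two-state Markov switching between channel models can be sketched as below. The state labels and transition probabilities are illustrative assumptions, not the paper's fitted parameters:

```python
import random

def markov_states(p01, p10, n, seed=7):
    # Two-state Markov chain: state 0 = "good" channel (e.g. clear
    # weather, Suzuki model), state 1 = "bad" channel (e.g. heavy
    # shadowing, extended Suzuki model). Labels are illustrative.
    rng = random.Random(seed)
    s, out = 0, []
    for _ in range(n):
        out.append(s)
        if s == 0 and rng.random() < p01:      # good -> bad
            s = 1
        elif s == 1 and rng.random() < p10:    # bad -> good
            s = 0
    return out

states = markov_states(0.1, 0.3, 20000)
```

    With transition probabilities p01 and p10, the long-run fraction of time in the bad state is p01 / (p01 + p10) = 0.25 here, so a long simulated sequence should spend about a quarter of its samples in state 1.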
    Health degree evaluation model of miners escaping from a mine fire
    WANG Bin ZHOU Xuemei SHENG Jingfang
    2013, 33(09):  2653-2657.  DOI: 10.11772/j.issn.1001-9081.2013.09.2653
    The physical condition of miners escaping from a mine fire in a harmful environment is critical to the success of the escape. This paper proposed the concept of the health degree of escaping miners and analyzed the effect of each harmful factor. A model was built to evaluate the influence of all factors on escaping miners based on the fuzzy comprehensive evaluation approach, and then a dynamic health degree evaluation method for miners escaping from a mine fire was proposed. Fire Dynamics Simulator (FDS) software was used to simulate a simplified mine fire, and the escaping miners' health degree was calculated with the method. The rationality of the method was verified by the experiment. The health degree can synthetically evaluate a miner's physical state in a complex disaster environment, and it provides a quantifiable basis for guiding escape-path decision-making in underground disasters.
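    The fuzzy comprehensive evaluation step can be sketched numerically: a weight vector over harmful factors is combined with a membership matrix over health grades. The factor names, weights and membership values below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Rows of R: membership of each harmful factor (e.g. CO concentration,
# temperature, visibility - illustrative) in each health grade
# (good / fair / poor).
R = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
W = np.array([0.5, 0.3, 0.2])   # factor weights, summing to 1

B = W @ R                        # weighted-average operator M(*, +)
B = B / B.sum()                  # normalize to a grade distribution
grade = int(np.argmax(B))        # evaluated health grade (0 = best)
```

    With these numbers the grade vector is [0.38, 0.36, 0.26], so the evaluation assigns the "good" grade, but only narrowly, which is the kind of graded information a crisp threshold would discard.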
    Liver segmentation method based on hierarchical vascular tree
    WEN Hui CHEN Yufei WANG Zhicheng ZHAO Xiaodong YUE Xiaodong
    2013, 33(09):  2658-2661.  DOI: 10.11772/j.issn.1001-9081.2013.09.2658
    Since classical liver functional segmentation methods are sensitive to the portal vein data, a liver segmentation method based on a hierarchical vascular tree, combining the Couinaud theory with the distribution characteristics of the portal vein, was proposed. Firstly, the liver and its vessels were extracted from abdominal CT images by image segmentation and skeletonization methods. Secondly, the secondary subtree set was determined through statistical analysis of the average radius of the vascular branches, and was divided into several classes by the k-means++ clustering algorithm according to their blood-supply areas. Thirdly, a nearest-neighbor segment approximation algorithm was used to partition the liver into segments. Finally, the internal anatomical structure of the liver and its vascular system was demonstrated using three-dimensional visualization technology, and the liver segments were annotated to extract information of clinical interest. Experimental results show that the method obtains good results when the vascular tree contains many branches and a complex structure. Furthermore, because the impact of major secondary branches is considered, the final liver segment distribution and attribution results are in line with the Couinaud liver segment theory.
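    The k-means++ step differs from plain k-means only in its seeding: each new centre is drawn with probability proportional to the squared distance to the nearest centre chosen so far. A sketch on toy 2-D points (standing in for the paper's subtree feature vectors):

```python
import numpy as np

def kmeanspp_seeds(X, k, seed=0):
    # k-means++ seeding: spread the initial centres out by sampling
    # proportionally to squared distance from the current centre set.
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```

    On duplicated well-separated points the seeding is forced to pick one representative per cluster, since already-covered points have zero sampling weight.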
    School of Computer Science and Technology, Harbin Institute of Technology, Harbin Heilongjiang 150001, China
    LIU Jinming WANG Kuanquan
    2013, 33(09):  2662-2666.  DOI: 10.11772/j.issn.1001-9081.2013.09.2662
    Imaging and visualization of the heart play an important role in cardiac disease diagnosis and therapy planning. To visualize segmented cardiac volume datasets, an approach based on a Graphics Processing Unit (GPU) accelerated ray casting algorithm was proposed to reconstruct high-quality images. Firstly, the transfer function was designed using statistical information of the cardiac volume datasets to increase the opacity of subtle tissues. Secondly, the sampling step was adaptively adjusted based on the gradient to increase the sampling frequency at tissue boundaries. Finally, an improved Blinn-Phong illumination model with multiple light sources was employed to enhance the visualization effect. The experimental results show that the proposed method meets the requirements of real-time rendering and obtains high-quality volume rendering results; subtle cardiac tissues such as valves and coronary arteries and veins are rendered clearly.
    Face recognition based on complete orthogonal neighbourhood preserving discriminant embedding
    CHEN Dayao CHEN Weiqi CHEN Xiuhong
    2013, 33(09):  2667-2670.  DOI: 10.11772/j.issn.1001-9081.2013.09.2667
    In order to address the Small Sample Size (SSS) problem encountered by Neighbourhood Preserving Discriminant Embedding (NPDE) and to make full use of the discriminant information in the null space and non-null space of the within-neighbourhood scatter matrix for face recognition, this paper proposed a Complete Orthogonal Neighbourhood Preserving Discriminant Embedding (CONPDE) algorithm. The algorithm first removed the null space of the total neighbourhood scatter matrix indirectly, using an eigendecomposition method. Then, the optimal discriminant vectors were extracted in the null space and the non-null space of the within-neighbourhood scatter matrix, respectively. Besides, to further improve recognition performance, an orthogonal projection matrix obtained by economic QR decomposition was given. Experiments on the ORL and Yale face databases show the effectiveness of the proposed method.
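    The economic (reduced) QR step can be illustrated directly with NumPy: it replaces a set of discriminant vectors by an orthonormal basis of the same subspace. The random matrix below stands in for the actual discriminant vectors produced by the two-space extraction:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(50, 8))   # discriminant vectors as columns (illustrative)

# Reduced QR: Q has orthonormal columns spanning the same subspace as W,
# which is what makes the resulting projection orthogonal.
Q, Rm = np.linalg.qr(W)
```

    Orthogonality (Q^T Q = I) and subspace preservation (W lies in the column space of Q) are the two properties the recognition step relies on.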
    Face recognition based on combination of monogenic filtering and local quantitative pattern
    YAN Haiting WANG Ling LI Kunming LIU Jifu
    2013, 33(09):  2671-2674.  DOI: 10.11772/j.issn.1001-9081.2013.09.2671
    Concerning the disadvantages of traditional face recognition methods, such as the high dimension of extracted features and high computational complexity, a new face recognition method combining monogenic filtering with Local Quantitative Pattern (LQP) was proposed. Firstly, the multi-modal monogenic features of local amplitude, local orientation and local phase were extracted by a multi-scale monogenic filter. Secondly, the LQP operator was used to obtain LQP feature maps by encoding the three kinds of monogenic features at each pixel. Finally, the LQP feature maps were divided into several blocks, from which spatial histograms were extracted and concatenated as the face feature. The ORL and CAS-PEAL face databases were used to test the proposed algorithm, and its recognition rates were higher than those of all the other methods used in the experiments. The results validate that the proposed method has higher recognition accuracy and reduces the computational complexity significantly.
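    The encode-then-block-histogram pipeline is sketched below with a plain 8-neighbour binary pattern as a simplified stand-in for the LQP operator (the monogenic filtering stage is not reproduced):

```python
import numpy as np

def lbp_codes(img):
    # 8-neighbour local binary pattern: one bit per neighbour,
    # set when the neighbour is >= the centre pixel.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int64) << bit
    return code

def block_histogram(codes, blocks=2):
    # Divide the code map into blocks and concatenate the per-block
    # 256-bin histograms into one spatial feature vector.
    h, w = codes.shape
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            blk = codes[by * h // blocks:(by + 1) * h // blocks,
                        bx * w // blocks:(bx + 1) * w // blocks]
            feats.append(np.bincount(blk.ravel(), minlength=256))
    return np.concatenate(feats)
```

    The block histograms keep coarse spatial layout while the per-pixel codes stay local, which is what keeps the feature both discriminative and cheap.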
    Application of three-dimensional medical image registration algorithm in image-guided radiotherapy
    WU Qian JIA Jing CAO Ruifen PEI Xi WU Aidong WU Yichan FDS Team
    2013, 33(09):  2675-2678.  DOI: 10.11772/j.issn.1001-9081.2013.09.2675
    To achieve accurate patient positioning in image-guided radiotherapy, an improved Demons deformable registration method was developed. The FDK algorithm was adopted to reconstruct Cone Beam CT (CBCT), and the reconstruction result was visualized by a volume rendering method with the Visualization ToolKit (VTK). Based on the Insight segmentation and registration ToolKit (ITK), the Demons algorithm was implemented incorporating the gradient information of both the fixed image and the floating image through the concept of the symmetric gradient, and a new formula for the Demons force was derived. Registration experiments were carried out on both single-modality and multi-modality medical images. The results show that the improved Demons algorithm achieves a faster convergence speed and a higher precision than the original Demons algorithm, which indicates that the Demons algorithm based on the symmetric gradient is more suitable for the registration of CBCT reconstruction images and CT planning images in image-guided radiotherapy.
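    One common symmetric-gradient form of the Demons force, where the driving gradient is the average of the fixed- and floating-image gradients, can be sketched as below; this is an illustrative variant, not the paper's exact derived formula:

```python
import numpy as np

def demons_force(fixed, moving):
    # Demons displacement field with a symmetric gradient: both the
    # fixed and the moving image contribute gradient information.
    diff = moving - fixed
    gfy, gfx = np.gradient(fixed)
    gmy, gmx = np.gradient(moving)
    gy, gx = (gfy + gmy) / 2, (gfx + gmx) / 2
    denom = gx ** 2 + gy ** 2 + diff ** 2
    denom[denom == 0] = 1.0            # avoid division by zero
    return diff * gx / denom, diff * gy / denom
```

    When the two images already agree, the intensity difference vanishes and the force field is identically zero, so the iteration has a fixed point at perfect alignment.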
    Application of adaptive wavelet scalogram threshold in diaphragmatic electromyographic signal denoising
    YANG Zhi LUO Guo YUAN Fangfang
    2013, 33(09):  2679-2682.  DOI: 10.11772/j.issn.1001-9081.2013.09.2679
    As weak bioelectrical signals, diaphragmatic electromyographic (EMGdi) signals are always corrupted by strong electrocardiographic (ECG) signals. A denoising algorithm based on an adaptive wavelet scalogram threshold was proposed in this paper to improve the precision of the threshold in EMGdi signal denoising. The algorithm locates the ECG interference by performing a wavelet transform on the EMGdi signals and mapping the wavelet coefficients to a wavelet scalogram, and then automatically adjusts the threshold according to the wavelet energy in the ECG neighborhood in order to remove the ECG interference. Comparison with the ordinary wavelet threshold method shows that the proposed method can eliminate the ECG interference from EMGdi signals while effectively preserving the EMGdi signal characteristics.
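    The threshold-the-coefficients idea can be sketched with a single-level 1-D Haar transform and a fixed soft threshold; the paper's scalogram-based adaptive threshold and ECG localization step are not reproduced here:

```python
import numpy as np

def haar1(x):
    # single-level 1-D Haar transform (even-length input assumed)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def ihaar1(a, d):
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def denoise(x, t):
    # soft-threshold the detail coefficients, keep the approximation
    a, d = haar1(x)
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    return ihaar1(a, d)
```

    On a smooth signal, the detail coefficients carry mostly noise, so shrinking them toward zero lowers the error against the clean signal.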
    Improved fuzzy level set method for automatic MR image segmentation based on edge competition
    ZHAO Wendian DENG Zhensheng
    2013, 33(09):  2683-2685.  DOI: 10.11772/j.issn.1001-9081.2013.09.2683
    A level set segmentation method based on edge competition was proposed for the automatic segmentation of brain Magnetic Resonance (MR) images, which employed the Robust Fuzzy C-Means based on Kernel function (RFCMK) result as prior knowledge to address the problems of noise and edge leakage. Firstly, the image was pre-segmented with the RFCMK algorithm. Then, the sub-class images derived from the pre-segmentation were thresholded to obtain the edge, which was used as the initial contour for the level set evolution. Finally, a competition mechanism combining the gradient information of the sub-class image and the original image was introduced into the edge indicator. An image set from a simulated brain database and a real brain MR image were used to validate the accuracy of the proposed method. The area-based and edge-based statistical values lie in the ranges [0.91, 0.95] and [0.05, 0.22], respectively. The experimental results show that the proposed method can detect edges accurately and reduce the effect of noise.
    Segmentation algorithm of intervertebral disc magnetic resonance images based on two-dimensional automatic active shape model
    FU Xiaojuan HUANG Dongjun
    2013, 33(09):  2686-2689.  DOI: 10.11772/j.issn.1001-9081.2013.09.2686
    In response to the issues that manual intervertebral disc modeling is time-consuming and subjective, and that existing segmentation methods are not accurate enough, a new method named two-dimensional Automatic Active Shape Model (2D-AASM) was proposed. It includes three parts: automatic statistical shape modeling of the intervertebral disc based on minimum description length, 2D local gradient modeling, and segmentation. Adopting the manual segmentation results of 25 sets of spinal MR images as the training set, the study used the minimum description length method to determine point correspondences and built a statistical shape model and a 2D local gradient model for the intervertebral disc T4-5. The generated shape model had lower variance and a lower objective function value than those of the manual and arc-length parameterization methods. After the model was built, three sets of Magnetic Resonance Imaging (MRI) images were used to test the proposed method. Compared with the traditional ASM and 1D-ASM, the segmentation result of the proposed method had a higher Dice coefficient and lower over-segmentation and under-segmentation rates. The experimental results indicate that the proposed method generates a better model and more accurate segmentation results.
    Improvement and application for method of relaxation iterative segmentation based on embedded system
    GAN Lan LIN Huaqing
    2013, 33(09):  2690-2693.  DOI: 10.11772/j.issn.1001-9081.2013.09.2690
    The iterative probability relaxation segmentation method used in cell segmentation can overcome the difficulties caused by complicated cellular structure and serious adhesion, which general segmentation algorithms cannot handle effectively. In addition, because embedded resources are limited under the Linux environment, the iterative relaxation cell segmentation algorithm was improved and then added to an embedded cell segmentation system based on Qt and OpenCV. The experimental results demonstrate that the improved algorithm can effectively solve the difficult problem of cell segmentation, and the naked eye can clearly distinguish the nucleus, cytoplasm and glands. The improved algorithm increases the processing speed and can be ported to embedded devices, which is convenient for portable diagnosis and treatment.
    Segmentation of cell two-photon microscopic image based on center location algorithm
    HU Hengyang CHEN Guannan WANG Ping LIU Yao
    2013, 33(09):  2694-2697.  DOI: 10.11772/j.issn.1001-9081.2013.09.2694
    Complex background, severe noise and fuzzy boundaries make the performance of available cell image segmentation methods disappointing. Thus, a new method that can locate and detect nuclei effectively was proposed in this paper. A coarse-to-fine segmentation strategy was adopted to extract the edge of the nucleus gradually. First, using the C-means clustering algorithm, the image was divided into three parts: nucleus, cytoplasm and intercellular substance. Second, the center of the cell was located by calculating the circularity of the Canny edge image. Finally, a reformed level set evolution was introduced to extract the edge of the nucleus. The experimental results show that nuclei can be located accurately even if the cell image has a complex background and considerable interference. Moreover, the nucleus edges extracted by this method have higher accuracy.
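    The circularity criterion behind the centre-location step is simply 4πA/P², which equals 1 for a perfect circle and falls below 1 for other shapes. A quick check with analytic shapes (illustrative values, not the paper's data):

```python
import numpy as np

def circularity(area, perimeter):
    # 4*pi*A / P^2: 1 for a perfect circle, < 1 for any other shape,
    # so near-circular contours (candidate nuclei) score highest.
    return 4 * np.pi * area / perimeter ** 2

r = 5.0
circle = circularity(np.pi * r ** 2, 2 * np.pi * r)   # perfect circle
square = circularity(r ** 2, 4 * r)                   # square of side r
```

    In practice the area and perimeter would come from connected Canny edge contours, and contours with circularity close to 1 are taken as nucleus candidates.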
    LED wafer edge detection based on improved Canny operator
    WEN Yangdong GU Qianyun CHEN Xuefeng
    2013, 33(09):  2698-2700.  DOI: 10.11772/j.issn.1001-9081.2013.09.2698
    An improved Canny operator was proposed to ensure the accuracy of Light Emitting Diode (LED) wafer edge detection in the vision system of an automatic LED die bonder. In the traditional Non-Maxima Suppression (NMS) process, inaccuracy is caused by selecting the two pixels nearest the gradient direction as comparison points. To solve this problem, bilinear interpolation over the center pixel and its three surrounding pixels was performed along the gradient direction to implement a new NMS process. In addition, the gray histogram of an LED wafer is characterized by three typical peaks, so the high and low thresholds can be obtained by the Otsu dual-threshold method with a new evaluation function, replacing traditional manual adjustment. The experimental results indicate that better wafer outlines and electrode edges can be obtained by the improved Canny algorithm, which is appropriate for LED wafer edge detection.
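    The between-class-variance criterion behind the Otsu thresholding step can be sketched in its classic single-threshold form; the dual-threshold extension and the paper's new evaluation function are not reproduced:

```python
import numpy as np

def otsu(hist):
    # Classic Otsu: pick the threshold maximizing the between-class
    # variance w0*w1*(m0 - m1)^2 of the two resulting pixel classes.
    hist = hist.astype(float)
    total = hist.sum()
    levels = np.arange(len(hist))
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = hist[:t].sum()
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * hist[:t]).sum() / w0   # class means
        m1 = (levels[t:] * hist[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

rng = np.random.default_rng(5)
pixels = np.concatenate([rng.normal(60, 8, 5000), rng.normal(180, 8, 5000)])
hist = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))[0]
t = otsu(hist)
```

    On a bimodal histogram with modes near 60 and 180, the criterion places the threshold in the valley between them.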
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn